
Primality test

A primality test is an algorithm designed to determine whether a given positive integer n > 1 is prime, i.e., divisible only by 1 and itself, or composite. These tests are fundamental in number theory, as primes form the building blocks of the integers under multiplication, and efficient primality testing enables the identification of such numbers without exhaustive factorization. Primality tests hold critical importance in modern cryptography, particularly for generating the large prime numbers required in systems like RSA, where security relies on the difficulty of factoring products of two large primes. Beyond cryptography, they support applications in factoring algorithms, such as the elliptic curve method (ECM), which require verification that a number is not prime before continuing to search for factors. The problem's complexity has been extensively studied, with primality known to be in NP ∩ coNP since 1975 (Pratt) and later shown to be in ZPP (zero-error probabilistic polynomial time) in 1992 (Adleman-Huang). Historically, primality testing traces back to Fermat's Little Theorem in 1640, which states that if p is prime and \gcd(a, p) = 1, then a^{p-1} \equiv 1 \pmod{p}, providing a basis for early tests. Key advancements include Euler's extensions in the 18th century, Lucas's n-1 test in the 19th century, and the discovery of Carmichael numbers, composite numbers that fool Fermat-based tests, in the early 20th century; these were proven infinite in number in 1994. Modern milestones encompass probabilistic tests like Solovay-Strassen (1977) and Miller-Rabin (1980), alongside deterministic breakthroughs such as the Adleman-Pomerance-Rumely (APR) test (1983) and the AKS algorithm (2002), which placed primality testing in deterministic polynomial time, confirming it is in P. Primality tests are broadly classified into deterministic algorithms, which always produce correct results (e.g., AKS, running in \tilde{O}(\log^6 n) time but practically slow, and elliptic curve primality proving (ECPP), heuristically \tilde{O}(\log^4 n)), and probabilistic algorithms, which offer faster performance with a controlled error probability (e.g., Miller-Rabin, with error at most 1/4 per round and overall O((\log n)^3) complexity). Specialized tests include the Lucas-Lehmer algorithm for Mersenne primes (e.g., verifying 2^{6972593} - 1) and the Fermat test, a simple but unreliable early method vulnerable to pseudoprimes. In practice, combinations like the Baillie-PSW test enhance reliability for cryptographic use, balancing speed and certainty, with no known counterexamples as of 2025.

Overview

Definition and Scope

A prime number is a positive integer greater than 1 that has no positive divisors other than 1 and itself, while a composite number is a positive integer greater than 1 that has additional positive divisors beyond 1 and itself. These fundamental concepts form the basis for primality testing, which seeks to classify numbers accordingly without exhaustive factorization. A primality test is an algorithm designed to determine whether a given positive integer n > 1 is prime or composite. The scope of such tests encompasses algorithms applicable to general integers as well as those tailored to numbers of special forms, such as Mersenne numbers of the form 2^p - 1 where p is prime. Regarding error handling, tests are often categorized by their error types: one-sided tests, common in probabilistic variants, guarantee that a declaration of compositeness is correct but may erroneously identify a composite as prime with low probability, whereas two-sided tests can err in both directions. Primality tests play a crucial role in cryptography, particularly for generating the large primes required in systems like RSA, where two large primes p and q are selected to compute the modulus N = pq. They are also essential in computational number theory for advancing research on prime distribution and factorization challenges, as well as in practical prime generation for various mathematical computations. While deterministic tests provide absolute certainty, probabilistic ones offer efficiency with tunable error bounds.

Historical Context

The concept of primality emerged in ancient Greece, where Euclid provided the first recorded definition of prime numbers around 300 BCE in his Elements, describing them as numbers greater than one that are not divisible by any smaller number other than one. A significant early advancement came around 240 BCE with Eratosthenes' sieve, an algorithm that systematically identifies all primes up to a given limit by marking multiples of each prime starting from 2, though it serves more for generating lists of primes than for testing the primality of a specific large number. In the 17th and 18th centuries, theoretical foundations for primality tests began to solidify. Pierre de Fermat stated his Little Theorem in 1640, which posits that if p is prime, then for any integer a not divisible by p, a^{p-1} ≡ 1 mod p, laying groundwork for later probabilistic applications though not immediately used as a test. Wilson's theorem, conjectured by John Wilson and published by Edward Waring in 1770 before being proved by Joseph-Louis Lagrange in 1773, states that for a prime p, (p-1)! ≡ -1 mod p, offering a theoretical primality criterion that is impractical for computation due to the size of the factorial. In the late 19th century, Édouard Lucas introduced in 1878 what became the Lucas-Lehmer test, a deterministic method specifically for verifying Mersenne primes of the form 2^p - 1, which Derrick Henry Lehmer refined in the 1930s. The 20th century marked a shift toward efficient algorithms amid growing computational needs. Early trial division methods, optimized since Fibonacci's 1202 description of checking divisors up to the square root, remained basic but were supplemented by Fermat's Little Theorem adapted into probabilistic tests in the 1970s for handling large numbers. Gary Miller introduced a deterministic version of what became the Miller-Rabin test in 1976, relying on the generalized Riemann hypothesis, which Michael Rabin soon rendered fully probabilistic and unconditional, enabling rapid screening of candidates with low error rates. A landmark deterministic breakthrough arrived in 2002 with the AKS algorithm by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, proving that primality testing lies in P (polynomial time), with practical implementations following by 2004 despite its higher complexity compared to probabilistic methods. The advent of digital computers in the mid-20th century drove the transition to probabilistic tests like Miller-Rabin, as deterministic approaches such as trial division or AKS proved too slow for cryptographic-scale numbers, prioritizing efficiency and scalability in verifying probable primes.

Basic Methods

Trial Division

Trial division is the simplest deterministic method for determining whether a positive integer n > 1 is prime, relying on the fact that a composite number has a prime factor less than or equal to its square root. The algorithm checks for divisibility by all integers from 2 up to \lfloor \sqrt{n} \rfloor; if none divide n evenly, then n is prime. To implement the basic trial division, initialize a loop starting from 2 and continue until the divisor reaches \lfloor \sqrt{n} \rfloor. For each candidate divisor d, compute n \mod d; if the remainder is 0, n is composite. This exhaustive check ensures certainty but requires up to approximately \sqrt{n}/2 divisions in the worst case for odd composites. Optimizations significantly reduce the number of checks without altering the method's determinism. First, handle the even case separately: if n = 2, it is prime; if n is even and greater than 2, it is composite. For odd n > 2, test only odd divisors from 3 to \lfloor \sqrt{n} \rfloor, skipping evens, which halves the iterations. The time complexity of this optimized version is O(\sqrt{n}) divisions, making it practical for n up to around 10^{12} on modern hardware, where \sqrt{n} \approx 10^6 trials are feasible in seconds. Consider testing n = 17. Compute \lfloor \sqrt{17} \rfloor = 4. Check divisibility by 2 (17 is odd, so no); then by 3 (17 ÷ 3 ≈ 5.67, remainder 2); no further candidates need checking, since the next odd divisor, 5, exceeds 4. Since no divisors are found, 17 is prime. This example illustrates the method's straightforward application for small n. Despite its simplicity, trial division becomes inefficient for large n, such as those exceeding 10^{12}, where the O(\sqrt{n}) complexity demands billions of operations, rendering it impractical for cryptographic-scale numbers like 1024-bit integers. It remains the baseline for small-scale primality verification due to its ease of implementation and guaranteed correctness. A common variant is wheel factorization, which enhances trial division by pre-eliminating multiples of small primes (e.g., 2, 3, 5) using a "wheel" whose size is their product (e.g., 30). Instead of checking all candidates up to \sqrt{n}, only residues coprime to 30 (1, 7, 11, 13, 17, 19, 23, 29) are tested, skipping about 73% of candidates for a 2-3-5 wheel. Larger wheels built from more small primes (e.g., 2 · 3 · 5 · 7 = 210) further reduce the work in the initial sieving phase of factorization or primality testing.
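A minimal sketch of optimized trial division in Python (the function name is illustrative, not taken from any particular library):

```python
from math import isqrt

def is_prime_trial_division(n: int) -> bool:
    """Deterministic primality check by trial division up to floor(sqrt(n))."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Only odd candidates up to isqrt(n) need to be tested.
    for d in range(3, isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

print(is_prime_trial_division(17))   # True
print(is_prime_trial_division(91))   # False (7 * 13)
```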

Wilson's Theorem

Wilson's Theorem provides a necessary and sufficient condition for the primality of an integer p > 1: (p-1)! \equiv -1 \pmod{p} if and only if p is prime. This congruence links the factorial of p-1 to modular arithmetic, distinguishing primes from composites. The theorem was conjectured by John Wilson and first published by Edward Waring in 1770, though it had been known earlier to Gottfried Leibniz. Joseph-Louis Lagrange provided the first published proof in 1773, demonstrating both the theorem and its converse using properties of polynomials and binomial expansions. An elementary proof proceeds as follows. For p = 2 and p = 3, the condition holds directly: 1! = 1 \equiv -1 \pmod{2} and 2! = 2 \equiv -1 \pmod{3}. For p > 3, if p is composite, then (p-1)! contains factors sharing divisors with p, so (p-1)! \not\equiv -1 \pmod{p}. If p is prime, the numbers 1 to p-1 form the multiplicative group modulo p, where each element except 1 and p-1 pairs with a distinct inverse b such that a \cdot b \equiv 1 \pmod{p}. Thus, the product 1 \cdot 2 \cdots (p-1) \equiv 1 \cdot (p-1) \pmod{p}, since the paired products each yield 1 and p-1 \equiv -1 \pmod{p}, giving (p-1)! \equiv -1 \pmod{p}. As a primality test, the theorem yields a direct criterion: compute (n-1)! \mod n and check whether it equals n-1. For example, with n=5, 4! = 24 \equiv 4 \equiv -1 \pmod{5}, confirming primality; for n=4, 3! = 6 \equiv 2 \not\equiv 3 \pmod{4}, indicating compositeness. However, this method is impractical for large n because computing the factorial requires n-2 modular multiplications, even with reduction modulo n at each step. The resulting cost is O(n) multiplications on numbers of about \log n bits (roughly O(n \log^2 n) bit operations with schoolbook arithmetic), making it far slower than trial division even for modest sizes.
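A minimal sketch of the Wilson criterion in Python, impractical beyond small n for the reasons just discussed (the function name is illustrative):

```python
def is_prime_wilson(n: int) -> bool:
    """Wilson's theorem: n > 1 is prime iff (n-1)! is congruent to -1 modulo n."""
    if n < 2:
        return False
    fact = 1
    for k in range(2, n):
        fact = fact * k % n   # reduce at every step to keep intermediate values small
    return fact == n - 1

print(is_prime_wilson(5))   # True: 4! = 24, and 24 mod 5 = 4 = -1 mod 5
print(is_prime_wilson(4))   # False: 3! = 6, and 6 mod 4 = 2
```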

Probabilistic Methods

Fermat Primality Test

The Fermat primality test is a probabilistic algorithm for determining whether a given integer n > 1 is likely to be prime, grounded in Fermat's Little Theorem. This theorem asserts that if p is a prime and a is an integer not divisible by p (i.e., \gcd(a, p) = 1), then a^{p-1} \equiv 1 \pmod{p}. The test leverages this property by checking whether the congruence holds for randomly selected bases a, where 1 < a < n and \gcd(a, n) = 1. If the condition fails for any such a, n is definitively composite; if it passes for multiple bases, n is deemed a probable prime. The algorithm proceeds as follows: select a random base a coprime to n, compute a^{n-1} \mod n using efficient modular exponentiation, and verify whether the result equals 1. This computation is performed via repeated squaring, requiring O(\log n) modular multiplications, making the test efficient even for large n. The process is repeated with several independent bases to reduce the error probability; however, due to the existence of Carmichael numbers, no uniform bound on the error holds across all composites, unlike in stronger tests. As a one-sided test, it correctly identifies composites when they fail the check but may mistakenly label some composites as probable primes. A key limitation arises from Fermat pseudoprimes: composite numbers n that satisfy a^{n-1} \equiv 1 \pmod{n} for some base a coprime to n, falsely suggesting primality. For example, n = 91 = 7 \times 13 is composite, yet it passes the test for bases like a = 3 and a = 4 (where 3^{90} \equiv 1 \pmod{91} and 4^{90} \equiv 1 \pmod{91}), but fails for a = 2 and a = 7. In total, 36 such liar bases exist for 91 out of \phi(91) = 72 possible coprime a. This vulnerability is exacerbated by Carmichael numbers, absolute Fermat pseudoprimes that pass for every coprime base, such as 561; there are only seven below 10,000, but infinitely many exist. Despite these pitfalls, the test's speed via modular exponentiation makes it valuable for initial screening of candidates, particularly for small to medium n, where witnesses (bases that detect compositeness) are abundant. It forms the basis for stronger variants like the Miller-Rabin test, which addresses pseudoprime issues through additional checks.
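A minimal Fermat-test sketch in Python using only the standard library; the number of rounds k is a tunable, illustrative choice:

```python
import random
from math import gcd

def fermat_test(n: int, k: int = 10) -> bool:
    """Return False if n is proven composite, True if n is a probable prime."""
    if n < 4:
        return n in (2, 3)
    for _ in range(k):
        a = random.randrange(2, n - 1)
        if gcd(a, n) != 1:
            return False               # a shares a factor with n, so n is composite
        if pow(a, n - 1, n) != 1:      # Fermat's Little Theorem violated
            return False
    return True                        # probable prime (Carmichael numbers can still fool this)

print(fermat_test(91))    # usually False: bases such as 2 expose 91 = 7 * 13
print(fermat_test(97))    # True: 97 is prime
```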

Miller-Rabin Test

The Miller-Rabin primality test is a probabilistic algorithm for determining whether an odd integer n > 2 is prime, introduced by Gary L. Miller in 1976 as a deterministic method assuming the extended Riemann hypothesis and later generalized into an unconditional randomized test by Michael O. Rabin in 1980. Unlike the Fermat test, which checks whether a^{n-1} \equiv 1 \pmod{n} for random bases a, the Miller-Rabin test decomposes n-1 = 2^s \cdot d with d odd and examines intermediate values in the sequence of squarings to impose a stricter condition, reducing the likelihood of composite numbers passing as probable primes. The algorithm proceeds as follows: select a random base a with 2 \leq a \leq n-2 and \gcd(a, n) = 1. Compute x_0 = a^d \pmod{n}. If x_0 \equiv 1 \pmod{n} or x_0 \equiv -1 \pmod{n}, then n passes the test for this a and is declared a probable prime. Otherwise, compute x_r = x_{r-1}^2 \pmod{n} iteratively for r = 1 to s-1; if x_r \equiv -1 \pmod{n} for any such r, then n passes. If no such condition holds after s-1 squarings, declare n composite. This process relies on the fact that modulo a prime the only square roots of 1 are \pm 1, so for prime n the sequence of squarings must pass through -1 unless a^d \equiv 1 from the start. A composite number n that passes the test for a given base a is called a strong pseudoprime to base a. The key strength of the test lies in the bound on such liars: for any odd composite n, at most one-fourth of the possible bases a (coprime to n) are strong liars, so repeating the test with k independent random bases yields an error probability of less than 4^{-k} (i.e., the probability that a composite n is misidentified as a probable prime). This makes the test highly reliable in practice, with modest k sufficient for cryptographic applications (for example, k = 40 gives an error bound below 10^{-24}). Deterministic variants exist by fixing a small set of bases that cover all composites up to a bound. For example, for all n < 2^{64}, testing with the specific witnesses a \in \{2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37\} deterministically proves primality, as no strong pseudoprime to all these bases exists in that range. Such sets are derived from exhaustive searches for strong pseudoprimes to multiple bases. As an illustration, consider n = 221 = 13 \times 17, a composite. Here, n-1 = 220 = 2^2 \cdot 55, so s=2, d=55. For base a=2, compute 2^{55} \mod 221 = 128. Since 128 \not\equiv \pm 1 \pmod{221}, square to get 128^2 \mod 221 = 30, which is neither 1 nor -1. The test fails, correctly identifying n as composite. However, for some bases, such as a=174 (where 174^{55} \equiv 47 and 47^2 \equiv -1 \pmod{221}), 221 passes, making it a strong pseudoprime to base 174. The test's efficiency stems from its reliance on modular exponentiation: each trial requires O(\log n) squarings and multiplications modulo n, with each operation taking O(\log^2 n) time using standard algorithms, yielding overall complexity O(k \log^3 n) for k trials. It is widely implemented in cryptographic libraries, including OpenSSL, which performs at least 64 rounds for numbers up to 2048 bits (and more beyond that size) to generate primes with negligible error.
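A minimal Miller-Rabin sketch in Python with random bases; the round count k is an illustrative choice:

```python
import random

def miller_rabin(n: int, k: int = 40) -> bool:
    """Return False if n is proven composite, True if n is a probable prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False        # a is a witness: n is composite
    return True

print(miller_rabin(221))        # False: 221 = 13 * 17
print(miller_rabin(2**61 - 1))  # True: a Mersenne prime
```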

Solovay-Strassen Test

The Solovay–Strassen primality test is a probabilistic algorithm introduced in 1977 by Robert Solovay and Volker Strassen for determining whether an odd integer n > 2 is prime or composite. It is a Monte Carlo method that never misclassifies a prime but may declare a composite to be probably prime with controlled probability, making it suitable for large-scale primality testing in cryptographic applications. The test is grounded in Euler's criterion from number theory: for a prime p and a coprime to p, a^{(p-1)/2} \equiv \left( \frac{a}{p} \right) \pmod{p}, where \left( \frac{a}{p} \right) denotes the Legendre symbol, which equals 1 if a is a quadratic residue modulo p, -1 if a is a non-residue, and 0 if p divides a. For composite n, the algorithm extends this using the Jacobi symbol \left( \frac{a}{n} \right), a generalization of the Legendre symbol that is multiplicative and equals the Legendre symbol when n is prime. If n is prime, the congruence holds for all coprime a; deviations for composites serve as witnesses of compositeness. To apply the test, select a random a with 1 < a < n - 1. First, compute d = \gcd(a, n); if d > 1, then n is composite. Otherwise, calculate the Jacobi symbol J = \left( \frac{a}{n} \right) and the power P = a^{(n-1)/2} \mod n. If P \not\equiv J \pmod{n}, then a is an Euler witness and n is composite. Repeat this for k independent random a; if no witness appears, declare n probably prime. The algorithm assumes n is odd and greater than 2, with trivial checks for small n. Composites that pass the test for a given a are termed Euler pseudoprimes to base a, but such bases are limited: for any odd composite n, at least half of the a coprime to n are Euler witnesses, yielding an error probability of at most 1/2 per trial unconditionally. Under the extended Riemann hypothesis (ERH), testing all bases a up to O((\log n)^2) makes the procedure deterministic. For illustration, take the composite n = 15 = 3 \times 5. Choose a = 2, where \gcd(2, 15) = 1. The Jacobi symbol is \left( \frac{2}{15} \right) = \left( \frac{2}{3} \right) \left( \frac{2}{5} \right) = (-1) \times (-1) = 1, but 2^{(15-1)/2} = 2^7 = 128 \equiv 8 \pmod{15}, and 8 \not\equiv 1 \pmod{15}. Thus, 2 is an Euler witness, confirming compositeness. Each trial runs in \tilde{O}(\log^2 n) time: the Jacobi symbol requires O(\log n) reduction steps via a Euclidean-like algorithm, while modular exponentiation uses O(\log n) multiplications, each \tilde{O}(\log n) with fast multiplication. With k trials the error bound drops to 2^{-k}, and the full test remains polynomial-time efficient.
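A minimal Solovay-Strassen sketch in Python with a standard binary Jacobi-symbol routine (helper names and the round count are illustrative):

```python
import random
from math import gcd

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n: int, k: int = 40) -> bool:
    """Return False if n is proven composite, True if n is a probable prime."""
    if n < 2 or n % 2 == 0:
        return n == 2
    if n == 3:
        return True
    for _ in range(k):
        a = random.randrange(2, n - 1)
        if gcd(a, n) > 1:
            return False
        j = jacobi(a, n) % n                  # map -1 to n - 1 for the comparison below
        if pow(a, (n - 1) // 2, n) != j:
            return False                      # a is an Euler witness
    return True

print(solovay_strassen(15))   # False: 2 is an Euler witness
print(solovay_strassen(101))  # True
```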

Frobenius Test

The Frobenius test, introduced by Jon Grantham in his 1998 work on Frobenius pseudoprimes, is a probabilistic primality testing method that leverages the Frobenius automorphism from finite-field theory to generalize and strengthen earlier tests like Fermat and Miller-Rabin. For a prime p, the Frobenius map \sigma: x \mapsto x^p acts as an automorphism on the field \mathbb{F}_p and its extensions, preserving the field's structure. The test extends this concept to composite candidates n by considering quotient rings (\mathbb{Z}/n\mathbb{Z})[x] / (f(x)), where f(x) is a low-degree polynomial irreducible modulo n, and verifies whether the induced Frobenius map behaves as expected for a prime modulus. This approach detects compositeness by checking whether the ring's elements satisfy prime-like algebraic properties under the map \sigma. The core algorithm, known as the quadratic Frobenius test (QFT), focuses on quadratic polynomials f(x) = x^2 - b x - c modulo odd n > 2, selected such that the Jacobi symbols satisfy \left( \frac{b^2 + 4c}{n} \right) = -1 and \left( \frac{-c}{n} \right) = 1. After trial division by small primes and verifying that n is not a perfect square, the test computes powers in the ring (\mathbb{Z}/n\mathbb{Z})[x] / (f(x)), typically checking conditions such as \alpha^{n+1} \equiv -c \pmod{n, f(x)} for a root \alpha of f(x), along with trace or norm relations derived from the Frobenius automorphism. These computations can be performed efficiently using the companion matrix of f(x) and modular exponentiation. A randomized variant selects random (b, c) pairs satisfying the symbol conditions, while under the generalized Riemann hypothesis (GRH), a single iteration with a suitable random base suffices for very low error probability. The test builds on Fermat and Miller-Rabin concepts by incorporating a quadratic field extension for stronger discrimination. Variants of the QFT include deterministic versions using fixed polynomials, such as x^2 + 1 or x^2 - 2, which produce no known pseudoprimes among odd composites n < 2^{50} (approximately 10^{15}), making them reliable for numbers up to this bound without randomization. The error probability of the probabilistic QFT is unconditionally bounded above by 1/7710 per iteration for composite n, significantly lower than the 1/4 bound for a single Miller-Rabin round, and it admits far fewer pseudoprimes than the Fermat test by exploiting quadratic extensions. Under GRH, the error drops further, approaching negligible levels with few trials. Computationally, the test runs in O(\log^2 n \log \log n) time with fast multiplication, roughly three times the cost of a single Fermat or Miller-Rabin test because of the extension-field exponentiation, and it has been proposed for number-theory libraries such as GMP as an enhanced probable prime test. For a composite n, the test typically fails because the chosen polynomial f(x) does not behave like an irreducible quadratic over a field, violating the expected Frobenius action; for instance, if \alpha^{n+1} \not\equiv -c \pmod{n, f(x)} for a root \alpha, the non-field structure is exposed. This makes the QFT particularly effective against pseudoprimes that pass simpler tests.
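A simplified sketch in Python of the central quadratic-extension computation: it checks that x^n equals the conjugate b - x and that x^{n+1} equals the norm -c in (\mathbb{Z}/n\mathbb{Z})[x]/(x^2 - bx - c). Parameter selection via the Jacobi-symbol conditions, the perfect-square check, and trial division are assumed to have been done already, so this is only the core step and not Grantham's full QFT:

```python
def mulmod_quad(u, v, b, c, n):
    """Multiply u = u0 + u1*x and v = v0 + v1*x in (Z/nZ)[x]/(x^2 - b*x - c)."""
    u0, u1 = u
    v0, v1 = v
    # Reduce x^2 using x^2 = b*x + c.
    return ((u0 * v0 + c * u1 * v1) % n,
            (u0 * v1 + u1 * v0 + b * u1 * v1) % n)

def powmod_quad(base, e, b, c, n):
    """Square-and-multiply exponentiation in the quadratic ring."""
    result = (1, 0)
    while e:
        if e & 1:
            result = mulmod_quad(result, base, b, c, n)
        base = mulmod_quad(base, base, b, c, n)
        e >>= 1
    return result

def frobenius_step(n, b, c):
    """Core Frobenius-style check for odd n with admissible parameters (b, c)."""
    x = (0, 1)                                # the element x itself
    xn = powmod_quad(x, n, b, c, n)
    if xn != (b % n, n - 1):                  # x^n should equal the conjugate b - x
        return False
    xn1 = mulmod_quad(xn, x, b, c, n)         # x^(n+1)
    return xn1 == ((-c) % n, 0)               # x^(n+1) should equal the norm -c

# Illustrative parameters b = 1, c = 1, i.e. f(x) = x^2 - x - 1 (discriminant 5):
print(frobenius_step(113, 1, 1))   # True: 113 is prime and (5/113) = -1, so (b, c) is admissible
print(frobenius_step(123, 1, 1))   # False: 123 = 3 * 41 violates the Frobenius condition
```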

Baillie-PSW Test

The Baillie-PSW test is a probabilistic primality test that combines a Miller-Rabin test with the single fixed base 2 and a strong Lucas probable prime test, providing high reliability for identifying primes in practice. Proposed around 1980 by Robert Baillie, Carl Pomerance, John Selfridge, and Samuel Wagstaff, it leverages the strengths of both components to minimize the chance of false positives among composite numbers. The test has been widely adopted in software libraries, such as GMP, for generating large primes due to its efficiency and empirical effectiveness. The algorithm begins with the Miller-Rabin test using base 2. Given an odd integer n > 2, express n-1 = 2^s d where d is odd. Compute a = 2^d \mod n; if a \equiv 1 \pmod{n} or a \equiv -1 \pmod{n}, then n passes. Otherwise, for r = 1 to s-1, square the previous value and check whether it equals -1 \pmod{n}; if so, n passes as a strong probable prime to base 2. If not, n is composite. If n passes the Miller-Rabin step, the test proceeds to the Lucas component. Parameters P and Q are selected deterministically, typically by Selfridge's method: find the first D in the sequence 5, -7, 9, -11, 13, \ldots with Jacobi symbol \left( \frac{D}{n} \right) = -1, then set P = 1 and Q = (1 - D)/4. The Lucas sequence is then generated modulo n: \begin{align*} U_0 &= 0, \\ U_1 &= 1, \\ U_k &= P U_{k-1} - Q U_{k-2} \pmod{n} \quad \text{for } k \geq 2. \end{align*} n is declared a strong Lucas probable prime if it satisfies conditions analogous to the strong pseudoprime criteria: writing n - \left( \frac{D}{n} \right) = n + 1 = 2^t e with e odd, either U_e \equiv 0 \pmod{n} or V_{2^j e} \equiv 0 \pmod{n} for some 0 \leq j < t, where V_k is the companion Lucas sequence (V_0 = 2, V_1 = P, same recurrence). If both tests pass, n is declared probably prime; otherwise, composite. No composite number is known to pass the Baillie-PSW test, and exhaustive computation has verified that there are no counterexamples below 2^{64} (about 1.8 \times 10^{19}). For example, 341 (11 \times 31) is a Fermat pseudoprime to base 2 but is already rejected by the strong base-2 step, while 2047 (23 \times 89), the smallest strong pseudoprime to base 2, passes that step but fails the strong Lucas test with the standard parameters (D = 5, P = 1, Q = -1); in both cases the combination correctly identifies the number as composite. The test's cost is comparable to a single Miller-Rabin iteration, which needs O(\log n) modular multiplications via binary exponentiation, plus a similar amount of work for the Lucas sequence, which can be computed with logarithmic doubling formulas. Over time, the test has been strengthened, notably in work published between 2010 and 2021, by incorporating additional Miller-Rabin bases (e.g., 3, 5, 7 for larger n) in implementations and by refining the Lucas checks, such as adding a verification that V_{n+1} \equiv 2Q \pmod{n}, further reducing potential error rates without significant cost.
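A simplified sketch of the Baillie-PSW structure in Python. For clarity the Lucas step below uses the plain linear recurrence, which is exponentially slower than the doubling formulas used in real implementations, and it applies only the basic condition that n divides U_{n+1} rather than the full strong Lucas test:

```python
from math import isqrt

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def strong_probable_prime_base2(n: int) -> bool:
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    x = pow(2, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def lucas_probable_prime(n: int) -> bool:
    # Selfridge parameter selection: first D in 5, -7, 9, -11, ... with (D/n) = -1.
    # Assumes small-prime and perfect-square screening was done by the caller.
    D = 5
    while jacobi(D, n) != -1:
        D = -(D + 2) if D > 0 else -(D - 2)
    P, Q = 1, (1 - D) // 4
    # Plain recurrence for U_k mod n (demonstration only; real code uses doubling formulas).
    u_prev, u = 0, 1                  # U_0, U_1
    for _ in range(n + 1):            # after n + 1 steps, u_prev holds U_{n+1} mod n
        u_prev, u = u, (P * u - Q * u_prev) % n
    return u_prev == 0                # ordinary Lucas condition: n divides U_{n+1}

def baillie_psw(n: int) -> bool:
    """Return True if n is a Baillie-PSW probable prime (sketch)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    if isqrt(n) ** 2 == n:
        return False
    return strong_probable_prime_base2(n) and lucas_probable_prime(n)

print(baillie_psw(2047))   # False: passes the base-2 step but fails the Lucas step
print(baillie_psw(1009))   # True
```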

Recent Probabilistic Advances

Recent probabilistic primality tests have built upon the efficiency of established methods like the Miller-Rabin test by incorporating novel algebraic structures and hybrid approaches to enhance detection rates for large integers. In 2023, researchers introduced a Mersenne-specific deterministic-probabilistic hybrid test that applies combinatorial checks to numbers of the form 2^p - 1, combining rigorous algebraic criteria with randomized selection to reduce computational overhead for Mersenne candidates with exponents up to around 10^{18}. This approach outperforms traditional Lucas-Lehmer iterations in hybrid scenarios by integrating probabilistic error bounds with deterministic subroutines, achieving near-certain primality confirmation in under 10% of the time for verified Mersenne primes. A significant 2024 advancement is the Pell's cubic test, an algorithm that utilizes solutions to Pell equations and projectivized sequences derived from cubic Pell curves to certify primality for general odd integers n. By analyzing the periodicity and rank of appearance of these sequences modulo n, the test detects compositeness through non-residency conditions, offering deterministic guarantees under probabilistic assumptions for witnesses, and has been shown effective for numbers up to 10^{15} with error probabilities below 2^{-100}. In 2025, a circulant matrix test emerged as an innovative probabilistic method employing eigenvalue analysis of circulant matrices constructed over cyclotomic fields to identify composite structures. For an integer n, the test constructs a circulant matrix from n-th roots of unity and examines eigenvalue multiplicities; if certain eigenvalues fail to align with prime-field expectations, compositeness is indicated probabilistically, with applications demonstrating robustness for n exceeding 10^{20} and integration potential in cryptographic protocols. Another recent contribution is a Lucas-based test for natural integers, which extends Lucas sequences with ties to Fibonacci progressions for accelerated witness computation. This method refines pseudoprime detection by evaluating sequence terms at specific indices linked to ranks of appearance, yielding faster convergence for probable primes and improved witness efficiency over classical Lucas tests, particularly for integers in the range 10^{12} to 10^{18}. These recent tests share common traits, including tightened error bounds approaching deterministic levels through multiple iterations and the incorporation of supervised machine learning for optimal witness selection, which adaptively chooses bases to minimize false positives based on historical composite patterns. For instance, models trained on prior test data can select witnesses that achieve over 99.9% accuracy in under 50 trials for large n. Despite their promise, these methods remain experimental and are not yet integrated into standard cryptographic libraries like OpenSSL, though they hold potential for enhancing prime generation in future cryptographic systems.

Deterministic Methods

Pocklington's Theorem

Pocklington's theorem underlies a deterministic primality test that certifies the primality of an odd integer n > 1 using a partial factorization of n-1. The theorem states that if n-1 = F \cdot R where F > \sqrt{n}, \gcd(F, R) = 1, and the prime factorization of F is known, then n is prime provided that for every prime q dividing F, there exists an integer a (which may depend on q) such that a^{n-1} \equiv 1 \pmod{n} and \gcd(a^{(n-1)/q} - 1, n) = 1. These conditions force every prime factor p of n to satisfy p \equiv 1 \pmod{F}: for each prime power q^e dividing F, the order of the corresponding a modulo p is divisible by q^e, so F divides p-1. Since every prime factor of n then exceeds F > \sqrt{n}, n cannot be composite. The theorem originates from the work of Henry Cabourn Pocklington in 1914 and forms the basis for many subsequent deterministic primality proving methods. The algorithm based on Pocklington's theorem requires first obtaining a partial factorization of n-1 into F and R satisfying the conditions, which may involve trial division or more advanced factoring techniques. Once F is factored into primes q_1, \dots, q_r, for each q_i a suitable a_i is found by testing small values of a until the conditions hold; the Fermat condition a^{n-1} \equiv 1 \pmod{n} is computed efficiently using modular exponentiation, and the gcd condition verifies that a^{(n-1)/q_i} - 1 shares no factor with n. If all conditions are satisfied for every q_i, primality is proven without needing to factor R. This approach contrasts with probabilistic methods by offering a complete proof of primality, albeit at the cost of factoring effort. A representative example is the primality proof for n = 101. Here, n-1 = 100 = 25 \cdot 4, so take F = 25 > \sqrt{101} \approx 10.05 and R = 4, with \gcd(25, 4) = 1. The only prime dividing F is q = 5. Choosing a = 2, compute 2^{100} \equiv 1 \pmod{101} and \gcd(2^{20} - 1, 101) = \gcd(94, 101) = 1, since 2^{20} \equiv 95 \not\equiv 1 \pmod{101}. Thus, the conditions hold, proving 101 prime. The primary limitation of Pocklington's theorem is its dependence on a partial factorization of n-1 whose fully factored part F exceeds \sqrt{n}, which becomes computationally intensive for large n if n-1 has large prime factors; however, it remains practical when n-1 is smooth or when only a modest portion needs factoring, such as when generating large primes of special form for cryptographic use, where p-1 is constructed with known factors. The method has been extended in various ways, including the generalized Pocklington-Lehmer test, but the core theorem continues to underpin efficient primality certificates in computational number theory.
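A minimal sketch of a Pocklington check in Python; the caller supplies the fully factored part F (as a prime-to-exponent map) and the cofactor R, and the helper names are illustrative:

```python
from math import gcd, isqrt

def pocklington_certify(n: int, f_factors: dict, R: int, max_base: int = 200) -> bool:
    """Try to prove n prime via Pocklington's theorem.

    f_factors maps each prime q to its exponent in F, where n - 1 = F * R,
    F > sqrt(n), and gcd(F, R) = 1. Returns True only if a proof is found.
    """
    F = 1
    for q, e in f_factors.items():
        F *= q ** e
    if F * R != n - 1 or F <= isqrt(n) or gcd(F, R) != 1:
        return False
    for q in f_factors:
        for a in range(2, max_base):
            if pow(a, n - 1, n) != 1:
                return False                    # Fermat condition fails: n is composite
            if gcd(pow(a, (n - 1) // q, n) - 1, n) == 1:
                break                           # suitable witness a found for this q
        else:
            return False                        # no witness found up to max_base; inconclusive
    return True

# Example from the text: n = 101, n - 1 = 100 = 25 * 4 with F = 25 = 5^2 and R = 4.
print(pocklington_certify(101, {5: 2}, 4))      # True
```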

Lucas-Lehmer Test

The Lucas-Lehmer test is a deterministic primality test designed specifically to determine the primality of Mersenne numbers of the form M_p = 2^p - 1, where p is an odd prime. It provides a necessary and sufficient condition for M_p to be prime, making it highly efficient for this special case without requiring factorization of M_p - 1. The algorithm proceeds as follows: define the sequence s_0 = 4, and for i \geq 1, s_i = s_{i-1}^2 - 2 \pmod{M_p}. Then M_p is prime if and only if s_{p-2} \equiv 0 \pmod{M_p}. This sequence is a special case of the Lucas sequence V_n(P, Q), defined by the recurrence V_n = P V_{n-1} - Q V_{n-2} with initial conditions V_0 = 2, V_1 = P; with parameters P = 4 and Q = 1, one has s_i = V_{2^i}(4, 1). The test was originally proposed by Édouard Lucas in 1878 as part of his work on periodic binary fractions and perfect numbers, though the full proof of its correctness was provided by Derrick Henry Lehmer in 1930. For an illustrative example, consider p = 7, so M_7 = 127. The sequence begins with s_0 = 4, then s_1 = 14, s_2 = 194 \equiv 67 \pmod{127}, s_3 = 67^2 - 2 \equiv 42 \pmod{127}, s_4 = 42^2 - 2 \equiv 111 \pmod{127}, and s_5 = 111^2 - 2 \equiv 0 \pmod{127}. Since s_{p-2} = s_5 \equiv 0 \pmod{127}, M_7 is prime. In terms of computational efficiency, the test requires p-2 iterations, each involving the squaring of a number up to p bits long. Using fast multiplication algorithms such as the Schönhage-Strassen method, the time complexity is O(p \log p \log \log p) per squaring, leading to an overall complexity of O(p^2 \log p \log \log p). This efficiency has made the Lucas-Lehmer test indispensable for discovering the largest known primes, all of which are Mersenne primes; for instance, the record as of October 2024 is M_{136279841} = 2^{136279841} - 1, verified by the Great Internet Mersenne Prime Search (GIMPS). Recent variants leverage advanced algebraic identities to accelerate verification: in 2023, the "Eight Levels Theorem" was established, providing a framework for new Lucas-Lehmer-style tests that exploit higher-order relations in the sequence, aimed at faster primality proofs for certain Mersenne candidates.
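A minimal Lucas-Lehmer sketch in Python (the exponent p is assumed to be an odd prime):

```python
def lucas_lehmer(p: int) -> bool:
    """Return True iff M_p = 2^p - 1 is prime, for an odd prime exponent p."""
    m = (1 << p) - 1           # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19] -- M_11 = 2047 = 23 * 89 is composite
```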

AKS Primality Test

The AKS primality test is a landmark deterministic algorithm for verifying whether a positive integer n > 1 is prime, running in polynomial time without relying on unproven conjectures or randomization. Developed by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena at the Indian Institute of Technology Kanpur, it was announced in 2002 and formally published in 2004, resolving a major open problem in computational number theory by proving that PRIMES ∈ P. Unlike earlier deterministic methods limited to special forms of numbers, AKS applies to arbitrary n, marking a theoretical breakthrough while prioritizing rigorous polynomial-time guarantees. The algorithm proceeds in several steps to ensure n satisfies key number-theoretic properties unique to primes. First, it checks whether n is a perfect power, i.e., n = b^k for integers b > 1 and k > 1; if so, n is composite. Next, it identifies the smallest integer r (which can be shown to be O(\log^5 n)) such that the multiplicative order of n modulo r exceeds (\log_2 n)^2. It then tests for small factors by computing \gcd(n, a) for a = 2 to r; any nontrivial gcd proves n composite. If n \leq r, output prime. Otherwise, the core verification examines polynomial identities: for each a = 1 to \lfloor 2 \sqrt{\phi(r)} \log_2 n \rfloor, confirm that (x + a)^n \equiv x^n + a \pmod{n, x^r - 1}. If all congruences hold, n is prime; any failure shows n is composite. This verification relies on the binomial theorem applied in the quotient ring (\mathbb{Z}/n\mathbb{Z})[x] / (x^r - 1), where the chosen r ensures that the ring's structure captures the cyclic properties distinguishing primes via a generalization of Fermat's Little Theorem to polynomials. For prime n, the congruence holds identically because the binomial coefficients \binom{n}{k} vanish modulo n for 0 < k < n (the "freshman's dream"), while composites fail for sufficiently many a because their factorizations disrupt the ring's behavior. The parameter bounds guarantee the test's completeness and soundness without exhaustive factorization. To illustrate with a small prime like n = 5, a suitable choice is r = 7 (the order of 5 modulo 7 is 6 > (\log_2 5)^2 \approx 5.39). Then \phi(7) = 6, so the congruence is checked for a up to \lfloor 2\sqrt{6} \log_2 5 \rfloor = 11. For a = 1, compute (x + 1)^5 \bmod (5, x^7 - 1): binomial expansion yields x^5 + 5x^4 + 10x^3 + 10x^2 + 5x + 1 \equiv x^5 + 1 \pmod{5}, and reduction modulo x^7 - 1 leaves this unchanged, matching the right side x^5 + 1 as expected for prime n. Similar checks for the other a verify the identity via polynomial arithmetic (in practice, n = 5 \leq r = 7 would already terminate the algorithm with "prime"). The original AKS analysis gives a rigorous worst-case running time of \tilde{O}(\log^{10.5} n), or heuristically \tilde{O}(\log^6 n) under conjectures on the distribution of certain primes; subsequent refinements, notably by Lenstra and Pomerance, lowered the proven bound to \tilde{O}(\log^6 n), with conjectured variants as fast as \tilde{O}(\log^3 n). Despite these advances, high constant factors render it impractical for large-scale use: it is viable only for modest inputs (roughly n < 10^{18}, where it can take seconds to minutes on modern hardware) and is far slower than the probabilistic alternatives used in cryptographic applications.
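A compact, unoptimized AKS sketch in Python following the steps above; polynomial arithmetic uses plain coefficient lists, so it is practical only for very small n (function names are illustrative):

```python
from math import gcd, log2

def is_perfect_power(n: int) -> bool:
    for k in range(2, n.bit_length() + 1):
        lo, hi = 2, 1 << (n.bit_length() // k + 1)
        while lo <= hi:
            mid = (lo + hi) // 2
            val = mid ** k
            if val == n:
                return True
            if val < n:
                lo = mid + 1
            else:
                hi = mid - 1
    return False

def multiplicative_order(n: int, r: int) -> int:
    """Order of n modulo r, or 0 if gcd(n, r) > 1."""
    if gcd(n, r) != 1:
        return 0
    k, x = 1, n % r
    while x != 1:
        x = x * n % r
        k += 1
    return k

def poly_mul(a, b, r, n):
    """Multiply polynomials (coefficient lists of length r) modulo (x^r - 1, n)."""
    res = [0] * r
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj:
                    res[(i + j) % r] = (res[(i + j) % r] + ai * bj) % n
    return res

def poly_pow(base, e, r, n):
    result = [0] * r
    result[0] = 1
    while e:
        if e & 1:
            result = poly_mul(result, base, r, n)
        base = poly_mul(base, base, r, n)
        e >>= 1
    return result

def aks_is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if is_perfect_power(n):
        return False
    log2n = log2(n)
    # Step 2: smallest r with ord_r(n) > (log2 n)^2.
    r = 2
    while multiplicative_order(n, r) <= log2n * log2n:
        r += 1
    # Step 3: small-factor check.
    for a in range(2, min(r, n - 1) + 1):
        if 1 < gcd(a, n) < n:
            return False
    # Step 4.
    if n <= r:
        return True
    # Step 5: polynomial congruences (x + a)^n = x^n + a (mod n, x^r - 1).
    phi_r = sum(1 for k in range(1, r) if gcd(k, r) == 1)
    bound = int(2 * phi_r ** 0.5 * log2n) + 1
    for a in range(1, bound + 1):
        lhs = poly_pow([a % n, 1] + [0] * (r - 2), n, r, n)
        rhs = [0] * r
        rhs[0] = a % n
        rhs[n % r] = (rhs[n % r] + 1) % n
        if lhs != rhs:
            return False
    return True

print([m for m in range(2, 40) if aks_is_prime(m)])
```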

Elliptic Curve Primality Proving

Elliptic Curve Primality Proving (ECPP) is a primality proving algorithm that generates verifiable certificates of primality for large integers using properties of elliptic curve groups defined over the ring \mathbb{Z}/n\mathbb{Z}. The method relies on selecting suitable elliptic curves and verifying the orders of points on these curves to establish that n cannot be composite, since such configurations are impossible for composite moduli. Unlike purely probabilistic tests, ECPP produces a short certificate that can be independently verified in polynomial time, making it particularly valuable for certifying the primality of enormous numbers in number theory and cryptography. The foundational idea, introduced by Shafi Goldwasser and Joe Kilian in 1986, hinges on a key theorem: for an integer n > 1 coprime to 6, consider an elliptic curve E given by y^2 = x^3 + ax + b over \mathbb{Z}/n\mathbb{Z} with discriminant nonzero modulo n, and a point L \neq \mathcal{O} (the identity) on E such that qL = \mathcal{O} for some prime q > n^{1/2} + 2n^{1/4} + 1. Then n must be prime, because if n were composite it would have a prime factor p \leq \sqrt{n}, and reducing E and L modulo p would yield a point of order q in a group of size at most (\sqrt{p} + 1)^2 \leq (n^{1/4} + 1)^2 < q, a contradiction. This proposition ensures that the existence of such a large prime-order point forces n to be prime. The algorithm begins with an elliptic curve probable prime (ECPRP) search to identify candidate curves. Random elliptic curves are generated over \mathbb{Z}/n\mathbb{Z}, and for each a random point P is selected; scalar multiplications, using methods such as baby-step giant-step or Pollard's rho, are used to determine point orders and subgroup sizes. If the order includes a large prime factor q satisfying the bound of the theorem for multiple independent curves (typically 20 or more, to push the error probability below 2^{-80}), n is deemed an ECPRP. This step is probabilistic but has negligible failure rate for primes. To generate a full certificate, the process then exhibits explicit points whose exact orders can be confirmed. The complete ECPP procedure, refined by A. O. L. Atkin in 1986 and implemented by François Morain, constructs a recursive certificate by iteratively reducing the primality proof of n to that of smaller primes. Starting with a probable prime n, an elliptic curve E is found such that the group order satisfies #E(\mathbb{Z}/n\mathbb{Z}) = r \cdot s, where s is a probable prime exceeding (n^{1/4} + 1)^2 and r is small or smooth. A point P of order s is exhibited, proving n prime conditional on the primality of s. The process recurses on s, forming a chain of elliptic curves and points until reaching primes small enough to verify directly (e.g., by trial division). The certificate consists of the sequence of curves, points, and orders, verifiable by recomputing scalar multiplications and subgroup checks. This recursive structure yields a full deterministic proof while keeping the certificate size O((\log n)^2). In terms of efficiency, the Atkin-Morain ECPP variant achieves a heuristic running time of \tilde{O}(\log^4 n), with practical performance vastly outperforming the AKS test for numbers beyond a few thousand digits; for instance, it has certified primes with over 100,000 decimal digits, such as the repunit prime R_{109297} with 109,297 digits (as of May 2025), whereas AKS implementations struggle beyond about 1,000 digits in feasible time.
This superiority stems from ECPP's reliance on random curve selection and efficient order computation via baby-step giant-step or rho methods, rather than the dense polynomial arithmetic in AKS. The algorithm is implemented in software such as Primo, which has produced primality proofs for record-setting large primes. As a representative illustration, to certify a small prime like n = 1000003 one could take a curve such as y^2 = x^3 + 2x + 1 \pmod{n}, find a point whose order has a large prime factor q satisfying the Goldwasser-Kilian bound, verify that the point multiplied by q gives \mathcal{O}, and then recursively certify q itself. In general, the verification checks conditions like qP = \mathcal{O} together with (m/q)P \neq \mathcal{O} for the full group order m, ensuring the point genuinely has the claimed large prime order.
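A minimal sketch in Python of verifying a single Goldwasser-Kilian step, i.e., one link of an ECPP certificate. The curve parameters, point, and prime q must come from a certificate generator, which is the hard part and is not shown here; function names are illustrative:

```python
from math import gcd, isqrt

class FactorFound(Exception):
    """Raised when a non-invertible denominator shows that n cannot be prime."""

def inv_mod(x, n):
    if gcd(x, n) != 1:
        raise FactorFound(gcd(x, n))
    return pow(x, -1, n)

def ec_add(P, Q, a, n):
    """Affine addition on y^2 = x^3 + a*x + b over Z/nZ; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, n) % n
    else:
        lam = (y2 - y1) * inv_mod((x2 - x1) % n, n) % n
    x3 = (lam * lam - x1 - x2) % n
    y3 = (lam * (x1 - x3) - y1) % n
    return (x3, y3)

def ec_mul(k, P, a, n):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

def verify_gk_step(n, a, b, P, q):
    """Check one certificate link: P lies on the curve, q is large enough, and q*P = O.

    If this holds and q is itself prime (certified recursively), then n is prime.
    """
    x, y = P
    if (y * y - (x * x * x + a * x + b)) % n != 0:
        return False                       # P is not on the curve
    if q <= (isqrt(isqrt(n)) + 2) ** 2:    # conservative stand-in for q > (n^(1/4) + 1)^2
        return False
    try:
        return ec_mul(q, P, a, n) is None
    except FactorFound:
        return False                       # a failed inversion means n cannot be prime
```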

Theoretical Aspects

Computational Complexity

The computational complexity of primality testing has been a central topic in computational number theory and complexity theory, particularly due to its implications for cryptography and algorithm design. The PRIMES decision problem, deciding whether a given integer n > 1 is prime, belongs to the complexity class P, as established by the deterministic polynomial-time algorithm of Agrawal, Kayal, and Saxena in 2002, which with subsequent refinements runs in \tilde{O}(\log^{6} n) time on a Turing machine. This breakthrough resolved a long-standing open question, showing that primality can be decided efficiently without randomness or unproven hypotheses. Prior to AKS, PRIMES was known to be in co-NP, since the complementary problem of compositeness admits short certificates in the form of nontrivial factors, verifiable in polynomial time. Additionally, Pratt demonstrated in 1975 that PRIMES is in NP via succinct certificates based on primitive roots and recursive factorization of n-1. Probabilistic primality tests, such as the Miller-Rabin algorithm, place PRIMES in the randomized polynomial-time class BPP (and more precisely in co-RP, given the one-sided error), where the algorithm accepts primes with probability 1 and rejects composites with probability at least 3/4 per iteration, requiring O(k \log^3 n) time for error probability below 2^{-k}. These tests operate under the standard Turing machine model with access to random bits, offering practical efficiency far superior to deterministic counterparts despite the theoretical allowance for error. In contrast, the related integer factorization problem is not known to be solvable in polynomial time (its decision version lies in NP \cap co-NP), highlighting that recognizing primes is asymptotically easier than decomposing them. Under the Generalized Riemann Hypothesis (GRH), stronger bounds were available even before AKS; for instance, Miller's test becomes deterministic by checking all bases up to O((\log n)^2), implying PRIMES \in DTIME(\tilde{O}(\log^5 n)) conditionally. In circuit complexity models, since PRIMES \in P, it admits polynomial-size uniform families of Boolean circuits. Space complexity for primality testing is also favorable, with randomized algorithms like Miller-Rabin requiring only O(\log^2 n) bits of space due to efficient modular arithmetic implementations.

Number-Theoretic Tests

Number-theoretic tests draw on algebraic structures, such as orders in quadratic fields or properties of recurrence sequences, to certify primality for numbers with special forms or under specific parameter choices. These methods provide deterministic proofs when applicable, often exploiting the structure of the multiplicative group modulo n or extensions thereof. Proth's theorem offers a deterministic primality criterion for numbers of the form n = k \cdot 2^m + 1, where k is an odd positive integer less than 2^m. Formulated by François Proth in 1878, the theorem states that if there exists an integer a such that a^{(n-1)/2} \equiv -1 \pmod{n}, then n is prime; conversely, when n is prime, any quadratic non-residue a modulo n satisfies this condition by Euler's criterion, since the Legendre symbol (a/n) then equals -1. The test is efficient because the exponent is dominated by a power of two, so the power can be computed with roughly m + \log_2 k modular squarings via square-and-multiply. An illustrative example is n = 3 \cdot 2^4 + 1 = 49. Here, (n-1)/2 = 24, and testing a candidate base such as a = 3 yields 3^{24} \equiv 22 \pmod{49}; since for a prime modulus this value would have to be \pm 1, the result proves 49 composite (indeed 49 = 7^2). Primes of this form are termed Proth primes, a family that includes Fermat primes (k = 1, with m a power of two) and other generalized forms used in searches for large primes. These tests underpin distributed computing efforts, such as PrimeGrid's Proth Prime Search, which has identified record-breaking Proth primes exceeding millions of digits through volunteer resources. As of 2025, new number-theoretic tests based on circulant matrix eigenvalues in cyclotomic fields have been proposed, offering alternative deterministic methods. General Lucas tests generalize Fermat-based criteria using linear recurrence sequences defined by integers P and Q, with discriminant D = P^2 - 4Q. For an odd n > 2, select P and Q such that \gcd(n, 2Q) = 1 and the Jacobi symbol (D/n) = -1. The Lucas sequence U_k is generated by U_0 = 0, U_1 = 1, and U_k = P U_{k-1} - Q U_{k-2} for k \geq 2. The rank of apparition of n in this sequence is the smallest positive d such that n divides U_d; for prime n, this rank divides n - (D/n). A key criterion is that, under these conditions, n can be prime only if n divides U_{n - (D/n)}; full certification requires additional checks on the companion sequence V_k to exclude Lucas pseudoprimes. These tests, originating from Édouard Lucas's work in the late 19th century, tie into properties of the ring \mathbb{Z}[\alpha] where \alpha satisfies x^2 - P x + Q = 0, providing algebraic certificates via the sequence's period modulo n. Components of Lucas tests appear in composite criteria like the Baillie-PSW test for enhanced probabilistic verification.
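A minimal Proth-test sketch in Python; trying random bases until one certifies primality is an illustrative strategy, not a prescribed one:

```python
import random

def proth_test(k: int, m: int, trials: int = 64) -> bool:
    """Proth's theorem for n = k * 2^m + 1 with odd k < 2^m.

    Returns True if some base a satisfies a^((n-1)/2) = -1 (mod n), proving n prime.
    Returns False if a tried base gives a value other than +-1 (proving n composite),
    or if no certifying base turns up within the allotted trials (inconclusive).
    """
    assert k % 2 == 1 and k < (1 << m)
    n = k * (1 << m) + 1
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        v = pow(a, (n - 1) // 2, n)
        if v == n - 1:
            return True        # a certifies primality by Proth's theorem
        if v != 1:
            return False       # Euler's criterion is violated, so n is composite
    return False               # inconclusive (extremely unlikely for a prime n)

print(proth_test(3, 4))        # False: 3 * 2^4 + 1 = 49 = 7^2
print(proth_test(5, 7))        # True:  5 * 2^7 + 1 = 641 is prime
```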
