Prime number
A prime number is a natural number greater than 1 that is divisible only by 1 and itself, having exactly two distinct positive divisors.[1] These numbers form the foundational "atoms" of the integers under multiplication, as established by the fundamental theorem of arithmetic, which states that every integer greater than 1 can be expressed uniquely as a product of prime numbers, disregarding order.[2] The concept dates back to ancient times, with Euclid providing the first known proof around 300 BCE in his Elements that there are infinitely many primes, demonstrating this by assuming a finite list and constructing a number (their product plus one) that must have a prime factor not in the list.[3] Primes have intrigued mathematicians for millennia due to their irregular distribution and elusive patterns, influencing key developments in number theory, such as the prime number theorem, which approximates the density of primes up to a large number x as roughly x / ln(x).[4] Beyond pure mathematics, prime numbers underpin modern cryptography, particularly in systems like RSA, where the security relies on the computational difficulty of factoring the product of two large distinct primes into its components.[5] Ongoing research explores properties like twin primes—pairs differing by 2—[6] and the search for ever-larger primes, often using specialized algorithms to verify primality for numbers with tens of millions of digits as of 2025.[7]
Basic Concepts
Definition
A prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself.[1] Equivalently, if p is a prime number, then its only positive divisors d satisfy 1 \leq d \leq p with d = 1 or d = p.[1] This modern definition evolved from earlier formulations in ancient Greek mathematics. In Euclid's Elements (circa 300 BCE), a prime number was described as "that which is measured by a unit alone," meaning it has no proper divisors other than the unit 1, which aligns conceptually with the contemporary view but is phrased in terms of measurement rather than explicit divisors.[8]
Related terminology distinguishes primes within the natural numbers and broader algebraic structures. A composite number is a natural number greater than 1 that is not prime, expressible as a product of two or more prime numbers via the fundamental theorem of arithmetic.[9] The number 1 is neither prime nor composite. In the ring of integers \mathbb{Z}, the units are the elements \pm 1, which have multiplicative inverses within \mathbb{Z} and are excluded from consideration as primes.[10] In more general commutative rings, irreducible elements are nonzero, non-unit elements that cannot be expressed as a product of two non-unit elements; while all prime elements are irreducible, the converse does not hold in arbitrary integral domains.[11]
Examples
Prime numbers, defined as natural numbers greater than 1 with no positive divisors other than 1 and themselves, are best understood through concrete examples that demonstrate their distribution and patterns. The smallest prime is 2, which stands out as the only even prime number, since all other even numbers greater than 2 are divisible by 2 and thus composite.[1] The first 25 prime numbers, comprising all primes less than 100, are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, and 97. These examples illustrate how primes become sparser among larger integers.[1]
Notable patterns emerge among primes, such as twin primes, which are pairs of primes differing by 2. The first few twin prime pairs are (3, 5), (5, 7), (11, 13), (17, 19), and (29, 31).[12] Another pattern involves Mersenne primes, which are primes of the form 2^p - 1 where p is itself prime. Examples include 2^2 - 1 = 3, 2^3 - 1 = 7, 2^5 - 1 = 31, and 2^7 - 1 = 127.[13]
Basic observations about primes reveal that their density decreases as numbers grow, with the gaps between consecutive primes generally increasing. For instance, while early gaps are small (e.g., 1 between 2 and 3), larger gaps appear later, such as 8 between 89 and 97, and even wider separations beyond 100. There is no simple formula to generate the nth prime directly, underscoring the irregular yet structured nature of their occurrence.[1]
Visual representations aid in building intuition for primes. The Sieve of Eratosthenes provides a step-by-step visualization: starting with numbers from 2 to 100 listed in a grid, multiples of 2 are marked as composite, followed by multiples of the next unmarked number (3), then 5, and so on up to the square root of 100 (about 10), leaving the unmarked numbers as the 25 primes up to 97.[14] Similarly, the Ulam spiral arranges positive integers in a clockwise spiral on a grid, with primes highlighted (often in black or red), revealing striking diagonal lines and clusters that suggest underlying patterns in prime distribution despite their apparent randomness.[15]
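The sieve procedure described above translates directly into a short program. The following sketch (illustrative Python, not part of the original article; the function name sieve_of_eratosthenes is chosen here for clarity) marks the multiples of each remaining number up to the square root of the limit and collects the 25 primes below 100.

    def sieve_of_eratosthenes(limit):
        # flags[k] stays True while k is unmarked (still potentially prime)
        flags = [True] * (limit + 1)
        flags[0] = flags[1] = False
        d = 2
        while d * d <= limit:
            if flags[d]:
                # mark every multiple of d from d*d onward as composite
                for multiple in range(d * d, limit + 1, d):
                    flags[multiple] = False
            d += 1
        return [n for n in range(2, limit + 1) if flags[n]]

    print(sieve_of_eratosthenes(100))       # [2, 3, 5, ..., 89, 97]
    print(len(sieve_of_eratosthenes(100)))  # 25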
Historical Development
Ancient and Early Modern Periods
The concept of prime numbers emerged in ancient Greece as part of early number theory. In his Elements, composed around 300 BCE, Euclid provided a foundational definition in Book VII: a prime number is one measured by a unit alone, meaning it has no divisors other than itself and one.[16] Euclid further established key tools, such as the Euclidean algorithm for finding greatest common divisors, which underpins many arguments about divisibility and primes.[17] A landmark achievement was Euclid's proof of the infinitude of primes, stated and proved in Proposition 20 of Book IX: there are infinitely many prime numbers.[18] This theorem, argued through contradiction by assuming a finite list of primes and considering their product plus one, which must have a prime factor outside the list, underscored the endless nature of primes and influenced all subsequent number theory. Around 240 BCE, Eratosthenes of Cyrene developed an efficient method for identifying primes up to a given limit, known as the sieve of Eratosthenes; the earliest surviving description appears in Nicomachus of Gerasa's Introduction to Arithmetic (c. 100 CE), where it is portrayed as a process of sifting odds by eliminating multiples of successive primes starting from 3.[19]
Medieval Islamic scholars built upon Greek foundations, integrating primes into broader algebraic and arithmetic studies. Abu Bakr al-Karaji (c. 953–1029), in works like Al-Fakhri fi al-jabr wa al-muqabala, advanced algebraic techniques such as polynomial operations and mathematical induction.[20] These contributions bridged arithmetic and emerging algebra during the Islamic Golden Age.[21]
The early modern period saw renewed interest in primes through correspondence and treatises. In 1640, Pierre de Fermat stated in a letter to Bernard Frénicle de Bessy what is now called Fermat's Little Theorem: if p is a prime and a is an integer not divisible by p, then a^{p-1} \equiv 1 \pmod{p}. Fermat offered no proof, but the result highlighted modular arithmetic's connection to primes. Later, in 1736, Leonhard Euler provided the first rigorous proof of Fermat's theorem in his paper "Theorematum quorundam ad numeros primos spectantium demonstratio I," while also advancing factorization methods in number theory, including applications to cyclotomic polynomials and prime distribution.[22] Euler's work solidified primes as central to analytic approaches in the 18th century.[23]
Debates on the Primality of One
In ancient Greek mathematics, the concept of prime numbers was introduced without explicitly addressing the status of 1, leading to initial ambiguity. Euclid, in his Elements (circa 300 BCE), defined a prime number as one "measured by a unit alone," a phrasing that technically applies to 1 since it has no divisors other than itself.[24] However, Euclid's treatments, such as his proof of the infinitude of primes in Book IX, Proposition 20, implicitly exclude 1 by considering numbers greater than 1 and listing primes starting from 2, reflecting the Pythagorean view that 1 was not a proper number but a unit of measure.
This ambiguity persisted into the medieval period, where 1 was sometimes treated as prime but increasingly excluded for practical reasons. In his Liber Abaci (1202), Fibonacci described "incomposite" numbers—equivalent to primes—starting from 2 and omitting 1, marking an early shift toward exclusion in Western mathematics to align with factorization practices in arithmetic problems.[25]
By the 19th century, a consensus emerged to firmly exclude 1 from primes, driven by the need to uphold key theorems in number theory. Carl Friedrich Gauss, in his Disquisitiones Arithmeticae (1801), stated that "a composite number can be resolved into prime factors in only one way," implying that primes must be greater than 1 to ensure the uniqueness of this decomposition, as including 1 would allow trivial multiplications that violate the theorem's intent.[25] Gauss's influence solidified this view, echoed by contemporaries like Legendre, who proved the existence of prime factorizations excluding units like 1.[25]
In modern number theory, 1 is classified as neither prime nor composite, a standard adopted by the early 20th century for consistency across theorems. This exclusion stems from 1 having exactly one positive divisor (itself), unlike primes which have exactly two (1 and themselves), and from its status as a unit that would undermine the unique factorization theorem if treated as prime.[25] Dirichlet reinforced this in his Vorlesungen über Zahlentheorie (1863), noting it is "convenient not to include unity among the prime numbers" to simplify analytic results like those on primes in progressions.[25]
19th- and 20th-Century Advances
In 1837, Peter Gustav Lejeune Dirichlet proved that if a and d are positive integers with \gcd(a, d) = 1, then there are infinitely many primes in the arithmetic progression a + nd for n = 0, 1, 2, \dots.[26] This theorem extended Euclid's result on the infinitude of primes by showing their infinite distribution across certain progressions, marking the birth of analytic number theory through Dirichlet's introduction of L-functions and characters.[26] Building on this, Bernhard Riemann's 1859 paper "Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse" analyzed the Riemann zeta function \zeta(s) = \sum_{n=1}^\infty n^{-s} for complex s, deriving its functional equation and conjecturing that all nontrivial zeros lie on the line \Re(s) = 1/2.[27][28] These analytic tools linked the zeros of \zeta(s) to the distribution of primes, providing a framework for estimating the prime-counting function \pi(x) and influencing subsequent developments in prime theory.[27]
The culmination of these ideas came in 1896, when Jacques Hadamard and Charles Jean de la Vallée Poussin independently proved the Prime Number Theorem, stating that \pi(x) \sim x / \ln x as x \to \infty.[29] Their proofs relied on complex analysis of the zeta function, showing no zeros on the line \Re(s) = 1 and confirming the asymptotic density of primes as approximately 1 / \ln n near n.[29]
At the 1900 International Congress of Mathematicians, David Hilbert posed his eighth problem, which encompassed the Riemann Hypothesis on zeta zeros, the Goldbach conjecture (every even integer greater than 2 as a sum of two primes), the twin prime conjecture (infinitely many primes differing by 2), and related questions on prime representations.[30][31] Hilbert emphasized resolving these to deepen understanding of prime distribution, including bounds on \pi(x) - \mathrm{Li}(x) and applications to algebraic number fields.[31]
Throughout the 20th century, the Goldbach and twin prime conjectures persisted as unsolved, despite significant progress via sieve methods.[29] In 1919, Viggo Brun developed his sieve to show that the sum of reciprocals of twin primes converges to Brun's constant (approximately 1.902), implying twin primes are sparser than all primes.[32] For Goldbach, Ivan Vinogradov proved in 1937 that every sufficiently large odd integer is the sum of three primes, advancing a related weak form, while computational verifications extended to enormous even numbers without counterexamples.[33] These efforts highlighted the conjectures' resilience and spurred innovations in sieve theory and computational number theory.[33][32]
Elementary Properties
Unique Factorization Theorem
The Fundamental Theorem of Arithmetic, also known as the Unique Factorization Theorem, asserts that every integer greater than 1 can be expressed as a product of prime numbers in a unique way, disregarding the order of the factors.[2] This theorem establishes the primes as the fundamental building blocks of the natural numbers under multiplication.[34] Formally, for any integer n > 1, there exist distinct primes p_1, p_2, \dots, p_k and positive integers a_1, a_2, \dots, a_k such that n = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}. This representation is unique up to the ordering of the primes.[2] The theorem was rigorously proved by Carl Friedrich Gauss in his 1801 work Disquisitiones Arithmeticae, building on earlier ideas from Euclid.[35]
The proof consists of two main parts: existence and uniqueness. Existence follows from the well-ordering principle of the natural numbers. Consider the set of integers greater than 1 that cannot be factored into primes; if nonempty, it has a least element m. Since m is composite (as 1 is excluded and primes factor trivially), it has a divisor d with 1 < d < m, so d and m/d both factor into primes by minimality, implying m does too—a contradiction. Thus, every integer greater than 1 has a prime factorization.[2] Uniqueness relies on Euclid's lemma, which states that if a prime p divides a product ab, then p divides a or p divides b. This lemma, proved in Euclid's Elements (Book VII, Proposition 30), extends by induction to products of more factors.[36] Assuming two distinct factorizations for n, Euclid's lemma implies one prime from the first must divide the second, leading to identical multisets of primes by contradiction.[34]
This theorem underpins many computational tools in number theory, notably the greatest common divisor (gcd) and least common multiple (lcm) of integers. Given prime factorizations m = \prod p_i^{b_i} and n = \prod p_i^{c_i} (with exponents zero where primes are absent), the gcd is \prod p_i^{\min(b_i, c_i)} and the lcm is \prod p_i^{\max(b_i, c_i)}, satisfying \gcd(m, n) \cdot \operatorname{lcm}(m, n) = m \cdot n, as illustrated in the sketch below.[37] These formulas follow directly from the prime factorizations and complement divisor algorithms such as the Euclidean algorithm and its extensions.[38] While the theorem holds in the ring of integers \mathbb{Z}, it fails in certain other integral domains, such as some rings of algebraic integers, where unique factorization into irreducibles does not occur.[39]
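As a minimal illustration of these formulas, the sketch below (Python, assumed here for illustration; it uses a naive trial-division factorizer rather than any particular library) assembles the gcd and lcm from prime exponents and checks the identity \gcd(m, n) \cdot \operatorname{lcm}(m, n) = m \cdot n.

    from collections import Counter

    def prime_factorization(n):
        # naive trial division; returns a Counter mapping prime -> exponent
        factors = Counter()
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors[d] += 1
                n //= d
            d += 1
        if n > 1:
            factors[n] += 1
        return factors

    def gcd_lcm(m, n):
        fm, fn = prime_factorization(m), prime_factorization(n)
        gcd = lcm = 1
        for p in set(fm) | set(fn):
            gcd *= p ** min(fm[p], fn[p])   # absent primes have exponent 0
            lcm *= p ** max(fm[p], fn[p])
        return gcd, lcm

    g, l = gcd_lcm(84, 360)
    print(g, l, g * l == 84 * 360)   # 12 2520 True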
Infinitude of Primes
One of the most celebrated results in number theory is the infinitude of prime numbers, first proved by the ancient Greek mathematician Euclid around 300 BCE in Proposition 20 of Book IX of his Elements. Euclid's proof proceeds by contradiction: assume there are only finitely many primes, say p_1, p_2, \dots, p_k, where p_1 = 2 < p_2 = 3 < \dots < p_k. Construct the integer N = p_1 p_2 \cdots p_k + 1. Since N > 1, it must have at least one prime factor p. However, p cannot equal any p_i, because dividing N by p_i leaves a remainder of 1 for each i. Thus, p is a prime not in the original list, contradicting the assumption that the list was complete. Therefore, there must be infinitely many primes.[18] The construction can be expressed formally as N = \prod_{i=1}^k p_i + 1, where N is greater than 1 and hence divisible by some prime outside the set \{p_1, \dots, p_k\}. This argument relies on the fact that every integer greater than 1 has a prime factor, a key step toward the unique factorization of integers.[18]
A simple variation of Euclid's proof establishes the infinitude of odd primes. Since 2 is the only even prime and Euclid's argument yields infinitely many primes overall, there must be infinitely many odd primes as well. Alternatively, assuming a finite list of odd primes q_1, \dots, q_m, one can construct M = 2 \cdot q_1 \cdots q_m - 1, which is odd and greater than 1, ensuring an odd prime factor not in the list.[40]
Euclid's proof laid the foundational cornerstone of number theory, inspiring centuries of research into the distribution and properties of primes and demonstrating the power of deductive reasoning in mathematics. Its geometric presentation in the Elements reflects the era's emphasis on visual and spatial arguments, yet it remains a model of logical clarity more than two millennia later.[41]
A common misconception about the proof is that it provides a method to construct or enumerate all primes sequentially; in reality, it only shows that any finite list can be extended by at least one more prime, without specifying how to find it. Another frequent error is assuming N itself is always prime, whereas the proof requires only that N has some prime divisor outside the original list; N itself may be composite.[40]
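The construction in the proof is easy to carry out numerically. The sketch below (illustrative Python, not from the original text) forms the product of a finite list of primes plus one and exhibits a prime factor outside the list; with the first six primes it also shows that N itself need not be prime.

    def smallest_prime_factor(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n  # n itself is prime

    def euclid_step(primes):
        # product of the assumed-complete list of primes, plus one
        N = 1
        for p in primes:
            N *= p
        N += 1
        return N, smallest_prime_factor(N)

    print(euclid_step([2, 3]))                 # (7, 7): here N happens to be prime
    print(euclid_step([2, 3, 5, 7, 11, 13]))   # (30031, 59): N = 59 * 509 is composite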
Prime-Generating Formulas
One notable early attempt at a prime-generating formula is the quadratic polynomial n^2 + n + 41, discovered by Leonhard Euler in 1772. This expression yields prime values for each of the 40 consecutive integers n = 0, 1, \dots, 39, but it fails for n = 40, where the value 1681 = 41 \times 41 = 41^2 is composite.[42] Despite its impressive streak, the polynomial generates composites for infinitely many larger n, illustrating the challenges in constructing formulas that produce only primes indefinitely.
In 1947, W. H. Mills introduced a prime-representing function based on analytic number theory results about prime gaps. Specifically, Mills proved the existence of a constant \theta > 1 such that A(n) = \lfloor \theta^{3^n} \rfloor is prime for every positive integer n. The value \theta \approx 1.3063778838630806904686144926 (known as Mills' constant) was later computed numerically to ensure the formula works for small n, but the constant is not explicitly derived and relies on empirical verification for practical use.[43] This formula generates an infinite sequence of primes but is not constructive in a simple algebraic sense, as the choice of \theta depends on deep results like those of Ingham on prime distributions.
Another explicit but highly inefficient formula for the nth prime p_n was given by C. P. Willans in 1964. Willans' formula is p_n = 1 + \sum_{i=1}^{2^n} \left\lfloor \left( \frac{n}{\sum_{j=1}^i \left\lfloor \left( \cos \frac{(j-1)! + 1}{j} \pi \right)^2 \right\rfloor} \right)^{1/n} \right\rfloor, which uses Wilson's theorem to count primes up to a bound and isolates p_n through summation and floor functions. Although it correctly produces the nth prime for any positive integer n, the computational complexity grows exponentially, making it impractical for large n.[44]
Despite these efforts, fundamental limitations exist for polynomial-based prime generators. In 1951, Ivan Niven proved that no non-constant polynomial with integer coefficients can produce prime values for all sufficiently large positive integers n, as such polynomials will inevitably take composite values infinitely often due to their modular arithmetic properties. This result aligns with broader undecidability insights from Yuri Matiyasevich's 1970 resolution of Hilbert's tenth problem, which implies that no simple single-variable polynomial can enumerate exactly the primes without also producing composites or missing some primes.[45]
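Euler's polynomial and its first failure can be checked in a few lines. The sketch below (illustrative Python, using naive trial division; the helper names are chosen for this example) confirms prime values for n = 0, \dots, 39 and compositeness at n = 40.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def euler_polynomial(n):
        return n * n + n + 41

    print(all(is_prime(euler_polynomial(n)) for n in range(40)))   # True: 40 prime values
    print(euler_polynomial(40), is_prime(euler_polynomial(40)))    # 1681 False (1681 = 41 * 41)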
Unsolved Elementary Problems
One of the most famous unsolved problems in number theory is the Goldbach conjecture, proposed by Christian Goldbach in a 1742 letter to Leonhard Euler, which states that every even integer greater than 2 can be expressed as the sum of two prime numbers. Despite extensive efforts, the conjecture remains unproven, though it has been empirically verified for all even integers up to 4 × 10¹⁸ through computational methods.[46] A significant partial result is Chen's theorem from 1966, which proves that every sufficiently large even integer is the sum of a prime and a number with at most two prime factors.[47]
The twin prime conjecture, a special case of Polignac's conjecture proposed by Alphonse de Polignac in 1849, posits that there are infinitely many pairs of primes differing by 2. This remains open as of 2025, with no full proof established. Major progress came in 2013 when Yitang Zhang demonstrated that there are infinitely many pairs of primes differing by at most 70 million, providing the first bounded gap result.[48] Subsequent work by the Polymath project reduced this bound to 246, but the exact difference of 2 for infinitely many pairs is still unresolved.
Polignac's conjecture generalizes the twin prime case, asserting that for every positive even integer 2k, there are infinitely many prime pairs (p, p + 2k). Like its special cases, it remains unproven as of 2025, though bounded-gap results show that at least one even difference not exceeding 246 occurs for infinitely many prime pairs, and computational searches have identified numerous such pairs for various k up to large values.
These elementary conjectures continue to inspire research, with partial theorems like Chen's offering insights into prime distributions without resolving the infinitude questions.
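Small-scale verification of the Goldbach conjecture is straightforward, though it proves nothing about the general case. The sketch below (illustrative Python; the helper names and the bound LIMIT are choices of this example, not taken from the original text) finds one prime decomposition for every even number in a modest range.

    def primes_up_to(limit):
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for d in range(2, int(limit ** 0.5) + 1):
            if sieve[d]:
                for m in range(d * d, limit + 1, d):
                    sieve[m] = False
        return [n for n in range(2, limit + 1) if sieve[n]]

    LIMIT = 10_000
    PRIME_LIST = primes_up_to(LIMIT)
    PRIME_SET = set(PRIME_LIST)

    def goldbach_pair(even_n):
        # return one decomposition even_n = p + q with p, q prime, or None if none exists
        for p in PRIME_LIST:
            if p > even_n // 2:
                break
            if even_n - p in PRIME_SET:
                return p, even_n - p
        return None

    print(goldbach_pair(28))                                        # (5, 23)
    print(all(goldbach_pair(n) for n in range(4, LIMIT + 1, 2)))    # True for all even n up to LIMIT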
Analytic Properties
Advanced Proofs of Infinitude
One of the earliest analytic proofs of the infinitude of prime numbers was provided by Leonhard Euler in 1737, building on the elementary geometric argument attributed to Euclid around 300 BCE. While Euclid's proof relies on constructing a number larger than any finite set of primes to yield a new prime factor, Euler's approach uses infinite series and products to demonstrate that the primes cannot be finite in number. This method marks the inception of analytic number theory and leverages the divergence of the harmonic series to infer the unbounded nature of the primes.[49]
Euler worked with what is now called the Riemann zeta function, defined for real numbers s > 1 as the infinite sum \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}. This series converges for s > 1 but diverges as s \to 1^+, mirroring the behavior of the harmonic series \sum_{n=1}^\infty \frac{1}{n}, whose partial sums grow like \ln N + \gamma (where \gamma is the Euler-Mascheroni constant) and thus tend to infinity. Euler established that this sum equals an infinite product over all primes p: \zeta(s) = \prod_p \frac{1}{1 - p^{-s}}, where the product converges absolutely for s > 1 due to the unique factorization of integers into primes. The equality holds because expanding each geometric series factor \frac{1}{1 - p^{-s}} = \sum_{k=0}^\infty p^{-ks} and multiplying over primes generates each positive integer exactly once.[50]
To prove the infinitude of primes, consider the limit as s \to 1^+. The left side \zeta(s) diverges to infinity, as it approaches the divergent harmonic series. If there were only finitely many primes, say p_1, \dots, p_m, the right side would reduce to a finite product \prod_{i=1}^m \frac{1}{1 - p_i^{-s}}, which converges to a finite nonzero value even at s=1. This contradiction implies there must be infinitely many primes.
For a more quantitative insight, take the natural logarithm of the product: \log \zeta(s) = -\sum_p \log(1 - p^{-s}). Using the Taylor expansion \log(1 - x) = -\sum_{k=1}^\infty \frac{x^k}{k} for |x| < 1, the leading term yields \log \zeta(s) \sim \sum_p p^{-s}, since the higher powers p^{-ks} for k \geq 2 contribute only a bounded amount near s=1. Since \zeta(s) \sim \frac{1}{s-1} near s=1 (from the integral representation or partial summation), \log \zeta(s) \sim -\log(s-1) \to \infty. Thus, \sum_p p^{-1} diverges, confirming infinitely many primes and providing a stronger result than mere infinitude, as the sum's growth rate relates to prime density.[50][49]
This analytic confirmation via sums and products contrasts with Euclid's constructive method by embedding the primes within the global structure of the natural numbers through convergence properties, paving the way for deeper connections in number theory. Euler's argument, though informal in its original presentation, was later rigorized using Weierstrass products for entire functions.[51]
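The Euler product identity and the slow divergence of \sum_p 1/p can both be checked numerically, here at s = 2 where truncation errors are small. The sketch below is an illustrative Python computation (the truncation bound N is an arbitrary choice of this example), not part of the original argument.

    import math

    def primes_up_to(limit):
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for d in range(2, int(limit ** 0.5) + 1):
            if sieve[d]:
                for m in range(d * d, limit + 1, d):
                    sieve[m] = False
        return [n for n in range(2, limit + 1) if sieve[n]]

    N = 10**5
    primes = primes_up_to(N)

    s = 2.0
    zeta_sum = sum(n ** (-s) for n in range(1, N + 1))
    euler_product = 1.0
    for p in primes:
        euler_product *= 1.0 / (1.0 - p ** (-s))
    print(zeta_sum, euler_product, math.pi ** 2 / 6)   # all three agree to several digits

    # the sum of reciprocals of primes grows like log log x, so it diverges very slowly
    print(sum(1.0 / p for p in primes))                # roughly 2.7 for primes up to 10^5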
Prime Counting Function
The prime counting function, denoted \pi(x), counts the number of prime numbers less than or equal to a positive real number x. It is a step function that increases by 1 at each prime and remains constant in between.[52]
The Prime Number Theorem provides the asymptotic behavior of \pi(x), stating that \pi(x) \sim \frac{x}{\log x} as x \to \infty. This result, which quantifies the density of primes among the positive integers, was proved independently in 1896 by Jacques Hadamard and Charles Jean de la Vallée Poussin.[53][54][55] Their proofs established that the theorem follows from the non-vanishing of the Riemann zeta function \zeta(s) on the line \operatorname{Re}(s) = 1. Specifically, Hadamard and de la Vallée Poussin showed that \zeta(s) \neq 0 for \operatorname{Re}(s) = 1 using properties of the zeta function's analytic continuation and growth estimates, which imply bounds on the Chebyshev functions and, via Tauberian theorems, the desired asymptotic for \pi(x).[56]
A more precise approximation to \pi(x) is given by the offset logarithmic integral, defined as \operatorname{li}(x) = \int_2^x \frac{dt}{\log t}. The Prime Number Theorem implies \pi(x) \sim \operatorname{li}(x) as x \to \infty, and this integral outperforms the simpler \frac{x}{\log x} in capturing the distribution of primes.
Refinements to the error term in the Prime Number Theorem have been pursued since the original proofs. In 1936, Harald Cramér established an improved bound of the form \pi(x) = \operatorname{li}(x) + O\left(x \exp\left(-c \sqrt{\log x}\right)\right) for some constant c > 0. Recent computational and analytic work in the 2020s has further tightened explicit versions of such bounds, verifying them for extremely large x through high-precision calculations of \pi(x) and \operatorname{li}(x).[57][58]
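The approximations can be compared directly at a few checkpoints. The following sketch (illustrative Python; li is evaluated by a simple midpoint rule, an assumption of this example rather than anything from the original text) tabulates \pi(x), x/\log x, and \operatorname{li}(x).

    import math

    def prime_flags(limit):
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for d in range(2, int(limit ** 0.5) + 1):
            if sieve[d]:
                for m in range(d * d, limit + 1, d):
                    sieve[m] = False
        return sieve

    def li(x, steps=100_000):
        # offset logarithmic integral li(x) = integral from 2 to x of dt / log t (midpoint rule)
        h = (x - 2) / steps
        return sum(h / math.log(2 + (i + 0.5) * h) for i in range(steps))

    flags = prime_flags(10**6)
    for x in (10**3, 10**4, 10**5, 10**6):
        pi_x = sum(flags[2 : x + 1])
        print(x, pi_x, round(x / math.log(x)), round(li(x)))
    # e.g. pi(10^6) = 78498, while x/log x gives about 72382 and li(x) about 78628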
Primes in Arithmetic Progressions
In 1837, Peter Gustav Lejeune Dirichlet proved that if two positive integers a and d are coprime, then there are infinitely many prime numbers congruent to a modulo d.[59] This result generalizes Euclid's ancient proof of the infinitude of primes by showing that primes are infinite not only overall but also within specific arithmetic progressions where the common difference d and the residue a share no common factors.[60]
The proof relies on analytic methods involving Dirichlet L-functions, defined for a Dirichlet character \chi modulo d as L(s, \chi) = \sum_{n=1}^\infty \frac{\chi(n)}{n^s} for \operatorname{Re}(s) > 1, which admit Euler product representations L(s, \chi) = \prod_p (1 - \chi(p) p^{-s})^{-1}.[59] By orthogonality of characters, the sum over primes p \equiv a \pmod{d} of p^{-s} can be expressed in terms of these L-functions. Dirichlet showed that the principal L-function corresponds to the Riemann zeta function \zeta(s), which has a pole at s=1, while the non-principal L-functions extend holomorphically to \operatorname{Re}(s) > 0 and satisfy L(1, \chi) \neq 0 for all non-principal \chi.[60] This non-vanishing implies the logarithmic divergence of the relevant prime sum as s \to 1^+, establishing the infinitude of such primes.[59]
Under the prime number theorem for arithmetic progressions, the primes congruent to a modulo d have asymptotic density \frac{1}{\phi(d)} among all primes, where \phi is Euler's totient function counting integers up to d coprime to d.[61] For example, the primes congruent to 1 modulo 4 and those congruent to 3 modulo 4 each constitute half of all primes asymptotically, illustrating the equitable distribution across coprime residue classes.[61]
A key generalization is Linnik's theorem, which addresses the location of the smallest prime in such a progression. It states that for coprime a and d, the least prime p \equiv a \pmod{d} satisfies p \ll d^L for some absolute constant L > 0.[62] Originally proved in 1944 with a large L, modern refinements have reduced L to 5.[62] This bound quantifies how quickly primes appear in the progression, with implications for zero-free regions of L-functions.[62]
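The equidistribution across coprime residue classes can be observed empirically. The sketch below (illustrative Python, not part of the original text) counts primes up to 10^6 in the two reduced residue classes modulo 4.

    def primes_up_to(limit):
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for d in range(2, int(limit ** 0.5) + 1):
            if sieve[d]:
                for m in range(d * d, limit + 1, d):
                    sieve[m] = False
        return [n for n in range(2, limit + 1) if sieve[n]]

    counts = {1: 0, 3: 0}
    for p in primes_up_to(10**6):
        if p % 4 in counts:      # skips p = 2, the only prime not coprime to 4
            counts[p % 4] += 1
    print(counts)                # the two classes hold nearly equal shares of the odd primes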
Primes from Quadratic Polynomials
One notable example of a quadratic polynomial that generates many prime numbers is Euler's polynomial n^2 + n + 41, which produces prime values for all integers n from 0 to 39, yielding 40 primes in a row.[63] This polynomial, discovered by Leonhard Euler in 1772, remains the quadratic that generates the longest known sequence of primes for consecutive nonnegative integers, though it fails at n = 40, where the value 1681 = 41 \times 41 is composite, and it takes composite values for infinitely many larger n.[64]
The Bunyakovsky conjecture, proposed by Viktor Bunyakovsky in 1857, posits that an irreducible polynomial f(n) of degree greater than or equal to 1 with integer coefficients, positive leading coefficient, and such that the greatest common divisor of all its values f(1), f(2), \dots is 1, takes prime values for infinitely many positive integers n.[65] For quadratic polynomials satisfying these conditions—irreducibility over the integers, positive leading coefficient, and no fixed prime divisor across all values—the conjecture specifically predicts infinitely many primes.[65] Bunyakovsky's original formulation focused on quadratic forms but extends naturally to higher degrees, generalizing Dirichlet's theorem on primes in arithmetic progressions (which holds for linear polynomials of degree 1).[65]
No general proof exists for the Bunyakovsky conjecture when the degree exceeds 1, including for quadratics, though partial results provide asymptotic bounds under additional assumptions.[66] A key limitation arises from the possible existence of Siegel zeros—real zeros close to 1 in Dirichlet L-functions—which obstruct effective error terms in the prime number theorem for polynomials, preventing unconditional proofs of infinitude even for specific quadratics.[67] For instance, assuming the nonexistence of Siegel zeros (or the generalized Riemann hypothesis) allows for positive lower bounds on the number of primes generated by such quadratics up to x, but these remain conditional.[68]
A prominent open case is the infinitude of primes of the form n^2 + 1, where the polynomial satisfies Bunyakovsky's conditions but no proof exists despite numerical evidence of infinitely many such primes.[69] This problem, dating back to Euler's considerations in 1752, illustrates the rarity of such primes compared to linear forms, as the number of them up to x is heuristically on the order of \sqrt{x}/\log x, yet their infinitude remains unproven.[69]
Analytic progress toward the conjecture for quadratics relies on the Hardy-Littlewood circle method, which yields conjectural asymptotic formulas for the count of primes generated by f(n) up to x.[70] The Bateman-Horn conjecture, building on this method, predicts that the number of n \leq x for which an irreducible quadratic f(n) is prime is asymptotically C \frac{x}{\log x}, where C is a positive constant depending on f via a product over primes involving the local densities of prime values.[66] This heuristic, derived from singular series in the circle method, aligns with Euler's polynomial producing roughly the expected number of primes in its initial streak and supports infinitude under the conjecture's assumptions.[70]
Riemann Zeta Function and Hypothesis
The Riemann zeta function \zeta(s) is initially defined for complex numbers s with real part \operatorname{Re}(s) > 1 by the infinite series \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}. This representation converges absolutely in that half-plane and admits an Euler product expansion \zeta(s) = \prod_p \left(1 - p^{-s}\right)^{-1}, where the product runs over all prime numbers p. The Euler product highlights the intimate connection between the zeta function and the primes, as it encodes the fundamental theorem of arithmetic. The function admits a meromorphic continuation to the entire complex plane, with a single simple pole at s=1 and no other singularities.[71][72]
In his seminal 1859 paper "On the Number of Primes Less Than a Given Magnitude," Bernhard Riemann introduced an analytic continuation of \zeta(s) and conjectured that all non-trivial zeros—those not at negative even integers—lie on the critical line \operatorname{Re}(s) = 1/2. This statement, known as the Riemann hypothesis, posits that if \zeta(\rho) = 0 for a non-trivial zero \rho, then \operatorname{Re}(\rho) = 1/2. The hypothesis remains one of the Clay Mathematics Institute's Millennium Prize Problems, underscoring its centrality to number theory. The prime number theorem, which describes the asymptotic density of primes, follows from non-vanishing of \zeta(s) on \operatorname{Re}(s)=1, but the hypothesis refines this further.[73][74]
The Riemann hypothesis carries significant implications for prime distribution, particularly sharpening the error term in the prime number theorem. Assuming the hypothesis, the prime-counting function satisfies \pi(x) = \operatorname{li}(x) + O(\sqrt{x} \log x), where \operatorname{li}(x) is the logarithmic integral; this bound, established by von Koch in 1901, is the strongest known conditional result and reflects the hypothesis's power in controlling oscillations from the zeros. An explicit link between the zeros and primes appears in von Mangoldt's formula for the Chebyshev function \psi(x) = \sum_{n \leq x} \Lambda(n), where \Lambda is the von Mangoldt function: \psi(x) = x - \sum_\rho \frac{x^\rho}{\rho} - \log(2\pi) - \frac{1}{2} \log\left(1 - \frac{1}{x^2}\right), with the sum over all non-trivial zeros \rho. This formula reveals how deviations of \psi(x) from x are governed by the zeros, providing a direct analytic bridge to prime powers.[74][75]
As of 2025, the Riemann hypothesis remains unproven despite extensive efforts. Computational checks have verified that the first 10^{13} non-trivial zeros lie on the critical line, supporting the conjecture empirically up to immense heights. Recent theoretical advances include refined zero-free regions, such as the 2024 result establishing no zeros in \sigma \geq 1 - \frac{1}{57.54 (\log |t|)^{2/3} (\log \log |t|)^{1/3}} for |t| \geq 3, and improvements to Korobov-Vinogradov-type bounds, enhancing understanding of the zeta function's behavior near the critical line.[76][77][78]
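The Chebyshev function \psi(x) can be computed directly from its definition as a sum of \log p over prime powers p^k \leq x, making the statement that \psi(x) stays close to x concrete without evaluating the zero sum. The sketch below is an illustrative Python computation, not part of the original text.

    import math

    def chebyshev_psi(x):
        # psi(x) = sum of log p over prime powers p^k <= x
        sieve = [True] * (x + 1)
        sieve[0] = sieve[1] = False
        for d in range(2, int(x ** 0.5) + 1):
            if sieve[d]:
                for m in range(d * d, x + 1, d):
                    sieve[m] = False
        total = 0.0
        for p in range(2, x + 1):
            if sieve[p]:
                pk = p
                while pk <= x:
                    total += math.log(p)
                    pk *= p
        return total

    for x in (10**3, 10**4, 10**5):
        print(x, round(chebyshev_psi(x), 1))   # psi(x) remains close to x, as the prime number theorem predicts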
Algebraic Structures
Modular Arithmetic and Finite Fields
Modular arithmetic, the study of integers modulo a fixed integer n, reveals profound connections to prime numbers when n = p is prime. In this case, the ring \mathbb{Z}/p\mathbb{Z} of residue classes modulo p forms a field, denoted \mathrm{GF}(p), where every nonzero element has a multiplicative inverse. This structure arises because p being prime ensures that no nonzero residue is a zero divisor, allowing division (except by zero) within the system. The characteristic of \mathrm{GF}(p) is precisely p, the smallest positive integer such that p \cdot 1 = 0 in the field, and this prime order underpins the field's finite nature with exactly p elements.[79]
A cornerstone property in this setting is Fermat's Little Theorem, which states that if p is prime and a is an integer not divisible by p, then a^{p-1} \equiv 1 \pmod{p}. This theorem, first stated by Pierre de Fermat in 1640 and proved by Leonhard Euler in 1736, highlights the cyclic multiplicative group (\mathbb{Z}/p\mathbb{Z})^\times, which has order p-1. An equivalent form extends to all integers a: a^p \equiv a \pmod{p}, holding even when p divides a since both sides are 0 \pmod{p}. These congruences exploit the field's properties to simplify exponentiations and demonstrate the theorem's utility in algebraic manipulations over finite fields.[80][81]
Wilson's Theorem provides another characterization: for a prime p, (p-1)! \equiv -1 \pmod{p}. Proposed by John Wilson in the 18th century and proved by Joseph-Louis Lagrange in 1773, it follows from pairing inverses in the multiplicative group (\mathbb{Z}/p\mathbb{Z})^\times, leaving 1 and p-1 \equiv -1 unpaired. This factorial congruence uniquely identifies primes among integers greater than 1, as the converse also holds.[82][83]
These theorems underpin basic primality tests in modular arithmetic. For instance, checking whether a^{n-1} \equiv 1 \pmod{n} for bases a coprime to n can indicate primality via Fermat's Little Theorem, though composite numbers may pass the test for particular bases a (such composites are called Fermat pseudoprimes). Similarly, verifying (n-1)! \equiv -1 \pmod{n} directly tests the condition from Wilson's Theorem, offering a deterministic but computationally intensive check for small n. Such applications leverage the algebraic structure of finite fields without delving into probabilistic or advanced sieving methods.[84]
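Both congruences translate into simple computational checks. The sketch below (illustrative Python; the base a = 2 and the test values are arbitrary choices of this example) implements a single-base Fermat check with fast modular exponentiation and a direct, slow Wilson check; the composite 341 = 11 \times 31 passes the Fermat check for base 2 but fails the Wilson check.

    def fermat_check(n, a=2):
        # necessary condition for primality when gcd(a, n) = 1: a^(n-1) ≡ 1 (mod n)
        return n > 1 and pow(a, n - 1, n) == 1

    def wilson_check(n):
        # (n-1)! ≡ -1 (mod n) holds exactly when n is prime; deterministic but O(n) work
        if n < 2:
            return False
        fact = 1
        for k in range(2, n):
            fact = (fact * k) % n
        return fact == n - 1

    print(fermat_check(97), wilson_check(97))    # True True: 97 is prime
    print(fermat_check(341), wilson_check(341))  # True False: 341 is a base-2 Fermat pseudoprime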
p-adic Numbers
The p-adic numbers arise in number theory as a completion of the rational numbers \mathbb{Q} with respect to a metric defined using a prime p, providing a framework where primes play a central role in the topology and algebraic structure. For a fixed prime p, the p-adic valuation v_p(n) of a nonzero integer n is defined as the highest power of p that divides n, i.e., v_p(n) = k where p^k divides n but p^{k+1} does not, and v_p(0) = +\infty.[85] This valuation extends multiplicatively to the rationals: for x = p^k \cdot \frac{a}{b} with a, b integers not divisible by p, v_p(x) = k.[86] The associated p-adic absolute value is then given by |x|_p = p^{-v_p(x)} for x \neq 0, and |0|_p = 0, inducing a non-Archimedean metric on \mathbb{Q} where distances emphasize divisibility by powers of p.[87]
The p-adic numbers \mathbb{Q}_p are the completion of \mathbb{Q} under this metric, while the p-adic integers \mathbb{Z}_p consist of those elements with |x|_p \leq 1. One standard construction of \mathbb{Z}_p views its elements as formal power series \sum_{k=0}^\infty a_k p^k, where each a_k is an integer satisfying 0 \leq a_k < p.[87] Addition and multiplication are defined componentwise with carry-over, analogous to base-p expansions but extending infinitely to the left.[88]
In the ring \mathbb{Z}_p, which is an integral domain with no zero divisors, the prime p generates the unique maximal ideal (p), making \mathbb{Z}_p a discrete valuation ring where every nonzero element factors uniquely into units and powers of p. Elements with v_p(x) = 0 are units, while those with v_p(x) > 0 are non-units divisible by p, and there are no other prime elements beyond associates of p. This structure highlights the primacy of the fixed prime p in the p-adic setting, contrasting with the integers where multiple primes exist.
A key tool connecting modular arithmetic modulo p to the p-adics is Hensel's lemma, which allows lifting solutions of polynomial equations from \mathbb{Z}/p\mathbb{Z} to solutions in \mathbb{Z}_p under suitable conditions. Specifically, if f(x) \in \mathbb{Z}_p[x] satisfies f(a) \equiv 0 \pmod{p} and f'(a) \not\equiv 0 \pmod{p} for some a \in \mathbb{Z}_p, then there exists a unique b \in \mathbb{Z}_p such that f(b) = 0 and b \equiv a \pmod{p}. A stronger version applies even when f'(a) \equiv 0 \pmod{p}, provided |f(a)|_p < |f'(a)|_p^2, in which case a root can still be lifted. This lemma, originally due to Kurt Hensel in 1897, facilitates solving Diophantine equations p-adically by iteratively refining modular solutions.[89][90]
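The valuation and the lifting process admit short computational illustrations. The sketch below (illustrative Python; it uses a Newton-style update and requires Python 3.8+ for modular inverses via pow) computes v_p for a small integer and lifts the solution x \equiv 3 of x^2 \equiv 2 \pmod{7} to a solution modulo 7^6, the beginning of a square root of 2 in \mathbb{Z}_7.

    def v_p(n, p):
        # p-adic valuation of a nonzero integer n
        k = 0
        while n % p == 0:
            n //= p
            k += 1
        return k

    def hensel_lift(f, df, a, p, iterations):
        # assumes f(a) ≡ 0 (mod p) and df(a) not ≡ 0 (mod p); raises the modulus one power of p per step
        modulus = p
        for _ in range(iterations):
            modulus *= p
            inv = pow(df(a), -1, modulus)        # modular inverse (Python 3.8+)
            a = (a - f(a) * inv) % modulus       # Newton-style correction
        return a, modulus

    print(v_p(360, 2), v_p(360, 3), v_p(360, 5))   # 3 2 1, since 360 = 2^3 * 3^2 * 5

    f = lambda x: x * x - 2
    df = lambda x: 2 * x
    root, modulus = hensel_lift(f, df, 3, 7, 5)            # start from 3^2 ≡ 2 (mod 7)
    print(root, modulus == 7**6, f(root) % modulus == 0)   # a square root of 2 modulo 7^6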
Prime Elements in Rings
In a commutative ring R with identity, a prime element is defined as a nonzero, non-unit element \pi \in R such that whenever \pi divides a product ab for a, b \in R, then \pi divides a or \pi divides b.[91] This generalizes the notion of prime numbers from the integers \mathbb{Z} to broader algebraic structures, where the divisibility condition captures the "prime-like" behavior of not factoring non-trivially without dividing one of the factors.
Examples of prime elements abound in specific rings. In the ring of integers \mathbb{Z}, the prime elements are precisely the usual prime numbers (up to units \pm 1). In the Gaussian integers \mathbb{Z}[i] = \{a + bi \mid a, b \in \mathbb{Z}\}, prime elements include 1 + i (with norm 2), ordinary primes p \equiv 3 \pmod{4}, which remain prime in \mathbb{Z}[i], and factors of primes p \equiv 1 \pmod{4} such as 2 + i and 2 - i (norm 5).[92] These Gaussian primes illustrate how prime elements can arise in quadratic integer rings beyond \mathbb{Z}, enabling unique factorization up to units.
Prime elements are always irreducible, meaning they cannot be expressed as a product of two non-unit elements. However, the converse holds only in certain domains: in a unique factorization domain (UFD), every irreducible element is prime, allowing factorization into irreducibles (equivalently, primes) to be unique up to units and order.[93] For instance, the ring \mathbb{Z} is a UFD where this equivalence applies, as established by the fundamental theorem of arithmetic. Moreover, if \pi is a prime element in an integral domain, the principal ideal (\pi) it generates is a prime ideal, linking element-level primality to ideal structure without requiring full ideal theory.[94]
Principal ideal domains (PIDs) provide a key distinction, as every PID is a UFD, ensuring that prime elements coincide with irreducibles and every nonzero prime ideal is principal, generated by a prime element.[95] In contrast, not all UFDs are PIDs—for example, the polynomial ring \mathbb{Z}[x] is a UFD but not a PID, since the ideal (2, x) is not principal—yet both retain the prime-irreducible equivalence. This structure highlights how primality in rings extends the integer case while introducing nuances in factorization properties.
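The behavior of ordinary primes inside \mathbb{Z}[i] can be read off from norms, using the classical fact that a prime p \equiv 1 \pmod{4} is a sum of two squares. The sketch below (illustrative Python, a brute-force search rather than any standard routine) classifies the first few primes as ramified, inert, or split.

    def two_square_decomposition(p):
        # for p ≡ 1 (mod 4), find a, b with a^2 + b^2 = p, so that p = (a + bi)(a - bi) in Z[i]
        a = 1
        while a * a <= p:
            b = int((p - a * a) ** 0.5)
            if a * a + b * b == p:
                return a, b
            a += 1
        return None

    for p in (2, 3, 5, 7, 11, 13, 17, 19):
        if p == 2:
            print(p, "ramified: 2 = -i * (1 + i)^2")
        elif p % 4 == 3:
            print(p, "inert: remains a Gaussian prime")
        else:
            a, b = two_square_decomposition(p)
            print(p, f"splits: {p} = ({a} + {b}i)({a} - {b}i)")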
Prime Ideals
In ring theory, a prime ideal of a commutative ring R with identity is a proper ideal P \neq R such that whenever the product ab \in P for elements a, b \in R, then either a \in P or b \in P.[96] This property generalizes the notion of primality from integers to ideals, capturing a form of "irreducibility" in the multiplicative structure of the ring. An equivalent characterization is that P is prime if and only if the quotient ring R/P is an integral domain, meaning it has no zero divisors.[97]
In the specific case of the ring of integers \mathbb{Z}, the prime ideals are precisely the zero ideal (0) and the principal ideals (p) generated by prime numbers p.[98] Here, (0) corresponds to the generic prime in the spectrum, while (p) reflects the usual primes, with \mathbb{Z}/(p) \cong \mathbb{Z}/p\mathbb{Z} being a field, hence an integral domain. Prime elements in integral domains like \mathbb{Z} generate principal prime ideals.[99]
The set of all prime ideals of R, denoted \operatorname{Spec}(R), forms the prime spectrum of the ring, equipped with the Zariski topology where closed sets are of the form V(I) = \{\mathfrak{p} \in \operatorname{Spec}(R) \mid I \subseteq \mathfrak{p}\} for ideals I \subseteq R.[100] This topological space encodes the structure of R and serves as the foundation for schemes in algebraic geometry. Hilbert's Nullstellensatz links \operatorname{Spec}(R) to algebraic varieties over algebraically closed fields, stating that for an ideal I in k[x_1, \dots, x_n] with k algebraically closed, the radical \sqrt{I} consists of all polynomials vanishing on the variety V(I), establishing a bijection between radical ideals and affine varieties.[101]
Primes in Group Theory
In group theory, prime numbers are central to analyzing the structure of finite groups, especially through theorems that link the prime factors of a group's order to the existence of specific subgroups and elements. These results reveal how primes dictate the presence of cyclic components and constrain overall group properties, often leveraging the fact that orders modulo a prime influence subgroup indices.
A foundational result is Cauchy's theorem, which states that if a prime p divides the order |G| of a finite group G, then G contains an element of order p.[102] Equivalently, G has a cyclic subgroup of order p.[103] This theorem, originally established by Augustin-Louis Cauchy, provides a partial converse to Lagrange's theorem by guaranteeing elements whose orders match prime divisors of |G|.[104]
The Sylow theorems generalize Cauchy's result to powers of primes, offering powerful tools for decomposing groups. For a finite group G with |G| = p^k m where p is prime and p does not divide m, a Sylow p-subgroup of G is a subgroup of order p^k. The first Sylow theorem asserts that such subgroups exist and that every p-subgroup is contained in some Sylow p-subgroup.[105] The second theorem states that all Sylow p-subgroups are conjugate in G. The third theorem specifies that the number n_p of distinct Sylow p-subgroups satisfies n_p \equiv 1 \pmod{p} and n_p divides m = |G|/p^k.[106] These theorems, due to Peter Ludvig Sylow in 1872, enable detailed classification of groups by controlling the distribution of prime-power subgroups.[105]
Prime orders yield particularly simple structures. Every group of prime order p is cyclic, isomorphic to the additive group \mathbb{Z}/p\mathbb{Z}, with p-1 generators (all non-identity elements).[107] Such groups are also simple, as their only subgroups are the trivial subgroup and the group itself, with no nontrivial normal subgroups.[108] Cyclic groups of prime order exemplify abelian simple groups, and they appear as building blocks in the classification of finite simple groups.
Burnside's p^a q^b-theorem further illustrates primes' role in solvability: if |G| = p^a q^b for distinct primes p and q, then G is solvable.[109] This 1904 result by William Burnside shows that groups whose orders involve only two distinct primes cannot be nonabelian simple groups, a conclusion originally obtained through the character theory of finite groups.[110]
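The arithmetic constraints of the third Sylow theorem can be enumerated directly. The sketch below (illustrative Python, not from the original text) lists the admissible values of n_p for a given group order; for |G| = 15 both counts are forced to equal 1, which is the standard first step in showing that every group of order 15 is cyclic.

    def sylow_counts(order, p):
        # admissible n_p: divides m = order / p^k and satisfies n_p ≡ 1 (mod p)
        m = order
        while m % p == 0:
            m //= p
        return [n for n in range(1, m + 1) if m % n == 0 and n % p == 1]

    print(sylow_counts(12, 2))                        # [1, 3]
    print(sylow_counts(12, 3))                        # [1, 4]
    print(sylow_counts(15, 3), sylow_counts(15, 5))   # [1] [1]: both Sylow subgroups are normal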
Computational Methods
Trial Division
Trial division is a fundamental algorithm for determining the primality of a positive integer n > 1 or for finding its prime factors by systematically testing potential divisors. The method relies on the basic property that if n is composite, it must have a divisor d satisfying 1 < d \leq \sqrt{n}. Thus, it suffices to check divisibility only up to \lfloor \sqrt{n} \rfloor, as any larger divisor would pair with a smaller one already examined.[111]
The algorithm proceeds as follows: First, if n = 2, it is prime; if n is even and greater than 2, it is composite. For odd n > 2, check divisibility by 2 separately to handle the even case efficiently. Then, test successive odd integers d = 3, 5, 7, \dots up to \lfloor \sqrt{n} \rfloor. If any such d divides n (i.e., n \mod d = 0), then n is composite, and d is a factor. If no such d is found, n is prime. This optimization reduces the number of trials by approximately half compared to checking all integers from 2 onward.[112][111] Formally, n is composite if there exists an integer d such that 1 < d < n and d \mid n; otherwise, for n > 1, it is prime.
In pseudocode, the optimized trial division for primality can be expressed as:
    function is_prime(n):
        if n ≤ 1: return false
        if n = 2: return true
        if n is even: return false
        for d = 3 to floor(sqrt(n)) step 2:
            if n mod d = 0: return false
        return true
This implementation confirms primality deterministically for small to moderate n.[1][113]
The worst-case time complexity of trial division is O(\sqrt{n}), occurring when n is prime, as all potential divisors up to \sqrt{n} must be tested. This makes it efficient for small n but impractical for large numbers, where more advanced methods are required.[113]
Historically, trial division has served as the basis for early primality tests and factorizations since antiquity, with roots in ancient Greek mathematics. It was employed by mathematicians like Leonhard Euler in the 18th century to verify large primes, such as confirming the primality of 2^{31} - 1 in 1772 through exhaustive division. The method's simplicity made it a cornerstone for compiling early tables of primes before the development of sieving techniques.[113]