
Computational number theory

Computational number theory, also known as algorithmic number theory, is a branch of mathematics that focuses on the design, analysis, and implementation of efficient algorithms for solving problems in classical number theory, particularly those involving integers, primes, and Diophantine equations. This field bridges pure mathematics and computer science by emphasizing computational methods to tackle challenges that are theoretically profound but practically demanding, such as determining whether a large integer is prime or factoring it into its prime components. Key areas include the study of unique factorization, congruences, the distribution of primes (as quantified by theorems like Chebyshev's, stating that the prime-counting function π(x) is Θ(x / log x)), and quadratic reciprocity, all approached through algorithmic lenses that seek polynomial-time or subexponential-time solutions where possible.

Probabilistic techniques play a central role, enabling practical implementations for enormous numbers; for instance, the Miller-Rabin primality test offers a polynomial-time probabilistic verification with extremely low error rates (e.g., less than 2^{-100} with sufficient iterations), while methods like the quadratic sieve provide subexponential-time factorization essential for cryptographic security assessments. Beyond foundational algorithms, computational number theory extends to abstract algebraic structures, such as finite fields and polynomial arithmetic, which underpin advanced applications in coding theory (e.g., Reed-Solomon error-correcting codes) and cryptography (e.g., the RSA algorithm relying on the difficulty of large-integer factorization, or Diffie-Hellman key exchange based on discrete logarithms in cyclic groups). It also incorporates discrete probability to analyze randomized algorithms, distinguishing between Monte Carlo methods (which run within a fixed time bound but may err with small probability) and Las Vegas algorithms (which always return correct answers but have randomized running time). The field's development has been driven by the need for efficient computations in modern computing, with landmark results like the AKS primality test providing deterministic polynomial-time verification, though probabilistic variants remain dominant in practice due to superior efficiency. Overall, computational number theory not only advances theoretical understanding but also enables real-world technologies by making otherwise intractable problems computationally feasible.

Overview

Definition and Scope

Computational number theory is the branch of number theory that focuses on the development and analysis of algorithms for solving mathematical problems involving integers and related structures, including primality testing, integer factorization, and Diophantine equations. This field emphasizes computational efficiency, particularly for operations with very large numbers, where naive methods become infeasible due to exponential growth in time complexity. Unlike pure number theory, which prioritizes theoretical proofs and properties, computational number theory seeks practical implementations that leverage computer resources to handle real-world scales, such as numbers with thousands of digits.

The scope of computational number theory extends to algorithmic complexity, efficient arithmetic algorithms for large integers, and the intersection with computer science, including data structures for multiprecision arithmetic and parallel computing techniques. It encompasses computations in algebraic structures such as rings and fields, where operations like polynomial factorization over finite fields or class group calculations in number fields are central. For instance, modular arithmetic serves as a foundational tool for many algorithms in this domain. The field also addresses Diophantine approximation and analytic problems, such as estimating prime distributions, through heuristic and exact methods. The discipline emerged from the practical need to perform computations that underpin theoretical results, such as verifying instances of long-standing conjectures through exhaustive calculation in early algorithmic explorations.

Key concepts include the distinction between deterministic algorithms, which guarantee correct outputs without randomness, and probabilistic algorithms, which introduce controlled error probabilities to achieve faster runtimes, often in expected polynomial time. Problems like primality testing exemplify this, as they fall into complexity classes solvable in polynomial time, contrasting with harder tasks like integer factorization that resist such efficiency.

Importance and Applications

Computational number theory serves as a vital bridge between pure mathematics and applied computer science, enabling the development of algorithms that provide practical solutions to theoretically intractable problems in number theory. By leveraging computational power, it transforms abstract concepts like primality and factorization into efficient tools essential for secure communication systems, such as public-key encryption, and for scientific computing tasks involving large-scale numerical simulations. The field underpins the security of digital infrastructures by exploiting the computational hardness of problems like integer factorization, which remains infeasible for large numbers on classical computers, thereby ensuring data confidentiality and integrity in networked environments.

One of the broad impacts of computational number theory lies in its ability to verify mathematical conjectures through exhaustive computation, far beyond manual capabilities. For instance, the even Goldbach conjecture—that every even integer greater than 2 can be expressed as the sum of two primes—has been empirically verified up to 4 × 10¹⁸ using optimized sieving algorithms and distributed computing resources requiring approximately 770 CPU-years. Additionally, it facilitates efficient handling of arbitrarily large integers, supporting computations in areas like symbolic algebra systems where precision exceeds hardware limits, thus advancing both theoretical insights and practical implementations.

In terms of applications, computational number theory forms the foundation for secure communications through cryptographic protocols that rely on number-theoretic primitives, protecting global data exchanges in sectors such as finance and e-commerce. It also contributes to error-correcting codes, such as Reed-Solomon codes over finite fields, which ensure reliable data transmission in storage devices and telecommunications by detecting and correcting errors using algebraic properties of polynomials. Furthermore, it supports pseudorandom number generation via methods like linear congruential generators grounded in modular arithmetic, providing statistically unbiased sequences critical for simulations, secure key generation, and Monte Carlo methods. These applications drive significant economic value in finance and technology sectors, where cryptographic systems safeguard trillions in daily transactions and enable innovations such as blockchain-based systems, contributing to a global cybersecurity market valued at $218.98 billion as of 2025.

A key challenge in computational number theory is the inherent trade-off between computational speed and security levels, particularly in applications where performance constraints demand efficient algorithms without compromising cryptographic strength. For example, larger key sizes enhance security against brute-force attacks but increase processing overhead, necessitating optimized implementations to balance performance in high-stakes environments like financial trading systems. This tension underscores the ongoing need for algorithmic advancements to meet evolving demands in secure, time-sensitive computations.

History

Early Developments

The foundations of computational number theory emerged in antiquity with algorithmic approaches to fundamental arithmetic problems. Around 300 BCE, Euclid described in his Elements a procedure for computing the greatest common divisor (GCD) of two integers through successive divisions and remainders, laying the groundwork for efficient integer operations that would influence later computational methods. This represented one of the earliest systematic techniques for numerical computation in number theory. Similarly, Eratosthenes, around 240 BCE, developed the sieve method to identify all primes up to a specified limit by iteratively eliminating multiples of each prime starting from 2, providing a foundational algorithm for prime generation that relied on manual enumeration but demonstrated a scalable, systematic procedure.

During the 17th and 18th centuries, advances by Fermat and Euler further propelled computational aspects of number theory. Pierre de Fermat, in 1640, formulated his little theorem, asserting that if p is prime and a is not divisible by p, then a^{p-1} \equiv 1 \pmod{p}, which enabled practical computations for verifying primality and solving congruences modulo primes. Leonhard Euler extended this in the 18th century with his theorem, generalizing to coprime integers a and m that a^{\phi(m)} \equiv 1 \pmod{m}, where \phi is Euler's totient function, thus broadening the toolkit for modular calculations. These results shifted focus toward computable properties of integers under congruence relations. In the early 19th century, Carl Friedrich Gauss's Disquisitiones Arithmeticae (1801) formalized modular arithmetic as a core framework, introducing algorithmic solutions for quadratic congruences and binary quadratic forms that emphasized systematic computation over ad hoc methods.

The development of continued fractions also marked a key milestone, originating from the Euclidean algorithm and evolving into a tool for rational approximations of irrationals. In the 17th century, John Wallis coined the term "continued fraction" while studying expansions for numbers like \pi, building on earlier work by William Brouncker. Leonhard Euler advanced the theory in the 18th century by deriving general formulas for continued fraction expansions of e and related constants, facilitating better Diophantine approximations. Joseph-Louis Lagrange, late in the 18th century, applied continued fractions to solve Pell's equation completely, highlighting their utility in finding integer solutions to quadratic Diophantine equations through iterative computations.

In the early 20th century, mechanical aids began augmenting manual computations for tasks like factorization and prime tabulation. Derrick Norman Lehmer published extensive factor tables up to 10 million in 1909 and prime lists to over 10 million in 1914, employing mechanical sieving devices and desk calculators to accelerate trial divisions and eliminate composites efficiently. These efforts represented a transition from the purely hand-computed tables of earlier centuries to semi-automated methods using gears and levers for arithmetic operations. G. H. Hardy and E. M. Wright's An Introduction to the Theory of Numbers (1938) underscored the computable nature of number-theoretic results, integrating algorithmic discussions of primes, congruences, and approximations to bridge classical theory with practical verification techniques.

Modern Advances

The advent of electronic computers in the mid-20th century revolutionized computational number theory, enabling the practical implementation of algorithms for large-scale integer arithmetic and prime searches that were previously infeasible by hand. Pioneers like Derrick Lehmer utilized early machines such as the SWAC to compute extensive tables of primes and factorizations, laying the groundwork for algorithmic advancements. In 1974, R. Sherman Lehman introduced a deterministic factorization method extending Fermat's difference-of-squares approach, achieving a running time of approximately O(N^{1/3}), which marked a significant improvement over trial division and influenced subsequent techniques. The decade closed with the Solovay-Strassen probabilistic primality test in 1977, a Monte Carlo algorithm that verifies primality with high probability using Euler's criterion and Jacobi symbols, running in O((\log n)^3) time per trial with error probability less than 2^{-k} after k iterations.

The 1980s and 1990s saw intensified research driven by cryptographic needs, beginning with the RSA public-key cryptosystem proposed in 1977, which relied on the presumed hardness of integer factorization and spurred developments in efficient factoring and primality testing algorithms. To benchmark progress, RSA Laboratories launched the RSA Factoring Challenge in 1991, offering prizes for factoring semiprime moduli up to 1024 bits, which encouraged distributed efforts across global networks and highlighted the scalability of methods like the number field sieve. Refinements to the Miller-Rabin primality test, originally introduced in 1976 and randomized in 1980, included deterministic variants in the following decades that guaranteed correctness for integers below specific bounds by selecting fixed witness sets. These eras built toward the deterministic AKS primality test announced in 2002, which proved that PRIMES is in P with a polynomial-time algorithm, subsequently improved to roughly O((\log n)^{6}), using properties of cyclotomic polynomials, though practical implementations favored probabilistic methods due to speed.

In the 2000s and beyond, quantum computing posed existential threats to classical methods, with Shor's 1994 algorithm enabling efficient factorization on a quantum computer via period-finding in O((\log n)^3) time; experimental demonstrations on small instances, such as factoring 15 on a 7-qubit NMR device in 2001 and 21 on a photonic system in 2012, underscored the urgency for post-quantum alternatives. Advances in lattice-based methods, including those rooted in the learning with errors (LWE) problem, gained prominence for their resistance to quantum attacks, with key developments like the NTRU scheme refined in the early 2000s and the introduction of ring-LWE in 2010 enabling efficient encryption and digital signatures. Integration of machine learning has emerged for pattern detection in primes, such as neural networks classifying primality with over 99% accuracy on datasets up to 10^{12} by learning residue patterns, aiding heuristic searches in computational number theory.

Milestones include refinements to the Miller-Rabin test for larger numbers, such as deterministic witness sets covering all integers up to 2^{64} identified by 2015, and the factorization of the 768-bit RSA-768 modulus in 2009 using the general number field sieve on a distributed cluster, taking approximately 2 years of computation, equivalent to about 2000 AMD Opteron core-years. Subsequent records include the factorization of the 829-bit RSA-250 semiprime in 2020 using the general number field sieve. In response to quantum threats, NIST standardized post-quantum algorithms in 2024, including the lattice-based ML-KEM (FIPS 203) and ML-DSA (FIPS 204) and the hash-based SLH-DSA (FIPS 205). As of November 2025, no larger RSA challenges have been publicly factored classically, with RSA-2048 remaining secure against classical attacks.

Fundamental Concepts and Algorithms

Basic Arithmetic Operations

In computational number theory, basic arithmetic operations on integers form the foundation for more advanced algorithms, particularly when dealing with arbitrary-precision integers that exceed the bounds of standard machine word sizes. Arbitrary-precision arithmetic represents large integers as arrays of digits in a chosen base b = 2^k, where each digit is stored as a machine integer, allowing for scalable computations without fixed limits. The bit-length n of an integer significantly influences operation efficiency, as algorithms must process O(n / \log b) digits, with carries or borrows propagating across the representation to ensure correctness.

Addition and subtraction of large integers typically employ schoolbook methods, which process digits from least to most significant, incorporating carries for addition or borrows for subtraction. For two n-bit integers, these operations achieve O(n) time complexity, as each digit operation requires constant time plus carry propagation, which in the worst case scans the entire length. For example, adding 178 + 456 yields 634, with a carry generated at the units place and propagated leftward, illustrating how overflow bits are managed digit by digit. In hardware-optimized implementations for very large integers, carry-save adders enhance parallelism by adding three operands at once to produce a partial-sum vector and a separate carry vector, deferring carry propagation and enabling tree-like reduction for multi-operand sums; a single n-bit carry-save stage takes constant time, making it ideal for intermediate steps in broader computations.

Multiplication of large integers begins with the schoolbook approach, which computes partial products for each digit pair and accumulates them with shifts, resulting in O(n^2) time for n-bit inputs due to the quadratic number of operations. The Karatsuba algorithm improves this by recursively splitting the operands as a = a_1 2^{k} + a_0 and b = b_1 2^{k} + b_0 and reducing the number of sub-multiplications from four to three via the identity ab = a_1 b_1 2^{2k} + \left[(a_0 + a_1)(b_0 + b_1) - a_1 b_1 - a_0 b_0\right] 2^{k} + a_0 b_0, yielding O(n^{\log_2 3}) time where \log_2 3 \approx 1.585; for instance, multiplying 123 × 456 after splitting each operand into high and low halves requires only three smaller multiplications instead of four. For extremely large integers, fast Fourier transform (FFT)-based methods, such as the seminal Schönhage-Strassen algorithm, further optimize by converting multiplication to convolution via FFTs over suitable rings, achieving O(n \log n \log \log n) through chunked representations and cyclic convolutions. Recent advancements have refined this to O(n \log n) using multidimensional DFTs and Gaussian resampling, attaining the long-conjectured optimal asymptotic bound.

Division of large integers often relies on approximating the reciprocal of the divisor using Newton's method, which iteratively refines an initial estimate of 1/d via x_{i+1} = x_i (2 - d x_i) with quadratic convergence, enabling the quotient to be obtained by multiplying the reciprocal by the dividend; this avoids direct long division and suits arbitrary-precision contexts, with complexity dominated by a logarithmic number of multiplications. For example, dividing 1000 by 7 starts with an initial reciprocal guess of 0.142857 and converges rapidly to the quotient 142 with remainder 6. In modular contexts, Barrett reduction enhances efficiency by precomputing \mu = \lfloor b^{2k} / m \rfloor for modulus m and base b, estimating the quotient as \hat{q} = \lfloor \lfloor r / b^{k-1} \rfloor \cdot \mu / b^{k+1} \rfloor to compute r \bmod m using only multiplications and subtractions instead of division, at the cost of a few large multiplications per reduction.
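The following Python sketch illustrates the Barrett reduction idea just described; the word size, helper names, and test values are illustrative assumptions rather than a prescribed implementation.

def barrett_setup(m, w=32):
    # Precompute mu = floor(b**(2k) / m) for base b = 2**w, where k is the
    # number of base-b digits of the modulus m.
    k = (m.bit_length() + w - 1) // w
    b = 1 << w
    mu = b ** (2 * k) // m
    return b, k, mu

def barrett_reduce(r, m, b, k, mu):
    # Compute r mod m for 0 <= r < b**(2k) using multiplications and
    # subtractions only; the estimated quotient is off by at most 2.
    q_hat = ((r // b ** (k - 1)) * mu) // b ** (k + 1)
    t = r - q_hat * m
    while t >= m:
        t -= m
    return t

# Example usage:
# b, k, mu = barrett_setup(1000003)
# barrett_reduce(123456789012345, 1000003, b, k, mu)  # equals 123456789012345 % 1000003

Because the precomputation of \mu is done once per modulus, the per-reduction cost drops to a couple of large multiplications plus a small number of subtractions, which is the trade-off described above.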
These methods ensure that basic operations scale effectively, underpinning higher-level number-theoretic computations such as modular arithmetic and the cryptographic algorithms built on it.

Modular Arithmetic and GCD Computations

Modular arithmetic forms a cornerstone of computational number theory, enabling efficient handling of large integers by working within finite rings. Two integers a and b are congruent modulo m, denoted a \equiv b \pmod{m}, if m divides a - b, which implies that arithmetic operations can be reduced modulo m without altering the result's residue class. This reduction is computationally advantageous, as it bounds the size of intermediate values during calculations, preventing overflow in fixed-precision arithmetic.

The Chinese Remainder Theorem (CRT) extends this framework to solve systems of simultaneous congruences. For pairwise coprime moduli m_1, m_2, \dots, m_k, the system x \equiv a_i \pmod{m_i} for i = 1 to k has a unique solution modulo M = \prod m_i. The explicit solution is given by x = \sum_{i=1}^k a_i M_i y_i \pmod{M}, where M_i = M / m_i and y_i is the modular inverse of M_i modulo m_i. Computationally, CRT decomposes complex operations, such as multiplications of large numbers, into parallel computations modulo each m_i, followed by recombination, which is essential for protocols like RSA encryption.

The greatest common divisor (GCD) of two integers a and b, denoted \gcd(a, b), is the largest positive integer dividing both. The Euclidean algorithm computes it efficiently via the recurrence \gcd(a, b) = \gcd(b, a \bmod b) for a \geq b > 0, terminating when b = 0 with \gcd(a, 0) = a. This relies on the property that any common divisor of a and b also divides a \bmod b. The algorithm terminates because each step replaces b with the strictly smaller non-negative remainder a \bmod b < b, so the remainders form a decreasing sequence that must reach zero.

The extended Euclidean algorithm augments this to find Bézout coefficients x and y satisfying ax + by = \gcd(a, b), as guaranteed by Bézout's identity for any integers a, b. It proceeds recursively: for the base case b = 0, set x = 1, y = 0; otherwise, compute coefficients x_1, y_1 for \gcd(b, a \bmod b), then back-substitute x = y_1, y = x_1 - y_1 \cdot \lfloor a/b \rfloor. Here is a concise implementation in Python-style pseudocode:
def extended_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x1, y1 = extended_gcd(b, a % b)
    # Back-substitute the coefficients found for gcd(b, a mod b).
    x = y1
    y = x1 - y1 * (a // b)
    return g, x, y
This yields the coefficients directly. When \gcd(a, m) = 1, the extended Euclidean algorithm computes the modular inverse of a modulo m, which is the integer x such that ax \equiv 1 \pmod{m}, obtained as the Bézout coefficient x from solving ax + my = 1. The result is adjusted to lie in [0, m-1] by taking x \bmod m. For a prime modulus p, Fermat's Little Theorem provides an alternative: if p is prime and \gcd(a, p) = 1, then a^{p-1} \equiv 1 \pmod{p}, so the inverse is a^{p-2} \bmod p, computable via binary exponentiation in O(\log p) multiplications. This method avoids the extended GCD but requires primality of the modulus.

Both the Euclidean algorithm and its extended variant perform O(\log \min(a, b)) division steps, giving a naive bit complexity of O((\log a)^2) for inputs of magnitude at most a. The binary GCD algorithm, or Stein's algorithm, improves efficiency by replacing divisions with bitwise shifts and subtractions: it factors out powers of 2 from even arguments and replaces odd-odd steps with \gcd(|u - v|/2, \min(u, v)), avoiding costly modulo operations altogether. This variant performs a number of iterations proportional to the bit length of the inputs, each using only constant-time bit operations, making it preferable for hardware implementations and speeding up inverse computations in modular settings like CRT recombination.
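To illustrate how the extended Euclidean algorithm feeds into modular inversion and CRT recombination, the following Python sketch reuses the extended_gcd function above; the helper names and the small moduli are illustrative assumptions.

def mod_inverse(a, m):
    # Return x with a*x ≡ 1 (mod m); requires gcd(a, m) == 1.
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("modular inverse does not exist")
    return x % m

def crt(residues, moduli):
    # Solve x ≡ a_i (mod m_i) for pairwise coprime moduli; returns x mod M.
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for a_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        x += a_i * M_i * mod_inverse(M_i, m_i)
    return x % M

# Example: crt([2, 3, 2], [3, 5, 7]) returns 23, the unique solution modulo 105.

This mirrors the explicit CRT formula given above, with each term a_i M_i y_i computed from a single modular inverse.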

Key Algorithms in Number Theory

Primality Testing

Primality testing in computational number theory involves algorithms designed to determine whether a given positive integer n > 1 is prime, a fundamental problem with applications in cryptography and computer algebra. These algorithms range from simple deterministic methods suitable for small n to advanced probabilistic and deterministic approaches efficient for very large n. The efficiency is typically measured in terms of bit operations relative to \log n, the bit length of n. Trial division serves as the baseline method, while probabilistic tests like Miller-Rabin offer practical speed for large numbers with tunable error probabilities, and the AKS algorithm provides unconditional deterministic polynomial-time verification.

The simplest primality test is trial division, which checks for divisibility of n by all integers from 2 up to \lfloor \sqrt{n} \rfloor. If no such divisor exists, n is prime. This method has a time complexity of O(\sqrt{n}) arithmetic operations, making it inefficient for large n, where \sqrt{n} exceeds 10^{20} already for 40-digit numbers and grows far larger for the hundreds of digits used in cryptography. Despite its limitations, trial division remains useful for small n or as a preprocessing step in more complex algorithms.

Probabilistic primality tests leverage number-theoretic properties to achieve faster runtimes, accepting a small probability of declaring a composite number prime (a one-sided error). These tests are based on Fermat's little theorem, which states that if p is prime and a is an integer not divisible by p, then a^{p-1} \equiv 1 \pmod{p}. The naive Fermat test checks this congruence for randomly chosen bases a, but it fails against Carmichael numbers, which are composite yet satisfy the congruence for all coprime a. To address this, the Solovay-Strassen test, introduced in 1977, combines the Fermat test with Euler's criterion via the Jacobi symbol: for prime p, the symbol (a/p) equals a^{(p-1)/2} \pmod{p}. By verifying this for random a, the test has an error probability at most 1/2 per trial, and k independent trials yield error less than 2^{-k}; its expected running time is O((\log n)^3). However, it requires computing Jacobi symbols, which makes it somewhat slower in practice than tests based only on modular exponentiation.

The Miller-Rabin test, a refinement proposed by Gary Miller in 1976 under the generalized Riemann hypothesis and fully randomized by Michael Rabin in 1980, provides a stronger probabilistic alternative. It writes n-1 = 2^s d with d odd, then for a base a checks whether a^d \equiv 1 \pmod{n} or a^{2^r d} \equiv -1 \pmod{n} for some 0 \leq r < s; failure of all such conditions marks n composite, while success for k random witnesses a implies primality with error probability less than 4^{-k}. Composites that pass the test for a base a are strong pseudoprimes to base a. Witness selection is crucial: random bases work well, but deterministic variants use specific small sets of witnesses (e.g., the first few primes) that suffice for n < 2^{64}, derandomizing the test for practical sizes. The test's time complexity is O(k (\log n)^3), dominated by modular exponentiation via repeated squaring, making it vastly faster than trial division for large n. It has been optimized further, with error bounds improving for certain witness choices.

For unconditional deterministic primality testing, the AKS algorithm by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, published in 2002, proves that primality is in P. It checks whether n is a perfect power, then verifies a set of binomial identities in the ring (\mathbb{Z}/n\mathbb{Z})[x]/(x^r - 1) for a carefully chosen r = O((\log n)^2), ensuring n is prime if the identities hold and n has no small factors.
The original analysis yields a time complexity of O((\log n)^{10.5}), later improved to about O((\log n)^6) through optimizations in polynomial multiplication and root finding. Though theoretically groundbreaking as the first unconditional polynomial-time test, AKS is slower in practice than probabilistic tests for numbers up to thousands of digits due to high constant factors, but it guarantees correctness without unproven assumptions.

Specialized deterministic tests excel for structured forms like Mersenne numbers M_p = 2^p - 1 with prime p. The Lucas-Lehmer test, developed by Édouard Lucas in 1876 and refined by Derrick Lehmer, defines a sequence s_0 = 4, s_i = s_{i-1}^2 - 2 \pmod{M_p}, and declares M_p prime if s_{p-2} \equiv 0 \pmod{M_p}. This requires p - 2 modular squarings of p-bit numbers and is highly efficient with FFT-based arithmetic and hardware optimizations, enabling the discovery of record primes via the Great Internet Mersenne Prime Search (GIMPS). As of November 2025, the largest known prime is the Mersenne prime 2^{136279841} - 1, with 41,024,320 digits, verified using the Lucas-Lehmer test.
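To make the probabilistic workflow concrete, here is a minimal Python sketch of the Miller-Rabin test as described above; the choice of 20 random bases and the small trial-division prefilter are illustrative assumptions, not fixed requirements.

import random

def is_probable_prime(n, k=20):
    # Miller-Rabin: returns False if n is composite, True if n is probably
    # prime, with error probability below 4**(-k) for random bases.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)                 # modular exponentiation by squaring
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a witnesses compositeness
    return True

# Example: is_probable_prime(2**61 - 1) returns True (a known Mersenne prime).

In production settings the random bases are often replaced by the fixed witness sets mentioned above, which make the test deterministic for all n < 2^{64}.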

Integer Factorization

Integer factorization involves decomposing a positive integer n > 1 into a product of prime numbers, a fundamental task in computational number theory that underpins many cryptographic systems despite its apparent simplicity. Algorithms for this purpose range from deterministic exact methods suitable for small n to probabilistic approaches that scale to much larger values, with running times often measured in subexponential complexity classes relative to the bit length of n. The choice of method depends on the size and structure of n, as well as the expected size of its prime factors; for instance, methods exploiting special forms like differences of squares are efficient when factors are close, while general-purpose sieving techniques handle arbitrary composites.

Basic exact methods include trial division and Fermat's factorization. Trial division systematically tests divisibility of n by all primes up to \sqrt{n}, requiring O(\sqrt{n}) operations in the worst case but quickly identifying small factors; it is often used as a preprocessing step in more advanced algorithms. Fermat's method, dating back to the 17th century, applies when n has factors p and q that are close in value, expressing n as a difference of squares: search for integers x > \sqrt{n} such that x^2 - n = y^2, yielding factors (x - y) and (x + y); the search is fast when |p - q| is small compared to \sqrt{n}. These methods are inefficient for large n without small or closely spaced factors but serve as building blocks, often combined with GCD computations to verify and refine potential factors.

Heuristic methods like Pollard's rho algorithm offer faster detection of medium-sized factors. Introduced by John Pollard in 1975, it generates a pseudorandom sequence via the iteration x_{i+1} = (x_i^2 + c) \bmod n for a random starting value x_0 and constant c \neq 0, -2, using Floyd's cycle detection to find indices i, j where \gcd(|x_i - x_j|, n) yields a nontrivial factor with high probability; the expected running time is O(\sqrt{p}) to find the smallest prime factor p. The method's efficiency stems from the birthday paradox: the sequence reduced modulo p forms a rho-shaped cycle that collides after roughly \sqrt{p} steps. The elliptic curve method (ECM), proposed by Lenstra in 1985 (with the full analysis published in 1987), extends this idea to elliptic curves over \mathbb{Z}/n\mathbb{Z}: select a random curve E: y^2 = x^3 + ax + b and point P, then compute scalar multiples kP until an attempted modular inversion fails because the group order modulo some prime factor p divides k, revealing p via a GCD computation; it excels at finding factors of 20–60 digits in practice, with heuristic time O(\exp(\sqrt{2 \log p \log \log p})).

Advanced sieving algorithms achieve subexponential performance by finding many smooth numbers—integers with all prime factors below a bound B—whose products yield congruences of squares modulo n. The probability that a random integer x of size around n^{1/2} is B-smooth, with B \approx \exp((\log n)^{1/2} (\log \log n)^{1/2}), is approximately \rho(u), where u = \log x / \log B and \rho is the Dickman-de Bruijn function satisfying \rho(u) = \frac{1}{u} \int_{u-1}^u \rho(t) \, dt for u > 1, with \rho(u) \approx u^{-u} asymptotically.
Dixon's random squares method (1981) was an early precursor, but the quadratic sieve (QS), developed by Pomerance in the 1980s, improved it by sieving values of the quadratic polynomial Q(x) = (x + \lfloor \sqrt{n} \rfloor)^2 - n near zero for smoothness over a factor base of small primes, collecting relations until the matrix of exponent vectors has a kernel dependency, solvable by Gaussian elimination, that produces a congruence of squares factoring n; its complexity is L_n[1/2, 1 + o(1)], where L_n[\alpha, c] = \exp(c (\log n)^\alpha (\log \log n)^{1-\alpha}). The number field sieve (NFS), introduced in the 1990s, generalizes QS to algebraic number fields for better smoothness, making it the fastest classical method for large general n. The special NFS (SNFS) targets n of the form r^e \pm s for small r, s and exponent e, sieving smooth values in \mathbb{Z} and in the ring \mathbb{Z}[\alpha], where \alpha is a root of an irreducible polynomial of degree d \approx (\log n / \log \log n)^{1/3}; it produced early record factorizations of special-form numbers exceeding 100 digits. The general NFS (GNFS) adapts this to numbers without special form, using two irreducible polynomials sharing a root modulo n to sieve over rational and algebraic integers, with complexity L_n[1/3, (64/9)^{1/3} + o(1)] \approx L_n[1/3, 1.923], enabling factorization of numbers up to 250 digits as of 2020 using distributed computing. GNFS holds all major general-purpose records, such as the 232-digit RSA-768 in 2009, but requires massive resources for the sieving and linear algebra phases.

Quantum algorithms dramatically alter the landscape: Shor's 1994 algorithm factors n in polynomial time O((\log n)^3) on a quantum computer by reducing factorization to period-finding for the function f(x) = a^x \bmod n with a random a coprime to n, using the quantum Fourier transform to find the order r and hence factors via \gcd(a^{r/2} \pm 1, n); while not yet practical due to quantum hardware limitations, it motivates post-quantum cryptography.
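The following Python sketch is a minimal version of Pollard's rho method described above, using Floyd cycle detection and a randomly chosen constant c; it assumes n is composite and is meant only to illustrate how GCDs of differences in the pseudorandom sequence expose a factor.

from math import gcd
import random

def pollard_rho(n):
    # Return a nontrivial factor of composite n; retries with a new c if the
    # cycle collapses (gcd equals n).
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n - 1)
        f = lambda x: (x * x + c) % n
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)              # tortoise advances one step
            y = f(f(y))           # hare advances two steps
            d = gcd(abs(x - y), n)
        if d != n:
            return d

# Example: pollard_rho(8051) returns 83 or 97, since 8051 = 83 * 97.

Repeated calls, combined with a primality test on each cofactor, give a complete factorization for numbers whose smallest prime factors are within reach of the O(\sqrt{p}) running time.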

Applications

Cryptography

Computational number theory underpins modern cryptography by leveraging the computational hardness of problems such as integer factorization and the discrete logarithm problem in finite fields or elliptic curves. These problems enable secure key generation, encryption, and digital signatures without requiring prior shared secrets, forming the basis for protocols that ensure confidentiality and authenticity in digital communications. Seminal developments in this area, starting from the mid-1970s, have led to widely adopted systems that rely on efficient algorithms for modular arithmetic while assuming certain number-theoretic problems remain intractable on classical computers.

The RSA cryptosystem, introduced in 1978 by Rivest, Shamir, and Adleman, is a foundational public-key encryption scheme based on the difficulty of factoring the product of two large primes. Key generation involves selecting two large primes p and q—typically verified using probabilistic primality tests—and computing the modulus n = p q, followed by choosing a public exponent e coprime to (p-1)(q-1) and deriving the private exponent d such that e d \equiv 1 \pmod{(p-1)(q-1)}. Encryption of a message m uses the public key (n, e) to compute c = m^e \bmod n, while decryption recovers m = c^d \bmod n using the private key. The security of RSA rests on the RSA assumption, which posits that given n and e, it is computationally infeasible to compute d or factor n without knowledge of p and q. This assumption underpins security reductions for RSA-based protocols; breaking the encryption is believed to be no easier than factoring n, although a formal equivalence remains unproven. RSA remains a cornerstone of secure communications, such as in TLS, despite advances in factoring algorithms.

The Diffie-Hellman key exchange protocol, proposed by Diffie and Hellman in 1976, enables two parties to agree on a shared secret key over an insecure channel by exploiting the discrete logarithm problem in the multiplicative group of integers modulo a large prime p. Participants select a prime p and generator g, then exchange public values g^a \bmod p and g^b \bmod p (where a and b are private), computing the shared key g^{a b} \bmod p. Security relies on the intractability of computing discrete logarithms, preventing an eavesdropper from deriving the secret from the exchanged values. This protocol forms the basis for secure key establishment in protocols like TLS and SSH.

Elliptic curve cryptography (ECC), independently proposed by Victor Miller and Neal Koblitz in the mid-1980s, adapts discrete logarithm problems to elliptic curves over finite fields, offering stronger security per bit than RSA or Diffie-Hellman. Curves are typically defined by Weierstrass equations y^2 = x^3 + a x + b over \mathbb{F}_p, with operations like point addition providing the group structure. ECC enables key exchanges and signatures with smaller key sizes—for instance, 256-bit ECC keys provide security comparable to 3072-bit RSA—due to the higher difficulty of the elliptic curve discrete logarithm problem. This efficiency makes ECC suitable for resource-constrained devices, and it is standardized in protocols like TLS 1.3.

ElGamal encryption, described by ElGamal in 1985, extends Diffie-Hellman principles to public-key encryption using discrete logarithms in finite fields. A public key consists of a prime p, a generator g, and g^x \bmod p (where x is the private key); to encrypt m, a random k yields the ciphertext (g^k \bmod p, m \cdot (g^x)^k \bmod p), which is decrypted using x. The scheme's security reduces to the computational Diffie-Hellman assumption.
Similarly, the Digital Signature Algorithm (DSA), standardized by NIST in FIPS 186 (initially 1994), provides signatures based on discrete logarithms in a subgroup of \mathbb{Z}_p^*, using parameters such as a prime p, subgroup order q, and generator g; it signs messages via modular exponentiations and verifies signatures using the public key, with security tied to discrete logarithm hardness. DSA influenced standards like ECDSA for elliptic curves.

As quantum computers threaten factorization- and discrete-logarithm-based systems via algorithms like Shor's, post-quantum alternatives draw on other number-theoretic hardness assumptions. The NTRU cryptosystem, introduced by Hoffstein, Pipher, and Silverman in 1998, is a lattice-based scheme operating over polynomial rings \mathbb{Z}[x]/(x^N - 1), using short, easily generated keys and relying on the difficulty of finding short vectors in certain lattices. NTRU offers efficient encryption and decryption with resistance to quantum attacks, though its variants were not selected in NIST's post-quantum standardization process, which finalized standards like ML-KEM (FIPS 203, based on module lattices) in August 2024 and added the code-based HQC in March 2025.

Cryptographic implementations in computational number theory emphasize efficiency through algorithms like square-and-multiply for modular exponentiation, which computes a^b \bmod n in O(\log b) multiplications by iterative squaring and reduction, essential for RSA and Diffie-Hellman operations. However, these face side-channel attacks, such as timing attacks that exploit variable execution times in modular exponentiation to infer bits of private keys, as demonstrated by Kocher in 1996. Defenses include constant-time implementations and blinding techniques to mitigate information leakage from timing, power consumption, or electromagnetic emissions.
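A minimal Python sketch of the left-to-right square-and-multiply routine described above follows; real cryptographic code would instead use a constant-time implementation (or a vetted library) to resist the timing attacks just mentioned.

def modexp(a, b, n):
    # Compute a**b mod n by scanning the exponent's bits from most to least
    # significant: always square, and multiply when the bit is 1.
    result = 1
    a %= n
    for bit in bin(b)[2:]:
        result = (result * result) % n
        if bit == '1':
            result = (result * a) % n
    return result

# Example: modexp(5, 117, 19) == pow(5, 117, 19) == 1.

The data-dependent multiply inside the loop is exactly the branch that naive implementations leak through timing, which is why hardened libraries use constant-time ladders or blinding.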

Coding Theory and Other Uses

In coding theory, computational number theory provides foundational tools for constructing error-correcting codes over finite fields, algebraic structures built on modular arithmetic that keep all operations within a finite set. Reed-Solomon codes, a prominent example, operate as evaluation codes in which a message polynomial of degree less than k is evaluated at n distinct points in the finite field \mathbb{F}_q, with q a prime power, allowing correction of up to (n-k)/2 symbol errors through decoding algorithms that leverage polynomial interpolation. These codes excel at handling burst errors due to their maximum distance separable property, derived from the field's algebraic structure, and have been optimized using fast Fourier transforms over finite fields for efficient encoding and decoding. BCH codes extend this framework by defining cyclic codes whose generator polynomials are products of minimal polynomials of consecutive powers of a primitive element in \mathbb{F}_{q^m}, often using cyclotomic cosets to identify the roots, enabling correction of up to t errors where the designed distance is \delta = 2t+1. The reliance on number-theoretic constructs like primitive roots and field extensions ensures BCH codes achieve near-optimal performance for binary and non-binary alphabets in practical systems.

Beyond error correction, computational number theory supports pseudorandom number generation and hashing for reliable data processing. The Blum-Blum-Shub generator produces a sequence of bits by iterated squaring modulo N = p q, where p and q are large primes congruent to 3 mod 4, yielding unpredictability under the quadratic residuosity assumption and the hardness of integer factorization, making it suitable for applications requiring high-quality randomness. Hash functions such as SHA-256 incorporate modular operations, including additions of 32-bit words and bit rotations, to process input blocks through compression functions that mix data via bitwise operations and constants derived from the fractional parts of square and cube roots of small primes, ensuring avalanche effects and resistance to preimage attacks through the iterative Merkle-Damgård construction. These applications draw on modular arithmetic for finite computations, distinct from the primality testing used in key generation.

In scientific computing, number-theoretic algorithms address Diophantine approximation by seeking rationals p/q minimizing |\alpha - p/q| for irrational \alpha, often via the continued fraction expansion or LLL lattice reduction to find short vectors in approximation lattices, which is crucial for investigating transcendental numbers or solving Pell equations efficiently. The PSLQ algorithm detects integer relations \sum b_i x_i = 0 among the entries of a real vector x by maintaining an integer matrix basis and iteratively applying partial reduction steps to shrink a monitored entry until it falls below a threshold, offering numerical stability and polynomial-time convergence for moderate-precision inputs, as proven through bounds on the growth of intermediate values. These methods enable empirical discoveries, such as candidate relations among constants like \pi and e.

Practical deployments highlight these techniques' impact: Reed-Solomon codes are used for forward error correction in modern GNSS navigation messages, such as in the Galileo system, to correct multiple symbol errors and improve reliability amid noise. However, scalability challenges arise for very large inputs, as operations on high-precision integers in lattice reduction or relation-finding exceed standard hardware word sizes, demanding parallelization or approximation heuristics.
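The following Python sketch shows the Blum-Blum-Shub iteration described above; the tiny primes and seed are illustrative toy values only, since real deployments require large secret primes congruent to 3 mod 4.

from math import gcd

def bbs_bits(seed, p, q, count):
    # Blum-Blum-Shub: iterate x -> x^2 mod N and output the least significant
    # bit of each state. Requires p ≡ q ≡ 3 (mod 4) and gcd(seed, N) == 1.
    N = p * q
    assert p % 4 == 3 and q % 4 == 3 and gcd(seed, N) == 1
    x = (seed * seed) % N
    bits = []
    for _ in range(count):
        x = (x * x) % N
        bits.append(x & 1)
    return bits

# Toy example (insecure parameters): bbs_bits(seed=3, p=11, q=23, count=8)

The security argument sketched in the text applies only when p and q are large and kept secret, so a sketch like this is useful for understanding the recurrence rather than for generating cryptographic randomness.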

Computational Tools

Software Packages

PARI/GP is an open-source computer algebra system specifically designed for efficient computations in number theory, including arbitrary-precision arithmetic, integer factorization, algebraic number theory, elliptic curves, modular forms, and L-functions. It consists of the PARI C library for high-speed calculations and the GP interactive shell, which features a scripting language for automating tasks and a compiler (gp2c) that accelerates scripts by a factor of 3-4. Installation involves downloading the source or binaries from the official site, compiling with dependencies like GMP for optimal performance, and running the GP interpreter via the command gp for interactive sessions; for example, basic usage includes commands like factor(123456789) to factor an integer or ellinit([0,-1,1,-10,5]) to initialize an elliptic curve. Benchmarks demonstrate its efficiency, such as the Lewis-Wester suite, where recent versions with a GMP kernel (e.g., 2.15.1) complete integer arithmetic tests rapidly on modern hardware for operations up to 1000-bit numbers, with ongoing improvements in factorization tasks.

SageMath serves as a comprehensive open-source mathematics software system that integrates computational number theory with broader mathematical domains, leveraging Python as its primary interface for scripting and interactivity. It incorporates PARI/GP as a core component for specialized number theory functions, such as primality testing and elliptic curve computations, while extending capabilities through additional libraries like FLINT for fast arithmetic. Installation is available via pre-built packages for Linux, macOS, and Windows, or by building from source using make after cloning the repository; a basic usage example is factor(314159265358979323846264338327950288419716939937510) in the Sage notebook or terminal, which utilizes PARI's backend for rapid factorization (version 10.7 as of August 2025). SageMath supports number theory tasks ranging from modular arithmetic to Diophantine equations, making it suitable for research and education, with benchmarks showing that moderate-size factorizations complete in seconds on standard hardware when interfacing with PARI.

YAFU (Yet Another Factoring Utility) is a standalone command-line tool optimized for integer factorization, particularly employing the self-initializing quadratic sieve (SIQS) for numbers up to about 100 digits and the general number field sieve (GNFS) for larger composites up to about 160 digits (version 3.0 as of January 2025). It automates algorithm selection, including the elliptic curve method (ECM) for small factors, and provides detailed logs for the sieving and linear algebra phases. No formal installation is required; users download the binary, place it in a directory, and execute commands like ./yafu "factor(123456789012345678901)" to initiate factorization, with options for batch processing ranges of inputs. YAFU's efficiency is evident in benchmarks where it factors a 100-digit semiprime using SIQS typically within tens of minutes to hours on a single core, and it integrates support for primality testing via built-in probabilistic methods.

Historically, early computational number theory efforts relied on custom software for projects like the Cunningham tables, which catalog factors of numbers of the form b^n \pm 1 for bases b = 2 to 12 and large exponents n. The Cunningham Project, initiated manually in 1925 but computationally advanced from the mid-1970s by researchers including John Brillhart and Samuel Wagstaff, used pioneering programs on mainframes to compute and extend these tables, contributing to developments in factorization techniques.
These early tools, written for the mainframes of the era, processed numbers up to about 50 digits and laid groundwork for modern sieving methods, with the tables updated collaboratively to include over 6000 entries by the 1980s.

Libraries and Implementations

Computational number theory relies on specialized libraries that provide efficient implementations of arithmetic operations and algorithms, enabling developers to integrate high-precision computations into applications ranging from cryptography to algebraic research. These libraries often focus on arbitrary-precision integers and polynomials, supporting operations like multiplication and modular reduction that underpin number-theoretic tasks. Widely adopted tools such as GMP form the foundation for many systems, while specialized libraries like NTL and FLINT extend capabilities to advanced structures like lattices and finite fields.

The GNU Multiple Precision Arithmetic Library (GMP) is a foundational C library for arbitrary-precision arithmetic, supporting signed integers, rational numbers, and floating-point operations with an emphasis on speed and portability. Released under the GNU Lesser General Public License, GMP is utilized in numerous tools, including OpenSSL for cryptographic computations and SageMath for mathematical research, handling integers with thousands of digits efficiently through assembly-optimized routines (version 6.3.0 as of 2023, with no major updates reported by 2025). A key API example is the mpz_mul function, which multiplies two arbitrary-precision integers: in C, one declares mpz_t a, b, c; mpz_init(a); mpz_init(b); mpz_init(c); mpz_set_ui(a, 123); mpz_set_ui(b, 456); mpz_mul(c, a, b);, resulting in c holding 56088 and demonstrating GMP's straightforward interface for basic arithmetic that scales to large operands. Performance-wise, GMP's assembly-tuned kernels make it the de facto benchmark against which other bignum libraries are measured.

The Number Theory Library (NTL), developed by Victor Shoup, is a high-performance C++ library designed for advanced number-theoretic computations, including arbitrary-length integers, polynomials over the integers or finite fields, vectors, matrices, and lattices, with built-in support for polynomial factorization (version 11.5.1 as of 2021). Distributed under an open-source license, NTL emphasizes modularity and ease of use for research in number theory and cryptography, and it can integrate with GMP for low-level arithmetic. For instance, polynomial multiplication in NTL can be performed by declaring ZZX f, g, h;, populating f and g with SetCoeff, and calling mul(h, f, g);, leveraging FFT-based methods for efficiency on degrees up to millions. Dated benchmarks (from 2021) show NTL competitive with contemporaries like FLINT in univariate multiplication over the integers for degrees around 10^4, with strengths in lattice-based operations relevant to cryptography; more recent FLINT versions (3.0 as of 2025) have shown improvements that may alter relative performance.

FLINT (Fast Library for Number Theory) is an optimized C library tailored for number theory, providing implementations for rings such as the integers, rationals, polynomials, and finite fields, alongside matrix operations and integer factorization via methods such as the elliptic curve method. Licensed under the GNU Lesser General Public License version 3 or later, FLINT prioritizes computational speed through low-level optimizations and multi-threading support, making it suitable for large-scale computations in systems like SageMath (version 3.0 as of 2025). An example usage for multiplying two polynomials with rational coefficients is fmpq_poly_t a, b, c; fmpq_poly_init(a); fmpq_poly_init(b); fmpq_poly_init(c); followed by setting the operands and calling fmpq_poly_mul(c, a, b);, yielding the product polynomial.
In benchmarks, FLINT has been reported to be roughly twice as fast as GMP-based routines for some univariate operations at the 1024-bit level and up to 6.6 times faster with multi-threading, establishing its impact on high-performance algebraic tasks. For integration with higher-level languages, bindings like gmpy2 extend GMP's functionality to Python, supporting multiple-precision integers, rationals, and floating-point numbers with correct rounding via the MPFR and MPC libraries, facilitating high-precision work in scientific computing. In Python, gmpy2 allows operations such as from gmpy2 import mpz; a = mpz(123); b = mpz(456); c = a * b; print(c)  # outputs 56088, mirroring GMP's efficiency while adding Pythonic features like context managers for precision control. This integration has enabled performance gains in Python-based scripts, often approaching native C speeds for arithmetic operations.

In modern ecosystems, Rust's num-bigint crate provides big-integer arithmetic with a focus on safety and concurrency, supporting signed and unsigned arbitrary-precision integers through a BigInt type that implements standard traits for addition, multiplication, and modular operations. As part of the broader num family, it is tested for Rust 1.65 and later, with random number generation available via the rand feature, making it suitable for applications in cryptography and systems programming. For quantum extensions, IBM's Qiskit library includes implementations of Shor's algorithm for integer factorization, allowing simulation and execution on quantum hardware through circuits that leverage quantum Fourier transforms, as demonstrated in tutorials for factoring small numbers like 15. Qiskit thus bridges classical libraries with quantum simulators, supporting hybrid workflows.
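As a small worked illustration of how such bindings expose number-theoretic primitives, the following Python sketch uses gmpy2's mpz, powmod, and invert functions (assuming gmpy2 is installed); the modulus and base are arbitrary illustrative values.

from gmpy2 import mpz, powmod, invert

p = mpz(2**127 - 1)             # a Mersenne prime used as the modulus
a = mpz(123456789)
print(powmod(a, p - 1, p))      # Fermat's little theorem: prints 1
print((a * invert(a, p)) % p)   # modular inverse check: prints 1

Because gmpy2 delegates to GMP's C routines, loops built from such calls typically run far faster than the equivalent pure-Python arithmetic on very large operands.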

    PARI/GP is a widely used computer algebra system designed for fast computations in number theory (factorizations, algebraic number theory, elliptic curves)Missing: benchmarks | Show results with:benchmarks
  57. [57]
    bbuhrow/yafu: Automated integer factorization - GitHub
    YAFU is primarily a command-line driven tool. You provide the number to factor and, via screen output and log files, YAFU will provide you the factors.
  58. [58]
    yafu download | SourceForge.net
    Rating 5.0 (6) · Free · ReferenceYAFU is primarily a command-line driven tool. You provide the number to factor and, via screen output and log files, YAFU will provide you the factors.
  59. [59]
    [PDF] The Cunningham Project - Purdue University
    The goal of the Cunningham Project is to factor numbers of the form bn ± 1 for integers 2 ≤ b ≤ 12. The factors of these numbers are important ingredients in.Missing: software | Show results with:software
  60. [60]
    The GNU MP Bignum Library
    GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers.GMP manual · GMP 6.3 release notes · GMP Repository Usage
  61. [61]
    NTL: A Library for doing Number Theory
    NTL is a high-performance, portable C++ library providing data structures and algorithms for manipulating signed, arbitrary length integers.A Tour of NTL · Download NTL · Summary of Changes
  62. [62]
    FLINT: Fast Library for Number Theory
    FLINT is a C library for doing number theory, freely available under the GNU Lesser General Public License version 3 or later.
  63. [63]
    Top (GNU MP 6.3.0)
    This manual describes how to install and use the GNU multiple precision arithmetic library, version 6.3.0.Installing GMP · Introduction to GNU MP · C++ Class Interface · GMP Basics
  64. [64]
    Victor Shoup's Home Page
    NTL is a high-performance, portable, and free C++ library providing data structures and algorithms for manipulating signed, arbitrary length integers, and for ...
  65. [65]
    A Tour of NTL
    Introduction · Examples · Programming Interface · Summary of NTL's Main Modules · Obtaining and Installing NTL for UNIX · Obtaining and Installing NTL for Windows ...Missing: Number Theory
  66. [66]
    Fast Library for Number Theory — FLINT 3.4.0-dev documentation
    FLINT is a C library for doing number theory. FLINT is free software distributed under the GNU Lesser General Public License (LGPL), version 3 or later.
  67. [67]
    Fast Library for Number Theory - Applications & benchmarks - FLINT
    FLINT is about twice as fast as GMP and five times faster than MPFR on this benchmark. When allowed to use multiple threads, FLINT is 6.6x faster than GMP ( ...Missing: NTL | Show results with:NTL
  68. [68]
    Welcome to gmpy2's documentation! — gmpy2 2.2.2a1 documentation
    gmpy2 ... Python extension module that supports multiple-precision arithmetic. It is the successor to the original gmpy module (supported only the GMP library).
  69. [69]
    Overview — gmpy2 2.2.2a1 documentation - Read the Docs
    In case you are interested only in the GMP bindings, you can use the python-gmp package. The mpfr and mpc types provide support for correctly rounded multiple ...
  70. [70]
    num_bigint - Rust - Docs.rs
    num-bigint supports the generation of random big integers when the rand feature is enabled. To enable it include rand as rand = "0.8"
  71. [71]
    Shor's algorithm | IBM Quantum Documentation
    Shor's algorithm · Step 1: Map classical inputs to a quantum problem · Step 2: Optimize problem for quantum hardware execution · Step 3: Execute using Qiskit ...Setup · Specific example with N = 15... · Step 3: Execute using Qiskit...