
Coppersmith's attack

Coppersmith's attack is a family of lattice-based cryptanalytic techniques developed by Don Coppersmith in the mid-1990s, primarily used to find small roots of univariate or bivariate polynomial equations modulo a composite integer N (often an RSA modulus) or over the integers, enabling the recovery of secret information in cryptographic systems such as RSA when partial data is known. These methods rely on the Lenstra–Lenstra–Lovász (LLL) lattice basis reduction algorithm to construct lattices from the polynomials and efficiently identify short vectors corresponding to the roots. At its core, the attack addresses the problem of solving equations of the form f(x) \equiv 0 \pmod{N} for small |x| < N^{1/d}, where f is a monic polynomial of degree d and N is composite with unknown factorization; Coppersmith's theorem guarantees finding all such roots in polynomial time whenever the bound holds, with extensions to approximate roots and higher dimensions. For bivariate integer polynomials f(x, y) = 0, the method finds solutions where the product of the root bounds is sufficiently small relative to the polynomial's coefficients, typically XY < W^{2/(3\delta)}, where \delta is the degree and W is the largest coefficient of f(xX, yY). This framework has been refined over time, with improvements in bounds and efficiency for specific cases, such as factoring the modulus completely once half of the bits of one RSA prime factor are exposed.

In RSA cryptanalysis, Coppersmith's methods are particularly effective against low-exponent variants, such as the public exponent e = 3; for instance, the attack recovers a plaintext message if an adversary knows roughly two-thirds of its bits, or if multiple encryptions share overlapping plaintext portions exceeding eight-ninths of the modulus length. It also applies to scenarios with partial key exposure, where knowing the most or least significant bits of a private prime allows efficient factorization, and to short-padding attacks on multiple ciphertexts with distinct low-entropy pads. These vulnerabilities underscore the need for proper padding schemes and balanced key generation in RSA implementations to mitigate such lattice-based threats.

Background

RSA Cryptosystem

The RSA cryptosystem is an asymmetric public-key encryption scheme that enables secure data transmission without the need to share a secret key in advance. Invented by Ronald L. Rivest, Adi Shamir, and Leonard Adleman at MIT in 1977, it was first publicly described in a seminal paper published in 1978. The algorithm has since become a cornerstone of modern cryptography, widely adopted in protocols such as SSL/TLS for secure web communications and in digital signature standards.

At its core, RSA relies on the mathematics of large integer factorization and modular arithmetic. Key generation begins with the selection of two large, distinct prime numbers p and q, typically of similar bit length to ensure balance. The modulus N is then computed as N = p \times q. Euler's totient function is applied to derive \phi(N) = (p-1)(q-1), which counts the integers up to N that are coprime to it. A public exponent e is chosen such that 1 < e < \phi(N) and \gcd(e, \phi(N)) = 1. The corresponding private exponent d is calculated as the modular multiplicative inverse of e modulo \phi(N), satisfying the congruence e \times d \equiv 1 \pmod{\phi(N)}. The public key consists of the pair (N, e), while the private key is d (with p and q often discarded after key generation for added security).

Encryption and decryption in RSA operate directly on numerical representations of messages. To encrypt a plaintext message M with 0 \leq M < N, the ciphertext C is produced via C = M^e \mod N. Decryption reverses this process by computing M = C^d \mod N, which recovers the original message due to the properties of modular exponentiation and Euler's theorem (a generalization of Fermat's little theorem). This mechanism ensures that anyone with the public key can encrypt messages, but only the holder of the private key can decrypt them efficiently.

The security of RSA fundamentally rests on two related assumptions: the hardness of the integer factorization problem, specifically that no efficient algorithm exists to factor the product N = p \times q into its prime factors when p and q are sufficiently large, and the RSA problem, which posits that computing e-th roots modulo N (i.e., finding M given C, N, and e) is computationally infeasible without knowledge of the factorization. While an efficient factoring algorithm would break RSA by allowing recovery of d, the converse remains an open question, though practical implementations assume equivalence for security purposes. RSA has been standardized in the Public-Key Cryptography Standards (PKCS) #1, which specifies encoding schemes, padding methods, and key formats to mitigate implementation vulnerabilities.

In practice, RSA parameters are selected to balance security, performance, and compatibility. The public exponent e is commonly chosen from small prime values such as 3, 17, or 65537 (the latter being 2^{16} + 1, favored because its binary representation has only two set bits, so square-and-multiply exponentiation needs just 16 squarings and a single multiplication). The modulus N typically ranges from 2048 to 4096 bits in length to provide adequate protection against current computational capabilities, with 1024-bit keys now considered deprecated for most applications. These choices ensure rapid encryption (using the small e) while maintaining the computational intractability of decryption without d.
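
The following minimal sketch, assuming a SageMath environment, walks through these key generation, encryption, and decryption steps; the toy parameter sizes, the choice of exponent, and the random message are illustrative assumptions rather than recommendations.

```python
# Minimal textbook-RSA walkthrough (SageMath sketch); toy parameters only.
p = random_prime(2^512, lbound=2^511)      # two large distinct primes
q = random_prime(2^512, lbound=2^511)
N = p * q                                  # public modulus
phi = (p - 1) * (q - 1)                    # Euler's totient of N

e = 65537                                  # common public exponent, 2^16 + 1
assert gcd(e, phi) == 1                    # e must be coprime to phi(N)
d = inverse_mod(e, phi)                    # private exponent, e*d = 1 mod phi(N)

M = ZZ.random_element(N)                   # numeric plaintext, 0 <= M < N
C = power_mod(M, e, N)                     # encryption: C = M^e mod N
assert power_mod(C, d, N) == M             # decryption: C^d mod N recovers M
```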

Polynomial Root-Finding in Cryptanalysis

In polynomial root-finding within cryptanalysis, the core challenge is to identify small integer solutions to a polynomial congruence. Given a polynomial f(x) \in \mathbb{Z}[x] of degree k and a large composite modulus N, the task is to find an integer x_0 satisfying f(x_0) \equiv 0 \pmod{N} with |x_0| < N^{1/k}. This formulation assumes f(x) is monic for simplicity, though extensions handle general integer coefficients. Such problems arise frequently in cryptanalysis, where protocol weaknesses or side-channel leaks reduce security to solving equations with small unknown variables. For example, in the RSA cryptosystem with low public exponents, partial knowledge of plaintext or padding can yield polynomials whose small roots reveal full messages or keys. These reductions exploit the fact that cryptographic moduli like N = pq (with large primes p, q) obscure factorization, making direct root isolation over \mathbb{Z}/N\mathbb{Z} computationally hard without specialized tools.

Traditional methods for locating these roots, including brute-force enumeration, fail for cryptographically relevant sizes: enumerating candidates up to the bound takes on the order of N^{1/k} operations, which is exponential in the bit length of N and impractical once N has hundreds of bits. Earlier probabilistic techniques, such as those by Vallée et al., achieve only suboptimal bounds like |x_0| < N^{2/[k(k+1)]} and require assumptions on N's factorization or multiple samples, limiting their reliability in adversarial settings. Coppersmith's breakthroughs in the mid-1990s introduced lattice-based techniques that solve these problems in polynomial time for sufficiently small roots, transforming cryptanalysis by enabling attacks on flawed implementations. His methods rely on the LLL lattice basis reduction algorithm to uncover short vectors in lattices built from the polynomial's coefficients, providing deterministic guarantees under the target bound. Central to this framework is the construction of shift polynomials, such as multiples x^j f(x)^i together with multiples of powers of N, which encode the root's smallness into lattice vectors whose shortness reveals x_0 upon reduction. These shifts effectively "unwind" the modular constraint, allowing LLL to isolate candidates verifiable by substitution into f(x).
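
A small check of the shift-polynomial idea mentioned above, sketched in SageMath with an assumed toy modulus and coefficients: multiples of the form x^j f(x)^i inherit the root x_0, but modulo the larger power N^i, which is precisely the property the lattice construction encodes.

```python
# Shift polynomials x^j * f(x)^i vanish at the root x0 modulo N^i (SageMath).
N = 101 * 103                              # toy composite modulus
R.<x> = PolynomialRing(ZZ)
x0 = 7                                     # the small root we plant
f = x^2 + 3*x + ((-(x0^2 + 3*x0)) % N)     # monic, with f(x0) = 0 (mod N)

for i in range(1, 4):
    for j in range(3):
        g = x^j * f^i                      # a typical lattice basis polynomial
        assert g(x0) % N^i == 0            # the root survives modulo N^i
```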

Core Methods

Univariate Modular Polynomials

Coppersmith's algorithm for univariate modular polynomials solves the problem of finding small integer roots x_0 of a monic polynomial equation f(x) \equiv 0 \pmod{N}, where N is a composite modulus and f(x) = x^k + a_{k-1} x^{k-1} + \cdots + a_1 x + a_0 \in \mathbb{Z}[x], with the root bound |x_0| < N^{1/k}. This approach leverages lattice basis reduction to identify polynomials with small coefficients that share the root x_0 over the integers, enabling recovery of x_0 through standard algebraic techniques. The method, introduced by Coppersmith at EUROCRYPT '96, operates in polynomial time relative to the bit length of N and provably recovers every root below the bound, giving the asymptotic exponent \delta \approx 1/k for the normalized root size N^\delta.

The core construction, in the simplified form due to Howgrave-Graham, builds a lattice of dimension roughly h k, where h is a parameter controlling how close the achievable bound comes to N^{1/k}. The lattice basis consists of the coefficient vectors of the shift polynomials g_{i,j}(x) = N^{h-1-i} x^j f(x)^i for 0 \leq i < h and 0 \leq j < k, each evaluated at xX for the target root bound X so that the vector norms reflect the size of the root. Every g_{i,j} vanishes at x_0 modulo N^{h-1}, so any integer combination of the basis vectors corresponds to a polynomial that also vanishes at x_0 modulo N^{h-1}; if such a combination has sufficiently small coefficients, the congruence is forced to hold over the integers. The basis matrix B is lower triangular, with diagonal entries of the form N^{h-1-i} X^{ik+j}, which makes its determinant straightforward to evaluate when bounding the shortest reduced vector.

Applying the LLL algorithm to the lattice generated by B produces a reduced basis, from which the shortest vector v is extracted; this vector corresponds to a polynomial v(x) of degree less than h k with small integer coefficients such that v(x_0) = 0 over \mathbb{Z}. The LLL guarantee bounds the norm of v by 2^{(d-1)/4} \cdot \det(B)^{1/d} with d = h k, and Howgrave-Graham's lemma shows that this is small enough (below N^{h-1}/\sqrt{d} for suitable parameters) to force v(x_0) = 0 over the integers. The root x_0 is then recovered by standard integer root-finding on v(x), for example by factoring v over \mathbb{Q} and testing the small integer candidates against f modulo N. The overall running time is polynomial in \log N and k, with the lattice dimension, and hence the LLL cost, growing as the achievable bound approaches N^{1/k}. This technique establishes a foundational tool in lattice-based cryptanalysis, applicable in scenarios like low public exponent RSA where partial information yields a small modular root.
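
The construction above can be condensed into a short sketch, assuming SageMath. The parameter choice h = 3, the toy modulus size, and the planted 40-bit root are illustrative assumptions; production implementations tune the lattice dimension to the target bound (SageMath's built-in small_roots routine does this automatically).

```python
# Minimal univariate Coppersmith sketch (SageMath), following the
# Howgrave-Graham style construction described above; parameters are toy.
def coppersmith_univariate(f, N, X, h=3):
    """Return integer roots x0 with |x0| <= X and f(x0) = 0 (mod N), f monic."""
    k = f.degree()
    x = f.parent().gen()
    # Shift polynomials g_{i,j}(x) = N^(h-1-i) * x^j * f(x)^i, all of which
    # vanish at x0 modulo N^(h-1).
    shifts = [N^(h - 1 - i) * x^j * f^i for i in range(h) for j in range(k)]
    d = len(shifts)
    # Lattice basis: coefficients of g_{i,j}(x*X); the matrix is triangular.
    B = Matrix(ZZ, d, d)
    for row, g in enumerate(shifts):
        gX = g(x * X)
        for col in range(g.degree() + 1):
            B[row, col] = gX[col]
    B = B.LLL()
    # Shortest vector -> small-coefficient polynomial sharing x0 over ZZ.
    v = sum((B[0][col] // X^col) * x^col for col in range(d))
    return [r for r, _ in v.roots(ring=ZZ) if abs(r) <= X and f(r) % N == 0]

# Toy usage: plant a 40-bit root of a monic cubic modulo a ~512-bit N.
N = random_prime(2^256, lbound=2^255) * random_prime(2^256, lbound=2^255)
R.<x> = PolynomialRing(ZZ)
x0 = ZZ.random_element(2^40)
a = ZZ.random_element(N)
f = x^3 + a*x + ((-(x0^3 + a*x0)) % N)     # f(x0) = 0 (mod N) by construction
print(coppersmith_univariate(f, N, X=2^40))  # expected to include x0
```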

Bivariate Integer Polynomials

Coppersmith's method for bivariate integer polynomials addresses the problem of finding small integer solutions (x_0, y_0) to an equation f(x, y) = 0, where f \in \mathbb{Z}[x, y] is a polynomial of low degree k, and the solutions satisfy |x_0| < X, |y_0| < Y with XY sufficiently small relative to W, the largest absolute value among the coefficients of f(xX, yY). This approach is effective when the roots are sufficiently small relative to W, enabling polynomial-time recovery using lattice reduction techniques. The method extends ideas from the univariate case but accounts for the two-variable structure through a direct lattice construction in the bivariate monomial basis.

The core technique involves constructing a lattice from monomial multiples of f(x, y), specifically polynomials of the form x^i y^j f(x, y) for 0 \leq i + j \leq w (with w chosen based on k), with each monomial column x^a y^b weighted by X^a Y^b to account for the expected root bounds. To facilitate LLL application, the lattice is padded with rows built from multiples of a large integer (comparable to W and chosen larger than the expected norms) to ensure full rank and balance the basis vectors' norms. The lattice basis consists of the coefficient vectors of these weighted polynomials in the monomial basis \{x^a y^b\} up to sufficiently high degree. For a polynomial of low degree k, the lattice dimension is O((k+w)^2), making LLL reduction feasible in polynomial time.

Applying LLL yields a short vector corresponding to a polynomial h(x, y) with small integer coefficients such that h(x_0, y_0) \equiv 0 modulo the large padding integer. By Howgrave-Graham's lemma, if the norm of h evaluated on the weighted monomials (xX, yY) is smaller than that modulus divided by the square root of the lattice dimension, then h(x_0, y_0) = 0 over \mathbb{Z}. The roots can then be recovered by computing the resultant \operatorname{Res}_y(h(x, y), f(x, y)) to obtain a univariate polynomial in x whose small root is x_0, followed by substitution into f or h to solve for y_0, using standard integer root-finding methods.

The method provides asymptotic bounds on the recoverable root sizes: if f has maximum degree k in each variable separately, all roots with XY < W^{2/(3k)} can be found, while for total degree k the corresponding bound is XY < W^{1/k}. For balanced root sizes and a linear polynomial (k = 1), this allows roots up to roughly W^{1/2} in each variable. For fixed low k, the running time is polynomial in \log W, far superior to exhaustive search. These results build on Coppersmith's original formulation and have been refined for better constants and practicality. This general technique for bivariate integer equations finds applications in cryptanalysis, such as recovering partial private keys or exploiting related messages in systems like RSA, where small unknowns manifest in polynomial relations over the integers.
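
The final recovery step, eliminating one variable with a resultant and reading off integer roots, can be illustrated with a toy SageMath example; f and h below are hypothetical stand-ins for the target polynomial and the small-coefficient polynomial produced by lattice reduction, constructed to share the root (2, 5).

```python
# Toy resultant-based recovery of a shared integer root (SageMath sketch).
# QQ is used only for convenience; the polynomials of interest have integer
# coefficients, and (2, 5) is the planted common root.
R.<x, y> = PolynomialRing(QQ)
f = x + y - 7                     # stand-in for the target polynomial
h = x*y - 10                      # stand-in for the LLL-derived polynomial
rx = h.resultant(f, y)            # eliminate y: a univariate polynomial in x
for x0, _ in rx.univariate_polynomial().roots(ring=ZZ):
    for y0, _ in f.subs(x=x0).univariate_polynomial().roots(ring=ZZ):
        if h(x0, y0) == 0:
            print((x0, y0))       # prints the integer solutions (2, 5), (5, 2)
```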

RSA-Specific Attacks

Low Public Exponent Attack

In the low public exponent attack on RSA, Coppersmith's univariate modular root-finding method is applied to recover plaintext messages encrypted with a small public exponent when most of the plaintext is already known. This scenario arises when the public exponent e is small (e.g., e = 3) and the plaintext has the stereotyped form M = B \cdot 2^{m} + x, where the prefix B (for instance, a fixed protocol header) is known, the unknown portion x satisfies |x| < N^{1/e}, and the ciphertext C = M^e \mod N is available. The attack solves for the small root x in the modular equation (B \cdot 2^{m} + x)^e \equiv C \pmod{N} by constructing the monic univariate polynomial f(x) = (B \cdot 2^{m} + x)^e - C and applying Coppersmith's lattice-based technique to find roots bounded by |x_0| < N^{1/e}. This directly leverages the univariate modular method with degree k = e. (In the degenerate case where the entire message is smaller than N^{1/e}, no lattice step is needed: M^e < N, so the ciphertext equals M^e over the integers and an exact e-th root recovers M.) Coppersmith's theorem guarantees recovery provided the unknown part is below N^{1/e}; for e = 3 and a typical 1024-bit N, this permits recovery of roughly 341 unknown plaintext bits. The process involves building a lattice from shifts of f(x), performing reduction via the LLL algorithm, and extracting small roots from the resulting short vectors. This attack exposes vulnerabilities in unpadded RSA implementations with small exponents, as it enables efficient decryption of low-entropy or partially known messages, including those from repeated encryptions. It prompted the widespread adoption of randomized padding schemes such as OAEP to mitigate such risks by expanding the effective message size and introducing randomness. First detailed by Coppersmith in 1996, the method remains a foundational result in RSA cryptanalysis.
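
A compact sketch of the stereotyped-message case, assuming SageMath and its built-in small_roots routine (which performs the lattice construction internally); the 180-bit unknown tail, the prefix size, and the modulus size are illustrative assumptions.

```python
# Stereotyped-message attack for e = 3 (SageMath sketch); toy sizes.
p = random_prime(2^512, lbound=2^511)
q = random_prime(2^512, lbound=2^511)
N = p * q; e = 3

prefix = ZZ.random_element(2^800)          # known part of the plaintext
tail = ZZ.random_element(2^180)            # unknown 180-bit part, < N^(1/3)
M = prefix * 2^180 + tail
C = power_mod(M, e, N)

P.<x> = PolynomialRing(Zmod(N))
f = (prefix * 2^180 + x)^e - C             # monic in x, root x = tail mod N
roots = f.small_roots(X=2^180, beta=1)     # Coppersmith's method via LLL
assert tail in [ZZ(r) for r in roots]
```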

Håstad's Broadcast Attack

Håstad's broadcast attack targets scenarios where the same plaintext message M is encrypted under RSA for multiple recipients using the same small public exponent e but distinct moduli N_1, N_2, \dots, N_k, yielding ciphertexts C_i = M^e \mod N_i for i = 1, \dots, k, assuming the N_i are pairwise coprime and M < \min_i N_i. This setup arises in broadcast communications without proper padding, allowing an eavesdropper to recover M efficiently when k is sufficiently large relative to e. The attack exploits the structure of the simultaneous modular equations x^e \equiv C_i \pmod{N_i} to reconstruct M.

The core of the attack uses the Chinese Remainder Theorem to combine the individual relations g_i(x) = x^e - C_i \pmod{N_i} into a single univariate polynomial g(x) = \sum_{i=1}^k T_i g_i(x) \pmod{N'}, where N' = \prod_{i=1}^k N_i and the CRT coefficients T_i satisfy T_i \equiv 1 \pmod{N_i} and T_i \equiv 0 \pmod{N_j} for j \neq i. The combined polynomial has degree e and modulus N', and M is a root of g modulo N'. Direct root-finding modulo the potentially enormous N' is handled by Coppersmith's lattice-based technique, which uses shift polynomials (multiples of the form x^j g(x)^l for small j, l) to build a lattice whose short vectors reveal the root M. Coppersmith's method finds all roots below the size bound N'^{1/e} = (\prod_{i=1}^k N_i)^{1/e} in polynomial time; with k \geq e moduli of comparable size, this bound exceeds \min_i N_i, so any valid plaintext is recoverable. In the simplest unpadded case, where every ciphertext encrypts the identical M, Coppersmith's machinery is not even required: CRT yields M^e modulo N', and since M^e < N' this value equals M^e over the integers, so an exact e-th root recovers M, as in the sketch below.

The attack originates from Håstad's 1988 algebraic approach to solving systems of low-degree modular equations, which required roughly k > e(e+1)/2 equations for success (covering the case where each recipient applies a distinct known polynomial padding), and was later strengthened using Coppersmith's 1997 lattice-based root-finding results so that k \geq e relations of degree e suffice. This breaks unpadded or deterministically padded broadcasts by enabling recovery of the full message under the stated conditions. For fewer than e ciphertexts, partial-information attacks recover approximate or partial bits of M under relaxed bounds on the root size. These variants highlight the vulnerability of low-exponent RSA in multi-recipient settings without randomization.
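
A minimal sketch of the simplest unpadded case with k = e = 3, assuming SageMath; here CRT alone suffices, while the padded or polynomially related case would additionally require the Coppersmith step described above. Modulus and message sizes are toy assumptions.

```python
# Toy Håstad broadcast: identical M, e = 3, three coprime moduli (SageMath).
e = 3
moduli = [random_prime(2^512, lbound=2^511) * random_prime(2^512, lbound=2^511)
          for _ in range(e)]
M = ZZ.random_element(2^500)                  # plaintext below every modulus
cts = [power_mod(M, e, N) for N in moduli]    # what the eavesdropper sees

# CRT the ciphertexts to M^e modulo N_1*N_2*N_3; since M^e is smaller than
# that product, the value is exact over the integers and an e-th root ends it.
Me = crt(cts, moduli)
assert Me.nth_root(e) == M
```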

Franklin-Reiter Related-Message Attack

The Franklin-Reiter related-message attack targets low-exponent RSA when two plaintext messages are linearly related and encrypted under the same public key (N, e). Consider the scenario where the messages are M_1 = M + a and M_2 = M + b, with known constants a and b, yielding ciphertexts C_1 = M_1^e \mod N and C_2 = M_2^e \mod N. The goal is to recover the common component M. The attack constructs two univariate polynomials, f(x) = (x + a)^e - C_1 and g(x) = (x + b)^e - C_2, both of which have M as a root modulo N, i.e., f(M) \equiv 0 \pmod{N} and g(M) \equiv 0 \pmod{N}. Computing the greatest common divisor of f and g in the ring (\mathbb{Z}/N\mathbb{Z})[x] therefore yields a polynomial whose roots include M; in practice the GCD is almost always the linear factor x - M, from which M is read off directly. This GCD computation is efficient for small e, with time complexity polynomial in e and the bit length of N, as the polynomials have degree e. For e = 3, the attack is practical even for 512-bit moduli N. The method generalizes to higher-degree relations by iteratively computing resultants to reduce variables until a univariate polynomial is obtained.

Coppersmith extended the attack using lattice reduction techniques to find small roots of the resulting univariate polynomial (of degree up to e \cdot \deg(\text{relation})), or equivalently via bivariate integer polynomial methods for relations with small coefficients. The extension succeeds when the root size (e.g., |M|) is bounded by approximately N^{1/e} for the linear case, though practical variants achieve recovery for |M| < N^{1/(2e)} using optimized lattice constructions. For e = 3 and 512-bit N, this renders the scheme vulnerable if M is smaller than roughly 2^{85}. Limitations include the assumption of an exact known relation and inefficiency for large e without the small-root assumption, as the GCD computation becomes prohibitive.
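
A short sketch of the GCD step for e = 3, assuming SageMath; the toy primes, the offsets a and b, and the manual Euclidean loop (needed because the coefficient ring \mathbb{Z}/N\mathbb{Z} is not a field) are assumptions made for illustration.

```python
# Franklin-Reiter related-message recovery for e = 3 (SageMath sketch).
p = random_prime(2^256, lbound=2^255)
q = random_prime(2^256, lbound=2^255)
N = p * q; e = 3
M = ZZ.random_element(2^200)               # shared, unknown component
a, b = 1337, 424242                        # known additive constants
C1 = power_mod(M + a, e, N)
C2 = power_mod(M + b, e, N)

R.<x> = PolynomialRing(Zmod(N))
f = (x + a)^e - C1
g = (x + b)^e - C2

def poly_gcd(u, v):
    # Euclidean algorithm over Z/NZ; each leading coefficient is assumed
    # invertible (a failure here would reveal a factor of N instead).
    while v != 0:
        u, v = v, u % v
    return u.monic()

h = poly_gcd(f, g)                         # almost always x - M (mod N)
recovered = ZZ(-h[0])                      # constant term of x - M is -M
assert recovered == M
```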

Short-Pad Attack

In the short-pad attack, a common vulnerability arises when the same message M is encrypted multiple times under the same RSA public key (N, e) with different short random pads, as in naive implementations lacking proper randomization. Specifically, consider two encryptions of the padded plaintexts M \| P_1 and M \| P_2, where the random pads P_1, P_2 occupy the k least significant bits: the ciphertexts are C_1 = (2^{k} M + P_1)^e \mod N and C_2 = (2^{k} M + P_2)^e \mod N. If the pad length is sufficiently small, an attacker can recover the full plaintext M (and the pads) efficiently.

The attack, due to Coppersmith (1997), models the problem through the pad difference r = P_2 - P_1, which is small. Writing m = 2^k M + P_1, the two relations m^e \equiv C_1 \pmod{N} and (m + r)^e \equiv C_2 \pmod{N} share the unknown m; for e = 3, eliminating m by taking the resultant with respect to m of the polynomials m^3 - C_1 and (m + r)^3 - C_2 yields a univariate equation in r: r^9 + 3(C_1 - C_2) r^6 + 3(C_1^2 + 7 C_1 C_2 + C_2^2) r^3 + (C_1 - C_2)^3 \equiv 0 \pmod{N}. Coppersmith's univariate modular method is then applied to find the small root |r| < N^{1/9}, after which m (and hence M) is recovered from either ciphertext via the Franklin-Reiter related-message technique. The approach succeeds when the pad difference satisfies |r| < N^{1/e^2}, allowing recovery in time polynomial in \log N. For e = 3 and a 1024-bit modulus N, this permits attacks on pads up to approximately 113 bits in length.

This attack demonstrates the insecurity of textbook RSA with short or low-entropy pads, particularly for repeated encryptions of the same message, motivating the use of full-length randomized padding schemes such as OAEP. It also shows that the random portion of the padding must be substantially longer than (\log_2 N)/e^2 bits to resist such root-finding attacks.
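
A sketch of the short-pad computation for e = 3, assuming SageMath, with deliberately small toy parameters (24-bit pads, a roughly 512-bit modulus, and an explicit epsilon for small_roots); all of these sizes are illustrative assumptions chosen so the pad difference stays comfortably below N^{1/9}.

```python
# Short-pad attack for e = 3 (SageMath sketch); toy sizes, 24-bit pads.
p = random_prime(2^256, lbound=2^255)
q = random_prime(2^256, lbound=2^255)
N = p * q; e = 3
M = ZZ.random_element(2^300)                       # common message
P1, P2 = ZZ.random_element(2^24), ZZ.random_element(2^24)
C1 = power_mod(2^24 * M + P1, e, N)                # message || pad, twice
C2 = power_mod(2^24 * M + P2, e, N)

# Resultant of m^3 - C1 and (m + r)^3 - C2 with respect to m, written out
# explicitly: a monic degree-9 polynomial whose small root is r = P2 - P1.
R.<r> = PolynomialRing(Zmod(N))
g = r^9 + 3*(C1 - C2)*r^6 + 3*(C1^2 + 7*C1*C2 + C2^2)*r^3 + (C1 - C2)^3

cands = g.small_roots(X=2^25, beta=1, epsilon=0.04)   # Coppersmith step
assert any(ZZ(c) % N == (P2 - P1) % N for c in cands)
# With r known, the Franklin-Reiter GCD step on m^3 - C1 and (m + r)^3 - C2
# recovers the padded plaintext, and stripping the low 24 bits yields M.
```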

Extensions and Modern Applications

Small Private Exponent Attacks

In the context of RSA, Coppersmith's methods have been extended to target scenarios where the private exponent d is small relative to the modulus N = pq, with the public key (N, e) known. Specifically, if d < N^{0.292}, the private key can be recovered efficiently using lattice-based techniques. This vulnerability arises because a small d is sometimes chosen to speed up decryption, but it compromises security. The earliest result of this type, Wiener's continued-fraction attack, already recovers d below roughly N^{0.25} by reading d off a convergent of e/N; the Boneh-Durfee attack, introduced in 1999 and published in 2000, pushes the bound further by reformulating key recovery as a small-root problem.

The core idea builds on the key relation ed = 1 + k \phi(N) for some integer k, where \phi(N) = N + 1 - (p + q). Reducing this relation modulo e gives the bivariate polynomial f(x, y) = x(N + 1 + y) + 1, which has the small root (x_0, y_0) = (k, -(p + q)) modulo e, with |x_0| roughly bounded by N^\delta and |y_0| on the order of N^{1/2} when d < N^\delta. These roots are found using Coppersmith's method for small roots of bivariate modular polynomials: a lattice is generated from x-shifts and y-shifts of powers of f, with dimension growing with an optimization parameter, and LLL reduction produces short vectors corresponding to polynomials that vanish at the root over the integers. This approach achieves the bound \delta \approx 0.292 for d < N^\delta; the result is heuristic, since it assumes the polynomials obtained from reduction are algebraically independent, but it is reliable in practice, and it has been conjectured that the true insecurity threshold may extend as far as \delta = 0.5. Once the root is obtained, \phi(N) and hence the factorization of N follow, and d is recovered in polynomial time.

Recent advancements as of 2025 have further refined these techniques by explicitly combining continued fractions with Coppersmith's method, improving bounds in specific parameter regimes. For instance, using convergents of e/N to better estimate the prime sum p + q, the attack extends to d < N^{1 - \alpha/3 - \gamma/2}, where \alpha = \log_N e and \gamma measures the approximation error in p + q, outperforming prior bounds like Herrmann-May's when 0.201 < \alpha < 2.799 and \gamma \approx 1/2. These updates also apply to RSA variants with primes sharing significant bits, enhancing practical recovery for marginally larger d. Overall, such attacks underscore the risks of using small private exponents in RSA for performance gains; in practice d should be of the same order as \phi(N), and certainly well above N^{0.292}, to resist lattice-based cryptanalysis.
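
A sketch of the continued-fraction component, assuming SageMath: Wiener-style recovery for a toy key with a 100-bit private exponent, followed by a check of the bivariate relation that the Boneh-Durfee lattice attack exploits. All sizes are illustrative assumptions, and the full lattice construction is omitted.

```python
# Small private exponent, toy key (SageMath sketch): d has 100 bits, N ~512 bits.
p = random_prime(2^256, lbound=2^255)
q = random_prime(2^256, lbound=2^255)
N = p * q; phi = (p - 1) * (q - 1)
while True:
    d = ZZ.random_element(2^99, 2^100)
    if gcd(d, phi) == 1:
        break
e = inverse_mod(d, phi)                        # public exponent is then large

# Wiener-style recovery: d appears as the denominator of a convergent of e/N.
for conv in continued_fraction(e/N).convergents():
    k, d_cand = conv.numerator(), conv.denominator()
    if k == 0 or (e * d_cand - 1) % k != 0:
        continue
    phi_cand = (e * d_cand - 1) // k
    s = N - phi_cand + 1                       # candidate for p + q
    if s^2 - 4*N >= 0 and is_square(s^2 - 4*N):
        print("recovered d =", d_cand)         # succeeds for d < N^(1/4)/3
        break

# The relation exploited by Boneh-Durfee: f(x, y) = x*(N + 1 + y) + 1 has the
# small root (k, -(p + q)) modulo e, since e*d = 1 + k*phi(N).
k_true = (e * d - 1) // phi
assert (k_true * (N + 1 - (p + q)) + 1) % e == 0
```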

Applications in Other Cryptosystems

Coppersmith's methods have found applications in cryptanalyzing systems beyond traditional RSA, particularly in post-quantum and number-theoretic primitives, where finding small roots of structured equations reveals weaknesses in parameter choices. These extensions often leverage multivariate generalizations and asymptotic bounds to target structured algebraic problems, weakening security estimates without achieving full breaks.

In isogeny-based cryptography, Coppersmith's technique has been adapted to improve bounds for finding fixed-degree isogenies between supersingular elliptic curves, a core hardness assumption. A 2025 analysis uses a four-variable equation to derive tighter small-root bounds via Coppersmith's method, reducing the classical complexity from O(\sqrt{d}) to subexponential in certain regimes, with quantum variants involving roughly O(d^{1/2}) terms after optimizations. This approach applies lattice reduction to the resulting multivariate equations and demonstrates a 9-bit security drop for a specific ACNS 2024 isogeny signature scheme.

For pseudorandom number generators (PRNGs), Coppersmith's method enables seed recovery in number-theoretic constructions such as non-linear congruential generators. A 2025 study provides asymptotic attacks on several such generators, including power and Pollard generators, by solving systems of modular equations for small roots, using an automated Coppersmith framework to derive symbolic bounds. This improves on prior lattice-based attacks, allowing prediction of the full output sequence when coefficients are partially known, with numerical validation showing practical recovery for 512-bit moduli.

RSA variants incorporating algebraic structures, such as those based on cubic Pell curves, remain vulnerable to enhanced attacks. In a 2025 cryptanalysis, Coppersmith's method, combined with lattice reduction, factors the modulus for public exponent e = 3 when the private exponent d falls below a bound of approximately N^{0.292}, extending prior work by solving bivariate equations derived from the key relation ed - k\phi(N) = 1. This breaks insecure parameter sets in polynomial time, highlighting risks in exponent-3 configurations.

In lattice-based settings, recent advances compute precise asymptotic bounds for small roots using automated formulations of Coppersmith's method, enhancing the cryptanalysis of problems like the commutative hidden number problem. This work provides provable bounds that outperform earlier heuristics, applicable to weakening related hardness assumptions. Similarly, for multivariate schemes like SNOVA, 2025 wedge product attacks exploit the block-ring structure to recover keys, referencing Coppersmith's block Wiedemann algorithm for efficient sparse linear algebra over finite fields, though not directly for root-finding. These efforts drop the estimated security of several SNOVA parameter sets below 256 bits, from claimed levels of up to 310 bits.

Overall, Coppersmith's lattice-based techniques continue to underpin these multivariate and post-quantum applications, providing foundational tools for weakening parameter choices amid advances in quantum computing, without yet enabling complete breaks in well-designed systems.
