The abc conjecture is a conjecture in number theory and Diophantine geometry, proposed independently by Joseph Oesterlé and David Masser in 1985.[1] It posits that, for any real number \epsilon > 0, there exists a positive constant C_\epsilon such that if a, b, and c are coprime positive integers satisfying a + b = c, then c < C_\epsilon \cdot \mathrm{rad}(abc)^{1 + \epsilon}, where \mathrm{rad}(n) denotes the square-free kernel of n, that is, the product of its distinct prime factors.[2] The inequality links the size of the sum c to the prime factors of the product abc: c cannot be much larger than the radical of abc raised to a power slightly greater than 1, with the constant C_\epsilon absorbing the finitely many exceptional triples that exist for each \epsilon.[1]

The conjecture has profound implications for arithmetic geometry, including effective versions of the Mordell conjecture (now Faltings's theorem) and bounds on arithmetic invariants of elliptic curves over the rationals.[1] It also implies Fermat's Last Theorem for all sufficiently large exponents and provides insights into the distribution of prime factors in additive relations among integers.[2] Variants of the conjecture, such as the weak abc conjecture and refinements by Alan Baker and Andrew Granville, explore related bounds, and analogues have been proven in function field settings.[1]

As of 2025, the abc conjecture remains unproven in the number field case, despite significant efforts.[3] In 2012, Japanese mathematician Shinichi Mochizuki claimed a proof using his novel framework of inter-universal Teichmüller theory, culminating in a manuscript of roughly 500 pages published in 2021 by the Publications of the Research Institute for Mathematical Sciences.[3] The proof has sparked intense controversy: Peter Scholze and Jakob Stix identified a potential flaw in a key corollary in 2018, leading to ongoing debates about its validity, and the argument is accepted by a small group of experts, primarily in Japan, while remaining unaccepted by the broader mathematical community.[3] Recent attempts, such as Kirti Joshi's 2025 preprints addressing the alleged flaw, have not resolved the impasse, and the conjecture retains its status as one of the most significant open problems in number theory.[3]
Formulation
Standard statement
The ABC conjecture, also known as the Oesterlé–Masser conjecture, asserts that for every real number \varepsilon > 0, there exist only finitely many triples of coprime positive integers a, b, and c satisfying a + b = c and c > \rad(abc)^{1 + \varepsilon}, where \rad(n) denotes the radical of the positive integer n, defined as the product of its distinct prime factors.[4] This formulation was proposed independently by Joseph Oesterlé and David Masser around 1985 as a means to generalize observations from arithmetic geometry, particularly in relation to elliptic curves.[5]

In the standard setup, a and b are taken to be coprime positive integers (i.e., \gcd(a, b) = 1), with c = a + b, which ensures that \gcd(a, b, c) = 1: any common divisor of a and b would divide c, and conversely, any common divisor of all three would contradict the coprimality of a and b.[4] The condition emphasizes triples where the sum c is unusually large compared to the prime factors involved in a, b, and c, highlighting a balance between addition and the multiplicative structure captured by the radical.

A key measure associated with such triples is the quality q(a, b, c) = \frac{\log c}{\log \rad(abc)}, which quantifies the extent to which c exceeds \rad(abc).[6] The ABC conjecture implies that q(a, b, c) < 1 + \varepsilon holds for all but finitely many such triples, underscoring that high-quality triples—those where the sum grows significantly faster than the radical—are exceedingly rare.[4]
Radical and quality definitions
The radical of a positive integer n, denoted \operatorname{rad}(n), is defined as the product of the distinct prime numbers dividing n.[7] If the prime factorization of n is n = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k} where each a_i \geq 1, then \operatorname{rad}(n) = p_1 p_2 \cdots p_k.[7] This function, also known as the square-free kernel of n, captures the essential prime factors without regard to their multiplicities.[8]

The radical function is multiplicative in the sense that if \gcd(m, n) = 1, then \operatorname{rad}(mn) = \operatorname{rad}(m) \operatorname{rad}(n).[7] More generally, for any positive integers m and n, \operatorname{rad}(mn) = \operatorname{rad}(m) \operatorname{rad}(n) / \operatorname{rad}(\gcd(m, n)).[9] Additionally, \operatorname{rad}(n) \leq n for all n > 1, with equality if and only if n is square-free.[8] These properties follow directly from the prime factorization, where overlapping primes in m and n are accounted for via the gcd.[7]

For example, \operatorname{rad}(12) = \operatorname{rad}(2^2 \cdot 3) = 2 \cdot 3 = 6, while \operatorname{rad}(16) = \operatorname{rad}(2^4) = 2.[10] Similarly, \operatorname{rad}(72) = \operatorname{rad}(2^3 \cdot 3^2) = 2 \cdot 3 = 6.[7]

In the context of the ABC conjecture, the quality of a triple (a, b, c) of coprime positive integers with a + b = c is defined as q(a, b, c) = \frac{\log c}{\log \operatorname{rad}(abc)}.[10] This measures how far c exceeds the radical of the product abc, and the conjecture implies that for each \varepsilon > 0 only finitely many triples satisfy q(a, b, c) > 1 + \varepsilon.[10]
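The two quantities just defined can be computed directly. The following short Python sketch is illustrative only (the helper names radical and quality are not from the cited sources); it reproduces the sample values quoted above.

```python
# A minimal sketch of the radical rad(n) and the quality q(a, b, c) of an abc triple.
from math import gcd, log

def radical(n: int) -> int:
    """Product of the distinct primes dividing n (rad(1) = 1)."""
    rad, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            rad *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:          # leftover prime factor
        rad *= n
    return rad

def quality(a: int, b: int) -> float:
    """q(a, b, c) = log c / log rad(abc) for coprime a, b with c = a + b."""
    assert gcd(a, b) == 1
    c = a + b
    return log(c) / log(radical(a * b * c))

print(radical(12), radical(16), radical(72))   # 6, 2, 6
print(round(quality(1, 8), 4))                 # the triple (1, 8, 9): q ≈ 1.2263
```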
Equivalent reformulations
One equivalent reformulation of the ABC conjecture arises in the context of elliptic curves, where it links directly to Szpiro's conjecture through bounds on the minimal discriminant. Specifically, for a Weierstrass equation y^2 = x^3 + Ax + B defining an elliptic curve over \mathbb{Q}, with discriminant \Delta = -16(4A^3 + 27B^2), Szpiro's conjecture posits that for any \varepsilon > 0 there exists a constant C(\varepsilon) such that |\Delta| \leq C(\varepsilon) N(E)^{6 + \varepsilon}, where N(E) is the conductor of the curve. The ABC conjecture is equivalent to the generalized Szpiro conjecture, as shown by relating the discriminant of the Frey elliptic curve associated to an ABC triple (a, b, c) with a + b = c—whose discriminant is proportional to (abc)^2—to the conductor, which is essentially the radical of abc. This reformulation expresses the conjecture in terms of arithmetic invariants of the curve, bounding the minimal discriminant in terms of the conductor raised to a power close to 6.[11]

A refinement attributed to Andrew Granville strengthens the original statement by incorporating more precise control over the constant, while maintaining the core inequality. In this version, for any \varepsilon > 0, there exists a constant K(\varepsilon) (potentially explicit in terms of \varepsilon) such that for coprime positive integers a, b with c = a + b, c < K(\varepsilon) \rad(abc)^{1 + \varepsilon}. This form arises from probabilistic models of prime factorizations, emphasizing that the constant K(\varepsilon) can be optimized based on the distribution of squarefree kernels, leading to sharper effective bounds in applications. Related work by Olivier Robert, Cameron L. Stewart, and Gérald Tenenbaum refines this further by providing an asymptotic expression without \varepsilon, of the form

c < k \exp\left( 4 \sqrt{\frac{3 \log k}{\log \log k}} \left(1 + O\left(\frac{\log \log \log k}{\log \log k}\right)\right) \right),

where k = \rad(abc), building on heuristics for the squarefree kernels of integers.[12]

The ABC conjecture also connects to Hall's conjecture on integral points of Mordell curves. Hall's conjecture states that there exists C > 0 such that for integers x, y with y^2 = x^3 + k and k \neq 0, |x| \leq C |k|^2. The ABC conjecture implies a slightly weakened form: for any \varepsilon > 0 there is a constant C_\varepsilon > 0 with |x| \leq C_\varepsilon |k|^{2(1 + \varepsilon)}; moreover, when \gcd(x, y) = 1, it strengthens to |x| \leq C'_\varepsilon \rad(k)^{2(1 + \varepsilon)}. In fact, the "radical version" of Hall's conjecture—replacing |k| with \rad(k)—is equivalent to the ABC conjecture, as the proofs interreduce via Frey curves and integral point bounds on Mordell curves.[13]

An asymptotic form of the conjecture incorporates the prime factors of abc more explicitly through products over primes. Heuristically, the bound involves \prod_{p \mid abc} (1 - 1/p)^{-1}, which measures the "density" of primes dividing abc and adjusts the effective constant in the inequality c \ll \rad(abc)^{1 + \varepsilon}, reflecting the expected size of c under random models of prime distribution. This form underscores the conjecture's ties to sieve methods and the distribution of prime factors.[14]

Finally, Paul Vojta's conjectures provide a higher-dimensional generalization, embedding the ABC conjecture into a broader framework for Diophantine approximation on varieties.
Vojta's general ABC conjecture uses a truncated counting function N^{(1)} on smooth complete varieties over global fields, positing N^{(1)}(D, P) + d_k(P) \geq h_{K_X(D)}(P) - \varepsilon h_A(P) - O(1) for points P outside an exceptional Zariski-closed set, where D is a normal crossings divisor, h_{K_X(D)} is the height relative to the canonical sheaf twisted by D, and h_A is a height attached to an ample divisor A. For \mathbb{P}^1 over \mathbb{Q}, this recovers the classical ABC inequality with exponent 1 + \varepsilon; in higher dimensions, it extends to n-tuples (a_1, \dots, a_n) summing to zero, predicting bounds like \max |a_i| \ll \rad(a_1 \cdots a_n)^{n/(n-1) + \varepsilon}.[15]
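The Hall-type bound discussed above lends itself to direct numerical exploration. The sketch below is illustrative only and not from the cited sources; it searches for solutions of y^2 = x^3 + k in which |k| is unusually small relative to \sqrt{x}, the extremal situation constrained by Hall's conjecture and its radical version. The search bound and the selection criterion |k| < \sqrt{x} are arbitrary choices made for illustration.

```python
# A small numerical exploration of Hall-type extremal examples: near-misses
# y^2 = x^3 + k with k != 0 and |k| < sqrt(x), i.e. |x| close to the size
# permitted by a bound of the form |x| <= C * k^2.
from math import isqrt

def hall_examples(x_max: int):
    """Yield (x, y, k) with y^2 = x^3 + k, k != 0 and k^2 < x."""
    for x in range(2, x_max + 1):
        cube = x ** 3
        y0 = isqrt(cube)
        for y in (y0, y0 + 1):          # the squares just below and above x^3
            k = y * y - cube
            if k != 0 and k * k < x:    # |k| < sqrt(x)  <=>  k^2 < x
                yield x, y, k

for x, y, k in hall_examples(10_000):
    print(f"x = {x:>6}   y = {y:>9}   k = {k:>5}   sqrt(x)/|k| = {x**0.5/abs(k):.2f}")
```

The ratio printed in the last column measures how far each example pushes against the conjectured bound; larger ratios correspond to more extreme examples.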
Historical development
Origins and early motivations
The origins of the ABC conjecture lie in the broader field of Diophantine approximation, where efforts to bound solutions to equations involving algebraic numbers have long been central. Early motivations stemmed from results like Roth's theorem (1955), which sharply bounds how well algebraic numbers can be approximated by rationals, and the subspace theorem (Schmidt, 1972), which generalizes such ideas to higher dimensions using tools from Diophantine geometry. These theorems offer insights into the distribution of prime factors in integers related by additive relations, but they yield ineffective constants, limiting their utility for explicit computations in problems like S-unit equations, where solutions to equations of the form x + y = z with x, y, z in a finitely generated multiplicative group are sought.[16][14]

A key precursor was Baker's method (1968), which uses transcendental number theory to derive lower bounds on linear forms in logarithms, providing effective, though typically very large, upper bounds for the heights of solutions to S-unit equations. This approach highlighted the need for sharper explicit estimates to tackle Diophantine problems more concretely. Independently, David Masser advanced work on S-unit equations around the same period, seeking refined bounds that could unify disparate results in arithmetic geometry. These efforts underscored a desire to quantify how "balanced" the prime factors of integers in additive relations must be, bridging approximation theory with the study of elliptic curves.[16][14]

The conjecture's formulation was particularly inspired by challenges in elliptic curve theory, including the quest for an effective version of the Shafarevich theorem, which asserts the finiteness of isomorphism classes of elliptic curves over a number field with bounded conductor. Joseph Oesterlé, motivated by bounds on elliptic curve conductors and the related Szpiro conjecture (1981) linking discriminants to conductors, proposed the ABC conjecture during a 1985 seminar at the Centre de Mathématiques de l'École Polytechnique. Szpiro's conjecture posits that for an elliptic curve over \mathbb{Q} with conductor N and minimal discriminant \Delta, |\Delta| \ll N^{6+\epsilon} for any \epsilon > 0, and understanding this required controlling prime factors in relations akin to a + b = c. Masser's concurrent work on S-unit equations aligned with this, leading to the independent proposal of the conjecture in 1985, named for the triples (a, b, c) of coprime positive integers satisfying a + b = c.[16][14]
Key contributors and timeline
The ABC conjecture originated from discussions in number theory concerning the distribution of prime factors in sums of integers, building on longstanding awareness of related Diophantine problems predating 1985. In particular, Catalan's conjecture of 1844—that 8 = 2^3 and 9 = 3^2 are the only consecutive perfect powers—and the later Fermat–Catalan conjecture on equations a^p + b^q = c^r with \frac{1}{p} + \frac{1}{q} + \frac{1}{r} < 1 highlighted the scarcity of additive relations among perfect powers; the ABC conjecture subsequently emerged as a broader framework implying that such equations admit only finitely many solutions.[17]

The conjecture was independently proposed in 1985 by the French mathematician Joseph Oesterlé and the British mathematician David Masser, who elaborated on its implications in subsequent publications in the late 1980s.[2] In 1987, Joseph H. Silverman, an American mathematician specializing in arithmetic geometry, began exploring connections between the ABC conjecture and elliptic curves, demonstrating how it could be used to bound the number of integral points on elliptic curves over the rationals.[18] Around 1988, Noam D. Elkies, a Harvard mathematician known for his computational prowess, contributed early examples and extensive computations of ABC triples with high quality measures, helping to test the conjecture's boundaries and identify potential extremal cases.[2]

Throughout the 1990s, computational efforts intensified, with mathematicians including Elkies conducting systematic searches for ABC triples exhibiting quality values q(a, b, c) > 1.4, revealing patterns that supported the conjecture's predicted rarity of high-quality instances without disproving it.[19] In 1996, Dorian Goldfeld, a Columbia University number theorist, popularized the conjecture in a widely read article, describing it as "the most important unsolved problem in Diophantine analysis" due to its potential to resolve numerous open questions in arithmetic geometry.[20] This period solidified the conjecture's centrality in the field, with ongoing refinements to its statement and equivalent formulations.
Examples and illustrations
Triples with small radicals
Triples with small radicals exemplify cases where the product of the distinct prime factors of abc, denoted rad(abc), is notably small compared to the size of c, resulting in high quality values that test the sharpness of the ABC conjecture. These instances are considered "strong" because they achieve q > 1 with rad(abc) much smaller than c, meaning a, b, and c are composed of few prime factors raised to high powers, pushing close to the conjectured bound c < rad(abc)^{1+\epsilon} for small \epsilon > 0.[6] Such triples highlight how the conjecture would be "barely true" if correct, as higher q values become increasingly rare.[6]

Concrete examples include the primitive triple (1, 8, 9), where 1 + 2^3 = 3^2, rad(abc) = 6, and q \approx 1.226; this is one of the smallest such triples with q > 1. Another early example is (1, 80, 81), satisfying 1 + 2^4 \cdot 5 = 3^4, with rad(abc) = 30 and q \approx 1.292, again relying on powers of small primes.[6] The triple (3, 125, 128) = (3, 5^3, 2^7) has rad(abc) = 30 and q \approx 1.427, illustrating balanced powers across a, b, and c.

The following table presents selected highest-quality ABC triples discovered computationally with c < 10^{12}, drawn from extensive searches; these feature particularly small rad(abc) relative to c and are unbeaten in terms of quality within their size range.[21]
a    b          c          rad(abc)   q
1    8          9          6          1.226
1    80         81         30         1.292
3    125        128        30         1.427
1    2400       2401       210        1.456
1    4374       4375       210        1.568
2    6436341    6436343    15042      1.630
These triples often exhibit patterns involving high powers of small primes, such as one term being a pure prime power (e.g., 23^5 or 7^4 in the table), or infinite families such as 1 + (2^{p(p-1)} - 1) = 2^{p(p-1)} for odd primes p, where p^2 divides 2^{p(p-1)} - 1 by Euler's theorem, so the radical of the middle term is markedly smaller than the term itself.[6] Some high-quality cases also arise from Fibonacci-like recurrences, where consecutive terms sum to the next but with controlled prime factors leading to reduced radicals.[6]
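The table entries can be rechecked directly. The following self-contained sketch (using sympy for factoring; it is not part of the cited computational searches) recomputes the radical and quality of each listed triple.

```python
# Recompute rad(abc) and q for the triples tabulated above.
from math import log, prod
from sympy import primefactors

def rad(n: int) -> int:
    return prod(primefactors(n))   # product of the distinct prime divisors

triples = [(1, 8, 9), (1, 80, 81), (3, 125, 128),
           (1, 2400, 2401), (1, 4374, 4375), (2, 6436341, 6436343)]

for a, b, c in triples:
    assert a + b == c
    r = rad(a * b * c)
    q = log(c) / log(r)
    print(f"({a}, {b}, {c}):  rad = {r:>6},  q = {q:.3f}")
```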
Notable counterexamples and extremal cases
While the ABC conjecture posits a bound on the size of c relative to \mathrm{rad}(abc)^{1+\epsilon} for any \epsilon > 0, certain triples exhibit low quality q(a,b,c) = \frac{\log c}{\log \mathrm{rad}(abc)}, approaching 1 from below, illustrating cases where the radical is comparable in size to c. A representative family is (1, 2^n, 2^n + 1) for large n, where a + b = c holds with coprime terms; here \mathrm{rad}(abc) = 2 \cdot \mathrm{rad}(2^n + 1), and when 2^n + 1 is squarefree this gives \log \mathrm{rad}(abc) \approx (n+1) \log 2 while \log c \approx n \log 2, yielding q \approx n/(n+1) < 1. For n near 100 this is roughly 0.99, demonstrating how high powers can produce near-equality without violating the conjectured exponent greater than 1.[6]

In contrast, extremal cases push the quality well above 1, serving as near-misses that test the bound closely. The record example, found by Éric Reyssat, is the triple (2, 3^{10} \cdot 109, 23^5), where 2 + 6436341 = 6436343 and \mathrm{rad}(abc) = 2 \cdot 3 \cdot 23 \cdot 109 = 15042, giving q \approx 1.6299. This remains the highest known quality among computed triples, highlighting how sparse prime factors can amplify q significantly. Such cases, finite in number for each \epsilon > 0 under the conjecture, underscore the rarity of high-quality configurations.[22]

Solutions to equations in the Fermat–Catalan conjecture, which posits only finitely many coprime positive integer solutions of x^p + y^q = z^r with p, q, r \geq 2 and 1/p + 1/q + 1/r < 1, yield ABC triples with potentially high quality due to the concentration of prime factors in powers. For instance, the known solution 2^5 + 7^2 = 3^4 forms the triple (32, 49, 81) with q \approx 1.18, and larger-exponent cases (if any exist beyond the verified finite list) would strain the ABC bound more severely; indeed, the ABC conjecture implies the Fermat–Catalan statement by controlling the radical growth. These triples represent extremal behaviors where additive relations among high powers strain the conjectured inequality.[22]

Triples with q > 1.5 are exceedingly rare; computational enumerations identify only about 241 "good" triples with q > 1.4 up to large c, and none exceed Reyssat's record. No true counterexamples to the ABC conjecture have been found despite extensive searches via projects like ABC@Home, which searched for all triples of quality greater than 1 with c up to 10^{18}, supporting the conjecture empirically without resolving it.[21]
Implications and consequences
Arithmetic geometry applications
The ABC conjecture has profound implications in arithmetic geometry, particularly through its close relationship to Szpiro's conjecture on elliptic curves. Szpiro's conjecture states that for every \epsilon > 0, there exists a constant C(\epsilon) such that for any elliptic curve E defined over \mathbb{Q} with conductor N_E and minimal discriminant \Delta_E, |\Delta_E| \leq C(\epsilon) N_E^{6 + \epsilon}. The ABC conjecture is equivalent to a modified form of this statement: ABC yields the bound on the discriminant in terms of the conductor, and conversely, applying the Szpiro-type bound to Frey curves attached to ABC triples recovers the ABC inequality.[1]

A key application is an effective strengthening of Roth's theorem on Diophantine approximation of algebraic numbers. Roth's theorem asserts that for any algebraic irrational \alpha of degree d \geq 2 and \epsilon > 0, there are only finitely many rationals p/q such that |\alpha - p/q| < 1/q^{2 + \epsilon}. The ABC conjecture implies an effective version, providing explicit bounds on the heights of such approximations depending on \epsilon and d, obtained by analyzing the radical and quality of suitable ABC triples arising from the approximations. This effective form enhances applications in transcendental number theory and uniform estimates for algebraic points.[23]

The conjecture also implies Fermat's Last Theorem for all sufficiently large exponents n. Suppose there exist positive integers a, b, c with a^n + b^n = c^n and \gcd(a, b, c) = 1. Applying the ABC inequality to the coprime triple (a^n, b^n, c^n) gives c^n < C_\epsilon \, \mathrm{rad}(a^n b^n c^n)^{1+\epsilon} = C_\epsilon \, \mathrm{rad}(abc)^{1+\epsilon} \leq C_\epsilon (abc)^{1+\epsilon} < C_\epsilon \, c^{3(1+\epsilon)}, which is impossible once n exceeds an explicit bound depending on \epsilon and C_\epsilon. Equivalently, the Szpiro-type bound applied to the Frey curve E: y^2 = x(x - a^n)(x + b^n), whose conductor divides 2 \cdot \mathrm{rad}(abc) while its discriminant is 16(abc)^{2n}, yields the same contradiction. This approach, though superseded by Wiles's modularity proof, highlights the conjecture's role in linking Diophantine equations to elliptic curve geometry.[1]

Finally, the ABC conjecture implies uniform boundedness of torsion points on elliptic curves. Over \mathbb{Q}, Mazur's theorem already establishes this unconditionally, with torsion subgroups of order at most 16; the conjecture strengthens this by providing uniform bounds over all number fields of fixed degree, via height and conductor estimates that limit torsion orders independently of the field. This has ramifications for the arithmetic of elliptic curves in higher-degree extensions.[24]
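The Frey-curve computation underlying these arguments can be verified symbolically. The sketch below is a minimal illustration (not Frey's or Mochizuki's derivation); it uses sympy to confirm that the curve y^2 = x(x - a)(x + b) attached to a triple a + b = c has discriminant 16(abc)^2, so its bad primes are exactly those dividing 2abc.

```python
# Symbolic check: the Frey curve y^2 = x*(x - a)*(x + b) with c = a + b
# has discriminant 16 * disc_x(x*(x - a)*(x + b)) = 16 * (a*b*c)^2.
import sympy as sp

a, b, x = sp.symbols('a b x', positive=True)
c = a + b

cubic = sp.expand(x * (x - a) * (x + b))      # right-hand side of the Frey curve
delta = 16 * sp.discriminant(cubic, x)        # curve discriminant = 16 * disc(cubic)

print(sp.simplify(delta - 16 * (a * b * c) ** 2))   # prints 0

# Numerical sanity check on the triple (1, 8, 9):
print(delta.subs({a: 1, b: 8}))               # 16 * (1*8*9)^2 = 82944
```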
Links to other Diophantine problems
The ABC conjecture has profound implications for various classical Diophantine problems, particularly those involving equations with perfect powers or units in number fields. By bounding the size of c relative to the radical of the product abc in coprime triples a + b = c, it provides finiteness results and effective estimates that resolve or sharpen long-standing conjectures on the distribution and differences of powers.

A key application is to the Fermat-Catalan conjecture, which posits that there are only finitely many triples of coprime positive integers a, b, c and exponents p, q, r \geq 2 satisfying a^p + b^q = c^r where \frac{1}{p} + \frac{1}{q} + \frac{1}{r} < 1. The ABC conjecture implies this finiteness directly, as the quality measure q(a,b,c) = \frac{\log c}{\log \mathrm{rad}(abc)} bounds the exponents and sizes of solutions, ensuring only finitely many such primitive power sums exist.[25]

Similarly, the ABC conjecture implies Pillai's conjecture, which states that for each fixed positive integer k, there are only finitely many solutions in positive integers a, b and exponents m, n \geq 2 to the equation a^m - b^n = k. This follows from applying the conjecture to triples derived from the powers, where the radical controls the growth and limits the number of near-perfect powers differing by k. The quality measure further ensures that solutions cannot accumulate indefinitely as powers grow.[14]

The conjecture also yields effective bounds on solutions to superelliptic equations, such as y^2 = x^n + k for fixed integers k \neq 0 and n > 2. Under ABC, the size of x is bounded in terms of the radical of k, implying only finitely many integer solutions (x, y) and providing explicit height estimates that improve upon unconditional results like those from Siegel's theorem. For instance, it sharpens the effective constants in the solution sizes for generalized Mordell equations.[14]

In the context of S-unit equations, such as x + y = 1 where x and y are S-units in a number field and S is a finite set of places, the ABC conjecture implies improved effective bounds on the solutions. It refines the constants in height estimates for the solutions, making them depend polynomially on the regulator and discriminant of the field, thus enhancing finiteness theorems like those of Evertse and van der Poorten by providing sharper, explicit dependencies on S.[14]

Finally, the uniform version of the ABC conjecture over number fields connects to class number problems. Granville and Stark showed that it rules out Siegel zeros for the L-functions of odd real Dirichlet characters, and hence yields effective lower bounds for the class number h(-d) of the imaginary quadratic field \mathbb{Q}(\sqrt{-d}), aligning with Stark's heuristics on the distribution of class numbers by preventing exceptionally small values.[26]
Evidence supporting the conjecture
Computational verifications
Early computational efforts to verify the abc conjecture involved exhaustive searches for abc triples with high quality, defined as q = \frac{\log c}{\log \rad(abc)}. In a pioneering tabulation, Noam Elkies and Joseph Kanapka enumerated abc triples with c < 2^{32} \approx 4.3 \times 10^9 and quality exceeding 1.2, finding such triples to be extremely sparse and none of quality high enough to challenge proposed effective forms of the inequality c < C_\epsilon \rad(abc)^{1+\epsilon}.

Subsequent searches expanded the scope dramatically, leveraging improved algorithms and computing power to probe much larger values of c. By the early 2000s, comprehensive scans up to c = 10^{18} had been conducted through distributed computing initiatives, confirming that typical qualities remain well below 1.4 and that the highest observed values stay below 1.64. These efforts, including the ABC@Home project, cataloged millions of triples and identified numerous high-quality examples, such as triples with q \approx 1.37, yet uncovered nothing that would falsify the conjecture for modest \epsilon.[27]

Theoretical work complemented these empirical investigations by providing effective results amenable to computational verification, particularly for triples with small radicals. C. L. Stewart and Kunrui Yu established explicit, effectively computable upper bounds for c in terms of \rad(abc) (exponential in a power of the radical), which make it possible to check all triples with a given small radical directly, confirming the conjecture's predictions in those ranges.

In the 2020s, ongoing distributed computing projects and algorithmic advancements continued to explore high-quality triples, discovering new examples with q > 1.3 while finding nothing approaching a violation of the conjectured inequality. These modern searches, often employing optimized sieving and factorization techniques, have extended coverage to even larger c, reinforcing the empirical support while highlighting the practical limits of computation for extremely large c, where factoring becomes infeasible due to the immense resources required.[28]
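A toy version of such an exhaustive search can be written in a few lines. The sketch below is illustrative only (the real projects described above used far more sophisticated sieving and reached c on the order of 10^{18}); it enumerates every coprime triple with c up to a small bound and reports those whose quality exceeds a threshold, using a radical sieve and the fact that rad(abc) = rad(a)·rad(b)·rad(c) when a, b, and c = a + b are coprime.

```python
# Toy exhaustive search for high-quality abc triples with c <= C_MAX.
# Takes a few seconds in pure Python at C_MAX = 5000.
from math import gcd, log

def radical_sieve(limit):
    """rad[n] for 0 <= n <= limit, computed with a sieve over primes."""
    rad = [1] * (limit + 1)
    for p in range(2, limit + 1):
        if rad[p] == 1:                      # p is prime: no smaller prime hit it
            for m in range(p, limit + 1, p):
                rad[m] *= p
    return rad

C_MAX, THRESHOLD = 5_000, 1.4
rad = radical_sieve(C_MAX)

hits = []
for c in range(3, C_MAX + 1):
    for a in range(1, c // 2 + 1):
        b = c - a
        if gcd(a, b) == 1:                   # a, b coprime => a, b, c pairwise coprime
            q = log(c) / log(rad[a] * rad[b] * rad[c])
            if q > THRESHOLD:
                hits.append((q, a, b, c))

for q, a, b, c in sorted(hits, reverse=True):
    print(f"q = {q:.3f}   {a} + {b} = {c}")
```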
Partial theoretical proofs
One of the earliest partial theoretical results supporting the ABC conjecture is due to C. L. Stewart and R. Tijdeman, who in 1986 established an explicit upper bound for c in terms of the radical rad(abc) for coprime positive integers a, b, c with a + b = c. Their theorem states that there exists an effectively computable constant \kappa such that c < \exp(\kappa \, \mathrm{rad}(abc)^{15}), an exponential bound that is far weaker than the conjectured polynomial bound rad(abc)^{1 + \varepsilon} but shows that c is controlled by the radical at all. The result relies on lower bounds for linear forms in logarithms.

Building on this, C. L. Stewart and K. Yu obtained stronger bounds in their 1991 paper, proving that for any \varepsilon > 0 there exists a constant C(\varepsilon) such that c < \exp(C(\varepsilon) \, \mathrm{rad}(abc)^{2/3 + \varepsilon}). In a 2001 follow-up, they refined this further to c < \exp(\kappa \, \mathrm{rad}(abc)^{1/3} (\log \mathrm{rad}(abc))^{3}) for some absolute constant \kappa, which remains essentially the best known unconditional upper bound; it points in the direction of the ABC conjecture while still being exponentially far from the optimal exponent.[29]

Joseph H. Silverman contributed partial results in the context of elliptic curves in his 1987 paper, establishing bounds on the number of integral points on elliptic curves over the rationals in terms of the rank, providing quantitative versions of Siegel's theorem that align with implications of the ABC conjecture for arithmetic geometry. These bounds show that elliptic curves of bounded rank have few integral points, supporting the conjecture's role in limiting solutions to Diophantine equations related to elliptic curves.

Andrew Granville advanced theoretical support in his work on the distribution of ABC triples, arguing in 2002 (with updates in subsequent surveys) that, under standard heuristics about prime factorizations, almost all coprime triples (a, b, c) with a + b = c satisfy the inequality with exponent close to 1, providing probabilistic backing for the conjecture.[16]

A 2025 expository note by Jared Duker Lichtman highlights the classical de Bruijn estimate from 1962, which implies that almost all coprime positive integer triples (a, b, c) with a + b = c satisfy the strict inequality c < rad(abc), in the sense that the number of exceptions up to height N is O(N^{2/3 + o(1)}). This asymptotic result shows the conjectured behavior holds for nearly all triples, with the exceptional set having sublinear count.[30]

Finally, ineffective results obtained from Schmidt's subspace theorem give ABC-type finiteness statements in restricted settings. For any fixed finite set of primes S, the theorem implies that the equation a + b = c has only finitely many coprime solutions composed entirely of primes from S, with bounds on the number (though not the size) of solutions. This approach, developed by W. M. Schmidt and extended by J.-H. Evertse, links Diophantine approximation to ABC-type inequalities but does not yield explicit height bounds; no version of the ABC inequality with any fixed exponent is currently known, even ineffectively.
Variants and generalizations
Refined ABC statements
Refinements of the ABC conjecture aim to provide more precise bounds, explicit constants, or asymptotic forms that sharpen the original statement while maintaining its core idea. These versions often incorporate additional parameters, such as the number of distinct prime factors or logarithmic terms, to better capture the distribution of abc triples, drawing on heuristics from the prime number theorem and applications to arithmetic geometry.

One prominent explicit version is due to Alan Baker, who conjectured that for pairwise coprime positive integers a, b, c with a + b = c, N = rad(abc), and \omega = \omega(N) the number of distinct prime factors of N,

c < \frac{6}{5} N \frac{(\log N)^\omega}{\omega !}.

This bound implies that for any \delta > 0, there are only finitely many abc triples with quality q(a,b,c) = \log c / \log \rad(abc) > 1 + \delta, with the finiteness made explicit through the factorial and logarithmic terms that limit the size of c relative to N.[31]

A significant asymptotic refinement, motivated by the prime number theorem and the statistical distribution of squarefree kernels, was proposed by Olivier Robert, Cameron L. Stewart, and Gérald Tenenbaum. Their conjecture posits that for coprime positive integers a, b with c = a + b and k = rad(abc), there exists a constant C_1 such that

c < k \exp\left(4 \sqrt{\frac{3 \log k}{\log \log k}} \left(1 + \frac{\log \log \log k}{2 \log \log k} + \frac{C_1}{\log \log k}\right)\right).

This form captures the expected behavior for "most" triples: the effective exponent approaches 1 from above as k grows, reflecting the heuristic that high-quality triples are rare due to the sparsity of primes. A matching lower bound holds for infinitely many triples with a similar constant C_2, providing a two-sided estimate.[12]

In the context of arithmetic geometry, height-adjusted versions adapt the ABC conjecture to elliptic curves, linking the conductor to the minimal discriminant. Specifically, Szpiro's conjecture, which follows from the ABC conjecture, states that for any \varepsilon > 0 there exists a constant K_\varepsilon such that for any elliptic curve E over \mathbb{Q} with conductor N(E) and minimal discriminant \Delta(E),

|\Delta(E)| < K_\varepsilon N(E)^{6 + \varepsilon}.

This refinement replaces the radical of abc with the conductor of the curve, offering a parameterized bound tailored to the geometry of elliptic curves and their minimal models.[1]

Andrew Granville provided further refinements incorporating the number of distinct primes, conjecturing bounds on c in terms of rad(abc) and \omega(abc) that improve upon the standard \varepsilon-form for triples with bounded \omega. These versions emphasize the role of \omega in controlling the growth, leading to stronger implications for Diophantine equations under ABC assumptions.

From 2021 onward, Kirti Joshi has explored refinements of the ABC conjecture within the framework of inter-universal Teichmüller theory (IUTT), proposing structural insights into the anabelian geometry underlying abc triples that align with Mochizuki's approach. In subsequent preprints up to 2025, Joshi claimed to construct arithmetic Teichmüller spaces intended to address criticisms of IUTT, offering potential pathways to explicit bounds via p-adic and arithmetic Teichmüller spaces, though these developments remain under debate in the mathematical community.[32][33]
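Because Baker's refinement is fully explicit, it can be tested numerically on known high-quality triples. The sketch below is illustrative (the triples are those tabulated earlier in the article); it evaluates the bound (6/5) N (\log N)^\omega / \omega! for each triple and checks that it exceeds c.

```python
# Check Baker's explicit inequality  c < (6/5) * N * (log N)^omega / omega!
# on the high-quality triples listed earlier, where N = rad(abc) and
# omega = omega(N) is the number of distinct prime factors.
from math import log, factorial, prod
from sympy import primefactors

triples = [(1, 8, 9), (1, 80, 81), (3, 125, 128),
           (1, 2400, 2401), (1, 4374, 4375), (2, 6436341, 6436343)]

for a, b, c in triples:
    primes = primefactors(a * b * c)
    N, omega = prod(primes), len(primes)
    bound = 1.2 * N * log(N) ** omega / factorial(omega)
    print(f"c = {c:>8}   Baker bound = {bound:>12.1f}   holds: {c < bound}")
```

The last triple in the list (Reyssat's record) comes closest to the bound, illustrating how the explicit refinement leaves little room to spare for the most extreme known examples.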
Related conjectures and extensions
Vojta's conjectures provide a broad generalization of the ABC conjecture to Diophantine approximation on varieties over global fields, extending the framework to higher-dimensional settings and curves of arbitrary genus. In particular, Vojta's more general ABC conjecture, formulated in 1998, replaces the radical in the original ABC statement with a truncated counting function N^{(1)}(D, P) for a smooth projective variety X over a global field k of characteristic zero, a normal crossings divisor D \subset X, and an algebraic point P \in X(\overline{k}) of bounded degree. The conjecture asserts that there exists a proper Zariski-closed subset Z \subset X such that for P \notin Z, N^{(1)}(D, P) + d_k(P) \geq h_{K_X(D)}(P) - \epsilon h_A(P) - O(1), where h_{K_X(D)} is the height relative to the canonical sheaf twisted by D, h_A is a height from a big line sheaf A, and \epsilon > 0 is arbitrary. This reduces to the original ABC conjecture when X = \mathbb{P}^1, k = \mathbb{Q}, D consists of three points, and the truncation level is adjusted appropriately. For curves of genus g \geq 1, Vojta's framework implies height inequalities that bound the canonical height of points in terms of their proximity to a divisor, generalizing Roth's theorem and providing potential resolutions to problems like the uniform boundedness of torsion points on elliptic curves over number fields.[34]

The Bogomolov-Miyaoka-Yau inequality, originally established for complex surfaces, has an arithmetic analogue proposed by Parshin in 1988 for arithmetic surfaces over the spectrum of the ring of integers in a number field. This analogue posits an upper bound on the arithmetic self-intersection of the relative dualizing sheaf \overline{\omega}_{X/S} for a semi-stable arithmetic surface X \to S = \mathrm{Spec}(\mathcal{O}_F), where F is a number field and the generic fiber has genus at least 2: specifically, \hat{c}_1(\overline{\omega}_{X/S})^2 \leq 6\chi(\mathcal{O}_X), mirroring the geometric inequality c_1^2(\Omega_X^1) \leq 3c_2(\Omega_X^1) but incorporating archimedean and non-archimedean components. Parshin's conjecture implies the ABC conjecture by providing effective bounds on heights of points on such surfaces, linking geometric invariants to Diophantine properties; for instance, it would yield that the height of a point P on an elliptic curve over \mathbb{Q} satisfies h(P) \ll \mathrm{rad}(N)^{1+\epsilon} for some conductor-related radical N. While the geometric version holds for minimal surfaces of general type, the arithmetic case remains open, though partial progress has been made using hermitian metrics and successive minima bounds.[35]

Hyperbolic analogues of the ABC conjecture arise in function fields, where the strong form has been proven in several cases using tools from value distribution theory and Nevanlinna's Second Main Theorem. For polynomials f, g, h \in \mathbb{C}[t] that are pairwise coprime and satisfy f + g = h, the Mason–Stothers theorem (proved by Stothers in 1981 and independently by Mason in 1984) establishes that \max\{\deg f, \deg g, \deg h\} \leq N_0(fgh) - 1, where N_0 counts distinct roots, providing a sharp analogue without the \epsilon in the exponent. This extends to more general settings over function fields of curves, such as the strong ABC for points on arithmetic surfaces X \to \mathrm{Spec}(\mathcal{O}_K), bounding the relative canonical height h_{K_{X/\mathcal{O}_K}(D)}(P) \leq (1 + \epsilon) N^D(P) + O(\log |\Delta_L| + [L:K]) for an ample divisor D and extension L/K.
Independent proofs were given by McQuillan (2006), using integration on algebraic stacks and logarithmic differentials, and by Yamanoi (2007), employing Ahlfors' theory and the moduli space of stable curves \mathcal{M}_{0,n}. These results hold in characteristic zero and connect to hyperbolic geometry via the non-isotrivial case of Nevanlinna theory, where entire maps into hyperbolic varieties satisfy truncated counting function inequalities. In positive characteristic, however, the analogue fails due to Frobenius effects.[36]

The Davenport–Heilbronn theorem (1969) on the density of discriminants of cubic fields provides asymptotic counts that intersect with ABC implications for Diophantine problems involving cubic forms. The theorem states that the number of isomorphism classes of cubic fields with absolute discriminant at most X is asymptotically X/(12\zeta(3)) for totally real fields and X/(4\zeta(3)) for complex cubic fields, derived via the correspondence between cubic rings and binary cubic forms under \mathrm{SL}_2(\mathbb{Z}). Under the ABC conjecture, this yields bounds on the 3-primary part of class groups in quadratic fields; specifically, ABC implies that the exponent of the class group of \mathbb{Q}(\sqrt{d}) is O((\log |d|)^{2+\epsilon}), using the Davenport–Heilbronn density to control the distribution of cubic resolvents and torsion in the 3-Selmer group. This connection highlights ABC's role in refining estimates for cubic forms ax^3 + by^3 + cz^3 + 3dxyz = 0, where solutions correspond to cubic fields, and ABC bounds the size of integer solutions relative to the form's coefficients.[37]

Speculative connections have been drawn between the ABC conjecture and the Langlands program through the lens of motives, positing that ABC-type height inequalities might inform the arithmetic of motives attached to Galois representations or automorphic forms on higher-dimensional varieties. For instance, Vojta's generalizations suggest that ABC could underpin bounds on conductor norms in the Langlands correspondence for elliptic curves, though no rigorous implications have been established, and the link remains exploratory within the broader framework of Diophantine motives.
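Among the statements in this subsection, the function-field case (the Mason–Stothers theorem) is one of the few abc-type inequalities that is actually a theorem, and it can be checked mechanically for specific polynomial identities. The following sympy sketch (the helper names deg, n0, and check are illustrative, not from the cited sources) verifies the inequality \max(\deg f, \deg g, \deg h) \leq N_0(fgh) - 1 on two examples where equality holds.

```python
# Check the Mason-Stothers inequality for coprime polynomials f + g = h:
#     max(deg f, deg g, deg h) <= n0(f*g*h) - 1,
# where n0 counts the distinct roots of f*g*h over C.
import sympy as sp

t = sp.symbols('t')

def deg(p):
    return sp.Poly(p, t).degree()

def n0(p):
    """Distinct-root count of p: deg(p) minus deg(gcd(p, p'))."""
    return deg(p) - deg(sp.gcd(p, sp.diff(p, t)))

def check(f, g):
    h = sp.expand(f + g)
    assert sp.gcd(f, g) == 1           # f and g must be coprime
    lhs = max(deg(f), deg(g), deg(h))
    rhs = n0(sp.expand(f * g * h)) - 1
    print(f"f = {f}, g = {g}, h = {h}:  max deg = {lhs}, n0 - 1 = {rhs}, holds: {lhs <= rhs}")

check(t**2, 2*t + 1)     # h = (t + 1)^2: equality, 2 <= 2
check(t**8, 1 - t**8)    # h = 1:         equality, 8 <= 8
```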
Proof attempts and current status
Mochizuki's inter-universal Teichmüller theory
Shinichi Mochizuki, a mathematician at Kyoto University, announced in 2012 a claimed proof of the abc conjecture based on his inter-universal Teichmüller theory (IUTT), a framework he developed to extend classical Teichmüller theory into arithmetic geometry.[38] This theory deforms the structure of Teichmüller spaces—traditionally used to study deformations of Riemann surfaces—into the context of anabelian geometry, where geometric objects are reconstructed from their étale fundamental groups.[39] IUTT focuses on number fields equipped with elliptic curves and hyperbolic orbicurves, employing tools like semi-graphs of anabelioids, Frobenioids, log-shells, and étale theta functions to model arithmetic structures and link global and local data.[39]

The core concept of IUTT revolves around "inter-universal" maps, which establish connections between distinct Frobenius neighborhoods—local structures surrounding Frobenius endomorphisms in tempered Frobenioids—across different "universes" of arithmetic and geometric data.[39] These maps, facilitated by Θ-links, D-NF-bridges, and D-Θ-bridges, allow comparisons of ring structures and cohomology classes without conventional compatibility with scheme theory, disentangling additive and multiplicative aspects of number fields.[39] The theory relies on prerequisites from Mochizuki's prior work, including anabelian reconstruction, which recovers full geometric data from étale fundamental groups via Belyi maps and prime-strips, and mono-anabelian categories that ensure uniqueness in such reconstructions.[39]

Mochizuki presented IUTT in four papers released in 2012 as preprints, totaling approximately 500 pages, with formal publication in the Publications of the Research Institute for Mathematical Sciences in 2021.[38] The first paper constructs Hodge theaters, integrating number fields, elliptic curves, and theta functions. The second applies Hodge-Arakelov-theoretic evaluations to these structures.[40] The third introduces canonical splittings of the log-theta-lattice, a non-commutative diagram linking miniature models of elliptic curves.[41] The fourth performs log-volume computations using p-adic Hodge theory, yielding Diophantine inequalities that are claimed to imply the abc conjecture for any \varepsilon > 0, with the constant explicitly depending on \varepsilon.
Criticisms, debates, and recent developments
In 2018, mathematicians Peter Scholze and Jakob Stix published a detailed analysis identifying a fundamental gap in the proof of Corollary 3.12 within Shinichi Mochizuki's Inter-universal Teichmüller theory (IUTT), arguing that this undermined the claimed proof of the abc conjecture.[42] Their report followed extensive discussions with Mochizuki in Kyoto, where they concluded that the reasoning involving the "multiradial representation" failed to establish the necessary identifications between different copies of rings, rendering the corollary unjustified.[43]

Mochizuki issued immediate responses starting in May 2018, asserting that the concerns stemmed from misunderstandings of the IUTT framework's non-standard notational conventions and emphasizing that no substantive error existed in Corollary 3.12.[44] Over the following years, through 2020, he released additional clarifications and revised expositions to address perceived ambiguities, including detailed commentaries on the Scholze-Stix critique and efforts to bridge interpretive gaps for external readers.

A dedicated workshop titled "Expanding Horizons of Inter-universal Teichmüller Theory," held online in Kyoto from September 7 to 10, 2021, and organized by Mochizuki and collaborators, sought to facilitate deeper engagement with IUTT but attracted only limited participation from the international community, with most attendees being prior advocates of the theory.[45] This event highlighted ongoing divisions, as prominent skeptics like Scholze declined involvement, underscoring the proof's lack of broad consensus.

Recent developments in 2025 have further intensified the debate. In May, Jared Duker Lichtman posted a preprint on arXiv showing that the abc conjecture holds for "almost all" triples, in the sense that the exceptional triples up to height N form a set of negligible proportion, providing strong support but not a full resolution.[30] In November, Kirti Joshi released a comprehensive FAQ defending Mochizuki's approach, particularly emphasizing the role of anabelomorphy—a concept from his own extensions of IUTT—as resolving the Scholze-Stix gap through novel arithmetic reconstructions.[32] However, James Douglas Boyd's contemporaneous report critiqued persistent foundational issues in IUTT's handling of universes and loops, prompting a sharp rebuttal from Mochizuki accusing misrepresentation.[46]

As of November 2025, the abc conjecture remains unproven in the view of the mainstream mathematical community, with Mochizuki's claimed proof accepted primarily within a small circle of collaborators and viewed skeptically elsewhere due to unresolved interpretive barriers.[3] In a development reported on November 10, 2025, Mochizuki has accepted that translating his proof into a computer-readable formal language could help resolve the controversy.[47] A separate claim in August 2025 purporting an AI-verified proof via a novel "Ghost Drift Theory" framework was quickly dismissed by experts as lacking rigor and peer review.