Waring's problem is a central question in additive number theory, posed by the English mathematician Edward Waring in 1770. It asks whether, for every integer k \geq 2, there exists a positive integer s such that every natural number can be expressed as the sum of at most s k-th powers of non-negative integers.[1] The question generalizes Lagrange's four-square theorem of 1770, which establishes that s = 4 suffices for k = 2: every natural number is the sum of at most four squares.[1] David Hilbert provided the first general proof in 1909 that such an s exists for every k, resolving the existence question affirmatively and spurring extensive research into the minimal values of s.[1]

The smallest s such that every natural number is a sum of at most s k-th powers is denoted g(k), while G(k) is the smallest s such that every sufficiently large natural number can be represented this way.[1] For cubes (k = 3), Waring claimed g(3) = 9, a result later proven; it is now known that only finitely many numbers require nine cubes and that almost all numbers require at most four.[1] Exact values include g(2) = 4, g(3) = 9, g(4) = 19, and g(6) = 73, with g(k) verified computationally for very large k and conjectured to follow the formula g(k) = 2^k + \lfloor (3/2)^k \rfloor - 2 for all sufficiently large k.[1]

Progress on G(k) has relied heavily on the Hardy-Littlewood circle method, with key results including G(3) \leq 7 and G(4) = 16.[1] Recent work provides upper bounds such as G(k) \leq 3.094686k + o(k) for large k that are not powers of 2, and specific values such as G(5) \leq 17 and G(20) \leq 63.[2] These results reflect ongoing refinement: the best classical upper bounds are of the form G(k) \leq k (\log k + \log \log k + O(1)), underscoring the problem's deep connections to analytic number theory and its influence on related questions about additive bases.[1]
History and Origins
Edward Waring's Conjecture
In the third century AD, the Greek mathematician Diophantus of Alexandria posed the question of whether every positive integer could be expressed as the sum of four perfect squares, laying early groundwork for problems in additive number theory.[3] This ancient inquiry influenced later European mathematicians, culminating in Joseph-Louis Lagrange's proof in 1770 that four squares indeed suffice for any natural number.[4]

In the same year, Edward Waring, the Lucasian Professor of Mathematics at the University of Cambridge, published his seminal work Meditationes Algebraicae, in which he extended this line of investigation to higher powers.[5] Waring conjectured, without proof, that every natural number can be represented as the sum of at most four squares, nine cubes, nineteen fourth powers, thirty-seven fifth powers, seventy-three sixth powers, and 143 seventh powers.[4] He arrived at these bounds through extensive computations of representations of small natural numbers, observing a pattern in the maximum number required for each exponent up to seven, though he provided no general formula for larger k.[6]

To support his claims, Waring highlighted specific examples that necessitate the full quota of terms. For instance, the number 23 cannot be written as a sum of fewer than nine positive cubes: 23 = 2 \cdot 2^3 + 7 \cdot 1^3, and no representation with fewer cubes exists.[4] Similarly, 79 requires exactly nineteen fourth powers, expressed as 79 = 4 \cdot 2^4 + 15 \cdot 1^4, demonstrating the tightness of his bound for k = 4.[4] These cases underscore Waring's empirical approach: while most numbers require far fewer terms, the conjectured maxima ensure universality.[6]
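These extremal examples can be checked directly by computer. The following Python sketch (illustrative only; the function name min_kth_powers, the dynamic-programming approach, and the printed cases are our own choices rather than part of any cited treatment) computes the least number of positive k-th powers summing to n and confirms the two examples above.

```python
def min_kth_powers(n, k):
    """Least number of positive k-th powers summing to n (simple DP sketch)."""
    powers = []
    x = 1
    while x ** k <= n:
        powers.append(x ** k)
        x += 1
    # best[m] = least number of k-th powers summing to m
    best = [0] + [float("inf")] * n
    for m in range(1, n + 1):
        for p in powers:
            if p > m:
                break
            best[m] = min(best[m], best[m - p] + 1)
    return best[n]

print(min_kth_powers(23, 3))   # 9  (23 = 2*2^3 + 7*1^3)
print(min_kth_powers(79, 4))   # 19 (79 = 4*2^4 + 15*1^4)
```

Running the same routine over a range of n also shows that very few integers need the full nine cubes, which foreshadows the distinction between g(k) and G(k) drawn later in the article.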
Hilbert-Waring Theorem
The Hilbert–Waring theorem states that for every positive integer k \geq 2, there exists a finite positive integer g(k) such that every natural number can be expressed as the sum of at most g(k) kth powers of nonnegative integers. This result, proved by David Hilbert in 1909, resolved the general form of Edward Waring's 1770 conjecture by establishing the existence of such a finite g(k) for all k. Hilbert's proof employed analytic and algebraic tools, including generating functions and polynomial identities whose existence can be established using results from convex geometry such as Carathéodory's theorem on convex hulls. Specifically, he used identities of the form

M (X_1^2 + \cdots + X_5^2)^k = \sum_{i=1}^Q (a_{i1} X_1 + \cdots + a_{i5} X_5)^{2k},

where M and the a_{ij} are integers with M > 0, which allow a fixed multiple of the kth power of a sum of five squares to be rewritten as a sum of a bounded number of (2k)th powers of integers; iterating this process across even and odd exponents demonstrated that every natural number admits a representation as a sum of finitely many kth powers.[7]

Following Waring's empirical observations, key developments in the 18th and 19th centuries laid groundwork for Hilbert's resolution. In 1772, Johann Albrecht Euler, son of Leonhard Euler, established a significant lower bound for g(k), showing that g(k) \geq 2^k + \lfloor (3/2)^k \rfloor - 2 by considering the number N = 2^k q - 1 with q = \lfloor (3/2)^k \rfloor: since N < 3^k, only the summands 1^k and 2^k can appear in a representation of N, and the most economical representation uses q - 1 copies of 2^k together with 2^k - 1 copies of 1^k.[8] In 1859, Joseph Liouville obtained the first explicit upper bound for fourth powers, proving that g(4) \leq 53 by combining Lagrange's four-square theorem with an identity expressing 6(x^2 + y^2 + z^2 + t^2)^2 as a sum of 12 fourth powers: writing any natural number as 6p + r with 0 \leq r \leq 5, expressing p as a sum of four squares a_1^2 + a_2^2 + a_3^2 + a_4^2, and applying the identity to each term 6a_i^2 (with a_i itself written as a sum of four squares) yields 48 fourth powers, to which at most five copies of 1^4 are added for the remainder r.[8] These contributions highlighted the problem's difficulty while motivating analytic approaches.

The Hilbert–Waring theorem falls under the branch of additive number theory, classified in the Mathematics Subject Classification as MSC 11P05, which encompasses Waring's problem and its variants.[9] Its impact was profound: it elevated Waring's conjecture from an unproven assertion to a foundational theorem in additive number theory, confirming the universal representability of natural numbers by kth powers while leaving the precise determination of g(k) as an open challenge for subsequent research.[7]
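For concreteness, Liouville's identity can be written out in a standard form (any equivalent arrangement of the twelve fourth powers works equally well):

6\left(x_1^2 + x_2^2 + x_3^2 + x_4^2\right)^2 = \sum_{1 \le i < j \le 4} \left[ (x_i + x_j)^4 + (x_i - x_j)^4 \right].

Each pair contributes 2x_i^4 + 12 x_i^2 x_j^2 + 2x_j^4, and each index occurs in three of the six pairs, so the right-hand side equals 6 \sum_i x_i^4 + 12 \sum_{i<j} x_i^2 x_j^2 = 6 \left( \sum_i x_i^2 \right)^2, as required.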
Mathematical Formulation
Statement of the Problem
Waring's problem is a fundamental question in additive number theory concerning the extent to which the set of k-th powers of non-negative integers forms an additive basis of finite order for the natural numbers.[10]

Formally, for each fixed integer k \geq 2, the problem asks whether there exists a positive integer s such that every natural number n can be expressed as the sum of at most s k-th powers of non-negative integers; that is,

n = x_1^k + \cdots + x_s^k,

where each x_i \in \mathbb{N}_0 = \{0, 1, 2, \dots \}.[11] This formulation emphasizes representations using only non-negative bases, with powers computed as x_i^k \geq 0, and allows some of the x_i to be zero, which is what "at most s" terms means. The problem was first proposed by Edward Waring in 1770.[12]

The minimal such s that works for all natural numbers n is denoted by g(k). A related quantity is the minimal number of terms needed to represent all sufficiently large natural numbers, denoted G(k), which satisfies G(k) \leq g(k).[11] These quantities capture the core challenge of determining the representational power of k-th powers as an additive structure.
Definitions of g(k) and G(k)
In Waring's problem, the function g(k) denotes the smallest positive integer s such that every natural number can be expressed as the sum of at most s kth powers of non-negative integers. Formally,

g(k) = \min \left\{ s \in \mathbb{N} \mid \forall n \in \mathbb{N}, \, n = x_1^k + x_2^k + \cdots + x_s^k \text{ for some } x_i \in \mathbb{N}_0 \right\},

where \mathbb{N}_0 includes zero.[13] This definition captures the universal representability required of all natural numbers, with Hilbert's theorem from 1909 establishing that g(k) is finite for every integer k \geq 2.

In contrast, G(k) is the smallest positive integer s such that all sufficiently large natural numbers can be written as the sum of at most s kth powers of non-negative integers.[14] More precisely,

G(k) = \min \left\{ s \in \mathbb{N} \mid \exists N \in \mathbb{N} \text{ such that } \forall n > N, \, n = x_1^k + x_2^k + \cdots + x_s^k \text{ for some } x_i \in \mathbb{N}_0 \right\}.

The existential quantifier over a threshold N distinguishes G(k) from g(k), focusing on asymptotic behavior rather than complete coverage.

The functions satisfy G(k) \leq g(k) for all k \geq 2, with equality holding for k = 2. For k = 1, g(1) = 1 holds trivially, as every natural number is its own first power.[13]
Special Case: Squares
Lagrange's Four-Square Theorem
Lagrange's four-square theorem asserts that every natural number can be expressed as the sum of at most four squares of non-negative integers, establishing that g(2) = 4 in the context of Waring's problem.[15] This result resolves the case of squares, confirming that no more than four such terms are ever needed to represent any positive integer.[16]

The theorem was proved by Joseph-Louis Lagrange in 1770 and published in 1772, predating and independent of Edward Waring's broader conjecture on sums of higher powers.[17] Lagrange's proof built upon earlier work by mathematicians such as Fermat and Euler, utilizing techniques like descent and identities for sums of squares.[18]

Formally, the theorem states that for any natural number n, there exist non-negative integers a, b, c, d satisfying

n = a^2 + b^2 + c^2 + d^2.

Numbers such as 7 and 15 require four squares, as do all natural numbers of the form 4^m(8k+7); most numbers can be expressed using one, two, or three squares.[19]
Examples and Proof Sketch
To illustrate Lagrange's four-square theorem, consider concrete representations of small natural numbers as sums of squares. The number 1 requires only one square: 1 = 1^2. The number 2 requires two squares: 2 = 1^2 + 1^2. The number 3 requires three squares: 3 = 1^2 + 1^2 + 1^2. In contrast, 7 requires four squares: 7 = 2^2 + 1^2 + 1^2 + 1^2.[20]

The proof of the theorem combines Euler's four-square identity with a descent argument. Euler's identity establishes the multiplicative property that the product of two sums of four squares is itself a sum of four squares,

\begin{aligned}
&(a^2 + b^2 + c^2 + d^2)(w^2 + x^2 + y^2 + z^2) \\
&= (aw - bx - cy - dz)^2 + (ax + bw + cz - dy)^2 \\
&\quad + (ay - bz + cw + dx)^2 + (az + by - cx + dw)^2,
\end{aligned}

allowing representations to be built from those of factors.[21] A key step is the theorem that every prime is a sum of four squares; for example, the prime 2 satisfies 2 = 1^2 + 1^2 + 0^2 + 0^2, and for an odd prime p one first shows that some multiple mp with 1 \leq m < p is a sum of four squares and then applies descent.[22] Lagrange's descent method reduces such a representation of mp to one of m'p with m' < m, iterating until m = 1; combined with the multiplicative identity, this shows that every natural number, as a product of primes each representable by four squares, is itself a sum of four squares.[23] Jacobi's four-square theorem complements this by counting representations: the number of ways to write a positive integer n as an ordered sum of four squares of integers (counting signs and order) is 8 \sum_{d \mid n, 4 \nmid d} d, which is always positive, confirming existence.[23]

Infinitely many numbers require four squares, namely all of the form 4^m(8k+7). These are precisely the numbers that cannot be written as sums of three squares, as established by Legendre's three-square theorem.[16] This establishes that g(2) = 4.[21]
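As a small empirical check of the last claim, the following Python sketch (a brute-force illustration; the bound of 1000 and the helper names is_sum_of_three_squares and legendre_form are our own) lists the integers up to the bound that are not sums of three squares and confirms that they are exactly those of the form 4^m(8k+7).

```python
def is_sum_of_three_squares(n):
    """Brute-force test: can n be written as a^2 + b^2 + c^2 with a, b, c >= 0?"""
    a = 0
    while a * a <= n:
        b = a
        while a * a + b * b <= n:
            c2 = n - a * a - b * b
            c = int(c2 ** 0.5)
            if any((c + d) ** 2 == c2 for d in (-1, 0, 1)):
                return True
            b += 1
        a += 1
    return False

def legendre_form(n):
    """True if n = 4^m (8k + 7) for some m, k >= 0."""
    while n % 4 == 0:
        n //= 4
    return n % 8 == 7

LIMIT = 1000
needs_four = [n for n in range(1, LIMIT + 1) if not is_sum_of_three_squares(n)]
assert needs_four == [n for n in range(1, LIMIT + 1) if legendre_form(n)]
print(needs_four[:8])  # [7, 15, 23, 28, 31, 39, 47, 55]
```

By Lagrange's theorem each listed number is nonetheless a sum of four squares, which is the content of g(2) = 4.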
General Results for g(k)
Known Exact Values
The exact values of g(k) have been determined individually for k \leq 7. For k = 1, g(1) = 1, since every positive integer n is trivially n = n^1. For k = 2, g(2) = 4, as established by Lagrange's four-square theorem, which shows that every natural number is the sum of at most four squares of nonnegative integers.

For cubes, g(3) = 9, proven by Wieferich in 1909, who demonstrated that every sufficiently large integer is a sum of at most nine positive cubes, with Kempner in 1912 completing the argument for the remaining range. Only 23 and 239 require nine cubes; for instance, 23 = 2 \times 2^3 + 7 \times 1^3, while all other integers need at most eight. Fifteen numbers require eight cubes, including 15, 22, and 50, and almost all natural numbers can be expressed with four or fewer.

For fourth powers, g(4) = 19, established by Balasubramanian, Deshouillers, and Dress in 1986 through analytic methods combined with extensive computational checks, confirming that 19 fourth powers always suffice and that some integers require exactly 19. Notably, 79 and 159 demand 19 fourth powers; an example is 79 = 4 \times 2^4 + 15 \times 1^4. Every sufficiently large integer is a sum of at most 16 fourth powers, reflecting G(4) = 16.

The value g(5) = 37 was proven by Chen Jingrun in 1964 using the Hardy-Littlewood circle method to bound representations of large numbers, supplemented by verification for smaller ones. Similarly, g(6) = 73 follows from Pillai's 1940 work, which established an upper bound matching the lower bound coming from the extremal number 703 = 10 \times 2^6 + 63 \times 1^6, which requires 73 sixth powers. For k = 7, g(7) = 143 follows from the ideal Waring theorem established by Dickson and Pillai in the 1930s, with the lower bound supplied by 2175 = 16 \times 2^7 + 127 \times 1^7, which requires 143 terms.[24]

For k \geq 8, the formula g(k) = 2^k + \lfloor (3/2)^k \rfloor - 2 gives the exact value whenever a certain condition on the fractional part of (3/2)^k holds; computational work by Kubina and Wunderlich in 1990 verified this condition for all k \leq 471{,}600{,}000, strongly supporting the Eulerian lower bound g(k) \geq 2^k + \lfloor (3/2)^k \rfloor - 2 as the true value in general.
Formulas and Bounds
In 1772, J. A. Euler derived a fundamental lower bound for g(k) by exhibiting natural numbers that can only be represented using the two smallest k-th powers, 1^k and 2^k. Let q = \lfloor (3/2)^k \rfloor and consider N = 2^k q - 1. Since N \leq 2^k (3/2)^k - 1 = 3^k - 1 < 3^k, any representation of N as a sum of k-th powers can use only 1^k = 1 and 2^k, as larger bases would already exceed N. At most q - 1 copies of 2^k can appear, because q copies would total 2^k q > N; the remaining N - (q-1) 2^k = 2^k - 1 must be covered by copies of 1^k. Hence any representation of N requires at least (q - 1) + (2^k - 1) terms, yielding the lower bound

g(k) \geq 2^k + \left\lfloor \left( \frac{3}{2} \right)^k \right\rfloor - 2.[25][1]

The formula g(k) = 2^k + \lfloor (3/2)^k \rfloor - 2 has been conjectured to give the exact value for all k \geq 1, matching the known exact values for small k. The conjecture remains unproven in general, though computational checks support it for all k \leq 471{,}600{,}000. As of 2025, no accepted proof exists despite recent preprints. The conjectured formula reflects the same worst case: numbers just below 2^k \lfloor (3/2)^k \rfloor are forced to rely on the smallest powers, and the bound captures the maximal requirement.[25]

Upper bounds for g(k) align closely with Euler's lower bound. In work of the 1930s and 1940s, L. E. Dickson and S. S. Pillai established that g(k) \leq 2^k + \lfloor (3/2)^k \rfloor - 2 holds whenever the fractional part of (3/2)^k satisfies a mild inequality, a condition believed to hold for every k and verified for all k within computational reach; this is the so-called ideal Waring theorem. These results confirm that the conjectured formula serves as both lower and upper bound whenever the condition is met: every number below thresholds such as 3^k can then be covered using at most \lfloor (3/2)^k \rfloor - 1 copies of 2^k together with copies of 1^k, without overflow into larger bases.[25][1]
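The construction behind the bound is easy to reproduce. The sketch below (an illustration; the dictionary known_g of reference values is supplied by hand) computes q, the extremal number N = 2^k q - 1, and the resulting lower bound using exact integer arithmetic, and compares it with the known values of g(k) for small k.

```python
# Euler's lower-bound construction for g(k): an illustrative sketch.
# q = floor((3/2)^k) is computed exactly as 3**k // 2**k.
known_g = {2: 4, 3: 9, 4: 19, 5: 37, 6: 73, 7: 143}

for k in range(2, 8):
    q = 3 ** k // 2 ** k          # floor((3/2)^k)
    N = 2 ** k * q - 1            # extremal number, smaller than 3^k
    bound = 2 ** k + q - 2        # (q - 1) copies of 2^k plus (2^k - 1) copies of 1^k
    print(k, N, bound, known_g[k])
```

The extremal numbers produced, 7, 23, 79, 223, 703, and 2175, are exactly the witnesses quoted in the discussions of g(2) through g(7), and in each of these cases the lower bound already equals g(k).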
Asymptotic Results for G(k)
Lower Bounds
A lower bound for G(k) is established by demonstrating that there are infinitely many natural numbers n that cannot be expressed as the sum of fewer than s k-th powers of natural numbers, thereby implying G(k) \ge s.

One primary method for obtaining such bounds is through congruence conditions, which reveal residue classes modulo some integer m that cannot be attained by sums of fewer than s k-th powers. For k = 3, the possible residues of cubes modulo 9 are 0, 1, and 8. Sums of three such residues yield only the classes 0, 1, 2, 3, 6, 7, and 8 modulo 9, excluding 4 and 5. Consequently, every natural number n \equiv 4 or 5 \pmod{9} requires at least four cubes, and as there are infinitely many such n, it follows that G(3) \ge 4.[26]

Similar congruence obstructions yield G(4) = 16 exactly, where the lower bound of 16 arises from conditions modulo powers of 2 (numbers of the form 31 \cdot 16^m require sixteen fourth powers), which prevent representation with 15 or fewer fourth powers for infinitely many n.[26] For k = 5, analogous arguments establish G(5) \ge 6.[26] In general, for all k \ge 3, congruence conditions imply G(k) \ge 4: for even k this follows from residues modulo 8, where squares lie in \{0, 1, 4\} and k-th powers for even k \ge 4 lie in \{0, 1\}, so a sum of three k-th powers is never congruent to 7 modulo 8; for odd k it stems from conditions such as those modulo 9.[26]

The Hardy-Littlewood circle method provides an analytic framework for these lower bounds: the associated singular series vanishes for residue classes where the local density is zero, confirming that the representation function satisfies r_s(n) = 0 for infinitely many n when s is too small.[25]

A further general lower bound, G(k) \ge k + 1 for k > 1, arises from a counting argument: the number of s-tuples 0 \le x_1 \le \cdots \le x_s with x_1^k + \cdots + x_s^k \le X is of order X^{s/k}, which is o(X) for s < k, and for s = k it is asymptotic to cX with a constant c < 1; in either case a positive proportion of the integers up to X cannot be represented, leaving infinitely many exceptions.[26]
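The cube obstruction modulo 9 can be verified mechanically. The following Python sketch (illustrative; the helper name attainable_residues is our own) enumerates the residues attainable as sums of s k-th powers modulo m, with zero summands allowed so that "at most s terms" is covered.

```python
from itertools import product

def attainable_residues(k, s, m):
    """Residues mod m attainable as a sum of s k-th powers (zero terms allowed)."""
    powers = {pow(x, k, m) for x in range(m)}
    return {sum(combo) % m for combo in product(powers, repeat=s)}

print(sorted(attainable_residues(3, 3, 9)))  # [0, 1, 2, 3, 6, 7, 8] -- 4 and 5 are missing
print(sorted(attainable_residues(3, 4, 9)))  # all residues 0..8
```

Three cubes miss the classes 4 and 5 modulo 9, while four cubes reach every class, matching the argument above.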
Upper Bounds and Recent Progress
Upper bounds for G(k) in Waring's problem have been progressively refined using analytic number theory, particularly the Hardy-Littlewood circle method, which decomposes the representation function into contributions from major and minor arcs to derive asymptotic formulas for the number of solutions of n = \sum_{i=1}^s x_i^k. A seminal advance came from Vinogradov in 1959, who established G(k) \leq k (\log k + 2 \log \log k + C) for some constant C > 0 and sufficiently large k, leveraging bounds on the exponential sums \sum_{x=1}^P e( \alpha x^k ) via his mean value theorem to control the minor arc contributions.[1]

In the 1980s, Vaughan improved these estimates by incorporating iterative methods and sharper approximations for Weyl sums, achieving G(k) \ll k^{7/4 + \epsilon} for any \epsilon > 0.[1] Further refinements by Wooley in 1992, using estimates for exponential sums over smooth numbers, gave G(k) \leq k (\log k + \log \log k + O(1)) for large k, an influential bound of the form O(k \log k).[1]

More recent progress has aimed at bounds linear in k. In 2024, Li An-Ping established G(k) \leq 3.094686k + o(k) for sufficiently large k not a power of 2, and G(k) \leq 4k otherwise, using parameterized recursions and the Hardy-Littlewood method; this yields specific values such as G(5) \leq 17 and G(20) \leq 63. These bounds move closer to the conjectured form G(k) = \max\{k + 1, \Gamma_0(k)\}, where \Gamma_0(k) arises from local density conditions, suggesting growth linear in k for most k.[2][1]

These analytic methods estimate the number of representations r_s(n) of large n through the circle method: an upper bound on G(k) follows from showing r_s(n) > 0 for all sufficiently large n once s exceeds the given threshold, often employing Vinogradov's mean value theorem to bound higher moments of exponential sums and sieve techniques to handle exceptional sets. For small k, specific results include G(2) = 4, G(3) \leq 7, G(4) = 16, and G(5) \leq 17.[1]
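For orientation, the quantity these arguments control can be written as a single integral over the unit interval; the display below is the standard set-up of the circle method, with e(\theta) = e^{2\pi i \theta} and P = \lfloor n^{1/k} \rfloor (notation chosen here for exposition, not taken from the cited papers):

r_s(n) = \int_0^1 f(\alpha)^s \, e(-n\alpha) \, d\alpha, \qquad f(\alpha) = \sum_{1 \le x \le P} e(\alpha x^k).

The major arcs, where \alpha lies close to a rational with small denominator, contribute a main term of the shape \mathfrak{S}(n) \, \Gamma(1 + 1/k)^s \, \Gamma(s/k)^{-1} \, n^{s/k - 1}, where \mathfrak{S}(n) is the singular series; an upper bound G(k) \leq s follows once the minor-arc contribution is shown to be smaller, so that r_s(n) > 0 for all large n.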
Open Problems and Conjectures
Unresolved Values
Exact values of g(k) are known for all small k: g(2) = 4, g(3) = 9, g(4) = 19, g(5) = 37, g(6) = 73, and g(7) = 143. The conjectured formula g(k) = 2^k + \lfloor (3/2)^k \rfloor - 2 has been verified computationally for all k \leq 471{,}600{,}000 by Kubina and Wunderlich in 1990 and is believed to hold for every k, though exhaustive confirmation for arbitrarily large k remains out of reach.[4] For larger k the formula is known to hold whenever a condition on the fractional part of (3/2)^k is satisfied, and otherwise g(k) exceeds the formula by a small, explicitly bounded correction \nu(k); in this sense the determination of g(k) is regarded as essentially resolved in practice, resting on Hilbert's theorem and refinements via the Hardy-Littlewood circle method.

The function G(k), representing the minimal number of kth powers needed for all sufficiently large integers, is exactly known only for k = 2 and k = 4 (and trivially for k = 1). For other values, tight bounds persist without resolution: 4 \leq G(3) \leq 7 (with the conjecture favoring 4, since only 23 and 239 require nine cubes and all large numbers are known to need at most seven); 6 \leq G(5) \leq 17; and 9 \leq G(6) \leq 24. These intervals combine lower bounds from modular arithmetic obstructions (numbers lying in certain residue classes modulo powers of small primes) with upper bounds from asymptotic estimates via exponential sums.[4]

Determining these values computationally is hindered by the immense scale required; for instance, verifying representations by cubes (k = 3) has involved checking integers up to 10^{15} and beyond, necessitating specialized algorithms and vast resources, with comprehensive updates continuing into the 21st century. For larger k, although the rapid growth of kth powers permits theoretical verification of the formula for g(k), computations for G(k) must handle exponentially larger thresholds. As of 2025, no additional exact values of G(k) beyond those for small k have been established, though the bounds have been refined.[27]
Generalizations
The signed version of Waring's problem, sometimes called the easier Waring problem, considers representations of integers as sums and differences of kth powers, that is, sums of terms of the form \pm x^k where x is a non-negative integer. In this setting, the quantity g'(k) denotes the smallest number s such that every integer is a sum of at most s signed kth powers. Allowing subtraction reduces the number of terms needed: for squares, every integer can be written as x^2 + y^2 - z^2, and three terms are sometimes necessary, so g'(2) = 3. For cubes (k = 3), the identity 6x = (x+1)^3 + (x-1)^3 - 2x^3, combined with the congruence n^3 \equiv n \pmod{6}, shows that every integer is a sum of at most five signed cubes, and four are conjectured to suffice, so g'(3) is either 4 or 5.[28] In general g'(k) \leq g(k), with g'(k) expected to be far smaller, but exact values remain unknown beyond small cases.

Modular variants of Waring's problem investigate representations modulo m, seeking the smallest s = g(k,m) such that every residue class modulo m is a sum of s kth powers modulo m. This problem is fully solvable for any fixed m and k, as the finite number of residues allows exhaustive checking, though explicit computations can be complex for large m. For example, when m is a prime power, recursive relations often determine g(k,m). Ewell provided a comprehensive treatment, computing g(k,n) for various n and showing connections to the classical case via lifting solutions modulo powers of primes.[29]

Over finite fields \mathbb{F}_q, Waring's problem asks for the smallest s = g(k,q) such that every element of \mathbb{F}_q is a sum of s kth powers of elements of \mathbb{F}_q. This is influenced by the structure of the multiplicative group and the characteristic; for instance, if k divides q-1, then g(k,q) = (q-1)/\gcd(k,q-1) + 1 in many cases. Exact values are known for numerous pairs (k,q), particularly when q is prime and k is large relative to q. Glibichuk and Konyagin established bounds using sum-product estimates from additive combinatorics, showing g(k,q) \leq C k \log k for odd characteristic and q > k. Recent progress links this to the diameter of generalized Paley graphs, with g(k,q) equal to the graph diameter whenever that diameter is finite.[30]

In higher-dimensional settings such as \mathbb{Z}^d, generalizations consider whether every lattice point can be written as a sum of s vectors each raised to the kth power componentwise. For sufficiently large norms, every element of \mathbb{Z}^d is a sum of O_d(k^{d}) such kth powers, extending the circle method to several dimensions. Birch's work on the representation of integers by diagonal forms provides the foundational asymptotic estimates, showing that the number of representations grows like the singular series times the expected volume when s is large enough. Over the rationals \mathbb{Q}^d, similar Hasse-principle results hold, ensuring that local solvability implies global representability once s exceeds a dimension-dependent threshold.

Related problems include the multidimensional Waring problem, where one seeks representations of large integers in multiple variables as sums of kth powers, often with asymptotic densities approaching 1 for s \geq G(k). The set of integers representable as sums of s kth powers has asymptotic density 1 when s is at least G(k), with the exceptional set having density zero by the Hardy-Littlewood method. In additive combinatorics, Waring's problem relates to the structure of sumsets of the set of kth powers, which forms an additive basis of order g(k). Recent developments connect these questions to Szemerédi's theorem on arithmetic progressions, yielding improved lower bounds for G(k) via progression-free subsets avoiding power sums.
Shkredov used such techniques to refine estimates on the size of sumsets of cubes, impacting upper bounds in the classical problem.
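Returning to the signed variant discussed at the start of this section, the five-cube representation coming from the identity 6x = (x+1)^3 + (x-1)^3 - 2x^3 is easy to check numerically; the Python sketch below (illustrative only; the function name is our own) verifies it over a small range of integers. Cubes of negative integers are simply negated cubes of non-negative integers, so the output matches the signed formulation.

```python
def five_signed_cubes(n):
    """Return (a, b, c, d, e) with n = a^3 + b^3 + c^3 - d^3 - e^3, via 6m = (m+1)^3 + (m-1)^3 - 2m^3."""
    m = (n ** 3 - n) // 6          # an integer, since n^3 = n (mod 6)
    # n = n^3 - 6m = n^3 + m^3 + m^3 - (m+1)^3 - (m-1)^3
    return n, m, m, m + 1, m - 1

for n in range(-20, 21):
    a, b, c, d, e = five_signed_cubes(n)
    assert n == a**3 + b**3 + c**3 - d**3 - e**3
print("verified for -20..20")
```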