Sums of powers
Sums of powers, also known as power sums, refer to the finite sum S_p(n) = \sum_{k=1}^n k^p, where p is a non-negative integer and n is a positive integer giving the upper limit of summation.[1] These sums express the total of the p-th powers of the first n positive integers and evaluate to a polynomial in n of degree p+1.[1] They are fundamental in mathematics, appearing in number theory in the study of arithmetic progressions and figurate numbers, in analysis through connections to integrals and series, and in statistics as moments of discrete distributions.[1]
The investigation of sums of powers has ancient origins. The Pythagoreans in the 6th century BCE recognized the sum for p=1, S_1(n) = \frac{n(n+1)}{2}, as the formula for triangular numbers.[2] In the 3rd century BCE, Archimedes derived the formula for p=2, S_2(n) = \frac{n(n+1)(2n+1)}{6}, using geometric methods in his works On Conoids and Spheroids and On Spirals to compute areas and volumes.[2] Later contributions include Nicomachus's work on cubes in the 1st century CE, and formulas for p=3 and p=4 by medieval Islamic mathematicians such as Al-Karaji around 1000 CE and Alhazen (Ibn al-Haytham) in the 11th century.[2]
Significant advances came in the 17th century with European mathematicians. Pierre de Fermat and Blaise Pascal developed recursive methods using binomial coefficients, while Johann Faulhaber published explicit formulas for sums up to p=17 (and mentioned sums up to p=100) in his 1631 treatise Academia Algebræ, expressing them in terms of binomial coefficients without a unified general form.[3] A breakthrough came posthumously in 1713, when Jakob Bernoulli introduced the Bernoulli numbers in Ars Conjectandi, enabling a compact general expression for all p.[4] This led to Faulhaber's formula in its modern form, S_p(n) = \frac{1}{p+1} \sum_{j=0}^p (-1)^j \binom{p+1}{j} B_j n^{p+1-j}, where the B_j are the Bernoulli numbers (with B_1 = -\frac{1}{2} in the convention used here).[3] Carl Gustav Jacob Jacobi gave a rigorous proof of this formula for all positive integers p in 1834.[3]
Beyond closed forms, sums of powers connect to broader mathematical structures, such as the Euler-Maclaurin formula relating sums to integrals and Nicomachus's theorem, which states that S_3(n) = \left( S_1(n) \right)^2.[1] They also relate to the Riemann and Hurwitz zeta functions via S_p(n) = \zeta(-p) - \zeta(-p, n+1), valid under analytic continuation, highlighting their role in analytic number theory.[1]
Fundamentals
Definition
In mathematics, the p-th power sum, often denoted S_p(n), is defined as the finite sum S_p(n) = \sum_{m=1}^n m^p, where n is a positive integer and p a non-negative integer.[2] This expression aggregates the p-th powers of the first n natural numbers, providing a foundational object in number theory and discrete analysis.[5] Power sums differ from power series, which are infinite expansions of the form \sum_{k=0}^\infty a_k x^k in a variable x, and from convergent infinite series such as \sum_{k=1}^\infty \frac{1}{k^2} = \frac{\pi^2}{6}.[2] Instead, they involve finite summation over an initial segment of the natural numbers, so no convergence issues arise.[5] Their closed forms are polynomials in n, whose degree and leading coefficient are of particular interest, and they act as discrete analogs of integrals, approximating \int_0^n x^p \, dx = \frac{n^{p+1}}{p+1} in the sense of Riemann sums.[2] Bernoulli numbers serve as key tools for deriving general expressions for power sums.[2] For illustration, when p=1 and n=3, the computation yields S_1(3) = 1^1 + 2^1 + 3^1 = 6.[5]
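The definition lends itself to direct computation. The following Python sketch (an illustration only; the function names are chosen here and do not come from the cited sources) evaluates S_p(n) by direct summation and compares it with the integral \int_0^n x^p \, dx = \frac{n^{p+1}}{p+1}, which the sum approaches in relative terms as n grows.

```python
# Illustrative sketch; function names are chosen here for demonstration.
def power_sum(n: int, p: int) -> int:
    """Direct evaluation of S_p(n) = 1^p + 2^p + ... + n^p."""
    return sum(k**p for k in range(1, n + 1))

def integral_analog(n: int, p: int) -> float:
    """The continuous analog: integral of x^p from 0 to n, i.e. n^(p+1)/(p+1)."""
    return n**(p + 1) / (p + 1)

# Example from the text: S_1(3) = 1 + 2 + 3 = 6.
assert power_sum(3, 1) == 6

# For large n the sum is dominated by the leading term n^(p+1)/(p+1).
for p in (1, 2, 3):
    n = 1000
    ratio = power_sum(n, p) / integral_analog(n, p)
    print(f"p={p}: S_p({n}) / (n^(p+1)/(p+1)) = {ratio:.6f}")  # approaches 1
```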
Notation and basic examples
In mathematics, the sum of the p-th powers of the first n positive integers, \sum_{k=1}^n k^p, is commonly denoted S_p(n), where the subscript p indicates the exponent and the argument n specifies the number of terms.[1] Alternative notations include p_k(n), in which the power index is marked on the symbol itself; whether the index appears as a subscript or superscript varies by context. Uppercase letters such as S typically denote the summation, while lowercase p or k denotes the power, reflecting conventions in analytic number theory and combinatorics.[1] To illustrate, consider basic computations for small values of n and p. For p=1 and n=3, the sum is 1^1 + 2^1 + 3^1 = 6, showing linear accumulation. For p=2 and n=4, it yields 1^2 + 2^2 + 3^2 + 4^2 = 1 + 4 + 9 + 16 = 30, where the increasing contributions from larger bases reflect quadratic growth. Such manual summations for modest n and p reveal emergent patterns, such as accelerating increments with higher powers, without relying on closed forms. Power sums also arise in generating functions, where they serve as coefficients in formal power series expansions, particularly for encoding sequences in combinatorial identities and symmetric polynomial bases.[6] The following table lists computed values for p=1,2,3 and n=1 to 5, offering a concise reference for these foundational cases (a short computational check follows the table):
| n \ p | 1 | 2 | 3 |
|---|---|---|---|
| 1 | 1 | 1 | 1 |
| 2 | 3 | 5 | 9 |
| 3 | 6 | 14 | 36 |
| 4 | 10 | 30 | 100 |
| 5 | 15 | 55 | 225 |
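The rows above can be reproduced either by direct summation or from the closed forms for p=1, 2, 3 discussed later in the article. The Python sketch below (illustrative only; helper names are chosen here) does both and also checks Nicomachus's identity S_3(n) = S_1(n)^2 for these values.

```python
# Illustrative sketch; helper names are chosen here for demonstration.
def s1(n: int) -> int:
    return n * (n + 1) // 2                   # triangular numbers

def s2(n: int) -> int:
    return n * (n + 1) * (2 * n + 1) // 6     # square pyramidal numbers

def s3(n: int) -> int:
    return (n * (n + 1) // 2) ** 2            # squares of triangular numbers

for n in range(1, 6):
    direct = [sum(k**p for k in range(1, n + 1)) for p in (1, 2, 3)]
    closed = [s1(n), s2(n), s3(n)]
    assert direct == closed
    assert s3(n) == s1(n) ** 2                # Nicomachus's theorem
    print(n, *closed)                         # reproduces the rows of the table above
```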
Historical development
Early contributions
The study of sums of powers traces its origins to ancient Greek mathematics. The Pythagoreans in the 6th century BCE recognized the sum for p=1, S_1(n) = \frac{n(n+1)}{2}, as the formula for triangular numbers.[2] In the 3rd century BCE, Archimedes derived the formula for p=2, S_2(n) = \frac{n(n+1)(2n+1)}{6}, using geometric methods in his works On Conoids and Spheroids and On Spirals.[2] Nicomachus in the 1st century CE contributed to sums of cubes (p=3).[2]
Subsequent advancements occurred in ancient Indian mathematics, where Aryabhata (c. 476–550 CE) provided some of the earliest known formulas for the sums of the first n squares and cubes of natural numbers in his treatise Aryabhatiya (499 CE). These contributions represented a significant advance in understanding arithmetic series raised to powers, focusing on explicit expressions for low exponents to aid astronomical and geometric calculations.[7]
In the Islamic Golden Age, scholars built on such ideas, with Abu Bakr al-Karaji (d. 1019) making notable progress in Baghdad on related problems, including a geometric proof of the identity linking the sum of the first n cubes to the square of the sum of the first n natural numbers. Al-Karaji's work extended to pyramidal numbers, which involve cumulative sums akin to higher-order power sums, demonstrating iterative methods for computing these aggregates without a general algebraic framework. His approach emphasized practical computation, reflecting the needs of engineering and algebraic texts.[8] Overall, pre-17th-century work on power sums centered on specific low exponents, driven by practical needs and constrained by the absence of a comprehensive theoretical structure.[9]
Key advancements by Bernoulli and Euler
In 1631, Johann Faulhaber published formulas for the sums of powers of the first n positive integers up to the 17th power in his book Academia Algebrae, a significant empirical advance in the study of power sums.[10] These formulas, expressed in a notation involving "cossic" terms for powers, extended earlier particular cases and included expressions up to the 23rd power in encoded form, with some higher exponents, such as the 13th power, presented explicitly as a polynomial in n divided by 105.[10]
Jakob Bernoulli advanced this work in his posthumously published Ars Conjectandi in 1713, where he derived symbolic formulas for sums of powers up to the 10th power and conjectured a general pattern involving a sequence of coefficients now known as the Bernoulli numbers.[11] Bernoulli linked these sums to finite differences and integrals, using the integral sign \int to denote summation, and observed that the general sum \sum_{k=1}^n k^p could be written as \frac{1}{p+1} n^{p+1} + \frac{1}{2} n^p plus lower-order terms whose coefficients involve the Bernoulli numbers B_j; in modern notation, he conjectured the form \sum_{m=1}^n m^p = \frac{1}{p+1} \sum_{j=0}^p (-1)^j \binom{p+1}{j} B_j n^{p+1-j}.[11][12]
In the 18th century, Leonhard Euler provided rigorous proofs of Bernoulli's conjectures around 1730 through his development of a general summation formula, confirming the pattern for arbitrary powers and placing it on a firm theoretical foundation.[12] Euler further refined the approach in the early 1730s by connecting the Bernoulli numbers to exponential generating functions, defining them as the coefficients in the expansion \frac{x}{e^x - 1} = \sum_{k=0}^\infty B_k \frac{x^k}{k!}, which facilitated broader applications in analysis.[13]
General closed-form expressions
Faulhaber's formula
Faulhaber's formula gives an explicit closed-form expression for the sum of the m-th powers of the first n positive integers as \sum_{k=1}^n k^m = \frac{1}{m+1} \sum_{j=0}^m (-1)^j \binom{m+1}{j} B_j n^{m+1-j}, where B_j denotes the j-th Bernoulli number (with the convention B_1 = -\frac{1}{2}).[3] This representation shows that the power sum is a polynomial in n of degree exactly m+1, with leading term \frac{n^{m+1}}{m+1}. The components of the formula are the binomial coefficients \binom{m+1}{j}, which scale each term; the factor (-1)^j, which alternates the sign of successive contributions; and the powers n^{m+1-j}, which decrease from m+1 to 1 and give the expression its polynomial structure.[3] The Bernoulli numbers B_j serve as coefficients that encapsulate the combinatorial content of the sum.
Johann Faulhaber first published specific formulas for power sums up to the 17th power in his 1631 treatise Academia Algebrae, expressing them as explicit polynomials without reference to Bernoulli numbers, which were introduced later by Jakob Bernoulli. A modern general form equivalent to Faulhaber's approach, avoiding Bernoulli numbers, rewrites the sum using Stirling numbers of the second kind S(m,j) and falling factorials: \sum_{k=1}^n k^m = \sum_{j=0}^m S(m,j) \frac{(n+1)^{\underline{j+1}}}{j+1}, where x^{\underline{j}} = x(x-1)\cdots(x-j+1).[14]
To verify the formula, consider m=1: \sum_{k=1}^n k = \frac{1}{2} \sum_{j=0}^1 (-1)^j \binom{2}{j} B_j n^{2-j}. For j=0 the term is (-1)^0 \binom{2}{0} B_0 n^2 = 1 \cdot 1 \cdot n^2 = n^2; for j=1 it is (-1)^1 \binom{2}{1} B_1 n = -2 \cdot \left(-\frac{1}{2}\right) n = n. Thus \frac{1}{2} (n^2 + n) = \frac{n(n+1)}{2}, matching the known triangular number formula.[3]
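The Stirling-number form stated above can be checked computationally. The following Python sketch (an illustration; the helper names are chosen here, not drawn from the cited sources) evaluates \sum_{j=0}^m S(m,j) \frac{(n+1)^{\underline{j+1}}}{j+1} for m \geq 1 and compares it with direct summation.

```python
# Illustrative sketch; helper names are chosen here for demonstration.
def stirling2(m: int, j: int) -> int:
    """Stirling number of the second kind S(m, j), via the standard recurrence."""
    if m == j == 0:
        return 1
    if m == 0 or j == 0:
        return 0
    return j * stirling2(m - 1, j) + stirling2(m - 1, j - 1)

def falling_factorial(x: int, j: int) -> int:
    """Falling factorial x * (x-1) * ... * (x-j+1), with j factors."""
    result = 1
    for i in range(j):
        result *= x - i
    return result

def power_sum_stirling(n: int, m: int) -> int:
    """Sum of k^m for k = 1..n via the Stirling-number form (valid for m >= 1)."""
    total = 0
    for j in range(m + 1):
        term = stirling2(m, j) * falling_factorial(n + 1, j + 1)
        assert term % (j + 1) == 0  # each term is an exact integer
        total += term // (j + 1)
    return total

# Check against direct summation for a range of exponents and upper limits.
for m in range(1, 8):
    for n in range(1, 20):
        assert power_sum_stirling(n, m) == sum(k**m for k in range(1, n + 1))
print("Stirling-number form matches direct summation.")
```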
Role of Bernoulli numbers
Bernoulli numbers B_k are a sequence of rational numbers defined by the exponential generating function \frac{x}{e^x - 1} = \sum_{k=0}^\infty B_k \frac{x^k}{k!}, which converges for |x| < 2\pi.[4][15] The first few values are B_0 = 1 and B_1 = -\frac{1}{2}, with subsequent terms determined recursively.[15] A key property is that the odd-indexed Bernoulli numbers vanish beyond the first: B_{2n+1} = 0 for all integers n \geq 1.[4][15] This vanishing simplifies power sum formulas, since the terms with odd index j > 1 drop out of Faulhaber's formula, leaving a more compact polynomial.[4]
In the context of sums of powers, the Bernoulli numbers serve as the coefficients in the polynomial expansion provided by Faulhaber's formula, weighting the powers of n to yield the exact sum \sum_{k=1}^n k^p.[4] They arise naturally from the theory of finite differences because the associated Bernoulli polynomials B_m(x) satisfy the difference equation B_m(x+1) - B_m(x) = m x^{m-1}, which allows the indefinite sum of powers to be expressed as a telescoping sum of these polynomials.[16]
The Bernoulli numbers can be computed from the recurrence \sum_{j=0}^m \binom{m+1}{j} B_j = 0 for m \geq 1, with the initial condition B_0 = 1.[4] This relation determines each successive term from its predecessors. The values B_0 through B_{10} are given in the following table (a computational check using this recurrence follows the table):
| n | B_n |
|---|---|
| 0 | 1 |
| 1 | -\frac{1}{2} |
| 2 | \frac{1}{6} |
| 3 | 0 |
| 4 | -\frac{1}{30} |
| 5 | 0 |
| 6 | \frac{1}{42} |
| 7 | 0 |
| 8 | -\frac{1}{30} |
| 9 | 0 |
| 10 | \frac{5}{66} |
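The recurrence above is straightforward to implement with exact rational arithmetic. The following Python sketch (illustrative; the helper names are chosen here, not taken from the references) generates B_0 through B_{10} with the convention B_1 = -\frac{1}{2}, checks the table values, and then uses them in Faulhaber's formula to recover the classical closed forms for p = 1, 2, 3.

```python
# Illustrative sketch; helper names are chosen here for demonstration.
from fractions import Fraction
from math import comb

def bernoulli_numbers(n_max: int) -> list[Fraction]:
    """B_0..B_n_max (convention B_1 = -1/2) via sum_{j=0}^{m} C(m+1, j) B_j = 0, m >= 1."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n_max + 1):
        # Solve (m+1) * B_m = -sum_{j=0}^{m-1} C(m+1, j) B_j for B_m.
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))
    return B

B = bernoulli_numbers(10)
assert B[1] == Fraction(-1, 2) and B[2] == Fraction(1, 6) and B[10] == Fraction(5, 66)
assert all(B[k] == 0 for k in (3, 5, 7, 9))  # odd-indexed values vanish beyond B_1

def faulhaber(n: int, p: int) -> Fraction:
    """S_p(n) = (1/(p+1)) * sum_j (-1)^j C(p+1, j) B_j n^(p+1-j), with B_1 = -1/2."""
    total = sum((-1)**j * comb(p + 1, j) * B[j] * n**(p + 1 - j) for j in range(p + 1))
    return total / (p + 1)

# Recover the classical closed forms for small p.
n = 6
assert faulhaber(n, 1) == n * (n + 1) // 2                 # triangular number
assert faulhaber(n, 2) == n * (n + 1) * (2 * n + 1) // 6   # square pyramidal number
assert faulhaber(n, 3) == (n * (n + 1) // 2) ** 2          # Nicomachus's theorem
print([str(b) for b in B])  # reproduces the table above
```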