Summation

In mathematics, summation is the operation of adding together a sequence of numbers or mathematical terms, known as addends or summands, to obtain their total or sum. This process is fundamental to arithmetic and higher mathematics, applicable to both finite sequences and infinite series. The concept is most commonly represented using summation notation, also called sigma notation, which employs the uppercase Greek letter Σ (sigma) to concisely express the sum of terms evaluated over a specified range of an index variable. The standard form of summation notation is \sum_{i=a}^{b} f(i), where i serves as the index of summation (which can be any letter, such as k or r), a denotes the lower limit (the starting value of the index), b indicates the upper limit (the ending value), and f(i) is the general term or expression being summed, with the index substituted sequentially from a to b. For instance, \sum_{i=1}^{3} (2i - 1) evaluates to 1 + 3 + 5 = 9. This notation allows for efficient representation of lengthy sums without listing each term explicitly, and the index can begin or end at any integer, provided a \leq b. The symbol Σ for summation was introduced by the mathematician Leonhard Euler in 1755, building on earlier uses of elongated 'S' shapes for sums; a capital 'S' was sometimes employed for series summation, both before and after Euler's introduction.

Several key properties govern summation, enabling simplification: the sum of a constant c over n terms is nc; a constant multiple can be factored out, as in \sum c\,k = c \sum k; and summation is linear, so \sum (f(k) + g(k)) = \sum f(k) + \sum g(k). These rules extend to more complex expressions, such as powers or exponentials, and are essential for deriving formulas like the sum of the first n natural numbers, \sum_{k=1}^{n} k = \frac{n(n+1)}{2}.

Summation notation plays a central role across mathematical disciplines, including calculus, where Riemann sums approximate definite integrals as limits of summations; statistics, for computing means (\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i) and variances; and analysis, for studying the convergence of sequences and series. It also appears in advanced contexts like measure theory, where more general notations handle integrals over non-discrete spaces, underscoring its versatility in both theoretical and applied mathematics.
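For readers who want to check such evaluations mechanically, the following minimal Python sketch (illustrative only, not tied to any particular library) evaluates the example sum above and the closed form for the first n natural numbers:

```python
# Evaluate sum_{i=1}^{3} (2i - 1) = 1 + 3 + 5 = 9 directly.
total = sum(2 * i - 1 for i in range(1, 4))  # range(1, 4) yields i = 1, 2, 3
print(total)  # 9

# Sanity-check the closed form sum_{k=1}^{n} k = n(n+1)/2.
n = 100
assert sum(range(1, n + 1)) == n * (n + 1) // 2
```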

Notation

Sigma notation

Sigma notation, also known as summation notation, employs the uppercase Greek letter sigma (∑) to concisely represent the sum of a sequence of terms. The symbol ∑ is positioned to the left of the summand expression, with the index variable and lower limit written as a subscript below it and the upper limit as a superscript above it. The summand, which is the function or expression evaluated at each index value, follows the ∑ to the right. For finite sums, the standard syntax is \sum_{k=a}^{b} f(k), where k denotes the index variable that increments from the lower limit a to the upper limit b, and f(k) is the summand. This notation expands to f(a) + f(a+1) + \cdots + f(b). A classic example is the sum of the first n natural numbers, expressed as \sum_{k=1}^{n} k = \frac{n(n+1)}{2}.

Infinite series extend this notation by replacing the upper limit with infinity: \sum_{k=1}^{\infty} f(k). However, such series only have a well-defined sum if they converge, meaning the sequence of partial sums approaches a finite limit; otherwise, the series diverges. The index variable in sigma notation is a dummy variable, allowing substitution with another letter provided the limits and summand are adjusted consistently to preserve the sum's value. For reindexing, a common technique is shifting the index, such as setting j = k + m for some m, which alters the summation bounds accordingly; for instance, if the original sum runs from k=1 to n, the reindexed sum runs from j=1+m to n+m. In printed mathematical texts, the sigma symbol appears as a distinct upright ∑, often enlarged for clarity. Handwritten forms vary but commonly resemble a stylized "S" or a lunate sigma curving from top-left to bottom-right, ensuring the symbol remains recognizable in handwritten work.
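The convergence criterion can be made concrete numerically. This small Python sketch (an illustration using the geometric series \sum_{k=1}^{\infty} 1/2^k, whose limit is 1) prints partial sums approaching that limit:

```python
# Partial sums s_n = sum_{k=1}^{n} (1/2)^k approach 1,
# so the infinite series converges to 1.
partial = 0.0
for n in range(1, 21):
    partial += 0.5 ** n
    if n % 5 == 0:
        print(f"s_{n} = {partial:.10f}")
# By n = 20 the partial sum is within 1e-6 of the limit.
```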

Special cases and variations

Ellipsis notation provides a simple, informal way to express sums of sequences, such as 1 + 2 + \dots + n, where the pattern of consecutive integers is evident. This approach is commonly used in educational settings or brief illustrations to visually convey the addition of terms without formal symbols, particularly when the number of terms is small or the progression is straightforward. However, it becomes imprecise for non-obvious patterns or variable limits, as the ellipsis (\dots) implies continuation without specifying exact bounds or conditions. In comparison, sigma notation offers greater precision and compactness for the same sum, written as \sum_{k=1}^n k, making it preferable for rigorous mathematical manipulation and generalization.

The product notation, symbolized by the capital Greek letter pi (\prod), functions as the multiplicative counterpart to summation. Whereas \sum_{k=1}^n a_k = a_1 + a_2 + \dots + a_n aggregates terms through addition, \prod_{k=1}^n a_k = a_1 \cdot a_2 \cdot \dots \cdot a_n combines them via multiplication, which is vital for expressions like factorials (n! = \prod_{k=1}^n k) or products appearing in differentiation rules. This distinction ensures clarity in operations, with the empty product conventionally defined as 1 to serve as the multiplicative identity, analogous to the empty sum being 0.

An algebraic sum incorporates the signs of terms, allowing positive and negative contributions within the total, as in \sum_{k=1}^n \epsilon_k a_k where each \epsilon_k = \pm 1. This contrasts with an arithmetic sum of solely positive quantities and aligns with the structure of polynomials, defined as the algebraic sum or difference of monomials. Such signed summations are practical in contexts requiring net effects, like balancing voltage equations in electrical networks or other physical systems.

The empty sum convention stipulates that a summation over no terms, such as \sum_{k=a}^{a-1} f(k), equals 0, reflecting the property that adding nothing preserves the original value. This rule maintains consistency in algebraic identities, such as those involving disjoint unions of index sets, by eliminating the need for exceptional cases when ranges are inverted or void. For example, it ensures \sum_{i \in A} f(i) + \sum_{i \in \emptyset} f(i) = \sum_{i \in A \cup \emptyset} f(i) holds without alteration.

Vector summation extends scalar summation component-wise: for \vec{v}_k = (v_{k1}, v_{k2}, \dots, v_{km}) in \mathbb{R}^m, the sum is \sum_{k=1}^n \vec{v}_k = \left( \sum_{k=1}^n v_{k1}, \sum_{k=1}^n v_{k2}, \dots, \sum_{k=1}^n v_{km} \right). This operation, visualized via the parallelogram or tail-to-tip methods, upholds the vector space axioms and applies analogously to matrices through element-wise addition, facilitating computations in linear algebra.
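These conventions map directly onto Python's built-in reductions. The sketch below (a minimal illustration with made-up vectors) checks the empty-sum and empty-product conventions and performs a component-wise vector sum:

```python
import math

# Empty sum is 0 (additive identity); empty product is 1 (multiplicative identity).
assert sum([]) == 0
assert math.prod([]) == 1

# Component-wise summation of vectors in R^3.
vectors = [(1, 0, 2), (3, 4, -1), (0, 1, 1)]
total = tuple(sum(components) for components in zip(*vectors))
print(total)  # (4, 5, 2)
```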

Historical development

The concept of summation has roots in ancient civilizations, where methods for computing sums of series were developed without modern symbolic notation. In Old Babylonian mathematics around 2000 BCE, scribes used algorithmic procedures to solve problems involving sequences and series, often presented through paradigmatic examples on clay tablets. These techniques included calculating totals for arithmetic progressions in practical contexts such as division problems, relying on verbal descriptions and tabular computations rather than abstract symbols. Greek mathematicians advanced these ideas geometrically around 300 BCE. Euclid, in his Elements (Book IX), provided deductive proofs for properties of numbers and geometric progressions from first principles, influencing later European mathematics.

During the medieval period, Indian scholars contributed verbal and formulaic descriptions of sums. Aryabhata (476–550 CE), in his Aryabhatiya, introduced explicit formulas for the sums of squares and cubes of the first n natural numbers, such as the sum of squares as n(n+1)(2n+1)/6, derived through inductive methods and applied to astronomical computations. These were expressed in Sanskrit verse without algebraic symbols but laid groundwork for systematic summation.

The Renaissance marked the shift toward symbolic representation in Europe. François Viète (1540–1603) pioneered literal notation in algebra, using letters to denote quantities in equations and expressions, including repeated terms that implied summation, as seen in his 1591 work In artem analyticam isagoge. This facilitated the transition from rhetorical to symbolic mathematics. In the 18th century, Leonhard Euler introduced the sigma symbol (Σ) for summation in 1755, in his Institutiones calculi differentialis, denoting the sum of a sequence compactly, such as Σ a for the total of terms a.

The 19th century saw standardization through rigorous analysis. Augustin-Louis Cauchy formalized summation in the context of limits and convergence, providing arithmetical definitions for infinite series in works like Cours d'analyse (1821), emphasizing convergence criteria to distinguish valid sums from divergent ones. This work, alongside contributions from his successors, integrated summation into modern analysis. In the 20th century, summation notation was refined and extended in set theory and foundational mathematics, where developments in axiomatic set theory generalized summation over arbitrary index sets.

Foundations

Formal definition

In abstract algebra, the summation operation can be formally defined as iterated addition in an abelian group. Given an index set I and a function f: I \to G, where G is an abelian group under addition, the sum \sum_{i \in I} f(i) is defined for finite I as the result of iteratively applying the group's addition operation to the values f(i) for i \in I. This definition relies on the additive structure of G, ensuring that the result is independent of the order and grouping of terms due to the associativity and commutativity of addition. When the codomain is a commutative ring R (whose additive group is abelian), and linearity is considered with scalars in R, the summation aligns with the ring structure.

For the specific finite case where I = \{1, 2, \dots, n\} with n \in \mathbb{N}, the sum is given by \sum_{i=1}^n f(i) = f(1) + f(2) + \dots + f(n). This can be established recursively: the base case for n=1 is \sum_{i=1}^1 f(i) = f(1); assuming it holds for n=k, then for n=k+1, \sum_{i=1}^{k+1} f(i) = \sum_{i=1}^k f(i) + f(k+1). By mathematical induction, the equality follows for all positive integers n.

In an abelian group G, finite sums exhibit associativity and commutativity, meaning that for disjoint finite index sets I and J, \sum_{i \in I \cup J} f(i) = \left( \sum_{i \in I} f(i) \right) + \left( \sum_{j \in J} f(j) \right) regardless of partitioning, as addition in G satisfies these properties. Additionally, if G is an R-module for a ring R, summation is linear: for scalars a, b \in R and functions f, g: I \to G with finite I, \sum_{i \in I} (a f(i) + b g(i)) = a \sum_{i \in I} f(i) + b \sum_{i \in I} g(i). This follows directly from the distributivity of scalar multiplication over addition in the module.

For countable index sets I, the sum \sum_{i \in I} f(i) extends the finite case via limits of partial sums over exhausting sequences of finite subsets. Consider an exhausting sequence of finite subsets (F_n)_{n=1}^\infty of I, where each F_n is finite, F_n \subseteq F_{n+1}, and every finite subset of I is contained in some F_n. The partial sums are s_n = \sum_{i \in F_n} f(i), and the infinite sum is the limit \lim_{n \to \infty} s_n in the extended reals, provided it exists unconditionally (independently of the exhausting sequence chosen).
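The recursive characterization translates almost verbatim into code. The sketch below (the function name iterated_sum is my own; real numbers stand in for a general abelian group) defines summation over a finite index sequence by iterated addition, with the empty sum as the base case:

```python
from typing import Callable, Sequence

def iterated_sum(index_set: Sequence[int], f: Callable[[int], float]) -> float:
    """Sum f over a finite index sequence by iterated addition.

    Base case: the empty sum is 0. Recursive step: peel off the last
    index and add its value to the sum over the remaining indices.
    """
    if not index_set:
        return 0.0  # empty sum convention
    *rest, last = index_set
    return iterated_sum(rest, f) + f(last)

# Matches the closed form for I = {1, ..., n}.
n = 10
assert iterated_sum(range(1, n + 1), lambda i: i) == n * (n + 1) / 2
```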

Generalizations to sequences and sets

Summation extends naturally from finite sums to sequences, where the partial sum of a sequence (a_k)_{k=1}^\infty is defined as s_n = \sum_{k=1}^n a_k, representing the sum of the first n terms. The infinite sum, or series, \sum_{k=1}^\infty a_k is then the limit \lim_{n \to \infty} s_n, provided this limit exists and is finite; otherwise, the series diverges. This construction bridges finite summation to the study of convergence in real or complex analysis, allowing the evaluation of expressions like geometric series where the partial sums approach a closed form.

For finite sets, summation can be defined without regard to order, as \sum_{i \in S} f(i) for a function f: S \to \mathbb{R} and finite set S, where the result is independent of any enumeration due to the commutativity and associativity of addition in \mathbb{R}. This unordered sum coincides with the standard iterated sum over any bijection from a finite index set to S, making it well-defined for combinatorial applications such as counting subsets or aggregating values over discrete structures.

Multi-index summation generalizes to products of index sets, such as the double sum \sum_{i \in I} \sum_{j \in J} f(i,j) for finite or countably infinite sets I and J. When f(i,j) \geq 0 for all i,j, the order of summation can be interchanged by a discrete analogue of Fubini's theorem, ensuring \sum_{i \in I} \sum_{j \in J} f(i,j) = \sum_{j \in J} \sum_{i \in I} f(i,j), which facilitates computations in lattice theory and generating functions.

In abstract algebra, summation appears in structures like formal power series rings, where R[[x]] for a ring R consists of infinite formal sums \sum_{n=0}^\infty a_n x^n with a_n \in R, equipped with termwise addition and a convolution (Cauchy) product for multiplication, independent of convergence. This framework extends summation to modules over rings or group rings, enabling algebraic manipulations in combinatorics and algebra without analytic concerns.

A key convention arises for empty index sets, where the vacuous sum \sum_{i \in \emptyset} f(i) is defined as 0, serving as the additive identity and ensuring consistency in inductive definitions and recursive formulas across empty cases. This aligns with the general principle that operations over empty collections yield the identity element of the structure.
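The discrete analogue of Fubini's theorem mentioned above can be probed numerically for nonnegative terms. This sketch (a finite truncation of the countable index sets, with the truncation level chosen arbitrarily) sums f(i,j) = 2^{-(i+j)} in both orders:

```python
# Discrete Fubini check: for f(i, j) >= 0 the two iterated sums agree.
N = 30  # truncation level for the countable index sets
f = lambda i, j: 0.5 ** (i + j)

row_first = sum(sum(f(i, j) for j in range(1, N)) for i in range(1, N))
col_first = sum(sum(f(i, j) for i in range(1, N)) for j in range(1, N))
assert abs(row_first - col_first) < 1e-12
print(row_first)  # approaches (sum_{i>=1} 2^-i)^2 = 1 as N grows
```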

Advanced frameworks

Measure-theoretic summation

In measure theory, summation over discrete spaces is formalized using the Lebesgue integral with respect to a counting measure. For a countable set I and a function f: I \to \mathbb{R}, the counting measure \mu on I assigns \mu(\{i\}) = 1 for each i \in I, so the integral \int_I f \, d\mu = \sum_{i \in I} f(i). This extends classical discrete summation to the measure-theoretic framework, where \sum_{i \in I} f(i) \mu(\{i\}) recovers the ordinary sum when \mu is the counting measure on a discrete space.

The concept generalizes to simple functions, which are finite sums of the form \phi = \sum_{k=1}^n c_k \chi_{A_k}, where c_k \in \mathbb{R} and \chi_{A_k} is the characteristic function of a measurable set A_k with finite measure. The Lebesgue integral of such a simple function is \int \phi \, d\mu = \sum_{k=1}^n c_k \mu(A_k), providing an approximation to the integral of more general measurable functions. For a non-negative measurable function f, the Lebesgue integral is defined as the supremum over all such integrals: \int f \, d\mu = \sup \left\{ \sum_{k=1}^n c_k \mu(A_k) \mid 0 \leq \phi = \sum_{k=1}^n c_k \chi_{A_k} \leq f \right\}. This construction links discrete summation to continuous integration by viewing the latter as a limit of refined sums over partitions.

Point masses are handled via Dirac measures, where the Dirac measure \delta_x at a point x satisfies \delta_x(A) = 1 if x \in A and 0 otherwise. The integral with respect to \delta_x yields \int f \, d\delta_x = f(x), allowing classical sums to be recovered as integrals against superpositions of point masses. For instance, the counting measure on a countable set is the sum of the Dirac measures at each of its points.

In probability theory, this framework unifies expectation as a Lebesgue integral. For a discrete random variable X taking values in a countable set with probabilities P(X = x), the expected value is E[X] = \sum_x x P(X = x) = \int x \, dP, where P is the induced probability measure on the range of X. This representation treats probabilities as weights analogous to measures in the summation.
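As a concrete instance of expectation as a measure-weighted sum, the sketch below (using a hypothetical fair-die distribution) computes E[X] = \sum_x x P(X = x), and notes that integrating against a Dirac point mass simply evaluates the integrand:

```python
# Expected value of a discrete random variable as a weighted sum:
# E[X] = sum over the range of X of x * P(X = x).
pmf = {x: 1 / 6 for x in range(1, 7)}  # fair six-sided die
expected = sum(x * p for x, p in pmf.items())
print(expected)  # 3.5

# Integration against the Dirac measure delta_3 just evaluates f at 3.
f = lambda t: t ** 2
assert f(3) == 9  # \int f d(delta_3) = f(3)
```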

Calculus of finite differences

In the calculus of finite differences, summation serves as the discrete analog of integration, acting as the inverse operation to the finite difference operator. The forward difference operator, denoted Δ, is defined for a function f on the integers as Δf(n) = f(n+1) - f(n). Higher-order differences are obtained by iterated application: the k-th forward difference is Δ^k f(n) = Δ(Δ^{k-1} f(n)), with Δ^0 f(n) = f(n). These operators capture discrete changes in a manner parallel to derivatives in continuous calculus.

A key property linking differences to summation is the telescoping nature of sums of differences: for integers a ≤ b, the definite sum \sum_{k=a}^b Δf(k) = f(b+1) - f(a). This identity demonstrates how summation "undoes" the differencing process, much like the fundamental theorem of calculus relates integrals to antiderivatives. Indefinite summation, or antidifferencing, seeks a function F such that ΔF(n) = f(n); such an F is called an antidifference of f. By analogy with the indefinite integral, the notation \sum f(n) \, \delta n is sometimes used to denote this discrete integral, emphasizing its role as the inverse of Δ.

Newton's forward difference formula provides a means to interpolate functions using finite differences, which in turn facilitates the evaluation of sums by expressing them in closed form. For equally spaced points with step size h = 1 starting at x_0, the formula interpolates f(x) as f(x) = \sum_{k=0}^m \binom{s}{k} Δ^k f(x_0), where s = (x - x_0)/h and m is the degree of the interpolating polynomial. When applied to polynomial sequences, this interpolation yields exact summation formulas; for instance, the sum of the first n cubes can be derived by fitting a polynomial to the partial sums via forward differences, resulting in (n(n+1)/2)^2.

Finite differences and summation find practical application in solving linear recurrence relations, where the relation itself can be viewed as a difference equation. For a non-homogeneous linear recurrence of the form a_n = c a_{n-1} + f(n), the general solution consists of the solution of the homogeneous part plus a particular solution obtained by indefinite summation of the non-homogeneous term f(n). This method leverages the antidifference to express the general solution explicitly, providing a discrete counterpart to antiderivatives in differential equations.
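The telescoping identity is easy to verify computationally. This minimal sketch (the helper name delta is my own) builds the forward difference operator and checks \sum_{k=a}^{b} \Delta f(k) = f(b+1) - f(a) for a sample function:

```python
# Forward difference operator: (delta f)(n) = f(n + 1) - f(n).
def delta(f):
    return lambda n: f(n + 1) - f(n)

f = lambda n: n ** 3  # any function on the integers works here
a, b = 2, 10
telescoped = sum(delta(f)(k) for k in range(a, b + 1))
assert telescoped == f(b + 1) - f(a)  # telescoping identity

# An antidifference example: applying delta to the harmonic numbers H_n
# gives H_{n+1} - H_n = 1/(n + 1), so H is an antidifference of 1/(n + 1).
```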

Approximations

Integral approximations

Integral approximations provide a bridge between discrete summation and continuous integration, particularly useful for estimating sums over large ranges where the summand function f varies smoothly. For a sum \sum_{k=a}^{b} f(k), a basic approach replaces the discrete sum with the continuous integral \int_{a}^{b} f(x) \, dx, but this often underestimates or overestimates depending on the function's behavior. More refined methods incorporate endpoint corrections and higher-order terms to improve accuracy.

Simple approximations include the trapezoidal and midpoint rules, adapted from numerical quadrature. The trapezoidal rule approximates the sum as \int_{a}^{b} f(x) \, dx + \frac{f(a) + f(b)}{2}, treating the sum as a Riemann sum with linear interpolation between points. This corresponds to the leading terms of more advanced expansions and reduces error for monotonically decreasing functions. The midpoint rule shifts the integration limits to \int_{a-1/2}^{b+1/2} f(x) \, dx, effectively evaluating the function at interval centers, which can yield better results for convex functions by avoiding endpoint biases. These methods are of low order but serve as foundational tools before employing more sophisticated formulas.

The Euler-Maclaurin formula offers a systematic way to approximate sums with increasing precision: \sum_{j=a}^{n} f(j) = \int_{a}^{n} f(x) \, dx + \frac{f(a) + f(n)}{2} + \sum_{s=1}^{m-1} \frac{B_{2s}}{(2s)!} \left( f^{(2s-1)}(n) - f^{(2s-1)}(a) \right) + R_m, where B_{2s} are Bernoulli numbers, f^{(k)} denotes the k-th derivative, and R_m is the remainder term given by R_m = \int_{a}^{n} \frac{\widetilde{B}_{2m}(x)}{(2m)!} f^{(2m)}(x) \, dx, with \widetilde{B}_{2m}(x) the periodic Bernoulli function. This expansion refines the trapezoidal approximation by adding correction terms involving even-order Bernoulli numbers, capturing the discrepancy between discrete and continuous evaluations. The formula, discovered independently by Euler and Maclaurin in the 1730s, is asymptotic and particularly effective for smooth f with decaying higher derivatives.

A classic example is the approximation of the nth harmonic number H_n = \sum_{k=1}^{n} \frac{1}{k}. Applying the Euler-Maclaurin formula to f(x) = 1/x from 1 to n yields the integral \int_1^n \frac{1}{x} \, dx = \ln n, plus endpoint corrections \frac{1 + 1/n}{2}, and higher terms involving derivatives of 1/x. The expansion simplifies to H_n \approx \ln n + \gamma + \frac{1}{2n} - \sum_{s=1}^{\infty} \frac{B_{2s}}{2s \, n^{2s}}, where \gamma \approx 0.57721 is the Euler-Mascheroni constant, defined as the limit \gamma = \lim_{n \to \infty} (H_n - \ln n). The leading approximation H_n \approx \ln n + \gamma captures the dominant logarithmic growth, with subsequent terms providing refinements for finite n. This derivation highlights how the formula extracts asymptotic behavior from the integral while accounting for discrete effects.

Error bounds in the Euler-Maclaurin formula are controlled by the remainder R_m, whose magnitude depends on the supremum of |f^{(2m)}(x)| over [a, n] and on the size of the periodic Bernoulli function, typically |R_m| \leq \frac{\max_x |\widetilde{B}_{2m}(x)|}{(2m)!} (n - a) \max |f^{(2m)}(x)|. For practical use, truncating after a few terms often suffices when higher derivatives decrease rapidly; for many well-behaved summands the error shares the sign of the first omitted term, aiding conservative estimates. These bounds ensure the approximation's reliability for large n.
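Truncating the harmonic-number expansion after the 1/(12n^2) term already gives excellent accuracy. The sketch below (with the constant hard-coded for illustration) compares the direct sum against this Euler-Maclaurin approximation:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def harmonic_direct(n):
    return sum(1.0 / k for k in range(1, n + 1))

def harmonic_euler_maclaurin(n):
    # ln n + gamma + 1/(2n) - 1/(12 n^2): leading terms of the expansion
    return math.log(n) + GAMMA + 1 / (2 * n) - 1 / (12 * n ** 2)

n = 1000
print(harmonic_direct(n))           # 7.485470860550...
print(harmonic_euler_maclaurin(n))  # agrees to roughly 1e-13 at n = 1000
```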
The approximations can fail or require caution for oscillatory functions or those with rapidly varying behavior, where higher derivatives grow unbounded or alternate in sign, causing the correction series to diverge despite the integral providing a rough estimate. In such scenarios, the remainder does not diminish, and alternative methods like Poisson summation may be preferable.

Asymptotic growth rates

The asymptotic growth rate of a finite sum provides insight into its dominant behavior as the upper limit n approaches infinity, often analyzed using Big-O, Big-Omega, and Big-Theta notations to capture upper, lower, and tight bounds, respectively. For polynomial sums, such as \sum_{k=1}^n k, the exact closed form is \frac{n(n+1)}{2}, which is \Theta(n^2) since it grows quadratically. This bound can be established via integral comparison: the sum \sum_{k=1}^n k is sandwiched between \int_1^n x \, dx = \frac{n^2}{2} - \frac{1}{2} and \int_0^n (x+1) \, dx = \frac{n^2}{2} + n, both of which are \Theta(n^2). More generally, \sum_{k=1}^n k^p = \Theta(n^{p+1}) for fixed p > -1, reflecting the leading term from the closed-form formula or integral bounds.

The harmonic series partial sum H_n = \sum_{k=1}^n \frac{1}{k} diverges logarithmically, with the asymptotic expansion H_n \sim \ln n + \gamma + \frac{1}{2n} - \frac{1}{12n^2} + \cdots, where \gamma \approx 0.57721 is the Euler-Mascheroni constant, defined as \gamma = \lim_{n \to \infty} (H_n - \ln n). This expansion arises from the Euler-Maclaurin formula, linking the sum to the integral of 1/x plus corrections.

For the finite geometric series \sum_{k=0}^{n-1} ar^k = a \frac{1 - r^n}{1 - r} with r \neq 1, the asymptotic behavior as n \to \infty depends on |r|: if |r| < 1, the sum converges to \frac{a}{1 - r} since r^n \to 0; if |r| > 1, it diverges exponentially. The infinite p-series \sum_{k=1}^\infty \frac{1}{k^p} converges if p > 1 and diverges if p \leq 1, as determined by the integral test: \int_1^\infty x^{-p} \, dx converges precisely when p > 1. For p = 1, the divergence is logarithmic, mirroring the growth H_n \sim \ln n of the partial sums.

Stirling's approximation connects factorial growth to sums via \ln(n!) = \sum_{k=1}^n \ln k \sim n \ln n - n + \frac{1}{2} \ln(2\pi n), yielding n! \sim \sqrt{2\pi n} \, (n/e)^n as n \to \infty. This derives from approximating the sum of logarithms by its integral \int_1^n \ln x \, dx = n \ln n - n + 1, refined by the Euler-Maclaurin formula.
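The quality of Stirling's approximation is easy to inspect numerically. This sketch compares \ln(n!) computed as a sum of logarithms against the asymptotic formula; the residual is close to the next correction term, 1/(12n):

```python
import math

# Stirling: ln(n!) = sum_{k=1}^{n} ln k ~ n ln n - n + (1/2) ln(2 pi n).
n = 50
exact = sum(math.log(k) for k in range(1, n + 1))
stirling = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
print(exact, stirling)
print(exact - stirling)  # ~ 1/(12 * 50) = 0.001667, the next correction term
```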

Identities

General summation rules

Summation operations obey several fundamental algebraic rules that facilitate the manipulation and simplification of expressions involving sums. These rules are derived from the definition of a sum as the result of repeated addition over a finite index set I, and they hold for finite sums where the index set has finite cardinality |I|.

Linearity

One of the core properties of summation is linearity, which states that for scalar constants \alpha and \beta, and functions f and g defined on the index set I, \sum_{i \in I} (\alpha f(i) + \beta g(i)) = \alpha \sum_{i \in I} f(i) + \beta \sum_{i \in I} g(i). This property follows directly from the distributive law of multiplication over addition and the definition of summation as repeated addition. To see this, expand both sides: the left side unfolds to \alpha f(i_1) + \beta g(i_1) + \cdots + \alpha f(i_n) + \beta g(i_n), where n = |I|, which regroups into \alpha (f(i_1) + \cdots + f(i_n)) + \beta (g(i_1) + \cdots + g(i_n)), matching the right side. Special cases include additivity (\alpha = \beta = 1) and homogeneity (\beta = 0); the full property enables scaling and superposition of sums. This rule is essential for decomposing complicated sums into manageable parts.
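A quick numerical check of linearity (a throwaway sketch with arbitrary integer-valued functions):

```python
# Linearity: sum(alpha*f + beta*g) == alpha*sum(f) + beta*sum(g)
# over the same index set.
alpha, beta = 3, -2
f = lambda i: i ** 2
g = lambda i: 2 * i + 1
I = range(1, 11)

lhs = sum(alpha * f(i) + beta * g(i) for i in I)
rhs = alpha * sum(f(i) for i in I) + beta * sum(g(i) for i in I)
assert lhs == rhs
```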

Interchange of summation and constants

For a constant c independent of the index and a finite index set I with |I| = n, \sum_{i \in I} c = c \cdot n = c \cdot |I|. The proof is immediate from the definition: the sum adds c exactly n times, so c + c + \cdots + c (n terms) equals c n. Alternatively, it follows from linearity by setting f(i) = 1 for all i, yielding \sum_{i \in I} c \cdot 1 = c \sum_{i \in I} 1 = c n. This rule simplifies expressions where constants appear inside sums, such as in average calculations or when extracting fixed factors. It applies only to finite sums, as infinite cases require convergence considerations not addressed here.

Splitting sums

Sums over contiguous index ranges can be split additively. For a function f and integers 1 \leq m < n, \sum_{k=1}^n f(k) = \sum_{k=1}^m f(k) + \sum_{k=m+1}^n f(k). This holds by the definition of summation: the full sum is the addition of the first m terms followed by the remaining n - m terms, with no overlap or omission, so the direct partitioning of the index range suffices as proof. More generally, for disjoint finite subsets I_1 and I_2 partitioning I, \sum_{i \in I} f(i) = \sum_{i \in I_1} f(i) + \sum_{i \in I_2} f(i). Such splitting is useful for isolating partial sums or applying different techniques to subranges, as in telescoping series or inductive proofs.

Change of index

The value of a sum is invariant under a bijective reindexing of its domain. Specifically, for integers a, b, c with b \leq c, and the shift j = k + a, the sum transforms as \sum_{k=b}^c f(k) = \sum_{j=b+a}^{c+a} f(j - a). To verify, note that as k ranges from b to c, j ranges from b+a to c+a, and each term f(k) = f(j - a) pairs uniquely, preserving the set of terms added. This follows from the bijection between the index ranges and the additivity of the sum. The index variable is a dummy: renaming it alone leaves the sum unchanged, while shifting it requires the compensating adjustments to the bounds and summand shown above to maintain equality. Reindexing standardizes starting points (e.g., from 1 to 0) or aligns terms in manipulations such as term-by-term differentiation of series.
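The shift rule can be confirmed mechanically. This sketch (sample values for a, b, c chosen arbitrarily) checks both sides of the reindexed sum:

```python
# Index-shift invariance: sum_{k=b}^{c} f(k) == sum_{j=b+a}^{c+a} f(j - a).
f = lambda k: k ** 2 + 1
a, b, c = 5, 3, 12

lhs = sum(f(k) for k in range(b, c + 1))              # k = b, ..., c
rhs = sum(f(j - a) for j in range(b + a, c + a + 1))  # j = b+a, ..., c+a
assert lhs == rhs
```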

Product of sums

For finite index sets I and J, and elements a_i for i \in I and b_j for j \in J, \left( \sum_{i \in I} a_i \right) \left( \sum_{j \in J} b_j \right) = \sum_{i \in I} \sum_{j \in J} a_i b_j. This distributive property expands the product: each a_i multiplies the entire sum of the b_j, yielding a double sum of products a_i b_j over all pairs (i, j). For example, with |I| = m and |J| = n, the right side has m n terms matching the left side's expansion. The rule relies on finite cardinality to ensure well-definedness. It underpins expansions in algebra and probability, such as computing variances or multiplying generating functions, but requires caution in infinite cases due to potential non-convergence.
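The expansion into a double sum is likewise one line to verify (a minimal sketch with arbitrary finite lists):

```python
# Distributive expansion: (sum a_i)(sum b_j) == sum over all pairs of a_i * b_j.
a = [2, 3, 5]
b = [1, 4]

lhs = sum(a) * sum(b)
rhs = sum(x * y for x in a for y in b)  # m * n = 6 terms
assert lhs == rhs == 50
```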

Sums of powers and progressions

The sum of an arithmetic progression with first term a and common difference d is given by \sum_{k=0}^{n-1} (a + k d) = n a + \frac{d n (n-1)}{2}. This formula arises from pairing terms or using the difference of the last and first terms, and was famously applied by Carl Friedrich Gauss as a schoolboy to compute the sum of the first 100 integers.

For a geometric progression, the finite sum starting from the zeroth power is \sum_{k=0}^{n-1} r^k = \frac{1 - r^n}{1 - r}, valid for r \neq 1. When |r| < 1, the infinite series converges to \sum_{k=0}^{\infty} r^k = \frac{1}{1 - r}. This closed form follows from multiplying the sum by r and subtracting, which telescopes the geometric terms.

Power sums \sum_{k=1}^n k^m admit polynomial closed forms. For small exponents, explicit expressions are \sum_{k=1}^n k = \frac{n(n+1)}{2}, \quad \sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}, \quad \sum_{k=1}^n k^3 = \left( \frac{n(n+1)}{2} \right)^2. In general, Faulhaber's formula expresses the p-th power sum as a polynomial of degree p+1 in n: \sum_{k=1}^n k^p = \frac{1}{p+1} \sum_{j=0}^p (-1)^j \binom{p+1}{j} B_j n^{p+1-j}, where the B_j are the Bernoulli numbers, which encode the coefficients and satisfy B_1 = -\frac{1}{2} and B_j = 0 for odd j > 1. The formula is named for Johann Faulhaber, who published power-sum formulas in 1631; the general form in terms of his namesake numbers came later with Jacob Bernoulli.

The sum \sum_{k=1}^n \ln(1 + k) equals exactly \ln((n+1)!), by the property that the logarithm of a product is the sum of the logarithms. For large n, Stirling's approximation provides \ln(n!) \approx n \ln n - n + \frac{1}{2} \ln(2 \pi n), derived from integral approximations to the sum, such as \sum_{k=1}^n \ln k \approx \int_1^n \ln x \, dx.
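The small-exponent closed forms are straightforward to sanity-check against brute-force summation (a minimal sketch; integer arithmetic keeps the checks exact):

```python
# Verify the closed forms for sums of k, k^2, and k^3 against direct summation.
n = 200
assert sum(k for k in range(1, n + 1)) == n * (n + 1) // 2
assert sum(k ** 2 for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(k ** 3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2
```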

Binomial and factorial identities

Binomial identities play a central role in summations involving combinatorial coefficients, providing closed forms for sums that arise in counting problems and generating functions. The binomial theorem expresses the expansion of a binomial raised to a power as a sum of terms weighted by binomial coefficients. Specifically, for nonnegative integer n and variables x and y, (x + y)^n = \sum_{k=0}^n \binom{n}{k} x^{n-k} y^k. This identity serves as a generating function where the coefficients \binom{n}{k} enumerate the ways to choose k items from n, and the sum captures the total expansion.

A key convolution identity for binomial coefficients is Vandermonde's identity, which equates a sum of products of two binomial coefficients to a single binomial coefficient. It states that \sum_{k=0}^r \binom{m}{k} \binom{n}{r-k} = \binom{m+n}{r}, for nonnegative integers m, n, and r. This result, named after Alexandre-Théophile Vandermonde, has a combinatorial interpretation: the right side counts the ways to choose r elements from a set of m + n elements, while the left side partitions the choices into k from the first m and r - k from the remaining n. Algebraic proofs rely on generating functions, such as comparing coefficients in the product of (1 + x)^m and (1 + x)^n.

Another important summation identity for binomial coefficients is the hockey-stick identity, which sums binomial coefficients along a diagonal of Pascal's triangle. It asserts that \sum_{k=r}^n \binom{k}{r} = \binom{n+1}{r+1}, for integers n \geq r \geq 0. The identity derives its name from the shape the summed terms trace in Pascal's triangle and admits a combinatorial interpretation: the right side counts the ways to choose r+1 elements from n+1, while the left side accumulates selections according to the position of the largest chosen element, which ranges from r to n. Inductive and algebraic proofs confirm its validity across nonnegative integers.

Summation identities involving factorials often connect to exponential generating functions, though closed forms are limited. The infinite sum \sum_{k=1}^\infty \frac{1}{k!} = e - 1, where e is the base of the natural logarithm, follows from the expansion of e^x at x=1, excluding the k=0 term, which is 1. This relation highlights the rapid convergence of the reciprocals of factorials in approximating e \approx 2.71828. In contrast, the finite sum \sum_{k=1}^n k! lacks a simple closed form and is typically expressed using special functions such as the exponential integral, reflecting its incomplete nature relative to the exponential series.

Identities for partition-related sums involve Stirling numbers of the second kind, S(n,k), which count the ways to partition n distinct objects into k nonempty unlabeled subsets. The total number of partitions of an n-element set, given by the Bell number B_n, is the sum \sum_{k=0}^n S(n,k) = B_n. This summation identity underscores the combinatorial structure of set partitions, with B_n growing rapidly (e.g., B_5 = 52) and appearing as the coefficients of the exponential generating function e^{e^x - 1}.
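Both convolution identities can be spot-checked with exact integer arithmetic. This sketch (sample parameters chosen arbitrarily) uses math.comb, which returns 0 when the lower index exceeds the upper, exactly as the identities require:

```python
from math import comb

# Vandermonde: sum_k C(m, k) * C(n, r - k) == C(m + n, r).
m, n, r = 7, 5, 6
assert sum(comb(m, k) * comb(n, r - k) for k in range(r + 1)) == comb(m + n, r)

# Hockey stick: sum_{k=r}^{n} C(k, r) == C(n + 1, r + 1).
n2, r2 = 10, 3
assert sum(comb(k, r2) for k in range(r2, n2 + 1)) == comb(n2 + 1, r2 + 1)
```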

Harmonic numbers

The nth harmonic number is defined as the partial sum of the harmonic series, H_n = \sum_{k=1}^n \frac{1}{k}, which arises in contexts ranging from the analysis of algorithms to asymptotic approximations. These numbers grow logarithmically with n and are fundamental in studying sums of reciprocals. The generalized harmonic numbers extend this concept to higher orders, defined as H_n^{(s)} = \sum_{k=1}^n \frac{1}{k^s} for a positive integer s, where H_n^{(1)} = H_n recovers the standard case; for s > 1, these sums converge as n approaches infinity to the Riemann zeta value \zeta(s).

Harmonic numbers satisfy the simple recurrence H_n = H_{n-1} + \frac{1}{n} for n \geq 1, with the base case H_0 = 0. This recursive structure allows efficient computation and highlights their cumulative nature. An elegant integral representation is given by H_n = \int_0^1 \frac{1 - t^n}{1 - t} \, dt, which follows from expanding the integrand as the geometric sum \frac{1 - t^n}{1 - t} = \sum_{k=0}^{n-1} t^k and integrating term by term. Additionally, harmonic numbers connect to special functions via the digamma function \psi(z), the logarithmic derivative of the gamma function, through the relation \psi(n+1) = -\gamma + H_n, where \gamma \approx 0.57721 is the Euler-Mascheroni constant, defined as \gamma = \lim_{n \to \infty} (H_n - \ln n). This link facilitates analytic continuations and deeper properties in complex analysis.

Key identities include the asymptotic expansion H_n = \ln n + \gamma + \frac{1}{2n} - \frac{1}{12n^2} + \frac{1}{120n^4} - \cdots, which provides precise approximations for large n and reveals the divergent yet useful nature of the full Euler-Maclaurin series. A notable variant is the alternating harmonic series, whose infinite sum is \sum_{k=1}^\infty \frac{(-1)^{k+1}}{k} = \ln 2, demonstrating conditional convergence and serving as a classic example in the study of series rearrangements. These representations and identities underscore the harmonic numbers' role in bridging discrete sums with continuous functions and transcendental constants.
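The recurrence and the integral representation agree, as the following sketch shows by evaluating the integral with a simple midpoint rule (step count chosen for illustration; the integrand extends continuously to t = 1 with value n):

```python
def harmonic(n):
    # H_n via the recurrence H_n = H_{n-1} + 1/n, with H_0 = 0.
    h = 0.0
    for k in range(1, n + 1):
        h += 1.0 / k
    return h

def harmonic_integral(n, steps=100_000):
    # Midpoint rule for H_n = integral_0^1 (1 - t^n) / (1 - t) dt.
    dt = 1.0 / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        total += (1 - t ** n) / (1 - t)
    return total * dt

n = 10
print(harmonic(n))           # 2.9289682539682538
print(harmonic_integral(n))  # ~2.928968..., agreeing to several decimals
```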
