Dirichlet's test is a convergence criterion in mathematical analysis for infinite series of real or complex numbers. It states that if the partial sums of the series \sum a_n are bounded by some constant M > 0, and if the sequence \{b_n\} is monotone decreasing to 0, then the series \sum a_n b_n converges.[1]

Named after the German mathematician Peter Gustav Lejeune Dirichlet (1805–1859), the test was introduced in his 1829 paper on the convergence of Fourier series, where it played a key role in establishing pointwise convergence under certain conditions on the function.[2] The criterion generalizes the alternating series test (Leibniz test), which is recovered by setting a_n = (-1)^{n+1} and b_n = c_n > 0 decreasing to 0, and it is particularly useful for proving conditional convergence of series that do not converge absolutely.[1]

The proof of Dirichlet's test relies on summation by parts, analogous to integration by parts, which bounds the remainder of the partial sums and shows they form a Cauchy sequence.[3] Applications include the convergence of trigonometric series such as \sum_{n=1}^\infty \frac{\cos(nx)}{n} for x \in (0, 2\pi), where the partial sums of \sum \cos(nx) are bounded, and more broadly in Fourier analysis and analytic number theory.[2] A related criterion known as Abel's test states that if \sum a_n converges and the sequence \{b_n\} is monotone and bounded, then the series \sum a_n b_n converges.[1]
Background Concepts
Infinite Series and Convergence
An infinite series is formally defined as the sum \sum_{n=1}^\infty a_n, where a_n represents the terms of an infinite sequence of real or complex numbers, and convergence of the series is determined by the behavior of its partial sums.[4] The partial sums of the series are given by s_k = \sum_{n=1}^k a_n for each positive integer k, and the series converges if and only if the sequence of partial sums \{s_k\} converges to a finite limit as k \to \infty.[5] This convergence is equivalently characterized by the Cauchy criterion: for every \epsilon > 0, there exists a positive integer N such that |s_m - s_n| < \epsilon whenever m, n > N.[6]

A series \sum a_n exhibits absolute convergence if the series of absolute values \sum |a_n| converges; in this case, the original series \sum a_n necessarily converges.[7] However, a series may converge conditionally, meaning \sum a_n converges while \sum |a_n| diverges, illustrating that absolute convergence is a stronger condition than mere convergence. To assess convergence, several standard tests are employed, including the ratio test, which examines \lim_{n \to \infty} |a_{n+1}/a_n|, the root test, which considers \lim_{n \to \infty} \sqrt[n]{|a_n|}, the comparison test, and the integral test, each applicable to specific forms of series and serving as foundational tools before advancing to more specialized criteria.[8][9]
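A minimal numerical sketch of these definitions (the choice of series, term counts, and the plain-Python implementation are illustrative assumptions, not taken from the cited sources): the partial sums of the geometric series \sum (1/2)^n settle at a finite limit, while those of the harmonic series keep growing.

```python
# Illustrative sketch: partial sums of a convergent and a divergent series.
def partial_sums(terms, k_max):
    """Return the list [s_1, ..., s_{k_max}] of partial sums of terms(n)."""
    sums, s = [], 0.0
    for n in range(1, k_max + 1):
        s += terms(n)
        sums.append(s)
    return sums

geometric = partial_sums(lambda n: 0.5 ** n, 50)   # converges to 1
harmonic  = partial_sums(lambda n: 1.0 / n, 50)    # diverges (grows like ln k)

print(geometric[-1])  # ~= 1.0, the limit of the convergent series
print(harmonic[-1])   # ~= 4.5, still increasing without bound
```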
Partial Sums and Boundedness
In the context of infinite series, the partial sums of a series \sum_{n=1}^\infty a_n are defined as the sequence s_k = \sum_{n=1}^k a_n for each positive integer k. These partial sums are bounded if there exists a constant M > 0 such that |s_k| \leq M for all k \in \mathbb{N}. Boundedness of the partial sums is a fundamental property in the study of series convergence, as it provides a necessary condition: if the series converges to a finite limit, then the sequence of partial sums must converge and hence be bounded.[10]

Consider the harmonic series \sum_{n=1}^\infty \frac{1}{n}, whose partial sums H_k = \sum_{n=1}^k \frac{1}{n} grow without bound, approximately as H_k \approx \ln k + \gamma, where \gamma \approx 0.57721 is the Euler-Mascheroni constant; this logarithmic growth implies the series diverges. In contrast, the alternating harmonic series \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} has partial sums that remain bounded between 0 and 1, and in fact converges conditionally to \ln 2. These examples illustrate how unbounded partial sums signal divergence, while boundedness can occur in both convergent and certain divergent cases.[11][12]

Although bounded partial sums are necessary for convergence, they are not sufficient, as demonstrated by the series \sum_{n=1}^\infty (-1)^{n+1}, whose partial sums alternate between 1 (for odd k) and 0 (for even k), remaining bounded within [0, 1] but failing to approach a single limit and thus diverging by oscillation. This highlights the limitations of boundedness alone in determining convergence. The notion of bounded partial sums as a prerequisite for series convergence traces back to Augustin-Louis Cauchy's foundational work in his 1821 textbook Cours d'analyse de l'École Royale Polytechnique, where he established criteria involving the behavior of partial sums, predating Peter Gustav Lejeune Dirichlet's formalization of the test in his 1829 paper on the convergence of Fourier series.[13]
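The three behaviors described above can be contrasted numerically; the following short sketch (term counts chosen arbitrarily, plain Python assumed, a finite check rather than a proof) computes one partial sum of each series.

```python
import math

def partial_sum(terms, k):
    """k-th partial sum s_k = sum_{n=1}^{k} terms(n)."""
    return sum(terms(n) for n in range(1, k + 1))

k = 10_000
harmonic    = partial_sum(lambda n: 1 / n, k)                 # ~ ln k + gamma, unbounded
alt_harm    = partial_sum(lambda n: (-1) ** (n + 1) / n, k)   # -> ln 2, bounded in (0, 1]
oscillating = partial_sum(lambda n: (-1) ** (n + 1), k)       # 0 or 1, bounded but no limit

print(harmonic, math.log(k) + 0.5772156649)  # both ~ 9.79
print(alt_harm, math.log(2))                 # both ~ 0.693
print(oscillating)                           # 0 for even k
```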
Formulation of the Test
Formal Statement
Dirichlet's test, named after the German mathematician Peter Gustav Lejeune Dirichlet who introduced it in his 1829 paper on the convergence of trigonometric series that represent arbitrary functions between given limits, provides a sufficient condition for the convergence of the infinite series \sum_{n=1}^\infty a_n b_n.[14]

The precise statement is as follows: Let \{a_n\}_{n=1}^\infty and \{b_n\}_{n=1}^\infty be sequences of real numbers such that the partial sums S_k = \sum_{n=1}^k a_n are bounded, i.e., there exists M > 0 with |S_k| \leq M for all k \in \mathbb{N}, and such that \{b_n\} is monotonically decreasing to zero, i.e., b_n \geq b_{n+1} \geq 0 for all n \in \mathbb{N} and \lim_{n \to \infty} b_n = 0. Then the series \sum_{n=1}^\infty a_n b_n converges.[15]

Although originally formulated for real-valued series in the context of Fourier analysis, the test extends to complex-valued sequences a_n with bounded partial sums when b_n is real, non-negative, and monotonically decreasing to zero.[16]
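As a rough numerical companion to the statement (a sketch only: the sample sequences a_n = (-1)^{n+1} and b_n = 1/n are choices made here, and a finite check of the hypotheses is suggestive rather than a proof), the snippet below inspects the two hypotheses over a finite range and sums the product series.

```python
def check_dirichlet_hypotheses(a, b, n_max):
    """Check, over n = 1..n_max, that partial sums of a are bounded and b decreases toward 0.
    Finite checks are only suggestive; they cannot replace the proof."""
    partial, max_abs = 0.0, 0.0
    for n in range(1, n_max + 1):
        partial += a(n)
        max_abs = max(max_abs, abs(partial))
    monotone = all(b(n) >= b(n + 1) >= 0 for n in range(1, n_max))
    return max_abs, monotone, b(n_max)

a = lambda n: (-1) ** (n + 1)   # partial sums are 0 or 1, hence bounded
b = lambda n: 1.0 / n           # decreases monotonically to 0

print(check_dirichlet_hypotheses(a, b, 10_000))   # (1.0, True, 0.0001)
print(sum(a(n) * b(n) for n in range(1, 10_001))) # ~ ln 2 ~= 0.6931
```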
Necessary Conditions
Dirichlet's test for the convergence of the series \sum a_n b_n relies on two key conditions: the partial sums of \sum a_n must be bounded, and the sequence \{b_n\} must be monotone decreasing with \lim_{n \to \infty} b_n = 0. The boundedness of the partial sums \{s_n\}, where s_n = \sum_{k=1}^n a_k, is essential because it prevents the uncontrolled accumulation that leads to divergence; without this, even if \{b_n\} satisfies its conditions, the series may diverge. For instance, consider a_n = 1 and b_n = 1/n: the partial sums s_n = n are unbounded, and \sum a_n b_n = \sum 1/n diverges, illustrating how unbounded growth overrides the decay of b_n.[17][18]

The condition that \{b_n\} is monotone decreasing to zero, with b_n \geq b_{n+1} \geq 0, ensures that the terms diminish in a controlled manner, allowing the test's mechanism to bound the remainder effectively. Non-negativity is automatic in this setup, since a decreasing sequence with limit 0 satisfies b_n \geq 0 for every n. Without monotonicity, oscillation in \{b_n\} can disrupt this control, leading to divergence despite bounded partial sums of \{a_n\} and \lim b_n = 0: the series may fail to converge even though b_n tends to zero, as in the example sketched below.[17][18]

These conditions are complementary: bounded partial sums alone do not guarantee convergence if \{b_n\} fails to decrease monotonically to zero, nor does monotonic decrease to zero suffice without bounded partial sums. Neither condition in isolation ensures the convergence of \sum a_n b_n.[17]
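Both failure modes can be made concrete; in the sketch below, the particular non-monotone choice b_n = (1 + (-1)^n)/n is a standard illustration introduced here (not taken from the cited sources), and the finite sums merely exhibit the growth.

```python
# Counterexamples sketched above, checked numerically (finite sums are only suggestive).
N = 100_000

# 1) Unbounded partial sums of a_n: a_n = 1, b_n = 1/n  ->  sum 1/n diverges.
print(sum(1.0 / n for n in range(1, N + 1)))          # ~ ln N + 0.577, keeps growing

# 2) Non-monotone b_n -> 0: a_n = (-1)^n, b_n = (1 + (-1)^n)/n (illustrative choice).
#    Partial sums of a_n stay in {-1, 0}, yet a_n * b_n = 2/n for even n, so the series diverges.
print(sum(((-1) ** n) * (1 + (-1) ** n) / n for n in range(1, N + 1)))  # grows like ln N
```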
Proof
Core Lemma
The summation by parts identity serves as the foundational tool for proving Dirichlet's test, providing a discrete counterpart to the continuous integration by parts technique. Let \{a_n\} and \{b_n\} be sequences of real or complex numbers, and define the partial sums A_k = \sum_{n=1}^k a_n for k \geq 1, with A_0 = 0. Then, for integers m \leq p,

\sum_{n=m}^p a_n b_n = A_p b_{p+1} - A_{m-1} b_m - \sum_{n=m}^p A_n (b_{n+1} - b_n).

This formula rearranges the sum of products into boundary terms involving the partial sums A_n and a weighted sum that highlights differences in the b_n sequence.[19]

To derive this identity, start with the relation a_n = A_n - A_{n-1} for n \geq 1. Substituting into the product gives

a_n b_n = (A_n - A_{n-1}) b_n = A_n b_n - A_{n-1} b_n,

which can be rearranged as

a_n b_n = A_n (b_n - b_{n+1}) + A_n b_{n+1} - A_{n-1} b_n,

where the last two terms form a telescoping difference. Summing from n = m to p yields

\sum_{n=m}^p a_n b_n = \sum_{n=m}^p A_n (b_n - b_{n+1}) + \sum_{n=m}^p (A_n b_{n+1} - A_{n-1} b_n).

The second sum telescopes to A_p b_{p+1} - A_{m-1} b_m, establishing the identity. This telescoping structure mirrors the boundary evaluation in continuous integration.[19]

The summation by parts formula is the discrete analog of the integration by parts rule \int u \, dv = uv - \int v \, du: the factor b_n plays the role of u, the differences b_{n+1} - b_n correspond to du, and the partial sums A_n act as the antiderivative v of a_n (so that a_n corresponds to dv). This analogy facilitates proofs of convergence by transferring boundedness properties from one sequence to the product sum.[19]

The identity is attributed to Niels Henrik Abel, who introduced it in 1826. It was later employed by Peter Gustav Lejeune Dirichlet in his 1829 paper on the convergence of trigonometric series, where it underpins the test bearing his name.[20]
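The identity can also be sanity-checked numerically for arbitrary finite sequences; the following sketch (randomly generated terms and index choices are illustrative assumptions) compares the two sides.

```python
import random

def check_summation_by_parts(a, b, m, p):
    """Compare both sides of
    sum_{n=m}^{p} a_n b_n = A_p b_{p+1} - A_{m-1} b_m - sum_{n=m}^{p} A_n (b_{n+1} - b_n),
    where A_k = a_1 + ... + a_k and A_0 = 0.  Sequences are 1-indexed lists padded at index 0."""
    A = [0.0]
    for x in a[1:]:
        A.append(A[-1] + x)
    lhs = sum(a[n] * b[n] for n in range(m, p + 1))
    rhs = A[p] * b[p + 1] - A[m - 1] * b[m] - sum(A[n] * (b[n + 1] - b[n]) for n in range(m, p + 1))
    return lhs, rhs

random.seed(0)
a = [0.0] + [random.uniform(-1, 1) for _ in range(12)]   # a_1 .. a_12
b = [0.0] + [random.uniform(-1, 1) for _ in range(13)]   # b_1 .. b_13 (b_{p+1} is needed)
print(check_summation_by_parts(a, b, m=3, p=12))          # the two numbers agree up to rounding
```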
Detailed Derivation
To derive the convergence of the series \sum_{n=1}^\infty a_n b_n under the conditions of Dirichlet's test—where the partial sums A_N = \sum_{n=1}^N a_n satisfy |A_N| \leq M for some constant M > 0 and all N, and \{b_n\} is a monotone decreasing sequence with b_n \to 0 as n \to \infty—apply the summation-by-parts formula from the core lemma.[3]

The partial sum is given by

s_N = \sum_{n=1}^N a_n b_n = \sum_{n=1}^N A_n (b_n - b_{n+1}) + A_N b_{N+1},

using A_0 = 0, so that the initial boundary term vanishes.[3]

To establish convergence, show that \{s_N\} is a Cauchy sequence. For m > N, the difference of partial sums is

s_m - s_N = \sum_{n=N+1}^m a_n b_n = \sum_{n=N+1}^m A_n (b_n - b_{n+1}) + A_m b_{m+1} - A_N b_{N+1}.[3]

Bound the summation term: since |A_n| \leq M and b_n is monotone decreasing (so b_n - b_{n+1} \geq 0),

\left| \sum_{n=N+1}^m A_n (b_n - b_{n+1}) \right| \leq M \sum_{n=N+1}^m (b_n - b_{n+1}) = M (b_{N+1} - b_{m+1}) \leq M b_{N+1},

which approaches 0 as N \to \infty because b_n \to 0.[3]

The boundary terms satisfy |A_m b_{m+1}| \leq M b_{m+1} \to 0 and |A_N b_{N+1}| \leq M b_{N+1} \to 0 as m, N \to \infty. Combining the three estimates gives |s_m - s_N| \leq 2 M b_{N+1} \to 0 as N \to \infty, so \{s_N\} is Cauchy and converges to some limit.[3]

This convergence may be conditional, as the test applies even when \sum |a_n b_n| diverges (e.g., due to oscillation in \{a_n\}), distinguishing it from absolute convergence criteria.
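The tail estimate |s_m - s_N| \leq 2 M b_{N+1} can be illustrated numerically; in the sketch below, the choices a_n = \sin n, b_n = 1/n and the bound M = 1/\sin(1/2) on |A_k| are taken from the examples discussed later and used here purely for illustration.

```python
import math

a = lambda n: math.sin(n)
b = lambda n: 1.0 / n
M = 1.0 / math.sin(0.5)          # bound on |A_k| = |sin 1 + ... + sin k|

def s(k):                        # k-th partial sum of the product series
    return sum(a(n) * b(n) for n in range(1, k + 1))

N, m = 100, 10_000
tail = abs(s(m) - s(N))
print(tail, 2 * M * b(N + 1))    # the tail stays within the bound 2*M*b_{N+1} ~ 0.041
```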
Applications
To Infinite Series
Dirichlet's test finds prominent application in establishing the pointwise convergence of Fourier series for functions of bounded variation. Specifically, consider the series \sum_{n=1}^\infty \frac{\sin(nx)}{n} for fixed x \in (0, 2\pi). Here, set a_n = \sin(nx) and b_n = 1/n. The sequence b_n decreases monotonically to 0, and the partial sums \sum_{k=1}^N a_k = \sum_{k=1}^N \sin(kx) are bounded, as \left| \sum_{k=1}^N \sin(kx) \right| \leq \csc(x/2), a constant independent of N. Thus, by Dirichlet's test, the series converges for each such x.

The test also encompasses the alternating series test (Leibniz test) as a special case, highlighting its utility for conditionally convergent series. For an alternating series \sum_{n=1}^\infty (-1)^{n+1} c_n where c_n > 0 decreases monotonically to 0, set a_n = (-1)^{n+1} and b_n = c_n. The partial sums of a_n are bounded by 1, satisfying the conditions of Dirichlet's test and implying convergence.

Another illustrative example is the series \sum_{n=2}^\infty \frac{(-1)^n}{\log(n+1)}, which converges conditionally but not absolutely. Taking a_n = (-1)^n with partial sums bounded by 1, and b_n = 1/\log(n+1) decreasing monotonically to 0, Dirichlet's test applies directly.

However, the test requires monotonicity of b_n, limiting its scope; for instance, it does not settle certain conditionally convergent series in which b_n oscillates, and other convergence arguments are then needed.
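A numerical illustration of the first example (the sample value x = 2.0 and the term count are arbitrary choices, and finite checks are suggestive only): the partial sums of \sum \sin(kx) stay below \csc(x/2), and the partial sums of \sum \sin(nx)/n settle toward (\pi - x)/2, the classical closed form of this Fourier series on (0, 2\pi).

```python
import math

x, N = 2.0, 200_000                      # sample point in (0, 2*pi), chosen arbitrarily
bound = 1.0 / math.sin(x / 2)            # csc(x/2) ~ 1.19

A = s = max_A = 0.0
for n in range(1, N + 1):
    A += math.sin(n * x)                 # partial sums of sum sin(kx)
    s += math.sin(n * x) / n             # partial sums of sum sin(nx)/n
    max_A = max(max_A, abs(A))

print(max_A, bound)                      # max_A stays below csc(x/2)
print(s, (math.pi - x) / 2)              # both ~ 0.5708
```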
To Improper Integrals
The integral version of Dirichlet's test provides a criterion for the convergence of improper Riemann integrals \int_a^\infty f(t) g(t) \, dt, where f and g are real-valued functions on [a, \infty). Specifically, if F(x) = \int_a^x f(t) \, dt is bounded for all x \geq a (i.e., there exists M > 0 such that |F(x)| \leq M for all x \geq a), and g is monotonically decreasing to 0 as x \to \infty, then the improper integral converges.[21]

A proof sketch relies on integration by parts in the Riemann-Stieltjes sense, treating dg as the integrator since g is monotone. With F(a) = 0, for b > a,

\int_a^b f(t) g(t) \, dt = F(b) g(b) - \int_a^b F(t) \, dg(t).

As b \to \infty, the boundary term F(b) g(b) \to 0 because |F(b)| \leq M and g(b) \to 0. The remaining integral \int_a^\infty F(t) \, dg(t) converges absolutely since |F(t)| \leq M and the total variation of g over [a, \infty) is finite (equal to g(a) - \lim_{x \to \infty} g(x) = g(a)), yielding \left| \int_a^\infty F(t) \, dg(t) \right| \leq M g(a). Thus, the original improper integral converges.[22]

A prominent application is the evaluation of the Dirichlet integral \int_0^\infty \frac{\sin x}{x} \, dx = \frac{\pi}{2}. On an interval [a, \infty) with a > 0, take f(t) = \sin t, so F(x) = \int_a^x \sin t \, dt = \cos a - \cos x is bounded by 2, and g(x) = 1/x, which decreases monotonically to 0; since the integrand \frac{\sin x}{x} extends continuously to x = 0, the integral over [0, a] poses no difficulty. The test thus establishes convergence, while the exact value \pi/2 follows from methods such as Fourier transforms or contour integration.[23][24]

Dirichlet originally applied this test to establish the convergence of Fourier integrals in his 1829 work on Fourier series, addressing oscillatory integrals central to heat conduction and harmonic analysis.[25] In modern contexts, the test remains essential in asymptotic analysis for verifying the convergence of integrals involving oscillatory functions and decaying amplitudes, such as those arising in Mellin-Barnes representations or saddle-point approximations.
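The slow, oscillatory approach of \int_0^b \frac{\sin x}{x} \, dx to \pi/2 can be observed numerically; the sketch below uses a plain-Python midpoint rule with arbitrarily chosen step counts and cutoffs (an illustration, not a rigorous evaluation).

```python
import math

def si(b, steps=200_000):
    """Midpoint-rule approximation of the integral of sin(t)/t over [0, b]."""
    h = b / steps
    return sum(math.sin((i + 0.5) * h) / ((i + 0.5) * h) for i in range(steps)) * h

for b in (10, 50, 200, 1000):
    print(b, si(b))        # oscillates toward pi/2 ~ 1.5708, with deviation shrinking like 1/b

print(math.pi / 2)
```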
Examples and Extensions
Basic Examples
One prominent example illustrating Dirichlet's test is the series \sum_{n=1}^\infty \frac{\sin n}{n}. Here, set a_n = \sin n and b_n = \frac{1}{n}. The sequence \{b_n\} decreases monotonically to 0 as n increases. The partial sums \sum_{k=1}^N a_k = \sum_{k=1}^N \sin k are bounded for all N, with \left| \sum_{k=1}^N \sin k \right| \leq \frac{1}{\sin(1/2)} \approx 2.09, obtained by considering the imaginary part of the geometric series \sum_{k=1}^N e^{ik} and bounding its magnitude.[26] Thus, Dirichlet's test implies the series converges.

Another basic application is to the alternating series \sum_{n=2}^\infty \frac{(-1)^n}{n \log n}. Take a_n = (-1)^n and b_n = \frac{1}{n \log n}. The partial sums \sum_{k=2}^N a_k are bounded by 1, since they equal 1 when N is even and 0 when N is odd. The sequence \{b_n\} for n \geq 2 decreases monotonically to 0, as both n and \log n increase, making the reciprocal smaller. By Dirichlet's test, the series converges.[27]

To highlight the necessity of b_n \to 0, consider the counterexample where a_n = (-1)^n (partial sums bounded by 1, as above) but b_n = 1 (constant, so not tending to 0). The product series \sum_{n=1}^\infty a_n b_n = \sum_{n=1}^\infty (-1)^n diverges, its partial sums oscillating without settling to a limit.

In practice, boundedness of partial sums like those for \sum \sin n can be verified computationally by calculating the first several thousand terms and observing that the sums remain within the theoretical bound, such as using numerical software to confirm |S_N| < 2.1 for large N; a sketch along these lines is given after this paragraph.[28]
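A minimal version of that computational check (plain Python, with term counts chosen arbitrarily; the numerical values quoted in the comments are approximate observations, not exact results):

```python
import math

# Bounded partial sums of sum sin k: the maximum of |S_N| over the first ten thousand N
# stays below the theoretical bound 1/sin(1/2) ~ 2.09 (a finite check, not a proof).
bound = 1.0 / math.sin(0.5)
S, max_S = 0.0, 0.0
for k in range(1, 10_001):
    S += math.sin(k)
    max_S = max(max_S, abs(S))
print(max_S, bound)            # roughly 1.96 and 2.09

# Convergent example: partial sums of sum_{n>=2} (-1)^n / (n log n) settle near 0.526.
t = 0.0
for n in range(2, 100_001):
    t += (-1) ** n / (n * math.log(n))
print(t)

# Counterexample: with b_n = 1, the partial sums of sum (-1)^n keep oscillating between -1 and 0.
print([sum((-1) ** k for k in range(1, N + 1)) for N in (5, 6, 7, 8)])   # [-1, 0, -1, 0]
```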
Generalizations
Abel's test is a closely related criterion for the convergence of series \sum a_n b_n. Specifically, if the series \sum a_n converges and the sequence \{b_n\} is monotonic and bounded, then \sum a_n b_n converges.[29] The condition on \sum a_n is stronger than the mere boundedness of its partial sums required in Dirichlet's test, while the condition on \{b_n\} is weaker, since b_n need not tend to 0; Abel's test is therefore convenient when convergence of \sum a_n is already known, such as for alternating series or series established by other criteria.[29]

A uniform version of Dirichlet's test extends the result to families of functions, ensuring uniform convergence. For improper integrals depending on a parameter y, if f(\cdot, y) and g(\cdot, y) are continuous on [a, \infty), the partial integrals satisfy \left| \int_a^x f(t, y) \, dt \right| \leq M for some constant M independent of x \geq a and of y, and g(x, y) decreases monotonically to 0 in x uniformly with respect to y, then \int_a^\infty f(t, y) g(t, y) \, dt converges uniformly in y.[30] An analogous statement holds for series of functions: if the partial sums of \sum a_n(x) are uniformly bounded and b_n(x) decreases monotonically to 0 uniformly in x, then \sum a_n(x) b_n(x) converges uniformly.[30] These uniform variants are crucial in analysis for interchanging limits and integrals or sums in parametric settings.

Pringsheim's theorem offers a specialized result for power series with non-negative coefficients. If f(z) = \sum_{n=0}^\infty a_n z^n where a_n \geq 0 for all n and the radius of convergence is R > 0, then z = R is a singular point of f(z).[31] This result highlights boundary behavior and relates to Dirichlet's test through the role that sign and monotonicity conditions on a coefficient sequence play in convergence questions.

Dirichlet's test also applies to Dirichlet series \sum a_n n^{-s}: for real s > 0, the factors n^{-s} decrease monotonically to 0, so bounded partial sums of \{a_n\} ensure convergence; the Jensen-Cahen theorem extends this conclusion to complex s with \operatorname{Re}(s) > 0, where a summation-by-parts argument replaces the real-variable test.[32] This connection carries the test into analytic number theory without altering its core mechanism.[32]

In the 1930s, Antoni Zygmund developed extensions of Dirichlet's test within harmonic analysis, particularly for trigonometric series and Fourier coefficients. These refinements sharpened convergence bounds and broadened applicability to functions with a specified modulus of continuity, improving estimates for Fourier series summability in L^p spaces.
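A brief sketch of the Dirichlet-series case for real s (the specific exponent s = 1/2, the term count, and the averaging of consecutive partial sums are illustrative choices made here): with a_n = (-1)^{n+1}, the series is the Dirichlet eta function, whose value at 1/2 is approximately 0.6049.

```python
# Dirichlet-series illustration for real s > 0: a_n = (-1)^{n+1} has bounded partial sums
# and n^{-s} decreases monotonically to 0, so sum (-1)^{n+1} n^{-s} converges (Dirichlet eta).
s, N = 0.5, 100_000
partials, total = [], 0.0
for n in range(1, N + 2):
    total += (-1) ** (n + 1) * n ** (-s)
    partials.append(total)

# Consecutive partial sums bracket the limit; their average is very close to eta(1/2) ~ 0.6049.
print(partials[-2], partials[-1], (partials[-2] + partials[-1]) / 2)
```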