Dirichlet's test

Dirichlet's test is a criterion in mathematical analysis for the convergence of infinite series of real or complex numbers. It states that if the partial sums of the series \sum a_n are bounded by some constant M > 0, and if the sequence \{b_n\} is monotone decreasing to 0, then the series \sum a_n b_n converges. Named after the German mathematician Peter Gustav Lejeune Dirichlet (1805–1859), the test was introduced in his 1829 paper on the convergence of Fourier series, where it played a key role in establishing convergence under certain conditions on the function. The criterion generalizes the alternating series test (Leibniz test), which is recovered by setting a_n = (-1)^{n+1} and b_n = c_n > 0 decreasing to 0, and it is particularly useful for proving convergence of series that do not converge absolutely. The proof of Dirichlet's test relies on summation by parts (Abel's summation formula), analogous to integration by parts, which bounds the remainder of the partial sums and shows they form a Cauchy sequence. Applications include the convergence of trigonometric series such as \sum_{n=1}^\infty \frac{\cos(nx)}{n} for x \in (0, 2\pi), where the partial sums of \sum \cos(nx) are bounded, and more broadly in Fourier analysis and the theory of Dirichlet series. A related criterion known as Abel's test states that if \sum a_n converges and the sequence \{b_n\} is monotone and bounded, then the series \sum a_n b_n converges.

Background Concepts

Infinite Series and Convergence

An infinite series is formally defined as the sum \sum_{n=1}^\infty a_n, where a_n represents the terms of an infinite sequence of real or complex numbers, and convergence of the series is determined by the behavior of its partial sums. The partial sums of the series are given by s_k = \sum_{n=1}^k a_n for each positive integer k, and the series converges if and only if the sequence of partial sums \{s_k\} converges to a finite limit as k \to \infty. This convergence is equivalently characterized by the Cauchy criterion: for every \epsilon > 0, there exists a positive integer N such that |s_m - s_n| < \epsilon whenever m, n > N. A series \sum a_n exhibits absolute convergence if the series of absolute values \sum |a_n| converges; in this case, the original series \sum a_n necessarily converges. However, a series may converge conditionally, meaning \sum a_n converges while \sum |a_n| diverges, illustrating that absolute convergence is a stronger condition than mere convergence. To assess convergence, several tests are employed, including the ratio test, which examines \lim_{n \to \infty} |a_{n+1}/a_n|; the root test, which considers \lim_{n \to \infty} \sqrt[n]{|a_n|}; the comparison test; and the integral test, each applicable to specific forms of series and serving as foundational tools before advancing to more specialized criteria.
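
The following minimal Python sketch (an illustrative addition, not part of the original exposition; the helper name partial_sum and the choice of the series \sum 1/n^2 are assumptions made for demonstration) computes partial sums and shows them approaching the known limit \pi^2/6, illustrating convergence through the behavior of s_k.

```python
import math

def partial_sum(terms, k):
    """Return s_k, the sum of the first k terms of the sequence terms(n)."""
    return sum(terms(n) for n in range(1, k + 1))

a = lambda n: 1.0 / n**2          # terms of a convergent series
for k in (10, 100, 1000, 10000):
    print(k, partial_sum(a, k), "limit:", math.pi**2 / 6)
```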

Partial Sums and Boundedness

In the context of infinite series, the partial sums of a series \sum_{n=1}^\infty a_n are defined as the sequence s_k = \sum_{n=1}^k a_n for each positive integer k. These partial sums are bounded if there exists a constant M > 0 such that |s_k| \leq M for all k \in \mathbb{N}. Boundedness of the partial sums is a fundamental property in the study of series convergence, as it provides a necessary condition: if the series converges to a finite limit, then the sequence of partial sums must converge and hence be bounded. Consider the harmonic series \sum_{n=1}^\infty \frac{1}{n}, whose partial sums H_k = \sum_{n=1}^k \frac{1}{n} grow without bound, approximately as H_k \approx \ln k + \gamma where \gamma \approx 0.57721 is the Euler-Mascheroni constant; this logarithmic growth implies the series diverges. In contrast, the alternating harmonic series \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} has partial sums that remain bounded between 0 and 1, and in fact converges conditionally to \ln 2. These examples illustrate how unbounded partial sums signal divergence, while boundedness can occur in both convergent and certain divergent cases. Although bounded partial sums are necessary for convergence, they are not sufficient, as demonstrated by the series \sum_{n=1}^\infty (-1)^{n+1}, where the partial sums alternate between 1 (for odd k) and 0 (for even k), remaining bounded within [0, 1] but failing to approach a single limit and thus diverging by definition. This highlights the limitations of boundedness alone in determining convergence. The notion of bounded partial sums as a prerequisite for series convergence traces back to Augustin-Louis Cauchy's foundational work in his 1821 textbook Cours d'analyse de l'École Royale Polytechnique, where he established criteria involving the behavior of partial sums, predating Peter Gustav Lejeune Dirichlet's formalization of the test in his 1829 paper on the convergence of trigonometric series.
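
As a rough numerical illustration (a sketch added here, with the sample values of k chosen arbitrarily), the following Python snippet contrasts the unbounded growth of the harmonic partial sums H_k \approx \ln k + \gamma with the bounded, convergent partial sums of the alternating harmonic series.

```python
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant

for k in (10, 1000, 100000):
    H_k = sum(1.0 / n for n in range(1, k + 1))              # harmonic partial sum
    A_k = sum((-1) ** (n + 1) / n for n in range(1, k + 1))  # alternating harmonic partial sum
    print(k, H_k, "~", math.log(k) + gamma, "|", A_k, "->", math.log(2))
```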

Formulation of the Test

Formal Statement

Dirichlet's test, named after the German mathematician Peter Gustav Lejeune Dirichlet, who introduced it in his 1829 paper on the convergence of trigonometric series that represent arbitrary functions between given limits, provides a sufficient condition for the convergence of the series \sum_{n=1}^\infty a_n b_n. The precise statement is as follows: Let \{a_n\}_{n=1}^\infty and \{b_n\}_{n=1}^\infty be sequences of real numbers such that the partial sums S_k = \sum_{n=1}^k a_n are bounded, i.e., there exists M > 0 with |S_k| \leq M for all k \in \mathbb{N}, and such that \{b_n\} is monotonically decreasing to zero, i.e., b_n \geq b_{n+1} \geq 0 for all n \in \mathbb{N} and \lim_{n \to \infty} b_n = 0. Then the series \sum_{n=1}^\infty a_n b_n converges. Although originally formulated for real-valued series in the context of Fourier analysis, the test extends to complex-valued sequences a_n with bounded partial sums when b_n is real, non-negative, and monotonically decreasing to zero.

Necessary Conditions

Dirichlet's test for the convergence of the series \sum a_n b_n relies on two key conditions: the partial sums of \sum a_n must be bounded, and the sequence \{b_n\} must be monotone decreasing with \lim_{n \to \infty} b_n = 0. The boundedness of the partial sums \{s_n\}, where s_n = \sum_{k=1}^n a_k, is essential because it prevents the uncontrolled accumulation that leads to divergence; without this, even if \{b_n\} satisfies its conditions, the series may diverge. For instance, consider a_n = 1 and b_n = 1/n: the partial sums s_n = n are unbounded, and \sum a_n b_n = \sum 1/n diverges, illustrating how unbounded growth overrides the decay of b_n. The condition that \{b_n\} is monotone decreasing to zero, with b_n \geq b_{n+1} \geq 0, ensures that the terms diminish in a controlled manner, allowing the summation-by-parts argument underlying the test to bound the remainder effectively. Non-negativity is implicit in this setup, as the decreasing positive sequence facilitates the telescoping estimates. Without monotonicity, oscillations in \{b_n\} can disrupt this control, leading to divergence despite bounded partial sums of \{a_n\} and \lim b_n = 0. For example, if b_n oscillates while tending to zero, the series may fail to converge even with bounded partial sums for a_n. These conditions are interdependent: bounded partial sums alone do not guarantee convergence if \{b_n\} fails to decrease monotonically to zero, nor does monotonic decrease to zero suffice without bounded partial sums. Neither condition in isolation ensures the convergence of \sum a_n b_n, highlighting their joint necessity for the test's validity.
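
A brief Python sketch of the two failure modes discussed above (the concrete non-monotone choice of b_n is an assumed example, not taken from the original text): in both cases \sum a_n b_n diverges even though one of the two hypotheses holds.

```python
def partial_sums_of_product(a, b, N):
    """Return the partial sums of sum a(n)*b(n) for n = 1..N."""
    s, out = 0.0, []
    for n in range(1, N + 1):
        s += a(n) * b(n)
        out.append(s)
    return out

# Failure mode 1: a_n = 1 has unbounded partial sums, b_n = 1/n decreases to 0,
# yet the product series is the divergent harmonic series.
case1 = partial_sums_of_product(lambda n: 1.0, lambda n: 1.0 / n, 100000)

# Failure mode 2: a_n = (-1)^n has bounded partial sums and b_n -> 0, but b_n is
# not monotone (1/n for even n, 0 for odd n); the product series is the divergent
# sum of 1/n over even n.
case2 = partial_sums_of_product(lambda n: (-1.0) ** n,
                                lambda n: 1.0 / n if n % 2 == 0 else 0.0, 100000)

print(case1[-1], case2[-1])  # both keep growing (roughly logarithmically) as N increases
```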

Proof

Core Lemma

The summation by parts identity (Abel's summation formula) serves as the foundational tool for proving Dirichlet's test, providing a discrete counterpart to the continuous integration by parts technique. Let \{a_n\} and \{b_n\} be sequences of real or complex numbers, and define the partial sums A_k = \sum_{n=1}^k a_n for k \geq 1, with A_0 = 0. Then, for integers m \leq p, \sum_{n=m}^p a_n b_n = A_p b_{p+1} - A_{m-1} b_m - \sum_{n=m}^p A_n (b_{n+1} - b_n). This formula rearranges the sum of products into boundary terms involving the partial sums A_n and a weighted sum that highlights differences in the b_n sequence. To derive this identity, start with the relation a_n = A_n - A_{n-1} for n \geq 1. Substitute into the product: a_n b_n = (A_n - A_{n-1}) b_n = A_n b_n - A_{n-1} b_n. Rearranging gives a_n b_n = A_n (b_n - b_{n+1}) + A_n b_{n+1} - A_{n-1} b_n, where the last two terms form a telescoping difference. Summing from n = m to p yields \sum_{n=m}^p a_n b_n = \sum_{n=m}^p A_n (b_n - b_{n+1}) + \sum_{n=m}^p (A_n b_{n+1} - A_{n-1} b_n). The second sum telescopes to A_p b_{p+1} - A_{m-1} b_m, establishing the identity. This telescoping structure mirrors the boundary evaluation in integration by parts. The formula is the discrete analog of the rule \int u \, dv = uv - \int v \, du: the partial sums A_n play the role of the antiderivative of a_n (analogous to v, with a_n akin to dv), while b_n plays the role of u and the differences b_{n+1} - b_n correspond to du. This analogy facilitates proofs of convergence by transferring boundedness properties from the partial sums to the product sum. The summation by parts formula is attributed to Niels Henrik Abel, who introduced it in 1826. It was later employed by Dirichlet in his 1829 paper on the convergence of trigonometric series, where it underpins the test bearing his name.
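
The identity can be checked numerically; the following Python sketch (an added illustration with arbitrarily chosen random sequences) evaluates both sides of the summation-by-parts formula and confirms they agree up to rounding error.

```python
import random

def sbp_difference(a, b, m, p):
    """Return |LHS - RHS| of the summation-by-parts identity for 1-indexed lists a, b."""
    # Partial sums A_k = a_1 + ... + a_k, with A_0 = 0.
    A = [0.0]
    for n in range(1, p + 1):
        A.append(A[-1] + a[n])
    lhs = sum(a[n] * b[n] for n in range(m, p + 1))
    rhs = (A[p] * b[p + 1] - A[m - 1] * b[m]
           - sum(A[n] * (b[n + 1] - b[n]) for n in range(m, p + 1)))
    return abs(lhs - rhs)

random.seed(0)
N = 60
a = [0.0] + [random.uniform(-1, 1) for _ in range(N + 1)]  # a[1..N+1]; a[0] unused
b = [0.0] + [random.uniform(-1, 1) for _ in range(N + 1)]  # b[1..N+1]; b[0] unused
print(sbp_difference(a, b, m=5, p=N))  # ~1e-16: the two sides agree up to rounding
```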

Detailed Derivation

To derive the convergence of the series \sum_{n=1}^\infty a_n b_n under the conditions of Dirichlet's test—where the partial sums A_N = \sum_{n=1}^N a_n satisfy |A_N| \leq M for some constant M > 0 and all N, and \{b_n\} is a monotone decreasing sequence with b_n \to 0 as n \to \infty—apply the summation-by-parts formula from the core lemma. The partial sum is given by s_N = \sum_{n=1}^N a_n b_n = \sum_{n=1}^N A_n (b_n - b_{n+1}) + A_N b_{N+1}, assuming A_0 = 0 (so the initial term vanishes). To establish convergence, show that \{s_N\} is a Cauchy sequence. For indices p > q, the difference in partial sums is s_p - s_q = \sum_{n=q+1}^p a_n b_n = \sum_{n=q+1}^p A_n (b_n - b_{n+1}) + A_p b_{p+1} - A_q b_{q+1}. Bound the summation term: since |A_n| \leq M and b_n is monotone decreasing (so b_n - b_{n+1} \geq 0), \left| \sum_{n=q+1}^p A_n (b_n - b_{n+1}) \right| \leq M \sum_{n=q+1}^p (b_n - b_{n+1}) = M (b_{q+1} - b_{p+1}) \leq M b_{q+1}, which approaches 0 as q \to \infty because b_n \to 0. The boundary terms satisfy |A_p b_{p+1}| \leq M b_{p+1} \to 0 and |A_q b_{q+1}| \leq M b_{q+1} \to 0 as p, q \to \infty. Combining the three estimates gives |s_p - s_q| \leq 2 M b_{q+1}, which tends to 0 as p, q \to \infty, so \{s_N\} is Cauchy and converges to some limit. This convergence may be conditional, as the test applies even when \sum |a_n b_n| diverges (e.g., due to oscillation in \{a_n\}), distinguishing it from absolute convergence criteria.
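
The following Python sketch (an added illustration; the choice a_n = \sin n, b_n = 1/n and the constant M = 1/\sin(1/2) bounding the partial sums of \sin n are assumptions for this example) checks the Cauchy-type bound |s_p - s_q| \leq 2 M b_{q+1} derived above.

```python
import math

M = 1.0 / math.sin(0.5)        # bounds |sum_{k=1}^N sin(k)| for all N
b = lambda n: 1.0 / n          # monotone decreasing to 0

def tail(q, p):
    """Return s_p - s_q = sum_{n=q+1}^{p} sin(n) * b(n)."""
    return sum(math.sin(n) * b(n) for n in range(q + 1, p + 1))

for q, p in [(10, 100), (100, 10000), (1000, 1000000)]:
    print(q, p, abs(tail(q, p)), "<=", 2 * M * b(q + 1))
```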

Applications

To Infinite Series

Dirichlet's test finds prominent application in establishing the convergence of trigonometric series arising in Fourier analysis. Specifically, consider the series \sum_{n=1}^\infty \frac{\sin(nx)}{n} for fixed x \in (0, 2\pi). Here, set a_n = \sin(nx) and b_n = 1/n. The sequence b_n decreases monotonically to 0, and the partial sums \sum_{k=1}^N a_k = \sum_{k=1}^N \sin(kx) are bounded, as \left| \sum_{k=1}^N \sin(kx) \right| \leq \csc(x/2), a constant independent of N. Thus, by Dirichlet's test, the series converges for each such x. The test also encompasses the alternating series test (Leibniz test) as a special case, highlighting its utility for conditionally convergent series. For an alternating series \sum_{n=1}^\infty (-1)^{n+1} c_n where c_n > 0 decreases monotonically to 0, set a_n = (-1)^{n+1} and b_n = c_n. The partial sums of a_n are bounded by 1, satisfying the conditions of Dirichlet's test and implying convergence. Another illustrative example is the series \sum_{n=2}^\infty \frac{(-1)^n}{\log(n+1)}, which converges conditionally but not absolutely. Taking a_n = (-1)^n with partial sums bounded by 1, and b_n = 1/\log(n+1) decreasing monotonically to 0, Dirichlet's test applies directly. However, the test requires monotonicity of b_n, limiting its scope; for instance, it fails to address certain series where b_n oscillates, necessitating alternative criteria such as the Dedekind test, which replaces monotonicity by bounded variation of \{b_n\}.
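
A short numerical check (added for illustration; the particular value x = 1 is an arbitrary choice) shows the partial sums of \sin(kx) staying within \csc(x/2) while the partial sums of \sum \sin(nx)/n approach the known closed form (\pi - x)/2 on (0, 2\pi).

```python
import math

x = 1.0                              # any fixed value in (0, 2*pi)
bound = 1.0 / math.sin(x / 2.0)      # csc(x/2), bounds |sum_{k<=N} sin(kx)|

S, worst, total = 0.0, 0.0, 0.0
for n in range(1, 200001):
    S += math.sin(n * x)             # partial sum of sin(kx)
    worst = max(worst, abs(S))
    total += math.sin(n * x) / n     # partial sum of sin(nx)/n

print("max |sum sin(kx)|:", worst, "  csc(x/2):", bound)
print("sum sin(nx)/n ~", total, "  (pi - x)/2 =", (math.pi - x) / 2.0)
```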

To Improper Integrals

The integral version of Dirichlet's test provides a criterion for the convergence of improper Riemann integrals \int_a^\infty f(t) g(t) \, dt, where f and g are real-valued functions on [a, \infty). Specifically, if F(x) = \int_a^x f(t) \, dt is bounded for all x \geq a (i.e., there exists M > 0 such that |F(x)| \leq M for all x \geq a), and g is monotonically decreasing to 0 as x \to \infty, then the improper integral converges. A proof sketch relies on integration by parts in the Riemann-Stieltjes sense, treating dg as the integrator since g, being monotone, has bounded variation. With F(a) = 0, for b > a, \int_a^b f(t) g(t) \, dt = F(b) g(b) - \int_a^b F(t) \, dg(t). As b \to \infty, the boundary term F(b) g(b) \to 0 because |F(b)| \leq M and g(b) \to 0. The remaining integral \int_a^\infty F(t) \, dg(t) converges absolutely since |F(t)| \leq M and the total variation of g over [a, \infty) is finite (equal to g(a) - \lim_{x \to \infty} g(x) = g(a)), yielding \left| \int_a^\infty F(t) \, dg(t) \right| \leq M g(a). Thus, the original improper integral converges. A prominent application is the evaluation of the Dirichlet integral \int_0^\infty \frac{\sin x}{x} \, dx = \frac{\pi}{2}. Here, take f(t) = \sin t, so F(x) = \int_0^x \sin t \, dt = 1 - \cos x, which is bounded by 2 for all x \geq 0, and g(x) = 1/x, which is monotonically decreasing to 0 for x > 0. The test thus establishes convergence, while the exact value \pi/2 follows from methods such as Fourier transforms or contour integration. Dirichlet originally applied this kind of argument to establish the convergence of integrals in his 1829 work on Fourier series, addressing oscillatory integrals central to heat conduction and the representation of functions by trigonometric series. In modern contexts, the test remains essential in asymptotic analysis for verifying the convergence of integrals involving oscillatory functions and decaying amplitudes, such as those arising in Mellin-Barnes representations or saddle-point approximations.
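
As a numerical sanity check (an added sketch; the sampling density and cutoffs are arbitrary choices), the truncated integrals \int_0^b \frac{\sin t}{t} \, dt can be seen to approach \pi/2 as b grows, consistent with the convergence guaranteed by the integral form of the test.

```python
import numpy as np

def truncated_sinc_integral(b, samples_per_unit=200):
    """Approximate the integral of sin(t)/t over (0, b] with a trapezoidal rule."""
    t = np.linspace(1e-12, b, int(b * samples_per_unit))  # start just above 0
    vals = np.sin(t) / t
    dt = t[1] - t[0]
    return dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

for b in (10.0, 100.0, 1000.0):
    print(b, truncated_sinc_integral(b), "  pi/2 =", np.pi / 2)
```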

Examples and Extensions

Basic Examples

One prominent example illustrating Dirichlet's test is the series \sum_{n=1}^\infty \frac{\sin n}{n}. Here, set a_n = \sin n and b_n = \frac{1}{n}. The sequence \{b_n\} decreases monotonically to 0 as n increases. The partial sums \sum_{k=1}^N a_k = \sum_{k=1}^N \sin k are bounded for all N, with \left| \sum_{k=1}^N \sin k \right| \leq \frac{1}{\sin(1/2)} \approx 2.09, obtained by considering the imaginary part of the geometric sum \sum_{k=1}^N e^{ik} and bounding its magnitude. Thus, Dirichlet's test implies the series converges. Another basic application is to the alternating series \sum_{n=2}^\infty \frac{(-1)^n}{n \log n}. Take a_n = (-1)^n and b_n = \frac{1}{n \log n}. The partial sums \sum_{k=2}^N a_k are bounded by 1, since they equal 1 when N is even and 0 when N is odd. The sequence \{b_n\} for n \geq 2 decreases monotonically to 0, as both n and \log n increase, making the reciprocal smaller. By Dirichlet's test, the series converges. To highlight the necessity of b_n \to 0, consider the counterexample where a_n = (-1)^n (partial sums bounded by 1, as above) but b_n = 1 (constant, so not tending to 0). The product series \sum_{n=1}^\infty a_n b_n = \sum_{n=1}^\infty (-1)^n diverges, oscillating without settling to a limit. In practice, boundedness of partial sums like those for \sum \sin n can be verified computationally by calculating the first several thousand terms and observing that the sums remain within the theoretical bound, for example confirming |S_N| < 2.1 for large N with numerical software.
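
A minimal sketch of the computational check described above (added for illustration; the cutoff N = 10^5 is arbitrary): the partial sums S_N = \sum_{k=1}^N \sin k stay within the theoretical bound 1/\sin(1/2) \approx 2.09.

```python
import math

bound = 1.0 / math.sin(0.5)   # theoretical bound on |S_N|, about 2.09
S, worst = 0.0, 0.0
for k in range(1, 100001):
    S += math.sin(k)
    worst = max(worst, abs(S))
print("max |S_N| for N <= 10^5:", worst, "  bound:", bound)
```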

Generalizations

Abel's test provides a closely related criterion, often easier to apply, for the convergence of series \sum a_n b_n. Specifically, if the series \sum a_n converges and the sequence \{b_n\} is monotonic and bounded, then \sum a_n b_n converges. This condition on \sum a_n is stronger than the mere boundedness of its partial sums required in Dirichlet's test, making Abel's test convenient in cases where convergence of \sum a_n is already known, such as for alternating series or series established by other criteria. A uniform version of Dirichlet's test extends the result to families of functions, ensuring uniform convergence. For improper integrals depending on a parameter y, if f(\cdot, y) and g(\cdot, y) are continuous on [a, \infty), the partial integrals satisfy \left| \int_a^x f(t, y) \, dt \right| \leq M for some constant M, all x \geq a, and all y, and g(x, y) decreases monotonically in x to 0 uniformly in y, then \int_a^\infty f(t, y) g(t, y) \, dt converges uniformly in y. An analogous statement holds for series of functions, where the partial sums of \sum a_n(x) are uniformly bounded and b_n(x) decreases monotonically to 0 uniformly in x, implying uniform convergence of \sum a_n(x) b_n(x). These uniform variants are crucial in analysis for interchanging limits and integrals or sums in parametric settings. Pringsheim's theorem offers a related result for power series with non-negative coefficients. If f(z) = \sum_{n=0}^\infty a_n z^n where a_n \geq 0 for all n and the radius of convergence is R > 0, then z = R is a singular point of f(z). This result highlights boundary behavior and connects to Dirichlet's test through the role of monotonic coefficient sequences in convergence questions. Dirichlet's test also applies to Dirichlet series \sum a_n n^{-s}: for real s > 0 the factors n^{-s} decrease monotonically to 0, so bounded partial sums of \{a_n\} ensure convergence, and a refinement of the same summation-by-parts argument extends the conclusion to complex s with \operatorname{Re}(s) > 0. This connection, formalized in the Jensen-Cahen theorem, extends the test to general Dirichlet series without altering the core mechanism. In the 1930s, extensions of Dirichlet's test were developed within the theory of trigonometric series, particularly for Fourier coefficients; these refinements sharpened convergence bounds for functions with a prescribed modulus of continuity and improved summability estimates in L^p spaces.
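
To illustrate the Dirichlet-series case (an added sketch; the exponent s = 0.5 and the snapshot points are arbitrary choices), the following snippet shows the partial sums of \sum (-1)^{n+1} n^{-s} settling toward a limit even though the series is not absolutely convergent.

```python
s = 0.5                      # any real s > 0 works; the series is not absolutely convergent
partial, snapshots = 0.0, {}
for n in range(1, 100001):
    partial += (-1.0) ** (n + 1) * n ** (-s)
    if n in (10**2, 10**3, 10**4, 10**5):
        snapshots[n] = partial
print(snapshots)             # successive values change less and less, indicating convergence
```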