
Alternating series test

The alternating series test, also known as Leibniz's test, is a fundamental convergence test in calculus for determining whether an infinite series with terms that alternate in sign converges. Specifically, for a series of the form \sum_{n=1}^{\infty} (-1)^{n+1} b_n where b_n > 0 for all n, the test states that the series converges if two conditions are satisfied: the sequence \{b_n\} is monotonically decreasing (i.e., b_{n+1} \leq b_n for all sufficiently large n), and \lim_{n \to \infty} b_n = 0. This test applies only to alternating series and guarantees convergence; it does not address absolute convergence, nor does it establish divergence when the conditions fail. Named after Gottfried Wilhelm Leibniz, who contributed to its early development in the late 17th century, the test has been a cornerstone of series analysis since its formalization. Leibniz's work on infinite series laid the groundwork for modern convergence criteria. The theorem's proof typically relies on bounding partial sums between consecutive terms, demonstrating that the sequence of partial sums is Cauchy and thus convergent. Beyond establishing convergence, the alternating series test enables estimation of the remainder when approximating the series with a finite partial sum; the error is at most the magnitude of the first omitted term. This makes it particularly useful in numerical analysis and in applications such as physics and engineering, where alternating series model oscillatory phenomena or arise in approximations such as Taylor expansions. While simpler than tests like the ratio or root test, it complements them by handling cases where absolute convergence fails but conditional convergence holds.

Formal Statement

Definition of Alternating Series

An alternating series is a mathematical series in which the signs of the terms alternate between positive and negative, typically expressed in the form \sum_{n=1}^\infty (-1)^{n+1} b_n or \sum_{n=1}^\infty (-1)^n b_n, where b_n > 0 for all n \in \mathbb{N}. This structure ensures that consecutive terms have opposite signs, distinguishing it from series with consistent signs or irregular sign patterns. The positive terms b_n are often assumed to be decreasing and approaching zero for convergence analysis, though the core definition concerns only the alternation. For example, the series \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots is the alternating harmonic series, with b_n = \frac{1}{n}. In general, a series \sum a_n qualifies as alternating if it can be written with a_n = (-1)^{n+1} b_n or a_n = (-1)^n b_n for positive b_n, so that the signs strictly alternate without gaps. Alternation creates the potential for conditional convergence, in which the series converges even though the absolute series \sum |a_n| diverges.

Convergence Test

The alternating series test provides a sufficient condition for the convergence of series with terms that alternate in sign. Consider a series of the form \sum_{n=1}^\infty (-1)^{n+1} b_n, where b_n > 0 for all n. The test asserts that if the sequence \{b_n\} is decreasing (i.e., b_{n+1} \leq b_n for all n \geq 1) and \lim_{n \to \infty} b_n = 0, then the series converges. The monotonicity condition need not hold from the first term; it suffices for the sequence to be eventually decreasing, meaning there exists some N such that b_{n+1} \leq b_n for all n \geq N. The limit condition ensures that the terms approach zero, a necessary requirement for any convergent series, while the decreasing property guarantees that the partial sums form a Cauchy sequence bounded above and below. The test applies specifically to alternating series and does not determine the value of the sum, nor does it address absolute convergence. For instance, the alternating harmonic series \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} satisfies the conditions since \frac{1}{n} is positive, decreasing, and approaches zero, hence converges (though not absolutely). Failure of the conditions, such as non-monotonicity or a non-zero limit, means the test cannot confirm convergence, but it does not by itself imply divergence.
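As an illustrative sketch (not part of the test's statement), both hypotheses and the resulting convergence can be checked numerically for the alternating harmonic series:

```python
import math

# Numerical sketch for the alternating harmonic series
# sum_{n>=1} (-1)^(n+1) / n, whose terms b_n = 1/n satisfy both
# hypotheses of the alternating series test.

def b(n):
    return 1.0 / n

# Hypothesis 1: b_{n+1} <= b_n (monotonically decreasing).
assert all(b(n + 1) <= b(n) for n in range(1, 10_000))

# Hypothesis 2: the terms approach zero.
assert b(1_000_000) < 1e-5

# The partial sums then converge; for this series the limit is ln 2.
s = 0.0
for n in range(1, 10_001):
    s += (-1) ** (n + 1) * b(n)

print(s, math.log(2))  # the two values agree to about 4 decimal places
```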

Remainder Estimate

The remainder estimate, also known as the alternating series estimation theorem, quantifies the error incurred when approximating the sum of a convergent alternating series using a partial sum. For an alternating series \sum_{n=1}^\infty (-1)^{n+1} b_n where b_n > 0 for all n, the sequence \{b_n\} is monotonically decreasing, and \lim_{n \to \infty} b_n = 0, the theorem asserts that the remainder R_N = s - s_N after the Nth partial sum s_N satisfies |R_N| \leq b_{N+1}. This bound arises because the partial sums of such a series oscillate around the true sum s, with each subsequent term pulling the approximation closer and the error trapped between consecutive partial sums. Specifically, the remainder R_N has the same sign as the first omitted term (-1)^{N+2} b_{N+1}, ensuring that s lies between s_N and s_{N+1} = s_N + (-1)^{N+2} b_{N+1}. The estimate is sharp in the sense that the error is at most the magnitude of the next term, providing a practical tool for determining the number of terms required to achieve a desired accuracy, such as |R_N| < \epsilon, by choosing N such that b_{N+1} < \epsilon. In applications, this theorem is particularly useful for series like the Taylor expansions of \ln(1+x), or of e^x at negative arguments, where the terms alternate, allowing reliable numerical approximations without computing the infinite sum directly. For instance, to approximate \ln 2 = \sum_{n=1}^\infty (-1)^{n+1}/n with error less than 10^{-3}, one selects N such that 1/(N+1) < 10^{-3}, yielding N \geq 1000. The estimate's effectiveness stems from the decreasing nature of b_n, which guarantees the remainder bound diminishes monotonically.
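A short sketch of how the remainder estimate guides truncation in practice, using the \ln 2 example above (the helper `terms_needed` is illustrative, not a standard routine):

```python
import math

# Use |R_N| <= b_{N+1} = 1/(N+1) to pick N so that the truncation
# error in ln 2 = sum (-1)^(n+1)/n is guaranteed below eps.

def terms_needed(eps):
    # N = ceil(1/eps) guarantees N + 1 > 1/eps, hence 1/(N+1) < eps
    return math.ceil(1.0 / eps)

eps = 1e-3
N = terms_needed(eps)            # N = 1000 here
s_N = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
error = abs(math.log(2) - s_N)

print(N, error)
assert error <= 1.0 / (N + 1)    # the guaranteed bound holds
```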

Proofs

Convergence Proof

The alternating series test, also known as the Leibniz criterion, provides a criterion for the convergence of series of the form \sum_{n=1}^\infty (-1)^{n+1} a_n, where a_n > 0 for all n. Specifically, the test states that if the sequence \{a_n\} is monotonically decreasing (i.e., a_{n+1} \leq a_n for all n \geq 1) and \lim_{n \to \infty} a_n = 0, then the series converges to some finite sum S. To prove convergence, consider the partial sums s_k = \sum_{n=1}^k (-1)^{n+1} a_n. Separate the partial sums into even and odd indices: let s_{2m} = \sum_{n=1}^{2m} (-1)^{n+1} a_n for even indices and s_{2m+1} = \sum_{n=1}^{2m+1} (-1)^{n+1} a_n for odd indices. First, observe that 0 < s_2 < s_4 < \cdots < s_{2m} < \cdots forms an increasing sequence bounded above by a_1, since s_{2m+2} - s_{2m} = a_{2m+1} - a_{2m+2} > 0 due to the decreasing nature of \{a_n\}, and s_{2m} = a_1 - (a_2 - a_3) - (a_4 - a_5) - \cdots - (a_{2m-2} - a_{2m-1}) - a_{2m} \leq a_1. Similarly, s_1 > s_3 > s_5 > \cdots > s_{2m+1} > \cdots forms a decreasing sequence bounded below by 0, as s_{2m+1} - s_{2m+3} = a_{2m+2} - a_{2m+3} > 0 and s_{2m+1} = s_{2m} + a_{2m+1} > 0. Moreover, for all m, s_{2m} \leq s_{2m+1}, ensuring the even partial sums never exceed the odd ones. By the monotone convergence theorem for bounded monotonic sequences, the even partial sums converge to some limit L_e \leq a_1 and the odd partial sums converge to some limit L_o \geq 0. To show L_e = L_o, note that the difference |s_{2m+1} - s_{2m}| = a_{2m+1} \to 0 as m \to \infty, since \lim_{n \to \infty} a_n = 0. Thus, L_o - L_e = 0, so both subsequences converge to the same limit S. This implies the full sequence of partial sums \{s_k\} converges to S, proving the series converges. An alternative approach uses the Cauchy criterion directly. For any \varepsilon > 0, choose N such that a_n < \varepsilon for all n \geq N. Then, for m < n both at least N, the difference |s_n - s_m| is at most the sum of the terms from m+1 to n, which, due to the alternating and decreasing properties, is bounded by a_{m+1} < \varepsilon.
Thus, \{s_k\} is a Cauchy sequence in \mathbb{R} and converges.
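The bracketing structure the proof exploits can be observed numerically; a minimal sketch with a_n = 1/n (any sequence satisfying the hypotheses would behave the same way):

```python
# Even partial sums increase, odd partial sums decrease, the even ones
# stay below the odd ones, and the gap between them shrinks to zero.

partial, s = [], 0.0
for n in range(1, 2001):
    s += (-1) ** (n + 1) * (1.0 / n)   # a_n = 1/n
    partial.append(s)

odds = partial[0::2]    # s_1, s_3, ..., s_1999
evens = partial[1::2]   # s_2, s_4, ..., s_2000

assert all(x < y for x, y in zip(evens, evens[1:]))   # increasing
assert all(x > y for x, y in zip(odds, odds[1:]))     # decreasing
assert all(e < o for e, o in zip(evens, odds))        # s_2m < s_2m-1
assert 0 < odds[-1] - evens[-1] < 1e-3                # gap a_2m -> 0
```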

Remainder Proof

The remainder estimate for the alternating series test provides a bound on the error when approximating the sum of a convergent alternating series using a partial sum. Consider the alternating series \sum_{n=1}^{\infty} (-1)^{n+1} b_n, where b_n > 0, the sequence \{b_n\} is decreasing (b_{n+1} \leq b_n for all n), and \lim_{n \to \infty} b_n = 0. Let s denote the sum of the series, and let s_k = \sum_{n=1}^{k} (-1)^{n+1} b_n be the kth partial sum. The remainder after k terms is R_k = s - s_k. The alternating series estimation theorem states that |R_k| \leq b_{k+1}, and moreover, R_k has the same sign as the first omitted term (-1)^{k+2} b_{k+1}. To prove this, first recall the key property established in the proof of the alternating series test: the even partial sums \{s_{2m}\} form an increasing sequence bounded above (hence convergent), while the odd partial sums \{s_{2m-1}\} form a decreasing sequence bounded below (hence convergent), and both subsequences converge to the same sum s. This implies that for any k \geq 1, the sum s lies between consecutive partial sums s_k and s_{k+1}. Specifically, if k is even, then s_k \leq s \leq s_{k+1}; if k is odd, then s_{k+1} \leq s \leq s_k. From this bracketing, it follows that |s - s_k| \leq |s_{k+1} - s_k|. But s_{k+1} - s_k = (-1)^{k+2} b_{k+1}, so |s_{k+1} - s_k| = b_{k+1}. Therefore, |R_k| = |s - s_k| \leq b_{k+1}. Additionally, since s lies between s_k and s_{k+1}, the remainder R_k must have the same sign as s_{k+1} - s_k = (-1)^{k+2} b_{k+1}, confirming that the error points in the direction of the next term. This bound is sharp in the sense that the remainder can approach b_{k+1} under the given conditions, but the decreasing monotonicity ensures it never exceeds it. The proof relies solely on the properties assumed in the alternating series test and does not require absolute convergence.
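A quick numerical check of both conclusions (magnitude bound and sign of the remainder), using the alternating harmonic series, whose sum s = \ln 2 is known:

```python
import math

s = math.log(2)            # known sum of sum (-1)^(n+1)/n
s_k = 0.0
for k in range(1, 101):
    s_k += (-1) ** (k + 1) / k
    R_k = s - s_k
    first_omitted = (-1) ** (k + 2) / (k + 1)   # term number k+1
    assert abs(R_k) <= 1.0 / (k + 1)            # |R_k| <= b_{k+1}
    assert R_k * first_omitted > 0              # same sign as omitted term

print("bound and sign verified for k = 1..100")
```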

Improved Bounds

The standard remainder estimate for a convergent alternating series \sum_{k=1}^\infty (-1)^{k+1} a_k, where a_k > 0 decreases to 0, provides the upper bound |R_n| \leq a_{n+1}, where R_n is the remainder after n terms. However, for certain classes of alternating series, sharper bounds on the remainder can be derived, offering tighter control over the error in partial-sum approximations. These improvements often rely on integral representations, hypergeometric functions, or extremal problems to determine optimal constants in the inequalities. One notable class involves series of the form S(\alpha, \beta) = \sum_{n=1}^\infty (-1)^{n-1} / (\alpha n + \beta), where \alpha > 0 and \beta > -\alpha. For the remainder R_n = S(\alpha, \beta) - s_n, where s_n is the partial sum up to n terms, sharp estimates take the form \rho / (2\alpha n) < |R_n| < \sigma / (2\alpha n) for large n, with best constants \rho = 2(2\alpha + \beta) - (1/(\alpha + \beta) - S(\alpha, \beta))^{-1} and \sigma = \alpha. These constants are derived using isometries in appropriate function spaces and extremal problems for point-evaluation functionals, ensuring the bounds are optimal. A specific corollary provides refined two-sided bounds: \frac{1}{2\alpha n} + a < |R_n| < \frac{1}{2\alpha n} + b, where the optimal a = (1/(\alpha + \beta) - S(\alpha, \beta))^{-1} - \frac{2}{\alpha} and b = \alpha + 2\beta. This improves upon the standard estimate by incorporating the parameters \alpha and \beta, yielding asymptotically precise error control proportional to 1/n. For the classical Leibniz series \sum_{n=1}^\infty (-1)^{n-1} / (2n-1), which equals \pi/4, the sharp constants are c = \frac{4}{4 - \pi} - 4 and d = 0 in a related inequality form. These sharp estimates are particularly useful in numerical analysis and approximation theory, where minimizing computational effort while guaranteeing error precision is critical.
They extend the basic alternating series framework by leveraging the specific linear denominator structure, though generalizing to arbitrary decreasing a_k remains challenging without additional assumptions on the decay rate.
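As a hedged numerical illustration of the sharpened \sim 1/(2\alpha n) behavior (here \alpha = 2 for the Leibniz series, so the asymptotic error scale is 1/(4n), roughly half the standard bound a_{n+1}):

```python
import math

# Leibniz series sum (-1)^(n-1)/(2n-1) = pi/4; compare the actual
# remainder at n = 1000 with the standard bound and the 1/(4n) scale.

n = 1000
s_n = sum((-1) ** (k - 1) / (2 * k - 1) for k in range(1, n + 1))
R_n = abs(math.pi / 4 - s_n)

standard_bound = 1 / (2 * n + 1)   # a_{n+1} from the basic estimate
sharp_scale = 1 / (4 * n)          # 1/(2*alpha*n) with alpha = 2

print(R_n, sharp_scale, standard_bound)
assert R_n < standard_bound                          # basic estimate holds
assert abs(R_n - sharp_scale) < 0.01 * sharp_scale   # within 1% of 1/(4n)
```

The actual error sits very close to 1/(4n), noticeably below the generic bound a_{n+1} = 1/(2n+1), which is what the sharper two-sided estimates formalize.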

Examples

Standard Application

The alternating series test is routinely applied to series of the form \sum_{n=1}^\infty (-1)^{n+1} b_n, where b_n > 0 for all n, to determine convergence by verifying two key conditions: the sequence \{b_n\} is monotonically decreasing (i.e., b_{n+1} \leq b_n for sufficiently large n), and \lim_{n \to \infty} b_n = 0. If both hold, the series converges (conditionally, if the absolute series diverges). This test is particularly useful for series where other tests, such as the ratio or root tests, fail to provide conclusive results due to the alternating signs. A classic standard application is the alternating harmonic series \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}. Here, b_n = \frac{1}{n} > 0, the sequence \{b_n\} is decreasing because larger denominators yield smaller values, and \lim_{n \to \infty} \frac{1}{n} = 0. Thus, the series converges by the alternating series test; in fact, its sum is \ln 2 \approx 0.693147. The absolute series \sum_{n=1}^\infty \frac{1}{n} diverges (harmonic series), confirming conditional convergence. Another typical example is \sum_{n=1}^\infty (-1)^{n+1} \frac{\ln n}{n}. Setting b_n = \frac{\ln n}{n} > 0 for n \geq 2, the sequence decreases for n \geq 3 (verified by the derivative of f(x) = \frac{\ln x}{x}, where f'(x) = \frac{1 - \ln x}{x^2} < 0 for x > e), and \lim_{n \to \infty} \frac{\ln n}{n} = 0 by standard limits. The series therefore converges by the test. The test also provides a practical remainder estimate for approximations: for a convergent alternating series satisfying the conditions, the error |R_N| after N terms satisfies |R_N| \leq b_{N+1}, and the sum lies between consecutive partial sums s_N and s_{N+1}. Consider approximating \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^2} (the alternating counterpart of \sum 1/n^2, with sum \frac{\pi^2}{12}) using the first 4 terms: s_4 = 1 - \frac{1}{4} + \frac{1}{9} - \frac{1}{16} \approx 0.7986. Then |R_4| \leq \frac{1}{25} = 0.04, so the sum lies between s_4 \approx 0.7986 and s_5 = s_4 + \frac{1}{25} \approx 0.8386, bounding the true value 0.822467\ldots within 0.04. This bound ensures reliable numerical estimates without computing infinite sums.
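The worked estimate can be reproduced in a few lines (a sketch; the known value \pi^2/12 is used only for checking):

```python
import math

s4 = 1 - 1/4 + 1/9 - 1/16          # first four terms of sum (-1)^(n+1)/n^2
s5 = s4 + 1/25                     # next partial sum
true_sum = math.pi ** 2 / 12       # known value, ~0.822467

print(round(s4, 4), round(s5, 4))  # 0.7986  0.8386
assert s4 < true_sum < s5          # sum bracketed by s_4 and s_5
assert abs(true_sum - s4) <= 1/25  # |R_4| <= b_5 = 0.04
```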

Necessity of Monotonicity

The monotonicity condition in the alternating series test is essential because its absence can lead to divergence even when the terms approach zero. Specifically, there exist alternating series \sum_{n=1}^\infty (-1)^{n+1} b_n with b_n > 0 and \lim_{n \to \infty} b_n = 0, but \{b_n\} not monotonically decreasing, where the series fails to converge. A classic counterexample constructs b_n as follows: for odd indices, b_{2k-1} = \frac{1}{2^k} where k = 1, 2, 3, \dots, so b_1 = \frac{1}{2}, b_3 = \frac{1}{4}, b_5 = \frac{1}{8}, and so on; for even indices, b_{2k} = \frac{1}{k} where k = 1, 2, 3, \dots, so b_2 = 1, b_4 = \frac{1}{2}, b_6 = \frac{1}{3}, etc. This yields the sequence b_n = \frac{1}{2}, 1, \frac{1}{4}, \frac{1}{2}, \frac{1}{8}, \frac{1}{3}, \frac{1}{16}, \frac{1}{4}, \dots. Clearly, \lim_{n \to \infty} b_n = 0, as both subsequences \{b_{2k-1}\} and \{b_{2k}\} tend to zero. However, the sequence is not monotonically decreasing, since, for instance, b_1 = \frac{1}{2} < b_2 = 1 and b_3 = \frac{1}{4} < b_4 = \frac{1}{2}, with repeated increases thereafter. The corresponding series is \sum_{n=1}^\infty (-1)^{n+1} b_n = \frac{1}{2} - 1 + \frac{1}{4} - \frac{1}{2} + \frac{1}{8} - \frac{1}{3} + \frac{1}{16} - \frac{1}{4} + \cdots. To see divergence, consider the partial sums at even indices: the sum up to $2m is \sum_{k=1}^m (b_{2k-1} - b_{2k}) = \sum_{k=1}^m \left( \frac{1}{2^k} - \frac{1}{k} \right) = \sum_{k=1}^m \frac{1}{2^k} - \sum_{k=1}^m \frac{1}{k}. The first sum converges to 1 as m \to \infty, while the harmonic sum \sum_{k=1}^m \frac{1}{k} diverges to +\infty, so the even partial sums tend to -\infty. The odd partial sums similarly diverge, confirming that the series diverges. This illustrates that monotonicity cannot be omitted without risking divergence. 
In essence, without monotonicity, the cancellation between positive and negative terms is insufficient to bound the partial sums, allowing the divergent behavior of an embedded harmonic-like component to dominate.
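A short sketch showing the divergence concretely: the even-indexed partial sums of the counterexample are \sum_{k=1}^m (1/2^k - 1/k), which drift toward -\infty as the harmonic part outgrows the geometric part.

```python
# Counterexample terms: b_{2k-1} = 1/2^k, b_{2k} = 1/k.  The partial
# sums at even indices collapse to sum_k (1/2^k - 1/k).

def even_partial_sum(m):
    return sum(1 / 2**k - 1 / k for k in range(1, m + 1))

for m in (10, 100, 1000, 10_000):
    print(m, even_partial_sum(m))   # values decrease without bound
```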

Sufficiency Without Necessity

The alternating series test establishes sufficient conditions for the convergence of an alternating series \sum_{n=1}^\infty (-1)^{n+1} a_n, where a_n > 0, the sequence \{a_n\} is monotonically decreasing, and \lim_{n \to \infty} a_n = 0. However, these conditions are not necessary for convergence. Specifically, while \lim_{n \to \infty} a_n = 0 is a necessary condition for the convergence of any series (by the nth-term test for divergence), the requirement that \{a_n\} be monotonically decreasing is not. There exist alternating series that converge conditionally despite violations of the monotonicity condition, often verifiable through direct analysis of partial sums or more general convergence tests such as the Dirichlet test. A concrete example is the series 1 - 2 + 1 - \frac{1}{2} + 1 - \frac{1}{2} + \frac{1}{4} - \frac{1}{2} + \frac{1}{4} - \frac{1}{8} + \frac{1}{4} - \frac{1}{8} + \cdots, where the absolute values of the terms are 1, 2, 1, \frac{1}{2}, 1, \frac{1}{2}, \frac{1}{4}, \frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{4}, \frac{1}{8}, \dots. This sequence of absolute values is not monotonically decreasing, as evident from the initial terms where a_1 = 1 < a_2 = 2, and from later increases such as from \frac{1}{2} to 1 in subsequent blocks. Despite this, the terms approach zero, and the partial sums oscillate with shrinking amplitude around zero: they run 1, -1, 0, -\frac{1}{2}, \frac{1}{2}, 0, \frac{1}{4}, -\frac{1}{4}, 0, -\frac{1}{8}, \frac{1}{8}, 0, \dots, demonstrating convergence to 0 by direct computation. Such examples underscore that the alternating series test, while useful for straightforward verification, can be supplemented by broader criteria when monotonicity fails. In practice, non-monotonic cases may require grouping terms or applying the Dirichlet test, which replaces monotonicity of the original terms with bounded partial sums of one factor and a monotonic sequence tending to zero.
This highlights the test's role as a sufficient but non-exhaustive tool in series analysis.

Historical Background

Leibniz's Discovery

Gottfried Wilhelm Leibniz formulated the alternating series test in 1682 during his early investigations into infinite series and the foundations of calculus, recognizing the convergence behavior of series with alternating signs under specific conditions. At that time, the rigorous notion of convergence had not yet been established, but Leibniz intuitively grasped that if the absolute values of the terms decrease monotonically and approach zero, the series sums to a finite value trapped between successive partial sums. This insight arose amid his broader work on transcendental curves and quadrature problems, where alternating series appeared in approximations for functions like the arctangent. Leibniz did not publish a detailed exposition of the test immediately. The test's core principle—that an alternating series \sum_{n=1}^{\infty} (-1)^{n+1} a_n with a_n > 0, a_{n+1} \leq a_n, and \lim_{n \to \infty} a_n = 0 converges—emerged from his practical computations, such as those involving the Leibniz formula for \pi, though the full criterion was refined over subsequent years. In a letter to Johann Bernoulli dated October 25, 1713, Leibniz provided a clearer statement of the test, asserting that for such a series the sum lies between two consecutive partial sums. This correspondence highlighted both the convergence criterion and the bounding of the sum by partial sums, effectively including the remainder estimate that the error after n terms is less than a_{n+1}. The letter served as a key dissemination point, influencing later mathematicians like Euler in formalizing infinite series theory. Leibniz's discovery marked a pivotal advancement in understanding convergent series, distinguishing conditionally convergent series from absolutely convergent ones, though convergence itself was not formalized until Cauchy's work a century later. His approach emphasized geometric intuition over epsilon-delta proofs, aligning with the pre-rigorous era of calculus.

Subsequent Developments

Following Leibniz's initial discovery of the test in 1682, which he communicated to Johann Bernoulli in letters from 1713 and 1714, the test lacked a rigorous foundation due to the undeveloped concept of convergence at the time. The early 19th century brought the necessary mathematical rigor through the works of Augustin-Louis Cauchy and Niels Henrik Abel. In his 1821 text Cours d'analyse de l'École Royale Polytechnique, Cauchy established precise definitions for the limit and convergence of sequences and series, providing the analytical framework essential for validating convergence tests. This laid the groundwork for treating infinite series as limits of partial sums. A rigorous proof of the alternating series test was provided by Abel in his 1826 paper "Untersuchungen über die Reihe 1 + (m/1)x + m(m-1)/(1·2)x² + …," published in the Journal für die reine und angewandte Mathematik. Abel demonstrated convergence by showing that the partial sums form a Cauchy sequence when the terms alternate in sign, decrease monotonically in absolute value, and approach zero, using his own lemma on partial summation. This proof not only confirmed the test's validity but also yielded the remainder estimate, bounding the truncation error |R_n| by the magnitude of the first omitted term, which proved invaluable for approximating series sums in applications. These advancements integrated the test into the burgeoning field of real analysis, influencing subsequent convergence criteria and numerical methods throughout the 19th century.

Dirichlet Test

The Dirichlet test provides a criterion for the convergence of series of the form \sum_{n=1}^\infty a_n b_n, where the sequences \{a_n\} and \{b_n\} satisfy specific conditions. It generalizes the alternating series test by relaxing the requirement that \{a_n\} alternates in sign, instead requiring only that the partial sums of \{a_n\} remain bounded. Theorem (Dirichlet Test). Suppose \{a_n\} is a sequence of complex numbers whose partial sums A_N = \sum_{n=1}^N a_n are bounded, i.e., there exists M > 0 such that |A_N| \leq M for all N. Suppose further that \{b_n\} is a decreasing sequence of nonnegative real numbers with \lim_{n \to \infty} b_n = 0. Then the series \sum_{n=1}^\infty a_n b_n converges. The boundedness of the partial sums \{A_N\} ensures that the terms a_n do not grow too wildly, while the monotonic decrease of \{b_n\} to zero controls the magnitude of the products a_n b_n. The monotonicity condition on \{b_n\} is crucial; without it, the series may diverge even if the partial sums of \{a_n\} are bounded and b_n \to 0. For instance, the alternating series test follows as a special case: take a_n = (-1)^{n+1} (so |A_N| \leq 1) and b_n decreasing to zero. The proof relies on summation by parts (Abel summation), the discrete analogue of integration by parts for integrals. Let A_0 = 0. For 1 \leq m < N, \sum_{n=m}^N a_n b_n = A_N b_{N+1} - A_{m-1} b_m + \sum_{n=m}^N A_n (b_n - b_{n+1}). Since b_n is monotone decreasing, b_n - b_{n+1} \geq 0, and the sum telescopes to bound the tail: \left| \sum_{n=m}^N a_n b_n \right| \leq M b_{N+1} + M b_m + M \sum_{n=m}^N (b_n - b_{n+1}) = 2M b_m. As m \to \infty, b_m \to 0, so the partial sums form a Cauchy sequence, implying convergence. A classic application is the convergence of \sum_{n=1}^\infty \frac{\sin n}{n}. Here, a_n = \sin n and b_n = 1/n. The sequence \{b_n\} is decreasing to zero. The partial sums \sum_{k=1}^N \sin k = \frac{\sin(N/2) \sin((N+1)/2)}{\sin(1/2)} are bounded by 1/\sin(1/2) \approx 2.09, since |\sin(N/2) \sin((N+1)/2)| \leq 1.
Thus, the series converges by the Dirichlet test (though not absolutely, as \sum 1/n diverges). The test is named for Peter Gustav Lejeune Dirichlet (1805–1859) and was published posthumously.
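Both hypotheses of the Dirichlet test for \sum \sin n / n can be checked numerically; a sketch in which the closed-form value (\pi - 1)/2 is used only for comparison:

```python
import math

bound = 1 / math.sin(0.5)     # closed-form bound on |A_N|, about 2.09

A, series = 0.0, 0.0
for n in range(1, 20_001):
    A += math.sin(n)
    assert abs(A) <= bound    # partial sums of sin(n) stay bounded
    series += math.sin(n) / n # b_n = 1/n decreases to zero

print(series, (math.pi - 1) / 2)  # partial sums approach (pi - 1)/2
```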

Abel's Test

Abel's test is a convergence criterion for infinite series of the form \sum_{n=1}^\infty a_n b_n, where \{a_n\} and \{b_n\} are sequences of real or complex numbers. It asserts that if \sum_{n=1}^\infty a_n converges and the sequence \{b_n\} is monotonic (either non-increasing or non-decreasing) and bounded, then \sum_{n=1}^\infty a_n b_n converges. The condition on \{b_n\} ensures that the variations in b_n are controlled, allowing the convergence of \sum a_n to "transfer" to the product series despite the multiplication by b_n. The test relies on Abel summation, a discrete integration-by-parts analogue introduced by Niels Henrik Abel in his foundational work on infinite series. Let A_n = \sum_{k=1}^n a_k, so A_n converges to some limit A as n \to \infty since \sum a_n converges. The partial sum of the product series up to N can be rewritten as: \sum_{n=1}^N a_n b_n = A_N b_{N+1} - \sum_{n=1}^N A_n (b_{n+1} - b_n). As N \to \infty, the term A_N b_{N+1} \to A \cdot \lim_{n \to \infty} b_n (which exists because \{b_n\} is monotonic and bounded, hence convergent). The remaining series \sum_{n=1}^\infty A_n (b_{n+1} - b_n) converges because |A_n| is bounded (say by M) and \sum_{n=1}^\infty |b_{n+1} - b_n| = |b_1 - \lim b_n| < \infty, implying absolute convergence of that series. In the context of alternating series, Abel's test serves as a related tool but differs from the standard alternating series test, which requires the terms to decrease monotonically to zero. Abel's test applies more broadly to non-alternating products where one factor's series converges outright, rather than merely having bounded partial sums (as in the Dirichlet test, of which Abel's test is a special case). For instance, consider \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} \left(1 - \frac{1}{n}\right): here, a_n = (-1)^{n+1}/n with \sum a_n converging (to \ln 2), and b_n = 1 - 1/n monotonically increasing and bounded (by 1), so the series converges by Abel's test. This highlights its utility for series where the multiplier sequence converges monotonically.
The test bears the name of Niels Henrik Abel (1802–1829), the Norwegian mathematician whose 1826 paper in Crelle's Journal für die reine und angewandte Mathematik advanced rigorous criteria for series convergence amid contemporary debates over the paradoxes of divergent series. Abel's contributions, including early forms of ratio-based convergence assessments, influenced later generalizations like the Dirichlet test.
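A numerical sketch of the example above: splitting \sum (-1)^{n+1} \frac{1}{n}\left(1 - \frac{1}{n}\right) termwise gives \ln 2 - \pi^2/12 (using \sum (-1)^{n+1}/n^2 = \pi^2/12), which the partial sums approach:

```python
import math

# Product series from Abel's test example: a_n = (-1)^(n+1)/n,
# b_n = 1 - 1/n.  Termwise splitting gives ln 2 - pi^2/12.

total = sum((-1) ** (n + 1) / n * (1 - 1 / n) for n in range(1, 100_001))
expected = math.log(2) - math.pi ** 2 / 12   # about -0.129320

print(total, expected)
```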