Limit of a sequence
In mathematics, the limit of a sequence is a foundational concept in real analysis: the value L that the terms of an infinite sequence \{a_n\} of real numbers approach as the index n tends to infinity, provided such a value exists.[1] Informally, the sequence converges to L if the terms a_n come arbitrarily close to L for all sufficiently large n.[2] Formally, \lim_{n \to \infty} a_n = L if for every \varepsilon > 0, there exists a positive integer N such that |a_n - L| < \varepsilon whenever n > N.[3] This \varepsilon-N definition ensures that the tail of the sequence lies within any given neighborhood of L, capturing the intuitive notion of eventual proximity.[4] If no such finite L exists, the sequence may diverge to \pm \infty or oscillate without converging; for instance, a_n = n diverges to +\infty, while a_n = (-1)^n fails to converge because it alternates perpetually between -1 and 1.[4] A convergent sequence has a unique limit, a property proved using the triangle inequality in the real numbers.[3] Limits of sequences underpin key theorems in analysis, such as the limit laws, which permit operations like \lim_{n \to \infty} (a_n + b_n) = \lim_{n \to \infty} a_n + \lim_{n \to \infty} b_n for convergent sequences \{a_n\} and \{b_n\}, and the squeeze theorem, which states that if a_n \leq c_n \leq b_n for all sufficiently large n and both \{a_n\} and \{b_n\} converge to L, then \{c_n\} converges to L as well.[2] The concept is among the most subtle and essential in mathematical analysis, serving as the basis for defining continuity of functions, derivatives, and integrals, and for more advanced structures such as metric spaces and topology.[5] Sequences without limits, or those diverging in specific ways, are also important in the study of series convergence and of asymptotic behavior in applied fields such as physics and engineering.
Historical Development
Early Intuitive Notions
The concept of limits in sequences emerged intuitively in ancient Greek philosophy through paradoxes that challenged notions of motion and infinity. Zeno of Elea, around the 5th century BCE, posed the paradox of Achilles and the tortoise, where the swift Achilles appears unable to overtake a slower tortoise due to an infinite series of ever-diminishing intervals that he must traverse.[6] This puzzle intuitively suggested that infinite processes could converge to a finite outcome, foreshadowing the idea of a limit without providing a resolution.[7] In the Hellenistic period, Archimedes of Syracuse (c. 287–212 BCE) advanced these ideas through the method of exhaustion, a technique for approximating areas by inscribing and circumscribing polygons that increasingly approached the curved boundary. In his treatise Quadrature of the Parabola, Archimedes demonstrated that the area of a parabolic segment equals four-thirds the area of the inscribed triangle by iteratively adding triangles whose areas summed in a geometric series, effectively bounding the region between lower and upper limits that converged to the exact value.[8] This approach relied on the principle that if the difference between two quantities can be made smaller than any assigned magnitude, the quantities must be equal, providing an early, rigorous yet intuitive handling of convergence.[9]
Medieval and Renaissance mathematics further explored infinite processes, particularly in Indian traditions. While Aryabhata (476–550 CE) contributed rational approximations, such as π ≈ 3.1416 derived from circumference-to-diameter ratios, the Kerala school in the 14th–16th centuries developed infinite series expansions for trigonometric functions and π, like the series for the arctangent that Madhava of Sangamagrama used to compute precise values through partial sums approaching a limit.[10] These methods echoed exhaustion by summing infinitely many terms to approximate transcendental quantities.
In Europe, Renaissance scholars revisited Archimedean techniques, applying them to volumes and areas. By the 17th century, such intuitive notions of limits underpinned the invention of calculus. Isaac Newton developed fluxions around 1665–1666, treating quantities as flowing variables whose instantaneous rates of change—moments, or infinitesimally small increments—approximated tangents and areas through limiting processes.[11] Independently, Gottfried Wilhelm Leibniz formulated infinitesimals in the 1670s as "inassignable" quantities smaller than any given positive number yet non-zero, using them to derive rules for differentiation and integration as ratios of these evanescent differences.[12] These precursors treated limits as the outcome of infinite approximations in continuous change, setting the stage for 19th-century formalization.
Formalization in the 19th Century
The formalization of the limit concept for sequences in the 19th century emerged as a direct response to longstanding philosophical critiques of infinitesimal methods in calculus, particularly George Berkeley's 1734 attack in The Analyst, where he derided infinitesimals as "ghosts of departed quantities" lacking logical foundation.[13] This prompted mathematicians to develop rigorous, non-infinitesimal definitions grounded in inequalities, transforming intuitive notions from ancient paradoxes—such as Zeno's—into precise analytical tools. By mid-century, these efforts established the epsilon-based framework that underpins modern real analysis.
Bernard Bolzano laid early groundwork in his 1817 pamphlet Rein analytischer Beweis des Lehrsatzes, daß zwischen je zwey Werthen, die ein entgegengesetztes Resultat gewähren, wenigstens eine reelle Wurzel der Gleichung vorhanden sey. While primarily proving the intermediate value theorem for continuous functions, Bolzano introduced a definition of continuity that implicitly relied on limit concepts: a function is continuous if, for points sufficiently close, the difference in function values can be made arbitrarily small.[14] This approach ensured the existence of limit points of bounded infinite sets, bridging geometric intuition to algebraic precision without invoking infinitesimals.[15] Augustin-Louis Cauchy advanced this rigor in his 1821 textbook Cours d'analyse de l'École Polytechnique, where he provided the first systematic definition of the limit of a sequence.
Cauchy stated: "When the successive values attributed to the same variable indefinitely approach a fixed value, so as to end by differing from it by as little as one wishes, this last is called the limit of all the others."[16] For proofs, he operationalized this with an epsilon condition, paraphrased as: a sequence converges to L if for every \epsilon > 0, there exists a natural number N such that for all n > N, |a_n - L| < \epsilon.[16] This formulation, applied extensively to series and functions, eliminated reliance on fluxions and established limits as the cornerstone of calculus.[17]
Karl Weierstrass further refined these ideas in his Berlin University lectures beginning in the 1850s, culminating in a fully epsilon-N formalization by 1861 that dispelled any residual ambiguity.[18] He defined the limit of a sequence p_n as L if, for every \epsilon > 0, there exists an integer N such that for all n > N, |p_n - L| < \epsilon, emphasizing arithmetic verification over geometric intuition.[18] Delivered to students like Hermann Amandus Schwarz, these lectures—later disseminated through notes—ensured the epsilon method's adoption, purging infinitesimals entirely and solidifying sequence limits as a discrete, verifiable process.[19]
Limits over the Real Numbers
Formal Definition
A sequence of real numbers is a function a: \mathbb{N} \to \mathbb{R}, where \mathbb{N} denotes the set of positive integers, often written \{a_n\}_{n=1}^\infty or simply \{a_n\}.[20] The real numbers \mathbb{R} are equipped with the standard metric given by the absolute value |x - y| for x, y \in \mathbb{R}, which measures the distance between points.[21]
The formal definition of the limit of a sequence in \mathbb{R}, known as the \varepsilon-N definition, is as follows: a sequence \{a_n\} converges to a limit L \in \mathbb{R} if for every \varepsilon > 0, there exists N \in \mathbb{N} such that for all n > N, |a_n - L| < \varepsilon. In symbols,
\lim_{n \to \infty} a_n = L \iff \forall \varepsilon > 0 \, \exists N \in \mathbb{N} \, \forall n > N, \, |a_n - L| < \varepsilon. [20]
This definition was introduced by Augustin-Louis Cauchy in 1821 and rigorously formalized by Karl Weierstrass in the mid-19th century.[22] Common notations for this convergence include \lim_{n \to \infty} a_n = L or a_n \to L.[5]
If a limit exists, it is unique. To see this, suppose \lim_{n \to \infty} a_n = L and \lim_{n \to \infty} a_n = M with L \neq M, and let \varepsilon = |L - M|/2 > 0. Then there exists N_1 \in \mathbb{N} such that for all n > N_1, |a_n - L| < \varepsilon, and N_2 \in \mathbb{N} such that for all n > N_2, |a_n - M| < \varepsilon. For n > \max(N_1, N_2), the triangle inequality yields |L - M| \leq |L - a_n| + |a_n - M| < 2\varepsilon = |L - M|, a contradiction. Thus L = M.[5]
A constant sequence \{a_n\} with a_n = c \in \mathbb{R} for all n converges to c, since |a_n - c| = 0 < \varepsilon holds for any \varepsilon > 0 and any N \in \mathbb{N} (e.g., N = 1).[20]
Illustrative Examples
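The \varepsilon-N condition can also be explored numerically for a concrete sequence. The following sketch (find_N is a hypothetical helper, not part of the definition) scans a finite range of indices for the largest n at which |a_n - L| < \varepsilon fails; since only finitely many terms can ever be inspected, this yields evidence for convergence, not a proof.

```python
def find_N(a, L, eps, horizon=10_000):
    """Return the largest n <= horizon with |a(n) - L| >= eps (0 if none).
    Every checked term beyond that index lies within eps of L, so the
    result is a numerical candidate for N in the eps-N definition.
    Evidence only: indices beyond the horizon are never examined."""
    N = 0
    for n in range(1, horizon + 1):
        if abs(a(n) - L) >= eps:
            N = n
    return N

# a_n = n/(n+1) converges to 1, since |a_n - 1| = 1/(n+1).
a = lambda n: n / (n + 1)
print(find_N(a, 1.0, 1e-3))  # → 999; the bound 1/(n+1) < 1e-3 first holds at n = 1000
```

The candidate agrees with the analytic choice: 1/(n+1) < \varepsilon exactly when n > 1/\varepsilon - 1.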
To illustrate the concept of limits for sequences in the real numbers, consider the sequence defined by a_n = \frac{1}{n} for n \in \mathbb{N}. This sequence converges to 0: for any \varepsilon > 0, choosing N = \lceil 1/\varepsilon \rceil ensures that for all n > N, |a_n - 0| = \frac{1}{n} < \varepsilon.[23] The following table shows the first ten terms of the sequence, demonstrating its approach to 0:

| n | a_n = 1/n |
|---|---|
| 1 | 1.000 |
| 2 | 0.500 |
| 3 | 0.333 |
| 4 | 0.250 |
| 5 | 0.200 |
| 6 | 0.167 |
| 7 | 0.143 |
| 8 | 0.125 |
| 9 | 0.111 |
| 10 | 0.100 |
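The choice N = \lceil 1/\varepsilon \rceil from this example can be checked mechanically. A minimal sketch (illustrative only, and limited to a finite range of indices):

```python
import math

# For a_n = 1/n, the choice N = ceil(1/eps) guarantees
# |a_n - 0| = 1/n < eps whenever n > N; spot-check this
# for several tolerances over a finite range of indices.
for eps in (0.5, 0.1, 0.01, 0.001):
    N = math.ceil(1 / eps)
    assert all(1 / n < eps for n in range(N + 1, N + 1001))
    print(f"eps = {eps}: N = {N}")
```

For example, \varepsilon = 0.001 yields N = 1000, and indeed 1/n < 0.001 for every n > 1000.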