The natural logarithm, often denoted as ln(x) or \log_e(x), is the logarithm of a positive real number x to the base e, where e is the irrational mathematical constant approximately equal to 2.718281828459045.[1] It is fundamentally defined as the definite integral \ln x = \int_1^x \frac{1}{t} \, dt for x > 0, which establishes it as the antiderivative of the function 1/x. This definition underscores its central role in calculus, as the derivative of \ln x is precisely 1/x, facilitating the integration of rational functions and the modeling of continuous growth processes.[1]

Historically, the natural logarithm emerged from early 17th-century developments in logarithmic tables by John Napier, whose work in 1614 aimed to simplify astronomical computations through addition rather than multiplication.[2] Refinements by figures like John Speidell in 1622 introduced tables aligned with base e, though the explicit connection to e was formalized later.[2] Leonhard Euler solidified its modern form in the 18th century, naming the constant e in 1731 and demonstrating its properties in exponential and logarithmic identities, such as \ln(e^x) = x and the Taylor series expansion \ln(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots for |x| < 1.[3][1]

Beyond pure mathematics, the natural logarithm is indispensable in fields like physics, engineering, and biology for describing phenomena such as radioactive decay, population growth, and signal processing, where exponential relationships predominate. Its complex extension, \ln z = \ln|z| + i \arg(z), carries its utility into analytic functions on the complex plane, with applications in electrical engineering and quantum mechanics.[1]
Definitions and Notation
Inverse of the exponential function
The exponential function, denoted as \exp(x) or e^x, where e is the base of the natural logarithm (approximately 2.71828), maps real numbers to positive real numbers and serves as the counterpart to the natural logarithm. This function is continuous, strictly increasing, and one-to-one, ensuring it has an inverse over its range.[4][5]

The natural logarithm, denoted \ln(y), is defined as the inverse of the exponential function: it is the unique real number x such that e^x = y for any y > 0. This definition establishes the natural logarithm as the function that "undoes" the exponential operation, solving equations of the form e^x = y for x.[4][6][7]

Graphically, the natural logarithm function has a domain of all positive real numbers (0, \infty), a range of all real numbers \mathbb{R}, and is strictly monotonically increasing. It approaches a vertical asymptote at x = 0 from the right, where \ln(x) \to -\infty as x \to 0^+, and passes through the point (1, 0). The curve rises steeply just to the right of the vertical asymptote and flattens as x grows, reflecting the fact that it is concave down throughout its domain.[8][9][10]

Basic examples illustrate this inverse relationship: \ln(1) = 0 because e^0 = 1, and \ln(e) = 1 because e^1 = e. These values highlight the function's behavior at key points.[4][6]

The inverse properties are expressed as
\ln(e^x) = x \quad \text{for all real } x
and
e^{\ln(x)} = x \quad \text{for all } x > 0.
These identities confirm the bidirectional inverse relationship between the functions.[7][5][11]
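As a quick numerical sanity check, the inverse identities and the key values \ln(1) = 0 and \ln(e) = 1 can be verified with Python's standard math module (a minimal illustrative sketch, not part of the cited sources):

```python
import math

# Check ln(e^x) = x and e^(ln x) = x at a few sample points.
for x in (0.5, 1.0, 2.0, 10.0):
    assert math.isclose(math.log(math.exp(x)), x)   # ln(e^x) = x
    assert math.isclose(math.exp(math.log(x)), x)   # e^(ln x) = x for x > 0

# Key values: ln(1) = 0 and ln(e) = 1 (up to rounding).
print(math.log(1.0), math.log(math.e))
```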
Integral representation
The natural logarithm of a positive real number x > 0 can be defined as the definite integral
\ln x = \int_1^x \frac{1}{t} \, dt.
This definition arises from recognizing the natural logarithm as the antiderivative of 1/x, normalized such that \ln 1 = 0.[12]

Geometrically, this integral represents the net signed area between the curve y = 1/t, the t-axis, and the vertical lines at t = 1 and t = x. The function y = 1/t describes the rectangular hyperbola xy = 1, so the area corresponds to the region bounded by this hyperbola from 1 to x. For x > 1, the area is positive; for 0 < x < 1, the integral evaluates to a negative value, not because 1/t is ever negative but because the upper limit lies to the left of the lower limit, reversing the orientation of the integral. At x = 1, the integral from 1 to 1 is zero, confirming \ln 1 = 0.[12]

An equivalent form for x > 0 is
\ln x = -\int_x^1 \frac{1}{t} \, dt,
which reverses the limits and introduces a negative sign; this is particularly convenient when 0 < x < 1, as it computes a positive area from x to 1.[12]

This integral definition establishes the natural logarithm as the inverse of the exponential function \exp y, where \exp y is the unique function satisfying \frac{d}{dy} \exp y = \exp y with \exp 0 = 1. To verify, apply the Fundamental Theorem of Calculus to the integral definition: \frac{d}{dx} \ln x = \frac{1}{x}. Now consider \ln(\exp y) = \int_1^{\exp y} \frac{1}{t} \, dt. Substitute t = \exp u, so dt = \exp u \, du; the limits change from u = 0 to u = y, yielding
\int_0^y \frac{\exp u}{\exp u} \, du = \int_0^y 1 \, du = y.
Thus, \ln(\exp y) = y. Similarly, \exp(\ln x) = x, confirming the inverse relationship.[12]
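The integral definition lends itself directly to numerical approximation. The sketch below (illustrative only, standard library) estimates \ln x by applying a composite Simpson's rule to \int_1^x \frac{1}{t}\,dt and compares the result with math.log; reversed limits for 0 < x < 1 automatically produce the negative value described above:

```python
import math

def ln_by_quadrature(x, n=1000):
    """Composite Simpson approximation of the integral of 1/t from 1 to x (n even)."""
    a, b = 1.0, x
    h = (b - a) / n            # h is negative when 0 < x < 1, giving a signed result
    s = 1.0 / a + 1.0 / b
    for i in range(1, n):
        t = a + i * h
        s += (4.0 if i % 2 else 2.0) / t
    return s * h / 3.0

for x in (0.25, 2.0, 10.0):
    print(x, ln_by_quadrature(x), math.log(x))
```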
Notational conventions
The natural logarithm is most commonly denoted by \ln(x) in mathematical writing, particularly within physics and engineering contexts, where it explicitly signifies the logarithm with base e.[1] In pure mathematics, especially in advanced texts and research papers, \log(x) is frequently used as an alternative to denote the natural logarithm, with the base e assumed by convention to avoid ambiguity in specialized fields like number theory.[13] This usage of \log(x) contrasts with applied sciences, where \log(x) without a subscript often implies the common logarithm (base 10).[1]

The notation \ln(x) derives from the French phrase logarithme naturel (natural logarithm) and was first introduced in print by Irving Stringham in his 1893 text Uniplanar Algebra.[14] Its adoption marked a historical shift from the generic "log" notation, which had previously encompassed various bases, to clearly distinguish the base-e logarithm from the base-10 common logarithm amid growing use of both in calculations.[15] Less common alternatives include \log_e(x), which explicitly specifies the base but is verbose and rarely employed in modern writing.[1]

For the complex logarithm, the principal branch is conventionally denoted by \Log(z), with a capital "L" to indicate the principal value, typically defined for z \neq 0 in the complex plane excluding the non-positive real axis.[16]

In typography and typesetting, such as in LaTeX, the symbols "ln" and "Log" are rendered in upright (roman) font to signify that they are operator names rather than italicized variables; LaTeX provides \ln as a built-in operator command, while \Log is usually declared by the author (for example with \DeclareMathOperator) to obtain the same spacing and style.[17] For real-valued functions, the domain is invariably restricted to x > 0 to ensure the argument is positive, as the natural logarithm is undefined for non-positive reals.[1]
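A minimal LaTeX sketch of these typesetting conventions (the \Log operator declared here is the author-defined command mentioned above, not a built-in one):

```latex
% Upright operator names for logarithms: \ln is built in; \Log is declared.
\documentclass{article}
\usepackage{amsmath}
\DeclareMathOperator{\Log}{Log}   % upright "Log" with operator spacing
\begin{document}
For real $x > 0$: $\ln(x)$, or explicitly $\log_e(x)$.
For complex $z \neq 0$: $\Log(z) = \ln\lvert z\rvert + i\operatorname{Arg}(z)$.
\end{document}
```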
Historical Development
Origins in logarithmic concepts
The concept of logarithms originated as a computational tool to simplify multiplication and division, particularly for astronomical and navigational calculations. In 1614, Scottish mathematician John Napier published Mirifici Logarithmorum Canonis Descriptio, introducing logarithms as a method to transform products into sums, thereby easing complex arithmetic. Napier's logarithms were not based on a fixed base but were designed around the geometry of proportional parts, inspired by the need to compute sines and tangents efficiently. This innovation dramatically reduced the labor of calculations that previously required extensive manual effort.[18]

Building on Napier's work, English mathematician Henry Briggs refined the system by proposing a base-10 logarithm in correspondence with Napier around 1616–1617, leading to the publication of Arithmetica Logarithmica in 1624. Briggs's tables provided common logarithms (base 10) for numbers from 1 to 20,000 and from 90,000 to 100,000, computed to 10 decimal places, which became widely adopted for practical use in science and engineering. These tables marked a shift toward standardized logarithmic computation, emphasizing decimal convenience over Napier's more geometric approach.[19]

The foundation for the natural logarithm emerged from investigations into continuous growth processes. In 1683, Swiss mathematician Jacob Bernoulli explored compound interest with infinitesimally small intervals, deriving the limit \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n \approx 2.718, later known as the constant e, without recognizing its full significance. This limit provided the basis for the exponential function central to natural logarithms. Earlier, in 1668, German-born mathematician Nicolaus Mercator had published an infinite series for the natural logarithm, \ln(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots, in Logarithmotechnia, linking it to the area under the hyperbola xy = 1. This series represented an early analytic expression for the function.[20]

The natural logarithm gained recognition in early calculus through the integral of 1/x. In the late 1600s, Scottish mathematician James Gregory demonstrated in 1668 that the quadrature (area) under the curve y = 1/x from 1 to x yields a logarithmic function, using geometric methods to establish its properties. Concurrently, Isaac Newton incorporated this integral into his method of fluxions around 1669–1671, treating it as the inverse of the exponential and using it to solve differential equations in physics. These developments highlighted the natural logarithm's role as the antiderivative of 1/x, distinguishing it from common logarithms by its direct connection to continuous change.[21]
Formalization and naming
The term "hyperbolic logarithm" originated with the work of Grégoire de Saint-Vincent in his 1647 publication Opus geometricum quadraturae circuli et sectionum coni, where he demonstrated that the area under the rectangular hyperbola xy = 1 from a to b is proportional to the ratio a/b, establishing a geometric foundation for what would later be recognized as the natural logarithm.[22] This approach, further elaborated by his pupil Alphonse Antonio de Sarasa, shifted focus from arithmetic logarithms to an integral-based definition tied to hyperbolic areas, laying the groundwork for its transition to the "natural" designation in analytical mathematics.Leonhard Euler advanced the formalization of this function beginning in 1728, when he first employed the letter e to denote the base of the logarithm in an unpublished manuscript, defining it through its infinite series expansion and linking it to exponential growth.[23] In his 1736 treatise Mechanica sive motus scientia analytice exposita, Euler incorporated e as the base of natural logarithms in integral expressions for motion under resistance, marking its earliest printed appearance and demonstrating its utility in solving differential equations of physical systems.[24] Euler justified the "natural" label by emphasizing the base e's emergence from limits such as continuous compounding (the limit of (1 + 1/n)^n as n approaches infinity) and its central role in differential equations like dy/dx = y, where the exponential function is its own derivative, simplifying calculus operations without arbitrary constants.[20]Euler solidified this framework in his 1748 Introductio in analysin infinitorum, providing a rigorous series definition of e \approx 2.718281828459045235 and explicitly naming logarithms to this base as "natural or hyperbolic," with the latter term retaining the hyperbolic area connection while "natural" highlighted its intrinsic analytical properties.[25] By the 19th century, the natural logarithm achieved widespread standardization in mathematical texts and applications, as seen in Carl Friedrich Gauss's astronomical computations in Theoria motus corporum coelestium (1809), where it facilitated precise orbital calculations, and in subsequent analytic works that integrated it as the standard for calculus and number theory.[26] This adoption reflected its foundational status in rigorous analysis, supplanting earlier terminological variations.
Fundamental Properties
Algebraic and order properties
The natural logarithm, denoted \ln x, is defined for all positive real numbers, so its domain is the open interval (0, \infty). The range of \ln x is the entire set of real numbers \mathbb{R}, meaning that for every real number y, there exists a unique x > 0 such that \ln x = y. This surjectivity onto \mathbb{R} follows from the function's continuous and unbounded behavior on its domain.[27][28]

As x approaches the boundaries of its domain, \ln x exhibits divergent limits: \lim_{x \to 0^+} \ln x = -\infty and \lim_{x \to \infty} \ln x = \infty. These limits underscore the function's asymptotic properties, with \ln x decreasing without bound near 0 and increasing without bound as x grows large. The natural logarithm is strictly increasing on (0, \infty), satisfying \ln x < \ln y if and only if 0 < x < y. This monotonicity is a direct consequence of the first derivative \frac{d}{dx} \ln x = \frac{1}{x} > 0 for all x > 0, ensuring the function preserves order on the positive reals.[27][29][30]

A fundamental inequality for the natural logarithm is \ln x \leq x - 1 for all x > 0, with equality holding if and only if x = 1. This inequality captures the function's growth relative to its tangent line at x = 1, where the derivative is 1. Applications of Bernoulli's inequality, which states that (1 + z)^n \geq 1 + n z for natural numbers n \geq 1 and z \geq -1, extend to bounding logarithmic expressions; for instance, it aids in deriving log-concavity-based estimates for sequences and means involving \ln x, such as in generalizations of Maclaurin's inequality.[31][32]

The concavity of \ln x is evident from its second derivative, \frac{d^2}{dx^2} \ln x = -\frac{1}{x^2} < 0 for all x > 0, confirming that the function is strictly concave down on (0, \infty). This negative second derivative implies that the graph of \ln x lies below any of its tangent lines, reinforcing the inequality \ln x \leq x - 1 and enabling Jensen's inequality applications for convex combinations in the logarithm's domain.[27][33]
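For reference, the tangent-line step behind \ln x \leq x - 1 can be written out in a single display (a restatement of the concavity argument above, using only the derivative value quoted there):

```latex
% Concavity places the graph of ln x below its tangent at x = 1, whose slope is 1:
\[
  \ln x \;\le\; \ln 1 + \left.\frac{d}{dx}\ln x\right|_{x=1}\,(x-1)
        \;=\; 0 + 1\cdot(x-1) \;=\; x - 1,
  \qquad x > 0,
\]
% with equality exactly at the point of tangency, x = 1.
```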
Change of base and identities
The change of base formula expresses the natural logarithm in terms of logarithms with any other positive base b \neq 1:
\ln x = \frac{\log_b x}{\log_b e}
for x > 0. This identity, which holds for all valid bases, facilitates computation and theoretical analysis by converting between different logarithmic systems.[34] It underscores the natural logarithm's role as a universal reference, since any logarithm can be reduced to it via the base e.[35]

The natural logarithm obeys key functional identities that reflect its algebraic structure. The product rule states that \ln(xy) = \ln x + \ln y for all x, y > 0, allowing the logarithm of a product to be decomposed into a sum.[35] Similarly, the power rule gives \ln(x^a) = a \ln x for x > 0 and real a, which extends the function's behavior under exponentiation.[34] These properties derive from the definition of the logarithm as the inverse of exponentiation and are foundational for simplifying expressions in analysis.[35]

As the inverse of the exponential function with base e, the natural logarithm satisfies \ln(e^x) = x for all real x and e^{\ln x} = x for x > 0. These reciprocal relations highlight the bijective correspondence between the natural logarithm and the exponential on the positive reals.[1] A specific case of the change of base relates it to the common logarithm (base 10): \ln x = \log_{10} x \cdot \ln 10 for x > 0, providing a practical link for numerical evaluations.[34]
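These identities are easy to exercise numerically; the short sketch below uses only the standard math module (the sample values are arbitrary illustrations):

```python
import math

x, b = 50.0, 7.0
# Change of base: log_b(x) computed two ways (equal up to rounding).
print(math.log(x, b), math.log(x) / math.log(b))
# Relation to the common logarithm: ln x = log10(x) * ln 10.
print(math.log(x), math.log10(x) * math.log(10))
# Product and power rules.
print(math.log(x * b), math.log(x) + math.log(b))   # ln(xy) = ln x + ln y
print(math.log(x**3), 3 * math.log(x))              # ln(x^a) = a ln x
```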
Calculus Aspects
Derivative
The derivative of the natural logarithm function \ln x, defined for x > 0, is given by
\frac{d}{dx} \ln x = \frac{1}{x}.
This result follows from the inverse function theorem, as \ln x is the inverse of the exponential function e^x, whose derivative is e^x itself. To derive it, let y = \ln x, so x = e^y. Differentiating both sides implicitly with respect to x yields 1 = e^y \cdot \frac{dy}{dx}, and solving for the derivative gives \frac{dy}{dx} = \frac{1}{e^y} = \frac{1}{x}.[5][36]

Alternatively, using the integral definition \ln x = \int_1^x \frac{1}{t} \, dt, the fundamental theorem of calculus implies that the derivative is the integrand evaluated at the upper limit, yielding \frac{d}{dx} \ln x = \frac{1}{x}.[27][37]

The higher-order derivatives of \ln x follow a pattern: the nth derivative for n \geq 1 is
\frac{d^n}{dx^n} \ln x = (-1)^{n-1} \frac{(n-1)!}{x^n}.
This can be verified by successive differentiation, starting from the first derivative \frac{1}{x} = x^{-1} and applying the power rule repeatedly.[38]

In applications, the derivative \frac{1}{x} interprets the rate of change of \ln x as the reciprocal of the input, which models relative growth rates in exponential processes. For instance, if a quantity P(t) grows exponentially as P(t) = P_0 e^{rt}, then the relative growth rate \frac{1}{P} \frac{dP}{dt} = r equals the derivative of \ln P(t), providing a measure of proportional change independent of scale.[39]
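A quick finite-difference check of the derivative formula (an illustrative sketch using the standard library, not a proof):

```python
import math

def central_diff(f, x, h=1e-6):
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Compare the numerical derivative of ln x with 1/x at sample points.
for x in (0.5, 1.0, 3.0, 100.0):
    print(x, central_diff(math.log, x), 1.0 / x)
```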
Integrals involving the natural logarithm
The natural logarithm serves as the antiderivative of the reciprocal function, providing a fundamental result in calculus. Specifically, the indefinite integral of \frac{1}{x} is given by
\int \frac{1}{x} \, dx = \ln |x| + C,
where C is the constant of integration, and the absolute value ensures the expression is defined for x \neq 0.[40] This formula arises because the derivative of \ln |x| is \frac{1}{x} both for x > 0 and for x < 0, confirming the antiderivative relationship.[40]

For definite integrals over positive intervals, the natural logarithm evaluates the accumulated area under the curve y = \frac{1}{x}. The integral from a to b, where 0 < a < b, yields
\int_a^b \frac{1}{x} \, dx = \ln \left( \frac{b}{a} \right),
which equals the change in logarithmic scale between the bounds.[41] This result follows directly from applying the Fundamental Theorem of Calculus to the antiderivative.[41]

A broader class of integrals leverages substitution to reveal the natural logarithm's role. For a differentiable function f(x) with f(x) \neq 0, the integral
\int \frac{f'(x)}{f(x)} \, dx = \ln |f(x)| + C
holds, as the substitution u = f(x) transforms the integrand into \frac{1}{u} \, du.[42] For instance, \int \frac{3x^2}{x^3 + 1} \, dx substitutes u = x^3 + 1, yielding \ln |x^3 + 1| + C.[42]

In applications, the natural logarithm quantifies areas and measures uncertainty. The definite integral \int_1^x \frac{1}{t} \, dt = \ln x represents the area under the hyperbola xy = 1 from 1 to x, a geometric interpretation dating to 17th-century discoveries.[43] In probability, differential entropy for a continuous random variable with density f(x), defined as h(X) = -\int f(x) \ln f(x) \, dx, uses the natural logarithm to measure average uncertainty in nats.[44]

Improper integrals highlight the logarithm's behavior at infinity. The integral \int_1^\infty \frac{1}{x} \, dx diverges, as
\int_1^\infty \frac{1}{x} \, dx = \lim_{t \to \infty} \ln t = \infty,
indicating unbounded growth in the harmonic series context.[45]
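The substitution example above can be confirmed numerically. The sketch below (illustrative only, standard library) evaluates \int_0^2 \frac{3x^2}{x^3+1}\,dx with a simple Simpson's rule and compares it with \ln|f(2)| - \ln|f(0)| = \ln 9:

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

integrand = lambda x: 3 * x**2 / (x**3 + 1)     # f'(x)/f(x) with f(x) = x^3 + 1
print(simpson(integrand, 0.0, 2.0))             # numerical value of the integral
print(math.log(2**3 + 1) - math.log(0**3 + 1))  # ln|f(2)| - ln|f(0)| = ln 9
```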
Series and Approximations
Taylor series expansion
The Taylor series expansion of the natural logarithm function, centered at x = 1 (or equivalently, for \ln(1 + x) centered at x = 0), provides a power series representation that converges to the function within a specific interval. This expansion is particularly useful for approximating \ln(x) near x = 1, where \ln(1) = 0. The series takes the form
\ln(1 + x) = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{x^n}{n}, \quad |x| < 1.
[1] This representation, known historically as the Mercator series, was first published by the German mathematician Nicolaus Mercator in his 1668 treatise Logarithmotechnia.[46]

One standard derivation of this series integrates the geometric series expansion of \frac{1}{1 + x}. The geometric series \sum_{n=0}^{\infty} (-1)^n x^n = \frac{1}{1 + x} for |x| < 1 is integrated term by term from 0 to x, yielding
\ln(1 + x) = \int_0^x \frac{1}{1 + t} \, dt = \int_0^x \sum_{n=0}^{\infty} (-1)^n t^n \, dt = \sum_{n=0}^{\infty} (-1)^n \frac{x^{n+1}}{n+1} = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{x^n}{n},
with the constant of integration zero since \ln(1) = 0.[47] Alternatively, the series can be obtained directly via Taylor's theorem by computing successive derivatives of \ln(1 + x) at x = 0: the nth derivative (for n \geq 1) is (-1)^{n+1} (n-1)! (1 + x)^{-n}, so evaluating at 0 gives coefficients \frac{(-1)^{n+1}}{n}.[1]

The radius of convergence is 1, determined by the ratio test on the series terms, ensuring absolute convergence for |x| < 1. At the endpoints, the series converges conditionally at x = 1 (where it becomes the alternating harmonic series summing to \ln 2) and diverges at x = -1 (where it becomes the negative of the harmonic series).[1] As an alternating series for 0 < x < 1, it allows error bounds via the alternating series estimation theorem, where the remainder after k terms is less than the next term's magnitude, facilitating practical approximations like \ln(1 + x) \approx x - \frac{x^2}{2} + \frac{x^3}{3} for small x.[47]
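The alternating-series error bound is easy to observe in practice; the sketch below (illustrative, standard library only) compares partial sums of the Mercator series at x = 0.5 with math.log1p and with the magnitude of the first omitted term:

```python
import math

def mercator(x, terms):
    """Partial sum of the Mercator series for ln(1 + x)."""
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

x = 0.5
for terms in (3, 6, 12):
    approx = mercator(x, terms)
    bound = x ** (terms + 1) / (terms + 1)   # magnitude of the first omitted term
    # columns: number of terms, partial sum, true error, alternating-series bound
    print(terms, approx, abs(approx - math.log1p(x)), bound)
```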
Continued fraction representations
The natural logarithm admits a continued fraction representation through its connection to the inverse hyperbolic tangent function, which provides an effective means for approximation in the real domain.

The inverse hyperbolic tangent is given by
\tanh^{-1}(x) = \frac{1}{2} \ln\left( \frac{1+x}{1-x} \right)
for |x| < 1. Consequently,
\ln\left( \frac{1+x}{1-x} \right) = 2 \tanh^{-1}(x).
This yields the continued fraction
\ln\left( \frac{1+x}{1-x} \right) = \cfrac{2x}{1 - \cfrac{x^{2}}{3 - \cfrac{4x^{2}}{5 - \cfrac{9x^{2}}{7 - \cfrac{16x^{2}}{9 - \cdots}}}}}
for |x| < 1, where the partial numerators after the first are the successive squares k^2 x^2 and the partial denominators are the odd integers starting from 1.[48]

This representation converges more rapidly than the corresponding Taylor series expansion for values of x away from zero, offering superior accuracy with fewer terms in practical computations.[49]

Continued fraction expansions for the natural logarithm trace their development to the 18th century, particularly through Leonhard Euler's foundational work on transforming series into such forms, building upon 17th-century advancements in continued fractions by William Brouncker and others.[50][51]

As an illustration, the value \ln 2 can be approximated by setting x = \frac{1}{3}, since \frac{1 + \frac{1}{3}}{1 - \frac{1}{3}} = 2, yielding \ln 2 = 2 \tanh^{-1}\left( \frac{1}{3} \right). Substituting into the continued fraction and truncating after a few levels produces rational approximants that converge quickly to \ln 2 \approx 0.693147.[48]
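A backward evaluation of the truncated continued fraction makes the rapid convergence concrete; the sketch below (illustrative, standard library only) approximates \ln 2 with x = 1/3:

```python
import math

def ln_via_cf(x, depth=5):
    """Approximate ln((1+x)/(1-x)) for |x| < 1 by truncating the continued
    fraction 2x / (1 - x^2/(3 - 4x^2/(5 - 9x^2/(7 - ...)))), evaluated from
    the innermost retained level outward."""
    x2 = x * x
    value = 2 * depth + 1                     # innermost partial denominator, tail dropped
    for k in range(depth, 0, -1):
        value = (2 * k - 1) - (k * k) * x2 / value
    return 2 * x / value

# ln 2 via x = 1/3, since (1 + 1/3)/(1 - 1/3) = 2.
approx = ln_via_cf(1 / 3, depth=5)
print(approx, math.log(2), abs(approx - math.log(2)))
```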
Numerical Computation
Algorithms and methods
Computing the natural logarithm ln(x) for x > 0 in numerical libraries generally begins with range reduction to bring the argument into a convenient interval, typically [1/2, 2] or [√2/2, √2], where approximations are more efficient. For large x, this is achieved by expressing x = 2^k * y with y in [1, 2) and k = floor(log_2 x), yielding ln(x) = k ln(2) + ln(y).[52] This step leverages precomputed or efficiently calculable values of ln(2) and reduces the problem to evaluating ln(y) near 1, minimizing errors in subsequent approximations.[53]

The arithmetic-geometric mean (AGM) iteration provides a rapidly converging method for computing ln(2) and generalizes to arbitrary x through connections to complete elliptic integrals. For ln(2), the method uses appropriate initial values derived from elliptic integral relations, where the iteration defines a_n = (a_{n-1} + b_{n-1})/2 and b_n = √(a_{n-1} b_{n-1}), converging quadratically to enable high-precision evaluation after O(log p) steps for p bits.[54] Generalizations for ln(x) scale the argument via powers of 2 and apply the AGM to compute intervals bounding elliptic integral expressions such as I(1, b) = ∫_0^{π/2} dt / √(1 - b sin² t), which relate to log((1 + √b)/(1 - √b)) via asymptotic properties, achieving ~5 log p multiplications and ~2 log p square roots for p-bit precision.[55]

Binary splitting accelerates the evaluation of series expansions for ln(x), particularly for high-precision computations of constants like ln(2) or ln(p) for primes p. This technique recursively divides the series sum into binary tree structures, computing partial numerators and denominators to avoid intermediate expansions, yielding O(M(d) log² d) time for d digits, where M(d) is multiplication time, outperforming naive summation by orders of magnitude for d > 10^6.[56] It is especially effective for hypergeometric-type series tailored to logarithms, such as Ramanujan-style identities, enabling multiprecision ln(x) via argument reduction followed by accelerated summation.[57]

Newton's method offers quadratic convergence for solving e^y = x to find y = ln(x), starting from an initial guess like y_0 = log_2(x) * ln(2) approximated via bit manipulation. The iteration y_{n+1} = y_n - (e^{y_n} - x)/e^{y_n} = y_n + (x e^{-y_n} - 1) simplifies to cubic-convergence variants in some implementations, requiring few steps (typically 3-5) after range reduction for double-precision accuracy in floating-point arithmetic.[58] This approach is favored in software libraries for its simplicity and efficiency when combined with hardware exponentiation.

IEEE 754-compliant implementations of ln(x) in hardware often combine range reduction with polynomial approximations or table lookups for the mantissa, ensuring correctly rounded results to the last ulp. For single precision, architectures decompose the IEEE 754 format into exponent E and mantissa m ∈ [1, 2), compute ln(m) via minimax polynomials over subintervals, and add E ln(2) with fused operations to meet the standard's accuracy requirements, achieving latencies of 10-20 cycles on modern FPUs.[59] Double-precision variants extend this with higher-degree approximations or iterative refinement, prioritizing fused multiply-add for error control.[53]

Taylor series may serve as a baseline for small arguments near 1, but are typically augmented by these methods for broader ranges.[56]
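The range-reduction-plus-Newton recipe can be sketched in a few lines of Python. This is an illustrative toy, not how any production libm implements log: it uses math.frexp, which writes x = m * 2^e with m in [0.5, 1) (an equivalent reduction to the [1, 2) form described above), and relies on the library's exp for the Newton step:

```python
import math

LN2 = 0.6931471805599453   # double-precision value of ln 2

def ln_newton(x, iters=5):
    """Toy ln(x) via range reduction and Newton's method on e^y = m."""
    if x <= 0.0:
        raise ValueError("ln is defined only for x > 0")
    m, e = math.frexp(x)        # x = m * 2**e with m in [0.5, 1)
    y = m - 1.0                 # rough initial guess from the tangent line at m = 1
    for _ in range(iters):
        # Newton step for f(y) = e^y - m:  y <- y + (m*e^(-y) - 1)
        y += m * math.exp(-y) - 1.0
    return y + e * LN2          # reassemble: ln(x) = ln(m) + e*ln(2)

for x in (0.1, 2.0, 10.0, 1e10):
    print(x, ln_newton(x), math.log(x))
```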
Special values and precision
The natural logarithm of 10, denoted \ln 10, is approximately 2.302585092994045684. This value is crucial for converting between natural and common (base-10) logarithms, as \log_{10} x = \ln x / \ln 10, enabling efficient computation across logarithmic bases in numerical applications. One efficient method to compute \ln 10 to high precision involves the arithmetic-geometric mean (AGM) iteration, which relates the logarithm to elliptic integrals and converges quadratically; starting with initial values a_0 = 1 and b_0 = (2 \cdot 10 / (10^2 - 1))^2, the iteration yields tight bounds around \ln 10 after approximately 2 \log_2 p steps for p bits of precision. Similarly, the natural logarithm of 2, \ln 2, is approximately 0.693147180559945. High-precision evaluation of \ln 2 employs binary splitting on rapidly converging series with rational terms, such as the series for 2 \tanh^{-1}(1/3) noted above, using a divide-and-conquer approach on rational sums to achieve O((\log N)^2 M(N)) time complexity, where N is the bit precision and M(N) is the multiplication time for N-bit numbers.

For arbitrary-precision computation beyond standard floating-point limits, libraries like mpmath in Python support natural logarithms to thousands of decimal digits by dynamically adjusting precision during evaluation; for instance, mpmath.log(2) yields \ln 2 at 50 decimal places as 0.69314718055994530941723212145817656807550013436025. The Arb library (now integrated into FLINT), a C library for ball arithmetic, computes logarithms with rigorous error bounds using strategies like Ziv's method, allocating extra guard bits exponentially until the desired precision is met, supporting computations up to millions of bits while tracking numerical uncertainty.

In standard double-precision floating-point arithmetic (IEEE 754), the natural logarithm is typically computed with a small relative error, bounded by a few multiples of the machine epsilon \epsilon \approx 2.22 \times 10^{-16}; for \ln(1 + x) with small x > 0, the relative error is at most 5\epsilon under conditions including a guard digit and computation within half a unit in the last place (ulp). More generally, the relative error in floating-point representation and operations, including logarithms, remains on the order of \epsilon, ensuring that the computed value \mathrm{fl}(\ln x) satisfies |\mathrm{fl}(\ln x) - \ln x| / |\ln x| \leq c\epsilon for well-conditioned inputs and a small constant c.

The computational complexity of evaluating the natural logarithm to n bits of precision is O(M(n) \log n), where M(n) denotes the time for n-bit multiplication, achieved via AGM-based methods requiring O(\log n) iterations of arithmetic operations whose cost scales with M(n). This complexity holds for both \ln 10 via the AGM and \ln 2 via binary splitting variants, making high-precision logarithm computation feasible on modern hardware for extensive digit counts.
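The arbitrary-precision behaviour described above can be reproduced with a few lines of mpmath (the third-party package named above; the digit setting below is chosen purely for illustration):

```python
from mpmath import mp, log

mp.dps = 50                 # work with 50 decimal digits of precision
ln2 = log(2)
ln10 = log(10)
print(ln2)                  # ≈ 0.6931471805599453094172321214581765680755001343602...
print(ln10)                 # ≈ 2.3025850929940456840179914546843642076011014886288...
print(log(1000) / ln10)     # change of base: log10(1000) ≈ 3
```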
Extension to Complex Numbers
Principal value and branch cuts
The principal value of the complex natural logarithm, often denoted as \Log z, is defined for any nonzero complex number z by the formula
\Log z = \ln |z| + i \Arg(z),
where \ln |z| is the real natural logarithm of the modulus of z, and \Arg(z) is the principal argument of z, restricted to the interval (-\pi, \pi]. This choice of argument range ensures that \Log z provides a single-valued, analytic continuation of the real natural logarithm into the complex plane, except at the origin.[60]

To make \Log z single-valued, a branch cut is introduced, conventionally along the negative real axis from z = 0 to z = -\infty. The cut is the ray on which \Arg z = \pi, and the function is analytic on the plane slit along this ray. Crossing the branch cut from the upper half-plane (where \Arg z approaches \pi from below) to the lower half-plane (where \Arg z approaches -\pi from above) results in a jump discontinuity in \Log z of 2\pi i.[61] Specifically, if z approaches a point on the cut from above, \Log z takes a value differing by 2\pi i from the value approached from below.[62]

The principal logarithm inverts the complex exponential function on its domain: \exp(\Log z) = z for all z \neq 0. This property holds because the exponential maps the strip \{w : -\pi < \Im w \leq \pi\} bijectively onto the punctured plane \mathbb{C} \setminus \{0\}. For instance, \Log(-1) = i\pi, since |-1| = 1 and \Arg(-1) = \pi.[63]
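Python's standard cmath module implements this principal branch (with the cut along the negative real axis, continuous from above), which makes the 2\pi i jump easy to observe numerically; the tiny imaginary offsets below are arbitrary illustrative choices:

```python
import cmath

print(cmath.log(-1 + 0j))                    # ≈ 0 + 3.14159...j, i.e. Log(-1) = i*pi

above = cmath.log(complex(-2.0,  1e-12))     # just above the branch cut
below = cmath.log(complex(-2.0, -1e-12))     # just below the branch cut
print(above.imag - below.imag)               # ≈ 2*pi: the jump across the cut

print(cmath.exp(cmath.log(3 - 4j)))          # recovers 3-4j: exp(Log z) = z
```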
Multi-valued logarithm
In the complex plane, the natural logarithm extends to a multi-valued function for nonzero complex numbers z, expressed as \log z = \ln |z| + i (\Arg z + 2\pi k), where \Arg z is the principal argument (typically in (-\pi, \pi]) and k is any integer.[64] This form arises because the argument of z is defined only up to multiples of 2\pi, yielding infinitely many distinct values for each z \neq 0. The principal branch corresponds to the case k = 0.[65]

To resolve this multi-valuedness and make the logarithm single-valued and analytic, it is defined on a Riemann surface consisting of infinitely many sheets of the complex plane, each corresponding to a different integer k, connected along branch cuts (commonly the negative real axis).[66] These sheets form an infinite helical or spiral structure over the punctured plane \mathbb{C} \setminus \{0\}, with the logarithm serving as a holomorphic covering map from this surface to \mathbb{C}.[65] Encircling the origin once (a phenomenon known as monodromy) shifts the function to the adjacent sheet by adding 2\pi i to the value, reflecting the periodic nature of the exponential inverse.[64]

This construction ensures that \exp(\log z) = z holds universally on the Riemann surface, while \log(\exp w) = w + 2\pi i k for some integer k depending on the path taken, highlighting the non-invertibility in the multi-valued sense.[66] Applications include evaluating contour integrals around branch points, such as \int \frac{dz}{z} over closed paths yielding multiples of 2\pi i based on winding number, and solving transcendental equations like z^n = w by selecting appropriate branches on the surface.[65]
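A small sketch of the multi-valued picture (illustrative only, standard library): it lists a few branch values \ln|z| + i(\Arg z + 2\pi k) and checks that the exponential recovers z on every branch:

```python
import cmath
import math

def log_branches(z, ks=range(-2, 3)):
    """Return branch values of log z, ln|z| + i(Arg z + 2*pi*k), for the given k."""
    r, theta = abs(z), cmath.phase(z)     # modulus and principal argument
    return [complex(math.log(r), theta + 2 * math.pi * k) for k in ks]

z = 1 + 1j
for w in log_branches(z):
    print(w, cmath.exp(w))   # every branch satisfies exp(w) = z, up to rounding
```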