A slowly varying function is a positive measurable function L: (0, \infty) \to (0, \infty) such that for every fixed t > 0,

\lim_{x \to \infty} \frac{L(tx)}{L(x)} = 1.

This limiting condition defines functions whose growth or decay at infinity is negligible compared to any positive power of x, distinguishing them from rapidly varying behaviors.[1][2]

The concept was introduced by J. Karamata in the early 1930s as a foundational element of regular variation theory, where regularly varying functions take the form x^\rho L(x) for some real exponent \rho.[1] Karamata's representation theorem provides an integral form for such functions: for large x,

L(x) = \exp\left( \eta(x) + \int_a^x \frac{\varepsilon(t)}{t} \, dt \right),

where \eta(x) \to c (a constant) and \varepsilon(x) \to 0 as x \to \infty.[2] Slowly varying functions satisfy the defining limit uniformly for t in compact sets under mild regularity conditions such as measurability or monotonicity, enabling powerful asymptotic results.[3]

Slowly varying functions are ubiquitous in mathematical analysis, particularly in Tauberian and Abelian theorems for transforms such as the Laplace and Stieltjes integrals, where they facilitate equivalences between original and transformed functions at infinity.[1] In probability theory, they characterize the domains of attraction of stable laws and describe heavy-tailed distributions, such as generalized Pareto or Fréchet extremes, whose tails behave like x^{-\alpha} L(x).[2] Examples include constants, \log x, and \log \log x (for x sufficiently large), all of which satisfy the defining limit while illustrating a hierarchy of slow variation.[3]
Definitions and Context
Formal Definition
A slowly varying function is defined on the positive reals and exhibits particularly gradual growth or decay as its argument approaches infinity, serving as a foundational concept in the theory of regular variation. Such functions take positive values and are required to be measurable to ensure their applicability in analytical contexts like integration and probability.

Formally, a function L: (0, \infty) \to (0, \infty) is said to be slowly varying at infinity if it is measurable and satisfies

\lim_{x \to \infty} \frac{L(tx)}{L(x)} = 1 \quad \forall \, t > 0.

This limit condition captures the function's insensitivity to multiplicative scaling by any fixed positive factor t in the regime of large x, meaning L neither grows nor diminishes significantly relative to itself under such transformations.[4]

The point "at infinity" refers to the behavior as x \to \infty, emphasizing asymptotic properties far from the origin, where the function's value stabilizes in a normalized sense. The requirement that the limit hold for every fixed t > 0, rather than merely for integer multiples, ensures the condition applies uniformly across continuous scalings, which is essential for the function's role in broader asymptotic analyses. Slowly varying functions form the special case of regularly varying functions in which the index of variation is \rho = 0, so that the limit t^\rho equals 1.
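As a quick numerical illustration of the definition (a minimal sketch; the function L(x) = \log x and the test points are illustrative choices, not taken from the cited sources), the following Python snippet evaluates the defining ratio at increasingly large x:

```python
import math

# Evaluate the defining ratio L(t*x)/L(x) for the slowly varying L(x) = log x.
# For each fixed t > 0, the ratio should drift toward 1 as x grows.
def L(x):
    return math.log(x)

for x in [1e2, 1e4, 1e8, 1e16]:
    for t in [0.5, 2.0, 10.0]:
        print(f"x = {x:.0e}, t = {t:>4}: L(tx)/L(x) = {L(t * x) / L(x):.6f}")
```

The convergence is slow here (the deviation decays like \log t / \log x), which is typical: slow variation constrains only the limit, not the rate.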
Relation to Regular Variation
A function f is said to be regularly varying with index \rho \in \mathbb{R} if it satisfies \lim_{x \to \infty} \frac{f(tx)}{f(x)} = t^\rho for every t > 0.[4] This definition generalizes the notion of power-law growth, where the asymptotic behavior scales like x^\rho, and encompasses a broad class of functions used in asymptotic analysis.[5]

Slowly varying functions form the special case of regular variation with \rho = 0, since they satisfy \lim_{x \to \infty} \frac{L(tx)}{L(x)} = 1 for every t > 0, representing functions that grow or decay subpolynomially at infinity.[4] More fundamentally, they serve as the building blocks for all regularly varying functions: any such f can be decomposed as f(x) = x^\rho L(x), where L is slowly varying.[4] This representation highlights how the power-law component x^\rho captures the dominant scaling, while the slowly varying factor L(x) modulates finer asymptotic details without altering the overall index of variation.[5]

The theory of regular variation, including the role of slowly varying functions, was introduced by Jovan Karamata in the 1930s as a framework for studying asymptotic behaviors in analysis and probability.[6] Karamata's foundational work established these concepts to handle limits of ratios and integrals involving functions with heavy tails, laying the groundwork for subsequent developments in Tauberian theorems and beyond.
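The decomposition can be probed numerically: for a regularly varying f, the quantity \log(f(tx)/f(x))/\log t recovers the index \rho as x \to \infty. The sketch below uses illustrative choices of f and \rho, not anything prescribed by the sources:

```python
import math

# For regularly varying f(x) = x^rho * L(x), the statistic
# log(f(t*x)/f(x)) / log(t) tends to the index rho as x -> infinity.
def f(x, rho=1.5):
    return x**rho * math.log(x)   # x^1.5 times a slowly varying factor

t = 2.0
for x in [1e3, 1e6, 1e12]:
    est = math.log(f(t * x) / f(x)) / math.log(t)
    print(f"x = {x:.0e}: estimated index = {est:.4f}")   # drifts toward 1.5
```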
Fundamental Properties
Uniformity of Limits
A fundamental property of slowly varying functions is the uniformity of their defining limit over compact subsets of the positive reals. For a slowly varying function L, the convergence \lim_{x \to \infty} \frac{L(tx)}{L(x)} = 1 holds uniformly for t in any compact interval [a, b] \subset (0, \infty). Formally,

\sup_{t \in [a,b]} \left| \frac{L(tx)}{L(x)} - 1 \right| \to 0 \quad \text{as} \quad x \to \infty.

This result, known as the Uniform Convergence Theorem, ensures that the slow variation remains consistent when scaling by factors within bounded ranges, ruling out discontinuities or rapid fluctuations in the asymptotic behavior across finite scales.

The theorem guarantees that the pointwise limit defining slow variation extends to uniform convergence on compacts, which is crucial for maintaining stability in asymptotic approximations. Intuitively, it says that for large x the ratio L(tx)/L(x) stays close to 1 simultaneously for all t in [a, b], reflecting the function's gradual change relative to power-law growth. The uniformity holds for any measurable (or locally bounded) L satisfying the pointwise definition.[5]

A proof sketch proceeds by contradiction from the pointwise definition. Suppose the supremum does not converge to zero; then there exist \epsilon > 0, a sequence x_n \to \infty, and t_n \in [a, b] such that \left| L(t_n x_n)/L(x_n) - 1 \right| \geq \epsilon. By compactness, t_n has a subsequence converging to some t_0 \in [a, b]. The pointwise slow variation at t_0 gives \lim L(t_0 x_n)/L(x_n) = 1, and bounds on the ratio for scaling factors near t_0 yield a contradiction. For general measurable L, these continuity arguments are replaced by measure-theoretic estimates (Egorov's theorem is the standard tool) to establish the sup-norm convergence.[4][5]

This uniform convergence has key implications for asymptotic analysis, enabling the interchange of limits and suprema in expressions involving slowly varying functions, such as in the evaluation of integrals or the simplification of regularly varying tails. It underpins the robustness of slowly varying functions in broader contexts like regular variation theory, where non-uniformity could otherwise disrupt limit theorems.
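A minimal numerical sketch of the theorem follows, with L(x) = \log x and a finite grid standing in for the supremum (all choices illustrative):

```python
import math

# Approximate sup_{t in [a,b]} |L(t*x)/L(x) - 1| on a grid of t-values and
# watch it shrink as x grows; the grid is a crude stand-in for the supremum.
def L(x):
    return math.log(x)

a, b, n = 0.5, 2.0, 200
ts = [a + (b - a) * k / n for k in range(n + 1)]

for x in [1e3, 1e6, 1e12, 1e24]:
    sup = max(abs(L(t * x) / L(x) - 1) for t in ts)
    print(f"x = {x:.0e}: sup deviation ~ {sup:.5f}")
```

For this particular L the supremum over [a, b] equals \max(|\log a|, \log b)/\log x, so the printed values decay like 1/\log x.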
Karamata's Characterization Theorem
Karamata's characterization theorem establishes a fundamental equivalence between the limit definition of regular variation and its multiplicative decomposition form. Specifically, a positive measurable function f: (0, \infty) \to (0, \infty) is regularly varying with index \rho \in \mathbb{R} if and only if there exists a slowly varying function L such that f(x) = x^\rho L(x) for all sufficiently large x.[7]

The direct implication follows straightforwardly from the definitions. If f(x) = x^\rho L(x) with L slowly varying, then for any \lambda > 0,

\frac{f(\lambda x)}{f(x)} = \lambda^\rho \frac{L(\lambda x)}{L(x)},

and since \lim_{x \to \infty} L(\lambda x)/L(x) = 1, it follows that \lim_{x \to \infty} f(\lambda x)/f(x) = \lambda^\rho. For the converse, suppose f is regularly varying with index \rho and define L(x) = f(x)/x^\rho. Then for any \lambda > 0,

\frac{L(\lambda x)}{L(x)} = \frac{f(\lambda x)/(\lambda x)^\rho}{f(x)/x^\rho} = \frac{1}{\lambda^\rho} \cdot \frac{f(\lambda x)}{f(x)},

so \lim_{x \to \infty} L(\lambda x)/L(x) = 1, showing that L is slowly varying. The decomposition is unique up to asymptotic equivalence: if f(x) \sim x^\rho L_1(x) and f(x) \sim x^\rho L_2(x), then L_1(x) \sim L_2(x) as x \to \infty.[7]

A key corollary is that the class of slowly varying functions coincides exactly with the regularly varying functions of index zero, since setting \rho = 0 in the characterization yields f(x) = L(x) with \lim_{x \to \infty} L(\lambda x)/L(x) = 1 for all \lambda > 0. This equivalence underscores the role of slowly varying functions as the "residual" component in the asymptotic behavior of regularly varying functions.[7]
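The converse construction L(x) = f(x)/x^\rho is easy to test numerically; in this sketch f and \rho are illustrative stand-ins:

```python
import math

# Take a regularly varying f of index rho, form L(x) = f(x) / x^rho,
# and check that L(t*x)/L(x) drifts toward 1 (i.e., L is slowly varying).
rho = 2.0

def f(x):
    return x**rho * (math.log(x))**3   # regularly varying, index 2

def L(x):
    return f(x) / x**rho               # should be slowly varying

for x in [1e3, 1e6, 1e12]:
    print(f"x = {x:.0e}: L(2x)/L(x) = {L(2 * x) / L(x):.5f}")
```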
Karamata Representation Theorem
The Karamata representation theorem provides an explicit integral form for slowly varying functions, revealing their structural behavior as a combination of an asymptotically constant term and a slowly accumulating integral term. Specifically, a positive measurable function L is slowly varying if and only if there exist some a > 0 and a representation

L(x) = \exp\left\{ \eta(x) + \int_a^x \frac{\varepsilon(t)}{t} \, dt \right\}, \quad x \geq a,

where \eta: [a, \infty) \to \mathbb{R} is a bounded measurable function converging to a finite constant, \eta(x) \to c as x \to \infty, and \varepsilon: [a, \infty) \to \mathbb{R} is measurable with \varepsilon(x) \to 0 as x \to \infty.[7][8]

In this representation, the term \eta(x) accounts for bounded, eventually stabilizing behavior in the logarithm of L(x), while the integral \int_a^x \frac{\varepsilon(t)}{t} \, dt captures the cumulative effect of infinitesimal relative changes that approach zero, ensuring the overall slow variation without power-like growth or decay. This form highlights how slowly varying functions deviate minimally from constants on a logarithmic scale, distinguishing them from regularly varying functions of non-zero index.[7]

The proof relies on Karamata's uniform convergence theorem, which ensures that L(\lambda x)/L(x) \to 1 uniformly for \lambda in compact intervals, and proceeds via the logarithmic change of variables h(y) = \log L(e^y): the uniform convergence implies that h(y + \log \lambda) - h(y) \to 0 uniformly, which allows h to be decomposed into a part converging to a constant and the integral of a vanishing function \varepsilon (playing the role of a derivative of h, in an averaged sense for merely measurable functions). The sufficiency direction follows directly by verifying the slow variation condition from the properties of \eta and \varepsilon.[7][5]

An equivalent variant expresses the representation multiplicatively as

L(x) = c(x) \exp\left\{ \int_a^x \frac{\varepsilon(t)}{t} \, dt \right\}, \quad x \geq a,

where c: [a, \infty) \to (0, \infty) satisfies c(x) \to c for some constant c \in (0, \infty) and \varepsilon(x) \to 0 as x \to \infty. Here c(x) = e^{\eta(x)}, so the additive term \eta is simply recast as a convergent positive factor. This form is particularly useful for constructive examples and extensions in regular variation theory.[8][7]

This representation complements Karamata's characterization theorem by providing a constructive tool for analyzing the decomposition of regularly varying functions into power and slowly varying components.[7]
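The multiplicative form lends itself to construction. The following sketch assumes the specific choices \eta \equiv 0, a = e, and \varepsilon(t) = 1/\log t, rebuilds L by crude numerical quadrature, and compares it with the closed form: the integral \int_e^x \frac{dt}{t \log t} equals \log \log x, so L(x) = \log x exactly.

```python
import math

# Rebuild L(x) = exp( int_e^x eps(t)/t dt ) with eps(t) = 1/log(t) by
# trapezoid quadrature on a geometric grid, and compare with log(x).
def eps(t):
    return 1.0 / math.log(t)

def L_via_representation(x, n=10000):
    a = math.e
    r = (x / a) ** (1.0 / n)       # geometric grid from a to x
    total, t = 0.0, a
    for _ in range(n):
        t_next = t * r
        fa, fb = eps(t) / t, eps(t_next) / t_next
        total += 0.5 * (fa + fb) * (t_next - t)
        t = t_next
    return math.exp(total)

for x in [1e2, 1e4, 1e8]:
    print(f"x = {x:.0e}: representation gives {L_via_representation(x):.4f}, "
          f"log(x) = {math.log(x):.4f}")
```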
Examples
Basic Examples
The simplest examples of slowly varying functions are the constant functions. For any constant c > 0, the function L(x) = c satisfies the condition \lim_{x \to \infty} \frac{L(tx)}{L(x)} = 1 for every t > 0, since \frac{L(tx)}{L(x)} = \frac{c}{c} = 1.

A fundamental non-constant class consists of powers of the logarithm. The function L(x) = (\log x)^\beta for \beta \in \mathbb{R} and x > 1 is slowly varying, as direct computation yields

\lim_{x \to \infty} \frac{L(tx)}{L(x)} = \lim_{x \to \infty} \left( \frac{\log(tx)}{\log x} \right)^\beta = \lim_{x \to \infty} \left(1 + \frac{\log t}{\log x}\right)^\beta = 1^\beta = 1,

because \frac{\log t}{\log x} \to 0 as x \to \infty.

Iterated logarithms also qualify as basic examples. Consider L(x) = \log \log x defined for x > e (to ensure the argument is positive). This function is slowly varying, since

\lim_{x \to \infty} \frac{L(tx)}{L(x)} = \lim_{x \to \infty} \frac{\log(\log x + \log t)}{\log \log x} = \lim_{x \to \infty} \frac{\log \log x + \log\left(1 + \frac{\log t}{\log x}\right)}{\log \log x} = 1,

as \log(1 + u) \sim u \to 0 for u = \frac{\log t}{\log x} \to 0.

These canonical cases (constants, logarithmic powers, and iterated logarithms) demonstrate the defining property of slow variation through negligible relative change under fixed scaling t > 0, as the numerical sketch below illustrates.
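The sketch tabulates the ratio L(tx)/L(x) at t = 10 for the three canonical families; the specific constants and sample points are arbitrary:

```python
import math

# Ratios L(10*x)/L(x) for the three basic families; all drift toward 1,
# at visibly different speeds.
examples = {
    "constant c=3": lambda x: 3.0,
    "(log x)^2":    lambda x: math.log(x) ** 2,
    "log log x":    lambda x: math.log(math.log(x)),
}

t = 10.0
for name, L in examples.items():
    ratios = [L(t * x) / L(x) for x in (1e3, 1e9, 1e27)]
    print(name, ["%.4f" % r for r in ratios])
```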
Non-Trivial Examples
A prominent non-trivial example of a slowly varying function is the oscillatory function L(x) = \sin(\log \log x) + 2 for x > e^e, which remains positive and bounded between 1 and 3. Its oscillations become increasingly slow relative to the logarithmic scale, ensuring the defining limit \lim_{x \to \infty} L(tx)/L(x) = 1 for every fixed t > 0.

Another class of examples arises from the Karamata representation theorem, which expresses a slowly varying function as L(x) = c(x) \exp\left( \int_{a}^{x} \frac{\varepsilon(t)}{t} \, dt \right), where c(x) \to c > 0 and \varepsilon(t) \to 0 as t \to \infty. Choosing a = e, c(x) \equiv 1, and \varepsilon(t) = 1/\log t gives \int_e^x \frac{dt}{t \log t} = \log \log x and hence L(x) = \log x exactly, a non-constant slowly varying function.[9]

In contrast, functions like L(x) = x^\varepsilon for fixed \varepsilon > 0 serve as counterexamples: they are regularly varying with index \varepsilon but not slowly varying, since \lim_{x \to \infty} L(tx)/L(x) = t^\varepsilon \neq 1 unless \varepsilon = 0.[9]

To verify slow variation for the oscillatory example, consider \lim_{x \to \infty} \frac{\sin(\log \log (tx)) + 2}{\sin(\log \log x) + 2}. Since \log \log (tx) = \log (\log x + \log t) = \log \log x + \log\left(1 + \frac{\log t}{\log x}\right), the argument shifts by \log\left(1 + \frac{\log t}{\log x}\right) \to 0 as x \to \infty. The continuity of the sine function, together with the denominator's being bounded away from zero, then ensures the ratio approaches 1. For the constructed example, \log x satisfies \lim_{x \to \infty} \frac{\log (tx)}{\log x} = 1 directly, as the additive \log t becomes negligible relative to the growing \log x.[9]
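For the oscillatory example, the same verification can be done numerically (a sketch; the scale factor t = 100 is an arbitrary choice):

```python
import math

# L(x) = sin(log log x) + 2: the inner argument shifts by
# log(log(t*x)) - log(log(x)) -> 0, so continuity of sin forces the
# ratio toward 1 even though L itself keeps oscillating.
def L(x):
    return math.sin(math.log(math.log(x))) + 2.0

t = 100.0
for x in [1e3, 1e6, 1e12, 1e24, 1e48]:
    print(f"x = {x:.0e}: L(tx)/L(x) = {L(t * x) / L(x):.5f}")
```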
Applications
In Probability Theory
In probability theory, slowly varying functions are essential for modeling the tail behavior of heavy-tailed distributions, particularly those in the domain of attraction of stable laws. For \alpha-stable distributions with 0 < \alpha < 2, the tail probabilities of random variables attracted to such laws satisfy

P(|X| > x) \sim x^{-\alpha} L(x),

where L is slowly varying at infinity, capturing the nuanced asymptotic decay beyond a pure power law.[10] This form ensures that the tails are heavier than exponential, enabling the characterization of infinite-variance cases where classical moment conditions fail.[11]

The generalized central limit theorem extends the classical result to heavy-tailed settings by incorporating slowly varying functions into the normalization. When independent and identically distributed random variables have infinite variance, suitably centered and scaled sums converge in distribution to an \alpha-stable law, with normalizing constants involving L; specifically, the scaling factor b_n is typically chosen so that n L(b_n)/b_n^\alpha \to 1.[10] This convergence holds without finite second moments, highlighting the role of regular variation in unifying limit behaviors across diverse heavy-tailed families.[11]

Examples of heavy-tailed distributions illustrate this application concretely. The Pareto distribution, with survival function \bar{F}(x) = (x_m / x)^\alpha for x > x_m, corresponds to L(x) = 1, a constant slowly varying function, yielding exact power-law tails that lie in the domain of attraction of stable laws for \alpha < 2.[12] Similarly, the Student-t distribution with 0 < \nu < 2 degrees of freedom has regularly varying tails P(|X| > x) \sim c_\nu x^{-\nu} with a constant slowly varying component, though variants with logarithmic perturbations, such as L(x) = (\log x)^\beta for \beta > 0, arise in perturbed heavy-tailed models to refine asymptotic approximations.[13]

Historically, Jovan Karamata's foundational work on regular variation laid the groundwork for these probabilistic applications, particularly in renewal theory, where slowly varying tails describe the persistence of heavy-tailed interarrival times and lead to anomalous growth in renewal counts.[14] His theorems also underpin large deviation principles for sums of random variables with regularly varying tails, where probabilities of rare events decay like x^{-\alpha} L(x), extending classical Cramér results to infinite-variance regimes.[15][16]
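A small Monte Carlo sketch makes the Pareto case concrete. The parameter values, sample size, and the choice of k are illustrative; the Hill estimator used below is a standard tail-index method, not something prescribed by the cited sources:

```python
import numpy as np

# Pareto tails are exactly x^{-alpha} * L(x) with L constant, so the
# empirical tail P(X > x) should track the pure power law x^{-alpha}.
rng = np.random.default_rng(0)
alpha, x_m, n = 1.5, 1.0, 200_000

X = x_m * rng.uniform(size=n) ** (-1.0 / alpha)   # inverse-CDF sampling

for x in [10.0, 100.0, 1000.0]:
    print(f"x = {x:>6}: empirical {np.mean(X > x):.5f} vs exact {x**-alpha:.5f}")

# Hill estimator on the k largest order statistics recovers alpha.
k = 2000
order = np.sort(X)
hill = k / np.sum(np.log(order[-k:] / order[-k - 1]))
print(f"Hill estimate of alpha: {hill:.3f}")      # should be near 1.5
```

Since L \equiv 1 here, the empirical tail matches the pure power law; a non-constant L would show up as a slow drift in such comparisons.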
In Asymptotic Analysis
Slowly varying functions are central to deterministic asymptotic expansions, particularly through their role in Tauberian theorems. Karamata's extensions of classical Tauberian results link the asymptotic behavior of Laplace transforms to slowly varying functions, enabling the recovery of the original function's behavior from transform asymptotics under minimal regularity conditions. For instance, if the Laplace transform satisfies \hat{f}(s) \sim L(1/s)/s as s \to 0^+, where L is slowly varying, then f(x) \sim L(x) as x \to \infty, provided f is non-negative and non-decreasing.[17][1]

In the context of asymptotic equivalences, the relation f \sim g as x \to \infty means that f(x)/g(x) \to 1. Slowly varying functions preserve such equivalences under multiplication: if f \sim g and L is slowly varying, then f(x) L(x) \sim g(x) L(x) as x \to \infty. This preservation property facilitates the analysis of composite asymptotic behaviors in expansions, ensuring that the slowly varying component does not alter the leading-order equivalence. The Karamata representation theorem, which expresses slowly varying functions as L(x) = c(x) \exp\left(\int_a^x \frac{\epsilon(t)}{t} dt\right) with c(x) \to c \in (0, \infty) and \epsilon(t) \to 0, underpins such relations by providing an integral form that aligns with asymptotic manipulations.[1][2]

A key application arises in the asymptotics of integrals involving slowly varying functions. Karamata's integral theorem, stated here without proof, asserts that if L is slowly varying and locally bounded, then \int_1^x L(t) \, dt \sim x L(x) as x \to \infty. The result extends to more general regularly varying integrands and is instrumental in deriving asymptotic expansions for cumulative functions modulated by slowly varying terms.[2][1]

In analytic number theory, slowly varying functions appear in growth-rate estimates, such as for the divisor function d(n), which counts the positive divisors of n. The summatory function satisfies \sum_{n \leq x} d(n) \sim x \log x as x \to \infty, where \log x serves as a canonical slowly varying function capturing the logarithmic growth. This equivalence highlights how slowly varying components refine power-law asymptotics in such arithmetic problems.[18]
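For L(x) = \log x the integral in Karamata's theorem has a closed form, so the equivalence can be checked directly (a minimal sketch):

```python
import math

# Karamata's integral theorem for L(x) = log(x):
# int_1^x log(t) dt = x log x - x + 1  ~  x L(x) = x log x.
# The exact antiderivative lets us check the ratio without quadrature.
def ratio(x):
    integral = x * math.log(x) - x + 1
    return integral / (x * math.log(x))

for x in [1e2, 1e5, 1e10, 1e20]:
    print(f"x = {x:.0e}: integral / (x L(x)) = {ratio(x):.6f}")
```

The ratio equals 1 - 1/\log x + 1/(x \log x), so it approaches 1 at the characteristically slow logarithmic rate.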
Generalizations and Extensions
O-Regular Variation
O-regularly varying functions generalize regularly varying functions by relaxing the requirement that the limit \lim_{x \to \infty} f(tx)/f(x) = t^\rho exist exactly, requiring instead that the deviation from power-law behavior remain bounded asymptotically. Specifically, a positive measurable function f: [a, \infty) \to (0, \infty) for some a > 0 is O-regularly varying with index \rho \in \mathbb{R} if, for every t > 0,

\limsup_{x \to \infty} \left| \frac{f(tx)}{f(x)} - t^\rho \right| < \infty.

This condition ensures that f(tx)/f(x) stays within a finite distance of t^\rho as x grows large, capturing functions whose asymptotic growth mimics x^\rho up to bounded perturbations.[9]

Slowly varying functions, which satisfy \lim_{x \to \infty} f(tx)/f(x) = 1 for all t > 0 and thus exhibit regular variation with index 0, form a subclass of O-regularly varying functions with \rho = 0. In this case, the limsup condition reduces to bounded oscillation around 1, accommodating functions that approach constant scaling without converging precisely to it. The O-regular class thus encompasses standard regular variation (where the limsup is realized as a limit) but extends to cases where oscillations prevent exact convergence while preserving bounded relative errors.[9]

Key properties of O-regularly varying functions include their inclusion in the broader Karamata framework, which studies asymptotic behaviors akin to power functions, though the absence of exact limits means some classical theorems (such as uniform convergence on compact sets) hold only in weakened forms. For instance, products and quotients of O-regularly varying functions with compatible indices remain O-regularly varying, but the resulting index may require additional verification of boundedness. Unlike strictly regularly varying functions, O-regular ones do not necessarily admit a representation f(x) = x^\rho \ell(x) with \ell slowly varying, highlighting their role as a permissive extension for asymptotic analysis.[9]

A representative example is the product of a regularly varying function of index \rho with a bounded positive function, which yields an O-regularly varying function of the same index. Consider f(x) = x^\rho (2 + \sin(\log x)) for \rho \in \mathbb{R}; here x^\rho is regularly varying, and 2 + \sin(\log x) is bounded between 1 and 3, ensuring that \left| f(tx)/f(x) - t^\rho \right| remains bounded as x \to \infty due to the controlled oscillation. This construction illustrates how bounded multipliers introduce deviations without altering the dominant power-law growth.[9]
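The sketch below samples the deviation |f(tx)/f(x) - t^\rho| for this oscillatory example (with the illustrative choices \rho = 1 and t = 2): the deviation keeps oscillating rather than converging, but stays bounded, which is exactly what O-regular variation permits.

```python
import math

# O-regular example f(x) = x^rho * (2 + sin(log x)): the deviation
# |f(t*x)/f(x) - t^rho| oscillates (no limit) yet remains bounded.
rho, t = 1.0, 2.0

def f(x):
    return x**rho * (2.0 + math.sin(math.log(x)))

for k in range(2, 38, 6):
    x = 10.0 ** k
    dev = abs(f(t * x) / f(x) - t**rho)
    print(f"x = 1e{k:02d}: |f(tx)/f(x) - t^rho| = {dev:.4f}")
```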
Multivariate Slowly Varying Functions
In the multivariate setting, a measurable function L: \mathbb{R}_+^d \to \mathbb{R}_+ is slowly varying at infinity if, for every t > 0,

\lim_{\|x\| \to \infty} \frac{L(t x)}{L(x)} = 1,

where \| \cdot \| is the Euclidean norm. This definition extends the univariate notion to vector arguments and is typically considered on closed convex cones C \subset \mathbb{R}^d, such as the nonnegative orthant [0, \infty)^d \setminus \{0\}, to capture directional behavior at infinity.[19]

Multivariate slowly varying functions play a central role in spectral decompositions of multivariate regular variation, where the tail behavior of a distribution is decomposed into a radial component governed by a regularly varying scaling function and an angular component on the unit sphere. Specifically, in polar coordinates (R, \Theta) for a random vector in \mathbb{R}^d \setminus \{0\}, the limiting spectral measure S on the unit sphere arises from vague convergence of the form t \, \mathbb{P}[ X / b(t) \in \cdot \,] \xrightarrow{v} \nu(\cdot) on \mathbb{R}^d \setminus \{0\}, with \nu a nonzero Radon measure homogeneous of order -\alpha < 0 and b(t) a regularly varying scaling function; the slowly varying part appears in the normalization that makes the limit independent of scaling in angular directions.

These functions underpin multivariate regular variation, essential for modeling joint extremes in extreme value theory, such as the convergence of componentwise maxima to max-stable distributions or the analysis of tail dependence in high-dimensional risks. For instance, in peaks-over-threshold methods, the slowly varying component helps estimate the angular measure S, quantifying dependence structures in heavy-tailed data like financial returns or environmental extremes.[20]

Developments in multivariate slowly varying functions gained prominence after the 1980s, driven by applications in risk management, such as modeling aggregated insurance losses from the Danish fire data (1980–1990), and in spatial statistics for analyzing multivariate spatial extremes in environmental sciences. Influential works include Ledford and Tawn on hidden regular variation (1996) and Heffernan and Resnick on conditional extremes (2007), which extended these concepts to practical estimation in finance and hydrology.[20][19]
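A minimal bivariate sketch of the definition follows; the norm-based choice L(x) = \log(1 + \|x\|), the direction, and the scales are illustrative assumptions:

```python
import math

# In d = 2: L(x) = log(1 + ||x||) depends on x only through its norm, and
# log is slowly varying, so L(t*x)/L(x) -> 1 as ||x|| -> infinity along
# any fixed ray.
def L(v):
    return math.log(1.0 + math.hypot(v[0], v[1]))

direction = (0.6, 0.8)   # a unit vector
t = 5.0
for r in [1e2, 1e6, 1e12]:
    x = (direction[0] * r, direction[1] * r)
    tx = (t * x[0], t * x[1])
    print(f"||x|| = {r:.0e}: L(tx)/L(x) = {L(tx) / L(x):.5f}")
```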