
Law of the iterated logarithm

In probability theory, the law of the iterated logarithm (LIL) is a refinement of the central limit theorem that quantifies the almost sure asymptotic fluctuations of partial sums of independent and identically distributed random variables with mean zero and unit variance. For a sequence of such random variables X_1, X_2, \dots, let S_n = \sum_{i=1}^n X_i; then, with probability one, \limsup_{n \to \infty} \frac{S_n}{\sqrt{2n \log \log n}} = 1, \quad \liminf_{n \to \infty} \frac{S_n}{\sqrt{2n \log \log n}} = -1. The boundary function \sqrt{2n \log \log n} sharply delineates the growth rate of deviations beyond the \sqrt{n} scale of the central limit theorem, capturing oscillatory behavior in which the sums come arbitrarily close to these limits infinitely often while exceeding any slightly larger boundary only finitely often.

The LIL originated in the context of metric number theory and early probability investigations into the laws of large numbers. An initial version for [Bernoulli](/page/Bernoulli) trials—where the random variables take values 0 or 1 with probabilities p and 1-p—was established by Aleksandr Khintchine in 1924, showing that the normalized deviations \frac{S_n - np}{\sqrt{2np(1-p) \log \log n}} have limsup 1 and liminf -1 [almost surely](/page/Almost_surely) ([Khintchine 1924](https://doi.org/10.4064/fm-6-1-9-20)). This result built on prior estimates by [Hardy](/page/Hardy) and Littlewood (1914) and Hausdorff (1914) for [uniform distribution](/page/Uniform_distribution) modulo 1, linking probabilistic limits to [Diophantine approximation](/page/Diophantine_approximation). [Andrey Kolmogorov](/page/Andrey_Kolmogorov) extended the theorem in 1929 to sums of general independent random variables with finite variances, providing the modern standard form and proving it via exponential bounds and Borel–Cantelli lemmas.

Beyond sums of i.i.d. variables, the [LIL](/page/Lil) has been generalized to diverse stochastic settings, underscoring its role in understanding pathwise regularity. For standard [Brownian motion](/page/Brownian_motion) B(t), Paul Lévy (1937) and others derived the continuous-time analogue: almost surely, \limsup_{t \to \infty} \frac{B(t)}{\sqrt{2t \log \log t}} = 1, \quad \liminf_{t \to \infty} \frac{B(t)}{\sqrt{2t \log \log t}} = -1, which is linked to the discrete statement through the invariance principle connecting random walks to diffusion processes. Extensions include martingale versions (e.g., Stout, 1970), laws for dependent sequences such as stationary processes, and applications to empirical processes and U-statistics (e.g., Dehling and Wendler, 2011). These variants often retain the \sqrt{2 \log \log n} factor, adjusted for covariance structures.

The theorem's implications span statistics, where it informs sequential testing and self-normalized sums (e.g., Jing et al., 2003), and random graph theory, where iterated-logarithm laws have been obtained for edge counts in Erdős–Rényi models (e.g., Borgs et al., 2019). In infinite-dimensional settings it applies to Banach space-valued processes (e.g., Ledoux and Talagrand, 1991). Proving the LIL typically involves Skorokhod embedding or Strassen's functional form (1964), which embeds the partial sums into Brownian motion while preserving the logarithmic boundary. Overall, the LIL exemplifies the interplay between convergence and dispersion, providing a precise "law of small fluctuations" for almost sure paths.

Formal Statement

For Sums of Independent Random Variables

The law of the iterated logarithm for sums of random variables considers a sequence of independent and identically distributed random variables X_1, X_2, \dots with \mathbb{E}[X_i] = 0 and finite positive variance \sigma^2 = \mathbb{E}[X_i^2] > 0. The partial sums are defined as S_n = \sum_{i=1}^n X_i, and the normalized process is B_n = S_n / (\sigma \sqrt{n \log \log n}). Under these assumptions of centering at the mean and finite second moment, the law states that almost surely, \limsup_{n \to \infty} B_n = \sqrt{2} and \liminf_{n \to \infty} B_n = -\sqrt{2}. The normalization factor \sigma \sqrt{n \log \log n} derives from asymptotic analysis of the tail probabilities and maximal inequalities for the sums, providing the boundary for almost sure oscillations beyond the \sqrt{n} scale of the central limit theorem. A representative example occurs when each X_i is a symmetric Bernoulli random variable taking values +1 and -1 with probability 1/2 each, so \sigma^2 = 1 and S_n traces a simple symmetric random walk on the integers; in this case, the walk reaches heights and depths of order \pm \sqrt{2 n \log \log n} infinitely often almost surely, touching the precise fluctuation boundary.
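The boundary can be illustrated numerically. The following is a minimal sketch (assuming NumPy is available; the seed and path length are illustrative choices) that simulates a simple symmetric random walk and compares the running extremes of S_n with the envelope \pm \sqrt{2 n \log \log n}.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 1_000_000

# Simple symmetric random walk: i.i.d. steps of +1/-1, so mean 0 and variance 1.
steps = rng.choice([-1.0, 1.0], size=n_steps)
s = np.cumsum(steps)

n = np.arange(3, n_steps + 1)                    # log log n requires n > e
boundary = np.sqrt(2.0 * n * np.log(np.log(n)))
ratio = s[2:] / boundary

print("max of S_n / sqrt(2 n log log n):", ratio.max())
print("min of S_n / sqrt(2 n log log n):", ratio.min())
# On a single finite path the extremes typically lie inside (-1, 1); the LIL says the
# limsup and liminf equal +1 and -1 only as n -> infinity, with the boundary approached
# infinitely often but exceeded by any fixed factor (1 + eps) only finitely often.
```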

For Brownian Motion

The law of the iterated logarithm for Brownian motion provides a precise description of the almost sure asymptotic fluctuations of a standard Brownian motion W(t) on [0, \infty). Consider the normalized process W(t) / \sqrt{2 t \log \log t} as t \to \infty. Almost surely, \limsup_{t \to \infty} \frac{W(t)}{\sqrt{2 t \log \log t}} = 1, \quad \liminf_{t \to \infty} \frac{W(t)}{\sqrt{2 t \log \log t}} = -1. This result, establishing the boundary for the pathwise oscillations of W(t), was originally proved by Paul Lévy in 1937. An additional pathwise property concerns the maximal deviation of the path from its terminal value in the large-time regime. Specifically, almost surely, \limsup_{t \to \infty} \frac{\sup_{0 \leq s \leq t} |W(s) - W(t)|}{\sqrt{2 t \log \log t}} = 1. This follows from the equivalence in law between \sup_{0 \leq s \leq t} |W(s) - W(t)| and \sup_{0 \leq s \leq t} |W(s)| via time reversal of the Brownian path, combined with the law of the iterated logarithm for the running supremum process M(t) = \sup_{0 \leq s \leq t} W(s), for which \limsup_{t \to \infty} M(t) / \sqrt{2 t \log \log t} = 1 almost surely. The continuous version of the law connects to its discrete analogue through Donsker's invariance principle, which embeds scaled sums of independent random variables into the path space of Brownian motion, thereby transferring limit properties like the law of the iterated logarithm from random walks to the continuous setting.
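A minimal simulation sketch (assuming NumPy; grid spacing, horizon, and seed are illustrative) approximates a Brownian path by cumulative Gaussian increments and tracks both W(t) and its running supremum M(t) against the envelope \sqrt{2 t \log \log t}.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 1_000_000                       # time horizon on an integer grid, dt = 1
increments = rng.normal(0.0, 1.0, size=T)
w = np.cumsum(increments)           # W(1), W(2), ..., W(T)

t = np.arange(3, T + 1)             # log log t requires t > e
boundary = np.sqrt(2.0 * t * np.log(np.log(t)))

ratio_w = w[2:] / boundary                          # W(t) against the envelope
ratio_m = np.maximum.accumulate(w)[2:] / boundary   # running supremum M(t)

print("extremes of W(t)/sqrt(2 t log log t):", ratio_w.min(), ratio_w.max())
print("final value of M(t)/sqrt(2 t log log t):", ratio_m[-1])
# On a finite path the extremes typically stay inside (-1, 1); the LIL says the
# limsup/liminf equal +1/-1 only in the t -> infinity limit, and the running
# supremum M(t) shares the same limsup behavior.
```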

Interpretation

Asymptotic Fluctuations

The law of the iterated logarithm delineates the boundary between the almost sure convergence to the mean provided by the law of large numbers, which operates at the scale of n, and the oscillatory fluctuations beyond the \sqrt{n} scale captured distributionally by the central limit theorem. The double logarithm \log \log n emerges as the pivotal term marking this transition, where normalized partial sums remain O(\sqrt{n \log \log n}) almost surely but exhibit persistent deviations reaching the order \sqrt{n \log \log n} infinitely often. This fluctuation scale quantifies the magnitude of rare large deviations in the tail behavior of the sums: for i.i.d. random variables with mean zero and unit variance, the partial sums S_n satisfy \limsup_{n \to \infty} S_n / \sqrt{2 n \log \log n} = 1 and \liminf_{n \to \infty} S_n / \sqrt{2 n \log \log n} = -1 almost surely, implying that excursions of size \sqrt{2 n \log \log n} occur repeatedly, while any larger order does not. Thus, the LIL establishes the almost sure envelope for pathwise behavior, refining the probabilistic limits of prior theorems by specifying the exact growth rate of these boundary-touching fluctuations. Intuitively, the normalized sums display an oscillatory pattern, repeatedly approaching and retreating from the boundaries \pm \sqrt{2 \log \log n}, which provides a parabolic envelope constraining the maximum deviations as n grows. This recurrent touching of the boundary illustrates the law's role in describing the fine-scale, almost sure dynamics of random walks, where the path fills the region within these limits without escaping them. In comparison, the law of large numbers ensures deviations vanish relative to n, the central limit theorem approximates fluctuations relative to \sqrt{n} in distribution, and the law of the iterated logarithm pins down the \sqrt{n \log \log n} scale for the almost sure extremal behavior.
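To make the slow growth of the boundary factor concrete, here is a quick numerical illustration (natural logarithms, values rounded to two decimals):

$$
\sqrt{2 \log \log 10^{6}} \approx 2.29, \qquad
\sqrt{2 \log \log 10^{12}} \approx 2.58, \qquad
\sqrt{2 \log \log 10^{100}} \approx 3.30,
$$

so the almost sure envelope \sqrt{2 n \log \log n} exceeds the central limit scale \sqrt{n} only by this very slowly growing factor.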

Relation to Central Limit Theorem

The central limit theorem (CLT) asserts that, for partial sums S_n of i.i.d. random variables with mean zero and finite variance \sigma^2 > 0, the normalized sum S_n / \sqrt{n} converges in distribution to a normal law N(0, \sigma^2), thereby describing the typical scale of fluctuations around the mean as being of order \sqrt{n}. This distributional limit captures the average behavior of the partial sums over many realizations but does not provide pathwise control on the sequence for a single realization. In contrast, the law of the iterated logarithm (LIL) refines the CLT by establishing the precise almost sure growth rate of the maximal deviations in S_n, showing that the fluctuations oscillate between bounds of order \sqrt{2n \log \log n} infinitely often, which is finer than the \sqrt{n} scale suggested by the CLT alone. While the CLT suffices for understanding probabilistic tails and average-case performance, the LIL reveals that the \sqrt{n} normalization is inadequate for uniform or supremum norms over the entire trajectory, as the actual extreme excursions grow slightly faster due to the logarithmic factor. The distinction in strength underscores that the CLT provides weak convergence in distribution, whereas the LIL delivers an almost sure statement that holds pathwise with probability one and pins down the exact boundary of fluctuations; neither implies the other in general, but together they describe the typical and the extremal behavior of the same sums. For instance, in the case of a simple symmetric random walk (i.i.d. steps of \pm 1), the CLT predicts Gaussian-like fluctuations of order \sqrt{n} at each fixed n, but the LIL demonstrates that the walk reaches levels \pm \sqrt{2 n \log \log n} infinitely often almost surely, highlighting recurrent boundary touching beyond typical CLT predictions.
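A small numerical sketch of this contrast (assuming SciPy is available for the normal tail; the grid of n values is illustrative) tabulates the Gaussian tail probability at the LIL boundary expressed on the CLT scale.

```python
import numpy as np
from scipy.stats import norm

# Gaussian tail at the LIL boundary, on the CLT scale S_n / sqrt(n).
for n in (10**4, 10**6, 10**8, 10**12):
    b = np.sqrt(2.0 * np.log(np.log(n)))      # LIL boundary expressed on the CLT scale
    tail = norm.sf(b)                          # approx. P(S_n / sqrt(n) > b) for large n
    print(f"n = {n:>14,d}   boundary = {b:.3f}   Gaussian tail ~ {tail:.2e}")
# The tail decays roughly like 1 / log n: small for each fixed n (consistent with the
# CLT), yet not summable in n, which is the heuristic reason the boundary is reached
# infinitely often along the whole path.
```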

Historical Context

Origins in the 1920s

The law of the iterated logarithm originated in the 1920s amid investigations into the asymptotic behavior of sums of random variables, building on the strong law of large numbers to quantify the precise scale of fluctuations. This development addressed limitations in earlier convergence results, particularly for the tails of partial sums in probabilistic series. In 1924, Aleksandr Khinchin introduced the first formulation of the law for sums of independent, identically distributed Bernoulli random variables, determining the exact boundary for the growth of deviations from the mean. This built on prior estimates by Hardy and Littlewood (1914) and Hausdorff (1914) for uniform distribution modulo 1, linking probabilistic limits to Diophantine approximation. Khinchin's result highlighted the role of the iterated logarithm in capturing the almost sure limsup behavior of normalized sums. Building on this, Andrey Kolmogorov published his proof in 1929, applicable to sums of bounded independent random variables in the context of the strong law of large numbers. In particular, Kolmogorov's argument establishes the upper bound \limsup_{n \to \infty} \frac{S_n}{\sqrt{2 n \log \log n}} \leq \sigma almost surely, where S_n is the partial sum and \sigma^2 is the common variance, thereby refining convergence criteria for infinite series in probability. This work emerged from contemporaneous efforts to extend fluctuation theorems, with related explorations by other contributors shaping the evolving framework shortly thereafter.

Key Proofs and Contributors

The 1940s represented a crucial phase in the establishment of the law of the iterated logarithm, building on earlier precursors from the 1920s to provide complete proofs, including lower bounds, for sums of independent random variables and extensions to continuous-time processes. In 1941, Philip Hartman and Aurel Wintner proved the full law for partial sums of independent, identically distributed random variables with zero mean and finite variance, establishing both the upper bound and the lower bound with the precise constant 1. This result completed the characterization for the i.i.d. case by confirming that the lim sup and lim inf of the normalized sums equal 1 and -1, respectively. Paul Lévy extended these ideas to continuous-time processes, developing the law for Brownian motion through a series of works from 1940 to 1948; his 1948 monograph definitively proved that for standard Brownian motion B(t), \limsup_{t \to \infty} \frac{B(t)}{\sqrt{2t \log \log t}} = 1 and \liminf_{t \to \infty} \frac{B(t)}{\sqrt{2t \log \log t}} = -1. These efforts by Hartman, Wintner, and Lévy solidified the exact constants 1 and -1 in the almost sure statement of the law. In the mid-20th century, Yuri Linnik and collaborators further refined the law for sums involving non-identical distributions and stationary sequences, broadening its applicability beyond identically distributed variables. Key milestones include Hartman and Wintner's publication in the American Journal of Mathematics in 1941, William Feller's 1946 paper on identically distributed variables in the Annals of Mathematics, and Lévy's comprehensive treatment in Comptes Rendus notes leading to his 1948 book, collectively establishing the law by the end of the decade.

Proof Techniques

Upper Bound Construction

The upper bound in the classical law of the iterated logarithm asserts that for partial sums S_n of independent random variables with mean zero and variance one, \limsup_{n \to \infty} S_n / \sqrt{2n \log \log n} \leq 1 almost surely. To establish this, fix \epsilon > 0 and show that deviations exceeding (1 + \epsilon) \sqrt{2n \log \log n} occur only finitely often with probability one. The construction relies on probabilistic tail estimates to bound the likelihood of large deviations and on analytic tools to control the process along subsequences and within blocks. This approach was pioneered by Kolmogorov in 1929, who utilized exponential moment bounds and maximal inequalities for the sums.

Exponential inequalities provide the foundational tail estimates for S_n. For random variables with a finite moment generating function, Chernoff-type bounds yield P(S_n > x \sqrt{n}) \leq \exp(-x^2 / 2) in the relevant range of x, derived from Markov's inequality applied to \mathbb{E}[\exp(\lambda S_n)]. At the normalized boundary this gives, for any small \delta > 0 and sufficiently large n, P(S_n > a \sqrt{2n \log \log n}) \leq (\log n)^{-a^2 (1 - \delta)}, using moderate deviation estimates. Summed over all n, these bounds diverge, since (\log n)^{-c} decays too slowly, so the Borel–Cantelli lemma cannot be applied to the full sequence directly.

Instead, consider a geometric subsequence n_k = \lfloor \theta^k \rfloor for \theta > 1. Since \log \log n_k \approx \log k, the terms P(A_k) for the events A_k = \{S_{n_k} > (1 + \epsilon) \sqrt{2 n_k \log \log n_k}\} decay like k^{-(1+\epsilon)^2 (1 - \delta)}, so \sum_k P(A_k) < \infty. The first Borel–Cantelli lemma, which does not require the A_k to be independent, then gives P(A_k \text{ i.o.}) = 0.

To extend from the subsequence to the full limsup, a blocking argument divides the indices into geometrically growing blocks (n_k, n_{k+1}]. The fluctuation within a block is controlled by an exponential maximal inequality (Kolmogorov's exponential bounds, or Lévy's inequality for symmetric summands), which shows that P\big(\max_{m \leq n_{k+1}} S_m > (1 + \epsilon) \sqrt{2 n_k \log \log n_k}\big) is bounded by a constant multiple of (\log n_k)^{-(1+\epsilon)^2 (1 - \delta)/\theta}. Choosing \theta close enough to 1 that (1+\epsilon)^2 (1 - \delta)/\theta > 1 makes this series summable over k; note that the simpler Chebyshev-type maximal inequality, P(\max_{m \leq n_{k+1}} |S_m| > \lambda) \leq \mathrm{Var}(S_{n_{k+1}}) / \lambda^2, would only give terms of order 1 / \log \log n_k, which are not summable, so the exponential bounds are essential here.

By Borel–Cantelli again, almost surely for all sufficiently large k and every n \in (n_k, n_{k+1}], S_n \leq \max_{m \leq n_{k+1}} S_m \leq (1 + \epsilon) \sqrt{2 n_k \log \log n_k} \leq (1 + \epsilon) \sqrt{2 n \log \log n}, so \limsup_{n \to \infty} S_n / \sqrt{2 n \log \log n} \leq 1 + \epsilon almost surely. Letting \epsilon \to 0 along a countable sequence yields the tight upper bound without requiring the lower bound.
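For concreteness, under a simplifying Gaussian-tail approximation the summability check along the geometric subsequence can be summarized in one display:

$$
P\!\left(S_{n_k} > (1+\epsilon)\sqrt{2 n_k \log \log n_k}\right)
\;\lesssim\; \exp\!\big(-(1+\epsilon)^2 \log \log n_k\big)
= (\log n_k)^{-(1+\epsilon)^2}
\;\asymp\; (k \log \theta)^{-(1+\epsilon)^2},
$$

and since (1+\epsilon)^2 > 1, the series \sum_k k^{-(1+\epsilon)^2} converges, which is precisely the summability the first Borel–Cantelli lemma requires.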

Lower Bound Argument

The lower bound for the law of the iterated logarithm in the case of sums of i.i.d. random variables with mean 0 and finite variance demonstrates that the normalized partial sums S_n / \sqrt{2n \log \log n} reach values arbitrarily close to 1 infinitely often almost surely, establishing \limsup_{n \to \infty} S_n / \sqrt{2n \log \log n} = 1 a.s. when combined with the upper bound, which prevents overshooting beyond the boundary. The proof constructs a sequence of events along a subsequence on which large positive increments occur with probabilities summing to infinity, so that the second Borel–Cantelli lemma yields infinitely many occurrences.

To implement this, consider a geometric subsequence n_k = \lfloor \lambda^k \rfloor for \lambda > 1 sufficiently large, so that the block increments \Delta_k = S_{n_k} - S_{n_{k-1}} are sums over disjoint blocks of i.i.d. variables, while S_{n_{k-1}} = o(\sqrt{n_k \log \log n_k}) almost surely by the upper bound. For fixed 0 < \epsilon < 1, define the events A_k = \{ \Delta_k > (1 - \epsilon) \sqrt{2 n_k \log \log n_k} \}. Because the blocks are disjoint, the events A_k are independent. If \sum_k P(A_k) = \infty, the second Borel–Cantelli lemma implies that the A_k occur infinitely often almost surely; on those indices, since the contribution of S_{n_{k-1}} is negligible, S_{n_k} exceeds (1 - 2\epsilon) \sqrt{2 n_k \log \log n_k}, so the limsup is at least 1 - 2\epsilon a.s. Letting \epsilon \to 0 yields the full lower bound. This approach originates in Khintchine's work for bounded variables and was generalized by Kolmogorov.

The key to the divergent sum is a lower bound on P(A_k) using normal (or moderate deviation) approximations for the tail. The increment \Delta_k has mean 0 and variance approximately n_k (for large \lambda, since n_{k-1} = n_k / \lambda is comparatively small), so \Delta_k / \sqrt{n_k} is approximately N(0,1) and P\big(\Delta_k > (1 - \epsilon) \sqrt{2 n_k \log \log n_k}\big) \sim \frac{(\log n_k)^{-(1 - \epsilon)^2}}{(1 - \epsilon) \sqrt{4 \pi \log \log n_k}}. For n_k = \lambda^k one has \log n_k \approx k \log \lambda and \log \log n_k \approx \log k, so P(A_k) \gtrsim c\, k^{-(1 - \epsilon)^2} / \sqrt{\log k} for some c > 0, and since (1 - \epsilon)^2 < 1 this series diverges. This confirms the required divergence. The geometric spacing ensures the blocks are separated, with n_k / n_{k-1} \approx \lambda, and makes the contribution of earlier terms negligible relative to the boundary \sqrt{2 n_k \log \log n_k}.

For the negative boundary, applying the same argument to -S_n, whose summands also have mean zero and the same variance, gives \liminf_{n \to \infty} S_n / \sqrt{2n \log \log n} = -1 a.s.
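A quick numerical contrast between the two Borel–Cantelli series (a minimal sketch assuming NumPy; the value \epsilon = 0.1 and the truncation points are illustrative):

```python
import numpy as np

eps = 0.1
k = np.arange(2, 1_000_002, dtype=float)     # k = 2, ..., 1_000_001

# Upper-bound series (first Borel-Cantelli): terms ~ k^{-(1+eps)^2}, exponent > 1.
upper_terms = k ** (-(1.0 + eps) ** 2)
# Lower-bound series (second Borel-Cantelli): terms ~ k^{-(1-eps)^2} / sqrt(log k).
lower_terms = k ** (-(1.0 - eps) ** 2) / np.sqrt(np.log(k))

for m in (10**3, 10**5, 10**6):
    print(f"partial sums to k={m:>7d}:  "
          f"upper = {upper_terms[:m].sum():.3f}   lower = {lower_terms[:m].sum():.1f}")
# The upper-bound partial sums approach a finite limit (convergent series), while the
# lower-bound partial sums keep growing without bound (divergent series) -- the
# dichotomy that the two Borel-Cantelli lemmas exploit.
```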

Generalizations

To Martingales and Dependent Sequences

The law of the iterated logarithm extends to martingale difference sequences, relaxing the independence assumption of the classical case while preserving the characteristic fluctuation bound. For a martingale \{S_n, \mathcal{F}_n\} with differences \Delta S_n = S_n - S_{n-1} satisfying E(\Delta S_n \mid \mathcal{F}_{n-1}) = 0 and bounded increments |\Delta S_n| \leq M almost surely, Stout established that if the conditional variances satisfy \sum_{k=1}^n E(\Delta S_k^2 \mid \mathcal{F}_{k-1}) = n \sigma^2 + o(n) with uniform integrability ensuring the sum behaves asymptotically like n \sigma^2, then \limsup_{n \to \infty} \frac{S_n}{\sqrt{2 n \sigma^2 \log \log n}} = 1 \quad \text{almost surely.} This result, known as Stout's martingale LIL, requires the uniform integrability of the conditional second moments to control the variance growth precisely, allowing the martingale to mimic the i.i.d. boundary behavior under the weaker dependence structure induced by the filtration.

Extensions to dependent sequences further broaden the LIL's applicability by accommodating weak forms of dependence, such as m-dependence or mixing conditions under which correlations decay sufficiently fast. For m-dependent sequences, where variables more than m lags apart are independent, the LIL holds under finite second moments and centering, because the sequence can be partitioned into blocks that are nearly independent. For strongly mixing processes, Peligrad proved an invariance principle implying the LIL for partial sums of sequences with mixing rate \alpha(k) = O(k^{-\beta}) for some \beta > 1, finite variance \sigma^2 > 0, and appropriate moment conditions, yielding the same limsup bound \sqrt{2 \log \log n} after normalization by \sigma \sqrt{n}. These results rely on maximal inequalities tailored to the decay of dependence, ensuring the sums fluctuate like independent ones over long horizons.

A canonical example arises in autoregressive processes of order 1 (AR(1)), defined by X_t = \rho X_{t-1} + \epsilon_t with |\rho| < 1 and i.i.d. innovations \epsilon_t of mean 0 and variance \sigma_\epsilon^2. The stationary AR(1) sequence is strongly mixing, so after centering (the process mean is 0 if E(\epsilon_t) = 0), the partial sums S_n = \sum_{t=1}^n X_t satisfy the LIL with normalization \sqrt{2 v_n \log \log n}, where v_n is the asymptotic variance accounting for the autocorrelation structure: v_n \sim n \sigma_X^2 (1 + \rho)/(1 - \rho) = n \sigma_\epsilon^2 / (1 - \rho)^2, with \sigma_X^2 = \sigma_\epsilon^2 / (1 - \rho^2) the stationary variance.
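A minimal simulation sketch of the AR(1) example (assuming NumPy; the values of \rho, \sigma_\epsilon, the path length, and the seed are illustrative) checks the partial sums against the mixing-adjusted envelope with the long-run variance.

```python
import numpy as np

rng = np.random.default_rng(2)

rho, sigma_eps = 0.5, 1.0
n = 200_000

# Stationary AR(1): X_t = rho * X_{t-1} + eps_t with i.i.d. N(0, sigma_eps^2) innovations.
eps = rng.normal(0.0, sigma_eps, size=n)
x = np.empty(n)
x[0] = rng.normal(0.0, sigma_eps / np.sqrt(1.0 - rho**2))   # draw from the stationary law
for t in range(1, n):
    x[t] = rho * x[t - 1] + eps[t]

s = np.cumsum(x)
long_run_var = sigma_eps**2 / (1.0 - rho)**2     # per-step long-run variance of the sums
idx = np.arange(3, n + 1)
boundary = np.sqrt(2.0 * long_run_var * idx * np.log(np.log(idx)))

ratio = s[2:] / boundary
print("extremes of S_n over the mixing-adjusted LIL boundary:", ratio.min(), ratio.max())
# With the long-run variance in the normalization, the extremes should behave like
# those of an independent sequence: near, but typically inside, the interval [-1, 1].
```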

Extensions to Other Stochastic Processes

The law of the iterated logarithm extends to Lévy processes with finite variance, where the Gaussian-scale fluctuations dominate at large times. For a Lévy process X(t) with E[X(1)] = 0 and E[X(1)^2] = \sigma^2 < \infty, the result states that \limsup_{t \to \infty} \frac{|X(t)|}{\sqrt{2 \sigma^2 t \log \log t}} = 1 \quad \text{almost surely}, provided the jumps are compatible with the finite-variance assumption. This generalizes the classical Brownian motion case, as the functional central limit theorem implies asymptotic equivalence to a scaled Wiener process.

For fractional Brownian motion B^H(t) with Hurst parameter H \in (0,1) \setminus \{1/2\}, the long-range dependence (or anti-dependence) alters the normalization, leading to a modified boundary. Specifically, \limsup_{t \to \infty} \frac{|B^H(t)|}{t^H \sqrt{2 \log \log t}} = 1 \quad \text{almost surely}. The exponent H here reflects the self-similar structure of the process, and the result holds under the standard definition of B^H as a centered Gaussian process with covariance E[B^H(t) B^H(s)] = \frac{1}{2} (t^{2H} + s^{2H} - |t-s|^{2H}).

Ornstein-Uhlenbeck processes, defined as stationary solutions of the SDE dY(t) = -\theta Y(t) dt + \sigma dW(t) for \theta > 0, exhibit LIL-type behavior for their path supremum over growing intervals. For the standardized case \theta = 1/2 and \sigma = 1 (so the stationary variance equals 1), \limsup_{t \to \infty} \frac{\sup_{0 \leq s \leq t} |Y(s)|}{\sqrt{2 \log t}} = 1 \quad \text{almost surely}. The single logarithm reflects the bounded variance of the stationary process, in contrast with the non-stationary growth of Brownian motion.

In boundary cases involving infinite variance, such as symmetric stable Lévy processes with index \alpha \in (1,2), the heavy-tailed jumps rule out any square-root normalization. A Chover-type LIL applies instead, in which the iterated logarithm enters through an exponent rather than a multiplicative factor: \limsup_{t \to \infty} \left( \frac{|X(t)|}{t^{1/\alpha}} \right)^{1/\log \log t} = e^{1/\alpha} \quad \text{almost surely}, reflecting the stable domain of attraction and the lack of finite second moments.
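A minimal simulation sketch of the Ornstein-Uhlenbeck case (assuming NumPy; the grid spacing, horizon, and seed are illustrative) uses the exact one-step transition of the OU process and tracks the running supremum against \sqrt{2 \log t}.

```python
import numpy as np

rng = np.random.default_rng(3)

theta, sigma = 0.5, 1.0             # stationary variance sigma^2 / (2 theta) = 1
dt, T = 0.1, 10_000.0
n = int(T / dt)

# Exact discretization of the stationary OU process on a grid of spacing dt.
a = np.exp(-theta * dt)
noise_sd = np.sqrt(sigma**2 / (2.0 * theta) * (1.0 - a**2))
y = np.empty(n)
y[0] = rng.normal(0.0, np.sqrt(sigma**2 / (2.0 * theta)))   # stationary start
for i in range(1, n):
    y[i] = a * y[i - 1] + noise_sd * rng.normal()

t = np.arange(1, n + 1) * dt
running_sup = np.maximum.accumulate(np.abs(y))
mask = t > np.e                      # the boundary sqrt(2 log t) needs log t > 1
ratio = running_sup[mask] / np.sqrt(2.0 * np.log(t[mask]))
print("sup_{s<=t} |Y(s)| / sqrt(2 log t) at the end of the path:", ratio[-1])
# For the stationary OU process this ratio should settle near 1 as t grows,
# in contrast with the sqrt(2 t log log t) scaling needed for Brownian motion.
```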

Applications

In Statistical Estimation

The law of the iterated logarithm (LIL) provides sharp almost sure bounds on the fluctuations of the sample mean \bar{X}_n = n^{-1} \sum_{i=1}^n X_i for independent and identically distributed (i.i.d.) random variables X_1, X_2, \dots with mean \mu and finite positive variance \sigma^2. Specifically, under these conditions, \limsup_{n \to \infty} \frac{|\bar{X}_n - \mu|}{\sqrt{\frac{\log \log n}{n}}} = \sigma \sqrt{2} \quad \text{almost surely}. This result refines the law of large numbers by quantifying the precise rate at which \bar{X}_n approaches \mu, oscillating within boundaries that shrink like \sqrt{(\log \log n)/n}. Such almost sure bounds are valuable for controlling estimation errors in settings where sample sizes vary, enabling the construction of confidence intervals that adapt to the sample size without relying solely on asymptotic normality from the central limit theorem. For instance, they ensure that for any \epsilon > 0, deviations exceeding (1+\epsilon)\,\sigma \sqrt{2 \log \log n / n} occur only finitely often almost surely, supporting robust error assessment in parametric inference.

In sequential analysis, the LIL extends to stopped sums and aids in bounding overshoots for procedures like the sequential probability ratio test (SPRT), where observations continue until a decision boundary is crossed. Anscombe's theorem establishes that, for stopping times N bounded in a suitable sense (e.g., E[N] < \infty and \text{Var}(N) = O((E[N])^2)), the normalized stopped sum \sqrt{N} (\bar{X}_N - \mu) behaves asymptotically like its fixed-sample counterpart, preserving the LIL's fluctuation limits. This allows precise control of overshoot—the excess beyond the boundary at stopping—which is critical for maintaining error rates in sequential hypothesis testing. For example, in the SPRT for testing means under i.i.d. assumptions, the LIL implies that the overshoot is asymptotically negligible relative to \sqrt{\log \log N}, ensuring efficient sample size determination and type I/II error guarantees without fixed-sample assumptions.

The LIL also applies to non-parametric estimation, particularly kernel density estimators, where it yields almost sure convergence rates. For a kernel density estimator \hat{f}_n(x) = (n h)^{-1} \sum_{i=1}^n K((x - X_i)/h) with bandwidth h = h_n \to 0 and n h \to \infty, a pointwise LIL of the form \limsup_{n \to \infty} \frac{|\hat{f}_n(x) - f(x)|}{\sqrt{\frac{2 f(x) \int K^2(u) \, du \, \log \log n}{n h}}} = 1 \quad \text{almost surely} holds under standard smoothness conditions on the density f and the kernel K, together with bandwidth conditions that make the bias negligible. Uniform versions over compact intervals or function classes further bound the supremum norm \sup_x |\hat{f}_n(x) - f(x)|, providing rates sharper than mean squared error analyses and enabling adaptive bandwidth selection for strong uniform consistency. These bounds are useful for goodness-of-fit testing and mode estimation in density smoothing, where they quantify the trade-off between bias and the iterated-logarithmic variance term.

A key example of the LIL's role in non-parametric settings is its application to the empirical distribution function (EDF) F_n(x) = n^{-1} \sum_{i=1}^n \mathbf{1}_{\{X_i \leq x\}}, offering almost sure rates that complement finite-sample uniform inequalities like the Dvoretzky-Kiefer-Wolfowitz (DKW) bound. For continuous F, the Chung-Smirnov LIL states \limsup_{n \to \infty} \sqrt{\frac{2n}{\log \log n}} \, \sup_x |F_n(x) - F(x)| = 1 \quad \text{almost surely}.
This complements the probabilistic DKW inequality P(\sup_x |F_n(x) - F(x)| > \epsilon) \leq C e^{-2 n \epsilon^2} by providing pathwise fluctuation limits, which are crucial for the strong uniform consistency of the EDF and for deriving strong laws for empirical processes. Such rates support applications in censored data and high-dimensional settings, where they ensure uniform consistency with explicit logarithmic corrections.
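A minimal simulation sketch of the EDF case (assuming NumPy; the sample size, checkpoints, and seed are illustrative) computes the Kolmogorov statistic D_n for uniform data and compares it with the iterated-logarithm scale \sqrt{\log \log n / (2n)}.

```python
import numpy as np

rng = np.random.default_rng(4)

n_max = 200_000
x = rng.uniform(size=n_max)          # continuous F: Uniform(0,1), so F(t) = t

# Kolmogorov statistic D_n = sup_t |F_n(t) - t| at a few checkpoints along the path.
checkpoints = np.unique(np.logspace(1.5, np.log10(n_max), 12).astype(int))
for n in checkpoints:
    xs = np.sort(x[:n])
    grid = np.arange(1, n + 1) / n
    d_n = max(np.max(grid - xs), np.max(xs - (grid - 1.0 / n)))
    envelope = np.sqrt(np.log(np.log(n)) / (2.0 * n))    # LIL scale for D_n
    print(f"n={n:>7d}   D_n={d_n:.5f}   sqrt(log log n / (2n))={envelope:.5f}")
# The Chung-Smirnov LIL says limsup_n D_n / sqrt(log log n / (2n)) = 1 almost surely,
# so D_n fluctuates at this scale, repeatedly approaching the envelope from below.
```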

In Risk Analysis and Finance

In insurance risk models based on random walks with negative drift, the law of the iterated logarithm (LIL) provides almost sure bounds on the supremum of the surplus process, quantifying the extent of upward fluctuations and thus informing the assessment of long-term ruin risks. In particular, under conditions involving investment volatility, the LIL aids in evaluating maximum surplus excursions, enabling assessment of infinite-time ruin probabilities in dual risk models. Extensions of the LIL to generalized autoregressive conditional heteroskedasticity (GARCH) processes deliver almost sure upper and lower bounds on realized volatility paths, which are useful for managing fluctuations in financial time series. For nonstationary GARCH(1,1) models at the unit-root boundary, the Hartman-Wintner LIL establishes that the lim sup of the volatility h_t normalized by \sqrt{2t \log \log t} equals e\sigma almost surely, while the lim inf equals \sigma, where \sigma is the long-run variance, allowing risk managers to quantify extreme volatility deviations beyond central limit approximations. In high-frequency settings, the LIL supplies almost sure path bounds for intraday returns modeled as semimartingales, guiding the setting of stop-loss thresholds to mitigate adverse price excursions. Martingale generalizations of the LIL further apply to continuous-time financial models by delimiting arbitrage opportunities.
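A toy illustration of the surplus-fluctuation idea (a minimal sketch assuming NumPy; the drift, volatility, path length, and seed are hypothetical, not a calibrated risk model) tracks the centered surplus of a negative-drift random walk against the LIL envelope.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy surplus model: i.i.d. net outcomes with negative mean, so ruin-type questions
# hinge on how far the centered fluctuation part of the surplus can climb.
n = 500_000
mu, sd = -0.05, 1.0                          # hypothetical drift and per-period volatility
steps = rng.normal(mu, sd, size=n)
centered = np.cumsum(steps - mu)             # mean-zero fluctuation part of the surplus

t = np.arange(3, n + 1)
envelope = sd * np.sqrt(2.0 * t * np.log(np.log(t)))
ratio = centered[2:] / envelope
print("extremes of the centered surplus over the LIL envelope:", ratio.min(), ratio.max())
# The LIL eventually bounds the upward excursions of the centered surplus by
# (1 + eps) times the envelope, the kind of almost sure control used when assessing
# how far the surplus can wander above its expected (downward-drifting) path.
```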

References

  2. [2] A. Kolmogoroff, "Über das Gesetz des iterierten Logarithmus", Mathematische Annalen 101 (1929), 126–135.
  3. [3] "Le caractère universel de la courbe du mouvement brownien et la loi du logarithme itéré", Rendiconti del Circolo Matematico di Palermo 4 (1955), 337–366.
  4. [4] "The Law of the Iterated Logarithm", SpringerLink.
  6. [6] A. Khintchine, "Über einen Satz der Wahrscheinlichkeitsrechnung", Fundamenta Mathematicae 6.1 (1924), 9–20. http://eudml.org/doc/214283
  7. [7] "On the Law of the Iterated Logarithm", JSTOR.
  8. [8] P. Lévy, Processus stochastiques et mouvement brownien, 2nd ed., Gauthier-Villars.
  9. [9] "Law of the Iterated Logarithm", lecture notes (PDF).
  10. [10] "Laws of the Iterated Logarithm", DSpace@MIT (PDF).
  11. [11] "Probability Theory — Part 2: Independent Random Variables", IISc Mathematics lecture notes (PDF).
  12. [12] "A martingale analogue of Kolmogorov's law of the iterated logarithm".
  13. [13] "A law of the iterated logarithm for processes with independent increments".
  14. [14] "A Law of the Iterated Logarithm for Fractional Brownian Motions".
  15. [15] "The Limit Law of the Iterated Logarithm" (2013).
  16. [16] "Laws of the iterated logarithm of Chover-type for operator stable Lévy processes".
  17. [17] "A nonasymptotic law of iterated logarithm for general M-estimators" (2019, PDF).
  18. [18] "Strong laws for nonstationary GARCH models", Ele-Math (PDF).
  19. [19] "High Frequency Asymptotics for the Limit Order Book", NYU Stern (2014, PDF).
  20. [20] "Tail asymptotics for exponential functionals of Lévy processes".