
Iterated logarithm

The iterated logarithm, commonly denoted \log^*_b x (or simply \log^* x for base 2 or e), is a function in computer science and mathematics that quantifies the minimum number of iterations required to apply the base-b logarithm to a positive real number x until the result is less than or equal to 1. Formally, it is defined recursively: \log^*_b x = 0 if x \leq 1, and \log^*_b x = 1 + \log^*_b (\log_b x) otherwise, making it a discrete inverse of tetration, i.e., a super-logarithm in the hierarchy of fast-growing functions. This function exhibits extraordinarily slow growth, increasing by 1 only after the argument surpasses an exponential tower of 2's whose height equals the previous value; for example, \log^*_2 2 = 1, \log^*_2 4 = 2, \log^*_2 16 = 3, \log^*_2 65536 = 4, and \log^*_2 2^{65536} = 5. Such sluggish growth renders \log^* n effectively constant (at most 5) for all practical computational inputs, indeed for all n up to 2^{65536} \approx 10^{19728}, distinguishing it from standard logarithms or polylogarithms in asymptotic analysis. In computer science, the iterated logarithm prominently features in the amortized analysis of the union-find (disjoint-set) data structure, where optimizations like union by rank and path compression achieve nearly linear performance bounded by the inverse Ackermann function \alpha(n), which grows even more slowly than \log^* n, enabling efficient handling of connectivity queries in graphs and other applications. In mathematics, it arises in analytic estimates, including as a factor in bounds on prime gaps, though it is distinct from the probabilistic law of the iterated logarithm, which describes fluctuation limits in random processes.

Definition and notation

Formal definition

The iterated logarithm of a positive real number n, commonly denoted \log^* n and read as "log star of n," counts the number of times the logarithm function must be iteratively applied to n before the result is less than or equal to 1. This process begins with n and repeatedly takes the logarithm until the value drops to or below 1, with the total count of applications yielding \log^* n. The standard recursive formulation in computer science uses the binary logarithm \lg n = \log_2 n and is given by: \lg^* n = \begin{cases} 0 & \text{if } n \leq 1, \\ 1 + \lg^* (\lg n) & \text{if } n > 1. \end{cases} For n \leq 0, the function is undefined, as the logarithm is not defined for non-positive arguments in the real numbers. When 0 < n \leq 1, \lg^* n = 0 by the base case, since no applications are needed. For 1 < n \leq 2, we have 0 < \lg n \leq 1, so \lg^* n = 1 + \lg^*(\lg n) = 1 + 0 = 1. The definition generalizes to an arbitrary base b > 1, denoted \log_b^* n, by replacing the binary logarithm with \log_b n in the recursion: \log_b^* n = \begin{cases} 0 & \text{if } n \leq 1, \\ 1 + \log_b^* (\log_b n) & \text{if } n > 1. \end{cases} The iteration proceeds in the same way, counting applications of \log_b until the result is at most 1, with identical handling of base cases and domain restrictions.
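
The recursion translates directly into code. The following is a minimal Python sketch of the arbitrary-base definition (the function name log_star and the use of math.log are illustrative choices, not a standard library facility):

import math

def log_star(x, b=2):
    # base case: no applications are needed once the value is at most 1
    if x <= 1:
        return 0
    # recursive case: one application of log_b, then recurse on the result
    return 1 + log_star(math.log(x, b), b)

For example, log_star(65536) evaluates to 4, while log_star(65536, math.e) evaluates to 3, reflecting the mild base-dependence discussed in the sections below.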

Notations and conventions

The iterated logarithm is commonly denoted \log^* n (read as "log star of n"), where the asterisk indicates iteration of the logarithm function until the result is at most 1. In computer science contexts, the variant \lg^* n is frequently used to specify the base-2 logarithm, reflecting the binary nature of computational analyses. In mathematics, \log^* n often defaults to the natural logarithm (base e). The choice of base follows contextual conventions: base 2 is the default in the algorithms literature due to its alignment with binary representations and algorithmic efficiencies, while base e predominates in theoretical analysis for its compatibility with continuous functions and limits. When the base must be explicitly stated, the notation \log_b^* n is used, where b > 1 is the base. These notations emerged in the 1970s within the analysis of algorithms, notably through Robert Tarjan's analysis of union-find algorithms, where the iterated logarithm provided tight bounds on operation complexities. Earlier mathematical texts occasionally used variations such as repeated explicit logs (e.g., \log \log n), but the compact \log^* n form became standard in algorithmic contexts after Tarjan. The function is typically defined for positive real numbers n > 0, with \log^* n = 0 when n \leq 1, as no iterations are needed to reach the threshold. For non-integer values, the definition extends naturally via the real logarithm, provided n > 0. Negative values and complex extensions are not standard, as the logarithm is undefined for non-positive reals in this context, restricting applications to positive inputs.

Properties

Computational values

The iterated logarithm \log^*_2 n, assuming the binary logarithm base, can be computed by repeatedly applying the logarithm until the result is at most 1, counting the number of applications required. For example, consider n = 16: \log_2 16 = 4 > 1, \log_2 4 = 2 > 1, and \log_2 2 = 1 \leq 1, requiring three applications, so \log^*_2 16 = 3. The following table gives \log^*_2 n for select values, particularly powers of 2 expressed in Knuth's up-arrow notation, highlighting the extremely slow growth:
n | Expression in up-arrow notation | \log^*_2 n
1 | – | 0
2 | – | 1
4 | 2 \uparrow\uparrow 2 | 2
16 | 2 \uparrow\uparrow 3 | 3
65,536 | 2 \uparrow\uparrow 4 | 4
2^{65,536} | 2 \uparrow\uparrow 5 | 5
A practical way to compute \log^*_2 n uses an iterative loop, as shown in the following Python implementation:
import math

def log_star_2(n):
    # count how many applications of log2 bring the value to at most 1
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count
This procedure is computationally trivial, with the loop iterating at most five times for any n encountered in real-world applications, owing to the function's minimal growth. For all practical purposes, including values of n on the scale of the observable universe (approximately 10^{80} particles, or roughly 2^{266}), \log^*_2 n \leq 5.
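
As a quick check of that claim, the log_star_2 function above can be applied directly to an integer of universe scale (Python's math.log2 accepts arbitrarily large integers on the first application):

print(log_star_2(10**80))  # prints 5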

Asymptotic growth

The iterated logarithm exhibits an extraordinarily slow rate of asymptotic growth, rendering it effectively constant for all computationally relevant values of n. It increases by 1 only when n surpasses an exponential tower whose height corresponds to the prior value, reaching 5 for n > 65536 = 2 \uparrow\uparrow 4 and remaining 5 until n = 2^{65536} = 2 \uparrow\uparrow 5. This behavior implies that \log^* n = \Theta(\log^* \log n), since \log^*(\log n) = \log^* n - 1 exactly for n > 1. A concrete illustration of this sluggishness is the bound \log^* n \leq 5 for all n \leq 2^{65536}, beyond which it reaches 6 but stays there for numbers vastly exceeding any physical scale. In general, \log^* n \leq \frac{\log \log n}{\log \log \log n} + O(1), a very loose upper estimate that nonetheless underscores its sub-polylogarithmic pace while remaining unbounded as n \to \infty. Compared to other logarithmic functions, the iterated logarithm grows asymptotically slower than any fixed number of iterations of the plain logarithm; specifically, for any constant k \geq 1, \log^* n = o(\log^{(k)} n), where \log^{(k)} n denotes the k-fold composition of the logarithm. Thus, it advances far more gradually than \log \log n, which in turn lags behind \log n. The function is non-decreasing, though its stepwise nature means it plateaus over exponentially expanding intervals before incrementing. For real arguments greater than 0, the recursive definition already yields a monotone, integer-valued step function; continuous interpolations between the integer steps are sometimes used when a smooth variant is needed for analysis.
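
The plateau boundaries can be verified numerically with the log_star_2 function from the previous section; the tower helper below is an ad hoc illustration, using exact integers so that even 2 \uparrow\uparrow 5 = 2^{65536} is representable before the first logarithm is taken:

def tower(k):
    # 2 ↑↑ k: a tower of k twos, built bottom-up with exact integers
    t = 1
    for _ in range(k):
        t = 2 ** t
    return t

for k in range(1, 6):
    print(k, log_star_2(tower(k)))  # prints (k, k): log*_2(2 ↑↑ k) = k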

Applications in algorithms

Union-find data structure

The union-find data structure, also known as the disjoint-set union structure, maintains a collection of disjoint sets supporting two primary operations: find, which determines the representative (root) of the set containing a given element, and union, which merges two sets into one. In a naive implementation, each operation takes O(n) time in the worst case, where n is the number of elements, due to potential linear traversals along degenerate chains or trees formed during repeated unions. To improve efficiency, the structure employs two heuristics: union by rank, which attaches the shorter tree to the root of the taller one during merges (using approximate tree heights as ranks), and path compression, which flattens the tree by setting each node along the find path directly to the root. These optimizations ensure that the amortized time for a sequence of m operations on n elements is O(α(m, n)), where α is a functional inverse of the Ackermann function. This bound was established in Robert Tarjan's seminal 1975 analysis, which provided the first tight worst-case amortized characterization for the structure, demonstrating that α(m, n) grows slower than any iterated logarithm and serves as an effectively constant factor for all practical purposes. The inverse Ackermann function grows even more slowly than the iterated logarithm. Tarjan's proof uses an amortized argument via a potential function that accounts for the "level" of each node based on ranks and path lengths, charging the cost of operations to changes in this potential. Specifically, union by rank limits rank growth to O(log n) levels, while path compression reduces path lengths; each find operation traverses at most α(n) "effective" levels before compression, as higher levels are rarely created due to the slow growth of the Ackermann hierarchy. The total potential change over m operations bounds the amortized cost per operation by O(α(n)), with the inverse Ackermann emerging as the maximum depth traversable in the rank structure. Modern simplifications, such as those using subtree-size potentials, confirm this by showing that path halvings or compressions occur at most O(α(n)) times per node across all operations. In practice, α(n) ≤ 4 for n up to 2^{2^{65536}}, a number vastly exceeding the count of atoms in the observable universe, making union-find operations effectively constant time even for astronomical scales.
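
The two heuristics take only a few lines to implement. The following is a compact Python sketch of an array-based disjoint-set structure with union by rank and recursive path compression (the names and layout are illustrative, not Tarjan's original presentation):

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))  # each element begins as its own root
        self.rank = [0] * n           # rank: an upper bound on tree height

    def find(self, x):
        # path compression: point every node on the search path at the root
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # union by rank: attach the shallower tree under the deeper root
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

With both heuristics enabled, a sequence of m operations on n elements runs in O(m α(m, n)) total time; union by rank alone, without path compression, only guarantees O(log n) per find.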

Other data structures

The iterated logarithm plays a key role in analyzing decremental dynamic connectivity algorithms for sparse graphs, such as planar graphs, where updates involve only edge deletions. In such settings, an algorithm processes n updates in O(n log* n) total time while supporting connectivity queries in O(log* n) time, achieving nearly linear performance by recursively applying graph reductions and r-divisions with r = log² n. This bound improves upon prior O(n log n) results and leverages the slow growth of log* n to handle sparse structures efficiently without full recomputation. In sorting and searching data structures, the iterated logarithm emerges in advanced variants that build upon basic O(log log n) operations. The van Emde Boas tree supports predecessor searches over a universe of size u in O(log log u) time using a recursive structure of square-root subuniverses, as sketched below. Parallel algorithms in the PRAM model use the iterated logarithm to bound the number of rounds in work-depth tradeoffs for problems like prefix sums, integer sorting, and related primitives. On a Sum-CRCW PRAM, prefix sums can be computed in O(log* n) time with optimal work O(n), by iteratively reducing the problem size through pointer doubling or coloring techniques that shrink the active set each phase until convergence in log* n steps. Similarly, integer sorting on this model achieves O(log* n) parallel time by partitioning keys into exponentially decreasing ranges, with log* n iterations to resolve dependencies in sparse input distributions. These bounds highlight log* n's utility in parallel computation, where it minimizes depth without excessive processors. In geometric data structures, the iterated logarithm appears in update and query bounds for incremental constructions, such as orthogonal point enclosure in \mathbb{R}^3, which supports insertions and reports intersections with axis-aligned rectangles in O(log n + k) query time using O(n log* n) space via shallow cuttings. For incremental Delaunay triangulation in higher dimensions or dynamic variants, log* n factors arise in amortized analyses of conflict graphs or history maintenance, yielding near-linear total time O(n log* n) for n point insertions while preserving the empty-circle property. These structures exploit log* n's near-constancy to enable practical scalability in sparse geometric inputs, such as scattered point sets.
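
The O(log log u) depth of the van Emde Boas recursion follows from the bit-width of the universe halving at each level; the following back-of-the-envelope Python sketch counts those levels (an illustration of the recursion depth only, not a vEB implementation):

import math

def veb_levels(u):
    # u -> sqrt(u) halves the bit-width, so the loop runs about
    # log2(log2(u)) times before the universe becomes trivial
    levels = 0
    while u > 2:
        u = math.isqrt(u)
        levels += 1
    return levels

print(veb_levels(2**64))  # prints 6, matching log2(log2(2**64)) = 6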

Inverse Ackermann function

The inverse Ackermann function, commonly denoted \alpha(n), is defined as the smallest nonnegative integer k such that A(k) \geq n, where A(k) denotes the k-th value on the diagonal of the Ackermann function, often taken as A(k, k). This definition arises in algorithm analysis to bound the growth of certain complexities, where the full Ackermann function is too unwieldy for direct computation. In the context of the Ackermann hierarchy, the iterated logarithm \log^* n corresponds to the inverse at the third level, roughly inverting the exponentiation stage A(3, n) \approx 2^{n}. The inverse Ackermann function inverts higher levels of the hierarchy, such as tetration and beyond, resulting in even slower growth; for instance, a computable variant defines a sequence \alpha_k(n) where \alpha_2(n) = \lceil \log_2 n \rceil, \alpha_3(n) = \log^* n, and \alpha(n) = \min \{ k \mid \alpha_k(n) \leq 3 \}. Consequently, \alpha(n) < \log^* n + 3 for all n \geq 1, and \alpha(n) \leq 4 for all practical values of n encountered in computing (e.g., up to n = 2^{2^{65536}}). This slower growth makes \alpha(n) valuable for providing tighter asymptotic bounds in algorithm analysis compared to \log^* n. For example, in the union-find data structure with union by rank and path compression, the amortized time per operation is \Theta(\alpha(n)), which is stricter than the looser O(\log^* n) bound.
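
The \alpha_k hierarchy described above can be made concrete in a few lines of Python. The sketch below is one self-consistent toy construction, in which each level counts iterations of the level below it, starting from ceiling halving, so that \alpha_2(n) = \lceil \log_2 n \rceil and \alpha_3(n) is a ceiling variant of \log^* n; the exact thresholds, and hence the small additive constants in the result, differ between authors' conventions and should not be read as canonical:

def alpha_k(k, n):
    # level 1: ceiling halving; level k+1: how many times alpha_k must be
    # applied to n before the value drops to at most 1
    if k == 1:
        return (n + 1) // 2
    count = 0
    while n > 1:
        n = alpha_k(k - 1, n)
        count += 1
    return count

def inverse_ackermann(n):
    # smallest level whose inverse is already tiny; the cutoff 3 follows
    # the variant quoted above, though conventions vary
    k = 1
    while alpha_k(k, n) > 3:
        k += 1
    return k

For any input that fits in memory, inverse_ackermann returns a single-digit value, illustrating why \alpha(n) is treated as a constant in running-time analyses.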

Generalizations

The iterated logarithm function, typically defined for a fixed base, can be generalized to any base b > 1, where the number of iterations required to reduce the argument to at most 1 varies by only a small additive constant between bases. For instance, the values of \log^*_2 n and \log^*_e n differ by at most 3 for all n \geq 1, since the two logarithms differ only by the constant factor \log_2 e, which shifts the iteration count by a bounded amount. A further extension applies the iterated logarithm to vectors in higher-dimensional spaces, often in the context of multivariate processes or higher-dimensional analytic estimates. In such settings, the function is computed through a norm, as in \log^*(\| \mathbf{v} \|), to measure iterated-logarithmic scales in multi-dimensional analysis, appearing in generalizations of central limit theorems or discrepancy estimates for vector-valued sequences. For example, in multivariate laws of the iterated logarithm for empirical processes, iterated logarithms scale the fluctuations of Hilbert-space-valued sums. Fractional or continuous versions of the iterated logarithm arise through functional iteration theory, where non-integer iterates are constructed by solving equations such as the Schröder functional equation for the exponential function, the inverse of the logarithm. This allows defining a real-order iterate \log^{*\alpha} n for real \alpha > 0, interpolating between integer iterations, and is useful for analyzing continuous dynamical systems or solving equations involving partial iterations. Such constructions rely on embedding the logarithm into a flow of functions via its Abel function. These generalizations find applications in analytic number theory, where multiple iterated logarithms refine bounds on prime gaps; for instance, the largest gap between primes up to x exceeds c \frac{\log x \, \log \log x \, \log \log \log \log x}{(\log \log \log x)^2} for some constant c > 0, capturing finer asymptotic scales. In physics, particularly in renormalization theory, iterated logarithms model successive scaling transformations, leading to logarithmic corrections in correlation lengths, while stochastic renormalization models exhibit probability distributions with iterated-logarithmic tails in aggregation processes.
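
The bounded gap between bases is easy to probe numerically with the arbitrary-base log_star function from the definition section (a small illustrative check, not a proof of the constant 3):

import math

for n in (16, 65536, 10**80, 2**65536):
    print(log_star(n, 2), log_star(n, math.e))
# each pair differs by at most 1 on these samples, consistent with the
# bounded-difference claim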
