
Regular number

In number theory, a regular number is a positive integer whose prime factorization consists solely of the primes 2, 3, and 5, so that it is expressible in the form $2^a \times 3^b \times 5^c$ where $a$, $b$, and $c$ are non-negative integers. These numbers are equivalently defined as 5-smooth numbers, meaning no prime factor exceeds 5. Regular numbers hold historical significance in ancient Babylonian mathematics, where the sexagesimal (base-60) numeral system was employed; since 60 factors as $2^2 \times 3 \times 5$, the reciprocals of regular numbers terminate in finite sexagesimal expansions, allowing division to be replaced by multiplication with a precomputed reciprocal. Babylonian scribes compiled extensive reciprocal tables limited to regular numbers up to 3600 or higher, using them for practical computations in astronomy, land measurement, and commerce, while non-regular numbers (those with other prime factors such as 7 or 11) required approximations or irregular methods. In modern contexts, regular numbers are studied within the broader class of smooth numbers, which appear in analytic number theory, particularly in estimates of the distribution of integers with small prime factors, such as those involving the Dickman–de Bruijn function that quantifies their density. The infinite sequence of regular numbers in ascending order (beginning 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, ...) is generated multiplicatively, and smoothness of this kind is also exploited in factorization algorithms such as the quadratic sieve. In computer science, regular numbers are commonly termed Hamming numbers after Richard Hamming, who posed the challenge of efficiently generating the sequence in ascending order by computer in the mid-20th century. This problem has become a classic benchmark for functional programming languages, demonstrating techniques such as lazy evaluation to avoid duplicates and ensure ordered output without exhaustive search.

Definition and Properties

Definition

Regular numbers are positive integers whose only prime factors are 2, 3, and 5, expressible in the form $2^a \times 3^b \times 5^c$, where $a$, $b$, and $c$ are non-negative integers. These numbers form a subset of the natural numbers and are characterized by their limited prime factorization, making them particularly amenable to certain computational and historical contexts. Equivalently, regular numbers are the positive divisors of $60^k$ for some positive integer $k$, since $60 = 2^2 \times 3 \times 5$ and increasing $k$ allows for arbitrarily high exponents in the prime factorization. Examples of regular numbers include 1 ($2^0 \times 3^0 \times 5^0$), 2, 3, 4 ($2^2$), 5, 6 ($2 \times 3$), 8 ($2^3$), 9 ($3^2$), 10 ($2 \times 5$), 12 ($2^2 \times 3$), 15 ($3 \times 5$), 16 ($2^4$), 18 ($2 \times 3^2$), 20 ($2^2 \times 5$), 24 ($2^3 \times 3$), 25 ($5^2$), 27 ($3^3$), and 30 ($2 \times 3 \times 5$). In number theory, regular numbers are synonymous with 5-smooth numbers, denoting integers with no prime factors exceeding 5. In computer science, they are commonly referred to as Hamming numbers, a term originating from a problem on efficiently generating such numbers that Edsger Dijkstra attributed to Richard Hamming in the 1976 book A Discipline of Programming. Regular numbers hold historical significance in the Babylonian sexagesimal system, where their reciprocals have finite representations.
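The definition translates directly into a membership test: divide out the factors 2, 3, and 5 and check whether 1 remains. The following is a minimal Python sketch of this check (the function name is illustrative, not from any particular library):

def is_regular(n):
    """Return True if n is a regular (5-smooth) number, i.e. n = 2^a * 3^b * 5^c."""
    if n <= 0:
        return False
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

# The regular numbers up to 30 match the examples listed above:
# [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30]
print([n for n in range(1, 31) if is_regular(n)])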

Basic Properties

Regular numbers, being positive integers of the form $2^a 3^b 5^c$ with non-negative integer exponents $a, b, c \geq 0$, are closed under multiplication: the product of any two such numbers is again a regular number, since the exponents add while the prime factors remain restricted to 2, 3, and 5. Together with the identity 1, they therefore form a multiplicative monoid.

The number of regular numbers up to a bound $x > 1$ is given by the counting function $\Psi(x, 5)$, which enumerates the non-negative solutions $(a, b, c)$ to $2^a 3^b 5^c \leq x$. Because $\log 2$, $\log 3$, and $\log 5$ are linearly independent over the rationals, each regular number has a unique representation, and $\Psi(x, 5)$ equals the number of lattice points in the tetrahedron defined by $a \log 2 + b \log 3 + c \log 5 \leq \log x$ with $a, b, c \geq 0$. The asymptotic approximation is $\Psi(x, 5) \sim \frac{(\log x)^3}{6 \log 2 \cdot \log 3 \cdot \log 5}$, obtained as the volume of this tetrahedron; the factor 6 is the factorial $3!$ of the number of primes involved. Lower-order terms are $O((\log x)^2)$, ensuring the leading term dominates for large $x$.

The regular numbers form a strictly increasing sequence $h_1 = 1 < h_2 < h_3 < \cdots$, ordered by magnitude. This sequence can be generated recursively by starting with 1 and repeatedly adjoining the smallest value not yet present among the multiples of earlier terms by 2, 3, or 5; unique factorization guarantees that duplicates can be recognized and discarded. Equivalently, each subsequent term is the minimum of the sets $\{2 h_i : i \leq k\}$, $\{3 h_j : j \leq k\}$, and $\{5 h_m : m \leq k\}$, excluding previously appearing values, where $k$ is the current sequence length. This process yields the sequence 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, ... .

A subset of regular numbers is related to highly composite numbers restricted to the primes 2, 3, and 5. Those with non-increasing exponents $a \geq b \geq c \geq 0$ in $2^a 3^b 5^c$ satisfy the exponent condition characteristic of highly composite numbers, namely that exponents do not increase with increasing primes. Examples include 1 ($a = b = c = 0$), 2 ($(a,b,c) = (1,0,0)$), 6 ($(1,1,0)$), 12 ($(2,1,0)$), and 60 ($(2,1,1)$). These form a proper subsequence of all regular numbers.

The $n$th regular number $h_n$ is the smallest positive integer such that $\Psi(h_n, 5) = n$. Determining $h_n$ thus requires inverting the counting function, typically by solving $\frac{(\log h_n)^3}{6 \log 2 \cdot \log 3 \cdot \log 5} \approx n$ asymptotically, yielding $\log h_n \sim (6 n \log 2 \cdot \log 3 \cdot \log 5)^{1/3}$, though exact computation involves enumerating the lattice points up to the boundary. This inversion highlights the connection between the geometric volume and the ordering of the sequence. These foundational properties underpin applications of regular numbers in number theory, such as smoothness-based factorization methods. A numeric check of the counting function against its leading term appears below.
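As an illustration, a short Python sketch (function names are illustrative) can enumerate the lattice points counted by $\Psi(x, 5)$ and compare the exact count with the leading asymptotic term:

import math

def psi_5(x):
    """Count regular numbers <= x by enumerating exponents (a, b, c) with 2^a 3^b 5^c <= x."""
    count = 0
    a = 0
    while 2 ** a <= x:
        b = 0
        while 2 ** a * 3 ** b <= x:
            c = 0
            while 2 ** a * 3 ** b * 5 ** c <= x:
                count += 1
                c += 1
            b += 1
        a += 1
    return count

def psi_5_leading_term(x):
    """Leading term (log x)^3 / (6 log 2 log 3 log 5) of the lattice-point count."""
    return math.log(x) ** 3 / (6 * math.log(2) * math.log(3) * math.log(5))

for x in (10 ** 3, 10 ** 6, 10 ** 9):
    print(x, psi_5(x), round(psi_5_leading_term(x), 1))

The exact count and the leading term converge only slowly, since the neglected $O((\log x)^2)$ boundary terms remain noticeable at practical magnitudes.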

Historical Development

Babylonian Mathematics

The Babylonians utilized a sexagesimal (base-60) positional numeral system in their arithmetic from the Old Babylonian period, approximately 2000–1600 BCE, which inherently supported fractional computations through the use of reciprocals. Regular numbers, natural numbers whose prime factors are exclusively 2, 3, and 5, played a central role as denominators for unit fractions, as their reciprocals possess finite expansions in sexagesimal notation, permitting exact representations without recurring or infinite digits. Scribes compiled extensive tables of these reciprocals to streamline division, a fundamental operation in their mathematical practices, transforming it into multiplication by the precomputed inverse. Clay tablets from this era, such as those cataloged in the Yale Babylonian Collection, demonstrate the systematic documentation of reciprocals of regular numbers up to 81, covering all one- and two-digit values in the system. A prominent example is the Plimpton 322 tablet, dated to around 1800 BCE and housed at Columbia University, which employs reciprocals of regular numbers in generating tables of Pythagorean triples for astronomical and geometric purposes. In this artifact, parameters like the reciprocals of 48 and 60 (both regular numbers) facilitate the derivation of side ratios in right triangles, underscoring their utility in precise mensuration tasks. Specific reciprocals illustrate the system's elegance: the inverse of 2 is expressed as 0;30 (equivalent to 30/60 or 1/2), that of 3 as 0;20 (20/60 or 1/3), and that of 5 as 0;12 (12/60 or 1/5), all terminating after a single sexagesimal place. These finite forms enabled scribes to perform exact arithmetic, while fractions whose sexagesimal expansions do not terminate, such as 1/7, were handled by approximating with nearby regular denominators when necessary. The adoption of regular numbers in the sexagesimal framework offered a significant advantage over other ancient numeral systems, such as the Egyptian unit-fraction system, by allowing divisions by non-divisors of 60 to be managed through finite approximations or exact reciprocals, thus enhancing accuracy in fields like land measurement and celestial tracking. This approach, rooted in the factorization of 60 as $2^2 \times 3 \times 5$, minimized computational errors and supported complex problem-solving evident in surviving cuneiform records.

Modern Recognition

The rediscovery of regular numbers in Western mathematics gained momentum during the Renaissance, as scholars engaged with ancient Greek texts that transmitted Babylonian sexagesimal methods. Translations such as George of Trebizond's 1451 Latin version of Ptolemy's Almagest exposed mathematicians to sexagesimal computations, where regular numbers, those factorable solely by 2, 3, and 5, enabled finite fractional representations essential for astronomical tables. John Dee, a leading 16th-century figure, highlighted the value of these numerical systems in his preface to the 1570 English translation of Euclid's Elements, advocating their application in astronomy and navigation amid ongoing debates over decimal versus sexagesimal divisions. In the 19th century, breakthroughs in cuneiform decipherment by Henry Rawlinson and Edward Hincks in the 1840s–1850s unlocked original Babylonian tablets, revealing extensive tables of reciprocals and multiplications centered on regular numbers. This direct evidence integrated the concept into European number theory, identifying them as 5-smooth numbers whose limited prime factors facilitated precise calculations. Their relevance also emerged in Diophantine approximation, where rationals with smooth denominators can approximate irrationals exceptionally closely, influencing subsequent theorems on the distribution and utility of such forms. The 20th century brought formalization through Otto Neugebauer's seminal analyses of Babylonian mathematics, in which he used the term "regular numbers" to describe their role in producing terminating sexagesimal expansions and enabling efficient reciprocal computations. Neugebauer's Quellen und Studien zur Geschichte der Mathematik (1935–1939) and The Exact Sciences in Antiquity (1957) established their foundational importance, drawing parallels to modern number-theoretic properties. Concurrently, in computational contexts, Richard Hamming posed the mid-20th-century challenge of efficiently generating the ordered sequence of these numbers without redundancy or overflow. Edsger Dijkstra's structured solution in A Discipline of Programming (1976) popularized it as a benchmark for algorithmic elegance, and the numbers have since been known as Hamming numbers in computer science; the problem persists in programming contests such as the International Olympiad in Informatics.

Mathematical Applications

Number Theory

In number theory, regular numbers, also known as 5-smooth numbers, represent an extremal case within the broader theory of smooth numbers, in which all prime factors are bounded by a fixed value; they consist solely of the primes 2, 3, and 5, making them the "smoothest" integers under the bound 5. This classification arises from the general study of $y$-smooth numbers, whose prime factors do not exceed $y$, and regular numbers exemplify the simplest multiplicative structure in this hierarchy, facilitating analysis in Diophantine problems and algorithmic applications.

The Dirichlet series associated with the indicator function of regular numbers connects to the Riemann zeta function through a restricted Euler product. Formally, the generating function is

$\zeta(s, 5) = \sum_{\substack{n = 1 \\ n \text{ regular}}}^{\infty} \frac{1}{n^s} = \prod_{p \in \{2, 3, 5\}} \frac{1}{1 - p^{-s}} = \zeta(s) \prod_{p > 5} (1 - p^{-s}),$

for $\Re(s) > 1$: the partial Euler product over the first three primes equals the sum over regular numbers, and multiplying $\zeta(s)$ by the factors $(1 - p^{-s})$ for $p > 5$ removes the contribution of all larger primes.

Regular numbers play a targeted role in integer factorization, particularly as candidate divisors in trial division for integers suspected to have only small prime factors. In such procedures, one tests divisibility by powers of 2, 3, and 5 up to the square root of the target, efficiently identifying factors when the number is itself 5-smooth or has a 5-smooth cofactor. This optimizes early stages of factorization algorithms, since the sparse density of regular numbers allows far fewer trial divisors than general trial division over all integers.

The distribution of general smooth numbers is quantified by the Dickman–de Bruijn function $\rho$: for a smoothness bound $y$ growing with $x$, the count of $y$-smooth numbers up to $x$ satisfies $\Psi(x, y) \sim x \cdot \rho(u)$ with $u = \log x / \log y$. For the fixed bound $y = 5$, by contrast, the count is governed by the lattice-point asymptotic $\Psi(x, 5) \sim (\log x)^3 / (6 \log 2 \cdot \log 3 \cdot \log 5)$ described above. Such estimates underpin the analysis of smoothness-based methods throughout analytic number theory. A numerical check of the restricted Euler product identity follows.
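As a concrete check, a brief Python sketch (illustrative, using a finite truncation of the infinite series) compares a partial sum of $\sum 1/n^s$ over regular numbers with the closed-form product $\prod_{p \in \{2,3,5\}} (1 - p^{-s})^{-1}$:

def is_regular(n):
    # 5-smooth test: divide out 2, 3, and 5 and check what remains
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

s = 2.0  # any s > 1 works; the neglected tail is tiny for s = 2

# Truncated Dirichlet series over regular numbers up to 10^6
partial_sum = sum(1.0 / n ** s for n in range(1, 10 ** 6) if is_regular(n))

# Restricted Euler product over the primes 2, 3, 5
product = 1.0
for p in (2, 3, 5):
    product *= 1.0 / (1.0 - p ** (-s))

print(partial_sum, product)  # the two values agree to many decimal places

For $s = 2$ the product evaluates to $(4/3)(9/8)(25/24) = 1.5625$, and the truncated sum matches it closely because the omitted terms are each below $10^{-12}$.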

Music Theory

In music theory, regular numbers (products of the primes 2, 3, and 5) form the basis of intervals derived from the harmonic series, where overtones produce simple frequency ratios that align with human perception of harmony. The octave corresponds to the ratio 2:1, the perfect fifth to 3:2, and the major third to 5:4, all of which are ratios of regular numbers and appear prominently in the lower partials of a vibrating string or air column. These ratios ensure minimal beating between simultaneously sounded tones, promoting a sense of purity and consonance in chords, as opposed to intervals involving higher primes that introduce greater complexity and dissonance.

Just intonation scales are constructed as products of powers of 2, 3, and 5, limiting the prime factors to these three to maintain simple ratios and avoid the intervals arising from higher primes like 7 or 11, which yield more intricate fractions such as 7:4 or 11:8. This 5-limit approach allows for scales like the just diatonic major scale with ratios 1:1, 9:8, 5:4, 4:3, 3:2, 5:3, 15:8, and 2:1, where each step preserves harmonic consonance without accumulating irregularities from extraneous factors. By restricting to regular numbers, musicians achieve intervals that closely match the natural harmonic series, enhancing the acoustic coherence of polyphonic music.

Historically, Pythagorean tuning relied solely on powers of 2 and 3 to generate a scale through stacked perfect fifths, producing pure octaves and fifths but imperfect thirds such as the ditone at 81:64. This system, while consonant in two-voice counterpoint, revealed limitations in triadic harmony, prompting extensions in meantone temperaments that incorporated the prime 5 to refine major and minor thirds to 5:4 and 6:5, respectively. Such adjustments, popularized in keyboard practice, improved chordal sweetness at the expense of slight fifth tempering.

The mathematical foundation for these applications lies in the simplicity of regular-number ratios, whose small numerators and denominators foster acoustic consonance through low beat frequencies. A key irregularity in Pythagorean tuning, the syntonic comma of 81:80 (the discrepancy between the Pythagorean major third at 81:64 and the just major third at 5:4), amounts to a detuning of about 21.5 cents that disrupts triads. Incorporating the prime 5 resolves this comma by aligning ratios more closely with the harmonic series, enabling more stable harmonic progressions in 5-limit systems.
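The cent values quoted above follow from the standard formula $1200 \log_2(r)$ for a frequency ratio $r$; the short Python sketch below (function name illustrative) evaluates a few 5-limit intervals and the syntonic comma:

import math

def cents(ratio):
    """Size of a frequency ratio in cents: 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

# 5-limit intervals and the syntonic comma between the two major thirds
print(round(cents(3 / 2), 1))    # perfect fifth, ~702.0 cents
print(round(cents(5 / 4), 1))    # just major third, ~386.3 cents
print(round(cents(81 / 64), 1))  # Pythagorean major third (ditone), ~407.8 cents
print(round(cents(81 / 80), 1))  # syntonic comma, ~21.5 cents

The difference between the two major thirds, $407.8 - 386.3 \approx 21.5$ cents, is exactly the syntonic comma, since $\frac{81/64}{5/4} = \frac{81}{80}$.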

Computational Aspects

Algorithms for Generation

One straightforward method for generating regular numbers involves enumerating all non-negative integer exponents $a$, $b$, and $c$ such that $2^a \times 3^b \times 5^c$ does not exceed a predefined upper bound, collecting these products, sorting them in ascending order, and removing any duplicates. Appropriate bounds for the exponents follow from logarithms: $a \leq \log_2 L$, $b \leq \log_3 L$, and $c \leq \log_5 L$, where $L$ is the limit, leading to approximately $O((\log L)^3)$ candidate numbers to generate and sort. To address the challenge of efficiently generating the ordered sequence without an explicit limit (known as Hamming's problem, posed by Richard Hamming as the task of computing entries such as the 1000th number in the sequence), a more sophisticated approach uses a priority queue (min-heap) to produce terms incrementally while avoiding duplicates. This method, popularized by Edsger W. Dijkstra in his manuscript EWD 792, leverages the recursive property that every regular number greater than 1 is obtained by multiplying a smaller regular number by 2, 3, or 5. It initializes the queue with 1 and iteratively extracts the minimum value, generates its multiples by 2, 3, and 5, and inserts those not previously encountered back into the queue; a set tracking seen values handles duplicate removal. The resulting sequence is strictly increasing by construction. The following illustrates a heap-based implementation in Python for generating the first $n$ regular numbers:
import heapq

def generate_regular_numbers(n):
    """Return the first n regular (Hamming) numbers in ascending order."""
    if n == 0:
        return []
    heap = [1]       # min-heap of candidates, seeded with the first regular number
    seen = set([1])  # every value ever pushed, to prevent duplicate insertions
    result = []
    while len(result) < n:
        x = heapq.heappop(heap)  # smallest candidate is the next regular number
        result.append(x)
        for factor in [2, 3, 5]:
            y = x * factor       # every regular number > 1 is 2, 3, or 5 times a smaller one
            if y not in seen:
                seen.add(y)
                heapq.heappush(heap, y)
    return result
This algorithm generates each regular number exactly once and runs in $O(n \log n)$ time for the first $n$ terms, as each of the $O(n)$ insertions and extractions on the heap takes $O(\log n)$ time; this is superior to the $O(n^2)$ behavior of a naive brute-force search without optimized bounds or merging. An alternative dynamic programming formulation maintains separate indices into the list of previously generated regular numbers for the multiples of 2, 3, and 5, merging them in order as in a multi-way merge. Starting with a list containing 1, indices $i_2 = i_3 = i_5 = 0$ track the next candidates: the next term is the minimum of $h[i_2] \times 2$, $h[i_3] \times 3$, and $h[i_5] \times 5$, where $h$ is the growing list of regular numbers; every index whose candidate equals the chosen minimum is then advanced, which prevents duplicates. This achieves $O(n)$ time overall, as each term requires only constant-time operations. A sketch of this three-pointer variant appears below.
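The following minimal Python sketch of the three-pointer formulation (variable names illustrative) makes the duplicate handling explicit: all indices matching the chosen minimum advance together.

def hamming_numbers(n):
    """Return the first n regular numbers via the three-pointer merge."""
    h = [1]
    i2 = i3 = i5 = 0  # indices of the next numbers to multiply by 2, 3, and 5
    while len(h) < n:
        nxt = min(h[i2] * 2, h[i3] * 3, h[i5] * 5)
        h.append(nxt)
        # Advance every index whose candidate was selected; when two candidates
        # coincide (e.g., 6 = 2*3 = 3*2), both advance, so the duplicate is
        # skipped rather than appended twice.
        if nxt == h[i2] * 2:
            i2 += 1
        if nxt == h[i3] * 3:
            i3 += 1
        if nxt == h[i5] * 5:
            i5 += 1
    return h[:n]

print(hamming_numbers(10))  # [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]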

Implementations and Efficiency

Practical implementations of regular number generation commonly leverage priority queues to produce the sequence in ascending order while minimizing computational overhead. In Python, the heapq module facilitates this by maintaining a min-heap of candidate numbers derived from multiplying existing regular numbers by the primes 2, 3, and 5; duplicates are avoided either through a set tracking seen values or by merging sorted iterator streams. A comparable approach in C++ employs std::priority_queue configured as a min-heap (using a std::greater comparator) to store potential next values, paired with std::set for duplicate elimination, enabling scalable generation without redundant computations. Key optimizations focus on lazy evaluation, where the sequence is generated on demand via generators in Python or iterators in C++, deferring computation until values are requested and conserving memory by avoiding full precomputation. Heap-based methods are fast in practice, with reported generation of the first 1,000,000 regular numbers in under a second on modern hardware. Extensions to k-smooth numbers involve expanding the prime set (e.g., including 7 for 7-smooth numbers) and applying the same merging to incorporate multiples of the additional factors. Common pitfalls in these implementations include integer overflow from high exponents, mitigated by adopting arbitrary-precision arithmetic such as Python's built-in int or libraries like GMP in C++; ensuring duplicate-free output also requires rigorous set checks or monotonic merging, as naive multiplication can otherwise introduce redundancies. A lazy generator variant is sketched below.
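As an illustration of the lazy-evaluation style, the following Python sketch (names illustrative) wraps the heap approach in a generator that yields regular numbers on demand, one per request, with no upper bound fixed in advance:

import heapq
from itertools import islice

def regular_numbers():
    """Yield regular numbers 1, 2, 3, 4, 5, 6, 8, ... indefinitely."""
    heap = [1]
    seen = {1}
    while True:
        x = heapq.heappop(heap)
        yield x                # computation pauses here until the next value is requested
        for factor in (2, 3, 5):
            y = x * factor
            if y not in seen:  # Python ints are arbitrary precision, so no overflow
                seen.add(y)
                heapq.heappush(heap, y)

# Consume only as much of the infinite sequence as needed:
print(list(islice(regular_numbers(), 15)))  # first 15 terms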

Other Uses

Timekeeping and Calendars

The division of time into units based on regular numbers has deep historical roots in systems designed for precise astronomical tracking. The modern sexagesimal structure of timekeeping, with 60 seconds per minute and 60 minutes per hour, originates from the ancient Babylonian adoption of a base-60 numeral system; 60 factors as $2^2 \times 3 \times 5$, a regular number with abundant divisors that permitted fractional divisions without complex remainders. This facilitated accurate observations of celestial bodies, as sexagesimal arithmetic allowed straightforward subdivisions of angles and time intervals in astronomy. A full 24-hour day equates to 86,400 seconds, which factors as $2^7 \times 3^3 \times 5^2$, another regular number whose extensive divisors (96 in total) support even partitioning for scheduling and scientific computations. This numerical property enhances the practicality of time divisions in observational contexts, enabling clean multiples and submultiples that align with periodic astronomical phenomena without introducing awkward fractions. In calendar systems, regular numbers similarly underpin cyclical structures. The ancient Maya utilized a vigesimal (base-20) system in their Long Count and ritual calendars, with 20 factoring as $2^2 \times 5$, allowing for hierarchical counts that integrated solar, lunar, and ritual periods into a cohesive framework. Modern timekeeping incorporates approximations to handle irregularities, such as leap seconds added to Coordinated Universal Time (UTC) to reconcile the second's regularity with the Earth's variable rotation, ensuring clocks remain aligned with solar observations while retaining the benefits of regular-number-based units. These applications underscore the advantages of regular numbers in timekeeping: their prime factors limited to 2, 3, and 5 yield highly divisible units ideal for fraction-free divisions in astronomical and calendrical contexts, promoting precision across diverse cultures. The factorization and divisor count of 86,400 can be verified directly, as in the sketch below.
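A short Python check (illustrative) confirms the factorization of 86,400 and its 96 divisors via the standard divisor-count formula $\prod_i (e_i + 1)$ over the prime exponents $e_i$:

def factorize(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

seconds_per_day = 24 * 60 * 60  # 86,400
factors = factorize(seconds_per_day)
print(factors)                  # {2: 7, 3: 3, 5: 2}

# Number of divisors: multiply (exponent + 1) over all prime factors
num_divisors = 1
for exponent in factors.values():
    num_divisors *= exponent + 1
print(num_divisors)             # (7+1)(3+1)(2+1) = 96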

Signal Processing

In digital signal processing for audio, sampling rates that are 5-smooth numbers (products of powers of 2, 3, and 5) facilitate efficient computation in tasks like rate conversion and buffering due to their divisibility by small integers, aligning well with hardware and algorithmic requirements. For instance, the standard 48 kHz rate factors as $2^7 \times 3 \times 5^3$, making it preferable to 44.1 kHz (which includes a factor of 7) for minimizing computational overhead in polyphase resampling filters and block processing. This property extends to higher rates like the 192 kHz used in Blu-ray audio, which factors as $2^9 \times 3 \times 5^3$ and supports lossless high-resolution playback with optimized digital workflows. Filter design in audio synthesis, particularly for just-intonation systems built on 2-3-5 prime ratios, uses these ratios to generate tones with simple frequency relationships such as 3:2 or 5:4. Modern FFT implementations likewise favor 5-smooth lengths for input arrays to optimize performance via mixed-radix algorithms, which decompose the transform into efficient stages using factors of 2, 3, and 5. For example, functions like SciPy's next_fast_len select the nearest 5-smooth number greater than or equal to a target length, ensuring faster execution on audio data than prime lengths or lengths containing large prime factors. This approach is rooted in libraries like FFTPACK and FFTW, where small-prime factorizations minimize multiplications and enable efficient transforms in audio applications.
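A minimal sketch of the padding idea follows; rather than relying on a specific library API, it implements the 5-smooth length search directly (the helper name next_fast_len_5smooth is illustrative, mirroring the behavior the text attributes to SciPy's next_fast_len):

def is_5_smooth(n):
    """True if n has no prime factor larger than 5."""
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

def next_fast_len_5smooth(target):
    """Smallest 5-smooth integer >= target, suitable as an efficient FFT length."""
    n = target
    while not is_5_smooth(n):
        n += 1
    return n

# Pad a 44,100-sample buffer (1 s at 44.1 kHz) up to a 5-smooth FFT length:
print(next_fast_len_5smooth(44100))  # 45000 = 2^3 * 3^2 * 5^4

Zero-padding a signal from 44,100 samples to 45,000 before transforming trades a small amount of extra memory for a length whose mixed-radix decomposition uses only the factors 2, 3, and 5.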