List of numbers
A list of numbers in mathematics refers to the systematic classification of numerical values and structures that form the basis of arithmetic, algebra, and other branches of the field, encompassing sets such as natural numbers, whole numbers, integers, rational numbers, irrational numbers, real numbers, and complex numbers.[1][2] These categories build upon one another hierarchically, starting with the simplest counting elements and extending to more abstract systems that enable solutions to equations involving roots and imaginaries.[3]

Natural numbers are the foundational set used for counting and ordering, commonly consisting of the positive integers beginning with 1 (i.e., 1, 2, 3, ...), excluding zero and negatives, though some definitions (such as in set theory) include zero.[4][5] Whole numbers extend this set by including zero, forming {0, 1, 2, 3, ...}, which supports basic operations without fractions.[4] Integers broaden the scope further to include negative values, yielding the complete set {..., -3, -2, -1, 0, 1, 2, 3, ...}, essential for representing differences and debts in real-world modeling.[1]

Rational numbers comprise all fractions whose numerator and denominator are integers (with nonzero denominator), such as 1/2 or -3/4, including the integers themselves; they are dense on the number line, meaning that between any two rationals there lies another rational. In contrast, irrational numbers cannot be expressed as such fractions; they include constants like √2 and π and fill the gaps in the rationals to complete the real numbers, which comprise all points on the infinite number line and support continuous functions.[3] Finally, complex numbers incorporate the imaginary unit, expressed as a + bi where i^2 = -1, allowing solutions to equations like x^2 + 1 = 0 and extending applications to fields like electrical engineering and quantum mechanics.[2] This progression from basic counting to advanced structures mirrors the historical evolution of mathematical systems.[6]

Natural Numbers
Small Natural Numbers
Small natural numbers, ranging from 1 to 10, serve as the foundational elements of counting, arithmetic, and numerical representation in mathematics and daily life. These numbers emerged in ancient civilizations through early counting systems, where tallies or notches on bones and stones represented quantities starting from 1, as seen in Mesopotamian records dating back over 5,000 years and Egyptian hieroglyphs around 3000 BCE.[7][8] Over time, these systems evolved into structured numeral notations, with base-10 counting likely influenced by human finger anatomy in many cultures.[9]

The following table summarizes the unique properties of these small natural numbers, highlighting their basic roles in arithmetic and number theory:

| Number | Key Properties |
|---|---|
| 1 | Multiplicative identity (a \times 1 = a for any a); neither prime nor composite.[10] |
| 2 | Smallest prime number; smallest even number; the only even prime.[11] |
| 3 | Smallest odd prime; sum of the first two natural numbers (1+2).[11] |
| 4 | Smallest composite number (2 \times 2); first perfect square after 1 (2^2).[11] |
| 5 | Prime number; number of sides of the regular pentagon.[11] |
| 6 | First perfect number (sum of proper divisors 1+2+3=6); smallest number divisible by 1, 2, and 3.[12] |
| 7 | Prime number; number of sides of the regular heptagon.[11] |
| 8 | First cube after 1 (2^3); divisors 1, 2, 4, 8.[10] |
| 9 | First perfect square after 4 (3^2); sum of the first three odd numbers (1+3+5).[10] |
| 10 | Basis of the decimal system; sum of the first four natural numbers (1+2+3+4).[9][10] |
Mathematical Properties
The natural numbers \mathbb{N} are rigorously defined through the Peano axioms, a set of five foundational postulates introduced by the Italian mathematician Giuseppe Peano in his 1889 publication Arithmetices principia. These axioms establish the structure of \mathbb{N} (starting from 1) as follows, with a small illustrative sketch after the list:
- Base axiom: 1 is a natural number.
- Successor axiom: For every natural number n, there exists a unique successor S(n) that is also a natural number.
- No predecessor for one: There is no natural number n such that S(n) = 1.
- Injectivity of successor: If S(m) = S(n), then m = n.
- Induction axiom: If a set K \subseteq \mathbb{N} contains 1 and, whenever it contains n, it also contains S(n), then K = \mathbb{N}.[13]
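The successor structure described by these axioms can be modeled directly in code. The following Python sketch is purely illustrative (the class names Nat, One, and S are hypothetical, not standard library types); it represents each natural number as an iterated successor of 1 and defines addition by recursion, mirroring how arithmetic is built from the axioms:

```python
from dataclasses import dataclass

class Nat:
    """A Peano natural: either One or the successor of another Nat."""

@dataclass(frozen=True)
class One(Nat):
    pass

@dataclass(frozen=True)
class S(Nat):
    pred: Nat  # the unique natural this one succeeds

def add(m: Nat, n: Nat) -> Nat:
    """Addition by recursion: m + 1 = S(m) and m + S(k) = S(m + k)."""
    if isinstance(n, One):
        return S(m)
    return S(add(m, n.pred))

def to_int(n: Nat) -> int:
    """Collapse a Peano numeral back to a Python int for display."""
    return 1 if isinstance(n, One) else 1 + to_int(n.pred)

two = S(One())
three = S(two)
assert to_int(add(two, three)) == 5  # 2 + 3 = 5, derived purely from successors
```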
Cultural and Practical Uses
Natural numbers play a prominent role in cultural calendars and symbolic traditions worldwide. The number 7, for instance, structures the seven-day week, a cycle rooted in ancient Mesopotamian astronomy and religious practices, where it symbolized completeness and was used in rituals like temple offerings and exorcisms.[16] This septenary division influenced Jewish, Christian, and Islamic calendars, reflecting the seven classical planets known to antiquity.[17] Similarly, the number 12 divides the year into twelve months in the Gregorian calendar, derived from the Roman lunar-solar system that approximated the solar year with twelve lunar cycles.[18] In astrology, the twelve zodiac signs, originating from Babylonian astronomy around the 5th century BCE, partition the ecliptic into equal segments for horoscopic purposes.[19]

In practical applications, natural numbers underpin measurement and computational systems essential to commerce and technology. The base-10 (decimal) system, likely developed independently in ancient Egypt and Mesopotamia due to human finger counting, facilitated trade by enabling efficient recording of quantities on clay tablets and papyrus for accounting and inventory.[20] This positional notation, refined by Indian mathematicians around 500 CE and transmitted via Arab scholars, became the global standard for economic transactions.[21] In contrast, the binary system, a base-2 numeral system using only 0 and 1, was formalized by Gottfried Wilhelm Leibniz in the 1690s, inspired by ancient Chinese I Ching hexagrams, and later adopted in the 20th century for digital computing due to its compatibility with electronic switches.[22] Leibniz envisioned binary arithmetic as a foundation for mechanical calculators, paving the way for modern computers.[23]

Cultural taboos associated with certain natural numbers reveal symbolic fears tied to language and history. In Western cultures, 13 is deemed unlucky, tracing to Norse mythology where Loki, the 13th guest at a divine banquet, caused the death of Balder, and reinforced by the Christian Last Supper with 13 attendees including Judas the betrayer.[24] This triskaidekaphobia leads to omissions like skipping the 13th floor in buildings.[25] In East Asian societies, particularly China, Japan, and Korea, the number 4 evokes tetraphobia because its pronunciation (e.g., "shi" in Japanese) is nearly homophonous with the word for "death," prompting avoidance in addresses, phone numbers, and hospital rooms.[26] In ancient philosophy, perfect numbers such as 6 and 28, equal to the sum of their proper divisors, symbolized cosmic harmony for the Pythagoreans, who viewed them as embodiments of divine order.[12]

Special Classes of Natural Numbers
Prime Numbers
A prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself.[27] This definition originates from ancient Greek mathematics, where Euclid described such numbers as those "measured by a unit alone."[27] Primes form the building blocks of all natural numbers through multiplication, underscoring their central role in number theory as the atoms of the integer system.[28]

The first few prime numbers are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, and 97.[29] Note that 2 is the only even prime number, with all subsequent primes being odd.[28] These examples illustrate the distribution of primes among the natural numbers, which becomes sparser as numbers increase, yet remains essential for understanding divisibility and factorization.[28]

Euclid proved that there are infinitely many prime numbers in his Elements, a result known as Euclid's theorem on the infinitude of primes.[30] The proof proceeds by contradiction: assume there are finitely many primes p_1, p_2, \dots, p_k, and construct the number N = p_1 p_2 \cdots p_k + 1. This N is greater than 1 and not divisible by any p_i, so it must either be prime itself or be divisible by some prime not in the list, contradicting the assumption of finiteness.[31] This theorem highlights the unbounded nature of primes, influencing countless results in analytic and algebraic number theory.[30]

Twin primes are pairs of primes that differ by 2, such as (3, 5), (5, 7), (11, 13), and (17, 19).[32] The twin prime conjecture posits that there are infinitely many such pairs, a problem dating back to ancient times but formally stated in the 19th century.[33] Similarly, Goldbach's conjecture, proposed in 1742, states that every even integer greater than 2 can be expressed as the sum of two primes, such as 4 = 2 + 2 and 10 = 3 + 7.[34] These unsolved conjectures underscore the enduring mysteries surrounding primes and their additive properties in number theory.[33]
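As a concrete illustration of these definitions, the following Python sketch uses a textbook algorithm, the sieve of Eratosthenes (not anything specific to the sources above), to list the primes below 100 and pick out the twin prime pairs among them:

```python
def primes_up_to(limit: int) -> list[int]:
    """Sieve of Eratosthenes: repeatedly cross out multiples of each prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

primes = primes_up_to(100)
print(primes)   # [2, 3, 5, 7, 11, ..., 97] -- the 25 primes below 100

# Twin primes below 100: consecutive primes differing by 2.
twins = [(p, q) for p, q in zip(primes, primes[1:]) if q - p == 2]
print(twins)    # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), ...]
```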
Highly Composite Numbers
A highly composite number is a positive integer n such that the number of its positive divisors, denoted by the divisor function d(n), is greater than d(m) for every positive integer m < n.[35] This property makes these numbers particularly rich in factors relative to their magnitude, with the sequence beginning 1, 2, 4, 6, 12, 24, 36, 48, 60, 120, and continuing to larger values like 840.[35] For instance, 12 has d(12) = 6 divisors (1, 2, 3, 4, 6, 12), exceeding the divisor counts of all smaller positive integers; similarly, 60 has d(60) = 12 divisors, and 840 has d(840) = 32.

In 1915, Srinivasa Ramanujan extended this concept by defining superior highly composite numbers, a subclass where, for some \epsilon > 0, the ratio d(n)/n^\epsilon is at least as large as d(k)/k^\epsilon for all positive integers k.[36] These numbers satisfy a stricter condition on divisor density and form a subsequence of the highly composite numbers, beginning 2, 6, 12, 60, 120, 360, 2520, among others.[37] Ramanujan's formulation emphasizes their optimal structure in terms of prime factorization, where the exponents in the prime power decomposition are non-increasing.[36]

Highly composite numbers find applications in factorization efficiency, owing to their composition from small primes with balanced exponents, which allows rapid decomposition compared to numbers with fewer or larger factors.[35] In geometry, they enable constructions of highly symmetric polygons; for example, a regular 60-sided polygon benefits from the 12 divisors of 60, supporting diverse rotational and reflectional symmetries in its dihedral group.[38] Additionally, their abundance of divisors makes them practical in engineering and measurement systems, such as using 12 for inches in a foot or 60 for seconds in a minute, simplifying fractional calculations and subdivisions.[39]
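The defining record-breaking property is easy to check by brute force. This minimal Python sketch recomputes the start of the sequence from the definition alone (the helper d is an illustrative name matching the divisor function above, not an optimized implementation):

```python
def d(n: int) -> int:
    """Count the positive divisors of n."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

# A highly composite number has more divisors than every smaller positive integer.
record = 0
highly_composite = []
for n in range(1, 200):
    if d(n) > record:
        record = d(n)
        highly_composite.append(n)
print(highly_composite)  # [1, 2, 4, 6, 12, 24, 36, 48, 60, 120, 180]
```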
Perfect and Amicable Numbers
A perfect number is a positive integer equal to the sum of its proper divisors, excluding the number itself.[40] For example, 6 is perfect because its proper divisors are 1, 2, and 3, and 1 + 2 + 3 = 6; similarly, 28 is perfect as 1 + 2 + 4 + 7 + 14 = 28.[40] The ancient Greeks, including Euclid, studied these numbers, recognizing the first four: 6, 28, 496, and 8128.[12]

All known perfect numbers are even, and Euclid proved in the 3rd century BCE that every number of the form 2^{p-1}(2^p - 1), where 2^p - 1 is prime, is perfect.[40] In the 18th century, Leonhard Euler demonstrated that this form generates all even perfect numbers, establishing the Euclid-Euler theorem.[40] Here, 2^p - 1 must be a Mersenne prime, linking perfect numbers to the search for such primes.[40] As of 2025, 52 even perfect numbers are known, each corresponding to a discovered Mersenne prime.[41] Whether odd perfect numbers exist remains an unsolved problem in number theory, dating back over two millennia and considered one of the oldest open questions in mathematics.[42] Extensive searches and theoretical bounds, such as requiring at least 101 prime factors (counted with multiplicity) and a size exceeding 10^{1500}, have failed to find any, but no proof of nonexistence exists.[43]

Amicable numbers form pairs of distinct positive integers where the sum of the proper divisors of each equals the other number.[44] The smallest such pair is 220 and 284: the proper divisors of 220 sum to 284 (1 + 2 + 4 + 5 + 10 + 11 + 20 + 22 + 44 + 55 + 110 = 284), and those of 284 sum to 220 (1 + 2 + 4 + 71 + 142 = 220).[44] Known since antiquity and discussed by writers such as Iamblichus, amicable pairs extend the perfect-number concept to mutual divisor sums.[12] Over 1.2 billion such pairs are known as of 2023, with no general formula for generating all pairs, though specific constructions exist for certain forms.[45][44]
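Both definitions reduce to sums of proper divisors, which the following Python sketch checks directly for the values quoted above (the brute-force proper_divisor_sum helper is illustrative, not an optimized method):

```python
def proper_divisor_sum(n: int) -> int:
    """Sum the divisors of n that are strictly less than n."""
    return sum(k for k in range(1, n) if n % k == 0)

# Perfect numbers equal the sum of their proper divisors.
print([n for n in range(2, 10000) if proper_divisor_sum(n) == n])
# [6, 28, 496, 8128] -- the four perfect numbers known to the Greeks

# An amicable pair: each member's proper divisors sum to the other member.
a, b = 220, 284
assert proper_divisor_sum(a) == b and proper_divisor_sum(b) == a
```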
Other Integers
Zero
Zero occupies a unique position in the number system as neither positive nor negative, serving as the boundary between the two and embodying the concept of nothingness in quantitative terms. This neutrality distinguishes it from all other integers, enabling its foundational role in arithmetic and algebra. Unlike positive or negative numbers, zero does not contribute to magnitude in addition or multiplication but is indispensable for representing absence or balance in equations and measurements.

The concept of zero as a number originated in ancient India, where it evolved from a placeholder in positional notation to a full-fledged numeral with arithmetic properties. By the 7th century CE, the mathematician Brahmagupta formalized its use in his treatise Brahmasphuṭasiddhānta (628 CE), providing the first explicit rules for operations involving zero, such as stating that zero added to or subtracted from any number yields that number unchanged. This innovation built on earlier Indian developments, including the Bakhshali manuscript from around the 3rd–4th century CE, which used a dot as a placeholder zero in decimal calculations. Brahmagupta's work treated zero as an independent number, revolutionizing mathematics by allowing consistent handling of voids in numerical systems.[46][47]

These Indian advancements were adopted into the Arabic numeral system during the Islamic Golden Age, with scholars like Muhammad ibn Musa al-Khwarizmi incorporating zero into his 9th-century texts on algebra and arithmetic, which used the term sifr (meaning "empty") for zero. This integration facilitated the spread of the decimal place-value system to Europe via translations and the work of Fibonacci in the 13th century, establishing zero as a core element of modern numerals. The adoption transformed global computation, enabling efficient representation of large and small quantities.[48][46]

Mathematically, zero functions as the additive identity in the ring of integers, satisfying the property that for any integer a, a + 0 = 0 + a = a. This ensures that adding zero preserves the value of any expression, forming the basis for many algebraic identities and proofs. However, division by zero remains undefined, as no integer q satisfies 0 \cdot q = a for a \neq 0, and 0/0 leads to indeterminacy since any number multiplied by zero yields zero. This undefined nature prevents inconsistencies in the field axioms and underscores zero's exceptional status in arithmetic operations.[49]

In set theory, zero represents the cardinality of the empty set \emptyset, which contains no elements. This correspondence defines the natural number 0 in axiomatic constructions like von Neumann's ordinals, where the ordinal 0 is the empty set itself. The empty set's cardinality establishes zero as the foundational integer in set-theoretic foundations of mathematics.

Negative Integers
Negative integers are the whole numbers less than zero, expressed as -1, -2, -3, and continuing indefinitely. They form the set {..., -3, -2, -1}, symmetric to the positive integers through the operation of additive inversion, where for any positive integer n, the negative integer -n satisfies n + (-n) = 0, formally defined as -n = 0 - n.[50][51]

The historical development of negative integers traces back to ancient financial practices, particularly in 7th-century India, where the mathematician Brahmagupta (c. 598–668 CE) formalized their use in his treatise Brahmasphutasiddhanta (628 CE), interpreting positive numbers as fortunes or assets and negative numbers as debts or deficiencies to balance accounts.[52] This practical application in debt systems provided an early conceptual foundation, enabling arithmetic operations like addition and multiplication involving negatives. In Europe, acceptance grew in the 17th century through René Descartes' introduction of analytic geometry in La Géométrie (1637), where the coordinate plane naturally extended the number line to include negative directions, despite Descartes' initial reluctance to treat negative roots of equations as valid solutions, labeling them "false."[52][53]

A key property of negative integers is the absolute value function, which for a negative integer -n (with n > 0) yields |-n| = n, measuring the distance from zero on the number line irrespective of sign.[54] This function underscores their magnitude equivalence to positives while preserving directional distinction. On the real number line, negative integers occupy positions to the left of zero, ordered such that \dots < -4 < -3 < -2 < -1 < 0, with greater negativity indicating smaller values and increasing distance from the origin.[55] Zero acts as the neutral boundary separating negative integers from their positive counterparts.[50]

Powers of Ten and SI Prefixes
Powers of ten provide a systematic way to denote large integers by expressing them as 10 raised to an integer exponent, where 10^n equals 1 followed by n zeros in decimal notation. This base-10 system is foundational in mathematics and science for scaling numbers efficiently, starting from 10^0 = 1 and increasing to extremely large values such as 10^3 = 1,000 or 10^6 = 1,000,000.[56]

The International System of Units (SI) formalizes the naming of these powers through prefixes, which attach to base units to indicate multiples by factors of 10^3 or higher (and submultiples for smaller scales, though the focus here is on large integers). These prefixes, established by the International Bureau of Weights and Measures (BIPM), extend up to 10^30 as of the 2022 revision, enabling concise expression of vast quantities in fields like physics and computing. For powers below 10^3, prefixes like deca (10^1) and hecto (10^2) exist but are less commonly applied to standalone integers.[57][58]

The following table lists the official SI prefixes for positive powers of ten, including the most recent additions of ronna (10^27) and quetta (10^30), which address growing needs in data storage and cosmology.[57]

| Factor | Name | Symbol |
|---|---|---|
| 10^0 | (none) | — |
| 10^1 | deca | da |
| 10^2 | hecto | h |
| 10^3 | kilo | k |
| 10^6 | mega | M |
| 10^9 | giga | G |
| 10^12 | tera | T |
| 10^15 | peta | P |
| 10^18 | exa | E |
| 10^21 | zetta | Z |
| 10^24 | yotta | Y |
| 10^27 | ronna | R |
| 10^30 | quetta | Q |
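In code, the table above amounts to a lookup keyed by exponent. The sketch below is a simple illustration (the with_si_prefix helper and its formatting choices are assumptions, not a standard library facility); it snaps a value to the largest applicable power of 10^3 from the table:

```python
import math

# Exponent-to-prefix mapping taken from the table above.
PREFIXES = {0: "", 3: "k", 6: "M", 9: "G", 12: "T", 15: "P",
            18: "E", 21: "Z", 24: "Y", 27: "R", 30: "Q"}

def with_si_prefix(value: float, unit: str) -> str:
    """Render a value using the largest SI prefix not exceeding it."""
    exponent = int(math.log10(abs(value))) // 3 * 3
    exponent = max(0, min(exponent, 30))   # clamp to the table's range
    return f"{value / 10**exponent:g} {PREFIXES[exponent]}{unit}"

print(with_si_prefix(299_792_458, "m/s"))  # '299.792 Mm/s'
print(with_si_prefix(1e27, "B"))           # '1 RB' (one ronnabyte)
```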
Rational Numbers
Unit Fractions
A unit fraction is a positive rational number expressed as \frac{1}{n}, where n is a positive integer.[61] These fractions formed the basis of fractional notation in ancient Egyptian mathematics, where all rational numbers were represented as sums of distinct unit fractions, a practice documented in surviving papyri.[61] This system avoided repeated denominators and non-unit numerators except for the special case of \frac{2}{3}, reflecting the Egyptians' preference for additive decompositions in arithmetic problems related to division, area, and volume.[61]

The Rhind Mathematical Papyrus, dating to approximately 1650 BC and attributed to the scribe Ahmes, contains extensive tables and examples of Egyptian fraction expansions, including a 2/n table for odd n from 5 to 101.[62] For instance, the fraction \frac{2}{5} is decomposed as \frac{1}{3} + \frac{1}{15}, illustrating how the Egyptians solved practical problems like dividing loaves or measuring fields by summing unit fractions.[61] Such representations ensured exactness without relying on a general fractional notation, influencing later mathematical traditions.[61]

Among notable unit fractions, \frac{1}{2} equals 0.5 in decimal form and serves as the foundational halving operation in many ancient calculations.[63] The fraction \frac{1}{3} has the repeating decimal expansion 0.\overline{3}, embodying the concept of thirds central to Egyptian divisions of resources.[63] Similarly, \frac{1}{7} yields the six-digit repeating decimal 0.\overline{142857}, a cyclic pattern arising from the long period of its denominator in base 10.[64]

Unit fractions are fundamental to the harmonic series, whose partial sums are the harmonic numbers defined by H_n = \sum_{k=1}^n \frac{1}{k} = 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n}. This sum approximates \ln n + \gamma, where \gamma \approx 0.5772156649 is the Euler-Mascheroni constant, providing insight into the divergent yet slowly growing nature of the series.[65] The difference H_n - \ln n converges to \gamma, a constant whose own irrationality remains unproven, linking unit fractions to deep questions in analysis.[65]
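One classical way to produce such expansions, though not the Egyptians' own table-based method, is the greedy algorithm usually attributed to Fibonacci: repeatedly subtract the largest unit fraction that fits. A minimal Python sketch, together with a numerical check of the harmonic-number approximation above:

```python
from fractions import Fraction
from math import ceil, log

def egyptian(frac: Fraction) -> list[Fraction]:
    """Greedy expansion into distinct unit fractions."""
    terms = []
    while frac > 0:
        n = ceil(1 / frac)            # smallest n with 1/n <= frac
        terms.append(Fraction(1, n))
        frac -= Fraction(1, n)
    return terms

print(egyptian(Fraction(2, 5)))   # [1/3, 1/15], matching the Rhind papyrus table
print(egyptian(Fraction(4, 13)))  # [1/4, 1/18, 1/468]

# H_n - ln n tends to the Euler-Mascheroni constant (~0.5772).
H_1000 = sum(Fraction(1, k) for k in range(1, 1001))
print(float(H_1000) - log(1000))  # ~0.5777, already close to gamma
```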
Famous Rational Approximations
One of the most renowned rational approximations is \frac{22}{7} \approx 3.142857, which serves as an upper bound for \pi. This fraction was established by Archimedes in the 3rd century BCE through his method of inscribing and circumscribing polygons around a circle to bound the value of \pi, proving \frac{223}{71} < \pi < \frac{22}{7}.[66]

A superior approximation to \pi is \frac{355}{113} \approx 3.14159292, accurate to six decimal places; it is the best rational approximation of \pi with a denominator below 10,000, and it remained the most accurate value of \pi known until the 16th century. This fraction was independently discovered by the Chinese mathematician Zu Chongzhi around 480 CE using an iterative polygon method extending Archimedes' approach with a 24,576-sided polygon.[67]

For the base of the natural logarithm e, continued fraction convergents provide notable rational approximations, such as \frac{19}{7} \approx 2.714286, an early convergent of the expansion e = [2; 1, 2, 1, 1, 4, 1, 1, 6, \dots]. This arises from the continued fraction of e and yields an error of less than 0.004.[68]

Rational approximations to the golden ratio \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618034 are generated by ratios of consecutive Fibonacci numbers, with \frac{8}{5} = 1.6 as an early convergent accurate to within 0.02. These convergents stem from the continued fraction \phi = [1; \overline{1}], where the error satisfies | \phi - \frac{F_{n+1}}{F_n} | < \frac{1}{F_n F_{n+1}}, with F_n denoting the nth Fibonacci number.[69] Simple rationals like \frac{3}{4} = 0.75 appear in basic decimal approximations or quadratic equation solutions, illustrating everyday uses beyond advanced constants.
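All of these approximations are continued fraction convergents, which can be generated mechanically. The following Python sketch (the convergents helper is an illustrative name; the truncated expansion of \pi is quoted from standard tables) reproduces 22/7 and 355/113, plus the Fibonacci convergents to \phi:

```python
from fractions import Fraction

def convergents(cf: list[int]) -> list[Fraction]:
    """Convergents of a finite continued fraction [a0; a1, a2, ...]."""
    results = []
    for i in range(1, len(cf) + 1):
        value = Fraction(cf[i - 1])
        for a in reversed(cf[: i - 1]):
            value = a + 1 / value     # fold the expansion from the inside out
        results.append(value)
    return results

print(convergents([3, 7, 15, 1]))  # [3, 22/7, 333/106, 355/113]

# Ratios of consecutive Fibonacci numbers converge to phi = [1; 1, 1, ...].
fib = [1, 1]
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])
print([Fraction(b, a) for a, b in zip(fib, fib[1:])])
# [1, 2, 3/2, 5/3, 8/5, 13/8, 21/13, 34/21, 55/34]
```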
Real Numbers
Algebraic Real Numbers
Algebraic real numbers are the real roots of non-zero polynomial equations with integer (or equivalently, rational) coefficients. These numbers form a field extension of the rational numbers \mathbb{Q}, and every algebraic real number \alpha has a minimal polynomial over \mathbb{Q} of some finite degree n \geq 1, meaning [\mathbb{Q}(\alpha) : \mathbb{Q}] = n. Rationals themselves are algebraic of degree 1, while irrationals like \sqrt{2} have higher degrees.[70]

A classic example is \sqrt{2}, the positive real solution to the quadratic equation x^2 - 2 = 0, which has minimal degree 2 over \mathbb{Q} and approximates 1.414213562. Another prominent example is the golden ratio \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618033989, which satisfies the minimal polynomial x^2 - x - 1 = 0 and also has degree 2; it arises in geometry, such as the limit of ratios of successive Fibonacci numbers. These examples illustrate how algebraic reals can be expressed explicitly using radicals in some cases, though not always for higher degrees.[70][69][71]

Among algebraic real numbers, constructible numbers form an important subclass, defined as those obtainable from the integers via a finite sequence of field operations (addition, subtraction, multiplication, division) and square root extractions. The field extension generated by a constructible number over \mathbb{Q} has degree 2^k for some non-negative integer k, reflecting the quadratic nature of each adjoining step; for instance, \sqrt{2} is constructible (degree 2), as is \sqrt{2 + \sqrt{2}} (degree 4). This property underpins classical compass-and-straightedge constructions in Euclidean geometry, limiting which regular polygons (e.g., the pentagon, but not the heptagon) can be constructed.[72][73]
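These minimal-polynomial relations can be verified numerically. The sketch below (plain floating-point checks, not a symbolic proof) confirms the degree-2 and degree-4 examples quoted above; the quartic for \sqrt{2 + \sqrt{2}} follows from squaring twice to eliminate the radicals:

```python
import math

sqrt2 = math.sqrt(2)
phi = (1 + math.sqrt(5)) / 2

assert abs(sqrt2**2 - 2) < 1e-12        # root of x^2 - 2
assert abs(phi**2 - phi - 1) < 1e-12    # root of x^2 - x - 1

# A constructible number of degree 4: x = sqrt(2 + sqrt(2)) satisfies
# x^2 = 2 + sqrt(2), hence (x^2 - 2)^2 = 2, i.e. x^4 - 4x^2 + 2 = 0.
x = math.sqrt(2 + math.sqrt(2))
assert abs(x**4 - 4 * x**2 + 2) < 1e-12
```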
Transcendental Real Numbers
Transcendental real numbers are real numbers that are not algebraic, meaning they are not roots of any non-zero polynomial equation with rational coefficients.[74] This class of numbers arises naturally in analysis and geometry, distinguishing them from algebraic reals that satisfy polynomial relations.[75]

The number e, the base of the natural logarithm, is a fundamental example of a transcendental real number. It was proved transcendental by Charles Hermite in 1873 using integral representations and properties of exponential functions.[76] Hermite's proof shows that assuming e algebraic leads to a contradiction via rational approximations and factorial growth rates.[76] The number e admits the infinite series expansion e = \sum_{n=0}^\infty \frac{1}{n!}, which converges rapidly and gives e \approx 2.71828.[77]

Another prominent example is \pi, the ratio of a circle's circumference to its diameter, proved transcendental by Ferdinand von Lindemann in 1882. Lindemann's proof extends Hermite's techniques, establishing that e^{\alpha} is transcendental for any non-zero algebraic \alpha; since e^{i\pi} = -1 is algebraic, i\pi, and hence \pi, cannot be algebraic. This result resolved the ancient problem of squaring the circle, showing it impossible with straightedge and compass.[75] The Leibniz formula provides a series for \pi: \pi = 4 \sum_{n=0}^\infty \frac{(-1)^n}{2n+1}, which alternates and converges to approximately 3.14159, though slowly.

The existence of transcendental numbers was first demonstrated in 1844 by Joseph Liouville through an explicit construction using Diophantine approximation.[75] Liouville's theorem bounds how well algebraic irrationals can be approximated by rationals, allowing him to build a number \lambda = \sum_{k=1}^\infty 10^{-k!} that violates this bound for every algebraic degree, proving it transcendental. Such Liouville numbers are exceptionally well approximable by rationals and form a dense set in the reals, though they have Lebesgue measure zero.[75]
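The two series quoted above, plus Liouville's construction, can be sampled directly in a few lines of Python (floating-point only, so the printed digits are approximations; the slow convergence of the Leibniz series is visible in how many terms it needs):

```python
from math import factorial

# e from its factorial series: 20 terms already give full double precision.
print(sum(1 / factorial(n) for n in range(20)))          # 2.718281828459045

# pi from the Leibniz series: a million terms for only ~6 correct digits.
print(4 * sum((-1) ** n / (2 * n + 1) for n in range(1_000_000)))  # 3.1415916...

# Liouville's constant: 1s at decimal places 1!, 2!, 3!, ... (terms beyond
# 10^-24 fall below double precision and are lost here).
print(sum(10.0 ** -factorial(k) for k in range(1, 6)))   # ~0.110001
```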
Numbers of Uncertain Irrationality
Numbers of uncertain irrationality encompass real constants that mathematicians widely believe to be irrational or even transcendental, yet definitive proofs remain elusive despite centuries of investigation. These open problems highlight the challenges of transcendental number theory, where established techniques like continued fractions or modular forms have succeeded for some constants but falter for others. Key examples include linear and multiplicative combinations of fundamental constants such as e and \pi, as well as values of the Riemann zeta function at odd integers.

A classic case is e + \pi, whose irrationality has been an unresolved question since the 18th century, when Leonhard Euler explored the properties of e and \pi. It is known that at least one of e + \pi and e\pi must be irrational: e and \pi are the roots of x^2 - (e + \pi)x + e\pi, so if both coefficients were rational, e and \pi would be algebraic, a contradiction. The specific status of e + \pi, however, is unknown.[78] Similarly, \pi / e is suspected to be irrational, but no proof exists, representing another longstanding open problem in the field.[78]

Apéry's constant, denoted \zeta(3) = \sum_{n=1}^\infty \frac{1}{n^3}, provides a notable instance where irrationality is established but stronger properties are not. In 1979, Roger Apéry demonstrated its irrationality through a clever construction of rational approximations satisfying a specific recurrence relation, marking a breakthrough after prior failures for odd zeta values beyond \zeta(2).[79] However, whether \zeta(3) is transcendental (that is, not algebraic over the rationals) remains unknown, with conjectures suggesting it is, based on patterns in zeta function values.[79]

Other examples include 2^e, whose irrationality is anticipated but unproven, illustrating the difficulty of handling exponential forms with transcendental bases or exponents.[80] These cases underscore ongoing research directions, such as improving irrationality measures or leveraging arithmetic geometry to resolve such statuses.
Imprecisely Known Real Constants
Imprecisely known real constants are mathematical quantities whose numerical values can only be pinned down within wide bounds due to prohibitive computational demands, often stemming from exponential growth in the resources required as precision increases. Unlike computable constants with algorithmic paths to arbitrary accuracy, these arise in number theory and combinatorics where exhaustive search or simulation becomes infeasible beyond certain scales. This imprecision highlights the boundaries of current computational capabilities, even as supercomputers push limits in other domains.

A classic example is Skewes' number, an upper bound for the smallest integer where the prime counting function π(n) exceeds the logarithmic integral li(n), assuming the Riemann hypothesis holds. Stanley Skewes originally derived a bound of approximately 10^{10^{10^{34}}} in 1933, but refinements over the decades have reduced it significantly; as of 2005, the bound stands below about 1.4 \times 10^{316}.[81] The difficulty arises from the need to analyze oscillatory behavior in the distribution of primes, which requires evaluating complex integrals and zeta function zeros over vast ranges, rendering exact determination practically impossible with current methods.

Ramsey numbers exemplify combinatorial constants with imprecise values due to exponential verification times. The diagonal Ramsey number R(5,5), defined as the minimal number of vertices ensuring a monochromatic K_5 in any 2-edge-coloring of the complete graph, is bounded by 43 ≤ R(5,5) ≤ 46. The lower bound of 43 was established via an explicit coloring construction in 1989, while the upper bound of 46 was proven in 2024 by showing no critical colorings exist at that scale.[82][83] Computing exact values demands checking an astronomically large number of graph colorings, with the search space growing super-exponentially, limiting progress to incremental bound improvements.

In dynamical systems like the Collatz conjecture, bounds on sequence behavior illustrate similar challenges. The conjecture posits that iterating the rule (3n+1 if n is odd, n/2 if n is even) eventually reaches 1 for any positive integer n; it has been verified computationally up to approximately 4 × 10^{21} as of 2025, but establishing global bounds on stopping times or maximum heights requires simulating paths whose lengths and peaks are hard to bound in advance. In contrast, transcendental constants like π, detailed elsewhere, have been computed to 300 trillion digits as of April 2025 using efficient series accelerations, underscoring how algorithmic tractability varies across real constants.[84][85]
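The Collatz rule itself is trivial to simulate, which is exactly why verification has reached such large bounds even though no proof exists. A minimal Python sketch (illustrative helper name, small search range):

```python
def collatz_steps(n: int) -> int:
    """Iterations of the 3n+1 rule needed to reach 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))                         # 111 steps for a small start
print(max(range(1, 10_000), key=collatz_steps))  # 6171, the slowest below 10^4
```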
Complex and Hypercomplex Numbers
Standard Complex Numbers
Complex numbers extend the real numbers by incorporating the imaginary unit i, defined such that i^2 = -1. A standard complex number is expressed as z = a + bi, where a and b are real numbers representing the real and imaginary parts, respectively. This form allows solutions to equations like x^2 + 1 = 0, which have no real roots, and underpins applications in fields such as electrical engineering and quantum mechanics.[2]

The modulus of a complex number z = a + bi, denoted |z|, measures its distance from the origin in the complex plane and is given by |z| = \sqrt{a^2 + b^2}. This Euclidean norm facilitates geometric interpretations, such as representing complex numbers as points or vectors in a two-dimensional plane with real and imaginary axes. For instance, the modulus of z = 3 + 4i is \sqrt{3^2 + 4^2} = 5, illustrating the Pythagorean theorem in this context.[2]

Gaussian integers form a significant subset of complex numbers, consisting of those where both a and b are integers, denoted as a + bi with a, b \in \mathbb{Z}. Introduced by Carl Friedrich Gauss in 1832, they constitute the ring \mathbb{Z}[i] and enjoy unique factorization in a manner analogous to the ordinary integers, though with different primes; for example, 5 = (1 + 2i)(1 - 2i). A representative Gaussian integer is 1 + i, whose modulus is \sqrt{2}. These integers are fundamental in algebraic number theory for studying quadratic fields.[86][87]

A profound relation in complex analysis is Euler's identity, which states e^{i\pi} + 1 = 0, connecting five fundamental constants: e, i, \pi, 1, and 0. Derived from the exponential form of complex numbers, e^{i\theta} = \cos \theta + i \sin \theta, it emerges when \theta = \pi, highlighting the deep interplay between exponential, trigonometric, and imaginary functions. The underlying formula was published by Leonhard Euler in 1748, and the identity exemplifies the elegance of complex numbers.[88]
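Python's built-in complex type and cmath module make these relations directly checkable; the snippet below mirrors the examples above (outputs are floating-point, so Euler's identity lands within rounding error of zero):

```python
import cmath

z = 3 + 4j
print(abs(z))                        # 5.0 -- the modulus sqrt(3^2 + 4^2)

print(cmath.exp(1j * cmath.pi) + 1)  # ~1.2e-16j, Euler's identity up to rounding

print((1 + 2j) * (1 - 2j))           # (5+0j) -- the Gaussian factorization of 5
```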
Quaternions and Octonions
Quaternions represent a four-dimensional extension of the complex numbers, introduced by William Rowan Hamilton in 1843 to handle three-dimensional rotations and geometric transformations. A quaternion is expressed as q = a + bi + cj + dk, where a, b, c, d are real numbers, and i, j, k are imaginary units satisfying the relations i^2 = j^2 = k^2 = -1 and ijk = -1. These rules make quaternion multiplication non-commutative, distinguishing quaternions from the complex numbers, whose multiplication is commutative. Hamilton's original formulation appeared in his 1843 communication to the Royal Irish Academy, with a systematic treatment published later in his 1853 lectures.[89][90]

Quaternions find significant applications in three-dimensional rotations, particularly in computer graphics, robotics, and aerospace engineering, where they provide a compact and numerically stable representation that avoids singularities like the gimbal lock associated with Euler angles. Unit quaternions, those with norm 1, form the group SU(2), which double-covers the rotation group SO(3), enabling efficient interpolation and composition of rotations. This computational utility was highlighted in early work on quaternion-based rotation algorithms, which demonstrated their efficiency over matrix methods for storage and computation.[91][92]

Octonions extend this progression to eight dimensions, forming an eight-dimensional algebra discovered independently by John T. Graves in 1843 and Arthur Cayley in 1845. An octonion can be written as o = a_0 + a_1 e_1 + \cdots + a_7 e_7, where the e_i are basis elements with specific multiplication rules that render octonions non-commutative and non-associative, though they remain alternative algebras. Unlike quaternions, the lack of associativity complicates their use, but octonions appear in advanced theoretical physics, including string theory, where their structure relates to exceptional Lie groups and the geometry of higher-dimensional spacetime. For instance, octonionic formulations have been proposed to unify aspects of superstring theory and the standard model of particle physics.[93]

The quaternions and octonions are linked through the Cayley-Dickson construction, a recursive process that doubles the dimension of a normed algebra while preserving the norm but progressively losing commutativity and associativity. Starting from the real numbers, one application yields the complex numbers, a second the quaternions, and a third the octonions; further iterations produce the sedenions and beyond, though only the algebras up to the octonions remain division algebras. This construction, formalized by Leonard Eugene Dickson in the early 20th century but rooted in 19th-century discoveries, underscores the unique sequence of real division algebras in dimensions 1, 2, 4, and 8.[94]
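Hamilton's relations determine the full multiplication table, which the following Python sketch implements as the standard Hamilton product on 4-tuples (a, b, c, d) standing for a + bi + cj + dk (a bare-bones illustration, not a production quaternion library):

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, i))  # (-1, 0, 0, 0): i^2 = -1
print(qmul(i, j))  # (0, 0, 0, 1):  ij = k
print(qmul(j, i))  # (0, 0, 0, -1): ji = -k, so multiplication is non-commutative
```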
Transfinite Numbers
Cardinal Numbers
Cardinal numbers, also known as cardinals, provide a measure of the size of a set, extending the intuitive counting of elements in finite collections to infinite ones through the concept of bijections between sets. Two sets have the same cardinality if there exists a one-to-one correspondence between their elements, allowing sets of vastly different appearances to be deemed equally large. This framework, developed by Georg Cantor in the late 19th century, revolutionized the understanding of infinity by revealing that not all infinite sets are of the same size.[95]

For finite sets, cardinal numbers coincide with the natural numbers, where the cardinality of a set is simply the number of distinct elements it contains; for example, a set with three elements has cardinality 3. This aligns directly with everyday counting, and any two finite sets of the same size can be paired bijectively without leftovers. In contrast, infinite cardinalities begin with \aleph_0 (aleph-null), the smallest infinite cardinal, which is the size of the set of natural numbers (including 0 in the set-theoretic convention) \mathbb{N} = \{0, 1, 2, \dots\}. Cantor established that the set of integers \mathbb{Z}, including negatives and zero, also has cardinality \aleph_0, as it can be mapped bijectively to \mathbb{N} via a simple enumeration that alternates positive and negative values (see the sketch below). Similarly, the rational numbers \mathbb{Q} are countable, possessing the same cardinality \aleph_0, demonstrating that some infinite sets can be listed in a sequence despite their apparent density.[96]

A fundamentally larger infinite cardinal is the cardinality of the continuum, denoted 2^{\aleph_0} or \mathfrak{c}, which is the size of the set of real numbers \mathbb{R}. Cantor proved this in 1891 using his diagonal argument: assuming a bijection between \mathbb{N} and \mathbb{R} leads to a contradiction, as one can construct a real number differing from every listed number in at least one decimal place, corresponding to the diagonal of the enumeration. This shows that the power set of \mathbb{N}, whose elements are all subsets of the naturals, has cardinality 2^{\aleph_0}, and since \mathbb{R} can be mapped injectively into this power set (via binary expansions), \mathfrak{c} = 2^{\aleph_0} > \aleph_0. The argument establishes the existence of uncountably infinite sets and a hierarchy of infinities.[97][98]

The continuum hypothesis (CH) posits that no cardinal exists strictly between \aleph_0 and 2^{\aleph_0}, implying \mathfrak{c} = \aleph_1, the next cardinal after \aleph_0 in the aleph hierarchy. In 1938, Kurt Gödel demonstrated that CH is consistent with Zermelo-Fraenkel set theory including the axiom of choice (ZFC), by constructing a model (the constructible universe L) in which CH holds, assuming ZFC is consistent. This relative consistency result showed CH cannot be disproved within ZFC. Complementing this, Paul Cohen proved in 1963 that the negation of CH is also consistent with ZFC, using his forcing technique to build a model where 2^{\aleph_0} > \aleph_1. Together, these results establish the independence of CH from ZFC, meaning neither CH nor its negation can be proved or disproved from the standard set-theoretic axioms.[99][100]
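The alternating enumeration of \mathbb{Z} mentioned above is short enough to write out explicitly; this sketch exhibits a bijection \mathbb{N} \to \mathbb{Z} witnessing that both sets share cardinality \aleph_0 (the helper name is illustrative):

```python
def nat_to_int(n: int) -> int:
    """Bijection from N = {0, 1, 2, ...} onto Z: 0, 1, -1, 2, -2, ..."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

print([nat_to_int(n) for n in range(9)])  # [0, 1, -1, 2, -2, 3, -3, 4, -4]
```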
Ordinal Numbers
Ordinal numbers, or ordinals, generalize the concept of counting positions in a sequence to well-ordered sets, extending beyond finite lengths to transfinite ones. They represent the order type of such sets, where each ordinal α is the set of all ordinals strictly less than it, ensuring a transitive and well-ordered structure under membership. The finite ordinals correspond precisely to the natural numbers in set-theoretic constructions, such as the von Neumann ordinals, where 0 is the empty set ∅, 1 is {∅}, 2 is {∅, {∅}}, and so on, up to any finite n built by successive successors. These form the initial segment of all ordinals and capture the order types of finite well-orderings.[101]

The first infinite ordinal is denoted ω, which is the least upper bound of all finite ordinals and represents the order type of the natural numbers under their usual ordering; it is a limit ordinal, having no immediate predecessor. Successor ordinals beyond ω include ω + 1, obtained by adjoining a single element after the ω-sequence, then ω + 2, ω + 3, and so on. Further examples involve multiplication and exponentiation: ω ⋅ 2 is the order type of two copies of ω concatenated (and the next limit ordinal after ω), while ω² = sup{ω ⋅ n | n < ω} is the limit of ω ⋅ n for finite n, illustrating polynomial-like growth in the ordinal hierarchy.

Transfinite induction provides a foundational proof principle for ordinals, analogous to mathematical induction on the naturals but extended to all ordinals. To prove a property P(α) holds for every ordinal α, it suffices to show that if P(β) holds for all β < α, then P(α) holds; the well-ordering ensures no infinite descending chains, validating the argument across transfinite structures. This principle underpins many results in set theory, such as the existence of ordinal arithmetic operations.[102]

Ordinal arithmetic defines operations on order types, preserving well-orderings. Addition α + β is the order type of the disjoint union of sets with order types α and β, where the β portion follows the α portion entirely; this operation is associative but not commutative. For instance, 1 + ω equals ω, as adjoining a single element before an infinite sequence is absorbed into the limit, whereas ω + 1 strictly exceeds ω by placing the extra element after the infinite sequence. Multiplication α ⋅ β concatenates β copies of α, and exponentiation α^β builds iterated multiplications, yielding a non-commutative arithmetic distinct from cardinal arithmetic.[103]

Among countable ordinals, the Church-Kleene ordinal, denoted ω₁^{CK}, is the smallest non-recursive ordinal: no computable well-ordering of the natural numbers has order type ω₁^{CK}. It is the supremum of all recursive ordinals, marking the boundary of computably representable well-orderings in the hyperarithmetic hierarchy.[104]
Numbers in Physical Contexts
Fundamental Physical Constants
Fundamental physical constants are fixed numerical values that underpin the laws of physics, appearing in equations describing electromagnetic, gravitational, quantum, and other phenomena. These constants are either defined exactly through international agreements, such as the 2019 revision of the SI units, or determined through high-precision measurements. The Committee on Data for Science and Technology (CODATA) periodically recommends values based on least-squares adjustments of experimental data to ensure consistency across scientific fields.

The speed of light in vacuum, denoted c, is a cornerstone of special relativity and electromagnetism, representing the maximum speed at which information can propagate. It is defined exactly as c = 299792458 m/s, establishing the meter as the distance light travels in vacuum in 1/299792458 of a second. This exact value has been in place since the 17th General Conference on Weights and Measures in 1983.[105]

The Planck constant, h, quantifies the scale of quantum mechanical effects and is central to the energy-frequency relation E = h\nu for photons. It is now defined exactly as h = 6.62607015 \times 10^{-34} J s, fixing the kilogram in terms of the Planck constant, elementary charge, and speed of light. This definition was adopted in 2019 to replace the previous artifact-based mass standard, improving measurement precision in quantum technologies.[105]

The fine-structure constant, \alpha, is a dimensionless quantity that characterizes the strength of the electromagnetic interaction between elementary charged particles. It appears in the fine structure of atomic spectral lines and is approximately \alpha \approx 7.2973525643 \times 10^{-3}, or equivalently \alpha^{-1} \approx 137.035999177(21). Unlike the others, \alpha is not exactly defined but is determined from measurements such as the quantum Hall effect and the anomalous magnetic moment of the electron, with its value refined in the 2022 CODATA adjustment to reflect improved experimental accuracy.[106]

Avogadro's constant, N_A, links the microscopic scale of atoms to the macroscopic scale of moles in chemistry, defining the number of constituent particles in one mole of substance. It is exactly N_A = 6.02214076 \times 10^{23} mol⁻¹, established in the 2019 SI revision to fix the mole independently of the kilogram. This exact value facilitates precise stoichiometry and supports advances in materials science and nanotechnology.[105]

| Constant | Symbol | Value | Units | Status | Role |
|---|---|---|---|---|---|
| Speed of light | c | 299792458 | m s⁻¹ | Exact (defined 1983) | Basis for relativity and meter definition |
| Planck constant | h | 6.62607015 \times 10^{-34} | J s | Exact (defined 2019) | Quantum scale and kilogram definition |
| Fine-structure constant | \alpha | 7.2973525643(11) \times 10^{-3} | Dimensionless | Measured (2022 CODATA) | Electromagnetic coupling strength |
| Avogadro constant | N_A | 6.02214076 \times 10^{23} | mol⁻¹ | Exact (defined 2019) | Atomic scale to molar scale |
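For programmatic work, CODATA values are available in scientific libraries; for example, assuming SciPy is installed, its scipy.constants module exposes the four constants tabulated above (attribute names as provided by that module):

```python
from scipy import constants

print(constants.c)               # 299792458.0 m/s (exact by definition)
print(constants.h)               # 6.62607015e-34 J s (exact by definition)
print(constants.fine_structure)  # ~0.007297..., the measured alpha
print(constants.Avogadro)        # 6.02214076e+23 per mole (exact by definition)
```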
Quantities in Measurements
In measurements at the atomic scale, the Bohr radius represents a fundamental length associated with the hydrogen atom, defined as the most probable distance between the proton and the electron in its ground state. This quantity is approximately a_0 \approx 5.29 \times 10^{-11} m.[107] It serves as a characteristic scale for atomic dimensions, influencing models in quantum mechanics and chemistry where electron orbits or wavefunctions are approximated.[107]

At human scales, everyday measurements reflect biological norms and physical limits. The average adult human height is approximately 1.7 m, varying by population but providing a baseline for ergonomics, architecture, and health assessments.[108] In terms of speed, the peak human running velocity record stands at about 12.4 m/s, achieved by Usain Bolt during his 100-meter sprint in 2009, highlighting the biomechanical constraints of muscle power and aerodynamics.[109]

Energy measurements often employ convenient units derived from fundamental constants, such as the electronvolt (eV), which quantifies energy on the atomic scale. One eV equals 1.602 \times 10^{-19} J, corresponding to the kinetic energy gained by an electron accelerated through a potential difference of 1 volt.[110] This unit is essential in particle physics and materials science for expressing binding energies, photon wavelengths, and ionization potentials without resorting to SI joules.[110]

Derived dimensionless quantities like the Reynolds number further illustrate how measurements integrate multiple physical parameters to characterize phenomena. In fluid dynamics, the Reynolds number Re = \frac{\rho v L}{\mu}, where \rho is fluid density, v is velocity, L is a characteristic length, and \mu is dynamic viscosity, predicts flow regimes: laminar for low Re (e.g., below 2000 in pipes) and turbulent for high Re (e.g., above 4000).[111] For instance, blood flow in arteries typically yields Re around 1000–4000, aiding biomedical engineering designs.[111]
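Since the Reynolds number is a plain ratio of measured quantities, computing it is a one-liner; the sketch below uses assumed illustrative values (water-like density and viscosity in a 5 cm pipe), not data from the sources above:

```python
def reynolds(rho: float, v: float, L: float, mu: float) -> float:
    """Re = rho * v * L / mu (dimensionless)."""
    return rho * v * L / mu

# Assumed values: water (rho ~1000 kg/m^3, mu ~1e-3 Pa s) at 1 m/s in a 5 cm pipe.
re = reynolds(rho=1000.0, v=1.0, L=0.05, mu=1.0e-3)
print(re)  # 50000.0 -- far above ~4000, so the flow is turbulent
```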
Numbers in Geographical and Astronomical Contexts
Planetary and Stellar Distances
In astronomy, planetary and stellar distances provide essential scales for understanding the vastness of space, often expressed in units like the astronomical unit (AU) for solar system measurements and light-years for interstellar and cosmic expanses. These distances highlight the progression from nearby celestial bodies to the structure of galaxies and the observable universe itself.

The average distance from Earth to the Sun defines the astronomical unit, which is approximately 149.6 million kilometers (or 92.96 million miles). This unit serves as a fundamental baseline for measuring distances within our solar system, such as the orbits of planets like Mars at about 1.52 AU or Jupiter at 5.2 AU.[112]

Beyond the solar system, the light-year measures interstellar distances as the distance light travels in one Julian year, approximately 9.461 \times 10^{12} kilometers (or 5.879 \times 10^{12} miles). The nearest star to our Sun, Proxima Centauri, lies about 4.24 light-years away, underscoring the isolation of our stellar neighborhood. On galactic scales, the Milky Way galaxy has a diameter of roughly 100,000 light-years, encompassing hundreds of billions of stars within its spiral arms. The observable universe, limited by the speed of light and the age of the cosmos, extends to a radius of approximately 46.5 billion light-years from Earth, containing an estimated 2 trillion galaxies.
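These unit definitions chain together by simple multiplication, as the following sketch shows by deriving the light-year from the defined speed of light and a Julian year, then re-expressing the distance to Proxima Centauri in astronomical units (constants as quoted above):

```python
AU_KM = 1.496e8                          # kilometers per astronomical unit
SECONDS_PER_JULIAN_YEAR = 365.25 * 86400
C_KM_PER_S = 299_792.458                 # defined speed of light, in km/s

light_year_km = C_KM_PER_S * SECONDS_PER_JULIAN_YEAR
print(f"{light_year_km:.4e}")            # 9.4607e+12 km per light-year

# Proxima Centauri at 4.24 light-years, expressed in AU:
print(round(4.24 * light_year_km / AU_KM))  # 268138, i.e. roughly 268,000 AU
```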
Earthly Geographical Scales
Earth's equatorial circumference measures approximately 40,075 kilometers, reflecting its oblate spheroid shape due to rotational forces, while the meridional circumference is slightly shorter at about 40,008 kilometers. This dimension establishes the baseline for terrestrial navigation and geospatial calculations, encompassing the planet's vast surface traversable by land and sea routes.[113]

In terms of vertical extremes, Earth's topography features profound contrasts between elevations and depressions. Mount Everest, the highest peak above sea level, reaches 8,848.86 meters in the Himalayas, a height confirmed through joint surveys by Nepal and China using advanced GPS and leveling techniques. Conversely, the Mariana Trench in the western Pacific Ocean plunges to the Challenger Deep, the deepest known point at 10,935 meters below sea level, measured via submersible transects and acoustic methods in recent expeditions. These extremes highlight the dynamic geological processes shaping Earth's crust, from tectonic uplift to subduction zones.[114][115]

Continental landmasses further illustrate Earth's geographical scale, with Asia dominating as the largest at approximately 44.5 million square kilometers, accounting for nearly 30% of global land area and encompassing diverse terrains from deserts to plateaus. Other continents, such as Africa at 30.37 million square kilometers, provide comparative scales for understanding habitat distribution and resource allocation. These areas, derived from satellite mapping and boundary delineations, underscore the uneven distribution of land across the planet's surface.[116]

| Feature | Approximate Value | Unit | Notes |
|---|---|---|---|
| Equatorial Circumference | 40,075 | km | Baseline for global circumnavigation[113] |
| Mount Everest Height | 8,848.86 | m | Highest point above sea level[114] |
| Challenger Deep Depth | 10,935 | m | Deepest ocean point[115] |
| Asia Area | 44,500,000 | km² | Largest continent[116] |