Factorial
In mathematics, the factorial of a non-negative integer n, denoted by n!, is defined as the product of all positive integers less than or equal to n; specifically, n! = n \times (n-1) \times \cdots \times 1, with the convention that 0! = 1.[1] This notation was introduced by the French mathematician Christian Kramp in 1808.[1] Factorials play a fundamental role in combinatorics, where n! represents the number of distinct permutations of n unique objects, providing the basis for counting arrangements in probability and discrete mathematics.[2] They also appear in the denominators of binomial coefficients for combinations, as \binom{n}{k} = \frac{n!}{k!(n-k)!}, which quantify selections without regard to order. Beyond integers, the factorial function is extended to real and complex numbers via the gamma function \Gamma(z), defined as \Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \, dt for positive real z, satisfying \Gamma(n+1) = n! and enabling interpolation for non-integer values.[3] The concept of factorials has historical roots in early combinatorial problems, though the modern notation and rigorous definition emerged in the 19th century alongside advancements in analysis.[4] In applied contexts, factorials underpin series expansions like the Taylor series for the exponential function and appear in statistical distributions, such as the Poisson distribution's probability mass function.
Definition
For Non-Negative Integers
The factorial of a positive integer n, denoted by n!, is defined as the product of all positive integers from 1 up to and including n.[5] This can be expressed mathematically as n! = n \times (n-1) \times (n-2) \times \cdots \times 2 \times 1.[6] The notation n! was introduced to compactly represent this repeated multiplication, which arises naturally in counting problems such as permutations.[5] An equivalent recursive formulation defines the factorial for positive integers n \geq 1 with the base case 1! = 1 and the relation n! = n \times (n-1)! for n > 1.[7] This recursive definition highlights the self-similar structure of the operation, where computing n! builds upon the result for n-1.[7] To illustrate, the first few factorials are computed as follows:
- 1! = 1
- 2! = 2 \times 1 = 2
- 3! = 3 \times 2 \times 1 = 6
- 4! = 4 \times 3 \times 2 \times 1 = 24
- 5! = 5 \times 4 \times 3 \times 2 \times 1 = 120.[6]
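The recursive formulation translates directly into code; the following is a minimal Python sketch (the function name is illustrative) that evaluates the definition and reproduces the values listed above:

```python
def factorial(n: int) -> int:
    """Compute n! from the recursive definition n! = n * (n-1)!, with base case 0! = 1."""
    if n < 0:
        raise ValueError("factorial is undefined for negative integers")
    return 1 if n == 0 else n * factorial(n - 1)

# The first few values match the list above.
print([factorial(n) for n in range(1, 6)])  # → [1, 2, 6, 24, 120]
```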
Extension to Zero and Negatives
The factorial of zero is defined as 0! = 1 by mathematical convention. This arises from viewing the factorial as a product: 0! is the empty product over no terms, which is conventionally taken to equal 1, just as the empty sum equals 0.[8] Combinatorially, this definition aligns with the fact that there is exactly one way to arrange or permute zero distinct objects: the empty arrangement, or the bijection from the empty set to itself.[9] This convention is historically justified in the context of binomial coefficients, where it ensures the formula \binom{n}{k} = \frac{n!}{k!(n-k)!} yields \binom{n}{0} = 1 for any non-negative integer n, reflecting the single empty subset of an n-set.[10] Without 0! = 1, fundamental counting principles in combinatorics, such as those underlying Pascal's triangle, would require special cases, disrupting the uniformity of the recursive structure n! = n \times (n-1)! for n \geq 1.[11] The factorial is undefined for negative integers. Extending the recursive definition n! = n \times (n-1)! down to n = 0 would require 0! = 0 \times (-1)!, that is, 1 = 0 \times (-1)!, which cannot hold for any finite value of (-1)!.[9] This issue persists for all negative integers, as the recursion leads to an infinite regress without resolution. In the broader context of the gamma function, which generalizes the factorial via \Gamma(z+1) = z!, the function exhibits poles (points of infinite discontinuity) at the non-positive integers, confirming that no finite extension to the negative integers exists.[12]
Historical Development
Early Concepts in Combinatorics
The earliest known concepts resembling factorial products emerged in ancient Indian mathematics, particularly in the field of prosody and combinatorics. Around the 3rd century BCE, the mathematician Pingala, in his work Chandaḥśāstra, explored the enumeration of poetic meters composed of short and long syllables, which involved calculating the number of possible arrangements or permutations of these units. This approach implicitly required computing products akin to factorials to determine the total count of distinct sequences for a given length, such as the arrangements of n syllables where order matters, laying foundational ideas for permutation counts in combinatorial problems.[13] In the medieval Islamic world, these ideas advanced through applications in linguistics and arrangement problems. In the late 13th century, Kamāl al-Dīn al-Fārisī (d. circa 1320) addressed combinatorial formulas in his treatise on number theory, specifically computing the number of distinct arrangements of letters without repetition, such as the permutations of subsets from an alphabet. His methods for calculating these totals, including for larger sets like 20 letters, relied on systematic product calculations that prefigured factorial reasoning, contributing to early developments in enumerative combinatorics within Islamic scholarship.[14] By the 17th century in Europe, similar implicit uses of factorial equivalents appeared in the study of binomial coefficients. Blaise Pascal, in his 1653 Traité du triangle arithmétique, presented what is now known as Pascal's triangle, where each entry represents a binomial coefficient \binom{n}{k} = \frac{n!}{k!(n-k)!}, effectively employing factorial products to compute combinations and expansions without explicit notation for the factorial itself. 
This work built on earlier global traditions but marked a key step toward formalizing such computations in Western mathematics, influencing later probabilistic and combinatorial theories.[15]
Formalization and Notation
The factorial concept, with roots in early combinatorial problems of counting arrangements, saw its notation evolve in the 18th century through explicit product representations employed by mathematicians such as Leonhard Euler, who denoted it as 1 \times 2 \times \cdots \times n in works like his Introductio in analysin infinitorum (1748).[16] In 1800, Louis François Antoine Arbogast introduced the term "factorielle" (from which "factorial" derives) in his Du calcul des dérivations, where he represented the product using notations like n: or the expanded form 1 \cdot 2 \cdot 3 \cdot \dots \cdot n, applying it to series expansions and derivations.[17] Christian Kramp, building on Arbogast's terminology, invented the exclamation point notation in his 1808 treatise Élémens d'arithmétique universelle, defining n! as the successive product of integers from 1 to n (e.g., 5! = 1 \times 2 \times 3 \times 4 \times 5 = 120) for convenience in printing and combinatorial calculations.[18][16] This compact symbolism was widely adopted during the 19th century, paralleling advancements in permutation theory and mathematical analysis that emphasized rigorous symbolic manipulation.[16]
Algebraic Properties
Product Representation
The factorial of a positive integer n is fundamentally expressed as the product of all positive integers from 1 to n, written in product notation as n! = \prod_{k=1}^n k.
This representation underscores the cumulative multiplicative structure inherent in the definition of the factorial, distinguishing it from recursive formulations.[9] Taking the natural logarithm of both sides yields an equivalent sum representation, derived directly from the additivity of the logarithm over products:
\ln(n!) = \sum_{k=1}^n \ln k.
This identity transforms the multiplicative nature of the factorial into an additive form, facilitating analytical treatments such as integral approximations or entropy calculations in information theory. Factorials appear prominently in combinatorial identities, particularly through their role in binomial coefficients, which count the number of ways to choose k items from n without regard to order:
\binom{n}{k} = \frac{n!}{k!(n-k)!}.
This relation links the factorial to Pascal's triangle and the binomial theorem, where the coefficients expand (x + y)^n. The formula originates from early combinatorial work and was formalized in the context of polynomial expansions. A notable number-theoretic identity involving factorials is Wilson's theorem, which states that for a prime number p,
(p-1)! \equiv -1 \pmod{p}.
First proved by Joseph-Louis Lagrange in 1773, this theorem provides a primality criterion based on the factorial product modulo p.[19] A sketch of the proof proceeds as follows: In the field of integers modulo p, the nonzero elements 1, 2, \dots, p-1 form a multiplicative group of order p-1. Each element x with 1 < x < p-1 pairs with its distinct inverse x^{-1} such that x \cdot x^{-1} \equiv 1 \pmod{p}, contributing 1 to the product. The elements satisfying x^2 \equiv 1 \pmod{p} are precisely x \equiv 1 and x \equiv -1 \pmod{p} (for odd primes p), and each of these is its own inverse. Thus, the full product (p-1)! \equiv 1 \cdot (-1) \equiv -1 \pmod{p}. This pairing argument establishes the congruence directly from group-theoretic properties modulo a prime.[20]
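Wilson's congruence is straightforward to verify numerically for small moduli; the following sketch (the helper name is illustrative) checks it against the primes below 20:

```python
from math import factorial

def wilson_holds(p: int) -> bool:
    """Check Wilson's congruence (p-1)! ≡ -1 (mod p), i.e. the residue equals p - 1."""
    return factorial(p - 1) % p == p - 1

# The congruence holds exactly for primes; composites n > 4 have (n-1)! ≡ 0 (mod n).
print([p for p in range(2, 20) if wilson_holds(p)])  # → [2, 3, 5, 7, 11, 13, 17, 19]
```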
Multiplication and Division Theorems
One key relation involving products and quotients of factorials arises in the expression of the double factorial for odd positive integers. For a positive integer n, the double factorial (2n-1)!!, the product of all positive odd integers up to 2n-1, can be written as (2n-1)!! = \frac{(2n)!}{2^n n!}.[21] This formula connects the factorial of an even number to a scaled version of the factorial of half that value, serving as a precursor to more general duplication formulas in the gamma function extension of factorials. Rearranging, it implies (2n)! = 2^n n! \cdot (2n-1)!!, highlighting a multiplicative structure between factorials and double factorials.[21] This relation finds application in the central binomial coefficient, where \binom{2n}{n} = \frac{(2n)!}{(n!)^2} = \frac{2^n (2n-1)!!}{n!}.[21] These identities illustrate how division by powers of 2 and by factorials simplifies expressions involving products of consecutive integers. For division properties, when k and n are non-negative integers with k < n, the quotient simplifies to \frac{n!}{k!} = (k+1)(k+2) \cdots n, the product of the integers from k+1 to n.[9] This follows directly from the recursive definition of the factorial, n! = n \cdot (n-1)!, by repeated cancellation. Such simplifications are fundamental for computational efficiency and appear in product identities like those for binomial coefficients. The factorial is undefined for negative integers, a fact reinforced algebraically through the recurrence relation: assuming (-1)! exists leads to 0! = 0 \cdot (-1)!, implying 1 = 0, a contradiction.[22]
Growth and Approximation
Rate of Growth
The factorial function n! exhibits super-exponential growth, surpassing any exponential function c^n for fixed constant c > 0 as n increases, since \log(n!) \sim n \log n dominates the linear growth of \log(c^n) = n \log c.[23][24] This property arises because each successive multiplication in the product n! = 1 \times 2 \times \cdots \times n incorporates increasingly larger factors, accelerating the growth beyond fixed-base exponentiation.[25] A key lower bound highlighting this rapidity is Stirling's inequality, which states that for positive integers n \geq 1, n! > \left( \frac{n}{e} \right)^n, where e \approx 2.71828 is the base of the natural logarithm; this confirms that n! exceeds even variable-base exponentials tied to n.[24] For example, 10! = 3,628,800, already a seven-digit figure that underscores the swift escalation from smaller values like 5! = 120.[24] In practical applications, such as enumerating permutations in combinatorics or assessing brute-force algorithms in computer science, this growth quickly makes exhaustive enumeration infeasible and overflows fixed-width machine integers (21! already exceeds the 64-bit range), often requiring asymptotic tools like Stirling's full approximation for feasible handling.[25]
Stirling's Approximation
Stirling's approximation provides an asymptotic formula for the factorial n! when n is large, given by n! \approx \sqrt{2\pi n} \left(\frac{n}{e}\right)^n, where the relative error tends to zero as n \to \infty.[26] This approximation arises from recognizing that the logarithm of the factorial, \ln(n!) = \sum_{k=1}^n \ln k, can be approximated by the integral \int_1^n \ln x \, dx = n \ln n - n + 1. More precisely, using the Euler-Maclaurin formula or Laplace's method on the gamma function integral representation n! = \int_0^\infty x^n e^{-x} \, dx, the substitution x = n t leads to a Gaussian integral that yields the \sqrt{2\pi n} prefactor after evaluating \int_{-\infty}^\infty e^{-t^2/2} \, dt = \sqrt{2\pi}.[24] An alternative derivation employs the Wallis product for \pi, which bounds the sequence a_n = n! / (\sqrt{n} \, (n/e)^n) and shows it converges to \sqrt{2\pi} by analyzing monotonicity and limits.[27] The relative error in the basic approximation is on the order of 1/(12n), making it suitable for large n but less accurate for small values.[26] For improved precision, Stirling's series extends the approximation asymptotically as n! \sim \sqrt{2\pi n} \left(\frac{n}{e}\right)^n \left(1 + \frac{1}{12n} + \frac{1}{288n^2} - \frac{139}{51840n^3} - \frac{571}{2488320n^4} + \cdots \right), where the series terms involve Bernoulli numbers and diverge for fixed n if taken infinitely far, but truncating at the optimal point minimizes the error.[28] This expansion refines the basic formula by incorporating higher-order corrections derived from the Euler-Maclaurin summation formula applied to \ln(n!).[24] To illustrate accuracy, for n=10, the exact value is 10! = 3,628,800, while the basic Stirling approximation gives \sqrt{20\pi} \cdot (10/e)^{10} \approx 3,598,696, an underestimate by about 0.83%, or 30,104. Including the first correction term 1 + 1/(12 \cdot 10) yields approximately 3,628,685, reducing the error to roughly 0.003%.[24]
Divisibility and Structural Properties
Prime Factors and Divisibility
The prime factorization of n! is determined by the exponents of each prime p \leq n in its decomposition, where the exponent \epsilon_p(n) of p is given by the infinite sum \epsilon_p(n) = \sum_{k=1}^{\infty} \left\lfloor \frac{n}{p^k} \right\rfloor.[29] This formula counts the multiples of p, p^2, and higher powers up to n, ensuring the precise contribution of each prime to the factorial.[9] Known as Legendre's formula, this expression was introduced in 1808 to compute the p-adic valuation of n!, providing an efficient way to find the highest power of p dividing n! without enumerating all factors.[9] For example, consider 10!: the primes less than or equal to 10 are 2, 3, 5, and 7. The exponent of 2 is \left\lfloor 10/2 \right\rfloor + \left\lfloor 10/4 \right\rfloor + \left\lfloor 10/8 \right\rfloor = 5 + 2 + 1 = 8; for 3, it is \left\lfloor 10/3 \right\rfloor + \left\lfloor 10/9 \right\rfloor = 3 + 1 = 4; for 5, \left\lfloor 10/5 \right\rfloor = 2; and for 7, \left\lfloor 10/7 \right\rfloor = 1. Thus, 10! = 3,628,800 = 2^8 \times 3^4 \times 5^2 \times 7^1.[29] A direct consequence of the product definition is that n! is divisible by every integer k with 1 \leq k \leq n, as each such k appears as a factor in the multiplication n! = 1 \times 2 \times \cdots \times n.[30] For a general positive integer m with prime factorization m = \prod q_i^{a_i}, the highest power of m dividing n! is m^e, where e = \min_i \left\lfloor \frac{\epsilon_{q_i}(n)}{a_i} \right\rfloor. This follows from the multiplicativity of the p-adic valuation extended to composite bases via the prime exponents.[31]
Digit Count and Trailing Zeros
The number of digits in the decimal representation of n! is given by \lfloor \log_{10}(n!) \rfloor + 1, where \log_{10}(n!) can be computed exactly as \sum_{k=1}^n \log_{10} k or approximated using Stirling's formula for large n: \log_{10}(n!) \approx n \log_{10} n - n \log_{10} e + \frac{1}{2} \log_{10} (2 \pi n). This approximation provides a practical way to estimate the digit count without calculating the full factorial, which becomes infeasible for large n. For example, 10! = 3,628,800 has 7 digits and 2 trailing zeros. Trailing zeros in the decimal representation of n! arise from factors of 10, each contributed by a pair of the prime factors 2 and 5 in the prime factorization of n!. Since the exponent of 2 always exceeds that of 5, the number of trailing zeros equals the exponent of 5, calculated as \sum_{k=1}^\infty \lfloor n / 5^k \rfloor. This is Legendre's formula (also known as de Polignac's formula), which gives the exponent of a prime p in n! as \sum_{k=1}^\infty \lfloor n / p^k \rfloor, applied here with p = 5. For instance, 25! has 6 trailing zeros, computed as \lfloor 25/5 \rfloor + \lfloor 25/25 \rfloor = 5 + 1 = 6.
| n | n! (partial) | Digits | Trailing Zeros |
|---|---|---|---|
| 10 | 3,628,800 | 7 | 2 |
| 25 | ...984,000,000 | 26 | 6 |
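Both columns of the table can be computed without evaluating n! in full, using the logarithm sum for the digit count and Legendre's formula with p = 5 for the trailing zeros; a sketch (helper names are illustrative):

```python
from math import floor, log10

def digit_count(n: int) -> int:
    """Decimal digits of n!, via floor(log10(n!)) + 1 with log10(n!) = sum of log10(k)."""
    if n <= 1:
        return 1
    return floor(sum(log10(k) for k in range(2, n + 1))) + 1

def trailing_zeros(n: int) -> int:
    """Trailing zeros of n! = exponent of 5 in n!, by Legendre's formula."""
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

print(digit_count(10), trailing_zeros(10))  # → 7 2
print(digit_count(25), trailing_zeros(25))  # → 26 6
```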
Generalizations
Gamma Function Extension
The gamma function serves as the principal analytic continuation of the factorial function to the real and complex numbers, enabling the definition of n! for non-integer values of n.[32] It is defined for complex numbers z with positive real part by Euler's integral representation: \Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \, dt, where the integral converges for \operatorname{Re}(z) > 0.[22] This formulation extends the factorial such that \Gamma(n+1) = n! for every positive integer n, thereby interpolating the discrete factorial values with a continuous meromorphic function.[22] A fundamental property of the gamma function is its functional equation, \Gamma(z+1) = z \Gamma(z), which mirrors the recursive nature of the factorial (n+1)! = (n+1) n! and allows analytic continuation to the entire complex plane except at certain singularities.[33] The function exhibits simple poles at the non-positive integers z = 0, -1, -2, \dots, rendering \Gamma(z) undefined there; notably, 0! = \Gamma(1) = 1, while negative integer factorials remain undefined due to these poles.[22] The gamma function originated with Leonhard Euler in the 1720s–1730s, who introduced the integral form to generalize the factorial beyond integers through correspondence and early publications.[34] Later, Karl Weierstrass provided an alternative infinite product representation in the late 19th century, expressing its reciprocal as \frac{1}{\Gamma(z)} = z e^{\gamma z} \prod_{n=1}^\infty \left(1 + \frac{z}{n}\right) e^{-z/n}, where \gamma is the Euler-Mascheroni constant, which confirms the function's meromorphic character and poles.[22]
Non-Integer and Continuous Versions
The gamma function provides the standard continuous extension of the factorial to non-integer positive real numbers, satisfying \Gamma(n+1) = n! for nonnegative integers n and enabling evaluation at fractional arguments. A representative example is 0.5! = \Gamma(3/2) = (1/2) \Gamma(1/2) = (1/2) \sqrt{\pi}, where \Gamma(1/2) = \sqrt{\pi} follows from the integral representation \Gamma(1/2) = \int_0^\infty t^{-1/2} e^{-t} \, dt, which evaluates to \sqrt{\pi} via the known Gaussian integral \int_{-\infty}^\infty e^{-u^2} \, du = \sqrt{\pi}.[35] The Bohr–Mollerup theorem establishes the uniqueness of this extension among functions with desirable analytic properties: it states that \Gamma(x) is the only positive function f: (0, \infty) \to (0, \infty) such that f(1) = 1, f(x+1) = x f(x) for all x > 0, and \log f(x) is convex on (0, \infty).[36] This log-convexity condition ensures that the extension preserves the rapid growth and smoothness of the factorial while avoiding pathological behaviors in other potential interpolants.[37] Newton interpolation offers an alternative approach to extending the factorial to real numbers using finite differences and divided differences at integer points. The Newton divided-difference formula constructs a polynomial P_n(x) of degree at most n such that P_n(k) = k! for k = 0, 1, \dots, n, given by P_n(x) = \sum_{k=0}^n f[0,1,\dots,k] \prod_{j=0}^{k-1} (x - j), where the divided differences f[0,1,\dots,k] = \Delta^k f(0) / k! are built from the forward differences of the factorial at the integer nodes.[38] For large n, this polynomial approximates the factorial well near the interpolation points, and the infinite series form relates to broader interpolation theory, though it diverges outside finite approximations; the gamma function emerges as the analytic continuation compatible with this framework for \operatorname{Re}(x) > 0.[39] Other interpolants, such as the Hadamard gamma function H(x), provide distinct continuous versions tailored to specific needs like entire functionality without poles. Defined as H(x) = \frac{1}{\Gamma(1-x)} \frac{d}{dx} \log \left( \frac{\Gamma(1/2 - x/2)}{\Gamma(1 - x/2)} \right), it satisfies H(n) = (n-1)! for positive integers n and is holomorphic everywhere in the complex plane, unlike the classical gamma function.[40] In certain contexts, such as avoiding singularities or emphasizing multiplicative properties, fractional factorials may employ the Hadamard gamma or similar pseudogamma functions instead of the standard extension.[41]
Computation Methods
Recursive and Iterative Algorithms
The recursive algorithm for computing the factorial of a non-negative integer n directly mirrors the mathematical definition, expressing n! as n \times (n-1)! with the base case 0! = 1. This approach is implemented by making a recursive call on n-1 until reaching the base case, then multiplying back up the call chain.[42] The following pseudocode illustrates the recursive factorial function:

function factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

This recursive method exhibits a time complexity of O(n), as it performs a constant amount of work at each of the n levels of recursion.[42] Its space complexity is also O(n), owing to the accumulation of n stack frames during the recursive descent.[43] A key limitation arises for large n, where the recursion depth can exceed the system's stack size, resulting in a stack overflow error.[44] In contrast, the iterative algorithm avoids recursion by using a loop to accumulate the product from 1 to n, starting with an initial value of 1 and multiplying sequentially. This method is more space-efficient for large inputs and sidesteps stack-related issues entirely.[42] The following pseudocode depicts the iterative factorial function:

function factorial(n):
    result = 1
    for i = 1 to n:
        result = result * i
    return result

Like the recursive version, the iterative algorithm has O(n) time complexity due to the single loop executing n iterations, each involving constant-time multiplication.[42] However, it achieves O(1) space complexity, relying only on a fixed number of variables independent of n.[45]
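The two pseudocode variants translate directly into Python (function names are illustrative). Both agree on all inputs, but only the iterative version avoids deep call stacks; in CPython, for instance, the default recursion limit is around 1000 frames, so the recursive variant fails for n in the thousands while the iterative one does not:

```python
def factorial_recursive(n: int) -> int:
    # Mirrors the recursive pseudocode: O(n) time, O(n) stack space.
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_iterative(n: int) -> int:
    # Mirrors the iterative pseudocode: O(n) time, O(1) auxiliary space.
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

# Both variants compute the same values for small inputs.
assert factorial_recursive(10) == factorial_iterative(10) == 3628800
# For large n (e.g. 5000), factorial_iterative succeeds while
# factorial_recursive raises RecursionError under CPython's default limit.
```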
Efficient Techniques for Large Values
Computing large factorials directly via successive multiplication becomes infeasible for n > 10^6 due to the enormous number of digits and computational overhead, necessitating specialized techniques that leverage mathematical structure or hardware acceleration. One prominent method involves prime factorization, where n! is expressed as the product of primes up to n, each raised to their respective exponents in the factorization. The exponent of a prime p in n! is given by the formula \sum_{k=1}^{\infty} \lfloor n / p^k \rfloor, known as Legendre's formula, which efficiently counts occurrences without enumerating all factors. This approach avoids redundant multiplications by precomputing exponents for all primes \leq n (generated via sieve methods) and then assembling the product using arbitrary-precision arithmetic, significantly reducing operations for large n. For instance, implementations in languages supporting big integers can compute the factorization of 10^6! in seconds by multiplying fewer than \pi(10^6) \approx 78,000 prime powers.[46] To achieve high precision, Stirling's approximation can serve as a starting point, refined via its asymptotic series expansion. The basic Stirling formula is n! \approx \sqrt{2\pi n} \left( \frac{n}{e} \right)^n, providing a relative error of about 1/(12n) for large n, but the full series n! \sim \sqrt{2\pi n} \left( \frac{n}{e} \right)^n \exp\left( \frac{1}{12n} - \frac{1}{360n^3} + \frac{1}{1260n^5} - \cdots \right) allows successive terms to improve accuracy until the desired precision is reached.[47] With advanced summation techniques like Borel summation, the series can yield high-precision values of the gamma function for moderate arguments, though for very large integer n, multiplicative methods remain preferable for exact results (see "Growth and Approximation" for details). This refinement is effective for approximation in scientific computing.
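The prime-factorization method described above can be sketched as follows (the sieve and assembly are deliberately simplified; Python's big integers handle the exact product):

```python
import math

def factorial_by_primes(n: int) -> int:
    """Assemble n! from its prime factorization, with exponents from Legendre's formula."""
    if n < 2:
        return 1
    # Sieve of Eratosthenes for the primes up to n.
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    result = 1
    for p in (i for i in range(2, n + 1) if sieve[i]):
        # Legendre's formula: exponent of p in n! is the sum of floor(n / p^k).
        exp, power = 0, p
        while power <= n:
            exp += n // power
            power *= p
        result *= p ** exp
    return result

# Matches the direct product for a moderate input.
assert factorial_by_primes(100) == math.factorial(100)
```

The payoff is that the number of big-integer multiplications is bounded by the number of primes up to n rather than by n itself.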
Software libraries with arbitrary-precision arithmetic are essential for handling the intermediate and final results in these methods. Python's built-in int type supports unlimited digit lengths natively, enabling seamless computation of factorials up to n = 10^6 in under a minute via iterative multiplication or prime-based assembly, as the language automatically manages overflow by promoting to long integers. For even larger scales or performance-critical applications, the GNU Multiple Precision Arithmetic Library (GMP) optimizes factorial computation through a hybrid algorithm: it separates the odd part of n! (product of odd integers up to n) and the power of 2, using subproduct trees for multiplication that scale as O(n \log n \cdot 2^{O(\log^* n)}), allowing 10^{7}! to be computed on modern hardware in minutes.[48]
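The subproduct-tree idea can be illustrated in miniature: splitting the range in half and multiplying the two subproducts keeps the operands of each big-integer multiplication of comparable size, which is what fast multiplication algorithms exploit. This is a simplified sketch, not GMP's actual algorithm:

```python
import math

def product_tree(lo: int, hi: int) -> int:
    """Product of the integers in [lo, hi], computed by balanced recursive splitting."""
    if lo > hi:
        return 1
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    return product_tree(lo, mid) * product_tree(mid + 1, hi)

def factorial_tree(n: int) -> int:
    # n! as a balanced product tree over 1..n; recursion depth is only O(log n).
    return product_tree(1, n) if n > 0 else 1

assert factorial_tree(2000) == math.factorial(2000)
```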
For extraordinarily large n (e.g., 10^9 or beyond), exact computation of n! is rarely practical due to storage requirements exceeding terabytes; instead, properties such as the prime factorization, logarithm, or approximations are computed. Parallel computing frameworks can accelerate feasible cases by exploiting multi-core CPUs or GPUs to distribute the multiplications inherent in iterative or prime-based methods. GPU implementations, such as those using CUDA or OpenCL, parallelize big-integer multiplications across thousands of cores, achieving speedups of 10-100x over CPU for n > 10^7 by breaking the product into concurrent subproducts (e.g., via Strassen-like algorithms for large blocks). These techniques are particularly impactful in scientific computing, where factorials appear in combinatorial expansions, though they require careful synchronization to handle carry propagation in the final assembly.