
Factorial

In mathematics, the factorial of a non-negative integer n, denoted by n!, is defined as the product of all positive integers less than or equal to n; specifically, n! = n \times (n-1) \times \cdots \times 1, with the convention that 0! = 1. This notation was introduced by the French mathematician Christian Kramp in 1808. Factorials play a fundamental role in combinatorics, where n! represents the number of distinct permutations of n unique objects, providing the basis for counting arrangements in probability and statistics. They also appear in the denominators of coefficients for combinations, as \binom{n}{k} = \frac{n!}{k!(n-k)!}, which quantify selections without regard to order. Beyond integers, the factorial function is extended to real and complex numbers via the gamma function \Gamma(z), defined as \Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \, dt for positive real z, satisfying \Gamma(n+1) = n! and enabling interpolation for non-integer values. The concept of factorials has historical roots in early combinatorial problems, though the modern notation and rigorous definition emerged in the 18th and 19th centuries alongside advancements in analysis. In applied contexts, factorials underpin series expansions like the Taylor series for the exponential function and appear in statistical distributions, such as the Poisson distribution's probability mass function.

Definition

For Non-Negative Integers

The factorial of a positive integer n, denoted by n!, is defined as the product of all positive integers from 1 up to and including n. This can be expressed mathematically as
n! = n \times (n-1) \times (n-2) \times \cdots \times 2 \times 1. The notation n! was introduced to compactly represent this repeated multiplication, which arises naturally in problems such as counting permutations.
An equivalent recursive formulation defines the factorial for positive integers n \geq 1 with the base case 1! = 1 and the relation n! = n \times (n-1)! for n > 1. This recursive definition highlights the self-similar structure of the operation, where computing n! builds upon the result for n-1. To illustrate, the first few factorials are computed as follows:
  • 1! = 1
  • 2! = 2 \times 1 = 2
  • 3! = 3 \times 2 \times 1 = 6
  • 4! = 4 \times 3 \times 2 \times 1 = 24
  • 5! = 5 \times 4 \times 3 \times 2 \times 1 = 120.
The iterative product computation involves sequentially multiplying the integers in descending order from n down to 1, starting with an initial value of n and accumulating the product step by step. For instance, to compute 4!, begin with 4, multiply by 3 to get 12, then by 2 to get 24, and finally by 1 to yield 24; this process scales efficiently for small to moderate n using basic arithmetic. This definition extends to the zero factorial as a special case where 0! = 1, ensuring consistency in combinatorial applications.

Extension to Zero and Negatives

The factorial of zero is defined as 0! = 1 by mathematical convention. This arises from viewing the factorial as a product, where 0! represents the empty product over no terms, which is consistently taken to equal 1, similar to how the empty sum equals 0. Combinatorially, this definition aligns with the fact that there is exactly one way to arrange or permute zero distinct objects: the empty arrangement, or the unique function from the empty set to itself. This convention is historically justified in the context of binomial coefficients, where it ensures the formula \binom{n}{k} = \frac{n!}{k!(n-k)!} yields \binom{n}{0} = 1 for any non-negative integer n, reflecting the single empty subset of an n-set. Without 0! = 1, fundamental counting principles in combinatorics, such as those underlying permutations and combinations, would require special cases, disrupting the uniformity of the recursive structure n! = n \times (n-1)! for n \geq 1. The factorial is undefined for negative integers. Extending the recursive definition n! = n \times (n-1)! to n = 0 implies 0! = 0 \times (-1)!, or 1 = 0 \times (-1)!, which cannot hold for any finite value of (-1)! and involves an inherent division by zero when the recursion is run in reverse. This issue persists for all negative integers, as the recursion leads to a contradiction without resolution. In the broader context of the gamma function, which generalizes the factorial via \Gamma(z+1) = z!, the function exhibits poles—points of infinite discontinuity—at the non-positive integers, confirming the undefined nature for negative integers without providing a finite extension there.

Historical Development

Early Concepts in Combinatorics

The earliest known concepts resembling factorial products emerged in ancient India, particularly in the field of Sanskrit prosody and metrics. Around the 3rd century BCE, the mathematician Pingala, in his work Chandaḥśāstra, explored the enumeration of poetic meters composed of short and long syllables, which involved calculating the number of possible arrangements or permutations of these units. This approach implicitly required computing products akin to factorials to determine the total count of distinct sequences for a given length, such as the arrangements of n syllables where order matters, laying foundational ideas for permutation counts in combinatorial problems. In the medieval Islamic world, these ideas advanced through applications in lexicography and arrangement problems. In the late 13th century, Kamāl al-Dīn al-Fārisī (d. circa 1320) addressed combinatorial formulas in a treatise on number theory, specifically computing the number of distinct arrangements of letters without repetition, such as the permutations of subsets from an alphabet. His methods for calculating these totals, including for larger sets like 20 letters, relied on systematic product calculations that prefigured factorial reasoning, contributing to early developments in combinatorics within Islamic scholarship. By the 17th century in Europe, similar implicit uses of factorial equivalents appeared in the study of binomial coefficients. Blaise Pascal, in his 1653 Traité du triangle arithmétique, presented what is now known as Pascal's triangle, where each entry represents a binomial coefficient \binom{n}{k} = \frac{n!}{k!(n-k)!}, effectively employing factorial products to compute combinations and expansions without explicit notation for the factorial itself. This work built on earlier global traditions but marked a key step toward formalizing such computations in Western mathematics, influencing later probabilistic and combinatorial theories.

Formalization and Notation

The factorial concept, with roots in early combinatorial problems of counting arrangements, saw its notation evolve in the 18th century through explicit product representations employed by mathematicians such as Leonhard Euler, who denoted it as 1 \times 2 \times \cdots \times n in works like his Introductio in analysin infinitorum (1748). In 1800, Louis François Antoine Arbogast introduced the term "factorielle" (from which "factorial" derives) in his Du calcul des dérivations, where he represented the product using notations like n: or the expanded form 1 \cdot 2 \cdot 3 \cdot \dots \cdot n, applying it to series expansions and derivations. Christian Kramp, building on Arbogast's terminology, invented the exclamation point notation in his 1808 treatise Élémens d'arithmétique universelle, defining n! as the successive product of integers from 1 to n (e.g., 5! = 1 \times 2 \times 3 \times 4 \times 5 = 120) for convenience in printing and combinatorial calculations. This compact symbolism was widely adopted during the 19th century, paralleling advancements in permutation theory and algebra that emphasized rigorous symbolic manipulation.

Algebraic Properties

Product Representation

The factorial of a positive integer n is fundamentally expressed as the product of all positive integers from 1 to n, written in product notation as
n! = \prod_{k=1}^n k.
This representation underscores the cumulative multiplicative structure inherent in the definition of the factorial, distinguishing it from recursive formulations.
Taking the natural logarithm of both sides yields an equivalent sum representation, derived directly from the additivity of the logarithm over products:
\ln(n!) = \sum_{k=1}^n \ln k.
This identity transforms the multiplicative nature of the factorial into an additive form, facilitating analytical treatments such as Stirling-type approximations or numerical calculations with very large n.
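As a brief illustration of this identity, the following minimal Python sketch compares the direct sum of logarithms with the standard library's log-gamma function, using the fact that math.lgamma(n + 1) equals \ln(n!); only the standard math module is assumed.

import math

n = 50
log_sum = sum(math.log(k) for k in range(1, n + 1))  # sum_{k=1}^{n} ln k
log_gamma = math.lgamma(n + 1)                        # ln Gamma(n+1) = ln(n!)
print(log_sum, log_gamma)                             # both approximately 148.47776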
Factorials appear prominently in combinatorial identities, particularly through their role in binomial coefficients, which count the number of ways to choose k items from n without regard to order:
\binom{n}{k} = \frac{n!}{k!(n-k)!}.
This relation links the factorial to combinatorics and the binomial theorem, where the coefficients appear in the expansion of (x + y)^n. The formula originates from early combinatorial work and was formalized in the context of binomial expansions.
A notable number-theoretic identity involving factorials is Wilson's theorem, which states that for a prime p > 1,
(p-1)! \equiv -1 \pmod{p}.
First proved by Joseph-Louis Lagrange in 1773, this theorem provides a primality criterion based on the factorial product modulo p.
A sketch of the proof proceeds as follows: In the ring of integers modulo p, the nonzero elements 1, 2, \dots, p-1 form a group of order p-1 under multiplication. Each element x (where 1 < x < p-1) pairs with its distinct inverse x^{-1} such that x \cdot x^{-1} \equiv 1 \pmod{p}, contributing 1 to the product. The elements satisfying x^2 \equiv 1 \pmod{p} are precisely x \equiv 1 and x \equiv -1 \pmod{p} (for odd primes p; the case p = 2 is verified directly), and -1 is its own inverse. Thus, the full product (p-1)! \equiv 1 \cdot (-1) \equiv -1 \pmod{p}. This pairing argument establishes the congruence directly from group-theoretic properties modulo a prime.
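A direct numerical check of Wilson's theorem, shown here as a minimal Python sketch (the helper name wilson_holds is purely illustrative), confirms that the congruence singles out exactly the primes in a small range.

import math

def wilson_holds(m):
    # True iff (m-1)! ≡ -1 (mod m), i.e. the remainder equals m - 1
    return math.factorial(m - 1) % m == m - 1

print([m for m in range(2, 30) if wilson_holds(m)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]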

Multiplication and Division Theorems

One key relation involving products and quotients of factorials arises in the expression of the double factorial for odd positive integers. For a positive integer n, the double factorial (2n-1)!!—the product of all positive odd integers up to 2n-1—can be written as (2n-1)!! = \frac{(2n)!}{2^n n!}. This formula connects the factorial of an even number to a scaled version of the factorial of half that value, serving as a precursor to more general duplication formulas in the gamma function extension of factorials. Rearranging, it implies (2n)! = 2^n n! \cdot (2n-1)!!, highlighting a multiplicative structure between factorials and double factorials. This relation finds application in the central binomial coefficient, where \binom{2n}{n} = \frac{(2n)!}{(n!)^2} = \frac{2^n (2n-1)!!}{n!}. The exact statement underscores how division by powers of 2 and factorials simplifies expressions involving products of consecutive integers; double factorials themselves are treated in a later section. For division properties, when k and n are non-negative integers with k < n, the quotient simplifies to \frac{n!}{k!} = (k+1)(k+2) \cdots n, the product of integers from k+1 to n. This follows directly from the recursive definition of the factorial, n! = n \cdot (n-1)!, by repeated cancellation. Such simplifications are fundamental for computational efficiency and appear in product identities like those for binomial coefficients. The factorial is undefined for negative integers, a fact reinforced algebraically through the recurrence relation: assuming (-1)! exists leads to 0! = 0 \cdot (-1)!, implying 1 = 0, a contradiction.
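Both the double-factorial relation and the quotient property above can be verified numerically; the short Python sketch below (the helper odd_double_factorial is defined here only for the check) confirms them for small values.

import math

def odd_double_factorial(m):
    # Product m * (m-2) * ... * 3 * 1 for odd m
    result = 1
    for i in range(m, 0, -2):
        result *= i
    return result

n = 5
# (2n)! = 2^n * n! * (2n-1)!!
print(math.factorial(2 * n) == 2**n * math.factorial(n) * odd_double_factorial(2 * n - 1))  # True

# n!/k! = (k+1)(k+2)...n, e.g. 10!/6! = 7*8*9*10 = 5040
print(math.factorial(10) // math.factorial(6) == 7 * 8 * 9 * 10)  # True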

Growth and Approximation

Rate of Growth

The factorial function n! exhibits super-exponential growth, surpassing any exponential function c^n for fixed constant c > 0 as n increases, since \log(n!) \sim n \log n dominates the linear growth n \log c of the logarithm of exponentials. This property arises because each successive multiplication in the product n! = 1 \times 2 \times \cdots \times n incorporates increasingly larger factors, accelerating the growth beyond fixed-base exponentials. A key lower bound highlighting this rapidity is Stirling's inequality, which states that for positive integers n \geq 1, n! > \left( \frac{n}{e} \right)^n, where e \approx 2.71828 is the base of the natural logarithm; this confirms n! exceeds even variable-base exponentials tied to n. For example, 10! = 3,628,800, already a seven-digit figure that underscores the swift escalation from smaller values like 5! = 120. In practical applications, such as enumerating permutations in combinatorics or assessing brute-force algorithms in computer science, this growth means that n! exceeds the range of 64-bit integers already for n > 20, often requiring arbitrary-precision arithmetic or asymptotic tools like Stirling's full approximation for feasible handling.

Stirling's Approximation

Stirling's approximation provides an asymptotic formula for the factorial n! when n is large, given by n! \approx \sqrt{2\pi n} \left(\frac{n}{e}\right)^n, where the relative error tends to zero as n \to \infty. This approximation arises from recognizing that the logarithm of the factorial, \ln(n!) = \sum_{k=1}^n \ln k, can be approximated by the integral \int_1^n \ln x \, dx = n \ln n - n + 1. More precisely, using the Euler-Maclaurin formula or Laplace's method on the integral representation n! = \int_0^\infty x^n e^{-x} \, dx, the substitution x = n t leads to a Gaussian integral that yields the \sqrt{2\pi n} prefactor after evaluating \int_{-\infty}^\infty e^{-t^2/2} \, dt = \sqrt{2\pi}. An alternative derivation employs the Wallis product for \pi, which bounds the sequence a_n = n! / \left( \sqrt{n} \, (n/e)^n \right) and shows it converges to \sqrt{2\pi} by analyzing monotonicity and limits. The error in the approximation is on the order of 1/(12n) relative to the true value, making it suitable for large n but less accurate for small values. For improved precision, Stirling's series extends the formula asymptotically as n! \sim \sqrt{2\pi n} \left(\frac{n}{e}\right)^n \left(1 + \frac{1}{12n} + \frac{1}{288n^2} - \frac{139}{51840n^3} - \frac{571}{2488320n^4} + \cdots \right), where the series terms involve Bernoulli numbers and diverge for fixed n if taken infinitely far, but truncating at the optimal point minimizes the error. This refines the leading-order formula by incorporating higher-order corrections derived from the Euler-Maclaurin expansion applied to \ln(n!). To illustrate accuracy, for n=10, the exact value is 10! = 3,628,800, while the basic approximation gives \sqrt{20\pi} \cdot (10/e)^{10} \approx 3,598,696, an underestimate by about 0.83%, or 30,104. Including the first correction term 1 + 1/(12 \cdot 10) yields approximately 3,628,685, reducing the error to roughly 0.003%.
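The following minimal Python sketch (the function name stirling is illustrative) reproduces these figures, comparing the exact value of 10! with the basic approximation and with the first correction term.

import math

def stirling(n, corrections=0):
    # Leading term sqrt(2*pi*n) * (n/e)^n, optionally multiplied by 1 + 1/(12n) [+ 1/(288n^2)]
    value = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    factor = 1.0
    if corrections >= 1:
        factor += 1 / (12 * n)
    if corrections >= 2:
        factor += 1 / (288 * n ** 2)
    return value * factor

print(math.factorial(10))   # 3628800 (exact)
print(stirling(10))         # ~3598695.6 (about 0.83% low)
print(stirling(10, 1))      # ~3628684.7 (about 0.003% low)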

Divisibility and Structural Properties

Prime Factors and Divisibility

The prime factorization of n! is determined by the exponents of each prime p \leq n in its decomposition, where the exponent \epsilon_p(n) of p is given by the infinite sum \epsilon_p(n) = \sum_{k=1}^{\infty} \left\lfloor \frac{n}{p^k} \right\rfloor. This formula counts the multiples of p, p^2, and higher powers up to n, ensuring the precise contribution of each prime to the factorial. Known as Legendre's formula, this expression was introduced by Adrien-Marie Legendre in 1808 to compute the p-adic valuation of n!, providing an efficient way to find the highest power of p dividing n! without enumerating all factors. For example, consider 10!: the primes less than or equal to 10 are 2, 3, 5, and 7. The exponent of 2 is \left\lfloor 10/2 \right\rfloor + \left\lfloor 10/4 \right\rfloor + \left\lfloor 10/8 \right\rfloor = 5 + 2 + 1 = 8; for 3, it is \left\lfloor 10/3 \right\rfloor + \left\lfloor 10/9 \right\rfloor = 3 + 1 = 4; for 5, \left\lfloor 10/5 \right\rfloor = 2; and for 7, \left\lfloor 10/7 \right\rfloor = 1. Thus, 10! = 3,628,800 = 2^8 \times 3^4 \times 5^2 \times 7^1. A direct consequence of the product definition is that n! is divisible by every integer k with 1 \leq k \leq n, as each such k appears as a factor in the product n! = 1 \times 2 \times \cdots \times n. For a general positive integer m with prime factorization m = \prod q_i^{a_i}, the highest power of m dividing n! is m^e, where e = \min_i \left\lfloor \frac{\epsilon_{q_i}(n)}{a_i} \right\rfloor. This follows from the multiplicativity of the p-adic valuation extended to composite bases via the prime exponents.
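Legendre's formula translates directly into a short loop over powers of p; the Python sketch below (the helper name prime_exponent is illustrative) reproduces the factorization of 10! given above.

def prime_exponent(n, p):
    # Exponent of the prime p in n!: sum of floor(n / p^k) over k >= 1
    exponent, power = 0, p
    while power <= n:
        exponent += n // power
        power *= p
    return exponent

print({p: prime_exponent(10, p) for p in (2, 3, 5, 7)})  # {2: 8, 3: 4, 5: 2, 7: 1}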

Digit Count and Trailing Zeros

The number of digits in the decimal representation of n! is given by \lfloor \log_{10}(n!) \rfloor + 1, where \log_{10}(n!) can be computed exactly as \sum_{k=1}^n \log_{10} k or approximated using Stirling's series for large n: \log_{10}(n!) \approx n \log_{10} n - n \log_{10} e + \frac{1}{2} \log_{10} (2 \pi n). This approximation provides a practical way to estimate the digit count without calculating the full factorial, which becomes infeasible for large n. For example, 10! = 3,628,800 has 7 digits and 2 trailing zeros. Trailing zeros in the decimal representation of n! arise from factors of 10, each contributed by a pair of prime factors 2 and 5 in the prime factorization of n!. Since the exponent of 2 always exceeds that of 5, the number of trailing zeros is determined by the exponent of 5, calculated as \sum_{k=1}^\infty \lfloor n / 5^k \rfloor. This is known as de Polignac's formula (also called Legendre's formula), which gives the exponent of a prime p in n! as \sum_{k=1}^\infty \lfloor n / p^k \rfloor, here applied to the prime 5. For instance, 25! has 6 trailing zeros, computed as \lfloor 25/5 \rfloor + \lfloor 25/25 \rfloor = 5 + 1 = 6. A short computational sketch follows the table below.
n | n! (partial) | Digits | Trailing zeros
10 | 3,628,800 | 7 | 2
25 | ...984,000,000 | 26 | 6
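The digit count and trailing-zero count can both be computed without forming n! itself; a minimal Python sketch (the helper names digit_count and trailing_zeros are illustrative) reproduces the table entries.

import math

def digit_count(n):
    # floor(log10(n!)) + 1, using the exact log-sum (adequate for moderate n)
    return math.floor(sum(math.log10(k) for k in range(1, n + 1))) + 1

def trailing_zeros(n):
    # Exponent of 5 in n!, by Legendre/de Polignac's formula
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

print(digit_count(10), trailing_zeros(10))   # 7 2
print(digit_count(25), trailing_zeros(25))   # 26 6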

Generalizations

Gamma Function Extension

The gamma function serves as the principal extension of the factorial function to the real and complex numbers, enabling the definition of n! for non-integer values of n. It is defined for complex numbers z with positive real part by Euler's integral representation: \Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \, dt, where the integral converges for \operatorname{Re}(z) > 0. This formulation extends the factorial such that \Gamma(n+1) = n! for every positive integer n, thereby interpolating the discrete factorial values with a continuous meromorphic function. A fundamental property of the gamma function is its functional equation, \Gamma(z+1) = z \Gamma(z), which mirrors the recursive nature of the factorial (n+1)! = (n+1) \cdot n! and allows analytic continuation to the entire complex plane except at certain singularities. The function exhibits simple poles at the non-positive integers z = 0, -1, -2, \dots, rendering \Gamma(z) undefined there; notably, 0! = \Gamma(1) = 1, while negative integer factorials remain undefined due to these poles. The gamma function originated with Leonhard Euler in the 1720s–1730s, who introduced the integral form to generalize the factorial beyond integers through correspondence and early publications. Later, Karl Weierstrass provided an alternative representation in the 19th century, expressing the reciprocal as \frac{1}{\Gamma(z)} = z e^{\gamma z} \prod_{n=1}^\infty \left(1 + \frac{z}{n}\right) e^{-z/n}, where \gamma is the Euler-Mascheroni constant, which confirms the function's meromorphic character and poles.
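Python's standard library exposes the gamma function as math.gamma, which makes the interpolation property and the functional equation easy to verify numerically; the sketch below assumes only the standard math module.

import math

# Gamma interpolates the factorial: Gamma(n+1) == n! for non-negative integers n
for n in range(6):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# Functional equation Gamma(z+1) = z * Gamma(z) at a non-integer argument
z = 3.7
print(math.gamma(z + 1), z * math.gamma(z))   # both ~15.43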

Non-Integer and Continuous Versions

The gamma function provides the standard continuous extension of the factorial to non-integer arguments, satisfying \Gamma(n+1) = n! for nonnegative integers n and enabling evaluation at fractional arguments. A representative example is 0.5! = \Gamma(3/2) = (1/2) \Gamma(1/2) = (1/2) \sqrt{\pi}, where \Gamma(1/2) = \sqrt{\pi} follows from the integral representation \Gamma(1/2) = \int_0^\infty t^{-1/2} e^{-t} \, dt, which evaluates to \sqrt{\pi} via the known Gaussian integral \int_{-\infty}^\infty e^{-u^2} \, du = \sqrt{\pi}. The Bohr–Mollerup theorem establishes the uniqueness of this extension among functions with desirable analytic properties: it states that \Gamma(x) is the only positive function f: (0, \infty) \to (0, \infty) such that f(1) = 1, f(x+1) = x f(x) for all x > 0, and \log f(x) is convex on (0, \infty). This log-convexity condition ensures that the extension preserves the rapid growth and smoothness of the factorial while avoiding pathological behaviors in other potential interpolants. Newton interpolation offers an alternative approach to extending the factorial to real numbers using finite differences and values at integer points. The divided-difference formula constructs a polynomial P_n(x) of degree at most n such that P_n(k) = k! for k = 0, 1, \dots, n, given by P_n(x) = \sum_{k=0}^n f[0,1,\dots,k] \prod_{j=0}^{k-1} (x - j), where the divided differences f[0,1,\dots,k] are determined by the tabulated factorial values. For large n, this polynomial approximates the factorial well near the interpolation points, and the infinite series form relates to broader interpolation theory, though it diverges outside finite approximations; the gamma function emerges as the extension compatible with this framework for \operatorname{Re}(x) > 0. Other interpolants, such as the Hadamard gamma function H(x), provide distinct continuous versions tailored to specific needs, such as being entire with no poles. Defined as H(x) = \frac{1}{\Gamma(1-x)} \frac{d}{dx} \log \left( \frac{\Gamma(1/2 - x/2)}{\Gamma(1 - x/2)} \right), it satisfies H(n) = (n-1)! for positive integers n and is holomorphic everywhere in the complex plane, unlike the classical gamma function. In certain contexts, such as avoiding singularities or emphasizing multiplicative properties, fractional factorials may employ the Hadamard gamma or similar pseudogamma functions instead of the standard gamma extension.
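For the half-integer example above, a two-line check with Python's math.gamma confirms the closed-form values.

import math

print(math.gamma(1.5), math.sqrt(math.pi) / 2)   # 0.5! = Gamma(3/2): both ~0.8862269
print(math.gamma(0.5), math.sqrt(math.pi))       # Gamma(1/2) = sqrt(pi): both ~1.7724539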

Computation Methods

Recursive and Iterative Algorithms

The recursive algorithm for computing the factorial of a non-negative integer n directly mirrors the mathematical definition, expressing n! as n \times (n-1)! with the base case 0! = 1. This approach is implemented by making a recursive call on n-1 until reaching the base case, then multiplying back up the call chain. The following pseudocode illustrates the recursive factorial function:
function factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)
This recursive method exhibits a time complexity of O(n), as it performs a constant amount of work at each of the n levels of recursion. Its space complexity is also O(n), owing to the accumulation of n stack frames during the recursive descent. A key limitation arises for large n, where the recursion depth can exceed the system's stack size, resulting in a stack overflow error. In contrast, the iterative algorithm avoids recursion by using a loop to accumulate the product from 1 to n, starting with an initial value of 1 and multiplying sequentially. This method is more space-efficient for large inputs and sidesteps stack-related issues entirely. The following pseudocode depicts the iterative factorial function:
function factorial(n):
    result = 1
    for i = 1 to n:
        result = result * i
    return result
Like the recursive version, the iterative algorithm has O(n) time complexity due to the single loop executing n iterations, each involving constant-time multiplication. However, it achieves O(1) space complexity, relying only on a fixed number of variables independent of n.
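For concreteness, the two pseudocode functions above translate directly into Python; the sketch below also shows the standard library's math.factorial and notes the recursion-depth limitation of the recursive form.

import math
import sys

def factorial_recursive(n):
    # Mirrors the recursive definition; depth is bounded by the interpreter's call stack
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # Loop-based accumulation with O(1) auxiliary space
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

print(factorial_recursive(10), factorial_iterative(10), math.factorial(10))  # all 3628800
print(sys.getrecursionlimit())  # default is ~1000, so factorial_recursive(10_000) would raise RecursionError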

Efficient Techniques for Large Values

Computing large factorials directly via successive multiplication becomes infeasible for n > 10^6 due to the enormous number of digits and computational overhead, necessitating specialized techniques that leverage mathematical structure or arbitrary-precision arithmetic. One prominent method involves prime factorization, where n! is expressed as the product of primes up to n, each raised to their respective exponents in the factorization. The exponent of a prime p in n! is given by the formula \sum_{k=1}^{\infty} \lfloor n / p^k \rfloor, known as Legendre's formula, which efficiently counts occurrences without enumerating all factors. This approach avoids redundant multiplications by precomputing exponents for all primes \leq n (generated via sieve methods) and then assembling the product using fast multiplication, significantly reducing operations for large n. For instance, implementations in languages supporting big integers can compute the value of 10^6! in seconds by multiplying fewer than \pi(10^6) \approx 78,000 prime powers. To achieve high precision, Stirling's approximation can serve as a starting point, refined via its asymptotic series expansion. The basic Stirling formula is n! \approx \sqrt{2\pi n} \left( \frac{n}{e} \right)^n, providing a relative error of about 1/(12n) for large n, but the full series n! \sim \sqrt{2\pi n} \left( \frac{n}{e} \right)^n \exp\left( \frac{1}{12n} - \frac{1}{360n^3} + \frac{1}{1260n^5} - \cdots \right) allows successive terms to improve accuracy until the desired precision is reached. With advanced summation techniques like Borel summation, the series can yield high-precision values of the gamma function for moderate arguments, though for very large integer n, multiplicative methods remain preferable for exact results (see "Growth and Approximation" for details). This refinement is effective for approximation in scientific computing. Software libraries with arbitrary-precision arithmetic are essential for handling the intermediate and final results in these methods. Python's built-in int type supports unlimited digit lengths natively, enabling seamless computation of factorials up to n = 10^6 in under a minute via iterative or prime-based methods, as the interpreter automatically manages growth beyond machine-word size. For even larger scales or performance-critical applications, the GNU Multiple Precision Arithmetic Library (GMP) optimizes factorial computation through a hybrid algorithm: it separates the odd part of n! (the product of odd integers up to n) from the power of 2, using subproduct trees for balanced multiplication, allowing 10^{7}! to be computed on modern hardware in minutes. For extraordinarily large n (e.g., 10^9 or beyond), exact computation of n! is rarely practical because the result occupies gigabytes or more; instead, properties such as the prime factorization, the logarithm, or asymptotic approximations are computed. Parallel computing frameworks can accelerate feasible cases by exploiting multi-core CPUs or GPUs to distribute the multiplications inherent in iterative or prime-based methods. GPU implementations, such as those using CUDA or OpenCL, parallelize big-integer multiplications across thousands of cores, achieving speedups of 10-100x over CPU for n > 10^7 by breaking the product into concurrent subproducts (e.g., via fast FFT-based multiplication for large blocks). These techniques are particularly impactful in scientific computing, where factorials appear in combinatorial expansions, though they require careful coordination to handle carry propagation in the final assembly.
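A compact version of the prime-factorization approach can be written in pure Python with a simple sieve; this is only a sketch of the idea (the helper name factorial_by_primes is illustrative), not the optimized algorithms used by GMP.

import math

def factorial_by_primes(n):
    # Sieve of Eratosthenes to list primes <= n
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    # Assemble n! as the product of p**e with e from Legendre's formula
    result = 1
    for p in (i for i in range(2, n + 1) if sieve[i]):
        exponent, power = 0, p
        while power <= n:
            exponent += n // power
            power *= p
        result *= p ** exponent
    return result

print(factorial_by_primes(500) == math.factorial(500))  # True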

Applications

In Combinatorics and Permutations

In combinatorics, the factorial function plays a central role in counting the number of permutations of a set of n distinct objects. The total number of ways to arrange these n objects in a linear order is given by n!, which accounts for the sequential choices: the first position has n options, the second n-1, and so on down to 1. This fundamental result underpins many problems involving ordered arrangements. For instance, arranging 5 distinct items yields 5! = 120 possible permutations, illustrating how factorials quantify the scale of such combinatorial structures. Factorials also appear prominently in the binomial theorem, which expands powers of binomials using coefficients derived from them. Specifically, the theorem states that (x + y)^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k, where the binomial coefficient \binom{n}{k} = \frac{n!}{k!(n-k)!} determines the multiplicity of each term in the expansion. This coefficient arises combinatorially as the number of ways to select k positions out of n for the y terms (or equivalently, the x terms), with the factorial in the numerator capturing the total permutations of n items and the denominators adjusting for indistinguishability within each group. The coefficients sum to 2^n, reflecting the total number of subsets of an n-element set. For more general partitioning problems, multinomial coefficients extend this idea to multiple groups. The multinomial coefficient \binom{n}{k_1, k_2, \dots, k_m} = \frac{n!}{k_1! k_2! \cdots k_m!}, where \sum k_i = n, counts the number of ways to divide n distinct objects into m labeled groups of sizes k_1, k_2, \dots, k_m. This formula derives from first permuting all n objects (n! ways) and then dividing by the internal arrangements within each group (k_i! for each i), providing an essential tool for enumerating distributions in combinatorial designs and arrangements. When m=2, it reduces to the binomial case, unifying these principles under factorial-based expressions.
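These counting formulas map directly onto Python's math module (math.factorial, math.comb, math.prod); the sketch below evaluates a permutation count, a binomial coefficient, the 2^n sum, and one multinomial coefficient.

import math

print(math.factorial(5))                        # 120 arrangements of 5 distinct items
print(math.comb(5, 2))                          # 10 = 5!/(2! * 3!)
print(sum(math.comb(5, j) for j in range(6)))   # 32 = 2**5, the coefficients sum to 2^n

# Multinomial coefficient: ways to split 10 objects into labeled groups of sizes 3, 3, 4
sizes = (3, 3, 4)
print(math.factorial(sum(sizes)) // math.prod(math.factorial(s) for s in sizes))  # 4200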

In Probability and Other Fields

In probability theory, the factorial appears prominently in the probability mass function of the Poisson distribution, which models the number of events occurring in a fixed interval of time or space when these events occur with a known constant mean rate λ and independently of the time since the last event. The probability that exactly k events occur is given by P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}, where k! normalizes the probabilities to sum to 1 over all non-negative integers k. This form arises as a limiting case of the binomial distribution when the number of trials approaches infinity while the success probability approaches zero, maintaining a constant product λ. Stirling numbers of the first kind also involve factorials in their connection to probability, particularly in analyzing the cycle structure of permutations. The unsigned Stirling number of the first kind, |s(n, k)|, counts the number of permutations of n elements that consist of exactly k disjoint cycles (including fixed points as 1-cycles), and the total number of permutations satisfies n! = ∑_{k=1}^n |s(n, k)|. In probabilistic terms, for a uniformly random permutation of n elements, the probability of having exactly k cycles is |s(n, k)| / n!, which underpins models in random permutation theory and cycle-length probabilities. In physics, factorials feature in the normalization of quantum mechanical wavefunctions for the harmonic oscillator, a fundamental model for vibrational modes in molecules and fields. The nth energy eigenfunction is \psi_n(x) = \frac{1}{\sqrt{2^n n!}} \left( \frac{m \omega}{\pi \hbar} \right)^{1/4} e^{-m \omega x^2 / 2 \hbar} H_n \left( \sqrt{\frac{m \omega}{\hbar}} x \right), where H_n is the nth Hermite polynomial, and the 1/√(2^n n!) term ensures the wavefunction is normalized such that ∫ |ψ_n(x)|^2 dx = 1. In statistical mechanics, the factorial accounts for the indistinguishability of particles in the partition function for an ideal gas, Z = Z_1^N / N!, preventing overcounting of identical microstates and enabling derivations of thermodynamic quantities like entropy. In computer science, factorials establish the information-theoretic lower bound for comparison-based sorting algorithms, as there are n! possible permutations of n distinct elements, requiring at least log_2(n!) ≈ n log_2 n - n log_2 e (by Stirling's approximation) comparisons in the worst case to identify the sorted order. This Ω(n log n) bound highlights why algorithms like mergesort achieve optimal asymptotic performance. For tree enumerations, the number of distinct binary search trees that can be formed with n distinct keys is given by the nth Catalan number C_n = \frac{1}{n+1} \binom{2n}{n} = \frac{(2n)!}{(n+1)! \, n!}, which relies on factorials in the binomial coefficient and counts valid tree structures for applications in data organization and searching.
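The formulas in this subsection are straightforward to evaluate numerically; the Python sketch below (the helper poisson_pmf is illustrative) computes a Poisson probability, the comparison-sort lower bound for n = 10, and the 10th Catalan number.

import math

def poisson_pmf(k, lam):
    # P(X = k) = lam^k * exp(-lam) / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

print(poisson_pmf(3, 2.0))                           # ~0.1804, probability of exactly 3 events at mean rate 2
print(sum(poisson_pmf(k, 2.0) for k in range(50)))   # ~1.0, the k! terms normalize the distribution

n = 10
print(math.ceil(math.log2(math.factorial(n))))       # 22: minimum worst-case comparisons to sort 10 keys
print(math.factorial(2 * n) // (math.factorial(n + 1) * math.factorial(n)))  # 16796, the 10th Catalan number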

Subfactorials and Derangements

The subfactorial of a non-negative integer n, denoted !n, counts the number of derangements of n objects. A derangement is a permutation of n distinct elements in which no element appears in its original position, also known as a fixed-point-free permutation. This concept arises in combinatorics as a measure of permutations avoiding fixed points, distinct from the total permutations counted by the factorial n!. The explicit formula for the subfactorial is derived from the inclusion-exclusion principle: !n = n! \sum_{k=0}^{n} \frac{(-1)^k}{k!}. This summation alternates signs to subtract and add back overcounts of permutations with fixed points. For computational purposes, subfactorials satisfy the recurrence relation !n = (n-1) \left[ !(n-1) + !(n-2) \right] with base cases !0 = 1 (the empty permutation) and !1 = 0 (no derangement of one element). This recurrence facilitates efficient calculation by building from smaller values, mirroring structural properties of permutations where the last element swaps with one of n-1 others, leading to deranged or partially deranged substructures. An important approximation for large n is !n ≈ n! / e, where e ≈ 2.71828 is the base of the natural logarithm; more precisely, !n equals the nearest integer to n! / e, computed as ⌊n! / e + 0.5⌋. Representative values illustrate growth: !2 = 1 (the single swap of two items), !3 = 2 (the cycles (1→2→3) and (1→3→2)), and !4 = 9 (out of 24 total permutations). These examples highlight how !n grows nearly as fast as n! but remains strictly smaller, approaching the proportion 1/e ≈ 0.3679 of all permutations as n increases.
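The recurrence and the nearest-integer property are easy to check together; the Python sketch below (the helper subfactorial is illustrative) generates the first values and verifies !n = round(n!/e) for small n.

import math

def subfactorial(n):
    # !n via the recurrence !n = (n-1) * (!(n-1) + !(n-2)), with !0 = 1 and !1 = 0
    if n == 0:
        return 1
    a, b = 1, 0   # !0, !1
    for m in range(2, n + 1):
        a, b = b, (m - 1) * (a + b)
    return b

print([subfactorial(n) for n in range(7)])   # [1, 0, 1, 2, 9, 44, 265]
print(all(subfactorial(n) == round(math.factorial(n) / math.e) for n in range(1, 15)))  # True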

Double and Multifactorials

The double factorial of a positive integer n, denoted n!!, is defined as the product of all positive integers from n down to 1 or 2, decreasing by 2 each time. Specifically, for odd n > 0, n!! = n \cdot (n-2) \cdot \dots \cdot 3 \cdot 1; for even n > 0, n!! = n \cdot (n-2) \cdot \dots \cdot 4 \cdot 2. Closed-form expressions relate the double factorial to the regular factorial. For even n = 2m, (2m)!! = 2^m m!. For odd n = 2m + 1, (2m + 1)!! = \frac{(2m + 1)!}{2^m m!}, or equivalently, (2n - 1)!! = \frac{(2n)!}{2^n n!}. The double factorial is a special case of the more general multifactorial, which extends the concept by stepping down by a fixed amount k in the product. The k-fold multifactorial, denoted n!(k), is defined for positive integers n and k as n!(k) = n \cdot (n - k) \cdot (n - 2k) \cdot \dots, continuing while the terms remain positive. When k = 1, this reduces to the standard factorial n!; when k = 2, it yields the double factorial n!!; and for k = 3, it gives the triple factorial n!!! = n \cdot (n - 3) \cdot (n - 6) \cdot \dots. Double and multifactorials appear in the evaluation of certain integrals, such as Wallis-type integrals of powers of sine and cosine, in expressions for central binomial coefficients, and in certain series expansions in analysis.
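A single loop covers the double, triple, and general k-fold cases; the Python sketch below (the helper multifactorial is illustrative) also checks the closed-form identities stated above.

import math

def multifactorial(n, k):
    # n * (n-k) * (n-2k) * ... while the terms remain positive
    result = 1
    while n > 0:
        result *= n
        n -= k
    return result

print(multifactorial(9, 2), multifactorial(8, 2), multifactorial(9, 3))  # 945, 384, 162

n = 5
print(multifactorial(2 * n, 2) == 2**n * math.factorial(n))                                   # (2n)!! = 2^n n!
print(multifactorial(2 * n - 1, 2) == math.factorial(2 * n) // (2**n * math.factorial(n)))    # (2n-1)!! = (2n)!/(2^n n!)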