
Discrete uniform distribution

The discrete uniform distribution is a fundamental probability distribution in probability theory and statistics, characterized by a finite set of possible outcomes, each of which occurs with equal probability. It is typically defined over a set of consecutive integers from a minimum value a to a maximum value b, where the number of outcomes is n = b - a + 1, and the probability mass function (PMF) assigns a probability of P(X = x) = \frac{1}{n} to each x in that range. The cumulative distribution function (CDF) is then F(x) = \frac{x - a + 1}{n} for x in the support. Key statistical properties include the expected value (mean), given by \mu = \frac{a + b}{2}, which represents the average outcome, and the variance, \sigma^2 = \frac{n^2 - 1}{12}, measuring the spread around the mean. The distribution is symmetric, with zero skewness, making it a simple model for scenarios without bias toward any outcome. Common examples include the outcome of rolling a fair six-sided die, where a = 1, b = 6, n = 6, each face has probability \frac{1}{6}, the mean is 3.5, and the variance is \frac{35}{12} \approx 2.917. Another instance is randomly selecting an integer from 1 to 5, yielding a mean of 3 and equal probabilities for each selection. In practice, the discrete uniform distribution models situations like random sampling without replacement, Monte Carlo simulations for approximating integrals, and basic games of chance, serving as a baseline for understanding more complex discrete distributions.
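
As a quick numerical check of these closed forms, here is a minimal Python sketch (standard library only; the helper name discrete_uniform_stats is purely illustrative) that reproduces the die values above with exact rational arithmetic.

```python
from fractions import Fraction

def discrete_uniform_stats(a, b):
    """Return (per-point probability, mean, variance) for DU(a, b)."""
    n = b - a + 1
    p = Fraction(1, n)              # P(X = k) = 1/n for each k in {a, ..., b}
    mean = Fraction(a + b, 2)       # mu = (a + b) / 2
    var = Fraction(n**2 - 1, 12)    # sigma^2 = (n^2 - 1) / 12
    return p, mean, var

# Fair six-sided die: a = 1, b = 6
p, mean, var = discrete_uniform_stats(1, 6)
print(p, mean, var)                 # 1/6 7/2 35/12

# Cross-check against the defining weighted sums
mean_direct = sum(Fraction(k, 6) for k in range(1, 7))
var_direct = sum(Fraction(1, 6) * (k - mean_direct)**2 for k in range(1, 7))
assert (mean_direct, var_direct) == (mean, var)
```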

Definition

Formal definition

The discrete uniform distribution describes a random variable that assumes values from a finite set of distinct outcomes, each with equal probability. A common parameterization is over a set of consecutive integers \{a, a+1, \dots, b\}, where a and b are integers with a \leq b. In this case, if X is such a random variable, then X follows a discrete uniform distribution denoted X \sim \text{DU}(a, b). Equivalently, the distribution can be parameterized by the number of outcomes n = b - a + 1, where X takes values in \{1, 2, \dots, n\} with equal probability for each. This setup assumes familiarity with basic probability concepts, such as random variables and probability spaces, but emphasizes uniformity as the property where every point in the support receives the same probability mass. The total probability sums to 1 across the n points, ensuring a valid probability distribution. The concept of equally likely outcomes originated in early probability theory during the 17th century, when mathematicians like Blaise Pascal and Pierre de Fermat developed foundational ideas to model games of chance involving fair dice and coins, where outcomes are equally likely. It was formalized as a distinct distribution in the 20th century, alongside other discrete distributions, with the term "uniform distribution" first appearing in J. V. Uspensky's 1937 text Introduction to Mathematical Probability.
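
The equivalence of the two parameterizations can be made concrete with a short Python sketch, assuming the standard library's random module; the function name du_sample is illustrative, not from any source.

```python
import random

def du_sample(a, b):
    """One draw from DU(a, b), the discrete uniform on {a, a+1, ..., b}."""
    n = b - a + 1
    k = random.randint(1, n)   # K ~ DU(1, n); randint is inclusive on both ends
    return a + k - 1           # shift to the {a, ..., b} parameterization

random.seed(0)
draws = [du_sample(3, 7) for _ in range(5)]
assert all(3 <= x <= 7 for x in draws)
print(draws)
```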

Examples

A classic illustration of the discrete uniform distribution is the outcome of rolling a fair six-sided die, where the random variable X represents the face value shown and takes each integer value from 1 to 6 with equal probability. This setup exemplifies the distribution's core characteristic: all possible outcomes in the finite support set are equally likely, making it a foundational model for unbiased random selection. A more generalized example arises in sampling without replacement from a finite population, such as drawing a single card from a standard 52-card deck, where X denotes the specific card selected and each of the 52 possibilities occurs with probability 1/52. Here, the uniformity stems from the assumption of a well-shuffled deck, ensuring no card is favored over another, which mirrors real-world scenarios like random allocation in experiments. The discrete uniform distribution is not limited to consecutive integers; it can apply to any finite set of distinct values with equal probabilities. For instance, consider X uniformly distributed over the non-consecutive set \{1, 3, 5\}, corresponding to the odd integers up to 5, where each value has probability 1/3. This flexibility highlights the distribution's applicability to arbitrary discrete supports, as long as the outcomes are equiprobable, in contrast to biased distributions where probabilities vary across the set.
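
A brief Python sketch, assuming the standard library's random.choice (which selects each element of a sequence with equal probability), illustrates uniform sampling over both a consecutive and a non-consecutive support; the seed and sample sizes are arbitrary.

```python
import random
from collections import Counter

random.seed(0)  # reproducible illustration

die = range(1, 7)    # consecutive support {1, ..., 6}
odds = [1, 3, 5]     # non-consecutive support, each value with probability 1/3

rolls = Counter(random.choice(die) for _ in range(60_000))
picks = Counter(random.choice(odds) for _ in range(30_000))

# Empirical frequencies should hover near 1/6 and 1/3 respectively.
print({k: round(v / 60_000, 3) for k, v in sorted(rolls.items())})
print({k: round(v / 30_000, 3) for k, v in sorted(picks.items())})
```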

Probability functions

Probability mass function

The probability mass function (PMF) of a discrete uniform random variable X supported on the consecutive integers from a to b, where a and b are integers with a \leq b, is defined as P(X = k) = \begin{cases} \frac{1}{b - a + 1} & \text{if } k = a, a+1, \dots, b, \\ 0 & \text{otherwise}. \end{cases} This formula assigns equal probability to each outcome within the finite support, reflecting the equiprobable nature of the distribution. The form follows from the requirement that probabilities must be equal across the n = b - a + 1 possible outcomes and sum to 1, yielding P(X = k) = 1/n for each k in the support. Equivalently, when the support is standardized to \{1, 2, \dots, n\} for positive n, the PMF simplifies to P(X = k) = 1/n for k = 1, 2, \dots, n, and 0 otherwise. A key property of this PMF is its constancy over the support: each probability equals 1/n, while it is zero outside, including at boundaries such as P(X = a - 1) = 0 or P(X = b + 1) = 0. To verify normalization, the sum over the support is \sum_{k=a}^{b} P(X = k) = \sum_{k=a}^{b} \frac{1}{n} = n \cdot \frac{1}{n} = 1, confirming the probabilities sum to one.
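
Translating the piecewise definition directly into Python makes the boundary and normalization properties explicit; this is a minimal sketch with an illustrative function name pmf.

```python
from fractions import Fraction

def pmf(k, a, b):
    """PMF of the discrete uniform on {a, a+1, ..., b}: 1/n inside, 0 outside."""
    n = b - a + 1
    return Fraction(1, n) if a <= k <= b else Fraction(0)

a, b = 2, 9
assert pmf(a - 1, a, b) == 0 and pmf(b + 1, a, b) == 0   # zero at the boundaries
assert sum(pmf(k, a, b) for k in range(a, b + 1)) == 1   # normalization: n * (1/n) = 1
```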

Cumulative distribution function

The cumulative distribution function (CDF) of a discrete uniform random variable X supported on the consecutive integers \{a, a+1, \dots, b\}, where a and b are integers with a \leq b, is given by F(x) = \begin{cases} 0 & \text{if } x < a, \\ \frac{x - a + 1}{b - a + 1} & \text{if } a \leq x < b \ (x \text{ integer}), \\ 1 & \text{if } x \geq b. \end{cases} For non-integer x, F(x) equals F(\lfloor x \rfloor), maintaining constancy between integers. In the standard case where the support is \{1, 2, \dots, n\} (so a = 1, b = n), the CDF simplifies to F(k) = \frac{k}{n} for k = 1, 2, \dots, n. This CDF forms a step function, starting at 0 and increasing by equal increments of \frac{1}{n} (where n = b - a + 1) at each point in the support, resembling a staircase of steps that reaches 1 at b. The inverse CDF, or quantile function Q(p), for 0 < p \leq 1 is Q(p) = a + \lceil n p \rceil - 1, which is useful for generating samples from the distribution via the inverse transform method.
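
The quantile formula yields a one-line sampler via the inverse transform method. The following Python sketch (names are illustrative; it assumes random.random for the underlying uniform variate) implements both the CDF and the sampler.

```python
import math
import random
from collections import Counter

def cdf(x, a, b):
    """CDF of the discrete uniform on {a, ..., b}, defined for any real x."""
    if x < a:
        return 0.0
    if x >= b:
        return 1.0
    return (math.floor(x) - a + 1) / (b - a + 1)

def inverse_transform_sample(a, b):
    """Q(p) = a + ceil(n*p) - 1 applied to p ~ Uniform(0, 1]."""
    n = b - a + 1
    p = 1.0 - random.random()      # random() is in [0, 1); flip to get (0, 1]
    return a + math.ceil(n * p) - 1

random.seed(1)
counts = Counter(inverse_transform_sample(1, 6) for _ in range(60_000))
print(sorted(counts.items()))      # each face appears roughly 10,000 times
print(cdf(3.7, 1, 6))              # equals F(3) = 3/6 = 0.5 between integers
```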

Moments

Expected value

The expected value of a discrete uniform random variable X supported on the consecutive integers from a to b (where a \leq b) is given by the formula E[X] = \frac{a + b}{2}. This represents the midpoint of the endpoints of the support, reflecting the equal probability assigned to each integer value in the range. To derive this, recall that the expected value is the sum of each possible outcome weighted by its probability: E[X] = \sum_{k=a}^{b} k \cdot P(X = k) = \sum_{k=a}^{b} k \cdot \frac{1}{b - a + 1} = \frac{1}{b - a + 1} \sum_{k=a}^{b} k. The sum of the integers from a to b is \frac{(b - a + 1)(a + b)}{2}, so substituting yields E[X] = \frac{1}{b - a + 1} \cdot \frac{(b - a + 1)(a + b)}{2} = \frac{a + b}{2}. This derivation leverages the arithmetic series formula and the uniform probability mass function. For the common standard case where the support is \{1, 2, \dots, n\}, the expected value simplifies to E[X] = \frac{n + 1}{2}. The derivation follows similarly: E[X] = \frac{1}{n} \sum_{k=1}^{n} k = \frac{1}{n} \cdot \frac{n(n + 1)}{2} = \frac{n + 1}{2}, using the formula for the sum of the first n positive integers. This value serves as the center of mass for the distribution, around which the probabilities are balanced. For instance, the expected outcome of a six-sided die (uniform on \{1, 2, \dots, 6\}) is 3.5, the midpoint between the third and fourth faces. Notably, this matches the mean of the continuous uniform distribution over the interval [a, b], which is also \frac{a + b}{2}; in the standard case from 1 to n, the value \frac{n + 1}{2} is offset by 0.5 from the naive guess \frac{n}{2}, accounting for the discrete spacing of the support.
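
The derivation can be spot-checked by brute force; this small Python sketch verifies E[X] = (a + b)/2 over a few supports using exact fractions, so no floating-point tolerance is needed.

```python
from fractions import Fraction

def expected_value(a, b):
    """E[X] computed directly as sum_k k * (1/n) over the support {a, ..., b}."""
    n = b - a + 1
    return sum(Fraction(k, n) for k in range(a, b + 1))

for a, b in [(1, 6), (-3, 4), (10, 10), (0, 99)]:
    assert expected_value(a, b) == Fraction(a + b, 2)

print(expected_value(1, 6))   # 7/2, i.e. 3.5 for a fair die
```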

Variance

The variance of a discrete uniform random variable X supported on the integers \{a, a+1, \dots, b\}, where a \leq b are integers, is given by \operatorname{Var}(X) = \frac{(b - a)(b - a + 2)}{12}. Let m = b - a + 1 denote the number of possible outcomes in the support; then the formula simplifies to \operatorname{Var}(X) = \frac{m^2 - 1}{12}. For the standard case where X is supported on \{1, 2, \dots, n\} (so a = 1 and m = n), this becomes \operatorname{Var}(X) = \frac{n^2 - 1}{12}. To derive this, first compute the second moment E[X^2]. For the standard case, E[X^2] = \frac{1}{n} \sum_{k=1}^n k^2 = \frac{1}{n} \cdot \frac{n(n+1)(2n+1)}{6} = \frac{(n+1)(2n+1)}{6}, using the known formula for the sum of squares of the first n positive integers. The variance then follows from the identity \operatorname{Var}(X) = E[X^2] - (E[X])^2, where E[X] = \frac{n+1}{2}. Substituting yields \operatorname{Var}(X) = \frac{(n+1)(2n+1)}{6} - \left( \frac{n+1}{2} \right)^2 = \frac{n^2 - 1}{12}, after algebraic simplification. The general case follows similarly by reindexing the sum of squares from the standard support. The standard deviation is \sigma = \sqrt{\operatorname{Var}(X)}. For large n, this approximates \frac{n}{\sqrt{12}} \approx 0.289 n, reflecting the spread relative to the support size. As a measure of dispersion, the variance is zero when n = 1 (a degenerate distribution with no variability) and grows quadratically with n, indicating that larger supports lead to greater uncertainty in outcomes.
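
The same brute-force check works for the variance; the sketch below confirms \operatorname{Var}(X) = (m^2 - 1)/12 via the identity E[X^2] - (E[X])^2 over several supports.

```python
from fractions import Fraction

def variance(a, b):
    """Var(X) = E[X^2] - (E[X])^2 computed by direct summation over {a, ..., b}."""
    n = b - a + 1
    ex = sum(Fraction(k, n) for k in range(a, b + 1))
    ex2 = sum(Fraction(k * k, n) for k in range(a, b + 1))
    return ex2 - ex**2

for a, b in [(1, 6), (-3, 4), (5, 5), (0, 99)]:
    m = b - a + 1
    assert variance(a, b) == Fraction(m * m - 1, 12)

print(variance(1, 6))   # 35/12 for a fair die
```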

Properties

Generating functions

The probability-generating function (PGF) of a discrete uniform random variable X taking values in \{1, 2, \dots, n\} with equal probability 1/n is given by G(s) = \frac{1}{n} \sum_{k=1}^n s^k = \frac{s(1 - s^n)}{n(1 - s)}, \quad s \neq 1, and G(1) = 1. This arises from the finite geometric sum, facilitating analytical manipulations specific to the uniform structure. The moment-generating function (MGF) for the same distribution is M(t) = \frac{1}{n} \sum_{k=1}^n e^{tk} = \frac{e^t (1 - e^{nt})}{n (1 - e^t)}, \quad t \neq 0, with M(0) = 1. Like the PGF, this derives from the finite geometric series for exponentials, providing a compact tool for deriving higher-order moments via differentiation. These generating functions are particularly useful for extracting moments of X; for instance, the expected value is obtained by differentiating the PGF and evaluating at s = 1, yielding E[X] = G'(1) = \frac{n+1}{2}, which aligns with the direct computation. Higher moments follow similarly from further differentiation, such as the factorial moment E[X(X-1)] = G''(1). The availability of these explicit closed forms distinguishes the discrete uniform from more irregular discrete distributions, where generating functions often lack simple analytical expressions and require numerical evaluation.
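
Symbolic differentiation makes these moment extractions mechanical. The sketch below assumes the sympy library and fixes a concrete n = 6 so the PGF is a polynomial; it recovers E[X] = 7/2 and the variance 35/12 of a fair die.

```python
import sympy as sp

s = sp.symbols('s')
n = 6  # a fair die; a concrete n keeps the PGF a plain polynomial

# PGF of the discrete uniform on {1, ..., n}: G(s) = (1/n) * sum_{k=1}^{n} s^k
G = sum(s**k for k in range(1, n + 1)) / n

assert G.subs(s, 1) == 1               # G(1) = 1: total probability
EX = sp.diff(G, s).subs(s, 1)          # E[X] = G'(1)
EXX1 = sp.diff(G, s, 2).subs(s, 1)     # factorial moment E[X(X-1)] = G''(1)

print(EX)                              # 7/2, matching (n + 1)/2
print(EXX1 + EX - EX**2)               # variance via the PGF: 35/12
```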

Symmetry and order statistics

The discrete uniform distribution on the set \{1, 2, \dots, m\} exhibits symmetry around its mean \mu = (m+1)/2. Specifically, the PMF satisfies P(X = k) = P(X = m + 1 - k) = 1/m for each k = 1, 2, \dots, m. This balanced structure ensures that the distribution is invariant under any relabeling of its points, as all outcomes carry equal probability weight, making it maximally entropic among distributions on m points. For a random sample of size n drawn independently and identically from this distribution, the order statistics X_{(1)} \leq X_{(2)} \leq \dots \leq X_{(n)} capture the sorted values. The probability mass function of the r-th order statistic X_{(r:n)} follows by differencing its CDF, F_{r:n}(x) = \sum_{i=r}^n \binom{n}{i} \left(\frac{x}{m}\right)^i \left(1 - \frac{x}{m}\right)^{n-i}, which gives f_{r:n}(x) = \frac{1}{m^n} \sum_{i=r}^n \binom{n}{i} \left[ x^i (m - x)^{n-i} - (x-1)^i (m - x + 1)^{n-i} \right], for x = 1, 2, \dots, m. The expected value of the k-th order statistic approximates E[X_{(k:n)}] \approx \frac{k(m+1)}{n+1}, providing a useful heuristic akin to the exact result for the continuous uniform distribution on [0, m+1]. This approximation highlights the uniformity's implication that order statistics, when suitably transformed and scaled, behave like shifted discrete uniforms with boundary adjustments to account for the finite support. The symmetry of the underlying distribution further implies zero skewness for the marginals.
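
The order-statistic PMF can be evaluated by differencing the CDF exactly as in the formula above; this Python sketch (function names are illustrative) checks it on the minimum of two fair dice, where P(X_{(1)} = 1) = 1 - (5/6)^2 = 11/36.

```python
from fractions import Fraction
from math import comb

def order_stat_pmf(x, r, n, m):
    """P(X_(r) = x) for n iid draws from the discrete uniform on {1, ..., m},
    obtained by differencing the order-statistic CDF in its binomial form."""
    def cdf(t):
        if t < 1:
            return Fraction(0)
        F = Fraction(min(t, m), m)   # per-draw CDF at integer t
        return sum(comb(n, i) * F**i * (1 - F)**(n - i) for i in range(r, n + 1))
    return cdf(x) - cdf(x - 1)

# Minimum of two fair dice: P(min = 1) = 11/36
print(order_stat_pmf(1, r=1, n=2, m=6))   # 11/36

# Sanity check: the pmf sums to 1 over the support {1, ..., m}
assert sum(order_stat_pmf(x, 1, 2, 6) for x in range(1, 7)) == 1
```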

Applications

Random permutations

In a random permutation \pi of the set \{1, 2, \dots, n\}, each of the n! possible arrangements is equally likely, establishing a joint uniform distribution over all outcomes. This setup models scenarios where orderings must be selected without bias, such as in combinatorial sampling or algorithmic randomization. The discrete uniform distribution arises naturally here as the foundational building block for individual selections within the permutation. For any fixed position i, the value \pi(i) follows a discrete uniform distribution on \{1, 2, \dots, n\}. By symmetry of the uniform measure on permutations, the probability that \pi(i) = k equals 1/n for each k \in \{1, 2, \dots, n\}, since exactly (n-1)! permutations assign k to position i. This marginal uniformity holds for each position, though the values across positions are dependent due to the bijective constraint. Algorithms for generating random permutations, such as the Fisher-Yates shuffle, produce this uniform joint distribution by sequentially sampling from shrinking discrete uniform distributions without replacement. Starting with the full set \{1, 2, \dots, n\}, the algorithm selects an element uniformly at random for the first position, then repeats on the remaining n - 1 elements, and so on, until the permutation is complete. This process ensures every permutation has equal probability 1/n!. In applications like card shuffling and randomized algorithms, the uniformity assumption on permutations guarantees fairness by treating all elements equivalently. For instance, in randomized quicksort, preprocessing the input via a uniform random permutation ensures that pivot choices lead to balanced partitions in expectation and an expected running time of O(n \log n), avoiding worst-case biases from adversarial orderings and promoting equitable performance across inputs.
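
A compact Python sketch of the Fisher-Yates shuffle (the standard backward-iterating variant), together with an empirical check of the marginal uniformity described above; the seed and sample size are arbitrary illustration choices.

```python
import random
from collections import Counter

def fisher_yates(items):
    """Fisher-Yates shuffle: every one of the n! orderings is equally likely."""
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)   # discrete uniform on {0, 1, ..., i}
        a[i], a[j] = a[j], a[i]    # the element for slot i is chosen uniformly
    return a

# Marginal uniformity: the value landing in any fixed position is uniform on the set
random.seed(2)
first = Counter(fisher_yates([1, 2, 3, 4])[0] for _ in range(40_000))
print({k: round(v / 40_000, 3) for k, v in sorted(first.items())})  # each near 0.25
```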

Parameter estimation

The maximum likelihood estimator (MLE) for the upper bound parameter n of a discrete uniform distribution on \{1, 2, \dots, n\}, given a sample of size m, X_1, \dots, X_m, is the sample maximum \hat{n} = \max\{X_1, \dots, X_m\}. For sampling without replacement (so the X_i are m distinct values in \{1, \dots, n\}), the likelihood is L(n) = 1 / \binom{n}{m} if n \geq \max X_i, and 0 otherwise; for sampling with replacement, L(n) = n^{-m} if \max X_i \leq n, and 0 otherwise. In both cases the MLE is \hat{n} = \max\{X_1, \dots, X_m\}: any n < \max X_i yields zero likelihood, while increasing n beyond \max X_i decreases the likelihood. The MLE is biased downward, as the expected value of the sample maximum satisfies E[\max X_i] < n. For the without-replacement case, E[\max X_i] = \frac{m (n+1)}{m+1}, leading to the unbiased estimator \hat{n} = \frac{m+1}{m} \max X_i - 1. This adjustment accounts for the tendency of the observed maximum to underestimate the true upper bound, and the estimator is consistent as m \to \infty. An alternative approach is the method of moments estimator, which equates the sample mean \bar{X} = \frac{1}{m} \sum_{i=1}^m X_i to the population mean \frac{n+1}{2}, yielding \hat{n} = 2\bar{X} - 1. This estimator is unbiased under the model but has higher variance than the bias-corrected maximum estimator for moderate m, as it underutilizes the information in the upper tail of the distribution. A historical application of these ideas is the German tank problem during World War II, where Allied forces estimated German tank production totals from serial numbers on captured vehicles, assuming a discrete uniform distribution from 1 to N. Using the maximum observed serial number with the correction \frac{m+1}{m} \max - 1 (where m is the number of distinct captures), they obtained reliable estimates that informed strategic decisions.
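
All three estimators are one-liners in Python; the sketch below applies them to a small illustrative sample of serial numbers (made-up data, not historical figures).

```python
def mle(sample):
    """Maximum likelihood estimate: the sample maximum (biased low)."""
    return max(sample)

def unbiased_max_estimator(sample):
    """Bias-corrected estimator (m+1)/m * max - 1 for the German tank problem."""
    m = len(sample)
    return (m + 1) / m * max(sample) - 1

def method_of_moments(sample):
    """Moment estimator 2 * sample mean - 1."""
    return 2 * sum(sample) / len(sample) - 1

# Hypothetical captured serial numbers (illustrative only)
serials = [19, 40, 42, 60]
print(mle(serials))                     # 60
print(unbiased_max_estimator(serials))  # 74.0
print(method_of_moments(serials))       # 79.5
```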
