
Uniform distribution

In probability theory and statistics, the uniform distribution is a family of symmetric probability distributions in which every outcome or interval within a specified finite set or continuous range has an equal probability of occurrence, making it a fundamental reference distribution for modeling equally likely events. It encompasses both discrete and continuous forms, with the continuous variant often serving as a baseline for more complex distributions due to its simplicity and lack of bias toward any particular value in the range. The continuous uniform distribution, denoted U(a, b) where a < b are real numbers defining the interval, has a probability density function given by f(x) = \frac{1}{b - a} for a \leq x \leq b, and f(x) = 0 otherwise. This ensures the total probability integrates to 1 over the interval, with the mean (expected value) at \frac{a + b}{2} and the variance at \frac{(b - a)^2}{12}. A special case is the standard uniform distribution on [0, 1], which is pivotal for generating random variables from other distributions via transformation methods. The discrete uniform distribution applies to a finite set of equally likely outcomes, such as the integers from a to b inclusive, where each has probability mass p(x) = \frac{1}{b - a + 1}. For the common case of outcomes from 1 to n, the probability mass function is p(x) = \frac{1}{n} for x = 1, 2, \dots, n, the mean is \frac{n + 1}{2}, and the variance is \frac{n^2 - 1}{12}. Notable applications of the uniform distribution include random number generation in computational simulations, where it underpins pseudorandom sequences essential for Monte Carlo methods. It also models scenarios with inherent uniformity, such as random sampling in surveys, assignment of treatments in randomized experiments, and approximations for physical processes like bus arrival times under ideal conditions. In order statistics and reliability engineering, uniform assumptions facilitate analysis of extremes and spacings in data sets.

Overview

Definition

In probability theory, the uniform distribution is a probability distribution that assigns equal probability to every outcome within a specified finite set or bounded interval, modeling scenarios where no outcome is preferred over another due to symmetry. This concept presupposes familiarity with fundamental elements of probability, including the sample space as the collection of all possible outcomes and the uniform assignment of probabilities to events or subsets thereof. Uniform distributions manifest in two primary forms: discrete and continuous. The discrete uniform distribution arises in cases with a finite sample space, where each discrete outcome receives identical probability weight. A classic example is the flip of a fair coin, which yields a discrete uniform distribution over the two equally likely outcomes of heads or tails. Conversely, the continuous uniform distribution applies to uncountable outcomes spanning a continuous interval, ensuring that probabilities are proportionally equal across subintervals of the same length. For instance, choosing a random point on a line segment from 0 to 1 exemplifies this, as every point or equal-length subsegment is equally probable. The mean and variance serve as key measures of central tendency and spread for uniform distributions.

Historical development

The concept of equal likelihood in probabilistic outcomes traces its origins to the 17th century, particularly through the correspondence between Blaise Pascal and Pierre de Fermat in 1654, who analyzed problems in games of chance such as dice and card games, assuming uniform probability across possible results to resolve disputes like the "problem of points." This foundational work laid the groundwork for treating discrete outcomes as equiprobable in fair games, marking an early implicit recognition of the discrete uniform distribution. In the early 18th century, Jacob Bernoulli advanced this idea in his seminal 1713 work Ars Conjectandi, where he formalized equiprobability for discrete cases in the context of games and the law of large numbers, treating outcomes like coin tosses or dice rolls as having equal probability under fairness assumptions. Later that century, Abraham de Moivre contributed to continuous extensions by developing approximations for discrete probabilities and introducing a uniform model for remaining lifetime in actuarial contexts, known as de Moivre's law, which posits a uniform distribution over a finite interval to represent equal mortality risk. The 19th century saw further refinement through Pierre-Simon Laplace's integral formulations of probability, where he employed uniform distributions as priors in Bayesian inference and for modeling errors, emphasizing the principle of insufficient reason to assign equal probabilities across continuous intervals. This culminated in Andrey Kolmogorov's 1933 axiomatic foundation of probability theory, which integrated measure-theoretic approaches and solidified the uniform distribution, particularly the continuous uniform on [0,1], as a core element by normalizing Lebesgue measure to define probability spaces rigorously. Post-1940s developments in computational statistics elevated the uniform distribution's role, driven by the need for pseudorandom number generators to simulate uniform deviates on computers, with early electronic methods emerging in the late 1940s to produce sequences approximating the continuous uniform for Monte Carlo simulations. These advancements recognized discrete and continuous uniform variants as essential building blocks for generating more complex distributions in statistical computing.

Discrete uniform distribution

Probability mass function

The probability mass function (PMF) of a discrete uniform distribution describes the probability of each possible outcome when all outcomes in a finite set are equally likely. For a random variable X that takes integer values from a to b inclusive, where a and b are integers with a \leq b, the PMF is defined as P(X = x) = \frac{1}{b - a + 1}, \quad x = a, a+1, \dots, b. This formula arises because there are exactly b - a + 1 possible outcomes, each assigned equal probability to ensure the total probability sums to 1: \sum_{x=a}^{b} P(X = x) = (b - a + 1) \cdot \frac{1}{b - a + 1} = 1. A common special case occurs when the outcomes are labeled from 1 to n, yielding P(X = k) = \frac{1}{n} for k = 1, 2, \dots, n. For example, the outcome of rolling a fair six-sided die follows this distribution, with P(X = k) = \frac{1}{6} for each face k = 1 to 6. While the discrete uniform distribution is well-defined only for finite support, attempts to extend it to an infinite countable set, such as all integers, result in an improper distribution where the probabilities cannot sum to 1, as each would need to be zero to avoid divergence. The PMF can be visualized as a bar graph with equal-height bars at each integer outcome within the support, reflecting the uniform probability assignment. In the limit as the number of discrete points increases and the spacing approaches zero, the discrete uniform PMF converges to the probability density function of the continuous uniform distribution.
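As a minimal sketch (Python assumed; exact fractions make the unit-sum check transparent), the PMF and its normalization can be expressed directly:

```python
from fractions import Fraction

def discrete_uniform_pmf(x, a, b):
    """PMF of the discrete uniform distribution on the integers a..b."""
    if a <= x <= b:
        return Fraction(1, b - a + 1)
    return Fraction(0)

# Fair six-sided die: support 1..6, each face has probability 1/6.
pmf = [discrete_uniform_pmf(k, 1, 6) for k in range(1, 7)]
assert sum(pmf) == 1   # (b - a + 1) * 1/(b - a + 1) = 1 exactly
print(pmf[0])          # Fraction(1, 6)
```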

Statistical properties

The discrete uniform distribution exhibits symmetry around its mean, which is located at (n+1)/2 for outcomes ranging from 1 to n, making the probabilities mirror each other equidistant from the center. This property holds for both even and odd n, though the exact centering differs slightly: for even n, no single point lies at the mean, while for odd n, the middle value does. Since all outcomes have equal probability, the distribution lacks a unique mode; every value in the support serves as a mode. The Shannon entropy of the discrete uniform distribution measures its maximum uncertainty among distributions over n outcomes, given by H(X) = \log_2 n bits. For example, a fair six-sided die (n=6) has entropy \log_2 6 \approx 2.585 bits, higher than a biased die where one face has probability 0.5 and the others share the rest equally, yielding lower entropy due to reduced uncertainty. The probability generating function for outcomes 1, 2, \dots, n is G(s) = \frac{s(1 - s^n)}{n(1 - s)} for s \neq 1; for support starting at 0, that is, outcomes 0, 1, \dots, n-1, it simplifies to \frac{1 - s^n}{n(1 - s)}. The discrete uniform distribution arises as a special case of the multinomial distribution with a single trial and equal probabilities across categories, equivalent to the uniform categorical distribution.
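The entropy comparison above can be checked with a short sketch (Python assumed; the biased die's probabilities are the hypothetical ones from the example):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: H = -sum p*log2(p), with 0*log(0) taken as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair_die = [1 / 6] * 6           # discrete uniform: maximum entropy
biased_die = [0.5] + [0.1] * 5   # one face at 0.5, the rest share 0.5 equally

print(entropy_bits(fair_die))    # ~2.585 bits, equal to log2(6)
print(entropy_bits(biased_die))  # ~2.161 bits, lower due to reduced uncertainty
```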

Continuous uniform distribution

Probability density function

The probability density function (PDF) of a continuous uniform random variable X \sim U(a, b), where a < b, describes a distribution where every value in the interval [a, b] is equally likely. The PDF is defined as f(x) = \begin{cases} \frac{1}{b - a} & a \leq x \leq b, \\ 0 & \text{otherwise}. \end{cases} This constant density arises from the requirement that the total probability over the support [a, b] must equal 1, as mandated for any valid PDF. Integrating the function over the interval yields \int_a^b \frac{1}{b - a} \, dx = 1, confirming the height \frac{1}{b - a} produces a rectangular area of unit measure. The uniform density thus represents an even spread of probability mass, analogous to a rod of uniform material density where mass is proportional to length. A special case is the standard uniform distribution U(0, 1), where the PDF simplifies to f(x) = 1 for 0 \leq x \leq 1, and 0 otherwise; this serves as a foundational building block in probability theory and random number generation. The PDF's rectangular shape visually emphasizes uniformity: a flat line at height \frac{1}{b - a} between a and b, dropping to zero outside, with probabilities computed as the area under this curve over subintervals. For instance, if X \sim U(0, 10), the probability P(2 \leq X \leq 5) is the integral from 2 to 5, equaling \frac{3}{10}, reflecting the subinterval length relative to the full interval.
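Because probabilities are just overlap lengths divided by the total interval length, the worked U(0, 10) example reduces to a few lines (a sketch in Python; function names are illustrative):

```python
def uniform_pdf(x, a, b):
    """Density of U(a, b): constant 1/(b - a) on [a, b], zero elsewhere."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

def interval_prob(lo, hi, a, b):
    """P(lo <= X <= hi) for X ~ U(a, b): overlap length times the density."""
    overlap = max(0.0, min(hi, b) - max(lo, a))
    return overlap / (b - a)

print(uniform_pdf(4, 0, 10))        # 0.1, the constant height 1/(b - a)
print(interval_prob(2, 5, 0, 10))   # 0.3 = (5 - 2)/(10 - 0)
```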

Cumulative distribution function

The cumulative distribution function (CDF) of a continuous uniform random variable X \sim U(a, b), where a < b, is defined as F(x) = P(X \leq x). It is given by the piecewise function: F(x) = \begin{cases} 0 & \text{if } x < a, \\ \frac{x - a}{b - a} & \text{if } a \leq x \leq b, \\ 1 & \text{if } x > b. \end{cases} This CDF is derived by integrating the probability density function (PDF) from -\infty to x: F(x) = \int_{-\infty}^x f(t) \, dt, where f(t) = \frac{1}{b - a} for a \leq t \leq b and 0 otherwise. For a \leq x \leq b, the integral evaluates to \int_a^x \frac{1}{b - a} \, dt = \frac{x - a}{b - a}, producing a linear increase from 0 at x = a to 1 at x = b. The CDF exhibits key properties: it is non-decreasing overall, strictly increasing and piecewise linear between a and b with slope \frac{1}{b - a}, and continuous everywhere, ranging from 0 to 1. The quantile function, or inverse CDF, is obtained by solving F(x_p) = p for x_p where p \in [0,1], yielding x_p = a + p(b - a). This provides the value at which the CDF reaches probability p. For the standard uniform distribution U(0,1), the CDF simplifies to F(x) = x for 0 \leq x \leq 1, so F(0.5) = 0.5, which corresponds to the median of the distribution.
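The piecewise CDF and its inverse translate directly into code; the sketch below (Python, illustrative names) also round-trips the quantile function through the CDF:

```python
def uniform_cdf(x, a, b):
    """CDF of U(a, b): 0 below a, linear with slope 1/(b - a) on [a, b], 1 above b."""
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

def uniform_quantile(p, a, b):
    """Inverse CDF: solves F(x_p) = p, giving x_p = a + p*(b - a)."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must lie in [0, 1]")
    return a + p * (b - a)

print(uniform_cdf(0.5, 0, 1))                           # 0.5, the median of U(0, 1)
print(uniform_cdf(uniform_quantile(0.25, 2, 8), 2, 8))  # 0.25, round trip
```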

Moments and characteristics

Expected value and variance

The expected value, or mean, of a discrete uniform distribution over the integers from 1 to n, denoted U(1,n), is \mu = \frac{n+1}{2}. This represents the arithmetic average of the possible outcomes, serving as the balance point of the distribution. For the continuous uniform distribution over the interval [a, b], where a < b, the mean is \mu = \frac{a + b}{2}. The mean for the continuous case can be derived from the definition of expectation: E[X] = \int_{a}^{b} x \cdot \frac{1}{b - a} \, dx = \frac{1}{b - a} \left[ \frac{x^2}{2} \right]_{a}^{b} = \frac{b^2 - a^2}{2(b - a)} = \frac{(b - a)(b + a)}{2(b - a)} = \frac{a + b}{2}. The variance of the discrete uniform distribution U(1,n) is \sigma^2 = \frac{n^2 - 1}{12}. For the continuous uniform distribution U(a,b), the variance is \sigma^2 = \frac{(b - a)^2}{12}. This formula quantifies the average squared deviation from the mean, measuring the distribution's spread across the support interval. To derive the continuous variance, first compute the second raw moment: E[X^2] = \int_{a}^{b} x^2 \cdot \frac{1}{b - a} \, dx = \frac{1}{b - a} \left[ \frac{x^3}{3} \right]_{a}^{b} = \frac{b^3 - a^3}{3(b - a)} = \frac{(b - a)(a^2 + ab + b^2)}{3(b - a)} = \frac{a^2 + ab + b^2}{3}. Then, apply the computational formula for variance: \text{Var}(X) = E[X^2] - (E[X])^2 = \frac{a^2 + ab + b^2}{3} - \left( \frac{a + b}{2} \right)^2 = \frac{a^2 + ab + b^2}{3} - \frac{a^2 + 2ab + b^2}{4} = \frac{(b - a)^2}{12}. The mean acts as the central location or midpoint of the uniform support, while the variance captures the even spread of probability mass, with the factor of 12 arising from the integration over the fixed interval. For example, the standard uniform distribution U(0,1) has mean 0.5 and variance \frac{1}{12} \approx 0.083. Odd central moments, and hence the skewness, are zero for these symmetric distributions.
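A quick Monte Carlo check (a sketch using Python's standard library) confirms the closed forms for the standard uniform:

```python
import random

random.seed(0)
a, b, n = 0.0, 1.0, 1_000_000
samples = [random.uniform(a, b) for _ in range(n)]

mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n

print(mean, (a + b) / 2)       # both close to 0.5
print(var, (b - a) ** 2 / 12)  # both close to 1/12 ~ 0.0833
```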

Higher moments

The central moments of a uniform distribution reflect its symmetry about the mean. For both continuous and discrete cases, all odd-order central moments are zero due to this symmetry. The skewness, defined as the third standardized central moment, is therefore always zero for symmetric uniform distributions. For the continuous uniform distribution on [a, b], the fourth central moment is given by E[(X - \mu)^4] = \frac{(b - a)^4}{80}, where \mu = (a + b)/2 is the mean. This leads to a kurtosis of \frac{9}{5} = 1.8. The kurtosis value below 3 indicates a platykurtic distribution with thinner tails than the normal distribution, implying fewer extreme outliers relative to a normal distribution with the same variance. The raw moments of order k for the continuous uniform distribution on [a, b] are E[X^k] = \frac{b^{k+1} - a^{k+1}}{(k+1)(b - a)}. For example, with the standard uniform distribution on [0, 1], the third raw moment is E[X^3] = \frac{1}{4} (though the corresponding central moment is 0), and the fourth raw moment is E[X^4] = \frac{1}{5}. In the discrete uniform distribution over \{1, 2, \dots, n\}, odd central moments are similarly zero due to symmetry. The kurtosis exhibits patterns analogous to the continuous case and approaches \frac{9}{5} as n becomes large.
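Given the raw-moment formula above, the central moments and the kurtosis of 9/5 follow mechanically from the binomial expansion of (X - \mu)^k; a sketch in exact rational arithmetic (Python, integer endpoints assumed):

```python
from fractions import Fraction

def raw_moment(k, a, b):
    """E[X^k] for X ~ U(a, b): (b^(k+1) - a^(k+1)) / ((k + 1)(b - a))."""
    return Fraction(b ** (k + 1) - a ** (k + 1), (k + 1) * (b - a))

a, b = 0, 1
mu = raw_moment(1, a, b)                    # 1/2
m2 = raw_moment(2, a, b) - mu ** 2          # variance: 1/12
m4 = (raw_moment(4, a, b) - 4 * mu * raw_moment(3, a, b)
      + 6 * mu ** 2 * raw_moment(2, a, b) - 3 * mu ** 4)  # (b - a)^4/80 = 1/80

print(m4 / m2 ** 2)  # Fraction(9, 5): kurtosis 1.8, platykurtic
```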

Sampling and inference

Random number generation

Generating pseudorandom numbers from a uniform distribution is fundamental to simulation and statistical computing, where high-quality uniform variates serve as the basis for sampling more complex distributions. Early efforts in this area include John von Neumann's middle-square method, developed in 1946 and presented at a 1949 symposium on computing machinery, which generates pseudorandom digits by squaring an n-digit seed and extracting the middle n digits of the result. This approach, while pioneering, suffered from short periods and poor uniformity, leading to its replacement by more robust algorithms. For the discrete uniform distribution over integers from 1 to n, a standard method to avoid bias uses rejection sampling with modular arithmetic on output from a pseudorandom bit generator. Specifically, generate U uniformly in [0, m) where m is large (e.g., a power of 2); let q = \lfloor m / n \rfloor; if U < q n, set X = \lfloor U / q \rfloor + 1; otherwise, reject and repeat. This ensures exact uniformity. Generating continuous uniform variates on [0,1) typically relies on pseudorandom number generators (PRNGs) that produce sequences approximating this distribution. Linear congruential generators (LCGs), defined by the recurrence X_{i+1} = (a X_i + c) \mod m with normalized output U_i = X_i / m, are simple and widely implemented due to their efficiency. For superior statistical properties and a period of 2^{19937} - 1, the Mersenne Twister algorithm, introduced by Matsumoto and Nishimura in 1998, has become a standard, offering equidistribution in up to 623 dimensions. To obtain a uniform variate on [a, b), apply the affine transformation X = a + (b - a) U where U \sim \text{Uniform}(0,1). Uniform pseudorandom numbers on [0,1) also form the foundation of inverse transform sampling for generating variates from arbitrary distributions. In this method, if F is the cumulative distribution function of the target distribution, then X = F^{-1}(U) where U \sim \text{Uniform}(0,1) produces a sample from the target, leveraging the uniformity to match the desired probabilities. This technique is particularly valuable in simulations requiring non-uniform samples, such as Monte Carlo integration for parameter estimation. The quality of uniform random number generators is assessed through metrics like period length, which measures the cycle before repetition (ideally approaching 2^k for a generator with k bits of state), and statistical tests for uniformity and independence. The chi-square test, for instance, evaluates whether observed frequencies in binned samples deviate significantly from expected uniform proportions, with low p-values indicating poor uniformity. Comprehensive suites like TestU01 apply such tests to detect correlations or lattice structures in the sequence. In practice, programming languages provide built-in implementations; for example, Python's random.uniform(a, b) function uses the Mersenne Twister to generate a float in [a, b] via the scaling transformation on its core uniform generator. This method ensures reproducibility with a fixed seed while maintaining high-quality output for most applications.
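The pieces above (an LCG, the rejection scheme for exact discrete uniformity, the affine transformation, and inverse transform sampling) fit together as in this sketch; the LCG constants are the common Numerical Recipes choice, used purely for illustration rather than as a production-quality generator:

```python
import math

class LCG:
    """Linear congruential generator: X_{i+1} = (a*X_i + c) mod m."""
    def __init__(self, seed=12345):
        self.m, self.a, self.c = 2 ** 32, 1664525, 1013904223
        self.state = seed

    def next_int(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state                # uniform on [0, m)

    def next_float(self):
        return self.next_int() / self.m  # uniform on [0, 1)

def randint_unbiased(rng, n):
    """Rejection sampling for an exactly uniform integer in 1..n."""
    q = rng.m // n
    while True:
        u = rng.next_int()
        if u < q * n:                    # reject the biased tail of [0, m)
            return u // q + 1            # [0, q*n) maps evenly onto 1..n

rng = LCG(seed=42)
print(randint_unbiased(rng, 6))          # unbiased die roll
u = rng.next_float()
print(3 + (8 - 3) * u)                   # affine transform: a sample from U(3, 8)
print(-math.log(1 - u))                  # inverse transform: Exponential(1) sample
```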

Parameter estimation

Parameter estimation for the uniform distribution involves inferring the support parameters from observed data, typically using methods like maximum likelihood estimation and the method of moments. These approaches are applied separately to the continuous and discrete cases, with considerations for bias, consistency, and confidence intervals derived from the distribution of order statistics.

Continuous uniform distribution

For a continuous uniform distribution on the interval [a, b] with a < b, the maximum likelihood estimators based on an independent sample X_1, \dots, X_n are the sample minimum and maximum: \hat{a}_{\text{MLE}} = X_{(1)} = \min_{1 \leq i \leq n} X_i, \quad \hat{b}_{\text{MLE}} = X_{(n)} = \max_{1 \leq i \leq n} X_i. These estimators maximize the likelihood function L(a, b) = (b - a)^{-n} \mathbf{1}_{\{a \leq X_{(1)}, X_{(n)} \leq b \}}, which is positive only when a \leq X_{(1)} and b \geq X_{(n)} and increases as the interval shrinks, so it is maximized at a = X_{(1)}, b = X_{(n)}. The MLEs are biased: E[\hat{a}_{\text{MLE}}] = a + (b - a)/(n + 1) and E[\hat{b}_{\text{MLE}}] = b - (b - a)/(n + 1), so the estimated range \hat{b}_{\text{MLE}} - \hat{a}_{\text{MLE}} underestimates the true range by a factor of (n - 1)/(n + 1). An approximate unbiased adjustment for the bounds uses the estimated range r = \hat{b}_{\text{MLE}} - \hat{a}_{\text{MLE}}: \hat{a} = \hat{a}_{\text{MLE}} - \frac{r}{n + 1}, \quad \hat{b} = \hat{b}_{\text{MLE}} + \frac{r}{n + 1}. This correction aligns the expected values more closely with the true parameters, though exact unbiased estimators for both bounds simultaneously are not straightforward due to dependence. The MLEs are consistent, with the variance of the range estimator scaling as O(1/n^2), specifically \text{Var}(\hat{b}_{\text{MLE}} - \hat{a}_{\text{MLE}}) = (b - a)^2 \frac{2(n-1)}{(n + 1)^2 (n + 2)}. The method of moments equates the first two sample moments to the population moments: the mean \mu = (a + b)/2 and variance \sigma^2 = (b - a)^2 / 12. Using the sample mean \bar{X} and sample variance s^2, the estimators are \hat{a}_{\text{MoM}} = \bar{X} - \sqrt{3 s^2}, \quad \hat{b}_{\text{MoM}} = \bar{X} + \sqrt{3 s^2}. These are unbiased for the midpoint but in small samples may yield bounds that fail to contain all observations (for instance, \hat{a}_{\text{MoM}} > X_{(1)}); they are consistent but less efficient than the MLEs for large n. Confidence intervals for the bounds leverage the beta-distributed spacings of order statistics. For the upper bound b assuming a known lower bound (e.g., Uniform[0, b]), a (1 - \alpha) confidence interval is [X_{(n)}, X_{(n)} / \alpha^{1/n}]; similar pivot-based intervals apply for a or both parameters jointly using the joint distribution of X_{(1)} and X_{(n)}. When both bounds are unknown, approximate intervals widen the MLE range by factors involving n and the confidence level. For example, given samples {1.2, 3.4, 2.1} (n=3), the MLEs are \hat{a}_{\text{MLE}} = 1.2, \hat{b}_{\text{MLE}} = 3.4, with approximate unbiased adjustments \hat{a} \approx 1.2 - 2.2/4 = 0.65, \hat{b} \approx 3.4 + 2.2/4 = 3.95. The method of moments yields \bar{X} \approx 2.233, s^2 \approx 1.223, so \hat{a}_{\text{MoM}} \approx 2.233 - \sqrt{3 \times 1.223} \approx 0.32, \hat{b}_{\text{MoM}} \approx 2.233 + \sqrt{3 \times 1.223} \approx 4.15.
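The worked three-observation example can be reproduced numerically; this sketch (Python; the sample variance uses the n - 1 divisor, matching s^2 above) computes the MLEs, the bias-adjusted bounds, and the method-of-moments estimates:

```python
import math

data = [1.2, 3.4, 2.1]
n = len(data)

# Maximum likelihood: the sample extremes.
a_mle, b_mle = min(data), max(data)

# Approximate bias correction: widen each bound by r/(n + 1).
r = b_mle - a_mle
a_adj = a_mle - r / (n + 1)   # 1.2 - 2.2/4 = 0.65
b_adj = b_mle + r / (n + 1)   # 3.4 + 2.2/4 = 3.95

# Method of moments: match mean (a + b)/2 and variance (b - a)^2/12.
xbar = sum(data) / n                                # ~2.233
s2 = sum((x - xbar) ** 2 for x in data) / (n - 1)   # ~1.223
a_mom = xbar - math.sqrt(3 * s2)                    # ~0.32
b_mom = xbar + math.sqrt(3 * s2)                    # ~4.15

print(a_mle, b_mle, a_adj, b_adj, a_mom, b_mom)
```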

Discrete uniform distribution

For a discrete uniform distribution on \{1, 2, \dots, k\}, the MLE for k from a sample X_1, \dots, X_n is \hat{k}_{\text{MLE}} = \max_i X_i, since any larger k would dilute the probability mass across unobserved values and lower the likelihood, while any smaller k would assign zero probability to the observed maximum. This estimator is biased downward, with E[\hat{k}_{\text{MLE}}] \approx k n/(n + 1) for large k; an approximately unbiased version is \hat{k} = \hat{k}_{\text{MLE}} \cdot (n + 1)/n. It is consistent, with variance decreasing as O(1/n). For a general discrete uniform distribution on the integers from m to m + k - 1, the MLEs analogously use the sample minimum and maximum, with similar bias corrections. The method of moments for the discrete case equates the sample mean to the population mean m + (k - 1)/2 and uses the sample variance or second moment for the second parameter, yielding \hat{k}_{\text{MoM}} = 2(\bar{X} - m) + 1 if m is known, or joint solutions otherwise. These estimators are consistent but can be sensitive to outliers in small samples. Confidence intervals for k rely on the distribution of the sample maximum, similar to the continuous case, often using \Pr(\max_i X_i \leq j) = (j/k)^n as a pivotal quantity.
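A short sketch (Python, with a hypothetical sample from {1, ..., k}) of the maximum-based MLE, its bias correction, and the method-of-moments estimate with known lower bound m = 1:

```python
data = [3, 7, 5, 9, 4]       # hypothetical draws from {1, ..., k}
n = len(data)

k_mle = max(data)            # MLE: the sample maximum, here 9
k_adj = k_mle * (n + 1) / n  # approximate bias correction: 10.8

xbar = sum(data) / n         # 5.6
k_mom = 2 * (xbar - 1) + 1   # solves xbar = m + (k - 1)/2 with m = 1: 10.2

print(k_mle, k_adj, k_mom)
```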

Applications and examples

In probability theory

In probability theory, the continuous uniform distribution serves as a fundamental null model, representing the maximum entropy distribution over a bounded interval of support, as it maximizes uncertainty subject to the constraint of fixed bounds without additional information. This property arises because any deviation from uniformity would reduce the entropy while preserving the support, making the uniform the least informative choice for modeling ignorance over a finite range. The continuous uniform distribution is intrinsically linked to the Lebesgue measure, as it induces a probability measure on intervals by normalizing the Lebesgue measure restricted to the support, providing a way to define "equal likelihood" across a continuum. This connection underpins much of measure-theoretic probability, where the Lebesgue measure serves as the reference measure for densities and integration over Borel sets. Under the central limit theorem, the sum of a large number of independent uniform random variables on a bounded interval approximates a normal distribution, illustrating the theorem's generality even for distributions with compact support and finite moments. For instance, the standardized sum of n such uniforms converges in distribution to the standard normal as n increases, highlighting the uniform's role in demonstrating the robustness of asymptotic normality. The uniform distribution also plays a key role in Bayesian inference as a prior distribution for parameters in certain models, such as the upper bound θ in a uniform likelihood on [0, θ], where an improper uniform prior on θ yields a closed-form Pareto posterior that updates beliefs about the bound based on observed maxima. This conjugacy facilitates exact posterior inference without numerical integration, particularly useful for bounded estimation problems. In geometric probability, uniform assumptions resolve ambiguities like Bertrand's paradox, where different ways of generating "random" chords in a circle (such as uniform positioning relative to the radius or angle) yield probabilities of 1/2, 1/3, or 1/4 for the chord exceeding the side of an inscribed equilateral triangle, emphasizing the need for invariant measures under geometric transformations to define uniformity consistently. A principled resolution favors the uniform measure over the space of lines, invariant under rotations and translations, leading to a probability of 1/2. An illustrative example is Buffon's needle problem, where dropping a needle of length l ≤ d (with d the spacing between parallel lines) assumes uniform distributions for the needle's distance from the nearest line and its orientation angle, yielding a crossing probability of 2l/(πd) that enables estimation of π from the observed proportion of crossings. This geometric setup leverages the joint uniform distribution to compute the exact probability integral over the parameter space.
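The Buffon's needle probability lends itself to direct simulation; the sketch below (Python) draws the distance and angle from the uniform distributions described above and inverts the crossing probability 2l/(πd) to estimate π:

```python
import math
import random

def buffon_pi_estimate(trials, l=1.0, d=1.0, seed=0):
    """Estimate pi via Buffon's needle (requires l <= d): the center-to-line
    distance is U(0, d/2), the angle is U(0, pi/2), and the needle crosses
    a line when x <= (l/2)*sin(theta)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if rng.uniform(0, d / 2) <= (l / 2) * math.sin(rng.uniform(0, math.pi / 2))
    )
    # P(cross) = 2l/(pi*d), so pi is estimated by 2l*trials/(d*hits).
    return 2 * l * trials / (d * hits)

print(buffon_pi_estimate(1_000_000))   # roughly 3.14
```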

In real-world modeling

The uniform distribution finds extensive use in Monte Carlo methods for numerical integration, where samples drawn from the standard uniform distribution U(0,1) approximate integrals by averaging function evaluations over randomly selected points. A classic example is estimating the value of π by generating points uniformly in the unit square and determining the proportion that fall within the unit quarter-circle, yielding π/4 as the ratio of "hits" to total samples; this method leverages the uniform distribution's equal likelihood across the domain to provide unbiased estimates with variance decreasing as 1/N for N samples. In computer graphics, uniform sampling is employed in ray tracing to generate rays that intersect scenes evenly, mitigating artifacts in rendered images. For instance, pixels are subdivided into uniform grids of samples, or rays are cast uniformly over light sources to compute illumination, ensuring consistent coverage and reducing visible banding in textures or shadows. Reliability engineering often models failure times or wear processes as uniform distributions when bounded between known minimum and maximum values, such as in components with constant wear rates over a fixed interval. The two-parameter uniform distribution is particularly useful here, where the cumulative distribution function F(t) = (t - a)/(b - a) for t in [a, b] estimates the probability of failure within the bounds, aiding in predicting system lifetimes for bounded processes like battery discharge or material fatigue under steady conditions. In cryptography, the one-time pad encryption scheme requires keys generated from a uniform distribution over all possible bit strings of the message length to achieve perfect secrecy, as the uniform randomness ensures that every ciphertext is equally likely regardless of the plaintext, making decryption without the key impossible. Queueing theory incorporates uniform arrival times in simplified models of deterministic or near-constant traffic flows, such as in G/G/1 systems where interarrival times follow a uniform distribution to analyze wait times and queue lengths under bounded variability. A practical example arises in structural engineering, where Monte Carlo simulations model uniformly distributed loads on beams within bounds [0, max_load] to assess reliability against variability in applied forces, converting the load to nodal equivalents and evaluating failure probabilities through repeated random sampling. Despite these applications, the uniform distribution has limitations in modeling real-world phenomena with inherent clustering or skewness, as it assumes equal probability across its support and fails to capture biases or patterns, often necessitating transformations to distributions like the normal or exponential for more realistic fits.
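The quarter-circle estimate of π described above takes only a few lines (a sketch in Python):

```python
import random

def pi_estimate(n, seed=1):
    """Monte Carlo estimate of pi: draw points uniformly in the unit square
    and count the fraction that land inside the unit quarter-circle."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * hits / n        # hits/n estimates pi/4

print(pi_estimate(1_000_000))  # close to 3.1416; standard error shrinks as 1/sqrt(n)
```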
