
Random number generation

Random number generation is the process of producing sequences of numbers that cannot be reasonably predicted better than by random chance, essential for simulating uncertainty in computational and physical systems. These sequences are generated primarily through two methods: true random number generators (TRNGs), which rely on inherently unpredictable physical phenomena such as thermal noise or radioactive decay, and pseudorandom number generators (PRNGs), which employ deterministic algorithms seeded with an initial value to produce outputs statistically indistinguishable from true randomness over short periods. TRNGs provide genuine unpredictability but are slower and require hardware entropy sources, whereas PRNGs offer efficiency and reproducibility for testing but risk periodicity and predictability if the seed or algorithm is compromised. In applications spanning cryptography, where secure key generation demands resistance to prediction to prevent breaches, statistical sampling for Monte Carlo methods, and procedural content in gaming, the quality of randomness directly impacts reliability and fairness. Linear congruential generators, defined by the recurrence X_{n+1} = (a X_n + b) \mod m, exemplify early PRNG designs but exhibit detectable correlations, underscoring the need for rigorous statistical testing like the Diehard or NIST suites to validate uniformity and independence. Challenges in random number generation include insufficient entropy leading to biased outputs in TRNGs and algorithmic flaws causing short cycles or linear dependencies in PRNGs, which have historically undermined cryptographic protocols by enabling attacks on systems assuming perfect randomness. Advances, such as quantum-based generators leveraging entanglement for provable unpredictability, address these issues but face hurdles in widespread deployment. Early computational efforts trace to mid-20th-century methods like the middle-square technique, evolving into modern standards prioritizing cryptographic security over mere statistical adequacy.

Fundamentals and Definitions

Core Concepts of Randomness

A random sequence in the context of number generation exhibits unpredictability, whereby the value of any element cannot be determined with certainty from preceding elements or knowledge of the generating process. This property ensures that no deterministic inference can reliably forecast future outputs, distinguishing random processes from patterned or algorithmic ones. Empirical validation of unpredictability relies on the sequence resisting all feasible prediction attempts, including those based on partial observations. Statistical randomness requires that the sequence approximates an ideal uniform distribution, with each possible value equally likely, and demonstrates independence between elements, showing no correlations or biases detectable by hypothesis testing. Standardized test suites, such as those outlined by NIST in Special Publication 800-22 (revised 2010), evaluate this through metrics like frequency (monobit) tests for balance between 0s and 1s, runs tests for streak lengths, and spectral tests for periodicity. Donald Knuth, in The Art of Computer Programming, Volume 2 (1997 edition), emphasizes that sequences passing such empirical tests mimic the behavior of truly random sources, though they may fail under deeper scrutiny if underlying flaws exist. From an information-theoretic perspective, randomness quantifies as high entropy, measuring the average uncertainty per symbol; for a uniform binary source, Shannon entropy reaches its maximum of 1 bit per symbol. Min-entropy, focusing on the worst-case predictability, provides a conservative bound for cryptographic applications, as low min-entropy enables efficient guessing attacks. Algorithmically, Kolmogorov complexity formalizes randomness: a finite string is random if its shortest describing program (in a fixed universal description language) is at least as long as the string itself, rendering it incompressible. This absolute notion aligns with causal realism, where randomness arises from genuine incompressibility rather than mere statistical approximation, though practical generation often settles for testable proxies due to computational limits.
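The entropy measures above can be made concrete with a short numeric sketch. The function names and example probabilities below are illustrative assumptions, not part of any standard; the sketch simply evaluates the Shannon and min-entropy formulas for a fair and a biased binary source.

```python
# Illustrative sketch: Shannon entropy and min-entropy of a binary source
# computed from its symbol probabilities.
import math

def shannon_entropy(probs):
    """Average uncertainty in bits per symbol: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    """Worst-case measure favored in cryptography: H_min = -log2(max p)."""
    return -math.log2(max(probs))

# A fair bit reaches the 1 bit/symbol maximum; a biased bit falls short.
print(shannon_entropy([0.5, 0.5]), min_entropy([0.5, 0.5]))   # 1.0, 1.0
print(shannon_entropy([0.9, 0.1]), min_entropy([0.9, 0.1]))   # ~0.469, ~0.152
```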

Distinction Between True and Pseudo-Random Numbers

True random numbers are generated by non-deterministic processes that exploit physical phenomena with inherent unpredictability, such as thermal noise in electronic circuits, quantum fluctuations, or radioactive decay, yielding outputs that cannot be reproduced or predicted even with exhaustive knowledge of the generating mechanism. These sources provide entropy directly from chaotic or quantum events, ensuring that each bit or value arises from causal independence, as validated through tests like Bell inequalities that confirm non-local quantum correlations beyond any classical deterministic account. In practice, hardware implementations, such as those using avalanche noise in diodes or photonic quantum events, produce sequences validated against statistical suites like NIST SP 800-22 to approximate ideal randomness without algorithmic structure. Pseudo-random numbers, by contrast, emerge from deterministic algorithms—known as deterministic random bit generators (DRBGs)—that transform an initial seed value through mathematical operations, such as linear congruential generators defined by the recurrence X_{n+1} = (a X_n + b) \mod m, to yield long sequences statistically resembling true randomness but fully reproducible given the seed and parameters. Approved DRBGs, including those based on hash functions or block ciphers like Hash_DRBG or CTR_DRBG, expand limited seed entropy into extended outputs but remain vulnerable to reconstruction if the internal state is compromised, as their predictability stems from computational determinism rather than physical indeterminacy. The seed itself typically requires true random input to prevent trivial predictability, yet the resulting stream lacks the fundamental entropy of physical sources. The core distinction lies in entropy origin and predictability: true random generators (often termed non-deterministic random bit generators or NRBGs) deliver irreducible entropy essential for one-time pads or cryptographic key generation, where any predictability undermines security, whereas pseudo-random generators prioritize efficiency and speed for simulations, bulk data processing, or non-cryptographic contexts, passing statistical tests like Dieharder or the NIST suites but failing under state recovery attacks. In cryptographic applications, systems combine true random seeding with pseudo-random expansion to balance unpredictability and performance, as pure true random generation is resource-intensive and slower, producing bits at rates like 100-1000 kbps in quantum devices versus gigabits per second for DRBGs. This separation ensures that pseudo-random outputs, while computationally indistinguishable from true randomness for many purposes, do not equate to it, as their causal chain traces fully to the seed without exogenous physical variance.
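The reproducibility contrast can be seen in a minimal sketch, assuming a POSIX-like operating system that exposes an entropy-backed byte source; the seed value and byte counts are arbitrary choices for illustration.

```python
# Minimal sketch of the true-vs-pseudo contrast: a seeded PRNG is exactly
# reproducible, while OS-provided entropy bytes are not.
import os
import random

# Deterministic PRNG: identical seeds give identical streams.
a = random.Random(12345)
b = random.Random(12345)
assert [a.random() for _ in range(3)] == [b.random() for _ in range(3)]

# OS-provided bytes draw on hardware/event entropy and are not reproducible.
print(os.urandom(16).hex())
print(os.urandom(16).hex())  # different with overwhelming probability
```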

Historical Development

Ancient and Pre-Computational Methods

The earliest known methods of random number generation relied on physical artifacts that harnessed unpredictable natural processes, such as the rolling of dice or the casting of lots, primarily for divination, gaming, and decision-making in ancient civilizations. In Mesopotamia, archaeological evidence indicates that cubic dice marked with dots from 1 to 6 appeared around 3000 BCE, excavated from sites in present-day Iraq and Iran, and were employed for both recreational games and determining outcomes perceived as divinely ordained. Similar dice, also six-sided, have been documented from the Indus Valley civilization during the same third millennium BCE period, suggesting parallel development of these tools for generating discrete random integers. Casting lots, involving the random drawing or throwing of marked objects like sticks, stones, or tokens into a container or onto the ground, was a widespread practice across the ancient Near East, including among the Hittites by the second millennium BCE, for purposes such as land division, office allocation, and resolving disputes. This technique effectively produced a uniform random selection from a finite set, often interpreted as revealing divine will. In ancient China, the I Ching (Book of Changes), originating during the Western Zhou dynasty (1046–771 BCE), utilized yarrow stalks for divination; the traditional method begins with 50 stalks, from which one is set aside, and the remainder are randomly divided into groups via handfuls, counted in threes or fours to generate six lines forming one of 64 hexagrams, functioning as a biased random selection among symbolic outcomes. Ancient Greeks employed sortition, or drawing lots—typically using marked pottery shards (ostraca) or beans—from the Archaic period onward, but most prominently in 5th-century BCE Athens to randomly select magistrates, council members, and jurors from eligible male citizens, thereby ensuring egalitarian distribution without bias toward wealth or influence. Complementary devices included astragali, sheep knucklebones with four natural resting faces used as irregular dice for quaternary random outcomes in games and oracles. In the Roman era, coin flipping emerged as a binary random method, tossing coins inscribed with "navia" (ship) or "caput" (head) to decide between two options, a practice rooted in earlier Mediterranean traditions but documented from the 1st century BCE. These pre-computational approaches, while limited in scale and uniformity compared to modern techniques, laid foundational principles for exploiting physical unpredictability in number generation.

Mid-20th Century Computational Advances

The need for computational random number generation arose during the Manhattan Project era in the mid-1940s, as scientists like Stanislaw Ulam and John von Neumann developed methods to simulate neutron diffusion in atomic bombs, requiring vast sequences of random digits beyond manual or table-based production. These simulations were first implemented on the ENIAC computer in 1947, marking an early application of algorithmic random number generation in digital computation. Von Neumann devised the middle-square method around 1946 as one of the first pseudo-random number generators (PRNGs) suitable for electronic computers, involving squaring a seed number, extracting the middle digits of the result, and repeating the process to produce subsequent values. This iterative algorithm, with X_{n+1} derived from the central digits of X_n^2, was simple to implement on limited hardware like the ENIAC, which lacked floating-point operations, and was used to approximate uniform distributions for Monte Carlo trials. However, von Neumann himself recognized its severe shortcomings, including short periods, zero outputs, and patterned sequences that failed basic randomness tests, cautioning against over-reliance due to these deterministic artifacts. By 1951, the Ferranti Mark 1, one of the first commercially available general-purpose computers, incorporated a hardware random number instruction based on electronic noise, alongside software PRNGs, enabling routine computational use in scientific simulations. Concurrently, Derrick Lehmer proposed multiplicative congruential generators in 1951, using the recurrence X_{n+1} = (a X_n) \mod m to produce sequences with longer periods than middle-square, influencing early implementations for uniform random variates. These advances shifted random number production from precomputed physical tables—such as those distributed via punched cards in the late 1940s—to on-the-fly algorithmic generation, though persistent issues with correlation and uniformity prompted further refinements in the following decades.
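The middle-square recipe described above is short enough to sketch directly; the 4-digit width and seed below are illustrative assumptions, and running the sketch quickly exposes the short cycles and degenerate zero states von Neumann warned about.

```python
# Sketch of von Neumann's middle-square method: square the current value,
# pad to twice the digit width, and keep the middle digits as the next value.
def middle_square(seed, n, width=4):
    values, x = [], seed
    for _ in range(n):
        squared = str(x * x).zfill(2 * width)                   # pad to 2*width digits
        mid = len(squared) // 2
        x = int(squared[mid - width // 2 : mid + width // 2])   # take the middle digits
        values.append(x)
    return values

print(middle_square(5731, 10))  # short sequences quickly show the method's flaws
```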

Post-1980s Refinements and Standardization

In the mid-1980s, standardization efforts began to address the need for reliable pseudorandom number generators (PRNGs) in cryptographic applications, particularly in financial systems. The American National Standards Institute (ANSI) published X9.17 in 1985, which specified a DES-based PRNG for generating keys and initialization vectors, incorporating a 64-bit state updated via successive DES encryption steps to enhance resistance against predictability. This marked an early formal standard, emphasizing deterministic methods seeded by true random sources to mitigate weaknesses in earlier linear congruential generators (LCGs). Concurrently, refinements focused on improving LCG parameters for general-purpose use; in 1988, Park and Miller proposed a "minimal standard" LCG with modulus m = 2^{31} - 1, multiplier a = 16807, and increment c = 0, which passed basic spectral and serial correlation tests while offering portability across systems. By the 1990s, advancements emphasized longer periods and better equidistribution to address lattice structure flaws in LCGs. George Marsaglia introduced the Diehard test suite in 1995, comprising 16 rigorous statistical tests (e.g., birthday spacings, overlapping permutations) to evaluate RNG quality beyond simple uniformity, revealing deficiencies in many contemporary generators. A landmark algorithmic refinement came in 1997 with the Mersenne Twister, developed by Makoto Matsumoto and Takuji Nishimura, featuring a state size of 624 32-bit words and a period of 2^{19937} - 1, achieving 623-dimensional equidistribution while remaining efficient for simulations. These developments shifted focus toward generators balancing computational speed with resistance to low-dimensional dependencies, influencing default implementations in languages such as Python and R. Standardization accelerated in the 2000s through U.S. government initiatives for cryptographic modules. The Federal Information Processing Standard (FIPS) 140-2, published in 2001, mandated approved RNGs in validated cryptographic modules, initially referencing ANSI designs before incorporating NIST guidelines. NIST released SP 800-22 in 2001 (revised 2010), providing a suite of 15 statistical tests (e.g., frequency, runs, approximate entropy) for assessing randomness, complementing Diehard by focusing on cryptographic applicability. The SP 800-90 series followed, with an initial draft in 2005 and SP 800-90A finalized in 2006 (revised 2011), specifying deterministic random bit generators (DRBGs) such as Hash_DRBG, HMAC_DRBG, and CTR_DRBG, requiring reseeding from entropy sources every 2^{48} bits or at fixed intervals to ensure forward/backward secrecy. SP 800-90B (2018) standardized entropy source validation, while SP 800-90C (2016 draft) outlined RBG constructions, promoting hybrid true-pseudorandom approaches for high-assurance applications. These standards prioritized empirical validation over theoretical claims, addressing biases in prior generators through mandatory testing and design constraints. Linear congruential generators, refined in standards like Park-Miller, follow the recurrence X_{n+1} = (a X_n + c) \mod m, where careful parameter selection minimizes detectable patterns.
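A minimal sketch of the Park-Miller "minimal standard" generator cited above follows; the seed and the mapping of outputs to the unit interval are illustrative choices, not part of the 1988 specification.

```python
# Sketch of the Park-Miller minimal standard LCG:
# X_{n+1} = 16807 * X_n mod (2^31 - 1), with increment c = 0.
M = 2**31 - 1   # modulus (a Mersenne prime)
A = 16807       # multiplier

def park_miller(seed, n):
    x, out = seed % M, []
    for _ in range(n):
        x = (A * x) % M
        out.append(x / M)   # map the state to a uniform variate in (0, 1)
    return out

print(park_miller(42, 5))
```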

Methods of Generation

Physical True Random Number Generators

Physical true random number generators (TRNGs) produce random bits by sampling inherently unpredictable physical processes that generate entropy from quantum or classical phenomena, distinguishing them from deterministic pseudo-random methods by their non-reproducibility even under identical conditions. These generators typically involve an entropy source, followed by digitization and post-processing to mitigate biases, correlations, or environmental influences that could reduce effective entropy. Entropy estimation and validation, as outlined in standards like NIST SP 800-90B, are critical to confirm the source's min-entropy rate, often requiring statistical tests for independence and uniformity. Unlike algorithmic approaches, TRNGs rely on causal irreversibility in physical systems, such as noise fluctuations defying deterministic prediction due to incomplete knowledge of initial states or fundamental quantum indeterminacy. Common entropy sources include electronic noise phenomena. Thermal (Johnson-Nyquist) noise arises from random electron motion in resistors, amplified and thresholded to yield bits; this method underpins many early hardware implementations, though it is susceptible to temperature variations affecting bias. Avalanche noise in reverse-biased diodes exploits probabilistic carrier multiplication, providing high bit rates but requiring debiasing via extractors such as the von Neumann corrector to handle slight asymmetries. Shot noise from discrete charge carrier flow in semiconductors offers similar stochasticity, often combined with comparators for binary output. Quantum-based TRNGs leverage non-deterministic events for theoretically unbounded entropy. Photonic methods detect vacuum fluctuations or photon arrival times in attenuated lasers, as validated in peer-reviewed setups achieving gigabit-per-second rates with quantum-certified randomness. Spintronic variants use stochastic magnetic tunnel junction switching, where thermal agitation induces random state flips, enabling compact integration in chips with entropy rates exceeding 1 Gbps after processing. Radioactive decay timing, measured via Geiger-Müller tubes, provides decay-event intervals as entropy input; services like HotBits have historically generated bits this way, with inter-arrival times following exponential distributions yielding near-ideal min-entropy. Ring oscillator jitter forms another class, where phase noise in free-running loops on silicon chips—driven by thermal and supply variations—produces timing differences sampled by counters; multiple oscillators XORed together enhance independence, with designs achieving 100 Mbps post-extraction while consuming low power. Mechanical or macroscopic sources, such as turbulent fluid flow or vibrating cantilevers, have been prototyped for unpredictability verifiable via statistical tests, though they are less common in integrated systems due to speed and size limits. Environmental factors like voltage drifts or electromagnetic interference can degrade source quality, necessitating robust conditioning (e.g., hashing) and periodic health tests per NIST guidelines to maintain output quality. Commercial hardware, including Intel's RDRAND instruction since 2012, integrates such sources but has faced scrutiny for potential backdoor risks, underscoring the need for independent validation over vendor claims.
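The XOR combination of several oscillator outputs mentioned above can be mocked up in software; the "oscillators" below are simulated with a biased software generator purely for illustration, so the sketch shows only the bias-reduction effect, not real hardware behavior.

```python
# Software mock-up of bias reduction by XORing independent noisy bit streams,
# as done with multiple ring oscillators in hardware TRNG designs.
import random

def biased_bits(p_one, n, rng):
    """Simulate a noisy source that emits 1 with probability p_one."""
    return [1 if rng.random() < p_one else 0 for _ in range(n)]

rng = random.Random(0)
n = 100_000
streams = [biased_bits(0.6, n, rng) for _ in range(4)]        # four biased sources
combined = [b0 ^ b1 ^ b2 ^ b3 for b0, b1, b2, b3 in zip(*streams)]

print(sum(streams[0]) / n)   # ~0.60: raw bias of a single stream
print(sum(combined) / n)     # much closer to 0.50 after XOR combination
```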

Algorithmic Pseudo-Random Number Generators

Algorithmic pseudo-random number generators (PRNGs) are deterministic computational algorithms that produce sequences of numbers exhibiting statistical properties similar to those of true random numbers, such as uniformity and independence. These generators start from an initial seed value and apply a fixed transition function to derive subsequent outputs, ensuring reproducibility given the same seed. Unlike true random number generators, PRNGs are fully predictable once the internal state is known, limiting their use in applications requiring unpredictability, such as cryptography, unless specialized cryptographically secure variants are employed. The simplest and historically significant type is the linear congruential generator (LCG), defined by the formula X_{n+1} = (a X_n + c) \mod m, where X_n is the current state, a is the multiplier, c the increment, and m the modulus, all chosen as integers. Introduced by Derrick Lehmer in 1949 for the ENIAC computer, LCGs achieve a maximum period of m under the Hull-Dobell theorem conditions: c and m coprime, a-1 divisible by all prime factors of m, and a-1 divisible by 4 if m is divisible by 4. Despite their efficiency, LCGs often exhibit detectable correlations in higher dimensions, as seen in the infamous RANDU generator with parameters a=65539, c=0, m=2^{31}, which produced points lying on 15 planes in 3D space. More advanced PRNGs address LCG limitations through complex state transitions. The Mersenne Twister, developed by Makoto Matsumoto and Takuji Nishimura in 1997, uses a 624-element state array and bitwise operations to generate numbers with a period of 2^{19937}-1, whose exponent is a Mersenne prime. It employs a "twist" mechanism to update the state and a tempering function for output, passing rigorous statistical tests like Diehard while remaining computationally efficient for non-cryptographic simulations. Other types include lagged Fibonacci generators, which compute X_n = (X_{n-j} \oplus X_{n-k}) \mod 2^m or use addition instead of XOR, offering long periods but potential for short cycles if poorly parameterized. PRNG quality is evaluated by period length, uniformity, serial correlation, and speed, with modern implementations like PCG (permuted congruential generator) combining LCG-like updates with output permutations for improved statistical properties and output mixing. These generators excel in simulations and modeling due to their speed—often generating billions of numbers per second on modern hardware—but require careful seeding from entropy sources to avoid degenerate sequences. Empirical testing reveals that while PRNGs approximate randomness effectively for many purposes, their deterministic nature necessitates validation against application-specific statistical suites to mitigate artifacts.
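RANDU's higher-dimensional flaw has a compact algebraic signature: because a = 65539 = 2^16 + 3, every output satisfies x_{n+2} = 6 x_{n+1} - 9 x_n (mod 2^31), which is exactly why consecutive triples fall onto a few planes. The sketch below verifies this identity; the seed is an arbitrary illustrative choice.

```python
# Sketch illustrating RANDU's structural flaw: a linear relation ties every
# three consecutive outputs together, confining 3D triples to 15 planes.
def randu(seed, n, a=65539, m=2**31):
    x, out = seed, []
    for _ in range(n):
        x = (a * x) % m
        out.append(x)
    return out

xs = randu(1, 1000)
assert all((6 * xs[i + 1] - 9 * xs[i]) % 2**31 == xs[i + 2]
           for i in range(len(xs) - 2))
print("RANDU triples satisfy x[n+2] = 6*x[n+1] - 9*x[n] (mod 2^31)")
```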

Human-Generated Sequences

Human-generated sequences in random number generation involve individuals consciously producing numbers or binary choices through mental effort, such as verbally reciting digits or selecting outcomes without external aids like dice or computers. These attempts typically yield outputs that deviate systematically from true randomness due to cognitive heuristics and perceptual biases, resulting in predictable patterns that fail standard statistical tests for uniformity, independence, and run structure. Experimental evidence consistently reveals specific deviations. For instance, when subjects generate sequences of single digits (1-9 or 0-9), distributions are often non-uniform, with certain digits underrepresented and personally significant numbers overrepresented, alongside reduced repetitions and an excess of alternations compared to chance expectations. In binary sequences (e.g., heads/tails), humans produce fewer long runs and more short alternations than a fair coin would, reflecting a gambler's fallacy-like avoidance of streaks. These patterns persist across healthy adults, with neuropsychological patients exhibiting even greater predictability, as measured by tests like the NIST suite or entropy estimators. Such biases stem from the brain's reliance on pattern recognition and predictability-seeking mechanisms, which conflict with the aperiodic, structureless nature of true randomness. An analysis of 20 subjects generating 300-number sequences (1-10) identified recurrent motifs and serial dependencies, allowing characterization of an "internal generator" that prioritizes local balance over global randomness. While training or incentives, such as in competitive games, can improve performance to levels statistically akin to pseudo-random generators, outputs remain fingerprintable and lower in entropy than physical or algorithmic sources. In practice, human-generated sequences find limited use outside psychological assessment, where tasks like random number generation tests (RNGTs) probe executive function, working memory, and inhibition by quantifying deviations via metrics such as the redundant digit index or longest run length. They are unsuitable for applications requiring high-quality randomness, like cryptography, due to vulnerability to prediction; even trained individuals cannot sustain unpredictability over extended lengths without introducing correlations. Recent comparisons with large language models highlight that both human and AI verbal generations underperform quantum or hardware methods in passing comprehensive randomness batteries.
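Two of the simple measures used in such assessments, alternation rate and longest run, are easy to compute; the example sequence below is invented for illustration (real studies use participant data), and it shows the over-alternation typical of human output.

```python
# Sketch of two measures used in random number generation tasks (RNGTs):
# alternation rate (expected ~0.5 for a fair coin) and longest run length.
def alternation_rate(bits):
    changes = sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    return changes / (len(bits) - 1)

def longest_run(bits):
    best = cur = 1
    for a, b in zip(bits, bits[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

human_like = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1]  # over-alternating
print(alternation_rate(human_like), longest_run(human_like))   # 0.8, 2
```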

Quality Evaluation and Enhancement

Statistical Testing Protocols

Statistical testing protocols for random number generators (RNGs) involve applying batteries of empirical tests to output sequences, assessing whether they exhibit properties expected of independent, uniformly distributed random bits or numbers, such as lack of serial correlation, balanced frequencies, and absence of periodic patterns. These protocols cannot prove true randomness but can detect non-random artifacts from flawed algorithms or hardware, with p-values typically compared against significance levels like 0.01 to reject the null hypothesis of randomness. Failure rates across multiple test instances inform generator quality, often requiring sequences of at least 10^6 to 10^9 bits for reliable detection of subtle biases. NIST Special Publication 800-22 Revision 1a, released in 2010, provides a standardized suite of 15 core statistical tests (with variants yielding up to 188 sub-tests) tailored for cryptographic RNG validation, focusing on binary sequences. Key tests include the frequency (monobit) test for bit balance, the runs test for run length distribution, and the approximate entropy test for predictability, alongside spectral tests like the discrete Fourier transform test to identify periodicities. The suite recommends generating 100 sequences per test for analysis, with passing criteria based on low failure proportions (e.g., less than 1% at α=0.01), and has been applied to hardware like Renesas RA4E2 microcontrollers, confirming compliance in evaluations as of 2023. However, critiques note potential over-reliance on asymptotic approximations, which may inflate Type I errors in finite samples. George Marsaglia's Diehard battery, introduced in 1995, comprises 15-18 tests emphasizing extreme tail behaviors and multidimensional uniformity, such as the birthday spacings test simulating pigeonhole collisions and the overlapping permutations test checking matrix singularity rates in 5x5 arrays of uniform variates. Designed for 32-bit outputs, it requires about 12-80 MB of data per test and has detected flaws in generators like linear congruential ones, though its fixed parameters limit adaptability to modern 64-bit or cryptographic contexts. An extended version, Dieharder (developed around 2004 by Robert G. Brown), incorporates additional tests including the NIST suite and supports raw binary input, facilitating comparisons across RNGs like PCG and MWC, which passed comprehensively in 2017 benchmarks. TestU01, a C library released in 2007 by Pierre L'Ecuyer and Richard Simard, offers hierarchical test batteries—SmallCrush (15 tests), Crush (96 tests), and BigCrush (160 tests)—combining classical goodness-of-fit, entropy, and serial correlation assessments with advanced procedures like the collision test and linear complexity probes. It supports user-defined RNG interfaces and has exposed weaknesses in generators failing BigCrush, such as certain lagged Fibonacci variants, emphasizing long-range dependencies over short-sequence anomalies. Empirical evaluations recommend BigCrush for thorough scrutiny, as smaller suites may overlook lattice structures in pseudorandom outputs. In practice, protocols are often combined; for instance, cryptographic standards mandate NIST compliance alongside entropy assessments, while non-cryptographic simulations may prioritize smaller batteries for computational efficiency. No single suite guarantees security against all attacks, as passing statistical tests does not preclude algebraic predictability exploitable in adversarial settings.
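The logic of the simplest SP 800-22 test, the frequency (monobit) test, fits in a few lines: map bits to ±1, sum, normalize by the square root of the length, and convert to a p-value with the complementary error function. The sequence length and the 0.01 threshold mentioned below are the suite's conventional choices; the sketch itself is illustrative, not the reference implementation.

```python
# Sketch of the NIST SP 800-22 frequency (monobit) test logic.
import math
import random

def monobit_p_value(bits):
    s = sum(1 if b else -1 for b in bits)          # map 0 -> -1, 1 -> +1
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))         # p >= 0.01 is treated as passing

bits = [random.getrandbits(1) for _ in range(10**6)]
print(monobit_p_value(bits))            # typically well above 0.01
print(monobit_p_value([1] * 10**6))     # a constant sequence fails with p ~ 0
```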

Post-Processing and Entropy Extraction Techniques

Post-processing techniques are essential for refining the raw output of true random number generators (TRNGs), which often exhibit statistical biases, serial correlations, or insufficient entropy due to imperfections in physical entropy sources such as thermal noise or oscillator jitter. These methods transform imperfect random data into a uniform, unbiased bit sequence by removing detectable patterns and extracting the available entropy, ensuring suitability for high-stakes applications like cryptographic key generation. However, post-processing cannot generate entropy beyond what is present in the source; over-extraction risks predictable outputs if the raw entropy rate falls below the output length, as quantified by information-theoretic bounds like the leftover hash lemma. NIST Special Publication 800-90B emphasizes validating entropy sources prior to post-processing through rigorous estimation tests to confirm min-entropy levels, preventing underestimation that could compromise security. A foundational debiasing approach is the von Neumann extractor, introduced in 1951, which processes sequential bits in pairs: outputting 0 for a 01 pair, 1 for a 10 pair, and discarding 00 or 11 pairs. This method provably yields unbiased bits with probability 1/2 each, assuming independent input bits with fixed bias p (probability of 1), but at an efficiency of 2p(1-p) output bits per input pair, which drops quadratically for extreme biases (0.25 bits per input bit at p=0.5, approaching 0 as p \to 0 or 1). Iterated variants, such as blocking multiple pairs or using Markov models, improve throughput by modeling dependencies, achieving higher extraction rates while maintaining uniformity, as demonstrated in hardware implementations where raw bits from ring oscillators are debiased to pass NIST statistical test suites. XOR-based folding, another simple technique, combines multiple raw bit streams by bitwise XOR; it effectively reduces bias multiplicatively (e.g., XORing n independent sources of identical bias yields bias proportional to (2p-1)^n) and mitigates some correlations, though it preserves overall entropy rate without amplification, making it suitable for lightweight post-processing in embedded systems. For cryptographic-grade output requiring near-full entropy (close to 1 bit per output bit), advanced conditioning uses cryptographic hash functions as extractors. Hash functions like SHA-256 or SHA-512, applied to blocks of raw data, serve as hash-based extractors under the leftover hash lemma, producing output whose statistical distance from uniform is bounded by \epsilon \approx 2^{-(k - m)/2}, where m is the hash output length and k is the input min-entropy. NIST approves such conditioners (e.g., truncated hashes or HMAC constructions) for random bit generators, provided the input meets validated entropy thresholds from SP 800-90B tests like Maurer's universal statistic or collision estimators; for instance, reseeding deterministic random bit generators (DRBGs) with conditioned TRNG output ensures forward/backward secrecy. Provably secure alternatives include Toeplitz-matrix extractors or Trevisan extractors, which offer information-theoretic guarantees for non-independent sources, enabling high-throughput implementations (e.g., gigabits per second) in quantum or classical TRNGs by optimally squeezing entropy from weakly random inputs. These methods, however, introduce computational overhead, with costs scaling with block size, and require fixed seeds or keys that must themselves be high-entropy to avoid vulnerabilities.
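The von Neumann extractor described above is simple enough to sketch end to end; the bias value and stream length are illustrative, and the simulated "raw" bits stand in for a real hardware source.

```python
# Sketch of the von Neumann extractor: examine non-overlapping bit pairs,
# emit 0 for a 01 pair, 1 for a 10 pair, and discard 00 and 11 pairs.
import random

def von_neumann_extract(bits):
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):   # non-overlapping pairs
        if a != b:
            out.append(1 if a == 1 else 0)     # 10 -> 1, 01 -> 0
    return out

rng = random.Random(1)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]  # p = 0.8
debiased = von_neumann_extract(biased)
print(sum(biased) / len(biased))       # ~0.80 before extraction
print(sum(debiased) / len(debiased))   # ~0.50 after, at reduced output length
```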

Applications and Practical Uses

Cryptographic and Security Implementations

In cryptographic systems, random number generators (RNGs) provide the unpredictability essential for secure key generation, nonces, initialization vectors, and salts, as predictable outputs enable attacks such as key recovery or replay exploits. Cryptographically secure RNGs typically combine true random entropy sources with deterministic expansion mechanisms to produce high-quality bits resistant to inversion or state compromise. The NIST SP 800-90 series establishes standards for these implementations, with SP 800-90A specifying deterministic random bit generators (DRBGs) including CTR_DRBG (using AES-128, AES-192, or AES-256 in counter mode), Hash_DRBG (based on approved hash functions like SHA-256), and HMAC_DRBG (using keyed-hash message authentication codes built on approved hash functions). These DRBGs require periodic reseeding from entropy sources validated under SP 800-90B, which assesses non-deterministic sources like hardware noise for sufficient min-entropy (e.g., at least 0.5 bits per sample after processing). SP 800-90C outlines RBG constructions integrating entropy inputs with DRBGs or conditioning mechanisms to meet security strength levels up to 256 bits. Federal Information Processing Standard (FIPS) 140-2 and its successor FIPS 140-3 mandate approved RNGs for validated cryptographic modules, with Annex C of FIPS 140-2 listing deterministic and non-deterministic options compliant with SP 800-90. For instance, in TLS 1.3 protocol implementations, DRBG-derived random values initialize handshakes to prevent replay attacks, while SSH uses similar RNGs for host keys and challenges. Hardware implementations, such as those in secure elements or TPMs, often employ physical entropy (e.g., ring oscillators) fed into DRBGs to generate ephemeral keys for protocols like VPNs. These mechanisms ensure forward secrecy—where compromise of the current state does not reveal prior outputs—and backward secrecy via reseeding, though efficacy depends on entropy quality and protection against side-channel attacks like timing or power analysis. In practice, libraries like OpenSSL integrate NIST-approved DRBGs for generating 256-bit keys in ECDH exchanges, validated through the Cryptographic Algorithm Validation Program (CAVP).
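At the application level, drawing keys, nonces, and salts from an operating-system-backed CSPRNG looks like the sketch below. It uses Python's secrets module as one example interface; the sizes shown are common choices for illustration, not values mandated by the standards discussed above.

```python
# Sketch: drawing cryptographic values from an OS-backed CSPRNG.
import secrets

aes_key = secrets.token_bytes(32)   # 256-bit symmetric key material
nonce = secrets.token_bytes(12)     # 96-bit nonce, e.g., for an AEAD cipher
salt = secrets.token_hex(16)        # 128-bit salt for password hashing

print(len(aes_key), nonce.hex(), salt)
```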

Scientific Simulations and Monte Carlo Methods

Monte Carlo methods, a class of computational algorithms that rely on repeated random sampling to obtain numerical approximations for deterministic problems with inherent uncertainty, depend fundamentally on random number generators (RNGs) to produce sequences approximating uniform distributions over specified intervals. These methods, originating from statistical physics and nuclear weapons research, model phenomena such as particle interactions or diffusion processes by simulating numerous trials, where each trial's outcome is determined by drawing from probability distributions via RNG outputs. The statistical convergence of Monte Carlo estimates improves with the square root of the number of samples, necessitating vast quantities of random numbers—often billions per simulation—to achieve acceptable precision, as seen in applications like estimating integrals in high-dimensional spaces where analytical solutions are infeasible. Pseudo-random number generators (PRNGs), which produce deterministic sequences indistinguishable from true randomness within finite lengths, dominate scientific simulations due to their computational efficiency, long periods (e.g., exceeding 2^19937 for Mersenne Twister variants), and reproducibility for debugging and validation. Reproducibility allows researchers to rerun simulations with identical seeds to verify results or isolate errors, a feature absent in true RNGs that draw from physical sources like thermal noise. In practice, PRNGs such as linear congruential generators or lagged Fibonacci variants transform uniform pseudo-random variates into samples from target distributions (e.g., via inverse transform sampling for exponentials in queueing models), enabling simulations in fields like particle physics or cosmology. However, PRNG quality—measured by uniformity, independence, and absence of short-range correlations—is paramount; inadequate generators can amplify lattice artifacts or serial correlations, leading to biased estimators, as evidenced by deviations of up to 10% in computed observables from two-dimensional lattice simulations using flawed minimal standard generators. The impact of RNG deficiencies manifests in specific scientific contexts, such as GATE toolkit simulations for medical imaging, where differing generators (e.g., Marsaglia vs. RANLUX) yielded variations in positron emission tomography outputs exceeding statistical noise levels, underscoring the need for generators passing suites like Dieharder for spectral tests. In quantum Monte Carlo methods for electronic structure calculations, low-quality RNGs like RANLUX-0 introduce systematic errors in energy estimates due to inadequate entropy, while higher-luxury variants reduce these to negligible levels, highlighting post-processing techniques like shuffling to extract effective randomness. Comparisons with quantum RNGs, which leverage photon detection for true entropy, show marginal improvements in variance reduction for certain Monte Carlo integration tasks but at the cost of 10-100x slower generation rates, making hybrid approaches—seeding PRNGs with true random bits—common for balancing fidelity and performance. To mitigate risks, simulations incorporate variance reduction strategies intertwined with RNG use, such as antithetic variates (pairing complementary samples from the same PRNG stream) or importance sampling, which redirect computational effort toward high-probability regions while preserving unbiased estimates.
Empirical validation often involves parallel runs with multiple RNGs; for instance, studies in statistical physics confirm that generators failing chi-squared tests for uniformity produce non-ergodic behaviors in long-run averages, deviating from theoretical predictions by orders of magnitude in rare-event simulations like polymer chain folding. Overall, while PRNGs suffice for most applications when rigorously vetted, ongoing advancements in generator design, informed by these simulations, prioritize spectral properties to handle the exponential scaling of sample requirements in multidimensional problems.
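A minimal sketch of PRNG-driven Monte Carlo integration, including the antithetic-variates pairing mentioned above, follows. The toy integrand exp(x) on [0, 1] (exact value e - 1) and the sample counts are illustrative choices.

```python
# Sketch of Monte Carlo integration of exp(x) over [0, 1] with a seeded PRNG,
# comparing a plain estimator with an antithetic-variates estimator.
import math
import random

rng = random.Random(2024)
N = 100_000

# Plain Monte Carlo: average the integrand over N uniform draws.
plain = sum(math.exp(rng.random()) for _ in range(N)) / N

# Antithetic variates: pair each draw u with its complement 1 - u.
anti = 0.0
for _ in range(N // 2):
    u = rng.random()
    anti += math.exp(u) + math.exp(1 - u)
anti /= N

print(plain, anti, math.e - 1)   # both estimates cluster around ~1.71828
```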

Gaming, Lotteries, and Entertainment

In casino gaming, random number generators (RNGs) underpin the fairness of games such as slots and roulette, where outcomes must be unpredictable and independent of prior results. Regulatory bodies like the UK Gambling Commission mandate that RNGs produce "acceptably random" sequences, often verified through statistical tests to prevent bias or manipulation. Independent auditors, including eCOGRA, certify these systems by subjecting them to rigorous testing protocols that simulate millions of cycles, ensuring compliance with standards like ISO/IEC 17025 for impartiality. Slot machines, for instance, continuously generate numbers at high speeds—up to hundreds per second—mapping them to reel positions only upon player input, which isolates results from timing exploits. Lotteries predominantly employ physical true random number generators, such as mechanical ball draws, to achieve verifiable unpredictability, though some digital systems use certified pseudorandom algorithms. The World Lottery Association distinguishes between pseudorandom number generators (PRNGs), which rely on deterministic algorithms seeded by initial values, and true random number generators (TRNGs), which draw from physical entropy sources like atmospheric noise. A notable failure occurred in the U.S. Hot Lotto scandal, where Eddie Tipton, a Multi-State Lottery Association employee, inserted rigged software code in 2010 that exploited the PRNG to predict draws on specific dates, enabling wins totaling $24 million across states including Iowa and Colorado before detection in 2017. Post-scandal audits reinforced hybrid approaches, combining physical draws with cryptographic verification to mitigate insider threats. In video games and broader entertainment software, pseudorandom number generators facilitate procedural content generation, such as randomized levels, enemy behaviors, and loot distribution, enhancing replayability without requiring hardware sources. Classic titles utilized linear congruential generators (LCGs) for event sequencing, balancing computational efficiency with apparent randomness sufficient for non-cryptographic needs. Modern applications extend to mobile and online platforms, where PRNGs seed from system clocks or user inputs to simulate chance events, though vulnerabilities like predictable seeds have prompted developers to incorporate entropy pooling for improved distribution. These implementations prioritize perceptual fairness over absolute randomness, as statistical tests confirm uniformity in outcomes across playthroughs.

Advanced Techniques and Alternatives

Quantum Random Number Generation

Quantum random number generators (QRNGs) exploit inherently probabilistic quantum mechanical processes to produce sequences of bits that are truly unpredictable, drawing directly from phenomena such as photon detection events, vacuum fluctuations, or atomic transitions, in contrast to deterministic pseudorandom generators that rely on algorithmic iteration from a seed. This approach leverages the fundamental indeterminacy of quantum measurements, as formalized in the Born rule and empirically validated through violations of Bell inequalities, ensuring that the output cannot be reproduced even with complete knowledge of the initial state or measurement apparatus. Early demonstrations emerged in the late 1990s alongside advances in quantum optics, with practical implementations using single-photon sources and detectors to measure random outcomes like polarization states or arrival times, achieving bit generation rates from kilobits per second in initial lab setups to over 1 Gbit/s in modern systems. Common methods include optical QRNGs based on the splitting of light at a beam splitter, where the probabilistic transmission or reflection of single photons yields binary outcomes, often enhanced by homodyne or heterodyne detection to amplify weak signals from quantum vacuum states. Phase-based schemes utilize interference in Mach-Zehnder interferometers to encode randomness in phase fluctuations, while device-independent QRNGs employ entangled photon pairs and Bell tests to certify randomness without trusting the devices, providing security against implementation flaws or tampering. Post-processing techniques, such as hashing or Toeplitz extractors, are essential to distill uniform randomness from raw quantum data, mitigating biases from detector inefficiencies or dark counts, with entropy extraction efficiencies reaching up to 80% in optimized setups. Commercial QRNG products, such as ID Quantique's Quantis series introduced around 2005, integrate photon detectors into USB or PCIe modules generating up to 4 Mbit/s of certified random bits, validated through audits for cryptographic use. Similarly, QuintessenceLabs employs quantum tunneling in diode devices for high-speed generation exceeding 1 Gbit/s, targeting enterprise encryption key management. These systems outperform classical RNGs in entropy per bit, as quantum sources avoid the deterministic correlations inherent in thermal noise or oscillator jitter, though challenges persist including sensitivity to temperature variations, the need for active stabilization, and vulnerability to side-channel attacks exploiting timing or power consumption patterns. Ongoing research addresses scalability via integrated photonics, aiming for chip-scale QRNGs with rates in the Tbit/s range for future quantum-secure networks.

Deterministic Low-Discrepancy Sequences

Deterministic low-discrepancy sequences, often termed quasi-random sequences, provide a structured alternative to pseudo-random numbers by generating points that fill multidimensional spaces with maximal uniformity, minimizing the gaps and clustering inherent in stochastic sampling. Unlike pseudo-random number generators, which rely on recursive algorithms to mimic statistical randomness but can exhibit periodic correlations, low-discrepancy sequences are explicitly constructed to reduce the worst-case deviation from uniformity, as quantified by discrepancy metrics such as the star discrepancy D_N^*, defined as \sup_{J \subseteq [0,1]^s} \left| \frac{A(J,N)}{N} - \lambda(J) \right|, where A(J,N) counts points in subregion J after N steps and \lambda(J) is its volume. This deterministic approach ensures reproducibility and avoids the sampling variance of random draws, though it lacks probabilistic guarantees of independence. Prominent constructions include the Halton sequence, introduced in 1960, which extends the one-dimensional van der Corput sequence—based on radical-inverse functions in base b—to higher dimensions by assigning distinct prime bases to each coordinate, yielding discrepancy bounds of order O( (\log N)^s / N ) in s dimensions. Sobol sequences, developed by I. M. Sobol in 1967, improve on this by using direction numbers from primitive polynomials over \mathbb{F}_2, achieving similar logarithmic growth in discrepancy while enhancing equidistribution through bitwise operations, with empirical performance often superior in moderate dimensions up to 50. Faure sequences, proposed in 1982, generalize Halton by employing a single base with permutations across dimensions, offering comparable bounds but with variations in scrambling techniques to mitigate correlations. In quasi-Monte Carlo integration, these sequences replace independent random samples in \int_{[0,1]^s} f(\mathbf{x}) \, d\mathbf{x} \approx \frac{1}{N} \sum_{i=1}^N f(\mathbf{x}_i), exploiting the Koksma-Hlawka inequality to bound errors by the variation of f times D_N^*, potentially converging faster than the O(N^{-1/2}) rate of crude Monte Carlo, especially for smooth integrands in low to moderate dimensions. For instance, in financial option pricing or particle transport simulations, Sobol sequences have demonstrated variance reductions of factors up to 10-100 over pseudorandom counterparts in dimensions below 20, though the (\log N)^s term induces a curse of dimensionality, rendering gains negligible beyond s \approx 100. Scrambled variants, such as randomized QMC, combine determinism with variance estimation by digitally shifting sequences, preserving low-discrepancy properties while enabling error assessment. Despite advantages in deterministic uniformity, low-discrepancy sequences can underperform pseudorandom methods for discontinuous or high-variance functions due to their fixed ordering, which may align poorly with integrand structure, and implementation challenges like base choice in Halton leading to higher effective discrepancy in high dimensions. Libraries such as those in SciPy or MATLAB implement these via direction number tables for Sobol, ensuring bit-reproducible generation, but users must validate against known discrepancy tables for specific N and s. Overall, their utility in random number generation contexts lies in applications demanding reproducible, low-error sampling rather than statistical randomness.
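The radical-inverse construction behind the van der Corput and Halton sequences described above is compact: reverse the digits of the index in a prime base and place them after the radix point. The two-dimensional example with bases 2 and 3 below is illustrative.

```python
# Sketch of the radical-inverse function and a 2D Halton point generator.
def radical_inverse(i, base):
    """Reflect the base-b digits of i about the radix point."""
    inv, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        inv += (i % base) / denom
        i //= base
    return inv

def halton(i, bases=(2, 3)):
    """One s-dimensional Halton point (here s = 2 with prime bases 2 and 3)."""
    return tuple(radical_inverse(i, b) for b in bases)

print([halton(i) for i in range(1, 6)])
# (0.5, 0.333...), (0.25, 0.666...), (0.75, 0.111...), ...
```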

Security Risks and Controversies

Predictability Vulnerabilities and Cryptographic Attacks

Pseudo-random number generators (PRNGs) used in cryptographic applications are vulnerable to predictability if their internal state or parameters can be recovered from output sequences, enabling attackers to predict subsequent bits and compromise keys or nonces. Such state recovery attacks often exploit mathematical structure, as in linear congruential generators (LCGs), where observing a few consecutive outputs allows solving for the multiplier a, increment b, and modulus m via lattice reduction or brute force on low bits. For instance, LCGs employed in the Digital Signature Standard (DSS) permitted secret key recovery after just a handful of signatures by reconstructing the generator state. The 2008 Debian OpenSSL vulnerability (CVE-2008-0166) exemplified implementation flaws leading to predictability: a 2006 patch to suppress memory-checker warnings inadvertently removed the code that mixed fresh entropy into the generator state, leaving the process ID as essentially the only variable input and yielding only about 2^{15} to 2^{16} possible states. This rendered generated keys for SSH, SSL certificates, and DSA signatures predictable, with attackers able to enumerate and crack them; millions of keys were compromised, prompting widespread regeneration. Dual_EC_DRBG, standardized by NIST in 2006 as an elliptic curve-based PRNG, harbored suspected deliberate weaknesses allowing efficient prediction if non-public relationships between curve points (allegedly known only to the NSA) were exploited, effectively embedding a backdoor; documents leaked in 2013 revealed NSA influence in its selection and a $10 million payment to RSA Security to prioritize it, despite known biases and inefficiency. Attacks on Dual_EC required observing 32 to 256 bytes of output to recover state, but with the secret, prediction was feasible across systems using fixed parameters. Hardware RNGs and hybrid systems face similar risks if entropy sources are biased or insufficient, as in virtual machine resets reusing prior states, leading to repeated nonces in TLS handshakes and session hijacking. Cryptographic protocols mitigate these via unpredictability requirements, but flawed designs or seeding—such as low-entropy pools—persistently enable forward prediction, underscoring the need for verified CSPRNGs such as those specified in NIST SP 800-90A.
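For the simplest case of a known modulus, the state-recovery idea above reduces to solving two linear congruences: three consecutive outputs determine the multiplier and increment, after which every future output is predictable. The parameters below are illustrative, not drawn from any deployed system.

```python
# Sketch of LCG parameter recovery from consecutive outputs with a known
# modulus m; requires (x1 - x0) to be invertible mod m (guaranteed here
# because the illustrative m is prime).
def crack_lcg(x0, x1, x2, m):
    a = ((x2 - x1) * pow(x1 - x0, -1, m)) % m   # a = (x2-x1)/(x1-x0) mod m
    c = (x1 - a * x0) % m
    return a, c

m, a, c = 2**31 - 1, 1103515245, 12345          # illustrative parameters
xs = [123456789]
for _ in range(4):
    xs.append((a * xs[-1] + c) % m)

a_rec, c_rec = crack_lcg(xs[0], xs[1], xs[2], m)
assert (a_rec, c_rec) == (a, c)
print("predicted next output matches:", (a_rec * xs[2] + c_rec) % m == xs[3])
```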

Historical Failures and Poor Designs

Early pseudorandom number generators often suffered from structural flaws that produced sequences with detectable patterns, undermining their use in simulations and statistical testing. The RANDU generator, distributed by IBM in the 1960s, employed the linear congruential formula with multiplier a = 65539 and modulus m = 2^{31}, yielding triples of outputs that lie on at most 15 parallel planes in three-dimensional space due to the poor choice of parameters. This deficiency caused erroneous results in simulations, such as artificial lattice structures in physical modeling, persisting in widespread use until the mid-1970s when tests exposed its failings. Implementation errors in cryptographic contexts have similarly compromised randomness. In Netscape Navigator versions 1.0 through 2.0, released in 1994-1995, the Secure Sockets Layer (SSL) protocol seeded its random number generator using only the current time and process identifiers hashed via MD5, enabling attackers to predict session keys and decrypt communications after brute-forcing approximately 2^{16} possibilities per connection. The vulnerability, identified by Phillip Hallam-Baker in 1994 but unaddressed until version 3.0 in 1996, stemmed from inadequate entropy collection, highlighting the risks of deterministic seeding in networked environments. A notable software engineering lapse occurred in Debian's OpenSSL package between 2006 and 2008, where a patch intended to suppress memory-checker warnings inadvertently removed the lines that mixed external entropy (including an uninitialized buffer) into the pool, leaving the process ID as the only variable input. This reduced the effective seed space to roughly 15 bits, rendering generated numbers predictable via exhaustive enumeration and compromising SSH host keys, SSL certificates, and other key material across millions of systems. Discovered by Luciano Bello on May 13, 2008, the bug (CVE-2008-0166) necessitated widespread key regeneration, as affected systems produced colliding keys vulnerable to impersonation attacks. Cryptographic standards have also harbored deliberate weaknesses resembling poor designs. The Dual_EC_DRBG generator, endorsed by NIST in Special Publication 800-90 in 2006, utilized elliptic curve points with parameters suspected of concealing an NSA backdoor, allowing prediction of outputs if a 32-byte secret multiplier were known, as revealed by documents leaked in 2013. The generator's reliance on fixed, unexplained curve points and opaque parameter choices facilitated efficient state recovery by anyone holding the corresponding secret, prompting NIST to withdraw it in 2014 amid evidence of NSA payments to RSA Security for its default inclusion in cryptographic libraries. These cases underscore how parameter selection and entropy management flaws can cascade into systemic security failures, often persisting due to unexamined trust in authoritative implementations.

Hardware Backdoors and Implementation Flaws

Hardware random number generators (RNGs) are susceptible to backdoors, where designers intentionally embed weaknesses that enable prediction of outputs by entities possessing secret knowledge, such as government agencies. A prominent example is the Dual_EC_DRBG algorithm, proposed by the National Security Agency (NSA) in 2006 and standardized by the National Institute of Standards and Technology (NIST) in 2007 as part of SP 800-90A. Analysis following Edward Snowden's 2013 leaks revealed that specific NIST-recommended elliptic curve points in Dual_EC_DRBG likely constituted a backdoor, allowing an attacker with knowledge of a 32-byte secret to predict future outputs efficiently after observing a small amount of prior output. This vulnerability stems from the algorithm's reliance on elliptic curve operations in which the backdoor exploits non-standard generator points, enabling decryption of systems using it for key generation or nonces. Although primarily a deterministic random bit generator (DRBG), Dual_EC_DRBG was implemented in components like smart cards and security modules via libraries such as RSA BSAFE, compromising devices that prioritized it without alternatives. Suspicions of backdoors extend to proprietary hardware RNGs, exemplified by Intel's RDRAND instruction, introduced in Ivy Bridge processors on April 23, 2012, which draws entropy from thermal noise in on-chip circuitry. Post-Snowden revelations fueled concerns that RDRAND could be compromised through NSA influence on standards or supply chains, potentially biasing outputs or enabling selective predictability without detectable statistical anomalies. Independent audits, including those by Taylor Hornby in 2014, demonstrated that over-reliance on RDRAND in operating system entropy pools—such as Linux's—could propagate a backdoor, allowing an attacker to invert XOR-mixed outputs if they control the hardware source. Despite no empirical evidence of tampering emerging from tests like die photography or side-channel analysis, these risks prompted developers to implement mixing with software entropy sources; however, Linux kernel maintainer Linus Torvalds dismissed calls to disable RDRAND in September 2013, arguing that paranoia over unproven backdoors outweighed the benefits and that attackers could compromise systems elsewhere. Beyond intentional backdoors, implementation flaws in hardware RNGs often arise from inadequate entropy harvesting or post-processing, leading to biased or predictable sequences. In August 2021, Bishop Fox researchers disclosed a pervasive flaw in hardware RNGs across billions of Internet of Things (IoT) devices, where implementations failed to condition raw entropy properly, resulting in non-random outputs vulnerable to prediction attacks; for instance, devices using simple ring oscillators without debiasing amplified correlations from environmental factors like temperature. These flaws, affecting chips from vendors including those in the automotive and medical sectors, stemmed from cost-driven designs omitting extractors or hash-based whitening, rendering protocols like TLS handshakes insecure. Empirical tests showed entropy rates dropping below 0.1 bits per bit in affected units, far short of the required uniformity. Such flaws underscore the need for rigorous validation, as hardware RNGs relying on physical phenomena—like avalanche noise or oscillator jitter—can exhibit long-term correlations if not rigorously post-processed; for example, early implementations in some microcontrollers suffered from startup transients producing zero-biased initial bits, exploitable in key derivation.
NIST SP 800-90B, updated in 2018, mandates continuous health tests to detect such deficiencies, yet many deployments bypass this due to performance constraints. To mitigate, best practices include entropy pooling with multiple sources and periodic testing against suites like the NIST STS, ensuring outputs pass Dieharder or TestU01 benchmarks before deployment.

    Feb 22, 2024 · Random number generators (RNGs) are essential components of cryptographic equipment. In particular, they are used to generate keys, ...
  37. [37]
    [PDF] True Randomness Can't Be Left to Chance: Why Entropy Is ...
    A key is strong only to the degree that it is hard to guess or – to put it another way – that it is random.
  38. [38]
    Entropy Sources Based on Silicon Chips: True Random Number ...
    In this survey paper, we present a systematic and comprehensive review of different state-of-the-art methods to harvest entropy from silicon-based devices.
  39. [39]
  40. [40]
    [PDF] On the Entropy of Oscillator-Based True Random Number Generators
    In this paper, we investigate the appli- cability of the different entropy estimation methods for oscillator-based. TRNGs, including the bit-rate entropy, the ...<|separator|>
  41. [41]
    High throughput true random number generator based on ...
    It further enables the dynamic superposition of entropy sources under prescribed conditions, which improves the TRNG throughput while reducing the resource ...
  42. [42]
    True random number generation using the spin crossover in LaCoO 3
    May 31, 2024 · Here we demonstrate a TRNG based on self-oscillations in LaCoO 3 that is electrically biased within its spin crossover regime.
  43. [43]
    An Overview of Spintronic True Random Number Generator - Frontiers
    In this mini review, we introduce the novel physical randomness generating mechanism based on the stochastic switching behavior of magnetic tunnel junctions.
  44. [44]
    [PDF] True Random Number Generators Secure in a Changing Environment
    The high entropy source used in a TRNG can usually be influenced by changes in the physical environment of the device. These changes can include changes in the ...Missing: methods | Show results with:methods
  45. [45]
    A Low-Complexity Start–Stop True Random Number Generator for ...
    Jun 28, 2024 · This paper introduces a low-complexity start–stop true random number generator (TRNG) utilizing jitter in ring oscillators (ROs).
  46. [46]
    Random Bit Generation | CSRC
    May 24, 2016 · NIST's RBG project focuses on generating random numbers for security, using the SP 800-90 series guidelines, and the NIST Randomness Beacon as ...Publications · Guide to the Statistical Tests · News & Updates · Events
  47. [47]
    Hardware Random Number Generators | Blogs
    Jan 26, 2020 · In this article we'll look at how they're generated in modern system-on-chips, best practice for using them and how they can be attacked.
  48. [48]
    [PDF] Generating Random and Pseudorandom Numbers
    Randomness of a sequence is the. Kolmogorov complexity of the sequence (size of smallest Turing machine that generates the sequence) – infinite sequence should.Missing: true | Show results with:true
  49. [49]
    Cryptography - Pseudo-Random Number Generators
    In cryptography, PRNG's are used to construct session keys and stream ciphers. True Randomness is generated from some source such as thermal noise.
  50. [50]
    Linear Congruential Generators - Monte Carlo Method
    The most widely used pseudorandom number generators are linear congruential generators (LCGs). Introduced by Lehmer (1951), these are specified with ...
  51. [51]
    Mersenne twister - ACM Digital Library
    A new algorithm called Mersenne Twister (MT) is proposed for generating uniform pseudoran- dom numbers. For a particular choice of parameters, the algorithm ...
  52. [52]
    Pseudo-Random Number Generator - ScienceDirect.com
    To date, most PRNGs are based on three types of methods: linear congruential generators (LCGs), lagged Fibonacci generators (LFGs) and the Mersenne Twister (MT ...
  53. [53]
    The PCG Paper | PCG, A Better Random Number Generator
    The PCG paper references a paper by Pierre L'Ecuyer, Tables of Linear Congruential Generators of Different Sizes and Good Lattice Structure, which lists ...
  54. [54]
    Analysing Humanly Generated Random Number Sequences
    Such number sequences are not mathematically random, and both extent and type of bias allow one to characterize the brain's “internal random number generator”.
  55. [55]
    Humans cannot consciously generate random numbers sequences
    The experiments show that neuropsychological patients generate sequences that are less random than those of normal subjects.
  56. [56]
    A Re-Examination of “Bias” in Human Randomness Perception - PMC
    Human randomness perception is commonly described as biased. This is because when generating random sequences humans tend to systematically under- and ...
  57. [57]
    A cognitive fingerprint in human random number generation - Nature
    Oct 12, 2021 · We conclude that the mechanism by which humans generate random sequences is (a) highly unique and that (b) this uniqueness is driven by both ...
  58. [58]
    A comparative evaluation of measures to assess randomness in ...
    Jul 1, 2024 · Humans cannot consciously generate random numbers sequences: Polemic study. Medical Hypotheses, 70(1), 182–185. https://doi.org/10.1016/j ...
  59. [59]
    Characterizing human random-sequence generation in competitive ...
    Oct 19, 2021 · Our results demonstrate that human RSG can reach levels statistically indistinguishable from computer pseudo-random generators in a competitive-game setting.
  60. [60]
    A Comparison of Large Language Model and Human Performance ...
    Aug 19, 2024 · Random Number Generation Tasks (RNGTs) are used in psychology for examining how humans generate sequences devoid of predictable patterns.
  61. [61]
    Assessment of Human Random Number Generation for Biometric ...
    In this paper, we show that there is a distinction between the random numbers generated by different people who provide the discrimination capability.
  62. [62]
    [PDF] TestU01: A C Library for Empirical Testing of Random Number ...
    We introduce TestU01, a software library implemented in the ANSI C language, and offering a collection of utilities for the empirical statistical testing of ...
  63. [63]
  64. [64]
    Further analysis of the statistical independence of the NIST SP 800 ...
    Dec 15, 2023 · A p -value is defined as the probability of obtaining results at least as extreme as the observed, in this context it represents the probability ...
  65. [65]
    Robert G. Brown's General Tools Page - Duke Physics
    Dieharder is a random number generator testing suite, designed to test generators and make it easy to time and test them.
  66. [66]
    DIEHARDER random number generator test results for PCG and MWC
    Jul 10, 2017 · George Marsaglia developed the DIEHARD battery of tests in 1995. Physics professor Robert G. Brown later refined and extended Marsaglia's ...
  67. [67]
    TestU01: A C library for empirical testing of random number generators
    We introduce TestU01, a software library implemented in the ANSI C language, and offering a collection of utilities for the empirical statistical testing of ...
  68. [68]
    Testing non-cryptographic random number generators: my results
    Aug 22, 2017 · TestU01. Another well-established framework is L'Ecuyer's TestU01. I run TestU01 in “big crush” mode using different seeds. Only when I see ...
  69. [69]
    Statistical testing of random number generators and their ... - arXiv
    The NIST statistical test suite (SP 800-22) rukhin2001statistical is the best known and widely used. This suite contains 15 tests, some of which have multiple ...
  70. [70]
    Statistical testing of random number generators and their ...
    Mar 27, 2024 · We begin by performing intensive tests on three RNGs—the 32-bit linear feedback shift register (LFSR), Intel's 'RDSEED,' and IDQuantique's ' ...
  71. [71]
    [PDF] Recommendation for the Entropy Sources Used for Random Bit ...
    The submitter provides the following inputs for entropy estimation, according to the requirements presented in Section 3.2.4. Page 18. NIST SP 800-90B.
  72. [72]
    [PDF] A generalization of the Von Neumann extractor - arXiv
    Jan 7, 2021 · Said differently, de-biasing a biased sequence is about extracting the randomness from the aforementioned biased sequence to produce a new ...
  73. [73]
  74. [74]
    Entropy extractor based high-throughput post-processings for True ...
    Sep 5, 2025 · Therefore, we implement two information-theoretically provable entropy extractors: Toeplitz extractor and Trevisan extractor catering to various ...
  75. [75]
    Overview of NIST RNG Standards (90A, 90B, 90C, 22) | CSRC
    The NIST standards are: SP 800-90A (deterministic generators), SP 800-90B (entropy sources), SP 800-90C (RBG constructions), and SP 800-22 (statistical test ...
  76. [76]
    Cryptographic Algorithm Validation Program CAVP
    Algorithm specifications for current FIPS-approved and NIST-recommended random number generators are available from the Cryptographic Toolkit.
  77. [77]
    3.4. Using the Random Number Generator - Red Hat Documentation
    In order to be able to generate secure cryptographic keys that cannot be easily broken, a source of random numbers is required. Generally, the more random ...<|separator|>
  78. [78]
    [PDF] Introduction to Random Numbers and The Monte Carlo Method
    In order to use the Monte Carlo method, we need to be able to generate random numbers; that is, a sequence of numbers with the property that it is not possible ...
  79. [79]
    Quality of random number generators significantly affects results of ...
    On one hand, the nature of Monte Carlo simulations tends to randomize the use of any generator, as it uses random numbers for a number of purposes, including ...
  80. [80]
    Random Number Generators and Monte Carlo Method - CS 357
    Random Number Generators (RNG) are algorithms or methods that can be used to generate a sequence of numbers that cannot be reasonably predicted.
  81. [81]
    Generating Random Numbers - Monte Carlo Methods in Practice
    ... important to mention that having a good random number generator is important to guarantee the quality of the output of the Monte Carlo method. We will ...
  82. [82]
  83. [83]
    Selection of random number generators in GATE Monte Carlo toolkit
    In this study, we used the random number generators in identical simulations in order to check their possible effect on the outputs of simulations.
  84. [84]
    [PDF] Quantum Monte Carlo Simulations with RANLUX Random Number ...
    It is evident that RANLUX-0 gives a systematic error arising from a poor quality RNG. This result seems unsurprising since. RANLUX-0 is theoretically ...
  85. [85]
    Comparing pseudo- and quantum-random number generators with ...
    Sep 20, 2024 · Research Article| September 20 2024 Comparing pseudo- and quantum-random number generators with Monte Carlo simulations
  86. [86]
    [PDF] Analysis of random number generators using Monte Carlo simulation
    Oct 14, 1993 · Here we compare the performance of some popular random number generators by high precision Monte Carlo simulation of the 2-d. Ising model, for ...
  87. [87]
    (PDF) Using random number generators in Monte Carlo simulations
    Mar 25, 2015 · One of the standard tests for Monte Carlo algorithms and for testing random number generators is the two-dimensional Ising model.
  88. [88]
    Bingo and casino technical requirements - 2 - Gambling Commission
    May 5, 2021 · 2.1 Random number generation (and game results) must be 'acceptably random'. · 2.2 Mechanically based RNG games are games that use the laws of ...
  89. [89]
    Ensuring Fair Play with RNG Testing and eCOGRA Certification
    Aug 2, 2024 · At eCOGRA, we provide comprehensive RNG testing services to ensure your online casino games meet stringent regulatory standards and guarantee fair play.
  90. [90]
    What is RNG (Random Number Generator) in Online Casino?
    Jul 17, 2025 · The random number generation algorithm, or RNG, guarantees transparency and an unbiased outcome in online casino games.Game Categories: RNG and... · RNG and IGaming: Use Cases
  91. [91]
    WLA Random chance is the essence of the lottery
    There are two main classifications of RNGs, Pseudo Random Number Generators (PRNG) and True Random Number Generators (TRNG). PRNGs are software-driven and ...
  92. [92]
    Eddie Tipton reveals how he pulled off the biggest lottery scam ever
    Mar 15, 2018 · Over a two-day period last year, Eddie Tipton told investigators how he hijacked U.S. lotteries worth $24 million.
  93. [93]
    How classic games make smart use of random number generation
    Jun 24, 2018 · Super Mario 64: Linear Congruential Generator. A popular source of pseudo-random numbers, provided that they don't produce game-breaking effects ...
  94. [94]
    How, Why and When Random Numbers are used in Video Games...
    May 23, 2023 · Many games use random numbers to enhance their game-play, determine various events, and add more uniqueness to each play-through.
  95. [95]
    [PDF] Random Numbers and Gaming - SJSU ScholarWorks
    Nov 25, 2017 · The usage of random numbers in games is nothing new, but poor implementations and bad business practices have given random numbers a smudge mark ...
  96. [96]
    Quantum random number generation | npj Quantum Information
    Jun 28, 2016 · Quantum physics can be exploited to generate true random numbers, which have important roles in many applications, especially in cryptography.
  97. [97]
    Quantum random number generators | Rev. Mod. Phys.
    Feb 22, 2017 · QRNGs using different quantum phenomena have gone from the lab to the shelves with at least eight existing commercial products ( ID Quantique, ...
  98. [98]
    A Comprehensive Review of Quantum Random Number Generators
    This article provides a review of the existing QRNGs with a focus on their various possible features (eg, device independence, semi-device independence)Missing: "peer | Show results with:"peer
  99. [99]
    A comprehensive review of quantum random number generators
    Dec 13, 2023 · This article provides a review of the existing QRNGs with a focus on their various possible features (eg, device independence, semi-device independence)
  100. [100]
    A Post-Processing Method for Quantum Random Number Generator ...
    Jan 14, 2025 · Quantum Random Number Generators (QRNGs) have been theoretically proven to be able to generate completely unpredictable random sequences, ...Missing: challenges "peer
  101. [101]
    Our Technology - Quintessence Labs
    Nov 14, 2014 · QuintessenceLabs uses quantum tunneling to deliver truly random numbers at 1Gbit/sec, with flexible form factors and compelling costs.Missing: commercial Quantique
  102. [102]
    [PDF] Quantum Random Number Generators in Integrated Photonics
    Published Manuscripts (peer-reviewed). F. Raffaelli, G. Ferranti, D. H. ... In the following decade many implementations taking advantage of similar schemes were ...Missing: challenges | Show results with:challenges
  103. [103]
    [PDF] Quasi-Random Sequences and Their Discrepancies
    Quasi-random (also called low discrepancy) sequences are a deterministic alternative to random sequences foruse in Monte Carlo methods, such as integration ...
  104. [104]
    [PDF] Low-discrepancy sequences: Theory and Applications - arXiv
    Feb 17, 2015 · Methods using low-discrepancy sequences, often called quasi-random sequences, are called Quasi-Monte Carlo methods (QMC). However, to construct ...
  105. [105]
    Quasi-Random Sequences and Their Discrepancies
    Quasi-random (also called low discrepancy) sequences are a deterministic alternative to random sequences for use in Monte Carlo methods.<|separator|>
  106. [106]
    [PDF] Low Discrepancy Sequences and Quasi-Monte Carlo Integration
    Aug 15, 1997 · When nodes of this type are used, quasi-Monte Carlo Integration overcomes some of the downfalls of Monte Carlo Integration.
  107. [107]
    Pseudorandom and Quasirandom Number Generation - MathWorks
    Quasirandom numbers, also known as low discrepancy sequences, generate each successive number as far away as possible from existing numbers in the set.
  108. [108]
    [PDF] Security Analysis of Pseudo-Random Number Generators with Input
    The lack of insurance about the generated random numbers can cause serious damages in cryptographic protocols, and vulnerabilities can be exploited by attackers ...
  109. [109]
    [PDF] “Pseudo-Random” Number Generation within Cryptographic ...
    It has been well accepted that a good notion of pseudorandomness for cryptographic purposes is unpredictability [18, 20, 3, 7]: given an initial sequence ...
  110. [110]
    [SECURITY] [DSA 1571-1] New openssl packages fix predictable ...
    May 13, 2008 · Luciano Bello discovered that the random number generator in Debian's openssl package is predictable. This is caused by an incorrect Debian-specific change to ...
  111. [111]
    Lessons from the Debian/OpenSSL Fiasco - research!rsc
    May 21, 2008 · Debian announced that in September 2006 they accidentally broke the OpenSSL pseudo-random number generator while trying to silence a Valgrind warning.
  112. [112]
    [PDF] Dual EC: A Standardized Back Door - Cryptology ePrint Archive
    Jul 31, 2015 · This paper traces the history of Dual EC including some suspicious changes to the standard, explains how the back door works in real-life.
  113. [113]
    The Many Flaws of Dual_EC_DRBG
    Sep 18, 2013 · This backdoor may allow the NSA to break nearly any cryptographic system that uses it. If you're still with me, strap in. Here goes the long ...
  114. [114]
    [PDF] When Good Randomness Goes Bad: Virtual Machine Reset ...
    Routine cryp- tographic operations such as encryption and signing can fail spectacularly given predictable or repeated random- ness, even when using good long- ...<|separator|>
  115. [115]
    Randomness Improvements for Security Protocols - IETF
    Feb 17, 2020 · Randomness is a crucial ingredient for TLS and related security protocols. Weak or predictable "cryptographically-strong" pseudorandom ...
  116. [116]
    RANDU: A truly horrible random number generator
    Apr 14, 2019 · A truly horrible random number generator called RANDU was commonly used on most of the world's computers. This generator starts with an odd seed.Missing: pseudorandom | Show results with:pseudorandom
  117. [117]
    RANDU: The case of the bad RNG | Why?
    Feb 16, 2016 · It should be impossible to calculate, or guess, from any given sub-sequence, any previous or future values in the sequence. It should be ...
  118. [118]
    DDJ, Jan96: Randomness and Netscape Browser - People @EECS
    Our study revealed serious flaws in Netscape's implementation of SSL that make it relatively easy for an eavesdropper to decode the encrypted communications.
  119. [119]
    Random Number Bug in Debian Linux - Schneier on Security
    May 19, 2008 · On May 13th, 2008 the Debian project announced that Luciano Bello found an interesting vulnerability in the OpenSSL package they were distributing.
  120. [120]
    How the NSA (may have) put a backdoor in RSA's cryptography
    Jan 6, 2014 · The evidence is mounting for Dual_EC_DRBG being well-suited for use as a back door. A working proof of concept backdoor was published in ...
  121. [121]
    The Strange Story of Dual_EC_DRBG - Schneier on Security -
    Nov 15, 2007 · It's possible to implement Dual_EC_DRBG in such a way as to protect it against this backdoor, by generating new constants with another secure ...
  122. [122]
    Torvalds shoots down call to yank 'backdoored' Intel RdRand in ...
    Sep 9, 2013 · The catalyst for the petition seems to be the belief that the RdRand instruction in Intel processors was compromised by the NSA and GCHQ, ...
  123. [123]
    You're Doing IoT Security RNG: The Crack in the… | Bishop Fox
    Aug 5, 2021 · Every IoT device with a hardware random number generator (RNG) contains a serious vulnerability whereby it fails to properly generate random numbers.
  124. [124]
    A Critical Random Number Generator Flaw Affects Billions of IoT ...
    Aug 9, 2021 · A critical vulnerability has been disclosed in hardware random number generators used in billions of Internet of Things (IoT) devices.