Random sequence

A random sequence, in the context of probability theory, is a discrete-time stochastic process consisting of a countable collection of random variables \{X_n\}_{n \in J}, where J is a countable index set such as the natural numbers \mathbb{N} or the integers \mathbb{Z}, each defined on a common probability space. These sequences model phenomena where outcomes at discrete points (e.g., time steps) are subject to probabilistic variation, such as daily stock returns or successive measurements in an experiment. In contrast, within algorithmic information theory, a random sequence refers to an individual string or infinite sequence that lacks compressible patterns, quantified by its Kolmogorov complexity—the length of the shortest computer program capable of producing it—being approximately equal to the sequence's own length, indicating incompressibility.

In probability theory, random sequences are foundational to analyzing asymptotic behaviors and limit theorems. Key properties include stationarity, where the joint distribution of variables remains invariant under time shifts, enabling predictions in time-series models like autoregressive processes. Random sequences also exhibit convergence modes, such as convergence in probability (where P(|X_n - X| > \epsilon) \to 0 as n \to \infty) or almost sure convergence, underpinning theorems like the law of large numbers (sample averages converging to the expected value) and the central limit theorem (normalized sums approaching a normal distribution). Examples include independent and identically distributed (i.i.d.) sequences, like coin toss outcomes, or dependent ones modeling Markov chains, where the next state depends only on the current one.

The concept of Kolmogorov complexity was introduced by Ray Solomonoff in 1960 and independently developed by Andrey Kolmogorov in 1965 and Gregory Chaitin shortly thereafter; this measure captures the idea that random sequences pass all effective statistical tests, such as uniform frequency of symbols and the absence of simple patterns in subsequences. Per Martin-Löf's 1966 refinement, an infinite binary sequence is random if it avoids all "null" sets defined by computable martingales or by effective measure-zero conditions under the uniform measure, ensuring it satisfies laws like the law of large numbers almost surely. This notion contrasts with pseudorandom sequences in computing, which are deterministically generated to mimic randomness for applications like simulation and cryptography but are compressible by design. Both interpretations intersect in applications: probabilistic random sequences inform simulations that approximate algorithmic randomness, while incompressibility tests validate pseudorandom generators in Monte Carlo methods. The study of random sequences bridges probability theory, statistics, and computer science, influencing fields from cryptography to machine learning.

Fundamentals

Definition

A random sequence is a sequence \{X_n\}_{n \in \mathbb{N}} (or more generally \{X_n\}_{n \in \mathbb{Z}}), where each X_n is a random variable defined on a common probability space (\Omega, \mathcal{F}, P). This setup models uncertainty in a sequence of outcomes, where the value of each X_n is not fixed but follows a probabilistic law. The underlying probability space consists of three key components: the sample space \Omega, which is the set of all possible outcomes of the underlying experiment; the \sigma-algebra \mathcal{F}, a collection of subsets of \Omega (called events) that is closed under complementation and countable unions; and the probability measure P, a function from \mathcal{F} to [0,1] satisfying P(\Omega) = 1 and countable additivity over disjoint events. Each random variable X_n: \Omega \to \mathbb{R} (or to another measurable space) is measurable with respect to \mathcal{F}, ensuring that events like \{X_n \leq x\} belong to \mathcal{F} for all real x.

Random sequences are a special case of stochastic processes, specifically discrete-time processes where the index set is the natural numbers or integers, as opposed to continuous-time processes indexed by real numbers. Unlike deterministic sequences, in which each term is a fixed value with no associated uncertainty, random sequences incorporate probabilistic variability, making outcomes unpredictable without knowledge of the realization from \Omega.

Examples of random sequences include the Bernoulli sequence, where each X_n is a random variable taking value 1 with probability p \in (0,1) and 0 otherwise, often modeling repeated independent trials like coin flips. Another example is the uniform random sequence over [0,1], where each X_n follows a uniform distribution with probability density function f(x) = 1 for x \in [0,1] and 0 elsewhere, useful for generating continuous uniform variates.
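
As a concrete illustration, the following Python sketch (a minimal example using NumPy; the seed and sample length are arbitrary) draws finite realizations of both example sequences:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bernoulli(p) sequence: each X_n takes value 1 with probability p, else 0.
p = 0.5
bernoulli_seq = (rng.random(10) < p).astype(int)

# Uniform sequence on [0, 1]: each X_n has density f(x) = 1 on [0, 1].
uniform_seq = rng.random(10)

print(bernoulli_seq)
print(uniform_seq)
```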

Basic Properties

A random sequence \{X_n\}_{n=1}^\infty consists of random variables, each with its own expected value \mathbb{E}[X_n] = \mu_n and variance \mathrm{Var}(X_n) = \sigma_n^2, assuming finite second moments. These moments characterize the central tendency and dispersion of each term in the sequence. For sequences where the expectations converge, \lim_{n \to \infty} \mathbb{E}[X_n] = \mu exists, providing a long-run mean, while the variances indicate the variability at each step.

For independent and identically distributed (i.i.d.) sequences with finite mean \mu, the law of large numbers (LLN) asserts that the sample average converges to the expected value. The weak LLN states that \frac{1}{n} \sum_{i=1}^n X_i \to \mu in probability as n \to \infty, originally formulated by Jacob Bernoulli in 1713 for Bernoulli trials and generalized in modern form. The strong LLN, established by Kolmogorov in 1933, strengthens this to almost sure convergence: \frac{1}{n} \sum_{i=1}^n X_i \to \mu with probability 1. These laws underpin the reliability of empirical averages in probabilistic modeling.

Under similar i.i.d. conditions with finite variance \sigma^2 > 0, the central limit theorem (CLT) describes the asymptotic normality of the standardized sum. Specifically, \sqrt{n} \left( \frac{1}{n} \sum_{i=1}^n X_i - \mu \right) \to \mathcal{N}(0, \sigma^2) in distribution as n \to \infty, a result due to Lindeberg in 1922 for general cases and refined by Lévy. This convergence enables approximations of sums by normal distributions, facilitating inference in large samples.

Stationarity is a key property ensuring temporal invariance in statistical features. A sequence is strictly stationary if the joint distribution of (X_{t_1}, \dots, X_{t_k}) equals that of (X_{t_1 + h}, \dots, X_{t_k + h}) for any shift h and times t_1, \dots, t_k, as defined by Khintchine in 1934. Weak stationarity, or second-order stationarity, is a milder condition requiring constant mean \mathbb{E}[X_t] = \mu, finite constant variance \mathrm{Var}(X_t) = \sigma^2, and covariance depending only on the lag: \mathrm{Cov}(X_t, X_{t+h}) = \gamma(h). Strict stationarity implies weak stationarity when second moments exist, but the converse does not hold.

Ergodicity extends stationarity by equating time averages to ensemble averages. A stationary sequence is ergodic if, for integrable functions f, the time average \frac{1}{n} \sum_{k=0}^{n-1} f(T^k X) converges to \mathbb{E}[f(X)], where T is the shift transformation, per Birkhoff's 1931 theorem. This property ensures that a single realization suffices to estimate population parameters.
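
A short simulation makes these limit theorems tangible. The sketch below (a minimal illustration, assuming NumPy; the exponential(1) source, with mean and variance both equal to 1, is an arbitrary choice) tracks the running sample mean for the LLN and the distribution of standardized sums for the CLT:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0                       # exponential(1): mean 1, variance 1

# Law of large numbers: running sample means drift toward mu.
x = rng.exponential(mu, size=100_000)
running_mean = np.cumsum(x) / np.arange(1, len(x) + 1)
print(running_mean[[99, 9_999, 99_999]])   # approaches 1.0 as n grows

# Central limit theorem: standardized sums of n draws look standard normal.
n, reps = 1_000, 10_000
sums = rng.exponential(mu, size=(reps, n)).sum(axis=1)
z = (sums - n * mu) / (sigma * np.sqrt(n))
print(z.mean(), z.std())                   # approximately 0 and 1
```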

Types of Random Sequences

Independent and Identically Distributed Sequences

An independent and identically distributed (i.i.d.) sequence is a sequence of random variables \{X_n\}_{n=1}^\infty where the random variables X_i and X_j are independent for all i \neq j, and each X_i shares the same distribution function F. This structure implies that the joint distribution function of the first n variables takes the form of a product: P(X_1 \leq x_1, \dots, X_n \leq x_n) = \prod_{i=1}^n F(x_i). Common examples of i.i.d. sequences include the outcomes of successive fair coin flips, which follow a Bernoulli distribution with parameter p = 1/2, and sequences of standard normal random variables, each distributed as \mathcal{N}(0,1).

A key asymptotic property of i.i.d. sequences with finite mean \mu = E[X_i] is Kolmogorov's strong law of large numbers, which states that the sample average converges to the mean: \frac{1}{n} \sum_{i=1}^n X_i \to \mu \quad \text{almost surely as } n \to \infty. This result holds under the sole conditions of independence and an identical finite first moment, without requiring higher-moment assumptions.

For the central limit theorem applied to i.i.d. sequences with finite variance \sigma^2 > 0, the Berry–Esseen theorem provides a quantitative bound on the rate of convergence of the standardized sum to the standard normal distribution in the Kolmogorov distance: \sup_x \left| P\left( \frac{\sum_{i=1}^n X_i - n\mu}{\sigma \sqrt{n}} \leq x \right) - \Phi(x) \right| \leq C \frac{\rho}{\sqrt{n}}, where \Phi is the standard normal CDF, \rho = E[|X_1 - \mu|^3]/\sigma^3 < \infty, and C is a universal constant (e.g., C \approx 0.4785). This bound makes the \mathcal{O}(1/\sqrt{n}) convergence rate explicit, improving on weaker qualitative statements.
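
The Berry–Esseen bound can be probed empirically. The sketch below (a rough illustration, assuming NumPy and SciPy; the Bernoulli(0.3) summands and the constant C ≈ 0.4785 are choices for the example) compares a simulated Kolmogorov distance for standardized binomial sums against the theoretical bound:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, reps = 100, 20_000
p = 0.3                                    # Bernoulli(p) summands
mu, sigma = p, np.sqrt(p * (1 - p))
rho = (p * (1 - p) ** 3 + (1 - p) * p ** 3) / sigma ** 3   # E|X1 - mu|^3 / sigma^3

# Standardized sums of n Bernoulli trials, replicated many times.
sums = rng.binomial(n, p, size=reps)
z = np.sort((sums - n * mu) / (sigma * np.sqrt(n)))

# Empirical Kolmogorov distance between the ECDF of z and Phi.
phi = norm.cdf(z)
upper = np.arange(1, reps + 1) / reps
lower = np.arange(0, reps) / reps
distance = max(np.max(upper - phi), np.max(phi - lower))

print(f"simulated sup-distance: {distance:.4f}")
print(f"Berry-Esseen bound:     {0.4785 * rho / np.sqrt(n):.4f}")
```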

Dependent Sequences

In probability theory, a random sequence exhibits dependence if the joint distribution of its random variables cannot be factored into the product of their marginal distributions, meaning that the occurrence of one variable affects the probabilities of others. This contrasts with independent sequences, where knowledge of past values provides no information about future ones. A prominent class of dependent sequences is Markov sequences, also known as Markov chains, defined for a discrete-time process \{X_n\}_{n=0}^\infty taking values in a state space S such that the conditional distribution of X_{n+1} given the entire history X_1, \dots, X_n depends only on the most recent state X_n:
P(X_{n+1} = j \mid X_1 = i_1, \dots, X_n = i_n) = P(X_{n+1} = j \mid X_n = i_n)
for all n \geq 1 and states i_1, \dots, i_n, j \in S. The one-step transition probabilities p_{ij} = P(X_{n+1} = j \mid X_n = i) form the entries of the transition matrix P = (p_{ij}), a stochastic matrix in which each row sums to 1, capturing the memory-one structure of the dependence.
Classic examples illustrate this dependence. The simple random walk on the integers \mathbb{Z} is a Markov chain where X_{n+1} = X_n + Z_{n+1}, with \{Z_i\} independent and identically distributed steps (e.g., Z_i = +1 or -1 with equal probability 1/2), so the position depends solely on the previous position and an independent increment. Another example is the autoregressive process of order 1 (AR(1)), defined by X_n = \phi X_{n-1} + \varepsilon_n for |\phi| < 1 and \{\varepsilon_n\} independent white noise with mean zero, where each term linearly depends on the immediate predecessor, inducing short-term memory in the sequence.

Stationarity in dependent sequences like Markov chains refers to a probability distribution \pi = (\pi_i)_{i \in S} over the states that remains invariant under the transition dynamics, satisfying the balance equation \pi = \pi P, or componentwise \pi_j = \sum_{i \in S} \pi_i p_{ij} for all j. For irreducible and aperiodic chains on a finite state space, such a stationary distribution exists and is unique, representing the long-run proportion of time spent in each state.

To ensure asymptotic properties such as central limit theorems hold despite dependence, mixing conditions quantify the decay of correlations over time lags. A sequence is \alpha-mixing (strong mixing) if the supremum, over events A in the past \sigma-field and B in the future \sigma-field, of |\mathbb{P}(A \cap B) - \mathbb{P}(A)\mathbb{P}(B)| tends to zero as the lag increases, measuring maximal dependence. A stronger condition is \beta-mixing (absolute regularity), under which the total variation distance between the joint and product measures of past and future \sigma-fields vanishes; \beta-mixing implies \alpha-mixing and enables rates of convergence in statistical inference for dependent data.
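
For a finite chain, the stationary distribution can be computed directly from the balance equation. The sketch below (a minimal example with a hypothetical 3-state transition matrix, assuming NumPy) finds \pi as the normalized left eigenvector of P for eigenvalue 1 and cross-checks it against simulated occupation frequencies:

```python
import numpy as np

# A hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.4, 0.4],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# i.e. the solution of pi = pi P, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()
print("stationary distribution:", pi)

# Cross-check: simulate the chain and measure occupation frequencies.
rng = np.random.default_rng(1)
state, steps = 0, 100_000
counts = np.zeros(3)
for _ in range(steps):
    state = rng.choice(3, p=P[state])
    counts[state] += 1
print("empirical frequencies:  ", counts / steps)
```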

Generation and Testing

Methods of Generation

True random number generators (TRNGs) produce sequences by harnessing inherently unpredictable physical processes as entropy sources, ensuring each bit or number derives from genuine physical randomness rather than deterministic computation. Common sources include thermal noise in electronic components, shot noise in photodiodes, and radioactive decay, where quantum-level unpredictability provides high-quality entropy. These generators are essential for applications requiring non-reproducible outcomes, though they often suffer from lower throughput compared to algorithmic alternatives and require post-processing to remove biases.

In contrast, pseudorandom number generators (PRNGs) are deterministic algorithms that generate long sequences of numbers appearing statistically random, starting from an initial seed value, and are widely used for their efficiency and reproducibility in simulations. A foundational example is the linear congruential generator (LCG), defined by the recurrence X_{n+1} = (a X_n + c) \bmod m, where X_n is the current state, a is the multiplier, c the increment, and m the modulus, typically chosen as a large power of 2 for computational efficiency. LCGs produce uniform distributions over [0, m-1] but can exhibit detectable patterns if parameters are poorly selected.

For security-sensitive uses, cryptographically secure PRNGs (CSPRNGs) extend PRNG principles with strong unpredictability guarantees, ensuring that future outputs cannot be inferred from prior ones even with partial knowledge of the internal state. A seminal CSPRNG is the Blum-Blum-Shub generator, which computes X_{n+1} = X_n^2 \bmod M, where M = pq for large primes p and q each congruent to 3 modulo 4, outputting the least significant bits of X_n; its security relies on the quadratic residuosity assumption. CSPRNGs must resist attacks like state compromise and are standardized to include entropy injection mechanisms.

Seeding initializes a PRNG's internal state with an entropy source, such as system time or hardware noise, to produce varied sequences across runs; without proper seeding, outputs become predictable. For extended operation, reseeding periodically incorporates fresh entropy to maintain unpredictability and prevent state exhaustion, a requirement in standards for long-running generators.

The period of a PRNG, or the length of the sequence before repetition, critically affects its quality; for LCGs, a full period of m is achievable under the Hull-Dobell theorem's conditions: c and m are coprime, a-1 is divisible by all prime factors of m, and a-1 is divisible by 4 if 4 divides m. Alternative parameterizations (with c = 0) select m as a prime and a as a primitive root modulo m, yielding period m-1 and good distributional uniformity.
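
As a minimal, non-cryptographic illustration, the following Python sketch implements an LCG with the Numerical Recipes parameters (a = 1664525, c = 1013904223, m = 2^32), which satisfy the Hull-Dobell conditions and therefore achieve the full period m:

```python
# A minimal LCG sketch. The parameters (from Numerical Recipes) satisfy the
# Hull-Dobell conditions for m = 2**32: gcd(c, m) = 1, a - 1 is divisible by
# 2 (the only prime factor of m), and a - 1 is divisible by 4.
def lcg(seed, m=2**32, a=1664525, c=1013904223):
    """Yield an endless stream of pseudorandom integers in [0, m)."""
    x = seed % m
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42)
# Dividing by the modulus maps the integer stream to floats in [0, 1).
uniforms = [next(gen) / 2**32 for _ in range(5)]
print(uniforms)
```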

Tests for Randomness

Tests for randomness serve to empirically verify whether a given sequence exhibits properties of true randomness, particularly by detecting deviations from uniformity in the distribution of values and from independence between successive elements. These tests are crucial for validating pseudorandom number generators (PRNGs) used in simulations, cryptography, and statistical modeling, as non-random patterns can lead to biased results. Prominent test suites include the Diehard battery, developed by George Marsaglia in 1995 as a comprehensive collection of 15 statistical tests applied to large sequences, and the NIST Special Publication 800-22 suite, which comprises 15 tests (with variants) specifically tailored for binary sequences in cryptographic contexts. Another widely used suite is TestU01, developed by Pierre L'Ecuyer and colleagues, offering batteries of tests including Crush and BigCrush for assessing RNG quality across various dimensions.

The frequency test evaluates uniformity by partitioning a sequence of values assumed to be in [0,1] into k equal intervals and comparing observed frequencies to expected ones via the chi-square goodness-of-fit statistic. This statistic is given by \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}, where O_i is the observed count in the i-th bin and E_i = n/k is the expected count for sequence length n; under the null hypothesis of uniformity, it asymptotically follows a chi-square distribution with k-1 degrees of freedom. A low p-value indicates clustering or gaps, signaling non-uniformity.

The runs test, primarily for binary sequences of 0s and 1s, assesses independence by counting the total number of runs—consecutive streaks of identical bits—and comparing it to the expected value under randomness. For a sequence of length n with proportion p of 1s, the expected number of runs is approximately 2np(1-p) + 1, with variance 2np(1-p)(2np(1-p)-1)/(n-1); the test statistic, often normalized to a z-score, is evaluated against a standard normal distribution, or alternatively a chi-square approximation for small p. Deviations, such as too few or too many runs, suggest serial dependence or bias.

The serial correlation test measures dependence between elements separated by a lag k, computing the autocorrelation coefficient \rho_k = \frac{\mathrm{Cov}(X_n, X_{n+k})}{\mathrm{Var}(X_n)} for a sequence \{X_i\}. Under independence and stationarity, \rho_k should be near zero for k \geq 1; the test typically transforms the sample estimate into a z-score or uses a normal approximation to test the null hypothesis of no correlation, with significance determined by comparing to critical values. Significant nonzero \rho_k reveals periodicities or linear dependencies.

The spectral test targets structural defects in PRNGs, particularly linear congruential generators, by analyzing the lattice structure formed by tuples of successive normalized outputs in the unit hypercube. It measures the maximal distance between adjacent parallel hyperplanes covering these points, equal to the reciprocal of the length of the shortest nonzero vector in the dual lattice; a high figure of merit (closely spaced hyperplanes) indicates good equidistribution by avoiding low-dimensional alignments, while low values expose clustering. Introduced by Coveyou and MacPherson in 1967, this test provides a theoretical figure of merit for LCG quality.

In test suites like Diehard and NIST, p-values quantify the probability of observing a test statistic at least as extreme under true randomness, and they should be approximately uniformly distributed over [0,1] for a sound generator. For suite-wide assessment, the proportion of sequences passing individual tests (e.g., p-value \geq 0.01) should hover near 1 - \alpha, with confidence intervals like \hat{p} \pm 3\sqrt{\hat{p}(1-\hat{p})/m} for m sequences; uniformity of p-values across tests is further verified via a chi-square test on binned p-values.
This approach accounts for multiple testing by focusing on aggregate pass rates rather than per-test adjustments, ensuring overall reliability without inflating Type I errors excessively.
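
As an illustration of the first two tests, the sketch below (assuming NumPy and SciPy; the bin count k = 20 and the median-threshold binarization are choices for the example) applies the chi-square frequency test and the runs test to output from NumPy's default generator:

```python
import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(7)
u = rng.random(10_000)           # sequence under test, assumed uniform on [0,1)

# Frequency (chi-square) test: k equal bins, observed vs expected counts.
k = 20
observed, _ = np.histogram(u, bins=k, range=(0.0, 1.0))
expected = len(u) / k
stat = np.sum((observed - expected) ** 2 / expected)
p_freq = chi2.sf(stat, df=k - 1)

# Runs test on the derived binary sequence (1 if above the median).
bits = (u > np.median(u)).astype(int)
n = len(bits)
p_hat = bits.mean()
runs = 1 + np.count_nonzero(bits[1:] != bits[:-1])
mean_runs = 2 * n * p_hat * (1 - p_hat) + 1
var_runs = 2 * n * p_hat * (1 - p_hat) * (2 * n * p_hat * (1 - p_hat) - 1) / (n - 1)
z = (runs - mean_runs) / np.sqrt(var_runs)
p_runs = 2 * norm.sf(abs(z))

print(f"frequency test p-value: {p_freq:.3f}")
print(f"runs test p-value:      {p_runs:.3f}")
```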

Applications

In Probability and Statistics

Random sequences form the foundation of many theoretical developments in probability and statistics, particularly in the analysis of estimators, inference procedures, and predictive models derived from sequences of random variables. In asymptotic theory, random sequences enable the study of the large-sample behavior of statistical estimators, ensuring properties like consistency and asymptotic normality under suitable conditions. For instance, the method of moments relies on equating sample moments from a random sequence to population moments to estimate parameters; given existence of the moments and mild regularity conditions, such estimators are consistent, converging in probability to the true parameter as the sequence length increases. Furthermore, these estimators can achieve asymptotic efficiency in certain parametric families, attaining the Cramér-Rao lower bound in the limit, though they may underperform maximum likelihood estimators in finite samples.

Empirical processes, constructed from independent and identically distributed (i.i.d.) random sequences, provide tools for uniform convergence results essential to nonparametric inference. The Glivenko-Cantelli theorem exemplifies this, stating that for an i.i.d. sequence X_1, \dots, X_n from a distribution function F, the empirical cumulative distribution function F_n(x) = n^{-1} \sum_{i=1}^n \mathbf{1}_{\{X_i \leq x\}} satisfies \sup_{x \in \mathbb{R}} |F_n(x) - F(x)| \to 0 \quad \text{almost surely as } n \to \infty. This uniform strong law underpins the reliability of empirical distribution estimates and extends to broader classes of functions in empirical process theory. Such results facilitate the analysis of statistical functionals, like quantiles or ranks, derived from random sequences without assuming parametric forms.

Bootstrap methods leverage random sequences for resampling-based inference, treating an observed sequence as a population to approximate sampling distributions. Introduced by Efron, the nonparametric bootstrap resamples with replacement from the observed data to generate variability estimates, yielding consistent approximations to the bias and variance of estimators under i.i.d. assumptions. This approach is particularly valuable for complex statistics where analytical distributions are intractable, providing asymptotically valid confidence intervals and hypothesis tests.

In time series analysis, dependent random sequences model temporal structures for forecasting, capturing autocorrelation inherent in real-world data. ARIMA models, developed by Box and Jenkins, integrate autoregressive, differencing, and moving average components to represent non-stationary sequences as stationary processes after transformation, enabling predictions via maximum likelihood or least squares estimation fitted to the sequence. These models assume the transformed sequence is a linear filter of white noise, supporting asymptotic inference on parameters through the law of large numbers and central limit theorem applied to dependent observations.

Hypothesis testing with random sequences often employs sequential methods to minimize sample size while controlling error rates. The sequential probability ratio test (SPRT), formulated by Wald, accumulates log-likelihood ratios from an ongoing random sequence, stopping when the ratio crosses predefined boundaries to decide between hypotheses; it achieves optimality by minimizing expected sample size under both hypotheses among tests with given error probabilities. This procedure is applicable to both i.i.d. and dependent sequences, enhancing efficiency in settings like quality control or clinical trials.
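
A minimal sketch of the nonparametric bootstrap (assuming NumPy; the exponential sample and the median as the target statistic are arbitrary choices) estimates a standard error and a percentile confidence interval by resampling:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.exponential(scale=2.0, size=200)    # observed i.i.d. sample

# Nonparametric bootstrap: resample with replacement, recompute the statistic,
# and read off its sampling variability from the replicates.
B = 5000
boot_medians = np.array([
    np.median(rng.choice(data, size=len(data), replace=True))
    for _ in range(B)
])

se = boot_medians.std(ddof=1)                      # bootstrap standard error
lo, hi = np.percentile(boot_medians, [2.5, 97.5])  # percentile 95% CI
print(f"median = {np.median(data):.3f}, SE = {se:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```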

In Computing and Simulation

In computing and simulation, random sequences are essential for approximating complex integrals and solving optimization problems through probabilistic methods. Monte Carlo methods rely on sequences of random numbers to estimate integrals by averaging evaluations over randomly sampled points in the integration domain, providing unbiased estimators whose standard error shrinks in proportion to the inverse square root of the sample size. To enhance efficiency, variance reduction techniques such as antithetic variates pair random samples with their "negatives" (e.g., using U and 1 - U from uniform distributions) to induce negative correlation and reduce estimator variance, often achieving up to 50% improvement in low dimensions.

Random walks, generated from random sequences, underpin algorithms for optimization and network analysis by simulating paths that explore solution spaces stochastically. In simulated annealing, a random walk perturbs the current state of a system, accepting worse states with a probability depending on a decreasing "temperature" parameter, allowing escape from local optima to converge toward global minima in combinatorial problems like the traveling salesman problem. For graph analysis, random walks sample nodes proportionally to degree, enabling efficient estimation of properties such as centrality or connectivity in large networks without exhaustive enumeration.

In cryptography, random sequences provide the unpredictability required for secure key generation and nonce creation, ensuring that each encryption session uses unique material resistant to replay attacks. Nonces, typically generated from cryptographically secure random sources, initialize stream ciphers to produce distinct keystreams from the same key. The RC4 stream cipher, for instance, uses a pseudo-random permutation initialized by a key to generate a byte stream that is XORed with the plaintext, offering fast software implementation, though it is now deprecated due to keystream biases.

Random sequences enable the simulation of stochastic processes by generating sample paths that mimic real-world variability in diffusion and queueing systems. In diffusion models, sequences of normal variates drive Brownian motion paths via Euler-Maruyama discretization, where increments \Delta W_t = \sqrt{\Delta t} \cdot Z with Z \sim \mathcal{N}(0,1) approximate continuous trajectories for pricing financial derivatives or modeling particle transport. Queueing theory simulations use random arrival and service times drawn from exponential distributions to replicate system performance, estimating metrics like average wait times in M/M/1 queues through multiple runs of the process.

In gaming and graphics, pseudo-random number generators (PRNGs) facilitate procedural content generation to create varied textures and levels dynamically, reducing storage needs while enhancing replayability. Perlin noise, a gradient-based procedural noise function, interpolates smooth, organic patterns from random gradients to synthesize realistic terrains, clouds, or marble effects in real-time rendering. For level design, PRNGs seed algorithms that assemble modular elements, such as mazes or open game worlds, ensuring structural coherence with controlled randomness.
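
The sketch below (assuming NumPy; the integrand e^{-x^2} on [0,1] is an arbitrary choice) contrasts plain Monte Carlo integration with the antithetic-variates estimator described above:

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda x: np.exp(-x**2)     # integrand on [0, 1]; true value is about 0.74682
n = 50_000

# Plain Monte Carlo: average f over n uniform samples.
u = rng.random(n)
plain = f(u)

# Antithetic variates: pair each U with 1 - U; for a monotone integrand the
# paired evaluations are negatively correlated, shrinking the variance.
u_half = rng.random(n // 2)
anti = 0.5 * (f(u_half) + f(1.0 - u_half))

print(f"plain estimate:      {plain.mean():.5f} (est. var {plain.var() / n:.2e})")
print(f"antithetic estimate: {anti.mean():.5f} (est. var {anti.var() / (n // 2):.2e})")
```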

Historical Development

Early History

The concept of random sequences emerged from early probabilistic inquiries into repeated independent trials, with Jacob Bernoulli's Ars Conjectandi (1713) providing a foundational treatment of sequences of Bernoulli trials, where each event follows a fixed probability independently of others, laying the groundwork for independent and identically distributed (i.i.d.) sequences. In the early 19th century, Pierre-Simon Laplace advanced this framework in his Théorie Analytique des Probabilités (1812), developing laws of probability that described the stability of frequencies in long sequences of trials, emphasizing the convergence of empirical outcomes to theoretical expectations. A notable precursor to random sequences in geometric probability was Georges-Louis Leclerc, Comte de Buffon's needle problem, introduced in 1777, which modeled the random placement of a needle on a lined floor to estimate π through the probability of intersections, effectively simulating a sequence of random orientations and positions.

In the early 20th century, Émile Borel formalized aspects of infinite random sequences in his 1909 work, proving a strong law of large numbers that establishes the almost sure convergence of relative frequencies in infinite series of independent events, such as coin tosses, highlighting the regularity in ostensibly random infinite sequences. This axiomatic rigor was solidified by Andrey Kolmogorov's Foundations of the Theory of Probability (1933), which established probability as a measure on event spaces, enabling a precise mathematical definition of random sequences as realizations of stochastic processes. Concurrently, in the 1930s, Aleksandr Khinchin applied ergodic theory to random sequences, proving theorems on the equivalence of time averages and ensemble averages for stationary processes, which connected dynamical systems to probabilistic sequences in statistical physics.

During the 1940s, John von Neumann contributed practical realizations of random sequences through computational means, developing early pseudorandom number generation routines for the ENIAC computer alongside Stanislaw Ulam to support Monte Carlo simulations, marking the transition to machine-generated sequences. Additionally, von Neumann introduced a debiasing extractor in 1951 to derive unbiased random bits from biased sources, such as imperfect coin tosses, by pairing outcomes and selecting based on order (e.g., outputting 0 for HT and 1 for TH while discarding HH and TT), ensuring uniformity in sequences derived from non-ideal randomness.
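
Von Neumann's pairing procedure is simple enough to state in a few lines. The following sketch (a minimal illustration; the 80% bias level is arbitrary) extracts unbiased bits from a biased binary source:

```python
import numpy as np

def von_neumann_extract(bits):
    """Von Neumann debiasing: each non-overlapping pair of independent biased
    bits yields at most one output bit (HT = 10 -> 0, TH = 01 -> 1; the equal
    pairs 00 and 11 are discarded). HT and TH are equally likely, so the
    output is unbiased whatever the source bias."""
    pairs = zip(bits[::2], bits[1::2])
    return [0 if a == 1 else 1 for a, b in pairs if a != b]

# A heavily biased source (80% ones) still yields roughly balanced output.
rng = np.random.default_rng(0)
biased = (rng.random(10_000) < 0.8).astype(int).tolist()
unbiased = von_neumann_extract(biased)
print(len(unbiased), sum(unbiased) / len(unbiased))   # fraction of 1s near 0.5
```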

Modern Approaches

In the mid-20th century, the concept of algorithmic randomness emerged as a foundational theoretical framework for understanding randomness in individual sequences, building on earlier notions from probability theory and computability theory. Andrey Kolmogorov introduced the idea of descriptive complexity in 1965, defining the complexity of a finite string as the length of the shortest program that can produce it, thereby providing a measure of incompressibility as a proxy for randomness. This was extended to sequences through the notion of algorithmic randomness, where a sequence is considered random if it cannot be compressed by any effective procedure. Per Martin-Löf's 1966 formalization, a sequence is random if it avoids all effectively null sets—subsets of measure zero that can be enumerated by a computer—thus capturing sequences that pass every computable statistical test for randomness.

Advancements in pseudorandom number generators (PRNGs) during the late 20th century addressed the need for sequences with exceptionally long periods and high-dimensional uniformity, essential for large-scale simulations and statistical computing. The Mersenne Twister, developed by Makoto Matsumoto and Takuji Nishimura in 1997, exemplifies this progress; it generates sequences with a period of 2^{19937} - 1, ensuring non-repetition for practical purposes, while maintaining equidistribution in up to 623 dimensions. This PRNG became widely adopted due to its balance of speed, long period, and statistical quality, influencing implementations in languages like Python and R.

The 2000s marked the rise of quantum random number generators (QRNGs), leveraging quantum phenomena to produce truly random bits immune to deterministic prediction. Early QRNGs exploited vacuum fluctuations or single-photon detection, but later developments using photon entanglement provided enhanced security and entropy extraction. For instance, one demonstration utilized photon-number-path entanglement, generated via two-photon quantum interference, to yield random outcomes from photon detection, achieving bit rates suitable for cryptographic applications. These systems ensure unpredictability rooted in the quantum measurement process, distinguishing them from classical PRNGs.

In the era of big data, scalable testing suites became crucial for validating PRNGs and QRNGs against sophisticated failure modes. TestU01, a comprehensive library released in 2007 by Pierre L'Ecuyer and Richard Simard, offers over 160 statistical tests, including battery suites like SmallCrush and BigCrush, to detect correlations in sequences up to trillions of bits. Parallelism in generation also advanced, with methods like counter-based PRNGs enabling GPUs to produce massive random datasets efficiently for machine learning and simulations.

Theoretical work in the 2010s further refined algorithmic randomness for infinite sequences, emphasizing relativization and resource-bounded variants. Rodney Downey and Denis Hirschfeldt's 2010 monograph systematized these concepts, showing equivalences between Martin-Löf tests, Kolmogorov complexity characterizations, and betting strategies (martingales) for infinite binary sequences, while exploring lowness and truth-table reductions. Subsequent research integrated these with computability theory, providing tools to analyze randomness via effective Hausdorff dimensions and Schnorr randomness, enhancing understanding of generic properties in uncountable spaces.
