
Distribution function

In probability theory, the distribution function of a real-valued random variable X, also known as the cumulative distribution function (CDF), is defined as F_X(x) = P(X \leq x), which gives the probability that X takes on a value less than or equal to x. This function provides a complete characterization of the probability distribution of X, applicable to both discrete and continuous random variables. The CDF possesses key mathematical properties that ensure its utility in modeling uncertainty: it is non-decreasing in x, right-continuous, and satisfies \lim_{x \to -\infty} F_X(x) = 0 and \lim_{x \to \infty} F_X(x) = 1, with 0 \leq F_X(x) \leq 1 for all x. For discrete random variables, the probability mass function (PMF) can be recovered from the CDF via P(X = x_i) = F_X(x_i) - F_X(x_i^-), where F_X(x_i^-) = \lim_{y \to x_i^-} F_X(y) denotes the left-hand limit of the CDF at x_i. In the continuous case, the probability density function (PDF) is the derivative of the CDF, f_X(x) = \frac{d}{dx} F_X(x), provided the CDF is absolutely continuous. Distribution functions are essential for computing probabilities of intervals, such as P(a < X \leq b) = F_X(b) - F_X(a), and form the basis for statistical inference, simulation, and deriving summary quantities such as expected values and higher moments. They also enable the transformation of random variables and the study of convergence in distribution, underpinning applications in fields ranging from finance to engineering.

In Probability and Statistics

Definition

In probability theory, the distribution function of a real-valued random variable X, also known as the cumulative distribution function (CDF) and denoted F_X(x), is formally defined as F_X(x) = P(X \leq x), where P denotes the underlying probability measure. This definition encapsulates the cumulative probability that the random variable takes a value less than or equal to x. For the univariate case, F_X: \mathbb{R} \to [0,1] is a non-decreasing function that is right-continuous at every point, satisfying the boundary conditions \lim_{x \to -\infty} F_X(x) = 0 and \lim_{x \to \infty} F_X(x) = 1. The distribution function F_X uniquely determines the probability distribution of X, as it fully specifies the probability measure induced by X on the real line via the Carathéodory extension theorem.
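As a hedged illustration of this definition (an illustrative sketch, not part of the formal treatment), the following Python code estimates F_X(x) = P(X \leq x) for an exponential random variable by Monte Carlo and compares the estimate with the closed form 1 - e^{-\lambda x}; the rate \lambda = 1 and the sample size are arbitrary choices.

```python
import math
import random

def exponential_cdf(x, lam=1.0):
    """Closed-form CDF of an Exponential(lam) random variable."""
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

def empirical_probability(samples, x):
    """Monte Carlo estimate of P(X <= x) from i.i.d. samples."""
    return sum(1 for s in samples if s <= x) / len(samples)

random.seed(0)
lam = 1.0
samples = [random.expovariate(lam) for _ in range(100_000)]

for x in (0.5, 1.0, 2.0):
    print(f"x={x}: P(X<=x) ~ {empirical_probability(samples, x):.4f}, "
          f"exact {exponential_cdf(x, lam):.4f}")
```

The estimated and exact values agree to roughly the Monte Carlo error O(1/\sqrt{n}), reflecting that the CDF is exactly the probability being sampled.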

Properties

The cumulative distribution function F_X(x) of a random variable X is non-decreasing: if x < y, then F_X(x) \leq F_X(y). This follows directly from the definition, since the event \{X \leq x\} is a subset of \{X \leq y\} when x < y, so P(X \leq x) \leq P(X \leq y). Additionally, F_X is right-continuous at every point x \in \mathbb{R}, meaning \lim_{y \to x^+} F_X(y) = F_X(x); right-continuity is the convention that keeps F_X consistent with the probability measure it induces. The boundary conditions are \lim_{x \to -\infty} F_X(x) = 0 and \lim_{x \to \infty} F_X(x) = 1, reflecting that the total probability over the entire real line is 1, with no probability mass escaping to infinity. Probabilities for intervals and points can be computed from the CDF: for a < b, P(a < X \leq b) = F_X(b) - F_X(a), and P(X = a) = F_X(a) - \lim_{y \to a^-} F_X(y), so the probability of the single point a equals the size of the jump discontinuity of F_X at a (zero when F_X is continuous there). Finally, the CDF uniquely determines the distribution of X: two random variables have the same distribution if and only if their CDFs coincide. This uniqueness guarantees that the CDF fully characterizes the probability law induced by X.
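A minimal sketch of these properties, assuming a fair six-sided die as the working example (an arbitrary choice): it checks monotonicity on a grid of points, the interval formula P(a < X \leq b) = F(b) - F(a), and the identification of P(X = a) with the jump of F at a.

```python
from fractions import Fraction

def die_cdf(x):
    """CDF of a fair six-sided die: F(x) = (# faces <= x) / 6."""
    return Fraction(sum(1 for face in range(1, 7) if face <= x), 6)

# Monotonicity: F is non-decreasing along a grid of test points.
grid = [k / 2 for k in range(-2, 16)]
assert all(die_cdf(a) <= die_cdf(b) for a, b in zip(grid, grid[1:]))

# Interval probability: P(2 < X <= 5) = F(5) - F(2) = 3/6.
assert die_cdf(5) - die_cdf(2) == Fraction(3, 6)

# Point mass: P(X = 4) equals the jump F(4) - F(4-), with the left
# limit approximated by evaluating just below 4.
assert die_cdf(4) - die_cdf(4 - 1e-9) == Fraction(1, 6)
print("all CDF property checks passed")
```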

Examples

The cumulative distribution function (CDF) gives the probability that a random variable takes on a value less than or equal to a given point, and explicit forms for common distributions reveal their distinct behaviors, such as jumps for discrete cases and smooth increases for continuous ones.

For the Bernoulli distribution, which describes a random variable X taking value 1 with success probability p (where 0 \leq p \leq 1) and 0 otherwise, the CDF is piecewise:

F_X(x) = \begin{cases} 0 & x < 0 \\ 1-p & 0 \leq x < 1 \\ 1 & x \geq 1 \end{cases}

This CDF is constant at 0 for x < 0, jumps by 1-p at x = 0 to reflect the probability mass there, remains constant until it jumps by p at x = 1 to reflect the mass at the success outcome, and equals 1 thereafter, directly interpreting P(X \leq x) as the cumulative probability up to x.

The uniform distribution on the interval [a, b] (with a < b) models equally likely outcomes across a continuous range, yielding the CDF:

F_X(x) = \begin{cases} 0 & x < a \\ \frac{x - a}{b - a} & a \leq x < b \\ 1 & x \geq b \end{cases}

The CDF interpolates linearly between 0 and 1 over [a, b], so F_X(x) is the proportion of the interval covered up to x, rising steadily to reflect uniformity.

For the exponential distribution with rate parameter \lambda > 0, often used for waiting times, the CDF is:

F_X(x) = \begin{cases} 1 - e^{-\lambda x} & x \geq 0 \\ 0 & x < 0 \end{cases}

This is obtained by integrating the density \lambda e^{-\lambda x} from 0 to x; F_X(x) is the probability that the waiting time does not exceed x, starting at 0 and approaching 1 asymptotically as x grows.

The normal distribution, parameterized by mean \mu and standard deviation \sigma > 0, has a CDF expressed using the error function \operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2} \, dt:

F_X(x) = \frac{1}{2} \left[ 1 + \operatorname{erf}\left( \frac{x - \mu}{\sigma \sqrt{2}} \right) \right]

This form has no closed elementary expression and is computed numerically; the symmetry of the bell-shaped density gives F_X(\mu) = 0.5, centering the accumulation of probability at the mean.

In practice, the empirical distribution function approximates the true CDF from a sample \{X_1, \dots, X_n\} of independent observations:

F_n(x) = \frac{1}{n} \sum_{i=1}^n I(X_i \leq x)

where I is the indicator function (1 if true, 0 otherwise). This step function jumps by 1/n at each ordered data point, giving the proportion of observations at or below x as a nonparametric estimate of the underlying distribution. For continuous distributions like the uniform, exponential, and normal, the probability density function is the derivative of the CDF, linking cumulative probabilities to local densities.
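The following sketch (illustrative only; parameter values and the sample are arbitrary choices) evaluates the closed-form CDFs above and the empirical distribution function of a simulated sample, using only the Python standard library (math.erf for the normal CDF).

```python
import math
import random

def bernoulli_cdf(x, p):
    return 0.0 if x < 0 else (1 - p if x < 1 else 1.0)

def uniform_cdf(x, a, b):
    return 0.0 if x < a else (1.0 if x >= b else (x - a) / (b - a))

def exponential_cdf(x, lam):
    return 1 - math.exp(-lam * x) if x >= 0 else 0.0

def normal_cdf(x, mu, sigma):
    # F(x) = (1/2) [1 + erf((x - mu) / (sigma * sqrt(2)))].
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def empirical_cdf(sample):
    """Return F_n, where F_n(x) is the fraction of observations <= x."""
    data = sorted(sample)
    return lambda x: sum(1 for s in data if s <= x) / len(data)

random.seed(1)
sample = [random.gauss(0, 1) for _ in range(1000)]
F_n = empirical_cdf(sample)
for x in (-1.0, 0.0, 1.0):
    print(f"x={x:+.1f}: empirical {F_n(x):.3f} vs normal {normal_cdf(x, 0, 1):.3f}")
```

Increasing the sample size tightens the agreement between F_n and the true CDF, in line with the Glivenko-Cantelli behavior of the empirical distribution function.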

In Physics

Definition and Formulation

In physics, the distribution function, often denoted f(\mathbf{r}, \mathbf{v}, t), serves as a phase-space density that quantifies the number of particles in a system at position \mathbf{r}, with velocity \mathbf{v}, at time t. It represents the expected number of particles per unit volume in configuration space and per unit volume in velocity space, such that f(\mathbf{r}, \mathbf{v}, t) \, d^3\mathbf{r} \, d^3\mathbf{v} gives the number of particles within the infinitesimal phase-space volume element d^3\mathbf{r} \, d^3\mathbf{v}. This formulation distinguishes it from probabilistic interpretations by focusing on particle counts rather than normalized probabilities.

The normalization condition for the distribution function is \int f(\mathbf{r}, \mathbf{v}, t) \, d^3\mathbf{r} \, d^3\mathbf{v} = N, where N is the total number of particles in the system, ensuring conservation of particle number in the absence of sources or sinks. To obtain a probability density, one divides by N, yielding a function whose integral over phase space equals unity. Dimensionally, f is a density in six-dimensional phase space: particles per unit spatial volume (length^{-3}) per unit velocity-space volume ((length/time)^{-3}).

The time evolution of the distribution function is governed by the Boltzmann equation,

\frac{\partial f}{\partial t} + \mathbf{v} \cdot \nabla_{\mathbf{r}} f + \mathbf{a} \cdot \nabla_{\mathbf{v}} f = C(f),

where \mathbf{a} is the acceleration due to external forces and C(f) is the collision term accounting for particle interactions. The left-hand side describes the advection of the distribution in phase space, while the right-hand side captures binary collision effects through an integral over scattering probabilities.

The concept of the distribution function originated in the 19th century within kinetic theory, pioneered by James Clerk Maxwell in his 1860 paper "Illustrations of the Dynamical Theory of Gases," where he introduced velocity distributions for molecular motions, and further developed by Ludwig Boltzmann in his 1872 work "Weitere Studien über das Wärmegleichgewicht unter Gasmolekülen," which formalized the evolution equation.
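To make the normalization condition concrete, the sketch below (a rough numerical check with arbitrary, roughly helium-like parameter values, not a production integrator) integrates a spatially homogeneous Maxwellian over velocity space, using the isotropic reduction d^3\mathbf{v} \to 4\pi v^2 \, dv, and recovers the number density n.

```python
import math

# Arbitrary illustrative parameters (SI-like units).
n = 1.0e25             # number density [m^-3]
m = 6.6e-27            # particle mass [kg], roughly helium
kT = 1.38e-23 * 300.0  # Boltzmann constant times temperature [J]

def maxwellian(v):
    """Spatially homogeneous Maxwell-Boltzmann distribution f(v)."""
    norm = n * (m / (2 * math.pi * kT)) ** 1.5
    return norm * math.exp(-m * v * v / (2 * kT))

# Integrate over velocity space: int f d^3v = int_0^inf 4 pi v^2 f(v) dv.
v_thermal = math.sqrt(2 * kT / m)
v_max, steps = 10 * v_thermal, 100_000
dv = v_max / steps
total = 0.0
for i in range(steps):
    v = (i + 0.5) * dv       # midpoint rule
    total += 4 * math.pi * v * v * maxwellian(v) * dv

print(f"recovered density = {total:.4e}  (target n = {n:.4e})")
```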

Applications in Kinetic Theory

In kinetic theory, the distribution function plays a central role in describing the time evolution of particle distributions in dilute gases through the Boltzmann equation, which governs the behavior of a non-equilibrium classical gas via the phase-space distribution function f(\mathbf{r}, \mathbf{c}, t), where \mathbf{r} is position, \mathbf{c} is velocity, and t is time. The equation incorporates both the streaming of particles and binary collisions, under the molecular chaos hypothesis that incoming particle velocities are uncorrelated before collisions. The collision term drives relaxation toward equilibrium while capturing transport phenomena like viscosity and thermal conductivity in rarefied flows.

For collisionless systems, such as plasmas where particle interactions are dominated by long-range electromagnetic fields rather than short-range collisions, the distribution function evolves according to the Vlasov equation, often termed the collisionless Boltzmann equation:

\frac{\partial f}{\partial t} + \mathbf{v} \cdot \nabla_{\mathbf{x}} f + \frac{q}{m} (\mathbf{E} + \mathbf{v} \times \mathbf{B}) \cdot \nabla_{\mathbf{v}} f = 0,

where f is the distribution function in phase space, \mathbf{v} is velocity, \mathbf{x} is position, q and m are the particle charge and mass, and \mathbf{E} and \mathbf{B} are the electric and magnetic fields. This equation models self-consistent plasma dynamics by coupling the distribution to mean-field forces, enabling analysis of phenomena like wave-particle interactions without stochastic collision effects.

By taking moments of the distribution function, i.e. integrating f against powers of velocity over velocity space, kinetic theory derives macroscopic fluid equations, bridging microscopic particle behavior and hydrodynamic descriptions. The zeroth moment yields the continuity equation for mass density, the first moment gives the momentum equation incorporating pressure and stress tensors, and higher moments produce the energy equation along with heat flux terms. Moment hierarchies, such as the 14-moment approximation, provide a systematic way to close the equations for non-equilibrium flows, extending Navier-Stokes models to capture anisotropic effects in kinetic regimes.

Applications of the distribution function appear prominently in modeling free molecular flow within rarefied gases, where the Knudsen number greatly exceeds unity, rendering collisions negligible and allowing particles to traverse the domain without interactions. Here the distribution function reduces to a streaming solution of the collisionless Boltzmann equation, directly informing surface accommodation coefficients and drag forces on objects like spacecraft in the upper atmosphere. In shock waves, the distribution function reveals non-equilibrium structures, such as bimodal velocity profiles across the front, where upstream and downstream populations mix via collisions, enabling prediction of shock thickness and temperature jumps in hypersonic flows.

Despite its foundational role, the Boltzmann equation relies on the molecular chaos assumption, which neglects velocity correlations and breaks down in dense gases where multi-body effects dominate, limiting accuracy for moderately rarefied regimes. To address these limitations, modern computational methods like Direct Simulation Monte Carlo (DSMC) solve the Boltzmann equation numerically by simulating particle trajectories and collisions probabilistically, providing high-fidelity solutions for transitional flows without imposing the molecular chaos assumption a priori. DSMC has become essential for validating kinetic models in applications ranging from microscale devices to re-entry vehicles, offering a scalable alternative to direct moment closures.
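As a concrete, hedged illustration of moment-taking (a one-dimensional sketch with arbitrary parameter values, not a kinetic solver), the code below integrates a drifting Maxwellian over velocity and recovers its density, bulk velocity, and temperature from the zeroth, first, and second velocity moments.

```python
import math

# Drifting 1-D Maxwellian with known density, drift, and temperature
# (arbitrary illustrative values, in units where mass m = 1).
n_true, u_true, kT_over_m = 2.0, 0.5, 1.0

def f(v):
    return (n_true / math.sqrt(2 * math.pi * kT_over_m)
            * math.exp(-(v - u_true) ** 2 / (2 * kT_over_m)))

# Midpoint-rule velocity moments on a truncated grid.
v_lo, v_hi, steps = -10.0, 11.0, 200_000
dv = (v_hi - v_lo) / steps
m0 = m1 = m2 = 0.0
for i in range(steps):
    v = v_lo + (i + 0.5) * dv
    w = f(v) * dv
    m0 += w          # zeroth moment: number density
    m1 += v * w      # first moment: momentum density per unit mass
    m2 += v * v * w  # second moment: kinetic energy related

n = m0
u = m1 / m0           # bulk velocity
kT = m2 / m0 - u * u  # velocity variance equals kT/m for a Maxwellian
print(f"n={n:.4f}, u={u:.4f}, kT/m={kT:.4f}")
```

The recovered values match the inputs, mirroring how the continuity, momentum, and energy equations arise from successive moments of the kinetic equation.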

Equilibrium Distributions

In physics, equilibrium distributions describe the steady-state probability density functions for particle velocities or momenta in systems at thermal equilibrium, arising as solutions to the Boltzmann equation for which the collision term vanishes. These distributions are fundamental to kinetic theory, providing the statistical foundation for thermodynamic properties like temperature and pressure. The classical form, known as the Maxwell-Boltzmann distribution, applies to non-interacting classical particles and was first derived by James Clerk Maxwell for the velocity distribution in an ideal gas.

The Maxwell-Boltzmann distribution for the number density f(\mathbf{v}) of particles with velocity \mathbf{v} in three dimensions is

f(\mathbf{v}) = n \left( \frac{m}{2\pi k T} \right)^{3/2} \exp\left( -\frac{m |\mathbf{v}|^2}{2 k T} \right),

where n is the particle number density, m is the particle mass, k is Boltzmann's constant, and T is the temperature. This isotropic form assumes no preferred direction and maximizes the entropy subject to a constraint on the average kinetic energy, as shown using the maximum entropy principle. Alternatively, it emerges as the equilibrium solution of the Boltzmann equation via the H-theorem, which shows that the functional H = \int f \ln f \, d\mathbf{v} decreases until the Maxwell-Boltzmann form is reached, in accord with the second law of thermodynamics.

For systems involving quantum effects, such as fermions or bosons, the equilibrium distributions modify the classical form to account for indistinguishability and exclusion principles. The Fermi-Dirac distribution for the average occupation number f(\mathbf{p}) of momentum state \mathbf{p} for fermions is

f(\mathbf{p}) = \frac{1}{\exp\left( \frac{\varepsilon(\mathbf{p}) - \mu}{k T} \right) + 1},

where \varepsilon(\mathbf{p}) is the single-particle energy and \mu is the chemical potential; it was derived independently by Enrico Fermi and Paul Dirac in 1926 for ideal quantum gases. For bosons, the Bose-Einstein distribution replaces the plus sign with a minus:

f(\mathbf{p}) = \frac{1}{\exp\left( \frac{\varepsilon(\mathbf{p}) - \mu}{k T} \right) - 1},

originally proposed by Satyendra Nath Bose for photons in 1924 and extended by Albert Einstein to massive particles in 1924-1925, predicting phenomena like Bose-Einstein condensation below a critical temperature.

While the standard Maxwell-Boltzmann distribution is isotropic in the gas's rest frame, shifted variants arise in flowing fluids, such as the drifting Maxwellian f(\mathbf{v}) \propto \exp\left( -\frac{m |\mathbf{v} - \mathbf{u}|^2}{2 k T} \right), where \mathbf{u} is a drift velocity, representing local thermodynamic equilibrium in the rest frame of the fluid. In relativistic regimes, where particle speeds approach the speed of light, the Jüttner distribution generalizes the Maxwell-Boltzmann form to relativistic kinetics, given by f(\mathbf{p}) \propto \exp\left( -\frac{\varepsilon(\mathbf{p})}{k T} \right) with \varepsilon(\mathbf{p}) = \sqrt{(pc)^2 + (mc^2)^2}, as derived by Franz Jüttner in 1911 for relativistic ideal gases.
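The sketch below (arbitrary choices: energies measured in units of kT, chemical potential set to zero for comparability) evaluates the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein occupation factors to show how quantum statistics deviate from the classical exponential at low energies and converge to it at high energies.

```python
import math

def maxwell_boltzmann(eps):
    """Classical occupation factor, eps in units of kT."""
    return math.exp(-eps)

def fermi_dirac(eps, mu=0.0):
    return 1.0 / (math.exp(eps - mu) + 1.0)

def bose_einstein(eps, mu=0.0):
    return 1.0 / (math.exp(eps - mu) - 1.0)  # requires eps > mu

print(f"{'eps/kT':>8} {'MB':>10} {'FD':>10} {'BE':>10}")
for eps in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"{eps:8.1f} {maxwell_boltzmann(eps):10.4f} "
          f"{fermi_dirac(eps):10.4f} {bose_einstein(eps):10.4f}")
```

At eps = 5 kT all three factors nearly coincide, while near eps = 0 the Fermi-Dirac factor saturates below 1 and the Bose-Einstein factor diverges, the latter reflecting the enhanced low-energy occupation behind Bose-Einstein condensation.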

In Measure Theory

Primary Definitions

In measure theory, distribution functions generalize the cumulative distribution functions of probability theory to arbitrary measures on the real line, without requiring normalization to a total mass of 1. For a non-negative measurable function f: \mathbb{R} \to [0, \infty), with \mu denoting Lebesgue measure, the distribution function \lambda_f(t) is defined as \lambda_f(t) = \mu(\{x : f(x) > t\}) for t \geq 0. This measures the Lebesgue content of the superlevel sets of f, and \lambda_f is non-increasing and right-continuous.

For a finite measure \nu on the Borel \sigma-algebra of \mathbb{R}, the distribution function F_\nu(x) is given by F_\nu(x) = \nu((-\infty, x]). This function is non-decreasing and right-continuous, with F_\nu(x) - F_\nu(y) = \nu((y, x]) for y < x, allowing reconstruction of \nu on semi-closed intervals. Unlike in probability theory, F_\nu(+\infty) = \nu(\mathbb{R}) may be any finite positive value, not necessarily 1.

For a locally finite Borel measure \nu on \mathbb{R} (finite on every compact set), \nu((-\infty, x]) may be infinite, so the distribution function is instead defined through increments, for example F(x) = \nu((0, x]) for x \geq 0 and F(x) = -\nu((x, 0]) for x < 0, which satisfies F(b) - F(a) = \nu((a, b]) and is unique up to an additive constant. Such functions are non-decreasing and right-continuous, and every right-continuous non-decreasing function corresponds to a unique locally finite measure via the Lebesgue-Stieltjes construction on intervals. These definitions extend to signed measures by decomposition into positive and negative parts, though the primary focus remains on positive measures. The key distinction from probability distributions is the allowance for infinite total mass and the lack of a bounded range [0,1], enabling applications to measures like Lebesgue measure itself.
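As a small worked example (the choice f(x) = e^{-|x|} is arbitrary), the superlevel set \{x : f(x) > t\} is the interval (\ln t, -\ln t) for 0 < t < 1, so \lambda_f(t) = -2 \ln t. The sketch below checks this closed form against a direct grid approximation of the Lebesgue measure of the superlevel set.

```python
import math

def f(x):
    return math.exp(-abs(x))

def lambda_f_exact(t):
    """lambda_f(t) = Leb({x : e^{-|x|} > t}) = -2 ln t for 0 < t < 1."""
    return -2.0 * math.log(t) if 0 < t < 1 else 0.0

def lambda_f_grid(t, lo=-20.0, hi=20.0, steps=400_000):
    """Grid approximation of the Lebesgue measure of the superlevel set."""
    dx = (hi - lo) / steps
    return sum(dx for i in range(steps) if f(lo + (i + 0.5) * dx) > t)

for t in (0.1, 0.5, 0.9):
    print(f"t={t}: grid {lambda_f_grid(t):.4f}, exact {lambda_f_exact(t):.4f}")
```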

Examples and Properties

A prominent example of a distribution function arises from the Lebesgue measure \lambda on the Borel \sigma-algebra of \mathbb{R}, for which the distribution function is F_\lambda(x) = x for all x \in \mathbb{R}. This generates the Lebesgue-Stieltjes measure \mu satisfying \mu((a, b]) = F_\lambda(b) - F_\lambda(a) = b - a for a < b, though the total measure \mu(\mathbb{R}) = \infty.

Another fundamental example is the scaled Dirac measure c\delta_a concentrated at a point a \in \mathbb{R} with mass c > 0, whose distribution function is F(x) = 0 for x < a and F(x) = c for x \geq a. This function exhibits a jump discontinuity of size c at x = a, reflecting the atomic nature of the measure, with \mu((y, x]) = c if y < a \leq x, and 0 otherwise.

Distribution functions possess several key properties that facilitate their use in analysis. They are non-decreasing and right-continuous, ensuring that the associated measure is well-defined on semi-open intervals via differences F(b) - F(a). For finite measures, F is bounded, with \lim_{x \to -\infty} F(x) = 0 and \lim_{x \to \infty} F(x) = \mu(\mathbb{R}) < \infty; in the probability case, this limit is 1, linking back to cumulative distribution functions for random variables.

A significant property connects distribution functions to integration: for a Borel measurable function g: \mathbb{R} \to [0, \infty), the Lebesgue-Stieltjes integral satisfies \int g \, dF = \int g \, d\mu, where \mu is the measure induced by F. This extends to the layer cake representation, which expresses integrals of non-negative measurable functions f: X \to [0, \infty) on a measure space (X, \mathcal{A}, \mu) as

\int_X f \, d\mu = \int_0^\infty \mu(\{x \in X : f(x) > t\}) \, dt,

where \lambda_f(t) = \mu(\{f > t\}) plays the role of a generalized distribution function for the pushforward measure of f [39]. This representation, also known as Cavalieri's principle, underpins many results in integration theory by decomposing the integral into horizontal slices [39].

Distribution functions also feature prominently in the study of weak convergence of measures on \mathbb{R}. A sequence of probability measures \{\mu_n\} converges weakly to \mu if and only if F_n(x) \to F(x) at all continuity points x of F, the distribution function of \mu [40]. This convergence is metrized by the Prokhorov metric d_P(\mu, \nu) = \inf\{\varepsilon > 0 : \mu(A) \leq \nu(A^\varepsilon) + \varepsilon \ \forall \, \text{Borel } A\}, where A^\varepsilon denotes the \varepsilon-enlargement of A, providing a topology on the space of probability measures equivalent to weak convergence [41].
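A quick numerical check of the layer cake representation, continuing the illustrative example f(x) = e^{-|x|} with Lebesgue measure, for which both sides equal 2 (since \int_{\mathbb{R}} e^{-|x|} \, dx = 2 and \int_0^1 (-2 \ln t) \, dt = 2):

```python
import math

# Left side: integral of f over R, by midpoint rule on a truncated grid.
lo, hi, steps = -20.0, 20.0, 400_000
dx = (hi - lo) / steps
lhs = sum(math.exp(-abs(lo + (i + 0.5) * dx)) * dx for i in range(steps))

# Right side: integral over t of lambda_f(t) = -2 ln t on (0, 1).
t_steps = 400_000
dt = 1.0 / t_steps
rhs = sum(-2.0 * math.log((j + 0.5) * dt) * dt for j in range(t_steps))

print(f"int f dx = {lhs:.6f}, int lambda_f(t) dt = {rhs:.6f}  (exact: 2)")
```

The two midpoint sums agree to within discretization error, illustrating how the integral of f is recovered by stacking the measures of its horizontal slices.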

References

  1. [1]
    [PDF] Basics of the Probability Theory
    Oct 31, 2024 · Given a random variable X, its cumulative distribution function. (CDF), also called its distribution function, is defined as. D(x) = Pr(X ≤ x) ...
  2. [2]
    [PDF] Random Variables and Probability Distributions - Kosuke Imai
    Feb 22, 2006 · While the distribution function defines the distribution of a random variable, we are often interested in the likelihood of a random variable ...
  3. [3]
    7.3 - The Cumulative Distribution Function (CDF) | STAT 414
    The cdf of random variable has the following properties: F X ( t ) is a nondecreasing function of , for − ∞ < t < ∞ .
  4. [4]
    [PDF] Topic 7 Random Variables and Distribution Functions - Arizona Math
    A distribution function FX has the property that it is right continuous, starts at 0, ends at 1, and does not decrease with increasing values of x. In ...
  5. [5]
    [PDF] Lecture 2: CDF and EDF 2.1 CDF: Cumulative Distribution Function
    For a random variable X, its CDF F(x) contains all the probability structures of X. Here are some properties of F(x):. • (probability) 0 ≤ F(x) ≤ 1. • ( ...
  6. [6]
    [PDF] ECE 302: Lecture 4.3 Cumulative Distribution Function
    (i) The CDF is a non-decreasing. (ii) The maximum of the CDF is when x = ∞: FX (+∞)=1. (iii) The minimum of the CDF is when x = −∞: FX (−∞)=0.
  7. [7]
    22.1 - Distribution Function Technique | STAT 414
    The distribution function technique finds the probability density function by first finding the cumulative distribution function, then differentiating it.
  8. [8]
    [PDF] STA 611: Introduction to Mathematical Statistics Lecture 3 - Stat@Duke
    3.3 The Cumulative Distribution Function. Cumulative Distribution Function. Def: Cumulative distribution function ... properties. (e.g. uniqueness) that the ...
  9. [9]
    14.2 - Cumulative Distribution Functions | STAT 414
    The cumulative distribution function ("c.d.f.") of a continuous random variable is defined as: F ( x ) = ∫ − ∞ x f ( t ) d t.
  10. [10]
    3.2.1 Cumulative Distribution Function - Probability Course
    The cumulative distribution function (CDF) of a random variable is another method to describe the distribution of random variables.
  11. [11]
    [PDF] Lecture 3: Random Variables & CDFs
    Jan 28, 2020 · From Proposition 3.1 and Carathéodory's extension theorem it follows that the CDF FX uniquely defines. PX the probability measure induced by X.
  12. [12]
    [PDF] 6.436J / 15.085J Fundamentals of Probability, Lecture 4: Random ...
    The cumulative distribution function of a random variable always possesses certain properties. Theorem 3. Let X be a random variable, and let F be its CDF. (a) ...
  13. [13]
    Class Notes 2-8-2019 - faculty.washington.edu
    The cumulative distribution function (c.d.f.) of a random variable X is defined as ... The size of the jump at k is P(X = k). • C.D.F. ... Examples: – If F(x)=0,1/3 ...
  14. [14]
    [PDF] Chapter 3 Continuous Random Variables
    3.3 The Uniform and Exponential Distributions. Two special probability density functions are discussed: uniform and exponential. The continuous uniform ...
  15. [15]
    [PDF] Chapter 2 Univariate Probability - MIT
    (Note that because the normal distribution is symmetric around its mean, the cumulative distribution function applied to the mean will always be equal to 0.5.).
  16. [16]
    [PDF] PDF - Math 408 - Mathematical Statistics
    Apr 24, 2013 · F Example: The Hazard Function for the Exponential Distribution ... The empirical cumulative distribution function (eCDF) is defined as.
  17. [17]
    [PDF] 2. Kinetic Theory - DAMTP
    One way to satisfy this is if f is a function of H and the most famous example is the Boltzmann distribution, f ∼ e^{-H}. However, notice that there is nothing ( ...
  18. [18]
    [PDF] 1203.1.K.pdf - Caltech PMA
    In kinetic theory, the key concept is the “distribution function” or “number density of particles in phase space”, N; i.e., the number of particles of some ...
  19. [19]
    [PDF] London and Edinburgh Philosophical Magazine and Journal of ...
    Dynamical Theory of Gases.—Part I. On the Motions and Collisions of Perfectly Elastic Spheres. By J. C. Maxwell, M.A., Professor of Natural Philosophy in ...
  20. [20]
    [PDF] 2 Further Studies on the Thermal Equilibrium of Gas Molecules
    The Kinetic Theory of Gases Downloaded from www.worldscientific.com by KAINAN UNIVERSITY on 02/23/15. For personal use only. Page 12. BOLTZMANN: THERMAL ...
  21. [21]
    Boltzmann's equation at 150: Traditional and modern solution ...
    Jul 11, 2023 · 1) proposed a kinetic equation describing the evolution of a non-equilibrium classical gas through a distribution function f(r, c, t) in phase ...
  22. [22]
    Boltzmann's Kinetic Theory |
    Summarizing, Boltzmann kinetic theory describes the dynamics of dilute gases in terms of a probability distribution function including, besides space and time.
  23. [23]
    [PDF] KINETIC THEORY AND THE VLASOV EQUATION
    The Vlasov equation is often called the collisionless Boltzmann equation. ... distribution function with the result that the equation is nonlinear. The ...
  24. [24]
    Vlasov equation and distribution functions - Plasma Physics - Fiveable
    Vlasov equation describes evolution of particle distribution function in collisionless plasmas; Distribution function represents probability density of ...
  25. [25]
    Moments of Distribution Function
    Moments of a distribution function are velocity space moments, uniquely specifying the distribution. Low-order moments have physical interpretations, like ...
  26. [26]
    [1206.1554] Derivation of fluid dynamics from kinetic theory with the 14
    Jun 7, 2012 · We review the traditional derivation of the fluid-dynamical equations from kinetic theory according to Israel and Stewart.
  27. [27]
    [PDF] Molecular free path distribution in rarefied gases
    Abstract. We present the results of investigations into the distribution of molecular free paths in rarefied gases using molecular dynamics simulations.
  28. [28]
    Kinetic‐Theory Approach to the Problem of Shock‐Wave Structure in ...
    Given the steady state ahead of the shock and assuming a bimodal velocity distribution function for each component gas, the steady state behind the shock is ...
  29. [29]
    On the basic concepts of the direct simulation Monte Carlo method
    Jun 12, 2019 · Thus, in the Boltzmann collision operator, the “molecular chaos” is imposed a priori, and in the DSMC collision procedure, it is not. It means ...
  30. [30]
    A high-order Monte Carlo algorithm for the direct simulation of ...
    The idea of the DSMC method is to divide the Boltzmann equation into the pure convection and the pure collision equations, and to discretize the time dependent ...
  31. [31]
    [PDF] DSMC simulations of near-continuum hypersonic flows
    Oct 29, 2021 · The DSMC method directly simulates the Boltzmann equation and is accurate for flows ranging from free-molecular to fully continuum. II.
  32. [32]
    II. Illustrations of the dynamical theory of gases
    Illustrations of the dynamical theory of gases. JC Maxwell MA Marischal College and University of Aberdeen. Pages 21-37 | Published online: 26 May 2009.
  33. [33]
    [PDF] Information Theory and Statistical Mechanics
    E. T. JAYNES. Department of Physics, Stanford University, Stanford, California. (Received September 4, 1956; revised manuscript received March 4, 1957).
  34. [34]
    [PDF] On Quantizing an Ideal Monatomic Gas - Gilles Montambaux
    The aim of the present paper is to present a method of quantization of an ideal gas which, according to our opinion, is as independent of arbitrary assumptions ...
  35. [35]
    [PDF] Planck's law and the hypothesis of light quanta - Gilles Montambaux
    By Bose (Dacca-University, India). (Received 2 July 1924). The phase space of a quantum of light with respect to a given volume is divided into.
  36. [36]
    [PDF] Quantum theory of the monoatomic ideal gas - Gilles Montambaux
    The following is a translation of the first of Einstein's papers on the ideal Bose gas. It was read in a session of the Prussian Academy held on 10 July 1924 — ...
  37. [37]
    [PDF] Fundamentals of Measure and Integration Theory
    a distribution function with an arbitrary Lebesgue-Stieltjes measure on R", ... that every Borel measurable function is Lebesgue measurable, but not con-.
  39. [39]
    [PDF] Measure and Integration - University of Toronto Mathematics
    ( 1) It is formula ( 4) that we call the layer cake representation of f. (Approximate the dt integral by a Riemann sum and the allusion will be obvious.) Page ...
  40. [40]
    [PDF] Weak Convergence of Probability Measures - arXiv
    Jul 20, 2020 · 1.2 Convergence in distribution and weak convergence . ... weakly to some probability measure P. By the mapping theorem. Pn00 π−1 t1,...,tk ...
  41. [41]
    [PDF] Weak Convergence in the Prokhorov Metric of Methods for ...
    Oct 19, 2008 · Weak convergence is usually expressed in terms of the convergence of expected values of test functions of the trajectories. Here we present an ...