
Gaussian function

The Gaussian function is a fundamental mathematical construct defined by the general form f(x) = a \exp\left( -\frac{(x - b)^2}{2c^2} \right), where a > 0 determines the peak amplitude, b the location of the peak (mean), and c > 0 the width (related to standard deviation), producing a symmetric, bell-shaped curve that decays rapidly away from the center. This function, named after the German mathematician Carl Friedrich Gauss for his early work on least-squares estimation and error analysis, is continuous, infinitely differentiable, and positive everywhere, with its integral over the real line equaling a c \sqrt{2\pi}. In its normalized variant, where the integral equals 1, it serves as the probability density function of the normal distribution: f(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right), with mean \mu and standard deviation \sigma.

The Gaussian function's prominence stems from its mathematical elegance and empirical ubiquity, justified by the central limit theorem, which states that the sum of many independent random variables approximates a normal distribution regardless of their individual distributions, making it a cornerstone of statistical modeling across sciences. Key properties include its full width at half maximum (FWHM) of approximately 2.355 \sigma, the containment of about 68% of the probability mass within \mu \pm \sigma, 95% within \mu \pm 2\sigma, and 99.7% within \mu \pm 3\sigma. Additionally, the Fourier transform of a Gaussian is another Gaussian, a property that underscores its optimality in time-frequency analysis, achieving the minimum uncertainty product between time and frequency.

Beyond probability, the Gaussian function finds extensive applications in physics for modeling thermal distributions and diffusion processes, in astronomy for instrument response functions, and in signal processing for Gaussian filters that smooth signals while preserving shape. In image processing and computer vision, two- or multi-dimensional Gaussians enable blurring and feature detection, while in machine learning they underpin kernel methods like Gaussian processes for regression and classification. Its separability in multiple dimensions—allowing efficient computation as products of one-dimensional forms—further enhances its utility in high-dimensional settings.

Definition and Forms

One-Dimensional Gaussian

The one-dimensional Gaussian function, also known as the univariate Gaussian, is a fundamental mathematical function widely used in probability, statistics, signal processing, and physics due to its smooth, bell-shaped profile. It is defined by the equation g(x) = a \exp\left( -\frac{(x - \mu)^2}{2 \sigma^2} \right), where x is the independent variable, a > 0 represents the amplitude or height of the curve, \mu \in \mathbb{R} is the mean or location parameter determining the center, and \sigma > 0 is the standard deviation or scale parameter controlling the width. This form arises naturally as the kernel of the normal probability density function, providing a model for phenomena exhibiting central tendency and dispersion around a central value.

The function produces a symmetric bell-shaped curve, peaking at x = \mu with maximum value g(\mu) = a, and decaying exponentially to near zero as |x - \mu| increases beyond a few multiples of \sigma. This rapid decay ensures that most of the function's "mass" is concentrated near the mean, making it ideal for approximating localized effects in various applications.

The Gaussian function is named after the mathematician Johann Carl Friedrich Gauss, who introduced it in 1809 within his work on the method of least squares for analyzing astronomical observation errors, assuming errors follow this distribution to minimize discrepancies in orbital predictions. When appropriately normalized so that its integral over the real line equals 1, the Gaussian function serves as the probability density function of the normal distribution, a cornerstone of statistical theory detailed in subsequent sections.
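As a minimal illustration of this definition, the following Python sketch (the parameter names a, mu, and sigma are illustrative, not prescribed by the text) evaluates the general one-dimensional Gaussian on a grid and shows that it peaks at x = \mu with value a:

```python
# Minimal sketch of the general 1-D Gaussian g(x) = a * exp(-(x - mu)^2 / (2 sigma^2)).
import numpy as np

def gaussian(x, a=1.0, mu=0.0, sigma=1.0):
    """Unnormalized one-dimensional Gaussian with amplitude a, center mu, width sigma."""
    return a * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

x = np.linspace(-5.0, 5.0, 11)
print(gaussian(x, a=2.0, mu=1.0, sigma=0.5))  # maximum value a = 2.0 occurs at x = mu = 1.0
```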

Normalized and General Forms

The normalized Gaussian function serves as the probability density function (PDF) of the normal distribution in statistics, ensuring that its integral over the entire real line equals 1. This form is given by \phi(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x - \mu)^2}{2 \sigma^2} \right), where \mu is the mean (location parameter) and \sigma > 0 is the standard deviation (scale parameter). The normalization constant \frac{1}{\sigma \sqrt{2\pi}} arises from the requirement that \int_{-\infty}^{\infty} \phi(x) \, dx = 1, providing a proper probability density for continuous random variables.

A special case is the standard normal distribution, obtained via the z-score transformation z = \frac{x - \mu}{\sigma}, which standardizes the parameters to \mu = 0 and \sigma = 1. The corresponding PDF simplifies to \phi(z) = \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{z^2}{2} \right). This standardization facilitates tabular computations and theoretical analysis in probability theory.

In applications beyond strict probabilistic contexts, such as physics and signal processing, unnormalized or generalized Gaussian forms are common, omitting the leading constant for convenience. For instance, the basic unnormalized Gaussian \exp(-x^2) or scaled variants like \exp(-x^2 / a) appear in wave functions and kernel representations, where normalization is applied separately if needed. The parameter \sigma governs the function's width, with the half-width at half-maximum (HWHM)—the distance from the peak to the point where the function value drops to half its maximum—approximating 1.177 \sigma. This measure quantifies the effective spread in non-probabilistic uses.
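Both claims are easy to check numerically. The sketch below (illustrative values of \mu and \sigma, assuming SciPy is available) integrates the normalized PDF over the real line and compares the HWHM, \sigma \sqrt{2 \ln 2}, against the quoted 1.177 \sigma:

```python
# Sketch checking that the normalized Gaussian integrates to 1 and that the
# half-width at half-maximum equals sigma * sqrt(2 ln 2) ~ 1.177 sigma.
import numpy as np
from scipy.integrate import quad

mu, sigma = 2.0, 1.5  # illustrative parameters

def phi(x):
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

area, _ = quad(phi, -np.inf, np.inf)
hwhm = sigma * np.sqrt(2 * np.log(2))
print(area)                  # ~1.0
print(hwhm, 1.177 * sigma)   # ~1.7661 vs 1.7655
```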

Mathematical Properties

Symmetry and Moments

The Gaussian function, defined as g(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right) for the normalized probability density form, exhibits symmetry about its mean \mu. Specifically, it is an even function when shifted to the origin, satisfying g(\mu + x) = g(\mu - x) for all x, which reflects its bell-shaped profile mirroring across the vertical line at x = \mu. This symmetry implies that the distribution is invariant under reflection around the mean, a property fundamental to its role in modeling symmetric phenomena in statistics and physics.

Due to this even symmetry, the odd-order central moments of the Gaussian function vanish. The raw moments, denoted E[X^n], for a Gaussian random variable X \sim \mathcal{N}(\mu, \sigma^2), are given by E[X^n] = \sum_{k=0}^{\lfloor n/2 \rfloor} \binom{n}{2k} \mu^{n-2k} (2k-1)!! \sigma^{2k}, where !! denotes the double factorial; for odd n the raw moments are non-zero only through the terms involving the mean \mu. The second raw moment equals E[X^2] = \mu^2 + \sigma^2, establishing the variance \sigma^2 as the second central moment \mu_2 = E[(X - \mu)^2]. Higher raw moments follow from the moment-generating function M(t) = \exp(\mu t + \frac{1}{2} \sigma^2 t^2), with derivatives providing explicit forms.

The central moments \mu_n = E[(X - \mu)^n] further highlight the symmetry: all odd n give \mu_n = 0, while even moments are \mu_{2k} = \sigma^{2k} (2k-1)!!. For instance, the fourth central moment is \mu_4 = 3 \sigma^4, leading to a kurtosis of 3 for the standard normal distribution. This structure distinguishes the Gaussian from distributions with non-zero odd moments or altered even moments. The skewness, defined as \gamma_1 = \mu_3 / \sigma^3, is zero because the third central moment vanishes, confirming perfect symmetry. Similarly, the excess kurtosis \gamma_2 = (\mu_4 / \sigma^4) - 3 = 0, indicating neither leptokurtic nor platykurtic tails relative to the mesokurtic baseline. These properties underscore the Gaussian's utility as a reference for symmetric, light-tailed distributions in statistical analysis.
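These moment formulas can be verified by integrating against the normalized density. The sketch below (illustrative \mu and \sigma, using SciPy) compares numerically computed central moments with 0 for odd orders and \sigma^{2k}(2k-1)!! for even orders:

```python
# Sketch verifying the Gaussian central moments: odd moments vanish and the
# even ones equal sigma**(2k) * (2k - 1)!!, e.g. mu_4 = 3 sigma^4.
import numpy as np
from scipy.integrate import quad
from scipy.special import factorial2

mu, sigma = 1.0, 2.0  # illustrative parameters
pdf = lambda x: np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def central_moment(n):
    """Numerical n-th central moment E[(X - mu)^n]."""
    val, _ = quad(lambda x: (x - mu) ** n * pdf(x), -np.inf, np.inf)
    return val

for n in range(1, 7):
    closed_form = 0.0 if n % 2 else sigma ** n * factorial2(n - 1)
    print(n, central_moment(n), closed_form)
```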

Fourier Transform and Convolution

The Fourier transform of a Gaussian function is another Gaussian function, demonstrating a remarkable self-duality property that distinguishes it among common functions. Consider the one-dimensional Gaussian g(x) = a \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right), where a is the amplitude, \mu is the mean, and \sigma > 0 is the standard deviation. Using the convention \mathcal{F}\{g\}(f) = \int_{-\infty}^{\infty} g(x) e^{-i 2\pi f x} \, dx, its Fourier transform is \mathcal{F}\{g\}(f) = a \sigma \sqrt{2\pi} \exp\left( -2\pi^2 \sigma^2 f^2 - i 2\pi \mu f \right). This result follows from the shift property of the Fourier transform, which under this sign convention introduces the phase factor e^{-i 2\pi \mu f}, combined with the transform of the centered unnormalized Gaussian \exp\left( -\frac{x^2}{2\sigma^2} \right), which yields \sigma \sqrt{2\pi} \exp\left( -2\pi^2 \sigma^2 f^2 \right). The self-Fourier nature arises because the transformed function retains the Gaussian form, up to scaling and shifting in the frequency domain, with the frequency-domain standard deviation being 1/(2\pi \sigma).

This self-duality underpins the Gaussian's role in the uncertainty principle for signals. The principle states that the product of the spreads (standard deviations) in the time and frequency domains satisfies \sigma_x \sigma_f \geq \frac{1}{4\pi}, with equality achieved precisely when the signal is Gaussian. For the Gaussian g(x), the time-domain variance (of the intensity |g(x)|^2) is \sigma^2/2 and the frequency-domain variance (of the power spectrum |\mathcal{F}\{g\}(f)|^2) is 1/(8\pi^2 \sigma^2), yielding the minimum product \sigma_x \sigma_f = 1/(4\pi). This optimality makes the Gaussian the "most concentrated" function allowable under the constraint, with profound implications for time-frequency analysis in both classical signal processing and quantum mechanics.

Gaussians also exhibit elegant behavior under convolution, serving as a fundamental smoothing operator. The convolution of a Gaussian with any integrable function h(x) produces a smoothed version of h(x), where the Gaussian acts as a low-pass filter by attenuating high-frequency components. Specifically, if g_1(x) and g_2(x) are Gaussians with means \mu_1, \mu_2 and variances \sigma_1^2, \sigma_2^2, their convolution g_1 * g_2 is another Gaussian with mean \mu_1 + \mu_2 and variance \sigma_1^2 + \sigma_2^2. This additive variance property follows from the convolution theorem, as the product of their transforms—each a Gaussian—results in a Gaussian whose inverse transform has the combined variance. In signal and image processing, this facilitates applications such as Gaussian filters for noise reduction, where the kernel's width \sigma controls the degree of smoothing without introducing artifacts like ringing.
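The additive-variance property under convolution can be demonstrated directly. The sketch below (illustrative means and widths, with a simple Riemann-sum discretization on a symmetric grid) convolves two Gaussian densities numerically and checks the resulting mean and variance:

```python
# Sketch checking that the convolution of two Gaussian densities is Gaussian
# with mean mu1 + mu2 and variance sigma1^2 + sigma2^2.
import numpy as np

mu1, s1 = -1.0, 0.8   # illustrative parameters
mu2, s2 = 2.0, 1.5
x = np.linspace(-20, 20, 4001)   # symmetric grid so mode="same" stays aligned with x
dx = x[1] - x[0]

g = lambda x, m, s: np.exp(-((x - m) ** 2) / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

conv = np.convolve(g(x, mu1, s1), g(x, mu2, s2), mode="same") * dx
mean = np.sum(x * conv) * dx
var = np.sum((x - mean) ** 2 * conv) * dx
print(mean, mu1 + mu2)           # ~1.0
print(var, s1 ** 2 + s2 ** 2)    # ~2.89
```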

Integrals and Special Functions

The Gaussian Integral

The Gaussian integral refers to the evaluation of the definite integral of the Gaussian function over the entire real line, a fundamental result in analysis that underpins many applications in probability and physics. The standard form is given by \int_{-\infty}^{\infty} e^{-x^2} \, dx = \sqrt{\pi}, which converges because the rapid decay of the integrand keeps the area finite despite the unbounded domain. This rapid decay, characterized by the quadratic exponent, dominates any polynomial growth and guarantees convergence of the integral.

To evaluate this integral, a common approach involves squaring it to form I^2 = \left( \int_{-\infty}^{\infty} e^{-x^2} \, dx \right) \left( \int_{-\infty}^{\infty} e^{-y^2} \, dy \right) = \iint_{-\infty}^{\infty} e^{-(x^2 + y^2)} \, dx \, dy, then switching to polar coordinates where x = r \cos \theta, y = r \sin \theta, and dx \, dy = r \, dr \, d\theta. The integral simplifies to \int_0^{2\pi} d\theta \int_0^{\infty} e^{-r^2} r \, dr = 2\pi \cdot \frac{1}{2} = \pi, so I = \sqrt{\pi}. This polar coordinate method, attributed to Poisson, provides an elegant geometric resolution.

The result extends to the more general unnormalized Gaussian form \int_{-\infty}^{\infty} \exp\left( -\frac{(x - \mu)^2}{2 \sigma^2} \right) dx = \sigma \sqrt{2\pi}, obtained via the substitution u = \frac{x - \mu}{\sigma \sqrt{2}}, which reduces it to \sigma \sqrt{2} times the standard integral. Historically, the evaluation built on foundational work in probability: de Moivre approximated related integrals in 1733, Gauss identified the error curve form in 1809, and Laplace provided rigorous proofs in 1812 using change-of-variables techniques, with Poisson contributing the polar method shortly thereafter.
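Both results are straightforward to confirm numerically. The sketch below (illustrative \mu and \sigma, using SciPy's quadrature) evaluates the two integrals and compares them against \sqrt{\pi} and \sigma\sqrt{2\pi}:

```python
# Sketch confirming the Gaussian integral numerically:
# int exp(-x^2) dx = sqrt(pi) and int exp(-(x-mu)^2/(2 sigma^2)) dx = sigma*sqrt(2*pi).
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.exp(-x ** 2), -np.inf, np.inf)
print(val, np.sqrt(np.pi))               # both ~1.7725

mu, sigma = 3.0, 0.7  # illustrative parameters
val2, _ = quad(lambda x: np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)), -np.inf, np.inf)
print(val2, sigma * np.sqrt(2 * np.pi))  # both ~1.7546
```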

Relation to Error Function

The error function, denoted \erf(z), is defined as \erf(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2} \, dt, which provides a means to express partial integrals of the Gaussian function through special functions. The indefinite integral of the unnormalized Gaussian \exp(-x^2/2) lacks a closed-form expression in elementary functions but can be written using the error function: \int \exp\left( -\frac{x^2}{2} \right) \, dx = \sqrt{\frac{\pi}{2}} \, \erf\left( \frac{x}{\sqrt{2}} \right) + C. This relation follows from differentiating the right-hand side, which yields \exp(-x^2/2) via the fundamental theorem of calculus and the definition of \erf.

In probability theory, the cumulative distribution function (CDF) of the standard normal distribution \Phi(x) is directly linked to the error function: \Phi(x) = \frac{1}{2} \left[ 1 + \erf\left( \frac{x}{\sqrt{2}} \right) \right], where \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt. This connection allows the CDF to be computed via error function evaluations, facilitating applications in statistics. The complementary error function is defined as \erfc(z) = 1 - \erf(z), equivalently \erfc(z) = \frac{2}{\sqrt{\pi}} \int_z^\infty e^{-t^2} \, dt, which is useful for tail probabilities since \Phi(x) = 1 - \frac{1}{2} \erfc\left( \frac{x}{\sqrt{2}} \right).

For large z, asymptotic expansions provide efficient approximations: \erfc(z) \sim \frac{e^{-z^2}}{z \sqrt{\pi}} \left( 1 - \frac{1}{2z^2} + \frac{3}{4z^4} - \cdots \right) as z \to +\infty, with the series being divergent but useful for truncation-based estimates. Because the Gaussian has no elementary antiderivative, numerical evaluation of these integrals relies on tabulated values of \erf and \erfc, series expansions for small arguments, continued fractions, or the aforementioned asymptotic approximations for large arguments. Modern computational libraries implement these methods for high precision.
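The CDF-to-erf identities above are easy to verify with a numerical library. The following sketch (a few illustrative arguments, using SciPy) compares the \erf and \erfc expressions against a direct evaluation of the standard normal CDF:

```python
# Sketch relating the standard normal CDF to erf/erfc:
# Phi(x) = (1 + erf(x / sqrt(2))) / 2 = 1 - erfc(x / sqrt(2)) / 2.
import numpy as np
from scipy.special import erf, erfc
from scipy.stats import norm

for x in (-2.0, 0.0, 1.5, 3.0):   # illustrative arguments
    via_erf = 0.5 * (1.0 + erf(x / np.sqrt(2)))
    via_erfc = 1.0 - 0.5 * erfc(x / np.sqrt(2))
    print(x, via_erf, via_erfc, norm.cdf(x))   # the three values agree
```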

Multidimensional Generalizations

Bivariate Gaussian Function

The bivariate Gaussian function extends the one-dimensional Gaussian to two dimensions, incorporating location parameters for the means along each axis, scale parameters for the spreads, and a correlation coefficient to account for dependence between the variables. This form is widely used in fields requiring modeling of joint variability, such as statistics and image processing.

The general expression for the bivariate Gaussian function is given by g(x,y) = \frac{A}{2\pi \sigma_x \sigma_y \sqrt{1 - \rho^2}} \exp\left( -\frac{1}{2(1 - \rho^2)} \left[ \frac{(x - \mu_x)^2}{\sigma_x^2} + \frac{(y - \mu_y)^2}{\sigma_y^2} - \frac{2\rho (x - \mu_x)(y - \mu_y)}{\sigma_x \sigma_y} \right] \right), where A is the amplitude parameter, \mu_x and \mu_y are the locations along the x and y axes, \sigma_x > 0 and \sigma_y > 0 are the standard deviations, and \rho \in (-1, 1) is the correlation coefficient measuring linear dependence between x and y. When used as a probability density function (with A = 1), this form integrates to 1 over \mathbb{R}^2, ensuring it defines a valid bivariate normal distribution.

The level sets of constant g(x,y) form elliptical contours centered at (\mu_x, \mu_y), with the shape and orientation determined by the covariance structure encoded in \sigma_x, \sigma_y, and \rho. For \rho = 0, the ellipses align with the coordinate axes; nonzero \rho rotates them, with principal axes along the eigenvectors of the implied covariance matrix. Special cases simplify the function: when \rho = 0, it factors into the product of two independent one-dimensional Gaussians centered at \mu_x and \mu_y with spreads \sigma_x and \sigma_y, respectively. Additionally, if \sigma_x = \sigma_y and \rho = 0, the contours become circles, yielding a radially symmetric (isotropic) form.
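As a concrete check of this expression, the sketch below (illustrative parameter values) evaluates the normalized bivariate Gaussian with A = 1 directly and compares it against SciPy's multivariate normal density built from the implied covariance matrix:

```python
# Sketch of the normalized bivariate Gaussian (A = 1) with correlation rho,
# cross-checked against the multivariate normal density from the covariance matrix.
import numpy as np
from scipy.stats import multivariate_normal

def bivariate_gaussian(x, y, mu_x, mu_y, sx, sy, rho):
    q = ((x - mu_x) ** 2 / sx ** 2
         + (y - mu_y) ** 2 / sy ** 2
         - 2 * rho * (x - mu_x) * (y - mu_y) / (sx * sy))
    norm_const = 2 * np.pi * sx * sy * np.sqrt(1 - rho ** 2)
    return np.exp(-q / (2 * (1 - rho ** 2))) / norm_const

mu_x, mu_y, sx, sy, rho = 1.0, -2.0, 1.5, 0.5, 0.6   # illustrative parameters
cov = [[sx ** 2, rho * sx * sy], [rho * sx * sy, sy ** 2]]
rv = multivariate_normal(mean=[mu_x, mu_y], cov=cov)
print(bivariate_gaussian(0.3, -1.7, mu_x, mu_y, sx, sy, rho), rv.pdf([0.3, -1.7]))
```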

Multivariate Gaussian Function

The multivariate Gaussian function extends the univariate Gaussian to n dimensions, representing the probability density function of a multivariate normal distribution for a random vector \mathbf{x} \in \mathbb{R}^n. It is parameterized by the mean vector \boldsymbol{\mu} \in \mathbb{R}^n and the positive definite covariance matrix \boldsymbol{\Sigma} \in \mathbb{R}^{n \times n}, capturing the location, scale, and orientation of the distribution. The standard normalized form is given by g(\mathbf{x}) = \frac{1}{(2\pi)^{n/2} |\boldsymbol{\Sigma}|^{1/2}} \exp\left( -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right), where |\boldsymbol{\Sigma}| denotes the determinant of \boldsymbol{\Sigma}, and the exponent involves a quadratic form that measures squared Mahalanobis distance from the mean.

The quadratic form (\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) defines the geometry of the function, resulting in level sets that form ellipsoids centered at \boldsymbol{\mu}, with axes aligned according to the eigenvectors of \boldsymbol{\Sigma} and scaled by its eigenvalues. This Mahalanobis distance d^2 = (\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) generalizes the Euclidean distance by accounting for correlations and varying variances across dimensions, ensuring the function's contours reflect the data's intrinsic structure.

In degenerate cases, when \boldsymbol{\Sigma} is singular (i.e., not full rank, with \det(\boldsymbol{\Sigma}) = 0), the multivariate Gaussian reduces to a lower effective dimensionality, as the distribution concentrates on a subspace defined by the rank of \boldsymbol{\Sigma}, such as a hyperplane or line in higher-dimensional space. This occurs due to linear dependencies among the variables, rendering \boldsymbol{\Sigma}^{-1} undefined, and the function no longer has a density with respect to the full Lebesgue measure in \mathbb{R}^n.

The normalization ensures the integral of the function over \mathbb{R}^n equals 1; specifically, the unnormalized Gaussian \exp\left( -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right) integrates to (2\pi)^{n/2} |\boldsymbol{\Sigma}|^{1/2}, which the prefactor inverts to achieve unit total probability. This property holds under the assumption of positive definiteness for \boldsymbol{\Sigma}, generalizing the one-dimensional Gaussian integral.
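The density can be implemented directly from the quadratic form. The following sketch (illustrative mean vector and covariance matrix) evaluates it via the Mahalanobis distance and cross-checks the result against SciPy:

```python
# Sketch evaluating the n-dimensional Gaussian density via the Mahalanobis
# quadratic form, cross-checked against scipy.stats.multivariate_normal.
import numpy as np
from scipy.stats import multivariate_normal

def mvn_pdf(x, mu, Sigma):
    n = len(mu)
    diff = x - mu
    maha = diff @ np.linalg.solve(Sigma, diff)   # (x - mu)^T Sigma^{-1} (x - mu)
    norm_const = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * maha) / norm_const

mu = np.array([0.0, 1.0, -1.0])                  # illustrative mean vector
Sigma = np.array([[2.0, 0.3, 0.0],               # illustrative positive definite covariance
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 0.5]])
x = np.array([0.5, 0.5, -0.5])
print(mvn_pdf(x, mu, Sigma), multivariate_normal(mu, Sigma).pdf(x))
```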

Discrete and Approximated Versions

Discrete Gaussian Distribution

The discrete Gaussian distribution serves as the discrete counterpart to the continuous Gaussian distribution, defined over the integer lattice \mathbb{Z}. For real parameters \mu (the mean) and \sigma > 0 (the standard deviation), its probability mass function (PMF) is given by p(k) = \frac{\exp\left( -\frac{(k - \mu)^2}{2 \sigma^2} \right)}{Z(\mu, \sigma)}, \quad k \in \mathbb{Z}, where the normalization constant Z(\mu, \sigma) = \sum_{k \in \mathbb{Z}} \exp\left( -\frac{(k - \mu)^2}{2 \sigma^2} \right) ensures the probabilities sum to 1. Without loss of generality, the distribution can be shifted to center at 0 by considering \mu = 0, in which case Z(0, \sigma) = \sum_{k \in \mathbb{Z}} \exp\left( -\frac{k^2}{2 \sigma^2} \right).

For large \sigma \gg 1, the discrete Gaussian closely approximates the continuous Gaussian distribution, as the spacing between lattice points becomes negligible relative to the spread \sigma. In this regime, the normalization constant satisfies \max\{\sigma \sqrt{2\pi}, 1\} \leq Z(0, \sigma) \leq \sigma \sqrt{2\pi} + 1, providing a tight bound around the continuous factor \sigma \sqrt{2\pi}. For small \sigma, the distribution exhibits pronounced discretization effects, becoming highly concentrated around the integer(s) nearest to \mu, with variance strictly less than \sigma^2.

Sampling from the discrete Gaussian can be achieved efficiently via rejection sampling from a continuous Gaussian proposal distribution. One approach samples x \sim \mathcal{N}(c, \sigma^2) (where c is the desired center), rounds to the nearest integer z = \lfloor x + 0.5 \rfloor, and accepts z with probability proportional to the ratio of the discrete PMF at z to an upper bound on the continuous density near z, ensuring exact sampling after a constant expected number of trials (approximately 2 on average for typical \sigma). This method avoids direct computation of the full normalization constant and is scalable for varying \sigma. Alternative proposals, such as the discrete Laplace distribution, can also be used for rejection sampling, particularly for privacy-sensitive applications.

The discrete Gaussian finds applications in generating integer-valued random numbers, such as in lattice-based cryptography for adding noise to maintain security (e.g., in learning-with-errors schemes), and in models for simulating discrete physical or computational systems where floating-point continuity is undesirable. In differential privacy, it provides noise for integer queries, achieving \frac{1}{2} \epsilon^2-concentrated DP with variance roughly half that of the discrete Laplace for small privacy budgets \epsilon, offering improved utility over continuous Gaussian rounding.
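In practice the infinite normalization sum is truncated, since terms beyond a few tens of \sigma from the center are negligible. The sketch below (an illustrative \pm 20\sigma cutoff, not part of the definition above) builds the PMF this way and compares Z(0, \sigma) with \sigma\sqrt{2\pi} for a moderately large \sigma:

```python
# Sketch of the discrete Gaussian PMF on the integers with a truncated
# normalization sum (the +/- 20 sigma window is a practical cutoff assumption).
import numpy as np

def discrete_gaussian(mu, sigma, tail=20):
    """Return support, PMF, and normalizer Z over a +/- tail*sigma window around mu."""
    ks = np.arange(int(np.floor(mu - tail * sigma)), int(np.ceil(mu + tail * sigma)) + 1)
    w = np.exp(-((ks - mu) ** 2) / (2 * sigma ** 2))
    return ks, w / w.sum(), w.sum()

ks, pmf, Z = discrete_gaussian(mu=0.0, sigma=4.0)    # illustrative parameters
print(Z, 4.0 * np.sqrt(2 * np.pi))                   # Z close to sigma*sqrt(2*pi) for large sigma
print(np.sum(ks * pmf), np.sum(ks ** 2 * pmf))       # mean ~0, variance ~sigma^2 in this regime
```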

Parameter Estimation Techniques

Parameter estimation for the discrete Gaussian distribution involves inferring \mu and \sigma > 0 from integer-valued data samples k_1, \dots, k_n \in \mathbb{Z}. Unlike the continuous case, there are no closed-form maximum likelihood estimators (MLE) due to the parameter-dependent normalization constant Z(\mu, \sigma), which requires numerical computation of the infinite sum or approximation. The likelihood function is L(\mu, \sigma \mid \mathbf{k}) = \prod_{i=1}^n \frac{\exp\left( -\frac{(k_i - \mu)^2}{2 \sigma^2} \right)}{Z(\mu, \sigma)}, and MLE requires maximizing the log-likelihood \ell(\mu, \sigma) = -\frac{1}{2\sigma^2} \sum_{i=1}^n (k_i - \mu)^2 - n \log Z(\mu, \sigma) via numerical optimization, such as gradient descent or expectation-maximization variants. For large \sigma, the continuous MLE approximations—the sample mean \hat{\mu} = \bar{k} and sample variance \hat{\sigma}^2 = \frac{1}{n} \sum (k_i - \bar{k})^2—provide good estimates, as the discrete distribution approximates the normal.

The method of moments equates sample moments to theoretical ones. The first moment gives \hat{\mu} = \bar{k}, and the second central moment approximates \hat{\sigma}^2 \approx \frac{1}{n} \sum (k_i - \bar{k})^2, though exact matching requires solving for the discrete variance, which is less than \sigma^2. This method is simple but may be biased for small \sigma.

Bayesian approaches use priors on \mu and \sigma, updating via MCMC or variational inference because the normalization constant Z makes the posterior intractable. Conjugate priors are not available as in the continuous case, but normal-gamma approximations can be employed for large \sigma. These methods are useful in applications like lattice-based cryptography, where parameters are tuned for security. For approximated versions in practice, when \sigma \gg 1, continuous techniques serve as efficient approximations. Asymptotic properties similar to the continuous case hold under large sample limits, ensuring consistency of the estimators for fixed \sigma.
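A minimal numerical MLE along these lines is sketched below (the data are a stand-in integer sample obtained by rounding continuous normal draws, the normalizer is truncated at a \pm 20\sigma window, and \sigma is optimized on a log scale; all of these are illustrative choices, not prescribed by the text):

```python
# Sketch of numerical MLE for the discrete Gaussian: the log-likelihood includes
# the parameter-dependent normalizer Z(mu, sigma), approximated by a truncated sum.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = np.round(rng.normal(2.3, 1.7, size=500)).astype(int)   # stand-in integer sample

def log_Z(mu, sigma, tail=20):
    ks = np.arange(int(np.floor(mu - tail * sigma)), int(np.ceil(mu + tail * sigma)) + 1)
    return np.log(np.sum(np.exp(-((ks - mu) ** 2) / (2 * sigma ** 2))))

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)              # log-parametrization keeps sigma > 0
    return (np.sum((data - mu) ** 2) / (2 * sigma ** 2)
            + len(data) * log_Z(mu, sigma))

res = minimize(neg_log_lik, x0=[data.mean(), np.log(data.std())], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)   # close to the continuous-case estimates when sigma >> 1
```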

Applications

In Probability and Statistics

The Gaussian function plays a central role in probability and statistics as the probability density function (PDF) of the normal distribution, also known as the Gaussian distribution. The PDF of a normal random variable X \sim \mathcal{N}(\mu, \sigma^2) with mean \mu and variance \sigma^2 > 0 is given by f(x; \mu, \sigma^2) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right), \quad x \in \mathbb{R}, which is the Gaussian function normalized to integrate to 1 over the real line. This form ensures that the total probability is unity, distinguishing it from the unnormalized Gaussian used in other contexts.

The ubiquity of the normal distribution in statistical modeling stems from the central limit theorem (CLT), which states that the sum (or average) of a large number of independent and identically distributed random variables with finite variance converges in distribution to a normal random variable, regardless of the underlying distribution. First rigorously established by Pierre-Simon Laplace in 1810, the CLT justifies the normal approximation for phenomena arising from aggregated independent effects, such as measurement errors or biological traits influenced by many factors. For normally distributed data, the empirical rule provides a practical guideline: approximately 68% of observations lie within one standard deviation of the mean, 95% within two standard deviations, and 99.7% within three standard deviations.

In statistical inference, assessing normality is crucial, often using quantile-quantile (Q-Q) plots, which compare sample quantiles to theoretical Gaussian quantiles, or formal tests like the Shapiro-Wilk test. The Shapiro-Wilk test, introduced in 1965, computes a statistic W based on the correlation between ordered sample values and corresponding expected normal order statistics (quantiles), rejecting normality for small W values under the null hypothesis of a normal distribution. This test is particularly powerful for small sample sizes (n \leq 50) and outperforms alternatives like the Kolmogorov-Smirnov test in detecting deviations from normality.

The assumption of Gaussian errors is foundational in linear regression, where the ordinary least squares (OLS) estimator is optimal under the Gauss-Markov theorem. This theorem asserts that, given linearity, no perfect multicollinearity, homoscedasticity, and uncorrelated errors (without requiring normality for unbiasedness), OLS yields the best linear unbiased estimator (BLUE) with minimum variance among linear unbiased estimators. Normality of errors further enables exact finite-sample inference, such as t-tests for coefficients, since OLS then coincides with the maximum likelihood estimator.
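As an illustration of such a normality check, the sketch below (illustrative samples, using SciPy's implementation of the Shapiro-Wilk test) compares a Gaussian sample with a clearly skewed one:

```python
# Sketch of a Shapiro-Wilk normality check on a Gaussian sample versus an
# exponential (non-Gaussian) sample of the same small size.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(1)
normal_sample = rng.normal(loc=5.0, scale=2.0, size=40)
skewed_sample = rng.exponential(scale=2.0, size=40)

for name, sample in [("normal", normal_sample), ("exponential", skewed_sample)]:
    stat, p = shapiro(sample)
    print(name, round(stat, 3), round(p, 4))   # small W and small p reject normality
```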

In Physics and Signal Processing

In physics, the Gaussian function serves as the fundamental solution to the heat equation, describing the diffusion of heat or other quantities in one dimension. The heat equation, \frac{\partial u}{\partial t} = \kappa \frac{\partial^2 u}{\partial x^2}, where \kappa is the thermal diffusivity, has a heat kernel, or fundamental solution, given by u(t, x) = \frac{1}{\sqrt{4\pi \kappa t}} \exp\left( -\frac{x^2}{4 \kappa t} \right), which represents the temperature distribution resulting from an initial point source of heat at the origin. This Gaussian profile illustrates the spreading and smoothing of the initial delta-function input over time, with the width increasing proportionally to \sqrt{t}, highlighting the diffusive nature of the process. In higher dimensions, the solution generalizes similarly, maintaining the Gaussian form for isotropic media.

In quantum mechanics, the Gaussian function characterizes the ground-state wavefunction of the quantum harmonic oscillator, a cornerstone model for vibrational modes in molecules and fields. The time-independent Schrödinger equation for the potential V(x) = \frac{1}{2} m \omega^2 x^2 yields the ground state as \psi_0(x) \propto \exp\left( -\frac{x^2}{2 a^2} \right), where a = \sqrt{\hbar / (m \omega)} relates to the mass m, angular frequency \omega, and reduced Planck constant \hbar. This Gaussian form ensures the wavefunction is normalizable and minimizes the position-momentum uncertainty product at the ground-state energy E_0 = \frac{1}{2} \hbar \omega, embodying the Heisenberg uncertainty principle in its minimal form.

In signal processing, Gaussian functions are employed in filters to achieve smooth low-pass filtering without introducing the ringing artifacts common in other kernels like sinc functions. A one-dimensional Gaussian kernel is h(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{x^2}{2 \sigma^2} \right), where \sigma controls the spread, and its Fourier transform—the frequency response—is also Gaussian, \exp(-2 \pi^2 \sigma^2 f^2), preserving the shape under transformation. This property makes Gaussian filters ideal for smoothing and noise reduction in signals while maintaining phase linearity. In optics, Gaussian beams describe the transverse intensity profile of many laser outputs, I(r, z) \propto \exp\left( -\frac{2 r^2}{w(z)^2} \right), where w(z) is the beam radius (spot size) varying with propagation distance z, enabling precise modeling of focusing and divergence in optical systems.

In electromagnetism, Gaussian charge distributions approximate continuous charge densities for calculating electric potentials, particularly in computational simulations where point charges cause singularities. A spherically symmetric Gaussian charge density \rho(r) = \frac{Q}{\sigma^3 (2\pi)^{3/2}} \exp\left( -\frac{r^2}{2 \sigma^2} \right) yields a potential \varphi(r) that can be evaluated via Poisson's equation \nabla^2 \varphi = -\rho/\varepsilon_0, often using Fourier methods for efficiency, avoiding divergences while closely mimicking finite-size effects. This approach is prevalent in molecular dynamics and Ewald summation techniques for periodic systems.
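As a small illustration of Gaussian low-pass filtering, the sketch below (an illustrative noisy sinusoid and \sigma value, using SciPy's gaussian_filter1d) smooths a signal and shows the reduction in high-frequency variation:

```python
# Sketch of Gaussian low-pass filtering of a noisy signal; the signal and the
# sigma value (in samples) are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter1d

t = np.linspace(0, 1, 500)
rng = np.random.default_rng(2)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)

smoothed = gaussian_filter1d(noisy, sigma=5)   # sigma in samples controls the smoothing
print(noisy.std(), smoothed.std())             # high-frequency noise is attenuated
```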
