
Gaussian measure

In mathematics, a Gaussian measure is a fundamental probability measure in probability theory and functional analysis. On a real separable Banach space F, it is defined as a Borel measure \mu such that for every continuous linear functional x^* \in F^*, the pushforward measure \mu \circ (x^*)^{-1} on \mathbb{R} is a one-dimensional Gaussian distribution N(a, \sigma^2) with mean a \in \mathbb{R} and variance \sigma^2 \geq 0. In the finite-dimensional case on \mathbb{R}^n, the canonical (standard) Gaussian measure \gamma_n is absolutely continuous with respect to Lebesgue measure, with density (2\pi)^{-n/2} \exp(-\|x\|^2 / 2); it is the law of a multivariate normal random vector with mean zero and identity covariance matrix. This measure is rotationally invariant and concentrates sharply in high dimensions, where the probability mass focuses near the sphere of radius approximately \sqrt{n}.

Gaussian measures extend naturally to infinite-dimensional settings, such as Hilbert or Banach spaces, where they are characterized by the Gaussian nature of all finite-dimensional projections. This enables the study of stochastic processes like Brownian motion via the Wiener measure, which is a Gaussian measure on the space of continuous functions. In these spaces, Gaussian measures admit representations as limits of finite-dimensional approximations, often via series expansions \sum x_i g_i with independent standard normal random variables g_i, converging almost surely and in L^p for 0 < p < \infty. Notable properties include the Gaussian isoperimetric inequality, which asserts that among sets of a given measure, half-spaces minimize the measure of Minkowski enlargements, and the existence of an orthonormal basis of the L^2 space over a Gaussian measure formed by Hermite polynomials or Wick products.
The theory of Gaussian measures underpins diverse applications, including the analysis of Gaussian processes, empirical processes, and random fields, with tools like the Wiener chaos decomposition partitioning L^2 functions into orthogonal components based on homogeneity degrees, and concentration inequalities providing tail bounds essential for high-dimensional statistics and machine learning. Modern developments integrate Gaussian measures with convexity, Sobolev spaces, and nonlinear transformations, as systematically explored in foundational texts that emphasize their role in bridging probability, geometry, and analysis.

Finite-dimensional Gaussian measures

Definition

In finite-dimensional Euclidean space \mathbb{R}^n, the standard Gaussian measure, often denoted \gamma_n, is the Borel probability measure whose density with respect to Lebesgue measure is (2\pi)^{-n/2} \exp\left(-\frac{\|x\|^2}{2}\right), where \| \cdot \| denotes the Euclidean norm. This measure is the product of n independent standard normal distributions and serves as the canonical example of a centered Gaussian measure with identity covariance matrix. More generally, a Gaussian measure on \mathbb{R}^n is specified by a mean vector \mu \in \mathbb{R}^n and a positive semidefinite covariance matrix \Sigma \in \mathbb{R}^{n \times n}; provided that \Sigma is positive definite (i.e., invertible), it has the density (2\pi)^{-n/2} (\det \Sigma)^{-1/2} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right) with respect to Lebesgue measure. When \mu = 0, the measure is centered. These measures are precisely the multivariate normal distributions, which form the foundation for many probabilistic models in finite dimensions. In the degenerate case where \Sigma is singular (positive semidefinite but not invertible), the measure has no density with respect to Lebesgue measure on \mathbb{R}^n; it is instead supported on the affine subspace obtained by translating the range of \Sigma by \mu, whose dimension equals the rank of \Sigma. Such degenerate Gaussian measures arise naturally in applications where variables exhibit linear dependencies.
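The density formula above can be checked numerically. The following is a minimal sketch assuming NumPy; the function name gaussian_density is illustrative:

```python
import numpy as np

def gaussian_density(x, mu, Sigma):
    """Density of N(mu, Sigma) on R^n, assuming Sigma is positive definite."""
    n = len(mu)
    diff = x - mu
    norm_const = (2 * np.pi) ** (-n / 2) * np.linalg.det(Sigma) ** (-0.5)
    quad = diff @ np.linalg.solve(Sigma, diff)  # (x - mu)^T Sigma^{-1} (x - mu)
    return norm_const * np.exp(-0.5 * quad)

# Standard Gaussian measure gamma_n: mu = 0, Sigma = I.
n = 3
val = gaussian_density(np.zeros(n), np.zeros(n), np.eye(n))
# At the origin the density equals the normalization constant (2*pi)^{-n/2}.
print(val, (2 * np.pi) ** (-n / 2))
```

For a singular Sigma the call to np.linalg.solve fails, reflecting the absence of a Lebesgue density in the degenerate case.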

Moments and characteristic function

The mean of a random vector X following a finite-dimensional Gaussian measure with density p(x) = (2\pi)^{-n/2} (\det \Sigma)^{-1/2} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right), where \mu \in \mathbb{R}^n is the mean vector and \Sigma is the positive definite covariance matrix, is given by \mathbb{E}[X] = \mu. This follows from the symmetry of the density around \mu, as the integral \int_{\mathbb{R}^n} x \, p(x) \, dx shifts the origin to \mu via the change of variables y = x - \mu, yielding \mathbb{E}[Y] = 0 for the centered vector Y, so \mathbb{E}[X] = \mu + \mathbb{E}[Y] = \mu. The covariance matrix is \mathrm{Cov}(X) = \Sigma, computed as \mathrm{Cov}(X) = \mathbb{E}[(X - \mu)(X - \mu)^T] = \int_{\mathbb{R}^n} (x - \mu)(x - \mu)^T p(x) \, dx. For the centered case, this integral evaluates to \Sigma due to the quadratic form in the exponent, which encodes the second moments directly; the off-diagonal entries capture covariances between components. Higher-order moments of Gaussian random vectors can be expressed using Isserlis' theorem, particularly for centered Gaussians. For a zero-mean multivariate normal vector Y = (Y_1, \dots, Y_n)^T \sim N(0, \Sigma), the moment \mathbb{E}[Y_1 \cdots Y_n] is zero if n is odd, and for even n = 2k, it equals the sum over all perfect matchings (pair partitions) of the indices, where each pair (l, r) contributes the covariance \mathbb{E}[Y_l Y_r] = \Sigma_{l r}: \mathbb{E}[Y_1 \cdots Y_n] = \sum_{p \in PP(n)} \prod_{(l,r) \in p} \Sigma_{l r}, with PP(n) denoting the set of partitions into k disjoint pairs; the number of such partitions is (2k-1)!! = (2k)! / (2^k k!). This theorem, also known as Wick's theorem in some contexts, reduces all even moments to sums of products of covariances, reflecting the pairwise structure of Gaussian dependence. For non-centered vectors, moments follow by expanding around the mean. 
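Isserlis' theorem can be implemented directly by enumerating pair partitions. The following sketch assumes NumPy; the helper names pair_partitions and isserlis_moment are illustrative:

```python
import numpy as np

def pair_partitions(indices):
    """Yield all partitions of the tuple `indices` into unordered pairs."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pair_partitions(remaining):
            yield [(first, partner)] + sub

def isserlis_moment(idx, Sigma):
    """E[Y_{i_1} ... Y_{i_k}] for centered Y ~ N(0, Sigma) via Isserlis' theorem."""
    if len(idx) % 2 == 1:
        return 0.0  # odd moments of a centered Gaussian vanish
    return sum(
        np.prod([Sigma[l, r] for (l, r) in p])
        for p in pair_partitions(tuple(idx))
    )

# E[Y_1^4] = 3 * Sigma_{11}^2 = 3 for a standard normal coordinate,
# matching the (2k-1)!! = 3 pair partitions of four indices.
print(isserlis_moment((0, 0, 0, 0), np.eye(2)))
```

For \Sigma = [[2, 1], [1, 2]], the mixed moment E[Y_1^2 Y_2^2] evaluates to \Sigma_{11}\Sigma_{22} + 2\Sigma_{12}^2 = 6, in line with the pairwise structure described above.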
The characteristic function of X \sim N(\mu, \Sigma) provides a compact summary of all moments and confirms the distribution's uniqueness. Defined as \phi(t) = \mathbb{E}[\exp(i t^T X)] for t \in \mathbb{R}^n, it is derived by direct integration against the density: \phi(t) = \int_{\mathbb{R}^n} \exp(i t^T x) p(x) \, dx = \exp\left( i t^T \mu - \frac{1}{2} t^T \Sigma t \right). To see this, complete the square in the exponent of the integrand: i t^T x - \frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) = -\frac{1}{2} (x - (\mu + i \Sigma t))^T \Sigma^{-1} (x - (\mu + i \Sigma t)) + \frac{1}{2} t^T \Sigma t + i t^T \mu, where the integral over the shifted Gaussian yields the normalization constant times the exponential prefactor. Alternatively, represent X = \mu + D Z with Z \sim N(0, I) and \Sigma = D D^T; then \phi(t) = \exp(i t^T \mu) \mathbb{E}[\exp(i (D^T t)^T Z)] = \exp(i t^T \mu) \exp(-\frac{1}{2} \|D^T t\|^2) = \exp(i t^T \mu - \frac{1}{2} t^T \Sigma t), using the independence of standard normal components. This characteristic function uniquely determines the Gaussian measure, as characteristic functions uniquely identify distributions via the inversion theorem. Moreover, since the form depends only on \mu and \Sigma, and higher moments are fixed by these via Isserlis' theorem, Gaussian measures in finite dimensions are fully determined by their first two moments.
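The closed form \phi(t) = \exp(i t \mu - \frac{1}{2} \sigma^2 t^2) can be verified against direct numerical integration in one dimension. A sketch assuming NumPy, with arbitrary illustrative parameter values:

```python
import numpy as np

# Verify phi(t) = exp(i*t*mu - 0.5*sigma^2*t^2) for N(mu, sigma^2) in 1-D
# by integrating exp(i*t*x) against the Gaussian density on a wide grid.
mu, sigma, t = 1.0, 2.0, 0.7
xs = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
dx = xs[1] - xs[0]
density = np.exp(-0.5 * ((xs - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
numeric = np.sum(np.exp(1j * t * xs) * density) * dx  # Riemann sum approximation
closed_form = np.exp(1j * t * mu - 0.5 * sigma**2 * t**2)
print(abs(numeric - closed_form))  # tiny discretization error
```

Because the integrand decays rapidly, the simple Riemann sum already agrees with the closed form to high precision.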

General Gaussian measures

Abstract definition

A Gaussian measure on a locally convex topological vector space E is defined as a Radon probability measure \mu on the Borel \sigma-algebra of E such that its characteristic functional \hat{\mu}(\ell) = \int_E \exp(i \langle \ell, x \rangle) \, d\mu(x), for \ell in the topological dual E', is continuous and admits the representation \hat{\mu}(\ell) = \exp\left( i \langle \ell, m \rangle - \frac{1}{2} Q(\ell) \right), where m \in E is the mean element and Q: E' \to [0, \infty) is a continuous positive semidefinite quadratic form. This abstract formulation unifies Gaussian measures across dimensions: in finite-dimensional Euclidean space \mathbb{R}^n, the multivariate Gaussian distribution \mathcal{N}(\mathbf{\mu}, \Sigma) has characteristic function \exp(i \ell^\top \mathbf{\mu} - \frac{1}{2} \ell^\top \Sigma \ell), which coincides with the above form upon identifying the mean term \langle \ell, \mathbf{\mu} \rangle = \ell^\top \mathbf{\mu} and Q(\ell) = \ell^\top \Sigma \ell, confirming compatibility with the finite-dimensional case. For centered Gaussian measures, where m = 0, the characteristic functional simplifies to \hat{\mu}(\ell) = \exp\left( -\frac{1}{2} Q(\ell) \right); here, Q induces a reproducing kernel Hilbert space H_Q on a subspace of E, obtained as the completion of the image of the dual under the covariance structure defined by Q, serving as the natural domain for translations preserving absolute continuity. Uniqueness holds: if two Gaussian measures on E share the same mean element m and quadratic form Q, then their characteristic functionals coincide, implying the measures are equal, as distinct Radon measures on locally convex spaces yield distinct continuous characteristic functionals.

Support and equivalence of measures

The support of a Gaussian measure depends on the definiteness of its covariance operator. In finite dimensions, a non-degenerate Gaussian measure, characterized by a positive definite covariance matrix, has full support on the entire Euclidean space \mathbb{R}^n. In contrast, a degenerate Gaussian measure, with a singular covariance matrix, is concentrated on a proper affine subspace of \mathbb{R}^n, namely the translate of the range of the covariance matrix by the mean vector. In infinite dimensions the situation is more nuanced. In the abstract Wiener space construction, the Gaussian measure is defined on a separable Banach space containing a densely embedded Hilbert space, identified as the Cameron-Martin space. For a centered non-degenerate Gaussian measure \mu on a Banach space B with Cameron-Martin Hilbert space H, the topological support of \mu is the closure \overline{H} of H in the norm topology of B. More generally, for a Gaussian measure with mean m \in B, the support is the affine subspace m + \overline{H}. Note that in infinite dimensions H itself has \mu-measure zero, and there is no Lebesgue-like reference measure on B with respect to which \mu admits a density. Absolute continuity between two Gaussian measures \mu and \nu on the same space requires \mu to be concentrated on sets of full \nu-measure. In essence, \mu \ll \nu holds when the mean of \mu differs from the mean of \nu by an element of the reproducing kernel Hilbert space (RKHS) associated to the covariance of \nu, and the covariance operator of \mu is compatible with that of \nu in the sense of an operator inequality between their RKHS inclusions, ensuring no mass falls outside the support of \nu. These conditions guarantee that the support of \mu is contained within that of \nu.
Equivalence of Gaussian measures \mu and \nu, meaning \mu \ll \nu and \nu \ll \mu, is governed by the Feldman–Hájek theorem, which establishes a dichotomy: two Gaussian measures on a separable Hilbert space are either equivalent or mutually singular. They are equivalent if and only if they have the same Cameron–Martin space (with equivalent norms), the difference of their means belongs to this common space, and their covariance operators differ by a Hilbert–Schmidt perturbation that preserves positive definiteness and injectivity. This Hilbert–Schmidt condition quantifies the "closeness" of the covariances, ensuring the measures are non-singular with respect to each other. When two Gaussian measures are equivalent, the Radon–Nikodym derivative \frac{d\mu}{d\nu} can be explicitly computed. In finite dimensions, for \mu = \mathcal{N}(m, I) and \nu = \mathcal{N}(0, I), the Cameron–Martin formula gives \frac{d\mu}{d\nu}(x) = \exp\left( \langle m, x \rangle - \frac{1}{2} \|m\|^2 \right), where \langle \cdot, \cdot \rangle is the inner product and \| \cdot \| the Euclidean norm; this extends to general covariances via a change of variables.
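The finite-dimensional Cameron–Martin formula can be verified by comparing density ratios directly. A minimal sketch assuming NumPy; the helper name std_density is illustrative:

```python
import numpy as np

def std_density(x):
    """Density of N(0, I) on R^n."""
    n = len(x)
    return (2 * np.pi) ** (-n / 2) * np.exp(-0.5 * x @ x)

m = np.array([0.5, -1.0])   # mean shift
x = np.array([1.2, 0.3])    # evaluation point

# Density ratio N(m, I) / N(0, I) at x ...
ratio = std_density(x - m) / std_density(x)
# ... versus the Cameron-Martin expression exp(<m, x> - ||m||^2 / 2).
cameron_martin = np.exp(m @ x - 0.5 * m @ m)
print(ratio, cameron_martin)  # equal
```

The normalization constants cancel in the ratio, which is why the derivative depends only on the linear term \langle m, x \rangle and the constant \|m\|^2/2.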

Infinite-dimensional Gaussian measures

Construction

Cylinder measures on a separable infinite-dimensional Hilbert space H are constructed by projecting onto finite-dimensional subspaces spanned by elements of an orthonormal basis \{e_n\}_{n=1}^\infty. Specifically, for a finite subset \{e_1, \dots, e_n\} and a Borel set B \subset \mathbb{R}^n, the cylinder set is \{x \in H : ( \langle x, e_1 \rangle, \dots, \langle x, e_n \rangle ) \in B \}, and its measure is given by the corresponding finite-dimensional Gaussian distribution with mean zero and covariance determined by the inner products. This approach leverages the finite-dimensional theory to define measures on these cylinders, which form an algebra generating the Borel \sigma-algebra of H. The finite-dimensional Gaussian distributions satisfy the required consistency conditions: measures on projections to different finite-dimensional subspaces agree on overlapping cylinders. Consistency yields a finitely additive cylinder measure, but countable additivity on H is not automatic in infinite dimensions: the canonical cylinder measure with identity covariance, for instance, admits no countably additive extension to the Borel \sigma-algebra of H. For a centered Gaussian measure with covariance operator C, a positive self-adjoint trace-class operator on H, the construction does succeed, and proceeds via the characteristic functional \hat{\mu}(\ell) = \exp\left( -\frac{1}{2} \langle C \ell, \ell \rangle_H \right) for \ell \in H.
This functional is positive definite and continuous in the topology of H, and by the Bochner–Minlos theorem adapted to Hilbert spaces (Sazonov's theorem), it corresponds to a unique centered Gaussian probability measure \mu_C on the Borel \sigma-algebra of H whose finite-dimensional projections match the Gaussian distributions with covariance C restricted to those subspaces. The trace-class condition on C is exactly what makes the measure countably additive on H: in an eigenbasis of C with eigenvalues \lambda_n, a sample can be written as X = \sum_n \sqrt{\lambda_n} g_n e_n with independent standard normals g_n, and the series converges in H almost surely because \mathbb{E}\|X\|^2 = \sum_n \lambda_n = \operatorname{tr} C < \infty. In the abstract Wiener space framework, one starts instead from the canonical cylinder measure on H (identity covariance), which is not countably additive on H; the Hilbert space H is densely embedded into a larger Banach space B via a measurable norm, under which the embedding is continuous, and the cylinder measure extends uniquely to a Radon probability measure on the Borel \sigma-algebra of B. This provides a setting where integration and analysis can be performed on the coarser topology of B while retaining the Hilbert structure of H for the covariance. The construction, introduced by Gross, facilitates the study of Gaussian processes on non-Hilbert spaces like the space of continuous functions.
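The series construction under the trace-class condition can be sketched numerically. The following assumes NumPy and the illustrative choice of eigenvalues \lambda_j = 1/j^2 (so \operatorname{tr} C = \pi^2/6), with a finite truncation of the expansion:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2000                                  # truncation level of the series
lam = 1.0 / np.arange(1, N + 1) ** 2      # eigenvalues of C; sum(lam) < inf

# One sample of X = sum_j sqrt(lam_j) g_j e_j, stored as its coefficient vector.
g = rng.standard_normal(N)
x_coeffs = np.sqrt(lam) * g

# Trace-class condition: the eigenvalue series converges (to pi^2/6 here),
# so E||X||^2 is finite and the sample norm is finite almost surely.
print(lam.sum(), np.pi**2 / 6)

# Characteristic functional at ell = e_1: exp(-0.5 * <C e_1, e_1>) = exp(-lam_1 / 2).
print(np.exp(-0.5 * lam[0]))
```

With identity covariance (lam all equal to 1) the eigenvalue series diverges, which is the numerical shadow of the failure of countable additivity on H.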

Cameron-Martin theorem

The Cameron-Martin theorem characterizes the absolute continuity of translated Gaussian measures in infinite-dimensional spaces. Consider a centered Gaussian measure \mu on a separable Hilbert space H with covariance operator C: H \to H, assumed symmetric, positive, trace-class, and injective (so that \mu is non-degenerate). The translated measure \mu_h is defined by \mu_h(A) = \mu(A - h) for Borel sets A \subset H. Then, \mu_h is absolutely continuous with respect to \mu if and only if h belongs to the Cameron-Martin space \mathcal{H} = C^{1/2}(H), which is the image of the operator C^{1/2} and coincides with the reproducing kernel Hilbert space associated to \mu. In this case, the Radon-Nikodym derivative is given by \frac{d\mu_h}{d\mu}(x) = \exp\left( \langle C^{-1/2} h, C^{-1/2} x \rangle - \frac{1}{2} \|C^{-1/2} h\|^2 \right), where C^{-1/2} is the pseudo-inverse of C^{1/2} and the pairing \langle C^{-1/2} h, C^{-1/2} x \rangle is understood as the measurable linear functional (Paley–Wiener integral) extending it to \mu-almost every x \in H, since C^{-1/2} x is not defined pointwise for typical samples. If h \notin \mathcal{H}, then \mu_h is singular with respect to \mu. This result was originally established for the Wiener measure on path space by Cameron and Martin through transformations of Wiener integrals, and later generalized to abstract Gaussian measures on Hilbert spaces. A proof sketch proceeds via characteristic functionals. The characteristic functional of \mu is \hat{\mu}(y) = \mathbb{E}[\exp(i \langle y, X \rangle)] = \exp\left( -\frac{1}{2} \langle C y, y \rangle \right) for y \in H, where X \sim \mu. For the translated measure, \widehat{\mu_h}(y) = \hat{\mu}(y) \exp(i \langle y, h \rangle). If h \in \mathcal{H}, write h = C^{1/2} k for some k \in H, so \langle y, h \rangle = \langle C^{1/2} y, k \rangle. Then, \widehat{\mu_h}(y) = \exp\left( i \langle C^{1/2} y, k \rangle - \frac{1}{2} \langle C y, y \rangle \right).
This matches the characteristic functional obtained by integrating the proposed Radon-Nikodym derivative \exp\left( \langle k, C^{-1/2} x \rangle - \frac{1}{2} \|k\|^2 \right) against \exp(i \langle y, x \rangle) under \mu: completing the square in the Gaussian expectation yields exactly the same form, confirming absolute continuity via the explicit derivative. For h \notin \mathcal{H}, the measures are singular, which can be shown by constructing a sequence of linear functionals separating them, using the fact that \mathcal{H} is a proper subspace of H. An alternative approach uses Girsanov's theorem for the change of measure under a drift in the direction of h. The theorem implies that \mu is quasi-invariant under translations by elements of \mathcal{H}, meaning \mu_h \sim \mu (equivalent measures), so the null sets remain the same. Translations by elements outside \mathcal{H}, however, yield singular measures, in sharp contrast to the finite-dimensional case, where a non-degenerate Gaussian measure is quasi-invariant under all translations (every translate is equivalent to it). This quasi-invariance property is crucial for infinite-dimensional analysis. A key application arises in stochastic processes, particularly the law of Brownian motion on C[0,1], i.e., the Wiener measure. Here, \mathcal{H} = \{ h : h(0)=0, \ h \text{ absolutely continuous}, \ h' \in L^2([0,1]) \}, equipped with the inner product \langle h_1, h_2 \rangle_{\mathcal{H}} = \int_0^1 h_1'(t) h_2'(t) \, dt. The law of Brownian motion with drift h \in \mathcal{H} is absolutely continuous with respect to the standard Wiener measure, with Radon-Nikodym derivative \exp\left( \int_0^1 h'(t) \, dW(t) - \frac{1}{2} \int_0^1 (h'(t))^2 \, dt \right), enabling Girsanov transformations for changing drifts within this Sobolev space.
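A finite-dimensional sanity check of the Cameron-Martin derivative, assuming NumPy and a diagonal covariance C (so that C^{-1/2} is explicit); all names and parameter values are illustrative:

```python
import numpy as np

def density(x, mean, C_diag):
    """Density of N(mean, C) for diagonal positive definite C, given as a vector."""
    n = len(x)
    quad = np.sum((x - mean) ** 2 / C_diag)
    return (2 * np.pi) ** (-n / 2) * np.prod(C_diag) ** (-0.5) * np.exp(-0.5 * quad)

C_diag = np.array([1.0, 0.25, 4.0])   # eigenvalues of the covariance operator
h = np.array([0.3, -0.1, 0.8])        # shift; in finite dimensions every h lies in C^{1/2}(R^n)
x = np.array([0.5, 0.2, -1.0])        # evaluation point

# Density ratio of the translated measure mu_h over mu ...
ratio = density(x, h, C_diag) / density(x, np.zeros(3), C_diag)
# ... versus exp(<C^{-1/2} h, C^{-1/2} x> - 0.5 * ||C^{-1/2} h||^2).
cm = np.exp(np.sum(h * x / C_diag) - 0.5 * np.sum(h**2 / C_diag))
print(ratio, cm)  # equal
```

In infinite dimensions the analogous check fails for h outside C^{1/2}(H): the exponent \|C^{-1/2} h\|^2 diverges, mirroring the singularity asserted by the theorem.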
