
Uniform integrability

Uniform integrability is a fundamental concept in measure theory and probability theory that describes a collection of integrable functions or random variables whose integrals over sets of small measure (or whose tails) can be controlled uniformly across the collection. Specifically, a family \mathcal{C} of measurable functions on a probability space (\Omega, \mathcal{F}, P) is uniformly integrable if \lim_{K \to \infty} \sup_{f \in \mathcal{C}} \int_{\{|f| \geq K\}} |f| \, dP = 0; equivalently, the family is bounded in L^1 and for every \epsilon > 0 there exists \delta > 0 such that \sup_{f \in \mathcal{C}} \int_A |f| \, dP < \epsilon for every measurable set A \subset \Omega with P(A) < \delta.

This property strengthens the integrability of the individual members uniformly across the family, and it is precisely what is needed to upgrade pointwise or probabilistic convergence to convergence in the L^1 norm, as in the Vitali convergence theorem: if \{f_n\} is uniformly integrable and converges pointwise almost everywhere to an integrable f on a finite measure space, then \lim_{n \to \infty} \int |f_n - f| \, d\mu = 0. In probability theory, uniform integrability is essential for martingale convergence; for instance, a uniformly integrable martingale converges almost surely and in L^1, as established by Doob's martingale convergence theorem (itself built on the upcrossing inequality).

Key equivalent characterizations include L^1-boundedness combined with uniform absolute continuity, and the existence of a convex increasing function \phi with \lim_{x \to \infty} \phi(x)/x = \infty such that \sup_{f \in \mathcal{C}} \int \phi(|f|) \, dP < \infty (the de la Vallée Poussin criterion). Examples of uniformly integrable families include L^p-bounded families for p > 1, families dominated by a single integrable function, and the conditional expectations of a fixed integrable random variable. Counterexamples such as X_n = n \mathbf{1}_{\{U \leq 1/n\}}, for U uniform on (0,1), illustrate failure: each expectation equals 1, but the tail expectations do not vanish uniformly.
The concept extends to more general measures and plays a vital role in functional analysis, compactness in L^1, and applications in stochastic processes.

Definitions

Measure-theoretic definition

In the measure-theoretic framework, a family of measurable functions \{f_\alpha : \alpha \in A\} on a measure space (X, \Sigma, \mu) is uniformly integrable if \lim_{K \to \infty} \sup_{\alpha \in A} \int_{\{x \in X : |f_\alpha(x)| > K\}} |f_\alpha(x)| \, d\mu(x) = 0. This condition requires that the supremum over the family of the integrals of |f_\alpha| over the sets where |f_\alpha| exceeds any large threshold K tends to zero as K increases, thereby uniformly controlling the contribution from regions of large function values and preventing the integral mass from concentrating at infinity across the family. A basic example of a uniformly integrable family is the collection of all measurable functions bounded in absolute value by a fixed constant M > 0, since the integral over \{|f_\alpha| > K\} vanishes for all K > M. In contrast, the family \{f_\alpha : \alpha > 0\} with f_\alpha(x) = \alpha \cdot \mathbf{1}_{[0, 1/\alpha]}(x) on the unit interval [0,1] with Lebesgue measure is not uniformly integrable: each f_\alpha has integral 1, yet for every K > 0 one has \sup_{\alpha} \int_{\{|f_\alpha| > K\}} |f_\alpha| \, d\mu = 1 (take \alpha > K), so the tail integrals do not vanish uniformly. The concept was developed by Charles-Jean de la Vallée Poussin in 1915, building on earlier work in analysis to generalize convergence theorems in real analysis and measure theory. In finite measure spaces, bounded subsets of L^p(X, \Sigma, \mu) for 1 < p \leq \infty form uniformly integrable families.
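The counterexample above can be checked exactly, since both the L^1 norms and the tail integrals of f_\alpha = \alpha \cdot \mathbf{1}_{[0, 1/\alpha]} have closed forms (a sketch; the function names are illustrative, not from any library):

```python
# Exact check of the counterexample f_a(x) = a * 1_[0, 1/a](x) on [0, 1]
# with Lebesgue measure.

def l1_norm(a: float) -> float:
    """int |f_a| d(lambda) = a * (1/a) = 1 for every a > 0."""
    return a * (1.0 / a)

def tail_integral(a: float, K: float) -> float:
    """int_{|f_a| > K} |f_a| d(lambda): f_a equals a on a set of measure 1/a,
    so the tail integral is 1 when a > K and 0 otherwise."""
    return 1.0 if a > K else 0.0

alphas = [2.0 ** k for k in range(20)]
assert all(l1_norm(a) == 1.0 for a in alphas)        # the family is L1-bounded
# For any fixed threshold K the supremum of tail integrals stays at 1,
# so it cannot tend to 0 as K grows: uniform integrability fails.
assert max(tail_integral(a, 1000.0) for a in alphas) == 1.0
```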

Probabilistic definition

In probability theory, uniform integrability concerns families of random variables defined on a probability space (\Omega, \mathcal{F}, P). A family \{X_\alpha\}_{\alpha \in A} of random variables is said to be uniformly integrable if \lim_{K \to \infty} \sup_{\alpha \in A} \mathbb{E}\left[ |X_\alpha| \mathbf{1}_{\{|X_\alpha| > K\}} \right] = 0. This condition ensures that the contributions to the expectations from the tails beyond any large threshold K become negligible uniformly across the family. This probabilistic definition is equivalent to the measure-theoretic notion of uniform integrability when the underlying measure \mu is a probability measure, i.e., \mu(\Omega) = 1, as the total mass being finite aligns the tail control directly with expectation bounds in stochastic settings. A simple example of a uniformly integrable family arises when the random variables are uniformly essentially bounded. Specifically, if there exists a constant M < \infty such that |X_\alpha| \leq M almost surely for all \alpha \in A, then the indicator \mathbf{1}_{\{|X_\alpha| > K\}} vanishes for all K > M, making the supremum zero and thus satisfying the definition trivially. As a counterexample, consider a family of Pareto-distributed random variables with fixed scale parameter x_m = 1 and shape parameters \alpha > 1 decreasing to 1. Each individual X_\alpha is integrable since \mathbb{E}[X_\alpha] = \alpha / (\alpha - 1) < \infty, but the family fails uniform integrability because the increasingly heavy tails—characteristic of the Pareto distribution with shape approaching the boundary of integrability—prevent the supremum of the tail expectations from tending to zero as K \to \infty. Indeed, for survival function \bar{F}(x) = x^{-k} (x \geq 1), the tail expectation \mathbb{E}[X \mathbf{1}_{\{X > K\}}] = \frac{k}{k-1} K^{1-k} grows without bound at every fixed K as k \to 1^+, so the uniform tail condition breaks down.
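The Pareto tail expectations have a closed form, so the failure of uniform integrability can be checked exactly (a sketch; the function names are illustrative):

```python
# Exact tail expectations for Pareto(x_m = 1, shape a): for K >= 1,
# E[X 1{X > K}] = int_K^inf x * a * x**(-a-1) dx = a/(a-1) * K**(1-a).

def tail_expectation(a: float, K: float) -> float:
    """Valid for shape a > 1 and threshold K >= 1."""
    return a / (a - 1.0) * K ** (1.0 - a)

# Each fixed shape a > 1 is individually fine: its tail vanishes as K grows ...
assert tail_expectation(2.0, 1000.0) < 0.01
# ... but as the shape decreases toward 1, the tail expectation at any fixed K
# blows up, so the supremum over the family cannot tend to 0.
K = 100.0
shapes = [1.0 + 2.0 ** -k for k in range(1, 12)]
assert tail_expectation(shapes[-1], K) > tail_expectation(shapes[0], K)
assert tail_expectation(shapes[-1], K) > 1.0
```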

Characterizations and Properties

Uniform absolute continuity

In measure theory, a family of integrable functions \{f_\alpha\}_{\alpha \in A} on a measure space (X, \mathcal{A}, \mu) is uniformly absolutely continuous if for every \varepsilon > 0, there exists \delta > 0 such that \sup_{\alpha \in A} \int_E |f_\alpha| \, d\mu < \varepsilon for every measurable set E \in \mathcal{A} with \mu(E) < \delta. This condition ensures uniform control over the integrals of the functions across the family on sets of arbitrarily small measure. When the measure space has finite total measure (i.e., \mu(X) < \infty), uniform absolute continuity together with boundedness in L^1(\mu) is equivalent to the tail-integral definition of uniform integrability, which requires that \sup_{\alpha \in A} \int_{\{|f_\alpha| > K\}} |f_\alpha| \, d\mu \to 0 as K \to \infty. To sketch the proof in one direction, assume the tail condition. Given \varepsilon > 0, choose K > 0 such that \sup_{\alpha} \int_{\{|f_\alpha| > K\}} |f_\alpha| \, d\mu < \varepsilon/2; then L^1-boundedness follows from \int |f_\alpha| \, d\mu \leq \varepsilon/2 + K\mu(X), and for any E with \mu(E) < \delta = \varepsilon/(2K), the integral of |f_\alpha| over E \cap \{|f_\alpha| \leq K\} is at most K\mu(E) < \varepsilon/2, so \int_E |f_\alpha| \, d\mu < \varepsilon uniformly. For the converse, L^1-boundedness combined with Markov's inequality gives \mu(\{|f_\alpha| > K\}) \leq \sup_\beta \|f_\beta\|_{L^1} / K uniformly in \alpha; choosing K large makes these sets have measure less than \delta, and uniform absolute continuity then yields the tail control. The L^1-boundedness hypothesis cannot simply be dropped: on a probability space consisting of a single atom, the family f_n \equiv n is uniformly absolutely continuous (the only set of small measure is the empty set) but not uniformly integrable. Paired with an L^1 bound, the \varepsilon-\delta condition is thus a practical characterization in spaces like L^1([0,1]), where families satisfying it for small intervals inherit the uniform integrability property essential for convergence results.
Uniform absolute continuity differs from the absolute continuity of individual functions, where for each fixed f_\alpha, one has \int_E |f_\alpha| \, d\mu \to 0 as \mu(E) \to 0, but the corresponding \delta may depend on \alpha and fail to be uniform across the family. Without uniformity, the small-set integrals of the family need not be controlled by a single \delta, even though each member individually satisfies the absolute continuity property.
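The \delta = \varepsilon/(2K) produced in the proof sketch above can be checked numerically for a single unbounded function (a sketch; the function names are illustrative): take f(x) = x^{-1/2} on (0, 1] with Lebesgue measure, whose integrals all have closed forms.

```python
import math

# f(x) = x**(-1/2) on (0, 1]: int_0^t f = 2*sqrt(t), and {f > K} = (0, 1/K**2).

def integral_over_interval(t: float) -> float:
    """Exact value of int_0^t x**(-1/2) dx = 2*sqrt(t)."""
    return 2.0 * math.sqrt(t)

def tail_integral(K: float) -> float:
    """int_{f > K} f d(lambda) = int_0^{1/K**2} x**(-1/2) dx = 2/K."""
    return 2.0 / K

eps = 0.1
K = 4.0 / eps                      # chosen so the tail integral equals eps/2
assert abs(tail_integral(K) - eps / 2) < 1e-12
delta = eps / (2.0 * K)            # the delta from the proof sketch
# Worst-case small set: E = (0, delta), where f is largest.
assert integral_over_interval(delta) < eps
```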

Relation to tightness

In probability theory, a family of probability measures \{\mu_\alpha\} on a metric space (S, \mathcal{S}) is said to be tight if, for every \epsilon > 0, there exists a compact set K \subseteq S such that \sup_\alpha \mu_\alpha(S \setminus K) < \epsilon. A key relation between uniform integrability and tightness arises when considering families of random variables on a probability space. Specifically, if \{X_\alpha\} is a uniformly integrable family of random variables (in the probabilistic sense), then the family of their induced laws \{\mathcal{L}(X_\alpha)\} is tight. This follows because uniform integrability implies that \sup_\alpha \mathbb{E}[|X_\alpha|] < \infty, and by Markov's inequality, \sup_\alpha \mathbb{P}(|X_\alpha| > t) \leq \sup_\alpha \mathbb{E}[|X_\alpha|]/t \to 0 as t \to \infty, yielding tightness on \mathbb{R}. This implication plays a role in Prokhorov's theorem, which states that tightness is necessary and sufficient for relative compactness in the space of probability measures endowed with weak convergence (on Polish spaces), thus facilitating weak convergence results for uniformly integrable families. An illustrative example occurs with martingales. If \{M_t\}_{t \geq 0} is a uniformly integrable martingale, then the family of distributions \{\mathcal{L}(M_t)\}_{t \geq 0} is tight, as uniform integrability ensures the required L^1-boundedness for the Markov inequality application. However, the converse does not hold: tightness does not imply uniform integrability. A counterexample is the family of measures \mu_n = (1 - 1/n) \delta_0 + (1/n) \delta_n on \mathbb{R}, where \delta_x denotes the Dirac measure at x. This family is tight, since for any \epsilon > 0, choosing the compact interval [-A, A] with A > 1/\epsilon ensures \sup_n \mu_n(\mathbb{R} \setminus [-A, A]) = \sup_{n > A} 1/n < \epsilon. 
Yet the corresponding random variables X_n (taking value n with probability 1/n and 0 otherwise) satisfy \mathbb{E}[|X_n|] = 1 but \sup_n \mathbb{E}[|X_n| \mathbf{1}_{\{|X_n| > K\}}] = 1 for any fixed K, so \{X_n\} is not uniformly integrable. The implication from uniform integrability to tightness thus rests on the L^1-boundedness it provides, which via Markov's inequality makes the tail probabilities uniformly small.
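Both halves of this counterexample—tightness holds, uniform integrability fails—reduce to exact arithmetic for the family \mu_n = (1 - 1/n)\delta_0 + (1/n)\delta_n (a sketch; the function names are illustrative):

```python
# mu_n puts mass 1 - 1/n at 0 and mass 1/n at the point n.

def mass_outside(n: int, A: float) -> float:
    """mu_n(R \\ [-A, A]): the atom at n escapes [-A, A] iff n > A."""
    return 1.0 / n if n > A else 0.0

def tail_expectation(n: int, K: float) -> float:
    """E[|X_n| 1{|X_n| > K}] = n * (1/n) = 1 whenever n > K, else 0."""
    return 1.0 if n > K else 0.0

# Tightness: with A = 100 the escaping mass is at most 1/101, uniformly in n.
assert max(mass_outside(n, 100.0) for n in range(1, 10000)) <= 1.0 / 101
# Failure of uniform integrability: for any fixed K the supremum of the
# tail expectations over n is 1, so it cannot vanish as K grows.
assert max(tail_expectation(n, 50.0) for n in range(1, 1000)) == 1.0
```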

Key Theorems

Vitali convergence theorem

The Vitali convergence theorem provides a fundamental condition for interchanging limits and integrals in measure theory. Specifically, let (X, \mathcal{M}, \mu) be a finite measure space. If a sequence of measurable functions \{f_n\} converges pointwise almost everywhere to a function f \in L^1(X, \mathcal{M}, \mu), and the family \{f_n\} is uniformly integrable, then \int_X |f_n - f| \, d\mu \to 0 as n \to \infty, and in particular \int_X f_n \, d\mu \to \int_X f \, d\mu. This result holds more generally if the convergence is in measure rather than pointwise, and extends to \sigma-finite measure spaces under suitable conditions. The theorem was developed by the Italian mathematician Giuseppe Vitali in 1907, building on Henri Lebesgue's foundational work on integration by addressing limitations in interchanging limits for non-dominated sequences. Vitali's contribution appeared in his paper "Sull'integrazione per serie," where he established the role of uniform integrability in ensuring convergence of integrals for series expansions, extending earlier ideas on absolute continuity. A proof outline relies on the uniform absolute continuity property of uniformly integrable families. First, by Egoroff's theorem, the pointwise convergence is almost uniform on sets of finite measure, allowing control of the integrals there via bounded convergence. For the remainder, uniform integrability bounds the contribution from sets of small measure: for any \epsilon > 0, there exists \delta > 0 such that \mu(E) < \delta implies \sup_n \int_E |f_n| \, d\mu < \epsilon. Combining this with Fatou's lemma on the difference |f_n - f| yields the L^1 convergence. The uniform integrability condition cannot be dropped, as demonstrated by counterexamples where it fails. Consider the probability space ([0,1], \mathcal{B}, \lambda), where \lambda is Lebesgue measure, and define f_n(x) = n \cdot \mathbf{1}_{(0,1/n)}(x).
Then f_n \to 0 pointwise almost everywhere, but \int_0^1 f_n \, d\lambda = 1 \not\to 0. The family \{f_n\} is not uniformly integrable: each f_n concentrates integral mass 1 on the set (0, 1/n), whose measure tends to 0, so the integrals over sets of small measure do not vanish uniformly in n. This theorem is closely related to the dominated convergence theorem, replacing pointwise domination by the weaker uniform integrability condition to handle a broader class of sequences.
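A positive instance of the theorem can also be checked exactly (a sketch; the function names are illustrative): the truncations f_n(x) = \min(n, x^{-1/2}) on (0,1] are dominated by the integrable function x^{-1/2}, hence uniformly integrable, and their integrals converge to \int_0^1 x^{-1/2} \, dx = 2, as Vitali's theorem predicts.

```python
# Exact integrals of f_n(x) = min(n, x**-0.5) on (0, 1]:
# int_0^1 f_n = int_0^{1/n**2} n dx + int_{1/n**2}^1 x**(-1/2) dx
#             = 1/n + (2 - 2/n) = 2 - 1/n  ->  2 = int_0^1 x**(-1/2) dx.

def integral_fn(n: int) -> float:
    return n * (1.0 / n**2) + (2.0 - 2.0 / n)

assert abs(integral_fn(1000) - 2.0) < 2e-3          # converging to the limit 2
assert all(integral_fn(n) < integral_fn(n + 1) for n in range(1, 100))
```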

de la Vallée Poussin theorem

The de la Vallée Poussin theorem provides a criterion for uniform integrability of a family of measurable functions on a measure space, utilizing the growth properties of a convex dominating function. This criterion, originally developed in the context of Lebesgue integration theory during the early 20th century, offers a practical tool for verifying uniform integrability without directly estimating tail integrals. Named after the Belgian mathematician Charles-Jean de la Vallée Poussin, the theorem stems from his 1915 memoir on Lebesgue integrals, where he explored conditions for boundedness and convergence of integrals. In this work, de la Vallée Poussin introduced ideas that evolved into the modern formulation, emphasizing control over the behavior of functions at infinity through auxiliary growth functions. The theorem states that a family \{f_\alpha\} of integrable functions on a measure space (\Omega, \mathcal{A}, \mu) is uniformly integrable if there exists a convex function \phi: [0, \infty) \to [0, \infty) such that \phi(x)/x \to \infty as x \to \infty and \sup_\alpha \int_\Omega \phi(|f_\alpha|) \, d\mu < \infty; on probability spaces the converse also holds, so the criterion is in fact a characterization. This condition ensures that the family is bounded in L^1(\mu) and that the integrals over sets where |f_\alpha| is large are uniformly controlled. The proof of sufficiency relies on the superlinear growth of \phi: one may assume \phi(0) = 0, so that convexity makes x \mapsto \phi(x)/x nondecreasing; then on \{|f_\alpha| > M\} one has |f_\alpha| \leq \frac{M}{\phi(M)} \phi(|f_\alpha|), whence \int_{\{|f_\alpha| > M\}} |f_\alpha| \, d\mu \leq \frac{M}{\phi(M)} \sup_\beta \int_\Omega \phi(|f_\beta|) \, d\mu \to 0 as M \to \infty by the growth condition on \phi. This approach avoids explicit verification of absolute continuity while leveraging the superlinear growth of \phi to dominate potential outliers in the family.
A classic example arises in L^p spaces for p > 1, where \phi(x) = x^p satisfies the conditions since \phi(x)/x = x^{p-1} \to \infty as x \to \infty, and boundedness in L^p(\mu) implies \sup_\alpha \int |f_\alpha|^p \, d\mu < \infty, hence uniform integrability in L^1(\mu). In Orlicz spaces, functions like \phi(x) = x \log(1 + x), which satisfies the growth condition since \phi(x)/x = \log(1 + x) \to \infty, characterize uniform integrability for families bounded in the Orlicz norm, connecting to broader convex analysis frameworks.
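As a concrete numerical check (a sketch with illustrative names), the tail bound \int_{\{|f| > M\}} |f| \, d\mu \leq \frac{M}{\phi(M)} \int \phi(|f|) \, d\mu from the proof can be verified exactly for \phi(x) = x^2 and f(x) = x^{-1/3} on (0, 1], where \int \phi(|f|) \, d\mu = \int_0^1 x^{-2/3} \, dx = 3:

```python
# de la Vallee Poussin tail bound for phi(x) = x**2 and f(x) = x**(-1/3) on (0, 1].

def tail_integral(M: float) -> float:
    """Exact: {f > M} = (0, M**-3), and int_0^t x**(-1/3) dx = 1.5 * t**(2/3),
    so the tail integral equals 1.5 * M**(-2)."""
    return 1.5 * M**-2

def dvp_bound(M: float) -> float:
    """(M / phi(M)) * int phi(|f|) d(mu) = (1/M) * 3."""
    return 3.0 / M

for M in [2.0, 5.0, 10.0, 100.0]:
    # The bound holds at every threshold and tends to 0 as M grows,
    # which is exactly the uniform tail control the theorem asserts.
    assert tail_integral(M) <= dvp_bound(M)
```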

Applications in Convergence

Convergence of integrals

Uniform integrability plays a crucial role in establishing convergence of integrals in L^1 spaces. Specifically, if a sequence of functions \{f_n\} in L^1(\mu) converges pointwise almost everywhere to f \in L^1(\mu) on a finite measure space and the family \{f_n\} is uniformly integrable, then \int |f_n - f| \, d\mu \to 0, ensuring convergence in the L^1 norm. This preservation of L^1 convergence under pointwise limits highlights how uniform integrability controls the tails of the functions, preventing the escape of integral mass. The property extends to weak convergence in L^1. The Dunford-Pettis theorem states that a subset of L^1(\mu) is relatively weakly compact if and only if it is uniformly integrable. Thus every sequence in a uniformly integrable subset has a weakly convergent subsequence, i.e., a subsequence whose integrals against each function in L^\infty(\mu) converge. This compactness criterion is fundamental in functional analysis for studying bounded sequences in L^1. An illustrative application appears in Fourier analysis, where conditions involving uniform convergence of cosine and sine Fourier transforms, tied to uniform integrability of the underlying functions, lead to L^p-integrability results for weighted Fourier integrals, facilitating the analysis of approximation errors in integral norms. Uniform integrability also appears as a necessary condition in the setting of Scheffé's lemma concerning densities. Scheffé's lemma asserts that if a sequence of probability densities \{f_n\} converges pointwise almost everywhere to a density f (so that \int f_n \, d\mu = \int f \, d\mu = 1 automatically), then \int |f_n - f| \, d\mu \to 0. The resulting L^1 convergence in turn implies that \{f_n\} is uniformly integrable, making uniform integrability a necessary condition for such density convergence in the L^1 sense.
This is a special case of the Vitali convergence theorem.
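A small numerical illustration of Scheffé's lemma (a sketch; the densities and names are chosen only for illustration): the densities f_n(x) = 1 + \cos(2\pi x)/n on [0, 1] converge pointwise to the uniform density f = 1, and the L^1 distance is exactly \frac{1}{n} \int_0^1 |\cos(2\pi x)| \, dx = \frac{2}{\pi n} \to 0.

```python
import math

def l1_distance(n: int, grid: int = 100000) -> float:
    """Midpoint-rule approximation of int_0^1 |f_n(x) - 1| dx
    for f_n(x) = 1 + cos(2*pi*x)/n."""
    h = 1.0 / grid
    return sum(abs(math.cos(2 * math.pi * (i + 0.5) * h)) / n
               for i in range(grid)) * h

# The numerical L1 distances match the exact value 2/(pi*n) and shrink to 0.
for n in [1, 10, 100]:
    assert abs(l1_distance(n) - 2.0 / (math.pi * n)) < 1e-6
```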

Stochastic ordering implications

Uniform integrability of a family of random variables \{X_\alpha\} on a probability space can be characterized through stochastic ordering, specifically the increasing convex order (icx-order). A family is uniformly integrable if and only if it is stochastically bounded above in the icx-order by a single integrable random variable Y, i.e., X_\alpha \leq_{\text{icx}} Y for all \alpha with \mathbb{E}[Y] < \infty. This equivalence highlights how uniform integrability constrains the tail behavior of the family in a stochastic ordering sense, ensuring that expectations of increasing convex functions remain controlled. In financial applications, this characterization supports the stability of risk measures such as expected shortfall (ES), a law-invariant convex risk measure. Uniform integrability of a set of loss distributions guarantees that ES remains robust under perturbations within sets dominated in the convex order, preserving monotonicity and continuity properties essential for risk aggregation and portfolio analysis. As an example, consider two families of loss distributions: a baseline portfolio X_\alpha and a riskier portfolio Y_\alpha with X_\alpha \leq_{\text{icx}} Y_\alpha for each index \alpha. Since ES is consistent with the icx-order, ES applied to X_\alpha is less than or equal to ES applied to Y_\alpha for every \alpha, so the risk comparison remains valid uniformly, for instance across different market scenarios.
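A toy computation (hedged: the two-point distributions and helper names below are hypothetical, chosen only for illustration) checks the icx-order via stop-loss transforms t \mapsto \mathbb{E}[(Z - t)_+] and confirms that expected shortfall respects the order. Here X is uniform on \{0, 2\} and Y is uniform on \{-1, 3\}: same mean, Y more spread out, so X \leq_{\text{icx}} Y.

```python
def stop_loss(values, t):
    """E[(Z - t)_+] for a uniform distribution on the given support points."""
    return sum(max(v - t, 0.0) for v in values) / len(values)

def expected_shortfall_half(values):
    """ES at level 1/2 for a two-point uniform law: mean of the worst half,
    i.e. the larger support point."""
    return max(values)

X, Y = [0.0, 2.0], [-1.0, 3.0]
# Stop-loss comparison over a grid of thresholds witnesses X <=_icx Y here
# (for these piecewise-linear transforms, a fine grid suffices).
assert all(stop_loss(X, t) <= stop_loss(Y, t) + 1e-12
           for t in [x / 10.0 for x in range(-30, 51)])
# ES is consistent with the icx-order: the riskier law has larger ES.
assert expected_shortfall_half(X) <= expected_shortfall_half(Y)
```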
