Continuous mapping theorem

The continuous mapping theorem is a cornerstone result in probability theory asserting that if a sequence of random vectors converges almost surely, in probability, or in distribution to a limiting random vector, then applying a continuous function preserves the mode of convergence, yielding convergence of the transformed sequence to the image of the limit, provided the function is continuous with respect to the limiting distribution. Formally, for a sequence of random vectors \{X_n\} in \mathbb{R}^k converging to X and a function g: \mathbb{R}^k \to \mathbb{R}^l that is continuous almost everywhere under the distribution of X, the theorem guarantees: (i) almost sure convergence X_n \xrightarrow{a.s.} X implies g(X_n) \xrightarrow{a.s.} g(X); (ii) convergence in probability X_n \xrightarrow{p} X implies g(X_n) \xrightarrow{p} g(X); and (iii) convergence in distribution X_n \xrightarrow{d} X implies g(X_n) \xrightarrow{d} g(X). The result extends to metric spaces, where the function must be continuous at points in a set of probability one under the limit distribution. The theorem's significance lies in facilitating asymptotic analysis for transformed statistics, such as sums, products, ratios, and norms of estimators, enabling derivation of limiting distributions in central limit theorems and delta methods without re-proving convergence from scratch. For instance, it underpins the consistency of maximum likelihood estimators under continuous transformations and supports Slutsky-type arguments for operations like addition and multiplication of convergent sequences. Its broad applicability has made it indispensable in statistics, econometrics, and stochastic processes, where handling functions of random variables is routine.

Background and Context

Modes of Convergence

In probability theory, the continuous mapping theorem applies to sequences of random variables that converge in certain senses, with the primary modes being convergence in distribution, convergence in probability, and almost sure convergence. These modes provide progressively stronger conditions under which limiting behaviors of random variables can be analyzed, and they are the prerequisites for understanding how transformations preserve convergence.

Convergence in distribution, also known as weak convergence, occurs when a sequence of random variables X_n converges to a random variable X if the cumulative distribution function (CDF) F_n(x) = P(X_n \leq x) converges to the CDF F(x) = P(X \leq x) at all continuity points x of F. Equivalently, this holds if E[g(X_n)] \to E[g(X)] for every bounded continuous function g defined on the real line or, more generally, on a metric space.

Convergence in probability means that X_n converges to X if, for every \epsilon > 0, P(|X_n - X| > \epsilon) \to 0 as n \to \infty. This mode captures the idea that X_n becomes arbitrarily close to X with high probability for large n.

Almost sure convergence, the strongest of these modes, requires that P(\{\omega : \lim_{n \to \infty} X_n(\omega) = X(\omega)\}) = 1, or equivalently, that P(|X_n - X| > \epsilon \text{ infinitely often}) = 0 for every \epsilon > 0. This means the sequence converges pointwise except on a set of probability zero.

The standard notations for these convergences are X_n \xrightarrow{d} X for distribution, X_n \xrightarrow{p} X for probability, and X_n \xrightarrow{a.s.} X for almost sure convergence. A key hierarchy among these modes is that almost sure convergence implies convergence in probability, and convergence in probability implies convergence in distribution, though the converses do not hold in general.
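As a quick numerical check of these definitions (a minimal sketch assuming NumPy; the sequence X_n \sim \mathrm{Uniform}[0, 1/n] is the standard textbook example used again below), the probability P(|X_n| > \epsilon) can be estimated by Monte Carlo and seen to vanish:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05

# X_n ~ Uniform[0, 1/n] converges in probability (hence in distribution) to 0:
# exactly, P(|X_n| > eps) = max(0, 1 - n * eps), which tends to 0.
for n in [5, 10, 50, 100]:
    samples = rng.uniform(0, 1 / n, size=100_000)
    print(n, np.mean(np.abs(samples) > eps))
```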

Role of Continuous Functions

In probability theory, the continuous mapping theorem relies fundamentally on the topological property of continuity for functions applied to random variables. A function g: \mathbb{R}^k \to \mathbb{R}^m is defined to be continuous at a point x \in \mathbb{R}^k if, for every \varepsilon > 0, there exists a \delta > 0 such that \|y - x\| < \delta implies \|g(y) - g(x)\| < \varepsilon, where \|\cdot\| denotes the Euclidean norm. This \varepsilon-\delta definition ensures that the function does not exhibit abrupt jumps or discontinuities that could disrupt the preservation of convergence limits when applied to sequences of random vectors.

For the theorem to hold in probabilistic settings, particularly with respect to convergence in distribution, the function g must often be continuous almost everywhere with respect to the limiting distribution. This means that the set D_g of discontinuity points of g satisfies \Pr(X \in D_g) = 0, where X is the limiting random variable. Such almost everywhere continuity accommodates real-world functions that may have isolated discontinuities, as long as these occur on a negligible set under the probability measure induced by X, thereby allowing the theorem to apply without requiring global continuity.

In broader contexts, random variables can take values in general metric spaces, where continuity is understood topologically: a function g from a metric space (S, d) to another (T, \rho) is continuous at x \in S if for every open neighborhood U of g(x) in T, there exists an open neighborhood V of x in S such that g(V) \subseteq U. This generalization extends the theorem's applicability to spaces like C[0,1] or more abstract Polish spaces, ensuring that the mapping preserves weak convergence of probability measures on these spaces. For random variables defined on such spaces, the continuity assumption guarantees that the image measures under g behave consistently with the convergence of the original measures.

A simple illustrative example is the function g(x) = x^2 from \mathbb{R} to \mathbb{R}, which is continuous everywhere since the polynomial structure avoids discontinuities. Applying it to a sequence of random variables converging to a limit preserves the convergence of the transformed variables, highlighting how continuity enables straightforward limit interchanges in probabilistic transformations.
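To see why almost-everywhere continuity suffices, consider a small numerical sketch (assuming NumPy; the map g(x) = \mathbf{1}_{\{x > 0\}} and the sequence X_n = X + 1/n are illustrative choices, not from the original sources): g is discontinuous only at 0, but a N(0,1) limit places zero probability there, so the fraction of sample points where g(X_n) and g(X) disagree vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    # Indicator of (0, inf): discontinuous only at x = 0.
    return (x > 0).astype(float)

x = rng.standard_normal(100_000)      # limit X ~ N(0, 1), so P(X = 0) = 0
for n in [1, 10, 100, 1000]:
    x_n = x + 1.0 / n                 # X_n = X + 1/n -> X for every sample point
    print(n, np.mean(g(x_n) != g(x))) # fraction of points where g disagrees
```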

Formal Statements

Convergence in Distribution

The continuous mapping theorem for convergence in distribution states that if a sequence of random elements X_n of a metric space S converges in distribution to X, and g: S \to S' is continuous at every point of a set C with P(X \in C) = 1 (in particular, if g is continuous everywhere), then g(X_n) converges in distribution to g(X). Equivalently, in terms of the induced laws, P_n \circ g^{-1} converges weakly to P \circ g^{-1}. A full proof via the portmanteau theorem is given in the Proofs section below.

Convergence in Probability

The continuous mapping theorem for convergence in probability states that if a sequence of random variables X_n converges in probability to a random variable X, and g is continuous at every point of a set C with P(X \in C) = 1, then g(X_n) converges in probability to g(X). The statement extends verbatim to random vectors X_n, X \in \mathbb{R}^d and vector-valued g: \mathbb{R}^d \to \mathbb{R}^k, with absolute values replaced by Euclidean norms. A proof is given in the Proofs section below.

Almost Sure Convergence

The continuous mapping theorem for almost sure convergence states that if X_n \to X almost surely and g: S \to S' is a Borel measurable function that is continuous almost surely with respect to the distribution of X (i.e., P(X \in D_g) = 0, where D_g is the set of discontinuity points of g), then g(X_n) \to g(X) almost surely. This is the strongest of the three statements in the sense that its proof is purely pathwise; it is given in the Proofs section below.

Proofs

Convergence in Distribution

The proof of the convergence-in-distribution statement relies on the portmanteau theorem, which characterizes weak convergence (equivalently, convergence in distribution) of probability measures P_n to P by the condition that \mathbb{E}[h(X_n)] \to \mathbb{E}[h(X)] for every bounded continuous function h on the range space. Suppose X_n \Rightarrow X and g is continuous. For any bounded continuous h, the composition h \circ g is also bounded and continuous, so \mathbb{E}[h(g(X_n))] \to \mathbb{E}[h(g(X))], which by the portmanteau theorem implies g(X_n) \Rightarrow g(X).

A step-by-step elaboration begins with the assumption that X_n \Rightarrow X on a metric space S, with g: S \to S' continuous. The continuity of g ensures that for any open set U in S', the preimage g^{-1}(U) is open in S. By the portmanteau theorem's open set criterion, \liminf_n P_n(g^{-1}(U)) \geq P(g^{-1}(U)), and an analogous bound holds for closed sets. For continuity sets—Borel sets B \subset S' with P(g(X) \in \partial B) = 0—this yields the key relation \lim_{n \to \infty} P(g(X_n) \in B) = P(g(X) \in B), because the boundary probabilities vanish under the limiting measure; by the portmanteau theorem this is exactly weak convergence of g(X_n) to g(X).

For functions g that are merely measurable but not everywhere continuous, the theorem extends if the set of discontinuities D_g satisfies P(X \in D_g) = 0. In this case, the proof either approximates g by continuous functions on the continuity sets or invokes the Skorokhod representation theorem, which constructs versions \tilde{X}_n \to \tilde{X} almost surely on a common probability space such that \tilde{X}_n \stackrel{d}{=} X_n and \tilde{X} \stackrel{d}{=} X. Continuity of g almost everywhere then implies g(\tilde{X}_n) \to g(\tilde{X}) almost surely, and by the continuous mapping theorem for almost sure convergence, g(X_n) \Rightarrow g(X). This handling ensures the result applies broadly in weak convergence settings on general metric spaces.
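This statement can be checked by a small Monte Carlo experiment (a sketch assuming NumPy and SciPy; the normal model, sample size, and the map g(x) = x^2 are illustrative assumptions anticipating the examples below): the simulated law of g(X_n) for X_n = \sqrt{n}(\bar{X}_n - \mu) is compared with its predicted limit \sigma^2 \chi^2(1).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
reps, n, mu, sigma = 20_000, 200, 1.0, 2.0

# By the CLT, X_n = sqrt(n) * (sample mean - mu)  =>  N(0, sigma^2).
data = rng.normal(mu, sigma, size=(reps, n))
x_n = np.sqrt(n) * (data.mean(axis=1) - mu)

# CMT with g(x) = x^2 predicts g(X_n) => sigma^2 * chi^2(1).
ks = stats.kstest(x_n**2 / sigma**2, stats.chi2(df=1).cdf)
print(ks.statistic)  # small value: empirical law close to chi^2(1)
```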

Convergence in Probability

To prove the convergence-in-probability statement, suppose X_n \xrightarrow{p} X and g is continuous almost surely with respect to the distribution of X, and fix \epsilon > 0. By the continuity of g at (almost) every point of the support of X, there exists \delta > 0 such that P\left( \sup_{|z - X| < \delta} |g(z) - g(X)| \geq \epsilon/2 \right) < \epsilon/2. On the event |X_n - X| < \delta, it follows that |g(X_n) - g(X)| \leq \sup_{|z - X| < \delta} |g(z) - g(X)|. Therefore, P(|g(X_n) - g(X)| > \epsilon) \leq P(|X_n - X| \geq \delta) + P\left( \sup_{|z - X| < \delta} |g(z) - g(X)| > \epsilon/2 \right). The second term is less than \epsilon/2 by the choice of \delta. Since X_n \to X in probability, P(|X_n - X| \geq \delta) \to 0 as n \to \infty, so for sufficiently large n this probability is also less than \epsilon/2. Thus P(|g(X_n) - g(X)| > \epsilon) < \epsilon for large n, establishing convergence in probability.

A more explicit bounding uses the event decomposition P(|g(X_n) - g(X)| > \epsilon) \leq P(|X_n - X| > \delta) + P(|g(X_n) - g(X)| > \epsilon/2, |X_n - X| \leq \delta). The first term vanishes by convergence in probability. For the second term, continuity ensures that when |X_n - X| \leq \delta, the difference |g(X_n) - g(X)| exceeds \epsilon/2 only on the small-probability event controlled by the supremum bound above, driving it to zero.

The result extends to vector-valued functions g: \mathbb{R}^d \to \mathbb{R}^k. If X_n \to X in probability with X_n, X \in \mathbb{R}^d and g continuous, then g(X_n) \to g(X) in probability, with closeness measured in the Euclidean norm \| \cdot \| on \mathbb{R}^k: for \epsilon > 0 there exists \delta > 0 with \|g(x) - g(y)\| < \epsilon whenever \|x - y\| < \delta (locally, near the support of X). The proof follows analogously, replacing absolute values with norms in the inequalities.
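The shrinking probability in this proof can be estimated directly (a sketch assuming NumPy; the map g(x) = e^x and the perturbed sequence X_n = X + Z_n with Z_n \sim N(0, 1/n) are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
eps = 0.1

x = rng.standard_normal(100_000)  # limit X ~ N(0, 1)
for n in [10, 100, 1000, 10_000]:
    z_n = rng.normal(0.0, 1.0 / np.sqrt(n), size=x.shape)  # Var(Z_n) = 1/n
    x_n = x + z_n                                          # X_n -> X in probability
    # Estimated P(|g(X_n) - g(X)| > eps) for the continuous g(x) = exp(x):
    print(n, np.mean(np.abs(np.exp(x_n) - np.exp(x)) > eps))
```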

Almost Sure Convergence

To prove the almost sure statement, consider the event A = \{\omega \in \Omega : X_n(\omega) \to X(\omega)\}, which has probability 1 by assumption. Let C = S \setminus D_g be the set of continuity points of g, so that P(X \in C) = 1, and let B = \{\omega \in \Omega : X(\omega) \in C\}, which also has probability 1. On the set A \cap B, which has probability 1, the sequence X_n(\omega) converges to X(\omega) \in C for each \omega, and since g is continuous at X(\omega), it follows that g(X_n(\omega)) \to g(X(\omega)). Thus g(X_n) \to g(X) almost surely. The Borel measurability of g ensures that g(X_n) and g(X) are random elements in S', preserving the measurability required for the convergence statements in general metric spaces.

This proof leverages the pathwise nature of almost sure convergence: limits are taken pointwise on a set of full probability measure, directly applying the deterministic continuity of g at the random limit points X(\omega). Unlike the weaker modes of convergence, no additional uniform control or approximation is needed, since the limit exists pathwise almost everywhere. As a key step emphasizing the pathwise control, note that on A \cap B, \sup_{n \geq N} |g(X_n(\omega)) - g(X(\omega))| \to 0 as N \to \infty, which implies P(|g(X_n) - g(X)| > \varepsilon \text{ infinitely often}) = 0 for every \varepsilon > 0. This tail behavior over n holds because of the continuity of g at each fixed limit point X(\omega) with \omega \in A \cap B.
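The pathwise character of the argument can be seen on a single simulated trajectory (a sketch assuming NumPy; the strong law of large numbers supplies the almost surely convergent sequence, and g(x) = x^2 is an illustrative continuous map):

```python
import numpy as np

rng = np.random.default_rng(4)

# Fix one sample path omega: X_n(omega) = mean of the first n draws converges
# to mu almost surely by the strong law; continuity of g(x) = x**2 then forces
# g(X_n(omega)) -> g(mu) along the same path.
mu, N = 0.5, 1_000_000
draws = rng.uniform(0.0, 1.0, size=N)            # E[draw] = 0.5
path = np.cumsum(draws) / np.arange(1, N + 1)    # X_n along this single path
for n in [10, 1000, 100_000, N]:
    print(n, path[n - 1] ** 2 - mu**2)           # g(X_n) - g(mu), shrinking
```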

Examples and Applications

Illustrative Examples

A classic illustration of the continuous mapping theorem for convergence in probability involves the sequence of random variables X_n uniformly distributed on [0, 1/n], which converges in probability to the degenerate random variable X = 0. Consider the continuous function g(x) = x^2. By the continuous mapping theorem, g(X_n) = X_n^2 \xrightarrow{p} g(0) = 0.

For convergence in distribution, the theorem applies naturally to transformations under the central limit theorem. Let X_1, \dots, X_n be i.i.d. with mean \mu and finite positive variance \sigma^2 > 0, and define T_n = \sqrt{n} (\bar{X}_n - \mu), where \bar{X}_n is the sample mean. Then T_n \xrightarrow{d} Z \sim N(0, \sigma^2). For the continuous function g(x) = x^2, the continuous mapping theorem yields g(T_n) = n (\bar{X}_n - \mu)^2 \xrightarrow{d} g(Z) = \sigma^2 \chi^2(1), providing the asymptotic distribution of this scaled squared deviation, which has mean \sigma^2.

The continuity of the mapping function is essential, as demonstrated by the following counterexample. Consider the discontinuous function g(x) = \mathbf{1}_{\{x > 0\}}, the indicator of the positive reals. Let X_n = 1/n, which converges in probability to 0. However, g(X_n) = 1 for all n, so g(X_n) \to 1, whereas g(0) = 0, violating the theorem's conclusion. Here the limit places all of its mass on the discontinuity point 0, so the almost-everywhere continuity condition fails.

In the multivariate setting, Slutsky's theorem serves as a corollary of the continuous mapping theorem applied to product spaces. Suppose X_n \xrightarrow{d} X and Y_n \xrightarrow{p} c, where c is a nonzero constant. For the function g(x, y) = x/y (continuous away from y = 0), the theorem implies g(X_n, Y_n) = X_n / Y_n \xrightarrow{d} g(X, c) = X / c.
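The Slutsky-style ratio example can be verified numerically (a sketch assuming NumPy and SciPy; the exponential data, the sample size, and the particular consistent sequence Y_n are illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
reps, n, c = 20_000, 500, 2.0

data = rng.exponential(1.0, size=(reps, n))      # mean 1, variance 1
t_n = np.sqrt(n) * (data.mean(axis=1) - 1.0)     # => N(0, 1) by the CLT
y_n = c + (data.std(axis=1, ddof=1) - 1.0)       # => c in probability

ratio = t_n / y_n                                # CMT/Slutsky: => N(0, 1/c^2)
print(stats.kstest(ratio, stats.norm(scale=1 / c).cdf).statistic)
```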

Applications in Statistics

The continuous mapping theorem plays a pivotal role in deriving asymptotic approximations for functions of estimators in large-sample theory, particularly through the delta method. Suppose \hat{\theta}_n is a consistent estimator of a parameter \theta, meaning \hat{\theta}_n \xrightarrow{p} \theta, and g is continuously differentiable at \theta. The delta method leverages the theorem to establish that \sqrt{n}(g(\hat{\theta}_n) - g(\theta)) \xrightarrow{d} N(0, (g'(\theta))^2 V), where V is the asymptotic variance of \sqrt{n}(\hat{\theta}_n - \theta). This approximation facilitates inference on nonlinear functions of parameters, such as ratios or logarithms, by transforming the asymptotic normality of \hat{\theta}_n while accounting for the linearization provided by the derivative g'(\theta).

A direct consequence of the continuous mapping theorem is Slutsky's theorem, which extends operations like multiplication and addition to convergent sequences. Specifically, if X_n \xrightarrow{d} X and Y_n \xrightarrow{p} c for a constant c, then X_n Y_n \xrightarrow{d} c X, since the function g(x,y) = xy is continuous. This result is essential for combining asymptotically normal statistics with consistent estimators of constants, as in the construction of pivotal quantities or studentized statistics where scaling factors converge in probability. Slutsky's theorem simplifies proofs of joint convergence and is foundational for applications involving estimated variances.

In bootstrap methods, the continuous mapping theorem ensures the consistency of transformations applied to bootstrap replicates. When the bootstrap distribution converges appropriately, continuous functions of bootstrap statistics, such as quantiles or means, preserve the convergence properties of the original statistic, leading to valid approximate distributions for inference. This preservation is crucial for functions of estimators, such as confidence intervals for medians or variances, where the theorem guarantees that the bootstrap distribution mimics the sampling distribution asymptotically.

For M-estimators, defined as solutions to \sum_{i=1}^n \psi(X_i; \theta) = 0 where \psi is a known criterion function, the continuous mapping theorem underpins the asymptotic normality of functions of these estimators. Under regularity conditions, such as smoothness of \psi and the interchangeability of differentiation and integration, the theorem applies to the argmin or root of the empirical criterion, yielding \sqrt{n}(g(\hat{\theta}_n) - g(\theta)) \xrightarrow{d} N(0, g'(\theta)^T \Sigma \, g'(\theta)) for continuously differentiable g, where \Sigma is the asymptotic covariance matrix of \sqrt{n}(\hat{\theta}_n - \theta). This enables robust inference in generalized linear models and robust regression, ensuring that transformations such as logarithms or inversion retain asymptotic normality in large samples.
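A compact simulation of the delta method claim (a sketch assuming NumPy and SciPy; the exponential model and the transformation g(t) = \log t are illustrative assumptions) checks that \sqrt{n}(\log\hat{\theta}_n - \log\theta) is approximately standard normal, since (g'(\theta))^2 V = \theta^{-2} \cdot \theta^2 = 1 here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
reps, n, theta = 20_000, 400, 2.0

# theta_hat = sample mean of Exp(scale=theta): consistent, with V = theta^2.
data = rng.exponential(theta, size=(reps, n))
theta_hat = data.mean(axis=1)

# Delta method with g(t) = log(t): asymptotic variance (1/theta)^2 * theta^2 = 1.
z = np.sqrt(n) * (np.log(theta_hat) - np.log(theta))
print(stats.kstest(z, stats.norm().cdf).statistic)  # small: close to N(0, 1)
```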

History and Extensions

Historical Origins

The continuous mapping theorem emerged from mid-20th-century advancements in probability theory, during the early 1940s amid World War II-era research on stochastic processes. It was formally introduced in 1943 by the statisticians Henry Berthold Mann and Abraham Wald in their paper "On Stochastic Limit and Order Relationships," published in the Annals of Mathematical Statistics. In this publication, Mann and Wald developed the theorem as a key result within their exploration of stochastic limit and order relationships among random variables, addressing foundational questions in asymptotic theory that were pertinent to wartime statistical applications.

The theorem, often referred to as the Mann–Wald theorem in recognition of its originators, built directly on limit theorem frameworks established in the 1930s. Notably, it extended concepts from Harald Cramér's influential work on probabilistic limit theorems, including his 1938 paper "Sur un nouveau théorème-limite de la théorie des probabilités," which laid groundwork for understanding asymptotic distributions and convergence of random variables. This connection highlighted the theorem's roots in the evolving landscape of probability theory, where Cramér's contributions emphasized the stability of limiting behaviors under transformations.

By the late 1950s, the continuous mapping theorem gained traction in applied fields like econometrics. John Denis Sargan referenced it in his 1958 paper "The Estimation of Economic Relationships Using Instrumental Variables," published in Econometrica, where he described it as the general transformation theorem and used it to justify asymptotic properties of estimation procedures involving continuous functions of stochastic limits. Sargan's invocation underscored the theorem's versatility beyond pure mathematics, marking an early bridge to practical statistical modeling in economics.

Extensions

The continuous mapping theorem extends to general metric spaces: if a sequence of probability measures P_n converges weakly to P on a metric space S, and h: S \to S' is a continuous function (or, more generally, measurable with discontinuities of P-measure zero), then the image measures P_n \circ h^{-1} converge weakly to P \circ h^{-1} on S'. This holds in particular for Polish spaces—separable, completely metrizable topological spaces—where separability ensures that continuous mappings preserve weak convergence, leveraging the space's countable dense subset and completeness. In such spaces, random elements, including those taking values in function spaces like C[0,1] or the Skorokhod space D[0,1], satisfy the theorem, enabling weak convergence results for stochastic processes under continuous transformations.

A key generalization involves Hadamard differentiable mappings, which extend the theorem to nonlinear functionals via the functional delta method. Specifically, if a functional \phi is Hadamard differentiable at a distribution P with derivative \phi'_P, and \sqrt{n}(P_n - P) converges weakly to a Gaussian process in a suitable normed space, then \sqrt{n}(\phi(P_n) - \phi(P)) converges weakly to \phi'_P applied to that Gaussian limit, provided the derivative is linear and continuous. This form, often called the functional delta method, applies to mappings between Banach spaces and requires the differentiability to be stable along converging sequences of directions, ensuring preservation of weak convergence even for functionals that are not continuous but are directionally differentiable.
Extensions to empirical processes build on Donsker's theorem, which establishes weak convergence of the scaled empirical process to a Brownian bridge in \ell^\infty or suitable subspaces; continuous mappings then apply to derive limit theorems for functionals of these processes, such as supremum norms or integral transforms, preserving the Gaussian limit structure. For instance, in Glivenko–Cantelli or Donsker classes of functions, the continuous mapping theorem ensures that if the empirical process converges weakly, then compositions with continuous operators—like those yielding Kolmogorov–Smirnov statistics—also converge in distribution. In large deviations theory, by contrast, discontinuous mappings can fail to preserve the relevant limit behavior: the contraction principle requires continuity to transfer a large deviation principle from one space to another via a map, and discontinuities may violate the lower bound or alter the rate function, leading to non-equivalence of deviation probabilities unless compensated by additional regularity. Similarly, in concentration of measure, discontinuous functions do not inherit tail bounds from the original variables—indicator functions of irregular sets can destroy exponential concentration—so Lipschitz continuity is typically required instead for preservation.
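As a concrete instance of a continuous functional of the empirical process (a sketch assuming NumPy and SciPy; the uniform samples and the order-statistic formula for the Kolmogorov–Smirnov statistic are standard, but the sizes are arbitrary choices), the scaled statistic \sqrt{n} \sup_x |F_n(x) - F(x)| is compared with its Kolmogorov limit law:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
reps, n = 10_000, 500

# For U(0,1) data, sup_x |F_n(x) - F(x)| has the closed form
#   max_i max(i/n - U_(i), U_(i) - (i-1)/n)  over order statistics U_(i).
u = np.sort(rng.uniform(size=(reps, n)), axis=1)
grid = np.arange(1, n + 1) / n
d_n = np.maximum(grid - u, u - (grid - 1 / n)).max(axis=1)

# Donsker + CMT (sup norm of the Brownian bridge): sqrt(n)*D_n => Kolmogorov law.
print(stats.kstest(np.sqrt(n) * d_n, stats.kstwobign.cdf).statistic)
```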

References

1. "Continuous mapping theorem." StatLect.
2. "Stat 709: Mathematical Statistics, Lecture 13." Lecture notes (PDF).
3. "Lecture 18: April 5. 18.1 Continuous Mapping Theorem." Lecture notes (PDF).
4. Durrett, Rick. Probability: Theory and Examples. Version 5, January 11, 2019 (PDF).
5. McFadden, D. "Chapter 4. Limit Theorems in Statistics." Lecture notes (PDF).
6. Billingsley, Patrick. Convergence of Probability Measures. 2nd ed. Wiley, 1999.
7. van der Vaart, A. W. Asymptotic Statistics. Cambridge University Press, 1998.
8. "6.436J Lecture 17: Convergence of Random Variables." MIT, DSpace@MIT, November 5, 2008 (PDF).
9. "Chapter 4. Limit Theorems in Statistics." Lecture notes (PDF).
10. "Theoretical Statistics, Lecture 1." Lecture notes (PDF).
11. van der Vaart, A. W. "Delta Method." Chapter 3 in Asymptotic Statistics. Cambridge University Press, published online June 5, 2012.
12. Mann, H. B., and Wald, A. "On Stochastic Limit and Order Relationships." Annals of Mathematical Statistics, September 1943. Project Euclid.
13. Cramér, Harald. "On a New Limit Theorem in Probability Theory." Translation (2018) of "Sur un nouveau théorème-limite de la théorie des probabilités" (1938).
14. Sargan, J. D. "The Estimation of Economic Relationships Using Instrumental Variables." Econometrica, 1958. JSTOR.