
Basu's theorem

Basu's theorem, a fundamental result in mathematical statistics, states that if T is a boundedly complete sufficient statistic for a parameter \theta in a parametric family of distributions, and U is any ancillary statistic, then T and U are stochastically independent. This independence holds regardless of the specific form of the distributions, provided the completeness condition is satisfied, making the theorem a key tool for decoupling information about \theta from parameter-free aspects of the data. Proved by the Indian statistician Debabrata Basu in his 1955 paper "On Statistics Independent of a Complete Sufficient Statistic", published in Sankhyā, the theorem builds on earlier concepts of sufficiency introduced by Ronald Fisher and completeness formalized by Lehmann and Scheffé. A statistic T is sufficient if the conditional distribution of the data given T does not depend on \theta; it is boundedly complete if every bounded function g(T) with E_\theta[g(T)] = 0 for all \theta satisfies g(T) = 0 almost surely. An ancillary statistic U, in contrast, has a marginal distribution free of \theta, capturing structural features of the sample rather than parametric information. The theorem's significance lies in its applications across statistical inference, including deriving exact sampling distributions for test statistics, constructing unbiased estimators in the presence of nuisance parameters, and facilitating conditional inference methods. For example, it underpins results in exponential families, where minimal sufficient statistics are often complete, and extends to Bayesian contexts through variations that link completeness to posterior independence. Basu's work has influenced generations of research, highlighting the interplay between sufficiency, ancillarity, and independence while addressing challenges in higher-dimensional or non-regular models.

Background Concepts

Sufficient Statistics

A sufficient statistic T(X) for a parameter \theta is a function of the sample X = (X_1, \dots, X_n) that captures all the information about \theta contained in the full sample, enabling data reduction without loss of inferential content regarding \theta. Formally, T(X) is sufficient if the conditional distribution of X given T(X) = t is free of \theta. This property implies that once T(X) is observed, the original sample provides no additional information about \theta. Fisher's factorization theorem, also known as the Fisher–Neyman factorization theorem, offers a constructive criterion for identifying sufficient statistics. The theorem states that a statistic T is sufficient for \theta if and only if the probability density or mass function of the sample can be expressed as f(x; \theta) = g(T(x), \theta) \cdot h(x), where g depends on the data x only through T(x) and on \theta, while h depends only on x and not on \theta. The intuition behind this factorization is that the likelihood's dependence on \theta is entirely encapsulated in T(x), separating the parameter-relevant information from the data's structural aspects and thus facilitating efficient inference. Examples illustrate the theorem's application. For independent and identically distributed samples from a normal distribution N(\mu, \sigma^2) with known variance \sigma^2, the sample mean \bar{X} = \frac{1}{n} \sum_{i=1}^n X_i is sufficient for \mu, as the likelihood factors with g(\bar{x}, \mu) = \exp\left( -\frac{n}{2\sigma^2} (\bar{x} - \mu)^2 \right) and h(x) capturing the remaining terms. Similarly, for i.i.d. samples from a uniform distribution on (0, \theta), the sample maximum X_{(n)} = \max\{X_1, \dots, X_n\} is sufficient for \theta, since the joint density factors as g(x_{(n)}, \theta) = \theta^{-n} I(0 < x_{(n)} \leq \theta) times an indicator that all x_i lie in (0, x_{(n)}], which does not involve \theta. Sufficient statistics can vary in dimensionality, leading to the concept of minimal sufficient statistics, which achieve the maximal reduction in data while retaining sufficiency. 
A sufficient statistic T is minimal if it is a function of every other sufficient statistic for \theta. Such statistics can be identified via the likelihood-ratio criterion: T is minimal sufficient if, for any sample realizations x and y, T(x) = T(y) exactly when the ratio \frac{f(x; \theta)}{f(y; \theta)} does not depend on \theta. This equivalence partitions the sample space based on proportional likelihoods across parameter values.
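The likelihood-ratio criterion can be checked numerically. The sketch below (an illustrative Python example, not from the cited sources) uses the Uniform(0, \theta) model above: two samples with the same maximum yield a likelihood ratio that is constant in \theta, placing them in the same minimal-sufficiency equivalence class.

```python
import numpy as np

def uniform_likelihood(x, theta):
    """Joint density of an i.i.d. Uniform(0, theta) sample; zero if any x_i exceeds theta."""
    x = np.asarray(x)
    if np.any(x <= 0) or np.any(x > theta):
        return 0.0
    return theta ** (-len(x))

# Two samples with the same maximum (0.9) but different values elsewhere:
# the likelihood ratio should not depend on theta.
x = [0.2, 0.9, 0.5]
y = [0.9, 0.1, 0.3]
thetas = [1.0, 1.5, 2.0, 5.0]
ratios = [uniform_likelihood(x, t) / uniform_likelihood(y, t) for t in thetas]
print(ratios)  # all equal to 1.0, since both densities are theta^(-3) for theta >= 0.9
```

Changing either sample's maximum would make the ratio vary with \theta, confirming that X_{(n)} carries exactly the likelihood-relevant information.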

Complete Sufficient Statistics

A sufficient statistic T for a parameter \theta is said to be complete if, for every measurable function g such that \mathbb{E}_\theta[g(T)] = 0 for all \theta in the parameter space, it holds that g(T) = 0 almost surely with respect to the distribution of T. This condition implies that the family of distributions of T admits no non-trivial unbiased estimators of zero: the only function of T with expectation zero across all \theta is the zero function itself. In the context of Basu's theorem, completeness of a sufficient statistic ensures the absence of non-trivial functions of the statistic that are unbiased for zero, which is crucial for establishing independence properties in inference. This property prevents the existence of extraneous unbiased estimators that could otherwise complicate procedures based on the sufficient statistic. The Lehmann–Scheffé theorem provides a key implication of completeness: if T is a complete sufficient statistic for \theta, and \delta(X) is any unbiased estimator of a function \tau(\theta), then the conditional expectation \mathbb{E}[\delta(X) \mid T] is the unique minimum variance unbiased estimator (MVUE) of \tau(\theta). This arises because completeness rules out any second, distinct unbiased estimator based on T, thereby guaranteeing that the Rao–Blackwellized estimator based on T is optimal among all unbiased estimators. A classic example occurs with independent and identically distributed observations X_1, \dots, X_n from a normal distribution N(\mu, \sigma^2) where \sigma^2 is known; here, the sample mean \bar{X} = n^{-1} \sum_{i=1}^n X_i is a complete sufficient statistic for the mean parameter \mu.
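For a finite discrete family, completeness can be verified mechanically. The sketch below (illustrative Python, under the assumption T \sim \text{Binomial}(n, p) with p \in (0,1), a standard complete family): \mathbb{E}_p[g(T)] is a polynomial in p, and the linear map from the vector (g(0), \dots, g(n)) to the polynomial's coefficients has full rank, so the only solution of \mathbb{E}_p[g(T)] = 0 for all p is g = 0.

```python
import numpy as np
from math import comb

# Completeness of T ~ Binomial(n, p): E_p[g(T)] = sum_k g(k) C(n,k) p^k (1-p)^(n-k)
# is a polynomial in p.  Build the matrix M mapping g to that polynomial's
# coefficients; full rank means g = 0 is the only unbiased estimator of zero.
n = 5
M = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    # expand C(n,k) p^k (1-p)^(n-k) = sum_j C(n,k) C(n-k,j) (-1)^j p^(k+j)
    for j in range(n - k + 1):
        M[k + j, k] += comb(n, k) * comb(n - k, j) * (-1) ** j

rank = np.linalg.matrix_rank(M)
print(rank)  # 6 = n + 1: full rank, so T is complete for this family
```

The matrix is triangular with nonzero binomial coefficients on its diagonal, which is exactly why the rank is full for every n.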

Ancillary Statistics

In statistical inference, an ancillary statistic is defined as a function S(X) of the data X whose probability distribution does not depend on the unknown parameter \theta. Formally, S(X) is ancillary for the parameter space \Theta if, for every measurable set A, the probability P_\theta(S(X) \in A) remains constant across all \theta \in \Theta. This distribution-free property distinguishes ancillary statistics from those that vary with \theta, such as sufficient statistics, which concentrate all information about the parameter. Common examples of ancillary statistics arise in location-scale families of distributions. For instance, in an independent and identically distributed (i.i.d.) sample from a normal distribution N(\mu, \sigma^2), the sample range R = X_{(n)} - X_{(1)}, where X_{(i)} are the order statistics, is ancillary for the location parameter \mu when \sigma^2 is known, as its distribution is invariant under shifts in \mu. More generally, in location-scale families such as the uniform distribution on (\alpha, \beta) or the normal family, the configuration statistic—defined as the vector of normalized deviations \left( \frac{X_i - \bar{X}}{s} \right)_{i=1}^n, where \bar{X} is the sample mean and s is the sample standard deviation—serves as an ancillary statistic for the full parameter (\mu, \sigma), capturing the "shape" of the data independently of scale and location. These examples illustrate how ancillarity often emerges from transformations that eliminate parameter dependence. The key property of an ancillary statistic is that it conveys no information about \theta in the sense that its marginal distribution provides no basis for inference on the parameter; any inference derived solely from an ancillary would be identical regardless of \theta's true value. 
Despite this, ancillaries play a crucial role in conditional inference approaches, where conditioning on an ancillary statistic can yield procedures with desirable frequentist properties, such as similarity or exactness, by stabilizing the inference against nuisance parameters. In the context of Basu's theorem, ancillary statistics are essential because they enable the establishment of independence between such statistics and complete sufficient statistics under certain conditions, facilitating unbiased testing and estimation.
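A short Monte Carlo check (an illustrative Python sketch, with sample sizes and means chosen arbitrarily) makes the range's ancillarity concrete: shifting \mu leaves the empirical distribution of R = X_{(n)} - X_{(1)} unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_range(mu, sigma=1.0, n=10, reps=100_000):
    """Empirical draws of the range R = X_(n) - X_(1) for N(mu, sigma^2) samples."""
    x = rng.normal(mu, sigma, size=(reps, n))
    return x.max(axis=1) - x.min(axis=1)

# The range is location-invariant: R(X + c) = R(X), so its distribution is the
# same for every mu -- the defining property of an ancillary statistic for mu.
r0 = sample_range(mu=0.0)
r5 = sample_range(mu=5.0)
print(round(r0.mean(), 2), round(r5.mean(), 2))  # nearly identical
```

The same experiment with \sigma varied instead of \mu would show the range is not ancillary for the scale parameter.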

Formal Statement

Theorem Enunciation

Basu's theorem, named after the Indian statistician Debabrata Basu, provides a fundamental result on the independence between certain types of statistics in parametric statistical models. The theorem was originally published in 1955. Consider a random vector \mathbf{X} taking values in a sample space \mathcal{X}, which may be discrete or continuous, with probability density or mass function f(\mathbf{x} \mid \theta) parameterized by \theta \in \Theta. Let T = T(\mathbf{X}) be a statistic and S = S(\mathbf{X}) be another statistic. The theorem states that if T is a boundedly complete sufficient statistic for \theta and S is an ancillary statistic (i.e., the distribution of S does not depend on \theta), then T and S are independent under every P_\theta, \theta \in \Theta. In some formulations, the condition of bounded completeness on T is replaced by full completeness, though the original result emphasizes bounded completeness, the weaker of the two conditions that suffices for the independence conclusion.

Key Assumptions

Basu's theorem applies to families of distributions indexed by a parameter \theta \in \Theta, often assuming the family admits densities with respect to a dominating measure to ensure the existence of expectations. A key assumption is that there exists a sufficient statistic T for \theta, meaning the conditional distribution of the data given T does not depend on \theta. Additionally, T must be complete, which requires that if E_\theta[g(T)] = 0 for all \theta \in \Theta and for some measurable function g, then g(T) = 0 almost surely; in Basu's original formulation, this is weakened to bounded completeness, applying only to bounded functions g to accommodate broader classes of distributions under milder integrability conditions. Regularity conditions are essential for the theorem's validity, including the existence of all relevant expectations E_\theta[|h(X)|] < \infty for functions h involved in the statistics; the dominated convergence theorem is often implicitly relied upon to justify interchanging limits and integrals in deriving properties of the distributions. These conditions are typically satisfied in full-rank exponential families, such as the normal or Poisson distributions, where the parameter space \Theta is open and the family has a natural parameterization ensuring the support does not depend on \theta. Without such regularity, the expectations may not exist, potentially invalidating the completeness property. The theorem also assumes the existence of an ancillary statistic S, whose distribution does not depend on \theta. While the theorem establishes independence between T and S under these assumptions, it fails without completeness; for instance, in the Laplace location family the minimal sufficient statistic (the vector of order statistics) is not complete, and ancillary statistics such as the sample range are functions of it and hence dependent on it, providing a counterexample to the independence claim. 
Bounded completeness offers a weaker sufficient condition than full completeness, making the theorem applicable to a wider range of models, including some non-exponential families, though full completeness is often assumed in extensions to multiparameter settings.

Proof and Derivation

Outline of Proof

The proof of Basu's theorem leverages the completeness of the sufficient statistic T to establish that any ancillary statistic S is independent of T. The core idea is to show that the conditional expectation E[g(S) \mid T] is constant (equal to the unconditional expectation E[g(S)]) almost surely for any bounded measurable function g, which implies that the conditional distribution of S given T matches its marginal distribution, free of dependence on T. A high-level outline proceeds in three main steps. First, sufficiency of T is used: the conditional distribution of S given T is parameter-free, so E[g(S) \mid T] defines a function of T alone. Second, ancillarity of S ensures its marginal distribution, and hence E[g(S)], is parameter-free, so E[g(S) \mid T] - E[g(S)] serves as an unbiased estimator of zero for all \theta. Third, the completeness of T forces this difference to be zero almost surely, as no non-trivial function of T can be unbiased for zero across all \theta, yielding independence. Intuitively, ancillarity "fixes" the marginal behavior of S regardless of \theta, preventing it from carrying information about the parameter, while completeness of T ensures that conditioning on T introduces no additional variation in expectations involving S, eliminating any potential dependence. Some presentations also carry out the independence argument with characteristic functions (Fourier transforms), which extends the conclusion cleanly to vector-valued statistics.

Detailed Derivation

To rigorously prove Basu's theorem, assume the family of distributions \{P_\theta : \theta \in \Theta\} is dominated by a \sigma-finite measure \mu, ensuring the existence of Radon-Nikodym derivatives (densities) and well-defined conditional expectations. For the continuous case, \mu is Lebesgue measure on \mathbb{R}^k; for the discrete case, \mu is counting measure on a countable space. Let T be a complete sufficient statistic and S an ancillary statistic. To establish independence, show that the conditional distribution of S given T = t equals the marginal distribution of S for \mu-almost all t. It suffices to verify this for expectations of bounded measurable functions. Consider any bounded measurable function g: \mathcal{S} \to \mathbb{R} (where \mathcal{S} is the range of S), with |g| \leq M < \infty. By sufficiency of T, the conditional distribution P(S \in \cdot | T = t) does not depend on \theta, so the conditional expectation E[g(S) | T = t] is independent of \theta; denote it by h(t). By the law of total expectation, E_\theta[h(T)] = E_\theta[g(S)] for all \theta \in \Theta. Since S is ancillary, the marginal distribution P(S \in \cdot) is free of \theta, so E_\theta[g(S)] = c_g (a constant). Thus, E_\theta[h(T)] = c_g \quad \forall \theta \in \Theta. The function h(T) - c_g of T satisfies E_\theta[h(T) - c_g] = 0 for all \theta. Boundedness of g implies boundedness of h(T) - c_g (by |h(t) - c_g| \leq 2M). By (bounded) completeness of T, h(T) - c_g = 0 \quad P_\theta\text{-a.s. for all } \theta. Hence, E[g(S) | T] = E[g(S)] almost surely. This holds for all bounded measurable g, and expectations of bounded measurable test functions determine a distribution, so the conditional and marginal distributions of S coincide. If the joint distribution admits a density f_{T,S}(t,s) with respect to the product measure \mu_T \times \mu_S, then f_{T,S}(t,s) = f_T(t) f_{S|T}(s|t) = f_T(t) f_S(s), as f_{S|T}(s|t) = f_S(s), confirming independence. 
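The key identity E[g(S) \mid T] = E[g(S)] can be illustrated by simulation (an illustrative Python sketch, with the normal model, sample size, and binning scheme chosen for demonstration): for N(\mu, 1) data with T the sample mean and S the sample variance, the mean of S within each bin of T matches the overall mean E[S] = \sigma^2 = 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate many N(2, 1) samples; T = sample mean (complete sufficient for mu),
# S = sample variance (ancillary for mu).
reps, n = 200_000, 5
x = rng.normal(2.0, 1.0, size=(reps, n))
t = x.mean(axis=1)
s = x.var(axis=1, ddof=1)

# Bin on T and compare within-bin means of S with the overall mean E[S] = 1.
bins = np.quantile(t, np.linspace(0, 1, 11))
idx = np.digitize(t, bins[1:-1])          # bin index 0..9 for each sample
cond_means = np.array([s[idx == b].mean() for b in range(10)])
print(np.round(cond_means, 2))            # all close to 1.0, independent of T
```

If S depended on T, the within-bin means would drift across the bins; their near-constancy is the finite-sample face of E[g(S) \mid T] = E[g(S)] a.s.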
An alternative derivation uses characteristic functions to extend beyond bounded g. Let \phi(t,s; \theta) = E_\theta[\exp(it \cdot T + is \cdot S)], where t, s \in \mathbb{R} (dot for inner product if vector-valued). By sufficiency, the conditional characteristic function \psi(s | t) := E[\exp(is \cdot S) | T = t] is independent of \theta. Thus, \phi(t,s; \theta) = E_\theta[\exp(it \cdot T) \psi(s | T)]. By ancillarity, the marginal characteristic function \phi_S(s) := E[\exp(is \cdot S)] is independent of \theta. Moreover, |\psi(s | t)| \leq 1 for all t, s, so \psi(s | T) - \phi_S(s) is a bounded function of T with E_\theta[\psi(s | T) - \phi_S(s)] = \phi_S(s) - \phi_S(s) = 0 \quad \forall \theta. By completeness, \psi(s | T) = \phi_S(s) \quad P_\theta\text{-a.s. for all } \theta, s. Hence, \phi(t,s; \theta) = \phi_T(t; \theta) \phi_S(s), the product of marginal characteristic functions, implying independence (by uniqueness of characteristic functions). In exponential families, where completeness typically holds, the same factorization can be glimpsed through differentiation in \theta: ancillarity gives \partial \phi_S(s)/\partial \theta = 0, while sufficiency makes the score \partial \log f(\mathbf{x}; \theta)/\partial \theta depend on the data only through T, so the \theta-derivative of \phi(t,s;\theta) involves T alone and completeness again forces the factorized form.

Applications and Examples

Normal Distribution Case

Consider a random sample X_1, \dots, X_n drawn independently and identically from a normal distribution N(\mu, \sigma^2), where \mu is unknown and \sigma^2 > 0 is known. The sample mean \bar{X} = n^{-1} \sum_{i=1}^n X_i is a complete sufficient statistic for the location parameter \mu. Meanwhile, the statistic V = (n-1) S^2 / \sigma^2 = \sum_{i=1}^n (X_i - \bar{X})^2 / \sigma^2 follows a chi-squared distribution with n-1 degrees of freedom, \chi^2_{n-1}, and its distribution does not depend on \mu, making it ancillary for \mu. Basu's theorem implies that \bar{X} and V (or equivalently, S^2) are independent. To illustrate this via the joint distribution, note that the joint probability density function of the sample is f(\mathbf{x} \mid \mu, \sigma^2) = (2\pi \sigma^2)^{-n/2} \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i - \mu)^2 \right\}, for \mathbf{x} = (x_1, \dots, x_n) \in \mathbb{R}^n. Expanding the sum of squares gives \sum_{i=1}^n (x_i - \mu)^2 = n (\bar{x} - \mu)^2 + \sum_{i=1}^n (x_i - \bar{x})^2 = n (\bar{x} - \mu)^2 + (n-1) s^2, so the joint pdf becomes f(\mathbf{x} \mid \mu, \sigma^2) = (2\pi \sigma^2)^{-n/2} \exp\left\{ -\frac{n (\bar{x} - \mu)^2}{2\sigma^2} \right\} \exp\left\{ -\frac{(n-1) s^2}{2\sigma^2} \right\}. This factors into a product involving only \bar{x} and \mu, and a factor involving only s^2, confirming that the joint distribution of \bar{X} and S^2 separates into independent marginal distributions: \bar{X} \sim N(\mu, \sigma^2 / n) and (n-1) S^2 / \sigma^2 \sim \chi^2_{n-1}. This independence result predates Basu's theorem, having been first noted by R. A. Fisher in the context of the t-distribution and rigorously established as characteristic of the normal distribution by R. C. Geary in 1936. It serves as a canonical example of the theorem's implications in parametric inference.
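The two conclusions above — independence of \bar{X} and S^2, and the \chi^2_{n-1} law of V — are easy to check by Monte Carlo (an illustrative Python sketch; \mu, \sigma, and n are arbitrary choices, and the correlation check is a necessary condition for independence, not a proof of it):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate N(mu, sigma^2) samples and compute xbar, S^2, and V = (n-1)S^2/sigma^2.
mu, sigma, n, reps = 3.0, 2.0, 8, 200_000
x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=1)
v = (n - 1) * s2 / sigma**2

corr = np.corrcoef(xbar, s2)[0, 1]
print(round(corr, 3))        # ~ 0: no linear dependence between xbar and S^2
print(round(v.mean(), 2))    # ~ n - 1 = 7, the mean of a chi-squared_{n-1}
```

A full independence check would compare joint and product-of-marginal frequencies; the vanishing correlation and the matching \chi^2 mean are the quick sanity checks.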

Exponential Family Example

Consider an independent and identically distributed sample X_1, \dots, X_n from a Poisson distribution with parameter \lambda > 0. The statistic T = \sum_{i=1}^n X_i is a complete sufficient statistic for \lambda. The statistic S = \sum_{i=1}^n (X_i - \bar{X})^2 / \bar{X}, where \bar{X} = T/n, measures the dispersion of the sample. Conditional on T = t, the vector (X_1, \dots, X_n) follows a multinomial distribution with parameters t and equal probabilities 1/n for each category, which is free of \lambda; consequently S, being a function of the deviations from the mean under this conditional multinomial setup, has a parameter-free conditional distribution given T. (Strictly speaking, S is not marginally ancillary, since its unconditional distribution depends on \lambda through T; it is the conditional distribution given T that is free of the parameter. Any genuinely ancillary statistic would, by Basu's theorem, be independent of the complete sufficient statistic T.) This conditional distribution-freeness extends to related quantities, such as standardized residuals derived from the sample. The joint probability mass function of the sample is p(\mathbf{x} \mid \lambda) = \exp\left\{ (\log \lambda) \sum_{i=1}^n x_i - n\lambda \right\} \prod_{i=1}^n \frac{1}{x_i!}, which factors via the sufficient statistic T, confirming the setup for applying Basu's theorem while highlighting that the parameter-free structure arises from the conditional multinomial component of the likelihood. This result is particularly useful in goodness-of-fit testing for the Poisson model, where conditioning on T allows assessment of over- or under-dispersion via the conditional distribution of S, which approximates a \chi^2_{n-1} distribution under the null without dependence on \lambda.
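Because the conditional law of the sample given T = t is Multinomial(t, 1/n, \dots, 1/n), the exact conditional distribution of the dispersion statistic can be simulated without ever specifying \lambda. The sketch below (illustrative Python; n, t, and the replication count are arbitrary choices) does exactly that and recovers the \chi^2_{n-1} mean:

```python
import numpy as np

rng = np.random.default_rng(3)

# Conditional on T = t, a Poisson i.i.d. sample is Multinomial(t, 1/n), with no
# lambda anywhere -- so sampling that multinomial gives the exact conditional
# distribution of the dispersion statistic S, whatever the true lambda.
n, t, reps = 10, 50, 100_000
counts = rng.multinomial(t, [1.0 / n] * n, size=reps)
xbar = counts.mean(axis=1)                       # equals t/n for every draw
s = ((counts - xbar[:, None]) ** 2).sum(axis=1) / xbar

print(round(s.mean(), 2))  # close to n - 1 = 9, matching the chi-squared approximation
```

This is the computational core of the conditional dispersion test: compare the observed S with quantiles of this simulated conditional distribution.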

Boundedly Complete Statistics

Bounded completeness is a relaxation of the stricter notion of completeness for sufficient statistics, allowing Basu's theorem to apply in a wider class of statistical models. A statistic T is said to be boundedly complete if, for every bounded measurable function g (i.e., |g| \leq M < \infty for some M) such that \mathbb{E}_\theta[g(T)] = 0 for all \theta \in \Theta, it follows that g(T) = 0 almost surely for all \theta. This condition ensures that unbiased estimators based on T are unique among bounded functions, mitigating issues that arise when full completeness fails in irregular families. Every complete statistic is boundedly complete, but the converse fails: families exist (typically somewhat pathological discrete ones) that are boundedly complete without being complete, so the weaker hypothesis genuinely enlarges the theorem's scope. In Basu's original formulation, the theorem is stated for boundedly complete sufficient statistics: if T is a boundedly complete sufficient statistic for \theta and U is an ancillary statistic (with distribution independent of \theta), then T and U are independent. This independence holds without requiring the stronger completeness assumption, making the result applicable beyond regular exponential families where complete sufficient statistics are more readily available. The proof mirrors the complete case but restricts to bounded functions: for bounded h, ancillarity and sufficiency make \mathbb{E}[h(U) \mid T] - \mathbb{E}[h(U)] a bounded function of T with expectation zero for all \theta, which bounded completeness forces to vanish almost surely. A classic example in a non-regular model is an i.i.d. sample X_1, \dots, X_n from the Uniform(0, \theta) distribution with \theta > 0. The sample maximum T = \max\{X_1, \dots, X_n\} is a minimal sufficient statistic for \theta, and it is in fact complete (hence boundedly complete), even though the model is non-regular in that its support depends on \theta. 
Here, ancillary statistics such as the ratios U_i = X_i / T (for i = 1, \dots, n-1), whose joint distribution is free of \theta by scale invariance, are independent of T by Basu's theorem, enabling conditional inference in a model with parameter-dependent support. This extension broadens the applicability of Basu's theorem to non-regular models, such as those with parameter-dependent supports, where traditional regularity conditions may not hold but bounded completeness suffices for establishing independence and uniqueness of estimators in practical settings.
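The Uniform(0, \theta) prediction — the maximum T and the ratios X_i / T are independent — can be checked directly (an illustrative Python sketch; \theta, n, and the replication count are arbitrary, and zero correlation is checked as a necessary consequence of independence):

```python
import numpy as np

rng = np.random.default_rng(4)

# For Uniform(0, theta) samples, T = max X_i is sufficient and the ratios
# U_i = X_i / T are ancillary (free of theta by scale invariance); Basu's
# theorem predicts T is independent of the ratio vector.
theta, n, reps = 4.0, 6, 200_000
x = rng.uniform(0.0, theta, size=(reps, n))
t = x.max(axis=1)
u1 = x[:, 0] / t                     # first ratio; its law does not involve theta

corr = np.corrcoef(t, u1)[0, 1]
print(round(corr, 3))  # ~ 0, consistent with independence of T and U_1
```

Rerunning with a different \theta leaves the distribution of u1 unchanged, which is the ancillarity half of the argument.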

Multivariate Generalizations

The multivariate extension of Basu's theorem carries the core result over to settings where the parameter θ is a vector in ℝᵏ. Specifically, if T is a complete sufficient statistic for θ and S is an ancillary statistic, then T and S are independent under the joint distribution. This extension leverages invariance principles to handle multiparameter models, allowing the construction of unbiased estimators as functions of complete sufficient and ancillary components. Eaton and Morris (1970) provide a foundational result by showing that, in invariant families, an unbiased estimate can be expressed via a complete sufficient statistic and a maximal invariant, thereby preserving unbiasedness properties across vector parameters. A prominent example arises in the multivariate normal distribution N_p(μ, Σ), where μ ∈ ℝᵖ is the mean vector and Σ is the p × p covariance matrix, both unknown. For an i.i.d. sample of size n > p, the sample mean vector \bar{X} is independent of the sample covariance matrix S (the unbiased estimator of Σ). The standard argument fixes Σ: with Σ known, \bar{X} is a complete sufficient statistic for μ while the centered residuals (and hence S) are ancillary for μ, so Basu's theorem yields independence for every (μ, Σ), enabling pivotal inferences for μ without estimating Σ. Ghosh (2002) highlights this as a key application, underscoring its role in deriving minimum variance unbiased estimators in vector-parameter settings. Further developments address partial ancillarity and partial sufficiency in multiparameter models, where nuisance parameters complicate full ancillarity. Basu (1977) introduced partial ancillarity, defining a statistic as ancillary for a subset of parameters while possibly depending on others, which facilitates inference by eliminating nuisance effects. Subsequent work, for example in empirical Bayes contexts, explores independence between partial sufficient statistics for interest parameters and partial ancillaries for nuisances, extending Basu's framework beyond scalar cases. These results, building on post-1955 refinements, enable robust applications in high-dimensional models such as hierarchical multivariate settings. 
However, these generalizations rely on structural assumptions like exponential family membership or bounded completeness; without such conditions, independence may fail in non-exponential families, leading to dependent sufficient and ancillary components that complicate inference. Ghosh (2002) notes this limitation, emphasizing the need for case-specific verification in unstructured models.
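The multivariate normal instance can be probed numerically (an illustrative Python sketch; the dimension, sample size, mean vector, and the particular Σ are arbitrary, and correlating one coordinate of \bar{X} with one entry of S is only a spot check of the claimed independence):

```python
import numpy as np

rng = np.random.default_rng(5)

# For N_p(mu, Sigma) samples, the sample mean vector and the sample covariance
# matrix are independent; check one (coordinate, entry) pair by Monte Carlo.
p, n, reps = 3, 12, 100_000
mu = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((p, p))
Sigma = A @ A.T + np.eye(p)                     # an arbitrary positive definite Sigma
x = rng.multivariate_normal(mu, Sigma, size=(reps, n))   # shape (reps, n, p)

xbar = x.mean(axis=1)                           # (reps, p) sample mean vectors
centered = x - xbar[:, None, :]
S = np.einsum('rni,rnj->rij', centered, centered) / (n - 1)  # sample covariances

corr = np.corrcoef(xbar[:, 0], S[:, 0, 0])[0, 1]
print(round(corr, 3))  # ~ 0, consistent with independence of xbar and S
```

The same check applied to a non-normal family (say, multivariate t) would generally show a nonzero correlation, echoing the caveat above that the independence can fail outside structured models.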

References

  1. Basu, D. (1955). "On Statistics Independent of a Complete Sufficient Statistic". Sankhyā, 15, 377. (JSTOR)
  2. "Basu's Theorem" (2016). ResearchGate.
  3. "24.2 – Factorization Theorem". STAT 415 course notes.
  4. "4. Sufficiency: Sufficient Statistics". Lecture notes (PDF).
  5. "3.5 Minimal Sufficient Statistics". A First Course on Statistical Inference.
  6. "Ancillary Statistics: A Review" (PDF).
  7. "Stat 709: Mathematical Statistics, Lecture 20: Completeness; Ancillary Statistics" (PDF).
  8. "Ancillary Statistics" (2016). Lecture notes (PDF).
  9. "Ancillary Statistics" (PDF).
  10. "Some Examples of Ancillary Statistics and Their Applications" (PDF).
  11. "Principle of Data Reduction". Purdue Department of Statistics (PDF).
  12. Basu, D. "On Statistics Independent of a Complete Sufficient Statistic". Sankhyā.
  13. "Completeness, Ancillarity, and Basu's Theorem". Stat 210a course notes.
  14. Basu, D. (2011). "On Statistics Independent of a Complete Sufficient Statistic". In Selected Works of Debabrata Basu. Springer.
  15. "An Interpretation of Completeness and Basu's Theorem" (2012).
  16. "STA732 Statistical Inference, Lecture 04: Completeness and Ancillarity" (PDF).
  17. "Biostatistics 602 – Statistical Inference, Lecture 06: Basu's Theorem" (2013, PDF).
  18. "Completeness and Basu's Theorem" (PDF).
  19. "Completeness". Arizona Math (PDF).
  20. Casella, G., and Berger, R. L. Statistical Inference.
  21. "Applications of Basu's Theorem". (JSTOR)
  22. "Show Sample Mean and Variance Are Independent under Normality" (PDF).
  23. "The Distribution of 'Student's' Ratio for Non-Normal Samples". (JSTOR)
  24. "4.3 The Multinomial Distribution" (PDF).
  25. "Basu's Theorem with Applications: A Personalistic Review" (2014).
  26. "On Statistics Independent of Sufficient Statistics". (JSTOR)
  27. "The Application of Invariance to Unbiased Estimation". Project Euclid.
  28. Basu, D. (1977). "On the Elimination of Nuisance Parameters". (JSTOR)