
Variance-stabilizing transformation

A variance-stabilizing transformation (VST) is a mathematical function applied to a dataset in statistics to render the variance of the transformed observations approximately constant, regardless of the mean value of the original variable, thereby addressing heteroscedasticity and facilitating the application of standard parametric methods like linear regression and analysis of variance. This technique is particularly useful when the original data exhibit variance that increases with the mean, such as in count data or proportions, allowing transformed data to better approximate normality and constant-spread assumptions. The concept of VSTs originated in the work of M. S. Bartlett, who in 1936 proposed the square-root transformation for stabilizing the variance of Poisson-distributed counts and expanded on the use of transformations in the analysis of variance in 1947. Building on this, F. J. Anscombe in 1948 derived specific VSTs for discrete distributions, including an adjusted square root for Poisson counts (approximately \sqrt{Y + 3/8}, yielding variance near 1/4 for large \lambda), the arcsine square root for binomial proportions (2 \arcsin(\sqrt{Y/n}), stabilizing variance to approximately 1/n), and extensions to negative binomial data. These early contributions were motivated by the need to improve the efficiency of statistical tests under non-constant variance, often using the delta method for approximation: the variance of g(Y) is roughly [g'(\mu)]^2 \operatorname{Var}(Y), where g is chosen such that this product equals a constant. Common VSTs include the square root transformation for Poisson-like counts (e.g., \sqrt{Y}, where \operatorname{Var}(\sqrt{Y}) \approx 1/4 if Y \sim \operatorname{Poisson}(\lambda)), the arcsine transformation for binomial proportions (e.g., \arcsin(\sqrt{Y}) to handle variance proportional to p(1-p)), and logarithmic or reciprocal forms for right-skewed data with multiplicative error. In practice, the choice of transformation can be guided by empirical methods, such as regressing \log s_i on \log \bar{y}_i across groups to estimate a power parameter \alpha and identify the form y^{1-\alpha}, or by the related Box-Cox procedure. VSTs remain essential in fields like bioinformatics for normalizing high-throughput count data, such as RNA-seq, where tools implement blind or conditional variants to avoid overfitting.

Introduction

Definition

A variance-stabilizing transformation (VST) is a functional transformation applied to a random variable X whose variance depends on its mean, designed to render the variance of the transformed variable g(X) approximately constant across different values of the mean. This approach is particularly useful in scenarios where the original data exhibit heteroscedasticity, meaning the variability increases or decreases systematically with the magnitude of the mean, complicating standard statistical procedures that assume homoscedasticity. The core objective of a VST is to identify a function g such that if \mu = E(X) and \text{Var}(X) = v(\mu), then \text{Var}(g(X)) \approx \sigma^2, where \sigma^2 remains independent of \mu. Mathematically, this is often pursued through asymptotic approximations, ensuring that the transformed variable behaves as if drawn from a distribution with stable variance, thereby enhancing the applicability of methods like analysis of variance or regression that rely on constant spread. The concept of the VST was introduced by M. S. Bartlett in 1936, who proposed the square-root transformation to stabilize variance in the analysis of variance, particularly for Poisson-distributed counts, where the variance equals the mean. This approach was developed to improve the reliability of inferences in experimental data with non-constant variance, such as biological counts.

Purpose and benefits

Variance-stabilizing transformations (VSTs) address a fundamental challenge in statistical analysis: heteroscedasticity, where the variance of the data increases with the mean, as commonly observed in count data (e.g., Poisson-distributed observations) and proportions (e.g., binomial data). This variance instability leads to inefficient estimators and invalidates assumptions of constant variance in models such as analysis of variance (ANOVA) and linear regression, potentially resulting in biased inference and reduced power of statistical tests. The primary benefits of VSTs include stabilizing the variance to a roughly constant level, which promotes approximate normality in the transformed observations and enhances the efficiency of maximum likelihood estimators by minimizing variance fluctuations across the data range. This stabilization simplifies graphical exploratory analysis, making patterns more discernible, and bolsters the validity of statistical tests that rely on homoscedasticity. Additionally, VSTs reduce bias in small samples, where untransformed data often exhibit excessive skewness, enabling the reliable application of methods designed for constant variance. Without VSTs, inefficiencies arise prominently in regression contexts, where standard errors inflate for higher-mean observations, leading to overly conservative or imprecise estimates and unreliable confidence intervals. For example, in ordinary least squares applied to heteroscedastic data, this can distort the assessment of variable relationships and diminish overall model sensitivity. The foundational work by Bartlett (1947) emphasized these advantages for the analysis of variance, while Anscombe (1948) further demonstrated their utility in stabilizing variance for the Poisson and binomial cases.

Mathematical Foundations

General derivation

A variance-stabilizing transformation (VST) is derived for a random variable X with mean \mu = \mathbb{E}[X] and variance \operatorname{Var}(X) = v(\mu), where v(\mu) is a known function of the mean. The goal is to find a function g such that the transformed variable g(X) has approximately constant variance, independent of \mu. This is achieved by solving the differential equation g'(\mu) = 1 / \sqrt{v(\mu)}, which ensures that the local slope of g counteracts the variability in v(\mu). Integrating this equation yields the transformation g(\mu) = \int_{a}^{\mu} \frac{1}{\sqrt{v(u)}} \, du + c, where a is a suitable lower limit (often chosen for convenience or to ensure positivity) and c is an arbitrary constant. This form provides an exact solution when v(\mu) permits closed-form integration, though in practice it is often scaled by a constant to achieve a target stabilized variance, such as 1. The justification arises from a first-order Taylor expansion around \mu: g(X) \approx g(\mu) + g'(\mu) (X - \mu), implying \operatorname{Var}(g(X)) \approx [g'(\mu)]^2 v(\mu) = 1. This holds asymptotically, by the delta method, for large samples in which X is sufficiently close to \mu. The derivation assumes that v(\mu) is positive, continuously differentiable, and depends solely on \mu, which is typical for distributions in exponential families or others with a smooth mean-variance relationship. It applies particularly well to large-sample settings or specific parametric families where the variance-mean relationship is smooth. However, exact VSTs that stabilize variance for all \mu are rare and often limited to simple cases; in general, the construction provides only an approximation, with performance degrading for small samples or when higher-order terms in the expansion become significant.
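
The integral defining g can often be computed symbolically. A minimal sketch in Python using sympy (the library choice is an assumption of this example, not something named in the text) recovers the classical forms for the Poisson and binomial variance functions:

```python
import sympy as sp

u = sp.symbols("u", positive=True)

# Poisson: v(u) = u, so g(mu) = integral of u**(-1/2) du = 2*sqrt(mu)
g_poisson = sp.integrate(1 / sp.sqrt(u), u)
print(g_poisson)  # prints 2*sqrt(u)

# Binomial proportion: v(u) proportional to u*(1 - u) (the fixed 1/n scale is dropped)
g_binom = sp.integrate(1 / sp.sqrt(u * (1 - u)), u)
print(sp.simplify(g_binom))
# Equivalent to 2*asin(sqrt(u)) up to an additive constant;
# sympy may print the algebraically equivalent form asin(2*u - 1).
```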

Asymptotic approximation

In the asymptotic framework for variance-stabilizing transformations (VSTs), the variance of the transformed variable g(X) is approximated using a Taylor expansion around the mean \mu = E[X] for large sample sizes n or large \mu, where X has variance v(\mu). The expansion yields \operatorname{Var}(g(X)) \approx [g'(\mu)]^2 v(\mu), with higher-order terms contributing to deviations from constancy. To achieve approximate stabilization to a constant (often set to 1), the derivative is chosen as g'(\mu) = 1 / \sqrt{v(\mu)}, leading to the solution g(\mu) = \int^\mu du / \sqrt{v(u)}. Second-order corrections refine this approximation by incorporating the second derivative g''(\mu) to reduce bias in the mean of g(X). The bias term arises as E[g(X)] \approx g(\mu) + \frac{1}{2} g''(\mu) v(\mu), and adjusting constants in g (e.g., adding a shift) minimizes this O(1/\sqrt{\mu}) bias, improving accuracy for finite samples. For the variance, the second-order expansion includes additional terms involving g''(\mu) and the third and fourth central moments of X, and these are typically tuned so that the stabilized variance equals 1 + O(1/n). Computation of g relies on evaluating the integral, which admits closed forms when v(\mu) is a simple power function; for instance, v(\mu) = \mu (the Poisson case) gives g(\mu) = 2\sqrt{\mu}, with the second-order bias-corrected version g(X) = 2\sqrt{X + 3/8}. For variance functions without a closed-form antiderivative, numerical methods, such as quadrature or series approximations, are employed to obtain practical estimates. The approximation is inherently inexact due to neglected higher-order terms in the expansion, which explain the residual dependence on \mu; as \mu \to \infty or n \to \infty, \operatorname{Var}(g(X)) converges to a constant plus o(1), with error rates typically O(1/n) after second-order adjustments. This asymptotic behavior underpins the utility of VSTs in large-sample inference, though small-sample performance may require further refinements.
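
When v(\mu) has no convenient antiderivative, the integral can be evaluated numerically, as noted above. A sketch assuming Python with scipy, using the quasi-Poisson variance function v(\mu) = \mu + \phi\mu^2 (a common overdispersion model chosen here purely for illustration):

```python
import numpy as np
from scipy.integrate import quad

def v(mu, phi=0.5):
    """Quasi-Poisson variance function (illustrative overdispersion model)."""
    return mu + phi * mu**2

def g(mu, lower=1e-8):
    """VST evaluated by numerical quadrature: g(mu) = int_lower^mu v(u)**(-1/2) du."""
    val, _ = quad(lambda u: 1.0 / np.sqrt(v(u)), lower, mu)
    return val

# The transformed scale compresses large means, as expected when variance grows quadratically.
for mu in (1, 5, 25, 125):
    print(mu, round(g(mu), 3))
```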

Specific Transformations

Poisson variance stabilization

For data distributed according to a Poisson distribution, where the count X \sim \text{Poisson}(\mu) has variance v(\mu) = \mu equal to its mean, the variance-stabilizing transformation is obtained by integrating the reciprocal square root of the variance function, yielding g(\mu) = \int \mu^{-1/2} \, d\mu = 2\sqrt{\mu}. Applying this to the observed data gives the key transformation g(X) = 2\sqrt{X}, which approximately stabilizes the variance of the transformed variable to 1. The asymptotic properties of this transformation ensure that \text{Var}(g(X)) \approx 1 for sufficiently large \mu, with the approximation becoming exact as \mu \to \infty; this independence from \mu facilitates more reliable statistical inference, such as in normality-based tests or regression analyses on count data. For practical simplicity, the unscaled version g(X) = \sqrt{X} is sometimes employed instead, which stabilizes the variance to approximately 1/4. To improve accuracy for small \mu, where the basic approximation may deviate, the Anscombe transform refines the expression as g(X) = 2\sqrt{X + 3/8}; this correction minimizes bias in the variance stabilization and yields \text{Var}(g(X)) \approx 1 + O(1/\mu) even for moderate \mu \geq 1. The additive term 3/8 is chosen such that the first-order correction in the expansion of the variance aligns closely with the target constant, making the transform particularly useful for Poisson data with low counts, as encountered in photon-limited imaging and similar low-count applications.
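
A quick Monte Carlo check (a sketch, assuming Python with numpy) illustrates these claims: the variance of 2\sqrt{X} approaches 1 as \lambda grows, and the Anscombe version 2\sqrt{X + 3/8} stays closer to 1 at small \lambda:

```python
import numpy as np

rng = np.random.default_rng(0)
for lam in (2, 5, 20, 100):
    x = rng.poisson(lam, size=200_000)
    # Both variances tend to 1; the 3/8-shifted version converges faster.
    print(f"lambda={lam:4d}  Var(2*sqrt(X))       = {np.var(2 * np.sqrt(x)):.3f}")
    print(f"             Var(2*sqrt(X + 3/8)) = {np.var(2 * np.sqrt(x + 3/8)):.3f}")
```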

Binomial variance stabilization

For a random variable X following a binomial distribution X \sim \text{Bin}(n, p), the mean is \mu = np and the variance is v(\mu) = np(1-p) = \mu(1 - \mu/n), a quadratic function of the mean whose curvature is particularly pronounced for proportions near 0 or 1. This heteroscedasticity makes direct analysis of binomial proportions challenging, as the variance increases with \mu up to a maximum of n/4 and then decreases symmetrically. The standard variance-stabilizing transformation for binomial data is the arcsine square-root transformation, defined for the proportion \hat{p} = X/n as g(\hat{p}) = \arcsin(\sqrt{\hat{p}}). Under this transformation, the variance of the transformed variable approximates 1/(4n), which is constant and independent of p, assuming n is fixed across observations. This stabilization arises from the asymptotic approximation in which the transformed variable behaves like a normal random variable with constant variance, facilitating parametric methods such as ANOVA or regression on proportion data. A notable property of the arcsine transformation is its effectiveness in stabilizing variance for proportions near the boundaries (0 or 1), where the original variance approaches zero but empirical fluctuations can be misleading. It also improves the normality of the distribution, though it may not fully normalize for small n. A variant, the Freeman-Tukey double arcsine transformation, defined as g(X) = \arcsin(\sqrt{X/(n+1)}) + \arcsin(\sqrt{(X+1)/(n+1)}), effectively doubles the angle and yields a variance approximation of 1/(n + 1/2), offering better performance for small samples or boundary values by reducing bias in variance estimates. The arcsine transformation is commonly applied in agricultural and biological research for analyzing percentage or proportion data, such as germination rates or disease incidences, where n represents a fixed number of trials (e.g., seeds or plants) and variance independence from p simplifies comparisons across treatments. In such contexts, it is often scaled by factors such as 2 or \sqrt{n} so that the stabilized standard deviation becomes a convenient constant, easing interpretation in statistical tests.
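
The claimed 1/(4n) variance is easy to verify by simulation; a minimal sketch assuming Python with numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # fixed number of trials per observation
print("target variance 1/(4n) =", 1 / (4 * n))
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    x = rng.binomial(n, p, size=200_000)
    g = np.arcsin(np.sqrt(x / n))
    # The transformed variance stays near 0.005 across p, unlike Var(X/n) = p(1-p)/n.
    print(f"p={p}: Var(arcsin(sqrt(X/n))) = {np.var(g):.5f}")
```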

Other common cases

For the log-normal distribution, where a random variable X satisfies \log X \sim \mathcal{N}(\mu, \sigma^2), the mean-variance relationship is approximately v(\mu_X) \approx \mu_X^2 \sigma^2 with \mu_X = \exp(\mu + \sigma^2/2). The logarithmic transformation g(X) = \log(X) stabilizes the variance to the constant \sigma^2 on the transformed scale, facilitating analyses that assume homoscedasticity. In the gamma distribution with fixed shape parameter \alpha > 0, the variance function is v(\mu) = \mu^2 / \alpha, indicating a similar quadratic dependence on the mean. The primary variance-stabilizing transformation is the logarithm g(X) = \log(X), which yields approximately constant variance \approx 1/\alpha; power adjustments, such as the square root g(X) = \sqrt{X}, offer asymptotic optimality as \alpha \to \infty under criteria like Kullback-Leibler divergence to a normal target. The chi-square distribution with \nu degrees of freedom is a gamma special case (\alpha = \nu/2, scale 2), yielding mean \mu = \nu and variance v(\mu) = 2\mu. The square-root transformation g(X) = \sqrt{2X} stabilizes the variance to approximately 1, with effectiveness increasing for large \nu, where the distribution nears normality. A general pattern emerges across these cases: when v(\mu) \propto \mu^k, the approximate variance-stabilizing transformation is g(X) \propto X^{(2-k)/2} for k \neq 2, or the logarithm for k = 2. This yields the identity transformation for constant variance (k = 0), the square root for linear variance (k = 1, as in the Poisson case), and the logarithm for quadratic variance (k = 2, as in the log-normal and gamma cases). For overdispersed count data exceeding standard Poisson variance (e.g., extra-Poisson variation), modified square-root transformations like \sqrt{X + c} with small c (such as 0.5 or 3/8) enhance stabilization by accounting for the inflated variance while preserving approximate constancy.
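
The power-law pattern can be expressed compactly in code. The sketch below (assuming Python with numpy; the helper name choose_vst is invented for illustration) maps an exponent k in v(\mu) \propto \mu^k to the corresponding transformation and checks the gamma case (k = 2) by simulation:

```python
import numpy as np

def choose_vst(k):
    """Approximate VST for v(mu) proportional to mu**k (hypothetical helper)."""
    if k == 2:
        return np.log
    return lambda x: x ** ((2 - k) / 2)  # identity for k=0, sqrt for k=1, etc.

rng = np.random.default_rng(2)
alpha = 4.0  # gamma shape; v(mu) = mu**2 / alpha, so k = 2
for scale in (1, 10, 100):  # the mean alpha*scale varies a hundredfold
    x = rng.gamma(alpha, scale, size=200_000)
    print(f"mean={alpha * scale:7.1f}  Var(log X) = {np.var(choose_vst(2)(x)):.3f}")
# Var(log X) stays near trigamma(alpha), roughly 1/alpha, regardless of the mean.
```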

Applications

In regression models

Variance-stabilizing transformations (VSTs) can be applied to the response variable Y to achieve approximately constant variance, enabling the use of ordinary least squares (OLS) regression to handle heteroscedasticity in data that might otherwise be modeled using generalized linear models (GLMs) for distributions like the Poisson. In such cases, the variance of the response is a function of the mean \mu, denoted v(\mu), and a VST is chosen such that the variance of the transformed response g(Y) is approximately constant, approximating a Gaussian error structure. This approach is particularly useful when the original data violate the homoscedasticity assumption of linear models, providing an approximation to GLM inference via OLS on the transformed scale. The procedure for implementing a VST in regression involves first specifying or estimating the variance function v(\mu), based on the assumed distribution or on preliminary residuals, and then deriving the transformation g such that the variance of g(Y) is approximately constant. The transformed response g(Y) is subsequently used in an OLS fit, which is equivalent to fitting a GLM with a Gaussian family and identity link for certain choices of g. For count data modeled under a Poisson assumption, where v(\mu) = \mu, the square-root transformation \sqrt{Y} (or, more precisely, \sqrt{Y + 3/8} for small counts) is a standard choice to stabilize variance. This method enables straightforward parameter estimation and hypothesis testing while preserving the interpretability of the model. In the context of analysis of variance (ANOVA), VSTs are beneficial for balanced experimental designs, as they stabilize variances across treatment groups, justifying the use of F-tests for comparing means. A classic application appears in agricultural yield experiments, where crop counts often exhibit Poisson-like variability; applying the square root transformation allows valid assessment of treatment effects without bias from unequal variances. Post-fitting diagnostics on the transformed model, such as plotting residuals against fitted values, are essential to verify the constancy of residual variance and confirm the transformation's adequacy. Software implementations facilitate this process; in R, for instance, the transformed response can be modeled using the glm function with family = gaussian(), enabling seamless integration with GLM diagnostics and inference tools.
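
As a concrete sketch of this workflow (in Python with numpy, an assumption of this example; the R route described above would instead fit the transformed response with glm and family = gaussian()), OLS is applied to \sqrt{Y} for simulated Poisson counts whose mean is chosen so that the square-root scale is linear in the covariate:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(1, 10, 200)
y = rng.poisson((1.0 + 0.8 * x) ** 2)  # counts; sqrt of the mean is linear in x

# OLS on the variance-stabilized response sqrt(Y)
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, np.sqrt(y), rcond=None)
fitted = X @ beta
resid = np.sqrt(y) - fitted

# Diagnostic from the text: residual spread should no longer track the fitted values.
lo = fitted < np.median(fitted)
print("residual SD, low-fitted half :", resid[lo].std().round(3))   # both near 0.5,
print("residual SD, high-fitted half:", resid[~lo].std().round(3))  # i.e. sqrt(1/4)
```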

In correlation analysis

Variance-stabilizing transformations (VSTs) are particularly useful in correlation analysis when dealing with heteroscedastic data, where the variance of the variables depends on their means, leading to unstable estimates of the Pearson correlation coefficient r. The sampling distribution of r is skewed, and its variance is approximately (1 - \rho^2)^2 / n, where \rho is the true correlation and n is the sample size; this dependence on \rho causes instability, especially when the variables exhibit mean-dependent variance, as in the count or proportional data common in ecological studies. To mitigate this, a VST is applied to each variable individually before computing the Pearson correlation on the transformed scale, which homogenizes variances and improves the validity of the estimate. For instance, with count data following a Poisson distribution, where the variance equals the mean, the square-root transformation \sqrt{x} serves as a VST, stabilizing the variance to an approximate constant and allowing more reliable estimation of bivariate associations. This approach ensures that the transformed variables better satisfy the assumptions of constant variance and approximate normality required for Pearson correlation. A specific VST for the correlation coefficient itself is Fisher's z-transformation, defined as z = \operatorname{artanh}(r) = \frac{1}{2} \ln \left( \frac{1 + r}{1 - r} \right), which normalizes the distribution of r and stabilizes its variance to approximately 1/(n - 3), independent of the true \rho. Proposed by Ronald A. Fisher in 1915, this transformation facilitates meta-analysis, confidence intervals, and hypothesis testing by rendering the variance constant across different correlation magnitudes. In ecological contexts, such as analyzing correlations between species abundances that vary widely due to environmental factors, VSTs like the square root for counts or the variance-stabilizing transformation from DESeq2 for microbial data help uncover true patterns by reducing bias from heteroscedasticity. For example, in microbiome studies, applying DESeq2's VST to operational taxonomic unit (OTU) abundances stabilizes variance before computing correlations, improving detection of associations compared to raw proportional data. For hypothesis testing, the transformed correlation z, or correlations computed on VST-transformed variables, is assumed to follow an approximately normal distribution, enabling standard t-tests or z-tests under the null hypothesis of no correlation, with the stabilized variance providing accurate p-values and confidence intervals. This is especially beneficial for testing correlations in heteroscedastic settings, where the raw r would yield distorted inferences.
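
A short sketch of Fisher's z in action (assuming Python with numpy): a confidence interval for \rho is built on the stabilized scale and mapped back with tanh:

```python
import numpy as np

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation via Fisher's z; Var(z) ~ 1/(n - 3)."""
    z = np.arctanh(r)               # z = artanh(r)
    se = 1.0 / np.sqrt(n - 3)       # stabilized standard error, free of rho
    return np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)

print(fisher_ci(r=0.62, n=40))  # roughly (0.38, 0.78)
```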

Connection to delta method

The delta method is an asymptotic technique for approximating the distribution of a function of an estimator or random variable. If \hat{\theta} is an estimator of the parameter \theta satisfying \sqrt{n}(\hat{\theta} - \theta) \xrightarrow{d} N(0, \sigma^2), then for a differentiable function g with g'(\theta) \neq 0, \sqrt{n} \left( g(\hat{\theta}) - g(\theta) \right) \xrightarrow{d} N\left(0, [g'(\theta)]^2 \sigma^2 \right). This implies that the asymptotic variance of g(\hat{\theta}) is approximately [g'(\theta)]^2 \operatorname{Var}(\hat{\theta}). Variance-stabilizing transformations (VSTs) seek a function g such that the variance of g(X) is approximately constant for a random variable X with mean \mu = E[X] and variance v(\mu). Applying the delta method, \operatorname{Var}(g(X)) \approx [g'(\mu)]^2 v(\mu). To achieve constant variance, say 1, set [g'(\mu)]^2 v(\mu) = 1, yielding the condition g'(\mu) = 1 / \sqrt{v(\mu)}. Integrating this produces the VST g(\mu), which asymptotically stabilizes the variance to a constant, as justified by the delta method. This connection mirrors the goal of VSTs by ensuring the transformed variable has parameter-independent variance in large samples. Higher-order expansions of the delta method, incorporating second and subsequent derivatives, address limitations of the first-order approximation, such as bias in the transformed estimator. For instance, when the first derivative g'(\theta) = 0 but higher derivatives are nonzero, the expansion shifts to n(g(\hat{\theta}) - g(\theta)) \xrightarrow{d} \frac{1}{2} g''(\theta) \sigma^2 \chi^2_1, providing refined variance approximations and bias corrections for VSTs. The delta method also supports proofs of asymptotic efficiency for maximum likelihood estimators (MLEs) under VSTs, as the plugin estimator g(\hat{\theta}), where \hat{\theta} is the MLE, attains the Cramér-Rao lower bound asymptotically for the transformed parameter. Both the delta method and VSTs trace their origins to foundational work in asymptotic statistics during the 1920s and 1930s, particularly Ronald Fisher's developments in maximum likelihood and in transformations for stabilizing distributions, such as his z-transformation for correlations. These ideas were later formalized and extended by statisticians in the mid-20th century.
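
The correspondence can be checked numerically. A sketch (assuming Python with numpy) compares the empirical variance of g(\hat{\theta}) with the delta-method prediction [g'(\theta)]^2 \sigma^2 / n, taking \hat{\theta} to be the sample mean of exponential data and g = \log, which here is also the VST since \operatorname{Var}(\hat{\theta}) = \theta^2/n:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 2.0, 200, 50_000  # Exp(mean=theta): Var(X) = theta**2

# Simulate many replications of the sample mean, then transform with g = log.
means = rng.exponential(theta, size=(reps, n)).mean(axis=1)
empirical = np.var(np.log(means))
predicted = (1.0 / theta) ** 2 * theta**2 / n  # [g'(theta)]^2 * Var(X) / n = 1/n

print(f"empirical Var(log(theta_hat)) = {empirical:.5f}")
print(f"delta-method prediction       = {predicted:.5f}")  # 1/n = 0.00500
```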

Comparison with power transformations

Power transformations, such as the Box-Cox family, provide a flexible class of monotonic transformations defined by g(y; \lambda) = \frac{y^\lambda - 1}{\lambda} for \lambda \neq 0 and g(y; 0) = \log y for positive y, aimed at stabilizing variance while also promoting approximate normality in the transformed data. The parameter \lambda is typically estimated from the data by maximum likelihood to optimize model fit under assumptions of constant variance and normality of residuals. In contrast, variance-stabilizing transformations (VSTs) are derived specifically to achieve constant variance in the transformed variable, based on the asymptotic relationship between the mean \mu and the variance v(\mu) of the original variable, often without explicit focus on normality. For instance, if v(\mu) \propto \mu^{2\alpha}, a VST takes the form T(y) = \int^\mu v(u)^{-1/2} \, du, which simplifies to a power transformation y^{1 - \alpha} in many cases. While Box-Cox transformations are more general and data-driven, allowing adaptation to unknown mean-variance relationships through empirical estimation of \lambda, VSTs rely on knowledge of the distribution for their exact forms, making them a targeted tool rather than a broad family. VSTs often coincide with specific values of \lambda in the Box-Cox family when the underlying distribution is known, such as the square-root transformation (\lambda = 0.5) for Poisson-distributed data, where the variance equals the mean and the transformation stabilizes the variance to approximately 1/4. Similarly, the logarithmic transformation serves as a VST for distributions with multiplicative errors (variance proportional to \mu^2), aligning with Box-Cox at \lambda = 0. In such scenarios, VSTs suffice without needing parameter estimation, offering computational efficiency, particularly for well-characterized distributions. However, the flexibility of Box-Cox comes at the cost of increased computational intensity due to the optimization of \lambda, which requires iterative fitting and may perform poorly if the mean-variance relationship is not tightly linear on a log-log scale. VSTs avoid this by using analytically derived forms, providing faster implementation for well-understood models, though they lack the adaptability of Box-Cox for complex or unknown heteroscedasticity patterns.
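
The empirical route mentioned in the lead, regressing \log s_i on \log \bar{y}_i across groups, is easy to sketch (assuming Python with numpy). For Poisson-like groups the slope \alpha should come out near 0.5, pointing to the power y^{1-\alpha} = y^{0.5}, i.e. the square root:

```python
import numpy as np

rng = np.random.default_rng(5)
group_means = np.array([2, 5, 10, 20, 50, 100], dtype=float)

log_m, log_s = [], []
for lam in group_means:
    g = rng.poisson(lam, size=5_000)
    log_m.append(np.log(g.mean()))  # log of the group mean
    log_s.append(np.log(g.std()))   # log of the group standard deviation

# Slope of log(s) on log(mean) estimates alpha in s proportional to mu**alpha.
alpha, intercept = np.polyfit(log_m, log_s, 1)
print(f"estimated slope alpha     = {alpha:.2f}")      # near 0.5 for Poisson groups
print(f"suggested power 1 - alpha = {1 - alpha:.2f}")  # near 0.5: the square root
```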

References

  1. [PDF] Variance-stabilizing Transformations and Weighted Least Squares ...
     Variance-stabilizing transformations: If the variance depends on E(Yi), transform the response variable. Weighted least squares: If the variance is ...
  2. Variance Stabilizing Transformations - SAS Help Center
     Jul 31, 2017. Variance stabilizing transformations are often used to transform a variable whose variance depends on the value of the variable.
  3. [PDF] Variance stabilizing transformations of Poisson, binomial and ...
     Consider variance stabilizing transformations of Poisson distribution π(λ), binomial distribution B(n, p) and negative binomial distribution NB(r, p), ...
  4. The Transformation of Poisson, Binomial and Negative-Binomial Data
     By F. J. Anscombe, Rothamsted Experimental Station. Introduction: Bartlett (1936) showed ...
  5. [PDF] Lecture 11
     Variance Stabilizing Transformations. Recap: An analysis of variance tests the following hypothesis: H0: µi = µj for all i, j; H1: µi ≠ µj for some i, j ...
  6. [PDF] On Selecting a Transformation: With Applications
     In this sense, Anscombe's g(X) is at the same time a normalizing and a better variance stabilizing transformation. The following calculations are done in this ...
  7. [PDF] Normalizing and Variance Stabilizing ...
     This implies that the Fisher's z-transformation (4) of r is also a variance stabilizing transformation for the elliptical population. 2.2 Canonical ...
  8. [PDF] Classics in the History of Psychology - Usable Buildings
     The importance of the Poisson Series in biological research was first brought out in connexion with the accuracy of counting with a hæmocytometer. It was shown ...
  9. [PDF] An Analysis of Transformations, G. E. P. Box - IME-USP
     Sep 29, 2007. ... transformation is monotonic. For transformation to stabilize variance, the usual method (Bartlett, 1947) is to determine empirically or ...
  10. The Use of Transformations - jstor
     Bartlett, M. S., and Kendall, D. G. "The Statistical Analysis of Variance-Heterogeneity and the Logarithmic Transformation," Journal of the Royal Statistical Society.
  11. 10.1 - Nonconstant Variance and Weighted Least Squares | STAT 462
     Apply a variance-stabilizing transformation to the response variable, for example a logarithmic transformation (or a square root transformation if a ...
  12. The Transformation of Poisson, Binomial and Negative-Binomial Data
     F. J. Anscombe, Biometrika, Volume 35, Issue 3-4, 1 December 1948, Pages 246-254.
  13. [PDF] Statistics 512: Techniques of Mathematics for ...
     Nov 20, 2018. Question: what "variance stabilizing" transformation will have an approximately constant variance? We require ...
  14. [PDF] Stat 5101 Lecture Slides Deck 7 - School of Statistics
     Variance Stabilizing Transformations (cont.) Thus g(p) = asin(2p − 1), 0 ≤ p ≤ 1, is a variance stabilizing transformation for the Bernoulli distribution.
  15. [PDF] Variance Stabilizing Transformations
     We want a transformation f(Y) that has constant variance. Writing out a first-order Taylor series expansion: f(Y) ≈ f(µ) + (Y − µ)f′(µ) ⇒ f(Y) − f(µ) ...
  16. Advantages of Variance Stabilization - Wiley Online Library
     Feb 22, 2012. The mechanics involved in finding a good normalizing and variance stabilizing transformation are largely solved in many routine applications of ...
  17. [PDF] Transformations to Obtain Equal Variance
     General method for finding variance-stabilizing transformations: If Y has mean µ and variance σ², and if U = f(Y), then by the first order Taylor approximation, ...
  18. Transformations Related to the Angular and the Square Root
     The comparison of transformed binomial or Poisson data with percentage points of the normal distribution to make approximate significance tests or to set ...
  19. Data transformations - Handbook of Biological Statistics
     Dec 18, 2015. The numbers to be arcsine transformed must be in the range 0 to 1. This is commonly used for proportions, which range from 0 to 1, such as the ...
  20. Generalized Linear Models | P. McCullagh - Taylor & Francis eBooks
     Jan 22, 2019. The success of the first edition of Generalized Linear Models led to the updated Second Edition, which continues to provide a definitive ...
  21. Meta-analyzing partial correlation coefficients using Fisher's z ...
     Jul 8, 2023. The Fisher's z transformation is a variance-stabilizing transformation, so the sampling variance estimated with it does not depend on the ...
  22. Benchmarking microbiome transformations favors experimental ...
     Jun 11, 2021. Variance-stabilizing transformation (VST). ... Normalization methods for microbial abundance data strongly affect correlation estimates. BioRxiv ...
  23. What is the delta method? | const-ae
     Feb 10, 2023. Then we can find a variance-stabilizing transformation g by requiring constant standard deviation, Sd[g(Xi)] = const. ...
  24. [PDF] Lecture Notes for 201A Fall 2019 - UC Berkeley Statistics
     Transformations. 3.9.1 Motivating Variance Stabilizing Transformations. The Delta method can be applied to variance stabilizing transformations. For example ...
  25. [PDF] Plugin estimators and the delta method: 17.1 Estimating a function of θ
     The plugin estimate g(θ̂), where θ̂ is the MLE, achieves this variance asymptotically, so we say it is asymptotically efficient.
  26. [PDF] Asymptotic Evaluations - Purdue Department of Statistics
     If an MLE is asymptotically efficient, the asymptotic variance in Theorem 10.1.3 is the Delta method variance of Theorem 5.5.24 (without the 1/n term) ...
  27. The Epic Story of Maximum Likelihood - ResearchGate
     Aug 5, 2025. Fisher's study of the correlation has led to the discovery of variance-stabilizing transformations, sufficiency (Fisher, 1920), and ...
  28. [PDF] What Did Fisher Mean by An Estimate? - arXiv
     Fisher, R. A. (1922). On the mathematical foundations of theoretical statistics. Philos. Trans. A 222, 309-368. Fisher, R. A. (1925a). Statistical ...
  29. [PDF] STAT 224 Lecture 13, Chapter 6: Transformation of Variables
     Variables are transformed to solve non-linearity and non-constant variability problems. Some nonlinear models can be made linear after transformation.