
Noncentral chi-squared distribution

The noncentral chi-squared distribution is a continuous probability distribution that generalizes the central chi-squared distribution, arising as the distribution of the sum of squares of \nu independent Gaussian random variables, each with unit variance but possibly non-zero means \mu_i, where the noncentrality parameter \lambda = \sum_{i=1}^\nu \mu_i^2 \geq 0. It is parameterized by the positive degrees of freedom \nu > 0 and the noncentrality parameter \lambda, with the central chi-squared distribution recovered as the special case \lambda = 0. The probability density function is given by f(x; \nu, \lambda) = \frac{1}{2} \left( \frac{x}{\lambda} \right)^{(\nu-2)/4} \exp\left( -\frac{x + \lambda}{2} \right) I_{(\nu/2)-1} \left( \sqrt{\lambda x} \right), for x > 0, where I_\alpha(z) denotes the modified Bessel function of the first kind of order \alpha. Equivalently, it admits a Poisson-mixture representation: f(x; \nu, \lambda) = \sum_{k=0}^\infty \frac{e^{-\lambda/2} (\lambda/2)^k}{k!} \, f_{\chi^2_{\nu + 2k}}(x), with f_{\chi^2_d}(x) the density of a central chi-squared distribution with d degrees of freedom. The mean is \nu + \lambda and the variance is 2(\nu + 2\lambda), both of which increase with \lambda relative to the central case. In statistical applications, the noncentral chi-squared distribution describes the distribution of quadratic forms, such as test statistics, under alternative hypotheses where the null assumption of zero means fails, enabling power calculations for tests of means and variances in linear models. It also arises in signal processing for modeling the squared envelope of a non-zero-mean signal and in finance for related quadratic functionals. Related noncentral distributions, including the noncentral t and F, extend these concepts to ratio statistics.

Definitions

Background and Motivation

The noncentral chi-squared distribution with k degrees of freedom and noncentrality parameter λ is the distribution of the sum of the squares of k independent normal random variables, each with mean μ_i and variance 1, where λ = ∑_{i=1}^k μ_i^2 measures the extent of deviation from zero means. When λ = 0, all means are zero, and the distribution reduces to the central chi-squared distribution with k degrees of freedom, which serves as the foundational case for many tests in statistics. This distribution was introduced by Ronald A. Fisher in 1928, who derived it while studying the sampling distribution of the multiple correlation coefficient and recognized its role in non-null scenarios for variance analysis. Fisher's work highlighted its emergence from sums of squared normals under deviated expectations, laying the groundwork for broader applications in inferential statistics. The noncentral chi-squared distribution arises naturally in quadratic forms within the general linear model, where observations follow a normal distribution with non-zero means, such as in regression or analysis of variance settings. Intuitively, it models the behavior of test statistics when assessing deviations from a hypothesized mean vector, with λ quantifying the signal strength or effect size; this contrasts with the central chi-squared, which assumes no deviation under the null hypothesis and is used to evaluate type I error rates.

Probability Density Function

The noncentral chi-squared distribution arises as the distribution of the sum of squares of independent normal random variables with possibly nonzero means, generalizing the central chi-squared distribution. Its probability density function (PDF), denoted f(x; k, \lambda), is defined for x \geq 0, where k > 0 is the degrees of freedom (typically a positive integer but generalizable to real values) and \lambda \geq 0 is the noncentrality parameter, equal to the sum of the squared means of the underlying normal variables (scaled by their variance). The standard closed-form expression for the PDF involves the modified Bessel function of the first kind and is given by f(x; k, \lambda) = \frac{1}{2} e^{-(x + \lambda)/2} \left( \frac{x}{\lambda} \right)^{(k-2)/4} I_{k/2 - 1} \left( \sqrt{\lambda x} \right), where I_{\nu}(z) denotes the modified Bessel function of the first kind of order \nu. This function is zero for x < 0 and integrates to 1 over [0, \infty), ensuring proper normalization. An alternative representation expresses the PDF as an infinite mixture of central chi-squared densities: f(x; k, \lambda) = \sum_{j=0}^{\infty} e^{-\lambda/2} \frac{(\lambda/2)^j}{j!} f_{k + 2j}(x), where f_m(x) is the PDF of a central chi-squared distribution with m degrees of freedom, specifically f_m(x) = \frac{1}{2^{m/2} \Gamma(m/2)} x^{m/2 - 1} e^{-x/2} for x > 0. This Poisson-weighted series form highlights the distribution as a compound of central chi-squared components, with the mixing weights following a Poisson distribution with mean \lambda/2. For \lambda > 0, the PDF is positively skewed and heavier in the right tail than the central case (\lambda = 0), with the mode shifting rightward and the mean increasing to k + \lambda.
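
The two representations can be checked numerically. The sketch below, assuming a Python environment with NumPy and SciPy (whose scipy.stats.ncx2 implements this distribution), evaluates the Bessel closed form and a truncated Poisson-weighted series at one point and compares them to the library value.

```python
# Minimal numerical check: the Bessel closed form and the Poisson-weighted
# series for the noncentral chi-squared PDF should agree with scipy.stats.ncx2.
import numpy as np
from scipy import stats
from scipy.special import iv, gammaln

def pdf_bessel(x, k, lam):
    """Closed-form PDF via the modified Bessel function I_{k/2-1}."""
    return 0.5 * np.exp(-(x + lam) / 2) * (x / lam) ** ((k - 2) / 4) \
        * iv(k / 2 - 1, np.sqrt(lam * x))

def pdf_series(x, k, lam, terms=200):
    """Poisson mixture of central chi-squared densities, truncated."""
    total = 0.0
    for j in range(terms):
        log_w = -lam / 2 + j * np.log(lam / 2) - gammaln(j + 1)  # Poisson weight
        total += np.exp(log_w) * stats.chi2.pdf(x, k + 2 * j)
    return total

x, k, lam = 7.3, 4, 2.5   # illustrative values
print(pdf_bessel(x, k, lam))        # all three should match closely
print(pdf_series(x, k, lam))
print(stats.ncx2.pdf(x, k, lam))
```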

Derivation of the PDF

The noncentral chi-squared distribution with k degrees of freedom and noncentrality parameter \lambda \geq 0 arises as the distribution of X = \sum_{i=1}^k Z_i^2, where the Z_i are independent normal random variables with Z_i \sim \mathcal{N}(\mu_i, 1) and \lambda = \sum_{i=1}^k \mu_i^2. This setup assumes independence among the Z_i and unit variance for each. One standard derivation of the probability density function (PDF) begins with the moment-generating function (MGF) of X. The MGF is M_X(t) = \mathbb{E}[e^{tX}] = \prod_{i=1}^k \mathbb{E}[e^{t Z_i^2}] = (1 - 2t)^{-k/2} \exp\left( \frac{\lambda t}{1 - 2t} \right) for t < 1/2. To obtain the MGF of a single Z_i^2, compute \mathbb{E}[e^{t Z_i^2}] = \int_{-\infty}^\infty e^{t z^2} \cdot \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{(z - \mu_i)^2}{2} \right) dz. Completing the square in the exponent yields \frac{(z - \mu_i)^2}{2} - t z^2 = \frac{1 - 2t}{2} \left( z - \frac{\mu_i}{1 - 2t} \right)^2 - \frac{\mu_i^2 t}{1 - 2t}, so the integral evaluates to (1 - 2t)^{-1/2} \exp\left( \frac{t \mu_i^2}{1 - 2t} \right). The product over i then gives the full MGF, as the noncentrality terms combine additively. This MGF can be expanded to reveal a mixture representation: M_X(t) = e^{-\lambda/2} \sum_{j=0}^\infty \frac{(\lambda/2)^j}{j!} (1 - 2t)^{-(k + 2j)/2}. The term (1 - 2t)^{-(k + 2j)/2} is the MGF of a central chi-squared distribution with k + 2j degrees of freedom. Thus X follows the distribution of a central chi-squared random variable with k + 2J degrees of freedom, where J \sim \mathrm{Poisson}(\lambda/2). The PDF of X is therefore the Poisson-weighted mixture f(x; k, \lambda) = e^{-\lambda/2} \sum_{j=0}^\infty \frac{(\lambda/2)^j}{j!} f_{k + 2j}(x), \quad x > 0, where f_m(x) = \frac{x^{m/2 - 1} e^{-x/2}}{2^{m/2} \Gamma(m/2)} is the PDF of the central chi-squared distribution with m degrees of freedom. This series form follows directly from the uniqueness of the MGF. An alternative approach derives the closed-form PDF by inverting the MGF, which corresponds to taking an inverse Laplace transform (with the substitution s = -t). This inversion yields an expression involving the modified Bessel function of the first kind: f(x; k, \lambda) = \frac{1}{2} e^{-(x + \lambda)/2} \left( \frac{x}{\lambda} \right)^{(k-2)/4} I_{k/2 - 1} \left( \sqrt{\lambda x} \right), \quad x > 0, where I_\nu(z) = \sum_{j=0}^\infty \frac{1}{j! \, \Gamma(j + \nu + 1)} \left( \frac{z}{2} \right)^{2j + \nu} is the modified Bessel function of the first kind of order \nu = k/2 - 1. The equivalence to the series form arises because the Poisson mixture sum matches the series expansion of the Bessel function after substituting the central chi-squared PDFs and simplifying. This univariate derivation extends naturally to the noncentral Wishart distribution, which generalizes the construction to a matrix-valued case involving a p \times p random matrix formed from a k \times p multivariate normal sample, but the details differ and are not covered here.
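
Both constructions in this derivation can also be compared by simulation. The following sketch, assuming NumPy/SciPy and illustrative parameter values, draws one sample as a sum of squared shifted normals and another via the Poisson-randomized degrees of freedom, then checks that the two samples agree in distribution.

```python
# Monte Carlo check: sum of squared shifted normals vs. the Poisson-mixture
# construction of the noncentral chi-squared distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, mu = 3, np.array([1.0, -0.5, 2.0])   # illustrative means
lam = np.sum(mu ** 2)                    # noncentrality parameter
n = 200_000

# Direct construction: X = sum_i Z_i^2 with Z_i ~ N(mu_i, 1)
x_direct = np.sum(rng.normal(mu, 1.0, size=(n, k)) ** 2, axis=1)

# Mixture construction: X ~ chi^2_{k+2J} with J ~ Poisson(lam/2)
j = rng.poisson(lam / 2, size=n)
x_mix = rng.chisquare(k + 2 * j)

# Both samples should match the theoretical mean k + lam and each other.
print(x_direct.mean(), x_mix.mean(), k + lam)
print(stats.ks_2samp(x_direct, x_mix).pvalue)  # large p-value expected
```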

Mathematical Properties

Moment-Generating Function

The moment-generating function (MGF) of a random variable X following a noncentral chi-squared distribution with k degrees of freedom and noncentrality parameter \lambda is given by M_X(t) = \exp\left(\frac{\lambda t}{1 - 2t}\right) (1 - 2t)^{-k/2}, \quad t < \frac{1}{2}. This function encapsulates the distribution's moments and facilitates derivations of higher-order properties, such as cumulants and convolutions with other distributions. The derivation proceeds from the representation of X as the sum of k independent noncentral chi-squared random variables, each with 1 degree of freedom and respective noncentrality parameters \mu_1^2, \dots, \mu_k^2, where \lambda = \sum_{i=1}^k \mu_i^2. Each component X_i = Z_i^2, with Z_i \sim N(\mu_i, 1), has MGF M_{X_i}(t) = \exp\left(\frac{\mu_i^2 t}{1 - 2t}\right) (1 - 2t)^{-1/2}, \quad t < \frac{1}{2}. Independence implies that the MGF of X = \sum_{i=1}^k X_i is the product M_X(t) = \prod_{i=1}^k M_{X_i}(t) = \exp\left( \sum_{i=1}^k \frac{\mu_i^2 t}{1 - 2t} \right) (1 - 2t)^{-k/2} = \exp\left(\frac{\lambda t}{1 - 2t}\right) (1 - 2t)^{-k/2}, confirming the overall form. Key properties of this MGF include its analytic continuation to complex arguments t with \operatorname{Re}(t) < 1/2, ensuring uniqueness in the moment-determining region. The cumulant-generating function is \log M_X(t) = \frac{\lambda t}{1 - 2t} - \frac{k}{2} \log(1 - 2t), which directly yields the cumulants via differentiation. Notably, setting \lambda = 0 reduces the MGF to (1 - 2t)^{-k/2}, the form for the central chi-squared distribution, highlighting the noncentrality's role in shifting the distribution.

Moments and Cumulants

The moments of the noncentral chi-squared distribution with k degrees of freedom and noncentrality parameter \lambda can be derived from its moment-generating function. The first raw moment, or mean, is E[X] = k + \lambda. The second central moment, or variance, is \mathrm{Var}(X) = 2(k + 2\lambda). The skewness, defined as the third standardized central moment, is \gamma_1 = \sqrt{8} \, \frac{k + 3\lambda}{(k + 2\lambda)^{3/2}}. [1] For higher raw moments, a general expression is E[X^r] = \frac{2^r e^{-\lambda/2} \, \Gamma\left(r + \frac{k}{2}\right)}{\Gamma\left(\frac{k}{2}\right)} \; {}_1F_1\left(r + \frac{k}{2}; \frac{k}{2}; \frac{\lambda}{2}\right), where {}_1F_1 denotes the confluent hypergeometric function of the first kind. For example, the second raw moment is E[X^2] = (k + \lambda)^2 + 2(k + 2\lambda). The cumulants \kappa_r are given by \kappa_r = 2^{r-1} (r-1)! \, (k + r \lambda), \quad r \geq 1. The first two cumulants match the mean and variance: \kappa_1 = k + \lambda and \kappa_2 = 2(k + 2\lambda). The noncentrality parameter \lambda increases both the mean and variance relative to the central chi-squared case (\lambda = 0), with the variance inflated by an additional 4\lambda. It also heightens the skewness, amplifying the distribution's asymmetry, particularly for small k. For large k or \lambda, the standardized higher cumulants shrink, so by the central limit theorem the distribution becomes approximately normal, with its standardized moments approaching those of a Gaussian.
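
A quick numerical check of these formulas, assuming SciPy, compares the stated mean, variance, skewness, and hypergeometric moment expression with the values reported by scipy.stats.ncx2.

```python
# Check mean, variance, skewness, and the 1F1 raw-moment formula
# against scipy.stats.ncx2.
import numpy as np
from scipy import stats
from scipy.special import gamma, hyp1f1

k, lam = 5, 3.0   # illustrative parameters
mean, var, skew = stats.ncx2.stats(k, lam, moments="mvs")
print(mean, k + lam)                                        # k + lambda
print(var, 2 * (k + 2 * lam))                               # 2(k + 2 lambda)
print(skew, np.sqrt(8) * (k + 3 * lam) / (k + 2 * lam) ** 1.5)

def raw_moment(r, k, lam):
    """E[X^r] via the confluent hypergeometric function 1F1."""
    return (2 ** r * np.exp(-lam / 2) * gamma(r + k / 2) / gamma(k / 2)
            * hyp1f1(r + k / 2, k / 2, lam / 2))

# Second raw moment should equal (k + lam)^2 + 2(k + 2 lam).
print(raw_moment(2, k, lam), (k + lam) ** 2 + 2 * (k + 2 * lam))
```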
Cumulative Distribution Function

The cumulative distribution function (CDF) of a random variable X following the noncentral chi-squared distribution with k degrees of freedom and noncentrality parameter \lambda \geq 0 is defined as F(x; k, \lambda) = \Pr(X \leq x) = \int_0^x f(t; k, \lambda) \, dt for x \geq 0, where f(\cdot; k, \lambda) denotes the corresponding probability density function. This integral representation highlights the CDF as the accumulation of probability mass from 0 to x, but it lacks a simple closed-form expression in terms of elementary functions. An alternative series expansion provides a practical representation for computation: F(x; k, \lambda) = \sum_{j=0}^\infty e^{-\lambda/2} \frac{(\lambda/2)^j}{j!} \, F_{k+2j}(x), where F_m(x) is the CDF of the central chi-squared distribution with m degrees of freedom. This Poisson-weighted mixture arises from the distributional equivalence of the noncentral chi-squared to a central chi-squared with randomly shifted degrees of freedom, where the shift follows a Poisson distribution with mean \lambda/2. The series converges for all x \geq 0 and \lambda \geq 0, though truncation is typically required for numerical evaluation. The CDF possesses the standard properties of continuous distributions: it is continuous and strictly increasing from F(0; k, \lambda) = 0 to \lim_{x \to \infty} F(x; k, \lambda) = 1. Owing to the absence of a closed form, evaluation generally relies on the integral or series forms, with the choice depending on parameter values and computational context. The quantile function, or inverse CDF, Q(p; k, \lambda) = F^{-1}(p; k, \lambda), is the value x_p such that F(x_p; k, \lambda) = p for p \in (0,1); it has no closed-form expression and must be obtained numerically, often via root-finding methods like bisection on the CDF. This function is essential for determining critical values in statistical tests, where x_p defines the threshold for a given significance level. Computation can be sensitive for small p, as the lower tail of the distribution concentrates near 0, particularly when \lambda is small. The survival function, complementary to the CDF, is \overline{F}(x; k, \lambda) = 1 - F(x; k, \lambda) = \Pr(X > x), which quantifies right-tail probabilities and is directly applicable in hypothesis testing. It inherits the series representation by replacing F_{k+2j}(x) with its complement 1 - F_{k+2j}(x).
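
As a sketch of the series evaluation (assuming SciPy; the adaptive stopping rule is an illustrative choice, not a library feature), the following sums Poisson-weighted central chi-squared CDFs until the remaining Poisson mass is negligible.

```python
# Poisson-weighted series for the noncentral chi-squared CDF, truncated once
# the Poisson weights have passed their peak and fallen below a tolerance.
import numpy as np
from scipy import stats
from scipy.special import gammaln

def cdf_series(x, k, lam, tol=1e-12):
    """F(x; k, lam) as a Poisson mixture of central chi-squared CDFs."""
    total, j = 0.0, 0
    while True:
        w = np.exp(-lam / 2 + j * np.log(lam / 2) - gammaln(j + 1))
        total += w * stats.chi2.cdf(x, k + 2 * j)
        # Each central CDF term is at most 1, so the leftover Poisson mass
        # bounds the truncation error once the weights are decreasing.
        if j > lam / 2 and w < tol:
            break
        j += 1
    return total

x, k, lam = 10.0, 4, 6.0   # illustrative values
print(cdf_series(x, k, lam), stats.ncx2.cdf(x, k, lam))
```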
Approximations and Numerical Computation

One common approach for approximating the cumulative distribution function (CDF) of the noncentral chi-squared distribution involves a truncated series expansion as a Poisson-weighted sum of central chi-squared CDFs: F(x; k, \lambda) \approx \sum_{j=0}^{m} e^{-\lambda/2} \frac{(\lambda/2)^j}{j!} F_{k+2j}(x), where F_{k+2j}(x) is the CDF of a central chi-squared distribution with k + 2j degrees of freedom, and the truncation index m is chosen so that the remainder term is below a desired tolerance, often m \approx \lambda/2 + \sqrt{2\lambda} for large noncentrality parameter \lambda. This method provides error bounds of order O(e^{-\lambda/2} (\lambda/2)^{m+1}/(m+1)!), which tighten for moderate \lambda but require careful truncation for large \lambda > 100 to avoid numerical instability. For large degrees of freedom k or noncentrality \lambda, a normal approximation is effective, where the noncentral chi-squared random variable X \sim \chi^2_k(\lambda) is approximated by X \approx \mathcal{N}(k + \lambda, 2(k + 2\lambda)). This follows from the mean \mathbb{E}[X] = k + \lambda and variance \mathrm{Var}(X) = 2(k + 2\lambda), with refined local limit theorems improving the approximation error to O(r^{-2}) for the survival function when r = k/2. The Wilson-Hilferty cube-root transformation enhances quantile accuracy, particularly in the tails, by approximating \left( \frac{X}{k + \lambda} \right)^{1/3} \approx \mathcal{N}\left( 1 - \frac{2(k + 2\lambda)}{9(k + \lambda)^2}, \; \frac{2(k + 2\lambda)}{9(k + \lambda)^2} \right), an extension of the central case that reduces skewness effects for moderate k and \lambda. This method outperforms the plain normal approximation for percentage points, with relative errors under 1% for k \geq 10 and \lambda \leq 50. Other approximations include the saddlepoint method, which excels for tail probabilities by expanding around the saddlepoint of the moment-generating function, achieving relative errors below 0.1% in extreme tails for quadratic forms underlying the noncentral chi-squared. Imhof's method computes the CDF via numerical integration of the characteristic function, offering high precision (errors below 10^{-8}) for a wide range of parameters but at higher computational cost than series methods; post-2000 variants, such as those incorporating adaptive quadrature, address limitations in earlier implementations like Garwood's table-based algorithm from 1936.
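
The relative behavior of these approximations is easy to probe numerically. The sketch below, assuming SciPy and arbitrary illustrative parameters, compares the plain normal and cube-root approximations of the CDF with the exact value.

```python
# Compare the plain normal and Wilson-Hilferty cube-root approximations of
# the noncentral chi-squared CDF against scipy.stats.ncx2.cdf.
import numpy as np
from scipy import stats

def cdf_normal_approx(x, k, lam):
    """N(k + lam, 2(k + 2 lam)) approximation."""
    return stats.norm.cdf(x, loc=k + lam, scale=np.sqrt(2 * (k + 2 * lam)))

def cdf_wilson_hilferty(x, k, lam):
    """Cube-root transform: (X/(k+lam))^(1/3) approximately normal."""
    c = 2 * (k + 2 * lam) / (9 * (k + lam) ** 2)
    z = ((x / (k + lam)) ** (1 / 3) - (1 - c)) / np.sqrt(c)
    return stats.norm.cdf(z)

k, lam = 10, 20.0   # illustrative parameters
for x in (15.0, 30.0, 50.0):
    print(x, stats.ncx2.cdf(x, k, lam),
          cdf_normal_approx(x, k, lam), cdf_wilson_hilferty(x, k, lam))
```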
Numerical computation in software libraries relies on these approximations and exact recursions. For instance, R's `pchisq` function implements Ding's Algorithm AS 275, which uses a recursive evaluation of the PDF terms to compute the CDF via integration, avoiding overflow in the modified Bessel functions by forward and backward recursions with starting values from asymptotic expansions. Similarly, SciPy's `stats.ncx2` employs a hybrid approach combining series summation for small \lambda and normal approximations for large \lambda, with relative accuracies around 10^{-15} for most parameters. Recent advances as of 2025 include GPU-accelerated implementations of the series summation for high-dimensional settings, leveraging parallel computation of the Poisson terms to handle \lambda > 10^4 and k > 10^3 in seconds, as in optimized libraries for statistical simulations. For quantiles, iterative methods such as bisection search on the approximated CDF provide robust solutions, converging in under 20 iterations for precision 10^{-10}. Direct formulas like Patnaik's approximation model the distribution as a scaled central chi-squared with adjusted degrees of freedom k' = \frac{(k + \lambda)^2}{k + 2\lambda} and scale \theta = \frac{k + 2\lambda}{k + \lambda}, yielding quantile errors under 2% for moderate \lambda < 20 and offering a simple alternative to inversion.
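
A minimal quantile routine along these lines, assuming SciPy, uses Patnaik's scaled central chi-squared to seed a bracket and then root-finds on the CDF (brentq is used here in place of plain bisection; the bracket widths are an illustrative choice).

```python
# Quantiles by root-finding on the CDF, with Patnaik's approximation as the
# starting guess.
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def ncx2_quantile(p, k, lam):
    """Solve F(x; k, lam) = p numerically."""
    # Patnaik: X approx theta * chi2_{k'} with matched mean and variance.
    k_eff = (k + lam) ** 2 / (k + 2 * lam)
    theta = (k + 2 * lam) / (k + lam)
    x0 = theta * stats.chi2.ppf(p, k_eff)   # starting guess
    lo, hi = x0 / 10, x0 * 10               # crude bracket around the guess
    return brentq(lambda x: stats.ncx2.cdf(x, k, lam) - p, lo, hi, xtol=1e-12)

p, k, lam = 0.95, 6, 12.0
print(ncx2_quantile(p, k, lam), stats.ncx2.ppf(p, k, lam))
```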
Related Distributions

Central Limit and Special Cases

The noncentral chi-squared distribution with parameters k (degrees of freedom) and \lambda (noncentrality parameter) encompasses several special cases that connect it directly to the central chi-squared distribution. When \lambda = 0, the distribution simplifies to the central chi-squared distribution with k degrees of freedom, as there is no shift due to nonzero means in the underlying normal variables. For the specific case of k = 1, the random variable X \sim \chi^2_1(\lambda) follows the distribution of the square of a normal random variable with mean \sqrt{\lambda} and unit variance, i.e., X = Z^2 where Z \sim N(\sqrt{\lambda}, 1). A key representation of the noncentral chi-squared distribution is as a Poisson mixture of central chi-squared distributions. Specifically, if J \sim \mathrm{Poisson}(\lambda/2), then X \stackrel{d}{=} \chi^2_{k + 2J}, where \chi^2_{k + 2J} denotes a central chi-squared random variable with k + 2J degrees of freedom. This mixture arises because the noncentrality introduces a random number of additional "effective" degrees of freedom, each contributing 2 to the total, weighted by the Poisson probabilities. In limiting regimes, the noncentral chi-squared distribution exhibits normal behavior consistent with the central limit theorem. For fixed \lambda and large k, the standardized variable converges in distribution to the standard normal: \frac{X - (k + \lambda)}{\sqrt{2(k + 2\lambda)}} \to N(0, 1) as k \to \infty. Similarly, for fixed k and large \lambda, the same standardization applies, with the variance term 2(k + 2\lambda) \approx 4\lambda dominating, yielding an approximate normal distribution centered near \lambda with standard deviation approximately 2\sqrt{\lambda}. These approximations are particularly useful for large-sample inference where exact computations are infeasible. The parameters k and \lambda have clear interpretations in terms of the underlying model. The degrees of freedom k correspond to the number of independent normal components in the quadratic form defining the distribution, analogous to the central case. The noncentrality parameter \lambda quantifies the squared signal strength, specifically \lambda = \sum_{i=1}^k \mu_i^2, where the \mu_i are the nonzero means of the normals, measuring the deviation from the null hypothesis of zero means.
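
The normal limit can be probed numerically; the sketch below, assuming SciPy, evaluates the CDF of the standardized variable at a few points and watches it approach the standard normal CDF as k grows.

```python
# The standardized noncentral chi-squared approaches N(0, 1) as k grows.
import numpy as np
from scipy import stats

def standardized_cdf(z, k, lam):
    """P((X - (k+lam)) / sqrt(2(k+2lam)) <= z) for X ~ ncx2(k, lam)."""
    x = (k + lam) + z * np.sqrt(2 * (k + 2 * lam))
    return stats.ncx2.cdf(x, k, lam)

for k in (5, 50, 500):
    vals = [standardized_cdf(z, k, lam=10.0) for z in (-1.0, 0.0, 1.0)]
    print(k, np.round(vals, 4), np.round(stats.norm.cdf([-1.0, 0.0, 1.0]), 4))
```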
Transformations to Other Distributions

One important monotonic transformation of the noncentral chi-squared distribution involves the square root, which links it to the noncentral chi distribution. Let X \sim \chi_k^2(\lambda) denote a noncentral chi-squared random variable with k degrees of freedom and noncentrality parameter \lambda \geq 0. Then \sqrt{X} follows a noncentral chi distribution with the same k degrees of freedom and noncentrality parameter \sqrt{\lambda}. This distribution represents the Euclidean norm of a k-dimensional normal random vector with independent components N(\mu_i, 1), where \sum_{i=1}^k \mu_i^2 = \lambda. In the special case k = 2, the noncentral chi distribution reduces to the Rice distribution (also known as the Rician distribution), which models the amplitude of a signal with a line-of-sight component in the presence of Gaussian noise. Ratio transformations provide another class of links to F-related distributions. If X \sim \chi_{k_1}^2(\lambda) and Y \sim \chi_{k_2}^2 are independent, with Y following a central chi-squared distribution with k_2 degrees of freedom, then the scaled ratio F = \frac{X / k_1}{Y / k_2} follows a noncentral F distribution with degrees of freedom k_1, k_2 and noncentrality parameter \lambda. This arises naturally in the analysis of variance under non-null hypotheses. If the denominator Y is also noncentral, say Y \sim \chi_{k_2}^2(\lambda_2), the resulting distribution is the doubly noncentral F with noncentrality parameters \lambda and \lambda_2. Logarithmic and power transformations are often considered for variance stabilization of the noncentral chi-squared, especially since the variance increases with the mean, but they do not produce distributions with simple closed forms. For instance, the logarithm \log(X) can approximate a normal distribution under certain conditions, though exact mappings are unavailable. More precise variance-stabilizing transformations have been derived, such as those based on the delta method or direct optimization for the noncentral case, which linearize the variance-mean relationship. These are particularly useful in signal processing and imaging, where noncentral chi-squared noise models speckle or magnitude data. Unlike the central chi-squared distribution, which is equivalent to a gamma distribution \Gamma(k/2, 2), the noncentral chi-squared lacks a direct transformation to a single gamma variate. Instead, it admits a compound representation as a Poisson-weighted mixture of central chi-squared distributions: X \stackrel{d}{=} \chi_{k + 2J}^2, where J \sim \mathrm{Poisson}(\lambda/2) is independent of the central chi-squared component. This mixture structure facilitates computational approximations but does not yield a simple gamma linkage.
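
The ratio construction can be verified by simulation against the noncentral F distribution as implemented in SciPy (stats.ncf); the sample sizes and parameters below are illustrative.

```python
# Build a noncentral F variate as the scaled ratio of an ncx2 numerator and
# an independent central chi-squared denominator; compare to stats.ncf.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k1, k2, lam = 4, 20, 5.0
n = 200_000

x = stats.ncx2.rvs(k1, lam, size=n, random_state=rng)   # noncentral numerator
y = rng.chisquare(k2, size=n)                           # central denominator
f = (x / k1) / (y / k2)

# Empirical CDF of the ratio vs. the noncentral F CDF at a few points.
for q in (0.5, 1.0, 2.0, 4.0):
    print(q, (f <= q).mean(), stats.ncf.cdf(q, k1, k2, lam))
```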
Applications and Interpretations

Hypothesis Testing and Power Calculations

In hypothesis testing, the noncentral chi-squared distribution arises as the sampling distribution of the test statistic under the alternative hypothesis H_a for various chi-squared tests, such as those assessing independence in contingency tables or goodness-of-fit to expected proportions. Specifically, when the null hypothesis of no association or perfect fit is false, the test statistic follows a noncentral chi-squared distribution \chi^2(k, \lambda) with k degrees of freedom and noncentrality parameter \lambda, where \lambda is proportional to the square of an effect-size measure multiplied by the sample size n. For example, in a chi-squared test of independence, \lambda \approx n w^2, with w denoting Cohen's effect size, which quantifies the deviation from independence based on observed and expected cell frequencies. Similarly, for goodness-of-fit tests, \lambda = n \omega^2, where \omega^2 = \sum_i (p_i - e_i)^2 / e_i sums the squared deviations of observed proportions p_i from expected proportions e_i across categories. The statistical power of such tests, defined as the probability of correctly rejecting the null hypothesis when H_a holds, is computed using the noncentral chi-squared cumulative distribution function (CDF). Power equals 1 - F_{k,\lambda}(\chi^2_{k,1-\alpha}), where F_{k,\lambda}(\cdot) is the CDF of \chi^2(k, \lambda) and \chi^2_{k,1-\alpha} is the (1-\alpha)-quantile of the central chi-squared distribution \chi^2(k), serving as the critical value for a type I error rate of \alpha. This formulation allows direct evaluation of power for specified \lambda; in contingency table analyses, for instance, a moderate effect (w = 0.3) with n = 100 yields power around 0.85 at \alpha = 0.05 and k = 1. In goodness-of-fit scenarios, such as testing a uniform distribution across four categories against a shifted alternative, \lambda \approx 7.68 similarly produces power near 0.63. For experimental design, the noncentral chi-squared framework facilitates sample size determination to achieve a target power given an anticipated effect size \delta. This involves solving iteratively for n such that \lambda = n \delta^2 yields the desired power via the CDF expression, often using software to handle the nonclosed-form solution. Historically, the noncentral chi-squared distribution was introduced by Fisher in 1928 for power assessments in correlation and variance analyses, extending exact methods like Fisher's test for contingency tables to asymptotic power evaluations. In modern applications, particularly within generalized linear models (GLMs), simulations are employed to approximate power when \lambda involves complex covariate structures or non-iid observations, as analytical solutions become intractable.
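
These power and sample-size recipes translate directly into code. The sketch below, assuming SciPy, computes power from the noncentral CDF and searches for the smallest n reaching a target power; the effect size w = 0.3 matches the example above.

```python
# Power of a chi-squared test from the noncentral CDF, plus a simple search
# for the sample size that hits a target power.
from scipy import stats

def power(n, w, df, alpha=0.05):
    """Power = 1 - F_{df, lam}(central critical value), with lam = n * w^2."""
    crit = stats.chi2.ppf(1 - alpha, df)    # central (1 - alpha)-quantile
    return 1 - stats.ncx2.cdf(crit, df, n * w ** 2)

print(power(n=100, w=0.3, df=1))            # roughly 0.85 for these inputs

def required_n(w, df, target=0.80, alpha=0.05):
    n = 1
    while power(n, w, df, alpha) < target:  # crude linear search
        n += 1
    return n

print(required_n(w=0.3, df=1))
```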
Tolerance Intervals and Confidence Regions

Tolerance intervals are statistical intervals designed to contain at least a specified proportion p of the values from a normal population with a given confidence level \gamma. For a univariate normal distribution with unknown mean and variance, based on a sample of size n, the two-sided tolerance interval takes the form \bar{x} \pm k s, where \bar{x} is the sample mean, s is the sample standard deviation, and k is the tolerance factor. When the mean is known, k is determined using central chi-squared quantiles, but when the mean is unknown, the factor k must account for the estimation uncertainty, leading to the use of noncentral chi-squared quantiles in the computation. The tolerance factor k is solved from an equation for the coverage probability, expressed as an integral over the standard normal density in which the inner term involves the quantile of a noncentral chi-squared distribution with 1 degree of freedom and noncentrality parameter \delta^2 z^2, with \delta^2 = 1/n. Specifically, k solves \gamma = 2 \int_0^\infty \Pr\left( \chi^2_\nu \geq \frac{\nu \, \chi^2_{1;p}(\delta^2 z^2)}{k^2} \right) \phi(z) \, dz, where \nu = n - 1, \chi^2_{1;p}(\lambda) is the p-quantile of the noncentral chi-squared distribution with 1 degree of freedom and noncentrality \lambda, \phi(z) is the standard normal PDF, and the central chi-squared tail probability can be written via the upper incomplete gamma function as \Pr(\chi^2_\nu \geq c) = \Gamma(\nu/2, c/2)/\Gamma(\nu/2). This approach ensures the interval covers at least proportion p with exact confidence \gamma, distinguishing it from approximations that ignore the noncentrality. In the multivariate normal setting, confidence regions for the mean vector \mu are elliptical and constructed using Hotelling's T^2 statistic, defined as T^2 = n (\bar{\mathbf{x}} - \mu)^\top \mathbf{S}^{-1} (\bar{\mathbf{x}} - \mu), where \mathbf{S} is the sample covariance matrix. The region \{\mu : T^2(\mu) \leq \frac{p(n-1)}{n-p} F_{p, n-p; 1-\gamma}\} provides exact coverage probability 1 - \gamma using the central F distribution, as the statistic is pivotal. Extending to higher dimensions, tolerance regions for multivariate normals form ellipsoids that cover a proportion p of the population with confidence \gamma, requiring upper quantiles of noncentral chi-squared distributions with degrees of freedom equal to the dimension p. These regions generalize univariate tolerance intervals and are essential in quality control and engineering, where the noncentral chi-squared accounts for the uncertainty in estimating the mean vector, leading to wider ellipsoids in higher dimensions to maintain coverage guarantees.
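
Under the stated assumptions, the tolerance factor can be computed by numerically solving the coverage equation; the sketch below (SciPy assumed, with an illustrative truncation of the normal integral at z = 8) combines the noncentral chi-squared quantile with the central chi-squared tail.

```python
# Solve the exact two-sided tolerance factor equation numerically:
# ncx2.ppf gives the noncentral chi-squared quantile, chi2.sf the upper tail.
from scipy import stats
from scipy.integrate import quad
from scipy.optimize import brentq

def coverage_confidence(kf, n, p):
    """Confidence that xbar +/- kf * s covers at least proportion p."""
    nu = n - 1
    def integrand(z):
        c = nu * stats.ncx2.ppf(p, 1, z ** 2 / n) / kf ** 2
        return stats.chi2.sf(c, nu) * 2 * stats.norm.pdf(z)
    return quad(integrand, 0, 8)[0]   # normal mass beyond z = 8 is negligible

def tolerance_factor(n, p=0.90, gamma=0.95):
    return brentq(lambda kf: coverage_confidence(kf, n, p) - gamma, 0.5, 20)

print(tolerance_factor(20))   # roughly 2.3 for n = 20, p = 0.90, gamma = 0.95
```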
Signal Detection and Other Uses

In signal detection theory, particularly in radar and communications systems, the test statistic for detecting a known signal embedded in additive white Gaussian noise follows a noncentral chi-squared distribution with 2 degrees of freedom and noncentrality parameter \lambda proportional to the signal-to-noise ratio (SNR). This arises because the envelope of the received signal follows a Rice distribution, and squaring it yields the noncentral chi-squared form, where \lambda = 2 \, \mathrm{SNR} under standard normalization of the matched-filter output. Receiver operating characteristic (ROC) curves for such detectors are constructed using the cumulative distribution function (CDF) of the noncentral chi-squared under the signal-present hypothesis for the probability of detection, contrasted with the central chi-squared CDF under the noise-only hypothesis for the false-alarm rate.
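
An ROC sketch for such a square-law detector, assuming SciPy and an illustrative linear-scale SNR, sets thresholds from the central chi-squared false-alarm tail and reads detection probabilities from the noncentral survival function.

```python
# ROC points for a square-law detector: noise-only statistic is central
# chi-squared (2 df); signal-present statistic is ncx2 with lam = 2 * SNR.
import numpy as np
from scipy import stats

df, snr = 2, 4.0                           # illustrative linear-scale SNR
lam = 2 * snr
pfa = np.logspace(-6, -1, 6)               # target false-alarm rates
thresholds = stats.chi2.isf(pfa, df)       # noise-only tail sets the threshold
pd = stats.ncx2.sf(thresholds, df, lam)    # detection probability under signal
for fa, d in zip(pfa, pd):
    print(f"Pfa = {fa:.0e}  Pd = {d:.3f}")
```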
In physics and engineering applications, the noncentral chi-squared distribution models quadratic detectors, which compute the squared output of filters to detect signals in noisy environments, such as in spectroscopy, where weak spectral lines are identified amid Gaussian noise. In particle physics, it appears in goodness-of-fit tests for data distributions deviating from expected backgrounds, as in the Higgs boson discovery analyses at the LHC, where the profile likelihood ratio test statistic -2 \ln \lambda approximately follows a noncentral chi-squared distribution under the signal-plus-background hypothesis, with the noncentrality parameter reflecting the signal strength. Beyond detection, the noncentral chi-squared distribution finds use in reliability engineering for accelerated life testing, where approximations relate Weibull-distributed failure times to chi-squared forms via transformations of the hazard function, enabling inference on acceleration factors under stress. In finance, the noncentral chi-squared arises in modeling portfolio return variances under nonzero drift, as the quadratic form of returns drawn from a multivariate normal distribution with nonzero mean vector captures the impact of expected gains on risk metrics such as value-at-risk. Recent developments since 2020 leverage the noncentral chi-squared in machine learning for anomaly detection within Gaussian mixture models, where the noncentrality parameter quantifies deviations from nominal behavior, enhancing explainability in applications like gravitational-wave signal identification and AI-generated content forensics. For instance, in spatiotemporal anomaly learning for video authenticity verification, the distribution models score statistics for outlier identification, with \lambda indicating anomaly intensity.

References

[1] Noncentral Chi-Squared Distribution. Wolfram MathWorld.
[2] Lecture 13: Noncentral chi-square, t-, and F-distributions. Lecture notes (PDF).
[3] Noncentral Chi — an overview. ScienceDirect Topics.
[4] Noncentral Chi-Square Distribution. MATLAB & Simulink documentation, MathWorks.
[5] The general sampling distribution of the multiple correlation coefficient.
[6] Noncentral Chi-square Distribution. Real Statistics Using Excel.
[7] Handbook on probability distributions. Rice Statistics (PDF).
[8] Theorem: sums of mutually independent noncentral chi-square random variables (PDF).
[9] NCCCDF. Dataplot reference manual (PDF), March 21, 1997.
[10] On New Formulas for the Cumulative Distribution Function of the Noncentral Chi-Square Distribution. March 3, 2017.
[11] NCCPPF. Dataplot reference manual (PDF), March 21, 1997.
[12] Non-central Chi-Squared Probabilities — Algorithms in R. DPQ package vignette (PDF).
[13] Refined normal approximations for the central and noncentral chi-square distributions (PDF).
[14]
[15] Explaining the Saddlepoint Approximation. Cornell eCommons (PDF), December 16, 1995.
[16] Computation of cumulative density of noncentral chi-square distributions (PDF).
[17] scipy.stats.ncx2. SciPy v1.16.2 Manual.
[18] New methods to compute the generalized chi-square distribution. February 26, 2025.
[19] (Approximate) Probabilities of Non-Central Chi-squared. R DPQ package documentation.
[20] Non-Central Chi-Squared. Lecture slides (PDF).
[21] The (non-central) Chi-Squared Distribution. R documentation.
[22] A Tight Bound on the Distance Between a Noncentral Chi Square and a Normal Distribution (PDF).
[23] Rayleigh, Rice and Chi-Squared Distributions (PDF).
[24] Bar-Lev, S. K., and Boukai, B. (2007). Variance stabilizing transformations for the noncentral chi-square distribution. Journal of Probability and Statistical Science, 5(2), 113–122.
[25] Understanding statistical power using noncentral probability distributions (PDF).
[26] Practical guide to calculate sample size for chi-square test. May 26, 2025.
[27] Approximations to the Non-Central Chi-Square Distribution. JSTOR.
[28] Power/Sample Size Calculations for Generalized Linear Models. JSTOR.
[29] On the Exact Two-Sided Tolerance Intervals for Univariate Normal Distribution and Linear Regression. August 9, 2025.
[30] On the Exact Two-Sided Tolerance Intervals for Univariate Normal Distribution (PDF). June 1, 2014.
[31] Confidence, Prediction, and Tolerance Regions for the Multivariate Normal Distribution. April 10, 2012.
[32] Tolerance regions for a multivariate normal (PDF).
[33] Summary of noncentral chi-squared distribution, square root, Rice distribution, and transformations.
[34] Fundamentals of Radar Signal Processing, Second Edition. McGraw-Hill (PDF).
[35] Parametric Rao Test for Multichannel Adaptive Signal Detection. IEEE Xplore, July 31, 2007.
[36] Applied noncentral chi-squared distribution in CFAR detection of hyperspectral images. October 15, 2015.
[37] Search and Discovery Statistics in HEP, Lecture 2. CERN Indico (PDF).
[38] A Closed-Form Second-Order Reliability Method Using Noncentral Chi-Squared Distributions. August 9, 2025.
[39] Aggregation of Nonparametric Estimators for Volatility Matrix Estimation (PDF).
[40] Identifying and Mitigating Machine Learning Biases for the ... arXiv, October 4, 2025.
[41] Physics-Driven Spatiotemporal Modeling for AI-Generated Video Detection (PDF).