
Elliptical distribution

In probability and statistics, an elliptical distribution is a multivariate probability distribution whose level sets (contours of constant density) form ellipsoids centered at a location parameter \boldsymbol{\mu}, generalizing the multivariate normal distribution to a broader class of symmetric distributions with potentially heavier tails. The class originates from the work of Kelker (1970), who studied spherical distributions and provided the location-scale parameter generalization that yields the elliptical forms.

Formally, a p-dimensional random vector \mathbf{X} follows an elliptical distribution, denoted \mathbf{X} \sim E_p(\boldsymbol{\mu}, \Sigma, g), if its probability density function (when it exists) is given by f(\mathbf{x}) = |\Sigma|^{-1/2} g\left( (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right), where \boldsymbol{\mu} \in \mathbb{R}^p is the location vector, \Sigma is a p \times p positive definite dispersion matrix, |\Sigma| denotes its determinant, and g: [0, \infty) \to [0, \infty) is a non-negative density generator function chosen so that the density integrates to one. This formulation implies elliptical symmetry about \boldsymbol{\mu} (the distribution is a linear image of a spherically symmetric one), with the shape and orientation of the ellipsoids determined by \Sigma.

Elliptical distributions exhibit several key properties that make them analytically tractable yet flexible. They are closed under affine transformations: if \mathbf{X} \sim E_p(\boldsymbol{\mu}, \Sigma, g), then for any invertible matrix B and vector \mathbf{b}, B\mathbf{X} + \mathbf{b} \sim E_p(B\boldsymbol{\mu} + \mathbf{b}, B\Sigma B^T, g). Marginal and conditional distributions are also elliptical, though uncorrelated components are generally not independent unless the distribution is normal. If second moments exist, the mean is \mathbb{E}[\mathbf{X}] = \boldsymbol{\mu} and the covariance matrix is proportional to \Sigma, specifically \mathrm{Cov}(\mathbf{X}) = \alpha \Sigma where \alpha depends on the generator g.

Prominent examples include the multivariate normal (with g(t) = (2\pi)^{-p/2} \exp(-t/2)), the multivariate Student's t-distribution (with heavier tails than the normal for any finite degrees of freedom \nu > 0), and the multivariate Cauchy distribution (the t-distribution with \nu = 1). These distributions find wide applications in robust statistical inference, where their tail flexibility accommodates outliers better than the normal; in financial modeling, for capturing dependencies in asset returns via elliptical copulas; and in portfolio optimization theory, where they enable generalizations beyond normality assumptions while preserving elliptical symmetry for risk assessment.

Mathematical Definition

Via Characteristic Function

A random vector \mathbf{X} in \mathbb{R}^d follows an elliptical distribution if its characteristic function takes the form \phi_{\mathbf{X}}(\mathbf{t}) = \exp(i \mathbf{t}^\top \boldsymbol{\mu}) \, \psi(\mathbf{t}^\top \boldsymbol{\Sigma} \mathbf{t}), where \mathbf{t} \in \mathbb{R}^d, \boldsymbol{\mu} \in \mathbb{R}^d is the location vector, \boldsymbol{\Sigma} is a d \times d positive semidefinite dispersion matrix, and \psi: [0, \infty) \to \mathbb{R} is the characteristic generator, satisfying \psi(0) = 1 and such that \psi(\|\mathbf{u}\|^2) is the characteristic function of a d-dimensional spherical random vector for all \mathbf{u} \in \mathbb{R}^d. (The generator is real-valued because spherical symmetry forces the characteristic function to be real.) This form generalizes the multivariate normal distribution, whose characteristic function is \exp(i \mathbf{t}^\top \boldsymbol{\mu} - \frac{1}{2} \mathbf{t}^\top \boldsymbol{\Sigma} \mathbf{t}), corresponding to \psi(u) = \exp(-u/2).

The generator \psi uniquely determines the specific distribution within the class, encompassing both light-tailed cases (e.g., the normal with \psi(u) = \exp(-u/2)) and heavy-tailed ones (e.g., the multivariate Student's t with degrees of freedom \nu > 0, whose characteristic generator has no elementary closed form and involves modified Bessel functions, even though its density generator is the simple power law (1 + u/\nu)^{-(\nu + d)/2}). Different choices of \psi yield distinct families while preserving the elliptical structure, as the class is closed under affine transformations \mathbf{Y} = \mathbf{B} \mathbf{X} + \mathbf{c} for nonsingular \mathbf{B}, resulting in \phi_{\mathbf{Y}}(\mathbf{t}) = \exp(i \mathbf{t}^\top (\mathbf{B}\boldsymbol{\mu} + \mathbf{c})) \, \psi(\mathbf{t}^\top \mathbf{B} \boldsymbol{\Sigma} \mathbf{B}^\top \mathbf{t}).

This characteristic function form reflects a decomposition into a location, a linear transformation, and a spherical component. Specifically, elliptical distributions admit the stochastic representation \mathbf{X} = \boldsymbol{\mu} + \boldsymbol{\Sigma}^{1/2} R \mathbf{U}, where R \geq 0 is a radial scalar random variable independent of \mathbf{U}, and \mathbf{U} is uniformly distributed on the unit sphere in \mathbb{R}^d; the generator is determined by the law of R through \psi(u) = \mathbb{E}[\Omega_d(R^2 u)], where \Omega_d(\|\mathbf{t}\|^2) denotes the characteristic function of \mathbf{U}. This yields the desired \phi_{\mathbf{X}} and demonstrates the structural similarity to the Gaussian case beyond the quadratic exponent.

The definition via characteristic function is particularly advantageous when the distribution lacks a density with respect to Lebesgue measure (e.g., discrete or singular elliptical distributions): the family remains well-defined probabilistically without requiring absolute continuity assumptions.
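The stochastic representation translates directly into a generic sampler: draw \mathbf{U} by normalizing a Gaussian vector, draw R from the radial law, and map through \boldsymbol{\mu} and a square root of \boldsymbol{\Sigma}. A minimal sketch, assuming NumPy; the multivariate t radial law R = \sqrt{p F} with F \sim F(p, \nu) is used purely for illustration:

```python
import numpy as np

def sample_elliptical(n, mu, Sigma, sample_R, rng):
    """Draw n samples of X = mu + A R U with A A^T = Sigma (generic elliptical sampler)."""
    p = len(mu)
    A = np.linalg.cholesky(Sigma)                      # any square root of Sigma works
    Z = rng.standard_normal((n, p))
    U = Z / np.linalg.norm(Z, axis=1, keepdims=True)   # uniform on the unit sphere
    R = sample_R(n, rng)                               # radial variable, independent of U
    return mu + (R[:, None] * U) @ A.T

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])

# For the multivariate t with nu dof, R^2 / p follows an F(p, nu) distribution.
p, nu = 2, 5
X = sample_elliptical(10_000, mu, Sigma,
                      lambda n, g: np.sqrt(p * g.f(p, nu, size=n)), rng)
print(X.mean(axis=0))                # approx mu    (needs nu > 1)
print(np.cov(X.T) * (nu - 2) / nu)   # approx Sigma (Cov = nu/(nu-2) * Sigma, nu > 2)
```

Swapping in a different radial sampler produces any other member of the class with the same location and dispersion.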

Via Density Function

The probability density function of a p-dimensional elliptical random vector \mathbf{X}, when it exists, takes the form f(\mathbf{x}) = |\boldsymbol{\Sigma}|^{-1/2} g\left( (\mathbf{x} - \boldsymbol{\mu})^\top \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right), where \boldsymbol{\mu} \in \mathbb{R}^p is the location vector, \boldsymbol{\Sigma} is a positive definite dispersion matrix, and g: [0, \infty) \to [0, \infty) is the density generator, here taken to include the normalization needed for the density to integrate to 1 over \mathbb{R}^p. This formulation shows that the density contours are ellipsoids centered at \boldsymbol{\mu}, with shape and orientation determined by \boldsymbol{\Sigma}; the argument of g is the squared Mahalanobis distance, a quadratic form measuring deviation from the center relative to the dispersion.

Separating the normalizing constant from an unnormalized generator g yields f(\mathbf{x}) = k_p |\boldsymbol{\Sigma}|^{-1/2} g\left( (\mathbf{x} - \boldsymbol{\mu})^\top \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right), where k_p = \frac{\Gamma(p/2)}{\pi^{p/2} \int_0^\infty u^{p/2 - 1} g(u) \, du} for dimension p \geq 1. The constant k_p arises from integrating the unnormalized density over \mathbb{R}^p in spherical coordinates, and the denominator ensures unit total probability.

For the density to be well-defined, the generator g must make the integral in k_p finite and positive; continuity and monotone radial decay are typical but not strictly required. These conditions permit elliptical distributions with unbounded support (e.g., when g(u) decays slowly at infinity, as in heavy-tailed cases) or bounded support (e.g., when g(u) = 0 for large u). The quadratic-form structure ties directly to the Mahalanobis distance, facilitating computations like contour plotting and transformations within elliptical families.
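The normalizing constant can be checked numerically for any candidate generator by evaluating the one-dimensional radial integral. A small sketch, assuming SciPy; for the unnormalized normal generator g(u) = e^{-u/2} it recovers the familiar (2\pi)^{-p/2}:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def norm_const(g, p):
    """k_p = Gamma(p/2) / (pi^{p/2} * integral_0^inf u^{p/2-1} g(u) du)."""
    integral, _ = quad(lambda u: u ** (p / 2 - 1) * g(u), 0, np.inf)
    return np.exp(gammaln(p / 2)) / (np.pi ** (p / 2) * integral)

p = 3
k = norm_const(lambda u: np.exp(-u / 2), p)   # unnormalized normal generator
print(k, (2 * np.pi) ** (-p / 2))             # both approx 0.0635
```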

Key Properties

Symmetry and Ellipsoidal Contours

Elliptical distributions exhibit central symmetry around their location parameter \mu, meaning that the density function satisfies f(\mu + y) = f(\mu - y) for all vectors y. This property follows from the characteristic function or density form, which depends solely on the quadratic form (x - \mu)^\top \Sigma^{-1} (x - \mu), so the distribution is invariant under reflection through \mu. As a consequence, when moments exist, all odd central moments about \mu vanish, reflecting the absence of skewness.

The iso-density contours are ellipsoids, defined by the sets \{x : (x - \mu)^\top \Sigma^{-1} (x - \mu) = c\} for constants c > 0, with the density value on each contour determined by the generator function g. These level sets form nested ellipsoids centered at \mu: the constant c sets the "size" of the contour, and g governs the radial decay or concentration. Unlike spherical distributions, the ellipsoidal shape captures directional dependencies through the dispersion matrix \Sigma.

The matrix \Sigma governs the orientation, shape, and spread of these contours: it is a positive definite scale matrix that stretches and rotates the underlying spherical structure, while \mu serves as the geometric center, coinciding with the mean, median, and mode when they are defined. The support of the distribution can be bounded or unbounded depending on the choice of generator g; for instance, the multivariate normal has unbounded support over \mathbb{R}^p, whereas the uniform distribution on an ellipsoid has bounded support confined to that ellipsoid.
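The contour geometry follows from the eigendecomposition \Sigma = V \Lambda V^\top: the ellipsoid \{x : (x-\mu)^\top \Sigma^{-1} (x-\mu) = c\} has semi-axes of length \sqrt{c \lambda_i} along the eigenvectors v_i. A short sketch, assuming NumPy, traces one contour in two dimensions and verifies that the quadratic form is constant on it:

```python
import numpy as np

mu = np.array([0.0, 0.0])
Sigma = np.array([[3.0, 1.0], [1.0, 2.0]])
c = 1.0                                       # contour level of the quadratic form

# Eigenvectors of Sigma give the orientation; semi-axis lengths are sqrt(c * eigenvalue).
eigvals, eigvecs = np.linalg.eigh(Sigma)
theta = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([np.cos(theta), np.sin(theta)])                 # unit circle
ellipse = mu[:, None] + eigvecs @ (np.sqrt(c * eigvals)[:, None] * circle)

# Every column of `ellipse` satisfies (x - mu)' Sigma^{-1} (x - mu) = c.
diff = ellipse - mu[:, None]
q = np.einsum('ji,jk,ki->i', diff, np.linalg.inv(Sigma), diff)
print(np.allclose(q, c))                                          # True
```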

Moments and Linear Transformations

Elliptical distributions exhibit well-defined moment structures under conditions on the density generator. For a p-dimensional random vector \mathbf{X} \sim \mathcal{E}_p(\boldsymbol{\mu}, \boldsymbol{\Sigma}, g), the first-order moment exists if the radial random variable has a finite expectation, yielding \mathbb{E}[\mathbf{X}] = \boldsymbol{\mu}. The second-order moment, or covariance matrix, is then proportional to the scale matrix: \mathrm{Cov}(\mathbf{X}) = \kappa \boldsymbol{\Sigma}, where \kappa > 0 is a scalar constant that depends only on the generator function g and the dimension p. For the multivariate normal distribution, \kappa = 1, so \boldsymbol{\Sigma} is the exact covariance matrix.

Higher-order moments follow from the spherical symmetry of the standardized form. Odd central moments vanish due to the central symmetry of the distribution. Even central moments exist provided the radial variable has finite moments of the corresponding order and can be computed from the representation \mathbf{X} = \boldsymbol{\mu} + R \, \boldsymbol{\Sigma}^{1/2} \mathbf{U}, where \mathbf{U} is uniformly distributed on the unit sphere and R is the radial component (so that \kappa = \mathbb{E}[R^2]/p); these moments involve traces of powers of \boldsymbol{\Sigma}.

A defining feature of elliptical distributions is their closure under affine transformations, which preserves the elliptical class. Specifically, if \mathbf{Y} = \mathbf{A} \mathbf{X} + \mathbf{b} for a q \times p matrix \mathbf{A} with full row rank and vector \mathbf{b} \in \mathbb{R}^q, then \mathbf{Y} \sim \mathcal{E}_q(\mathbf{A} \boldsymbol{\mu} + \mathbf{b}, \mathbf{A} \boldsymbol{\Sigma} \mathbf{A}^\top, g_q), with the same characteristic generator \psi; the density generator g_q generally changes with the dimension q. This invariance extends to subvectors and linear combinations: the marginal distribution of any subvector of \mathbf{X} is elliptical with the corresponding subvector of \boldsymbol{\mu} and the corresponding principal submatrix of \boldsymbol{\Sigma} as dispersion (its covariance is \kappa times that submatrix); similarly, any linear combination \mathbf{c}^\top \mathbf{X} (with \mathbf{c} \in \mathbb{R}^p) follows a univariate elliptical distribution. A numerical check of these facts appears below.
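A Monte Carlo sketch of the moment and closure properties for the multivariate t, assuming NumPy; the t is sampled through its normal/chi-square mixture (an identity specific to the t family), and both the proportionality Cov(X) = \kappa \Sigma with \kappa = \nu/(\nu - 2) and the affine image's dispersion B \Sigma B^\top are checked:

```python
import numpy as np

rng = np.random.default_rng(1)
p, nu, n = 3, 7, 200_000
mu = np.array([1.0, 0.0, -1.0])
Sigma = np.array([[1.0, 0.3, 0.0], [0.3, 1.0, 0.2], [0.0, 0.2, 1.0]])

# Multivariate t via its normal / chi-square mixture: X = mu + Z / sqrt(W / nu).
Z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
W = rng.chisquare(nu, size=n)
X = mu + Z / np.sqrt(W / nu)[:, None]

kappa = nu / (nu - 2)                      # Cov(X) = kappa * Sigma for the t family
print(np.allclose(np.cov(X.T), kappa * Sigma, atol=0.1))               # True

# The affine image Y = B X + b stays elliptical with dispersion B Sigma B'.
B = np.array([[1.0, -1.0, 0.0], [0.5, 0.5, 1.0]])
Y = X @ B.T + np.array([2.0, -3.0])
print(np.allclose(np.cov(Y.T), kappa * B @ Sigma @ B.T, atol=0.1))     # True
```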

Examples

Multivariate Normal Distribution

The multivariate normal distribution serves as the foundational example of an elliptical distribution, generalizing the univariate normal to the p-dimensional case and exhibiting all the characteristic symmetry and ellipsoidal contours of the broader family. It is parameterized by a location vector μ ∈ ℝ^p and a positive definite dispersion matrix Σ ∈ ℝ^{p×p}, with the property that any linear combination of its components follows a univariate normal distribution. The distribution plays a central role in multivariate statistics owing to its mathematical tractability and the fact that many elliptical distributions arise as generalizations or transformations of it.

In the framework of elliptical distributions, the multivariate normal is defined via the generator function g(u) = (2π)^{-p/2} \exp(-u/2) for u ≥ 0, which gives the familiar density f(\mathbf{x}) = (2\pi)^{-p/2} |\Sigma|^{-1/2} \exp\left( -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})' \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right), where |Σ| denotes the determinant of Σ. This generator produces a density that is constant on ellipsoids centered at μ and aligned with the eigenvectors of Σ, reflecting the elliptical symmetry; the normalization constant embedded in g guarantees that the integral over ℝ^p equals 1.

A key distinguishing feature of the multivariate normal within the elliptical class is that all of its moments exist and are finite. In particular, the covariance matrix is exactly Σ, corresponding to κ = 1 in the general elliptical covariance formula Cov(X) = κ Σ, where κ arises from the second moment of the radial variate. This exact matching simplifies many theoretical derivations and applications, unlike heavier-tailed elliptical cases where κ differs from 1 or moments fail to exist.

More generally, elliptical distributions can be represented as affine transformations of spherically symmetric random vectors, underscoring the centrality of the normal case: if Z is spherical, then X = μ + A Z is elliptical with dispersion matrix Σ = A A', and the multivariate normal arises precisely when Z ∼ N_p(0, I_p). Historically, the univariate normal distribution originated in Carl Friedrich Gauss's 1809 work on the method of least squares, while the multivariate extension, particularly the bivariate normal, was formalized by Karl Pearson in his 1895 contributions to the mathematical theory of evolution.
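As a sanity check, the elliptical form of the normal density can be evaluated directly and compared against a reference implementation. A brief sketch, assuming SciPy's multivariate_normal:

```python
import numpy as np
from scipy.stats import multivariate_normal

def elliptical_normal_pdf(x, mu, Sigma):
    """Normal density in elliptical form: |Sigma|^{-1/2} g(Mahalanobis^2)."""
    p = len(mu)
    d = x - mu
    q = d @ np.linalg.solve(Sigma, d)             # squared Mahalanobis distance
    g = (2 * np.pi) ** (-p / 2) * np.exp(-q / 2)  # normal density generator
    return g / np.sqrt(np.linalg.det(Sigma))

mu = np.array([0.5, -1.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
x = np.array([1.0, 0.0])
print(elliptical_normal_pdf(x, mu, Sigma))
print(multivariate_normal(mu, Sigma).pdf(x))      # same value
```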

Heavy-Tailed Distributions

Heavy-tailed elliptical distributions are characterized by density generators whose tails decay slowly, polynomially in the prototypical cases, in contrast to the exponential decay of light-tailed members such as the multivariate normal. This behavior is governed by the generator function g(u), which determines the radial component of the distribution; the heavier tails are particularly useful for modeling outliers and extreme events in data. Such distributions retain the elliptical symmetry and linear dependence structure of the general family but exhibit kurtosis greater than that of the normal, enhancing robustness in statistical inference.

The multivariate Student's t distribution is the prominent example, defined for a p-dimensional random vector \mathbf{X} with location \boldsymbol{\mu}, dispersion matrix \boldsymbol{\Sigma}, and degrees of freedom \nu > 0. Its density generator is g(u) = \frac{\Gamma\left(\frac{\nu + p}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right) (\nu \pi)^{p/2}} \left(1 + \frac{u}{\nu}\right)^{-\frac{\nu + p}{2}}, where u = (\mathbf{x} - \boldsymbol{\mu})^\top \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}), so that f(\mathbf{x}) = |\boldsymbol{\Sigma}|^{-1/2} g(u). Moments of order k exist only if k < \nu; for instance, the mean exists for \nu > 1 and equals \boldsymbol{\mu}, while the variance exists for \nu > 2 and is proportional to \boldsymbol{\Sigma}. As \nu decreases, the tails become heavier, with regular variation index \alpha = \nu, which motivates applications in robust regression where sensitivity to outliers must be reduced.

The multivariate Cauchy distribution is the special case of the Student's t with \nu = 1, an elliptical distribution with generator g(u) = c (1 + u)^{-\frac{p+1}{2}}, where c is a normalizing constant ensuring integration to unity. Unlike the normal or the t with higher \nu, the Cauchy has no defined mean or variance, reflecting its extremely heavy tails with regular variation index \alpha = 1. The absence of finite moments underscores its utility in scenarios requiring models invariant to scale, such as certain Bayesian analyses.

Other non-normal elliptical distributions include the multivariate logistic and Laplace. The multivariate logistic distribution has generator g(u) = c \exp(-u) (1 + \exp(-u))^{-2}, yielding approximately logistic-shaped marginals connected to logistic modeling of ordinal data; its tails show excess kurtosis relative to the normal but decay exponentially rather than polynomially. The Laplace distribution, also known as the double exponential, admits an elliptical form as a variance mixture of normals with an exponential mixing variable, with a generator involving exponential terms that produce tails heavier than the normal's but lighter than the Cauchy's; its multivariate version maintains elliptical contours while capturing sharp peaks and heavy tails for applications in signal processing. These examples illustrate the flexibility of elliptical generators in accommodating varying degrees of tail heaviness, generally enhancing robustness against anomalies compared to light-tailed alternatives.
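The tail ordering among these families is visible directly in the density generators. A short sketch, assuming NumPy/SciPy, evaluates the normalized t generator for several \nu, with \nu = 1 giving the Cauchy and large \nu approaching the normal:

```python
import numpy as np
from scipy.special import gammaln

def t_generator(u, nu, p):
    """Normalized density generator of the p-variate Student's t with nu dof."""
    logc = gammaln((nu + p) / 2) - gammaln(nu / 2) - (p / 2) * np.log(nu * np.pi)
    return np.exp(logc - ((nu + p) / 2) * np.log1p(u / nu))

p = 2
u = np.array([1.0, 25.0, 100.0])   # squared Mahalanobis distances
for nu in (1, 5, 50):              # nu = 1 is the multivariate Cauchy
    print(nu, t_generator(u, nu, p))
print('normal', (2 * np.pi) ** (-p / 2) * np.exp(-u / 2))  # essentially zero by u = 100
```

The printout shows the power-law generators retaining non-negligible mass at u = 100, where the normal generator has already vanished.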

Inference and Estimation

Parameter Estimation Methods

Parameter estimation for elliptical distributions typically focuses on the location vector \mu \in \mathbb{R}^p and the positive definite dispersion matrix \Sigma \in \mathbb{R}^{p \times p}, assuming the density generator g is known or specified. Classical approaches include the method of moments and maximum likelihood estimation, both relying on the symmetry and affine equivariance properties of the family.

The method of moments provides straightforward estimators when the first and second moments exist. The location estimator is the sample mean \hat{\mu} = \bar{X} = \frac{1}{n} \sum_{i=1}^n X_i, which is unbiased and consistent under finite first moments. For the dispersion, the sample covariance matrix S = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X})(X_i - \bar{X})^\top satisfies \mathbb{E}(S) = c \Sigma, where c = -2 \psi'(0) and \psi is the characteristic generator in \phi(\mathbf{t}) = \exp(i \mathbf{t}^\top \boldsymbol{\mu}) \psi(\mathbf{t}^\top \Sigma \mathbf{t}); thus \hat{\Sigma} = S / c if c is known, or an estimate of c is used otherwise.

Maximum likelihood estimation maximizes the log-likelihood \ell(\mu, \Sigma) = \sum_{i=1}^n \log g(Q_i) - \frac{n}{2} \log |\Sigma|, where Q_i = (X_i - \mu)^\top \Sigma^{-1} (X_i - \mu), and the score equations are solved iteratively. The location estimator is the weighted mean \hat{\mu} = \frac{\sum_{i=1}^n w(Q_i) X_i}{\sum_{i=1}^n w(Q_i)} with weights w(q) = -2 g'(q)/g(q), which reduces to the sample mean \bar{X} for the normal distribution (constant weights). The dispersion estimator satisfies the fixed-point equation \hat{\Sigma} = \frac{1}{n} \sum_{i=1}^n w(Q_i) (X_i - \hat{\mu})(X_i - \hat{\mu})^\top, but the dependence on g makes closed-form solutions unavailable except in the normal case (g(u) = (2\pi)^{-p/2} \exp(-u/2), where \hat{\Sigma} = S). For non-normal generators, numerical optimization is required, often computationally intensive because of the nonlinear dependence on g.

When g corresponds to a specific elliptical family, such as the multivariate t-distribution with known degrees of freedom \nu, iterative algorithms like expectation-conditional maximization (ECM) facilitate MLE computation. The ECM algorithm alternates between updating location and scale via conditional expectations derived from the representation of the t as a scale mixture of normals, converging to the MLE under standard conditions; a minimal EM sketch appears at the end of this subsection.

Under regularity conditions, including finite second moments and identifiability of \mu and \Sigma, both method-of-moments and maximum-likelihood estimators are consistent as n \to \infty. Moreover, \sqrt{n} (\hat{\mu} - \mu, \text{vech}(\hat{\Sigma} - \Sigma))^\top converges in distribution to a multivariate normal with mean zero and a covariance that depends on the fourth moments of the distribution, adjusted for elliptical non-normality. For data prone to outliers, robust alternatives to these classical methods are available; they are addressed separately.
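For the multivariate t with known \nu, the E-step weights take the explicit form \tau_i = (\nu + p)/(\nu + Q_i), which downweight distant observations; the M-step is then a weighted mean and weighted scatter. A minimal EM sketch, assuming NumPy and known \nu (a simplification of the ECM scheme described above):

```python
import numpy as np

def t_mle_em(X, nu, n_iter=500, tol=1e-8):
    """EM for the location/dispersion MLE of a multivariate t with known dof nu.
    E-step weights tau_i = (nu + p) / (nu + Q_i) downweight distant points."""
    n, p = X.shape
    mu, Sigma = X.mean(axis=0), np.cov(X.T)
    for _ in range(n_iter):
        D = X - mu
        Q = np.einsum('ij,jk,ik->i', D, np.linalg.inv(Sigma), D)
        tau = (nu + p) / (nu + Q)
        mu_new = (tau[:, None] * X).sum(axis=0) / tau.sum()
        D = X - mu_new
        Sigma = (tau[:, None] * D).T @ D / n
        if np.linalg.norm(mu_new - mu) < tol:
            return mu_new, Sigma
        mu = mu_new
    return mu, Sigma

rng = np.random.default_rng(2)
n, p, nu = 5_000, 2, 4
Z = rng.multivariate_normal(np.zeros(p), [[1.0, 0.5], [0.5, 1.0]], size=n)
X = np.array([0.0, 3.0]) + Z / np.sqrt(rng.chisquare(nu, n) / nu)[:, None]
print(t_mle_em(X, nu))   # location approx (0, 3), dispersion approx [[1, .5], [.5, 1]]
```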

Robust Estimators

Robust estimators for elliptical distributions are designed to provide reliable estimates of location and shape parameters in the presence of outliers or model misspecification, such as deviations from the assumed elliptical symmetry. Unlike classical maximum likelihood estimators, which can be highly sensitive to contamination, these methods achieve robustness through influence functions that downweight anomalous observations while maintaining consistency under elliptical assumptions.

A prominent example is Tyler's M-estimator, which targets the shape matrix of an elliptical distribution in a distribution-free manner. Given a location estimate \hat{\mu} (often the spatial median), it is defined as the fixed point of \hat{\Sigma} = \frac{p}{n} \sum_{i=1}^n \frac{(x_i - \hat{\mu})(x_i - \hat{\mu})^\top}{(x_i - \hat{\mu})^\top \hat{\Sigma}^{-1} (x_i - \hat{\mu})}, normalized for identifiability, e.g., so that \det(\hat{\Sigma}) = 1. This fixed-point equation is typically solved via iterative reweighting, starting from an initial scatter estimate and updating until convergence. Tyler's estimator is affine equivariant (up to the scale normalization) and is the "most robust" estimator of the shape matrix of an elliptical distribution in the minimax sense of minimizing the maximum asymptotic variance. Its breakdown point, however, is low (on the order of 1/p), limiting resistance to larger fractions of outliers.

S-estimators extend robustness to both location and scatter by minimizing \det(\hat{\Sigma}) subject to the constraint \frac{1}{n} \sum_{i=1}^n \rho\left( \sqrt{(x_i - \hat{\mu})^\top \hat{\Sigma}^{-1} (x_i - \hat{\mu})} \right) = b, for a tuning constant b and a bounded \rho function (e.g., Tukey's biweight). These estimators are consistent for elliptical distributions, are affine equivariant, and can achieve a high breakdown point of up to 50%, resisting contamination of nearly half the data.

MM-estimators combine the high breakdown point of S-estimators with high efficiency: a preliminary S-estimate supplies robust starting values and a robust scale, which are then refined in an M-step with a high-efficiency \rho function while the S-scale is held fixed. This two-stage approach yields estimators that are consistent under elliptical models while attaining high efficiency at the central model (e.g., roughly 85-95% relative to maximum likelihood under multivariate normality). Like S-estimators, MM-estimators are affine equivariant and inherit the 50% breakdown point.

Recent advances have established optimal finite-sample bounds for Tyler's M-estimator under elliptical distributions in high dimensions. Specifically, a 2025 analysis demonstrates that the estimator achieves error rates matching those of Gaussian covariance estimation, with a sample-complexity threshold of order p / \log p for dimension p, using a novel \infty-expansion technique to bound deviations. This confirms strong scaling properties when data follow elliptical assumptions, closing prior gaps in non-asymptotic guarantees.
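Tyler's fixed-point iteration is straightforward to implement. A minimal sketch, assuming NumPy, with the det-one normalization and the location taken as known for simplicity:

```python
import numpy as np

def tyler_shape(X, mu, n_iter=200, tol=1e-10):
    """Fixed-point iteration for Tyler's M-estimator of shape (det normalized to 1).
    Assumes no observation coincides exactly with the location mu."""
    n, p = X.shape
    D = X - mu
    Sigma = np.eye(p)
    for _ in range(n_iter):
        Q = np.einsum('ij,jk,ik->i', D, np.linalg.inv(Sigma), D)  # d_i' Sigma^{-1} d_i
        S = (p / n) * (D / Q[:, None]).T @ D
        S /= np.linalg.det(S) ** (1 / p)          # impose det = 1 for identifiability
        if np.linalg.norm(S - Sigma) < tol:
            return S
        Sigma = S
    return Sigma

rng = np.random.default_rng(3)
Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], Sigma, size=4_000)
S = tyler_shape(X, mu=np.zeros(2))
print(S * np.linalg.det(Sigma) ** 0.5 / Sigma)    # approx all-ones: same shape as Sigma
```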

Applications

Classical Multivariate Analysis

Elliptical distributions extend the foundational results of classical multivariate normal theory to a broader class of symmetric distributions, preserving many inferential procedures under mild conditions such as finite second moments. In particular, the symmetry and affine invariance of elliptical distributions ensure that linear transformations and quadratic forms behave analogously to the normal case, allowing standard techniques like hypothesis testing and dimension reduction to apply with adjusted distributions or asymptotics. This generalization is particularly valuable in low- to moderate-dimensional settings where normality may not hold but elliptical symmetry provides a tractable alternative.

A key extension concerns Hotelling's T^2 statistic and multivariate analysis of variance (MANOVA). Under elliptical distributions with finite variance, versions of Cochran's theorem continue to hold: sums of squares in quadratic forms decompose into independent components whose distributions are determined by the generator function, mirroring the chi-squared decomposition under normality. Specifically, for elliptically contoured random vectors, the joint distribution of the quadratic forms \mathbf{x}^T A \mathbf{x} and \mathbf{x}^T B \mathbf{x} takes a generalized form when A and B are idempotent and orthogonal, enabling exact or asymptotic tests for multivariate means. Hotelling's T^2 therefore remains a valid test statistic for the null hypothesis of equal means across groups, with asymptotic normality or F-approximations depending on the kurtosis parameter of the elliptical family.

In principal component analysis (PCA) and factor analysis, the eigendecomposition of the dispersion matrix \Sigma defines the principal axes, which are preserved under the affine transformations inherent to elliptical distributions. For an elliptical random vector \mathbf{X} \sim EC_p(\boldsymbol{\mu}, \Sigma, \psi), any nonsingular linear transformation \mathbf{Y} = A \mathbf{X} + \mathbf{b} yields another elliptical vector with transformed parameters, maintaining the eigenvectors of \Sigma as the directions of maximum variance. Sample PCA, based on the eigendecomposition of the sample covariance matrix, therefore identifies the same structural components as in the normal case, provided the sample covariance is well-defined under finite variance; factor analysis models benefit similarly, with adjustments for non-normal elliptical generators to account for higher moments in likelihood-based estimation.

Quadratic forms such as the squared Mahalanobis distance D^2 = (\mathbf{X} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{X} - \boldsymbol{\mu}) have distributions that generalize the chi-squared law of the normal case. For elliptical distributions, the law of D^2 does not depend on \boldsymbol{\mu} or \Sigma, scaling only with the dimension p and the generator; in the multivariate Student-t distribution with \nu degrees of freedom, for example, D^2 / p \sim F(p, \nu). These properties underpin hypothesis tests for means and covariances, where elliptical assumptions validate F- or t-type approximations for test statistics, enhancing robustness in classical inference without requiring full normality.
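The D^2 / p \sim F(p, \nu) relation under the multivariate t can be verified by simulation. A sketch, assuming NumPy/SciPy, using a Kolmogorov-Smirnov test against the F(p, \nu) reference:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
p, nu, n = 3, 6, 100_000
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[1.0, 0.2, 0.0], [0.2, 1.0, 0.3], [0.0, 0.3, 1.0]])

# Multivariate t sample and its squared Mahalanobis distances.
Z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
X = mu + Z / np.sqrt(rng.chisquare(nu, n) / nu)[:, None]
D = X - mu
D2 = np.einsum('ij,jk,ik->i', D, np.linalg.inv(Sigma), D)

# Under the t model, D^2 / p is exactly F(p, nu) distributed, so KS should not reject.
print(stats.kstest(D2 / p, stats.f(p, nu).cdf).pvalue)
```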

High-Dimensional and Modern Statistics

In high-dimensional settings where the dimension p exceeds the sample size n, validating the elliptical distribution assumption poses significant challenges due to the curse of dimensionality and the nonparametric nature of the generator function. Recent advances introduce nonparametric tests for spherical symmetry, the foundational special case of elliptical distributions, using data augmentation to estimate a measure of deviation from symmetry. The approach centers the data via pairwise differences, avoiding the need for a known location parameter, and calibrates p-values with a resampling algorithm, ensuring consistent power against general alternatives even when p grows with n. Such methods support goodness-of-fit testing for elliptical models by first applying an affine transformation to sphericalize the data, providing a robust validation tool for modern high-dimensional datasets.

Tensor elliptical graphical models extend traditional graphical models to multi-way data structures, accommodating high-dimensional tensors while relaxing the Gaussian assumption through elliptical distributions, which better handle the heavy-tailed observations common in applications like neuroimaging and genomics. A key contribution involves sparse estimation of the tensor precision matrix via a spatial-sign-based covariance estimator, which approximates the true covariance under elliptical symmetry and achieves estimation rates comparable to Gaussian cases without requiring sparsity in the generator (a simplified sketch of the spatial-sign idea appears at the end of this subsection). This framework enables robust inference on conditional independencies in tensor-valued data, with theoretical guarantees on consistency and selection accuracy; simulations show superior performance under heavy tails compared to normality-based methods. The robust covariance inference builds on spatial-sign transformations, previously studied in lower dimensions, to mitigate outlier sensitivity in high-dimensional tensor settings.

For comparing covariance structures across samples in high dimensions, two-sample tests tailored to generalized elliptical distributions have emerged, addressing scenarios where p \gg n without imposing sparsity or strict dimension-growth constraints. These tests use U-statistics to estimate the squared Frobenius norm of the difference between covariance matrices, supported by a central limit theorem that holds under elliptical models for both null and alternative hypotheses, allowing precise control of the type I error and consistent power. Applicable to heavy-tailed elliptical families, the method outperforms existing approaches under minimal moment assumptions, with empirical validation on synthetic data illustrating its effectiveness in detecting covariance differences in regimes like gene expression analysis.

Estimating extreme quantile regions under elliptical distributions in high dimensions requires affine-equivariant methods that preserve the symmetry and handle tail behavior robustly, particularly when observations are contaminated by measurement error. A moment-based estimator of the extreme value index and quantiles has been developed that renders measurement errors asymptotically negligible under both light- and heavy-tailed elliptical generators, yielding consistent affine-invariant estimates of quantile regions at small tail probabilities. The approach applies extreme value theory in multivariate elliptical settings, providing practical tools for risk assessment in high-dimensional finance and environmental data, where tails drive uncertainty, and shows improved accuracy over non-robust alternatives on contaminated samples.
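For intuition on the spatial-sign device underlying these robust estimators, the following sketch (an illustrative simplification, assuming NumPy; not the tensor estimator from the cited work) normalizes centered observations onto the unit sphere before averaging outer products, recovering the eigenvector structure of \Sigma even for a Cauchy sample with no second moments:

```python
import numpy as np

def spatial_sign_cov(X, mu):
    """Spatial-sign covariance: average outer product of unit-normalized
    centered observations; finite even when second moments are not."""
    S = X - mu
    S = S / np.linalg.norm(S, axis=1, keepdims=True)   # project onto the unit sphere
    return S.T @ S / len(X)

rng = np.random.default_rng(5)
Sigma = np.array([[3.0, 1.0], [1.0, 1.0]])
# Heavy-tailed sample: multivariate Cauchy (t with nu = 1), where np.cov is useless.
Z = rng.multivariate_normal([0.0, 0.0], Sigma, size=20_000)
X = Z / np.sqrt(rng.chisquare(1, 20_000))[:, None]

V = spatial_sign_cov(X, mu=np.zeros(2))
# Eigenvectors of V match those of Sigma (up to sign) despite the infinite moments.
print(np.linalg.eigh(V)[1])
print(np.linalg.eigh(Sigma)[1])
```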

Finance and Economics

Elliptical distributions play a central role in portfolio theory by allowing asset returns to be modeled beyond the multivariate normal assumption, enabling mean-variance optimization while preserving key properties like affine invariance. Under elliptical assumptions the Capital Asset Pricing Model (CAPM) continues to hold, with the beta coefficient remaining the sole measure of systematic risk, so the linear relationship between expected returns and market risk generalizes to this broader class of distributions. The mutual fund separation theorems, originally derived by Tobin for normal distributions, extend to elliptical distributions by virtue of their symmetry and affine invariance, allowing investors to construct optimal portfolios as combinations of a small number of mutual funds regardless of risk preferences. This generalization broadens Markowitz's mean-variance framework to accommodate non-normal elliptical returns, facilitating two-fund separation in which one fund is risk-free and the other is the tangency portfolio.

In risk management, elliptical distributions simplify the computation of risk measures such as Value-at-Risk (VaR) and Expected Shortfall (ES) for linear portfolios, as these depend only on the portfolio's mean, its dispersion, and the underlying generator through elliptical quantiles. Closed-form expressions for VaR and ES exist for elliptically distributed risk factors, making them practical for regulatory compliance and stress testing; a sketch of these formulas appears below. Empirically, t-elliptical distributions are widely used to model asset returns exhibiting fat tails, capturing extreme events better than the normal case while retaining the elliptical structure needed for optimization; studies of stock returns often fit multivariate Student-t models to account for the leptokurtosis observed in financial data.

In economics, elliptical distributions characterize production frontiers through their constant-density contours, representing efficient input-output combinations under uncertainty, and inform inequality measurement via elliptical Lorenz curves that fit grouped income data with flexible tail behavior. These curves provide a parametric framework for estimating Gini coefficients and other inequality indices from empirical distributions.
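Because linear portfolios of elliptical returns are univariate elliptical, VaR and ES reduce to one-dimensional quantile and tail-expectation formulas. A sketch, assuming SciPy, covering the Gaussian and Student-t cases; note that for the t case \Sigma is the dispersion matrix rather than the covariance, and the weights and parameters below are purely illustrative:

```python
import numpy as np
from scipy import stats

def var_es_elliptical(w, mu, Sigma, alpha, nu=None):
    """VaR and ES of the loss L = -w'X at level alpha for Gaussian (nu=None)
    or Student-t (dispersion Sigma, dof nu > 1) elliptical returns."""
    m, s = w @ mu, np.sqrt(w @ Sigma @ w)     # linear portfolios stay elliptical
    if nu is None:                            # Gaussian quantile and tail expectation
        z = stats.norm.ppf(alpha)
        return -m + s * z, -m + s * stats.norm.pdf(z) / (1 - alpha)
    t = stats.t.ppf(alpha, nu)                # standardized t quantile
    es = stats.t.pdf(t, nu) * (nu + t ** 2) / ((nu - 1) * (1 - alpha))
    return -m + s * t, -m + s * es

w = np.array([0.6, 0.4])                      # illustrative portfolio weights
mu = np.array([0.05, 0.03])                   # illustrative return parameters
Sigma = np.array([[0.04, 0.01], [0.01, 0.02]])
print(var_es_elliptical(w, mu, Sigma, 0.99))         # normal tails
print(var_es_elliptical(w, mu, Sigma, 0.99, nu=4))   # fatter t tails: larger ES
```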
