
Cauchy distribution

The Cauchy distribution, also known as the Cauchy–Lorentz or Lorentz distribution, is a continuous probability distribution defined on the real line, characterized by its probability density function f(x) = \frac{1}{\pi \gamma \left[1 + \left( \frac{x - \mu}{\gamma} \right)^2 \right]}, where \mu is the location parameter (the median and mode) and \gamma > 0 is the scale parameter. This distribution arises as the ratio of two independent standard normal random variables, and it is a special case of the Student's t-distribution with one degree of freedom. Named after the French mathematician Augustin-Louis Cauchy, the distribution was first studied in the context of astronomy and later recognized for its role in describing resonance phenomena in physics, such as the shape of spectral lines in spectroscopy. Unlike many common distributions such as the normal, the Cauchy distribution has heavy tails that decay polynomially rather than exponentially, leading to the striking property that its mean, variance, and all higher moments are undefined due to the divergence of the relevant integrals. Its cumulative distribution function is F(x) = \frac{1}{2} + \frac{1}{\pi} \arctan\left( \frac{x - \mu}{\gamma} \right), and the characteristic function is \phi(t) = e^{i \mu t - \gamma |t|}, which underscores its stability under convolution: the sum of independent Cauchy random variables is again Cauchy-distributed with updated parameters. The distribution's infinite divisibility and lack of finite moments make it a canonical example of a "pathological" yet mathematically elegant object in probability theory, often used to illustrate the limitations of classical statistical inference. In applications, it models phenomena with extreme outliers, such as the horizontal impact points of particles in physics or the distribution of errors in certain astronomical observations, and its density approaches a Dirac delta function centered at \mu in the limit as \gamma \to 0. Its symmetric bell-shaped density (resembling a Gaussian but with fatter tails) highlights the importance of robust statistics when dealing with heavy-tailed data.

Definitions

Probability density function

The probability density function of the Cauchy distribution, also known as the Lorentzian function or the Breit–Wigner distribution in certain contexts, is defined for a random variable X as f(x; x_0, \gamma) = \frac{1}{\pi \gamma \left[1 + \left( \frac{x - x_0}{\gamma} \right)^2 \right]}, where x \in (-\infty, \infty), x_0 \in \mathbb{R} is the location parameter, and \gamma > 0 is the scale parameter. This form ensures that the PDF is non-negative and integrates to 1 over the real line, providing a valid continuous probability distribution. The location parameter x_0 specifies the peak of the density and coincides with both its median and mode, reflecting the distribution's symmetry. The scale parameter \gamma governs the spread or width of the density: larger values of \gamma result in a broader, flatter curve, while smaller values produce a narrower, more peaked shape. A key derivation of the Cauchy distribution emerges from the ratio of two independent standard normal random variables; if X_1 \sim N(0,1) and X_2 \sim N(0,1) are independent, then Y = X_1 / X_2 follows the standard Cauchy distribution (with x_0 = 0 and \gamma = 1). This ratio property highlights its natural occurrence in applications involving angular projections or resonance phenomena. Graphically, the PDF exhibits a symmetric, bell-shaped profile centered at x_0, resembling a normal distribution near the peak but with markedly heavier tails that decay proportionally to 1/|x|^2 for large |x|. These heavy tails assign greater probability to extreme values compared to the Gaussian, such that while the total area under the curve is finite (integrating to 1), the tail integrals—when weighted by powers of |x|—diverge, which is why the distribution's higher-order moments fail to exist. The standard Cauchy density simplifies to f(x) = \frac{1}{\pi (1 + x^2)} when x_0 = 0 and \gamma = 1, serving as a reference for scaling general cases.
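As a numerical sanity check of the density and its normalization, the plain-Python sketch below (the helper names `cauchy_pdf` and `cauchy_cdf` are illustrative, not from any particular library) integrates the PDF by Simpson's rule and compares the result to the analytic probability mass over the same interval; note that even on [-50, 50] about 1.3% of the mass still sits in the heavy tails.

```python
import math

def cauchy_pdf(x, x0=0.0, gamma=1.0):
    """Cauchy density f(x; x0, gamma)."""
    z = (x - x0) / gamma
    return 1.0 / (math.pi * gamma * (1.0 + z * z))

def cauchy_cdf(x, x0=0.0, gamma=1.0):
    """Cauchy CDF, used here as the analytic reference."""
    return 0.5 + math.atan((x - x0) / gamma) / math.pi

# Simpson's rule over [-50, 50]: the numeric integral of the PDF
# should match the analytic mass F(50) - F(-50) (about 0.987;
# the remaining probability lies in the heavy tails).
a, b, n = -50.0, 50.0, 20000  # n must be even for Simpson's rule
h = (b - a) / n
acc = cauchy_pdf(a) + cauchy_pdf(b)
for i in range(1, n):
    acc += (4 if i % 2 else 2) * cauchy_pdf(a + i * h)
numeric = acc * h / 3.0
analytic = cauchy_cdf(b) - cauchy_cdf(a)
print(numeric, analytic)
```

The same helpers also make the reflection symmetry f(x_0 + δ) = f(x_0 - δ) easy to spot-check.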

Cumulative distribution function

The cumulative distribution function (CDF) of the Cauchy distribution with location parameter x_0 and scale parameter \gamma > 0 is given by F(x; x_0, \gamma) = \frac{1}{\pi} \arctan\left(\frac{x - x_0}{\gamma}\right) + \frac{1}{2}, \quad x \in \mathbb{R}. This formula arises from integrating the probability density function (PDF) over the range from -\infty to x. For the standard Cauchy distribution (where x_0 = 0 and \gamma = 1), the integral is \int_{-\infty}^x \frac{1}{\pi(1 + t^2)} \, dt = \frac{1}{\pi} [\arctan(t)]_{-\infty}^x = \frac{1}{\pi} \left( \arctan(x) - \left(-\frac{\pi}{2}\right) \right) = \frac{1}{\pi} \arctan(x) + \frac{1}{2}. The general case follows by a location-scale transformation of the standard form. The CDF is strictly increasing from 0 to 1 as x ranges from -\infty to \infty, since the PDF is positive everywhere, ensuring a one-to-one correspondence between probabilities and outcomes. At the location parameter, F(x_0; x_0, \gamma) = \frac{1}{\pi} \arctan(0) + \frac{1}{2} = \frac{1}{2}, so x_0 is the median of the distribution. As x \to -\infty, F(x) \to 0, and as x \to \infty, F(x) \to 1, with the approach to these limits being gradual due to the heavy tails of the density, which prevent rapid convergence near the extremes. The inverse of the CDF, known as the quantile function, provides the value x_p such that F(x_p) = p for p \in (0,1). For the Cauchy distribution, it is x_p = x_0 + \gamma \tan\left(\pi \left(p - \frac{1}{2}\right)\right). This explicit form facilitates applications such as inverse transform sampling for generating random variates from the distribution in simulations. For the standard case, the first and third quartiles are at -1 and 1, respectively, highlighting the interquartile range of 2\gamma in the general parameterization.
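The CDF/quantile pair above can be sketched directly in stdlib Python (helper names are illustrative): the quartiles of the standard Cauchy land at ±1, the quantile function inverts the CDF, and pushing uniform draws through it performs inverse transform sampling.

```python
import math, random

def cauchy_cdf(x, x0=0.0, gamma=1.0):
    return 0.5 + math.atan((x - x0) / gamma) / math.pi

def cauchy_quantile(p, x0=0.0, gamma=1.0):
    """Inverse CDF: x_p = x0 + gamma * tan(pi * (p - 1/2))."""
    return x0 + gamma * math.tan(math.pi * (p - 0.5))

# quartiles of the standard Cauchy sit at -1 and +1 (IQR = 2 * gamma)
q1, q3 = cauchy_quantile(0.25), cauchy_quantile(0.75)
print(q1, q3)

# round trip: the quantile function inverts the CDF
roundtrip_err = max(abs(cauchy_cdf(cauchy_quantile(p, 2.0, 3.0), 2.0, 3.0) - p)
                    for p in (0.05, 0.3, 0.5, 0.9))

# inverse transform sampling: push uniforms through the quantile function
random.seed(0)
draws = sorted(cauchy_quantile(random.random()) for _ in range(100_000))
sample_median = draws[len(draws) // 2]
print(sample_median)  # close to the location parameter 0
```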

Alternative parameterizations

The Cauchy distribution admits several alternative parameterizations that re-express its location-scale family in forms suited to geometric interpretations, Bayesian applications, or connections to broader classes of distributions. A notable alternative is McCullagh's parameterization, where the traditional location μ ∈ ℝ and scale γ > 0 are combined into a single complex parameter θ = μ + iγ ∈ ℂ. The density in this form is given by f(x | \theta) = \frac{\Im(\theta)}{\pi |x - \theta|^2}, where \Im denotes the imaginary part; this form behaves naturally under Möbius (linear fractional) transformations of the family, preserving the Cauchy shape under the corresponding group operations and aiding inference via group-equivariant methods. The conversion from the standard (μ, γ) parameters to θ is θ = μ + iγ, while the inverse yields μ = Re(θ) and γ = Im(θ), facilitating analysis of the distribution's equivariance under linear fractional transformations. The angular or circular parameterization arises from the distribution's geometric origin on the unit circle. Specifically, a standard Cauchy variable X with parameters μ = 0 and γ = 1 can be represented as X = tan(Θ), where Θ follows a uniform distribution on (-π/2, π/2); for general μ and γ, this extends to X = μ + γ tan(Θ). This form underscores the geometric construction motivating the parameterization, as it projects angular motion onto the real line. In Bayesian contexts, particularly for scale parameters, the half-Cauchy distribution serves as a one-sided variant, equivalent to the absolute value of a centered Cauchy random variable. Its density is f(x | μ, σ) = \frac{2}{\pi \sigma \left[1 + \left( \frac{x - \mu}{\sigma} \right)^2 \right]} for x ≥ μ (with μ ≥ 0 when non-negative support is required), which corresponds to twice the Cauchy density restricted to the domain above the location. This parameterization relates to the Lévy distribution through parameter shifts in stable laws but maintains the core Cauchy structure for non-negative support. The scale σ in this form equates to the standard γ, ensuring direct comparability.
An additional variant uses parameters (μ, τ) where τ = γ / 2, i.e., a quarter of the interquartile range (since the IQR equals 2γ), yielding the density f(x | μ, τ) = \frac{1}{2\pi \tau \left[1 + \left( \frac{x - μ}{2τ} \right)^2 \right]}; the conversion back is γ = 2τ. Such reparameterizations align the distribution with t-distributions (the Cauchy is Student's t with 1 degree of freedom) and α-stable laws (with α = 1), and they enhance comparisons across heavy-tailed families by standardizing scale interpretations.
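Since all of these parameterizations describe the same density, they are easy to cross-check numerically. The sketch below (helper names are mine, stdlib only) evaluates the standard (μ, γ) form, McCullagh's complex form with θ = μ + iγ, and the (μ, τ) variant with γ = 2τ at several points and confirms they agree.

```python
import math

def cauchy_pdf(x, mu, gamma):
    """Standard (mu, gamma) form of the density."""
    z = (x - mu) / gamma
    return 1.0 / (math.pi * gamma * (1.0 + z * z))

def cauchy_pdf_mccullagh(x, theta):
    """McCullagh form: f(x | theta) = Im(theta) / (pi |x - theta|^2)."""
    return theta.imag / (math.pi * abs(x - theta) ** 2)

def cauchy_pdf_tau(x, mu, tau):
    """(mu, tau) variant with tau = gamma / 2, i.e. gamma = 2 * tau."""
    z = (x - mu) / (2.0 * tau)
    return 1.0 / (2.0 * math.pi * tau * (1.0 + z * z))

mu, gamma = 0.5, 2.0
theta = complex(mu, gamma)  # theta = mu + i*gamma
max_err = max(
    max(abs(cauchy_pdf(x, mu, gamma) - cauchy_pdf_mccullagh(x, theta)),
        abs(cauchy_pdf(x, mu, gamma) - cauchy_pdf_tau(x, mu, gamma / 2.0)))
    for x in (-3.0, 0.0, 0.5, 4.0))
print(max_err)  # all three parameterizations give the same density
```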

Core Properties

Symmetry and stability

The Cauchy distribution possesses reflection symmetry around its location parameter x_0, meaning its probability density function satisfies f(x_0 + \delta) = f(x_0 - \delta) for all \delta \in \mathbb{R}, rendering it an even function about x_0. This symmetry underscores the distribution's balanced shape, with the location parameter serving as the median and mode. As a result, the distribution has infinite support over the entire real line, (-\infty, \infty), allowing extreme values to occur with non-negligible probability on both sides of the center. A deeper geometric insight into the Cauchy distribution's symmetries emerges from its rotational invariance, derived from projecting a uniform angular distribution onto a line. Consider a unit circle where the angle U from the positive x-axis is uniformly distributed over (-\pi/2, \pi/2); the y-coordinate of the projection onto the line x = 1 is given by Y = \tan(U), which follows a standard Cauchy distribution with location 0 and scale 1. This construction highlights the distribution's invariance under rotations, as shifting the angle by a fixed amount modulo \pi preserves the uniform distribution of directions, thereby maintaining the Cauchy form. Such rotational symmetry connects the one-dimensional Cauchy to circular geometries, emphasizing its role in isotropic processes. The Cauchy distribution is classified as a stable distribution with index \alpha = 1, a property that captures its closure under convolution: the sum of independent Cauchy random variables, after appropriate rescaling but without centering (due to the lack of a finite mean), yields another Cauchy distribution. This aligns with Lévy's characterization of stable laws, which delineates distributions invariant under summation and rescaling yet distinguished by their heavy-tailed behavior and absence of finite variance. Specifically, for \alpha = 1, the tails decay proportionally to 1/x^2, implying infinite variance and reflecting the symmetry's extension to unbounded extremes without decay to a degenerate form.
These heavy tails are intrinsic to the stability property, ensuring that outliers propagate through additions without dilution, a hallmark that distinguishes the Cauchy from lighter-tailed distributions such as the normal.
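The angular-projection construction above can be simulated directly: draw a uniform angle on (-π/2, π/2), take its tangent, and check that the resulting sample has the standard Cauchy quartiles at -1, 0, +1 (a quick stdlib sketch; tolerances are loose statistical bounds).

```python
import math, random

# project a uniform angle on (-pi/2, pi/2) onto the line x = 1
random.seed(1)
n = 200_000
xs = sorted(math.tan(random.uniform(-math.pi / 2, math.pi / 2))
            for _ in range(n))
q1, med, q3 = xs[n // 4], xs[n // 2], xs[3 * n // 4]
print(q1, med, q3)  # near -1, 0, +1: the standard Cauchy quartiles
```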

Sums of random variables

The Cauchy distribution exhibits a remarkable closure property under addition: the sum of independent Cauchy-distributed random variables is itself Cauchy-distributed. Specifically, consider two independent standard Cauchy random variables X_1 and X_2, each with location 0 and scale 1. Their sum X_1 + X_2 follows a Cauchy distribution with location 0 and scale 2. In general, for independent random variables X_i \sim \text{Cauchy}(x_{0i}, \gamma_i) where i = 1, \dots, n, the sum S = \sum_{i=1}^n X_i is distributed as \text{Cauchy}\left( \sum_{i=1}^n x_{0i}, \sum_{i=1}^n \gamma_i \right). This result can be established using characteristic functions. The characteristic function of a \text{Cauchy}(x_0, \gamma) random variable is \phi(t) = \exp(i t x_0 - \gamma |t|). For independent summands, the characteristic function of the sum is the product of the individual characteristic functions, yielding \phi_S(t) = \exp\left(i t \sum x_{0i} - |t| \sum \gamma_i \right), which matches the form for a Cauchy distribution with the aggregated parameters. A key implication of this additivity is the failure of the central limit theorem for Cauchy variables: unlike distributions with finite variance, normalized sums of independent Cauchy random variables do not converge in distribution to a normal distribution, but instead retain the Cauchy form indefinitely. For illustration, the arithmetic mean \bar{X} = \frac{1}{n} \sum_{i=1}^n X_i of n i.i.d. \text{Cauchy}(\mu, \gamma) variables follows the same \text{Cauchy}(\mu, \gamma) distribution as each X_i, underscoring the lack of convergence to a degenerate distribution and the absence of a law of large numbers.
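The non-averaging behavior is easy to see in simulation: the sample mean of n i.i.d. standard Cauchy draws is again standard Cauchy, so its interquartile range stays near 2 no matter how large n is. A stdlib sketch (helper names are mine; bounds are loose statistical tolerances):

```python
import math, random

random.seed(2)

def std_cauchy():
    """Standard Cauchy draw via the tangent transform."""
    return math.tan(math.pi * (random.random() - 0.5))

def iqr_of_means(n_terms, n_reps=10_000):
    """Interquartile range of the sample mean across many replications."""
    means = sorted(sum(std_cauchy() for _ in range(n_terms)) / n_terms
                   for _ in range(n_reps))
    return means[3 * n_reps // 4] - means[n_reps // 4]

# the spread of the sample mean does NOT shrink as n grows: the mean of
# n i.i.d. Cauchy(0, 1) variables is again Cauchy(0, 1), with IQR = 2
i1, i10, i50 = iqr_of_means(1), iqr_of_means(10), iqr_of_means(50)
print(i1, i10, i50)
```

A finite-variance distribution would instead show the IQR of the mean shrinking like 1/√n.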

Absence of moments

The mean of a random variable X following the Cauchy distribution is undefined, as the integral E[X] = \int_{-\infty}^{\infty} x f(x) \, dx fails to converge, where f(x) denotes the probability density function. This non-convergence stems from the heavy tails of the distribution, specifically because the absolute moment \int_{-\infty}^{\infty} |x| f(x) \, dx = \infty. The divergence can be seen by evaluating the tails: in the location-scale parameterization with location x_0 and scale \gamma > 0, the integrand |x| f(x) behaves asymptotically as \gamma / (\pi |x|) for large |x|, leading to a logarithmic divergence. The variance \operatorname{Var}(X) is likewise undefined. Formally, variance requires a finite second moment E[X^2], but since even the first absolute moment E[|X|] is infinite, higher moments cannot exist in the Lebesgue sense; the undefined mean further precludes a meaningful variance. Extending this, all moments E[X^k] for k \geq 1 are undefined, as E[|X|^k] = \infty due to the same tail behavior causing the integrals to diverge. Fractional moments offer a partial exception among the lower-order moments. The absolute fractional moment E[|X|^\alpha] is finite for 0 < \alpha < 1, while it diverges for \alpha \geq 1. This threshold arises from the tail decay of the density, f(x) \sim \gamma/(\pi x^2) as |x| \to \infty, which makes the integral \int_1^\infty x^\alpha / x^2 \, dx converge precisely when \alpha < 1. For the standard Cauchy (x_0 = 0, \gamma = 1), explicit computation yields E[|X|^\alpha] = \sec\left( \frac{\pi \alpha}{2} \right) for 0 < \alpha < 1. Truncating the distribution to a finite interval renders the moments well-defined and finite. For the indicator-truncated expectation E[X I_{\{|X| < a\}}] with truncation point a > 0, the integral converges, providing a finite value despite the untruncated case's divergence.
In the location-scale parameterization, this truncated moment is x_0 \left[ F(a) - F(-a) \right] + \frac{\gamma}{2\pi} \ln \left( \frac{1 + \left(\frac{a - x_0}{\gamma}\right)^2}{1 + \left(\frac{-a - x_0}{\gamma}\right)^2} \right), where F is the cumulative distribution function; the \frac{\gamma}{2\pi} coefficient comes from the antiderivative \int u/(1+u^2)\,du = \tfrac{1}{2}\ln(1+u^2), and the expression reflects the partial cancellation from the symmetric tails up to the cutoff plus a logarithmic contribution. More generally, closed-form expressions involve such terms from integration by parts. Higher truncated moments follow analogously, remaining finite for any fixed a. Sample moments from i.i.d. Cauchy observations are unreliable for inference. The sample mean \bar{X}_n = n^{-1} \sum_{i=1}^n X_i follows the same Cauchy distribution as a single X_i, so it does not converge in probability or almost surely to any finite limit as n \to \infty. Similarly, the sample variance does not stabilize, exhibiting erratic fluctuations without convergence to a defined value, underscoring the distribution's instability under averaging. This behavior contrasts with distributions possessing finite moments, where the law of large numbers ensures convergence of the sample mean.
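The truncated-mean formula (with its γ/(2π) logarithmic coefficient) can be checked against direct numerical integration of x·f(x) over [-a, a]; the sketch below uses Simpson's rule and illustrative stdlib helpers.

```python
import math

def cauchy_pdf(x, x0, gamma):
    z = (x - x0) / gamma
    return 1.0 / (math.pi * gamma * (1.0 + z * z))

def cauchy_cdf(x, x0, gamma):
    return 0.5 + math.atan((x - x0) / gamma) / math.pi

def truncated_mean(a, x0, gamma):
    """Closed form for E[X 1{|X| < a}]; the gamma/(2*pi) factor comes
    from the antiderivative (1/2) * ln(1 + u^2)."""
    hi = 1.0 + ((a - x0) / gamma) ** 2
    lo = 1.0 + ((-a - x0) / gamma) ** 2
    return (x0 * (cauchy_cdf(a, x0, gamma) - cauchy_cdf(-a, x0, gamma))
            + gamma / (2.0 * math.pi) * math.log(hi / lo))

# numeric check: Simpson's rule on the integral of x * f(x) over [-a, a]
x0, gamma, a, n = 0.7, 1.5, 25.0, 40000
h = 2.0 * a / n
acc = -a * cauchy_pdf(-a, x0, gamma) + a * cauchy_pdf(a, x0, gamma)
for i in range(1, n):
    x = -a + i * h
    acc += (4 if i % 2 else 2) * x * cauchy_pdf(x, x0, gamma)
numeric = acc * h / 3.0
print(numeric, truncated_mean(a, x0, gamma))
```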

Advanced Mathematical Properties

Characteristic function

The characteristic function of a random variable X with Cauchy distribution, having location parameter x_0 and scale parameter \gamma > 0, is given by \phi_X(t) = \mathbb{E}[e^{itX}] = \exp\left( i t x_0 - \gamma |t| \right). This form arises because the characteristic function for the standard Cauchy distribution (with x_0 = 0 and \gamma = 1) is \phi(t) = e^{-|t|}, and the general case follows by adjusting for location via the shift property \phi_{X + c}(t) = e^{i t c} \phi_X(t) and for scale via \phi_{\gamma X}(t) = \phi_X(\gamma t). To derive the characteristic function for the standard case, compute the Fourier transform of the density f(x) = \frac{1}{\pi (1 + x^2)}: \phi(t) = \int_{-\infty}^{\infty} e^{i t x} \frac{1}{\pi (1 + x^2)} \, dx. Since the density is even, the imaginary part vanishes, reducing to \phi(t) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\cos(t x)}{1 + x^2} \, dx. For t \geq 0, this equals e^{-t} using the known result \int_0^{\infty} \frac{\cos(t x)}{1 + x^2} \, dx = \frac{\pi}{2} e^{-t}, which can be evaluated via contour integration or Laplace transforms; the case t < 0 follows by evenness. The characteristic function facilitates proofs of key properties, such as the stability of the Cauchy distribution under summation of independent copies. If X_1 and X_2 are independent Cauchy random variables with location parameters x_{0,1}, x_{0,2} and scale parameters \gamma_1, \gamma_2, then the characteristic function of X_1 + X_2 is the product \phi_{X_1}(t) \phi_{X_2}(t) = \exp\left( i t (x_{0,1} + x_{0,2}) - (\gamma_1 + \gamma_2) |t| \right), which matches the form for a Cauchy distribution with location x_{0,1} + x_{0,2} and scale \gamma_1 + \gamma_2. This demonstrates closure under convolution without normalization, a hallmark of stable distributions. The Cauchy distribution is infinitely divisible, and its characteristic function admits a Lévy–Khinchine representation, the form characterizing all infinitely divisible laws.
For the standard symmetric Cauchy (location 0, scale 1), the representation is \log \phi(t) = \int_{\mathbb{R} \setminus \{0\}} \left( e^{i t x} - 1 - i t x \mathbf{1}_{|x| < 1} \right) \frac{1}{\pi x^2} \, dx, with zero Gaussian coefficient and zero drift, where the Lévy measure is \nu(dx) = \frac{1}{\pi x^2} dx for x \neq 0. This pure-jump form reflects the distribution's heavy tails and arises in the context of Lévy processes, such as the Cauchy process defined via subordination of Brownian motion. For the general case, the location enters through the drift term, and the scale \gamma rescales the Lévy measure to \nu_\gamma(dx) = \frac{\gamma}{\pi x^2} \, dx. In contrast to the Gaussian distribution, whose characteristic function \phi(t) = \exp\left( i \mu t - \frac{\sigma^2 t^2}{2} \right) features a quadratic exponent ensuring analyticity and finite moments, the Cauchy's linear |t| term in the exponent is non-analytic at t = 0, corresponding to the absence of mean and higher moments and the presence of heavy tails. This structural difference underscores the Cauchy's role in modeling phenomena with extreme outliers, unlike the light-tailed Gaussian.
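The derivation above can be verified numerically: since the sine part vanishes by symmetry, a midpoint-rule estimate of E[cos(tX)] for the standard Cauchy should reproduce e^{-|t|}. A stdlib sketch (the `cf_numeric` helper and its cutoff are illustrative choices; the truncated tails contribute less than about 10^-3):

```python
import math

def cf_numeric(t, cutoff=2000.0, n=400_000):
    """Midpoint-rule estimate of E[cos(tX)] for the standard Cauchy;
    the sine (imaginary) part vanishes by symmetry of the density."""
    h = 2.0 * cutoff / n
    total = 0.0
    for i in range(n):
        x = -cutoff + (i + 0.5) * h
        total += math.cos(t * x) / (math.pi * (1.0 + x * x))
    return total * h

results = {t: (cf_numeric(t), math.exp(-t)) for t in (0.5, 1.0, 2.0)}
print(results)  # each numeric value lands close to exp(-|t|)
```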

Entropy

The differential entropy of a continuous random variable X with probability density function f(x) is defined as h(X) = -\int_{-\infty}^{\infty} f(x) \log f(x) \, dx. For the Cauchy distribution with location parameter \mu and scale parameter \gamma > 0, the density is f(x) = \frac{1}{\pi \gamma \left[1 + \left(\frac{x - \mu}{\gamma}\right)^2\right]}. By the location invariance and scale behavior of differential entropy, h(X) = \log(4 \pi \gamma), independent of \mu. For the standard Cauchy distribution (\mu = 0, \gamma = 1), this simplifies to h(X) = \log(4\pi) \approx 2.531 nats. To derive this, substitute z = \frac{x - \mu}{\gamma} to reduce to the standard case, yielding h(X) = \log \gamma + h(Z) where Z is standard Cauchy, so it suffices to compute h(Z) = -\int_{-\infty}^{\infty} \frac{1}{\pi (1 + z^2)} \log \left( \frac{1}{\pi (1 + z^2)} \right) dz = \log \pi + \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\log(1 + z^2)}{1 + z^2} dz. The integral evaluates to \pi \log 4 via the substitution z = \tan \theta (with \theta \in (-\pi/2, \pi/2)), transforming it to \int_{-\pi/2}^{\pi/2} \log(\sec^2 \theta) \, d\theta = 2 \int_{-\pi/2}^{\pi/2} \log(\sec \theta) \, d\theta = 4 \int_0^{\pi/2} \log(\sec \theta) \, d\theta = -4 \int_0^{\pi/2} \log(\cos \theta) \, d\theta, where the known value \int_0^{\pi/2} \log(\cos \theta) \, d\theta = -\frac{\pi}{2} \log 2 gives -4 \times \left( -\frac{\pi}{2} \log 2 \right) = 2\pi \log 2 = \pi \log 4. Compared to the normal distribution N(0, \sigma^2) with the same scale \sigma = \gamma, whose differential entropy is \frac{1}{2} \log(2 \pi e \gamma^2) \approx 1.419 + \log \gamma, the Cauchy's entropy is higher by approximately 1.112 nats for \gamma = 1. This reflects greater uncertainty due to the Cauchy's heavier tails, even though the distribution lacks a finite variance.
The Kullback–Leibler divergence between two Cauchy distributions, D_{\mathrm{KL}}(X_1 \| X_2), where X_1 \sim \mathrm{Cauchy}(\mu_1, \gamma_1) and X_2 \sim \mathrm{Cauchy}(\mu_2, \gamma_2), is finite and has the closed form \log \left( \frac{(\gamma_1 + \gamma_2)^2 + (\mu_1 - \mu_2)^2}{4 \gamma_1 \gamma_2} \right). For the special case of identical locations (\mu_1 = \mu_2), it simplifies to \log \left( \frac{(\gamma_1 + \gamma_2)^2}{4 \gamma_1 \gamma_2} \right) = 2 \log \left( \frac{\gamma_1 + \gamma_2}{2 \sqrt{\gamma_1 \gamma_2}} \right), which is symmetric in \gamma_1 and \gamma_2. The Cauchy distribution maximizes differential entropy among all distributions satisfying the constraint E\left[\log \left(1 + \frac{(X - \mu)^2}{\gamma^2}\right)\right] = \log 4, corresponding to a fixed expected logarithmic quadratic deviation. This constraint arises in contexts like robust estimation or geometric interpretations of ratios of independent normals, distinguishing it from the variance constraint yielding the Gaussian.
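Both results lend themselves to quick numerical checks: the closed-form KL divergence vanishes for identical parameters and is symmetric in its arguments, and a brute-force integral of -f log f for the standard Cauchy should land near log(4π) ≈ 2.531. A stdlib sketch (helper names and the integration cutoff are illustrative):

```python
import math

def kl_cauchy(mu1, g1, mu2, g2):
    """Closed-form KL divergence between Cauchy(mu1, g1) and Cauchy(mu2, g2)."""
    return math.log(((g1 + g2) ** 2 + (mu1 - mu2) ** 2) / (4.0 * g1 * g2))

kl_same = kl_cauchy(0.0, 1.0, 0.0, 1.0)   # 0 for identical distributions
kl_ab = kl_cauchy(0.0, 1.0, 3.0, 2.0)
kl_ba = kl_cauchy(3.0, 2.0, 0.0, 1.0)     # symmetric in its arguments
print(kl_same, kl_ab, kl_ba)

# numeric differential entropy of the standard Cauchy vs log(4*pi)
cutoff, n = 4000.0, 400_000
h = 2.0 * cutoff / n
entropy = 0.0
for i in range(n):
    x = -cutoff + (i + 0.5) * h
    f = 1.0 / (math.pi * (1.0 + x * x))
    entropy -= f * math.log(f) * h
print(entropy, math.log(4.0 * math.pi))
```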

Transformation rules

The Cauchy distribution belongs to the location-scale family of distributions, meaning it is closed under affine transformations. Specifically, if X \sim \text{Cauchy}(x_0, \gamma) with location parameter x_0 and scale parameter \gamma > 0, then for any constants a \neq 0 and b, the transformed variable Y = aX + b follows \text{Cauchy}(a x_0 + b, |a| \gamma). This property can be verified through substitution into the probability density function (PDF) or cumulative distribution function (CDF). For the PDF approach, the density of X is f_X(x) = \frac{1}{\pi \gamma \left[1 + \left(\frac{x - x_0}{\gamma}\right)^2\right]}. Substituting x = \frac{y - b}{a} yields f_Y(y) = \frac{1}{\pi |a| \gamma \left[1 + \left(\frac{y - (a x_0 + b)}{|a| \gamma}\right)^2\right]}, which matches the PDF of \text{Cauchy}(a x_0 + b, |a| \gamma), accounting for the Jacobian factor |a|^{-1}. Similarly, the CDF transformation F_Y(y) = F_X\left(\frac{y - b}{a}\right) for a > 0 confirms the result, as the arctangent form of the Cauchy CDF preserves the family structure under linear shifts and scalings. The reciprocal transformation Y = 1/X also yields a distribution within the Cauchy family, though with adjusted parameters. For the standard Cauchy distribution (x_0 = 0, \gamma = 1), Y follows the same standard Cauchy distribution, a self-reciprocal property arising from the tangent representation (1/\tan\Theta = \tan(\pi/2 - \Theta), with the angle remaining uniformly distributed modulo \pi) and the form of the PDF. In the general case, if X \sim \text{Cauchy}(x_0, \gamma), then Y \sim \text{Cauchy}\left(\frac{x_0}{x_0^2 + \gamma^2}, \frac{\gamma}{x_0^2 + \gamma^2}\right). Standardization reduces any Cauchy random variable to the standard form \text{Cauchy}(0, 1). If X \sim \text{Cauchy}(x_0, \gamma), then Z = \frac{X - x_0}{\gamma} follows the standard Cauchy distribution, by location-scale invariance. This transformation simplifies analysis and simulations by centering the distribution at zero and scaling it to unit spread. Nonlinear transformations generally do not preserve membership in the Cauchy family, leading to distributions outside the location-scale class.
For instance, the logarithm of a Cauchy variable does not yield another Cauchy distribution, though approximations may hold in certain tail regions or under specific conditions. Exceptions occur for particular nonlinear mappings, such as certain projective or Möbius transformations, which map Cauchy densities to other Cauchy densities due to the distribution's connection to the upper half-plane in complex analysis. These transformation properties have practical implications for simulating Cauchy random variables. The standard Cauchy can be generated via inverse transform sampling: if U \sim \text{Uniform}(0,1), then Z = \tan\left(\pi (U - 1/2)\right) follows \text{Cauchy}(0,1), exploiting the tangent form of the inverse CDF. Alternatively, the ratio of two independent standard normal variables Z = N_1 / N_2 (where N_1, N_2 \sim \mathcal{N}(0,1)) yields a standard Cauchy, a method derived from the joint density that highlights the distribution's heavy-tailed nature. General Cauchy variables are then obtained by applying the affine transformation X = \gamma Z + x_0.
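The two simulation routes above can be compared side by side: both the tangent transform of a uniform and the ratio of two independent standard normals should produce samples with quartiles near (-1, 0, +1). A stdlib sketch (the `quartiles` helper is mine; tolerances are loose statistical bounds):

```python
import math, random

random.seed(3)
n = 100_000

def quartiles(sample):
    s = sorted(sample)
    return s[n // 4], s[n // 2], s[3 * n // 4]

# method 1: inverse CDF (tangent transform of a uniform)
tan_q = quartiles(math.tan(math.pi * (random.random() - 0.5))
                  for _ in range(n))
# method 2: ratio of two independent standard normals
ratio_q = quartiles(random.gauss(0, 1) / random.gauss(0, 1)
                    for _ in range(n))
print(tan_q, ratio_q)  # both near (-1, 0, +1)
```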

Statistical Inference

Parameter estimation methods

The method of quantiles provides a robust and simple approach to estimating the location parameter x_0 and scale parameter \gamma of the Cauchy distribution, particularly useful given the absence of finite moments. The sample median serves as an estimator for x_0, corresponding to the 50th percentile, while the scale \gamma is estimated as approximately half the interquartile range (IQR) of the sample, since for the standard Cauchy distribution the IQR equals 2. This method is computationally straightforward and offers good robustness to outliers, making it a practical starting point for more refined procedures. Maximum likelihood estimation (MLE) maximizes the likelihood function jointly in x_0 and \gamma. The log-likelihood for a sample of size n is given by l(\theta) = -n \log(\pi \gamma) - \sum_{i=1}^n \log\left(1 + \left(\frac{x_i - x_0}{\gamma}\right)^2\right), where \theta = (x_0, \gamma). Since no closed-form solution exists, the estimates are obtained numerically, often using iterative optimization algorithms with initial values from the quantile method. The MLE is equivariant under location-scale transformations and attains the Cramér-Rao lower bound asymptotically. M-estimators offer robust alternatives to MLE, minimizing a robust loss function to downweight outliers. For the location x_0, the sample median is a special case of an M-estimator using the absolute deviation loss, which is highly robust with a breakdown point of 50%. In the Cauchy context, simultaneous M-estimators for location and scale solve equations derived from the score functions \psi(u) = u / (1 + u^2) and \chi(u) = u^2 / (1 + u^2), providing breakdown points up to nearly 50% with appropriate tuning. Bayesian estimation for the Cauchy parameters lacks conjugate priors, complicating analytical posteriors.
An improper uniform prior is commonly used for the location x_0, leading to a posterior proportional to the likelihood, while for the scale \gamma, priors resembling inverse gamma distributions (or Jeffreys priors) are employed to ensure propriety, often requiring numerical methods such as MCMC for inference. Despite the non-existence of moments and the consequent failure of the law of large numbers for sample means, parameter estimators for the Cauchy distribution exhibit desirable asymptotic properties. The MLE and certain M-estimators, such as the sample median, are consistent, converging in probability to the true parameters as sample size increases, with asymptotic normality established via likelihood theory or empirical characteristic functions rather than moment-based theorems; for instance, the one-step efficient estimator of the location achieves asymptotic variance 2\gamma^2/n, matching the inverse Fisher information (the Fisher information for the location is 1/(2\gamma^2)).
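One concrete way to implement the numerical MLE is the EM-style fixed-point iteration for Student's t with 1 degree of freedom, started from the quantile estimates described above; the stationary point of the iteration satisfies exactly the Cauchy score equations. This is a sketch under those assumptions (function names are mine, stdlib only), not the only optimizer one could use:

```python
import math, random

def cauchy_mle(data, iters=100):
    """EM-style fixed-point iteration for the Cauchy MLE (the Cauchy is
    Student's t with 1 df), initialized from the quantile estimates."""
    xs = sorted(data)
    n = len(xs)
    x0 = xs[n // 2]                                # sample median
    gamma = (xs[3 * n // 4] - xs[n // 4]) / 2.0    # half the IQR
    for _ in range(iters):
        # weights w_i = 2 / (1 + z_i^2) downweight extreme observations
        w = [2.0 / (1.0 + ((x - x0) / gamma) ** 2) for x in data]
        x0 = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
        gamma = math.sqrt(sum(wi * (xi - x0) ** 2
                              for wi, xi in zip(w, data)) / n)
    return x0, gamma

random.seed(4)
true_x0, true_gamma = 2.0, 3.0
data = [true_x0 + true_gamma * math.tan(math.pi * (random.random() - 0.5))
        for _ in range(5_000)]
est_x0, est_gamma = cauchy_mle(data)
print(est_x0, est_gamma)  # close to (2.0, 3.0)
```

At the fixed point, Σ w_i (x_i - x_0) = 0 and Σ w_i (x_i - x_0)² = n γ², which are the stationarity conditions of the log-likelihood given above.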

Challenges with sample moments

The sample mean of independent observations from a Cauchy distribution fails to converge to any central value, even as the sample size increases, because the population mean is undefined and the heavy tails cause extreme outliers to dominate the average. Instead, the distribution of the sample mean remains Cauchy with the same location and scale parameters as the original distribution, resulting in persistent wild oscillations that mimic the parent distribution's behavior. This property arises from the Cauchy's stability under summation, as demonstrated through characteristic-function analysis or direct simulation. For instance, simulations of thousands of samples from a standard Cauchy show the sample means forming a fractal-like pattern without stabilization, in stark contrast to distributions with finite moments where the law of large numbers applies. The sample variance encounters similar instability, exhibiting enormous variability across samples due to the influence of rare but extreme values in the tails; it cannot converge to the population variance, which does not exist. While the sample variance is always non-negative for finite samples, its magnitude can fluctuate dramatically, often becoming impractically large, which undermines its reliability for summarizing spread. Higher-order sample moments, such as skewness and kurtosis, are even more erratic, amplifying the effects of outliers and rendering them essentially useless for inference with Cauchy data. To diagnose these issues and confirm a Cauchy-like structure, quantile-quantile (Q-Q) plots comparing empirical quantiles to theoretical Cauchy quantiles are effective, revealing an approximately linear pattern when the data fit well. Additionally, tail index estimation methods, such as the Hill estimator applied to upper order statistics, can identify a tail index α ≈ 1 indicative of Cauchy tails, helping distinguish it from lighter-tailed distributions.
Empirical simulations further illustrate these challenges: generating multiple datasets and computing sample means yields a sampling distribution that is empirically Cauchy, confirming the theoretical non-convergence and guiding practitioners away from moment-based summaries. Consequently, robust alternatives such as the sample median for location and the median absolute deviation (MAD) for scale are preferred in practice, as they remain consistent and robust against outliers.
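The Hill estimator mentioned above is short enough to sketch directly: applied to the largest absolute values of a standard Cauchy sample, it should recover a tail index near 1, since P(|X| > x) ~ 2/(πx). A stdlib sketch (the choice k = 1000 upper order statistics is an illustrative tuning, not a recommendation):

```python
import math, random

random.seed(5)
n = 100_000
abs_sample = sorted(abs(math.tan(math.pi * (random.random() - 0.5)))
                    for _ in range(n))

# Hill estimator of the tail index from the k largest |X| values;
# Cauchy tails satisfy P(|X| > x) ~ 2/(pi x), i.e. tail index alpha = 1
k = 1000
threshold = abs_sample[-k - 1]
alpha_hat = k / sum(math.log(x / threshold) for x in abs_sample[-k:])
print(alpha_hat)  # close to 1
```

A Gaussian sample fed through the same estimator would instead produce a much larger (and k-sensitive) value, which is what makes the tail index a useful diagnostic.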

Univariate generalizations

The Cauchy distribution serves as a special case of the Student's t-distribution when the degrees-of-freedom parameter \nu = 1. The probability density function (PDF) of the t-distribution, standardized to location 0 and scale 1, is given by f(x; \nu) = \frac{\Gamma\left(\frac{\nu + 1}{2}\right)}{\sqrt{\nu \pi} \, \Gamma\left(\frac{\nu}{2}\right)} \left(1 + \frac{x^2}{\nu}\right)^{-\frac{\nu + 1}{2}}, \quad -\infty < x < \infty, where \Gamma denotes the gamma function. Substituting \nu = 1 yields \Gamma(1) = 1 and \Gamma(1/2) = \sqrt{\pi}, simplifying the PDF to the standard Cauchy form f(x) = \frac{1}{\pi (1 + x^2)}, \quad -\infty < x < \infty. This relationship highlights the Cauchy as the t-distribution in its most heavy-tailed configuration. For \nu > 1, the t-distribution possesses finite moments of order less than \nu, contrasting with the Cauchy, which has no finite moments of order 1 or higher. The Cauchy distribution also relates to the F-distribution through quadratic transformations. Specifically, if X follows a standard Cauchy distribution, then X^2 follows an F-distribution with 1 and 1 degrees of freedom. The F-distribution generally arises as the ratio of two independent chi-squared random variables, each divided by its degrees of freedom; in the limiting case of 1 degree of freedom each, this ratio equals the square of the ratio of two independent standard normal variables, yielding the squared Cauchy. The PDF of the F(1,1) distribution is f(y) = \frac{1}{\pi \sqrt{y} (1 + y)}, \quad y > 0. This connection underscores the Cauchy's role in extreme tail behaviors within variance ratio testing. For modeling data on a circular domain, such as angles or directions, the wrapped Cauchy distribution extends the univariate Cauchy by folding it onto the interval [0, 2\pi).
Its PDF is f(\theta; \mu, \rho) = \frac{1}{2\pi} \frac{1 - \rho^2}{1 + \rho^2 - 2 \rho \cos(\theta - \mu)}, \quad 0 \leq \theta < 2\pi, where \mu \in [0, 2\pi) is the location parameter (circular mean direction) and \rho = e^{-\gamma} \in (0,1) is the concentration parameter related to the scale \gamma > 0 of the underlying Cauchy, controlling concentration around \mu. This distribution inherits the heavy tails of the Cauchy, making it suitable for circular data with potential outliers, and its characteristic function is \phi(t) = e^{i t \mu} \rho^{|t|} = e^{i t \mu - \gamma |t|} for integer t. The truncated Cauchy distribution restricts the support of a Cauchy to a finite interval [a, b] with a < b, renormalizing so that the density integrates to 1. For a general Cauchy with location \mu and scale \gamma > 0, the PDF is f(x; \mu, \gamma, a, b) = \frac{\frac{1}{\pi \gamma \left[1 + \left(\frac{x - \mu}{\gamma}\right)^2\right]}}{F\left(\frac{b - \mu}{\gamma}\right) - F\left(\frac{a - \mu}{\gamma}\right)}, \quad a \leq x \leq b, where F(z) = \frac{1}{\pi} \arctan(z) + \frac{1}{2} is the cumulative distribution function of the standard Cauchy, and the density is zero outside [a, b]. This renders all moments finite, addressing the Cauchy's undefined mean and variance while preserving its peaked, heavy-tailed shape within bounds; for example, the low-order truncated moments admit closed-form expressions involving arctangent and logarithm terms.
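The renormalization of the truncated Cauchy is mechanical enough to check numerically: dividing the Cauchy density by the CDF mass on [a, b] must make the restricted density integrate to 1. A stdlib sketch (helper names are illustrative):

```python
import math

def std_cauchy_cdf(z):
    return 0.5 + math.atan(z) / math.pi

def trunc_cauchy_pdf(x, mu, gamma, a, b):
    """Cauchy density renormalized to [a, b]; zero outside the interval."""
    if not a <= x <= b:
        return 0.0
    z = (x - mu) / gamma
    norm = std_cauchy_cdf((b - mu) / gamma) - std_cauchy_cdf((a - mu) / gamma)
    return 1.0 / (math.pi * gamma * (1.0 + z * z) * norm)

# Simpson check: the truncated density integrates to 1 over [a, b]
mu, gamma, a, b, n = 1.0, 2.0, -5.0, 8.0, 10_000
h = (b - a) / n
acc = trunc_cauchy_pdf(a, mu, gamma, a, b) + trunc_cauchy_pdf(b, mu, gamma, a, b)
for i in range(1, n):
    acc += (4 if i % 2 else 2) * trunc_cauchy_pdf(a + i * h, mu, gamma, a, b)
total = acc * h / 3.0
print(total)  # ~1.0
```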

Multivariate extensions

The multivariate Cauchy distribution generalizes the univariate Cauchy distribution to random vectors in \mathbb{R}^p for p \geq 1. It is defined such that any of its components follows a univariate Cauchy distribution, ensuring consistency with the univariate case when p=1. The probability density function (PDF) of a p-dimensional multivariate Cauchy random vector \mathbf{X} with location parameter \boldsymbol{\mu} \in \mathbb{R}^p and dispersion matrix \boldsymbol{\Sigma}, a positive definite p \times p matrix, is given by f(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{\Gamma\left(\frac{p+1}{2}\right)}{\pi^{p/2} \Gamma\left(\frac{1}{2}\right) \det(\boldsymbol{\Sigma})^{1/2}} \left[1 + (\mathbf{x} - \boldsymbol{\mu})^\top \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})\right]^{-\frac{p+1}{2}}, where \Gamma denotes the gamma function and \Gamma(1/2) = \sqrt{\pi}. This form coincides with the multivariate Student's t-distribution with one degree of freedom. Note that \boldsymbol{\Sigma} represents scale or dispersion rather than covariance, as the distribution lacks finite second moments. In the isotropic case, where \boldsymbol{\Sigma} = \sigma^2 \mathbf{I}_p for a scale \sigma > 0 and identity matrix \mathbf{I}_p, the distribution exhibits spherical symmetry around \boldsymbol{\mu}, simplifying the PDF to f(\mathbf{x} \mid \boldsymbol{\mu}, \sigma^2 \mathbf{I}_p) = \frac{\Gamma\left(\frac{p+1}{2}\right)}{\pi^{(p+1)/2} \sigma^p} \left[1 + \frac{(\mathbf{x} - \boldsymbol{\mu})^\top (\mathbf{x} - \boldsymbol{\mu})}{\sigma^2}\right]^{-\frac{p+1}{2}}. This special case is invariant under orthogonal transformations and is often used as a standard form. Marginal distributions of the multivariate Cauchy are also multivariate Cauchy. Specifically, the marginal of any subvector follows a lower-dimensional multivariate Cauchy with the corresponding submatrix of \boldsymbol{\Sigma} and subvector of \boldsymbol{\mu}; in particular, univariate marginals are univariate Cauchy distributions with the corresponding location and scale.
Conditional distributions remain tractable: the conditional distribution of one subvector given another is a multivariate Student's t distribution, with degrees of freedom increased by the dimension of the conditioning subvector and with location and dispersion derived from the Schur complement of \boldsymbol{\Sigma}. The characteristic function of \mathbf{X} is \phi(\mathbf{t} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \exp\left(i \mathbf{t}^\top \boldsymbol{\mu} - \|\boldsymbol{\Sigma}^{1/2} \mathbf{t}\|\right), for \mathbf{t} \in \mathbb{R}^p, where \|\cdot\| denotes the Euclidean norm; the non-smooth norm term reflects the heavy-tailed nature and lack of moments.
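Because the multivariate Cauchy coincides with the multivariate Student's t at one degree of freedom, it can be sampled as a correlated Gaussian vector divided by the absolute value of an independent standard normal. A minimal stdlib-Python sketch (the function name `sample_mv_cauchy` is ours) that also checks the claimed marginal behavior:

```python
import random

random.seed(0)

def sample_mv_cauchy(mu, L, n):
    """Draw n samples from a multivariate Cauchy with location mu and
    dispersion Sigma = L L^T (L lower-triangular): X = mu + (L z) / |w|
    with z ~ N(0, I) and an independent w ~ N(0, 1), i.e. a multivariate
    Student's t with one degree of freedom."""
    p = len(mu)
    samples = []
    for _ in range(n):
        z = [random.gauss(0.0, 1.0) for _ in range(p)]
        w = abs(random.gauss(0.0, 1.0))
        Lz = [sum(L[i][j] * z[j] for j in range(i + 1)) for i in range(p)]
        samples.append([mu[i] + Lz[i] / w for i in range(p)])
    return samples

# Sigma = [[4, 0], [0, 1]], so the first marginal should be Cauchy(1, 2):
# median near 1 and interquartile range near 2 * gamma = 4.
xs = sample_mv_cauchy([1.0, -2.0], [[2.0, 0.0], [0.0, 1.0]], 100_000)
first = sorted(x[0] for x in xs)
median = first[len(first) // 2]
iqr = first[3 * len(first) // 4] - first[len(first) // 4]
```

Median and interquartile range are used rather than sample mean and variance, since the latter do not converge for Cauchy data.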

Connections to stable distributions

The Cauchy distribution belongs to the family of stable distributions, which are characterized by a stability parameter \alpha \in (0, 2] that governs tail heaviness and behavior under convolution, along with a skewness parameter \beta \in [-1, 1], a location \mu, and a scale \sigma > 0. Specifically, the symmetric Cauchy distribution corresponds to \alpha = 1 and \beta = 0, making it a central member of this family with heavy tails that lack finite moments beyond the zeroth order. Stable distributions arise as limiting laws in generalized central limit theorems for i.i.d. random variables with power-law tails, and the Cauchy case emerges when the underlying variables have densities with tails decaying as 1/|x|^2. As an infinitely divisible distribution, the Cauchy admits a Lévy-Khintchine representation involving a Lévy measure \nu that captures the intensity of jumps. For the standard Cauchy distribution, this measure is given by \nu(dx) = \frac{1}{\pi x^2} \, dx, \quad x \in \mathbb{R} \setminus \{0\}, which reflects the symmetric jump structure, with infinite activity near zero, and governs the Poissonian jumps of the corresponding Lévy process. The measure assigns infinite total mass to small jumps while satisfying the integrability condition \int_{\mathbb{R} \setminus \{0\}} (1 \wedge x^2) \, \nu(dx) < \infty, confirming the distribution's infinite divisibility. Unlike most stable distributions, whose probability density functions lack closed-form expressions except in special cases, the Cauchy distribution (at \alpha = 1, \beta = 0) has the explicit density f(x) = \frac{1}{\pi (1 + x^2)} in the standard case. The other exceptions are the Gaussian distribution at \alpha = 2 (the normal density) and the one-sided Lévy distribution at \alpha = 1/2, \beta = 1 (whose density involves an exponential factor). For general \alpha, the densities are typically expressed via series expansions or Fox's H-function, highlighting the Cauchy's relative simplicity within the class.
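The integrability condition above can be checked directly for \nu(dx) = dx/(\pi x^2): the small-jump region |x| \leq 1 contributes exactly 2/\pi (the integrand is the constant 1/\pi there), and the large-jump region contributes another 2/\pi, for a total of 4/\pi. A minimal numeric sketch (the helper name `integrand` is ours):

```python
import math

# Levy measure of the standard Cauchy: nu(dx) = dx / (pi x^2).
def integrand(x):
    return min(1.0, x * x) / (math.pi * x * x)

# Small jumps |x| <= 1: min(1, x^2)/(pi x^2) = 1/pi, an exact contribution
# of 2/pi, even though nu itself assigns infinite mass near 0.
small = 2.0 / math.pi

# Large jumps |x| > 1: integrate 1/(pi x^2) numerically out to a cutoff.
n, cutoff = 400_000, 1000.0
h = (cutoff - 1.0) / n
large = 2.0 * sum(integrand(1.0 + (i + 0.5) * h) * h for i in range(n))

total = small + large  # exact value of the full integral is 4/pi
```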
The stability property of the Cauchy distribution manifests in its closure under convolution with appropriate scaling: if X_1, \dots, X_n are i.i.d. Cauchy random variables with location \mu and scale \sigma, then the sample mean \frac{1}{n} \sum_{i=1}^n X_i follows a Cauchy distribution with the same location \mu and scale \sigma; since \alpha = 1, the usual stable normalization n^{1/\alpha} is just n itself, so the plain average already reproduces the original law. This strict stability underscores its role in modeling phenomena with additive independence, such as certain physical processes with resonant frequencies. While positive stable distributions (with \beta = 1) link to subordinators in Lévy processes, the symmetric Cauchy at \alpha = 1 exhibits balanced, two-sided jumps without positivity constraints.
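This closure under averaging is easy to see empirically: the spread of the sample mean of Cauchy draws does not shrink with n. A small simulation sketch (the helper name `cauchy_draw` is ours, using inverse-CDF sampling):

```python
import math
import random

random.seed(1)

def cauchy_draw(mu=0.0, gamma=1.0):
    # Inverse-CDF sampling: F^{-1}(u) = mu + gamma * tan(pi * (u - 1/2)).
    return mu + gamma * math.tan(math.pi * (random.random() - 0.5))

# Average n i.i.d. Cauchy(0, 1) draws, many times over: the interquartile
# range of the averages stays near 2 * gamma = 2, not 2 / sqrt(n).
n, reps = 50, 20_000
means = sorted(sum(cauchy_draw() for _ in range(n)) / n for _ in range(reps))
iqr = means[3 * reps // 4] - means[reps // 4]
```

For Gaussian draws the same experiment would give an interquartile range shrinking like 1/\sqrt{n}; here it does not shrink at all, which is exactly why no law of large numbers applies.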

Applications

Physical modeling

In particle physics, the Breit-Wigner distribution adopts the Cauchy form to model the profile of unstable particle resonances, where the intensity near a resonance is given by f(E) \propto \frac{1}{(E - M)^2 + \left(\frac{\Gamma}{2}\right)^2}, with M representing the resonance mass and \Gamma the decay width, capturing the enhanced probability near the resonance energy. The relativistic Breit-Wigner distribution generalizes this non-relativistic case, originally derived for nuclear-reaction cross-sections, and accounts for the finite lifetime of the resonant state through the energy-time uncertainty relation. The heavy tails of the Cauchy distribution reflect the broad energy spread in decay processes, making it suitable for high-energy collisions where relativistic effects dominate.

In spectroscopy, the Cauchy distribution manifests as the Lorentzian lineshape in atomic and molecular spectra, arising directly from the Fourier transform of the exponentially decaying amplitude of an excited state. This natural broadening mechanism, first formalized in Lorentz's oscillator model of electron response to electromagnetic fields, describes the spectral intensity profile of emitted or absorbed light, with the linewidth inversely proportional to the state's lifetime. The lineshape's symmetric, peaked form with extended wings accurately reproduces observed emission spectra in gases and solids wherever natural or collisional broadening dominates over Doppler effects.

In plasma physics, Cauchy distributions, often termed Lorentzian, model velocity distributions in turbulent regimes, particularly in dusty plasmas where suprathermal particles lead to non-Maxwellian tails. Such distributions emerge in scenarios involving wave-particle interactions and instabilities, like two-stream configurations that drive turbulence, allowing for higher velocities than Gaussian models predict. For instance, in space plasmas, Lorentzian profiles fit observations of ion and electron speeds in regions with strong electrostatic fluctuations.
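A quick way to see why \Gamma is called the width: for the Breit-Wigner shape above, the half-maximum points sit at E = M \pm \Gamma/2, so the full width at half maximum is exactly \Gamma. A tiny sketch with illustrative (not sourced) parameter values:

```python
def breit_wigner(E, M, Gamma):
    """Non-relativistic Breit-Wigner (Lorentzian/Cauchy) shape, unnormalized."""
    return 1.0 / ((E - M) ** 2 + (Gamma / 2.0) ** 2)

# Illustrative values only (roughly a Z-boson-like resonance, in GeV).
M, Gamma = 91.19, 2.50
peak = breit_wigner(M, M, Gamma)
half_lo = breit_wigner(M - Gamma / 2, M, Gamma)
half_hi = breit_wigner(M + Gamma / 2, M, Gamma)
# Both half_lo and half_hi equal peak / 2, so the FWHM equals Gamma.
```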
Compared to the Gaussian distribution, the Cauchy excels in physical modeling by accommodating fat-tailed energy spectra prevalent in resonant and turbulent systems, where extreme events, such as high-energy outliers in particle decays or turbulent bursts, occur more frequently than Gaussian tails allow, aligning better with empirical data from accelerators and plasma diagnostics. Historically, the Cauchy's application in physics traces to Lorentz's early 20th-century work on optical dispersion and wave propagation, laying the groundwork for its use in spectroscopy, though modern emphasis remains on the Breit-Wigner parameterization for quantitative fits.

Signal processing and finance

In signal processing, the Cauchy distribution serves as a robust model for impulsive noise, which arises from sources like atmospheric interference or switching transients in communication systems. It captures the rare but extreme outliers that Gaussian models fail to represent adequately, enabling the design of robust filters that maintain performance under such conditions. For instance, the myriad filter, the maximum-likelihood location estimator for Cauchy-distributed noise, achieves optimality in suppressing impulses while preserving signal details in applications such as image denoising and audio processing. Similarly, meridian filters extend this robustness by approximating the Cauchy score function, proving effective for one-dimensional signals corrupted by symmetric heavy-tailed noise. The Cauchy distribution also appears in spectral analysis through the Lorentzian function, the Fourier transform of an exponentially decaying autocorrelation, which models power spectral densities in radar and sonar systems. In radar clutter modeling, Lorentzian spectra describe the Doppler signatures of sea-surface returns at low grazing angles, where the principal spectral peak fits a Lorentzian shape better than Gaussian alternatives, aiding target detection amid environmental noise. For sonar imagery, Lorentzian profiles characterize texture spectra in underwater acoustic signals, improving parameter estimation for reverberation and scattering analysis. These applications leverage the Cauchy's slowly decaying tails to account for the broad wings observed in real-world frequency responses. In quantitative finance, the Cauchy distribution addresses the fat-tailed nature of asset returns, where extreme events occur more frequently than normal distributions predict. As a special case of the stable distributions with stability parameter α=1, it models log-returns in equity markets, capturing the leptokurtosis seen in daily price changes.
This makes it suitable for simulating Lévy processes in option pricing, where Cauchy jumps introduce realistic discontinuities in asset paths, leading to closed-form approximations for European call options under symmetric assumptions. For risk management, the Cauchy distribution informs value-at-risk (VaR) calculations by providing finite quantiles despite undefined moments, offering a conservative estimate of tail risk in portfolios exposed to jumps. Truncated variants mitigate estimation challenges, yielding VaR forecasts that outperform Gaussian models during market turbulence by emphasizing heavy-tail probabilities. In practice, this approach enhances tail-risk assessment for hedge funds and institutional investors, where the Cauchy's properties align with the empirical evidence of clustered extremes in return series.
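Because the Cauchy CDF inverts in closed form, its quantiles, and hence a Cauchy-based VaR, are immediate: F^{-1}(\alpha) = \mu + \gamma \tan(\pi(\alpha - 1/2)). A short sketch (the function name `cauchy_var` is ours) contrasting a 99% standard Cauchy quantile with the familiar standard normal one:

```python
import math

def cauchy_var(alpha, mu=0.0, gamma=1.0):
    """alpha-quantile of Cauchy(mu, gamma): the inverse of
    F(x) = 1/2 + arctan((x - mu)/gamma) / pi."""
    return mu + gamma * math.tan(math.pi * (alpha - 0.5))

q99_cauchy = cauchy_var(0.99)   # about 31.8
q99_normal = 2.326              # well-known 99% quantile of N(0, 1)
# Heavy tails push the Cauchy quantile an order of magnitude further out,
# which is what makes Cauchy-based VaR estimates so conservative.
```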

History

Origins and developments

The form of the probability density function associated with the Cauchy distribution, proportional to \frac{1}{a^2 + x^2}, first appeared in mathematical literature during the 18th century as part of the curve known as the witch of Agnesi. This curve was described in Maria Gaetana Agnesi's 1748 treatise Istituzioni analitiche ad uso della gioventù italiana, where it served as an example in the study of cubic curves and integration techniques, though without a probabilistic interpretation. The explicit analysis of the distribution's properties in a probabilistic context was provided by Siméon Denis Poisson in 1824, who examined it as the limiting case of the average of observations under certain error assumptions, publishing the results in 1827. Poisson's work highlighted its heavy-tailed nature and lack of finite moments, arising in the context of the ratio of two independent normal variables, but the distribution was not yet named after anyone specifically. The distribution became associated with Augustin-Louis Cauchy following his use of it in 1853 during an academic dispute with Irénée-Jules Bienaymé over the validity of least squares methods for interpolation when errors follow heavy-tailed distributions. In his response, Cauchy demonstrated that under such error laws, the method could lead to divergent results, thereby popularizing the distribution in mathematical statistics and leading to its eponymous naming, despite Poisson's earlier analysis. In the early 20th century, the distribution found an independent application in physics through Hendrik Lorentz's derivation of the natural linewidth in atomic spectra, where the shape emerges from the finite lifetime of excited states due to radiation damping. This physical form, known as the Lorentzian lineshape, provided an early practical context for the distribution in modeling resonance phenomena, distinct from its mathematical origins.

Key contributors

The Cauchy distribution is named after the French mathematician Augustin-Louis Cauchy (1789–1857), whose foundational work in analysis, particularly his 1823 memoir on definite integrals and their applications, laid the groundwork for the mathematical form of the distribution through explorations of residues and contour integration. Cauchy's rigorous approach to limits and infinite series in this period influenced the development of probability distributions with heavy tails. In the 1920s, Paul Lévy (1886–1971) advanced the understanding of the Cauchy distribution as a special case of stable distributions, characterizing it as the α=1 member in his seminal works on the summation of independent random variables. Lévy's characterization highlighted its stability under convolution, distinguishing it from Gaussian laws. The Dutch physicist Hendrik Antoon Lorentz (1853–1928) derived the Lorentzian profile, mathematically equivalent to the Cauchy density, in his 1906 electron theory to model resonance phenomena and the dispersion of light by oscillating electrons. This physical interpretation connected the distribution to atomic spectra and electromagnetic theory. In the 1960s, Benoît Mandelbrot (1924–2010) pioneered the application of stable distributions, including the Cauchy case, to financial time series, modeling speculative price variations in cotton markets as exhibiting heavy tails rather than Gaussian behavior. His work challenged traditional economic models by emphasizing infinite-variance properties. Eugene Fama (b. 1939) built on Mandelbrot's ideas, empirically testing the stable Paretian hypothesis, including the role of Cauchy-like tails, in the distribution of stock returns during the mid-1960s. Fama's analysis supported the relevance of non-Gaussian stable laws for capturing empirical regularities in financial data. Peter Huber (b. 1934) highlighted the Cauchy distribution's utility in robust statistics during the 1960s, using it as a prototypical heavy-tailed model to motivate M-estimators that minimize the influence of outliers in location estimation.
Huber's framework emphasized the distribution's role in developing estimators resilient to deviations from normality.

References

  1. 5.32: The Cauchy Distribution - Statistics LibreTexts
  2. Cauchy Distribution - Wolfram MathWorld
  3. [PDF] Cauchy Distribution
  4. [PDF] Stat 3701 Lecture Notes: Statistical Models, Part II
  5. [PDF] Notes About Cauchy Distribution - USC Dornsife
  6. [PDF] Cauchy Noise Removal by Nonconvex ADMM with Convergence ...
  7. [PDF] Standard Cauchy distribution
  8. [PDF] Stat 609: Mathematical Statistics, Lecture 13
  9. Probability Playground: The Cauchy Distribution
  10. Stable Distribution
  11. [PDF] Standard Cauchy distribution
  12. [PDF] Unit 23: PDF and CDF
  13. 1.3.6.6.3. Cauchy Distribution - Information Technology Laboratory
  14. [PDF] Central limit theorems from a teaching perspective - DiVA portal
  15. On the Half-Cauchy Prior for a Global Scale Parameter - Project Euclid
  16. [PDF] Disordered ensembles of random matrices
  17. [PDF] Stable Distributions - EdSpace
  18. [PDF] Geometrical understanding of the Cauchy distribution - Raco.cat
  19. Cauchy Distribution - Random Services
  20. Cauchy distribution - Encyclopedia of Mathematics
  21. [PDF] 9: Sums of Independent Random Variables - Stat@Duke
  22. 3.3.3 Cauchy Distribution - MIT
  23. [PDF] Stat 5101 Lecture Slides, Deck 4 - School of Statistics
  24. [PDF] Lecture 6-2 (10/12/2017): The Cauchy Distribution
  25. (source unavailable)
  26. A truncated Cauchy distribution - Taylor & Francis Online
  27. [PDF] Chapter 4: Truncated Distributions
  28. [PDF] Characteristic Functions and the Central Limit Theorem
  29. [PDF] Lévy Processes, Stable Processes, and Subordinators
  30. [PDF] On a Goodness of Fit Test for the Cauchy Distribution
  31. [PDF] Location-Scale Family
  32. [PDF] Distributions of Product and Quotient of Cauchy Variables
  33. [PDF] Theorem: The Inverse of a Standard Cauchy Random Variable Is Also Standard Cauchy
  34. Transformations Which Preserve Cauchy Distributions and Their ...
  35. Angular Processes Related to Cauchy Random Walks - SIAM
  36. [PDF] Maximum Likelihood in R
  37. [PDF] Breakdown Points of Cauchy Regression-Scale Estimators
  38. On Bayesian Estimation of the Cauchy Parameters - JSTOR
  39. [PDF] Efficient Inference for the Cauchy Distribution ...
  40. [PDF] Standard Cauchy distribution
  41. [PDF] F distribution
  42. An Extended Family of Circular Distributions Related to Wrapped ...
  43. The Cauchy Distribution in Information Theory - PMC
  44. [PDF] Properties of Multivariate Cauchy and Poly-Cauchy Distributions ...
  45. [PDF] On the Conditional Distribution of the Multivariate t Distribution - arXiv
  46. The Cauchy Distribution in Information Theory - Entropy, MDPI (Sergio Verdú)
  47. [PDF] Stable Distributions - Finance
  48. [PDF] Cauchy Noise and Affiliated Stochastic Processes - arXiv
  49. Properties of Lévy Processes - Almost Sure
  50. [PDF] Stable Distributions - Uni Ulm
  51. Introduction to Spectral Line Shape Theory - IOPscience
  52. Meridian Filtering for Robust Signal Processing - Semantic Scholar
  53. [PDF] A Model of Low Grazing Angle Sea Clutter for Coherent Radar ...
  54. [PDF] Texture Analysis in Sonar Images - UCL Discovery
  55. [PDF] Financial Applications of Stable Distributions
  56. [PDF] Option Pricing with Lévy-Stable Processes Generated by ... - People
  57. VaR Forecasting for Financial Asset Series Based on Truncated ...
  58. An Historical Note on the Cauchy Distribution (Stigler) - Oxford Academic, Biometrika
  59. Cauchy Distribution (Glynn) - Wiley Online Library
  60. Stark Broadening Models for Plasma Diagnostics - ResearchGate
  61. [PDF] arXiv:1107.3688v3 [math.HO]
  62. Research: Cauchy's Work on Integral ... - Project Euclid
  63. [PDF] Paul Lévy
  64. The Theory of Electrons and Its Applications to the Phenomena of Light and Radiant Heat - H. A. Lorentz
  65. [PDF] The Variation of Certain Speculative Prices
  66. Mandelbrot and the Stable Paretian Hypothesis - JSTOR
  67. [PDF] The Behavior of Stock-Market Prices
  68. The 1972 Wald Lecture, Robust Statistics: A Review - JSTOR