Generalized inverse Gaussian distribution
The generalized inverse Gaussian distribution (GIG) is a three-parameter family of continuous probability distributions supported on the positive real line, with probability density function

f(x; p, a, b) = \frac{(a/b)^{p/2}}{2 K_p(\sqrt{ab})}\, x^{p-1} \exp\left\{ -\frac{1}{2} \left( a x + \frac{b}{x} \right) \right\}
for x > 0, where p \in \mathbb{R} is a shape parameter, a > 0 and b > 0 are the remaining parameters, and K_p(\cdot) denotes the modified Bessel function of the second kind of order p.[1]

The distribution was originally introduced by the French statistician Étienne Halphen in 1941 as part of a system of distributions for the frequency analysis of hydrological data, such as river flows. It was rediscovered and popularized in the 1970s by the Danish statistician Ole Barndorff-Nielsen, who coined the name "generalized inverse Gaussian distribution" in the course of his work on infinitely divisible distributions and stochastic models in physics and finance.[2] A comprehensive treatment of its statistical properties, including moments, cumulants, and inference methods, was provided by Bent Jørgensen in his 1982 monograph, which established the GIG as a fundamental tool in theoretical and applied statistics.[2]

Notable special cases of the GIG include the gamma distribution (the limit b \to 0 with p > 0), the inverse gamma distribution (the limit a \to 0 with p < 0), and the inverse Gaussian distribution (p = -1/2).[1] This flexibility in shape and tail behavior has led to widespread applications, particularly in Bayesian statistics, for constructing conjugate priors and facilitating Markov chain Monte Carlo sampling in hierarchical models, as well as in financial engineering, for modeling stochastic volatility and Lévy processes.[1] Its infinite divisibility also makes it useful in compound Poisson processes and other continuous-time models in risk analysis.[3]
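As a brief sketch of how the gamma limit arises, one can use the small-argument behaviour of the Bessel function, K_p(z) \sim \tfrac{1}{2}\Gamma(p)\,(z/2)^{-p} as z \to 0^{+} for fixed p > 0. Letting b \to 0 in the normalizing constant gives

\frac{(a/b)^{p/2}}{2 K_p(\sqrt{ab})} \;\to\; \frac{(a/b)^{p/2}}{\Gamma(p)\,(\sqrt{ab}/2)^{-p}} = \frac{(a/b)^{p/2}\,(ab)^{p/2}}{2^{p}\,\Gamma(p)} = \frac{(a/2)^{p}}{\Gamma(p)},

so that f(x; p, a, b) \to \frac{(a/2)^{p}}{\Gamma(p)}\, x^{p-1} e^{-a x / 2}, the density of a gamma distribution with shape p and rate a/2. The inverse gamma limit as a \to 0 (for p < 0) follows in the same way after the change of variable x \mapsto 1/x, and the inverse Gaussian case follows by substituting K_{-1/2}(z) = \sqrt{\pi/(2z)}\, e^{-z} into the density at p = -1/2.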
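For readers who want to evaluate the density numerically, the following minimal Python sketch implements the formula above directly with NumPy and scipy.special.kv. The cross-checks against scipy.stats.geninvgauss and scipy.stats.invgauss rely on an assumed mapping between parameterizations (Bessel argument \sqrt{ab} with scale \sqrt{b/a}, and, for the inverse Gaussian case, mu = 1/\sqrt{ab} with scale b), which is not stated in this article.

import numpy as np
from scipy.special import kv            # modified Bessel function of the second kind, K_p
from scipy.stats import geninvgauss, invgauss

def gig_pdf(x, p, a, b):
    """GIG density f(x; p, a, b) in the parameterization used above."""
    x = np.asarray(x, dtype=float)
    norm = (a / b) ** (p / 2.0) / (2.0 * kv(p, np.sqrt(a * b)))
    return norm * x ** (p - 1.0) * np.exp(-0.5 * (a * x + b / x))

# Illustrative parameter values (not from the article).
p, a, b = 1.5, 2.0, 3.0
x = np.linspace(0.1, 5.0, 50)

# Cross-check against SciPy's geninvgauss, assuming its Bessel parameter is
# sqrt(a*b) and its scale is sqrt(b/a).
same = np.allclose(gig_pdf(x, p, a, b),
                   geninvgauss.pdf(x, p, np.sqrt(a * b), scale=np.sqrt(b / a)))
print(same)       # expected: True if the assumed mapping is correct

# Special case p = -1/2: the GIG reduces to an inverse Gaussian distribution;
# in SciPy's invgauss parameterization this corresponds (assumed mapping) to
# mu = 1/sqrt(a*b) and scale = b.
same_ig = np.allclose(gig_pdf(x, -0.5, a, b),
                      invgauss.pdf(x, 1.0 / np.sqrt(a * b), scale=b))
print(same_ig)    # expected: True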