
Generalized inverse Gaussian distribution

The generalized inverse Gaussian distribution (GIG) is a three-parameter family of continuous probability distributions supported on the positive real line, with probability density function
f(x; p, a, b) = \frac{(a/b)^{p/2}}{2 K_p(\sqrt{ab})} x^{p-1} \exp\left\{ -\frac{1}{2} (a x + b/x) \right\}
for x > 0, where p \in \mathbb{R} is the shape parameter, a > 0 and b > 0 are scale parameters, and K_p(\cdot) denotes the modified Bessel function of the second kind of order p.
This distribution was originally introduced by the French statistician Étienne Halphen in 1941 as part of a system of distributions for the analysis of hydrological data, such as river flows. It was rediscovered and popularized in the 1970s by Danish statistician Ole Barndorff-Nielsen, who coined the name "generalized inverse Gaussian distribution" during his work on infinitely divisible distributions and stochastic processes in physics and spatial statistics. A comprehensive treatment of its statistical properties, including moments, cumulants, and inference methods, was provided by Bent Jørgensen in his 1982 monograph, which established the GIG as a fundamental tool in theoretical and applied statistics.

Notable special cases of the GIG include the gamma distribution (limit as b \to 0), the inverse gamma distribution (limit as a \to 0), and the inverse Gaussian distribution (when p = -1/2). The GIG's flexibility in capturing both heavy-tailed and light-tailed behaviors has led to widespread applications, particularly in Bayesian statistics for constructing conjugate priors and facilitating sampling in hierarchical models, as well as in finance for modeling asset returns and Lévy processes. Its infinite divisibility further enables its use in simulating compound processes and other continuous-time models in risk analysis.

Introduction

Definition

The generalized inverse Gaussian distribution is a three-parameter family of continuous probability distributions supported on the positive real line. A random variable X follows a generalized inverse Gaussian distribution with parameters p \in \mathbb{R}, a > 0, and b > 0, denoted X \sim \mathrm{GIG}(p, a, b), if its probability density function is given by f(x; p, a, b) = \frac{(a/b)^{p/2}}{2 K_p(\sqrt{a b})} \, x^{p-1} \exp\left( -\frac{a x + b / x}{2} \right), \quad x > 0, where K_p(\cdot) denotes the modified Bessel function of the second kind of order p. The parameters a and b control the scale and shape through the exponential term, while p influences the power-law behavior near the origin and at infinity. The support is strictly the positive reals (x > 0), making the distribution suitable for positive-valued random variables, such as reciprocals or first-passage times in stochastic processes.

The normalization constant \frac{(a/b)^{p/2}}{2 K_p(\sqrt{a b})} arises from the requirement that the density integrates to unity over (0, \infty). Specifically, the integral of the unnormalized density x^{p-1} \exp\left( -\frac{a x + b / x}{2} \right) equals 2 (b/a)^{p/2} K_p(\sqrt{a b}), a property derived from the integral representation of the modified Bessel function of the second kind. This function K_p(z) ensures integrability for the given parameter constraints, as it is finite and positive for z > 0 and all real p, providing the exact normalizing factor.
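As a concrete check of the definition, the density and its normalization can be evaluated numerically. The following is a minimal Python sketch (not part of the original treatment) using SciPy's kv routine for the modified Bessel function; the parameter values are arbitrary illustrations.

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind
from scipy.integrate import quad

def gig_pdf(x, p, a, b):
    """Density of GIG(p, a, b) in the standard (p, a, b) parametrization."""
    omega = np.sqrt(a * b)
    norm = (a / b) ** (p / 2) / (2 * kv(p, omega))
    return norm * x ** (p - 1) * np.exp(-0.5 * (a * x + b / x))

# The density should integrate to 1 over (0, inf) for any p, a > 0, b > 0.
total, _ = quad(gig_pdf, 0, np.inf, args=(0.7, 2.0, 3.0))
print(total)  # ~1.0
```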

Historical background

The generalized inverse Gaussian distribution was first introduced by the French statistician Étienne Halphen in 1941 as part of a three-parameter family of distributions developed for analyzing hydrological data, such as river flow frequencies. Halphen's work aimed to model natural phenomena with heavy tails and positive support, and the distribution he proposed—now recognized as the generalized inverse Gaussian—emerged from efforts to generalize earlier special cases such as the gamma distribution and its reciprocal. This family, often referred to in hydrology as the Halphen system, included the generalized inverse Gaussian as one of its core types, providing a flexible framework for continuous positive variables.

The distribution saw limited initial adoption outside hydrological contexts but gained renewed attention in the 1970s through the work of Danish statistician Ole Barndorff-Nielsen, who rediscovered and popularized it for broader statistical applications. Barndorff-Nielsen integrated the distribution into models for stochastic processes, particularly in physics and spatial statistics, emphasizing its role in representing positive quantities such as particle sizes. A pivotal contribution was the 1977 paper by Barndorff-Nielsen and Christian Halgreen, which established the infinite divisibility of the generalized inverse Gaussian and the related hyperbolic distribution, facilitating its use in Lévy processes and compound Poisson models. This popularization marked a shift from Halphen's specialized hydrological focus to general statistical theory, evolving the distribution from its special cases—such as the inverse Gaussian, originally derived in 1915 for first-passage times and formalized by Maurice Tweedie in 1957—to a versatile three-parameter form applicable across disciplines. Further consolidation came with Bent Jørgensen's 1982 monograph, which provided a comprehensive treatment of its statistical properties and inferential methods, solidifying its place in modern statistical theory.

Parametrizations

Standard parametrization

The standard parametrization of the generalized inverse Gaussian (GIG) distribution employs three parameters: a > 0, b > 0, and p \in \mathbb{R}. In this formulation, the probability density function for a random variable X supported on x > 0 is given by f(x; a, b, p) = \frac{(a/b)^{p/2}}{2 K_p(\sqrt{ab})} \, x^{p-1} \exp\left( -\frac{1}{2} (a x + b/x) \right), where K_p(\cdot) denotes the modified Bessel function of the second kind of order p. This parametrization, originally developed in the context of first hitting time models, provides a flexible framework for modeling positive random variables with varying tail behaviors.

The parameter a > 0 governs the exponential decay of the density for large values of x, influencing the heaviness of the right tail through the term a x. Similarly, b > 0 controls the decay for small x via the term b/x, affecting the behavior near zero. The power parameter p \in \mathbb{R} shapes the polynomial component x^{p-1}, which determines the power-law-like behavior close to the origin (for p < 1) and contributes to the overall flexibility in tail asymmetry. Together, these parameters allow the GIG to encompass a range of distributions, including the gamma and inverse gamma as limiting cases when one of a or b approaches zero under appropriate conditions on p.

For the distribution to be well-defined with a > 0 and b > 0, the normalizing constant must ensure the density integrates to unity over (0, \infty), which is achieved through the factor involving K_p(\sqrt{a b}). The modified Bessel function K_p(z) exists and is positive for all real p and z > 0, guaranteeing the existence of the GIG density under these positivity constraints on a and b. These conditions prevent divergences in the integral, as the exponential terms provide sufficient decay while the Bessel function accounts for the precise normalization. In this standard notation, key distributional quantities such as the mean take the form \mathbb{E}[X] = \sqrt{b/a} \, K_{p+1}(\sqrt{a b}) / K_p(\sqrt{a b}), highlighting the interplay between the parameters in determining location-scale properties. This expression, derived from the moment-generating properties, underscores the role of the ratio \sqrt{b/a} in scaling the mean relative to the parameter balance.
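In practice, the standard (p, a, b) form can be mapped onto SciPy's built-in geninvgauss distribution, whose two-parameter density is proportional to x^{p-1} \exp(-b(x + 1/x)/2); the correspondence below is a sketch under that reading of SciPy's convention, with illustrative parameter values.

```python
import numpy as np
from scipy.special import kv
from scipy.stats import geninvgauss

# SciPy's geninvgauss(p, b) corresponds to the symmetric case a = b; the
# general GIG(p, a, b) is recovered by rescaling:
#     X ~ GIG(p, a, b)  <=>  X ~ geninvgauss(p, sqrt(a*b), scale=sqrt(b/a))
p, a, b = 0.7, 2.0, 3.0
dist = geninvgauss(p, np.sqrt(a * b), scale=np.sqrt(b / a))

# Mean from the Bessel-ratio formula vs. SciPy's mean: they should agree.
omega = np.sqrt(a * b)
mean_formula = np.sqrt(b / a) * kv(p + 1, omega) / kv(p, omega)
print(mean_formula, dist.mean())
```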

Alternative parametrizations

The generalized inverse Gaussian (GIG) distribution admits several alternative parametrizations that facilitate analysis in specific contexts, such as deriving moments or studying symmetry properties. One common form employs parameters \lambda, \chi, and \psi, where \lambda \in \mathbb{R} is a shape parameter, and \chi, \psi \geq 0 control the scale and asymmetry. The probability density function (PDF) is given by f(x; \lambda, \chi, \psi) = \frac{ \left( \frac{\psi}{\chi} \right)^{\lambda / 2} }{ 2 K_{\lambda} \left( \sqrt{\psi \chi} \right) } x^{\lambda - 1} \exp\left( -\frac{\chi / x + \psi x}{2} \right), \quad x > 0, with K_{\lambda}(\cdot) denoting the modified Bessel function of the second kind. This parametrization is equivalent to the standard form with parameters p, a, b via the substitution \lambda = p, \chi = b, \psi = a, which directly maps the exponential terms and normalizing constant without altering the distributional structure.

Another reparametrization introduces a concentration parameter \theta > 0 and a scaling parameter \eta > 0, transforming the PDF to f(x; \lambda, \theta, \eta) = \frac{ \eta^{\lambda} }{ 2 K_{\lambda} (\theta) } x^{\lambda - 1} \exp\left( -\frac{\theta}{2} \left( \eta x + \frac{1}{\eta x} \right) \right), \quad x > 0. This equivalence follows from substituting \chi = \theta / \eta and \psi = \theta \eta into the (\lambda, \chi, \psi) form, which symmetrizes the exponent around the term \eta x + 1/(\eta x) while preserving the Bessel normalizing factor through \sqrt{\chi \psi} = \theta and (\psi / \chi)^{\lambda / 2} = \eta^{\lambda}. The (\lambda, \chi, \psi) parameters prove convenient in Bayesian analyses, where they simplify expressions for conjugate priors and mixture representations. In contrast, the (\theta, \eta) form suits physical modeling scenarios, as \theta governs the overall concentration while \eta sets the scale, shifting mass between the behavior near zero and in the right tail. Halgreen's parametrization, tailored for investigations into infinite divisibility and self-decomposability, employs a similar structure but emphasizes parameters expressed in terms of \nu, \alpha^2, and \beta^2, with the PDF proportional to x^{\nu-1} \exp\left( -\frac{\beta^2 x + \alpha^2 / x}{2} \right). This maps to the (\lambda, \chi, \psi) form via \nu = \lambda, \chi = \alpha^2, \psi = \beta^2; in this framework Halgreen established the self-decomposability of the GIG family.
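The parameter maps above are mechanical substitutions, so they translate directly into code. The following small Python helpers (illustrative, following the substitutions stated in this section) convert between the (\lambda, \chi, \psi) and (\lambda, \theta, \eta) forms.

```python
def std_to_theta_eta(lam, chi, psi):
    """(lambda, chi, psi) -> (lambda, theta, eta), using chi = theta/eta, psi = theta*eta."""
    theta = (chi * psi) ** 0.5   # concentration: theta = sqrt(chi * psi)
    eta = (psi / chi) ** 0.5     # scaling: eta = sqrt(psi / chi)
    return lam, theta, eta

def theta_eta_to_std(lam, theta, eta):
    """Inverse map: (lambda, theta, eta) -> (lambda, chi, psi)."""
    return lam, theta / eta, theta * eta

# Round trip should reproduce the original parameters.
print(theta_eta_to_std(*std_to_theta_eta(0.7, 3.0, 2.0)))  # (0.7, 3.0, 2.0)
```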

Properties

Moments and cumulants

The mean of a random variable X following the generalized inverse Gaussian distribution GIG(p, a, b) with a > 0, b > 0, and p \in \mathbb{R} is given by \mu = \mathbb{E}[X] = \sqrt{\frac{b}{a}} \frac{K_{p+1}(\sqrt{ab})}{K_p(\sqrt{ab})}, where K_\nu(z) denotes the modified Bessel function of the second kind of order \nu. The variance is then \sigma^2 = \mathrm{Var}(X) = \frac{b}{a} \frac{K_{p+2}(\sqrt{ab})}{K_p(\sqrt{ab})} - \mu^2. This expression simplifies to \sigma^2 = \mu^2 \left( \frac{K_{p+2}(\sqrt{ab}) K_p(\sqrt{ab})}{K_{p+1}^2(\sqrt{ab})} - 1 \right), highlighting its dependence on ratios of consecutive Bessel functions.

Higher-order moments are expressed in closed form as \mathbb{E}[X^r] = \left( \frac{b}{a} \right)^{r/2} \frac{K_{p+r}(\sqrt{ab})}{K_p(\sqrt{ab})}, valid for all real r (positive or negative), since both tails of the density decay exponentially when a > 0 and b > 0. This general formula facilitates computation of skewness, kurtosis, and other measures by substituting appropriate r. For instance, the third central moment determines skewness via \gamma_1 = \mathbb{E}[(X - \mu)^3] / \sigma^3, derived from the raw moments above. Cumulants \kappa_s of the GIG distribution satisfy \kappa_1 = \mu (the mean) and \kappa_2 = \sigma^2 (the variance); higher cumulants follow by repeated differentiation of the cumulant generating function, \kappa_s = \frac{d^s}{dt^s} \log M(t) \big|_{t=0}, where M(t) is the moment generating function given in the generating functions section, so that each \kappa_s can be written in terms of ratios of Bessel functions K_{p+j}(\sqrt{ab}).

Asymptotic behaviors of moments and cumulants emerge in limiting regimes of the parameters. For large \sqrt{ab}, the distribution approximates a normal distribution with the above mean and variance, where higher cumulants \kappa_s for s \geq 3 become negligible relative to powers of \sigma^2. Conversely, when b \to 0 with fixed a > 0 and p > 0, the GIG converges to a gamma distribution with shape p and rate a/2, yielding moments asymptotic to those of \mathrm{Gamma}(p, a/2); similarly, for a \to 0 with fixed b > 0 and p < 0, it approaches an inverse gamma distribution with shape -p and scale b/2. These limits provide useful approximations for extreme parameter values.
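The closed-form moment expression translates directly into a short routine. This is an illustrative Python sketch of the Bessel-ratio formula above, used here to assemble the mean, variance, and skewness; parameter values are arbitrary.

```python
import numpy as np
from scipy.special import kv

def gig_moment(r, p, a, b):
    """E[X**r] for X ~ GIG(p, a, b); valid for any real r."""
    omega = np.sqrt(a * b)
    return (b / a) ** (r / 2) * kv(p + r, omega) / kv(p, omega)

p, a, b = 0.7, 2.0, 3.0
mean = gig_moment(1, p, a, b)
var = gig_moment(2, p, a, b) - mean ** 2
# Third central moment: E[X^3] - 3*mu*Var - mu^3, then standardize.
skew = (gig_moment(3, p, a, b) - 3 * mean * var - mean ** 3) / var ** 1.5
print(mean, var, skew)
```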

Mode and median

The mode of the generalized inverse Gaussian (GIG) distribution is obtained by maximizing its probability density function, which involves setting the derivative of the log-density to zero and solving the resulting quadratic equation a m^2 - 2(p-1)m - b = 0, where a > 0, b > 0, and the parameters follow the standard parametrization. The positive root provides the mode m = \frac{(p-1) + \sqrt{(p-1)^2 + ab}}{a}, which lies in (0, \infty) for all p \in \mathbb{R}, since the density vanishes at both the origin and infinity and the distribution is unimodal with an interior mode. When |p-1| \ll \sqrt{ab}, the mode approximates \sqrt{b/a}. The GIG distribution exhibits varying behavior across parameter regimes: for p > 1, the mode shifts toward larger values, while for large negative p the mode moves toward zero and probability mass concentrates near the origin, driven by the x^{p-1} factor. Unlike the mean (detailed in the moments section), the mode provides a robust location measure less influenced by extreme values in skewed cases.

The median of the GIG distribution lacks a closed-form expression and must be approximated or computed numerically. Common approaches include the saddlepoint approximation, which leverages the cumulant generating function for accurate tail and central estimates, and the Cornish–Fisher expansion, which adjusts normal quantiles using skewness and kurtosis. Alternatively, numerical methods such as root-finding on the cumulative distribution function (itself evaluated by quadrature involving Bessel-function terms) yield precise medians for specific parameters. These techniques are essential for quantile-based inference, as the median typically lies between the mode and the mean in skewed GIG variants.
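Both quantities are easy to compute in practice: the mode from the closed form above, and the median by inverting the CDF numerically. A minimal Python sketch, reusing the assumed SciPy parametrization mapping from the parametrizations section:

```python
import numpy as np
from scipy.stats import geninvgauss

def gig_mode(p, a, b):
    """Closed-form mode of GIG(p, a, b)."""
    return ((p - 1) + np.sqrt((p - 1) ** 2 + a * b)) / a

p, a, b = 0.7, 2.0, 3.0
# Median via SciPy's quantile function (numerical CDF inversion).
median = geninvgauss(p, np.sqrt(a * b), scale=np.sqrt(b / a)).ppf(0.5)
print(gig_mode(p, a, b), median)  # mode < median for this right-skewed case
```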

Generating functions

The characteristic function of a random variable X following the generalized inverse Gaussian distribution GIG(p, a, b) is given by \phi(t) = \mathbb{E}[e^{i t X}] = \left( \frac{a}{a - 2 i t} \right)^{p/2} \frac{K_p \left( \sqrt{(a - 2 i t) b} \right)}{K_p (\sqrt{a b})}, \tag{1} where K_p(\cdot) denotes the modified Bessel function of the second kind of order p, and the expression holds for parameters such that the distribution is defined (a > 0, b > 0, p \in \mathbb{R}). This form is derived by direct evaluation of the integral \phi(t) = \int_0^\infty e^{i t x} f(x) \, dx, where f(x) is the density of the GIG, leveraging the integral representation of the modified Bessel function to simplify the resulting expression.

The moment generating function (MGF) follows analogously by replacing i t with t in (1), yielding M(t) = \mathbb{E}[e^{t X}] = \left( \frac{a}{a - 2 t} \right)^{p/2} \frac{K_p \left( \sqrt{(a - 2 t) b} \right)}{K_p (\sqrt{a b})}, \tag{2} valid for t < a/2 to ensure convergence. This MGF serves as a foundational tool for deriving cumulants through logarithmic differentiation and expansion, facilitating analysis of higher-order moments without explicit computation. Since the GIG is a continuous distribution supported on (0, \infty), the probability generating function is not applicable. However, the Laplace transform, \mathbb{E}[e^{-s X}] for s > 0, takes the form \mathbb{E}[e^{-s X}] = \left( \frac{a}{a + 2 s} \right)^{p/2} \frac{K_p \left( \sqrt{(a + 2 s) b} \right)}{K_p (\sqrt{a b})}, \tag{3} obtained by the substitution t \to -s in the MGF or by direct integration.

These generating functions underpin key probabilistic properties, such as independence criteria involving GIG and gamma laws. For instance, the Matsumoto–Yor property states that if X \sim \mathrm{GIG}(-\lambda, a, b) and Y \sim \mathrm{Gamma}(\lambda, a/2) are independent, then (X + Y)^{-1} and X^{-1} - (X + Y)^{-1} are independent, and this independence characterizes the GIG–gamma pair.
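Formula (3) can be validated against simulation. The following Python sketch (illustrative parameters; sampling relies on the assumed SciPy parametrization mapping used earlier) compares the closed-form Laplace transform with a Monte Carlo estimate.

```python
import numpy as np
from scipy.special import kv
from scipy.stats import geninvgauss

def gig_laplace(s, p, a, b):
    """E[exp(-s X)] for X ~ GIG(p, a, b), per equation (3)."""
    return ((a / (a + 2 * s)) ** (p / 2)
            * kv(p, np.sqrt((a + 2 * s) * b)) / kv(p, np.sqrt(a * b)))

p, a, b, s = 0.7, 2.0, 3.0, 1.2
rng = np.random.default_rng(0)
x = geninvgauss(p, np.sqrt(a * b), scale=np.sqrt(b / a)).rvs(10**6, random_state=rng)
print(gig_laplace(s, p, a, b), np.exp(-s * x).mean())  # should agree closely
```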

Entropy and infinite divisibility

The differential entropy H of a random variable following the generalized inverse Gaussian (GIG) distribution with parameters p, a > 0, and b > 0 is given by H = \frac{1}{2} \ln \left( \frac{b}{a} \right) + \ln \left( 2 K_p (\sqrt{a b}) \right) - (p - 1) \frac{ \left[ \frac{d}{d\nu} K_\nu (\sqrt{a b}) \right]_{\nu = p} }{ K_p (\sqrt{a b}) } + \frac{\sqrt{a b}}{2} \, \frac{ K_{p+1} (\sqrt{a b}) + K_{p-1} (\sqrt{a b}) }{ K_p (\sqrt{a b}) }, where K_\nu(\cdot) denotes the modified Bessel function of the second kind of order \nu and the derivative is taken with respect to the order. This expression arises from integrating the negative logarithm of the GIG density, which reduces to the expectations of X, 1/X, and \ln X, each of which has a closed form in terms of Bessel functions. Because the GIG density belongs to an exponential family with sufficient statistics x, 1/x, and \ln x, the GIG is the maximum-entropy distribution on (0, \infty) subject to fixed values of \mathbb{E}[X], \mathbb{E}[1/X], and \mathbb{E}[\ln X], highlighting its information-theoretic efficiency in modeling positive data with specified moments.

The GIG distribution is infinitely divisible for all parameter values p \in \mathbb{R}, a > 0, b > 0. This property, established by Barndorff-Nielsen and Halgreen via Grosswald's representation of ratios of modified Bessel functions, rests on complete monotonicity arguments for the logarithmic derivative of the Laplace transform and allows decomposition into sums of independent, identically distributed components. Consequently, the GIG serves as the distribution of a Lévy process at unit time, with its characteristic function expressible in Lévy–Khintchine form with a Lévy measure concentrated on the positive half-line. Although the class of GIG distributions is not closed under convolution—the sum of independent GIG variables generally does not follow a GIG—the infinite divisibility implies that any GIG random variable can be represented as the sum of n i.i.d. components for any positive integer n, facilitating its use in compound processes and stochastic modeling.
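The entropy formula can be checked numerically: the order derivative of the Bessel function is approximated by a central finite difference, and the result is compared against a generic numerically integrated entropy. A Python sketch under the same assumed SciPy parametrization mapping as above:

```python
import numpy as np
from scipy.special import kv
from scipy.stats import geninvgauss

p, a, b = 1.5, 2.0, 3.0
omega = np.sqrt(a * b)

# d/dnu K_nu(omega) at nu = p, via central finite difference.
eps = 1e-6
dK_dnu = (kv(p + eps, omega) - kv(p - eps, omega)) / (2 * eps)

H = (0.5 * np.log(b / a) + np.log(2 * kv(p, omega))
     - (p - 1) * dK_dnu / kv(p, omega)
     + 0.5 * omega * (kv(p + 1, omega) + kv(p - 1, omega)) / kv(p, omega))

# Cross-check against SciPy's numerically integrated differential entropy.
H_scipy = geninvgauss(p, omega, scale=np.sqrt(b / a)).entropy()
print(H, H_scipy)  # should agree to several decimals
```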

Special cases

The generalized inverse Gaussian (GIG) distribution includes the inverse Gaussian distribution as a special case when p = -\frac{1}{2}. With parameters a = \frac{\lambda}{\mu^2} and b = \lambda, where \mu > 0 is the mean and \lambda > 0 is the shape parameter of the inverse Gaussian, the GIG density simplifies exactly to the inverse Gaussian form due to the closed-form evaluation of the modified Bessel function of the second kind at order -\frac{1}{2}, given by K_{-\frac{1}{2}}(z) = \sqrt{\frac{\pi}{2z}} e^{-z}. This yields a density proportional to x^{-\frac{3}{2}} \exp\left( -\frac{\lambda (x - \mu)^2}{2 \mu^2 x} \right) for x > 0.

The GIG distribution recovers the gamma distribution in the limiting case as b \to 0^+, with p = \alpha > 0 and a = 2\beta, where \alpha > 0 and \beta > 0 are the shape and rate parameters, respectively. In this limit, the contribution of the b/x term in the exponent vanishes, and the normalizing constant involving K_p(\sqrt{ab}) approaches the reciprocal of the gamma function via the small-argument asymptotics of the Bessel function, resulting in the standard gamma density f(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} e^{-\beta x} for x > 0. Analogously, the inverse gamma distribution arises as a \to 0^+, with p = -\alpha < 0 and b = 2\beta, corresponding to shape \alpha > 0 and scale \beta > 0. Here, the a x term in the exponent disappears, and the limiting normalization yields the inverse gamma density f(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{-\alpha-1} e^{-\beta/x} for x > 0.

Boundary behaviors of the GIG distribution occur when one or both of a and b approach zero. Setting one to zero while keeping the other positive leads to the gamma or inverse gamma transitions described above, with explicit parameter mappings ensuring the densities match. When both a \to 0^+ and b \to 0^+ simultaneously for fixed p, the exponential terms approach unity and the unnormalized density reduces to a power law proportional to x^{p-1}; since x^{p-1} is not integrable over (0, \infty) for any p, no proper limiting distribution exists in this case.
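The gamma limit is easy to see numerically: with b taken very small, the GIG log-density should match the Gamma(\alpha, \beta) log-density under the mapping p = \alpha, a = 2\beta. An illustrative Python sketch:

```python
import numpy as np
from scipy.special import kv, gammaln

def gig_logpdf(x, p, a, b):
    omega = np.sqrt(a * b)
    return (0.5 * p * np.log(a / b) - np.log(2 * kv(p, omega))
            + (p - 1) * np.log(x) - 0.5 * (a * x + b / x))

def gamma_logpdf(x, alpha, beta):  # shape-rate convention
    return alpha * np.log(beta) - gammaln(alpha) + (alpha - 1) * np.log(x) - beta * x

x = np.linspace(0.1, 5, 5)
alpha, beta = 2.5, 1.5
# GIG(p=alpha, a=2*beta, b -> 0) should approach Gamma(alpha, beta).
print(gig_logpdf(x, alpha, 2 * beta, 1e-10))
print(gamma_logpdf(x, alpha, beta))  # nearly identical values
```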

Conjugate priors and mixtures

The GIG distribution with parameters (\lambda, \chi, \psi) serves as a conjugate prior for the precision parameter \tau of a normal distribution in variance-mean mixture models. In this setup, the prior on \tau is GIG(\lambda, \chi, \psi), and the model incorporates a normal distribution for the mean conditional on \tau. This conjugacy ensures that the posterior distribution remains in the GIG family after observing data from a normal likelihood.

The GIG distribution also plays a key role in constructing mixture models for enhanced flexibility in modeling heavy tails and overdispersion. A Poisson–GIG compound distribution, where the Poisson rate is mixed with a GIG random variable, yields the Sichel distribution, which is particularly useful for capturing overdispersion in count data such as word frequencies. Similarly, a normal variance-mean mixture with GIG mixing on the variance produces the generalized hyperbolic distribution; a special case arises when \chi = 0, resulting in the variance-gamma distribution, widely applied in finance for asset returns with skewness and excess kurtosis. The infinite divisibility of the GIG underpins its suitability for these mixtures, as it allows the formation of infinitely divisible compound processes without introducing discontinuities.
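The conjugate update itself is a one-line kernel calculation: a normal likelihood with known mean contributes \tau^{n/2} \exp(-\tau S / 2), which combines with the GIG prior kernel to give another GIG. A minimal sketch of this update (the helper name and defaults are illustrative, not from the original source):

```python
def gig_posterior(lam, chi, psi, data, mu=0.0):
    """Posterior GIG(lam', chi', psi') for the precision tau of N(mu, 1/tau)
    data under a GIG(lam, chi, psi) prior: a minimal conjugacy sketch."""
    n = len(data)
    s = sum((x - mu) ** 2 for x in data)
    # likelihood kernel tau^{n/2} exp(-tau*s/2) times the prior kernel
    # tau^{lam-1} exp(-(chi/tau + psi*tau)/2) is again a GIG kernel:
    return lam + n / 2, chi, psi + s
```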

Applications

The generalized inverse Gaussian (GIG) plays a key role in Bayesian inference as a conjugate prior for the precision parameter (inverse variance) of a normal likelihood, particularly when the mean is either known or assigned a separate prior. This conjugacy results in a posterior that is also GIG, with hyperparameters updated from the sufficient statistics of the data, such as the sample sum of squares and the sample size. This property enables exact analytical expressions for posterior moments and predictive distributions without requiring numerical approximation, facilitating straightforward inference in univariate and multivariate models.

In hierarchical Bayesian models, the GIG is frequently adopted for specifying variance components, offering flexibility in generalized linear mixed models (GLMMs) and spatial modeling frameworks. For example, in geostatistical applications such as kriging, GIG priors on error variances or nugget effects accommodate non-Gaussian spatial processes, allowing for robust prediction under heteroscedasticity or heavy-tailed innovations. This approach enhances model fit in datasets with clustered or spatially correlated observations by capturing complex dependence structures through layered variance specifications.

Computationally, Bayesian analyses involving GIG priors benefit from direct posterior updates in Gibbs samplers due to conjugacy, which simplifies MCMC implementation compared to non-conjugate setups; see the sketch after this paragraph. When full conditionals are not standard, the griddy-Gibbs method approximates sampling from GIG-related posteriors by gridding the parameter space and evaluating densities at discrete points, ensuring adequate exploration of the posterior. A key advantage over the more rigid gamma prior is the GIG's tunable tail behavior—ranging from gamma-like exponential tails to inverse-gamma-like polynomial tails across its parameter range—enabling better accommodation of outliers or heavy-tailed posteriors in robust Bayesian modeling. Recent advancements in MCMC for GIG-involved models include novel generators that decompose the GIG density for faster variate production, as developed by Zhang and Reiter (2022), improving scalability in high-dimensional hierarchical settings such as large-scale spatial Bayesian analyses. These methods reduce computational bottlenecks in post-2020 applications, such as integrated nested Laplace approximations or variational Bayes with GIG-based priors.
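As an illustration of such a Gibbs step, the sketch below draws the precision of zero-mean normal data from its GIG full conditional, combining the conjugate update from the previous section with the assumed SciPy parametrization mapping (all model settings here are invented for illustration):

```python
import numpy as np
from scipy.stats import geninvgauss

rng = np.random.default_rng(1)

def sample_gig(lam, chi, psi, rng):
    """Draw from GIG(lam, chi, psi), assuming the mapping
    GIG(lam, chi, psi) = geninvgauss(lam, sqrt(chi*psi), scale=sqrt(chi/psi))."""
    return geninvgauss(lam, np.sqrt(chi * psi),
                       scale=np.sqrt(chi / psi)).rvs(random_state=rng)

# One Gibbs update for the precision tau of N(0, 1/tau) observations
# under a GIG(lam0, chi0, psi0) prior.
data = rng.normal(0.0, 0.5, size=50)
lam0, chi0, psi0 = 1.0, 1.0, 1.0
lam_n = lam0 + len(data) / 2
psi_n = psi0 + np.sum(data ** 2)
tau = sample_gig(lam_n, chi0, psi_n, rng)
print(tau)  # one posterior draw of the precision
```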

Other fields

In finance, the generalized inverse Gaussian (GIG) distribution serves as the mixing distribution in generalized hyperbolic (GH) Lévy processes, enabling the modeling of asset prices with skewness and semi-heavy tails that better capture empirical return distributions than Gaussian models. These processes, particularly the normal inverse Gaussian subclass, have been applied to option pricing and risk management, providing superior fits to high-frequency financial data. A 2023 review highlights the growing adoption of GH variants for financial returns due to their flexibility in replicating stylized facts such as volatility clustering.

In linguistics, the Sichel distribution—a Poisson–GIG mixture—models word frequency counts in large corpora, accommodating overdispersion and power-law tails observed in texts. This approach yields excellent fits for very long texts, outperforming simpler models like the negative binomial, as demonstrated in analyses of sentence lengths and vocabulary distributions. The model, originally proposed by Sichel, remains relevant for computational text analysis tasks such as topic modeling and vocabulary richness estimation.

Ole Barndorff-Nielsen's work popularizing the GIG was rooted in physical processes, inspired by studies of wind-blown sand dynamics, where related distributions parameterized the sizes of sand particles. Extending this, GIG-based models, such as those using normal inverse Gaussian subordinators, simulate irregular jumps and persistence in physical systems like fluid flows and geophysical phenomena. In geostatistics, GIG mixing enhances spatial correlation models by introducing flexible variance structures for non-Gaussian random fields.

Recent applications in the 2020s include GIG-based priors in machine learning for robust sparse regression, where expectation propagation with GIG mixtures induces sparsity while handling heavy-tailed noise in high-dimensional data. In epidemiology, GIG mixtures appear in frailty models for survival analysis of disease progression, such as promotion time cure models for chronic conditions, improving estimates of heterogeneous risks in population studies.
