
Inverse-gamma distribution

The inverse gamma distribution is a two-parameter family of continuous probability distributions supported on the positive real line, arising as the distribution of the reciprocal of a gamma-distributed random variable. If Y \sim \Gamma(\alpha, \beta) with shape \alpha > 0 and rate \beta > 0, then X = 1/Y follows an inverse gamma distribution, denoted X \sim \text{Inv}\Gamma(\alpha, \beta). The probability density function is given by
f(x \mid \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{-\alpha-1} \exp\left(-\frac{\beta}{x}\right), \quad x > 0,
where \Gamma(\alpha) is the gamma function. This distribution is right-skewed and can take on various shapes depending on the parameter values, with heavier tails for smaller \alpha.
Key moments of the inverse gamma distribution include the mean \mathbb{E}[X] = \frac{\beta}{\alpha - 1} for \alpha > 1, and the variance \text{Var}(X) = \frac{\beta^2}{(\alpha - 1)^2 (\alpha - 2)} for \alpha > 2; these do not exist for smaller \alpha, reflecting the distribution's heavy-tailed nature. Higher moments follow similarly, with the k-th moment finite only for \alpha > k. The cumulative distribution function has no elementary closed form but can be expressed in terms of the upper incomplete gamma function. Parameterizations vary across contexts, sometimes using a rate parameter instead of scale, but the shape-scale form is standard in many statistical applications. In Bayesian statistics, the inverse gamma distribution is particularly notable as the conjugate prior for the variance of a normal likelihood with known mean, ensuring that the posterior distribution remains inverse gamma after updating with data. For instance, if the prior is \sigma^2 \sim \text{Inv}\Gamma(\alpha, \beta), the posterior incorporates the sample size and sum of squared deviations, yielding updated parameters \alpha' = \alpha + n/2 and \beta' = \beta + \sum (x_i - \mu)^2 / 2. Beyond Bayesian inference, it models lifetimes in reliability engineering and appears in finance and econometric modeling of volatility processes.

Characterization

Probability density function

The inverse-gamma distribution is a two-parameter family of continuous probability distributions supported on the positive real numbers, with shape parameter \alpha > 0 and scale parameter \beta > 0. Its probability density function is f(x \mid \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{-\alpha - 1} \exp\left( -\frac{\beta}{x} \right) for x > 0, and f(x \mid \alpha, \beta) = 0 otherwise. The shape parameter \alpha determines the form of the distribution and the heaviness of its right tail, where smaller \alpha yields heavier tails, while the scale parameter \beta governs the scale and spread of the distribution. This distribution arises from the transformation of a gamma random variable: if Y follows a gamma distribution with shape \alpha and rate \beta (equivalently, scale 1/\beta), then X = 1/Y follows an inverse-gamma distribution with the same parameters, obtained by applying the change-of-variable formula to the gamma density. The density features a heavy right tail, reflecting the behavior near zero of the underlying gamma variable, and reaches its maximum (the mode) at x = \beta / (\alpha + 1).
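As a quick numerical sanity check of this form, the density can be coded directly and compared against SciPy's invgamma, whose a and scale arguments correspond to \alpha and \beta in this shape-scale parameterization; the parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import invgamma
from scipy.special import gamma as gamma_fn

def invgamma_pdf(x, alpha, beta):
    """f(x) = beta^alpha / Gamma(alpha) * x^(-alpha-1) * exp(-beta/x), x > 0."""
    x = np.asarray(x, dtype=float)
    return beta**alpha / gamma_fn(alpha) * x**(-alpha - 1) * np.exp(-beta / x)

alpha, beta = 3.0, 2.0   # illustrative values
xs = np.array([0.2, 0.5, 1.0, 3.0])

# SciPy's invgamma uses the same shape-scale form: a = alpha, scale = beta
assert np.allclose(invgamma_pdf(xs, alpha, beta),
                   invgamma.pdf(xs, a=alpha, scale=beta))

# the density peaks at the mode beta / (alpha + 1)
mode = beta / (alpha + 1)
assert invgamma_pdf(mode, alpha, beta) > invgamma_pdf(mode - 1e-3, alpha, beta)
assert invgamma_pdf(mode, alpha, beta) > invgamma_pdf(mode + 1e-3, alpha, beta)
```

Note that SciPy also accepts a loc shift, which is left at its default of zero to match the form above.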

Cumulative distribution function

The cumulative distribution function (CDF) of the inverse-gamma distribution with shape \alpha > 0 and scale \beta > 0 is F(x; \alpha, \beta) = \frac{\Gamma\left(\alpha, \frac{\beta}{x}\right)}{\Gamma(\alpha)}, \quad x > 0, where \Gamma(s) denotes the gamma function and \Gamma(s, z) = \int_z^\infty t^{s-1} e^{-t} \, dt is the upper incomplete gamma function. This formulation relies on the upper incomplete gamma function, which quantifies the tail probability of the gamma distribution and enables the CDF to capture the cumulative probability over (0, x]. Numerical evaluation of F(x) typically involves series expansions, continued fractions, or other approximations for the incomplete gamma function, as implemented in mathematical software libraries; such routines typically achieve close to full double-precision relative accuracy (about 14 significant digits) over a wide range of parameters. As x \to 0^+, \beta/x \to \infty and \Gamma(\alpha, \beta/x) \to 0, so F(x) \to 0; conversely, as x \to \infty, \beta/x \to 0 and \Gamma(\alpha, \beta/x) \to \Gamma(\alpha), yielding F(x) \to 1. The survival function, the probability that the random variable exceeds x, is 1 - F(x) = \gamma(\alpha, \beta/x)/\Gamma(\alpha), where \gamma(s, z) = \int_0^z t^{s-1} e^{-t} \, dt is the lower incomplete gamma function.
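The CDF and survival function can be evaluated through SciPy's regularized incomplete gamma routines (gammaincc for the upper tail, gammainc for the lower); a brief sketch with illustrative parameters:

```python
import numpy as np
from scipy.stats import invgamma
from scipy.special import gammainc, gammaincc

alpha, beta = 2.5, 1.0   # illustrative values

def invgamma_cdf(x, alpha, beta):
    """F(x) = Gamma(alpha, beta/x) / Gamma(alpha) via the regularized upper
    incomplete gamma function Q(a, z), exposed by SciPy as gammaincc."""
    return gammaincc(alpha, beta / np.asarray(x, dtype=float))

xs = np.array([0.1, 0.5, 1.0, 5.0])
assert np.allclose(invgamma_cdf(xs, alpha, beta),
                   invgamma.cdf(xs, a=alpha, scale=beta))

# survival function 1 - F(x) = P(a, beta/x), the regularized lower incomplete gamma
assert np.allclose(1 - invgamma_cdf(xs, alpha, beta), gammainc(alpha, beta / xs))

# limiting behavior: F -> 0 as x -> 0+ and F -> 1 as x -> infinity
assert invgamma_cdf(1e-8, alpha, beta) < 1e-12
assert invgamma_cdf(1e8, alpha, beta) > 1 - 1e-6
```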

Properties

Moments

The moments of the inverse-gamma distribution are derived from its probability density function via integration, yielding expressions in terms of the gamma function. The nth raw moment is given by \mu_n' = \mathbb{E}[X^n] = \beta^n \frac{\Gamma(\alpha - n)}{\Gamma(\alpha)} for \alpha > n, where \Gamma denotes the gamma function. This formula arises by substituting the density into the moment integral \int_0^\infty x^n f(x) \, dx and recognizing, after the substitution t = \beta / x, an integral of gamma-function form. For n \geq \alpha, the moment does not exist, reflecting the heavy-tailed nature of the distribution when the shape parameter is small. As \alpha approaches n from above, the moments grow large, indicating increasing variability near the boundary. The first raw moment is the mean, \mathbb{E}[X] = \frac{\beta}{\alpha - 1} provided \alpha > 1; otherwise, the heavy right tail causes the expected value to diverge. The second raw moment is \mathbb{E}[X^2] = \frac{\beta^2}{(\alpha - 1)(\alpha - 2)} for \alpha > 2. The variance follows as \mathrm{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2 = \frac{\beta^2}{(\alpha - 1)^2 (\alpha - 2)}, also requiring \alpha > 2 for finiteness. When 1 < \alpha \leq 2, the variance is infinite despite a finite mean, characteristic of distributions with power-law tails. These expressions are standard for the parameterization where the PDF is \frac{\beta^\alpha}{\Gamma(\alpha)} x^{-\alpha-1} e^{-\beta/x} for x > 0. Higher-order standardized moments quantify the asymmetry and tail heaviness. The skewness is \gamma_1 = \frac{\mathbb{E}[(X - \mathbb{E}[X])^3]}{(\mathrm{Var}(X))^{3/2}} = \frac{4 \sqrt{\alpha - 2}}{\alpha - 3} for \alpha > 3; the third central moment diverges for \alpha \leq 3, emphasizing the right-skewed, heavy-tailed behavior. As \alpha increases beyond 3, skewness decreases toward zero, approaching symmetry for large shape parameters.
The kurtosis (fourth standardized central moment) is \gamma_2 = \frac{\mathbb{E}[(X - \mathbb{E}[X])^4]}{(\mathrm{Var}(X))^2} = 3 \frac{(\alpha - 2)(\alpha + 5)}{(\alpha - 3)(\alpha - 4)} for \alpha > 4, with the excess kurtosis being \gamma_2 - 3 = 6(5\alpha - 11)/((\alpha - 3)(\alpha - 4)). For 3 < \alpha \leq 4, the fourth moment is infinite, resulting in leptokurtic tails; the excess kurtosis approaches zero (the mesokurtic value) as \alpha grows large. Near the boundary \alpha \to 4^+, the kurtosis tends to infinity, underscoring the distribution's sensitivity to low shape values in applications like variance modeling.
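A short numerical check of these formulas, comparing the closed-form raw moments against direct integration and SciPy's built-in moment calculations; the parameter values are illustrative, with \alpha > 3 so the skewness exists.

```python
import numpy as np
from scipy.stats import invgamma
from scipy.special import gamma as gamma_fn
from scipy.integrate import quad

alpha, beta = 5.0, 2.0   # illustrative; alpha > 3 so the skewness is finite

def raw_moment(n, alpha, beta):
    """E[X^n] = beta^n * Gamma(alpha - n) / Gamma(alpha), valid for alpha > n."""
    return beta**n * gamma_fn(alpha - n) / gamma_fn(alpha)

# closed form vs direct numerical integration of x^n f(x)
pdf = lambda x: invgamma.pdf(x, a=alpha, scale=beta)
for n in (1, 2, 3):
    numeric, _ = quad(lambda x: x**n * pdf(x), 0, np.inf)
    assert abs(numeric - raw_moment(n, alpha, beta)) < 1e-6

mean = beta / (alpha - 1)                       # requires alpha > 1
var = beta**2 / ((alpha - 1)**2 * (alpha - 2))  # requires alpha > 2
skew = 4 * np.sqrt(alpha - 2) / (alpha - 3)     # requires alpha > 3
m, v, s = invgamma.stats(a=alpha, scale=beta, moments='mvs')
assert np.allclose([m, v, s], [mean, var, skew])
```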

Mode, median, and entropy

The mode of the inverse-gamma distribution, defined for shape parameter \alpha > 0 and scale parameter \beta > 0, is \frac{\beta}{\alpha + 1}, the value that maximizes the probability density function. This mode is unique, as the distribution is unimodal over the positive reals for all \alpha > 0. The median lacks a closed-form expression and is typically computed numerically or approximated by inverting the regularized incomplete gamma function, since the CDF is expressed through it. For special cases, such as \alpha = 1, tighter bounds on the median can be derived relative to the mode. The differential entropy of the inverse-gamma distribution is h(X) = \alpha + \log(\beta) + \log(\Gamma(\alpha)) - (\alpha + 1) \psi(\alpha), where \psi denotes the digamma function. This expression quantifies the average uncertainty in the distribution and arises from integrating the negative log-density weighted by the density itself. For \alpha > 1, the inverse-gamma distribution exhibits right skewness, resulting in the ordering mode < median < mean. As a member of the exponential family with sufficient statistics 1/x and \log x, the inverse-gamma distribution maximizes differential entropy among distributions on the positive reals with fixed \mathbb{E}[1/X] and fixed \mathbb{E}[\log X].
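These facts are easy to verify numerically; the sketch below (with illustrative parameters) computes the mode in closed form, inverts the CDF for the median, and checks the entropy formula against SciPy:

```python
import numpy as np
from scipy.stats import invgamma
from scipy.special import gammaln, digamma

alpha, beta = 3.0, 2.0   # illustrative values

mode = beta / (alpha + 1)                        # closed form
median = invgamma.ppf(0.5, a=alpha, scale=beta)  # numerical CDF inversion
mean = beta / (alpha - 1)                        # finite since alpha > 1

# right-skewed ordering
assert mode < median < mean

# differential entropy: alpha + ln(beta) + ln Gamma(alpha) - (alpha + 1) psi(alpha)
entropy = alpha + np.log(beta) + gammaln(alpha) - (alpha + 1) * digamma(alpha)
assert np.isclose(entropy, invgamma(a=alpha, scale=beta).entropy())
```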

Characteristic function

The characteristic function of the inverse-gamma distribution is its Fourier transform, enabling analytical investigations into properties such as convolutions and asymptotic behavior. For a random variable X following an inverse-gamma distribution with shape parameter \alpha > 0 and scale parameter \beta > 0, the characteristic function is given by \phi(t) = \mathbb{E}[e^{itX}] = \frac{2 (-i \beta t)^{\alpha/2}}{\Gamma(\alpha)} K_{\alpha}\left(2 \sqrt{-i \beta t}\right) for real t, where K_{\alpha}(\cdot) denotes the modified Bessel function of the second kind. This expression is derived by direct integration using the definition of the characteristic function and the probability density function of the inverse-gamma distribution. Substituting the density into the expectation yields the integral \phi(t) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \int_{0}^{\infty} x^{-\alpha-1} \exp\left(-\frac{\beta}{x} + itx\right) \, dx. This form matches a known integral representation of the modified Bessel function of the second kind, K_{\alpha}(z) = \frac{1}{2} \left(\frac{z}{2}\right)^{\alpha} \int_{0}^{\infty} u^{-\alpha-1} \exp\left(-u - \frac{z^{2}}{4u}\right) \, du, after appropriate substitution and simplification with z = 2\sqrt{-i\beta t}. The characteristic function facilitates analysis of convolutions for sums or linear combinations of inverse-gamma random variables, as the characteristic function of a sum of independent variables is the product of the individual characteristic functions, allowing numerical inversion to obtain the resulting density when closed forms are unavailable. It also supports investigations into limit theorems for heavy-tailed distributions like the inverse-gamma, where the behavior of \phi(t) for small t informs convergence properties. The moments of the distribution relate to the characteristic function through its expansion around t = 0, where derivatives at the origin yield the moments (when they exist): \mathbb{E}[X^n] = (-i)^n \frac{d^n}{dt^n} \phi(t) \big|_{t=0}.
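As a numerical cross-check of the Bessel-function form, one can compare it against direct integration of e^{itx} weighted by the density. The sketch below uses the principal branches for the complex power and square root, and SciPy's kv, which accepts complex arguments; the parameter values are illustrative.

```python
import numpy as np
from scipy.stats import invgamma
from scipy.special import kv, gamma as gamma_fn
from scipy.integrate import quad

alpha, beta = 3.0, 2.0   # illustrative values

def cf_bessel(t, alpha, beta):
    """phi(t) = 2 (-i beta t)^(alpha/2) / Gamma(alpha) * K_alpha(2 sqrt(-i beta t)),
    with principal branches for the complex power and square root."""
    z = -1j * beta * t
    return 2 * z**(alpha / 2) / gamma_fn(alpha) * kv(alpha, 2 * np.sqrt(z))

def cf_quad(t, alpha, beta):
    """E[exp(itX)] by direct numerical integration against the density."""
    pdf = lambda x: invgamma.pdf(x, a=alpha, scale=beta)
    re, _ = quad(lambda x: np.cos(t * x) * pdf(x), 0, np.inf, limit=200)
    im, _ = quad(lambda x: np.sin(t * x) * pdf(x), 0, np.inf, limit=200)
    return re + 1j * im

t = 0.7
assert abs(cf_bessel(t, alpha, beta) - cf_quad(t, alpha, beta)) < 1e-6
assert abs(cf_bessel(1e-9, alpha, beta) - 1.0) < 1e-6   # phi(0) = 1
```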

Sampling methods

The standard method for generating random variates from the inverse-gamma distribution with shape \alpha > 0 and scale \beta > 0 is transformation of a gamma variate. This approach exploits the relationship with the gamma distribution: first, sample Y from a gamma distribution with shape \alpha and scale 1/\beta, then set X = 1/Y. This transformation ensures that X follows the desired inverse-gamma distribution, as the CDF of the inverse-gamma is obtained from that of the gamma under the reciprocal mapping. In practice, this method is straightforward to implement in statistical software. For instance, in SAS/IML, random variates can be generated by calling the RAND function for the gamma distribution and taking the reciprocal, as in 1 / rand('Gamma', alpha, 1/beta). Similarly, Python's SciPy library implements invgamma.rvs(a=alpha, scale=beta) by internally sampling from the corresponding gamma distribution and computing the inverse. In R, the rinvgamma function from the invgamma package applies the same transformation using base R's gamma functions. Alternative sampling techniques, such as acceptance-rejection methods, may be used for efficiency in specific parameter regimes, particularly when the transformation approach is computationally burdensome. Tailored algorithms, including modifications to standard gamma generators, have been developed for cases with small \alpha or extreme \beta, where gamma generators rely on acceptance-rejection internally but require adjustments to handle the heavy tails of the inverse-gamma. Numerical considerations arise mainly for small \alpha or large \beta, where the gamma variate Y can be extremely small, leading to large X = 1/Y values that risk floating-point overflow or underflow. For example, the R invgamma package issues warnings for \alpha \leq 0.01 because of precision limits in the reciprocal step.
In such scenarios, log-space computations or rescaled parameters can mitigate these issues, though the probability of extreme values reflects the distribution's inherent heavy-tailed nature. To validate generated samples, compare empirical moments (e.g., the mean and variance) with the theoretical values \beta/(\alpha-1) and \beta^2 / [(\alpha-1)^2 (\alpha-2)], respectively, for \alpha > 2. Additionally, quantile-quantile (Q-Q) plots provide a visual check: plot sample quantiles against theoretical inverse-gamma quantiles; alignment along the 45-degree line indicates accurate sampling. For example, with \alpha=3, \beta=0.5, a Q-Q plot of 10,000 samples should show close adherence to the reference line, confirming fidelity to the distribution.
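A minimal sketch of the transformation sampler and the moment validation described above, using NumPy's gamma generator. The parameters differ slightly from the text's \alpha = 3 example (here \alpha = 5) so that the sample-variance check is well-behaved, and a Kolmogorov–Smirnov comparison against SciPy's invgamma stands in for the Q-Q inspection:

```python
import numpy as np
from scipy.stats import invgamma, kstest

rng = np.random.default_rng(42)
alpha, beta = 5.0, 0.5   # illustrative values

# transform a gamma variate: Y ~ Gamma(shape=alpha, scale=1/beta)  =>  X = 1/Y
y = rng.gamma(shape=alpha, scale=1.0 / beta, size=1_000_000)
x = 1.0 / y

# compare empirical moments with beta/(alpha-1) and beta^2/((alpha-1)^2 (alpha-2))
mean_theory = beta / (alpha - 1)                        # 0.125
var_theory = beta**2 / ((alpha - 1)**2 * (alpha - 2))   # ~0.00521
assert abs(x.mean() - mean_theory) < 1e-3
assert abs(x.var() - var_theory) < 1e-3

# distributional check against SciPy's invgamma (a stand-in for a Q-Q plot)
stat, pvalue = kstest(x[:10_000], invgamma(a=alpha, scale=beta).cdf)
assert pvalue > 1e-4
```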

Derivation from the gamma distribution

The inverse-gamma distribution arises as the distribution of the reciprocal of a gamma-distributed random variable. Specifically, if Y \sim \text{Gamma}(\alpha, \beta') where the gamma distribution is parameterized with shape \alpha > 0 and rate parameter \beta' > 0, then X = 1/Y follows an inverse-gamma distribution with shape \alpha and scale \beta = \beta'. To derive the probability density function (PDF) of X, apply the change-of-variables formula, accounting for the Jacobian. The PDF of Y is f_Y(y) = \frac{(\beta')^\alpha}{\Gamma(\alpha)} y^{\alpha-1} e^{-\beta' y}, \quad y > 0. Under the transformation x = 1/y, or y = 1/x, the derivative is dy/dx = -1/x^2, so the absolute value of the Jacobian is |dy/dx| = 1/x^2. Substituting yields f_X(x) = f_Y(1/x) \cdot \left| \frac{dy}{dx} \right| = \frac{(\beta')^\alpha}{\Gamma(\alpha)} (1/x)^{\alpha-1} e^{-\beta' (1/x)} \cdot \frac{1}{x^2}, \quad x > 0. Simplifying the expression, f_X(x) = \frac{(\beta')^\alpha}{\Gamma(\alpha)} x^{-\alpha+1} e^{-\beta'/x} \cdot x^{-2} = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{-\alpha-1} e^{-\beta/x}, where \beta = \beta' is the scale parameter of the inverse-gamma distribution. This confirms the standard form of the inverse-gamma PDF. The moments of the inverse-gamma distribution preserve certain relations from the original gamma under inversion. For instance, the expected value E[X] = E[1/Y] for Y \sim \text{Gamma}(\alpha, \beta') with \alpha > 1 is E[1/Y] = \frac{\beta'}{\alpha - 1}, which aligns with the mean of the inverse-gamma, E[X] = \beta / (\alpha - 1), under the scale parameterization \beta = \beta'. This relation holds because the r-th negative moment of the gamma distribution is E[Y^{-r}] = (\beta')^r \Gamma(\alpha - r) / \Gamma(\alpha) for \alpha > r, and for r = 1, it simplifies using the property \Gamma(\alpha) = (\alpha - 1) \Gamma(\alpha - 1). The inverse-gamma distribution was recognized in the statistical literature during the 20th century, particularly as Bayesian methods gained prominence for modeling scale parameters.
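Both the change-of-variables identity f_X(x) = f_Y(1/x)/x^2 and the negative-moment relation can be confirmed numerically; a sketch with illustrative shape and rate values:

```python
import numpy as np
from scipy.stats import gamma, invgamma

alpha, rate = 2.5, 1.5   # shape alpha, rate beta' of the underlying gamma
xs = np.linspace(0.1, 5.0, 50)

# change of variables: f_X(x) = f_Y(1/x) * |dy/dx| = f_Y(1/x) / x^2
f_from_gamma = gamma.pdf(1.0 / xs, a=alpha, scale=1.0 / rate) / xs**2
f_invgamma = invgamma.pdf(xs, a=alpha, scale=rate)
assert np.allclose(f_from_gamma, f_invgamma)

# negative moment: E[1/Y] = beta' / (alpha - 1) for alpha > 1
y = np.random.default_rng(0).gamma(shape=alpha, scale=1.0 / rate, size=1_000_000)
assert abs((1.0 / y).mean() - rate / (alpha - 1)) < 0.01
```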

Connections to other distributions

The inverse-gamma distribution is closely related to the scaled inverse chi-squared distribution, which is a specific reparameterization commonly used in Bayesian statistics for modeling variances. Specifically, an inverse-gamma random variable with shape parameter \alpha = \nu/2 and scale parameter \beta = \nu \sigma^2 / 2 follows the same distribution as \nu \sigma^2 times the inverse of a chi-squared random variable with \nu degrees of freedom. The Lévy distribution arises as a special case of the inverse-gamma distribution when the shape parameter is \alpha = 1/2 and the scale parameter is \beta = c/2, where c > 0 is the scale parameter of the Lévy distribution centered at zero. This connection highlights the heavy-tailed nature of the inverse-gamma family, as the Lévy distribution is a stable distribution with infinite mean and variance. The generalized inverse Gaussian (GIG) distribution serves as a broader extension of the inverse-gamma distribution within a three-parameter family. In the GIG parameterization with parameters p \in \mathbb{R}, a \geq 0, and b > 0, setting a = 0 and p < 0 yields the inverse-gamma distribution with \alpha = -p and \beta = b/2, where the probability density function reduces to the standard inverse-gamma form f(x) \propto x^{- \alpha - 1} \exp(-\beta / x) for x > 0. In hierarchical Bayesian models, the inverse-gamma distribution frequently emerges as a conjugate prior for variance parameters. For instance, when precision parameters follow gamma distributions in a conjugate setup, the corresponding variance components are inverse-gamma distributed, facilitating tractable posterior inference. Common reparameterizations of the inverse-gamma distribution involve switching between scale and rate parameters or expressing the distribution in terms of its mean and shape. The following table summarizes key equivalences, assuming the standard shape-scale form with density f(x; \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{-\alpha-1} \exp(-\beta / x) for x > 0:
Parameterization | Shape | Scale or derived parameter | Relation
Shape-scale | \alpha > 0 | \beta > 0 | Standard form
Mean-shape | \alpha > 0 | \mu = \beta / (\alpha - 1) (for \alpha > 1) | \beta = \mu (\alpha - 1); requires \alpha > 1 for a finite mean
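The scaled inverse chi-squared and Lévy special cases can be verified by comparing densities; the sketch below uses illustrative values of \nu, \sigma^2, and the Lévy scale:

```python
import numpy as np
from scipy.stats import invgamma, chi2, levy

xs = np.linspace(0.05, 10.0, 100)

# scaled inverse chi-squared: nu*sigma^2 / ChiSq(nu) has the same law as
# InvGamma(nu/2, nu*sigma^2/2); check via the density of W = c/V, V ~ ChiSq(nu)
nu, sigma2 = 6.0, 2.0
c = nu * sigma2
f_scaled_invchi2 = chi2.pdf(c / xs, df=nu) * c / xs**2
assert np.allclose(f_scaled_invchi2, invgamma.pdf(xs, a=nu / 2, scale=c / 2))

# the Levy distribution with scale c0 coincides with InvGamma(1/2, c0/2)
c0 = 1.5
assert np.allclose(levy.pdf(xs, scale=c0), invgamma.pdf(xs, a=0.5, scale=c0 / 2))
```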

Applications

Bayesian inference

The inverse-gamma distribution plays a central role in Bayesian inference as a conjugate prior for the variance σ² of a normal likelihood. This conjugacy arises because the posterior distribution for σ² remains inverse-gamma after observing data from a normal model, enabling closed-form updates without numerical approximation in simple cases. Specifically, for a prior σ² ∼ InverseGamma(α, β) and independent observations x₁, …, xₙ ∼ Normal(μ, σ²) with known μ, the posterior for σ² is InverseGamma(α + n/2, β + S/2), where S = ∑(xᵢ - μ)² is the sum of squared deviations from the mean. When μ is also unknown, this property extends to the normal-inverse-gamma conjugate family, which jointly parameterizes priors for μ and σ²: conditionally, μ | σ² ∼ Normal(μ₀, σ²/κ₀), and marginally, σ² ∼ InverseGamma(α, β). Upon observing the data, the posterior updates to Normal-InverseGamma with shape α' = α + n/2, β' = β + (1/2)∑(xᵢ - x̄)² + (κ₀ n / (2(κ₀ + n))) (μ₀ - x̄)², pseudo-count κ' = κ₀ + n, and location μ' = (κ₀ μ₀ + n x̄)/(κ₀ + n). This family is foundational for Bayesian analysis of normal data, providing interpretable posterior means and variances for both parameters. In hierarchical Bayesian models, the inverse-gamma prior is frequently applied to variance components, such as group-level variances in multilevel structures, due to its mathematical convenience despite known issues with heavy tails in low-data regimes. For instance, it models σⱼ² for subgroup j as InverseGamma(α, β), with hyperparameters potentially informed by upper-level priors, facilitating partial pooling of varying effects across groups. However, alternatives such as half-t or folded-normal priors have been proposed to mitigate pathologies in such settings.
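A minimal sketch of the inverse-gamma conjugate update for a normal variance (here with the mean treated as known, so S = ∑(xᵢ - μ)²); the data are simulated and the hyperparameters a0, b0 are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# simulated data: x_i ~ Normal(mu, sigma^2) with mu known
mu, true_sigma2 = 1.0, 4.0
x = rng.normal(mu, np.sqrt(true_sigma2), size=500)

# prior sigma^2 ~ InverseGamma(a0, b0); hyperparameters are illustrative
a0, b0 = 2.0, 2.0
n = len(x)
a_post = a0 + n / 2
b_post = b0 + 0.5 * np.sum((x - mu) ** 2)

# posterior mean of sigma^2 is b'/(a' - 1); it should land near true_sigma2
post_mean = b_post / (a_post - 1)
assert abs(post_mean - true_sigma2) < 1.0
```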
Post-2000 computational advances have integrated the inverse-gamma prior into Markov chain Monte Carlo (MCMC) methods, particularly Gibbs sampling, where the full conditional posteriors for variance parameters are often inverse-gamma, allowing efficient block updates in high-dimensional models such as hierarchical regressions or Gaussian processes. Similarly, in variational inference, approximations built from normal-inverse-gamma families enable scalable posterior estimation for large datasets, as in variational-Bayes linear regression, where the approximating family incorporates inverse-gamma marginals for the noise variance. These extensions have broadened its utility in modern Bayesian workflows beyond analytical conjugacy.
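A toy Gibbs sampler illustrating the inverse-gamma full conditional for a normal variance; the model (flat prior on the mean, InverseGamma(a0, b0) on the variance) and all parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy normal model x_i ~ Normal(mu, sigma^2); Gibbs alternates between
#   mu      | sigma^2, x ~ Normal(xbar, sigma^2 / n)            (flat prior on mu)
#   sigma^2 | mu, x      ~ InvGamma(a0 + n/2, b0 + sum((x - mu)^2)/2)
x = rng.normal(2.0, 1.5, size=400)
n, xbar = len(x), x.mean()
a0, b0 = 2.0, 2.0

mu, sigma2 = 0.0, 1.0
mu_draws, s2_draws = [], []
for it in range(3000):
    mu = rng.normal(xbar, np.sqrt(sigma2 / n))
    b_cond = b0 + 0.5 * np.sum((x - mu) ** 2)
    # inverse-gamma draw via the reciprocal of a gamma variate
    sigma2 = 1.0 / rng.gamma(shape=a0 + n / 2, scale=1.0 / b_cond)
    if it >= 500:  # discard burn-in
        mu_draws.append(mu)
        s2_draws.append(sigma2)

# posterior means should sit near the generating values (2.0 and 1.5^2)
assert abs(np.mean(mu_draws) - 2.0) < 0.3
assert abs(np.mean(s2_draws) - 1.5 ** 2) < 0.5
```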

Reliability engineering and other fields

In reliability engineering, the inverse gamma distribution is employed to model failure times and lifetimes exhibiting heavy-tailed behavior, particularly in scenarios involving gradual degradation such as corrosion or wear. For instance, the generalized inverse gamma distribution extends the standard form to better capture lifetime sub-models in these contexts, providing a flexible framework for analyzing failure-free operating times of devices under stress. This approach is particularly useful for systems where failure rates decrease over time after an initial wear-in phase, as the distribution's properties allow realistic representation of rare but extreme events such as accelerated corrosion in new machinery. In load-sharing models for multi-component systems, a discrete variant of the inverse gamma distribution enhances reliability predictions by accounting for interdependent failure mechanisms, such as setups where one component's failure redistributes load onto the others. This has been applied to simulate and quantify system reliability under varying operational loads, demonstrating improved accuracy over traditional models for heavy-tailed inter-failure times. Wireless communications leverage the inverse gamma distribution as a shadowing model for signal power variations, offering a superior fit to empirical data compared with lognormal or gamma distributions, as reported in measurement campaigns post-2010. Experimental validations show that it accurately describes the heavy-tailed statistics of shadow fading in composite channels, such as κ-μ/inverse gamma or η-μ/inverse gamma models, leading to improved outage probability calculations and network designs. For example, in shadowed line-of-sight scenarios, the distribution models the variability in dominant signal components due to obstructions, improving tractability in system analysis.
In time series analysis and mathematical finance, the inverse gamma distribution serves as a prior or stationary law for volatility parameters in GARCH-type models, capturing the heavy tails observed in asset return volatilities and enabling Bayesian inference in stochastic volatility modeling. The generalized inverse gamma, in particular, provides an excellent fit to historical volatility indices, where its power-law tails align with empirical steady-state behaviors derived from the underlying stochastic processes. This application facilitates more robust forecasting of market fluctuations, as seen in models where the instantaneous variance follows an inverse gamma steady-state distribution. In physics and inverse problems, generalized forms of the inverse gamma distribution model uncertainties in diffraction theory and related lifetime predictions, such as wave propagation through heterogeneous media. It proves effective for corrosion-related inverse problems in machinery, where heavy-tailed priors help infer hidden parameters from sparse observations, enhancing maintenance strategies. In machine learning, the inverse gamma distribution is used as a boundary-avoiding prior for scale hyperparameters in Gaussian processes, keeping posterior mass away from zero and improving stability in hyperparameter estimation for tasks involving noisy data. It also integrates well with posterior approximations in such models. Sampling from the inverse gamma can support simulation-based checks in these settings, though primarily for validation rather than core inference.
