
Inverse Gaussian distribution

The Inverse Gaussian distribution, also known as the Wald distribution, is a two-parameter continuous probability distribution supported on the positive real line (0, \infty), with probability density function
f(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left( -\frac{\lambda (x - \mu)^2}{2 \mu^2 x} \right),
where \mu > 0 is the mean parameter and \lambda > 0 is a shape parameter that controls the distribution's precision or skewness. Its mean is \mu, variance is \mu^3 / \lambda, skewness is 3 \sqrt{\mu / \lambda}, and excess kurtosis is 15 \mu / \lambda.
Originally derived in 1915 by Erwin Schrödinger as the distribution of the first passage time for a Brownian motion with positive drift to reach a fixed positive level, the distribution was independently obtained around the same time by Marian Smoluchowski in the context of particle diffusion. Maurice Tweedie formalized its statistical properties and proposed the name "inverse Gaussian" in the 1940s and 1950s, highlighting its inverse relationship to the Gaussian distribution via cumulant generating functions and its role as a member of the exponential dispersion family, specifically the Tweedie distributions with power parameter p = 3. This naming reflects the reciprocal connection between the time to cover a unit distance in drifted Brownian motion and the distance covered in unit time.
Key properties include its positive skewness, unimodal shape, and closure under certain transformations, such as the reciprocal of an inverse Gaussian following a reciprocal inverse Gaussian distribution. It belongs to the exponential family, facilitating its use in generalized linear models, and serves as the mixing distribution in normal variance-mean mixtures, which underlies extensions like the normal inverse Gaussian distribution for heavier-tailed data. Applications span reliability engineering, where it models lifetimes of mechanical systems subject to wear; sequential analysis, where it appears as the Wald distribution in sequential probability ratio tests for hypothesis testing; insurance, for claim size modeling with positive skew; and physics, including pollen particle motion in fluids and wind speed distributions in energy forecasting. Its tractable moments and sampling methods, such as rejection sampling or the algorithm of Michael, Schucany, and Haas, make it computationally appealing for simulation and inference.

Fundamentals

Definition and Parameters

The inverse Gaussian distribution is a two-parameter family of continuous probability distributions defined on the positive real numbers, commonly applied to model positively skewed data, such as first passage times in stochastic processes like Brownian motion with positive drift. The name reflects an inverse relationship to the Gaussian: while the Gaussian describes the distance covered by drifted Brownian motion in a fixed time, the inverse Gaussian describes the time required to cover a fixed distance. The distribution is parameterized by two positive real numbers: \mu > 0, which serves as a location-like parameter representing the mean, and \lambda > 0, a shape parameter that influences the concentration and tail behavior. The support of the distribution is x > 0. The standard probability density function is given by f(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left( -\frac{\lambda (x - \mu)^2}{2 \mu^2 x} \right), for x > 0, as introduced by Tweedie. The parameter \mu directly equals the mean of the distribution, E[X] = \mu, while the variance is \mathrm{Var}(X) = \mu^3 / \lambda, indicating that larger values of \lambda reduce the relative spread. In the limit of large \lambda, the distribution approximates a normal distribution centered at \mu with mode near \mu, and \lambda governs the heaviness of the right tail, with smaller \lambda leading to greater skewness.
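
As a quick sanity check on these formulas, the following minimal Python sketch (assuming NumPy and SciPy are available; ig_pdf is an illustrative helper, not a library function) evaluates the density directly and confirms the stated mean and variance by numerical integration:

import numpy as np
from scipy import integrate

def ig_pdf(x, mu, lam):
    # Density of IG(mu, lam) exactly as written above; valid for x > 0.
    return np.sqrt(lam / (2 * np.pi * x**3)) * np.exp(-lam * (x - mu)**2 / (2 * mu**2 * x))

mu, lam = 2.0, 3.0
mean, _ = integrate.quad(lambda x: x * ig_pdf(x, mu, lam), 0, np.inf)
m2, _ = integrate.quad(lambda x: x**2 * ig_pdf(x, mu, lam), 0, np.inf)
print(mean, mu)                   # both approximately 2.0
print(m2 - mean**2, mu**3 / lam)  # both approximately 2.667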

Probability Density Function

The probability density function (PDF) of the inverse Gaussian arises as the density of the first passage time \tau_a = \inf\{t > 0 : X(t) \geq a\} for a Brownian motion with positive drift X(t) = \nu t + \sigma W(t), where \nu > 0 is the drift, \sigma > 0 is the diffusion coefficient, W(t) is a standard Wiener process, and a > 0 is the barrier level. Using the reflection principle for Brownian paths, the PDF of \tau_a is derived as f(t) = \frac{a}{\sigma \sqrt{2\pi t^3}} \exp\left( -\frac{(a - \nu t)^2}{2\sigma^2 t} \right), \quad t > 0. This form reflects the Gaussian nature of increments combined with the t^{-3/2} scaling from the first passage geometry. Reparameterizing via \mu = a / \nu (mean) and \lambda = a^2 / \sigma^2 (shape) yields the standard PDF f(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left( -\frac{\lambda (x - \mu)^2}{2 \mu^2 x} \right), \quad x > 0, with \mu > 0 and \lambda > 0. This reparameterization highlights the inverse relationship to Gaussian processes, as originally studied in this context. The PDF is unimodal and positively skewed, with the mode located at m = \mu \left[ \sqrt{1 + \frac{9\mu^2}{4\lambda^2}} - \frac{3\mu}{2\lambda} \right], which lies strictly between 0 and \mu. For fixed \mu, increasing \lambda shifts the mode rightward toward \mu and narrows the distribution, reflecting reduced variance \mu^3 / \lambda; conversely, small \lambda produces a sharp peak near 0 with a heavy right tail. Graphically, the PDF resembles a right-tailed bell curve, starting at 0, rising sharply to the mode, and decaying gradually, with the tail becoming more pronounced as \lambda decreases relative to \mu. Asymptotically, as x \to 0^+, the PDF behaves like x^{-3/2} modulated by \exp(-\lambda / (2x)), leading to rapid decay to 0 despite the singularity in the prefactor. For large x, it exhibits exponential decay \sim \exp(-\lambda x / (2\mu^2)), characteristic of light-tailed behavior on the right. The distribution belongs to the exponential family; its density is unimodal, and the associated hazard function rises to a single maximum before declining, a property exploited in reliability analysis.
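
The closed-form mode can be verified numerically. The sketch below (an illustrative check assuming NumPy and SciPy; ig_logpdf is a hypothetical helper) compares the formula against direct maximization of the log-density:

import numpy as np
from scipy import optimize

def ig_logpdf(x, mu, lam):
    # Log-density of IG(mu, lam) for x > 0.
    return 0.5 * np.log(lam / (2 * np.pi * x**3)) - lam * (x - mu)**2 / (2 * mu**2 * x)

mu, lam = 1.0, 0.5
mode = mu * (np.sqrt(1 + 9 * mu**2 / (4 * lam**2)) - 3 * mu / (2 * lam))  # closed form
res = optimize.minimize_scalar(lambda x: -ig_logpdf(x, mu, lam),
                               bounds=(1e-9, 10 * mu), method="bounded")
print(mode, res.x)  # both approximately 0.162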

Cumulative Distribution Function

The cumulative distribution function (CDF) of the inverse Gaussian distribution IG(\mu, \lambda) with mean parameter \mu > 0 and shape parameter \lambda > 0 is F(x; \mu, \lambda) = \Phi\left( \sqrt{\frac{\lambda}{x}} \left( \frac{x}{\mu} - 1 \right) \right) + e^{2\lambda / \mu} \Phi\left( - \sqrt{\frac{\lambda}{x}} \left( \frac{x}{\mu} + 1 \right) \right), for x > 0, where \Phi denotes the CDF of the standard normal distribution, and F(x; \mu, \lambda) = 0 for x \leq 0. This representation, which expresses the CDF in terms of the normal CDF, was first derived by Shuster (1968) using a transformation of the integral form of the CDF. The CDF can be obtained by direct integration of the corresponding probability density function (PDF), f(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left( -\frac{\lambda (x - \mu)^2}{2 \mu^2 x} \right), for x > 0. The integration involves a substitution, such as letting z = \sqrt{\frac{\lambda}{x}} \left( \frac{x}{\mu} - 1 \right), which simplifies the exponent and leads to terms that integrate to the normal CDF after completing the square and handling the boundary contributions. Alternatively, the CDF arises naturally from the connection to stochastic processes: the inverse Gaussian distribution models the first passage time to a fixed level in a Brownian motion with positive drift, where the CDF can be derived using the reflection principle for diffusion processes. The survival function, or complementary CDF, is S(x; \mu, \lambda) = 1 - F(x; \mu, \lambda), which simplifies to S(x; \mu, \lambda) = \Phi\left( - \sqrt{\frac{\lambda}{x}} \left( \frac{x}{\mu} - 1 \right) \right) - e^{2\lambda / \mu} \Phi\left( - \sqrt{\frac{\lambda}{x}} \left( \frac{x}{\mu} + 1 \right) \right). This form highlights the tail behavior: the shape parameter \lambda controls the precision, and low \lambda values result in a heavy right tail due to the increased variance \mu^3 / \lambda, making large x more probable relative to the mean \mu. The quantile function, defined as the inverse F^{-1}(p; \mu, \lambda) for 0 < p < 1, lacks a closed-form solution and requires numerical methods for evaluation, such as bisection or Newton-Raphson iteration applied to the CDF equation.
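
Shuster's representation is straightforward to implement. The sketch below (assuming SciPy, whose invgauss parameterizes IG(\mu, \lambda) as invgauss(mu/lambda, scale=lambda); ig_cdf is an illustrative helper) checks it against a library implementation:

import numpy as np
from scipy import stats

def ig_cdf(x, mu, lam):
    # Shuster's normal-CDF form of the IG(mu, lam) CDF, for x > 0.
    z = np.sqrt(lam / x)
    return (stats.norm.cdf(z * (x / mu - 1))
            + np.exp(2 * lam / mu) * stats.norm.cdf(-z * (x / mu + 1)))

mu, lam = 1.5, 2.0
x = np.array([0.5, 1.0, 2.0, 5.0])
print(ig_cdf(x, mu, lam))
print(stats.invgauss.cdf(x, mu / lam, scale=lam))  # should agree closely

Quantiles can then be obtained numerically, for example with scipy.optimize.brentq applied to F(x) - p, or via stats.invgauss.ppf.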

Moments and Characteristics

Mean, Variance, and Skewness

The mean of an inverse Gaussian random variable X \sim \text{IG}(\mu, \lambda) is E[X] = \mu, where \mu > 0 is the mean parameter. The variance is \text{Var}(X) = \mu^3 / \lambda, with \lambda > 0 serving as the shape parameter that controls dispersion. These moments can be derived by direct integration of the density or via the moment generating function M(t) = \exp\left(\frac{\lambda}{\mu} \left(1 - \sqrt{1 - \frac{2\mu^2 t}{\lambda}}\right)\right) for t < \lambda/(2\mu^2). The skewness is \gamma_1 = 3 \sqrt{\mu / \lambda}, which is always positive and increases as the ratio \mu / \lambda grows, reflecting the distribution's inherent right-tailed asymmetry. The excess kurtosis is \gamma_2 = 15 \mu / \lambda, indicating leptokurtosis that becomes more pronounced with larger \mu / \lambda, meaning heavier tails than a normal distribution. These moments capture the positive skew and peakedness typical of first passage times in Brownian motion with positive drift, where longer times remain probable because of the process's diffusive nature, leading to applications in reliability and survival analysis. The mean and variance also underpin method-of-moments estimation for the parameters.
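
These formulas are easy to confirm by simulation. The following sketch (assuming NumPy and SciPy; parameters and seed are illustrative) compares sample moments of inverse Gaussian draws with the theoretical values:

import numpy as np
from scipy import stats

mu, lam = 2.0, 5.0
rng = np.random.default_rng(0)
x = rng.wald(mu, lam, size=200_000)  # NumPy's Wald sampler draws from IG(mean, shape)

print(x.mean(), mu)                          # approximately 2.0
print(x.var(), mu**3 / lam)                  # approximately 1.6
print(stats.skew(x), 3 * np.sqrt(mu / lam))  # approximately 1.90
print(stats.kurtosis(x), 15 * mu / lam)      # excess kurtosis, approximately 6.0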

Characteristic Function

The characteristic function of a random variable X following an inverse Gaussian distribution with parameters \mu > 0 and \lambda > 0, denoted X \sim IG(\mu, \lambda), is given by \psi(t) = \mathbb{E}[e^{itX}] = \exp\left( \frac{\lambda}{\mu} \left(1 - \sqrt{1 - \frac{2i\mu^2 t}{\lambda}} \right) \right), where i is the imaginary unit and t \in \mathbb{R}. This expression can be derived as the Fourier transform of the density of the inverse Gaussian distribution, which involves integrating \int_0^\infty e^{itx} f(x; \mu, \lambda) \, dx with f(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left( -\frac{\lambda (x - \mu)^2}{2\mu^2 x} \right) for x > 0. Alternatively, it arises from the representation of the inverse Gaussian as the first passage time distribution of a Brownian motion with drift \nu = 1/\mu and variance parameter \sigma^2 = 1/\lambda hitting a barrier at level a = 1, leveraging the known Laplace transform for such hitting times. The logarithm of the characteristic function, \log \psi(t), serves as the cumulant-generating function, from which all moments of the distribution can be obtained by successive differentiation with respect to t and evaluation at t = 0. For instance, the first cumulant yields the mean \mu, and the second cumulant yields the variance \mu^3 / \lambda. The form of \psi(t) implies closure under convolution for sums of independent inverse Gaussian random variables with a common ratio \mu^2/\lambda, as the characteristic function of the sum is the product of the individual characteristic functions, preserving the inverse Gaussian family. Unlike the characteristic function of the normal distribution, which is \exp(i\mu t - \frac{1}{2}\sigma^2 t^2) and corresponds to a symmetric density, the inverse Gaussian's \psi(t) exhibits asymmetry reflective of its positive support and skewness, with the square-root term introducing a branch cut in the complex plane; like the gamma and stable laws, the inverse Gaussian is infinitely divisible, facilitating Lévy process representations.
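
As an illustration, the closed-form characteristic function can be checked against an empirical average of e^{itX} over simulated draws (a minimal sketch assuming NumPy; ig_cf is a hypothetical helper):

import numpy as np

def ig_cf(t, mu, lam):
    # Characteristic function of IG(mu, lam); principal branch of the square root.
    return np.exp(lam / mu * (1 - np.sqrt(1 - 2j * mu**2 * t / lam)))

mu, lam = 1.0, 2.0
rng = np.random.default_rng(1)
x = rng.wald(mu, lam, size=100_000)
for t in (0.5, 1.0, 3.0):
    print(ig_cf(t, mu, lam), np.exp(1j * t * x).mean())  # close agreement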

Distributional Properties

Summation of Independent Variables

The sum of n independent and identically distributed random variables X_1, \dots, X_n, each following an inverse Gaussian distribution \mathrm{IG}(\mu, \lambda), itself follows an inverse Gaussian distribution with updated parameters \mathrm{IG}(n\mu, n^2\lambda). This reproductive property arises from the multiplicative nature of characteristic functions for sums of independent random variables. Specifically, the characteristic function of an \mathrm{IG}(\mu, \lambda) random variable is given by \phi(t) = \exp\left\{ \frac{\lambda}{\mu} \left( 1 - \sqrt{1 - \frac{2 i t \mu^2}{\lambda}} \right) \right\}, and for the sum, the product of n such functions simplifies to the characteristic function of \mathrm{IG}(n\mu, n^2\lambda), confirming the closed-form result. For the more general case of independent but non-identically distributed inverse Gaussian random variables X_j \sim \mathrm{IG}(\mu_j, \lambda_j) for j = 1, \dots, n, the sum \sum_{j=1}^n X_j follows an inverse Gaussian distribution if and only if the ratio \mu_j^2 / \lambda_j is constant across all j, equal to some c > 0. Under this condition, the sum is distributed as \mathrm{IG}\left( \sum_{j=1}^n \mu_j, \left( \sum_{j=1}^n \mu_j \right)^2 / c \right). The derivation follows from the product of the individual characteristic functions, which takes the inverse Gaussian form precisely when the ratios \mu_j^2 / \lambda_j align to yield a common argument inside the square root. Without this constancy condition, the distribution of the sum is obtained via convolution of the individual densities, which generally lacks a simple closed form. The inverse Gaussian distribution is infinitely divisible, meaning that for any n, it can be represented as the sum of n independent and identically distributed positive random variables. This property underpins its utility in modeling processes requiring decomposition into finer components, such as in renewal theory, where sums of inter-arrival times correspond to cumulative waiting times in diffusion-based renewal processes.
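
A brief simulation illustrates the reproductive property (a sketch assuming NumPy and SciPy, with scipy's invgauss(m, scale=s) corresponding to IG(m*s, s); sizes and seed are illustrative):

import numpy as np
from scipy import stats

mu, lam, n = 1.0, 2.0, 5
rng = np.random.default_rng(2)
sums = rng.wald(mu, lam, size=(100_000, n)).sum(axis=1)  # sums of n iid IG(mu, lam)
# Theoretical law of the sum: IG(n*mu, n^2*lam).
target = stats.invgauss(n * mu / (n**2 * lam), scale=n**2 * lam)
print(stats.kstest(sums, target.cdf))  # a large p-value indicates agreement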

Scaling and Location Transformations

The Inverse Gaussian distribution exhibits specific behavior under affine transformations, which is useful for understanding its flexibility in modeling scaled or shifted phenomena. Consider a random variable X \sim \text{IG}(\mu, \lambda), where \mu > 0 is the mean parameter and \lambda > 0 is the shape parameter, with density f(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left( -\frac{\lambda (x - \mu)^2}{2 \mu^2 x} \right), \quad x > 0. For scaling, let Y = cX where c > 0. The mean of Y is E[Y] = c\mu, and the variance is \text{Var}(Y) = c^2 \mu^3 / \lambda = (c\mu)^3 / (c\lambda), which matches the form for an Inverse Gaussian with updated parameters. To derive this via PDF substitution, the density of Y is g(y) = \frac{1}{c} f\left(\frac{y}{c}; \mu, \lambda\right). Simplifying the exponent gives \lambda ((y/c) - \mu)^2 / (2 \mu^2 (y/c)) = (c\lambda) (y - c\mu)^2 / (2 (c\mu)^2 y), and the prefactor becomes \sqrt{(c\lambda)/(2\pi y^3)}, yielding g(y) = \sqrt{(c\lambda)/(2\pi y^3)} \exp\left( -(c\lambda) (y - c\mu)^2 / (2 (c\mu)^2 y ) \right). Thus, Y \sim \text{IG}(c\mu, c\lambda). For location transformations, consider Z = X + a where a > 0. The Inverse Gaussian family is not closed under such shifts: the density of Z is f_Z(z) = f(z - a; \mu, \lambda) for z > a, which defines a three-parameter shifted variant rather than another Inverse Gaussian, owing to the altered support and the 1/x terms in the original PDF. Saddlepoint methods or numerical techniques are typically employed for practical computations in shifted scenarios. A single-parameter form arises when fixing the variance \sigma^2, leading to \lambda = \mu^3 / \sigma^2. Here, the distribution is parameterized solely by \mu, with the shape adjusted to maintain constant variance, which is useful in applications requiring variance stabilization, such as certain stochastic processes. Alternatively, setting \mu = 1 yields the standard Wald distribution, a one-parameter case varying only \lambda. This form simplifies analysis in contexts like first-passage times. The reciprocal transformation W = 1/X follows a reciprocal Inverse Gaussian distribution, a special case of the generalized Inverse Gaussian with index p = 1/2 (the Inverse Gaussian itself corresponds to p = -1/2). In the limit as \mu \to \infty, the Inverse Gaussian connects to the Lévy distribution, relevant for hitting times of driftless Brownian motion. The derivation follows from substituting w = 1/x into the PDF, yielding h(w) = \frac{1}{w^2} f\left(\frac{1}{w}; \mu, \lambda\right) = \sqrt{\frac{\lambda}{2\pi}} w^{-1/2} \exp\left( -\frac{\lambda (1 - \mu w)^2}{2 \mu^2 w} \right), \quad w > 0, which matches the reciprocal form after reparameterization.
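
The scaling closure is likewise easy to verify by simulation (a sketch assuming NumPy and SciPy); note that scipy's shape parameter \mu/\lambda is invariant under the map (\mu, \lambda) \to (c\mu, c\lambda):

import numpy as np
from scipy import stats

mu, lam, c = 1.0, 3.0, 2.5
rng = np.random.default_rng(3)
y = c * rng.wald(mu, lam, size=100_000)           # Y = cX with X ~ IG(mu, lam)
target = stats.invgauss(mu / lam, scale=c * lam)  # IG(c*mu, c*lam) in scipy form
print(stats.kstest(y, target.cdf))                # a large p-value indicates agreement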

Exponential Family Form

The inverse Gaussian distribution can be expressed as a member of the two-parameter exponential family, providing a parameterization that highlights its sufficient statistics and natural parameters for statistical modeling. The density is rewritten in the form
f(x \mid \eta) = h(x) \exp\left( \eta^\top T(x) - A(\eta) \right),
where h(x) = (2\pi x^3)^{-1/2} is the base measure for x > 0, T(x) = (x, 1/x) is the vector of sufficient statistics, \eta = (\eta_1, \eta_2) is the vector of natural parameters with \eta_1 = -\lambda/(2\mu^2) and \eta_2 = -\lambda/2, and A(\eta) = -2\sqrt{\eta_1 \eta_2} - \frac{1}{2} \log(-2\eta_2) is the log-partition function. With both \mu and \lambda free, the natural parameters range over the open negative quadrant, so the inverse Gaussian constitutes a full two-parameter exponential family; one-parameter subfamilies obtained by constraining \mu and \lambda (for example, fixing their ratio) are generally curved exponential families, and inference must account for the restricted parameter space.
This exponential family representation is particularly useful in generalized linear models (GLMs), where the inverse Gaussian serves as the response distribution with mean \mu > 0 and dispersion parameter \phi = 1/\lambda. The canonical link function is g(\mu) = 1/\mu^2, which directly relates the linear predictor \eta = X^\top \beta to the natural parameter associated with the mean, ensuring that the score equations depend on the data only through the sufficient statistics and simplifying iterative algorithms like iteratively reweighted least squares. In this framework, the variance function is V(\mu) = \mu^3, reflecting the distribution's heteroscedasticity. For model diagnostics in inverse Gaussian GLMs, the deviance measures goodness-of-fit as D = \sum_i (y_i - \hat{\mu}_i)^2 / (y_i \hat{\mu}_i^2), which under the null follows an approximate chi-squared distribution once scaled by the dispersion estimated from the data. Deviance residuals are defined as r_{D,i} = \operatorname{sign}(y_i - \hat{\mu}_i) \sqrt{ (y_i - \hat{\mu}_i)^2 / (y_i \hat{\mu}_i^2) }, providing a signed measure of discrepancy useful for residual plots and outlier detection, while Pearson residuals r_{P,i} = (y_i - \hat{\mu}_i) / \sqrt{\hat{\phi} \hat{\mu}_i^3} assess fit against the variance structure. The exponential family form aids inference by concentrating the likelihood through the sufficient statistics T(x) = (x, 1/x), reducing dimensionality for large samples and enabling closed-form updates in GLM fitting. For Bayesian methods, it supports conjugate priors on the natural parameters, facilitating posterior sampling via techniques like variational inference or MCMC, and leverages the family's regularity for asymptotic approximations in credible intervals and hypothesis testing.
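
A minimal inverse Gaussian GLM sketch with the canonical 1/\mu^2 link, assuming the statsmodels package is available (the simulated design, coefficients, shape value, and seed are illustrative only):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
X = sm.add_constant(rng.uniform(0.5, 2.0, size=n))
eta = X @ np.array([0.5, 1.0])   # linear predictor, kept positive by construction
mu_true = 1.0 / np.sqrt(eta)     # inverse of the canonical link 1/mu^2
y = rng.wald(mu_true, 4.0)       # IG responses with shape lambda = 4

res = sm.GLM(y, X, family=sm.families.InverseGaussian()).fit()
print(res.params)              # approximately (0.5, 1.0)
print(res.deviance)            # sum of (y - mu_hat)^2 / (y * mu_hat^2)
print(res.resid_deviance[:5])  # deviance residuals as defined above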

Stochastic Process Connections

Relation to Brownian Motion

The inverse Gaussian distribution emerges naturally in the study of stochastic processes as the probability distribution governing the first passage time of a Brownian motion with positive drift to a fixed positive barrier. Consider a one-dimensional Brownian motion process defined by X_t = \nu t + \sigma W_t, where W_t is a standard Wiener process (with W_0 = 0), \nu > 0 denotes the constant positive drift, \sigma > 0 is the diffusion coefficient, and the process starts at X_0 = 0. The first passage time \tau_a = \inf\{t \geq 0 : X_t = a\} to a barrier level a > 0 follows an inverse Gaussian distribution with mean parameter \mu = a / \nu and shape parameter \lambda = a^2 / \sigma^2. This connection provides a probabilistic foundation for the inverse Gaussian, linking it directly to the dynamics of particles or assets subject to random fluctuations with a systematic trend. The distribution can be derived using the reflection principle adapted for drifted paths, or by solving the Kolmogorov forward (Fokker-Planck) equation for the transition density with an absorbing boundary at a. In the reflection approach, the probability of paths reaching a by time t accounts for the drift through an exponential tilting of the no-drift case, yielding the inverse Gaussian density after differentiation of the cumulative distribution. Alternatively, the Kolmogorov forward equation \partial_t p(x,t) = -\nu \partial_x p(x,t) + \frac{\sigma^2}{2} \partial_{xx} p(x,t), with absorbing condition p(a,t) = 0 and initial condition p(x,0) = \delta(x), integrates to the same result via the method of images or Laplace transforms. The historical origins trace back to Erwin Schrödinger's 1915 derivation of this first passage time density in the context of gravitational fall experiments modeled by drifted Brownian motion, predating the formal naming of the distribution. Schrödinger's work established the explicit form of the density, highlighting its role in quantifying the time for a particle to traverse a distance under combined deterministic and random forces. This seminal contribution was independently corroborated by Marian Smoluchowski around the same time, solidifying the inverse Gaussian's place in early stochastic modeling. In broader diffusion settings, the inverse Gaussian generalizes to first passage times across absorbing barriers in processes with linear drift, such as in option pricing or neuronal firing models, where the barrier represents an absorbing threshold. For intuitive understanding, simulations of drifted Brownian paths reveal that while some trajectories cross the barrier quickly due to the positive drift, others wander before absorption, producing the characteristic right-skewed shape of the inverse Gaussian density that matches empirical histograms.
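
The first passage connection can be illustrated by direct path simulation. The following sketch (assuming NumPy; step size, seed, and path count are illustrative, and the Euler discretization introduces a small positive bias in hitting times) compares simulated hitting times with the implied IG(\mu, \lambda) moments:

import numpy as np

nu, sigma, a = 1.0, 0.8, 2.0       # drift, diffusion, barrier
mu, lam = a / nu, a**2 / sigma**2  # implied IG parameters
dt, n_paths = 1e-3, 5_000

rng = np.random.default_rng(5)
t = np.zeros(n_paths)
x = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    k = alive.sum()
    x[alive] += nu * dt + sigma * np.sqrt(dt) * rng.standard_normal(k)
    t[alive] += dt
    alive &= x < a                 # freeze paths once they cross the barrier

print(t.mean(), mu)                # approximately 2.0
print(t.var(), mu**3 / lam)        # approximately 1.28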

Zero Drift Case

In the zero drift case of Brownian motion with unit variance, the first passage time to a positive level a > 0 follows the Lévy distribution, which arises as the limiting case of the inverse Gaussian distribution when the drift parameter \nu \to 0^+. This corresponds to letting the mean parameter \mu = a / \nu \to \infty in the inverse Gaussian \operatorname{IG}(\mu, \lambda) while fixing the shape parameter \lambda = a^2. The probability density function of the inverse Gaussian distribution is given by f(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left( -\frac{\lambda (x - \mu)^2}{2 \mu^2 x} \right), \quad x > 0. To derive the limiting density, expand the exponent: \frac{(x - \mu)^2}{\mu^2 x} = \frac{\mu^2 - 2\mu x + x^2}{\mu^2 x} = \frac{1}{x} - \frac{2}{\mu} + \frac{x}{\mu^2}. As \mu \to \infty, only the 1/x term survives, so the exponent becomes -\lambda / (2x). The prefactor remains unchanged, yielding the limiting density f(x) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left( -\frac{\lambda}{2x} \right), \quad x > 0, which is the density of the Lévy distribution with location parameter 0 and scale parameter \lambda. A similar limiting procedure applies to the cumulative distribution function of the inverse Gaussian, confirming convergence to the Lévy distribution. In this limiting case, the moments exhibit special behavior: the mean and variance are both infinite, reflecting the heavy tail of the Lévy distribution. The Lévy distribution is a stable distribution with stability index \alpha = 1/2 and skewness parameter \beta = 1, belonging to the domain of attraction of stable laws with the same index. Additionally, as a stable distribution, it is infinitely divisible, allowing representation as the time-one marginal of a Lévy process.
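
Numerically, the convergence is visible already for moderate \mu. The sketch below (assuming SciPy, where levy(scale=lam) has exactly the limiting density above) tracks the sup-norm gap between the two densities:

import numpy as np
from scipy import stats

lam = 1.0
x = np.linspace(0.05, 6.0, 200)
levy_vals = stats.levy(scale=lam).pdf(x)
for mu in (2.0, 10.0, 100.0):
    ig_vals = stats.invgauss(mu / lam, scale=lam).pdf(x)
    print(mu, np.abs(ig_vals - levy_vals).max())  # gap shrinks as mu grows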

Parameter Estimation

Maximum Likelihood Estimation

The maximum likelihood estimator (MLE) for the mean parameter \mu of the inverse Gaussian distribution, based on an independent sample x_1, \dots, x_n, is the sample mean \hat{\mu} = \bar{x}. The MLE for the shape parameter \lambda is then \hat{\lambda} = n \left/ \sum_{i=1}^n \left( \frac{1}{x_i} - \frac{1}{\bar{x}} \right) \right.. These are obtained by maximizing the log-likelihood function \ell(\mu, \lambda; x_i) = \frac{n}{2} \log \lambda - \frac{1}{2} \sum_{i=1}^n \log(x_i^3) - \frac{\lambda}{2\mu^2} \sum_{i=1}^n \frac{(x_i - \mu)^2}{x_i} (up to an additive constant), which yields a system of score equations that decouple to provide the closed-form expressions above. Although the estimators are explicit, numerical optimization methods such as Fisher scoring may be employed in practice for robustness, particularly in extended models or when implementing in software. The inverse Gaussian belongs to the exponential family, which implies that the sufficient statistics \sum x_i and \sum 1/x_i underpin the MLEs. Asymptotically, as n \to \infty, the MLEs are normally distributed: \sqrt{n} (\hat{\theta} - \theta) \to N(0, I_1(\theta)^{-1}), where \theta = (\mu, \lambda) and the per-observation Fisher information matrix is diagonal, I_1(\theta) = \begin{pmatrix} \lambda/\mu^{3} & 0 \\ 0 & (2\lambda^2)^{-1} \end{pmatrix}. This yields asymptotic variances \mu^3 / (n\lambda) for \hat{\mu} and 2\lambda^2 / n for \hat{\lambda}; indeed, \operatorname{Var}(\bar{x}) = \mu^3/(n\lambda) holds exactly. The observed information matrix provides consistent estimates of these variances for finite n. For small samples, \hat{\mu} is unbiased, but \hat{\lambda} exhibits upward bias approximately equal to 3\lambda / n. An unbiased estimator is obtained by multiplying \hat{\lambda} by (n-3)/n for n > 3.
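
The closed-form MLEs and their corrections translate directly into code (a sketch assuming NumPy; sample size and seed are illustrative):

import numpy as np

mu, lam = 2.0, 5.0
rng = np.random.default_rng(6)
x = rng.wald(mu, lam, size=10_000)
n = len(x)

mu_hat = x.mean()
s = np.sum(1.0 / x - 1.0 / mu_hat)
lam_hat = n / s              # MLE, biased upward by about 3*lam/n
lam_unbiased = (n - 3) / s   # bias-corrected version
print(mu_hat, lam_hat, lam_unbiased)
# Asymptotic standard errors from the Fisher information:
print(np.sqrt(mu_hat**3 / (lam_hat * n)), np.sqrt(2 * lam_hat**2 / n))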

Method of Moments

The method of moments offers a simple, closed-form approach to estimating the parameters of the inverse Gaussian distribution by matching the first two population moments to their sample counterparts. For a random sample y_1, y_2, \dots, y_n from an inverse Gaussian distribution with parameters \mu > 0 and \lambda > 0, the population mean is \mathbb{E}(Y) = \mu and the population variance is \mathrm{Var}(Y) = \mu^3 / \lambda. Equating the sample mean \bar{y} = n^{-1} \sum_{i=1}^n y_i to \mu yields the estimator \hat{\mu} = \bar{y}. Similarly, equating the sample variance s_y^2 = n^{-1} \sum_{i=1}^n (y_i - \bar{y})^2 to \mu^3 / \lambda and substituting \hat{\mu} gives \hat{\lambda} = \bar{y}^3 / s_y^2. These plug-in estimators are consistent as the sample size increases, since the sample mean and variance converge to their population values. However, they are generally less efficient than maximum likelihood estimators, particularly for moderate to large samples, as the method of moments does not account for the full likelihood structure and can exhibit higher mean squared error in simulations. For small samples (e.g., n = 10), the method of moments may occasionally outperform maximum likelihood in terms of bias and mean squared error for certain parameterizations, but this advantage diminishes with larger n. Sample analogues of higher-order moments, such as the skewness \gamma_1 = 3 \sqrt{\mu / \lambda} and kurtosis \kappa = 3 + 15 \mu / \lambda, can be computed from the data to assess robustness of the fit or validate assumptions, providing checks beyond the first two moments. The method of moments is particularly advantageous for quick initial approximations due to its non-iterative nature, or in scenarios where maximum likelihood estimation fails, such as when observations are identical or the likelihood surface is poorly behaved.
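
The moment estimators require only two sample statistics (a minimal sketch assuming NumPy; ig_method_of_moments is an illustrative helper):

import numpy as np

def ig_method_of_moments(y):
    # Match the sample mean and variance to mu and mu^3 / lam.
    mu_hat = y.mean()
    lam_hat = mu_hat**3 / y.var()
    return mu_hat, lam_hat

rng = np.random.default_rng(7)
y = rng.wald(2.0, 5.0, size=10_000)
print(ig_method_of_moments(y))  # approximately (2.0, 5.0)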

Generation and Computation

Sampling Algorithms

Generating random variates from the inverse Gaussian distribution is crucial for simulations, bootstrap procedures, and empirical studies in fields like reliability engineering and finance. Several algorithms have been developed to produce these variates efficiently, leveraging the distribution's connection to Brownian motion and its cumulative distribution function (CDF), which can be expressed in terms of the standard normal CDF. One standard approach is the inverse CDF method, also known as the inversion method. This technique generates a uniform random variable U \sim \mathcal{U}(0,1) and solves for X in the equation F(X) = U, where F is the CDF of the inverse Gaussian distribution IG(\mu, \lambda). Since the CDF lacks a closed-form inverse, numerical root-finding methods, such as Newton-Raphson or bisection, are applied to the equation involving the normal CDF term. This method ensures exact sampling but can be computationally intensive for high-precision requirements due to the iterative solving process. A more efficient exact algorithm was introduced by Michael, Schucany, and Haas in 1976, utilizing a transformation with multiple roots. The method starts by generating a chi-squared random variate Y \sim \chi^2(1) and a uniform U \sim \mathcal{U}(0,1). It then solves the quadratic equation \frac{\lambda (x - \mu)^2}{\mu^2 x} = Y, which yields two positive roots x_1 < \mu < x_2 with x_2 = \mu^2 / x_1. The smaller root x_1 is selected with probability \mu / (\mu + x_1); otherwise, the larger root x_2 is selected. This approach requires one chi-squared and one uniform random number per variate and achieves high efficiency across parameter ranges. In Bayesian contexts, Gibbs sampling provides a way to draw from the posterior distribution of inverse Gaussian parameters or generate variates conditional on data. For example, given data from IG(\mu, \lambda), the conditional posterior for \lambda given \mu and the data is gamma-distributed under a gamma prior; the conditional for \mu given \lambda and the data is typically non-standard (e.g., involving a truncated generalized Student's t after the reparameterization \theta = 1/\mu) and may require Metropolis-Hastings steps. This method is particularly useful for hierarchical models and inference in survival analysis. Efficiency comparisons among these methods highlight trade-offs depending on the parameters. The Michael et al. algorithm is exact with no rejections, making it superior to pure inversion for most practical cases; in contrast, the inverse CDF method's computational cost grows with the number of iterations needed for root-finding, often 5-10 per sample. Gibbs sampling, while versatile for Bayesian applications, incurs autocorrelation in chains, requiring thinning if approximately independent variates are needed. For implementation, a basic pseudocode outline of the Michael et al. algorithm is as follows:
Generate Y ~ χ²(1)  // or Z ~ N(0,1), Y = Z²
Let x1 = μ + μ²·Y/(2λ) − (μ/(2λ))·sqrt(4·μ·λ·Y + μ²·Y²)  // Smaller root of λ(x − μ)²/(μ²x) = Y
Let x2 = μ² / x1  // Larger root
Generate U ~ Uniform(0,1)
Let p = μ / (μ + x1)
If U ≤ p:
    X = x1
Else:
    X = x2
Return X
This pseudocode captures the core steps using a direct computation of the roots and probabilistic selection; actual implementations may vary slightly in root calculation for numerical stability.
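
A runnable counterpart of the pseudocode, written as a minimal Python sketch (assuming NumPy; rand_invgauss is an illustrative name, and the chi-squared variate is generated by squaring a standard normal):

import numpy as np

def rand_invgauss(mu, lam, size, seed=None):
    # Michael-Schucany-Haas sampler for IG(mu, lam).
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(size) ** 2   # Y ~ chi-squared with 1 degree of freedom
    # Smaller root of the quadratic lam*(x - mu)^2 / (mu^2 * x) = y.
    x1 = (mu + mu**2 * y / (2 * lam)
          - (mu / (2 * lam)) * np.sqrt(4 * mu * lam * y + mu**2 * y**2))
    u = rng.uniform(size=size)
    # Select the smaller root with probability mu / (mu + x1), else the larger root.
    return np.where(u <= mu / (mu + x1), x1, mu**2 / x1)

x = rand_invgauss(2.0, 5.0, size=100_000, seed=8)
print(x.mean(), x.var())  # approximately 2.0 and 1.6 = mu^3 / lam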

Numerical Evaluation and Software

The probability density function (PDF) of the inverse Gaussian distribution can be evaluated directly using the formula f(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^3}} \exp\left( -\frac{\lambda (x - \mu)^2}{2 \mu^2 x} \right), \quad x > 0, where \mu > 0 is the mean parameter and \lambda > 0 is the shape parameter. For numerical stability, particularly when x is large, the PDF is computed in log-scale to avoid underflow or overflow in the exponential term. The cumulative distribution function (CDF) is given by F(x; \mu, \lambda) = \Phi\left( \frac{\lambda (x - \mu)}{\mu \sqrt{\lambda x}} \right) + \exp\left( \frac{2\lambda}{\mu} \right) \Phi\left( -\frac{\lambda (x + \mu)}{\mu \sqrt{\lambda x}} \right), where \Phi denotes the standard normal CDF. Direct evaluation requires careful handling of the \exp(2\lambda / \mu) term, which can overflow for large \lambda / \mu; log-scale computations and conditional evaluation of terms mitigate this issue, ensuring accuracy down to machine precision. Alternative approaches for the CDF, especially in the tails, include saddlepoint approximations, which replace the normal base with an inverse Gaussian base for improved accuracy over standard normal approximations. For the two-parameter inverse Gaussian, maximum likelihood estimates (MLEs) have closed forms, \hat{\mu} = \bar{x} (the sample mean) and \hat{\lambda} = n / \sum_{i=1}^n (1/x_i - 1/\hat{\mu}), but numerical optimization is often used in software for robustness or when fitting generalized forms with location/scale parameters. Newton-Raphson iterations provide efficient convergence for MLE in extended models, such as three-parameter variants, starting from moment-based initial values. The expectation-maximization (EM) algorithm is implemented for inverse Gaussian mixtures or random effects models, iteratively updating parameters via complete-data likelihoods. Software implementations support accurate PDF and CDF evaluation alongside MLE fitting. In R, the statmod package provides the dinvgauss and pinvgauss functions, achieving close to 16-digit precision via log-scale computation and handling extreme parameters; fitting can be performed via MLE, for example with fitdistr from MASS supplied with the inverse Gaussian density. Python's scipy.stats.invgauss computes the PDF and CDF and supports numerical MLE fitting, as in params = invgauss.fit(data). MATLAB's Statistics and Machine Learning Toolbox offers pdf, cdf, and mle for the InverseGaussianDistribution object, enabling parameter estimation with pd = fitdist(data, 'InverseGaussian'). Numerical challenges arise when \lambda is small, leading to heavy right tails and requiring high-precision arithmetic to resolve near-zero probabilities without truncation errors; implementations like statmod address this through mode-initialized iterations for related computations.
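
Because SciPy uses a unit-shape parameterization, mapping between (\mu, \lambda) and library arguments is the most common stumbling block in practice. The sketch below (assuming SciPy) shows the correspondence IG(\mu, \lambda) = invgauss(\mu/\lambda, scale=\lambda) and a two-parameter fit with the location pinned at zero:

import numpy as np
from scipy import stats

mu, lam = 2.0, 5.0
dist = stats.invgauss(mu / lam, scale=lam)
print(dist.mean(), dist.var())   # 2.0 and 1.6 = mu^3 / lam

data = dist.rvs(size=5_000, random_state=9)
m, loc, scale = stats.invgauss.fit(data, floc=0)  # fix loc = 0 for the 2-parameter form
print(m * scale, scale)          # recovered (mu, lam), approximately (2.0, 5.0)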

Equivalence to Wald Distribution

The Wald distribution, named after the statistician Abraham Wald for its application in sequential hypothesis testing, is mathematically identical to the inverse Gaussian distribution. Wald introduced the distribution in his 1947 monograph on sequential analysis, where it models the stopping times in probability ratio tests under normal error assumptions. The equivalence arises from independent derivations in distinct fields: the inverse Gaussian emerged from physics in the study of first-passage times, while the Wald formulation stemmed from statistical sequential analysis. This duality in origins, physical modeling versus sequential testing, explains the persistence of both names, with no underlying distributional disparities. In parameterization, the Wald distribution is frequently specified as \text{Wald}(\mu, \sigma^2), where \mu > 0 is the mean and \sigma^2 > 0 the variance; this corresponds directly to the inverse Gaussian \text{IG}(\mu, \lambda) via the relation \sigma^2 = \mu^3 / \lambda. Post-1970s statistical literature, influenced by comprehensive treatments in key references, increasingly standardized on "inverse Gaussian" to encompass its multifaceted uses, diminishing reliance on the "Wald" label outside sequential analysis contexts.

Connections to Lévy and Gamma Distributions

The inverse Gaussian distribution exhibits a limiting relationship to the Lévy distribution in the zero-drift scenario. Specifically, if Y \sim \text{IG}(\mu, \lambda), then as \mu \to \infty, the distribution of Y converges to a Lévy distribution with location 0 and scale \lambda. This limit reflects the underlying connection to first passage times, where the inverse Gaussian arises for positive drift and the Lévy distribution emerges in the absence of drift. The reciprocal of an inverse Gaussian random variable follows a reciprocal inverse Gaussian distribution; this transformation highlights the inverse Gaussian's ties to other skewed positive distributions and facilitates derivations in reliability modeling, where reciprocal forms often arise naturally. The inverse Gaussian also participates in mixture constructions: a normal variance-mean mixture with inverse Gaussian mixing over the variance yields the normal inverse Gaussian distribution. This mixture structure underscores its flexibility in generating skewed and leptokurtic marginals, analogous to how gamma mixing of normals yields the Student-t distribution. The infinite divisibility of the inverse Gaussian distribution establishes its role in constructing Lévy processes, where increments follow the inverse Gaussian law. This property, established via the Lévy-Khintchine representation of its characteristic function, allows the inverse Gaussian to serve as a building block for Lévy-driven models in stochastic processes, linking it directly to broader classes of infinitely divisible distributions like the gamma and stable laws.

Applications

Reliability and Survival Analysis

The inverse Gaussian distribution arises naturally in reliability analysis as the distribution of the first passage time to a fixed threshold in a degradation process, such as a Wiener process with positive drift representing cumulative damage accumulation leading to failure. This formulation models scenarios where degradation follows a stochastic path until it crosses a critical level, providing a mechanistic basis for lifetime modeling in systems subject to random environmental stresses. For instance, the inverse Gaussian captures failure times in modeling fatigue crack growth, accounting for both deterministic growth and Brownian-like fluctuations in material response. The hazard function, defined as h(x) = \frac{f(x)}{1 - F(x)} where f(x) and F(x) are the probability density and cumulative distribution functions, respectively, uses F(x) = \Phi\left[\sqrt{\frac{\lambda}{x}} \left(\frac{x}{\mu} - 1\right)\right] + \exp\left(\frac{2\lambda}{\mu}\right) \Phi\left[-\sqrt{\frac{\lambda}{x}} \left(\frac{x}{\mu} + 1\right)\right], where \Phi is the standard normal CDF. This hazard increases to a maximum and then decreases, exhibiting a unimodal shape for typical parameter values \mu > 0 (mean lifetime) and \lambda > 0 (shape), reflecting phases of useful life in component lifetimes. Extensions of the inverse Gaussian to regression models include accelerated failure time (AFT) frameworks, where log-lifetimes follow a linear model in the covariates with inverse Gaussian errors, allowing stress factors to scale the time to failure proportionally. Proportional hazards adaptations incorporate inverse Gaussian frailty terms to account for unobserved heterogeneity, enhancing model fit in clustered or correlated failure data. These approaches are particularly useful in accelerated life testing, where elevated stress conditions compress lifetimes while preserving distributional shape. In practical applications, the inverse Gaussian has been applied in post-2010 reliability studies, modeling bearing degradation under variable loads as an inverse Gaussian process to predict remaining useful life from vibration and temperature data; recent methods use inverse Gaussian models for real-time remaining useful life prediction of bearings. Similarly, in biological contexts, it describes lifetimes in scenarios of progressive deterioration, such as neuronal interspike intervals or cellular aging processes reaching a critical threshold, offering a mechanistic alternative to deterministic growth models. Compared to the Weibull distribution, the inverse Gaussian may provide a better fit for datasets with unimodal hazard rates, capturing diffusion-based failure mechanisms, while the Weibull is more flexible for monotonically increasing or decreasing hazards depending on its shape parameter.
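
The unimodal hazard is simple to trace numerically (a sketch assuming SciPy; the grid and parameters are illustrative):

import numpy as np
from scipy import stats

mu, lam = 1.0, 2.0
dist = stats.invgauss(mu / lam, scale=lam)
x = np.linspace(0.01, 8.0, 400)
hazard = dist.pdf(x) / dist.sf(x)     # h(x) = f(x) / (1 - F(x))
print(x[np.argmax(hazard)])           # interior peak: the hazard rises, then declines
print(hazard[-1], lam / (2 * mu**2))  # right tail approaches lam / (2 mu^2)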

Finance and Insurance Modeling

The Inverse Gaussian distribution arises naturally in risk theory as the distribution of first passage times for Brownian motion with positive drift, which approximates barrier crossing events in surplus processes. In the Cramér-Lundberg model of ruin theory, the diffusion approximation yields the time to ruin as an Inverse Gaussian random variable when conditioning on ruin occurring, enabling explicit computation of ruin probabilities under light-tailed claims. This framework extends to the Inverse Gaussian process for aggregate claims, where the infinite-time ruin probability admits a closed-form Pollaczek-Khinchine representation, outperforming exponential approximations in subexponential tail regimes. In stochastic volatility models, the Inverse Gaussian distribution models hitting times of volatility processes to predefined barriers, capturing regime shifts or volatility explosions in asset price dynamics. For instance, it describes the time for a mean-reverting volatility process to reach extreme levels, facilitating the pricing of volatility-linked derivatives under jump-diffusion settings. Recent extensions incorporate the Inverse Gaussian as a subordinator in time-changed Lévy models, enhancing the representation of returns with heavy tails observed in high-frequency data. Credit risk assessment leverages the Inverse Gaussian for the time to default in structural models, where the firm's asset value follows a Brownian motion with drift influenced by economic covariates such as interest rates or GDP growth. The default time is then the first hitting time to a liability barrier, with the Inverse Gaussian's shape parameter reflecting recovery rates and skewness from macroeconomic shocks. Pure jump variants, including the Inverse Gaussian process, model distance-to-default under Lévy dynamics, providing superior calibration to empirical default intensities compared to Gaussian alternatives. Practical applications include pricing barrier options, where the Inverse Gaussian governs the distribution of the first barrier crossing time in the underlying Brownian path, integrated into simulations or transform methods for up-and-out calls in time-dependent barrier settings. In insurance, it computes finite-time ruin probabilities for portfolios with Inverse Gaussian claim severities, as seen in post-2020 assessments under heavy-tailed loss data. Empirically, the Inverse Gaussian fits short-term credit and market risks better than the normal distribution by capturing asymmetric tails and positive skewness, reducing Value-at-Risk overestimation in stressed scenarios. The summation property of Inverse Gaussian variates further supports modeling aggregate portfolio risks via convolutions.

History

Origins in Brownian Motion

The inverse Gaussian distribution first emerged in the context of early 20th-century investigations into Brownian motion, particularly the timing of a diffusing particle reaching a specified barrier under the influence of a constant force. In 1915, Erwin Schrödinger derived the probability density for the first passage time of a Brownian particle undergoing fall or rise experiments in a fluid, modeling the motion as a Wiener process with drift. This derivation appeared in his paper "Zur Theorie der Fall- und Steigversuche an Teilchen mit Brownscher Bewegung," published in Physikalische Zeitschrift. The parameters of the resulting distribution were directly linked to the physical setup: the mean time scaled with the barrier distance divided by the drift velocity, while the shape parameter scaled with the square of the barrier distance and incorporated the diffusion constant characterizing the random fluctuations. Independently in 1916, Marian Smoluchowski obtained an equivalent result for the first passage time distribution in Brownian motion with drift, building on his foundational work in colloid physics and diffusion. Smoluchowski's contribution, detailed in a paper in Physikalische Zeitschrift, emphasized applications to colloidal particle sedimentation and coagulation, where the distribution described the time until particles aggregate or settle against thermal agitation. These early formulations were influenced by Smoluchowski's prior theoretical framework for Brownian motion, developed around 1906, which reconciled Einstein's diffusion equation with experimental observations of particle displacements. At this stage, the distribution was not yet termed "inverse Gaussian" but was recognized in the physics literature as a specialized law governing transient diffusion phenomena under external forces. In the ensuing decades, the distribution gained further traction in physics and nascent probability theory, with extensions exploring more general drifted Brownian paths. Mathematicians such as J. L. Doob and others began formalizing these problems within the emerging theory of stochastic processes, adapting the 1915 results to broader boundary conditions and multidimensional settings. These developments laid the groundwork for recognizing the distribution's role beyond specific physical experiments, though its statistical nomenclature and applications awaited later advancements.

Key Developments and Variants

In 1947, Abraham Wald re-derived the distribution in the context of sequential probability ratio tests, and it came to be known as the Wald distribution, highlighting its role in decision-making processes under uncertainty. During the 1950s, M. C. K. Tweedie and colleagues advanced its theoretical foundations by establishing it as a member of the exponential dispersion family, which facilitated its integration into generalized linear models and led to the widespread adoption of the term "inverse Gaussian distribution." This standardization was further consolidated in V. Seshadri's 1993 monograph, which provided a comprehensive treatment of its statistical theory and applications. Key variants emerged in the 1980s to extend the univariate form for multivariate settings, particularly in modeling spatial and dependent processes. For instance, J. L. Jensen and B. Jørgensen proposed multivariate distributions with generalized inverse Gaussian marginals, enabling applications in Poisson mixtures and correlated data analysis. Concurrently, Raj S. Chhikara and J. Leroy Folks consolidated the distribution's theory, methodology, and applications in their 1988 monograph, broadening its utility for skewed and heavy-tailed data beyond the standard parameterization. In the 1990s, the distribution found application in insurance rating models, where the Poisson-inverse Gaussian variant was used to account for overdispersion in claim counts, improving premium calculations in bonus-malus systems. More recently, the inverse Gaussian has seen renewed interest in statistical quality control and machine learning, such as monitoring shape parameters in industrial processes or modeling outliers via hybrid statistical-learning frameworks.

References

  2. [2] Inverse Gaussian Distribution (PDF), Paul Johnson Homepage.
  3. [3] The Inverse Gaussian Distribution and Its Statistical Application, JSTOR.
  4. [4] Inverse Gaussian (or Inverse Normal) Distribution, Boost Math documentation.
  5. [5] Normal Inverse Gaussian Distributions and Stochastic Volatility Modelling.
  6. [6] Inverse Gaussian distribution and its application, Wiley Online Library, 2007.
  7. [7] M. C. K. Tweedie, "Statistical Properties of Inverse Gaussian Distributions. I," Annals of Mathematical Statistics, 1957.
  8. [8] Inverse Gaussian Distribution, Introduction and Applications (PDF), arXiv, 2025.
  9. [9] M. C. K. Tweedie, "Statistical Properties of Inverse Gaussian Distributions. II," Annals of Mathematical Statistics 28(3), 696-705, 1957.
  10. [10] statmod: Probability Calculations for the Inverse Gaussian Distribution (PDF).
  11. [11] The Inverse Gaussian Distribution: Theory, Taylor & Francis eBooks.
  12. [12] statmod: Probability Calculations for the Inverse Gaussian Distribution, The R Journal, 2016.
  13. [13] Inverse Gaussian distribution (PDF).
  14. [14] Properties of the Inverse Gaussian Distribution (PDF), eScholarship@McGill.
  16. [16] Information Based Approach for Detecting Change Points in Inverse ... (PDF).
  17. [17] RIGPDF: the reciprocal inverse Gaussian distribution, 2004.
  18. [18] Exponential Families (PDF), Stat@Duke.
  19. [19] Chapter 3: Exponential Families (PDF).
  20. [20] Generalized Linear Models (PDF).
  21. [21] 15 Generalized Linear Models.
  22. [22] The Exponential Family and Statistical Applications (PDF).
  23. [23] Brownian Motion (PDF), UC Berkeley Statistics.
  24. [24] Stochastic Dynamics (PDF), Heidelberg University, 2025.
  26. [26] First-passage time distribution of a Brownian motion (PDF), arXiv, 2025.
  27. [27] Likelihood Theory I & II (PDF), MIT OpenCourseWare.
  28. [28] Maximum Likelihood Estimation of Parameters in the ... (PDF).
  29. [29] Asymptotic Confidence Ellipses of Parameters for the Inverse ... (PDF).
  30. [30] The unit-inverse Gaussian distribution: A new alternative to two ... (PDF), 2018.
  31. [31] Parameter Estimation for Re-Parametrized Inverse Gaussian ... (PDF).
  32. [32] L. Devroye, Non-Uniform Random Variate Generation, Springer.
  33. [33] J. R. Michael, W. R. Schucany, and R. W. Haas, "Generating Random Variates Using Transformations with Multiple Roots."
  34. [34] Survival analysis for the inverse Gaussian distribution with the Gibbs ...
  35. [35] Saddlepoint Approximations to the CDF of Some Statistics with ...
  36. [36] Parameter estimation and application of inverse Gaussian regression, 2024.
  37. [37] Degradation modeling with subpopulation heterogeneities based on ...
  38. [38] scipy.stats.invgauss, SciPy v1.16.2 Manual.
  39. [39] Inverse Gaussian Distribution, MATLAB & Simulink, MathWorks.
  40. [40] Note on a Characterization of the Inverse Gaussian Distribution.
  42. [42] The Inverse Gaussian Distribution.
  44. [44] N. L. Johnson and S. Kotz, Continuous Univariate Distributions, Volume 1, Wiley, 1970.
  45. [45] The Inverse Gaussian Models: Analogues of Symmetry, Skewness ... (PDF).
  46. [46] The multivariate normal inverse Gaussian distribution (PDF), EURASIP.
  47. [47] Infinite divisibility of the hyperbolic and generalized inverse Gaussian distributions, 38, 309-311, 1977.
  50. [50] The Distribution of the Time to Ruin in the Classical ... (PDF).
  51. [51] The probability of ruin for the Inverse Gaussian and related processes.
  52. [52] Hitting time distributions in financial markets (PDF), IRIS UniPA, 2007.
  53. [53] Stochastic Volatility for Lévy Processes (PDF).
  54. [54] Statistical Methods in Credit Risk Modeling (PDF).
  55. [55] A Structural Approach to Default Modelling with Pure Jump Processes.
  56. [56] Exact simulation of the first-passage time of SDEs to time-dependent ..., 2024.
  57. [57] Estimating Ruin Probability in an Insurance Risk Model Using the ...
  58. [58] Quantifying Risk Using Loss Distributions, IntechOpen.
  59. [59] V. Seshadri, The Inverse Gaussian Distribution: Statistical Theory and Applications.
  60. [60] E. Schrödinger, "Zur Theorie der Fall- und Steigversuche an Teilchen mit Brownscher Bewegung," Physikalische Zeitschrift 16, 289-295, 1915.
  61. [61] On the First Passage Time Probability Problem, Physical Review.
  62. [62] Multivariate Distributions with Generalized Inverse Gaussian Marginals, JSTOR.
  63. [63] Bonus-Malus Scale models: creating artificial past claims history, 2022.