
Hyperexponential distribution

The hyperexponential distribution, also known as the H_k distribution, is a continuous probability distribution on the non-negative real line that models a random variable as a mixture of k independent exponential distributions, where the i-th component is selected with probability p_i (summing to 1) and has rate parameter \mu_i. Its probability density function is given by
f(t) = \sum_{i=1}^k p_i \mu_i e^{-\mu_i t}, \quad t > 0,
and the cumulative distribution function by
F(t) = 1 - \sum_{i=1}^k p_i e^{-\mu_i t}, \quad t \geq 0. This structure allows it to approximate distributions with high variability, as its squared coefficient of variation satisfies c^2 \geq 1, contrasting with the exponential distribution's c^2 = 1.
Key moments include the mean E[X] = \sum_{i=1}^k p_i / \mu_i and variance \mathrm{Var}(X) = \sum_{i=1}^k 2 p_i / \mu_i^2 - \left( \sum_{i=1}^k p_i / \mu_i \right)^2, which can be matched to empirical data using just the first two moments for fitting purposes, particularly for the two-phase case (H_2), where parameters are solved to balance means across phases. The distribution is a special case of phase-type distributions, specifically an acyclic phase-type with parallel phases, and it exhibits a decreasing failure rate, with a monotonically decreasing hazard function. In applications, the hyperexponential distribution is widely used in queueing theory to model service times or interarrival processes with heavy tails and high variability, such as in M/H_k/1 queues, where explicit solutions for measures like waiting times are available. It also appears in reliability analysis, software performance modeling, and call center analysis for customer patience times, providing robust fits to empirical traces when simple exponential assumptions fail due to high variability. For instance, in storage systems and network simulations, H_2 fits outperform other long-tail models like the log-normal in capturing response time distributions.

Definition

Probability Density Function

The hyperexponential distribution is defined as a finite mixture, or weighted combination, of k independent exponential distributions, where a random variable X is selected to follow the i-th exponential component with probability p_i > 0 for i = 1, \dots, k, and \sum_{i=1}^k p_i = 1. This structure captures heterogeneity in processes by weighting multiple components, each characterized by its own rate parameter \lambda_i > 0. The probability density function of the hyperexponential is given by f(x) = \sum_{i=1}^k p_i \lambda_i e^{-\lambda_i x}, \quad x \geq 0, and f(x) = 0 otherwise. The support of the distribution is the non-negative real line, reflecting the non-negative nature of the underlying exponential components. This distribution arises in the context of phase-type distributions as the absorption time in a continuous-time Markov chain with k parallel transient states, where the process begins in state i with probability p_i and is absorbed directly from that single state at rate \lambda_i, without transitioning between states. A practical interpretation occurs in queueing models, where service times follow a hyperexponential distribution due to different customer types, such as voice calls (with one rate) versus data sessions (with another), each selected probabilistically based on arrival proportions.
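The two-stage mechanism above (pick a phase, then sample its exponential) translates directly into a sampler. The sketch below uses illustrative parameter values, not ones taken from any source, and checks the empirical mean against E[X] = \sum_i p_i / \lambda_i:

```python
# Minimal sketch: sampling a k-phase hyperexponential by first choosing a
# phase with probability p_i, then drawing an exponential with rate lam_i.
# Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.6, 0.4])       # mixing probabilities, must sum to 1
lam = np.array([2.0, 0.5])     # rate parameter of each exponential phase

def sample_hyperexp(n):
    phases = rng.choice(len(p), size=n, p=p)         # select a phase per draw
    return rng.exponential(scale=1.0 / lam[phases])  # exponential with that rate

x = sample_hyperexp(200_000)
mean_formula = np.sum(p / lam)          # E[X] = sum_i p_i / lam_i
print(round(mean_formula, 6))           # 1.1
print(abs(x.mean() - mean_formula) < 0.02)
```

The two-stage draw makes the mixture interpretation explicit: each sample comes entirely from one phase, unlike a sum of exponentials.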

Cumulative Distribution Function

The cumulative distribution function of the hyperexponential distribution is given by F(x) = \begin{cases} 1 - \sum_{i=1}^k p_i e^{-\lambda_i x} & x \geq 0, \\ 0 & x < 0, \end{cases} where p_i > 0 are the mixture weights with \sum_{i=1}^k p_i = 1 and \lambda_i > 0 are the rate parameters of the underlying exponential components. This form arises from integrating the probability density function over the interval [0, x]. Specifically, F(x) = \int_0^x f(t) \, dt = \sum_{i=1}^k p_i \int_0^x \lambda_i e^{-\lambda_i t} \, dt = \sum_{i=1}^k p_i \left(1 - e^{-\lambda_i x}\right) for x \geq 0, which simplifies to the expression above since \sum_{i=1}^k p_i = 1. Conversely, the probability density function is the derivative of F(x). The survival function, defined as S(x) = 1 - F(x), takes the form S(x) = \sum_{i=1}^k p_i e^{-\lambda_i x}, \quad x \geq 0, which directly provides the probability that the random variable exceeds x and is particularly useful for evaluating tail probabilities in applications such as queueing and reliability analysis. The hyperexponential distribution is absolutely continuous with respect to Lebesgue measure, possessing a density that is positive on (0, \infty) and unbounded support [0, \infty).
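A minimal numerical sketch of the CDF and survival function, with illustrative parameters; it also shows that the tail of S(x) is dominated by the slowest phase:

```python
# Sketch: evaluating F(x) = 1 - sum_i p_i exp(-lam_i x) and the survival
# function S(x) = sum_i p_i exp(-lam_i x). Parameters are illustrative.
import numpy as np

p = np.array([0.6, 0.4])
lam = np.array([2.0, 0.5])

def cdf(x):
    return 1.0 - np.sum(p * np.exp(-lam * x))

def survival(x):
    return np.sum(p * np.exp(-lam * x))

print(round(survival(0.0), 6))  # 1.0: S(0) = sum of the weights
print(round(cdf(0.0), 6))       # 0.0
# for large x the slowest phase (rate 0.5 here) dominates the tail,
# so S(x) decays much more slowly than the fast phase alone would
print(survival(10.0) > np.exp(-2.0 * 10.0))
```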

Characteristic Functions

Characteristic Function

The characteristic function of a random variable X with a hyperexponential distribution is \phi(t) = \mathbb{E}[e^{itX}] for real t, given by \phi(t) = \sum_{i=1}^k \frac{p_i \mu_i}{\mu_i - i t}. This follows from the mixture structure, as the characteristic function of an exponential distribution with rate \mu_i is \mu_i / (\mu_i - i t), and the overall function is the weighted sum. Unlike the moment-generating function, it is defined for all real t since the poles lie off the real axis. The characteristic function encodes all moments via derivatives at t=0 and is useful for proving limit theorems and central limit results for sums of hyperexponential variables. It relates to the moment-generating function via \phi(t) = M(it), where M is the MGF.

Moment-Generating Function

The moment-generating function (MGF) of a random variable X with a hyperexponential distribution provides a compact representation that encodes all moments of the distribution and is particularly useful for analyzing sums of independent hyperexponential random variables. For a hyperexponential distribution defined as a mixture of k exponential distributions with rates \mu_1, \dots, \mu_k > 0 and mixing probabilities p_1, \dots, p_k > 0 satisfying \sum_{i=1}^k p_i = 1, the MGF is given by M(t) = \sum_{i=1}^k \frac{p_i \mu_i}{\mu_i - t}, valid for t < \min_i \{\mu_i\}. This formula arises directly from the definition of the MGF and the probability density function (PDF) of the hyperexponential distribution, f(x) = \sum_{i=1}^k p_i \mu_i e^{-\mu_i x} for x \geq 0. Substituting into the MGF integral yields M(t) = \int_0^\infty e^{tx} f(x) \, dx = \sum_{i=1}^k p_i \int_0^\infty \mu_i e^{-(\mu_i - t)x} \, dx. The inner integral is the MGF of an exponential random variable with rate \mu_i, which evaluates to \mu_i / (\mu_i - t) for t < \mu_i, leading to the overall expression upon summation. The domain of convergence for the MGF is t < \min_i \{\mu_i\}, as the integral diverges beyond the smallest rate parameter, ensuring the expectation exists. Within this interval, the MGF is analytic, meaning it is infinitely differentiable and can be expanded in a Taylor series around t = 0, with coefficients corresponding to the moments of X. This analyticity guarantees the uniqueness of the hyperexponential distribution among all distributions on [0, \infty) sharing the same MGF, by the standard uniqueness theorem for moment-generating functions. The rational form of the MGF—a ratio of polynomials where the denominator is of degree k and the numerator of degree at most k-1—uniquely characterizes hyperexponential mixtures relative to other infinite-support distributions, facilitating identification in applications like fitting empirical data to phase-type models.

Laplace Transform

The Laplace-Stieltjes transform (LST) of a hyperexponential random variable X with parameters p_i > 0 (\sum_{i=1}^k p_i = 1) and distinct rates \mu_i > 0 (i = 1, \dots, k) is defined as \tilde{f}(s) = \mathbb{E}[e^{-sX}] for \operatorname{Re}(s) \geq 0. This transform evaluates to \tilde{f}(s) = \sum_{i=1}^k \frac{p_i \mu_i}{\mu_i + s}, which follows directly from the mixture structure, as the LST of an exponential distribution with rate \mu_i is \mu_i / (\mu_i + s), and the overall transform is the corresponding weighted sum. The LST relates to the moment-generating function (MGF) M(t) = \mathbb{E}[e^{tX}] via \tilde{f}(s) = M(-s), where the domain restriction \operatorname{Re}(s) \geq 0 ensures convergence for nonnegative X, mirroring the MGF derivation but emphasizing exponential decay for stability analysis in queueing systems. At s = 0, \tilde{f}(0) = 1, confirming normalization as a proper transform, while the first derivative satisfies \tilde{f}'(0) = -\mathbb{E}[X], providing a direct link to the mean without full inversion. Inversion of the LST for hyperexponential distributions typically employs partial fraction decomposition, exploiting the rational form with distinct poles at s = -\mu_i, to recover the mixture density f(x) = \sum_{i=1}^k p_i \mu_i e^{-\mu_i x} for x > 0. This method is particularly valuable in renewal theory, where the LST facilitates solving integral equations for quantities like the renewal function, whose transform is 1 / (s (1 - \tilde{f}(s))), enabling efficient computation of transient behaviors in phase-type renewal processes modeled by hyperexponential interarrival times.
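The closed-form LST can be sanity-checked against direct numerical integration of \mathbb{E}[e^{-sX}]; the parameters below are illustrative:

```python
# Sketch: verify the closed-form LST sum_i p_i mu_i / (mu_i + s) against
# numerical integration of e^{-sx} f(x) over [0, inf). Parameters illustrative.
import numpy as np
from scipy.integrate import quad

p = np.array([0.6, 0.4])
mu = np.array([2.0, 0.5])

def pdf(x):
    return np.sum(p * mu * np.exp(-mu * x))

def lst_closed(s):
    return np.sum(p * mu / (mu + s))

for s in (0.0, 0.5, 2.0):
    numeric, _ = quad(lambda x: np.exp(-s * x) * pdf(x), 0, np.inf)
    assert abs(numeric - lst_closed(s)) < 1e-6   # the two forms agree

print(round(lst_closed(0.0), 6))  # 1.0: a proper transform
# -d/ds at s=0 recovers the mean E[X] = sum_i p_i / mu_i
print(round(float(np.sum(p / mu)), 6))  # 1.1
```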

Moments and Properties

Mean and Variance

The mean of a hyperexponential random variable X, which is a mixture of k independent exponential distributions with rates \mu_i > 0 and mixing probabilities p_i > 0 where \sum_{i=1}^k p_i = 1, is given by E[X] = \sum_{i=1}^k \frac{p_i}{\mu_i}. This formula arises from the law of total expectation, conditioning on the selected component: E[X] = \sum_{i=1}^k p_i E[X \mid \text{component } i] = \sum_{i=1}^k p_i \cdot \frac{1}{\mu_i}, since each component follows an exponential distribution with mean 1/\mu_i. Alternatively, the mean can be derived from the moment-generating function (MGF) of the hyperexponential distribution, M(t) = \sum_{i=1}^k \frac{p_i \mu_i}{\mu_i - t}, \quad t < \min_i \mu_i. Differentiating yields M'(t) = \sum_{i=1}^k \frac{p_i \mu_i}{(\mu_i - t)^2}, and evaluating at t=0 gives M'(0) = \sum_{i=1}^k \frac{p_i}{\mu_i}, confirming the mean. The variance is \text{Var}(X) = \sum_{i=1}^k \frac{2 p_i}{\mu_i^2} - \left( \sum_{i=1}^k \frac{p_i}{\mu_i} \right)^2. This follows from the law of total variance: \text{Var}(X) = E[\text{Var}(X \mid \text{component})] + \text{Var}(E[X \mid \text{component}]). The first term is E[\text{Var}(X \mid \text{component})] = \sum_{i=1}^k p_i \cdot \frac{1}{\mu_i^2}, and the second is \text{Var}(E[X \mid \text{component}]) = \sum_{i=1}^k p_i (1/\mu_i)^2 - (E[X])^2, yielding the total. Equivalently, from the MGF, M''(t) = \sum_{i=1}^k \frac{2 p_i \mu_i}{(\mu_i - t)^3}, so E[X^2] = M''(0) = \sum_{i=1}^k \frac{2 p_i}{\mu_i^2}, and \text{Var}(X) = E[X^2] - (E[X])^2. The coefficient of variation, defined as CV = \sqrt{\text{Var}(X)} / E[X], exceeds 1 for k \geq 2 (assuming distinct \mu_i), in contrast to the exponential distribution where CV = 1. This high variability stems from the mixture structure, where the squared CV ranges from 1 (degenerate case) to \infty as the mixing probabilities vary. For the two-phase case (H_2) with probabilities p and 1-p, and rates \mu_1, \mu_2, the mean simplifies to E[X] = p / \mu_1 + (1-p) / \mu_2, and the variance to \text{Var}(X) = 2 \left[ p / \mu_1^2 + (1-p) / \mu_2^2 \right] - (E[X])^2.
These expressions facilitate fitting to data with elevated variability.
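A short sketch computing these H_2 quantities, with illustrative rates deliberately chosen far apart to produce high variability:

```python
# Sketch: first two moments and squared coefficient of variation of an
# H_2 mixture. Parameters are illustrative (one fast phase, one slow phase).
import numpy as np

p = np.array([0.9, 0.1])
mu = np.array([10.0, 0.2])

mean = np.sum(p / mu)              # E[X] = sum_i p_i / mu_i
ex2 = np.sum(2.0 * p / mu**2)      # E[X^2] = sum_i 2 p_i / mu_i^2
var = ex2 - mean**2
scv = var / mean**2                # squared coefficient of variation

print(round(mean, 4))   # 0.59
print(scv > 1.0)        # True: the mixture is hypervariable
```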

Higher Moments

The n-th raw moment of a hyperexponential random variable X with k phases, mixing probabilities p_i > 0 (\sum_{i=1}^k p_i = 1), and rates \mu_i > 0 (i=1,\dots,k) is given by E[X^n] = \sum_{i=1}^k p_i \frac{n!}{\mu_i^n}, for any positive integer n. This expression follows from the fact that the hyperexponential distribution is a finite mixture of exponential distributions, whose raw moments are weighted averages of the component moments. Central moments can be obtained from the raw moments using the relation \mu_n = E[(X - E[X])^n] = \sum_{j=0}^n \binom{n}{j} (-E[X])^{n-j} E[X^j]. Cumulants are derived from the coefficients in the Taylor expansion of the logarithm of the moment-generating function about zero. These higher-order moments capture the shape of the distribution, with the third central moment determining skewness and the fourth relating to kurtosis. The skewness \gamma_1 is \gamma_1 = \frac{E[(X - E[X])^3]}{(E[(X - E[X])^2])^{3/2}} = \frac{E[X^3] - 3 E[X] E[X^2] + 2 (E[X])^3 }{ (E[X^2] - (E[X])^2)^{3/2} }, and the excess kurtosis \gamma_2 is \gamma_2 = \frac{E[(X - E[X])^4]}{(E[(X - E[X])^2])^2} - 3 = \frac{E[X^4] - 4 E[X] E[X^3] + 6 (E[X])^2 E[X^2] - 3 (E[X])^4 }{ (E[X^2] - (E[X])^2)^2 } - 3. For the hyperexponential distribution, these measures typically exceed those of the single-phase exponential distribution (where \gamma_1 = 2 and \gamma_2 = 6), reflecting greater asymmetry and heavier tails due to the mixture structure. For large n, the n-th raw moment is asymptotically dominated by the phase with the smallest rate \mu_{\min} = \min_i \mu_i, as the corresponding term p_j n! / \mu_{\min}^n grows fastest; the slowest phase contributes most to the tail behavior.
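The raw-moment formula and the skewness expression can be evaluated directly; the rates and weights below are illustrative:

```python
# Sketch: n-th raw moment E[X^n] = sum_i p_i * n! / mu_i^n and the
# skewness computed from the first three raw moments. Parameters illustrative.
import numpy as np
from math import factorial

p = np.array([0.6, 0.4])
mu = np.array([2.0, 0.5])

def raw_moment(n):
    return float(np.sum(p * factorial(n) / mu**n))

m1, m2, m3 = raw_moment(1), raw_moment(2), raw_moment(3)
var = m2 - m1**2
skew = (m3 - 3 * m1 * m2 + 2 * m1**3) / var**1.5

print(round(m1, 4))   # 1.1
print(skew > 2.0)     # True: more right-skewed than a single exponential
```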

Parameter Estimation

Method of Moments

The method of moments for estimating parameters of the hyperexponential distribution equates the sample mean and variance to the corresponding theoretical moments, \mathbb{E}[X] and \mathrm{Var}(X), and solves the resulting system for the mixing probabilities p_i and rates \mu_i. This approach is particularly suited for low-order approximations, such as the two-phase case (H_2), where it yields efficient parameter fits based on the first two moments alone. For the H_2 distribution, closed-form solutions are obtained under the balanced means assumption, ensuring each exponential phase contributes equally to the overall mean. Given the sample mean \hat{\mu} and sample variance \sigma^2, let c^2 = \sigma^2 / \hat{\mu}^2 denote the squared coefficient of variation (with c^2 \geq 1). The mixing probability p solves the quadratic equation derived from the moment conditions: p^2 - p + \frac{1}{2(c^2 + 1)} = 0, with the relevant root p = \frac{1}{2} \left( 1 + \sqrt{\frac{c^2 - 1}{c^2 + 1}} \right). The rates are then \mu_1 = \frac{2p}{\hat{\mu}}, \quad \mu_2 = \frac{2(1 - p)}{\hat{\mu}}, where \mu_1 \geq \mu_2 since p \geq 1/2. This procedure provides explicit parameter values without iteration. The advantages of this method include its simplicity and lack of need for numerical optimization when k = 2, making it computationally attractive for quick approximations. However, for k > 2, matching only the first two moments results in non-unique solutions, as the system is underdetermined, often necessitating additional constraints or higher moments to resolve the ambiguity. This estimation technique originated in queueing theory and has been widely adopted for approximating empirical service time distributions from known first two moments, enabling analytical tractability in modeling.
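The closed-form H_2 fit above can be sketched as follows; the target moments are illustrative, and the check confirms that the fitted mixture reproduces them exactly:

```python
# Sketch of the two-moment H_2 fit with balanced means described above.
# Target mean and variance are illustrative.
import numpy as np

def fit_h2_balanced(mean, var):
    """Return (p, mu1, mu2) matching the first two moments (needs c^2 >= 1)."""
    c2 = var / mean**2
    if c2 < 1.0:
        raise ValueError("H_2 requires squared coefficient of variation >= 1")
    p = 0.5 * (1.0 + np.sqrt((c2 - 1.0) / (c2 + 1.0)))
    mu1 = 2.0 * p / mean          # faster phase
    mu2 = 2.0 * (1.0 - p) / mean  # slower phase
    return p, mu1, mu2

p, mu1, mu2 = fit_h2_balanced(mean=1.0, var=4.0)   # c^2 = 4
# verify the fitted H_2 reproduces the target moments
m = p / mu1 + (1 - p) / mu2
ex2 = 2 * (p / mu1**2 + (1 - p) / mu2**2)
print(round(m, 6))            # 1.0
print(round(ex2 - m**2, 6))   # 4.0
```

Note that the balanced-means construction forces each phase to contribute \hat{\mu}/2 to the mean, which is what makes the two-moment system solvable in closed form.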

Maximum Likelihood Estimation

Maximum likelihood estimation (MLE) for the parameters of a k-phase hyperexponential distribution, consisting of mixing probabilities p_i > 0 with \sum_{i=1}^k p_i = 1 and exponential rates \mu_i > 0, is based on an independent and identically distributed sample x_1, \dots, x_n from the distribution. The likelihood function is given by L(\mathbf{p}, \boldsymbol{\mu}) = \prod_{j=1}^n \sum_{i=1}^k p_i \mu_i e^{-\mu_i x_j}, where the probability density function of the hyperexponential distribution appears in the sum. The corresponding log-likelihood is \ell(\mathbf{p}, \boldsymbol{\mu}) = \sum_{j=1}^n \log\left( \sum_{i=1}^k p_i \mu_i e^{-\mu_i x_j} \right). This log-likelihood is non-convex due to the mixture structure, making direct maximization challenging and prone to multiple local optima. To address this, the expectation-maximization (EM) algorithm is commonly used, which treats the unobserved phase (mixture component) for each data point as a latent variable and iteratively maximizes a lower bound on the log-likelihood. In the E-step at iteration t, the posterior probability \gamma_{ji}^{(t)} that observation x_j arises from phase i is calculated as \gamma_{ji}^{(t)} = \frac{ p_i^{(t)} \mu_i^{(t)} e^{-\mu_i^{(t)} x_j} }{ \sum_{m=1}^k p_m^{(t)} \mu_m^{(t)} e^{-\mu_m^{(t)} x_j} }. In the M-step, the parameters are updated to maximize the expected complete-data log-likelihood: p_i^{(t+1)} = \frac{1}{n} \sum_{j=1}^n \gamma_{ji}^{(t)}, \mu_i^{(t+1)} = \frac{ \sum_{j=1}^n \gamma_{ji}^{(t)} }{ \sum_{j=1}^n \gamma_{ji}^{(t)} x_j }. The E- and M-steps are alternated until the change in log-likelihood falls below a tolerance threshold, typically requiring hundreds to thousands of iterations for convergence. Despite its effectiveness, the EM algorithm for hyperexponential distributions faces numerical challenges, including sensitivity to initial values, which can lead to convergence at suboptimal local maxima or saddle points, and computational expense in the E-step for large k or n.
Initialization strategies often involve moment matching (e.g., equating the first two sample moments to those of the fitted distribution) or multiple random starts to improve reliability, with convergence monitored via the log-likelihood or parameter stability. Implementations of the EM algorithm for hyperexponential fitting are available in statistical software; for example, the PhaseTypeR package in R supports parameter estimation for phase-type distributions, including the hyperexponential as a special case, using EM-based methods. In Python, custom EM routines can be implemented using libraries such as NumPy and SciPy for optimization and matrix operations.
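The E- and M-steps above admit a compact vectorized sketch with NumPy; the synthetic data, starting values, and fixed iteration count are illustrative choices rather than a production fitting routine:

```python
# Sketch of the EM recursion for a k-phase hyperexponential.
# Synthetic data, starting values, and iteration count are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# synthetic H_2 sample for demonstration
true_p, true_mu = np.array([0.7, 0.3]), np.array([5.0, 0.5])
phase = rng.choice(2, size=5000, p=true_p)
x = rng.exponential(1.0 / true_mu[phase])

def em_hyperexp(x, k=2, iters=500):
    p = np.full(k, 1.0 / k)
    mu = 1.0 / (np.mean(x) * np.linspace(0.5, 1.5, k))  # spread-out starts
    for _ in range(iters):
        # E-step: posterior phase probabilities gamma_{ji} for each x_j
        w = p * mu * np.exp(-np.outer(x, mu))            # shape (n, k)
        gamma = w / w.sum(axis=1, keepdims=True)
        # M-step: closed-form updates for p_i and mu_i
        p = gamma.mean(axis=0)
        mu = gamma.sum(axis=0) / (gamma * x[:, None]).sum(axis=0)
    return p, mu

p_hat, mu_hat = em_hyperexp(x)
# after each M-step the fitted mean equals the sample mean exactly
print(abs(np.sum(p_hat / mu_hat) - x.mean()) < 1e-6)
```

The final check reflects a known property of these updates: summing the M-step equations over phases shows the fitted mean \sum_i p_i/\mu_i equals the sample mean after every iteration.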

Applications

In Queueing Theory

The hyperexponential distribution is frequently employed in queueing theory to model service times exhibiting high variability, particularly in M/H_k/1 queues with Poisson arrivals and k-phase hyperexponential service times. This setup effectively captures bursty traffic patterns, where the squared coefficient of variation of service times exceeds 1, reflecting real-world scenarios like variable traffic intensities in communication networks. Similarly, in G/H_k/1 queues with general interarrival times, the hyperexponential service distribution accounts for heterogeneous processing demands, enabling approximate analysis of systems with non-Poisson inputs. A key analytical tool for M/H_k/1 models is the Pollaczek-Khinchine formula, which incorporates the Laplace-Stieltjes transform of the hyperexponential service time distribution to derive the mean queue length exactly. The transform for an H_k distribution takes the form of a weighted sum of individual transforms, \sum_{i=1}^k p_i \mu_i / (s + \mu_i), facilitating computations of steady-state performance metrics like waiting times and queue lengths under stability conditions (\rho < 1). For G/H_k/1 queues, approximations such as diffusion limits provide similar insights into transient behavior. Approximation techniques often involve fitting hyperexponential distributions to empirical traces of service times, such as web traffic logs, using methods like the expectation-maximization algorithm to match moments or tail behavior for performance prediction in complex queues. These fits enable scalable simulations or bounds on delay probabilities, improving accuracy over simpler exponential assumptions for high-variability data. For example, in telephony systems, the hyperexponential distribution models mixed call types, including short voice interactions and prolonged data sessions, allowing queueing models to predict congestion and staffing needs in multi-server environments with heterogeneous demands.
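For the mean waiting time, the Pollaczek-Khinchine mean-value form W = \lambda E[S^2] / (2(1 - \rho)) needs only the first two service moments, which for H_k are available in closed form; the arrival rate and service parameters below are illustrative:

```python
# Sketch: mean waiting time in an M/H_2/1 queue via the Pollaczek-Khinchine
# mean-value formula W = lambda * E[S^2] / (2 * (1 - rho)).
# Arrival rate and service parameters are illustrative.
import numpy as np

lam_arr = 0.5                  # Poisson arrival rate
p = np.array([0.9, 0.1])       # service phase probabilities
mu = np.array([10.0, 0.2])     # service phase rates

es = np.sum(p / mu)            # E[S]
es2 = np.sum(2.0 * p / mu**2)  # E[S^2]
rho = lam_arr * es             # utilization; must satisfy rho < 1
w = lam_arr * es2 / (2.0 * (1.0 - rho))

print(round(rho, 4))   # 0.295
print(round(w, 4))     # mean wait in queue
# an M/M/1 queue with the same mean service time waits far less,
# because its E[S^2] = 2 E[S]^2 is much smaller
w_mm1 = lam_arr * (2 * es**2) / (2 * (1 - rho))
print(w > w_mm1)
```

This illustrates why two service-time distributions with the same mean but different variability produce very different delays: the waiting time depends on the second moment, not just the mean.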

In Reliability Analysis

The hyperexponential distribution is employed in reliability analysis to model the failure times of heterogeneous systems, where components or subsystems exhibit varying exponential failure rates corresponding to different operational modes or types. This mixture model captures the overall system behavior as a weighted combination of phase-specific exponentials, providing a tractable representation for complex, non-homogeneous populations without assuming a constant hazard rate. A key application involves approximating heavy-tailed distributions such as the Weibull or log-normal, which are common in reliability for capturing wear-out or fatigue failures, using hyperexponential mixtures to facilitate analytical tractability in performance evaluation and system design. For distributions with decreasing failure rates, like Weibull distributions with shape parameter \gamma < 1, the hyperexponential provides an effective fit by matching the complementary cumulative distribution function (CCDF) at multiple points, ensuring accuracy for long tails that represent rare but critical extended lifetimes. This approximation is particularly valuable for simulating or verifying reliability metrics in scenarios where exact computations are computationally intensive. In these models, the mean time to failure (MTTF) is given by the expected value \mathbb{E}[X] = \sum_{i=1}^k p_i / \lambda_i, which quantifies the average operational duration before system failure, while the reliability function R(t) = \sum_{i=1}^k p_i e^{-\lambda_i t} represents the survival probability at time t, directly derived from the survival function of the mixture. For instance, in standby equipment with hyperexponential failure times, the MTTF follows in closed form from the parameterized rates, enabling predictions of accumulated operating time to failure.
An illustrative example is the modeling of failure rates in sensor networks, where devices of distinct types (e.g., varying power levels or environmental exposures) lead to heterogeneous exponential lifetimes; the hyperexponential distribution estimates parameters efficiently on resource-constrained nodes, supporting reliability assessments for network longevity and fault tolerance without excessive data transmission. This approach has been applied to environmental monitoring systems, where long-tailed failure patterns due to diverse sensor behaviors are captured to predict overall system dependability.

Phase-Type Distributions

Phase-type (PH) distributions model the time until absorption in a continuous-time Markov chain with a finite number of transient states and one absorbing state. These distributions generalize the exponential distribution by allowing transitions among multiple exponential phases, enabling flexible modeling of stochastic processes with varying variability. The hyperexponential distribution emerges as a specific subclass of PH distributions characterized by parallel phases, where there are no transitions between phases, resulting in a mixture of independent exponential distributions. In this representation, the infinitesimal generator matrix \mathbf{Q} for a k-phase hyperexponential distribution H_k, restricted to the transient states, is a k \times k diagonal matrix with entries q_{ii} = -\lambda_i for i = 1, \dots, k, where \lambda_i > 0 are the absorption rates from each phase. The initial probability vector \boldsymbol{\alpha} assigns probability \alpha_i to starting in phase i, and absorption occurs directly from any phase at rate \lambda_i. This structure ensures the hyperexponential distribution's density aligns with the general PH form. PH distributions exhibit desirable closure properties: the convolution (sum) of independent PH random variables remains PH, as does any finite mixture of PH distributions. The hyperexponential distribution specifically corresponds to mixtures of exponential (one-phase PH) distributions, preserving the PH class while achieving squared coefficients of variation greater than one for high-variability scenarios. The matrix-exponential representation of PH distributions, including the hyperexponential, facilitates computational analysis through efficient matrix operations, avoiding the numerical challenges of direct transform inversions and enabling scalable solutions in queueing and reliability models.
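The equivalence between the diagonal-generator PH representation, with CDF 1 - \boldsymbol{\alpha} e^{\mathbf{Q}t} \mathbf{1}, and the mixture form of the CDF can be verified numerically; the parameters below are illustrative:

```python
# Sketch: the PH representation of H_k with a diagonal generator Q.
# The matrix-exponential CDF 1 - alpha exp(Qt) 1 must agree with the
# mixture CDF 1 - sum_i p_i exp(-lam_i t). Parameters are illustrative.
import numpy as np
from scipy.linalg import expm

alpha = np.array([0.6, 0.4])   # initial probability vector over phases
lam = np.array([2.0, 0.5])
Q = np.diag(-lam)              # diagonal: no transitions between phases

def cdf_ph(t):
    return 1.0 - alpha @ expm(Q * t) @ np.ones(2)

def cdf_mix(t):
    return 1.0 - np.sum(alpha * np.exp(-lam * t))

for t in (0.1, 1.0, 5.0):
    assert abs(cdf_ph(t) - cdf_mix(t)) < 1e-10
print("PH and mixture forms agree")
```

Because Q is diagonal here, expm(Q t) is just the elementwise exponential of the diagonal, which is exactly why the PH form collapses to the mixture of exponentials.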

Hypoexponential Distribution

The hypoexponential distribution arises as the distribution of the sum of a finite number of independent exponential random variables with possibly distinct rates \lambda_1, \lambda_2, \dots, \lambda_k > 0, representing a series of sequential phases in a phase-type framework. This contrasts with the hyperexponential distribution, which models a parallel mixture of exponentials; the hypoexponential instead captures the convolution of these exponentials, suitable for processes with reduced variability. When all rates are identical, \lambda_1 = \lambda_2 = \dots = \lambda_k = \lambda, the hypoexponential distribution specializes to the Erlang distribution, a well-known light-tailed distribution used in modeling deterministic-like delays. The probability density function (PDF) of a hypoexponential random variable Y = \sum_{i=1}^k X_i, where each X_i \sim \exp(\lambda_i) and the \lambda_i are distinct, is given by f_Y(y) = \sum_{j=1}^k \ell_j \lambda_j e^{-\lambda_j y}, \quad y > 0, where the coefficients are \ell_j = \prod_{\substack{i=1 \\ i \neq j}}^k \frac{\lambda_i}{\lambda_i - \lambda_j}. This form emerges from the convolution of the individual exponential densities and highlights the distribution's phase-type nature, with absorption occurring only after traversing all phases in sequence. In the phase-type representation, the hypoexponential distribution corresponds to an absorbing continuous-time Markov chain starting in the first transient state, progressing through a linear chain of k states, and absorbing from the last state. The infinitesimal generator matrix Q for the transient states is upper bidiagonal, with diagonal entries Q_{ii} = -\lambda_i and superdiagonal entries Q_{i,i+1} = \lambda_i for i = 1, \dots, k-1, ensuring forward-only transitions between phases. The initial probability vector is \boldsymbol{\alpha} = (1, 0, \dots, 0), reflecting the sequential start.
This chain structure embodies the "dual" of the hyperexponential's parallel phases within the broader phase-type family, where the hypoexponential models light-tailed behaviors with squared coefficient of variation strictly less than 1. In comparison, the hyperexponential exhibits squared coefficient of variation greater than 1, emphasizing heavier tails.
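A sketch evaluating the partial-fraction coefficients \ell_j and checking that the resulting density integrates to 1 with mean \sum_i 1/\lambda_i; the distinct rates below are illustrative:

```python
# Sketch: hypoexponential density via the coefficients
# ell_j = prod_{i != j} lam_i / (lam_i - lam_j), for distinct rates.
# Rates are illustrative.
import numpy as np
from scipy.integrate import quad

lam = np.array([3.0, 1.0, 0.4])   # distinct rates of the sequential phases

def coeffs(lam):
    ell = np.ones(len(lam))
    for j in range(len(lam)):
        for i in range(len(lam)):
            if i != j:
                ell[j] *= lam[i] / (lam[i] - lam[j])
    return ell

ell = coeffs(lam)
pdf = lambda y: np.sum(ell * lam * np.exp(-lam * y))

total, _ = quad(pdf, 0, np.inf)
mean, _ = quad(lambda y: y * pdf(y), 0, np.inf)
print(round(total, 6))             # 1.0: a proper density
print(round(mean, 6))              # 3.833333, i.e. 1/3 + 1/1 + 1/0.4
print(round(float(np.sum(1.0 / lam)), 6))
```

Note that some coefficients \ell_j are negative; only their weighted sum is a valid density, which is what distinguishes this convolution form from a hyperexponential mixture with nonnegative weights.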
