
Multivariate gamma function

The multivariate gamma function, denoted \Gamma_m(a), is a multidimensional generalization of the classical gamma function \Gamma(a), arising originally in John Wishart's 1928 derivation of the distribution of sample covariance matrices from multivariate normal samples. It is defined for complex parameters a with \Re(a) > (m-1)/2 as the integral over the cone of m \times m positive definite symmetric matrices \mathbf{S}: \Gamma_m(a) = \int_{\mathbf{S} > 0} |\mathbf{S}|^{a - (m+1)/2} e^{-\operatorname{tr}(\mathbf{S})} \, d\mathbf{S}, where |\mathbf{S}| is the determinant of \mathbf{S} and \operatorname{tr}(\mathbf{S}) is its trace. This function serves as a fundamental normalizing constant in matrix-variate probability distributions.

A closed-form expression for the multivariate gamma function expresses it in terms of the univariate gamma function: \Gamma_m(a) = \pi^{m(m-1)/4} \prod_{j=1}^m \Gamma\left(a - \frac{j-1}{2}\right). This product formula facilitates numerical computation and reveals connections to lower-dimensional gamma functions, with the exponent of \pi arising from the volume elements in the matrix space. The function extends naturally to more general forms, such as \Gamma_m(a_1, \dots, a_m) = \pi^{m(m-1)/4} \prod_{j=1}^m \Gamma(a_j - (j-1)/2), where the parameters can differ, enabling applications in non-central or structured matrix distributions.

In multivariate statistics, the multivariate gamma function is indispensable, appearing prominently in the probability density function of the Wishart distribution W_m(n, \Sigma), which models sample covariance matrices from multivariate normal data: f(\mathbf{W}) = \frac{|\Sigma|^{-n/2}}{2^{mn/2} \Gamma_m(n/2)} |\mathbf{W}|^{(n-m-1)/2} \exp\left(-\frac{1}{2} \operatorname{tr}(\Sigma^{-1} \mathbf{W})\right), for n > m-1. It also features in the multivariate beta function, defined as \mathrm{B}_m(a,b) = \Gamma_m(a) \Gamma_m(b) / \Gamma_m(a+b), which normalizes distributions over matrix partitions like the matrix beta. These roles underpin hypothesis testing, covariance estimation, and inference in high-dimensional data, with extensions to complex-valued and cone-valued variants broadening its utility in advanced statistical models.

Definition and Representations

Integral Definition

The multivariate gamma function \Gamma_p(a) for a positive integer p and parameter a is fundamentally defined via the integral representation \Gamma_p(a) = \int_{S > 0} \exp(-\operatorname{tr}(S)) \, |S|^{a - (p+1)/2} \, dS, where the integral is over the cone of all p \times p positive definite symmetric matrices S > 0, \operatorname{tr}(S) is the trace of S, |S| denotes the determinant of S, and dS is the Lebesgue measure on the vector space of p \times p symmetric real matrices. This integral converges absolutely when \operatorname{Re}(a) > (p-1)/2. The Lebesgue measure dS corresponds to the product measure over the independent entries of the upper triangular part of S (including the diagonal), and is invariant under congruence transformations S \mapsto O S O^T for orthogonal matrices O. This integral definition was introduced by A. T. James in the context of normalizing constants for the Wishart distribution. When p=1, the definition reduces to the standard univariate gamma function \Gamma(a).
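As a concrete numerical check, the following Python sketch (illustrative only; the helper name gamma2_numeric and the truncation of the infinite domain at 50 are choices made here, not from any standard library) evaluates the defining integral for p = 2 by parameterizing the cone through the entries x = S_{11}, y = S_{22}, z = S_{12}, and compares the result with the closed-form value \pi^{1/2} \Gamma(a) \Gamma(a - 1/2) derived in the next subsection.

```python
import math
from scipy import integrate

def gamma2_numeric(a: float) -> float:
    """Evaluate Gamma_2(a) by integrating |S|^(a - 3/2) * exp(-tr S) over the
    cone of 2x2 positive definite symmetric S = [[x, z], [z, y]], i.e.
    x > 0, y > 0, x*y - z^2 > 0.  The domain is truncated at 50 for quadrature."""
    def integrand(z, y, x):
        return (x * y - z * z) ** (a - 1.5) * math.exp(-(x + y))
    val, _err = integrate.tplquad(
        integrand,
        0.0, 50.0,                           # x range
        lambda x: 0.0, lambda x: 50.0,       # y range
        lambda x, y: -math.sqrt(x * y),      # z lower bound of the cone
        lambda x, y: math.sqrt(x * y),       # z upper bound of the cone
    )
    return val

# Compare with the closed form Gamma_2(a) = sqrt(pi) * Gamma(a) * Gamma(a - 1/2)
a = 3.0
print(gamma2_numeric(a), math.sqrt(math.pi) * math.gamma(a) * math.gamma(a - 0.5))
```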

Product Form

The multivariate gamma function admits a closed-form product representation in terms of univariate gamma functions, given by \Gamma_p(a) = \pi^{p(p-1)/4} \prod_{j=1}^p \Gamma\left(a + \frac{1-j}{2}\right), valid for \operatorname{Re}(a) > (p-1)/2. This expression facilitates numerical computation and analytical manipulation in applications requiring explicit evaluations. The product form is derived from the integral representation of \Gamma_p(a) by a change of variables that diagonalizes the positive definite matrix argument into its eigenvalues \lambda_1, \dots, \lambda_p > 0 and an orthogonal matrix of eigenvectors. The associated Jacobian factor introduces a Vandermonde determinant \prod_{1 \leq i < j \leq p} (\lambda_j - \lambda_i), which, upon symmetrization and integration over the eigenvalues, yields the stated product of univariate gamma functions after separating the angular and radial components of the measure on the space of positive definite matrices. For small values of p, the formula simplifies notably. When p=1, it reduces to the standard gamma function: \Gamma_1(a) = \Gamma(a). For p=2, it becomes \Gamma_2(a) = \pi^{1/2} \Gamma(a) \Gamma(a - 1/2), which connects to the beta function via the relation B(a, b) = \Gamma(a) \Gamma(b) / \Gamma(a+b).
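In code, the product form translates directly into a log-domain evaluation; the sketch below is a minimal standard-library Python version (SciPy ships an equivalent routine, scipy.special.multigammaln, which could be used instead).

```python
import math

def log_multigamma_product(a: float, p: int) -> float:
    """log Gamma_p(a) via the product formula:
    p(p-1)/4 * log(pi) + sum_{j=1}^p log Gamma(a + (1-j)/2).
    Valid for real a > (p-1)/2."""
    if a <= (p - 1) / 2:
        raise ValueError("require a > (p-1)/2")
    return (p * (p - 1) / 4.0) * math.log(math.pi) + sum(
        math.lgamma(a + (1 - j) / 2.0) for j in range(1, p + 1)
    )

# p = 1 reduces to the univariate case: log Gamma_1(a) = log Gamma(a)
assert abs(log_multigamma_product(2.5, 1) - math.lgamma(2.5)) < 1e-12
```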

Properties

Recurrence Relations

The multivariate gamma function satisfies a fundamental recurrence relation that expresses it in terms of the univariate gamma function and a lower-dimensional multivariate gamma function, facilitating recursive computation across dimensions. For a positive integer p and \operatorname{Re}(a) > (p-1)/2, \Gamma_p(a) = \pi^{(p-1)/2} \Gamma(a) \, \Gamma_{p-1}\left(a - \frac{1}{2}\right). This relation follows directly from the product representation of \Gamma_p(a) and is a key property in multivariate statistical theory. Iterative application of this recurrence reduces the computation to univariate gamma functions. For instance, for p=3, \Gamma_3(a) = \pi^{3/2} \, \Gamma(a) \, \Gamma\left(a - \frac{1}{2}\right) \, \Gamma(a - 1), which aligns with the explicit product form for small dimensions and enables efficient recursive evaluation for higher p. For numerical stability, particularly when p is large or a is large enough that direct evaluation of the gamma factors would overflow, the logarithmic version of the recurrence is employed: \log \Gamma_p(a) = \frac{p-1}{2} \log \pi + \log \Gamma(a) + \log \Gamma_{p-1}\left(a - \frac{1}{2}\right). This form mitigates issues with large intermediate values while preserving accuracy in recursive calls down to the scalar case. The recurrence's form arises from the integral representation of the multivariate gamma function, where integration over positive definite matrices leverages properties of the determinant (capturing volume scaling) and trace (in the exponential term) to reduce dimensionality by one, yielding the factor of \pi^{(p-1)/2} from Gaussian integrals over orthogonal complements.
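A minimal Python sketch of the logarithmic recurrence, recursing down to the scalar base case (the function name is illustrative):

```python
import math

def log_multigamma(a: float, p: int) -> float:
    """log Gamma_p(a) via the recurrence
    log Gamma_p(a) = (p-1)/2 * log(pi) + log Gamma(a) + log Gamma_{p-1}(a - 1/2),
    with base case log Gamma_1(a) = log Gamma(a)."""
    if p == 1:
        return math.lgamma(a)
    return ((p - 1) / 2.0) * math.log(math.pi) + math.lgamma(a) \
        + log_multigamma(a - 0.5, p - 1)

# Example for p = 3: matches pi^(3/2) * Gamma(a) * Gamma(a - 1/2) * Gamma(a - 1)
a = 4.0
direct = 1.5 * math.log(math.pi) + math.lgamma(a) \
    + math.lgamma(a - 0.5) + math.lgamma(a - 1.0)
assert abs(log_multigamma(a, 3) - direct) < 1e-12
```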

Relation to Univariate Gamma

The multivariate gamma function \Gamma_p(a) reduces to the standard univariate gamma function when the dimension p = 1, as \Gamma_1(a) = \Gamma(a), where \Gamma(a) is the classical gamma function defined for \operatorname{Re}(a) > 0 by the Euler integral \Gamma(a) = \int_0^\infty t^{a-1} e^{-t} \, dt. This reduction highlights the univariate case as a foundational instance of the multivariate extension, preserving key properties such as the functional equation \Gamma(a+1) = a \Gamma(a) in one dimension.

The multivariate gamma function extends the univariate gamma as a meromorphic function on the complex plane, analytic for \operatorname{Re}(a) > (p-1)/2 and characterized by simple poles. Its poles occur at a = (j-1)/2 - k for nonnegative integers k = 0, 1, 2, \dots and j = 1, 2, \dots, p, arising from the singularities of the constituent univariate gamma functions in its product representation \Gamma_p(a) = \pi^{p(p-1)/4} \prod_{j=1}^p \Gamma\left(a - \frac{j-1}{2}\right). This structure ensures that \Gamma_p(a) inherits the meromorphic nature of \Gamma(a) while introducing dimension-dependent pole locations that reflect the increased complexity in higher dimensions.

For integer values of a sufficiently large to avoid poles, \Gamma_p(a) evaluates to explicit products involving factorials and double factorials via the values of the univariate gamma at positive integers and half-integers. A representative case is \Gamma_p\left(n + \frac{p+1}{2}\right) for nonnegative integer n, which simplifies to \pi^{p(p-1)/4} \prod_{j=1}^p \Gamma\left(n + 1 + \frac{p-j}{2}\right), where each \Gamma term reduces to a product of factorials when its argument is an integer, or to double factorials scaled by \sqrt{\pi} when it is a half-integer. For instance, for p=2 this yields expressions involving (2n+1)!! \cdot n! up to constants, underscoring connections to combinatorial structures like the volumes of Stiefel manifolds.

In the asymptotic regime of large |a| with fixed p and |\arg a| < \pi - \delta for \delta > 0, the behavior of \Gamma_p(a) follows from applying Stirling's series to each factor in the product representation, yielding \log \Gamma_p(a) \sim \frac{p}{2} \log(2\pi) + \sum_{j=1}^p \left[ \left(a - \frac{j-1}{2}\right) \log\left(a - \frac{j-1}{2}\right) - \left(a - \frac{j-1}{2}\right) - \frac{1}{2} \log\left(a - \frac{j-1}{2}\right) \right] + \frac{p(p-1)}{4} \log \pi + O(1/a). This approximation connects directly to the univariate Stirling formula \log \Gamma(a) \sim (a - 1/2) \log a - a + \frac{1}{2} \log(2\pi) + O(1/a), demonstrating how the multivariate form scales with the dimension p while retaining the dominant terms of the univariate case. For p = 1, the expression collapses to the univariate asymptotic, confirming consistency across dimensions.
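As an illustration, the following Python snippet (a sketch, assuming SciPy's multigammaln for the exact reference value) compares the truncated Stirling-type expansion above with the exact log value at a moderate argument.

```python
import math
from scipy.special import multigammaln  # exact log Gamma_p via the product formula

def stirling_log_multigamma(a: float, p: int) -> float:
    """Leading-order asymptotic: sum Stirling's (a_j - 1/2) log a_j - a_j over
    the shifted arguments a_j = a - (j-1)/2, plus the constant terms."""
    total = (p * (p - 1) / 4.0) * math.log(math.pi) \
        + (p / 2.0) * math.log(2.0 * math.pi)
    for j in range(1, p + 1):
        aj = a - (j - 1) / 2.0
        total += (aj - 0.5) * math.log(aj) - aj
    return total

a, p = 50.0, 3
print(stirling_log_multigamma(a, p), multigammaln(a, p))  # agree to O(1/a)
```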

Derivatives

Digamma Function

The multivariate digamma function, denoted \psi_p(a), is defined as the logarithmic derivative of the multivariate gamma function \Gamma_p(a), that is, \psi_p(a) = \frac{d}{da} \log \Gamma_p(a). This function arises naturally in the analysis of matrix-variate distributions, such as the Wishart distribution, where it appears in expressions for expectations involving logarithms of determinants. Using the product representation of the multivariate gamma function, \Gamma_p(a) = \pi^{p(p-1)/4} \prod_{i=1}^p \Gamma\left(a + \frac{1-i}{2}\right), the logarithm is \log \Gamma_p(a) = \frac{p(p-1)}{4} \log \pi + \sum_{i=1}^p \log \Gamma\left(a + \frac{1-i}{2}\right). Differentiating with respect to a yields the explicit form \psi_p(a) = \sum_{i=1}^p \psi\left(a + \frac{1-i}{2}\right), where \psi denotes the univariate digamma function. This sum allows direct computation of \psi_p(a) without evaluating integrals, leveraging efficient algorithms for the univariate case. For the case p=2, the formula simplifies to \psi_2(a) = \psi(a) + \psi\left(a - \frac{1}{2}\right). This bivariate form is particularly useful in two-dimensional statistical models. When a = n is a positive integer, \psi_p(n) generalizes harmonic numbers through the univariate relation \psi(n) = -\gamma + H_{n-1}, where \gamma is the Euler-Mascheroni constant and H_k is the kth harmonic number. Thus, \psi_p(n) = \sum_{i=1}^p \left[ -\gamma + H_{n + (1-i)/2 - 1} \right] for the integer-argument terms, with the half-integer arguments handled by the extension \psi\left(n + \frac{1}{2}\right) = -\gamma - 2 \ln 2 + 2 H_{2n} - H_n, which follows from the Legendre duplication formula. These expressions facilitate exact evaluations in discrete settings.
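A minimal Python sketch of the sum formula, assuming SciPy's univariate digamma is available (the name multidigamma is illustrative):

```python
from scipy.special import digamma

def multidigamma(a: float, p: int) -> float:
    """psi_p(a) = sum_{i=1}^p psi(a + (1-i)/2), the logarithmic derivative
    of the multivariate gamma function."""
    return sum(float(digamma(a + (1 - i) / 2.0)) for i in range(1, p + 1))

# p = 2 case: psi_2(a) = psi(a) + psi(a - 1/2)
a = 3.0
assert abs(multidigamma(a, 2) - (float(digamma(a)) + float(digamma(a - 0.5)))) < 1e-12
```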

Polygamma Functions

The polygamma functions of order n \geq 1 for the multivariate gamma function are defined as the higher-order derivatives of the logarithm of \Gamma_p(a): \psi_p^{(n)}(a) = \frac{d^{n+1}}{da^{n+1}} \log \Gamma_p(a). Given the product representation of the multivariate gamma function, \Gamma_p(a) = \pi^{p(p-1)/4} \prod_{i=1}^p \Gamma\left(a + \frac{1-i}{2}\right), differentiating the logarithm yields \psi_p^{(n)}(a) = \sum_{i=1}^p \psi^{(n)}\left(a + \frac{1-i}{2}\right), where \psi^{(n)}(\cdot) denotes the univariate polygamma function of order n. The univariate polygamma function relates to the Hurwitz zeta function via \psi^{(n)}(z) = (-1)^{n+1} n! \, \zeta(n+1, z), so the multivariate version is the corresponding sum over shifted arguments. Applying the univariate recurrence \psi^{(n)}(z+1) = \psi^{(n)}(z) + (-1)^n n! \, z^{-(n+1)} to each term gives \psi_p^{(n)}(a+1) = \psi_p^{(n)}(a) + (-1)^n n! \sum_{i=1}^p \left(a + \frac{1-i}{2}\right)^{-(n+1)}, which reduces to the univariate case when p=1. For large a, asymptotic expansions of the univariate polygamma functions involve the Bernoulli numbers B_{2k}: \psi^{(n)}(z) \sim (-1)^{n-1} \left( \frac{(n-1)!}{z^n} + \frac{n!}{2 z^{n+1}} + \sum_{k=1}^m \frac{(n + 2k - 1)! \, B_{2k}}{(2k)! \, z^{n + 2k}} \right) + O\left( \frac{1}{z^{n + 2m + 1}} \right), as z \to \infty in |\arg z| < \pi. The multivariate polygamma follows by summing these expansions over the shifted arguments. The digamma function serves as the case n=0.
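The same pattern extends directly to higher orders; a brief Python sketch using SciPy's univariate polygamma (the name multipolygamma is illustrative):

```python
from scipy.special import polygamma

def multipolygamma(n: int, a: float, p: int) -> float:
    """psi_p^{(n)}(a) = sum_{i=1}^p psi^{(n)}(a + (1-i)/2)."""
    return sum(float(polygamma(n, a + (1 - i) / 2.0)) for i in range(1, p + 1))

# Trigamma of the bivariate case: psi_2^{(1)}(a) = psi^{(1)}(a) + psi^{(1)}(a - 1/2)
a = 3.0
expected = float(polygamma(1, a)) + float(polygamma(1, a - 0.5))
assert abs(multipolygamma(1, a, 2) - expected) < 1e-12
```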

Applications

Multivariate Statistics

The multivariate gamma function plays a central role in multivariate statistics, particularly as a component of the normalizing constants for several key matrix-variate distributions derived from the multivariate normal model. It arises naturally in the analysis of sample covariance matrices and precision matrices, facilitating the derivation of densities for distributions that model uncertainty in covariance structures. The function was introduced by James in the context of multivariate analysis of variance, where it enabled the characterization of latent roots and matrix variates from normal samples.

In the Wishart distribution, which generalizes the chi-squared distribution to model the distribution of sample scatter matrices from n independent p-dimensional normal observations with covariance matrix \Sigma, the multivariate gamma function appears in the normalizing constant. Specifically, for degrees of freedom n > p-1, the density is given by f(W) = \frac{ |W|^{(n-p-1)/2} \exp\left( -\frac{1}{2} \operatorname{tr}(\Sigma^{-1} W) \right) }{ 2^{np/2} |\Sigma|^{n/2} \Gamma_p(n/2) }, where W is a p \times p positive definite matrix and \Gamma_p(\cdot) denotes the multivariate gamma function of dimension p. This form ensures the integral over the space of positive definite matrices equals 1, and the Wishart serves as a conjugate prior for the precision matrix (the inverse covariance) in Bayesian multivariate normal models.

The inverse Wishart distribution, a conjugate prior for the covariance matrix itself, similarly relies on the multivariate gamma function in its normalizing constant. For scale matrix \Psi (positive definite) and degrees of freedom \nu > p-1, the density is f(\Sigma) = \frac{ |\Psi|^{\nu/2} }{ 2^{\nu p /2} \Gamma_p(\nu/2) } |\Sigma|^{-(\nu + p + 1)/2} \exp\left( -\frac{1}{2} \operatorname{tr}(\Psi \Sigma^{-1}) \right), which arises when expressing the distribution of the inverse of a Wishart random matrix. This connection highlights the inverse Wishart's role in modeling covariance uncertainty, with the multivariate gamma ensuring proper normalization over the positive definite cone.

Connections to the Dirichlet and matrix beta distributions further underscore the multivariate gamma's utility in compositional data analysis and ratio-based models. The matrix variate Dirichlet distribution, a generalization for modeling proportions over matrix entries, incorporates ratios of multivariate gamma functions in its normalizing constant, analogous to the scalar Dirichlet's use of gamma ratios. Similarly, the matrix beta distribution of type I, defined for positive definite matrices X with scalar parameters a and b (each exceeding (p-1)/2), has a normalizing constant B_p(a, b) = \Gamma_p(a) \Gamma_p(b) / \Gamma_p(a + b), built directly from such ratios of multivariate gamma functions. These structures facilitate applications in Bayesian inference for correlation matrices and multivariate betas in MANOVA settings.
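To make the role of \Gamma_p(n/2) in the normalization concrete, here is a minimal Python sketch of the Wishart log-density (the helper name wishart_logpdf is illustrative; scipy.stats.wishart offers an equivalent logpdf for cross-checking).

```python
import numpy as np
from scipy.special import multigammaln

def wishart_logpdf(W: np.ndarray, n: float, Sigma: np.ndarray) -> float:
    """Log-density of W_p(n, Sigma) at a positive definite p x p matrix W,
    with Gamma_p(n/2) entering through the normalizing constant."""
    p = W.shape[0]
    _, logdet_W = np.linalg.slogdet(W)
    _, logdet_S = np.linalg.slogdet(Sigma)
    trace_term = np.trace(np.linalg.solve(Sigma, W))  # tr(Sigma^{-1} W)
    return ((n - p - 1) / 2.0) * logdet_W - 0.5 * trace_term \
        - (n * p / 2.0) * np.log(2.0) - (n / 2.0) * logdet_S \
        - multigammaln(n / 2.0, p)

# Cross-check against SciPy's implementation
from scipy.stats import wishart
W = np.array([[2.0, 0.3], [0.3, 1.5]])
print(wishart_logpdf(W, 5, np.eye(2)), wishart(df=5, scale=np.eye(2)).logpdf(W))
```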

Special Functions and Integrals

The multivariate gamma function arises in generalizations of integrals over eigenvalue distributions in random matrix theory, such as Selberg-type integrals. In the context of the Wishart distribution, the volume element in eigenvalue coordinates incorporates \Gamma_p(n/2) to account for the Jacobian of the transformation. The multivariate gamma function also appears prominently in the definitions of hypergeometric functions with matrix arguments, such as the Gaussian hypergeometric function {}_2F_1 of matrix argument. Specifically, the Jacobi polynomial of matrix argument is normalized by ratios of multivariate gamma functions: P^{(\gamma,\delta)}_\nu(\mathbf{T}) = \frac{\Gamma_m\left(\gamma + \nu + \frac{1}{2}(m+1)\right)}{\Gamma_m\left(\gamma + \frac{1}{2}(m+1)\right)} \, {}_2F_1\left(-\nu, \gamma + \delta + \nu + \tfrac{1}{2}(m+1); \gamma + \tfrac{1}{2}(m+1); \mathbf{T}\right), for \mathbf{0} < \mathbf{T} < \mathbf{I} and \Re(\gamma) > -1, ensuring convergence over the space of symmetric positive definite matrices. This relation extends the scalar hypergeometric series to matrix-variate settings, with \Gamma_m providing the essential prefactor for integral representations and reflection formulas in multivariable analysis.

Distinct from the multivariate gamma function \Gamma_p(a), which depends on a scalar parameter a and a dimension p for symmetric matrix arguments, the Barnes multiple gamma function \Gamma_n(z; a_1, \dots, a_n) generalizes the classical gamma function through n quasi-period parameters a_i. It is defined as an infinite product of Weierstrass type regularized by the multiple Hurwitz zeta function \zeta_n, with (up to normalization conventions) \log \Gamma_n(z; \vec{a}) = \left. \frac{\partial}{\partial s} \zeta_n(s, z; \vec{a}) \right|_{s=0}. This vector-parameter form satisfies higher-order functional equations of the type \Gamma_n(z + a_k; \vec{a}) = \Gamma_n(z; \vec{a}) / \Gamma_{n-1}(z; \vec{a} \setminus \{a_k\}), contrasting with the fixed-parameter structure of \Gamma_p(a) used in matrix statistics.
