Logarithmic distribution

In probability theory and statistics, the logarithmic distribution, also known as the logarithmic series distribution, is a family of discrete probability distributions defined on the positive integers with a single shape parameter \theta \in (0,1), where the probability mass function is given by P(X = k) = -\frac{\theta^k}{k \ln(1 - \theta)} for k = 1, 2, 3, \dots. This distribution is derived from the Maclaurin series of the negative logarithm function, -\ln(1 - \theta) = \sum_{k=1}^\infty \frac{\theta^k}{k}, normalized to form a valid probability distribution, and it features a monotonically decreasing probability mass function with its mode at k=1, reflecting a heavy tail suitable for modeling skewed count data. The logarithmic distribution was first introduced in 1943 by Ronald A. Fisher, A. Steven Corbet, and C. B. Williams to describe the relative abundances of species in random samples from animal populations, particularly in ecological surveys where many species are represented by few individuals, such as in collections of Malayan butterflies (620 species from 3,306 individuals) and light-trap catches of moths (240 species from 15,609 individuals). In this context, the expected number of species with exactly n individuals is modeled as \alpha \frac{x^n}{n} (where \alpha > 0 is an index of diversity and 0 < x < 1), leading to the total number of species S = \alpha \ln(1/(1-x)) and total individuals N = \alpha x / (1-x), which provided a robust fit to empirical data and highlighted the randomness inherent in ecological sampling. Key properties include its mean \mu = \frac{\theta}{(1 - \theta) \ln(1/(1 - \theta))} and variance \sigma^2 = \frac{\theta \left[ \ln(1/(1 - \theta)) - \theta \right]}{(1 - \theta)^2 [\ln(1/(1 - \theta))]^2}, both of which increase with \theta; the distribution is overdispersed relative to the Poisson distribution (variance exceeding the mean) once \theta > 1 - e^{-1} \approx 0.632.
The distribution is a special case of a power series distribution and serves as a limiting form of the negative binomial distribution under certain parameterizations, with applications extending beyond ecology to linguistics (e.g., word frequency distributions), insurance (modeling claim counts for rare events), genetics (mutation occurrences), and information theory (term frequencies in documents).

Definition and Formalism

Probability Mass Function

The logarithmic distribution is a discrete probability distribution supported on the positive integers k = 1, 2, 3, \dots. It is parameterized by a shape parameter \theta \in (0, 1), which controls the decay rate of the probabilities. The probability mass function is given by P(X = k) = -\frac{\theta^k}{k \ln(1 - \theta)}, \quad k = 1, 2, 3, \dots. This formula ensures that the probabilities are positive since \ln(1 - \theta) < 0 for \theta \in (0, 1). The normalizing constant -\frac{1}{\ln(1 - \theta)} arises from the Maclaurin expansion of the natural logarithm, specifically \sum_{k=1}^\infty \frac{\theta^k}{k} = -\ln(1 - \theta) for 0 < \theta < 1, which confirms that the probabilities sum to 1 over the support.
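A minimal sketch of the PMF and a numerical check of the normalization (the helper name `logser_pmf` is illustrative; the series is truncated where the terms are negligible):

```python
import math

def logser_pmf(k: int, theta: float) -> float:
    """PMF of the logarithmic distribution: -theta**k / (k * ln(1 - theta))."""
    return -theta**k / (k * math.log(1.0 - theta))

theta = 0.5
# The probabilities sum to 1 because sum_k theta**k / k = -ln(1 - theta);
# truncating at k = 199 leaves an error on the order of theta**200.
total = sum(logser_pmf(k, theta) for k in range(1, 200))
print(total)  # ~1.0
```

The same values are available from SciPy as `scipy.stats.logser.pmf(k, theta)`, which uses the identical parameterization.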

Cumulative Distribution Function

The cumulative distribution function (CDF) of the logarithmic distribution with parameter 0 < \theta < 1 is defined for nonnegative integers k as F(k) = P(X \leq k) = \frac{\sum_{j=1}^{k} \frac{\theta^j}{j}}{-\ln(1-\theta)}, with F(0) = 0 since the support begins at 1. This form follows from the partial sums of the Maclaurin series for -\ln(1-\theta). An equivalent representation employs the incomplete beta function (extended to a second shape parameter of zero): F(k) = 1 + \frac{B(\theta; k+1, 0)}{\ln(1-\theta)}, where B(x; a, b) = \int_0^x t^{a-1} (1-t)^{b-1} \, dt. Here, the extension for b=0 corresponds to the integral form \int_0^\theta \frac{t^k}{1-t} \, dt for the series remainder, facilitating connections to special functions in computational libraries. Numerical evaluation of F(k) relies on direct summation of the finite series up to k, which converges quickly for small to moderate k due to the decreasing terms \frac{\theta^j}{j}. For larger k, the integral representation or recursive computation of partial sums avoids overflow and improves efficiency, as implemented in statistical software. The CDF is strictly increasing, satisfies F(0) = 0, and approaches 1 as k \to \infty, confirming it is a valid probability distribution.
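The direct-summation evaluation can be sketched as follows (the helper name `logser_cdf` is illustrative):

```python
import math

def logser_cdf(k: int, theta: float) -> float:
    """CDF via direct summation of the finite series up to k."""
    if k < 1:
        return 0.0
    partial = sum(theta**j / j for j in range(1, k + 1))
    return partial / (-math.log(1.0 - theta))

theta = 0.3
# F(1) = P(X = 1) = theta / (-ln(1 - theta)); F(k) -> 1 as k grows.
print(logser_cdf(1, theta), logser_cdf(200, theta))
```

SciPy exposes the same quantity as `scipy.stats.logser.cdf(k, theta)`.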

Statistical Properties

Moments

The moments of the logarithmic distribution provide key descriptive statistics for this discrete distribution supported on the positive integers. The expected value, or mean, is given by E[X] = -\frac{p}{(1-p)\ln(1-p)}, where 0 < p < 1 is the shape parameter. This expression arises from differentiating the probability generating function and evaluating at s=1. The variance is \operatorname{Var}(X) = \frac{ -p \left[ p + \ln(1-p) \right] }{(1-p)^2 \left[ \ln(1-p) \right]^2 }. It can be obtained from the second derivative of the generating function, or via E[X(X-1)] + E[X] - (E[X])^2. The variance exceeds the mean when p > 1 - e^{-1} \approx 0.632, reflecting overdispersion relative to the Poisson distribution for such parameters, while for smaller p the distribution is underdispersed. The mode of the distribution is at k=1 for all p \in (0,1), as the probability mass function is strictly decreasing in k. Higher moments, such as skewness and kurtosis, can be computed from the probability generating function G(s) = \frac{\ln(1 - p s)}{\ln(1 - p)} by successive differentiation. The skewness is \gamma_1 = \frac{2p^2 + 3p \ln(1-p) + (1+p) [\ln(1-p)]^2 }{\ln(1-p) \sqrt{ -p \left[ p + \ln(1-p) \right] } \left[ p + \ln(1-p) \right] }, which is positive, indicating right-skewness that increases with p. The excess kurtosis is likewise positive, so the distribution is leptokurtic relative to the normal distribution. As p \to 0^+, the mean approaches 1 and the variance approaches 0, with the distribution concentrating at X=1. As p \to 1^-, both the mean and variance diverge to infinity, and the distribution develops a heavy right tail. These limits highlight the flexibility of the logarithmic distribution in modeling sparse to abundant count data.
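The closed-form mean and variance can be checked against the raw series sums \sum_k k \, P(X=k) and \sum_k k^2 \, P(X=k) (helper names are illustrative; the series are truncated where the terms underflow):

```python
import math

def logser_mean(p: float) -> float:
    return -p / ((1 - p) * math.log(1 - p))

def logser_var(p: float) -> float:
    L = math.log(1 - p)
    return -p * (p + L) / ((1 - p)**2 * L**2)

p = 0.6
# Raw moments from the PMF: E[X] and E[X^2] by direct summation.
m1 = sum(k * -p**k / (k * math.log(1 - p)) for k in range(1, 2000))
m2 = sum(k * k * -p**k / (k * math.log(1 - p)) for k in range(1, 2000))
print(m1, logser_mean(p))            # both ~1.637
print(m2 - m1 * m1, logser_var(p))   # both ~1.413
```

Note that at p = 0.6 < 1 - e^{-1} the variance is below the mean (underdispersion), while at p = 0.7 it exceeds it, matching the threshold stated above.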

Generating Functions

The probability generating function (PGF) of a random variable X following the logarithmic distribution with parameter 0 < p < 1 is G(z) = \frac{\ln(1 - p z)}{\ln(1 - p)}, \quad |z| < \frac{1}{p}. This expression arises directly from the series expansion of the probability mass function (PMF). Specifically, G(z) = \sum_{k=1}^{\infty} P(X = k) z^k = -\frac{1}{\ln(1 - p)} \sum_{k=1}^{\infty} \frac{(p z)^k}{k}. The inner sum is the Taylor series for the natural logarithm, \sum_{k=1}^{\infty} \frac{u^k}{k} = -\ln(1 - u) for |u| < 1, so with u = p z, it simplifies to the closed form above. The moment-generating function (MGF) is derived by substituting z = e^t into the PGF, yielding M(t) = \frac{\ln(1 - p e^t)}{\ln(1 - p)}, \quad t < -\ln p. The domain restriction ensures |e^t| < 1/p. For advanced properties, such as those involving Fourier analysis or limit theorems, the characteristic function is \phi(t) = \frac{\ln(1 - p e^{i t})}{\ln(1 - p)}, defined for all real t. This follows analogously from the PGF by replacing z with e^{i t}. These generating functions facilitate the extraction of moments through differentiation; for instance, the mean equals the first derivative of the PGF at z = 1, and higher-order moments follow from further derivatives, providing a systematic approach for stochastic process analysis involving the logarithmic distribution.
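The moment-extraction claim can be verified numerically: a central-difference approximation of G'(1) should reproduce the closed-form mean -p/((1-p)\ln(1-p)) (the step size h is an illustrative choice):

```python
import math

def pgf(z: float, p: float) -> float:
    """PGF of the logarithmic distribution, valid for |z| < 1/p."""
    return math.log(1 - p * z) / math.log(1 - p)

p = 0.4
mean = -p / ((1 - p) * math.log(1 - p))   # closed-form G'(1)
h = 1e-6
numeric = (pgf(1 + h, p) - pgf(1 - h, p)) / (2 * h)  # central difference
print(numeric, mean)   # both ~1.305
```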

Parameter Estimation

Method of Moments

The method of moments estimator for the parameter p of the logarithmic distribution is obtained by equating the sample mean \bar{x} to the theoretical mean E[X] = -\frac{p}{(1-p) \ln(1-p)}. This yields the equation \bar{x} = -\frac{p}{(1-p) \ln(1-p)}, which must be solved for \hat{p}. The resulting equation is transcendental and lacks a closed-form solution, requiring numerical methods such as fixed-point iteration, the bisection method, or Brent's algorithm for resolution. These approaches typically converge quickly for p \in (0,1) and sample means greater than 1, as the theoretical mean is monotonically increasing in p. The method of moments estimator \hat{p} is consistent, converging in probability to the true p as the sample size n \to \infty, by the law of large numbers applied to the sample mean and the continuous mapping theorem, provided the moments exist. It is generally biased in finite samples but asymptotically unbiased. For example, consider a small dataset of five observations from a suspected logarithmic distribution: x = \{1, 1, 2, 1, 3\}, with sample mean \bar{x} = 1.6. Setting 1.6 = -\frac{p}{(1-p) \ln(1-p)} and solving numerically (e.g., via bisection between 0.5 and 0.6) yields \hat{p} \approx 0.584. This estimate can then be used to compute fitted probabilities for model validation.
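The bisection approach on this example can be sketched as follows (helper names are illustrative; the data are the five observations above):

```python
import math

def theoretical_mean(p: float) -> float:
    """Mean of the logarithmic distribution, monotonically increasing in p."""
    return -p / ((1 - p) * math.log(1 - p))

def mom_estimate(xbar: float, lo: float = 1e-9, hi: float = 1 - 1e-9,
                 tol: float = 1e-10) -> float:
    """Bisection on the monotone map p -> E[X] to solve E[X] = xbar."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if theoretical_mean(mid) < xbar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

data = [1, 1, 2, 1, 3]
xbar = sum(data) / len(data)   # 1.6
print(mom_estimate(xbar))      # ~0.584
```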

Maximum Likelihood Estimation

The maximum likelihood estimator (MLE) for the parameter p of the logarithmic distribution is obtained by maximizing the log-likelihood function derived from a random sample of n independent observations x_1, x_2, \dots, x_n, where each x_i \geq 1 is an integer. The log-likelihood is given by l(p) = n \ln\left(-\frac{1}{\ln(1-p)}\right) + \sum_{i=1}^n \left( x_i \ln p - \ln x_i \right) = C + \left( \sum_{i=1}^n x_i \right) \ln p - n \ln \left( -\ln(1-p) \right), where C = -\sum_{i=1}^n \ln x_i is a constant independent of p, and 0 < p < 1. To find the MLE \hat{p}, set the score function (the first derivative of l(p)) to zero: \frac{\partial l(p)}{\partial p} = \frac{\sum_{i=1}^n x_i}{p} + \frac{n}{(1-p) \ln(1-p)} = 0, which simplifies to \bar{x} = -\frac{p}{(1-p) \ln(1-p)}, where \bar{x} = n^{-1} \sum_{i=1}^n x_i is the sample mean. This equation equates the sample mean to the population mean of the distribution and lacks a solution in terms of elementary functions, necessitating numerical methods such as Newton-Raphson iteration or fixed-point approaches. Recent work has provided an exact closed-form expression for \hat{p} using the principal branch of the Lambert W function, resolving a problem unresolved since the distribution's introduction. Under standard regularity conditions, the MLE \hat{p} is consistent (converging in probability to the true p as n \to \infty) and asymptotically normal, with \sqrt{n} (\hat{p} - p) \xrightarrow{d} \mathcal{N}\left(0, \frac{1}{I(p)}\right), where I(p) is the Fisher information for a single observation, given by I(p) = \frac{-\left[ p + \ln(1-p) \right]}{p (1-p)^2 [\ln(1-p)]^2}; equivalently I(p) = \operatorname{Var}(X)/p^2, since the score for a single observation equals (x - E[X])/p. The asymptotic variance can be estimated by substituting \hat{p} into 1/I(p).
Computing the MLE presents challenges due to the distribution's infinite support starting at 1, which can lead to boundary issues or slow convergence in iterative algorithms for small samples (n < 20), particularly when observations are concentrated at low values (e.g., many x_i = 1), amplifying sensitivity to initial guesses and requiring careful numerical implementation to avoid divergence.
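Because the score equation coincides with the moment equation, the MLE can also be found by directly maximizing the log-likelihood. A minimal sketch using a golden-section search over (0, 1) (the helper names and the toy data are illustrative, not from any particular reference implementation):

```python
import math

def neg_loglik(p: float, data) -> float:
    """Negative log-likelihood; the -sum(log x_i) constant is dropped,
    as it does not affect the location of the maximizer."""
    n = len(data)
    return -(sum(data) * math.log(p) - n * math.log(-math.log(1 - p)))

def mle(data, tol: float = 1e-12) -> float:
    """Golden-section search for the minimizer of the negative log-likelihood."""
    phi = (math.sqrt(5) - 1) / 2
    lo, hi = 1e-9, 1 - 1e-9
    while hi - lo > tol:
        a = hi - phi * (hi - lo)
        b = lo + phi * (hi - lo)
        if neg_loglik(a, data) < neg_loglik(b, data):
            hi = b
        else:
            lo = a
    return 0.5 * (lo + hi)

data = [1, 1, 2, 1, 3]
print(mle(data))   # matches the moment-equation root, ~0.584
```

For this distribution the MLE and the method-of-moments estimate coincide exactly, so the two numerical routes serve as mutual checks.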

Applications

Ecology and Species Abundance

The logarithmic distribution, particularly in its series form, was introduced by Ronald A. Fisher and colleagues in 1943 as a model for relative species abundances within ecological samples, capturing the typical pattern where most species are rare and few are common. This approach posits that the number of species represented by k individuals follows a logarithmic series, providing a statistical framework to relate total species richness to sample size and individual counts. A central interpretation of the model is that the probability of randomly capturing an individual from a species with exactly k individuals in the community is proportional to k \cdot P(X = k), where P(X = k) is the probability mass function of the underlying distribution; this weighting by abundance leads to the logarithmic series form, which naturally emphasizes the dominance of singleton and low-abundance species in biodiversity samples. In practice, this has been applied to biodiversity surveys, such as Fisher, Corbet, and Williams's samples of 3,306 Malayan butterflies across 620 species and 15,609 Rothamsted-trapped moths across 240 species, where the model closely fitted the frequency of rare taxa. Similar uses extend to insect collections and plant quadrat counts, illustrating skewed abundance patterns in natural communities. To assess model adequacy in ecological data, maximum likelihood estimation is routinely used to fit the diversity parameter \alpha, often followed by chi-square goodness-of-fit tests to evaluate deviations between observed and expected species frequencies. Despite its historical success, the logarithmic distribution shows limitations in over-dispersed datasets, where species abundances exhibit greater variance than predicted, such as in global eukaryotic communities, necessitating alternatives like the Poisson log-normal for improved fits.

Queueing Theory and Other Fields

In queueing theory, the logarithmic series distribution models the steady-state number of customers in certain state-dependent single-server queues with Poisson arrivals, where service rates vary such that the equilibrium distribution takes a logarithmic form. It also arises as the distribution of batch sizes in infinite-server queueing models, particularly in Erlang loss systems with compound Poisson demand, where the number of customers per batch follows the logarithmic series to capture overdispersed arrival patterns. The distribution connects to branching processes, notably in subcritical Markov branching models, where the conditional distribution of total progeny given non-extinction or specific absorption times limits to the logarithmic series distribution. This linkage extends to random allocation problems, such as balls-and-bins scenarios, where the logarithmic series describes the occupancy counts in highly loaded systems. In linguistics, the logarithmic series distribution has been applied to model word frequency distributions in natural language texts, providing a discrete heavy-tailed alternative to Zipf's law for capturing the skew in vocabulary usage across corpora. In insurance, it serves as a model for claim counts per policyholder, especially in compound Poisson risk processes with overdispersion, as seen in dividend optimization frameworks. To generate random variates from the logarithmic series distribution, common methods include the inverse cumulative distribution function approach, which inverts the CDF numerically, and techniques leveraging the probability generating function for recursive sampling.
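The inverse-CDF method mentioned above can be sketched as a sequential search that walks the PMF using the recurrence P(k+1) = P(k) \cdot \frac{p k}{k+1} (the helper name `logser_sample` and the parameter choices are illustrative):

```python
import math
import random

def logser_sample(p: float, rng: random.Random) -> int:
    """Inverse-CDF sampling: walk the PMF until the cumulative mass exceeds u."""
    u = rng.random()
    k = 1
    pmf = -p / math.log(1 - p)        # P(X = 1)
    cdf = pmf
    while u > cdf:
        pmf *= p * k / (k + 1)        # P(k+1) = P(k) * p*k/(k+1)
        k += 1
        cdf += pmf
    return k

rng = random.Random(42)
sample = [logser_sample(0.5, rng) for _ in range(20000)]
mean = sum(sample) / len(sample)
print(mean)   # theoretical mean is 0.5/(0.5*ln 2) ~ 1.4427
```

For production use, `scipy.stats.logser.rvs(p, size=n)` provides equivalent sampling.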

Relations to Other Distributions

Negative Binomial Distribution

The negative binomial distribution arises as a compound distribution in scenarios where counts are aggregated across a random number of clusters or groups. In the classical construction, traced to Quenouille (1949), a Poisson-distributed number of clusters each contribute a logarithmically distributed count, and the total follows a negative binomial distribution. This compounding provides a mechanistic explanation for clustered or heterogeneous count data, commonly observed in fields such as ecology and entomology. Specifically, let N \sim \mathrm{Poisson}(\lambda) denote the number of clusters, and given N = n let X = \sum_{i=1}^{n} Y_i, where the Y_i are independent logarithmic random variables with parameter p (0 < p < 1), so that P(Y_i = k) = -\frac{p^k}{k \ln(1 - p)} for k = 1, 2, \dots. The unconditional distribution of X is then negative binomial with shape parameter r = -\lambda / \ln(1 - p). The derivation relies on probability generating functions (PGFs). The PGF of the logarithmic distribution is G_Y(s) = \frac{\ln(1 - p s)}{\ln(1 - p)}, and the PGF of a compound Poisson sum is G_X(s) = \exp\left( \lambda \left[ G_Y(s) - 1 \right] \right). Writing \lambda = -r \ln(1 - p) gives G_X(s) = \exp\left( -r \left[ \ln(1 - p s) - \ln(1 - p) \right] \right) = \left( \frac{1 - p}{1 - p s} \right)^r, which is the PGF of the negative binomial distribution with shape r. Its mean is \frac{r p}{1 - p} = \lambda \cdot \frac{-p}{(1-p)\ln(1-p)}, and its variance exceeds the mean, capturing overdispersion. Conversely, the logarithmic distribution is recovered from the zero-truncated negative binomial in the limit r \to 0, the route by which Fisher originally obtained the logarithmic series.
This compound interpretation has significant implications for modeling overdispersion in count data, where the variance exceeds the mean due to unobserved heterogeneity in the number of clusters. Unlike the Poisson distribution, which forces the variance to equal the mean, the Poisson-logarithmic compounding naturally accounts for rare clusters contributing disproportionately, making it suitable for applications such as species counts or defect clustering. The negative binomial's flexibility in this framework allows for robust inference on underlying clustering mechanisms without assuming a fixed number of clusters.
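The classical Quenouille relation, that a Poisson-distributed number of independent logarithmic counts sums to a negative binomial variate, can be checked by Monte Carlo simulation (helper names, the seed, and the parameter choices p = 0.6, \lambda = 2 are illustrative):

```python
import math
import random

def logser_sample(p: float, rng: random.Random) -> int:
    """Inverse-CDF sampling of a Logarithmic(p) variate."""
    u, k = rng.random(), 1
    pmf = -p / math.log(1 - p)
    cdf = pmf
    while u > cdf:
        pmf *= p * k / (k + 1)
        k += 1
        cdf += pmf
    return k

def poisson_sample(lam: float, rng: random.Random) -> int:
    """Knuth's multiplication method, adequate for small lam."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod < limit:
            return k
        k += 1

p, lam = 0.6, 2.0
r = -lam / math.log(1 - p)        # implied negative binomial shape
rng = random.Random(7)
xs = [sum(logser_sample(p, rng) for _ in range(poisson_sample(lam, rng)))
      for _ in range(40000)]
mean = sum(xs) / len(xs)
print(mean, r * p / (1 - p))      # both ~3.27, the negative binomial mean
```

A fuller check would compare the whole empirical histogram to `scipy.stats.nbinom` probabilities, but matching the mean already exercises the compound construction end to end.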

Poisson and Geometric Limits

The logarithmic series distribution approaches a degenerate distribution concentrated at X = 1 in the limit as the shape parameter p \to 0^+, since the mean tends to 1. In this regime, the probability mass function takes the approximate form P(X = k) \approx \frac{p^{k-1}}{k} for small p, reflecting the rapid decay and concentration near the lower support. This follows from expanding the normalizing constant, -\frac{1}{\ln(1-p)} \approx \frac{1}{p} for small p, and it aligns with the geometric form when the 1/k factor is secondary for the initial probabilities. For larger k relative to the mean, the tail of the logarithmic series distribution further approximates a geometric distribution, as the 1/k factor varies slowly compared to the exponential decay of p^k. This tail approximation arises from the ratio P(X = k+1)/P(X = k) = \frac{p k}{k+1} \approx p for k \gg 1 with p fixed away from 1, mimicking the constant successive-probability ratio of the geometric distribution. These derivations rely on series expansions of the probability generating function G(z) = \frac{\ln(1 - p z)}{\ln(1 - p)}, truncating higher-order terms for small p or large k. The relation to the Poisson distribution is indirect, arising in rare-event or diluted limits through the negative binomial distribution as an intermediate form. The logarithmic series emerges as the limit of the zero-truncated negative binomial with dispersion parameter r \to 0 and success probability q fixed, while the negative binomial itself converges to the Poisson as r \to \infty with the mean held fixed. A combined double limit (first r \to 0 for the logarithmic, then the Poisson scaling) captures rare-event behavior where events are infrequent but clustered, as in compound Poisson processes with logarithmic batch sizes. These connections are established via probability generating function manipulations, applying l'Hôpital's rule in the r \to 0 step.
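The geometric-like tail behavior is easy to verify directly: successive PMF ratios equal p k/(k+1) exactly and approach p as k grows (the helper name and the choice p = 0.9 are illustrative):

```python
import math

def pmf(k: int, p: float) -> float:
    return -p**k / (k * math.log(1 - p))

p = 0.9
# Successive ratios P(k+1)/P(k) = p*k/(k+1) tend to p, the geometric ratio.
for k in (1, 5, 50, 500):
    print(k, pmf(k + 1, p) / pmf(k, p))
```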
In scaling limits for large means (as p \to 1^-), the tail behavior of the logarithmic series resembles a power law but remains distinct because of the geometric factor p^k. The pmf P(X = k) = -\frac{p^k}{k \ln(1-p)} implies a tail probability P(X \geq k) \approx \frac{1}{|\ln(1-p)|} \int_k^\infty \frac{p^t}{t} \, dt, which for intermediate k (much smaller than the mean \mu \approx p / ((1-p) |\ln(1-p)|)) follows a near power law with exponent -1, since P(X = k+1)/P(X = k) = \frac{p k}{k+1} \approx 1 when p \approx 1. For k \gg \mu, however, the p^k factor enforces an exponential cutoff. This intermediate power-law-like regime can be derived using an integral approximation for the tail sum or the series expansion -\ln(1-p) = \sum_{j=1}^\infty p^j / j, isolating the dominant contributions. Stirling's approximation, k! \approx \sqrt{2\pi k} (k/e)^k, aids in analyzing factorial moments and normalizing large-k ratios, but the analysis highlights the distinction from pure power laws, which lack the exponential cutoff. The negative binomial serves briefly as an intermediate form in these scalings, bridging to heavier-tailed limits.

History and Development

Fisher's Original Work

The logarithmic distribution was first introduced by Ronald A. Fisher in 1943 as part of an effort to model species abundance patterns in ecological samples. In collaboration with A. Steven Corbet and C. B. Williams, Fisher published the seminal paper "The Relation Between the Number of Species and the Number of Individuals in a Random Sample of an Animal Population" in the Journal of Animal Ecology. This work addressed the challenge of describing how the number of species relates to the total number of individuals captured in random samples, particularly for insects where many species are rare and few are common. The motivation stemmed from empirical data on butterflies (Rhopalocera) collected by Corbet in Malaya. This collection provided frequency distributions of species abundances that defied simple models, showing a long tail of rare species. Fisher sought to fit these observed frequencies by assuming that species abundances in the larger community follow a logarithmic series, with individuals sampled randomly from this series. This approach yielded a distribution for the number of individuals per species that aligned closely with the data, capturing the skewed nature of abundances without requiring assumptions of equal rarity across species. Key empirical findings demonstrated the model's robustness across taxa and datasets. For the Malayan butterfly sample, comprising 3,306 individuals from 620 species, the logarithmic series provided an excellent fit, with the diversity parameter α estimated at about 135. Similarly, for moths (Macrolepidoptera) collected at Rothamsted, involving 15,609 individuals and 240 species, the fit was strong, yielding α ≈ 40.2. These results highlighted the distribution's utility for diverse insect groups, with α serving as a reliable index of diversity that varied predictably with sample size, season, and locality.

Modern Extensions and Usage

Theoretical extensions of the logarithmic distribution have focused on generalizations that enhance its flexibility for complex data structures. One notable development is the logarithmic-G family of distributions, introduced to provide greater adaptability in modeling lifetime data by incorporating logarithmic transformations within broader generator families, allowing for better tail behavior in skewed count scenarios. Similarly, the generalized logarithmic-X family extends the distribution by integrating T-X methodologies, enabling the creation of flexible lifetime models suitable for biomedical applications where traditional forms may underperform. Multivariate versions, such as the multivariate logarithmic series distribution, allow for joint modeling of multiple count variables, with properties like marginal and conditional distributions facilitating applications in correlated ecological counts. Zero-truncated forms, inherent to the logarithmic distribution's support starting from 1, have been further generalized in compound models, such as zero-truncated variants within Poisson-lognormal hierarchies, to handle excess zeros in multivariate settings more robustly. Computational advances have made the logarithmic distribution more accessible for practical analysis. In R, the VGAM package provides functions for the density, distribution function, quantile function, and random generation of the logarithmic distribution, supporting vector generalized linear models for efficient fitting and simulation. In Python, the SciPy library's stats.logser implements the logarithmic series as a discrete distribution, offering methods for probability mass functions, cumulative distribution functions, and sampling, which integrate seamlessly with broader statistical workflows. Critiques of the logarithmic distribution often highlight its limitations in handling severe overdispersion, where the variance exceeds what the model can accommodate, leading researchers to prefer alternatives such as the negative binomial distribution for more flexible variance-mean relationships in count data.
For instance, when empirical data exhibit quadratic variance growth relative to the mean, shifting to negative binomial models mitigates the underfitting observed with the logarithmic form. Recent work on conflating the two distributions addresses these gaps by combining their strengths, producing hybrid models that better capture both light and heavy tails in over-dispersed scenarios. Recent applications of the logarithmic distribution have expanded into metagenomics, particularly for modeling microbial communities whose abundances follow long-tailed patterns. In post-2000 studies, it underpins indices such as Fisher's alpha, which assumes a logarithmic series for species abundance, aiding in the analysis of time series and environmental samples to quantify community structure. For example, metagenomic pipelines now routinely apply logarithmic-based metrics to assess diversity in soil and water microbiomes, revealing shifts in microbial composition under varying ecological conditions. Despite these advances, gaps persist in the literature, particularly regarding comprehensive Bayesian methods for the logarithmic distribution, with most work limited to E-Bayesian approaches that constrain prior hierarchies and lack full posterior exploration for complex models; a 2016 study, for example, focused on E-Bayesian estimation of the parameter. Further development of Bayesian frameworks is needed to incorporate uncertainty in parameter estimation for high-dimensional metagenomic data.

  31. [31]