
Elias gamma coding

Elias gamma coding is a universal code designed for the compression of sequences of positive integers, particularly those with unknown upper bounds, enabling efficient variable-length encoding without prior parameters. Developed by Peter Elias in 1975, it represents each positive integer x \geq 1 by first determining the bit length l = \lfloor \log_2 x \rfloor + 1, then concatenating l-1 unary zeros with the l-bit binary representation of x. This results in a codeword length of 2\lfloor \log_2 x \rfloor + 1 bits, which is asymptotically optimal within a factor of 2 for high-entropy sources and ensures instantaneous decodability due to its prefix-free property. The code's universality stems from its ability to perform well on average for any distribution of integers, making it a cornerstone of parameter-free coding schemes in information theory. It excels in applications requiring compact storage of sparse data, such as compressing gap sequences in inverted indexes for information retrieval systems, where it can reduce space usage significantly: for example, the inverted index for the Reuters-RCV1 collection (960 MB) can be compressed to 101 MB when combined with other techniques. Elias gamma coding is also widely implemented in software libraries for general-purpose data compression and has influenced hardware accelerators for real-time encoding. Among Elias's family of codes, gamma coding is the simplest, with variants such as the delta and omega codes offering refinements for better efficiency on larger integers by optimizing the unary prefix or using recursive structures. Its prefix-free nature allows concatenation of codewords without delimiters, facilitating streaming applications, though for uniformly distributed small integer ranges it may underperform fixed-length alternatives such as plain binary coding. Overall, Elias gamma coding remains a fundamental tool in data compression due to its balance of simplicity, universality, and effectiveness.

Fundamentals

Definition and Motivation

Elias gamma coding is a prefix code designed to encode positive integers in a variable-length manner, enabling efficient representation without prior knowledge of the underlying probability distribution. Prefix codes ensure that no codeword is a prefix of another, allowing instantaneous decoding without lookahead or delimiters. Fixed-length codes, while simple, are inefficient for sources with skewed probability distributions, such as those where small integers are far more likely than large ones, as they allocate the same number of bits to all symbols regardless of frequency. Universal codes address this by providing performance that approaches the entropy bound for any ergodic source, adapting implicitly to the data's statistical properties. In Elias gamma coding, a positive integer n is encoded using a unary prefix of \lfloor \log_2 n \rfloor zeros followed by a 1, followed by \lfloor \log_2 n \rfloor bits representing the value n - 2^{\lfloor \log_2 n \rfloor}, which captures the offset from the nearest lower power of 2. This construction results in a total code length of 2 \lfloor \log_2 n \rfloor + 1 bits, making it particularly suitable for integers drawn from geometric or unknown distributions where the probability decreases exponentially. The motivation behind Elias gamma coding stems from the need for simple, adaptive encoding schemes in data compression, especially for sequences of positive integers where probability models are unavailable or change dynamically. It achieves near-optimal average code lengths by bounding the redundancy relative to the source entropy, without requiring the encoder or decoder to estimate parameters like those in Golomb coding. In his 1975 paper, Peter Elias introduced this code as part of a broader framework for universal representations of integers, aiming to create straightforward prefix-free sets that perform well across diverse countable sources in data compression applications.
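To make the construction above concrete, the unary prefix and binary offset can be computed separately for a single value; a minimal Python sketch (the variable names are illustrative, not from the original definition):

```python
n = 9
k = n.bit_length() - 1            # floor(log2 n) = 3
prefix = "0" * k + "1"            # unary part: k zeros, then a 1
offset_bits = format(n, "b")[1:]  # k bits encoding n - 2^k, here "001"
print(prefix + offset_bits)       # 0001001, i.e. 2*3 + 1 = 7 bits
```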

Historical Development

Elias gamma coding was developed by Peter Elias, a pioneering figure in information theory at the Massachusetts Institute of Technology (MIT), as part of his research on universal coding schemes in 1975. Elias, who joined MIT's faculty in 1953 and made seminal contributions to coding theory, including the introduction of convolutional codes in 1955 and early ideas later developed into arithmetic coding in the early 1960s, sought to create prefix-free codes capable of efficiently representing positive integers without prior knowledge of their distribution. In his influential paper "Universal codeword sets and representations of the integers," published in the IEEE Transactions on Information Theory, Elias introduced gamma coding alongside the delta and omega codes, demonstrating their universal properties for integer encoding in scenarios where source statistics are unknown or varying. The emergence of Elias gamma coding occurred amid the rapid expansion of data compression research in the 1970s, a period marked by increasing interest in practical implementations of Claude Shannon's information theory foundations, particularly the entropy bounds for source coding. This era saw growing needs for efficient encoding in emerging digital systems, such as early computer storage and transmission technologies, where variable-length codes were essential to approach theoretical limits without adaptive feedback. Elias's work built on these ideas, extending source coding principles to universal integer representations that balanced simplicity and optimality for geometric distributions common in natural data sources. Following its introduction, Elias gamma coding found initial application in theoretical models of universal source coding but gradually transitioned to practical use in specialized domains. By the 1980s and 1990s, it influenced extensions like exponential-Golomb codes for video coding standards, and post-2000 literature has frequently cited it in high-impact areas such as inverted index compression for search engines and genomic sequence compression, where its efficiency for sparse, positive integer streams remains relevant.
No significant modifications to the core gamma code have occurred since 1975, though it continues to be integrated into modern compression algorithms for its universality and ease of implementation.

Encoding Process

Unary Code Foundation

Unary coding serves as a fundamental building block in Elias gamma coding, providing a simple mechanism to encode the length of the subsequent binary representation. In unary code, a positive integer k is represented by a sequence of k-1 zeros followed by a single 1. For instance, the code for k=1 is "1", for k=2 it is "01", and for k=3 it is "001". This convention ensures a uniquely decodable representation where the position of the terminating 1 directly indicates the value k. Within Elias gamma coding, the unary code forms the prefix that specifies the bit length l = \lfloor \log_2 n \rfloor + 1 of the binary part for a positive integer n. This prefix consists of l-1 zeros followed by a 1, which delimits the length field without requiring additional separators. For example, to encode a length l=3 (corresponding to values of n from 4 to 7), the unary prefix is "001", signaling that the following binary segment uses 3 bits. The choice of unary coding for this role stems from Peter Elias's design in his seminal work on universal codes, where it enables efficient, prefix-free encoding of arbitrary positive integers. The rationale for employing unary over a binary representation for the length lies in its inherent self-delimiting property, which prevents ambiguity in concatenated variable-length codes. A binary encoding of the length would necessitate a fixed width or additional delimiter, complicating the scheme and increasing overhead for small values; unary, by contrast, integrates seamlessly as a run of zeros terminated by the leading 1 of the offset, promoting simplicity and instantaneous decodability. This approach ties directly into Elias's goal of constructing universal codeword sets that perform well without prior knowledge of the integer distribution.
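The convention above is compact enough to state directly in code; a one-line Python sketch:

```python
def unary(k: int) -> str:
    """Unary code for k >= 1: k-1 zeros followed by a terminating 1."""
    return "0" * (k - 1) + "1"

print(unary(1), unary(2), unary(3))   # 1 01 001
```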

Gamma Code Construction

To construct an Elias gamma codeword for a positive integer n \geq 1, first compute the length l = \lfloor \log_2 n \rfloor + 1, which represents the number of bits in the binary representation of n. This length l is then encoded using a unary code consisting of l-1 zeros followed by a single 1. The unary code is immediately followed by the (l-1)-bit binary representation of the offset n - 2^{l-1}, which provides the bits after the implicit leading 1 in the binary form of n. Formally, the gamma code is denoted as \gamma(n) = \text{unary}(l) + \text{binary}(n - 2^{l-1}, l-1 \text{ bits}), where \text{unary}(l) is the unary representation of l and the binary part uses exactly l-1 bits, with leading zeros if necessary. The terminating 1 of the unary code serves as the leading 1 of n's binary representation. This procedure ensures the code is prefix-free and self-delimiting, as the unary prefix uniquely indicates the length of the subsequent binary part. For example, consider n = 5. The binary representation of 5 is 101, so l = 3 (since \lfloor \log_2 5 \rfloor = 2). The unary code for l = 3 is 001. The offset is 5 - 2^2 = 1, which in 2-bit binary is 01. Thus, \gamma(5) = 00101. An equivalent way to construct the codeword is to take the binary representation of n, which has length l, and prepend l-1 zeros to it. This directly yields \gamma(5) = 00101. Edge cases arise for small values and powers of 2. For n=1, l=1, the code is 1, and there is no offset, resulting in \gamma(1) = 1. For a power of 2 like n=4 (binary 100, l=3), the unary prefix is 001, the offset is 00 (since 4 - 4 = 0), yielding \gamma(4) = 00100. In implementation, the following pseudocode provides a straightforward way to generate the codeword as a bit string:
function elias_gamma_encode(n):
    if n < 1:
        error  // the gamma code is defined only for n >= 1
    binary_n = binary_representation(n)  // minimal binary string, starts with '1'
    l = length(binary_n)
    unary_prefix = "0" * (l - 1) + "1"
    return unary_prefix + binary_n[1:]   // append binary_n without its leading '1'
This assumes a binary_representation function that returns the minimal binary string starting with 1. Common pitfalls include incorrect computation of \lfloor \log_2 n \rfloor for n=1 (where it is 0, so l=1) and ensuring the offset uses exactly l-1 bits without the leading 1, which can lead to off-by-one errors in padding. Additionally, floating-point precision issues in log2 calculations for large n may require integer-based alternatives, such as Python's n.bit_length().
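The same procedure can be made concrete in runnable Python, using bit_length() instead of floating-point log2 (a sketch, not a reference implementation):

```python
def elias_gamma_encode(n: int) -> str:
    """Return the Elias gamma codeword of n >= 1 as a string of '0'/'1'."""
    if n < 1:
        raise ValueError("Elias gamma coding is defined only for n >= 1")
    l = n.bit_length()                     # l = floor(log2 n) + 1, exact for any size
    return "0" * (l - 1) + format(n, "b")  # l-1 zeros, then binary of n (leads with 1)

print(elias_gamma_encode(1))   # 1
print(elias_gamma_encode(4))   # 00100
print(elias_gamma_encode(5))   # 00101
```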

Decoding Process

Reading the Unary Prefix

In the decoding process of Elias gamma coding, the initial step involves parsing the unary prefix from the concatenated stream of codewords. This prefix, consisting of a sequence of zeros followed by a single one, determines the length l and hence the number of offset bits that follow. To read it, the decoder sequentially examines bits starting from the current position in the bitstream until the first 1 is encountered. The number of preceding zeros, denoted as k, determines l = k + 1, which specifies the bit length of the binary representation of the integer being decoded. This procedure consumes exactly l bits from the bitstream for the prefix: the k zeros plus the terminating 1. Since Elias gamma codes are prefix-free, the parser can unambiguously identify the end of one codeword's prefix without interference from subsequent codewords, allowing reliable decoding in a stream of multiple encoded integers. The prefix-free property ensures that no valid codeword begins with another complete codeword, preventing misinterpretation during sequential reading. For example, consider the codeword "00101", which represents \gamma(5). The decoder reads the first two bits as zeros, followed by a 1 on the third bit, yielding k = 2 and thus l = 3. At this point, the prefix has been fully consumed, advancing the bitstream position by three bits, and the decoder proceeds to interpret the next l - 1 = 2 bits as the binary offset. In practice, the bitstream assumes no leading zeros before the first codeword, and errors such as an unending sequence of zeros would render decoding invalid, as this violates the finite length guaranteed by the unary construction and the prefix-free nature of the code. Such an invalid sequence cannot correspond to any finite positive integer, ensuring the code's robustness for well-formed inputs.
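A minimal sketch of this step in Python, assuming the bitstream is a string of '0'/'1' characters and pos is the current read position (both hypothetical representations chosen for clarity):

```python
def read_unary_prefix(bits: str, pos: int):
    """Count zeros up to the first '1'; return (l, new_pos)."""
    k = 0
    while bits[pos] == "0":   # raises IndexError on a malformed all-zero tail
        k += 1
        pos += 1
    return k + 1, pos + 1     # l = k + 1; also consume the terminating '1'

# For "00101" (gamma(5)): two zeros then a 1, so l = 3, position advances by 3.
print(read_unary_prefix("00101", 0))   # (3, 3)
```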

Extracting the Binary Offset

Once the length l of the binary representation has been determined from the unary prefix, the offset is extracted by reading the subsequent l - 1 bits from the bitstream and interpreting them as an unsigned integer m. The original integer n is then reconstructed using the formula n = 2^{l-1} + m. This step leverages the fact that the prefix encodes l = \lfloor \log_2 n \rfloor + 1, ensuring the offset captures the lower bits of n after accounting for the implied leading 1 in its binary form. The process assumes a big-endian bit order, where bits are read from most significant to least significant. In practice, 2^{l-1} is computed efficiently via left bit shifting (1 shifted left by l-1 positions), and m is accumulated by shifting and ORing each subsequent bit. This reconstruction guarantees unique decodability since Elias gamma codes are prefix-free. For example, consider the codeword "00101" for n = 5, where the unary prefix "001" yields l = 3. The next l - 1 = 2 bits "01" are read and interpreted as m = 1. Thus, n = 2^{3-1} + 1 = 4 + 1 = 5. Edge cases arise for small values and powers of two. For n = 1, the codeword is "1", with l = 1 and no offset bits, so m = 0 and n = 2^{1-1} + 0 = 1. For powers of two like n = 4 (codeword "00100"), l = 3, the offset "00" gives m = 0, yielding n = 2^{3-1} + 0 = 4. In such cases, the offset is all zeros, reflecting the exact power-of-two structure.
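The two decoding steps combine into a complete decoder; a minimal Python sketch (the string-based bitstream is an illustrative choice, not part of the original definition):

```python
def elias_gamma_decode(bits: str):
    """Decode a concatenated stream of Elias gamma codewords into integers."""
    values, pos = [], 0
    while pos < len(bits):
        k = 0
        while bits[pos] == "0":                   # read the unary prefix
            k += 1
            pos += 1
        pos += 1                                  # consume the terminating '1'
        l = k + 1
        m = int(bits[pos:pos + l - 1] or "0", 2)  # l-1 offset bits (none when l = 1)
        pos += l - 1
        values.append((1 << (l - 1)) + m)         # n = 2^(l-1) + m
    return values

print(elias_gamma_decode("00101"))   # [5]
print(elias_gamma_decode("100100"))  # [1, 4]
```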

Properties and Performance

Code Length Analysis

The length of an Elias γ codeword for a positive integer n \geq 1 is given by |\gamma(n)| = 2 \lfloor \log_2 n \rfloor + 1 bits. This formula arises from the structure of the code, where the unary prefix consists of \lfloor \log_2 n \rfloor zeros followed by a one, contributing \lfloor \log_2 n \rfloor + 1 bits, and the subsequent binary part encodes the offset n - 2^{\lfloor \log_2 n \rfloor} using exactly \lfloor \log_2 n \rfloor bits. The total length is thus the sum: (\lfloor \log_2 n \rfloor + 1) + \lfloor \log_2 n \rfloor = 2 \lfloor \log_2 n \rfloor + 1. The following table illustrates the codewords and their lengths for n = 1 to 16:
n     γ(n)        Length (bits)
1     1           1
2     010         3
3     011         3
4     00100       5
5     00101       5
6     00110       5
7     00111       5
8     0001000     7
9     0001001     7
10    0001010     7
11    0001011     7
12    0001100     7
13    0001101     7
14    0001110     7
15    0001111     7
16    000010000   9
These lengths reflect the prefix-free nature of the code, ensuring instantaneous decodability, though they exceed the Shannon bound for specific distributions. For a geometric distribution p(n) \propto r^n with 0 < r < 1, the expected code length is approximately 2 / \log_2 (1/r) plus a constant, where the constant accounts for the +1 bit overhead and boundary effects on \lfloor \log_2 n \rfloor. This approximation highlights the code's efficiency for exponentially decaying probabilities, approaching twice the average logarithm plus a small additive term. In practice, for distributions like the Yule-Simon distribution (a power-law variant akin to geometric tails, with parameter \rho = 1), the expected length is exactly 3 bits, closely matching the entropy of approximately 2.95 bits.
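The tabulated lengths follow mechanically from the formula; a small Python check (helper names are illustrative):

```python
def gamma_codeword(n: int) -> str:
    l = n.bit_length()                     # l = floor(log2 n) + 1
    return "0" * (l - 1) + format(n, "b")  # prepend l-1 zeros to the binary form

def gamma_length(n: int) -> int:
    return 2 * (n.bit_length() - 1) + 1    # 2*floor(log2 n) + 1

for n in range(1, 17):                     # reproduces the table above
    assert len(gamma_codeword(n)) == gamma_length(n)
print(gamma_codeword(16), gamma_length(16))  # 000010000 9
```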

Asymptotic Efficiency

The asymptotic length of the Elias gamma codeword for a positive integer n satisfies \lim_{n \to \infty} \frac{|\gamma(n)|}{\log_2 n} = 2, indicating that the code requires approximately twice as many bits as the information-theoretic minimum of \log_2 n bits for representing large n. This factor of 2 arises because the code prepends a unary representation of the binary length \lfloor \log_2 n \rfloor + 1, roughly doubling the bit requirement relative to the offset alone in the limit. Consequently, for large values, the code exhibits roughly twice the optimal length, making it inefficient relative to optimal fixed-length coding but suitable for scenarios without prior knowledge of the maximum value. As a universal code, Elias gamma coding achieves optimality in Elias's minimax criterion for countable prefix codeword sets, ensuring bounded performance across distributions without adaptation, though its pointwise redundancy over the Shannon limit grows as \log_2 n bits for individual large integers. More precisely, the redundancy approaches \log_2 n + 1 bits beyond \log_2 n, highlighting that gamma is not asymptotically optimal, in contrast with advanced variants like the Elias delta code, which reduce this to approximately 2 \log_2 (\log_2 n) + 1. For i.i.d. sources following a geometric distribution, the average redundancy per symbol is bounded by 2 bits, leveraging the code's efficacy for skewed, heavy-tailed probabilities. A proof sketch for these properties relies on the Kraft inequality, which confirms the prefix-free nature of the code since the sum of 2^{-|\gamma(n)|} over all n equals 1, allowing instantaneous decoding. Elias's theorem on universal codeword sets further establishes that such codes minimize the worst-case expected length in a minimax sense over all positive integer distributions with finite mean, with the gamma code providing a simple instantiation despite its linear-logarithmic overhead.
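Both the limiting ratio of 2 and the Kraft sum can be checked numerically; a minimal Python sketch:

```python
from math import log2

def gamma_len(n: int) -> int:
    return 2 * (n.bit_length() - 1) + 1          # |gamma(n)| = 2*floor(log2 n) + 1

# The ratio |gamma(n)| / log2(n) approaches 2 as n grows.
for n in (10, 10**3, 10**6, 10**9):
    print(n, round(gamma_len(n) / log2(n), 3))

# Kraft sum over n < 2^20 equals 1 - 2^-20, approaching 1 over all integers:
# each length class l contributes 2^(l-1) codewords of length 2l-1.
kraft = sum(2.0 ** -gamma_len(n) for n in range(1, 2**20))
print(round(kraft, 6))  # 0.999999
```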
Limitations include suboptimality for uniform distributions over a known finite range, where fixed-length coding achieves exactly \log_2 N bits on average versus gamma's approximate 2 \log_2 N, resulting in 100% relative redundancy. Conversely, it excels for sparse or large-integer distributions, such as those in inverted indexes or run-length encoding, where small values dominate and the variable-length prefix avoids wasteful padding. A 2022 study proposes a high-capacity reversible data-hiding scheme for medical images using modified gamma encoding combined with run-length coding, achieving an embedding rate of 0.75 bits per pixel.

Applications and Comparisons

Use in Data Compression

Elias gamma coding is primarily employed in lossless data compression to encode run-lengths, pointers, and sparse data structures, especially for sequences of positive integers where smaller values occur more frequently than larger ones. In the Burrows-Wheeler transform (BWT), it efficiently represents the output of the transform, achieving compression close to the entropy limit for text data with power-law distributions, averaging 2.90 bits per byte on standard benchmark corpora. This makes it suitable for encoding symbol positions or longest common prefix (LCP) values in inverted indexes and compressed suffix arrays, reducing space overhead without requiring prior knowledge of value ranges. The code is integrated into succinct data structures, notably wavelet trees, where it compresses binary strings at internal nodes using run-length or gap encoding schemes. In run-length encoded wavelet trees, Elias gamma provides a prefix length of at most 2 \log i + 1 bits for integer i, enabling overall compression bounded by 2 |s| H_0(s) + 2 \log |s| + O(|\Sigma(s)|) bits for a string s over alphabet \Sigma(s), with H_0(s) denoting the zero-order empirical entropy. This approach supports practical implementations for large-scale text indexing and genomic data storage, balancing space efficiency with fast query times in external memory settings. In practice, Elias gamma coding offers advantages through its simplicity of implementation and lack of a training phase, making it ideal for encoding integers at web scale without adaptive overhead. It is particularly effective for run-length and gap encoding in sparse datasets, as seen in post-2015 adoptions within frameworks like the tensorflow-compression library, where functions such as run_length_gamma_encode compress sequences of repeated values or model weights using Elias gamma embedded in range coders.
For instance, in scalable model compression, it encodes quantized weights with code lengths of 2 \lfloor \log_2 |x| \rfloor + 1 bits, facilitating efficient storage of large models in distributed training environments.
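As an illustration of the gap encoding of inverted index postings mentioned above, a minimal Python sketch (the function names are hypothetical, not from any particular library):

```python
def elias_gamma(n: int) -> str:
    l = n.bit_length()
    return "0" * (l - 1) + format(n, "b")

def encode_postings(doc_ids):
    """Encode a sorted postings list as gamma-coded gaps (first id as a gap from 0)."""
    prev, out = 0, []
    for d in doc_ids:
        out.append(elias_gamma(d - prev))   # gaps stay small when ids cluster
        prev = d
    return "".join(out)

bits = encode_postings([3, 5, 6, 14])       # gaps: 3, 2, 1, 8
print(bits)        # "011" + "010" + "1" + "0001000" = "01101010001000"
print(len(bits))   # 14 bits, versus 4 * 32 = 128 for fixed 32-bit ids
```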

Relation to Other Codes

Elias gamma coding shares its foundational principles with other universal codes developed by Peter Elias, particularly the delta and omega codes, all designed to encode positive integers without prior knowledge of their distribution. The delta code builds directly upon gamma by encoding the length of the binary representation using a gamma code, followed by the binary representation of the integer itself without its leading bit; this results in an asymptotic code length of approximately log₂(N) + 2 log₂(log₂(N)) + O(1), making it more efficient than gamma's ≈ 2 log₂(N) for large N, though gamma remains simpler and preferable for small-to-medium integers due to lower computational overhead. In contrast, the Elias omega code employs a recursive structure without a unary prefix, instead self-describing the lengths of successive length fields in a recursive manner, achieving an even tighter asymptotic bound of roughly log₂(N) + log₂(log₂(N)) plus lower-order terms for very large N and offering superior efficiency in scenarios involving extremely large integers, albeit at the cost of increased decoding complexity compared to gamma's straightforward unary-binary split. Beyond Elias's family, gamma coding differs from parametric codes like Golomb-Rice coding, which optimize for known geometric distributions by parameterizing the unary run length (Rice coding being the special case where the parameter is a power of 2), yielding better compression for data with predictable variance, such as posting lists in inverted indexes, while gamma's universality makes it distribution-agnostic but less adaptive, often resulting in 5-10% larger encoded sizes in benchmarks on standard document collections. Similarly, variable-byte coding aligns output to byte boundaries with continuation bits for byte-level encoding of integers up to 2^28 - 1 in four bytes, providing faster decoding at the expense of poorer compression ratios, making it suitable for hardware-aligned processing but less ideal for bit-packed storage where gamma excels in simplicity.
Fibonacci coding, another universal approach, uses Zeckendorf representations; prefix-free variants append an extra 1 so that every codeword ends in the pair "11", while non-prefix variants omit the terminator and require end-of-stream signaling, contrasting with gamma's inherent prefix-freeness and yielding comparable or slightly longer codes for small values. Gamma coding is typically chosen for its balance of simplicity and performance in encoding small-to-medium integers (e.g., up to 2^20), where its code lengths are competitive with alternatives and decoding is faster than recursive methods like omega, though trade-offs arise in computation: gamma's bit-level operations can be 1.5-6.7x slower than byte-aligned codes like variable-byte in SIMD-optimized libraries such as JASSv2, yet it maintains smaller footprints in bit-compressed scenarios. Generalizations of gamma extend to higher radices, such as base-128 for byte-oriented offsets, reducing the unary prefix overhead and achieving average lengths around 1.5 log₂(N) for large N, more efficient than binary gamma's 2 log₂(N) for values beyond 2^16, while variable-radix variants dynamically increase the base (e.g., from 2 to higher radices with grouped bits), outperforming delta for mid-to-large ranges but requiring more sophisticated encoding logic. In recent benchmarks from 2020s compression tests, gamma integrated into libraries for inverted indexes shows decompression speeds slower than optimized Golomb-Rice variants but better ratios than variable-byte for sparse integer sequences, underscoring its role in universal, low-complexity applications.
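The gamma/delta relationship described above can be sketched in a few lines of Python (an illustration, not a reference implementation):

```python
def elias_gamma(n: int) -> str:
    l = n.bit_length()
    return "0" * (l - 1) + format(n, "b")

def elias_delta(n: int) -> str:
    """Gamma-code the binary length of n, then append n's bits minus the leading 1."""
    l = n.bit_length()
    return elias_gamma(l) + format(n, "b")[1:]

# Delta pays a small fixed cost for short values but overtakes gamma as n grows.
for n in (1, 5, 17, 10**6):
    print(n, len(elias_gamma(n)), len(elias_delta(n)))
```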
