
Unary coding

Unary coding is a simple entropy encoding method that represents a positive integer n using a string of exactly n ones followed by a single zero, making it a prefix-free code suitable for variable-length encoding of small natural numbers. For example, the number 1 is encoded as "10", 2 as "110", and 3 as "1110", with the length of the codeword being n + 1 bits. This encoding scheme, also known as thermometer code in some contexts, is uniquely decodable and instantaneous, allowing the decoder to identify codeword boundaries without ambiguity by searching for the terminating zero. Unary coding is optimal for sources following a geometric probability distribution with success probability p = 1/2, achieving the entropy bound in such cases, but it becomes highly inefficient for larger values of n due to its linear growth in code length. It serves as a building block for more advanced universal codes, such as Elias gamma and Golomb codes, where the unary part encodes the length or quotient that prefixes a subsequent binary component. In practice, unary coding finds applications in lossless data compression, particularly for encoding term frequencies or positions of small integers in inverted indexes for information retrieval systems, where many values are low and follow a skewed distribution. It is also employed in modern video coding standards like HEVC through Context-Adaptive Binary Arithmetic Coding (CABAC), where unary codes form the bin strings for parameters such as transform coefficients or motion vectors, balancing simplicity with adaptability to probability models. Additionally, variants appear in neuromorphic designs for spiking neural networks and in hardware implementations due to their straightforward mapping to physical representations like pulse widths.

Fundamentals

Definition and Basic Concept

Unary coding, also known as the thermometer code, is a fundamental method for representing positive integers in a binary framework, where the integer n is encoded as a sequence of n identical symbols (typically 1's) followed by a terminating symbol, usually a 0, resulting in a codeword of length n+1 bits. This approach treats the number's value directly as the count of repeated symbols, making it one of the simplest forms of encoding for natural numbers. Alternative representations exist within unary coding, such as encoding n with n zeros followed by a single 1, or strictly positive variants that use n-1 ones followed by a 0, resulting in a length of n bits and giving up any representation of zero. Historically, unary coding has been termed "thermometer code" due to its linear progression, where the bit length increases proportionally with n, resembling the rising level in a thermometer. This terminology highlights its intuitive, analog-like scaling in digital contexts. The primary motivation for unary coding lies in its extreme simplicity, which is advantageous for encoding small integers or run lengths in data compression scenarios where the source follows a distribution favoring low values, as unary codes achieve optimality under such conditions. For instance, under a geometric distribution with p = 1/2, unary encoding minimizes average code length by assigning shorter codes to more probable smaller numbers. A basic example illustrates this: the integer 1 is encoded as "10", 2 as "110", and 3 as "1110", where each additional 1 corresponds to an increment in value before the terminating 0.
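
As a brief illustration (a minimal sketch; the function names are our own, not standard library calls), the standard code and the two variants mentioned above can be written as:
def unary(n):
    # Standard code: n ones followed by a terminating zero (length n + 1).
    return "1" * n + "0"

def unary_inverted(n):
    # Inverted-symbol variant: n zeros followed by a terminating one.
    return "0" * n + "1"

def unary_strict(n):
    # Strictly positive variant: n - 1 ones followed by a zero (length n).
    return "1" * (n - 1) + "0"

# unary(3) == "1110", unary_inverted(3) == "0001", unary_strict(3) == "110"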

Encoding and Decoding Procedures

The encoding procedure for standard unary coding over a binary alphabet represents a positive integer n as a run of exactly n '1' bits followed by a single '0' bit as the delimiter. This ensures the codeword is prefix-free, allowing unambiguous parsing when concatenated. The algorithm is straightforward and can be implemented iteratively:
def encode_unary(n):
    # Emit n '1' bits followed by a single terminating '0' bit.
    if n < 1:
        raise ValueError("n must be a positive integer")
    codeword = ""
    for _ in range(n):
        codeword += "1"
    codeword += "0"
    return codeword
This function generates the codeword directly, with O(n) time complexity, proportional to the output length. Decoding reverses this process by scanning the bitstream sequentially, counting consecutive '1's until a '0' is encountered, and interpreting the count as n. The algorithm assumes a well-formed input stream and handles edge cases such as an empty string (invalid, raises an error) or a stream ending in '1's without a delimiter (invalid, potentially indicating truncation). A basic implementation is:
def decode_unary(bitstream):
    # Count consecutive '1' bits, then consume the terminating '0' delimiter.
    if not bitstream:
        raise ValueError("Empty bitstream")
    count = 0
    i = 0
    while i < len(bitstream) and bitstream[i] == '1':
        count += 1
        i += 1
    if i == len(bitstream) or bitstream[i] != '0':
        raise ValueError("Invalid unary code: missing delimiter")
    return count
This decoder advances through the stream one codeword at a time, with O(n) time per codeword. A common variation inverts the symbols, representing n as n '0' bits followed by a single '1' delimiter, where decoding counts the '0's before the '1'. This form is equivalent in structure and decodability but may suit specific hardware or channel constraints, such as those favoring majority-zero representations. Unary coding exhibits high sensitivity to bit-flip errors due to its reliance on run-length counting and delimiters for synchronization. A single bit error, such as flipping a '0' delimiter to '1', can merge adjacent codewords, causing the decoder to misinterpret subsequent values by shifting the run boundaries and potentially propagating errors across the entire stream. For illustration, encoding the integer 4 yields the codeword "11110": four '1's followed by '0'. To decode "11110", the algorithm counts four consecutive '1's before the terminating '0', recovering n = 4. If a bit flip changes it to "11111", decoding would count five '1's without a delimiter, resulting in an invalid parse and loss of synchronization for following codewords.
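
To make the error-propagation behavior concrete, the sketch below (a hypothetical decode_stream helper, not part of the procedures above) decodes a concatenation of codewords and shows how a single flipped delimiter corrupts every subsequent value:
def decode_stream(bits):
    # Decode a concatenation of unary codewords into a list of integers.
    # For simplicity this accepts runs of length zero as well.
    values, i = [], 0
    while i < len(bits):
        n = 0
        while i < len(bits) and bits[i] == '1':
            n += 1
            i += 1
        if i == len(bits):
            raise ValueError("truncated codeword")
        i += 1  # consume the '0' delimiter
        values.append(n)
    return values

# "110" + "10" + "1110" encodes the sequence 2, 1, 3:
print(decode_stream("110101110"))   # [2, 1, 3]
# Flipping the first delimiter merges codewords and shifts all boundaries:
print(decode_stream("111101110"))   # [4, 3] -- the original values are lost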

Mathematical Properties

In unary coding, the codeword for a positive integer n has length l(n) = n + 1 bits, typically consisting of n ones followed by a zero (or vice versa). The average codeword length under a probability distribution P(n) over the positive integers is then given by \sum_{n=1}^{\infty} P(n) \cdot (n + 1). The standard unary code is a prefix code, meaning no codeword is a prefix of another, which ensures instantaneous decodability without ambiguity during sequential decoding. This property implies that the code satisfies the Kraft inequality, a necessary and sufficient condition for the existence of a prefix code over a binary alphabet: \sum_{n=1}^{\infty} 2^{-l(n)} \leq 1. For unary coding, the sum evaluates to \sum_{n=1}^{\infty} 2^{-(n+1)} = \frac{1}{2} \sum_{n=1}^{\infty} 2^{-n} = \frac{1}{2} < 1, confirming compliance and leaving slack that could in principle be reassigned to a more efficient code.

Unary coding achieves optimality as a prefix code for certain source distributions, particularly geometric ones where probabilities decay rapidly. Specifically, the standard unary code has a redundancy of 1 bit for the distribution P(n) = 2^{-n} for n \geq 1, where the entropy is H = 2 bits but the average codeword length is 3 bits. However, the strictly positive variant (length n) is the Huffman code (and thus optimal in average length) for this distribution, achieving the entropy bound with zero redundancy: its average length is exactly 2 bits, matching the Shannon limit. More generally, unary variants are optimal for geometric distributions P(n) = (k-1) k^{-n} with base k \geq 2, since the tail probability beyond n, which equals k^{-n}, never exceeds P(n) when k \geq 2; for k = 2 the codeword lengths align precisely with the ideal lengths -\log_2 P(n). For slower-decaying distributions (e.g., a uniform distribution over a finite range, or a geometric distribution with k < 2), unary's linear length growth leads to inefficiency, with the average length exceeding the entropy by a factor that grows with the support size.

Variants of unary coding can be uniquely decodable without being prefix codes, requiring look-ahead during decoding to resolve ambiguities where one codeword is a prefix of another. For instance, a code where shorter symbols end with 1 and longer ones start with 0 may necessitate examining subsequent bits to distinguish boundaries, as the decoder cannot decide instantaneously upon encountering a potential codeword boundary. Such constructions satisfy the McMillan extension of the Kraft inequality for uniquely decodable codes but introduce decoding delay proportional to the maximum look-ahead needed.
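
The following short Python check is a numerical illustration of these quantities for P(n) = 2^{-n}, with the infinite sums truncated at N = 60 terms (the variable names are our own):
import math

N = 60  # truncation point; the omitted tail is negligible (~2^-60)
P = {n: 2.0 ** -n for n in range(1, N + 1)}
entropy = -sum(p * math.log2(p) for p in P.values())    # ~2.0 bits
avg_standard = sum(p * (n + 1) for n, p in P.items())   # ~3.0 bits (length n + 1)
avg_strict = sum(p * n for n, p in P.items())           # ~2.0 bits (length n)
kraft = sum(2.0 ** -(n + 1) for n in P)                 # ~0.5, satisfying Kraft
print(entropy, avg_standard, avg_strict, kraft)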

Basic Unary Codes

Standard Run-Length Unary Codes

The standard run-length unary code represents a positive integer n as a bit string consisting of exactly n consecutive '1's followed by a single '0', forming a run of identical symbols whose length encodes the value directly. This method, often termed "run-length" due to the contiguous sequence of '1's terminated by a delimiter, serves as a variable-length encoding scheme for positive integers starting from 1. The encodings for small values of n are shown in the table below:
n    Unary Code
1    10
2    110
3    1110
4    11110
5    111110
This code offers simplicity in implementation, as encoding requires only repeating '1' n times and appending '0', while decoding involves counting '1's until the first '0' is encountered. It is inherently prefix-free, ensuring no codeword is a prefix of another, which allows for instantaneous, unambiguous decoding without buffering or lookahead. A primary disadvantage arises from the code length of n + 1 bits, which grows linearly with n and becomes highly inefficient for large values, particularly when the data distribution lacks a strong skew toward small integers. This exponential inefficiency relative to binary representations limits its utility for general-purpose compression of unbounded integers. Standard run-length unary coding relates to broader run-length encoding (RLE) in data compression by using the run of '1's to directly signify the count, with the '0' acting as a delimiter, akin to how RLE stores run lengths for repeated symbols in sequences.

Uniquely Decodable Non-Prefix Unary Codes

Uniquely decodable non-prefix unary codes represent positive integers using variable-length binary strings based on runs of identical symbols, where at least one codeword serves as a prefix of another, violating the prefix condition. Despite this, the code as a whole ensures unique decodability, meaning any concatenation of codewords can be parsed into the original sequence without ambiguity. The Sardinas–Patterson algorithm provides a polynomial-time method to verify this property by iteratively computing sets of possible "dangling suffixes" and checking for intersections with the code itself; if no intersection occurs, the code is uniquely decodable. A representative construction encodes the integer n as a leading '1' followed by exactly n-1 '0's: for example, n=1 maps to "1", n=2 to "10", n=3 to "100", and n=4 to "1000". Here, the codeword "1" is a prefix of all longer codewords like "10" and "100", requiring the decoder to read ahead after a '1' to check whether the next bit is '0' (indicating continuation of the current codeword) or '1' (indicating the end of the current codeword and the start of the next). For instance, the encoded sequence "110100" decodes uniquely as 1 ("1"), followed by 2 ("10"), followed by 3 ("100"), parsed by counting consecutive '0's after each leading '1' until the next '1' appears. This approach can allow shorter average codeword lengths under certain symbol probability distributions than prefix-constrained unary codes, as the relaxed prefix condition permits more flexible length assignments while still satisfying the Kraft-McMillan inequality for unique decodability. Construction typically involves appending run-length indicators ('0's) after a fixed marker ('1'), with the leading '1' of the subsequent codeword serving as the implicit delimiter, thereby avoiding catastrophic prefix conflicts and ensuring a unique parse across the entire stream. However, the need for look-ahead introduces decoding complexity, as the parser must buffer bits to resolve potential continuations, rendering these codes non-instantaneous and unsuitable for applications demanding instantaneous, streaming decoding without delay.
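
A minimal look-ahead decoder for this particular construction (an illustrative sketch; the function name is our own) can be written as follows; each new '1' implicitly terminates the preceding codeword, which is why the final codeword is only resolved at the end of the stream:
def decode_one_then_zeros(bits):
    # Decode the non-prefix code where n is '1' followed by n-1 '0's.
    # A new codeword begins at every '1'; '0's extend the current one.
    if not bits or bits[0] != '1':
        raise ValueError("stream must begin with '1'")
    values = []
    n = 0
    for b in bits:
        if b == '1':
            if n:                # the next '1' terminates the previous codeword
                values.append(n)
            n = 1
        else:
            n += 1
    values.append(n)             # final codeword ends at end of stream
    return values

print(decode_one_then_zeros("110100"))  # [1, 2, 3]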

Specialized Unary Codes

Unary codes using leading zeros followed by a one

Unary codes using leading zeros followed by a one represent a variant of unary coding designed for consistent encoding of positive integers through a step-wise accumulation of leading zeros terminated by a 1, ensuring a predictable structure suitable for integration into composite coding schemes. In this form, the integer n is encoded as n-1 leading zeros followed by a single 1, reflecting a progressive construction where each increment adds a zero, producing codewords that maintain order for applications requiring sorted representations. This construction adheres to prefix-coding principles and is prefix-free. The encoding process is straightforward: for n=1, the code is "1"; for n=2, "01"; for n=3, "001"; and so on, with the code length exactly equal to n. These codewords are prefix-free, guaranteeing unique decodability upon parsing, as the terminating 1 unambiguously delineates each symbol without overlap. Unary codes using leading zeros followed by a one are frequently employed as the length-indicator component in composite schemes, such as Elias gamma coding, where the unary segment precedes a binary offset to encode the bit length of the offset. Key advantages include the predictability of codeword lengths, which directly correspond to the integer value, and their inherent sortability, as lexicographical ordering of the codewords mirrors the numerical sequence of the integers they represent. This reduces ambiguity in mixed coding environments by providing a reliable baseline for synchronization and ordering. For decoding, the process interprets the number of leading zeros k as an offset, with the value computed as k + 1; for instance, "001" has two leading zeros, indicating n=3.
Integer n    Unary Code    Length
1            1             1
2            01            2
3            001           3
4            0001          4
This table illustrates representative encodings, highlighting the step-wise zero prefix addition.
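
As a sketch of how this variant serves as the length indicator in a composite scheme, the following illustrative Elias gamma encoder prefixes the binary form of n with \lfloor \log_2 n \rfloor zeros; the leading 1 of the binary part doubles as the unary terminator:
def elias_gamma(n):
    # Zero-prefixed unary length indicator followed by the binary offset:
    # floor(log2 n) zeros, then the binary representation of n (whose
    # leading 1 acts as the terminating one of the unary part).
    if n < 1:
        raise ValueError("n must be a positive integer")
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

# elias_gamma(1) == "1", elias_gamma(2) == "010", elias_gamma(9) == "0001001"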

Generalized Unary Coding

Generalized unary coding extends the standard unary representation to a fixed-length block structure, allowing the encoding of a predefined range of non-negative integers from 0 to m-1 within a bounded number of bits, where m is determined by the block length n and the fixed number of 1s, k, used in each codeword. Unlike traditional unary coding, which uses variable-length runs of 1s terminated by a 0 and scales linearly with the value, this generalization permits the block of k 1s to be distributed or "spread" across the n-bit block, enabling a quadratic increase in representational capacity to approximately (n - k)^2 values. This approach is particularly suited to scenarios where the maximum value is known in advance, as the parameters n and k are chosen such that m \approx (n - k)^2. The construction combines unary-like runs of 1s with offsets or delimiters to place the k 1s uniquely within the fixed n-bit block, ensuring instantaneous decodability. For a given value v in the range 0 to m-1, the codeword is all 0s for v = 0, and for higher values it places the k consecutive or spread 1s according to a systematic rule, such as starting with a contiguous run and then introducing increasing separations (e.g., one additional 0, then two, etc.) in subsequent "cycles" to cover the expanded range. A general formulation views the codeword as a segment of fixed length related to \log_2 m bits that indicates the cycle or offset, followed by a unary part representing the remainder within that cycle using the k 1s. This structure maintains some unary properties while bounding the total length to n bits. Decoding involves scanning the block for the positions of the k 1s and mapping their configuration back to the value v. For example, consider encoding values from 0 to 15 (m = 16) using a 7-bit block with k = 3 (n - k = 4, and 4^2 = 16). The code for 0 is 0000000 (no 1s, treated as a special case). For 1, it is 0000111 (the three 1s placed contiguously at the end). Higher values spread the 1s further, such as 0010111 for 6 (1s positioned with a specific separation). Similarly, for a smaller range m = 8 (0 to 7), a scheme with n = 8 and an appropriate k (e.g., k = 4) might use a 4-bit prefix of leading 0s or offset indicators combined with a 4-bit run of 1s for the remainder; for v = 1, this could yield a total 8-bit codeword such as a 4-bit fixed prefix followed by a short unary run (e.g., effectively 0000 + 0111, adjusted for the scheme), ensuring all codewords are exactly 8 bits long. These examples illustrate how the fixed block enforces uniform length while embedding unary elements. The primary advantages of generalized unary coding lie in its bounded codeword length for applications with known maximum values, preventing the unbounded linear growth in bits seen in standard unary schemes, and its utility in hardware or table-based lookups where fixed-size entries simplify processing and storage. By achieving roughly quadratic capacity in n bits, far surpassing the linear capacity of standard unary coding, it offers improved efficiency for moderate ranges without resorting to full binary encoding, and the structure can enhance error resilience due to higher minimum Hamming distances in some configurations. This makes it relevant in neural representations or biologically inspired coding, where the spatial distribution of signals (as in birdsong timing) mirrors the placement of the 1s. However, this approach sacrifices the simplicity of pure unary coding, as decoding requires more complex positional analysis rather than just counting runs, potentially increasing computational overhead.
Additionally, it necessitates prior knowledge of the maximum value m to select optimal n and k, limiting adaptability to changing or unknown bounds without redesigning the code set. Optimization techniques, such as adjusting the placement of the 1s for minimal length within the fixed block, have been proposed to mitigate this overhead while preserving decodability.

Applications

Uses in Computing

Unary coding plays a key role in data compression algorithms designed for sources with geometric distributions, where it efficiently encodes the quotient component of non-negative integers. In Golomb-Rice codes, the quotient \lfloor x / 2^k \rfloor is represented using a unary sequence of that many zeros followed by a one, paired with a binary-encoded remainder, making it optimal for exponentially decaying probabilities common in prediction residuals or run lengths. Similarly, Elias gamma codes prepend a unary representation of the bit length \lfloor \log_2 n \rfloor + 1 (as zeros followed by a one) to the binary representation of n excluding its leading one, providing prefix-free encoding for positive integers in inverted indexes and sparse data. In character encoding standards, UTF-8 employs a unary-like mechanism to indicate continuation bytes in multi-byte sequences, where each non-lead byte begins with the fixed pattern "10" followed by six data bits, allowing decoders to synchronize without ambiguity even after bit errors. This design, specified in RFC 3629, ensures backward compatibility with ASCII while supporting variable-length encoding up to four bytes per code point. Hardware implementations leverage unary coding for its linear scaling and monotonicity, particularly in thermometer codes used within analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). In flash ADCs, a comparator array produces a thermometer code (a unary string of consecutive ones indicating the input voltage level) which is then encoded to binary, enabling high-speed conversion; thermometer coding likewise guarantees monotonicity in current-steering DACs that activate equally weighted elements sequentially. Thermometer representations also appear in priority encoders for hardware arbitration, where thermometer-like outputs from comparators facilitate one-hot to binary conversion in systems like interrupt controllers, minimizing decoding latency. In neuromorphic computing, unary coding models neuron activations by representing activation strength as the number of pulses or ones in a temporal or spatial sequence, aiding energy-efficient inference on memristive hardware. Modern applications extend unary coding to sparse data structures in databases and search engines, where it supports run-length compression of bit vectors for compressed inverted indexes. For instance, in bitmap indexes, unary prefixes encode short runs of ones or zeros in posting lists, reducing space for low-cardinality attributes while enabling fast intersection queries via succinct operations. Overall, unary coding offers low decoding overhead for small integers (typically 1 to 10 bits), making it suitable for small, frequently accessed values in file systems or database indexes, where decoding speed matters more than density for values under about 32.
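
As a hedged illustration of the Golomb-Rice construction described above (shown here with a ones-then-zero unary quotient, whereas some specifications use zeros followed by a one, as noted; the function name is our own), a minimal Rice encoder might look like:
def rice_encode(x, k):
    # Rice code with parameter k >= 1 for a non-negative integer x:
    # unary-coded quotient, then the k-bit binary remainder.
    q, r = divmod(x, 1 << k)
    return "1" * q + "0" + format(r, "0%db" % k)

# rice_encode(9, 2) == "11001": quotient 2 -> "110", remainder 1 -> "01"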

Unary Coding in Biology

In biological systems, unary coding appears in neural processes as a sparse representation strategy, particularly in the premotor nucleus HVC of songbirds such as the zebra finch (Taeniopygia guttata), where it supports the precise timing of learned vocal sequences. During song production, individual RA-projecting HVC neurons (HVC_RA) fire a single, brief burst of 2–7 spikes per rendition of the song motif (a structured sequence of 3–10 syllables lasting approximately 200–600 ms), resulting in an ultra-sparse or unary code that dedicates one neural event to encoding a specific temporal position in the motif. This firing pattern aligns bursts with individual syllables or transitions, enabling the population of HVC neurons to collectively generate a motor program for song syntax without overlap or redundancy within a single cycle. Research on songbird brains has demonstrated this unary representation in premotor circuits, highlighting its role in sequence timing. In a seminal study, Kozhevnikov and Fee (2007) recorded from identified HVC neurons during singing, revealing that HVC_RA cells produce precisely timed bursts (mean duration 5.1 ms, 4.3 spikes per burst) exactly once per motif, with almost no spikes elsewhere, confirming the unary sparsity. This pattern contrasts with HVC interneurons and HVC_X neurons, which fire multiple bursts per motif but lack the same precision. The unary code in HVC_RA neurons thus serves as a combinatorial basis for sequence generation, where the population activity reconstructs the full song sequence through distributed, non-overlapping contributions. Broader biological analogs to unary coding occur in sensory neurons, where stimulus intensity is encoded via increasing spike counts in a rate-code manner, akin to a thermometer-like progression. For instance, in thermoreceptors, the firing rate escalates with stimulus intensity, effectively representing discrete levels through the number of spikes in a fixed time window, much like unary coding for sparse, graduated signals. This mechanism allows efficient transmission of analog environmental data in neural pathways, prioritizing reliability for low-frequency but critical stimuli.

References

  1. Gamma codes. Stanford NLP Group.
  2. Index Compression. Information Retrieval lecture notes (PDF).
  3. Compression. Cornell University, Computer Science (PDF).
  4. Techniques for Inverted Index Compression (PDF).
  5. Entropy Coding in HEVC. MIT Open Access Articles (PDF).
  6. Unary Coding for Neural Network Learning. arXiv (PDF).
  7. Optimal source codes for geometrically distributed integer alphabets ...
  8. Introduction to Dynamic Unary Encoding. arXiv:1405.2846.
  9. Source Encoding and Compression (PDF).
  10. Entropy and Lossless Coding (PDF).
  11. Codes for the positive integers (PDF).
  12. Huffman Coding (PDF).
  13. Error Correction Capacity of Unary Coding. arXiv (PDF).
  14. Adding Compression to Block Addressing Inverted Indexes.
  15. Efficient Systematic Deletions/Insertions of 0's Error Control Codes (PDF).
  16. arXiv:1109.0383 [quant-ph].
  17. Generalized Universal Coding of Integers. arXiv (PDF).
  18. Cover, T. M., and Thomas, J. A. Elements of Information Theory.
  19. The Sardinas-Patterson test for unique decodability (PDF).
  20. Introduction to Dynamic Unary Encoding. arXiv (PDF).
  21. Elias Gamma Code. https://www.cs.auckland.ac.nz/~peter-f/FTPfiles/2002%20Goldbach.pdf
  22. Generalized Unary Coding. Circuits, Systems, and Signal Processing.
  23. Optimization of Generalized Unary Coding. arXiv (PDF).
  24. A General SIMD-based Approach to Accelerating Compression ... (PDF).
  25. Quasi-Succinct Indices. arXiv (PDF).
  26. RFC 3629 - UTF-8, a transformation format of ISO 10646.
  27. An 8-bit current-steering digital to analog converter. ScienceDirect.
  28. Chapter 4: Inverted Indexing for Text Retrieval (PDF).
  29. How to tame LLMs with quantum-inspired compression.
  30. Singing-Related Activity of Identified HVC Neurons in the Zebra Finch.
  31. Temporal Sparseness of the Premotor Drive Is Important for Rapid ...
  32. Model of the HVC neural network as a song motor in zebra finch.