
Hamming bound

The Hamming bound, also known as the sphere-packing bound or volume bound, is a fundamental upper bound in coding theory that limits the size of a block code capable of correcting a specified number of errors. For a q-ary code of length n with minimum distance d = 2t + 1 (allowing correction of up to t errors), the maximum number of codewords M satisfies M \leq \frac{q^n}{\sum_{k=0}^t \binom{n}{k} (q-1)^k}, where the denominator represents the volume of a Hamming ball (or sphere) of radius t in the q-ary space of dimension n. This bound derives from the geometric intuition that the disjoint spheres of radius t centered at each codeword must fit within the total space without overlap, ensuring unique decoding for correctable errors.

Introduced by Richard W. Hamming in 1950 while developing error-correcting mechanisms for early computing machines at Bell Laboratories, the bound emerged from his analysis of binary codes for single-error correction, where he derived M \leq 2^n / (n + 1) as a special case for t=1 and q=2. Hamming's work, motivated by the need for reliable operation in large-scale digital computing machines and to address undetected errors in such systems, laid the groundwork for modern error-correcting codes and was published in his seminal paper "Error Detecting and Error Correcting Codes." The anecdote of his frustrations with punched-card readers is often associated with his motivation. The general q-ary formulation followed naturally from this geometric packing argument and became a cornerstone of the field, complementing Claude Shannon's information-theoretic limits on reliable communication.

The bound's significance lies in its role as a benchmark for code optimality: codes achieving equality are called perfect codes, meaning their spheres exactly tile the space without gaps or overlaps. Notable examples include the Hamming codes of length 2^m - 1 (for t=1) and the Golay codes, which saturate the bound and represent the most efficient known constructions for their parameters. In asymptotic terms, the Hamming bound implies that the rate R = (1/n) \log_q M of an error-correcting code satisfies R \leq 1 - H_q(t/n), where H_q is the q-ary entropy function, providing a non-constructive limit that guides the design of practical codes in applications like digital communications and storage systems. While tighter bounds exist for specific cases (e.g., the Johnson bound), the Hamming bound remains influential due to its simplicity and broad applicability across finite alphabets.
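As a quick numerical illustration (a minimal Python sketch, not drawn from Hamming's paper), the special case M \leq 2^n/(n+1) for t=1 and q=2 can be evaluated directly:

```python
# Hamming's original special case: binary codes (q = 2) correcting a
# single error (t = 1), where the bound reads M <= 2^n / (n + 1).
for n in [3, 7, 15]:
    bound = 2**n // (n + 1)  # floor, since M must be an integer
    print(f"n = {n:2d}: at most {bound} codewords")
# When n = 2^m - 1, n + 1 is a power of two and the bound is attained
# exactly by the Hamming codes (e.g., 16 codewords for the [7,4] code).
```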

Fundamentals of Error-Correcting Codes

Basic Concepts

Error-correcting codes are structured subsets of an ambient space, such as the set of all strings of a fixed length over a finite alphabet, designed to enable the detection and correction of errors that occur during data transmission or storage. These codes map information symbols into codewords that possess specific properties, allowing a decoder to reconstruct the original message even if some bits are altered by noise or interference. The encoding process involves selecting codewords from a larger ambient space, ensuring that erroneous versions of different codewords remain distinguishable.

In typical communication scenarios, errors arise from channel noise, which can be modeled using probabilistic frameworks like the binary symmetric channel (BSC). The BSC assumes a binary input and output alphabet, where each transmitted bit is received correctly with probability 1 - p and flipped with probability p, independently for each bit, capturing symmetric error behavior without bias toward 0 or 1. This model simplifies analysis while approximating real-world noisy channels, such as those in digital telecommunications.

To combat such errors, error-correcting codes incorporate redundancy, which entails appending extra bits, often parity bits computed from the information bits, to form longer codewords. This expands the transmitted length beyond the minimal message length, trading off efficiency for robustness; for instance, the added bits enable the decoder to identify and repair a limited number of errors by verifying consistency across the codeword. The decoding process then uses algorithms to map the potentially corrupted received word back to the nearest valid codeword.

The modern theory of error-correcting codes originated with Claude Shannon's seminal 1948 paper, "A Mathematical Theory of Communication," which proved the existence of codes achieving arbitrarily low error rates over noisy channels, provided the information rate remains below the channel's capacity. This work established the fundamental limits of reliable communication and motivated subsequent developments in code design and performance bounds.
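To make the BSC model concrete, the following sketch simulates it in Python (an illustrative toy; the crossover probability p = 0.1 and the bit sequence are chosen arbitrarily):

```python
import random

def bsc(bits, p, rng=random):
    """Pass a bit sequence through a binary symmetric channel:
    each bit is flipped independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

random.seed(0)
sent = [1, 0, 1, 1, 0, 0, 1, 0]
received = bsc(sent, p=0.1)
flips = sum(s != r for s, r in zip(sent, received))
print("sent:    ", sent)
print("received:", received)
print("bits flipped:", flips)
```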

Hamming Distance and Minimum Distance

In coding theory, the Hamming distance provides a fundamental measure of dissimilarity between two codewords. For two strings u = (u_1, \dots, u_n) and v = (v_1, \dots, v_n) over an alphabet of size q, the Hamming distance d(u, v) is defined as the number of positions i where u_i \neq v_i, that is, d(u, v) = |\{ i : 1 \leq i \leq n, \, u_i \neq v_i \}|. This metric, introduced by Hamming in the context of error-correcting codes, quantifies the minimum number of symbol changes needed to transform one codeword into another.

The minimum distance d of a code C \subseteq \Sigma_q^n, where \Sigma_q is the q-ary alphabet and n is the code length, is the smallest Hamming distance between any pair of distinct codewords: d = \min \{ d(u, v) : u, v \in C, \, u \neq v \}. A larger minimum distance enhances the code's robustness against errors, as it ensures that codewords are sufficiently separated in the ambient space. This property directly influences the code's ability to detect and correct transmission errors.

The minimum distance determines the error-correcting capability of the code. Specifically, a code with minimum distance d can correct up to t = \lfloor (d-1)/2 \rfloor errors, meaning that any received word within Hamming distance t of a unique codeword can be decoded correctly to that codeword via nearest-neighbor decoding. For detection, the code can identify up to d-1 errors without necessarily correcting them. This capability arises because spheres of radius t around codewords are disjoint, preventing overlap that could lead to decoding ambiguity.

An important upper bound relating the minimum distance to the code size is the Singleton bound. For a q-ary code of length n and minimum distance d, the maximum number of codewords A_q(n, d) satisfies A_q(n, d) \leq q^{n - d + 1}. This bound, achieved by maximum distance separable (MDS) codes such as Reed-Solomon codes, limits the trade-off between code rate and error correction. Codes meeting this bound are optimal in terms of rate for a given distance.
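These definitions translate directly into code. The sketch below (a minimal Python example; the four codewords form a hypothetical toy code, not one from the text) computes the minimum distance and the resulting error-correction radius:

```python
from itertools import combinations

def hamming_distance(u, v):
    """Number of positions where u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def minimum_distance(code):
    """Smallest pairwise Hamming distance over distinct codewords."""
    return min(hamming_distance(u, v) for u, v in combinations(code, 2))

code = ["00000", "01011", "10101", "11110"]  # toy binary code
d = minimum_distance(code)
t = (d - 1) // 2
print(f"d = {d}: corrects up to t = {t} errors, detects up to {d - 1}")
```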

Statement of the Hamming Bound

Preliminary Definitions

In coding theory, an error-correcting code over an alphabet of size q (such as the binary field \mathbb{F}_2 with q=2) is defined as a subset C \subseteq \Sigma^n, where \Sigma is the alphabet with |\Sigma| = q and n is the code length, representing the number of symbols in each codeword. For a linear code over a finite field \mathbb{F}_q, the dimension k denotes the number of information symbols, such that the code size M = |C| = q^k. The code rate R = k/n measures the efficiency of the code in terms of the fraction of information symbols relative to the total length.

The minimum distance d of the code C is the smallest Hamming distance between any two distinct codewords, as defined in prior discussions of distance metrics. For error correction up to t errors, the code must satisfy d = 2t + 1, ensuring that spheres of radius t around codewords do not overlap.

A Hamming sphere (or ball) of radius t centered at a codeword c \in C consists of all vectors in \Sigma^n that are at most Hamming distance t from c. The volume of such a sphere, denoted V(n, t; q), is the number of vectors it contains and is given by V(n, t; q) = \sum_{i=0}^t \binom{n}{i} (q-1)^i. This formula counts the \binom{n}{i} ways to choose i positions for errors and the q-1 choices of erroneous symbol in each such position.

The maximum possible size of a q-ary code of length n and minimum distance d, denoted A(n, d; q), represents the largest M for which such a code exists. This quantity is central to bounds like the Hamming bound, providing an upper limit on code performance.
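The sphere volume V(n, t; q) is straightforward to compute with exact integer arithmetic; the following minimal Python sketch mirrors the formula above:

```python
from math import comb

def sphere_volume(n, t, q):
    """V(n, t; q) = sum_{i=0}^{t} C(n, i) * (q - 1)^i: the number of
    q-ary words of length n within Hamming distance t of a fixed center."""
    return sum(comb(n, i) * (q - 1)**i for i in range(t + 1))

print(sphere_volume(7, 1, 2))   # 8 = 1 + 7
print(sphere_volume(23, 3, 2))  # 2048, relevant to the binary Golay code
print(sphere_volume(11, 2, 3))  # 243, relevant to the ternary Golay code
```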

Formal Statement

The Hamming bound, also known as the sphere-packing bound, is an upper bound on the maximum number of codewords in a block code over an alphabet of size q with length n that can correct up to t errors, which corresponds to a minimum distance of d = 2t + 1. For such a code C with |C| = M, the bound states that M \leq \frac{q^n}{V(n, t; q)}, where V(n, t; q) = \sum_{k=0}^t \binom{n}{k} (q-1)^k denotes the volume of a Hamming ball of radius t in the q-ary space of length n. Equivalently, the optimal code size A(n, d; q) satisfies A(n, d; q) \leq \frac{q^n}{\sum_{i=0}^t \binom{n}{i} (q-1)^i}.

In the special case of binary codes where q = 2, the bound simplifies to M \leq \frac{2^n}{\sum_{i=0}^t \binom{n}{i}}. This form was originally derived by Richard W. Hamming in his seminal 1950 paper "Error Detecting and Error Correcting Codes," which focused on binary codes capable of single-error correction (t=1) and established the bound M \leq 2^n / (n+1).

Asymptotically, as n \to \infty with relative distance \delta = d/n, the Hamming bound implies an upper limit on the code rate R = (\log_q M)/n of R \leq 1 - H_q(\delta/2), where H_q(p) = p \log_q (q-1) - p \log_q p - (1-p) \log_q (1-p) is the q-ary entropy function.
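Both the finite-length bound and its asymptotic form can be evaluated directly. The sketch below is a minimal Python rendering of the statements above (the parameter choices are illustrative only):

```python
from math import comb, log

def sphere_volume(n, t, q):
    return sum(comb(n, i) * (q - 1)**i for i in range(t + 1))

def hamming_bound(n, d, q):
    """Upper bound on A(n, d; q) for odd minimum distance d = 2t + 1."""
    t = (d - 1) // 2
    return q**n // sphere_volume(n, t, q)

def q_ary_entropy(p, q):
    """H_q(p) = p log_q(q-1) - p log_q(p) - (1-p) log_q(1-p)."""
    return p * log(q - 1, q) - p * log(p, q) - (1 - p) * log(1 - p, q)

print(hamming_bound(7, 3, 2))     # 16: met by the [7,4] Hamming code
print(hamming_bound(23, 7, 2))    # 4096 = 2^12: met by the binary Golay code
print(1 - q_ary_entropy(0.1, 2))  # asymptotic rate cap at delta/2 = 0.1
```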

Derivation of the Bound

Sphere Packing Principle

The sphere packing principle provides a geometric framework for understanding the limitations on the size of error-correcting codes in coding theory. In this analogy, the ambient space consists of all possible q-ary sequences of length n, forming a discrete n-dimensional space with total volume q^n. Each codeword serves as the center of a "sphere" of radius t, where t is the error-correcting capability of the code (corresponding to a minimum distance of at least 2t+1). These spheres are disjoint because the minimum distance ensures that no two spheres overlap; any sequence within distance t of one codeword cannot be within distance t of another.

The volume of each such sphere, denoted V(n, t; q), represents the number of sequences at Hamming distance at most t from the center and is given by the sum \sum_{k=0}^{t} \binom{n}{k} (q-1)^k. Since the spheres are disjoint and must fit within the total space without overlapping, the maximum number of codewords M satisfies M \cdot V(n, t; q) \leq q^n. This inequality captures the fundamental trade-off: increasing the code size M reduces the achievable error-correcting radius t, and vice versa, due to the finite volume of the space. The principle was originally developed in the context of binary codes but extends naturally to q-ary alphabets.

To illustrate, consider a simple binary (q=2) case with n=3 and t=1, aiming to correct single errors (minimum distance 3). Each sphere has volume V(3,1;2) = 1 + \binom{3}{1} = 4, consisting of the codeword and its three single-flip neighbors. The total space has volume 8, so at most M = 8/4 = 2 disjoint spheres can fit, limiting the code to at most two codewords, such as the all-zero and all-one sequences. This example demonstrates how the non-overlapping spheres constrain the code size even in low dimensions.

This geometric perspective originates from Richard W. Hamming's foundational work on error-correcting codes, where he modeled the problem using spheres in the n-dimensional binary hypercube to derive bounds on code parameters.
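The n=3, t=1 example is small enough to check exhaustively. This brief Python sketch enumerates the two spheres and confirms that they are disjoint and together fill the whole space:

```python
from itertools import product

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

space = list(product([0, 1], repeat=3))  # all 8 binary words of length 3
code = [(0, 0, 0), (1, 1, 1)]            # repetition code, d = 3, t = 1

spheres = {c: {w for w in space if hamming_distance(w, c) <= 1}
           for c in code}
for c, s in spheres.items():
    print(c, "->", sorted(s))            # each sphere contains 4 words

assert not (spheres[code[0]] & spheres[code[1]])       # disjoint
assert len(spheres[code[0]] | spheres[code[1]]) == 8   # cover everything
```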

Detailed Proof

The detailed proof of the Hamming bound relies on the sphere packing argument applied to a q-ary error-correcting code C of length n, minimum distance d = 2t + 1, and M = |C| codewords, where t is a non-negative integer representing the error-correcting capability. For any received word y in the ambient space \mathbb{F}_q^n, nearest codeword decoding (which coincides with maximum-likelihood decoding when at most t errors occur) assigns y to the unique codeword c \in C such that d_H(y, c) \leq t, provided such a c exists; this ensures that the spheres of radius t centered at the codewords cover the set of all correctable received words without overlap.

To see that these M spheres are disjoint, suppose two distinct spheres centered at c_1, c_2 \in C intersect at some y \in \mathbb{F}_q^n; then d_H(c_1, y) \leq t and d_H(c_2, y) \leq t, implying d_H(c_1, c_2) \leq 2t by the triangle inequality for the Hamming metric, which contradicts the minimum distance d = 2t + 1.

The volume (or size) of each such sphere, denoted V(n, t; q), is the number of words in \mathbb{F}_q^n at Hamming distance at most t from a fixed center; this is given by V(n, t; q) = \sum_{k=0}^{t} \binom{n}{k} (q-1)^k. To derive this formula, note that the number of words at exact distance k from the center is \binom{n}{k} (q-1)^k: choose k positions out of n to differ from the center (\binom{n}{k} ways), and for each such position, select one of the q-1 symbols differing from the center's symbol ((q-1)^k ways in total), while the remaining n-k positions match the center exactly. Summing over k from 0 to t yields the total sphere volume. For small t, this simplifies explicitly; for instance, when t=0, V(n, 0; q) = 1 (just the center itself), and when t=1, V(n, 1; q) = 1 + n(q-1).

Since the M disjoint spheres are contained within the entire space \mathbb{F}_q^n of size q^n, their total volume satisfies M \cdot V(n, t; q) \leq q^n, which rearranges to the Hamming bound M \leq q^n / V(n, t; q). This inequality holds even for non-perfect codes, where the spheres do not cover the entire space (i.e., some words lie at distance greater than t from all codewords, making the covering incomplete); equality occurs precisely when the spheres tile the space exactly, defining a perfect code.
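The counting step of the proof can be double-checked by brute force on small parameters. The following sketch (illustrative parameter triples only) compares the closed-form sum against exhaustive enumeration of the sphere around the all-zero word:

```python
from itertools import product
from math import comb

def sphere_volume(n, t, q):
    return sum(comb(n, k) * (q - 1)**k for k in range(t + 1))

def brute_force_volume(n, t, q):
    """Count words of length n over {0,...,q-1} within distance t of 0^n,
    i.e., words with at most t nonzero positions."""
    return sum(1 for w in product(range(q), repeat=n)
               if sum(s != 0 for s in w) <= t)

for n, t, q in [(4, 1, 2), (4, 2, 3), (5, 2, 2)]:
    assert sphere_volume(n, t, q) == brute_force_volume(n, t, q)
print("closed-form volume matches exhaustive count")
```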

Packing and Covering Radii

In coding theory, the packing radius of a code C is defined as the largest integer t such that the Hamming spheres (or balls) of radius t centered at the codewords are pairwise disjoint. This value is directly related to the minimum distance d of the code by the formula t = \lfloor (d-1)/2 \rfloor, which ensures that the spheres do not overlap, providing a geometric characterization of the code's error-detection and correction capabilities up to t errors. Geometrically, this corresponds to a sphere-packing in the Hamming space \mathbb{F}_q^n, where the disjoint spheres represent the exclusive decoding regions for each codeword.

The covering radius \rho(C) of a code C is the smallest integer \rho such that the union of Hamming spheres of radius \rho centered at the codewords covers the entire ambient space \mathbb{F}_q^n. In geometric terms, it measures the maximal distance from any point in the space to the nearest codeword, quantifying how efficiently the code "covers" the space beyond the packing spheres. In general, \rho(C) \geq t, with equality holding only for perfect codes.

The Hamming bound, also known as the sphere-packing bound, relies on the packing radius t to derive an upper limit on the code size by considering the volume of disjoint spheres of radius t. However, the covering radius \rho(C) extends this analysis by assessing decoding behavior for errors exceeding t, as spheres of radius \rho(C) must encompass all possible received words for complete coverage. This distinction highlights that while the packing radius ensures unique decoding up to t errors, the covering radius determines the overall space coverage, with \rho(C) > t for non-perfect codes.

A key trade-off arises in code design: codes achieving a small covering radius relative to their packing radius are particularly efficient as covering codes, minimizing the radius needed for full space coverage while maintaining a reasonable minimum distance. This balance is crucial in applications where exhaustive coverage is prioritized over maximal error correction, such as in data compression or communication systems seeking to bound undetected errors.
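Both radii are easy to compute exhaustively for small codes. The sketch below (toy codes chosen purely for illustration) contrasts a perfect code, where \rho(C) = t, with a non-perfect one, where \rho(C) > t:

```python
from itertools import product

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

def covering_radius(code, n, q=2):
    """Maximum over all words of the distance to the nearest codeword."""
    return max(min(hamming_distance(w, c) for c in code)
               for w in product(range(q), repeat=n))

def packing_radius(d):
    """Largest t with disjoint spheres: t = floor((d - 1) / 2)."""
    return (d - 1) // 2

# Perfect code: binary repetition code of length 3 (d = 3, t = 1).
rep3 = [(0, 0, 0), (1, 1, 1)]
print(packing_radius(3), covering_radius(rep3, 3))   # 1 1  -> rho == t

# Non-perfect code: binary repetition code of length 4 (d = 4, t = 1).
rep4 = [(0, 0, 0, 0), (1, 1, 1, 1)]
print(packing_radius(4), covering_radius(rep4, 4))   # 1 2  -> rho > t
```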

Perfect Codes

A perfect t-error-correcting code over an alphabet of size q and length n is one that achieves equality in the Hamming bound, such that the total number of codewords M satisfies M \cdot V(n, t; q) = q^n, where V(n, t; q) denotes the volume of a Hamming ball of radius t in the q-ary space of dimension n. This equality implies that the spheres of radius t centered at each codeword are disjoint and collectively cover the entire ambient space without gaps or overlaps, achieving optimal packing efficiency.

The Hamming codes provide a fundamental family of perfect 1-error-correcting codes. These are binary linear codes of length n = 2^m - 1, dimension k = 2^m - 1 - m, and minimum distance d=3, for m ≥ 2, constructed using parity-check matrices whose columns are all nonzero binary vectors of length m. Their perfection follows directly from the Hamming bound for t=1, as the sphere volume V(n, 1; 2) = 1 + n = 2^m divides 2^n exactly, yielding M = 2^{n - m}. Ternary Hamming codes, defined analogously over GF(3) with length n = (3^m - 1)/2, dimension k = n - m, and d=3, are also perfect for t=1.

The Golay codes represent the only known non-Hamming perfect codes with t > 1. The binary Golay code is a [23, 12, 7] linear code that corrects up to t=3 errors, achieving equality in the Hamming bound with sphere volume V(23, 3; 2) = 1 + 23 + 253 + 1771 = 2048, so M = 2^{23} / 2048 = 2^{12}. Similarly, the ternary Golay code is an [11, 6, 5] linear code over GF(3) that corrects up to t=2 errors perfectly, as V(11, 2; 3) = 1 + 22 + 220 = 243, so M = 3^{11} / 243 = 729 = 3^6. These codes were introduced in Golay's 1949 note "Notes on Digital Coding," which emphasized their role in approaching the theoretical capacity of a communication channel.

A complete classification of perfect codes over prime-power alphabets reveals their extreme rarity: the only examples are the Hamming codes (for t=1), the binary Golay code (for t=3), the ternary Golay code (for t=2), and trivial codes such as the full space or binary repetition codes of odd length. This result, conjectured by Golay and proved independently by Tietäväinen and by Zinoviev and Leontiev, establishes that no other perfect multiple-error-correcting codes exist over finite fields for q a prime power. In particular, for binary codes, the only non-trivial perfect codes are the Hamming codes and the [23, 12, 7] Golay code. Consequently, most parameter sets (n, t, q) do not admit perfect codes, underscoring the exceptional nature of these constructions.
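Perfection reduces to an exact integer identity, M \cdot V(n, t; q) = q^n, which the following short Python sketch verifies for the three families named above:

```python
from math import comb

def sphere_volume(n, t, q):
    return sum(comb(n, k) * (q - 1)**k for k in range(t + 1))

# Perfection check with exact integer arithmetic: M * V(n, t; q) == q^n.
perfect_codes = [
    ("binary Hamming [7,4,3]",   7, 1, 2, 2**4),
    ("binary Golay [23,12,7]",  23, 3, 2, 2**12),
    ("ternary Golay [11,6,5]",  11, 2, 3, 3**6),
]
for name, n, t, q, M in perfect_codes:
    assert M * sphere_volume(n, t, q) == q**n
    print(f"{name}: {M} * {sphere_volume(n, t, q)} = {q}^{n}")
```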

Applications and Limitations

Examples in Binary Codes

One prominent example of a binary code achieving the Hamming bound is the [7,4,3]_2 Hamming code, which encodes 4 information bits into 7-bit codewords with minimum distance 3, yielding M=16 codewords capable of correcting t=1 error. This code saturates the bound precisely, as the sphere-packing argument gives an upper limit of 2^7 / \sum_{k=0}^{1} \binom{7}{k} = 128 / 8 = 16 codewords. Binary repetition codes of odd length 2t+1, such as the [3,1,3]_2 code with M=2 codewords repeating a single bit three times, also attain the Hamming bound for t=1, satisfying 2^3 / (1 + 3) = 8 / 4 = 2. However, these trivial perfect codes exhibit poor information rate R = 1/n, limiting their practical utility despite exact adherence to the bound. In contrast, Hadamard codes of length n=2^r, dimension k=r, and minimum distance d=2^{r-1} (relative distance \delta=1/2) lie well below the Hamming bound for finite parameters, illustrating an extreme point in the rate-distance trade-off with low rate but high relative distance; they are not perfect. To illustrate how binary codes of minimum distance d=3 (t=1) compare to the Hamming bound for small lengths, the following table lists the optimal size A(n,3;2) alongside the bound \lfloor 2^n / (1 + n) \rfloor:
n    Hamming bound    A(n,3;2)
5    5                4
6    9                8
7    16               16
8    28               20
9    51               40
10   93               72
Here, equality holds only at n=7 (the [7,4,3] Hamming code), while other lengths show strict inequality, highlighting the bound's tightness in specific cases.
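The bound column of the table can be reproduced in a few lines of Python (the A(n,3;2) values are those quoted above, hard-coded for comparison):

```python
# Recompute the "Hamming bound" column: floor(2^n / (1 + n)) for
# binary codes with d = 3, i.e., t = 1.
known_A = {5: 4, 6: 8, 7: 16, 8: 20, 9: 40, 10: 72}  # values from the table
for n in range(5, 11):
    bound = 2**n // (1 + n)
    flag = "perfect" if bound == known_A[n] else ""
    print(f"n={n:2d}  bound={bound:3d}  A(n,3;2)={known_A[n]:3d}  {flag}")
```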

Beyond the Hamming Bound

The Hamming bound serves as an upper limit on the size of error-correcting codes but exhibits significant looseness, particularly when the relative error-correcting capability t/n is large, corresponding to a high relative minimum distance \delta = d/n. For binary codes with \delta > 1/2, the bound still permits positive rates, whereas more stringent upper bounds, such as the Plotkin bound, demonstrate that no such codes exist with nonzero rate. This looseness arises because the sphere-packing argument underlying the Hamming bound overestimates the code size by not accounting for overlaps or inefficiencies in high-distance regimes.

In contrast, the Gilbert-Varshamov bound provides a complementary lower bound on code size, guaranteeing the existence of codes that outperform pessimistic estimates derived solely from upper bounds like Hamming. Specifically, for q-ary codes of length n and minimum distance d, the bound states that A_q(n, d) \geq q^n / \sum_{j=0}^{d-1} \binom{n}{j} (q-1)^j, ensuring asymptotically good codes with rate at least 1 - h_q(\delta) + o(1), where h_q is the q-ary entropy function. This lower bound, originally established by Gilbert in 1952 and independently by Varshamov in 1957, highlights the gap between achievable code performance and the Hamming upper bound of 1 - h_q(\delta/2) + o(1).

The Hamming bound extends naturally to q-ary alphabets and applies to non-linear codes without modification, using the q-ary sphere volume V_q(n, t) = \sum_{j=0}^t \binom{n}{j} (q-1)^j to limit the code size to at most q^n / V_q(n, t). In the asymptotic regime with fixed \delta = d/n, random codes achieve rates approaching the Gilbert-Varshamov lower bound, while the Hamming bound proves loose compared to refined upper bounds like the Elias-Bassalygo or McEliece-Rodemich-Rumsey-Welch (MRRW) bounds. Random codes thus demonstrate that achievable constructions come far closer to the true optimum than the Hamming bound alone would suggest for moderate \delta.

In modern contexts, such as quantum error-correcting codes and low-density parity-check (LDPC) codes, the Hamming bound's limitations are evident, prompting tighter analyses. For quantum stabilizer codes, the quantum Hamming bound 2^{n-k} \geq \sum_{j=0}^t \binom{n}{j} 3^j applies to non-degenerate codes but can be violated by degenerate codes, where multiple errors act identically on the code space, allowing higher rates. Quantum LDPC codes, with constant-weight parity checks, approach the quantum hashing bound (the random-coding analog for Pauli channels) while exceeding classical Hamming predictions in overhead efficiency. Similarly, linear programming (LP) bounds, pioneered by McEliece, Rodemich, Rumsey, and Welch in 1977, refine the Hamming bound by optimizing over association schemes and orthogonal polynomials, yielding stricter upper bounds on A_q(n, d) for binary and q-ary codes in intermediate distance regimes.

Open problems surrounding the optimal size A(n, d; q) persist, with computational advances as of 2025 extending known bounds through exhaustive searches and algebraic constructions to lengths of several hundred (e.g., n = 256). Recent results improve upper bounds for linear codes in high-distance regimes, such as d = n/2 - \Theta(\sqrt{n}), while lower bounds from explicit constructions like folded Reed-Solomon codes narrow the gaps for list-decodable parameters. These efforts underscore the ongoing challenge of determining exact values or tighter asymptotics for A(n, d; q), particularly for non-binary q and moderate n.
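The asymptotic gap described above can be tabulated directly. This sketch (binary case only, with sample values of \delta) compares the Gilbert-Varshamov lower bound 1 - h(\delta) with the Hamming upper bound 1 - h(\delta/2):

```python
from math import log2

def h2(p):
    """Binary entropy function h(p), with h(0) = h(1) = 0."""
    return -p * log2(p) - (1 - p) * log2(1 - p) if 0 < p < 1 else 0.0

# The true optimal rate lies between these two curves; MRRW and
# Elias-Bassalygo tighten the upper side below 1 - h(delta/2).
for delta in [0.1, 0.2, 0.3, 0.4]:
    print(f"delta={delta}: GV rate >= {1 - h2(delta):.3f}, "
          f"Hamming rate <= {1 - h2(delta / 2):.3f}")
```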
