
Cyclic code

A cyclic code is a subclass of linear block codes, defined over a finite field \mathbb{F}_q (where q is a prime power) with block length n, such that any cyclic shift of a codeword is also a codeword. These codes are particularly powerful for error detection and correction in digital communications and data storage due to their algebraic structure, which allows representation as ideals in the polynomial quotient ring \mathbb{F}_q[x]/(x^n - 1). A cyclic code is generated by a monic generator polynomial g(x) of degree r = n - k (where k is the code dimension) that divides x^n - 1, ensuring the code consists of all multiples of g(x) modulo x^n - 1. This structure enables efficient encoding via linear feedback shift registers (LFSRs) and systematic forms in which information bits precede parity bits. Key properties include invariance under cyclic shifts, the ability to detect single errors and bursts of adjacent errors, and decoding methods based on syndrome computation using check polynomials.

Among the earliest practical error-correcting codes, cyclic codes were recognized for their rich algebraic framework by E. Prange in the 1950s, facilitating hardware implementation with shift registers. Notable subclasses include BCH codes for correcting multiple random errors, Reed-Solomon codes for burst error correction (especially when n \leq q), and cyclic redundancy check (CRC) codes for error detection in protocols like Ethernet. These codes have a minimum distance determined by the roots of g(x), with the error-correcting capability often bounded below via the BCH bound on the designed distance. Their versatility extends to applications in storage media and satellite communications.

Fundamentals

Definition

A cyclic code is a subclass of linear block codes defined over the finite field \mathbb{F}_q (also denoted GF(q)), where q is a prime power. Specifically, an (n, k) cyclic code C of length n and dimension k is a k-dimensional subspace of \mathbb{F}_q^n such that if \mathbf{c} = (c_0, c_1, \dots, c_{n-1}) is a codeword, then the right cyclic shift \mathbf{c}' = (c_{n-1}, c_0, c_1, \dots, c_{n-2}) is also in C. This property holds for any number of cyclic shifts, making the code closed under cyclic permutations.

Codewords of a cyclic code are conveniently represented as polynomials over \mathbb{F}_q. Each codeword \mathbf{c} corresponds to a polynomial c(x) = c_0 + c_1 x + \dots + c_{n-1} x^{n-1} of degree less than n, and the cyclic shift operation corresponds to multiplication by x modulo x^n - 1. Thus, the set of codeword polynomials forms an ideal in the quotient ring R = \mathbb{F}_q[x]/(x^n - 1), the quotient of the polynomial ring \mathbb{F}_q[x] by the ideal generated by x^n - 1. This algebraic structure enables efficient encoding and decoding algorithms.

Every nonzero cyclic code has a unique monic generator polynomial g(x) of degree n - k that divides x^n - 1, and the code consists of all polynomials in R that are multiples of g(x). The generator matrix G of the code can be expressed in systematic (standard) form as a k \times n matrix [I_k \mid P], where I_k is the k \times k identity matrix and P is a k \times (n - k) submatrix derived from g(x). If g(x) = x^{n-k} + g_{n-k-1} x^{n-k-1} + \dots + g_1 x + g_0, the rows of P are determined (up to sign) by the remainders of x^{n-k+i} modulo g(x) for i = 0, \dots, k-1, ensuring that every row of G represents a codeword. The parity-check polynomial is defined as h(x) = (x^n - 1)/g(x), a polynomial of degree k that also divides x^n - 1; the corresponding parity-check matrix H has rows that are cyclic shifts of the (reversed) coefficients of h(x). In syndrome computation for error detection, a received vector \mathbf{r} with polynomial r(x) is checked by computing the syndrome s(x) = r(x) \bmod g(x); if s(x) = 0, then \mathbf{r} is a codeword, since r(x) is divisible by g(x) (equivalently, r(\alpha) = 0 for all roots \alpha of g(x)). This leverages the dual ideal generated by h(x) in R.
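The encoding and syndrome-check steps reduce to polynomial division over \mathbb{F}_2. A minimal sketch, assuming the illustrative (7, 4) code with generator g(x) = 1 + x + x^3 and polynomials stored as Python integers (bit i is the coefficient of x^i):

```python
# Systematic encoding appends the remainder of m(x) x^{n-k} modulo g(x);
# a received word is a codeword exactly when its remainder modulo g(x) is zero.
def gf2_mod(dividend, divisor):
    # Polynomial remainder over GF(2); bit i of an int is the coefficient of x^i.
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

n, k = 7, 4
g = 0b1011                          # g(x) = 1 + x + x^3 (illustrative choice)

def encode(msg):                    # msg: an int holding the k information bits
    shifted = msg << (n - k)        # m(x) x^{n-k}
    return shifted ^ gf2_mod(shifted, g)

def syndrome(word):
    return gf2_mod(word, g)

c = encode(0b1011)
assert syndrome(c) == 0             # valid codeword passes the check
assert syndrome(c ^ (1 << 2)) != 0  # a single flipped bit is detected
print(format(c, "07b"))
```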

Algebraic Structure

Cyclic codes over a finite field \mathbb{F}_q (also denoted GF(q)) of length n are precisely the ideals of the quotient ring R = \mathbb{F}_q[x]/(x^n - 1). Since R is a principal ideal ring, each such ideal C is generated by a unique monic polynomial g(x) \in \mathbb{F}_q[x] of degree r = n - k, where k = \dim C is the dimension of the code. The codewords are all multiples of g(x) in R, i.e., C = \langle g(x) \rangle = \{ p(x) g(x) \bmod (x^n - 1) \mid \deg p(x) < k \}, and g(x) must divide x^n - 1 in \mathbb{F}_q[x].

The polynomial x^n - 1 factors uniquely into a product of distinct irreducible polynomials over \mathbb{F}_q, provided that \gcd(n, q) = 1 (ensuring no repeated roots and that R is semisimple). These irreducible factors are the minimal polynomials of the primitive d-th roots of unity for divisors d of n, grouped according to the q-cyclotomic cosets modulo n. Any generator polynomial g(x) of a cyclic code is a product of a subset of these minimal polynomials, which determines the roots (and thus the parity-check matrix) of the code.

Each cyclic code admits a unique idempotent generator e(x) \in R satisfying e(x)^2 \equiv e(x) \pmod{x^n - 1} and \deg e(x) < n, which generates the ideal as C = e(x) R. This idempotent is the unique element of C that acts as a multiplicative identity within the code, and it exists under the semisimplicity condition \gcd(n, q) = 1. The idempotent provides an alternative generator to g(x), useful for decomposing codes into direct sums of minimal ideals corresponding to primitive idempotents.

The dual code C^\perp of a cyclic code C is also cyclic. If g(x) generates C and h(x) = (x^n - 1)/g(x) is the corresponding parity-check polynomial, then C^\perp is generated by the reciprocal (reversed) polynomial \tilde{h}(x) = h_0^{-1} x^{\deg h} h(x^{-1}), normalized to be monic, which divides x^n - 1. This reciprocity preserves the cyclic structure and relates the generator of the dual directly to the parity-check polynomial of the original code.

The dimension of a cyclic code satisfies k = n - \deg g(x), as the degree of the generator determines the number of parity-check symbols. Cyclic codes are non-catastrophic (i.e., they admit faithful shift-register encoding without error propagation) precisely when g(x) divides x^n - 1 and \gcd(g(x), h(x)) = 1, a condition inherently met by the definition of the generator polynomial in the principal ideal structure. This ensures the encoding process is invertible and that the code avoids degenerate error patterns in practical implementations.
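The factor structure of x^n - 1 can be read off from the q-cyclotomic cosets modulo n without constructing the extension field explicitly. A short sketch (plain Python, with n = 15 and q = 2 chosen purely for illustration) lists the cosets, whose sizes are the degrees of the irreducible factors, and counts the cyclic codes of that length as subsets of factors:

```python
# For gcd(n, q) = 1 the irreducible factors of x^n - 1 over F_q correspond to the
# q-cyclotomic cosets modulo n; every cyclic code's generator is a product of a subset.
def cyclotomic_cosets(n, q):
    cosets, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        coset, j = [], i
        while j not in coset:
            coset.append(j)
            seen.add(j)
            j = (j * q) % n
        cosets.append(coset)
    return cosets

n, q = 15, 2
cosets = cyclotomic_cosets(n, q)
print("cosets:", cosets)
print("irreducible factor degrees of x^15 - 1 over F_2:", sorted(len(c) for c in cosets))
print("number of distinct binary cyclic codes of length 15:", 2 ** len(cosets))
```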

Examples

Trivial Examples

The trivial cyclic codes provide simple illustrations of the cyclic structure and basic properties without involving complex error-correcting capabilities. The most basic example is the entire space \mathbb{F}_q^n, which forms a cyclic code of length n over the finite field \mathbb{F}_q. This code has dimension n and generator polynomial g(x) = 1, meaning every possible vector is a codeword. Since cyclic shifts of any vector remain within the space, the code satisfies the cyclic property. The minimum distance of this code is d = 1, since vectors of Hamming weight 1 are codewords.

Another fundamental example over \mathbb{F}_2 is the even-weight code, also known as the even-parity code, which consists of all binary vectors of length n with even Hamming weight. This is a cyclic code with dimension n - 1 and generator polynomial g(x) = x + 1. The code is cyclic because a cyclic shift preserves the parity (even number of 1s) of any codeword, as the shift merely rearranges the positions without altering the total count. The minimum distance is d = 2, since the lowest-weight nonzero codewords have exactly two 1s, so the code can detect but not correct single errors.

The binary repetition code of length n is the cyclic code comprising the all-zero vector and the all-one vector, with dimension 1 and generator polynomial g(x) = 1 + x + \cdots + x^{n-1}. It is cyclic because cyclic shifts of the all-zero codeword remain all-zero, and shifts of the all-one codeword remain all-one. This code achieves the maximum possible minimum distance d = n for its length and dimension, allowing correction of up to \lfloor (n-1)/2 \rfloor errors.
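A quick enumeration makes these descriptions concrete. The sketch below (plain Python, length 5 chosen arbitrarily) generates the multiples of each generator polynomial modulo x^n - 1 over \mathbb{F}_2 and confirms they coincide with the even-weight code and the repetition code, respectively:

```python
from itertools import product

n = 5

def multiples(g_bits):
    # All products m(x) g(x) mod (x^n - 1) over GF(2), for every message m of degree < n.
    code = set()
    for m in product([0, 1], repeat=n):
        c = [0] * n
        for i, mi in enumerate(m):
            if mi:
                for j, gj in enumerate(g_bits):
                    if gj:
                        c[(i + j) % n] ^= 1
        code.add(tuple(c))
    return code

even_weight = multiples([1, 1, 0, 0, 0])                # g(x) = 1 + x
assert even_weight == {v for v in product([0, 1], repeat=n) if sum(v) % 2 == 0}

repetition = multiples([1, 1, 1, 1, 1])                 # g(x) = 1 + x + x^2 + x^3 + x^4
assert repetition == {(0,) * n, (1,) * n}
print(len(even_weight), "even-weight codewords;", len(repetition), "repetition codewords")
```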

Reed-Solomon Codes

Reed-Solomon codes are a prominent subclass of non-binary cyclic codes defined over finite fields, renowned for their optimal error-correcting capabilities in practical applications. Introduced in 1960 by Irving S. Reed and Gustave Solomon, these codes construct codewords as evaluations of polynomials of degree less than k at n distinct points in the finite field \mathbb{F}_q, where n \leq q and the evaluation points are often chosen as the powers of a primitive element \alpha \in \mathbb{F}_q. When the evaluation points are the n = q - 1 powers of \alpha taken in order, this evaluation-based construction yields a cyclic code, with generator polynomial g(x) formed as the least common multiple of the minimal polynomials of \alpha^b, \alpha^{b+1}, \dots, \alpha^{b+d-2} over \mathbb{F}_q, where b is a starting exponent and d is the designed minimum distance.

A defining feature of Reed-Solomon codes is their maximum distance separable (MDS) property: they achieve the Singleton bound with minimum distance d = n - k + 1. This optimality arises from the Vandermonde structure of the parity-check matrix, in which any n - k columns are linearly independent, guaranteeing that the code can correct up to \lfloor (d-1)/2 \rfloor errors or detect up to d - 1 errors. Since the roots \alpha^{b}, \alpha^{b+1}, \dots, \alpha^{b+d-2} already lie in \mathbb{F}_q, the generator polynomial takes the explicit form g(x) = \prod_{i=0}^{d-2} (x - \alpha^{b+i}), and these d - 1 consecutive roots directly enforce the distance property.

Encoding in Reed-Solomon codes can be viewed as polynomial evaluation: a message of k symbols corresponds to a polynomial m(x) of degree less than k, and the codeword is the vector of its evaluations at the n points; it is often realized systematically by appending parity symbols obtained from the remainder of m(x) x^{n-k} modulo g(x). Decoding leverages syndrome computation, where the received word is evaluated at powers of \alpha to form syndromes, enabling error location and correction through methods like the Berlekamp-Massey algorithm for finding the error locator polynomial. In practice, Reed-Solomon codes have been widely adopted for reliable data storage and transmission, such as in compact disc (CD) systems, where they correct errors from scratches and defects in the cross-interleaved Reed-Solomon code (CIRC) configuration, and in QR codes, where they provide error correction levels tolerating up to about 30% data loss as per the ISO standard.
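The following sketch builds a small Reed-Solomon code over the prime field \mathbb{F}_7 (an illustrative choice: n = 6, k = 4, designed distance d = 3, primitive element \alpha = 3, starting exponent b = 1), encodes a message systematically, and checks that the codeword vanishes at the consecutive roots \alpha and \alpha^2:

```python
# Coefficient lists are lowest degree first; all arithmetic is modulo p = 7.
p, n, k, alpha = 7, 6, 4, 3

def polydiv_mod(num, den):
    # Remainder of num(x) by the monic polynomial den(x) over F_p.
    r = num[:]
    for i in range(len(r) - 1, len(den) - 2, -1):
        coef = r[i] % p
        if coef:
            for j, dj in enumerate(den):
                r[i - (len(den) - 1) + j] = (r[i - (len(den) - 1) + j] - coef * dj) % p
    return [x % p for x in r[: len(den) - 1]]

g = [6, 2, 1]                       # g(x) = (x - 3)(x - 2) = x^2 + 2x + 6 over F_7

def encode(msg):                    # msg: k symbols, lowest degree first
    shifted = [0, 0] + msg          # m(x) x^{d-1}
    rem = polydiv_mod(shifted, g)
    return [(-rem[i]) % p for i in range(2)] + msg   # parity symbols, then message

def evaluate(poly, x):
    return sum(ci * pow(x, i, p) for i, ci in enumerate(poly)) % p

c = encode([1, 5, 0, 3])
assert evaluate(c, alpha) == 0 and evaluate(c, pow(alpha, 2, p)) == 0
print("codeword:", c)
```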

BCH Codes

BCH codes form a prominent class of cyclic error-correcting codes capable of correcting multiple random errors, constructed using algebraic constraints on the roots of the generator polynomial. Independently developed by Alexis Hocquenghem in 1959 and by Raj Chandra Bose and Dwijendra Kumar Ray-Chaudhuri in 1960, these codes generalize earlier single-error-correcting cyclic codes by imposing conditions on consecutive powers of a primitive element in a finite field extension.

The standard Bose-Chaudhuri-Hocquenghem (BCH) construction defines a cyclic code of length n = q^m - 1 over the finite field \mathbb{F}_q, where m \geq 1 is the degree of the extension \mathbb{F}_{q^m} / \mathbb{F}_q and \alpha is a primitive element of \mathbb{F}_{q^m}. The generator polynomial g(x) is the least common multiple of the minimal polynomials of \alpha, \alpha^2, \dots, \alpha^{\delta-1} over \mathbb{F}_q, denoted g(x) = \mathrm{lcm}\left[ m_1(x), m_2(x), \dots, m_{\delta-1}(x) \right], where m_i(x) is the minimal polynomial of \alpha^i. This choice ensures that the code has a designed distance \delta, which provides a lower bound on the actual minimum distance, d \geq \delta, allowing correction of up to t = \lfloor (\delta - 1)/2 \rfloor errors. The dimension k of the code satisfies k \geq n - m(\delta - 1), since the degree of g(x) is at most m(\delta - 1). Primitive BCH codes specifically employ a primitive element \alpha, yielding full-length codes with n = q^m - 1; in contrast, general BCH codes may use non-primitive elements for shorter lengths and are classified as narrow-sense (roots starting from \alpha^1) or wide-sense (starting from \alpha^b for b > 1).

Binary BCH codes represent a key special case with q = 2, where the field is \mathbb{F}_2 and lengths are of the form n = 2^m - 1. In this setting, the designed distance \delta is typically chosen to be odd, since the conjugate roots \alpha^{2i} are automatically included and impose no additional constraints; even minimum distances are obtained through extended versions, such as adding an overall parity-check bit to the primitive code, which increases the minimum distance by one. These binary variants maintain the bound k \geq n - mt (with \delta = 2t + 1) and are particularly efficient to implement because all arithmetic takes place over \mathbb{F}_2. BCH codes, especially binary ones, continue to play a role in modern communications and storage media, where their multiple-error-correction capabilities enhance reliability in noisy environments.
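The sketch below carries out this construction for the binary double-error-correcting BCH code of length 15 (an illustrative choice): it builds \mathbb{F}_{16} from the primitive polynomial x^4 + x + 1, forms the minimal polynomials of \alpha and \alpha^3 as products over cyclotomic cosets, and multiplies them to obtain the generator polynomial g(x) = x^8 + x^7 + x^6 + x^4 + 1. Coefficient lists are lowest degree first.

```python
# GF(16) arithmetic via exp/log tables for the primitive polynomial x^4 + x + 1.
def gf16_tables():
    exp, log, x = [0] * 30, [0] * 16, 1
    for i in range(15):
        exp[i], log[x] = x, i
        x <<= 1
        if x & 0x10:
            x ^= 0x13                  # reduce modulo x^4 + x + 1
    for i in range(15, 30):
        exp[i] = exp[i - 15]
    return exp, log

EXP, LOG = gf16_tables()

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def poly_mul(p, q):
    # Polynomials over GF(16) as coefficient lists, lowest degree first; addition is XOR.
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gmul(a, b)
    return r

def minimal_poly(i, n=15, q=2):
    # Minimal polynomial of alpha^i over GF(2): product of (x - alpha^j) over the coset of i.
    coset, j = [], i % n
    while j not in coset:
        coset.append(j)
        j = (j * q) % n
    m = [1]
    for j in coset:
        m = poly_mul(m, [EXP[j], 1])
    return m                            # coefficients end up in {0, 1}

m1, m3 = minimal_poly(1), minimal_poly(3)
g = poly_mul(m1, m3)                    # generator of the (15, 7) BCH code, t = 2
print(m1, m3, g)                        # g = 1 + x^4 + x^6 + x^7 + x^8
```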

Cyclic Redundancy Check (CRC) Codes

Cyclic redundancy check (CRC) codes are a class of cyclic codes used primarily for error detection rather than correction. Introduced by W. Wesley Peterson in 1961, they operate over \mathbb{F}_2 with a fixed generator polynomial g(x) of degree r, which divides x^n - 1 for code length n. The encoding process treats the message as a polynomial m(x) of degree less than n - r, shifts it by x^r, and computes the remainder when divided by g(x); the codeword is m(x) x^r minus this remainder (addition and subtraction coincide over \mathbb{F}_2), ensuring the entire codeword is divisible by g(x). This structure allows detection of all single-bit errors, all errors affecting an odd number of bits (when x + 1 divides g(x)), and all burst errors of length at most r. CRC codes are systematic and efficiently implemented using linear feedback shift registers.

CRC codes are ubiquitous in communication protocols and storage systems due to their simplicity and effectiveness. Common polynomials include CRC-16 (e.g., g(x) = x^{16} + x^{15} + x^2 + 1), used in USB, and CRC-32 (g(x) = x^{32} + x^{26} + x^{23} + \dots + 1), used in Ethernet and ZIP files, keeping undetected error probabilities below 10^{-5} for typical frame sizes.
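The division step is the entire computation. The sketch below implements the reflected CRC-32 variant used by Ethernet and ZIP bit by bit and checks it against Python's built-in zlib.crc32; a table-driven or LFSR implementation produces the same values.

```python
import zlib

def crc32_bitwise(data: bytes) -> int:
    # Reflected CRC-32 with polynomial 0xEDB88320, initial value and final XOR 0xFFFFFFFF.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xEDB88320 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

msg = b"cyclic redundancy check"
assert crc32_bitwise(msg) == zlib.crc32(msg)
print(hex(crc32_bitwise(msg)))
```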

Error Correction Capabilities

Single Error Correction with Hamming Codes

Hamming codes represent a fundamental family of cyclic codes designed for single-error correction. These codes have length n = 2^m - 1, where m is a positive integer, dimension k = n - m, and minimum distance d = 3. The generator polynomial g(x) is a primitive polynomial of degree m over \mathrm{GF}(2), namely the minimal polynomial of a primitive element \alpha of the extension field \mathrm{GF}(2^m). This construction ensures that the codewords are precisely the polynomials in \mathrm{GF}(2)[x]/(x^n - 1) that are divisible by g(x), with roots \alpha, \alpha^2, \alpha^4, \dots, \alpha^{2^{m-1}}.

Syndrome decoding for Hamming codes leverages the roots of g(x). For a received word corresponding to polynomial r(x), the syndrome components are computed as s(\beta) = r(\beta) for each root \beta of g(x). If a single error occurs at position i (so the error polynomial is x^i), then s(\beta) = \beta^{i} for each root \beta; under the vector space isomorphism \mathrm{GF}(2^m) \cong \mathrm{GF}(2)^m, the syndrome value \alpha^i matches the column of the parity-check matrix at position i and thus uniquely identifies the error location. This algebraic approach enables efficient correction of any single error.

Hamming codes are perfect codes, meaning the spheres of radius 1 centered at the codewords are disjoint and exactly cover the entire space \mathrm{GF}(2)^n. The number of codewords 2^k satisfies 2^k \sum_{l=0}^{1} \binom{n}{l} = 2^n, as the volume of each sphere is 1 + n = 2^m, and there are 2^{n-m} such spheres filling the 2^n vectors without overlap or gap. An extended Hamming code is obtained by appending an overall parity bit to each codeword of the binary Hamming code, resulting in a code of length 2^m, dimension 2^m - m - 1, and minimum distance 4. This extension preserves single-error correction while enabling detection of double errors.

The parity-check matrix H of the Hamming code is an m \times n matrix over \mathrm{GF}(2) whose columns are the distinct nonzero vectors of \mathrm{GF}(2)^m; in the conventional ordering these are the binary representations of 1, 2, \dots, n, while in the cyclic representation column j is the vector representation of \alpha^j with respect to a \mathrm{GF}(2)-basis of \mathrm{GF}(2^m), for j = 0, 1, \dots, n-1. This structure aligns with the cyclic representation and facilitates syndrome computation as H \mathbf{r}^T. Binary Hamming codes correspond to the case t = 1 of BCH codes, serving as a foundational example of cyclic error-correcting codes.
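A minimal sketch of this decoding procedure for the (7, 4) code with g(x) = x^3 + x + 1 (bit ordering and test values are illustrative): the syndrome r(\alpha), evaluated in \mathrm{GF}(8) built from the same primitive polynomial, directly reveals the position of a single flipped bit.

```python
def gf8_powers():
    powers, x = [], 1
    for _ in range(7):
        powers.append(x)
        x <<= 1
        if x & 0b1000:
            x ^= 0b1011              # reduce modulo x^3 + x + 1
    return powers

EXP = gf8_powers()                   # EXP[i] = alpha^i as a 3-bit vector

def syndrome(word):                  # r(alpha) = XOR of alpha^i over positions holding a 1
    s = 0
    for i, bit in enumerate(word):
        if bit:
            s ^= EXP[i]
    return s

codeword = [1, 1, 0, 1, 0, 0, 0]     # g(x) = 1 + x + x^3 is itself a codeword
received = codeword[:]
received[5] ^= 1                     # inject a single-bit error at position 5

s = syndrome(received)
if s:
    pos = EXP.index(s)               # s = alpha^pos locates the error
    received[pos] ^= 1
    print("corrected error at position", pos)
assert received == codeword
```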

Burst Error Correction

A burst error is defined as a sequence of consecutive errors confined to a window of b positions within a codeword of length n. The Reiger bound provides a fundamental limit on the burst error-correcting capability of linear block codes, stating that to correct all burst errors of length up to b, the redundancy r = n - k must satisfy r \geq 2b. Codes achieving this bound are considered optimal for burst correction.

Fire codes are a prominent class of cyclic codes designed specifically for single burst error correction, constructed with generator polynomial g(x) = (x^{2b-1} - 1)\, p(x), where p(x) is an irreducible polynomial over GF(2) of degree m \geq b whose period \rho does not divide 2b - 1; the code length is n = \mathrm{lcm}(2b - 1, \rho). This structure ensures the code can correct any single burst of length up to b, and more general Fire constructions trade some correction capability for the simultaneous detection of longer bursts. Fire codes are particularly efficient for channels prone to isolated bursts, such as early data transmission systems, offering high rates for large n.

Decoding of Fire codes typically employs an error-trapping method using a feedback shift register to isolate the burst location, followed by syndrome computation to correct the errors within the trapped window. This approach, originally devised by Peterson and refined by Chien, processes the received word sequentially, leveraging the cyclic nature to shift errors into a known short syndrome span. Alternatively, the Berlekamp-Massey algorithm can be adapted to locate the burst by finding the shortest linear recurrence matching the syndrome sequence, enabling efficient algebraic decoding for practical implementations.

In practical applications, particularly post-1980s storage media like compact discs, interleaved cyclic codes, such as cross-interleaved Reed-Solomon codes, enhance burst correction by distributing errors across multiple codewords, allowing correction of long bursts (up to 3,500 bits) that exceed single-code capabilities. This interleaving transforms consecutive errors into dispersed patterns correctable by component cyclic codes, significantly improving reliability in magnetic and optical recording systems.
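Interleaving is simple enough to show directly. The sketch below (depth and word length are arbitrary illustrative values, not taken from any standard) interleaves four words, injects a burst of four consecutive channel errors, and confirms that after deinterleaving each word sees at most one error, which a single-error-correcting cyclic code could then repair:

```python
def interleave(rows):
    depth, length = len(rows), len(rows[0])
    return [rows[i % depth][i // depth] for i in range(depth * length)]

def deinterleave(stream, depth, length):
    return [[stream[j * depth + i] for j in range(length)] for i in range(depth)]

# Four length-7 words standing in for codewords of a single-error-correcting code.
codewords = [[(i + j) % 2 for j in range(7)] for i in range(4)]
tx = interleave(codewords)

for pos in range(10, 14):            # a burst hitting 4 consecutive transmitted symbols
    tx[pos] ^= 1

rx = deinterleave(tx, depth=4, length=7)
errors_per_codeword = [sum(a != b for a, b in zip(r, c)) for r, c in zip(rx, codewords)]
print(errors_per_codeword)           # each entry is 0 or 1: the burst has been dispersed
```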

Error Correction Bounds

The BCH bound provides a fundamental lower bound on the minimum distance of a cyclic code based on the consecutive roots of its generator polynomial. Specifically, for a cyclic code of length n over a finite field \mathbb{F}_q whose generator g(x) has \delta - 1 consecutive roots \alpha^b, \alpha^{b+1}, \dots, \alpha^{b+\delta-2} in an extension field, where \alpha is a primitive n-th root of unity, the minimum distance satisfies d \geq \delta. A proof sketch relies on the parity-check matrix H of the code, which includes \delta - 1 rows corresponding to the syndromes evaluated at these roots. The square submatrix formed by any \delta - 1 columns of these rows is, up to nonzero monomial factors, a Vandermonde matrix whose determinant is a product of differences of distinct powers of \alpha and hence nonzero, ensuring full rank. Thus, no nonzero codeword of weight less than \delta can satisfy H \mathbf{c}^T = \mathbf{0}, implying d \geq \delta.

This bound is achieved by Reed-Solomon codes, where the minimum distance equals n - k + 1 and matches the designed distance \delta obtained from consecutive root selection. Similarly, primitive BCH codes are constructed so that their designed distance is guaranteed by the bound, and in many parameter regimes their true minimum distance meets it exactly.

Generalizations of the BCH bound extend it to non-consecutive roots under additional constraints. If the defining set includes \delta + a(\gamma - 1) roots comprising \delta - 1 consecutive roots together with a further blocks of \gamma - 1 roots each, spaced by steps coprime to n (i.e., \gcd(\gamma, n) = 1), then d \geq \delta + a(\gamma - 1). This allows tighter bounds for codes with structured but gapped root sets. The Roos bound further refines these by incorporating paired root constraints: for a defining set with a core of \delta - 1 consecutive roots, augmented by a and b additional roots satisfying certain coprimality and spacing conditions, the minimum distance satisfies d \geq \delta + \min(a, b)(\gamma - 1).

Recent computational verifications in the 2020s have confirmed that numerous optimal cyclic codes achieve or exceed these bounds for specific lengths and dimensions, as documented in updated tables of best-known linear codes. For instance, exhaustive searches over binary cyclic codes of odd length up to 125 have shown that the best-known minimum distance is attained except in rare cases.
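As a concrete check of the bound, the sketch below exhaustively enumerates the 2^7 codewords of the binary (15, 7) BCH code, whose generator x^8 + x^7 + x^6 + x^4 + 1 has the four consecutive roots \alpha, \alpha^2, \alpha^3, \alpha^4 (designed distance \delta = 5), and verifies that the true minimum distance is indeed 5:

```python
def gf2_polymul(a, b):
    # Carry-less multiplication of GF(2) polynomials stored as ints (bit i = coeff of x^i).
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

n, k = 15, 7
g = 0b111010001                       # 1 + x^4 + x^6 + x^7 + x^8

min_wt = n + 1
for m in range(1, 2 ** k):            # every nonzero message polynomial of degree < k
    c = gf2_polymul(m, g)             # degree < n, so no reduction mod x^15 - 1 is needed
    min_wt = min(min_wt, bin(c).count("1"))

print("minimum distance:", min_wt)    # 5, matching the designed distance delta = 5
```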

Spectral Perspective

Fourier Transform over Finite Fields

The discrete Fourier transform (DFT) over finite fields provides a powerful spectral analysis tool for cyclic codes defined over \mathbb{F}_q, where q is a prime power, by transforming codewords from the time domain to the frequency domain. For a cyclic code of length n over \mathbb{F}_q, assuming \gcd(n, q) = 1 so that n is invertible in \mathbb{F}_q, the DFT of a codeword c = (c_0, c_1, \dots, c_{n-1}) is the vector \hat{c} = (\hat{c}(\alpha^0), \hat{c}(\alpha^1), \dots, \hat{c}(\alpha^{n-1})), where \alpha is a primitive n-th root of unity in an extension field \mathbb{F}_{q^m} containing such an element (with n \mid q^m - 1), and \hat{c}(\alpha^j) = \sum_{i=0}^{n-1} c_i (\alpha^j)^i. This transform evaluates the associated polynomial c(x) = \sum_{i=0}^{n-1} c_i x^i at the n-th roots of unity, yielding a spectral representation that simplifies the study of code structure and operations. The inverse DFT recovers the original codeword from its spectrum via c_i = n^{-1} \sum_{j=0}^{n-1} \hat{c}(\alpha^j) \alpha^{-ij} for i = 0, \dots, n-1, confirming the transform's bijectivity under the invertibility condition on n.

A key property is the convolution theorem, which states that the DFT of the cyclic convolution of two sequences equals the pointwise product of their DFTs: if e_i = \sum_{k=0}^{n-1} f_k g_{(i-k) \bmod n}, then \hat{e}_j = \hat{f}_j \hat{g}_j. This correspondence underpins efficient encoding and decoding of cyclic codes, as multiplication in the polynomial ring \mathbb{F}_q[x]/(x^n - 1) translates directly to spectral multiplication. In the context of cyclic codes, codewords exhibit zero spectral components at the roots of the generator polynomial g(x); specifically, if \beta is a root of g(x), then \hat{c}(\beta) = 0 for every codeword c(x) divisible by g(x), highlighting how the spectrum encodes the code's defining zero set.

The DFT matrix over finite fields possesses orthogonality properties analogous to the complex case, with the inner product of distinct columns satisfying \sum_{i=0}^{n-1} \alpha^{i(j-k)} = n \delta_{j,k} (where \delta_{j,k} is the Kronecker delta), which yields the Parseval-type relation \sum_i c_i^2 = n^{-1} \sum_j \hat{c}(\alpha^j)\, \hat{c}(\alpha^{-j}) in place of the complex energy identity. This scaled-unitary behavior, in which the inverse transform is n^{-1} times the forward transform with \alpha replaced by \alpha^{-1}, facilitates energy-preserving analyses and bounds on code weights via spectral distributions. These features make the finite-field DFT indispensable for deriving algebraic insights into performance without relying on exhaustive enumeration.
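A small numerical check over a prime field: with p = 17, n = 8, and \alpha = 2 (which has multiplicative order 8 modulo 17), the sketch below verifies the inverse transform and the convolution theorem directly. The field and parameters are illustrative.

```python
p, n, alpha = 17, 8, 2

def dft(c):
    return [sum(c[i] * pow(alpha, i * j, p) for i in range(n)) % p for j in range(n)]

def idft(C):
    n_inv = pow(n, p - 2, p)           # n^{-1} in F_p by Fermat's little theorem
    a_inv = pow(alpha, p - 2, p)
    return [(n_inv * sum(C[j] * pow(a_inv, i * j, p) for j in range(n))) % p
            for i in range(n)]

c = [3, 0, 5, 1, 0, 0, 2, 4]
assert idft(dft(c)) == c               # invertible because gcd(n, p) = 1

# Convolution theorem: cyclic convolution in time equals pointwise product in frequency.
f, g = [1, 2, 0, 0, 3, 0, 0, 0], [5, 0, 0, 1, 0, 0, 0, 2]
conv = [sum(f[k] * g[(i - k) % n] for k in range(n)) % p for i in range(n)]
assert dft(conv) == [(a * b) % p for a, b in zip(dft(f), dft(g))]
print("DFT round-trip and convolution theorem verified over F_17")
```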

Spectral Factorization and Code Properties

In the spectral domain, cyclic codewords exhibit a structured support determined by the generator polynomial. For a cyclic code of length n over a finite field \mathbb{F}_q generated by g(x), the discrete Fourier transform (DFT) \hat{c}(\beta) of any codeword c(x) vanishes at all elements \beta in the zero set Z(g(x)), the set of roots of g(x) in the splitting field. Thus, the spectral support of the code, that is, the frequencies where \hat{c}(\beta) may be nonzero, comprises the complement of Z(g(x)) among the n-th roots of unity. This property allows the code to be viewed as a subspace constrained to specific spectral lines, facilitating analysis and design in the frequency domain.

Syndrome computation for error detection in cyclic codes leverages the DFT directly: for a received word r(x) = c(x) + e(x), the syndrome components are the evaluations \hat{r}(\beta) at the check frequencies, i.e., the roots \beta of g(x), where \hat{c}(\beta) = 0. These syndromes, forming a vector in the dual code's spectral representation, capture the error pattern's frequency content at the parity-check positions without requiring time-domain polynomial division. This approach reduces syndrome calculation to n - k DFT evaluations, where k is the code dimension, and is foundational for algebraic decoding.

The Peterson-Gorenstein-Zierler (PGZ) decoder exploits this spectral framework to correct errors up to the code's designed distance. It constructs an error locator polynomial whose roots correspond to the spectral positions of the error frequencies, using the syndromes to form a matrix whose determinant conditions reveal the number of errors. The decoder solves for the locator coefficients via linear algebra over the syndromes, then finds the roots in the spectral domain to identify error locations, enabling correction by subtracting the error pattern derived from the inverse DFT. This method, applicable to any cyclic code with known designed distance, achieves polynomial-time decoding when the number of errors is small.

Autocorrelation properties of cyclic codes further illuminate their performance in noisy channels through spectral analysis. The autocorrelation function of a codeword, measuring similarity under shifts, transforms to the power spectral density |\hat{c}(\beta)|^2 via the DFT; for ideal codes, this density concentrates on the spectral support, yielding low out-of-phase autocorrelation for sequence subsets like m-sequences derived from cyclic codes. Such analysis quantifies spectral efficiency and interference resistance, with the code's power spectrum determining peak-to-average ratios in modulated transmissions.

For practical implementation of the DFT in cyclic coding when n is composite or not suited to standard fast algorithms (e.g., not a power of 2 or tied to the field characteristic), Bluestein's algorithm provides an efficient alternative. Introduced in 1970, it reformulates the DFT as a linear convolution via a chirp-like transformation, computable using a standard FFT of length approximately 2n, achieving O(n \log n) complexity regardless of the factorization of n. This is particularly relevant for hardware realizations of cyclic decoders over arbitrary lengths.
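The sketch below illustrates the spectral view in the simplest possible case, a single error, which is the degenerate instance of PGZ-style decoding: over \mathbb{F}_{17} with \alpha = 2 and g(x) = (x - \alpha)(x - \alpha^2), codeword spectra vanish at frequencies 1 and 2, and the two syndromes S_1 = \hat{r}(\alpha), S_2 = \hat{r}(\alpha^2) locate and correct one symbol error. The field and parameters are illustrative.

```python
p, n, alpha = 17, 8, 2

def spectrum(v):
    return [sum(v[i] * pow(alpha, i * j, p) for i in range(n)) % p for j in range(n)]

def polymul_mod(a, b):
    # Product of two polynomials (lowest degree first) reduced modulo x^n - 1 over F_p.
    r = [0] * n
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[(i + j) % n] = (r[(i + j) % n] + x * y) % p
    return r

g = [8, 11, 1]                              # g(x) = x^2 - 6x + 8 = (x - 2)(x - 4) mod 17
msg = [3, 1, 4, 1, 5, 9, 0, 0]              # message polynomial of degree < 6
c = polymul_mod(msg, g)
assert spectrum(c)[1] == 0 and spectrum(c)[2] == 0   # zeros at the roots of g

r = c[:]
r[5] = (r[5] + 7) % p                       # single error of value 7 at position 5

S1, S2 = spectrum(r)[1], spectrum(r)[2]     # syndromes = error spectrum at check frequencies
loc = (S2 * pow(S1, p - 2, p)) % p          # alpha^i = S2 / S1 gives the error locator
pos = next(i for i in range(n) if pow(alpha, i, p) == loc)
val = (S1 * pow(pow(alpha, pos, p), p - 2, p)) % p   # e = S1 / alpha^i
r[pos] = (r[pos] - val) % p
assert r == c
print("corrected error of value", val, "at position", pos)
```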

Advanced Bounds

The Hartmann-Tzeng bound refines the minimum distance estimate for cyclic codes by incorporating zeros whose indices form arithmetic progressions in the spectral domain. Specifically, if the defining set of a cyclic code of length n over \mathbb{F}_q includes \delta - 1 consecutive powers of a primitive n-th root \alpha, along with additional zeros at positions \alpha^{i + j b} for j = 1, \dots, s, where \gcd(b, n) = 1, then the minimum distance d satisfies d \geq \delta + s. This spectral formulation leverages the structure of root indices in arithmetic progression to achieve tighter bounds than the standard BCH bound, particularly when the zeros are not strictly consecutive.

The Roos bound further generalizes this approach by considering unions of cosets in the defining set and refining the distance estimate through their intersections. For a cyclic code whose defining set contains elements from multiple cosets of a subgroup of the multiplicative group, the bound exploits the cardinality of intersections between these cosets to yield d \geq \delta + s, where \delta relates to the length of progressions within cosets and s to the number of intersecting cosets. This method provides improved lower bounds for codes with non-consecutive or structured spectral zeros, often surpassing the Hartmann-Tzeng bound in cases involving coset intersections.

Quadratic residue (QR) codes represent a prominent class of binary cyclic codes constructed using spectral zeros based on quadratic residues modulo a prime length p \equiv \pm 1 \pmod{8}. Defined over \mathbb{F}_2 with length p, the generator polynomial g(x) is the product \prod (x - \alpha^i) over all quadratic residues i modulo p, where \alpha is a primitive p-th root of unity in an extension field \mathbb{F}_{2^m} such that p divides 2^m - 1. These codes have dimension (p + 1)/2 and minimum distance d obeying the square-root bound d \geq \sqrt{p}. In the spectral domain, the zeros lie precisely at the powers \alpha^j where j is a quadratic residue modulo p. Notable examples include the binary (7,4,3) Hamming code, which is the QR code for p = 7 with zeros at the quadratic residues 1, 2, 4 modulo 7, and the binary (23,12,7) Golay code, the QR code for p = 23 with zeros at the residues 1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18 modulo 23.

Recent classifications in the 2010s and 2020s have advanced the understanding of QR code optimality, confirming that certain parameters achieve the best-known distances and revealing connections to self-dual structures and weight distributions for larger p.
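The defining zero set of a QR code is easy to compute. The sketch below lists the quadratic residues modulo 23, checks that the set is closed under doubling (so it is a union of 2-cyclotomic cosets and the corresponding generator polynomial has binary coefficients), and recovers the (23, 12) parameters of the Golay code:

```python
p = 23
residues = sorted({pow(a, 2, p) for a in range(1, p)})
print("quadratic residues mod 23:", residues)            # (p - 1)/2 = 11 of them

# Closure under j -> 2j mod p: 2 is itself a residue since p = 23 = -1 mod 8,
# so the residue set is a union of 2-cyclotomic cosets modulo p.
assert all((2 * j) % p in residues for j in residues)

n, deg_g = p, len(residues)
print("code parameters: length", n, "dimension", n - deg_g)   # (23, 12)
```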

Generalizations and Variants

Constacyclic Codes

Constacyclic codes are a natural generalization of cyclic codes in which the shift operation incorporates a nonzero constant multiplier from the finite field. Specifically, a linear code C of length n over the finite field \mathbb{F}_q is \lambda-constacyclic, for \lambda \in \mathbb{F}_q^\times, if for every codeword (c_0, c_1, \dots, c_{n-1}) \in C, the shifted vector (\lambda c_{n-1}, c_0, c_1, \dots, c_{n-2}) also belongs to C. This closure property under the constacyclic shift distinguishes them from standard cyclic codes while preserving the algebraic structure that facilitates efficient encoding and decoding.

In polynomial terms, codewords of a \lambda-constacyclic code correspond to residue classes of polynomials in the quotient ring \mathbb{F}_q[x]/(x^n - \lambda). Each such code is a principal ideal generated by a unique monic polynomial \tilde{g}(x) that divides x^n - \lambda, with the code dimension given by k = n - \deg \tilde{g}(x) and the check polynomial h(x) = (x^n - \lambda)/\tilde{g}(x). This representation mirrors that of cyclic codes but with a twisted modulus, enabling the use of roots of unity adapted to the constant \lambda. When \lambda = 1, the structure reduces precisely to that of a cyclic code. Moreover, if \lambda admits an n-th root \beta \in \mathbb{F}_q (i.e., \beta^n = \lambda), the code can be transformed into an equivalent cyclic code via coordinate scaling by powers of \beta^{-1}, highlighting their close relation. The dual code of a \lambda-constacyclic code is itself \mu-constacyclic for \mu = \lambda^{-1}, so duality preserves the constacyclic structure; in particular, this holds for the negacyclic case \lambda = -1 (assuming the characteristic is not 2), since then \mu = -1.

Regarding error correction, constacyclic codes inherit bounds analogous to those for cyclic codes, such as the generalized BCH bound, which provides a lower estimate on the minimum distance based on the number of consecutive roots of the generator polynomial in an extension field, allowing correction of up to t errors when the designed distance is at least 2t + 1. These capabilities extend to applications resembling convolutional coding, where families of maximum-distance-separable constacyclic codes yield optimal convolutional codes with strong error-correcting performance. Constacyclic codes were formalized in the 1960s as an extension of cyclic codes, with seminal work by Berlekamp introducing the concept, particularly the negacyclic variant (\lambda = -1), to address algebraic and error-correcting needs beyond standard shifts.
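A minimal sketch of the negacyclic case (\lambda = -1): over \mathbb{F}_5 the polynomial x^4 + 1 factors as (x^2 + 2)(x^2 + 3), so g(x) = x^2 + 2 generates a negacyclic code of length 4, and applying the constacyclic shift to any codeword yields another multiple of g(x). The field and parameters are illustrative.

```python
p, n, lam = 5, 4, -1 % 5
g = [2, 0, 1]                                   # g(x) = 2 + x^2, lowest degree first

def mod_g(poly):
    # Remainder of poly(x) modulo the monic polynomial g(x) over F_p.
    r = poly[:]
    for i in range(len(r) - 1, len(g) - 2, -1):
        coef = r[i] % p
        if coef:
            shift = i - (len(g) - 1)
            for j, gj in enumerate(g):
                r[shift + j] = (r[shift + j] - coef * gj) % p
    return [x % p for x in r[: len(g) - 1]]

def encode(msg):                                # codeword c(x) = m(x) g(x), degree < n
    c = [0] * n
    for i, m in enumerate(msg):
        for j, gj in enumerate(g):
            c[i + j] = (c[i + j] + m * gj) % p
    return c

def consta_shift(c):                            # (c0,...,c_{n-1}) -> (lam*c_{n-1}, c0, ...)
    return [(lam * c[-1]) % p] + c[:-1]

for msg in [(1, 0), (3, 2), (4, 4)]:
    shifted = consta_shift(encode(list(msg)))
    assert mod_g(shifted) == [0] * (len(g) - 1) # still a multiple of g(x), so still in the code
print("constacyclic shift closure verified")
```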

Quasi-Cyclic Codes

Quasi-cyclic codes generalize cyclic codes by allowing invariance under shifts by multiple positions at once, forming a broader class of algebraic linear codes useful in constructing more flexible error-correcting schemes. A quasi-cyclic code of length n = m\ell and index \ell > 1 over a finite field \mathbb{F}_q is a linear block code in which, for every codeword \mathbf{c} = (c_1, \dots, c_n), the vector obtained by cyclically permuting the m blocks of \ell symbols each, giving (B_2, B_3, \dots, B_m, B_1) where B_i denotes the i-th block, is also a codeword. This property extends the single-shift invariance of cyclic codes (the case \ell = 1) to block-wise operations, enabling representations as subcodes of concatenated cyclic codes of length m. Algebraically, such codes are precisely the submodules of R^\ell regarded as a module over the ring R = \mathbb{F}_q[x]/(x^m - 1).

The generator matrix of a quasi-cyclic code takes a structured circulant block form, facilitating efficient encoding and analysis. Specifically, the code can be represented as the row space of a block matrix whose blocks are m \times m circulant matrices generated by polynomials in \mathbb{F}_q[x] of degree less than m, often arranged in a Toeplitz or upper-triangular configuration that makes the shift invariance explicit. This matrix representation underscores their role as generalizations of cyclic codes, with systematic encoding realized via polynomial multiplication in the ring R. Distance properties benefit from this structure: the minimum distance d satisfies bounds such as Jensen's bound, which provides a lower estimate based on the minimum distances of the constituent cyclic codes in the concatenated structure. This bound highlights their enhanced error-detection capabilities compared to single cyclic components, making them suitable for applications requiring scalable reliability.

In modern coding applications, quasi-cyclic codes underpin low-density parity-check (LDPC) constructions, particularly quasi-cyclic LDPC (QC-LDPC) codes, which offer low-complexity encoding and decoding due to their circulant parity-check matrices. These codes have been standardized for the 5G enhanced mobile broadband (eMBB) data channels, supporting rate-compatible designs with girth at least 6 to avoid short cycles and enabling efficient hardware implementation via shift-register operations for lifting sizes that are powers of two. Post-2000s developments have further connected quasi-cyclic structures with algebraic-geometric constructions, realizing them as evaluation codes on curves like y^m + x^n = 1 over function fields, where divisors and rational functions yield quasi-cyclic invariance through orbit actions of the curve's automorphism group, enhancing parameters for high-rate, long-length codes in advanced communication systems.
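The block-circulant structure is easy to exhibit for a one-generator example. In the sketch below (index \ell = 2, block size m = 4, with arbitrary illustrative defining polynomials a(x) = 1 + x and b(x) = 1 + x^2 over \mathbb{F}_2), the generator matrix is a row of two circulant blocks; in the module view R^\ell, the quasi-cyclic shift rotates both length-m blocks of a codeword simultaneously, and the code is closed under it.

```python
import itertools

m = 4
a = [1, 1, 0, 0]          # a(x) = 1 + x
b = [1, 0, 1, 0]          # b(x) = 1 + x^2

def circulant(first_row):
    return [first_row[-i:] + first_row[:-i] for i in range(m)]

def vecmat(u, M):          # vector-matrix product over GF(2)
    return [sum(u[i] & M[i][j] for i in range(m)) % 2 for j in range(m)]

A, B = circulant(a), circulant(b)
code = {tuple(vecmat(u, A) + vecmat(u, B))
        for u in itertools.product([0, 1], repeat=m)}

def block_shift(c):
    left, right = list(c[:m]), list(c[m:])
    return tuple([left[-1]] + left[:-1] + [right[-1]] + right[:-1])

# Circulants commute with the cyclic shift, so shifting both blocks keeps us in the code.
assert all(block_shift(c) in code for c in code)
print("code size:", len(code), "- closed under the quasi-cyclic (block) shift")
```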

Negacyclic and Other Variants

Negacyclic codes are the constacyclic codes with scaling factor \lambda = -1, so codewords are invariant under a combined negation and cyclic shift; equivalently, they are the ideals of the quotient ring \mathbb{F}_q[x]/(x^n + 1). This structure facilitates efficient implementations, particularly fast Fourier transform (FFT)-based decoding over finite fields, enabling frequency-domain processing for error correction in negacyclic settings.

Shortened cyclic codes arise from shortening a standard cyclic code to obtain a shorter code while retaining desirable properties; specifically, by selecting only those codewords in which s designated information positions are zero and then deleting those positions. The resulting code is in general no longer cyclic, but it can be encoded and decoded with the same shift-register circuitry as the parent code. The dimension of the shortened code is k' = k - s, where k is the original dimension and s is the number of shortened positions, and the minimum distance satisfies d' \geq d, with d the original minimum distance, ensuring error-correcting capability at least equal to that of the parent code. Projective cyclic codes, constructed as the quotient of a cyclic code by its repetition subcode, yield codes with enhanced geometric interpretations and weight distributions suitable for applications requiring balanced error patterns.

Shortened Reed-Solomon codes, a prominent application of these variants, have been proposed for communication standards such as IEEE 802.11 wireless LANs since the late 1990s, where they provide forward error correction for OFDM symbols, for instance using RS(64,48) configurations to mitigate burst errors in high-data-rate transmissions. Array-based generalizations extend cyclic codes to two dimensions, forming 2D cyclic or constacyclic codes defined as ideals in bivariate quotient rings of \mathbb{F}_q[x, y] that are invariant under toroidal (row and column) shifts; these have found use in image processing for error correction in 2D data arrays, such as correcting pixel-level corruptions in digital imagery during storage or transmission in the 2010s.
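A small sketch of shortening (illustrative parameters, non-systematic encoding for brevity): starting from the (7, 4) cyclic code with g(x) = 1 + x + x^3, forcing the two highest information symbols to zero and deleting those positions leaves a (5, 2) code whose minimum distance is still 3, and a direct check shows it is no longer closed under the cyclic shift.

```python
from itertools import product

n, k, s = 7, 4, 2
g = [1, 1, 0, 1]                               # g(x) = 1 + x + x^3

def encode(msg, length):                       # c(x) = m(x) g(x) over GF(2)
    c = [0] * length
    for i, m in enumerate(msg):
        for j, gj in enumerate(g):
            c[i + j] ^= m & gj
    return c

# Shorten: restrict to messages whose last s symbols are zero; the last s code positions
# are then always zero and can be deleted, giving a (5, 2) code.
short_code = [encode(list(m), n)[: n - s] for m in product([0, 1], repeat=k - s)]

d_short = min(sum(c) for c in short_code if any(c))
print("shortened (5,2) code, minimum distance:", d_short)          # 3, same as the parent

def shift(c):
    return [c[-1]] + c[:-1]

print("still cyclic?", all(shift(c) in short_code for c in short_code))   # False
```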
