Error detection and correction

Error detection and correction are techniques employed in digital communication and storage systems to identify errors introduced during transmission or storage and, in the case of correction, to restore the original data without retransmission. These methods rely on adding redundant information to the original message, allowing the receiver to detect discrepancies caused by noise, interference, or physical defects, thereby ensuring data integrity across unreliable channels. The fundamental principle underlying these techniques is the use of coding theory, which introduces redundancy through codewords separated by a minimum Hamming distance—the number of positions at which two strings differ—sufficient to distinguish errors. Error detection identifies anomalies but requires retransmission or other recovery, while correction actively repairs errors up to a certain limit; for instance, a code with minimum Hamming distance d \geq 2t + 1 can correct up to t errors. This distinction is critical, as detection schemes like parity checks or cyclic redundancy checks (CRCs) are simpler and detect single or burst errors with low overhead, whereas correction methods demand more redundancy and greater decoding complexity. Key error-detecting codes include the parity bit, which appends a single bit to achieve even or odd parity and detects odd-numbered errors, and CRC codes, which use polynomial division to generate checksums effective against burst errors in networks and storage. For correction, prominent examples are Hamming codes, such as the (7,4) code that uses three parity bits to correct single-bit errors in four data bits, and more advanced linear block or convolutional codes applied in high-reliability scenarios. These approaches trace their theoretical foundations to Claude Shannon's 1948 work on information theory, which established limits on error-free communication over noisy channels, though practical codes approximate but do not fully achieve the Shannon limit. Applications of error detection and correction span critical domains, including computer memory (e.g., single-error-correcting double-error-detecting codes in ECC memory), satellite and deep-space communications (e.g., convolutional codes with Viterbi decoding), and storage media like optical discs and hard drives (e.g., Reed-Solomon codes for burst error correction). Trade-offs in design involve balancing code rate (the ratio of data bits to total bits), implementation complexity, and error protection levels, tailored to channel characteristics like random bit flips or burst errors. Ongoing advancements continue to enhance efficiency, particularly in emerging fields like quantum computing, where specialized codes address decoherence.

Fundamentals

Definitions

Error detection is the process of identifying the presence of errors in data that has been transmitted over a communication channel or stored in a medium, without necessarily determining the location or correcting the errors. This technique typically involves adding redundant information to the original data, allowing the receiver to verify integrity and flag potential issues for further action. Error correction extends beyond mere identification by not only detecting errors but also locating their positions and determining their nature to restore the original data as accurately as possible. It relies on sufficient redundancy to enable the receiver to reconstruct the intended message even when multiple errors occur within a defined tolerance. Error control encompasses the systematic addition of redundant bits or symbols to the source data, forming codewords that facilitate either error detection, correction, or both, thereby enhancing the reliability of data transmission over noisy channels. This approach trades off some transmission efficiency for improved error resilience, as seen in the various coding schemes used in communications. The primary distinction between error detection and error correction lies in their outcomes: detection typically flags errors to trigger mechanisms like retransmission in protocols such as automatic repeat request (ARQ), while correction enables automatic fixing of errors up to a certain rate through forward error correction (FEC) without requiring sender involvement. Detection is simpler and requires less redundancy, suiting scenarios where channel feedback is available, whereas correction is more robust for one-way or high-latency links. A basic example involves a single-bit error in a transmitted byte, such as "10110110" becoming "10111110" due to the flipping of the fifth bit; a simple parity check can identify the resulting parity mismatch to signal the error, while a more advanced code like the Hamming code could both detect and correct it by pinpointing the flipped bit. Hamming distance serves as a foundational measure in both processes, quantifying the number of positions at which two codewords differ to assess error-handling capabilities.

Error Types and Models

Errors in digital communication systems are broadly classified into single-bit errors, multi-bit errors, and burst errors. Single-bit errors occur when only one bit in a transmitted sequence is altered, typically due to random noise such as thermal noise in communication channels. Multi-bit errors involve alterations to two or more bits, which may arise from more severe noise or hardware faults, and can challenge error-correcting codes designed primarily for isolated flips. Burst errors consist of consecutive erroneous bits within a defined block or window, often caused by phenomena like impulse noise or fading in wireless channels, where multiple adjacent bits are corrupted simultaneously. Mathematical models characterize these errors to analyze channel behavior and code performance. The binary symmetric channel (BSC) models symmetric errors, assuming each bit is independently flipped with probability p (where 0 < p < 0.5), representing random, uncorrelated bit errors common in additive white Gaussian noise environments. In contrast, the binary asymmetric channel (BAC) accounts for unequal flip probabilities, such as p_0 for 0-to-1 transitions and p_1 for 1-to-0, which better suits channels with biased noise like certain optical or storage systems. For bursty channels exhibiting clustered errors, the Gilbert-Elliott model describes a two-state Markov process: a "good" state with low error probability and a "bad" state with high error probability, enabling simulation of error bursts in fading or multipath propagation scenarios. Key metrics quantify error prevalence across these models. The bit error rate (BER) measures the fraction of bits received incorrectly out of the total transmitted bits, serving as a fundamental indicator of channel reliability in binary systems. The symbol error rate (SER) extends this to higher-order modulation schemes, representing the probability that an entire symbol (comprising multiple bits) is decoded erroneously, which is particularly relevant for evaluating performance in M-ary signaling. The packet error rate (PER), defined as the ratio of packets containing at least one error to the total packets sent, assesses end-to-end reliability in networked systems, aggregating bit-level errors into practical data unit failures. Understanding code performance requires assumptions about error distributions, such as independent errors in memoryless models for capacity calculations or burst-length statistics in bursty models for throughput predictions, as these inform the design of detection and correction strategies under modeled conditions.
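
The channel models above can be sketched in a few lines of Python. The snippet below is a minimal, illustrative simulation (parameter values such as p = 0.01 and the Gilbert-Elliott transition probabilities are arbitrary choices, not values from any standard) that passes bits through a BSC and a two-state bursty channel and reports the empirical bit error rate.

    import random

    rng = random.Random(42)

    def bsc(bits, p):
        # Binary symmetric channel: flip each bit independently with probability p.
        return [b ^ (rng.random() < p) for b in bits]

    def gilbert_elliott(bits, p_good=1e-4, p_bad=0.3, p_g2b=0.01, p_b2g=0.2):
        # Two-state Markov (Gilbert-Elliott) channel producing clustered (burst) errors.
        out, bad = [], False
        for b in bits:
            out.append(b ^ (rng.random() < (p_bad if bad else p_good)))
            # State transition for the next bit.
            bad = (rng.random() < p_g2b) if not bad else (rng.random() >= p_b2g)
        return out

    def ber(sent, received):
        # Bit error rate: fraction of positions that differ.
        return sum(s != r for s, r in zip(sent, received)) / len(sent)

    data = [rng.getrandbits(1) for _ in range(100_000)]
    print("BSC BER            :", ber(data, bsc(data, p=0.01)))
    print("Gilbert-Elliott BER:", ber(data, gilbert_elliott(data)))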

Historical Overview

Origins and Early Developments

In the 19th century, the rise of electrical telegraphy introduced the need for reliable long-distance communication, with engineers like Émile Baudot developing synchronous coding systems in the 1870s that used fixed-length binary-like signals to improve transmission efficiency and synchronization. Baudot's fixed-length code enabled multiplexed channels and supported better timing uniformity in electrical signaling. World War II accelerated theoretical advancements in communication reliability, as cryptographers and engineers grappled with noisy channels in military applications; Claude Shannon's seminal 1948 paper, "A Mathematical Theory of Communication," established information theory's core principles, demonstrating that reliable transmission is possible over imperfect channels by quantifying noise limits and redundancy needs. This work provided the mathematical groundwork for distinguishing detectable errors from correctable ones in digital systems. By the early 1950s, practical computing demands drove innovations in error control, exemplified by Richard Hamming's 1950 paper at Bell Labs, where frustrations with frequent failures in relay-based computers—often due to unreliable components—prompted the development of systematic error-correcting codes. Hamming's approach extended simple parity checks, which had emerged in the late 1940s for basic error detection in punched-card readers and memory systems, into more robust schemes capable of single-error correction. The parity bit, a single redundant bit ensuring an even or odd number of 1s in a data word, became a standard for simple detection in 1950s computing hardware, such as IBM's early disk systems. The Hamming code, as an early outcome, illustrated how parity bits could be arranged in power-of-two positions to pinpoint and fix errors automatically.

Key Milestones in the 20th Century

In 1950, Richard W. Hamming introduced Hamming codes, a class of linear error-correcting codes capable of detecting up to two errors and correcting single errors in binary data transmission, motivated by the need to handle failures in early computing machines at Bell Laboratories. These codes, constructed using parity-check matrices derived from all nonzero binary vectors of a fixed length, represented a foundational advancement in systematic error correction, enabling reliable operation in punched-card readers and vacuum-tube computers without manual intervention. The late 1950s and early 1960s saw significant progress in cyclic codes, a subclass of linear block codes where any cyclic shift of a codeword remains a codeword, facilitating efficient encoding and decoding via shift registers. In 1959, Alexis Hocquenghem developed a method for constructing cyclic codes with specified error-correcting capabilities over finite fields, followed independently in 1960 by Raj Chandra Bose and Dinesh K. Ray-Chaudhuri, who extended the approach to binary group codes capable of correcting multiple random errors. These Bose-Chaudhuri-Hocquenghem (BCH) codes, built using roots of unity in Galois fields, provided a flexible framework for designing codes with predetermined minimum distances, influencing subsequent standards in digital communications. In 1961, W. Wesley Peterson further advanced cyclic codes by demonstrating their use for error detection through polynomial division, laying the groundwork for practical implementations like cyclic redundancy checks (CRC) in noisy channels. Also in 1960, Irving S. Reed and Gustave Solomon proposed Reed-Solomon codes, a non-binary cyclic code variant operating over finite fields of order greater than the symbol size, particularly effective for correcting burst errors common in storage and transmission systems. Reed-Solomon codes achieve the Singleton bound, offering optimal trade-offs between code rate and error-correcting capability, and were initially motivated by applications in military communications requiring resilience to consecutive symbol errors. Their structure, based on evaluating polynomials at distinct points, enabled decoding algorithms that locate and correct errors up to half the minimum distance, making them suitable for early satellite and telemetry systems. In 1967, Andrew J. Viterbi developed the Viterbi algorithm, a dynamic programming method for maximum-likelihood decoding of convolutional codes, which model error-prone channels as finite-state machines producing continuous streams of data. Although proposed theoretically, the algorithm gained practical traction in the 1970s for decoding convolutional codes in deep-space missions and satellite communications, where it efficiently navigated the code trellis to minimize computational complexity while approaching maximum-likelihood performance for low-rate codes. During the 1960s, the International Telegraph and Telephone Consultative Committee (CCITT, now ITU-T) standardized CRC polynomials for telecommunications, adopting specific generator polynomials like x^{16} + x^{12} + x^5 + 1 for 16-bit checks to detect burst errors in asynchronous data links and early packet networks. These standards, formalized in recommendations such as V.41 by the early 1970s but developed amid 1960s research, ensured interoperability in international telephony and data transmission by specifying CRC computation for frame validation.
In the 1980s and 1990s, concatenated codes—combining an outer code for burst correction with an inner code for random errors—emerged as a high-performance approach, building on John Wozencraft's early ideas from his 1957 MIT thesis on sequential decoding and further refined in G. David Forney's 1966 work. Supervised by Wozencraft, Forney's concatenated schemes demonstrated exponential error reduction with polynomial-time decoding, influencing deep-space probes and CD-ROM storage. Concurrently, Automatic Repeat reQuest (ARQ) protocols rose in prominence within the ARPANET, the precursor to the internet, where the 1822 host-IMP protocol incorporated acknowledgment-based retransmissions to handle packet errors over unreliable links, enabling reliable end-to-end communication across the network's initial deployments in the 1970s and expansions through the 1980s.

Recent Advances

In the late 1990s and early 2000s, turbo codes emerged as a breakthrough in error-correcting coding, introduced by Claude Berrou, Alain Glavieux, and Punya Thitimajshima in 1993. These parallel concatenated convolutional codes, decoded via iterative algorithms, achieve performance within 0.7 dB of the Shannon limit at a bit error rate (BER) of 10^{-5}, enabling near-capacity operation in noisy channels. Their adoption in 3G mobile standards like UMTS marked a shift toward capacity-approaching codes for wireless communications. Low-density parity-check (LDPC) codes, originally conceived by Robert Gallager in 1963, underwent a renaissance in the early 2000s with the development of efficient iterative belief propagation decoding algorithms, notably advanced by Richardson and Urbanke in 2001 through density evolution techniques that optimized code ensembles for capacity achievement. This revival led to their standardization in the DVB-S2 satellite broadcasting standard in 2005, where LDPC codes with rates from 1/4 to 9/10 provide coding gains up to 10 dB, supporting high-throughput video delivery over fading channels. Further, in 2018, 3GPP adopted quasi-cyclic LDPC codes for 5G NR data channels in Release 15, enabling flexible rates and block lengths up to 8448 bits while maintaining low latency and high reliability for enhanced mobile broadband. The 2010s saw the introduction of polar codes by Erdal Arikan in 2009, which exploit channel polarization to achieve symmetric capacity on binary-input discrete memoryless channels through a recursive construction that transforms unreliable channels into reliable ones. These codes were selected by 3GPP in 2016 for 5G NR control channels due to their short-blocklength efficiency and low decoding complexity via successive cancellation, supporting block error rates below 10^{-3} for URLLC scenarios. In quantum error correction, stabilizer codes, formalized by Daniel Gottesman in 1997, and surface codes, proposed by Alexei Kitaev in 1997, provide the theoretical foundation for fault-tolerant quantum computing by encoding logical qubits into entangled states that detect and correct errors without collapsing superposition. Practical advances accelerated in the 2020s, with Google demonstrating in 2024 a distance-7 surface code logical qubit on their Willow processor using 101 physical qubits, achieving a logical error rate of 0.143% ± 0.003% per cycle below the surface code threshold, with a suppression factor of 2.14 ± 0.02 when increasing code distance by 2. In November 2025, IBM announced new quantum processors with increased qubit connectivity, enabling execution of error-corrected circuits with 30% more complexity than previous generations, advancing toward fault-tolerant systems by 2029. Emerging trends in the 2020s integrate artificial intelligence into error control, particularly for 6G networks, where machine learning is explored to enhance decoding of LDPC and polar codes. These developments, including AI-driven adaptive schemes, are being investigated in 3GPP Release 18 and 19 standardization efforts for beyond-5G reliability as of 2025.

Core Principles

Hamming Distance

In coding theory, the Hamming distance between two equal-length strings u and v over an alphabet, denoted d(u, v), is defined as the number of positions at which the corresponding symbols differ. This metric quantifies the dissimilarity between codewords and serves as a fundamental measure for assessing the robustness of error-correcting codes against transmission errors. For instance, consider the binary strings 000 and 111; their Hamming distance is d(000, 111) = 3, as all three positions differ. Similarly, for 101 and 110, the distance is d(101, 110) = 2, reflecting differences in the second and third positions. These examples illustrate how the distance captures the minimal changes needed to transform one string into another, which is crucial for identifying erroneous receptions. The minimum Hamming distance d of a code—the smallest distance between any two distinct codewords—plays a pivotal role in distinguishing valid codewords from those corrupted by errors. In a channel prone to errors, such as the binary symmetric channel, an erroneous version of a transmitted codeword lies within a Hamming ball of radius equal to the number of errors around the original; a sufficiently large minimum distance ensures that these balls around different codewords do not overlap for small error counts, enabling unique identification and recovery of the intended codeword. This separation property underpins the design of codes that can reliably detect or correct errors by mapping received words back to the nearest valid codeword. A key performance limit derived from the Hamming distance is the Hamming bound, also called the sphere-packing bound, which establishes an upper bound on the maximum number of codewords A(n, d) in a binary code of length n and minimum distance d. The bound is given by A(n, d) \leq \frac{2^n}{\sum_{k=0}^{t} \binom{n}{k}}, where t = \left\lfloor \frac{d-1}{2} \right\rfloor represents the error-correction capability. This inequality arises from the non-overlapping packing of Hamming spheres of radius t around each codeword within the entire space of 2^n possible strings, providing a theoretical ceiling on code efficiency. The Hamming distance concept generalizes naturally to q-ary codes, where the alphabet has q symbols (e.g., q=4 for quaternary codes). In this setting, d(u, v) remains the count of positions where u and v differ, but the total space size becomes q^n, and the sphere-packing bound adjusts accordingly to A(n, d) \leq \frac{q^n}{\sum_{k=0}^{t} \binom{n}{k} (q-1)^k}. This extension is essential for non-binary applications, such as multilevel signaling in communications, where larger alphabets enhance data rates while maintaining error resilience through distance-based separation.
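
The definitions above translate directly into code. The following Python sketch computes the Hamming distance between equal-length strings and evaluates the sphere-packing (Hamming) bound; the function names are illustrative, and the bound for n = 7, d = 3 recovers the 16 codewords of the (7,4) Hamming code discussed later.

    from math import comb

    def hamming_distance(u, v):
        # Number of positions at which two equal-length strings differ.
        assert len(u) == len(v)
        return sum(a != b for a, b in zip(u, v))

    def hamming_bound(n, d, q=2):
        # Sphere-packing upper bound on the number of codewords A(n, d).
        t = (d - 1) // 2
        volume = sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))
        return q ** n // volume

    print(hamming_distance("000", "111"))   # 3
    print(hamming_distance("101", "110"))   # 2
    print(hamming_bound(7, 3))              # 16, matching the (7,4) Hamming code (2^4 codewords)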

Detection and Correction Capabilities

The error detection and correction capabilities of a code are fundamentally determined by its minimum Hamming distance d, which is the smallest number of positions in which any two distinct codewords differ. A code with minimum distance d can detect up to d-1 errors, as any change of fewer than d symbols in a codeword cannot result in another valid codeword, allowing the receiver to identify that an error has occurred. For error correction, such a code can reliably correct up to t = \left\lfloor \frac{d-1}{2} \right\rfloor errors. This follows from the sphere-packing argument: around each codeword, consider a Hamming sphere of radius t; these spheres are disjoint because the distance between any two codewords exceeds 2t, ensuring that any received word within distance t of a codeword belongs uniquely to that codeword's sphere and can be decoded to it via nearest-neighbor decoding. An important upper bound on the minimum distance is given by the Singleton bound, which states that for a code of length n and dimension k (where k is the number of information symbols), d \leq n - k + 1. Codes achieving equality in this bound are known as maximum distance separable (MDS) codes, such as Reed-Solomon codes, which maximize error correction capability for a given rate. These capabilities come with inherent trade-offs: increasing d to enhance detection and correction reduces the code rate R = k/n, as more redundancy (parity symbols) is required to separate codewords farther apart, limiting the information throughput. In erasure channels, where error positions are known to the receiver, a code with minimum distance d can correct up to d-1 erasures, since the known positions allow solving for the missing symbols using the remaining reliable ones without ambiguity.

Error Detection Methods

Parity Checks

Parity checks represent one of the simplest and earliest methods for error detection in digital systems, involving the addition of a redundant bit to a block of data to verify its integrity. This technique relies on the concept of parity, which refers to whether the number of 1s in a binary string is even or odd. By appending a parity bit, the sender ensures that the total number of 1s in the transmitted data meets a predefined parity rule, allowing the receiver to detect discrepancies indicative of errors. In even parity, the parity bit is set to 0 if the data already has an even number of 1s, or to 1 if it has an odd number, resulting in an even total count of 1s across the data and parity bit. Conversely, odd parity sets the bit to achieve an odd total number of 1s. The parity bit is typically computed as the exclusive-OR (XOR) of all data bits, where even parity uses the direct XOR result and odd parity inverts it. This method detects any odd number of bit flips in the transmitted data, as an even parity scheme will yield an odd count (or vice versa) upon reception if an odd number of errors occurred. However, it fails to detect an even number of errors, such as two bit flips, which would preserve the overall parity. For example, consider the 4-bit data word 1011, which has three 1s (odd count). To apply even parity, the sender computes the XOR of the bits (1 XOR 0 XOR 1 XOR 1 = 1) and sets the parity bit to 1 to make the total even, yielding the 5-bit transmission 10111. At the receiver, recounting the 1s (four) confirms even parity, indicating no detected error under the even-parity scheme. If a single bit flips—say the last data bit, yielding 10101 (three 1s including the parity bit)—the receiver detects odd parity and flags an error. This single-bit parity is commonly applied to bytes in memory systems for basic single-error detection. To enhance detection capabilities, particularly for burst errors—consecutive bit flips—block parity, also known as two-dimensional (2D) parity, extends the concept by organizing data into a matrix and adding parity bits for both rows and columns. In this scheme, each row of data bits receives a row parity bit (computed via XOR as in single-bit parity), and an additional row of column-parity bits is calculated over the columns, including a parity bit covering the row-parity bits themselves. This structure can detect all single-bit errors, double-bit errors, and burst errors up to length 2, as errors in a row or column will mismatch the corresponding parity. For instance, in a 4x4 data matrix, the resulting 5x5 block (with parities) allows identification of single-bit error locations through mismatched row and column parities, enabling correction of such errors, though multiple-bit errors can generally only be detected. Despite its simplicity and low overhead (typically 12.5% for byte-level parity), the parity check method has notable limitations, including the inability of single-bit parity to detect an even number of errors and the vulnerability of both schemes to bursts longer than they can handle. It has been widely used in early computer memory for single-bit error detection and in foundational RAID configurations to identify disk failures through parity inconsistencies, though modern systems often supplement or replace it with more robust techniques like checksums for multi-byte validation.
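
As a concrete companion to the worked example (data word 1011 under even parity), the following Python sketch computes a parity bit by XOR, checks received words, and builds a simple two-dimensional block-parity array; the helper names are illustrative rather than drawn from any standard.

    from functools import reduce
    from operator import xor

    def parity_bit(bits, even=True):
        # Parity bit as the XOR of the data bits; invert for odd parity.
        p = reduce(xor, bits, 0)
        return p if even else p ^ 1

    def check_even_parity(bits_with_parity):
        # True if the total number of 1s (data plus parity bit) is even.
        return reduce(xor, bits_with_parity, 0) == 0

    data = [1, 0, 1, 1]                 # three 1s
    tx = data + [parity_bit(data)]      # -> [1, 0, 1, 1, 1], i.e. 10111
    rx = tx.copy(); rx[3] ^= 1          # flip the last data bit in transit
    print(check_even_parity(tx))        # True  (no error detected)
    print(check_even_parity(rx))        # False (single-bit error detected)

    def two_d_parity(matrix):
        # Append a row-parity column and a column-parity row (2D block parity).
        rows = [row + [parity_bit(row)] for row in matrix]
        rows.append([parity_bit(col) for col in zip(*rows)])
        return rows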

Checksums

Checksums are simple arithmetic methods used to detect errors in blocks of data by appending a computed value that allows the receiver to verify integrity. These techniques compute a fixed-size value, typically 8 or 16 bits, based on the sum or exclusive-OR of data words, providing a lightweight alternative to more complex codes for applications where correction is not required. The Internet Checksum, specified in RFC 1071, is a widely adopted 16-bit checksum used in protocols like IP and TCP. It operates by dividing the data into 16-bit words and computing their one's complement sum, handling any odd-length data by padding with a zero byte. The process involves adding the words with end-around carry, where any carry out of the 16th bit is added back to the least significant bit. The checksum is then the one's complement (bitwise NOT) of this sum, ensuring that the sum of the data words and the checksum equals all 1s (0xFFFF) in one's complement arithmetic. Mathematically, if s is the one's complement sum of the 16-bit words, the checksum c is given by: c = \sim (s \mod 2^{16}) where \sim denotes bitwise complement. This method detects all single-bit errors, though multi-bit error patterns whose contributions cancel in the sum can go undetected. Another common checksum variant is the Longitudinal Redundancy Check (LRC), employed in protocols like IBM's Binary Synchronous Communications (BISYNC). LRC computes an 8-bit value by taking the bitwise XOR of all bytes in the data block, effectively providing a vertical parity across the entire message. This results in a single check byte appended to the block, which the receiver recomputes and compares to detect discrepancies. LRC is particularly suited for character-oriented transmissions, complementing per-character parity checks. Despite their efficiency, checksums have notable limitations in error detection. They perform poorly against burst errors longer than the checksum size, as error patterns can cancel out during summation or XOR, allowing some bursts of 16 bits or more to go undetected by the Internet Checksum. Additionally, they are vulnerable to systematic errors, such as those from hardware faults that consistently flip specific bits, where the probability of undetected errors approaches 1/2^{16} for random 2-bit errors in balanced data patterns. For random independent bit errors in long messages, the Internet Checksum detects over 99.99% of errors, but this drops in adversarial or correlated scenarios. As a more robust alternative for burst-prone channels, polynomial-based methods like CRCs offer superior detection. In practice, the Internet Checksum appears in IPv4 headers and TCP segments, where it verifies the header integrity against transmission errors without requiring retransmission for minor issues. LRC finds use in legacy synchronous protocols for block-level validation. These checksums prioritize computational simplicity, making them ideal for real-time networking despite their constraints.
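
The one's-complement procedure described above can be expressed compactly. The sketch below follows the RFC 1071 recipe (16-bit words, end-around carry, final complement) on an example payload; the even-length payload keeps the word alignment identical for computation and verification, which real protocols ensure by placing the checksum field at a fixed, aligned offset.

    def internet_checksum(data: bytes) -> int:
        # RFC 1071-style 16-bit one's-complement checksum of a byte string.
        if len(data) % 2:                             # pad odd-length data with a zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]     # next 16-bit word
            total = (total & 0xFFFF) + (total >> 16)  # end-around carry
        return ~total & 0xFFFF                        # one's complement of the folded sum

    def verify(data: bytes, checksum: int) -> bool:
        # Receiver check: data plus the appended checksum must sum to 0xFFFF,
        # so the complement of the folded sum is zero.
        return internet_checksum(data + checksum.to_bytes(2, "big")) == 0

    payload = b"hello world!"                         # even length keeps word alignment
    c = internet_checksum(payload)
    print(hex(c), verify(payload, c))                 # the checksum verifies as True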

Cyclic Redundancy Checks

Cyclic redundancy checks (CRCs) are a family of error-detecting codes derived from cyclic codes, utilizing polynomial division over the finite field GF(2) to append a fixed-length checksum to data blocks for detecting transmission errors. Invented by W. Wesley Peterson and D. T. Brown, CRCs treat the binary message as coefficients of a polynomial M(x) of degree less than k, where k is the message length in bits. To compute the CRC, the message polynomial is shifted left by m bits, equivalent to multiplying by x^m, where m is the degree of the generator polynomial g(x). This augmented polynomial is then divided by g(x) using modulo-2 arithmetic, and the remainder R(x), a polynomial of degree less than m, serves as the CRC bits appended to the original message. The resulting codeword polynomial C(x) = M(x) \cdot x^m + R(x) is exactly divisible by g(x), ensuring that any received codeword divisible by g(x) is assumed error-free. A common example is the CRC-16 scheme, which uses the generator polynomial g(x) = x^{16} + x^{15} + x^2 + 1. This polynomial enables detection of all burst errors of length less than or equal to 16 bits, as any such error pattern corresponds to a nonzero polynomial of degree at most 15, which cannot be divisible by g(x) unless the error is zero. CRCs exhibit strong detection properties: they detect all single-bit errors due to the nonzero constant term in g(x); all double-bit errors if g(x) has a factor with at least three terms; and all odd numbers of bit errors if x+1 divides g(x), as is the case for the CRC-16 polynomial since g(1) = 0 \mod 2. Many standard CRC polynomials, including CRC-16 variants, achieve a minimum Hamming distance of 4 for typical message lengths up to several thousand bits, allowing detection of all error patterns with up to three bit flips. In hardware, CRC computation is efficiently realized using a linear feedback shift register (LFSR) of length m, where XOR gates are positioned at taps corresponding to the nonzero coefficients of g(x) (excluding the leading term). The message bits are serially fed into the register, which performs the equivalent of polynomial division through successive shifts and conditional XOR operations, producing the CRC at the end. This shift-register architecture enables high-speed processing in digital circuits, such as those in network interfaces and storage controllers. CRCs are extensively applied in data storage systems and communication protocols to ensure integrity against bursty errors common in physical media.
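
The division-by-g(x) process maps directly onto a bit-serial shift register. The Python sketch below mirrors that LFSR for the CRC-16 generator x^{16} + x^{15} + x^2 + 1 with a zero initial register and no bit reflection (one of several CRC-16 conventions in use); appending the computed remainder to the message and running the register again leaves zero, matching the divisibility property described above.

    def crc16(data: bytes, poly: int = 0x8005, init: int = 0x0000) -> int:
        # Bit-serial CRC-16 for g(x) = x^16 + x^15 + x^2 + 1, mirroring an LFSR:
        # the incoming message bit is XORed with the register's MSB to form the feedback.
        crc = init
        for byte in data:
            for i in range(7, -1, -1):                # message bits, MSB first
                feedback = ((crc >> 15) & 1) ^ ((byte >> i) & 1)
                crc = (crc << 1) & 0xFFFF
                if feedback:
                    crc ^= poly                       # taps at the nonzero terms of g(x)
        return crc

    msg = b"123456789"
    r = crc16(msg)
    print(hex(r))
    # Appending the 16 remainder bits yields a codeword divisible by g(x),
    # so running the register over the whole codeword leaves a zero remainder.
    print(crc16(msg + r.to_bytes(2, "big")) == 0)     # True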

Hash-Based Methods

Hash-based methods employ cryptographic hash functions to verify data integrity by generating a fixed-size digest from variable-length input, enabling the detection of any alterations during transmission or storage. These functions are designed to be one-way, meaning it is computationally infeasible to reverse the digest to recover the original data, and they exhibit strong collision resistance to prevent finding two different inputs that produce the same output. A prominent example is SHA-256, part of the Secure Hash Algorithm family standardized by NIST, which produces a 256-bit digest and is widely used to detect changes in messages by comparing the computed digest of received data against the expected value. If the digests differ, an error or tampering is indicated. The function's avalanche effect ensures that even a single-bit change in the input typically alters approximately half of the output bits, enhancing its sensitivity to errors. For authenticated error detection, digital signatures combine hashing with public-key cryptography, such as RSA. The sender hashes the message and encrypts the digest using their private key to create the signature, which the receiver verifies by decrypting with the sender's public key and comparing it to a freshly computed hash of the received message, confirming both integrity and origin. This approach, detailed in digital signature standards, provides non-repudiation alongside detection. Despite their robustness, cryptographic hashes like SHA-256 are often overkill for pure accidental error detection, as their emphasis on collision resistance against adversarial attacks incurs higher computational overhead compared to lightweight precursors like checksums, which suffice for non-malicious scenarios. They excel at detecting any change but do not locate or correct errors. In practice, the keyed-hash message authentication code (HMAC), which applies a hash function to the message combined with a secret key in a nested construction, is used for integrity verification in secure protocols. For instance, in TLS, HMAC computes authentication tags on record payloads to flag modifications or replays, ensuring reliable data transmission over networks.
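
The following short example uses Python's standard hashlib and hmac modules to illustrate digest comparison for integrity checking and a keyed HMAC-SHA256 tag; the message and key strings are placeholders.

    import hashlib
    import hmac

    message = b"transfer 100 units to account 42"      # placeholder message

    # Integrity check: any modification changes the SHA-256 digest.
    digest = hashlib.sha256(message).hexdigest()
    tampered = hashlib.sha256(b"transfer 900 units to account 42").hexdigest()
    print(digest != tampered)                           # True: digests differ

    # Authenticated integrity: HMAC-SHA256 tag under a shared secret key.
    key = b"shared-secret-key"                          # placeholder key
    tag = hmac.new(key, message, hashlib.sha256).digest()
    received_tag = hmac.new(key, message, hashlib.sha256).digest()
    print(hmac.compare_digest(tag, received_tag))       # True: constant-time comparison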

Error Correction Methods

Automatic Repeat Request

Automatic Repeat Request (ARQ) is an error control technique employed in data communication systems to ensure reliable transmission over potentially noisy channels by detecting errors and prompting the retransmission of affected data units through a feedback mechanism. This method relies on acknowledgments (ACKs) from the receiver to confirm successful reception or negative acknowledgments (NACKs) to signal errors, enabling the sender to resend only the problematic frames. ARQ protocols integrate error detection schemes, such as cyclic redundancy checks (CRC) or checksums, to identify corrupted frames at the receiver, which then triggers a NACK or the absence of an ACK, initiating retransmission. These detection methods append redundant bits to the data frame, allowing the receiver to verify integrity without decoding the full content. The primary variants of ARQ include Stop-and-Wait, Go-Back-N, and Selective Repeat protocols, each balancing simplicity, efficiency, and complexity differently. In Stop-and-Wait ARQ, the sender transmits one frame and halts until receiving an ACK or detecting a timeout, ensuring basic reliability but limiting throughput due to idle periods. The throughput efficiency for this protocol is expressed as \eta = \frac{1 - P_e}{1 + 2a} where P_e represents the frame error probability and a is the propagation delay factor, defined as the ratio of one-way propagation time to frame transmission time; this formula highlights how errors and delays degrade performance. Go-Back-N ARQ extends efficiency by allowing the sender to transmit a window of up to N unacknowledged frames continuously. Upon error detection via CRC or checksum, the receiver issues a NACK for the faulty frame, prompting the sender to retransmit that frame and all subsequent ones in the window, discarding correctly received frames after the error to simplify processing. Selective Repeat ARQ optimizes bandwidth usage by permitting the receiver to buffer out-of-sequence frames and request retransmission solely for the erroneous or lost ones, using sequence numbers and ACKs to manage reordering. This approach requires more sophisticated buffering at both ends but minimizes unnecessary retransmissions compared to Go-Back-N. Notable implementations of ARQ appear in the High-Level Data Link Control (HDLC) protocol, standardized by ISO, which supports both Go-Back-N and Selective Repeat modes for point-to-point and multipoint links, employing CRC-16 or CRC-32 for error detection. Similarly, ARQ mechanisms are integral to IEEE 802.11 Wi-Fi standards, where optional block ACKs enable selective retransmissions of failed MAC protocol data units (MPDUs) to enhance reliability in wireless environments. Despite their effectiveness, ARQ protocols introduce drawbacks, particularly increased latency on links with substantial propagation delays, as the sender must await feedback before proceeding, amplifying round-trip times in satellite or long-haul networks. Additionally, ARQ mandates a bidirectional feedback channel, which must itself be robust against errors to avoid compounding reliability issues.
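
The Stop-and-Wait efficiency formula above is easy to evaluate numerically. The sketch below plugs in two illustrative values of a (a short link and a long-delay link) at the same frame error probability; the specific numbers are arbitrary and serve only to show how propagation delay dominates the efficiency of Stop-and-Wait ARQ.

    def stop_and_wait_efficiency(p_e: float, a: float) -> float:
        # Throughput efficiency of Stop-and-Wait ARQ.
        #   p_e : frame error probability
        #   a   : ratio of one-way propagation time to frame transmission time
        return (1.0 - p_e) / (1.0 + 2.0 * a)

    # A short terrestrial link (small a) versus a satellite link (large a), same error rate.
    for a in (0.01, 10.0):
        print(f"a = {a:>5}: efficiency = {stop_and_wait_efficiency(0.01, a):.3f}")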

Forward Error Correction Basics

Forward error correction (FEC) is a technique that enables the receiver to detect and correct errors in transmitted data without requiring retransmission from the sender. The sender encodes the original message by adding redundant information, creating a longer codeword that includes both the original data and parity bits. The receiver then uses this redundancy to identify and repair errors that occur during transmission over noisy channels. In FEC, an (n, k) block code is commonly employed, where k bits of information are encoded into n bits of codeword, with n > k to incorporate redundancy. The code rate is defined as R = \frac{k}{n} < 1, representing the fraction of useful data in the transmitted stream; lower rates provide more redundancy and thus greater error-correcting capability but at the cost of reduced throughput. Such codes can correct up to t errors per block, where t is determined by the minimum Hamming distance d_{\min} of the code, specifically t = \left\lfloor \frac{d_{\min} - 1}{2} \right\rfloor. Decoding in FEC can be performed using hard-decision or soft-decision methods. Hard-decision decoding treats received symbols as discrete decisions (e.g., 0 or 1 based on a threshold), simplifying the decoder but discarding reliability information from the channel. In contrast, soft-decision decoding utilizes probabilistic metrics, such as log-likelihood ratios (LLRs), which quantify the confidence in each received symbol's value, enabling more accurate error correction at the expense of increased decoding complexity. The theoretical foundation of FEC is rooted in information theory, particularly the Shannon limit for the binary symmetric channel (BSC). For a BSC with crossover probability p, the channel capacity is C = 1 - H(p), where H(p) = -p \log_2 p - (1-p) \log_2 (1-p) is the binary entropy function. FEC codes can approach this limit, achieving arbitrarily low error rates as long as the code rate R < C. A simple illustrative example of FEC is the repetition code, where each information bit is repeated multiple times (e.g., 0 encoded as 000, 1 as 111); the receiver applies majority voting to decode, correcting errors if fewer than half the repetitions are flipped. FEC is particularly advantageous in unidirectional communication links, such as satellite broadcasts, where feedback for retransmission is impractical or impossible.
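
The repetition-code example can be made concrete with a short simulation over a binary symmetric channel. The sketch below encodes each bit three times, applies random flips with probability p = 0.05 (an arbitrary choice), decodes by majority vote, and compares the measured post-decoding BER with the theoretical value 3p^2(1-p) + p^3.

    import random

    def repeat_encode(bits, n=3):
        # Rate-1/n repetition code: each bit is sent n times.
        return [b for b in bits for _ in range(n)]

    def repeat_decode(received, n=3):
        # Majority-vote decoding of each block of n repeated bits.
        return [int(sum(received[i:i + n]) > n // 2)
                for i in range(0, len(received), n)]

    rng = random.Random(0)
    bits = [rng.getrandbits(1) for _ in range(50_000)]
    p = 0.05
    noisy = [b ^ (rng.random() < p) for b in repeat_encode(bits)]
    decoded = repeat_decode(noisy)

    post_ber = sum(x != y for x, y in zip(bits, decoded)) / len(bits)
    print(f"channel p = {p}, post-decoding BER ~= {post_ber:.4f}")
    # Theory: the (3,1) repetition code fails only if 2 or 3 of the 3 copies flip,
    # i.e. 3*p^2*(1-p) + p^3 = 0.00725 for p = 0.05.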

Hybrid Schemes

Hybrid schemes integrate automatic repeat request (ARQ) mechanisms with forward error correction (FEC) to enhance reliability and efficiency in error-prone channels by correcting errors where possible and requesting retransmissions only when necessary. This combination mitigates the limitations of pure ARQ, which relies solely on retransmissions, and pure FEC, which lacks feedback for adaptation. The primary implementation is hybrid ARQ (HARQ), which employs FEC encoding on transmitted packets and uses feedback to trigger retransmissions if decoding fails. HARQ is classified into three types based on retransmission strategies. Type I applies FEC to the entire packet and, upon error detection via cyclic redundancy checks (CRC), discards the packet and requests a full retransmission of the same encoded version, without combining prior receptions. Type II uses incremental redundancy, where the initial transmission sends a high-rate code, and retransmissions provide additional parity bits that, when combined with previous receptions, form a lower-rate code for improved decoding. Type III employs chase combining with self-decodable codes, retransmitting the same coded packet (possibly punctured differently) and combining multiple identical versions to boost the effective signal-to-noise ratio via soft combining. These schemes reduce the number of retransmissions compared to pure ARQ by leveraging FEC for initial corrections, thereby improving throughput in time-varying channels like wireless environments. In LTE, HARQ utilizes turbo codes and supports up to four retransmissions per transport block (totaling five transmissions). In 5G NR, LDPC codes are employed with configurable HARQ processes (typically up to 16), both code families approaching capacity limits for enhanced reliability. The added decoding complexity from combining receptions increases computational load but yields lower latency than pure ARQ, as fewer full retransmissions are needed. Performance evaluations demonstrate that HARQ achieves bit error rate (BER) reductions by factors of up to 10^3 over pure ARQ in fading channels, owing to the synergistic error correction and feedback.
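
The benefit of chase combining can be illustrated with a toy model: the same BPSK symbol is retransmitted over an additive-Gaussian channel and the receiver sums the per-transmission soft values (proportional to LLRs under that model) before deciding. This is a deliberately simplified sketch, not a model of an actual LTE or 5G HARQ implementation, and the noise level is an arbitrary choice.

    import random

    def chase_combine_decide(tx_bit, n_tx, noise_sigma, rng):
        # Decide a BPSK bit (+1/-1 mapping) after soft-combining n_tx noisy copies:
        # the receiver adds the per-transmission soft values before one hard decision.
        s = 1.0 if tx_bit else -1.0
        llr_sum = sum(s + rng.gauss(0.0, noise_sigma) for _ in range(n_tx))
        return int(llr_sum > 0.0)

    rng = random.Random(1)
    bits = [rng.getrandbits(1) for _ in range(20_000)]
    for n_tx in (1, 2, 4):
        errors = sum(chase_combine_decide(b, n_tx, noise_sigma=1.2, rng=rng) != b
                     for b in bits)
        print(f"{n_tx} transmission(s): BER = {errors / len(bits):.4f}")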

Notable Error-Correcting Codes

Linear Block Codes

Linear block codes form a fundamental class of error-correcting codes in coding theory, defined as linear subspaces of the vector space \mathbb{F}_q^n over a finite field \mathbb{F}_q with q elements, where n denotes the code length. A linear [n, k, d]_q code consists of q^k codewords, each of length n, and has minimum distance d, enabling correction of up to t = \lfloor (d-1)/2 \rfloor errors. These codes are particularly useful because their linearity allows efficient encoding and decoding operations via matrix algebra. In systematic form, a linear block code can be represented using a generator matrix G of size k \times n whose rows form a basis for the code, and a parity-check matrix H of size (n-k) \times n such that the codewords c satisfy c H^T = 0. Encoding a message m \in \mathbb{F}_q^k produces the codeword c = m G. For binary codes (q=2), a systematic G often places the k message symbols in the first k positions, with the remaining n-k parity symbols computed to satisfy the parity-check equations. This structure ensures that the code rate is k/n, balancing redundancy against information density. Decoding relies on syndrome computation for error detection and correction. Given a received vector r = c + e, where e is the error pattern, the syndrome is s = r H^T. If s = 0, no error is assumed, and r is accepted as a codeword. Otherwise, s identifies a coset leader (a low-weight error pattern) via lookup in a syndrome table, allowing correction by subtracting the estimated e from r. This method's efficiency stems from the linearity, as syndromes depend only on the error pattern. A prominent example is the Hamming code, a [7,4,3] linear code that corrects single errors (t=1). Its parity-check matrix H has columns consisting of all nonzero vectors of length 3, ensuring d=3. Introduced by Richard Hamming to address errors in early computing systems, this code adds three parity bits to four message bits, with positions numbered in binary for parity grouping. BCH codes extend this framework for multiple-error correction, designed as cyclic linear codes with roots in \mathbb{F}_{2^m}, capable of correcting up to t errors in blocks of length n = 2^m - 1. Constructed using a primitive polynomial of degree m to define the field and a generator polynomial whose roots are consecutive powers of a primitive element, BCH codes achieve designed distance at least 2t+1. Independently developed by Hocquenghem and by Bose and Ray-Chaudhuri, these codes are widely used for their bounded length and error-correcting capability. Reed-Solomon (RS) codes are another important class of linear block codes, defined over a finite field \mathbb{F}_q with q \geq n, typically q = 2^m, as evaluation codes where codewords are evaluations of polynomials of degree less than k at n distinct points. An RS [n, k, d = n - k + 1]_q code can correct up to t = \lfloor (n - k)/2 \rfloor symbol errors and up to n - k erasures, making them particularly effective against burst errors since errors affect entire symbols. Encoding involves interpolating a polynomial through the k message symbols and evaluating it at additional points for parity symbols. Decoding uses algorithms like Berlekamp-Massey for syndrome-based error location or Forney's algorithm for magnitude estimation. Introduced by Irving S. Reed and Gustave Solomon in 1960, RS codes are extensively applied in digital storage (e.g., the cross-interleaved RS codes over GF(2^8) used in CDs) and communications (e.g., QR codes, satellite links).
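
The generator/parity-check machinery above is small enough to write out for the [7,4,3] Hamming code. The Python sketch below builds systematic G = [I | P] and H = [P^T | I] over GF(2), encodes a message, and performs syndrome decoding by matching the syndrome to a column of H; the particular P used here is one of several equivalent choices.

    # Systematic [7,4,3] Hamming code: G = [I4 | P], H = [P^T | I3] over GF(2).
    P = [[1, 1, 0],
         [1, 0, 1],
         [0, 1, 1],
         [1, 1, 1]]
    G = [[int(i == j) for j in range(4)] + P[i] for i in range(4)]
    H = [[P[j][i] for j in range(4)] + [int(i == j) for j in range(3)] for i in range(3)]

    def encode(m):
        # Codeword c = m G (mod 2).
        return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

    def syndrome(r):
        # Syndrome s = r H^T (mod 2).
        return tuple(sum(r[j] * H[i][j] for j in range(7)) % 2 for i in range(3))

    # Each nonzero syndrome equals a column of H and points at the flipped position.
    SYNDROME_TO_POS = {tuple(H[i][j] for i in range(3)): j for j in range(7)}

    def decode(r):
        s = syndrome(r)
        if any(s):                       # nonzero syndrome: correct the single error
            r = r.copy()
            r[SYNDROME_TO_POS[s]] ^= 1
        return r[:4]                     # systematic: the message is the first 4 bits

    msg = [1, 0, 1, 1]
    c = encode(msg)
    r = c.copy(); r[2] ^= 1              # single-bit channel error in a data position
    print(decode(r) == msg)              # True: error located and corrected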

Convolutional Codes

Convolutional codes constitute a class of error-correcting codes tailored for sequential data streams, where encoding occurs continuously rather than in fixed blocks. Introduced by Peter Elias in 1955 as a method to approach channel capacity with structured codes, they generate redundant bits through a convolutional process that mixes current and past input symbols. Unlike block codes, convolutional codes process information in a streaming fashion, making them ideal for real-time applications like wireless communications and satellite links. The structure of a convolutional encoder relies on a shift register, where the connections are defined by generator polynomials over GF(2). The key parameters include the constraint length K, which represents the number of stages in the shift register (including the current input bit), determining the memory depth and the number of possible states 2^{K-1}, and the code rate k/n, indicating that k input bits yield n output bits per cycle, typically with k=1 for simplicity. Encoding proceeds by shifting input bits through the register and computing each output bit as the modulo-2 sum of selected register contents, as specified by the generators; for instance, a rate 1/2 code with K=3 might use generators g_0 = 111_2 and g_1 = 101_2, producing two output bits per input. This process forms a semi-infinite codeword, often terminated by flushing the register with zeros to return to the zero state. The trellis diagram illustrates the encoding as a time-indexed graph of states, with branches labeled by input-output pairs, enabling visualization of all possible code sequences. Decoding convolutional codes employs the Viterbi algorithm, developed by Andrew Viterbi in 1967, which achieves maximum likelihood sequence estimation via dynamic programming on the trellis. At each time step, the algorithm computes branch metrics—such as Hamming distance for hard-decision decoding or Euclidean distance for soft-decision variants—between received symbols and possible branch outputs, then selects and retains the lowest-metric survivor path to each state, pruning suboptimal paths. The complexity scales as O(2^K) operations per input bit, dominated by the exponential growth in states, though practical implementations limit K to around 7-9 for feasibility. Soft decoding, which incorporates bit reliability from the channel, can improve performance by up to 2-3 dB but is briefly noted here as it extends into more advanced concatenated schemes. To achieve higher rates without redesigning the encoder, puncturing deletes specific output bits according to a predefined pattern, effectively increasing the rate from a low-rate mother code while preserving decoding compatibility via the full trellis. This technique, formalized by Cain, Clark, and Geist in 1979, allows rates such as 2/3 or 7/8 to be derived from a rate 1/2 base code by periodically omitting output bits, and maintains near-optimal performance for moderate puncturing densities. Prominent applications include the rate 1/2, constraint length K=7 code adopted by NASA for the Voyager missions in 1977, concatenated with Reed-Solomon codes for deep-space telemetry and decoded via Viterbi to yield a 3.5 dB coding gain at a 10^{-5} bit error rate. In cellular networks, the Global System for Mobile Communications (GSM) utilizes rate 1/2, K=5 convolutional codes with generators G_0 = 1 + D^3 + D^4 and G_1 = 1 + D + D^3 + D^4 for full-rate speech coding, where the protected 189 bits (including 182 Class 1 bits, 3 parity bits, and 4 tail bits) are encoded into 378 bits, to which the 78 unprotected Class 2 bits are added for a total of 456 bits. These examples underscore the codes' robustness in bandwidth-constrained, error-prone environments.
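
The rate-1/2, K=3 example with generators 111 and 101 can be encoded with a few lines of register arithmetic. The sketch below is a minimal encoder (Viterbi decoding is omitted); it flushes the register with K-1 zeros to terminate in the all-zero state, as described above.

    def conv_encode(bits, g0=0b111, g1=0b101, k_constraint=3):
        # Rate-1/2 convolutional encoder, constraint length K=3.
        # The register holds the current bit plus K-1 previous bits; each output
        # bit is the mod-2 sum of the register taps selected by a generator.
        state = 0                                    # K-1 = 2 memory bits
        out = []
        for b in bits + [0] * (k_constraint - 1):    # flush with zeros to end in state 0
            reg = (b << (k_constraint - 1)) | state  # current input bit + memory
            out.append(bin(reg & g0).count("1") % 2) # output from generator g0 = 111
            out.append(bin(reg & g1).count("1") % 2) # output from generator g1 = 101
            state = reg >> 1                         # shift: drop the oldest bit
        return out

    print(conv_encode([1, 0, 1, 1]))                 # two output bits per input bit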

Advanced Codes

Advanced codes represent a class of error-correcting codes designed to approach the theoretical limits of channel capacity, particularly the Shannon limit, through sophisticated constructions and decoding algorithms. These codes, including turbo, low-density parity-check (LDPC), and polar codes, leverage iterative processing and graphical representations to achieve near-optimal performance in practical systems. Unlike earlier linear block or convolutional codes, advanced codes emphasize sparsity, concatenation, and polarization to minimize decoding complexity while maximizing error correction capability. Turbo codes consist of parallel concatenated convolutional codes, where two or more systematic convolutional encoders process the input data, separated by an interleaver to randomize the sequence and decorrelate errors between the encoders. This structure produces parity bits from both encoders, enabling iterative decoding that exchanges soft information between component decoders to refine estimates of the transmitted bits. The interleaver plays a critical role in preventing burst errors from propagating across iterations, allowing the decoder to converge on highly reliable decisions after several passes. Turbo codes were introduced as a breakthrough in achieving performance close to the Shannon limit with feasible computational resources. LDPC codes are defined by a sparse parity-check matrix, where the low density of nonzero entries (typically three to six per column) forms a Tanner graph that models the code constraints. Decoding relies on belief propagation, specifically the sum-product algorithm, which iteratively passes messages along the graph edges to compute marginal probabilities for each bit, updating beliefs until convergence or a maximum iteration count is reached. This graphical approach exploits the sparsity to keep decoding complexity roughly linear in block length, making LDPC codes scalable for high-rate applications. The codes originated from Gallager's early work on probabilistic decoding but gained prominence with optimized irregular degree distributions that further enhance performance. Polar codes exploit the phenomenon of channel polarization, where recursive application of a basic kernel (such as the 2x2 Arikan kernel) transforms a set of identical noisy channels into a subset of nearly perfect channels and a complementary subset of noisy ones. Information bits are placed on the reliable (polarized) channels, while frozen bits (known constants) occupy the unreliable ones, enabling capacity-achieving performance in the asymptotic limit. The standard successive cancellation (SC) decoder processes bits sequentially, using decisions on prior bits to estimate likelihoods for subsequent ones, which achieves the code's designed capacity asymptotically but can be enhanced by SC list decoding. In SC list decoding, the decoder maintains a list of candidate paths during the successive process, selecting the most likely one at the end to improve error rates at finite lengths. Polar codes were proven to attain symmetric capacity for binary-input discrete memoryless channels. These advanced codes demonstrate remarkable performance, operating within 0.5 dB of the Shannon limit at a bit error rate (BER) of 10^{-5} for typical rates around 1/2 to 8/9, as validated through simulations and theoretical bounds. This proximity to the theoretical limit underscores their efficiency in bandwidth-constrained environments. In practical deployments, LDPC codes form a mandatory component of the IEEE 802.11ax (Wi-Fi 6) standard for high-throughput wireless local area networks, providing robust error correction in multipath fading channels.
Similarly, polar codes are specified by 3GPP for control channels in 5G New Radio (NR), handling short payloads with low latency requirements.
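
Production LDPC decoders run sum-product message passing on long, sparse Tanner graphs; the sketch below shows only the simplest hard-decision relative of that idea, bit-flipping decoding, and reuses the small [7,4] parity-check matrix from the linear block code discussion as a stand-in (it is not actually low-density). The chosen single-error position lies on all three checks so that the toy decoder succeeds; on such a small, dense matrix the method is fragile, which is precisely why real LDPC codes rely on sparsity and soft information.

    # Toy hard-decision bit-flipping decoder on a small example parity-check matrix.
    H = [[1, 1, 0, 1, 1, 0, 0],
         [1, 0, 1, 1, 0, 1, 0],
         [0, 1, 1, 1, 0, 0, 1]]

    def bit_flip_decode(r, H, max_iters=10):
        r = r.copy()
        n = len(r)
        for _ in range(max_iters):
            # Check nodes: which parity equations are currently violated?
            unsatisfied = [sum(H[i][j] * r[j] for j in range(n)) % 2
                           for i in range(len(H))]
            if not any(unsatisfied):
                return r                              # all checks satisfied
            # Bit nodes: count violated checks each bit participates in.
            votes = [sum(unsatisfied[i] for i in range(len(H)) if H[i][j])
                     for j in range(n)]
            worst = max(votes)
            # Flip the bit(s) involved in the most violated checks, then iterate.
            r = [b ^ (v == worst) for b, v in zip(r, votes)]
        return r

    codeword = [1, 0, 1, 1, 0, 1, 0]          # a valid codeword of H (message 1011, systematic)
    assert bit_flip_decode(codeword, H) == codeword
    noisy = codeword.copy(); noisy[3] ^= 1    # flip a bit that participates in all three checks
    print(bit_flip_decode(noisy, H) == codeword)   # True: the single error is fixed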

Applications

Telecommunications

In telecommunications, error detection and correction techniques are essential for maintaining data integrity across diverse transmission media, including copper lines, radio links, and optical fibers, where noise, interference, fading, and attenuation can degrade signal quality. These methods employ both detection mechanisms, such as checksums, and correction strategies, including forward error correction (FEC) codes and automatic repeat request (ARQ) protocols, to achieve low bit error rates (BER) suitable for high-reliability applications like voice, video, and data services. The choice of technique depends on channel characteristics, latency constraints, and feedback availability, with hybrid approaches often combining detection and correction for optimal performance. In deep-space communications, concatenated coding schemes using Reed-Solomon (RS) outer codes and convolutional inner codes form the basis of the Consultative Committee for Space Data Systems (CCSDS) standard for telemetry. This standard specifies a constraint-length-7, rate-1/2 convolutional code paired with a (255,223) RS code over GF(2^8), which corrects up to 16 symbol errors per block to combat burst errors and low signal-to-noise ratios inherent in long-distance space links. The Voyager spacecraft missions exemplified this approach, employing the (255,223) RS code concatenated with a rate-1/2 convolutional code to achieve a post-decoding BER as low as 10^{-5}, enabling reliable transmission of scientific data from billions of kilometers away despite weak signals. Satellite and broadcast television systems leverage low-density parity-check (LDPC) codes for efficient FEC in standards like DVB-S2, which supports a range of code rates from 1/4 to 9/10 to balance error correction capability with throughput. In DVB-S2, LDPC inner codes are concatenated with BCH outer codes, particularly for modulation schemes like QPSK, allowing robust performance in noisy satellite channels with fading and interference. This configuration enables high-throughput video distribution while maintaining low BER, with code rates adjustable based on link conditions to support services from direct-to-home broadcasting to mobile satellite communications. Internet protocols integrate error control at the transport layer, where TCP employs ARQ with mandatory checksums to detect and retransmit corrupted segments, ensuring reliable end-to-end delivery over potentially lossy networks. In contrast, UDP provides optional checksums for basic error detection but supports FEC extensions, such as repair packets in RTP, for real-time applications like streaming where low latency precludes full ARQ. These mechanisms complement lower-layer error handling in networks, adapting to varying channel qualities in wired and wireless links. Modern cellular networks, including 5G New Radio (NR), utilize polar codes for control channels and LDPC codes for data channels to provide scalable FEC tailored to enhanced mobile broadband requirements. Polar codes offer low-complexity decoding for short blocks in control signaling, while LDPC codes support high-throughput data transmission with flexible rates and lengths. 5G NR further enhances reliability through hybrid ARQ (HARQ) with up to 16 processes per UE, enabling incremental redundancy retransmissions to adaptively correct errors without excessive latency. Optical fiber systems for long-haul and submarine links rely on FEC to mitigate accumulated noise and dispersion over thousands of kilometers, typically achieving a post-FEC BER of 10^{-15} with approximately 7% overhead. Standards like the RS(255,239) code, used in optical transport networks, add redundancy to correct a pre-FEC BER of up to 10^{-3}, extending reach and capacity for high-speed data transport in systems like 100G Ethernet over dense wavelength-division multiplexing.
This overhead enables error-free performance across global undersea cables, supporting the backbone of international data traffic.

Data Storage

Error detection and correction are essential in data storage systems to ensure integrity against physical imperfections, environmental factors, and wear over time. In magnetic, optical, and solid-state devices, specialized coding schemes mitigate bit errors that arise during read/write operations or due to media degradation. These techniques typically combine detection mechanisms, such as cyclic redundancy checks (CRCs) in sector headers for initial error flagging, with correction codes that recover data without user intervention. In hard disk drives (HDDs), run-length limited (RLL) codes constrain the encoding of data to optimize magnetic recording density while aiding error detection by limiting consecutive zero transitions, often paired with low-density parity-check (LDPC) codes for robust correction. Error-correcting codes (ECC) in HDDs are designed to detect and correct sector errors, typically handling up to 100 bits per 512-byte sector through on-the-fly processing by the drive controller. This capability addresses random bit flips from magnetic interference or head misalignment, ensuring reliable retrieval even as areal densities exceed 1 Tb/in². Optical storage media, such as compact discs (CDs) and digital versatile discs (DVDs), employ cross-interleaved Reed-Solomon (CIRC) coding to combat burst errors caused by scratches or fingerprints on the disc surface. The CIRC scheme uses two layers of shortened Reed-Solomon codes with deliberate interleaving to spread errors across frames, enabling correction of burst lengths up to approximately 3.5 mm (or 3,500 bits) while detecting longer ones for interpolation or concealment in audio/video playback. This design, standardized for CD-DA and extended to DVD formats, reduces the residual bit error rate to below 10^{-12} post-correction, prioritizing playability over perfect fidelity. NAND flash memory in solid-state drives (SSDs) relies on Bose-Chaudhuri-Hocquenghem (BCH) or LDPC codes for page-level error correction, addressing cell-to-cell interference and charge leakage that increase with scaling to multi-level cells (MLC, TLC, QLC). BCH codes offer guaranteed correction of up to 40-60 bits per 1-4 KB page in enterprise TLC NAND, while LDPC provides superior performance in high-noise scenarios through iterative soft-decision decoding. Wear-leveling algorithms integrate error tracking by monitoring program/erase (P/E) cycle counts and raw bit error rates (RBER) per block, dynamically remapping data to underused regions to balance wear and preempt uncorrectable errors. Redundant array of independent disks (RAID) configurations enhance storage reliability at the system level by computing parity across multiple disks, enabling detection and correction of entire drive failures without data loss. In RAID-5, for instance, a single parity stripe per block group allows reconstruction of data from a failed disk using XOR operations on surviving drives, tolerating one failure while distributing parity overhead evenly. Advanced RAID-6 variants extend this to two failures via dual parities, crucial for large-scale arrays where annual drive failure rates approach 1-2%. In the 2020s, NVMe SSDs have adopted soft-decision LDPC decoding to achieve unprecedented reliability, targeting an uncorrectable bit error rate (UBER) of 10^{-16} or lower in enterprise environments. This trend supports exabyte-scale deployments in data centers, where LDPC's ability to leverage multi-read voltage thresholds corrects over 100 bits per sector amid 3D NAND's rising error floors, often combined with host-side checks for end-to-end protection.
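
The RAID-5 reconstruction described above reduces to bytewise XOR. The following sketch stripes three toy data blocks with one parity block and rebuilds a "failed" block from the survivors; block contents and sizes are arbitrary.

    def xor_blocks(*blocks: bytes) -> bytes:
        # Bytewise XOR of equal-length blocks.
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    # Three data blocks striped across disks, plus one parity block (RAID-5 style).
    d0, d1, d2 = b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"
    parity = xor_blocks(d0, d1, d2)

    # The disk holding d1 fails: rebuild it from the survivors and the parity block.
    rebuilt_d1 = xor_blocks(d0, d2, parity)
    print(rebuilt_d1 == d1)        # True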

Computing Hardware

In computing hardware, error detection and correction mechanisms are essential for maintaining data integrity in components such as main memory (DRAM) and processor caches, where transient faults known as soft errors can occur due to environmental factors including cosmic rays. These soft errors, which flip individual bits in memory cells, arise primarily from high-energy particles striking the silicon, with an estimated rate of approximately 10^{-12} errors per bit per hour (1000–5000 FIT per Mbit) in DRAM under terrestrial conditions. Early motivations for such techniques trace back to the unreliability of vacuum tube-based computers in the late 1940s, prompting Richard Hamming to develop error-correcting codes to automate error handling during routine operations.

Error-correcting code (ECC) memory, commonly implemented in server-grade systems, employs single-error correction double-error detection (SECDED) schemes derived from Hamming codes to protect 64-bit data words using 8 additional check bits, enabling correction of single-bit errors and detection of double-bit errors within the word. For instance, ECC DDR4 modules integrate these codes at the module level to safeguard against corruption during high-speed data transfers. In contrast, consumer-grade memory typically lacks full ECC but may incorporate basic parity checking for error detection in select configurations, though modern non-ECC variants often forgo even this to prioritize cost and density. Within CPUs, cache hierarchies use simpler parity bits on tag fields to detect errors in address mapping, ensuring that faulty cache lookups trigger misses rather than silent data propagation.

Graphics processing units (GPUs) and specialized accelerators for AI training demand higher-throughput error correction to handle massive parallel computations, where low-density parity-check (LDPC) codes are increasingly applied for their efficiency in correcting multiple bit errors at scale. NVIDIA's H100 GPU, for example, incorporates ECC on its HBM3 memory to mitigate soft errors during intensive workloads, supporting reliable training of large models without frequent interruptions. Data center environments benefit significantly from ECC memory, as studies indicate that memory errors contribute to about 50% of hardware-induced server downtime; ECC implementation can reduce such incidents by automatically correcting transient faults, thereby enhancing overall system availability. As of 2025, advancements in chiplet-based architectures on 3nm processes, such as AMD's CDNA 4 for AI accelerators, integrate on-die ECC directly into compute and I/O chiplets to address increased error susceptibility from denser transistors and multi-die interconnects, ensuring robust error handling across heterogeneous designs.
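The SECDED behavior described above can be sketched with a generic Hamming-plus-overall-parity construction. This is an illustration only, shown for 8 data bits rather than the 64-bit words of ECC DIMMs, and it does not reproduce the exact bit layout of any particular memory controller; the function names are hypothetical.

```python
def hamming_secded_encode(data_bits):
    """Encode 0/1 data bits into a Hamming codeword plus an overall parity bit
    (SECDED: single-error correction, double-error detection)."""
    m = len(data_bits)
    r = 0
    while (1 << r) < m + r + 1:                   # number of Hamming check bits
        r += 1
    n = m + r
    code = [0] * (n + 1)                          # index 0 holds the overall parity bit
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):                       # non-power-of-two positions carry data
            code[pos] = next(it)
    for k in range(r):                            # check bit at 2^k covers positions with bit k set
        p = 1 << k
        code[p] = sum(code[pos] for pos in range(1, n + 1)
                      if pos & p and pos != p) % 2
    code[0] = sum(code[1:]) % 2                   # overall parity over the whole word
    return code

def hamming_secded_decode(code):
    """Return (data bits, status): 'ok', 'corrected', or 'double-error detected'."""
    n = len(code) - 1
    syndrome = 0
    for k in range(n.bit_length()):
        p = 1 << k
        if p <= n and sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2:
            syndrome |= p                         # syndrome names the failing position
    overall = sum(code) % 2                       # includes the stored overall parity bit
    code = list(code)
    if syndrome and overall:                      # single error at position `syndrome`
        code[syndrome] ^= 1
        status = "corrected"
    elif syndrome and not overall:                # even number of flips: uncorrectable
        return None, "double-error detected"
    elif not syndrome and overall:                # only the overall parity bit flipped
        code[0] ^= 1
        status = "corrected"
    else:
        status = "ok"
    data = [code[pos] for pos in range(1, n + 1) if pos & (pos - 1)]
    return data, status

word = [1, 0, 1, 1, 0, 1, 1, 0]                   # 8 data bits, 4 check bits, 1 overall parity
cw = hamming_secded_encode(word)
cw[5] ^= 1                                        # inject a single-bit soft error
assert hamming_secded_decode(cw) == (word, "corrected")
```

Flipping a second bit in `cw` drives the decoder into the "double-error detected" branch, mirroring how SECDED memory corrects one bit per word silently but raises an uncorrectable-error fault for two.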

    Jun 10, 2025 · The AMD CDNA™ 4 architecture highlights one of the major advantages of a chiplet-based, heterogeneous approach to building a compute platform – ...