Symmetric-key algorithm
A symmetric-key algorithm, also known as a secret-key algorithm, is a type of cryptographic algorithm that employs the same secret key to perform both the encryption of plaintext into ciphertext and the decryption of ciphertext back into plaintext.[1] These algorithms form a foundational component of modern cryptography, providing efficient mechanisms for securing data confidentiality, integrity, and authenticity in applications ranging from secure communications to file storage.[2]
Historical Development
The origins of standardized symmetric-key algorithms trace back to the 1970s, when the U.S. National Bureau of Standards (now NIST) sought a robust method for protecting unclassified but sensitive government information.[3] This effort culminated in the adoption of the Data Encryption Standard (DES) in 1977 as Federal Information Processing Standard (FIPS) 46, a 64-bit block cipher developed by IBM with input from the National Security Agency (NSA), featuring a 56-bit key length.[3][4]
By the late 1990s, advances in computing power rendered DES vulnerable to brute-force attacks, prompting NIST to initiate a public competition in 1997 to select a successor.[3]
The winning submission, Rijndael, was standardized as the Advanced Encryption Standard (AES) in 2001 under FIPS 197, supporting key sizes of 128, 192, or 256 bits and operating on 128-bit blocks, thereby establishing it as the de facto global standard for symmetric encryption.[3]
Key Characteristics and Operations
Symmetric-key algorithms typically fall into two categories: block ciphers, which process data in fixed-size blocks (e.g., AES encrypts 128-bit blocks using substitution, permutation, and key mixing operations), and stream ciphers, which generate a keystream to encrypt data sequentially bit-by-bit or byte-by-byte.[1] A core principle is the secrecy of the shared key, which must remain confidential to prevent unauthorized access; the algorithm itself is public, relying on the key's unpredictability for security.[5]
Advantages and Challenges
One primary advantage of symmetric-key algorithms is their computational efficiency, enabling rapid encryption and decryption of large data volumes compared to asymmetric alternatives, making them ideal for resource-constrained environments and high-throughput scenarios like disk encryption or VPNs.[2][6] However, they face the inherent key distribution problem, where securely sharing the secret key between parties without prior secure channels poses significant risks, often necessitating additional protocols or hybrid systems for key exchange.[7][6]
Additionally, while scalable for bulk data, symmetric algorithms lack built-in mechanisms for non-repudiation or authentication, typically requiring integration with other cryptographic primitives like message authentication codes (MACs).[6]
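Since symmetric encryption alone provides no integrity, a MAC is typically computed alongside it. A minimal sketch using Python's standard hmac module, where the key and message values are illustrative placeholders:

```python
import hmac
import hashlib

def make_tag(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message with the shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_tag(key: bytes, message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(make_tag(key, message), tag)

key = b"shared-secret-key"   # toy key; real keys come from a CSPRNG
msg = b"ciphertext or plaintext to authenticate"
tag = make_tag(key, msg)

assert verify_tag(key, msg, tag)             # genuine message accepted
assert not verify_tag(key, msg + b"!", tag)  # tampering detected
```

Constant-time comparison (`compare_digest`) matters here: a naive byte-by-byte comparison can leak tag prefixes through timing side channels.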
Fundamentals
Definition and basic operation
A symmetric-key algorithm, also referred to as a secret-key algorithm, is a type of cryptographic algorithm that employs the same cryptographic key for both encrypting plaintext into ciphertext and decrypting ciphertext back into plaintext.[1] This shared key must be kept secret and securely distributed to the communicating parties beforehand, distinguishing it from systems where keys differ for encryption and decryption.[8] The foundational mathematical model for such systems was established by Claude Shannon in 1949, defining a secrecy system as a probabilistic set of transformations T from a plaintext space M (possible messages) to a ciphertext space C (possible cryptograms), where each transformation is selected via a key from a key space K, with the key chosen according to a probability distribution.[9] In its basic operation, a symmetric-key algorithm follows a straightforward process centered on the shared key. First, a key-generation procedure produces a secret key k \in K from a security parameter \lambda, typically ensuring sufficient randomness and length to resist attacks; this key is then securely exchanged between the sender (Alice) and receiver (Bob), often via a separate secure channel.[10] For encryption, Alice computes the ciphertext c = E(k, m), where E is the encryption function, m \in M is the plaintext message, and the operation may incorporate additional elements like a random nonce r to ensure freshness, such as c = (r, F(k, r) \oplus m) in simple stream-like constructions, with F denoting a key-derived function and \oplus bitwise XOR.[11] The ciphertext c is then transmitted over an insecure channel to Bob. 
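The flow just described (key generation, nonce-based encryption of the form c = (r, F(k, r) \oplus m), and XOR-based decryption) can be sketched in Python. Instantiating F as an iterated SHA-256 over the key and nonce is an assumption made purely for illustration, not a vetted PRF construction:

```python
import os
import hashlib

def F(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keyed function F(k, r): stretch SHA-256(key || nonce || counter)
    into a keystream of the requested length. Illustrative only."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def encrypt(key: bytes, message: bytes) -> tuple[bytes, bytes]:
    """c = (r, F(k, r) XOR m) with a fresh random nonce r for each message."""
    r = os.urandom(16)
    pad = F(key, r, len(message))
    return r, bytes(a ^ b for a, b in zip(pad, message))

def decrypt(key: bytes, ciphertext: tuple[bytes, bytes]) -> bytes:
    """Correctness D(k, E(k, m)) = m holds because XOR is self-inverse."""
    r, body = ciphertext
    pad = F(key, r, len(body))
    return bytes(a ^ b for a, b in zip(pad, body))

k = b"example shared secret agreed out of band"
m = b"attack at dawn"
assert decrypt(k, encrypt(k, m)) == m
```

The fresh nonce per message ensures two encryptions of the same plaintext yield different ciphertexts.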
Upon receipt, Bob performs decryption using the inverse function D(k, c) = m, recovering the original plaintext, provided the key matches and no transmission errors occur.[11] The scheme satisfies correctness: for all keys k \in K and messages m \in M, D(k, E(k, m)) = m.[11] This symmetry in key usage enables efficient computation, as the encryption and decryption operations are computationally lightweight compared to alternatives, but it relies critically on secure key distribution to prevent compromise by adversaries.[12] Shannon's model emphasizes that security arises from the key's secrecy, with perfect secrecy achievable if the key is at least as long as the message and uniformly random, rendering ciphertext statistically independent of plaintext.[9]
Comparison to asymmetric cryptography
Symmetric-key algorithms employ a single secret key shared between the communicating parties for both encryption and decryption processes. In contrast, asymmetric cryptography, also known as public-key cryptography, utilizes a pair of mathematically related keys: a public key available to anyone for encryption or verification, and a private key kept secret by the owner for decryption or signing. This fundamental difference in key structure addresses distinct security needs, with symmetric methods relying on the absolute secrecy of the shared key, while asymmetric systems base their security on the computational difficulty of inverting certain mathematical functions, such as integer factorization or discrete logarithms.[13] One primary limitation of symmetric-key cryptography is the key distribution problem: securely exchanging the shared secret key between parties over an insecure channel is challenging without prior secure communication, potentially exposing the key to interception. Asymmetric cryptography resolves this by allowing the public key to be freely distributed, enabling secure key exchange without prior secrets, as introduced in the seminal work on public-key distribution systems. This innovation, proposed by Diffie and Hellman, revolutionized cryptography by eliminating the need for a trusted courier or pre-established secure channel for key setup in many scenarios. However, asymmetric systems introduce their own vulnerabilities, such as the risk of private key compromise if not properly protected, and require careful management of key pairs.[14][15] In terms of performance, symmetric algorithms are significantly more efficient, often orders of magnitude faster than their asymmetric counterparts, making them ideal for resource-constrained environments or bulk data encryption. 
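The Diffie-Hellman exchange mentioned above can be illustrated with deliberately tiny, insecure parameters (a classic textbook choice); real deployments use groups of 2048 bits or more:

```python
# Toy Diffie-Hellman key agreement. The parameters p = 23, g = 5 are
# deliberately insecure textbook values chosen only for readability.
p, g = 23, 5

a_secret = 6    # Alice's private exponent, never transmitted
b_secret = 15   # Bob's private exponent, never transmitted

A = pow(g, a_secret, p)   # Alice's public value, sent in the clear
B = pow(g, b_secret, p)   # Bob's public value, sent in the clear

# Each side combines its own secret with the other's public value.
alice_shared = pow(B, a_secret, p)
bob_shared = pow(A, b_secret, p)

assert alice_shared == bob_shared   # g^(ab) mod p on both sides
```

The agreed value (here g^(ab) mod p) would then be fed through a key-derivation function to produce the symmetric key used for bulk encryption.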
For instance, symmetric ciphers like AES can process data at speeds exceeding gigabits per second on modern hardware, whereas public-key operations, such as RSA encryption, may be 100 to 1,000 times slower due to the complexity of large-integer arithmetic. To achieve equivalent security levels—measured in bits of security against brute-force or known attacks—symmetric keys are much shorter; a 128-bit symmetric key offers security comparable to a 3,072-bit RSA modulus or a 256-bit elliptic curve key in asymmetric systems. NIST guidelines specify these equivalences to ensure consistent protection levels across cryptographic primitives.[16][6] Asymmetric cryptography excels in scenarios requiring non-repudiation, such as digital signatures, or initial key establishment, but its computational overhead limits direct use for large-scale data protection. Consequently, hybrid cryptosystems are prevalent, combining both approaches: asymmetric methods securely exchange a temporary symmetric key, which is then used to encrypt the bulk data symmetrically. This leverages the strengths of each—efficient bulk encryption from symmetric algorithms and secure key distribution from asymmetric ones—as recommended in key management standards for federal systems.[6][15]
Types
Block ciphers
A block cipher is a symmetric-key cryptographic algorithm that operates on fixed-length groups of bits, known as blocks, transforming plaintext blocks into ciphertext blocks of equal size using a secret key for both encryption and decryption.[17] Typically, block sizes range from 64 to 256 bits, with common values being 64 bits for older designs and 128 bits for modern ones, ensuring efficient processing while providing a balance between security and computational overhead.[18] The encryption process involves iterative rounds of operations derived from Claude Shannon's principles of confusion and diffusion: confusion complicates the relationship between the key and the ciphertext to thwart key recovery, while diffusion ensures that changes in a single plaintext bit affect multiple ciphertext bits, spreading statistical dependencies.[19] Block ciphers are constructed using structured frameworks to achieve these properties securely and efficiently. The Feistel network, a widely adopted structure, divides the input block into two equal halves and applies a round function—often involving substitutions and key-dependent operations—to one half, XORing the output with the other half before swapping the halves for the next round.[20] This design permits decryption by simply reversing the round order and using the same round function, avoiding the need for invertible components. 
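A toy version of the Feistel rounds just described, with a hash-based round function chosen only for illustration, shows how decryption reuses the same code with the subkey order reversed:

```python
import hashlib

def f(half: bytes, subkey: bytes) -> bytes:
    """Toy round function: any keyed, non-invertible mixing will do,
    since the Feistel structure never needs to invert f."""
    return hashlib.sha256(half + subkey).digest()[: len(half)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block: bytes, subkeys: list[bytes]) -> bytes:
    L, R = block[: len(block) // 2], block[len(block) // 2 :]
    for k in subkeys:
        # L_i = R_{i-1};  R_i = L_{i-1} XOR f(R_{i-1}, K_i)
        L, R = R, xor(L, f(R, k))
    return R + L   # undo the last swap so decrypt can reuse the same loop

def feistel_decrypt(block: bytes, subkeys: list[bytes]) -> bytes:
    # Identical network, subkeys applied in reverse order.
    return feistel_encrypt(block, subkeys[::-1])

keys = [b"k1", b"k2", b"k3", b"k4"]   # toy subkeys; real ones come from a key schedule
pt = b"8bytes!!"                      # one 64-bit block
ct = feistel_encrypt(pt, keys)
assert feistel_decrypt(ct, keys) == pt
```

Note that nothing about `f` is invertible; the XOR-and-swap structure alone guarantees that decryption undoes encryption.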
The Data Encryption Standard (DES), standardized in 1977, exemplifies a Feistel cipher with a 64-bit block, 56-bit key, and 16 rounds, where each round incorporates expansion, substitution via eight S-boxes, and permutation to enhance diffusion.[21] In contrast, substitution-permutation networks (SPNs) build security through layered applications of key addition, nonlinear substitution (S-boxes), linear mixing (often matrix multiplications over finite fields), and bit permutations across multiple rounds.[22] The Advanced Encryption Standard (AES), selected in 2001 from the Rijndael algorithm, employs an SPN structure with a 128-bit block and variable key sizes of 128, 192, or 256 bits, corresponding to 10, 12, or 14 rounds; each round features byte substitutions, row shifts, column mixing, and key XORs to provide strong resistance to linear and differential attacks.[23] These structures prioritize provable security margins, with round counts calibrated to exceed known attack complexities, ensuring block ciphers remain foundational for secure data protection in symmetric cryptography.[24]
Stream ciphers
A stream cipher is a type of symmetric-key algorithm that encrypts plaintext one bit or byte at a time by combining it with a pseudorandom keystream generated from a secret key.[25] Unlike block ciphers, which process fixed-size blocks of data, stream ciphers operate continuously on a data stream, making them suitable for real-time applications such as wireless communications where data arrives incrementally.[26] The core operation involves a pseudorandom number generator (PRNG) initialized with the key (and often an initialization vector) to produce the keystream, which is then combined with the plaintext via bitwise XOR to yield the ciphertext: c_i = p_i \oplus k_i, where p_i is the i-th plaintext bit, k_i is the corresponding keystream bit, and c_i is the ciphertext bit.[25] Decryption reverses this process using the same key and IV to regenerate the keystream. The concept of stream ciphers traces back to the early 20th century, with Gilbert Vernam's 1917 invention of the one-time pad, a perfect secrecy system using a truly random keystream as long as the message, though impractical for key distribution. 
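The one-time pad's perfect secrecy depends on never reusing the keystream; the failure mode can be demonstrated directly, since XORing two ciphertexts produced under the same pad cancels the key and leaks the XOR of the plaintexts:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# One-time pad: a truly random keystream as long as the message.
p1 = b"attack at dawn"
p2 = b"retreat now!!!"
keystream = os.urandom(len(p1))   # single-use pad

c1 = xor(p1, keystream)
c2 = xor(p2, keystream)           # MISTAKE: the same pad reused

# An eavesdropper who never sees the key still learns p1 XOR p2,
# because the keystream cancels: c1 XOR c2 = p1 XOR p2.
assert xor(c1, c2) == xor(p1, p2)

# Legitimate decryption is the same XOR with the pad.
assert xor(c1, keystream) == p1
```

With known-plaintext structure (natural language, file headers), the leaked p1 XOR p2 is usually enough to recover both messages, which is why nonce or key reuse in stream ciphers is catastrophic.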
Modern stream ciphers emerged in the mid-20th century for teletype encryption, evolving to use pseudorandom keystreams for efficiency while aiming to approximate one-time pad security.[25] They are classified into synchronous stream ciphers, where the keystream is generated independently of the plaintext and ciphertext, requiring precise synchronization between sender and receiver, and self-synchronizing stream ciphers, which derive the keystream from previous ciphertext blocks to recover from transmission errors automatically.[25] Keystream generation typically relies on linear feedback shift registers (LFSRs) combined with nonlinear functions to ensure unpredictability, as pure LFSRs are vulnerable to known attacks like the Berlekamp-Massey algorithm.[27] Seminal designs include RC4, introduced by Ron Rivest in 1987 for its simplicity and speed, widely used in protocols like WEP and TLS until cryptanalytic weaknesses, such as biases in the initial keystream, rendered it insecure by the early 2000s.[28] More robust modern examples include Salsa20, developed by Daniel J. 
Bernstein in 2005 as a high-speed, software-optimized cipher resistant to timing attacks, and its variant ChaCha20, refined in 2008 for better diffusion and performance on simple processors, now standardized in IETF protocols like TLS 1.3.[29][30] For resource-constrained environments, such as IoT devices, lightweight stream ciphers like Grain-128AEAD from the eSTREAM project (2004–2008) provide authenticated encryption with low gate counts (around 2,500 GE in typical hardware implementations) and high throughput (up to 33 Gbps in optimized parallel designs, though ~0.5 Gbps for minimal area configurations), selected for their balance of security and efficiency after extensive cryptanalysis.[31][32] Stream ciphers offer advantages such as minimal error propagation—a single bit error affects only the corresponding bit in decryption—and low latency for streaming data, but their security hinges on the keystream's randomness; reusing keys or IVs can lead to devastating attacks, such as keystream recovery via the XOR of two ciphertexts.[26] Ongoing research continues to advance these designs: the NIST Lightweight Cryptography standardization process, finalized in August 2025, selected the Ascon family as the standard for lightweight authenticated encryption (with Grain-128AEAD among the finalists), emphasizing resistance to side-channel attacks and robustness against emerging quantum threats.[33][34]
Design and construction
Core principles
Symmetric-key algorithms rely on a single shared secret key for both encryption and decryption, with their design fundamentally guided by principles that ensure the ciphertext reveals no information about the plaintext without the key. The foundational concepts stem from Claude Shannon's 1949 paper, which established the theoretical basis for secure secrecy systems by introducing the notions of confusion and diffusion as essential to resisting cryptanalytic attacks. These principles aim to make the encryption process computationally infeasible to reverse without knowledge of the key, while maintaining efficiency for legitimate users.[35] Confusion obscures the statistical relationship between the plaintext, key, and ciphertext, complicating any attempt to deduce the key from observed inputs and outputs. It is typically implemented through nonlinear components, such as substitution boxes (S-boxes), that map input bits to output bits in a non-linear fashion, ensuring that even small changes in the key lead to unpredictable alterations in the encryption outcome. Diffusion, on the other hand, spreads the influence of each plaintext bit and key bit across many ciphertext bits, achieving an "avalanche effect" where a single-bit change affects approximately half the output bits after sufficient processing. This property is realized through linear operations like permutations, mixing layers, or bitwise XORs that propagate changes throughout the data block. Together, confusion and diffusion transform the plaintext into a pseudorandom ciphertext that withstands frequency analysis and other statistical exploits.[35][36] In block ciphers, these principles are operationalized through iterative structures like Feistel networks or substitution-permutation networks (SPNs). 
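The avalanche effect described above can be observed with any well-diffusing primitive; here SHA-256 stands in for a block cipher (an illustrative substitution, since it exhibits the same property), with a single flipped input bit changing roughly half of the 256 output bits:

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count the number of differing bits between two equal-length strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"symmetric-key avalanche demo")
h1 = hashlib.sha256(bytes(msg)).digest()

msg[0] ^= 0x01   # flip a single input bit
h2 = hashlib.sha256(bytes(msg)).digest()

flipped = bit_diff(h1, h2)
# Of 256 output bits, roughly half change: the avalanche effect.
assert 80 < flipped < 176
```

A cipher or hash lacking this property would let an attacker correlate input and output changes, undermining both confusion and diffusion.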
A Feistel network divides the input block into two halves, applying a round function (combining substitution for confusion and key mixing for diffusion) to one half before swapping and recombining, allowing decryption by reversing the rounds without inverting the function. SPNs, as in the Advanced Encryption Standard (AES), alternate layers of nonlinear substitution (for confusion), linear diffusion (via matrix multiplications over finite fields), and key addition across multiple rounds to amplify security. The number of rounds is chosen to ensure complete diffusion, typically scaling with block and key sizes to resist exhaustive search and differential attacks.[37][38] For stream ciphers, the core principles adapt confusion and diffusion to sequential processing, where a pseudorandom keystream is generated from the key and combined with the plaintext via XOR. The keystream must exhibit perfect secrecy properties, being statistically indistinguishable from random noise, with high linear complexity and long periods to prevent correlation or algebraic attacks. Design emphasizes a large internal state (at least twice the desired security level in bits) and nonlinear feedback mechanisms, such as in linear feedback shift registers (LFSRs) combined with nonlinear filters, to achieve diffusion over time while maintaining high-speed operation suitable for real-time applications.[39][27] A critical auxiliary principle in symmetric-key design is the key schedule, which derives round-specific subkeys from the master key to introduce variability and prevent slide or related-key attacks. Subkeys must maintain full entropy and avoid weak patterns, often using nonlinear expansions to enhance confusion. Overall, these principles are validated through rigorous cryptanalysis, ensuring the algorithm's resistance to both classical and emerging threats while prioritizing computational efficiency.[37][36]
Structural approaches
In symmetric-key algorithm design, structural approaches provide the foundational frameworks for constructing ciphers that ensure security through confusion (obscuring the relationship between plaintext and ciphertext) and diffusion (spreading the influence of each plaintext bit across the ciphertext). These approaches are primarily applied to block ciphers, where the plaintext is divided into fixed-size blocks, and the structure iterates over multiple rounds to transform the data under a secret key. The most prominent structures include Feistel networks, substitution-permutation networks (SPNs), and Lai-Massey schemes, each offering trade-offs in invertibility, efficiency, and resistance to cryptanalysis.[40] Feistel networks form a balanced, invertible structure that divides the input block into two equal halves, typically denoted as left (L) and right (R) parts, and processes them through a series of rounds. In each round i, the right half R_{i-1} is fed into a keyed round function f alongside the subkey K_i, producing an output that is XORed with the left half to yield the new right half, while the halves are then swapped: \begin{align*} L_i &= R_{i-1}, \\ R_i &= L_{i-1} \oplus f(R_{i-1}, K_i). \end{align*} This design ensures that decryption mirrors encryption by simply reversing the order of the subkeys, without requiring the inverse of the round function f, which can be any complex, non-invertible function (often incorporating S-boxes for substitution). Introduced by Horst Feistel in a 1971 patent for a block cipher system, the structure was refined in IBM's Lucifer cipher and later formalized in the Data Encryption Standard (DES).[41] DES employs 16 rounds of this network on 64-bit blocks with a 56-bit effective key, where f combines expansion, substitution via eight 6-to-4-bit S-boxes, and permutation to achieve both confusion and diffusion. 
The Feistel approach excels in hardware efficiency and has inspired generalized variants, such as type-2 or type-3 networks with multiple branches, used in modern ciphers like Camellia for enhanced security against differential attacks.[42] Substitution-permutation networks (SPNs) represent a contrasting structure that operates on the entire block through alternating layers of nonlinear substitution and linear permutation, promoting full diffusion across all bits in fewer rounds compared to Feistel designs. A typical SPN round applies a substitution layer using S-boxes—small lookup tables that replace groups of bits (e.g., 8 bits to 8 bits) to introduce confusion—followed by a linear transformation layer, such as a bit permutation or matrix multiplication over GF(2), to diffuse changes. Key addition or mixing often precedes or follows these layers, with multiple rounds (e.g., 10–14) iterating the process, and the final rounds sometimes including whitening with key material for added security. This wide-trail strategy, emphasizing linear diffusion, was pioneered in the AES (Advanced Encryption Standard) via the Rijndael cipher, designed by Joan Daemen and Vincent Rijmen. Rijndael processes 128-bit blocks in 10, 12, or 14 rounds depending on key size (128, 192, or 256 bits), using byte-oriented S-boxes based on the finite field GF(2^8) and a linear MixColumns step that multiplies by a fixed matrix to ensure avalanche effects. SPNs require invertible components for decryption but offer superior performance in software due to parallelizable operations, as seen in AES's adoption as a NIST standard.[43] The Lai-Massey scheme provides an alternative to Feistel and SPN structures, particularly suited for ciphers requiring operations over different algebraic groups (e.g., XOR for bits and modular addition for integers).
It divides the block into two halves, applies a keyed round function F to their difference, and adds the output to both halves; because (X + T) - (Y + T) = X - Y, the difference entering F would otherwise be preserved from round to round, so an orthomorphism \sigma is applied to one half to break this invariance while keeping the round invertible. Unlike Feistel, which relies solely on XOR, Lai-Massey mixes operations from different algebraic groups to balance confusion and diffusion. Formally, for halves X and Y in round i: \begin{align*} T_i &= F(X_{i-1} - Y_{i-1}, K_i), \\ X_i &= \sigma(X_{i-1} + T_i), \\ Y_i &= Y_{i-1} + T_i, \end{align*} where addition and subtraction are taken in the chosen group and K_i is the round subkey. This structure was introduced by Xuejia Lai and James Massey in the design of the International Data Encryption Algorithm (IDEA), a 64-bit block cipher with 128-bit keys and 8.5 rounds, combining bitwise XOR, addition modulo 2^{16}, and multiplication modulo 2^{16} + 1 on 16-bit words to resist both linear and differential cryptanalysis. Though less common than Feistel or SPN designs due to implementation complexity, Lai-Massey variants appear in ciphers like IDEA NXT (FOX), offering robustness in resource-constrained environments.[44][40] These structural approaches have evolved alongside provable-security analyses, such as the Luby-Rackoff results showing that three to four rounds of a Feistel construction (with analogous results later obtained for Lai-Massey) yield a pseudorandom permutation under ideal round functions. Modern designs often hybridize elements, prioritizing resistance to side-channel and quantum threats while maintaining efficiency.[20]
Implementations
Historical examples
One of the earliest modern implementations of a symmetric-key block cipher was Lucifer, developed by Horst Feistel, Walter Tuchman, Don Coppersmith, and others at IBM in the early 1970s. Lucifer utilized a Feistel network structure to process 64-bit plaintext blocks with key sizes ranging from 48 to 128 bits across its variants, employing substitution and permutation operations for diffusion and confusion. The algorithm was patented on March 19, 1974, and initially applied in securing data for automated teller machine systems developed for Lloyds Bank in the United Kingdom.[41][45] The Data Encryption Standard (DES), adopted by the National Bureau of Standards (now NIST) on January 15, 1977, as Federal Information Processing Standard (FIPS) 46, evolved directly from a modified version of Lucifer submitted by IBM in response to a 1973 solicitation for a federal encryption standard. Under consultation with the National Security Agency, IBM shortened the effective key length to 56 bits (from Lucifer's longer options) and refined the 16-round Feistel structure for 64-bit blocks to balance security and computational efficiency on hardware of the era. DES became the foundational symmetric-key algorithm for protecting unclassified government and commercial data, influencing global standards until its vulnerabilities to brute-force attacks emerged in the 1990s.[46][47] To address DES's key length limitations, the Triple Data Encryption Algorithm (Triple DES or 3DES) was proposed in the late 1970s and formally specified by NIST in 1999 as part of FIPS 46-3, applying the DES cipher three times sequentially (encrypt-decrypt-encrypt) with two or three distinct 56-bit keys to achieve an effective 112-bit or 168-bit security level on 64-bit blocks. 
This construction extended DES's usability in legacy systems, such as financial transactions and smart cards, until its deprecation in 2017 due to performance overhead and emerging threats.[21][48] The International Data Encryption Algorithm (IDEA), introduced in 1991 by Xuejia Lai and James L. Massey at ETH Zurich, marked a departure from Feistel-based designs by using an 8.5-round Lai-Massey construction on 64-bit blocks with 128-bit keys, combining bitwise XOR, addition modulo 2^{16}, and multiplication modulo 2^{16} + 1 for enhanced resistance to differential cryptanalysis. Developed under contract with Ascom-Tech AG and patented internationally, IDEA was integrated into applications like Pretty Good Privacy (PGP) email encryption during the 1990s, serving as a bridge to stronger standards before its own partial vulnerabilities were identified.
Modern standards
The Advanced Encryption Standard (AES), specified in Federal Information Processing Standard (FIPS) 197, serves as the primary symmetric-key block cipher for securing sensitive data in modern cryptographic systems.[49] AES operates on 128-bit blocks with key sizes of 128, 192, or 256 bits, providing robust resistance to known cryptanalytic attacks when implemented correctly.[49] Adopted in 2001 following a competitive evaluation process, AES has become the de facto global standard for symmetric encryption, underpinning protocols such as TLS and IPsec.[50] In high-security environments, the Commercial National Security Algorithm Suite 2.0 (CNSA 2.0) mandates the use of AES-256 for all classification levels to ensure long-term protection against brute-force attacks.[51] For resource-constrained devices, such as those in the Internet of Things (IoT), NIST finalized the Ascon family of lightweight cryptographic algorithms in Special Publication (SP) 800-232 in August 2025.[52] Ascon provides authenticated encryption (Ascon-AEAD128) and hashing (Ascon-Hash256, Ascon-XOF128) primitives based on a permutation function, offering efficiency with a small footprint suitable for devices like RFID tags and sensors. Selected from the NIST Lightweight Cryptography competition in 2023, Ascon balances security margins against side-channel attacks with low computational overhead, making it ideal for embedded systems where AES may be too demanding.[53] These standards emphasize 128-bit security levels while supporting authenticated modes to prevent tampering. While AES remains dominant for general-purpose applications, ongoing NIST guidance addresses potential quantum threats by recommending larger key sizes for symmetric algorithms, though no immediate transitions are required as Grover's algorithm only quadratically impacts brute-force resistance.[54] Implementations must adhere to validated modules under FIPS 140-3 to ensure compliance.
Modes of operation
Encryption modes
Encryption modes of operation specify how a symmetric-key block cipher processes data larger than a single block or provides stream-like encryption. These modes ensure confidentiality by transforming plaintext into ciphertext while addressing issues like error propagation, parallelism, and security against patterns in the data. The five primary confidentiality-only modes—Electronic Codebook (ECB), Cipher Block Chaining (CBC), Cipher Feedback (CFB), Output Feedback (OFB), and Counter (CTR)—are standardized for use with approved block ciphers such as AES. They were initially defined for the Data Encryption Standard (DES) in FIPS PUB 81, with ECB, CBC, CFB, and OFB appearing there, while CTR was added later in NIST SP 800-38A to enhance performance and flexibility.[55] These modes generally require an initialization vector (IV) or nonce to ensure uniqueness across encryptions, except for ECB, which does not use one. The IV must be unpredictable and unique per message to prevent attacks like keystream reuse. All modes assume the underlying block cipher is secure, but their properties differ in diffusion (spreading plaintext influence), malleability, and implementation efficiency.

| Mode | Description | Key Features and Security Notes |
|---|---|---|
| ECB | Each plaintext block P_i is independently encrypted: C_i = E_K(P_i), where E_K is the block cipher with key K. Decryption reverses this directly. | Simple and parallelizable for both encryption and decryption. No IV needed. However, it reveals patterns in plaintext (e.g., identical blocks yield identical ciphertext), making it insecure for most data; not recommended except for encrypting single blocks or random data. |
| CBC | The first block is XORed with an IV: C_1 = E_K(P_1 \oplus IV). Subsequent blocks chain: C_i = E_K(P_i \oplus C_{i-1}). Decryption XORs decrypted blocks with the previous ciphertext (IV for the first). Padding is required for non-block-aligned data. | Provides good diffusion across blocks. IV must be random and unique. Vulnerable to chosen-plaintext attacks if IV is reused and to padding oracle attacks without proper integrity checks. Widely used historically but often paired with authentication today. |
| CFB | Encrypts the IV to generate initial keystream S_1 = E_K(IV), then C_1 = P_1 \oplus S_1. Feedback uses ciphertext: S_i = E_K(C_{i-1}), C_i = P_i \oplus S_i (for full-block; smaller segments possible). Decryption mirrors this using ciphertext feedback. | Acts as a self-synchronizing stream cipher; errors affect only the current and next few blocks (up to the feedback size). Sequential only, no parallelism. Suitable for hardware with limited buffering but malleable (bit flips alter plaintext predictably). IV must be unpredictable. |
| OFB | Similar to CFB but feedback from previous keystream: S_1 = E_K(IV), C_1 = P_1 \oplus S_1; S_i = E_K(S_{i-1}), C_i = P_i \oplus S_i. Decryption uses the same keystream generation. | Pure stream cipher behavior; ciphertext errors do not propagate to subsequent plaintext (ideal for error-prone channels like wireless). Sequential and malleable. Precomputable keystream if IV known, but IV reuse exposes XOR of plaintexts. Deprecated in some contexts due to implementation risks. |
| CTR | A nonce (or IV) concatenates with a counter starting at 0: C_i = P_i \oplus E_K(\text{nonce} \| \text{counter}_i). Counter increments per block; decryption is identical (XOR with same keystream). No chaining or padding needed. | Highly parallelizable and allows random access (encrypt/decrypt any block independently). Provides confidentiality equivalent to a one-time pad if counters are unique. Nonce must never repeat with the same key, or it leaks plaintext XOR; preferred for high-speed applications like disk encryption. |
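As a sketch of the CTR construction in the table, the following uses a truncated SHA-256 as a stand-in for the block-cipher call E_K (an illustrative assumption; a real implementation would use AES):

```python
import hashlib

BLOCK = 16  # 128-bit blocks, as in AES

def E(key: bytes, block: bytes) -> bytes:
    """Stand-in for the block-cipher call E_K: SHA-256 truncated to one
    block. This only illustrates the mode, not a real cipher."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ctr_xcrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """CTR mode: C_i = P_i XOR E_K(nonce || counter_i).
    Encryption and decryption are the same operation, and each block
    can be processed independently (parallelizable, random access)."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        counter = (i // BLOCK).to_bytes(BLOCK - len(nonce), "big")
        pad = E(key, nonce + counter)
        chunk = data[i : i + BLOCK]
        out += bytes(p ^ s for p, s in zip(chunk, pad))
    return bytes(out)

key, nonce = b"toy-key", b"unique-8"      # nonce must never repeat per key
pt = b"counter mode needs no padding at all"
ct = ctr_xcrypt(key, nonce, pt)
assert ctr_xcrypt(key, nonce, ct) == pt   # the same function decrypts
```

Note that no padding is required: the final partial block simply XORs against a truncated keystream block, and reusing a (key, nonce) pair would leak the XOR of the plaintexts, as with any stream construction.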