
Symmetric-key algorithm

A symmetric-key algorithm, also known as a secret-key algorithm, is a type of cryptographic algorithm that employs the same secret key to perform both the encryption of plaintext into ciphertext and the decryption of ciphertext back into plaintext. These algorithms form a foundational component of modern cryptography, providing efficient mechanisms for securing data confidentiality, integrity, and authenticity in applications ranging from secure communications to file storage.

Historical Development

The origins of standardized symmetric-key algorithms trace back to the early 1970s, when the U.S. National Bureau of Standards (now NIST) sought a robust method for protecting unclassified but sensitive government information.
This effort culminated in the adoption of the Data Encryption Standard (DES) in 1977 as Federal Information Processing Standard (FIPS) 46, a 64-bit block cipher developed by IBM with input from the National Security Agency (NSA), featuring a 56-bit key length.
By the late 1990s, advances in computing power rendered DES vulnerable to brute-force attacks, prompting NIST to initiate a public competition in 1997 to select a successor.
The winning submission, Rijndael, was standardized as the Advanced Encryption Standard (AES) in 2001 under FIPS 197, supporting key sizes of 128, 192, or 256 bits and operating on 128-bit blocks, thereby establishing it as the de facto global standard for symmetric encryption.

Key Characteristics and Operations

Symmetric-key algorithms typically fall into two categories: block ciphers, which process data in fixed-size blocks (e.g., AES encrypts 128-bit blocks using substitution, permutation, and key-mixing operations), and stream ciphers, which generate a keystream to encrypt data sequentially bit-by-bit or byte-by-byte.
A core principle is the secrecy of the shared key, which must remain confidential to prevent unauthorized access; the algorithm itself is public, relying on the key's unpredictability for security.

Advantages and Challenges

One primary advantage of symmetric-key algorithms is their computational efficiency, enabling rapid encryption and decryption of large data volumes compared to asymmetric alternatives, making them ideal for resource-constrained environments and high-throughput scenarios like disk encryption or VPNs.
However, they face the inherent key distribution problem, where securely sharing the secret key between parties without prior secure channels poses significant risks, often necessitating additional protocols or hybrid systems for key exchange.
Additionally, while scalable for bulk encryption, symmetric algorithms lack built-in mechanisms for authentication or non-repudiation, typically requiring integration with other primitives like message authentication codes (MACs).

Fundamentals

Definition and basic operation

A symmetric-key algorithm, also referred to as a secret-key algorithm, is a type of cryptographic algorithm that employs the same cryptographic key for both encrypting plaintext into ciphertext and decrypting ciphertext back into plaintext. This shared key must be kept secret and securely distributed to the communicating parties beforehand, distinguishing it from asymmetric (public-key) systems where keys differ for encryption and decryption. The foundational mathematical model for such systems was established by Claude Shannon in 1949, defining a secrecy system as a probabilistic set of transformations T from a plaintext space M (possible messages) to a ciphertext space C (possible cryptograms), where each transformation is selected via a key from a key space K, with the key chosen according to a probability distribution.

In its basic operation, a symmetric-key algorithm follows a straightforward process centered on the shared key. First, a key-generation procedure produces a secret key k \in K from a security parameter \lambda, typically ensuring sufficient randomness and length to resist attacks; this key is then securely exchanged between the sender (Alice) and receiver (Bob), often via a separate secure channel. For encryption, Alice computes the ciphertext c = E(k, m), where E is the encryption function, m \in M is the plaintext message, and the operation may incorporate additional elements like a random nonce r to ensure freshness, such as c = (r, F(k, r) \oplus m) in simple stream-like constructions, with F denoting a key-derived function and \oplus bitwise XOR. The ciphertext c is then transmitted over an insecure channel to Bob. Upon receipt, Bob performs decryption using the inverse function D(k, c) = m, recovering the original plaintext, provided the key matches and no transmission errors occur. The scheme satisfies correctness: for all keys k \in K and messages m \in M, D(k, E(k, m)) = m.

This symmetry in key usage enables efficient computation, as the encryption and decryption operations are computationally lightweight compared to asymmetric alternatives, but it relies critically on secure key distribution to prevent interception by adversaries. Shannon's model emphasizes that secrecy arises from the key's randomness, with perfect secrecy achievable if the key is at least as long as the message and uniformly random, rendering the ciphertext statistically independent of the plaintext.
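The nonce-based construction c = (r, F(k, r) \oplus m) can be sketched in a few lines of Python. This is a minimal illustration only, assuming HMAC-SHA256 as the key-derived function F and function names of my own choosing; it is not a vetted encryption scheme.

```python
import hmac, hashlib, os

def F(key: bytes, nonce: bytes, length: int) -> bytes:
    """Key-derived pseudorandom bytes: HMAC-SHA256(key, nonce || counter), truncated."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, message: bytes) -> tuple[bytes, bytes]:
    """c = (r, F(k, r) XOR m): fresh random nonce r, keystream XORed with the plaintext."""
    nonce = os.urandom(16)
    keystream = F(key, nonce, len(message))
    return nonce, bytes(a ^ b for a, b in zip(keystream, message))

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """The same key and nonce regenerate the keystream, so D(k, E(k, m)) = m."""
    keystream = F(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(keystream, ciphertext))

key = os.urandom(32)                                   # shared secret key k
nonce, ct = encrypt(key, b"attack at dawn")
assert decrypt(key, nonce, ct) == b"attack at dawn"    # correctness property
```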

Comparison to asymmetric cryptography

Symmetric-key algorithms employ a single secret key shared between the communicating parties for both encryption and decryption processes. In contrast, asymmetric cryptography, also known as public-key cryptography, utilizes a pair of mathematically related keys: a public key available to anyone for encryption or verification, and a private key kept secret by the owner for decryption or signing. This fundamental difference in key structure addresses distinct security needs, with symmetric methods relying on the absolute secrecy of the shared key, while asymmetric systems base their security on the computational difficulty of inverting certain mathematical functions, such as integer factorization or discrete logarithms.

One primary limitation of symmetric cryptography is the key distribution problem: securely exchanging the key between parties over an insecure channel is challenging without prior shared secrets, potentially exposing the key to interception. Asymmetric cryptography resolves this by allowing the public key to be freely distributed, enabling secure key establishment without prior secrets, as introduced in the seminal work on public-key distribution systems. This innovation, proposed by Diffie and Hellman, revolutionized cryptography by eliminating the need for a trusted courier or pre-established secure channel for key setup in many scenarios. However, asymmetric systems introduce their own vulnerabilities, such as the risk of private key compromise if not properly protected, and require careful management of key pairs.

In terms of performance, symmetric algorithms are significantly more efficient, often orders of magnitude faster than their asymmetric counterparts, making them ideal for resource-constrained environments or bulk data encryption. For instance, symmetric ciphers like AES can process data at speeds exceeding gigabits per second on modern hardware, whereas public-key operations, such as RSA encryption, may be 100 to 1,000 times slower due to the complexity of large-integer arithmetic. To achieve equivalent security levels—measured in bits of resistance against brute-force or known attacks—symmetric keys can be much shorter; a 128-bit symmetric key offers protection comparable to a 3,072-bit RSA modulus or a 256-bit elliptic-curve key in asymmetric systems. NIST guidelines specify these equivalences to ensure consistent protection levels across algorithm families.

Asymmetric cryptography excels in scenarios requiring non-repudiation, such as digital signatures, or initial key establishment, but its computational overhead limits direct use for large-scale data protection. Consequently, hybrid cryptosystems are prevalent, combining both approaches: asymmetric methods securely exchange a temporary symmetric key, which is then used to encrypt the bulk data symmetrically. This leverages the strengths of each—efficient bulk encryption from symmetric algorithms and secure key establishment from asymmetric ones—as recommended in key management standards for federal systems.
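The hybrid pattern can be sketched with the third-party `cryptography` package, assuming X25519 for key agreement, HKDF for key derivation, and AES-256-GCM for bulk encryption; this illustrates the general approach rather than any specific standardized protocol.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Asymmetric part: each party holds a key pair; the shared secret is computed, never sent.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
shared_alice = alice_priv.exchange(bob_priv.public_key())
shared_bob = bob_priv.exchange(alice_priv.public_key())
assert shared_alice == shared_bob

# Derive a 256-bit symmetric session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"hybrid-demo").derive(shared_alice)

# Symmetric part: fast bulk encryption of the actual payload with AES-256-GCM.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"bulk application data", None)
print(AESGCM(session_key).decrypt(nonce, ciphertext, None))
```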

Types

Block ciphers

A block cipher is a symmetric-key cryptographic algorithm that operates on fixed-length groups of bits, known as blocks, transforming plaintext blocks into ciphertext blocks of equal size using a secret key for both encryption and decryption. Typically, block sizes range from 64 to 256 bits, with common values being 64 bits for older designs and 128 bits for modern ones, ensuring efficient processing while providing a balance between security and computational overhead. The encryption process involves iterative rounds of operations derived from Claude Shannon's principles of confusion and diffusion: confusion complicates the relationship between the key and the ciphertext to thwart key recovery, while diffusion ensures that changes in a single bit affect many output bits, spreading statistical dependencies.

Block ciphers are constructed using structured frameworks to achieve these properties securely and efficiently. The Feistel network, a widely adopted structure, divides the input block into two equal halves and applies a round function—often involving substitution and key-dependent operations—to one half, XORing the output with the other half before swapping the halves for the next round. This design permits decryption by simply reversing the round order and using the same round function, avoiding the need for invertible components. The Data Encryption Standard (DES), standardized in 1977, exemplifies a Feistel cipher with a 64-bit block, 56-bit key, and 16 rounds, where each round incorporates expansion, substitution via eight S-boxes, and permutation to enhance diffusion. In contrast, substitution-permutation networks (SPNs) build security through layered applications of key addition, nonlinear substitution (S-boxes), linear mixing (often matrix multiplications over finite fields), and bit permutations across multiple rounds. The Advanced Encryption Standard (AES), selected in 2001 from the Rijndael algorithm, employs an SPN structure with a 128-bit block and variable key sizes of 128, 192, or 256 bits, corresponding to 10, 12, or 14 rounds; each round features byte substitutions, row shifts, column mixing, and round-key XORs to provide strong resistance to linear and differential attacks. These structures prioritize demonstrable security margins, with round counts calibrated to exceed known attack complexities, ensuring block ciphers remain foundational for secure data protection in symmetric cryptography.
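A toy Feistel network makes the decryption-by-reversed-subkeys property concrete. This sketch is purely illustrative (the round function and subkeys are arbitrary and offer no real security); it only demonstrates that the round function need not be invertible.

```python
import hashlib

def round_fn(half: int, subkey: int) -> int:
    """Toy keyed round function f; it need not be invertible."""
    digest = hashlib.sha256(half.to_bytes(4, "big") + subkey.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def feistel(block: int, subkeys: list[int]) -> int:
    """Process a 64-bit block as two 32-bit halves: L_i = R_{i-1}, R_i = L_{i-1} XOR f(R_{i-1}, K_i)."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in subkeys:
        left, right = right, left ^ round_fn(right, k)
    # Undo the final swap so that running the network with reversed subkeys inverts it.
    return (right << 32) | left

subkeys = [0x1111, 0x2222, 0x3333, 0x4444]          # illustrative subkeys
plaintext = 0x0123456789ABCDEF
ciphertext = feistel(plaintext, subkeys)
assert feistel(ciphertext, list(reversed(subkeys))) == plaintext  # decryption = reversed rounds
```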

Stream ciphers

A stream cipher is a type of symmetric-key algorithm that encrypts one bit or byte at a time by combining it with a pseudorandom keystream generated from a secret key. Unlike block ciphers, which process fixed-size blocks of data, stream ciphers operate continuously on a stream of data, making them suitable for applications such as real-time communications where data arrives incrementally. The core operation involves a pseudorandom generator initialized with the key (and often an initialization vector, IV) to produce the keystream, which is then combined with the plaintext via bitwise XOR to yield the ciphertext: c_i = p_i \oplus k_i, where p_i is the i-th plaintext bit, k_i is the corresponding keystream bit, and c_i is the ciphertext bit. Decryption reverses this process using the same key and IV to regenerate the keystream.

The concept of stream ciphers traces back to the early 20th century, with Gilbert Vernam's 1917 invention of the Vernam cipher and its one-time pad variant, a perfectly secret system using a truly random keystream as long as the message, though impractical for most applications. Modern stream ciphers emerged in the mid-20th century for teletype encryption, evolving to use pseudorandom keystreams for efficiency while aiming to approximate one-time pad security. They are classified into synchronous stream ciphers, where the keystream is generated independently of the plaintext and ciphertext, requiring precise synchronization between sender and receiver, and self-synchronizing stream ciphers, which derive the keystream from previous ciphertext blocks to recover from errors automatically. Keystream generation typically relies on linear feedback shift registers (LFSRs) combined with nonlinear functions to ensure unpredictability, as pure LFSRs are vulnerable to known attacks like the Berlekamp-Massey algorithm.

Seminal designs include RC4, introduced by Ron Rivest in 1987 for its simplicity and speed, widely used in protocols like WEP and TLS until cryptanalytic weaknesses, such as biases in the initial keystream, rendered it insecure by the early 2010s. More robust modern examples include Salsa20, developed by Daniel J. Bernstein in 2005 as a high-speed, software-optimized cipher resistant to timing attacks, and its variant ChaCha20, refined in 2008 for better diffusion and performance on simple processors, now standardized in IETF protocols like TLS 1.3. For resource-constrained environments, such as Internet of Things (IoT) devices, lightweight stream ciphers like Grain, selected for the eSTREAM project's hardware portfolio (2004–2008), and its descendant Grain-128AEAD provide encryption with low gate counts (around 2,500 GE in typical hardware implementations) and high throughput (up to 33 Gbps in optimized parallel designs, though ~0.5 Gbps for minimal-area configurations), chosen for their balance of security and efficiency after extensive cryptanalysis.

Stream ciphers offer advantages like minimal error propagation—a single bit error affects only the corresponding bit in decryption—and low latency for real-time applications, but their security hinges on the keystream's unpredictability; reusing keys or IVs can lead to devastating attacks, such as keystream recovery via XOR of ciphertexts. Ongoing standardization, such as the NIST Lightweight Cryptography process finalized in August 2025, which selected the Ascon family as the standard for lightweight authenticated encryption (a competition in which Grain-128AEAD was a finalist) with attention to side-channel resistance and robustness against quantum-era threats, continues to advance these designs.
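A brief example of the keystream-XOR operation, using the ChaCha20 implementation in the third-party `cryptography` package. Note that this particular interface expects a 256-bit key and a 16-byte value formed from a 4-byte initial counter followed by a 12-byte nonce, and that a key/nonce pair must never be reused.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

key = os.urandom(32)                                   # 256-bit secret key
nonce = (0).to_bytes(4, "little") + os.urandom(12)     # initial counter || 96-bit nonce

encryptor = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
ciphertext = encryptor.update(b"streamed message")

# Decryption regenerates the identical keystream and XORs it with the ciphertext.
decryptor = Cipher(algorithms.ChaCha20(key, nonce), mode=None).decryptor()
assert decryptor.update(ciphertext) == b"streamed message"
```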

Design and construction

Core principles

Symmetric-key algorithms rely on a single key for both encryption and decryption, with their design fundamentally guided by principles that ensure the ciphertext reveals no information about the plaintext without the key. The foundational concepts stem from Claude Shannon's 1949 paper, which established the theoretical basis for secure secrecy systems by introducing the notions of confusion and diffusion as essential to resisting cryptanalytic attacks. These principles aim to make the encryption process computationally infeasible to reverse without knowledge of the key, while maintaining efficiency for legitimate users.

Confusion obscures the statistical relationship between the plaintext, key, and ciphertext, complicating any attempt to deduce the key from observed inputs and outputs. It is typically implemented through nonlinear components, such as substitution boxes (S-boxes), that map input bits to output bits in a nonlinear fashion, ensuring that even small changes in the key lead to unpredictable alterations in the encryption outcome. Diffusion, on the other hand, spreads the influence of each plaintext bit and key bit across many ciphertext bits, achieving an "avalanche effect" where a single-bit change affects approximately half the output bits after sufficient processing. This property is realized through linear operations like permutations, mixing layers, or bitwise XORs that propagate changes throughout the data block. Together, confusion and diffusion transform the plaintext into a pseudorandom-looking ciphertext that withstands frequency analysis and other statistical exploits.

In block ciphers, these principles are operationalized through iterative structures like Feistel networks or substitution-permutation networks (SPNs). A Feistel network divides the input block into two halves, applying a round function (combining substitution for confusion with key mixing and permutation for diffusion) to one half before swapping and recombining, allowing decryption by reversing the rounds without inverting the function. SPNs, as in the Advanced Encryption Standard (AES), alternate layers of nonlinear substitution (for confusion), linear mixing (via matrix multiplications over finite fields), and key addition across multiple rounds to amplify security. The number of rounds is chosen to ensure complete diffusion, typically scaling with block and key sizes to resist exhaustive search and differential attacks.

For stream ciphers, the core principles adapt to sequential encryption, where a pseudorandom keystream is generated from the key and combined with the plaintext via XOR. The keystream must approximate perfect-secrecy properties, being statistically indistinguishable from random noise, with high linear complexity and long periods to prevent correlation or algebraic attacks. Design emphasizes a large internal state (at least twice the desired security level in bits) and nonlinear mechanisms, such as linear feedback shift registers (LFSRs) combined with nonlinear filters, to achieve unpredictability over time while maintaining high-speed operation suitable for real-time applications.

A critical auxiliary component in symmetric-key design is the key schedule, which derives round-specific subkeys from the master key to introduce variability and prevent slide or related-key attacks. Subkeys must maintain full entropy and avoid weak patterns, often using nonlinear expansions to enhance resistance. Overall, these principles are validated through rigorous cryptanalysis, ensuring the algorithm's resistance to both classical and emerging threats while prioritizing computational efficiency.
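The avalanche effect can be observed directly. The following short demonstration, assuming the third-party `cryptography` package, encrypts a single 16-byte block twice with AES-128, flipping one plaintext bit between the two calls; roughly half of the 128 ciphertext bits typically differ (the exact count varies per key and block).

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_block(key: bytes, block: bytes) -> bytes:
    """Raw AES on one 16-byte block (ECB is used here only to expose the bare primitive)."""
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

key = os.urandom(16)
p1 = os.urandom(16)
p2 = bytes([p1[0] ^ 0x01]) + p1[1:]          # flip a single plaintext bit

c1, c2 = encrypt_block(key, p1), encrypt_block(key, p2)
diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
print(f"{diff_bits} of 128 ciphertext bits differ")   # typically close to 64
```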

Structural approaches

In symmetric-key algorithm design, structural approaches provide the foundational frameworks for constructing ciphers that ensure security through confusion (obscuring the relationship between plaintext and ciphertext) and diffusion (spreading the influence of each plaintext bit across the ciphertext). These approaches are primarily applied to block ciphers, where the plaintext is divided into fixed-size blocks and the structure iterates over multiple rounds to transform the data under a secret key. The most prominent structures include Feistel networks, substitution-permutation networks (SPNs), and Lai-Massey schemes, each offering trade-offs in invertibility, efficiency, and resistance to cryptanalysis.

Feistel networks form a balanced, invertible structure that divides the input block into two equal halves, typically denoted as left (L) and right (R) parts, and processes them through a series of rounds. In each round i, the right half R_{i-1} is fed into a keyed round function f alongside the subkey K_i, producing an output that is XORed with the left half to yield the new right half, while the halves are then swapped:
\begin{align*}
L_i &= R_{i-1}, \\
R_i &= L_{i-1} \oplus f(R_{i-1}, K_i).
\end{align*}
This design ensures that decryption mirrors encryption by simply reversing the order of the subkeys, without requiring the inverse of the round function f, which can be any complex, non-invertible function (often incorporating S-boxes for substitution). Introduced by Horst Feistel in a 1971 patent for a block cipher system, the structure was refined in IBM's Lucifer cipher and later formalized in the Data Encryption Standard (DES). DES employs 16 rounds of this network on 64-bit blocks with a 56-bit effective key, where f combines expansion, substitution via eight 6-to-4-bit S-boxes, and permutation to achieve both confusion and diffusion. The Feistel approach excels in hardware efficiency and has inspired generalized variants, such as type-2 or type-3 networks with multiple branches, used in modern ciphers like Camellia for enhanced security against differential attacks.

Substitution-permutation networks (SPNs) operate on the entire block at once through alternating layers of nonlinear substitution and linear mixing, promoting full diffusion across all bits in fewer rounds compared to Feistel designs. A typical SPN round applies a substitution layer using S-boxes—small lookup tables that replace groups of bits (e.g., 8 bits to 8 bits) to introduce nonlinearity—followed by a linear transformation layer, such as a bit permutation or a matrix multiplication over a finite field, to diffuse changes. Key addition or mixing often precedes or follows these layers, with multiple rounds (e.g., 10–14) iterating the process, and the first and last rounds sometimes including whitening with key material for added security. This wide-trail strategy, which bounds the probability of differential and linear trails through the linear layer, was pioneered in the Advanced Encryption Standard (AES) via the Rijndael cipher, designed by Joan Daemen and Vincent Rijmen. Rijndael processes 128-bit blocks in 10, 12, or 14 rounds depending on key size (128, 192, or 256 bits), using byte-oriented S-boxes based on multiplicative inversion in GF(2^8) and a linear MixColumns step that multiplies by a fixed matrix to ensure avalanche effects. SPNs require invertible components for decryption but offer strong performance in software due to parallelizable operations, as seen in Rijndael's adoption as a NIST standard.

The Lai-Massey scheme provides an alternative to Feistel and SPN structures, particularly suited for ciphers that mix operations over different algebraic groups (e.g., XOR for bits and modular addition for integers).
It divides the block into two halves, computes their difference under a group operation, passes that difference through a keyed nonlinear function, and adds the result back to both halves, typically with a simple orthomorphism applied to one half so that successive rounds do not cancel. Unlike a Feistel network, which relies only on XOR, Lai-Massey designs mix group operations to balance confusion and diffusion while maintaining invertibility, since the round function's contribution can be removed by applying the inverse group operation. Formally, for halves X_{i-1} and Y_{i-1} in round i:
\begin{align*}
T_i &= F(X_{i-1} \ominus Y_{i-1}, K_i), \\
X_i &= \sigma(X_{i-1} \boxplus T_i), \qquad Y_i = Y_{i-1} \boxplus T_i,
\end{align*}
where F is a keyed nonlinear function, \boxplus and \ominus denote the group operation and its inverse, and \sigma is a fixed orthomorphism. This structure was introduced by Xuejia Lai and James Massey in the design of the International Data Encryption Algorithm (IDEA), a 64-bit block cipher with 128-bit keys and 8.5 rounds, using bitwise XOR, addition modulo 65536, and multiplication modulo 65537 on 16-bit words to resist both linear and differential cryptanalysis. Though less common than Feistel or SPN designs due to implementation complexity, Lai-Massey variants appear in later ciphers like IDEA NXT (FOX), offering robustness in resource-constrained environments.

These structural approaches have evolved with provable security analyses, such as Luby-Rackoff results demonstrating that three to four rounds of a Feistel construction (with analogous results for Lai-Massey) suffice for pseudorandom permutations under ideal round functions. Modern designs often hybridize elements, prioritizing resistance to side-channel and quantum threats while maintaining efficiency.
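To make the SPN layering concrete, here is a toy round on a 16-bit state. It is illustrative only: the 4-bit S-box is borrowed from the PRESENT cipher, while the bit permutation and round keys are arbitrary choices of mine; real SPNs iterate many such rounds with carefully analyzed components.

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]      # 4-bit S-box from PRESENT

def spn_round(state: int, round_key: int) -> int:
    """One SPN round on a 16-bit state: key addition, S-box substitution, bit permutation."""
    state ^= round_key                                 # key addition
    # Substitution layer: apply the 4-bit S-box to each nibble (nonlinearity / confusion).
    subbed = 0
    for i in range(4):
        nibble = (state >> (4 * i)) & 0xF
        subbed |= SBOX[nibble] << (4 * i)
    # Permutation layer: send bit i to position 4*i mod 15 (bit 15 stays), spreading
    # each nibble's output bits into four different nibbles (diffusion).
    permuted = 0
    for i in range(16):
        if (subbed >> i) & 1:
            permuted |= 1 << ((4 * i) % 15 if i != 15 else 15)
    return permuted

state = 0x1234
for rk in (0xA5A5, 0x0F0F, 0x3C3C, 0xFFFF):            # illustrative round keys
    state = spn_round(state, rk)
print(hex(state))
```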

Implementations

Historical examples

One of the earliest modern implementations of a symmetric-key block cipher was Lucifer, developed by Horst Feistel, Walter Tuchman, and others at IBM in the early 1970s. Lucifer utilized a Feistel network structure to process 64-bit blocks with key sizes ranging from 48 to 128 bits across its variants, employing substitution and permutation operations for confusion and diffusion. The algorithm was patented on March 19, 1974, and initially applied to secure data in a cash-dispensing system that IBM had developed for Lloyds Bank in the United Kingdom.

The Data Encryption Standard (DES), adopted by the National Bureau of Standards (now NIST) on January 15, 1977, as Federal Information Processing Standard (FIPS) 46, evolved directly from a modified version of Lucifer submitted by IBM in response to a solicitation for a federal encryption standard. Under consultation with the National Security Agency, IBM shortened the effective key length to 56 bits (from Lucifer's longer options) and refined the 16-round Feistel structure for 64-bit blocks to balance security and computational efficiency on hardware of the era. DES became the foundational symmetric-key algorithm for protecting unclassified government and commercial data, influencing global standards until its vulnerabilities to brute-force attacks emerged in the late 1990s.

To address DES's key length limitations, the Triple Data Encryption Algorithm (Triple DES or 3DES) was proposed in the late 1970s and formally specified by NIST in 1999 as part of FIPS 46-3, applying the DES cipher three times sequentially (encrypt-decrypt-encrypt) with two or three distinct 56-bit keys, yielding 112-bit or 168-bit key lengths (and roughly 112 bits of effective security against meet-in-the-middle attacks) on 64-bit blocks. This construction extended DES's usability in legacy systems, such as financial transactions and smart cards, until its deprecation in 2017 due to performance overhead and emerging threats.

The International Data Encryption Algorithm (IDEA), introduced in 1991 by Xuejia Lai and James L. Massey at ETH Zurich, marked a departure from Feistel-based designs by using an 8.5-round Lai-Massey structure on 64-bit blocks with 128-bit keys, combining bitwise XOR, addition modulo 2^{16}, and multiplication modulo 2^{16} + 1 for enhanced resistance to differential cryptanalysis. Developed under contract with Ascom-Tech AG and patented internationally, IDEA was integrated into applications like Pretty Good Privacy (PGP) during the 1990s, serving as a bridge to stronger standards before its own partial vulnerabilities were identified.

Modern standards

The Advanced Encryption Standard (AES), specified in Federal Information Processing Standard (FIPS) 197, serves as the primary symmetric-key algorithm for securing sensitive data in modern cryptographic systems. AES operates on 128-bit blocks with key sizes of 128, 192, or 256 bits, providing robust resistance to known cryptanalytic attacks when implemented correctly. Adopted in 2001 following a competitive evaluation process, AES has become the de facto global standard for symmetric encryption, underpinning protocols such as TLS and IPsec. In high-security environments, the Commercial National Security Algorithm Suite 2.0 (CNSA 2.0) mandates the use of AES-256 for all classification levels to ensure long-term protection against brute-force attacks.

For resource-constrained devices, such as those in the Internet of Things (IoT), NIST finalized the Ascon family of lightweight cryptographic algorithms in Special Publication (SP) 800-232 in August 2025. Ascon provides authenticated encryption (Ascon-AEAD128) and hashing (Ascon-Hash256, Ascon-XOF128) primitives built on a sponge-style permutation, offering efficiency with a small footprint suitable for devices like RFID tags and sensors. Selected from the NIST Lightweight Cryptography competition in 2023, Ascon balances security margins against side-channel attacks with low computational overhead, making it ideal for embedded systems where AES may be too demanding. These standards emphasize 128-bit security levels while supporting authenticated modes to prevent tampering.

While AES remains dominant for general-purpose applications, ongoing NIST guidance addresses potential quantum threats by recommending larger key sizes for symmetric algorithms, though no immediate transitions are required, as Grover's algorithm only quadratically speeds up brute-force search. Implementations must adhere to validated cryptographic modules under FIPS 140-3 to ensure compliance.

Modes of operation

Encryption modes

Encryption modes of operation specify how a symmetric-key block cipher processes data larger than a single block or provides stream-like encryption. These modes ensure confidentiality by transforming plaintext into ciphertext while addressing issues like error propagation, parallelism, and security against patterns in the data. The five primary confidentiality-only modes—Electronic Codebook (ECB), Cipher Block Chaining (CBC), Cipher Feedback (CFB), Output Feedback (OFB), and Counter (CTR)—are standardized for use with approved block ciphers such as AES. ECB, CBC, CFB, and OFB were initially defined for the Data Encryption Standard (DES) in FIPS PUB 81, while CTR was introduced later to enhance performance and flexibility. These modes generally require an initialization vector (IV) or nonce to ensure uniqueness across encryptions, except for ECB, which does not use one. The IV must be unpredictable and unique per message to prevent attacks like keystream reuse. All modes assume the underlying block cipher is secure, but their properties differ in diffusion (spreading plaintext influence), malleability, and implementation efficiency.
| Mode | Description | Key Features and Security Notes |
| --- | --- | --- |
| ECB | Each plaintext block P_i is independently encrypted: C_i = E_K(P_i), where E_K is the block cipher with key K. Decryption reverses this directly. | Simple and parallelizable for both encryption and decryption. No IV needed. However, it reveals patterns in plaintext (e.g., identical blocks yield identical ciphertext), making it insecure for most data; not recommended except for encrypting single blocks or random data. |
| CBC | The first block is XORed with an IV: C_1 = E_K(P_1 \oplus IV). Subsequent blocks chain: C_i = E_K(P_i \oplus C_{i-1}). Decryption XORs decrypted blocks with the previous ciphertext (IV for the first). Padding is required for non-block-aligned data. | Provides good diffusion across blocks. The IV must be random and unique. Vulnerable to chosen-plaintext attacks if the IV is reused and to padding oracle attacks without proper integrity checks. Widely used historically but often paired with authentication today. |
| CFB | Encrypts the IV to generate the initial keystream S_1 = E_K(IV), then C_1 = P_1 \oplus S_1. Feedback uses ciphertext: S_i = E_K(C_{i-1}), C_i = P_i \oplus S_i (for full-block feedback; smaller segments are possible). Decryption mirrors this using ciphertext feedback. | Acts as a self-synchronizing stream cipher; errors affect only the current and next few blocks (up to the feedback size). Sequential only, no parallelism for encryption. Suitable for hardware with limited buffering but malleable (bit flips alter plaintext predictably). The IV must be unpredictable. |
| OFB | Similar to CFB but feedback comes from the previous keystream block: S_1 = E_K(IV), C_1 = P_1 \oplus S_1; S_i = E_K(S_{i-1}), C_i = P_i \oplus S_i. Decryption uses the same keystream generation. | Pure stream-cipher behavior; ciphertext errors do not propagate to subsequent plaintext (useful for error-prone channels like wireless). Sequential and malleable. The keystream is precomputable if the IV is known, but IV reuse exposes the XOR of plaintexts. Deprecated in some contexts due to implementation risks. |
| CTR | A nonce (or IV) is concatenated with a counter starting at 0: C_i = P_i \oplus E_K(\text{nonce} \Vert \text{counter}_i). The counter increments per block; decryption is identical (XOR with the same keystream). No chaining or padding is needed. | Highly parallelizable and allows random access (encrypt or decrypt any block independently). Provides confidentiality comparable (computationally) to a one-time pad provided counter blocks are unique. The nonce must never repeat with the same key, or it leaks the XOR of plaintexts; preferred for high-speed applications like disk encryption. |
For stream ciphers, which natively produce a continuous keystream XORed with the plaintext, modes are implicit in their design rather than added. Synchronous stream ciphers, like those based on linear feedback shift registers (LFSRs), generate the keystream independently of the data, requiring synchronization and unique key/IV pairs per session to avoid keystream-reuse attacks. Self-synchronizing stream ciphers recover from errors automatically but may introduce delays. NIST recommends approved block ciphers in stream-like modes such as CTR over deprecated stream ciphers like RC4, owing to RC4's known keystream biases.
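A brief sketch of CTR mode with AES, assuming the third-party `cryptography` package. The 16-byte initial counter block here is a random 12-byte nonce followed by a 4-byte block counter, which is one common layout; the mode itself only requires that counter blocks never repeat under a given key.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                                       # AES-256 key
counter_block = os.urandom(12) + (0).to_bytes(4, "big")    # nonce || initial counter

encryptor = Cipher(algorithms.AES(key), modes.CTR(counter_block)).encryptor()
ciphertext = encryptor.update(b"CTR needs no padding") + encryptor.finalize()

# Decryption applies the same keystream XOR using the same key and counter block.
decryptor = Cipher(algorithms.AES(key), modes.CTR(counter_block)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == b"CTR needs no padding"
```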

Authenticated modes

Authenticated modes of operation for symmetric-key block ciphers combine encryption for confidentiality with message authentication for integrity, ensuring that received data has not been altered or forged. These modes, often termed authenticated encryption (AE) or authenticated encryption with associated data (AEAD), allow protection of both the payload and optional additional data (like headers) without requiring separate mechanisms for encryption and authentication, reducing overhead and potential vulnerabilities from mismatched primitives. They are widely standardized to support protocols requiring secure data transmission, such as TLS and IPsec.

One prominent example is the Counter with CBC-MAC (CCM) mode, defined in NIST Special Publication 800-38C. CCM integrates the Counter (CTR) mode for parallelizable encryption of the plaintext into ciphertext with a CBC-MAC for generating an authentication tag over the nonce, associated data, and plaintext, using a single symmetric key. This design requires the message length to be known in advance and uses a nonce to ensure uniqueness, providing security against chosen-plaintext attacks up to 2^{64} blocks per key with a 128-bit block cipher like AES. CCM's efficiency stems from a single-pass operation in hardware, though it is less flexible for streaming data compared to other modes; it has been adopted in standards like IEEE 802.11i for Wi-Fi Protected Access 2 (WPA2).

The Galois/Counter Mode (GCM), specified in NIST SP 800-38D, offers a highly efficient AEAD scheme suitable for high-speed applications. It employs CTR mode for encryption, generating keystream blocks from a counter derived from a nonce, while authentication is achieved via GHASH, a universal hash function based on multiplication in the Galois field GF(2^{128}). A single block cipher key is used both for encryption and to derive the hash subkey, enabling parallel computation of encryption and authentication, which yields throughput rates approaching the block cipher's native speed on modern hardware. GCM supports variable-length associated data and messages up to 2^{39} - 256 bits, with a 128-bit tag; its original proposal by McGrew and Viega emphasized provable security under standard assumptions. Widely used in TLS 1.2 and later (via RFC 5288) and IPsec (per NIST SP 800-77 Revision 1), GCM balances performance and security but requires unique nonces to avoid catastrophic failures from reuse.

Offset Codebook (OCB) mode, proposed by Rogaway, provides another parallelizable AEAD approach emphasizing minimal overhead and rate-1 efficiency, where ciphertext expansion is limited to the tag size (typically 128 bits). OCB applies key-derived offsets to each block around the block cipher calls and folds a checksum of the message into the tag computation, all in a single pass without padding. This results in high software and hardware performance, with security proven to match the underlying block cipher's strength against adaptive adversaries. Although patented until 2015, OCB's design influenced subsequent modes and is specified in RFC 7253 for potential use in IETF protocols, though adoption lags behind GCM due to historical licensing concerns.

These modes exemplify different structural approaches to authenticated encryption: MAC-then-encrypt-style composition in CCM, encrypt-then-MAC-style composition in GCM, and an integrated single-pass design in OCB. Selection depends on application constraints, such as nonce handling, parallelism needs, and hardware or protocol integration, with NIST recommending GCM and CCM for federal systems. As of November 2025, NIST is revising SP 800-38B (CMAC), SP 800-38C (CCM), and SP 800-38D (GCM) to address technical enhancements, with revisions in progress following public comment periods earlier in the year.
Additionally, in June 2025, NIST launched the development of cryptographic accordions, a new family of tweakable modes for variable-length inputs aimed at improving efficiency in certain applications, with a future specification planned in SP 800-197A.
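To make the AEAD interface concrete, here is a minimal AES-GCM example using the third-party `cryptography` package, with associated data that is authenticated but not encrypted; ensuring nonce uniqueness per key is the caller's responsibility.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                 # 96-bit nonce; must never repeat under this key
aad = b"packet header v1"              # associated data: authenticated, sent in the clear
ciphertext = aesgcm.encrypt(nonce, b"secret payload", aad)   # ciphertext || 128-bit tag

plaintext = aesgcm.decrypt(nonce, ciphertext, aad)           # raises InvalidTag if modified
assert plaintext == b"secret payload"
```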

Security

Theoretical foundations

The theoretical foundations of security in symmetric-key algorithms are rooted in information theory, particularly Claude Shannon's seminal work on secrecy systems. In his 1949 paper, Shannon introduced the concept of perfect secrecy, defined as a system in which the mutual information between the plaintext and ciphertext is zero, meaning an adversary with unlimited computational power gains no information about the plaintext from the ciphertext without the key. This ideal is achieved in the one-time pad, a symmetric cipher using a truly random key as long as the message, which is added modulo 2 to the plaintext; however, Shannon proved that perfect secrecy requires the key space to be at least as large as the message space, rendering it impractical for repeated use due to key reuse vulnerabilities.

Given the infeasibility of perfect secrecy for practical systems with short keys, modern symmetric-key cryptography relies on computational security, which assumes adversaries are polynomially bounded in resources. This framework posits that a scheme is secure if no efficient algorithm can distinguish its output from a random function or permutation with non-negligible advantage. Central to this are pseudorandom functions (PRFs) and pseudorandom permutations (PRPs): a PRF family, keyed by a secret, appears indistinguishable from a truly random function to any probabilistic polynomial-time (PPT) distinguisher, while a PRP is additionally an efficiently invertible bijection. These primitives form the basis for secure encryption, with security often proven under the existence of PRFs, as established by Goldreich, Goldwasser, and Micali in their 1986 construction showing that pseudorandom functions can be built from pseudorandom generators (and hence from one-way functions), enabling pseudorandom permutations via the Luby-Rackoff construction.

A key result bridging theory to practice is the Luby-Rackoff construction, which proves that three rounds of a Feistel network using independent PRFs as round functions yield a PRP secure against chosen-plaintext attacks, and four rounds yield a strong PRP. This 1988 theorem provides a concrete method for building Feistel-style block ciphers such as DES from pseudorandom components, with security reductions showing that breaking the construction implies breaking the underlying PRFs. Extensions, such as Patarin's 1998 analysis, refine the number of rounds needed for stronger security notions like chosen-ciphertext resistance, emphasizing the role of round-function independence and key randomness in resisting distinguishing and key-recovery attacks. Overall, these foundations ensure symmetric algorithms achieve semantic security—indistinguishability of encryptions—under standard hardness assumptions, guiding the design of modes and protocols.
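A tiny illustration of the one-time pad, and of why key reuse destroys its secrecy: XORing two ciphertexts produced under the same pad cancels the key and leaks the XOR of the two plaintexts.

```python
import os

def otp(key: bytes, data: bytes) -> bytes:
    """One-time pad: bitwise XOR with a truly random key as long as the message."""
    return bytes(k ^ d for k, d in zip(key, data))

m1, m2 = b"meet at noon", b"flee at once"
key = os.urandom(len(m1))              # fresh, uniformly random, used once: perfect secrecy

c1 = otp(key, m1)
assert otp(key, c1) == m1              # decryption is the same XOR

# Key reuse: c1 XOR c2 = m1 XOR m2, independent of the key, leaking plaintext structure.
c2 = otp(key, m2)
leaked = bytes(a ^ b for a, b in zip(c1, c2))
assert leaked == bytes(a ^ b for a, b in zip(m1, m2))
```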

Common attacks and mitigations

Symmetric-key algorithms are susceptible to a variety of attacks that exploit weaknesses in their design, implementation, or usage. Brute-force attacks attempt to recover the secret key by exhaustively trying all possible key values, with the computational effort growing exponentially with key length; for instance, a 128-bit key requires approximately 2^{128} operations, rendering exhaustive search infeasible with current technology. Quantum computers pose an additional threat via Grover's algorithm, which effectively halves the key's strength in bits against brute-force search. For example, a 128-bit key provides only about 64 bits of quantum security, so NIST recommends at least 256-bit keys for long-term protection against quantum adversaries as of 2024. To mitigate brute-force attacks, standards recommend using keys of at least 128 bits for long-term security, as endorsed by NIST for algorithms like AES.

Cryptanalytic attacks target the mathematical structure of the cipher. Differential cryptanalysis, introduced by Biham and Shamir, analyzes differences between pairs of plaintexts to deduce key bits and was notably applied to attack DES with fewer than 2^{47} chosen plaintexts. Linear cryptanalysis, developed by Matsui, exploits linear approximations over GF(2) relating plaintext, ciphertext, and key bits, enabling key recovery for DES with about 2^{43} known plaintexts. Modern ciphers like AES resist these techniques through design principles such as wide-trail strategies and nonlinear S-boxes that minimize high-probability differentials and linear approximations.

Implementation attacks exploit physical or environmental leakage rather than the algorithm itself. Side-channel attacks, such as differential power analysis (DPA) introduced by Kocher et al., measure power-consumption variations during execution to infer key-dependent operations in block ciphers like AES, often recovering keys with thousands of traces. Countermeasures include masking, where intermediate values are randomized with secret shares to decorrelate leakage from secrets, and hiding techniques such as constant-time implementations that eliminate timing variations. Fault-injection attacks induce computational errors, such as bit flips during AES rounds, to reveal the key via differential fault analysis, as introduced by Biham and Shamir. Mitigations involve error-detection mechanisms, like parity checks on computations, and redundancy in hardware designs to verify outputs.

Attacks on modes of operation can amplify cipher vulnerabilities. The padding oracle attack, formalized by Vaudenay, exploits error messages revealing valid padding in CBC mode, allowing decryption of arbitrary blocks with about 128 oracle queries per byte. To counter this, authenticated encryption modes like GCM provide integrity checks that prevent padding-related information leaks, as standardized in NIST SP 800-38D. Overall, robust key management and adherence to vetted standards, such as those from NIST and IETF, are essential to integrate these mitigations effectively.
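One small example of a hiding-style mitigation mentioned above: comparing secret-dependent values, such as an authentication tag, in constant time with Python's `hmac.compare_digest` rather than with `==`, whose early-exit behavior can leak timing information to an attacker who submits many forgery attempts.

```python
import hmac, hashlib, os

key = os.urandom(32)
message = b"transfer 100"
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(received_msg: bytes, received_tag: bytes) -> bool:
    expected = hmac.new(key, received_msg, hashlib.sha256).digest()
    # Constant-time comparison: runtime does not depend on how many prefix bytes match,
    # unlike `expected == received_tag`, which can stop at the first mismatching byte.
    return hmac.compare_digest(expected, received_tag)

assert verify(message, tag)
assert not verify(b"transfer 999", tag)
```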

Key management

Key generation

In symmetric-key cryptography, key generation refers to the process of creating the secret keys used for both encryption and decryption operations within the same algorithm. These keys must possess sufficient randomness and length to provide the desired security strength, typically measured in bits of security. The National Institute of Standards and Technology (NIST) outlines that symmetric keys should be generated using approved methods to ensure unpredictability and resistance to attacks such as brute-force search or guessing. The key length is determined by the specific algorithm and the target security level; for instance, the Advanced Encryption Standard (AES) supports 128-bit, 192-bit, or 256-bit keys, corresponding to security strengths of 128, 192, or 256 bits, respectively.

The primary method for symmetric key generation is direct random generation, where the key is produced from a random bit generator (RBG) that meets stringent entropy requirements. According to NIST Special Publication (SP) 800-133 Revision 2, the RBG must provide full entropy equal to the key's bit length to ensure the key is indistinguishable from uniform random bits; for example, generating a 128-bit key requires at least 128 bits of entropy from high-quality sources like hardware noise or physical processes. Approved RBGs, such as those specified in the NIST SP 800-90 series, are recommended to mitigate risks from weak randomness, and generation should occur within a cryptographic module to protect against side-channel exposure. Deterministic alternatives involve key derivation functions (KDFs), where a high-entropy secret or existing key is processed using approved algorithms, like those in SP 800-108, to derive the key, allowing for reproducible yet secure key creation when direct random sharing is impractical.

Best practices emphasize the use of hardware-based cryptographic modules, such as hardware security modules (HSMs), over software implementations for enhanced protection during generation, as hardware can better resist physical and environmental attacks. NIST SP 800-57 Part 1 specifies that keys must be generated with security strengths aligned to the system's risk profile, avoiding reuse across multiple purposes and ensuring post-generation checks for validity where feasible. Additionally, keys derived from multiple components—such as combining existing keys with additional data via approved methods—can be used to achieve higher effective entropy, but all inputs must themselves be securely generated. Compliance with these guidelines protects the key's confidentiality from the moment of creation, forming the foundation for secure symmetric encryption deployments.
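The two generation paths described above can be sketched briefly in Python: direct random generation from the operating system's cryptographic RNG, and derivation of a purpose-specific subkey from an existing secret with a KDF. HKDF from the third-party `cryptography` package is used here purely as an illustrative KDF; the context strings are arbitrary choices of mine.

```python
# pip install cryptography
import secrets
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Direct random generation: 256 bits drawn from the OS cryptographic RNG.
aes_256_key = secrets.token_bytes(32)

# Derivation: a purpose-specific subkey derived from an existing high-entropy secret.
master_secret = secrets.token_bytes(32)
file_encryption_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"file-encryption v1",     # context string binds the subkey to one purpose
).derive(master_secret)

assert len(aes_256_key) == len(file_encryption_key) == 32
```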

Key establishment and distribution

In symmetric-key cryptography, key establishment and distribution refer to the processes by which two or more parties securely agree upon or share a common secret key for subsequent encryption and decryption operations. This is essential because symmetric algorithms require the same key to be available to all involved parties, yet transmitting keys over insecure channels risks interception or compromise. Methods for key establishment can be broadly classified into key transport (where one party generates the key and securely delivers it to others) and key agreement (where parties jointly compute the key without one generating it unilaterally). These processes often rely on pre-existing shared secrets, trusted intermediaries, or the use of asymmetric techniques to bootstrap trust.

One fundamental approach is manual key distribution, where keys are generated offline and physically transported using secure means, such as couriers, locked devices, or receipted documents. This method avoids network vulnerabilities but is labor-intensive and unscalable for large systems, making it suitable primarily for small-scale or high-security environments such as government or military communications. NIST recommends manual distribution for initial key loading, emphasizing protection against unauthorized disclosure during transport, such as through tamper-evident packaging. For example, symmetric keys may be loaded into hardware security modules (HSMs) via physical interfaces before deployment. Automated alternatives are preferred for efficiency, but manual methods remain a baseline for bootstrapping higher-level protocols.

A widely adopted automated method involves key distribution centers (KDCs), trusted third-party servers that generate and distribute session keys using pre-shared long-term keys. The Kerberos protocol, developed at MIT, exemplifies this: a client authenticates to the KDC using a shared secret, receives a ticket encrypted with the service's long-term key, and uses it to obtain a symmetric session key for secure communication with the service. Kerberos operates entirely on symmetric cryptography, dividing the KDC into an Authentication Server (AS) for initial tickets and a Ticket Granting Server (TGS) for service tickets, mitigating replay attacks through timestamps and nonces. This approach scales well for enterprise networks but requires a trusted KDC and secure time synchronization.

For scenarios without a trusted third party, a basic symmetric key transport protocol can enable one party to securely send a newly generated key to another using an existing shared master key. In this protocol, Alice sends Bob an encrypted message containing the new session key K_s, her identity, and a nonce, all protected by their shared key K; Bob responds with the nonce to confirm receipt. This approach provides a degree of mutual assurance but may be vulnerable to replay attacks if not enhanced with timestamps or other measures. Hybrid methods, such as using Diffie-Hellman key agreement to derive a symmetric key, combine asymmetric computation over public channels with symmetric protection: parties exchange public values to compute a shared secret K = g^{ab} \mod p, which then serves as the symmetric key after hashing for uniformity. This is foundational in protocols like TLS and is resistant to eavesdropping assuming the hardness of the discrete logarithm problem.
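A sketch of the simple key-transport exchange described above, with the message protected by the pre-shared master key using AES-GCM from the third-party `cryptography` package (a design choice for this illustration, not part of the protocol description); a real protocol would also bind identities, timestamps, and message direction more carefully.

```python
# pip install cryptography
import os, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = AESGCM.generate_key(bit_length=256)    # long-term key K shared by Alice and Bob

# Alice: generate a fresh session key K_s and send it, with her identity and a nonce,
# encrypted (and authenticated) under the shared master key.
session_key = os.urandom(32)
challenge = os.urandom(16)
payload = json.dumps({"from": "alice", "nonce": challenge.hex(),
                      "session_key": session_key.hex()}).encode()
iv = os.urandom(12)
message_to_bob = (iv, AESGCM(master_key).encrypt(iv, payload, b"key-transport"))

# Bob: decrypt with K, recover K_s, and echo the nonce to confirm receipt.
iv, ct = message_to_bob
received = json.loads(AESGCM(master_key).decrypt(iv, ct, b"key-transport"))
bob_session_key = bytes.fromhex(received["session_key"])
confirmation = bytes.fromhex(received["nonce"])

assert bob_session_key == session_key and confirmation == challenge
```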

    Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography.MIT Kerberos Distribution Page · MIT Kerberos Documentation · KfW 4.1