Cryptographic primitive

A cryptographic primitive is a low-level cryptographic algorithm that serves as a fundamental building block for constructing more complex cryptographic protocols and systems. These primitives provide essential security properties such as confidentiality, integrity, authentication, and non-repudiation. They are rigorously standardized to ensure reliability in applications ranging from secure communications to data protection.

Key examples of cryptographic primitives include symmetric-key algorithms like the Advanced Encryption Standard (AES), a block cipher approved for use by the U.S. government to protect sensitive data; hash functions such as SHA-256, designed to produce fixed-size digests resistant to collision attacks; and public-key primitives like the Rivest-Shamir-Adleman (RSA) algorithm for encryption and digital signatures, or elliptic-curve cryptography (ECC) for efficient key exchange and signing. Cryptographic primitives form the foundation of higher-level constructions, such as modes of operation for block ciphers (e.g., Galois/Counter Mode or GCM for authenticated encryption) and protocols like Transport Layer Security (TLS), but their security depends on proper implementation and resistance to cryptanalytic attacks, including side-channel and quantum threats. NIST and other standards bodies continuously evaluate and update these primitives to address evolving computational capabilities and vulnerabilities, including the standardization of post-quantum algorithms such as ML-KEM, ML-DSA, and SLH-DSA in 2024 and lightweight cryptography standards in 2025.

Definition and Fundamentals

Definition

A cryptographic primitive is a low-level, well-defined mathematical function or algorithm that serves as a fundamental building block for constructing higher-level cryptographic protocols and systems. These primitives are designed to provide specific security functionalities, such as confidentiality through encryption or integrity through hashing, based on their formally specified mathematical properties. Their security relies on well-established computational hardness assumptions, for instance, the difficulty of factoring large composite numbers or computing discrete logarithms in certain groups.

Unlike higher-level cryptographic constructs, such as protocols or schemes that combine multiple components to achieve broader objectives, primitives are atomic in nature, meaning they cannot be decomposed into simpler cryptographic elements and focus on a single, narrowly defined goal. This atomicity ensures that primitives operate as reliable, standalone units within larger systems, emphasizing precision and mathematical rigor over broad functionality at their core level. Examples of cryptographic primitives within this scope include basic encryption functions like block ciphers, one-way hash functions, and digital signature algorithms, each targeting a distinct security property without encompassing full security mechanisms.

Key attributes of these primitives include deterministic behavior, where identical inputs consistently produce the same outputs, and their dependence on unproven but widely accepted hardness assumptions to guarantee resistance against computational attacks. In practice, they form the foundational elements for protocols that address complex needs in secure communication and data protection.

Key Characteristics

Cryptographic primitives are engineered to balance computational efficiency for authorized parties with intractability for adversaries, ensuring operations are feasible within practical resource constraints. Efficiency is typically measured in terms of time and space complexity, where legitimate computations, such as encryption or hashing, run in polynomial time—often linear in the input size, denoted as O(n) for processing n blocks in symmetric ciphers—while attacks require superpolynomial effort. For instance, symmetric primitives like block ciphers are optimized for high throughput on resource-limited devices, processing data in fixed-size blocks (e.g., 128 bits for AES) at speeds orders of magnitude faster than asymmetric counterparts.

The structure of primitives is rigidly defined to support specific goals, with most accepting variable- or fixed-length inputs and producing deterministic or probabilistic outputs in prescribed formats. Encryption primitives, for example, take plaintext inputs and a key to yield ciphertexts of comparable length, while hash functions compress arbitrary-length messages into fixed-size digests (e.g., 256 bits for SHA-256), ensuring compact, consistent outputs. This structure facilitates modular integration into protocols, where inputs may include messages, keys, or nonces, and outputs serve as verifiable artifacts like signatures or authentication tags.

Randomness and key management form a core trait, with many primitives requiring secret keys or nonces generated from large, uniformly random key spaces to thwart exhaustive searches. Key sizes are chosen to provide adequate entropy—such as 128 bits for AES, yielding a 2^{128} key space that renders brute-force attacks computationally prohibitive even with vast resources—often derived via secure random number generators. Key generation is integral to primitive design, emphasizing unpredictability to avoid reducing the effective key space through biases or patterns.

Security foundations for primitives rely on provable frameworks grounded in computational hardness assumptions, where the primitive's resistance is reduced to solving well-studied intractable problems. Common assumptions include the discrete logarithm problem, where computing logarithms in finite fields is infeasible, or the integer factorization problem underlying RSA-like schemes; proofs demonstrate that any efficient adversary breaking the primitive could efficiently solve the assumed-hard problem, often via polynomial-time reductions. These reductions ensure security holds under standard models, with negligible advantage for probabilistic polynomial-time attackers.
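The deterministic behavior and key-space arithmetic described above are easy to illustrate. The following Python sketch, using only the standard library, is an illustrative example rather than a normative construction:

```python
import hashlib
import secrets

# Deterministic behavior: identical inputs always produce identical digests.
msg = b"example message"
d1 = hashlib.sha256(msg).hexdigest()
d2 = hashlib.sha256(msg).hexdigest()
assert d1 == d2

# A small change in the input yields an unrelated digest.
d3 = hashlib.sha256(b"fxample message").hexdigest()
print(d1[:16], d3[:16])  # leading bytes already differ completely

# Key-space arithmetic: a 128-bit key drawn uniformly at random gives
# 2**128 possibilities, far beyond exhaustive search.
key = secrets.token_bytes(16)  # 16 bytes = 128 bits of entropy
print(f"key space size: 2**128 = {2**128:.3e}")
```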

Rationale and Historical Development

Purpose and Motivation

Cryptographic primitives provide the foundational building blocks essential for constructing secure communication and data protection systems in a modular fashion. This modularity allows cryptographers to assemble complex protocols from independently verified components, thereby reducing the likelihood of introducing novel design errors that could compromise security in custom-built solutions. By isolating core functions such as encryption or hashing into primitives, developers can focus on higher-level protocol logic while leveraging the rigorous mathematical analysis already performed on these low-level elements, enhancing overall system reliability.

A key motivation for adopting cryptographic primitives lies in their role in promoting standardization, which fosters interoperability across diverse systems and vendors. Organizations like NIST develop and endorse these standards to ensure that cryptographic implementations are compatible, allowing seamless integration in multi-vendor environments without sacrificing security. This standardization also facilitates extensive peer-reviewed scrutiny prior to widespread deployment, as primitives undergo thorough analysis of their security properties, enabling confident use in critical applications.

These primitives are specifically engineered to counter prevalent threats in networked systems, offering layered resistance against attacks such as brute-force searches and side-channel exploitations. For instance, their design incorporates properties that withstand chosen-plaintext attacks by ensuring that even partial information leaks do not reveal underlying secrets, while side-channel countermeasures like constant-time operations mitigate timing or power-analysis risks. This threat-responsive architecture ensures that primitives provide robust defenses tailored to real-world adversarial models.

Furthermore, economic and practical imperatives drive the development of efficient cryptographic primitives, particularly for resource-constrained environments such as Internet of Things (IoT) devices. These low-level constructs are optimized to minimize computational overhead, enabling secure operations on hardware with limited processing power, memory, and energy, without unduly impacting device performance or battery life. Such efficiency is crucial for scaling security across billions of interconnected devices, balancing protection needs with practical deployment constraints.

Evolution Over Time

The concept of cryptographic primitives has roots in ancient practices, with early examples like the Caesar cipher—a simple substitution shift used by Julius Caesar around 50 BC to protect military communications—representing rudimentary forms of encryption, though these lacked the formal structure of modern primitives. Similar classical techniques, such as the polyalphabetic ciphers developed in the 16th century, provided basic security but were vulnerable to frequency analysis. The formalization of cryptographic primitives as systematic building blocks emerged in the mid-20th century, influenced by experiences with the Enigma machine, a rotor-based electromechanical cipher employed by Germany from the late 1930s until its cryptanalysis in the early 1940s by Allied forces, including Alan Turing's team at Bletchley Park. This era underscored the need for stronger, mathematically grounded systems.

A pivotal advancement came in 1949 with Claude Shannon's seminal paper "Communication Theory of Secrecy Systems," which introduced the fundamental principles of confusion (obscuring the relationship between key and ciphertext) and diffusion (spreading the influence of each plaintext bit across many ciphertext bits), laying the theoretical foundation for modern cryptographic design. These concepts shifted cryptography from ad hoc methods to a scientific discipline informed by information theory, enabling the development of primitives that could be analyzed for security against known attacks.

The 1970s marked the transition to standardized, practical primitives amid growing computational power and data protection needs. In 1976, Whitfield Diffie and Martin Hellman published "New Directions in Cryptography," introducing the Diffie-Hellman key exchange protocol, which pioneered public-key cryptography by allowing secure key agreement without prior shared secrets, fundamentally altering symmetric-only paradigms. This was followed in 1977 by the U.S. National Bureau of Standards (now NIST) adopting the Data Encryption Standard (DES) as the first widely standardized block cipher, a 56-bit symmetric algorithm selected from a public solicitation to protect sensitive government data. The same year saw Rivest, Shamir, and Adleman announce the RSA algorithm, though its formal paper appeared in 1978, providing the first viable public-key cryptosystem based on the difficulty of factoring large integers.

The 1980s and 1990s saw rapid proliferation and refinement of primitives, driven by academic research and standardization efforts. RSA gained widespread adoption in the 1990s for secure communications, exemplified by its integration into protocols like SSL. In 1991, Ronald Rivest designed MD5, a 128-bit hash function intended for digital signatures and message integrity, published as RFC 1321 the following year. Concurrently, the field advanced toward provable security; Shafi Goldwasser and Silvio Micali's 1984 paper "Probabilistic Encryption" introduced semantic security, a formal model ensuring that ciphertext reveals no partial information about plaintext, influencing subsequent primitive designs.

Entering the 2000s, concerns over DES's shrinking effective key strength led to its replacement by the Advanced Encryption Standard (AES) in 2001, a NIST-selected symmetric block cipher with key sizes up to 256 bits, chosen from global submissions for its efficiency and security. The turn of the millennium also highlighted quantum computing threats, with Peter Shor's 1994 algorithm demonstrating that quantum computers could efficiently factor large integers, rendering RSA and similar primitives insecure against a sufficiently large quantum computer—a realization that spurred the post-quantum research efforts that matured in the 2020s.
In response, NIST launched its Post-Quantum Cryptography Standardization Process in 2016, with initial selections in 2022 of lattice-based algorithms: CRYSTALS-Kyber (standardized as ML-KEM in FIPS 203) for key encapsulation and CRYSTALS-Dilithium (standardized as ML-DSA in FIPS 204) for signatures, along with the hash-based SPHINCS+ (standardized as SLH-DSA in FIPS 205). These standards were finalized in August 2024, with the further selection of the code-based HQC algorithm in March 2025 for standardization, marking an ongoing shift toward quantum-resistant primitives to safeguard future systems.

Classification

Symmetric-Key Primitives

Symmetric-key primitives form a fundamental class of cryptographic algorithms where the same secret key is used for both encryption and decryption operations, ensuring that the processes are reversible only by parties possessing the key. This key must be securely distributed and maintained in confidence between communicating entities, as its compromise would allow an adversary to perform both encryption and decryption. The reliance on a single shared key distinguishes these primitives from asymmetric alternatives, prioritizing computational efficiency for scenarios where key establishment has already occurred.

The primary functions of symmetric-key primitives encompass encryption mechanisms tailored to different needs. Stream ciphers operate by generating a pseudorandom keystream from the secret key, which is then combined bitwise with the plaintext to produce ciphertext, allowing continuous encryption of data streams without fixed boundaries. In contrast, block ciphers process data in fixed-length blocks, typically 128 bits, applying a series of transformations controlled by the key to each block independently, which facilitates structured handling of discrete data units. These functions enable versatile applications while maintaining the invertibility essential to symmetric encryption.

Security for symmetric-key primitives is predicated on the assumption that the shared key remains secret, with algorithms designed to withstand various adversarial models. A core requirement is resistance to known-plaintext attacks, where an attacker has access to pairs of plaintexts and their corresponding ciphertexts but cannot derive the key or decrypt arbitrary messages. This model ensures that even partial knowledge of plaintext does not compromise the system's integrity, provided the key's secrecy is upheld and the primitive adheres to established security notions like indistinguishability under chosen-plaintext attack.

Due to their high performance and low overhead, symmetric-key primitives are ideally suited for efficient bulk data encryption in resource-intensive environments. They are commonly employed in secure communication protocols, such as those underpinning virtual private networks (VPNs), where large volumes of traffic require rapid protection without the computational burden of public-key operations for each session. This efficiency makes them indispensable for protecting data in transit or at rest in high-throughput systems.
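As a concrete illustration of single-key operation, the sketch below encrypts and decrypts with AES-GCM via the pyca/cryptography package (an assumed third-party dependency); it is a minimal example, not a vetted protocol:

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # the single shared secret key
aead = AESGCM(key)

nonce = os.urandom(12)  # 96-bit nonce; must never repeat under the same key
plaintext = b"bulk data protected with one shared key"
ciphertext = aead.encrypt(nonce, plaintext, None)

# Decryption succeeds only for a party holding the same key (and nonce).
assert AESGCM(key).decrypt(nonce, ciphertext, None) == plaintext
```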

Asymmetric-Key Primitives

Asymmetric-key primitives, also known as public-key primitives, enable cryptographic operations using a pair of mathematically related keys: a public key available to anyone and a private key kept secret by its owner. This approach contrasts with symmetric-key primitives by eliminating the need for secure pre-distribution of shared secrets, instead relying on the open publication of public keys for key dissemination. The core mechanism depends on one-way functions, which are computationally easy to evaluate in the forward direction using the public key but extremely difficult to invert without knowledge of the private key, or trapdoor, that facilitates efficient reversal.

These primitives support primary functions including key exchange, encryption, and digital signatures. For key exchange, protocols like Diffie-Hellman allow two parties to jointly compute a shared secret over an insecure channel without exchanging the secret itself, based on the difficulty of the discrete logarithm problem. In encryption, schemes such as ElGamal use the recipient's public key to encrypt messages, with only the private key enabling decryption, providing confidentiality in open environments. Digital signatures, conversely, involve signing messages with the private key and verifying them with the public key, ensuring authenticity and non-repudiation.

The security model of asymmetric-key primitives hinges on the presumed hardness of specific mathematical problems, such as integer factorization or discrete logarithms, which underpin the one-way functions and resist efficient computation by adversaries lacking the private key. However, these primitives are susceptible to man-in-the-middle attacks, where an interceptor impersonates parties during key exchange, unless supplemented by authentication mechanisms like certificates.

In practice, asymmetric-key primitives facilitate secure email systems like Pretty Good Privacy (PGP), where public keys encrypt messages and private keys sign them for trusted delivery. They also underpin web authentication in HTTPS handshakes, using public-key operations to verify server identity and negotiate symmetric session keys for efficient data protection.
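The key-exchange idea can be demonstrated with a toy finite-field Diffie-Hellman in Python; the prime, generator, and exponent ranges below are illustrative assumptions, far smaller than deployment-grade parameters:

```python
import secrets

# Toy finite-field Diffie-Hellman (illustration only; real deployments
# use >=2048-bit standardized groups or elliptic curves).
p = 2**127 - 1   # a small Mersenne prime serving as the group modulus
g = 5            # illustrative base element

a = secrets.randbelow(p - 2) + 2   # Alice's private exponent
b = secrets.randbelow(p - 2) + 2   # Bob's private exponent

A = pow(g, a, p)   # Alice publishes A = g^a mod p
B = pow(g, b, p)   # Bob publishes B = g^b mod p

# Each side combines its private exponent with the other's public value;
# both arrive at g^(ab) mod p without the secret ever being transmitted.
assert pow(B, a, p) == pow(A, b, p)
shared_secret = pow(B, a, p)
```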

Hash-Based Primitives

Hash-based primitives are cryptographic functions designed to produce a fixed-size output, known as a hash value or digest, from an input of arbitrary length, serving as a one-way mechanism for data integrity and identification. These primitives are characterized by their irreversibility, meaning it is computationally infeasible to recover the original input from the output. Unlike symmetric or asymmetric primitives that rely on keys for encryption or decryption, hash-based primitives operate without keys in their basic form, focusing instead on properties that ensure data cannot be altered without detection.

The core mechanism of hash-based primitives involves collision-resistant functions that map arbitrary-length inputs to a fixed-size output, such as a 256-bit digest, while providing preimage resistance, where finding an input that produces a specific output is computationally hard. This ensures that even minor changes to the input result in a significantly different output, making these functions ideal for detecting tampering. The security model assumes the computational difficulty of finding collisions—two distinct inputs yielding the same output—or preimages, with resistance levels typically scaling with the output size; for instance, preimage resistance requires approximately 2^n operations for an n-bit output. A foundational approach to constructing these primitives is the Merkle-Damgård construction, an iterative design that processes input blocks through a compression function to build the final digest, preserving collision resistance if the underlying compression function is secure.

Primary functions of hash-based primitives include acting as digital fingerprints for verifying data integrity, securely storing passwords by hashing them to prevent direct exposure of credentials, and enabling proof-of-work mechanisms in blockchains, where computational effort is required to find a hash meeting specific criteria. In practice, these primitives support use cases such as file integrity verification, where a digest confirms unaltered content, and generating nonces in protocols to ensure uniqueness and prevent replay attacks. These applications leverage the primitives' efficiency and one-way nature to provide robust protection against forgery without the need for reversible operations found in key-based primitives.
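A minimal Python sketch of the Merkle-Damgård iteration follows; the compression function improvised from SHA-256, the 32-byte state, and the padding rule are simplifying assumptions for illustration, not a standardized design:

```python
import hashlib

def toy_merkle_damgard(message: bytes, block_size: int = 64) -> bytes:
    """Toy Merkle-Damgård: iterate a compression function over padded blocks."""
    # Length-strengthening padding: 0x80, zeros, then 8-byte message length.
    length = len(message)
    message += b"\x80"
    message += b"\x00" * ((-len(message) - 8) % block_size)
    message += length.to_bytes(8, "big")

    state = b"\x00" * 32  # fixed initialization vector
    for i in range(0, len(message), block_size):
        block = message[i : i + block_size]
        # Compression step: (state, block) -> new 32-byte state.
        state = hashlib.sha256(state + block).digest()
    return state  # final state is the digest

print(toy_merkle_damgard(b"arbitrary-length input").hex())
```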

Composition and Applications

Methods of Combining Primitives

Cryptographic primitives are frequently combined to build higher-level protocols that achieve desired security properties such as confidentiality, integrity, and authenticity. These combinations follow structured methods to ensure the resulting system inherits the security guarantees of the individual primitives while addressing limitations like computational efficiency or key distribution. Key approaches include sequential composition, where primitives are applied in series; parallel composition, where multiple instances operate concurrently; hybrid constructions, which integrate different types of primitives; and formal models that rigorously analyze the security of such assemblies.

Sequential composition chains primitives end-to-end, with the output of one serving as input to the next. A prominent example is the Encrypt-then-MAC (EtM) paradigm for authenticated encryption, in which a message is first encrypted using a symmetric-key cipher to produce a ciphertext, and then a message authentication code (MAC) is computed over that ciphertext using a separate key. This method ensures both privacy through encryption and integrity/authenticity through the MAC, preventing attacks like chosen-ciphertext manipulations that could arise in alternative orderings such as MAC-then-encrypt. The security of generic EtM constructions has been proven under standard assumptions for the underlying encryption and MAC primitives, making it a foundational approach in standards like TLS.

Parallel composition employs multiple instances of the same or similar primitives simultaneously to achieve collective functionality without sequential dependency. In multi-signature schemes, for example, several signers each generate an individual signature on the same message using their private keys, and these signatures are aggregated into a single, compact verification value that attests to the endorsement by the entire group. This approach reduces communication and storage overhead in scenarios like blockchain consensus or certificate authorities, where multiple parties must approve a transaction. Seminal constructions, such as those based on bilinear pairings, demonstrate that parallel multi-signatures can be as secure as individual signatures under the computational Diffie-Hellman assumption in appropriate groups.

Hybrid constructions merge symmetric-key and asymmetric-key primitives to optimize performance, using the latter for secure key distribution and the former for efficient bulk processing. A typical setup involves generating a random symmetric key (e.g., an AES key), encrypting the actual data with it, and then encrypting the symmetric key itself using an asymmetric primitive like RSA before transmission; the recipient decrypts the symmetric key asymmetrically and uses it to decrypt the data. This balances the scalability of symmetric encryption with the key-distribution capabilities of asymmetric methods, as seen in widely adopted protocols for secure email and web browsing. Such hybrids are provably secure when the asymmetric component provides IND-CCA security and the symmetric one offers strong confidentiality and integrity.

Formal models provide theoretical foundations for verifying the security of combined primitives, ensuring that compositions preserve overall system guarantees. Ran Canetti's universally composable (UC) framework, introduced in 2001, defines security relative to an ideal functionality and proves that protocols remain secure under arbitrary parallel or sequential compositions with other protocols, even in multi-party settings with potential corruption. This model addresses limitations of earlier black-box composition theorems by incorporating setup assumptions and sub-protocol emulation, enabling modular protocol design in complex environments like the internet.
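The Encrypt-then-MAC pattern can be sketched in a few lines of Python; this assumes the pyca/cryptography package for AES-CTR, uses HMAC-SHA-256 from the standard library, and is a didactic sketch rather than a production AEAD:

```python
# pip install cryptography
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(16)
    encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
    ciphertext = iv + encryptor.update(plaintext) + encryptor.finalize()
    # MAC is computed over the ciphertext, with an independent key.
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    ciphertext, tag = blob[:-32], blob[-32:]
    # Verify the tag over the ciphertext BEFORE any decryption happens;
    # this ordering is what distinguishes EtM from MAC-then-encrypt.
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    iv, body = ciphertext[:16], ciphertext[16:]
    decryptor = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
    return decryptor.update(body) + decryptor.finalize()

enc_key, mac_key = os.urandom(16), os.urandom(32)
blob = encrypt_then_mac(enc_key, mac_key, b"attack at dawn")
assert decrypt(enc_key, mac_key, blob) == b"attack at dawn"
```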

Security Considerations in Composition

When composing cryptographic primitives, the order of operations can critically impact security, as certain sequences may introduce vulnerabilities not present in individual components. For instance, in authenticated encryption schemes, the MAC-then-encrypt paradigm can enable padding-oracle attacks that recover plaintexts or forge messages, whereas encrypt-then-MAC provides stronger guarantees when both the encryption and MAC primitives are secure. This preference for encrypt-then-MAC emerged from analyses in the early 2000s, highlighting how composition affects resistance to chosen-ciphertext attacks.

Provable security frameworks address these risks through composition theorems that enable modular proofs of security for combined systems. Black-box reductions allow demonstrating that breaking the composed scheme implies breaking the underlying primitives, without requiring knowledge of their internal workings. Hybrid arguments further support this by sequentially replacing components with ideal versions, bounding the distinguishing advantage of an adversary across intermediate hybrids to establish overall indistinguishability. These techniques ensure that security proofs remain composable, facilitating the design of protocols like TLS where multiple primitives interact.

Side-channel attacks pose additional threats to composed systems, where interactions between primitives can amplify leakage through timing variations. For example, variable execution times in decryption or padding checks across encryption and authentication steps may reveal key material when primitives are chained. To mitigate this, implementations must employ constant-time operations throughout the composition, ensuring that execution duration remains independent of secret data and preventing timing-based recovery of intermediate values.

In the post-quantum era, compositions must account for quantum adversaries capable of solving discrete logarithm or factoring problems, necessitating hybrid schemes that pair classical primitives with quantum-resistant ones during the transition. NIST guidelines recommend such modes to maintain security against both classical and quantum threats, evaluating overall resistance based on the weakest link in the chain. This approach ensures that combined systems remain robust as quantum computing advances, with ongoing standardization emphasizing verifiable security in multi-primitive protocols.
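The timing concern is concrete even at the level of a single comparison. The Python sketch below contrasts a naive tag check with the constant-time comparison from the standard library; the function names are illustrative:

```python
import hmac

def leaky_verify(tag: bytes, expected: bytes) -> bool:
    # == may short-circuit at the first mismatching byte, so the comparison
    # time can reveal how many leading bytes of the secret tag are correct.
    return tag == expected

def constant_time_verify(tag: bytes, expected: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the first
    # difference occurs, keeping duration independent of secret data.
    return hmac.compare_digest(tag, expected)
```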

Examples of Primitives

Block Ciphers and Modes

Block ciphers are symmetric-key cryptographic primitives that encrypt fixed-size blocks of plaintext into ciphertext blocks of the same size using a secret key, serving as foundational building blocks for data protection in symmetric encryption schemes. These primitives operate through iterative rounds of substitution and permutation to achieve confusion and diffusion, ensuring that small changes in input produce significant alterations in output. As part of symmetric-key primitives, block ciphers enable efficient bulk encryption but require careful selection of operating modes to handle variable-length data securely.

The Data Encryption Standard (DES), adopted in 1977, exemplifies an early block cipher with a 64-bit block size and a 56-bit effective key length. DES processes data through 16 rounds of Feistel operations, involving expansion, substitution via S-boxes, permutation, and XOR with subkeys derived from the key. Its short key length rendered it vulnerable to brute-force attacks; in 1998, the Electronic Frontier Foundation's DES Cracker machine exhaustively searched the keyspace in 56 hours, demonstrating DES's obsolescence for modern security needs.

In contrast, the Advanced Encryption Standard (AES), standardized in 2001 based on the Rijndael algorithm, provides robust security with a fixed 128-bit block size and variable key lengths of 128, 192, or 256 bits. AES employs a substitution-permutation network structure across 10, 12, or 14 rounds respectively, featuring byte substitution via an S-box for non-linearity, row shifting, column mixing for diffusion, and round-key addition via XOR. The round keys are generated through a key expansion schedule that incorporates round constants and S-box operations to prevent related-key attacks. AES's design ensures high resistance to cryptanalytic attacks, including linear cryptanalysis, as formalized by Matsui in 1993 for ciphers like DES.

To extend block ciphers beyond single-block encryption, modes of operation define how multiple blocks are processed, transforming the primitive into a secure system for arbitrary data lengths. The Electronic Codebook (ECB) mode simply encrypts each block independently as C_i = E_k(P_i), but it is insecure for patterned data since identical plaintext blocks yield identical ciphertext blocks, leaking structure. Cipher Block Chaining (CBC) mode addresses this by chaining blocks: C_i = E_k(P_i \oplus C_{i-1}) for i \geq 1, with C_0 as a random initialization vector (IV), ensuring that repeated plaintext blocks encrypt differently, but requiring the IV to be unpredictable. Counter (CTR) mode operates in a stream-cipher fashion without chaining: C_i = P_i \oplus E_k(\mathrm{IV} \| \mathrm{ctr}_i), where \mathrm{ctr}_i is an incrementing counter, providing parallelizable encryption and resistance to certain malleability attacks when the counter is unique.

Key security properties of block ciphers like DES and AES include the avalanche effect, where a single-bit change in plaintext or key alters approximately half the output bits, promoting rapid diffusion as per Shannon's principles. The strict avalanche criterion enhances resistance to differential and linear cryptanalysis by ensuring high nonlinearity and bit independence in the output. AES, in particular, achieves near-ideal avalanche behavior after just a few rounds, contributing to its widespread adoption in standards like TLS and IPsec.
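The structural leakage of ECB versus the randomization of CTR can be observed directly. The following sketch assumes the pyca/cryptography package and is illustrative only (ECB appears precisely because it should not be used for patterned data):

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
plaintext = b"SIXTEEN BYTE BLK" * 2  # two identical 16-byte blocks

# ECB: identical plaintext blocks encrypt to identical ciphertext blocks.
ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ct_ecb = ecb.update(plaintext) + ecb.finalize()
assert ct_ecb[:16] == ct_ecb[16:32]   # structure leaks

# CTR: each block is XORed with a fresh keystream block, hiding repetition.
nonce = os.urandom(16)
ctr = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ct_ctr = ctr.update(plaintext) + ctr.finalize()
assert ct_ctr[:16] != ct_ctr[16:32]
```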

Digital Signature Schemes

Digital signature schemes are asymmetric cryptographic primitives that provide authentication, integrity, and non-repudiation for digital messages by allowing a signer to produce a signature using a private key, which can be verified by anyone using the corresponding public key. These schemes ensure that only the holder of the private key can generate valid signatures, while the public key enables efficient verification without revealing the private key. As part of asymmetric-key primitives, they rely on computationally hard problems like integer factorization or discrete logarithms to achieve security.

One of the earliest and most influential schemes is the RSA signature algorithm, introduced in 1977 by Rivest, Shamir, and Adleman. In RSA signatures, the signer computes the signature s = m^d \bmod n on a message m using the private exponent d and modulus n, where n = pq for large primes p and q; verification is performed by checking whether m = s^e \bmod n using the public exponent e. To prevent attacks such as existential forgeries on raw RSA, signatures incorporate padding schemes like PKCS#1 v1.5, which structure the message with specific byte patterns before signing.

The Elliptic Curve Digital Signature Algorithm (ECDSA), standardized by NIST in 2000, extends the Digital Signature Algorithm (DSA) to elliptic curves, offering comparable security to RSA but with smaller key sizes and faster computations. ECDSA generates keys as points on an elliptic curve over a finite field, with the private key being a scalar and the public key the corresponding curve point; signing involves computing a pair (r, s) based on a random nonce, a hash of the message, and the private key, while verification uses the public key and the same hash. Performance benchmarks show ECDSA signatures are significantly faster to generate than equivalent-strength RSA signatures, particularly for key sizes providing at least 128 bits of security, due to the efficiency of elliptic-curve operations.

The security of modern digital signature schemes is typically analyzed in terms of existential unforgeability under chosen-message attacks (EUF-CMA), where an adversary with access to signatures on chosen messages cannot produce a valid signature on a new message. This notion was formalized by Goldwasser, Micali, and Rivest in 1988, building on earlier work. A precursor to multi-use schemes like RSA and ECDSA is the Lamport one-time signature from 1979, which uses a one-way function to generate pairs of values for signing the bits of a message hash but is limited to a single use per key pair to avoid forgery.

Digital signature schemes find widespread applications in code signing, where developers use certificates to sign software binaries, ensuring authenticity and preventing tampering during distribution. In blockchain transactions, such as those in Bitcoin, ECDSA signatures authorize spending by proving ownership of funds without a central authority.
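A minimal ECDSA sign/verify round trip, assuming the pyca/cryptography package and the NIST P-256 curve, looks as follows; it is a sketch of the API flow, not a complete certificate-based deployment:

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Key pair: the private key is a scalar, the public key a curve point.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b"signed with the private key, verified with the public one"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("valid signature")
except InvalidSignature:
    print("forgery or corruption detected")
```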

Hash Functions

Hash functions serve as fundamental unkeyed cryptographic primitives within the hash-based category, providing a one-way mapping from variable-length inputs to fixed-length outputs to ensure data integrity by detecting unauthorized modifications. These functions are designed to be computationally efficient while resisting inversion and collision-finding attacks, making them essential for applications like digital forensics and blockchain ledgers.

A notable early construction is MD5, introduced by Ronald Rivest in 1992 as an improvement over MD4, featuring a 128-bit output and employing the Merkle-Damgård structure with 4 rounds of 16 operations each (64 steps total) involving bitwise operations such as XOR, AND, and rotations. Despite initial widespread adoption, MD5's security was compromised in 2004 when Xiaoyun Wang and colleagues demonstrated practical collision attacks, generating distinct inputs with identical hashes in approximately 2^{39} operations using differential cryptanalysis, which exploited flaws in the compression function and illustrated the vulnerability of the Merkle-Damgård paradigm to such weaknesses. This breakthrough underscored the need for stronger designs, as collisions undermine integrity checks by allowing forged data to produce the same digest.

In response, the SHA-2 family, including SHA-256 standardized by NIST in 2001 and detailed in FIPS 180-4, offers a more robust alternative with a 256-bit output and 64 rounds of processing in its Merkle-Damgård-based compression function, which incorporates bitwise operations like the choice function defined as \text{Ch}(x, y, z) = (x \land y) \oplus (\neg x \land z) along with majority and rotation functions to enhance diffusion and resistance to cryptanalysis. No practical breaks have been found for SHA-256, maintaining its status as a widely adopted standard for secure hashing.

For modern performance needs, BLAKE3, introduced in 2020 by Jack O'Connor and collaborators, represents an advancement with configurable output sizes (default 256 bits) and a tree-based construction that enables high parallelism across multiple threads or devices, achieving speeds up to 15 times faster than SHA-2 on certain hardware by structuring the computation as a binary tree of BLAKE2s-style compression blocks, thus avoiding the sequential limitations of traditional designs like Merkle-Damgård. BLAKE3 inherits the well-analyzed security of its BLAKE2 components, targeting full collision and preimage resistance with no known weaknesses.

Key security properties for these hash functions include second preimage resistance, where, given an input x, finding a distinct x' \neq x such that h(x') = h(x) requires approximately 2^n operations for an n-bit output, ensuring that targeted forgeries remain infeasible. Collision resistance, a related but weaker property, demands that discovering any pair of distinct inputs with equal outputs is hard, with the theoretical bound set by the birthday attack at roughly 2^{n/2} evaluations, as derived from the birthday paradox; for instance, MD5's 128-bit output yields a 2^{64} bound that was practically surpassed, while SHA-256's 256 bits provide a 2^{128} barrier considered secure against current computational capabilities.
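The 2^{n/2} birthday bound can be checked empirically by truncating a hash to a small n. The sketch below (standard library only, with an illustrative 32-bit truncation) typically finds a collision after on the order of 2^{16} random trials:

```python
import hashlib
import os

def truncated_hash(data: bytes, bits: int = 32) -> int:
    """SHA-256 truncated to `bits` bits, to make collisions findable."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

# Birthday attack: with an n-bit output, a collision is expected after
# roughly 2**(n/2) random trials (about 2**16 here for n = 32).
seen = {}
trials = 0
while True:
    trials += 1
    msg = os.urandom(16)
    h = truncated_hash(msg)
    if h in seen and seen[h] != msg:
        break  # two distinct inputs with the same truncated hash
    seen[h] = msg

print(f"collision after {trials} trials (~2**16 expected for 32-bit output)")
```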
