
Secure Hash Algorithms

Secure Hash Algorithms (SHA) are a family of cryptographic hash functions designed by the National Security Agency (NSA) and published by the National Institute of Standards and Technology (NIST) as federal standards to generate a fixed-length message digest from input messages of arbitrary length, ensuring properties such as preimage resistance, second-preimage resistance, and collision resistance that make it computationally infeasible to reverse the process or find two distinct inputs producing the same output. These algorithms are iterative, one-way functions primarily used for applications like digital signatures, message authentication codes, and data integrity verification in cryptographic protocols. The development of SHA began in the early 1990s, with the initial Secure Hash Standard (SHS) specified in Federal Information Processing Standard (FIPS) 180 in 1993, introducing the original SHA-0 algorithm, which was soon revised to SHA-1 in 1995 due to a discovered weakness. In response to growing concerns over cryptographic vulnerabilities, NIST advanced the family in 2002 with FIPS 180-2, defining the SHA-2 series to provide stronger security margins against emerging attacks. Following successful cryptanalysis of SHA-1, including practical collision attacks demonstrated in 2017, NIST deprecated SHA-1 for most uses in 2022; it had earlier initiated a public competition in 2007 that culminated in the selection of Keccak as the basis for SHA-3, standardized in FIPS 202 in 2015. The SHA family encompasses several variants tailored for different security levels and output sizes: SHA-1 produces a 160-bit digest but is no longer approved for new applications; the SHA-2 subfamily includes SHA-224 (224 bits), SHA-256 (256 bits), SHA-384 (384 bits), SHA-512 (512 bits), and truncated versions SHA-512/224 and SHA-512/256, all approved for federal use in generating secure hashes.
In contrast, SHA-3, based on the sponge construction unlike the Merkle-Damgård structure of prior versions, offers SHA3-224, SHA3-256, SHA3-384, and SHA3-512, along with extendable-output functions (XOFs) like SHAKE128 and SHAKE256 for variable-length outputs, providing enhanced flexibility and resistance to certain attack vectors. These algorithms form the backbone of modern cryptographic systems, with NIST recommending SHA-2 and SHA-3 for ongoing deployments to maintain data security amid evolving threats.

Overview

Definition and Purpose

A secure hash algorithm is a cryptographic hash function designed to map input data of arbitrary length to a fixed-length output, known as a message digest or hash value, through a one-way mathematical transformation. This process produces a unique digital fingerprint for the input, such that even a single-bit change in the data results in a substantially different output, enabling efficient verification of data integrity. The fixed output size, typically 256 or 512 bits, ensures computational feasibility regardless of input complexity. The primary purposes of secure hash algorithms include safeguarding data integrity, supporting digital signatures by condensing messages for signing, securely storing passwords through salted hashing to resist brute-force attacks, and facilitating message authentication via constructs like HMAC. These functions are essential in cryptographic protocols where confirming unaltered transmission or storage is critical, as the probability of two distinct inputs producing the same hash value (a collision) is designed to be negligible. In contrast to encryption algorithms, which are reversible with the appropriate key to recover the original data, secure hash algorithms are intentionally irreversible, making it computationally infeasible to derive the input from the hash value alone. This one-way property arose from the need for robust primitives after vulnerabilities in early hash functions like MD4 and MD5 exposed risks to cryptographic systems, leading to the standardization of stronger alternatives. Secure hash algorithms find practical use in software distribution, where the hash of a downloaded file allows users to confirm it matches the published value without corruption or tampering during transfer. They are also fundamental in blockchain systems, where hashes link blocks to maintain ledger integrity and prevent unauthorized alterations.
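The fixed-length, deterministic fingerprint behavior described above can be seen directly with Python's standard hashlib module (a minimal illustration; the inputs are arbitrary examples):

```python
import hashlib

# Inputs of very different lengths...
short_msg = b"hi"
long_msg = b"x" * 1_000_000  # one million bytes

short_digest = hashlib.sha256(short_msg).hexdigest()
long_digest = hashlib.sha256(long_msg).hexdigest()

# ...both produce a 256-bit digest (64 hex characters).
print(len(short_digest), len(long_digest))  # 64 64

# Hashing is deterministic: the same input always yields the same digest.
assert hashlib.sha256(short_msg).hexdigest() == short_digest
```

The digest length depends only on the chosen algorithm, never on the input size.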

Key Properties

Secure hash algorithms are designed to exhibit several core cryptographic properties that ensure their reliability and security in applications such as data integrity verification and digital signatures. These properties distinguish them from non-cryptographic hash functions and make it computationally infeasible for adversaries to reverse-engineer or manipulate the hashing process. One fundamental property is preimage resistance, which means that given a hash output, it is computationally infeasible to find any input message that produces that exact output. This one-way nature prevents attackers from reversing the hash to recover the original data, a critical feature for protecting sensitive information like passwords. Second preimage resistance builds on this by ensuring that, for a given input and its corresponding output, it is computationally infeasible to find a different input that produces the same hash value. This property safeguards against targeted attacks where an adversary attempts to substitute one valid message with another that hashes identically, thereby maintaining the integrity of authenticated data. Collision resistance is perhaps the strongest of these resistance properties, requiring that it is computationally infeasible to find any two distinct input messages that produce the same output. Unlike second preimage attacks, which target a specific hash, collision resistance applies universally, making it exponentially harder to achieve and essential for preventing forged documents or certificates. This property implies second preimage resistance but not preimage resistance, providing a higher security threshold. Secure hash algorithms also demonstrate the avalanche effect, where a minimal change in the input—such as flipping a single bit—results in a substantial and unpredictable change in the output, typically altering approximately half of the output bits. This property enhances resistance to differential attacks and ensures that similar inputs do not yield similar hashes, contributing to overall unpredictability.
Additionally, these algorithms produce a fixed output size regardless of the input length, generating a digest of predetermined length (e.g., 256 bits for SHA-256), which facilitates uniform storage and comparison while compressing arbitrary-length messages. They operate in a deterministic manner, meaning the same input always yields the identical output without any randomness or variability, ensuring reproducibility across computations. Finally, the outputs of secure hash algorithms exhibit pseudorandomness, appearing indistinguishable from random values to an observer without knowledge of the input, akin to the behavior modeled in the random oracle paradigm. This property underpins their use in protocols assuming ideal randomness, such as signature schemes, and is formalized in cryptographic proofs to analyze security.
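The avalanche effect can be measured empirically: flipping a single bit of the input changes roughly half of the 256 output bits of SHA-256. A minimal sketch (the message and helper name `bit_diff` are illustrative):

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"The quick brown fox jumps over the lazy dog")
h1 = hashlib.sha256(bytes(msg)).digest()

msg[0] ^= 0x01  # flip a single bit of the input
h2 = hashlib.sha256(bytes(msg)).digest()

# Roughly half of the 256 output bits change (the avalanche effect).
print(bit_diff(h1, h2))
```

For an ideal hash the count follows a binomial distribution centered at 128, so any value near half the output length is expected.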

History and Development

Origins and Early Versions

The Secure Hash Algorithm (SHA) family originated with the development of an initial version by the National Institute of Standards and Technology (NIST) in 1993, proposed as a federal standard to support cryptographic applications requiring data integrity and authentication. This original algorithm, now retroactively termed SHA-0, was specified in Federal Information Processing Standard (FIPS) PUB 180, issued on May 11, 1993, and became effective on October 15, 1993. Designed to produce a 160-bit message digest, SHA-0 was intended for use with the Digital Signature Algorithm (DSA) outlined in the forthcoming Digital Signature Standard (DSS), addressing the need for a robust hash function in U.S. government systems for secure digital signatures. SHA-0 drew significant inspiration from earlier message digest algorithms developed by Ronald L. Rivest, particularly MD4 (1990) and MD5 (1991), adapting their core principles of processing messages in 512-bit blocks while incorporating enhancements for expanded output size and improved security against known attacks on MD-family functions. However, shortly after its publication, a significant technical flaw was identified by researchers at the National Security Agency (NSA), leading NIST to announce in May 1994 that the algorithm required revision to address the weakness, which was not publicly detailed at the time to avoid aiding potential adversaries. As a result, SHA-0 was effectively withdrawn before widespread implementation, with NIST emphasizing its unsuitability for secure applications by 1996. The revised version, SHA-1, was finalized and published in FIPS 180-1 on April 17, 1995, incorporating a minor but critical modification—a single one-bit rotation added to the message schedule—to mitigate the identified weakness while otherwise preserving the structure of the original algorithm. This update aligned with the release of FIPS 186 (Digital Signature Standard) on May 19, 1994, which mandated SHA for DSA-based signatures in federal systems, facilitating early adoption in U.S. government protocols for secure communications, such as digital signatures and integrity protection.
By 1995, SHA-1 had become the foundational hash function in these standards, enabling widespread use in digital certificates and authentication within government networks.

Standardization Process

The National Institute of Standards and Technology (NIST) established the Secure Hash Standard (SHS) through Federal Information Processing Standards (FIPS) publications, formalizing the use of SHA algorithms for federal systems and encouraging broader adoption. FIPS 180-1, approved on April 17, 1995, defined SHA-1 as a 160-bit hash function required for applications such as the Digital Signature Algorithm (DSA) in federal information processing. This standard emphasized SHA-1's role in generating message digests resistant to reversal or collision attacks, with mandatory compliance for U.S. government agencies by the effective date of October 2, 1995. FIPS 180-2, published on August 1, 2002, revised and expanded the standard to address growing needs for longer hash outputs, incorporating the SHA-2 family alongside SHA-1. It specified SHA-256 (256-bit output), SHA-384 (384-bit), and SHA-512 (512-bit), algorithms designed by the National Security Agency (NSA) to provide enhanced security against brute-force and other attacks. These additions maintained compatibility with existing SHA-1 implementations while supporting applications requiring higher security levels, such as digital signatures and message authentication. As vulnerabilities in SHA-1 emerged, NIST initiated a process to phase it out in favor of more robust alternatives. SHA-1 was deprecated in 2011 following analysis of its reduced collision resistance. Its use for digital signature generation was disallowed after December 31, 2013, while other applications such as hash-only uses were permitted with caveats until further notice, with legacy applications allowed in read-only modes. NIST announced in December 2022 that SHA-1 should be phased out by December 31, 2030, for all applications. NIST's Special Publication 800-131A provides detailed transition guidance, outlining timelines, acceptable uses during migration, and recommendations for adopting SHA-2 or SHA-3 to maintain cryptographic strength. The publication stresses risk assessments for legacy systems and promotes compliance through validated implementations.
To address long-term needs, NIST initiated a public competition in 2007 to develop a new hash algorithm standard. After evaluating 64 submissions, the Keccak algorithm, designed by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche, was selected in 2012 as the basis for SHA-3. FIPS 202, published in August 2015, standardized SHA-3 and introduced extendable-output functions (XOFs) like SHAKE128 and SHAKE256. For international recognition, SHA algorithms were integrated into the ISO/IEC 10118 series, which standardizes hash functions for security techniques like authentication and integrity protection. SHA-1 and the SHA-2 family were adopted in ISO/IEC 10118-3:2004, enabling global use in environments requiring conformance to both U.S. and international norms. Subsequent revisions, such as ISO/IEC 10118-3:2018, updated specifications to align with evolving NIST standards while preserving backward compatibility.

Design Principles

Construction Methods

Secure Hash Algorithms (SHA) primarily employ two distinct construction methods: the Merkle-Damgård paradigm for the SHA-1 and SHA-2 families, and the sponge construction for SHA-3. These methods transform variable-length inputs into fixed-length digests while aiming to preserve security properties such as collision resistance. The Merkle-Damgård construction processes the input message in fixed-size blocks using an iterative compression function, starting from an initial value (IV) or chaining variable. Each block is compressed with the previous chaining value to produce the next, culminating in the final chaining value as the hash output. This method, independently proposed by Ralph Merkle and Ivan Damgård, ensures that if the underlying compression function is collision-resistant, the overall hash function inherits this property. In SHA algorithms using this construction, the block size is typically 512 bits for SHA-1 and SHA-256, or 1024 bits for SHA-512 variants, with internal state represented in words of 32 or 64 bits, respectively, and multiple rounds (e.g., 64 or 80) per compression to enhance diffusion and confusion. The compression function for SHA-1 and SHA-2 consists of multiple rounds of bitwise operations and modular additions. To handle messages not divisible by the block size, a standardized padding scheme is applied: a '1' bit is appended, followed by zeros until the length is congruent to 448 modulo 512 (for 512-bit block variants such as SHA-1 and SHA-256) or 896 modulo 1024 (for 1024-bit block variants such as SHA-512), and finally the original message length is appended as a 64-bit or 128-bit big-endian integer, respectively. This length-strengthened padding ensures that distinct messages yield distinct padded inputs, closing off trivial collision attacks, although Merkle-Damgård hashes remain subject to length-extension attacks. In contrast, SHA-3 adopts the sponge construction, which operates on a fixed-width state (e.g., 1600 bits for standard instances) divided into rate (r) bits for input/output and capacity (c) bits for internal security.
The input is "absorbed" into the state by XORing message blocks into the rate portion and applying a fixed permutation, with a multi-rate padding scheme applied as needed; once absorbed, the state is "squeezed" by extracting rate-sized outputs iteratively, applying the permutation between extractions, to produce the digest. This method, introduced by Bertoni et al., provides flexibility for variable output lengths and avoids the length-extension vulnerabilities inherent in Merkle-Damgård.
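The Merkle-Damgård padding rule for the 512-bit-block variants can be written in a few lines (a sketch for illustration; `md_pad_512` is a hypothetical helper name, not a standard library function):

```python
import struct

def md_pad_512(message: bytes) -> bytes:
    """SHA-1/SHA-256 style padding for 512-bit (64-byte) blocks:
    append a 0x80 byte (the '1' bit), zero-fill until the length is
    56 bytes mod 64, then append the 64-bit big-endian bit length."""
    bit_len = len(message) * 8
    padded = message + b"\x80"
    padded += b"\x00" * ((56 - len(padded)) % 64)
    padded += struct.pack(">Q", bit_len)
    return padded

p = md_pad_512(b"abc")
print(len(p))  # 64: "abc" fits in a single padded block
```

A 56-byte message, by contrast, no longer leaves room for the length field in its own block and pads out to two blocks (128 bytes).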

Mathematical Foundations

Secure Hash Algorithms (SHA) operate primarily on fixed-size words, typically 32 bits for SHA-1 and SHA-256, where all arithmetic additions are performed modulo $2^{32}$, treating the words as elements of the finite ring $\mathbb{Z}/2^{32}\mathbb{Z}$. This modular arithmetic prevents overflow and ensures efficient computation on hardware, while the choice of modulus aligns with the word size to facilitate bitwise manipulations without sign extension issues. Bitwise logical operations form the core of SHA's mixing steps, including AND ($\wedge$), OR ($\vee$), XOR ($\oplus$), and bitwise complement (NOT, $\neg$), alongside circular left rotations (ROTL) and logical right shifts (SHR). These operations, applied to 32-bit words, promote avalanche effects where small input changes propagate widely, enhancing diffusion; for instance, rotations by specific bit counts (e.g., 5 and 30 bits in SHA-1) mix bits across positions without loss of information. Nonlinear Boolean functions are integral to SHA's compression, providing resistance to algebraic attacks; a representative example from SHA-1 is the choice function $\text{Ch}(x, y, z) = (x \wedge y) \oplus (\neg x \wedge z)$, which selects bits from $y$ or $z$ based on $x$, mimicking a multiplexer at the bit level and ensuring the function's nonlinearity as measured by its algebraic degree of 2. Initialization vectors (IVs) in SHA-2 algorithms are constructed from the first 32 (or 64) bits of the fractional parts of the square roots of the first eight prime numbers (2, 3, 5, 7, 11, 13, 17, 19 for SHA-256), a method chosen to generate unpredictable yet verifiable constants without embedding designer bias, often termed "nothing-up-my-sleeve" values. The security of a hash function against collision attacks ideally scales with its output length of $n$ bits, requiring approximately $2^{n/2}$ operations to find a collision due to the birthday paradox, which bounds the probability of shared hash values in a random mapping to the output space.
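The "nothing-up-my-sleeve" IV derivation is easy to reproduce with exact decimal arithmetic, confirming, for example, that SHA-256's first initial hash value comes from the fractional part of the square root of 2 (a sketch; `frac_root_bits` is an illustrative helper name):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # ample precision for 32 fractional bits

def frac_root_bits(n: int, bits: int = 32) -> int:
    """First `bits` bits of the fractional part of sqrt(n)."""
    root = Decimal(n).sqrt()
    frac = root - int(root)
    return int(frac * (1 << bits))

# SHA-256's first initial hash value H0 is derived from sqrt(2).
print(hex(frac_root_bits(2)))  # 0x6a09e667
```

The same recipe with sqrt(3) yields the second constant, 0xbb67ae85, and so on through the first eight primes.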

Specific Algorithms

SHA-1

SHA-1 is a cryptographic hash function designed to produce a 160-bit (20-byte) message digest from an input of arbitrary length. It operates on 512-bit message blocks and employs 80 rounds of processing to update an internal state consisting of five 32-bit words. The processing begins with message padding to ensure the length is a multiple of 512 bits. A '1' bit is appended to the message, followed by enough '0' bits to make the total length congruent to 448 modulo 512; then, a 64-bit big-endian representation of the original message length in bits is added. The padded message is divided into 512-bit blocks. The initial hash value (IV) comprises five 32-bit words:
H<sub>0</sub> = 0x67452301,
H<sub>1</sub> = 0xEFCDAB89,
H<sub>2</sub> = 0x98BADCFE,
H<sub>3</sub> = 0x10325476,
H<sub>4</sub> = 0xC3D2E1F0.
Each block is processed sequentially via a compression function that incorporates the current block and the previous hash state to produce an updated state; the final state yields the digest.
The compression function expands the 512-bit block into eighty 32-bit words W<sub>t</sub> (for t = 0 to 79) and performs 80 iterative steps, grouped into four rounds of 20 steps each. For t = 0 to 15, W<sub>t</sub> is the t<sup>th</sup> 32-bit big-endian word of the block. For t = 16 to 79,
W<sub>t</sub> = (W<sub>t-3</sub> ⊕ W<sub>t-8</sub> ⊕ W<sub>t-14</sub> ⊕ W<sub>t-16</sub>) rotated left by 1 bit.
Starting with A = H<sub>0</sub>, B = H<sub>1</sub>, C = H<sub>2</sub>, D = H<sub>3</sub>, E = H<sub>4</sub>, each step computes:
TEMP = (A rotated left by 5 bits) + f<sub>t</sub>(B, C, D) + E + K<sub>t</sub> + W<sub>t</sub> (all additions modulo 2<sup>32</sup>),
followed by rotating the working variables: E ← D, D ← C, C ← (B rotated left by 30 bits), B ← A, A ← TEMP.
After 80 steps, the hash values are updated as H<sub>i</sub> ← H<sub>i</sub> + corresponding working variable for i = 0 to 4.
Nonlinearity is introduced via four functions f<sub>t</sub> that cycle every 20 steps:
For 0 ≤ t ≤ 19 (f<sub>0</sub>): f<sub>t</sub>(B, C, D) = (B ∧ C) ∨ (¬B ∧ D),
For 20 ≤ t ≤ 39 (f<sub>1</sub>): f<sub>t</sub>(B, C, D) = B ⊕ C ⊕ D,
For 40 ≤ t ≤ 59 (f<sub>2</sub>): f<sub>t</sub>(B, C, D) = (B ∧ C) ∨ (B ∧ D) ∨ (C ∧ D),
For 60 ≤ t ≤ 79 (f<sub>3</sub>): f<sub>t</sub>(B, C, D) = B ⊕ C ⊕ D.
These resemble logical operations from earlier designs like MD4 and MD5, providing bitwise mixing.
Constants K<sub>t</sub> take one of four fixed values, one per 20-step round: 0x5A827999 (0 ≤ t ≤ 19), 0x6ED9EBA1 (20 ≤ t ≤ 39), 0x8F1BBCDC (40 ≤ t ≤ 59), and 0xCA62C1D6 (60 ≤ t ≤ 79), corresponding to the integer parts of 2<sup>30</sup> times the square roots of 2, 3, 5, and 10, respectively. This selection ensures transparency and avoids arbitrary constants that could suggest hidden weaknesses. SHA-1 retains legacy applications where collision resistance is not critical, such as in Git for generating unique identifiers for repository objects to verify integrity during storage and transfer. It was also widely deployed in TLS protocols prior to 2010, including for signing server certificates and handshake messages.
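The steps above translate almost line by line into a compact reference implementation, which can be checked against the standard library (a teaching sketch, neither constant-time nor optimized):

```python
import hashlib
import struct

def rotl(x: int, n: int) -> int:
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1(message: bytes) -> bytes:
    """Pure-Python SHA-1 following the round structure described above."""
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
    # Merkle-Damgard padding: 0x80, zeros, 64-bit big-endian bit length.
    bit_len = len(message) * 8
    message += b"\x80" + b"\x00" * ((56 - len(message) - 1) % 64)
    message += struct.pack(">Q", bit_len)
    for start in range(0, len(message), 64):
        w = list(struct.unpack(">16I", message[start:start + 64]))
        for t in range(16, 80):  # message schedule expansion
            w.append(rotl(w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16], 1))
        a, b, c, d, e = h
        for t in range(80):  # four rounds of 20 steps each
            if t < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif t < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif t < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            temp = (rotl(a, 5) + f + e + k + w[t]) & 0xFFFFFFFF
            a, b, c, d, e = temp, a, rotl(b, 30), c, d
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, (a, b, c, d, e))]
    return b"".join(struct.pack(">I", x) for x in h)

# Matches the standard library implementation.
assert sha1(b"abc") == hashlib.sha1(b"abc").digest()
```

The final assertion confirms agreement with hashlib for the canonical "abc" test vector; multi-block inputs exercise the same loop more than once.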

SHA-2 Family

The SHA-2 family comprises a set of cryptographic hash functions designed by the National Security Agency (NSA) and published by the National Institute of Standards and Technology (NIST), including SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256. These algorithms produce fixed-size message digests of 224, 256, 384, 512, 224, or 256 bits, respectively, and operate on message blocks of 512 bits for the 32-bit word variants (SHA-224 and SHA-256) or 1024 bits for the 64-bit word variants (SHA-384, SHA-512, SHA-512/224, and SHA-512/256). The 32-bit variants employ 64 compression rounds and the 64-bit variants employ 80, building upon the Merkle-Damgård construction similar to SHA-1 but with enhanced nonlinear functions for greater security. The core compression function uses two bitwise functions: the majority function Maj(x, y, z) = (x ∧ y) ⊕ (x ∧ z) ⊕ (y ∧ z), which selects the majority bit across three inputs, and the choice function Ch(x, y, z) = (x ∧ y) ⊕ (¬x ∧ z), which acts as a bitwise selector. Additionally, the compression rounds incorporate the functions: for 32-bit words, Σ₀(x) = ROTR²(x) ⊕ ROTR¹³(x) ⊕ ROTR²²(x) and Σ₁(x) = ROTR⁶(x) ⊕ ROTR¹¹(x) ⊕ ROTR²⁵(x), where ROTR denotes right rotation; analogous 64-bit versions use rotations of 28/34/39 and 14/18/41 bits. These functions promote diffusion and nonlinearity, improving resistance to attacks compared to SHA-1. The message schedule for each block expands the input into 64 words (80 for the 64-bit variants): the first 16 words are the padded block divided into 32-bit or 64-bit chunks, while subsequent words W_t (for t = 16 onward) are computed as W_t = σ₁(W_{t-2}) + W_{t-7} + σ₀(W_{t-15}) + W_{t-16}, where + denotes addition modulo 2³² or 2⁶⁴. For the 32-bit variants (SHA-224 and SHA-256), σ₀(x) = ROTR⁷(x) ⊕ ROTR¹⁸(x) ⊕ SHR³(x) and σ₁(x) = ROTR¹⁷(x) ⊕ ROTR¹⁹(x) ⊕ SHR¹⁰(x). For the 64-bit variants (SHA-384, SHA-512, SHA-512/224, and SHA-512/256), σ₀(x) = ROTR¹(x) ⊕ ROTR⁸(x) ⊕ SHR⁷(x) and σ₁(x) = ROTR¹⁹(x) ⊕ ROTR⁶¹(x) ⊕ SHR⁶(x). This expansion leverages rotations and XORs to generate pseudorandom extensions from the message block.
Initial hash values (IVs), denoted H⁰ through H⁷, are predefined constants with "nothing-up-my-sleeve" properties. For SHA-256, they are derived from the first 32 bits of the fractional parts of the square roots of the first eight primes (2, 3, 5, 7, 11, 13, 17, 19), yielding values such as H⁰ = 0x6a09e667. For SHA-224, distinct predefined IVs are specified (e.g., H⁰ = 0xc1059ed8). For SHA-512, the IVs are derived from the first 64 bits of the fractional parts of the square roots of the first eight primes, providing H⁰ = 0x6a09e667f3bcc908. For SHA-384, they use the ninth through sixteenth primes (23, 29, 31, 37, 41, 43, 47, 53). The truncated variants SHA-512/224 and SHA-512/256 use their own distinct IVs derived similarly from subsequent primes. These IVs initialize the working variables before processing each block. SHA-224 differs from SHA-256 primarily in its initialization and output: it uses the same 32-bit word and rounds but employs distinct predefined IVs and truncates the final 256-bit hash by omitting the least significant 32 bits to produce a 224-bit digest. This variant was introduced to provide a shorter output while maintaining compatibility with SHA-256's internals. Similarly, SHA-512/224 and SHA-512/256 truncate the 512-bit output to 224 or 256 bits using their specific IVs, offering performance benefits from 64-bit operations for shorter digests. The SHA-2 family was first standardized in FIPS 180-2 (2002); the truncated variants SHA-512/224 and SHA-512/256 were added later in FIPS 180-4 (2012).
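SHA-256's 64 round constants K_t follow the same "nothing-up-my-sleeve" recipe as the IVs, taken from the fractional parts of the cube roots of the first 64 primes. This derivation can be reproduced and checked against the published table (a sketch; `first_primes` and `cube_root_frac32` are illustrative helper names):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # ample precision for 32 fractional bits

def first_primes(n: int) -> list[int]:
    """The first n prime numbers by trial division."""
    primes, candidate = [], 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def cube_root_frac32(n: int) -> int:
    """First 32 bits of the fractional part of the cube root of n."""
    root = Decimal(n) ** (Decimal(1) / Decimal(3))
    return int((root - int(root)) * (1 << 32))

K = [cube_root_frac32(p) for p in first_primes(64)]
print(hex(K[0]))  # 0x428a2f98, the first SHA-256 round constant
```

The last constant, derived from the 64th prime (311), comes out to 0xc67178f2, matching the end of the table in FIPS 180-4.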

SHA-3

SHA-3, or Secure Hash Algorithm 3, represents a paradigm shift in cryptographic hash function design, selected by the National Institute of Standards and Technology (NIST) as the winner of the SHA-3 Cryptographic Hash Algorithm Competition. This international contest, launched in 2007 and concluded in 2012, evaluated 64 submissions to identify a robust alternative to previous SHA algorithms amid growing concerns over potential weaknesses in earlier designs. The victor, Keccak, was developed by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche, and officially standardized in Federal Information Processing Standard (FIPS) 202 in August 2015. At its core, SHA-3 employs the Keccak sponge construction, a versatile framework that processes input data through a fixed-width state of b bits, denoted as b = r + c, where r is the rate (bits absorbed or squeezed per iteration) and c is the capacity (unabsorbed internal bits providing the security margin). During the absorbing phase, input message blocks are XORed into the rate portion of the state, followed by applications of the underlying permutation; in the squeezing phase, output is extracted from the rate portion after additional permutations until the desired length is reached. This construction enables efficient handling of variable-length inputs and outputs while maintaining a uniform permutation-based core. The SHA-3 family defines four primary hash functions—SHA3-224, SHA3-256, SHA3-384, and SHA3-512—each producing fixed digest lengths of 224, 256, 384, and 512 bits, respectively, to align with common security requirements. These variants utilize the same Keccak-f permutation but adjust the capacity c (e.g., c = 448 for SHA3-224, ensuring at least 112 bits of security) and rate r = 1600 - c to balance performance and strength, with the output length determining the number of squeeze operations.
Additionally, SHA-3 incorporates two extendable-output functions (XOFs), SHAKE128 and SHAKE256, which allow arbitrary output lengths beyond fixed digests, enhancing adaptability for diverse applications like key derivation. The underlying permutation, Keccak-f[1600], forms the core of SHA-3 and operates on a state structured as a 5×5 array of 64-bit lanes (totaling 25 lanes). It comprises 24 iterative rounds, each executing five sequential step mappings: θ (theta) computes column parities and applies XOR differences for broad diffusion across the state; ρ (rho) rotates each lane by a predefined offset to introduce bit-level shifts; π (pi) rearranges lanes in a fixed pattern to mix positions; χ (chi) performs a nonlinear transformation within each row for local algebraic complexity; and ι (iota) XORs a round-specific constant into lane (0,0) to prevent symmetry and fixed points. This round structure ensures strong avalanche effects and resistance to differential and linear cryptanalysis. A primary advantage of SHA-3's sponge-based design is its inherent flexibility, supporting extendable outputs through the XOFs without requiring separate algorithms, which facilitates future-proofing and integration into protocols needing variable-length hashes. Furthermore, unlike the Merkle-Damgård constructions used in prior SHA variants, the sponge resists length-extension attacks by concealing the full internal state in the output, preventing adversaries from appending data to a known hash to forge valid extensions while preserving security margins. These properties contributed significantly to Keccak's selection, emphasizing efficiency, versatility, and enhanced attack resistance in NIST's evaluation.
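Python's hashlib exposes the two XOFs directly, which makes the variable-length behavior easy to observe (a minimal sketch; the message is an arbitrary example):

```python
import hashlib

msg = b"extendable-output example"

# SHAKE128 produces as many output bytes as requested.
d16 = hashlib.shake_128(msg).digest(16)
d64 = hashlib.shake_128(msg).digest(64)

print(len(d16), len(d64))  # 16 64

# Squeezing more output extends the same stream: the shorter
# digest is a prefix of the longer one.
assert d64[:16] == d16
```

The prefix property reflects the squeezing phase: requesting more output simply continues extracting from the sponge state rather than recomputing a different value.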

Comparison

Performance Metrics

Performance metrics for Secure Hash Algorithms primarily focus on computational efficiency, measured by throughput (e.g., cycles per byte or megabytes per second) and resource usage such as memory footprint. These metrics vary by hardware architecture, implementation optimizations, and input size, with modern 64-bit processors favoring algorithms designed for larger word sizes. Representative benchmarks illustrate the trade-offs between speed and design complexity across the family. Note that values are based on mid-2010s hardware (e.g., Haswell, Skylake); on post-2020 processors, throughputs are higher due to advanced SIMD and dedicated hash instructions. Throughput is a key indicator of efficiency, often quantified in cycles per byte (cpb) on specific CPUs. On Intel x86-64 processors, baseline software implementations of SHA-256 achieve around 16 cpb without acceleration, improving to approximately 9.75 cpb with SHA extensions enabled. SHA-512 demonstrates superior performance on 64-bit architectures due to its native 64-bit operations and fewer effective rounds per byte (80 rounds on 1024-bit blocks versus SHA-256's 64 rounds on 512-bit blocks), yielding about 6-8 cpb on Haswell-era processors. SHA-1, while deprecated for security reasons, processes data at roughly 4.5-6 cpb in optimized setups. In contrast, SHA-3 variants like SHA3-256 typically require 15-20 cpb on comparable hardware owing to the sponge construction's overhead, though optimized implementations can reach as low as 7.8 cpb with AVX2 on Skylake. On ARM processors with the ARMv8 Cryptography Extensions, throughput is comparable, with SHA-256 at 10-20 cpb, while SHA-3 lags by a factor of 1.5-2x.
Algorithm | Cycles per Byte (Intel, with extensions, mid-2010s) | Cycles per Byte (ARMv8, approximate, mid-2010s) | Source
SHA-1 | ~4.5-6 | ~5-10 | General implementations
SHA-256 | ~9.75 | ~12-18 | wolfSSL Benchmarks
SHA-512 | ~6-8 | ~8-12 | Keccak Performance Summary
SHA3-256 | ~15-17 | ~20-30 | Keccak Performance Summary
Hardware accelerations significantly boost software performance for SHA-1 and SHA-2. Intel's SHA-NI instructions perform multiple rounds in a single cycle, yielding up to 50% speedup for SHA-256 over pure software. Similarly, ARMv8 includes dedicated SHA1 and SHA256 instructions, enabling parallel round computations and reducing cpb by 30-50% compared to scalar implementations. SHA-3 lacks widespread dedicated hardware support, relying on general-purpose vector instructions like AVX2 for gains, but remains slower in software due to its wider state permutations. These extensions are crucial for high-throughput applications, shifting bottlenecks from computation to memory access for large inputs. Memory requirements remain low across the family, emphasizing their suitability for resource-constrained environments. SHA-1 and SHA-2 use minimal working state: 20 bytes (160 bits) for SHA-1 (five 32-bit words), 32 bytes (256 bits) for SHA-256 (eight 32-bit words), and 64 bytes (512 bits) for SHA-512 (eight 64-bit words), plus temporary buffers for padding, typically under 1 KB total. SHA-3 employs a larger 1600-bit (200-byte) sponge state, visualized as a 5×5×64-bit array, which increases peak memory to around 300-500 bytes including input absorption, though implementations rarely exceed 1 KB without large message buffering. These footprints enable efficient streaming processing with negligible overhead. Benchmark examples highlight practical efficiency for common tasks like hashing a 1 MB file. On an Intel processor with SHA extensions, SHA-256 completes this in approximately 5-6 ms (based on 169-200 MB/s throughput), while SHA-512 takes 3-4 ms due to its efficiency on 64-bit systems. SHA3-256 requires 10-20 ms on the same hardware, reflecting its higher cpb. On ARM-based systems such as recent Apple M-series chips, SHA-256 hashes 1 MB in 4-7 ms leveraging the crypto extensions, with SHA-3 again 1.5-2x slower at 6-14 ms. These times assume optimized libraries and exclude I/O overhead.
Trade-offs in performance arise from output length and design: longer hashes like SHA-512 incur marginally higher per-byte costs in 32-bit environments but excel on modern 64-bit CPUs, balancing speed with enhanced security margins. SHA-3's broader security profile comes at the expense of throughput, making SHA-2 preferable for latency-sensitive uses despite equivalent output sizes (e.g., SHA-256 vs. SHA3-256).
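Relative throughput on a given machine can be estimated with a short script (a rough sketch using Python's hashlib; absolute numbers depend heavily on the CPU, the library build, and available hardware extensions, and `throughput_mb_s` is an illustrative helper name):

```python
import hashlib
import time

def throughput_mb_s(name: str, size: int = 16 * 2**20, repeats: int = 3) -> float:
    """Rough single-threaded throughput of a hashlib algorithm in MB/s,
    taking the best of several timed runs over a zero-filled buffer."""
    data = b"\x00" * size
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        hashlib.new(name, data).digest()
        best = min(best, time.perf_counter() - start)
    return (size / 2**20) / best

for algo in ("sha1", "sha256", "sha512", "sha3_256"):
    print(f"{algo:9s} {throughput_mb_s(algo):10.1f} MB/s")
```

On typical 64-bit hardware the ordering mirrors the cpb table above: SHA-512 outpaces SHA-256 in pure software, and SHA3-256 trails both.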

Security Profiles

The security profile of a Secure Hash Algorithm (SHA) variant is primarily determined by its resistance to key cryptographic attacks, such as finding collisions, preimages, or second preimages, measured in terms of computational effort required. Under the birthday paradox, the theoretical bound for finding collisions in an ideal n-bit hash function is approximately 2^{n/2} operations. For SHA-1, with a 160-bit output, this yields a collision bound of 2^{80}, though practical breaks have reduced its effective security far below this level. In contrast, SHA-256 from the SHA-2 family offers 256-bit output and thus 2^{128} collision resistance, while SHA-3 variants like SHA3-256 maintain the same 2^{128} bound due to their sponge construction design. NIST provides explicit guidelines on the approved security strengths of SHA algorithms, aligning them with overall cryptographic security levels for federal applications. SHA-1 is deemed insecure for new designs and digital signature applications, with NIST mandating its phase-out by December 31, 2030, in favor of stronger alternatives. The SHA-2 family and SHA-3 are approved for use beyond 2030, with security strengths categorized as follows: SHA-224 and SHA-512/224 at 112 bits, SHA-256 and SHA-512/256 at 128 bits, SHA-384 at 192 bits, and SHA-512 at 256 bits; analogous levels apply to SHA-3 variants (e.g., SHA3-224 at 112 bits). These levels reflect the minimum bits of security against generic attacks; brute-force preimage searches require 2^n effort for an n-bit output in classical computing.
Algorithm Variant | Output Size (bits) | Security Strength (bits) | Collision Resistance (classical)
SHA-1 | 160 | 80 (deprecated) | 2^{80}
SHA-224 | 224 | 112 | 2^{112}
SHA-256 | 256 | 128 | 2^{128}
SHA-384 | 384 | 192 | 2^{192}
SHA-512 | 512 | 256 | 2^{256}
SHA3-256 | 256 | 128 | 2^{128}
SHA3-512 | 512 | 256 | 2^{256}
In the context of quantum threats, Grover's algorithm provides a quadratic speedup for brute-force searches, reducing preimage resistance for an n-bit hash from 2^n to approximately 2^{n/2} operations on a quantum computer; for example, SHA-256's preimage resistance would drop to 2^{128}, which remains adequate for near-term security but underscores the need for larger outputs like SHA-512 in long-term applications. NIST recommends migrating from SHA-1 to at least SHA-256 for all new systems to ensure compliance and maintain robust protection against both classical and emerging quantum risks.
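Because the generic bounds in this section follow directly from the digest length, they are easy to derive programmatically. A small sketch using Python's hashlib; the figures it prints are the idealized bounds discussed above, not guarantees for a cryptanalytically broken algorithm like SHA-1.

```python
import hashlib

def generic_strengths(algorithm: str) -> dict:
    """Generic security bounds (in bits) implied by an algorithm's output size.

    Real strength can be lower if the algorithm itself is broken,
    as it is for SHA-1.
    """
    n = hashlib.new(algorithm).digest_size * 8
    return {
        "output_bits": n,
        "classical_collision": n // 2,  # birthday bound
        "classical_preimage": n,        # brute-force search
        "quantum_preimage": n // 2,     # Grover's algorithm
        "quantum_collision": n // 3,    # Brassard-Hoyer-Tapp
    }

for algo in ("sha1", "sha256", "sha512", "sha3_256"):
    print(algo, generic_strengths(algo))
```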

Security Considerations

Known Vulnerabilities

Secure Hash Algorithm 0 (SHA-0), the precursor to SHA-1, was withdrawn by the National Security Agency (NSA) shortly after its initial publication in 1993 due to an undisclosed weakness that permitted collisions, leading to its replacement by SHA-1 in FIPS 180-1 (1995). In 2017, researchers from Google and the Centrum Wiskunde & Informatica (CWI) demonstrated the first practical collision attack on the full SHA-1 algorithm, known as SHAttered, by generating two distinct PDF files with identical SHA-1 hashes after approximately 2^{63} operations using specialized GPU clusters. This attack confirmed the long-suspected vulnerability of SHA-1 to collision attacks, rendering it unsuitable for applications requiring collision resistance. For the SHA-2 family, no practical collision attacks have been achieved on the full algorithms, though theoretical cryptanalysis has identified semi-free-start collisions for reduced-round versions, such as a 39-step SHA-256 collision reported in 2024. These advances improve upon prior theoretical paths but remain far from breaking the full 64-round SHA-256 or SHA-512 in feasible time. SHA-3, based on the Keccak sponge construction, has no known cryptanalytic breaks such as collisions, with its core design providing strong resistance; however, certain hardware implementations are susceptible to side-channel attacks such as power analysis. These implementation-specific issues do not affect the algorithm's theoretical security. The practical exploitation of hash collisions, such as the 2012 Flame malware's use of an MD5 chosen-prefix collision to forge valid code-signing certificates, underscored the risks of weak hash functions and eroded trust in related algorithms like SHA-1, accelerating migration efforts.

Attack Vectors and Mitigations

Collision attacks on secure hash algorithms, particularly SHA-1, exploit weaknesses in the compression function through differential cryptanalysis. Attackers construct differential paths that propagate controlled differences through the hash computation steps, allowing the identification of message pairs with the same hash output. For SHA-1, practical collisions have been achieved by combining these paths with meet-in-the-middle techniques, where partial computations from both ends of the message blocks are matched to find a full collision efficiently. This approach builds on earlier theoretical vulnerabilities but focuses on scalable search methods rather than isolated instances.

The Merkle-Damgård construction underlying SHA-1 and the SHA-2 family introduces a length-extension vulnerability: an attacker possessing a hash value and the length of the original message can append arbitrary data and compute the hash of the extended message without access to any secret prefix. This enables forgery in protocols that use naive keyed hashing such as H(key || message), as the attacker can craft valid hashes for modified inputs. To mitigate this, the keyed-hash message authentication code (HMAC) incorporates a secret key into both an inner and an outer hash, transforming the construction to resist extensions by binding the output to undisclosed information. NIST guidance affirms that HMAC-SHA-1 and HMAC-SHA-2 remain suitable for message authentication despite SHA-1's other weaknesses, as the keying prevents length-extension exploitation.

Side-channel attacks target implementation details of hash functions rather than the algorithms themselves, leaking secret-dependent information via observable physical phenomena. Timing attacks measure variations in computation duration influenced by input-dependent operations, such as conditional branches in the hash rounds, potentially revealing partial message bits. Power analysis, including simple and differential variants, examines fluctuations in power consumption during hash processing to infer internal states.
Countermeasures emphasize constant-time implementations, where execution paths and resource usage remain uniform irrespective of inputs, thereby eliminating timing leaks and complicating power-based inferences. Libraries like BearSSL exemplify this by designing hash primitives, including those for SHA variants, to operate without branch dependencies on secrets.

Quantum computing threatens hash functions primarily through Grover's algorithm, which provides a quadratic speedup for unstructured search problems like preimage attacks, reducing the effective security from 2^n to 2^{n/2} operations for an n-bit hash. While Shor's algorithm targets integer factorization and discrete logarithms rather than hashing, quantum collision-finding algorithms like Brassard-Høyer-Tapp reduce collision complexity to approximately 2^{n/3} operations, compared to the classical birthday bound of 2^{n/2}. Mitigation involves increasing output lengths, such as adopting SHA-512 over SHA-256 or using SHA-3 variants with larger digests, to restore post-quantum security margins. NIST evaluations confirm that SHA-3, with its sponge construction, inherits similar quantum vulnerabilities but benefits from flexible output sizing for enhanced resilience.

Best practices for deploying secure hash algorithms include salting passwords to thwart precomputed attacks like rainbow tables, where unique per-user salts ensure distinct hashes even for identical passwords; salting is built into key derivation functions like PBKDF2 atop SHA-256 or SHA-512. For broader security, organizations should implement phased migrations away from compromised algorithms, prioritizing SHA-256 for immediate use and SHA-384 or SHA-512 for long-term robustness, with NIST mandating SHA-1 phase-out by December 31, 2030, in validated modules.
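The contrast between a naive keyed hash and HMAC can be sketched in a few lines with Python's hmac module; the key and message here are illustrative placeholders.

```python
import hashlib
import hmac

key = b"secret-key"
message = b"amount=100&to=alice"

# Naive keyed hash: vulnerable to length extension on Merkle-Damgard hashes,
# because H(key || message) exposes the full internal chaining state.
naive_tag = hashlib.sha256(key + message).hexdigest()

# HMAC folds the key into an inner and an outer hash, so the final state
# cannot be extended without knowledge of the key.
hmac_tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Constant-time comparison avoids timing side channels during verification.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(hmac_tag, expected)

print("naive:", naive_tag)
print("hmac: ", hmac_tag)
```

Using `hmac.compare_digest` for verification follows the constant-time principle described above, since a naive `==` on hex strings can leak match length through timing.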

Applications and Implementations

Common Uses

Secure Hash Algorithms (SHAs) are integral to numerous cryptographic protocols and systems, providing essential integrity and authenticity verification. In digital signatures, SHA-256 is commonly paired with RSA or ECDSA to generate and verify signatures within public key infrastructure (PKI) frameworks, ensuring that signed documents or messages remain unaltered and attributable to the signer. This combination is approved in standards like FIPS 186-5, where the message is hashed with SHA-256 (among other approved functions) before signature operations to produce a fixed-length digest suitable for asymmetric cryptography. In blockchain technologies, SHA-256 underpins proof-of-work consensus mechanisms, notably in Bitcoin, where miners compute double SHA-256 hashes of block headers to find a nonce that produces a hash meeting the network's difficulty target, thereby securing the ledger against tampering. This application leverages SHA-256's preimage resistance to make altering transaction history computationally infeasible without re-mining subsequent blocks. For file integrity verification, SHA hashes serve as checksums that allow users to confirm downloads have not been corrupted or maliciously altered; for instance, Linux distributions like Ubuntu provide official SHA-256 checksums for ISO images, which users compute and compare to validate authenticity before installation. This practice is widespread in software distribution to detect transmission errors or unauthorized modifications. In password storage, SHA-2 algorithms are incorporated into key derivation functions like PBKDF2, which applies HMAC-SHA-256 iteratively with a unique salt per password to slow brute-force attacks and prevent precomputation, as recommended in NIST SP 800-63B for memorized secrets. Similarly, bcrypt, while based on Blowfish rather than SHA, applies the same salted-hashing principle to resist offline attacks.
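As an illustration of salted password hashing, the sketch below uses Python's hashlib.pbkdf2_hmac with a random 16-byte salt; the 600,000-iteration count is an assumed work factor, which should be tuned to current guidance and available hardware.

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=600_000):
    """Derive a salted password hash with PBKDF2-HMAC-SHA-256.

    A fresh random salt per user ensures identical passwords map to
    distinct digests, defeating precomputed rainbow tables.
    """
    if salt is None:
        salt = os.urandom(16)  # unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

# Store (salt, digest, iterations); verification re-derives with the stored salt.
salt, stored = hash_password("correct horse battery staple")
_, candidate = hash_password("correct horse battery staple", salt=salt)
assert candidate == stored
```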
Within TLS/SSL protocols, SHA-256 is the standard choice for signing certificates, replacing SHA-1 due to the latter's vulnerabilities, as outlined in the CA/Browser Forum Baseline Requirements, which call for at least 2048-bit RSA keys with SHA-256 for publicly trusted certificates to ensure secure web communications. This shift, supported by IETF recommendations, bolsters the integrity of certificate chains in TLS connections.

Validation and Testing

Validation and testing of Secure Hash Algorithm (SHA) implementations ensure correctness and compliance with standards, preventing errors that could compromise security. The National Institute of Standards and Technology (NIST) provides official test vectors as input/output pairs for verifying SHA variants, including SHA-1, SHA-2, and SHA-3. These vectors cover short messages like "abc", the empty string, and longer predefined inputs to confirm proper padding, compression, and finalization. For instance, the SHA-256 digest of the ASCII string "abc" is ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad, while SHA3-256 yields 3a985da74fe225b2045c172d6bd390bd855f086e3e9d525b46bfe24511431532 for the same input. Developers compare outputs from their implementations against these expected results, ensuring adherence to FIPS 180-4 for SHA-1 and SHA-2 and FIPS 202 for SHA-3. Known-answer tests (KATs) form the core of the NIST Cryptographic Algorithm Validation Program (CAVP) for secure hashing, supplying fixed inputs with corresponding expected digests to validate individual algorithm components like message padding and block processing. KATs include single- and multi-block messages to test incremental updates and boundary conditions. Complementing KATs, Monte Carlo tests (MCTs) stress implementations by iteratively hashing a seed value (typically 100,000 times) and checking the resulting sequence against predefined outputs, detecting flaws in chaining or internal state handling. These tests, detailed in the Secure Hash Algorithm Validation System (SHAVS), are mandatory for CAVP certification and help identify non-conformant implementations early. Practical validation often employs standard tools for quick verification against NIST vectors. The OpenSSL dgst command computes SHA hashes from files or strings, allowing direct comparison; for example, echo -n "abc" | [openssl](/page/OpenSSL) dgst -sha256 outputs the expected digest for SHA-256. In Python, the hashlib module provides a simple interface: hashlib.sha256(b"abc").hexdigest() yields the correct value, facilitating automated tests in scripts.
Both tools implement NIST-approved algorithms and serve as references during development. For production use, especially in government systems, FIPS 140-2 and its successor FIPS 140-3 require certification of cryptographic modules incorporating SHA implementations through the Cryptographic Module Validation Program (CMVP). This involves CAVP testing of the SHA implementation followed by module-level validation covering areas such as key management, self-tests, and the operational environment. Hardware modules, such as those in smart cards or secure enclaves, undergo rigorous scrutiny to achieve Security Levels 1-4, ensuring tamper resistance and operation in an approved mode. As of November 2025, over 1,100 modules are actively validated, many supporting SHA-2 and SHA-3. Common pitfalls in SHA implementations include endianness mismatches and padding errors, which can alter digests unpredictably. The SHA standards mandate big-endian byte order for message blocks and constants, so little-endian platforms require explicit byte swaps during word expansion; overlooking this yields incorrect results. Padding follows a specific scheme (appending a '1' bit, then zeros, then the 64-bit message length, or 128-bit for SHA-384/SHA-512), and errors in bit positioning or length encoding, especially for messages near block boundaries, frequently cause failures on NIST vectors. Implementers should cross-verify with the reference code in RFC 6234 to avoid these issues.
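A minimal known-answer test harness in the spirit of SHAVS can be written with Python's hashlib; the vectors below are the "abc" digests quoted above plus the well-known empty-string SHA-256 value.

```python
import hashlib

# NIST-style known-answer tests: fixed inputs with expected digests,
# drawn from the FIPS 180-4 / FIPS 202 example vectors.
KAT_VECTORS = {
    ("sha256", b"abc"):
        "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
    ("sha256", b""):
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    ("sha3_256", b"abc"):
        "3a985da74fe225b2045c172d6bd390bd855f086e3e9d525b46bfe24511431532",
}

for (algorithm, message), expected in KAT_VECTORS.items():
    actual = hashlib.new(algorithm, message).hexdigest()
    assert actual == expected, f"{algorithm} failed KAT for {message!r}"
print("All known-answer tests passed.")
```

Running such a harness in continuous integration catches the endianness and padding mistakes described above before an implementation ships.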

References

  1. [1]
    FIPS 180-1, Secure Hash Standard | CSRC
    This standard specifies a Secure Hash Algorithm (SHA-1) which can be used to generate a condensed representation of a message called a message digest.
  2. [2]
    FIPS 180-2, Secure Hash Standard (SHS) | CSRC
    This standard specifies four secure hash algorithms, SHA-1, SHA-256, SHA-384, and SHA-512. All four of the algorithms are iterative, one-way hash functions.
  3. [3]
    NIST Retires SHA-1 Cryptographic Algorithm
    Dec 15, 2022 · It is a slightly modified version of SHA, the first hash function the federal government standardized for widespread use in 1993. As today's ...
  4. [4]
    [PDF] fips pub 202 - federal information processing standards publication
    The SHA-3 family consists of four cryptographic hash functions, called SHA3-224,. SHA3-256, SHA3-384, and SHA3-512, and two extendable-output functions (XOFs),.
  5. [5]
    Hash Functions | CSRC - NIST Computer Security Resource Center
    The SHA-2 family of hash functions (i.e., SHA-224, SHA-256, SHA-384 and SHA-512) may be used by Federal agencies for all applications using secure hash ...
  6. [6]
    Hash Functions | CSRC - NIST Computer Security Resource Center
    Jan 4, 2017 · A hash algorithm is used to map a message of arbitrary length to a fixed-length message digest. Approved hash algorithms for generating a condensed ...NIST Policy · SHA-3 Standardization · SHA-3 Project · News & Updates<|control11|><|separator|>
  7. [7]
    Secure Hash Algorithm - Glossary | CSRC
    A hash algorithm with the property that it is computationally infeasible 1) to find a message that corresponds to a given message digest, or 2) to find two ...
  8. [8]
    [PDF] FIPS 180-2, Secure Hash Standard (superseded Feb. 25, 2004)
    Aug 1, 2002 · Secure hash algorithms are typically used with other cryptographic algorithms, such as digital signature algorithms and keyed-hash message ...
  9. [9]
    NIST Special Publication 800-63B
    For PBKDF2, the cost factor is an iteration count: the more times the PBKDF2 function is iterated, the longer it takes to compute the password hash.
  10. [10]
    Cryptographic Hash Functions – Networks at ITP
    ... irreversible. The premise of encryption is exactly what it sounds like – to encode data into a different piece of information. Hashing, on the other hand ...
  11. [11]
    Cryptography | CSRC - NIST Computer Security Resource Center
    SHA-0 was intended to take the place of an earlier MD5 algorithm because of weaknesses discovered in MD5. SHA-0 was finalized in May 1993 in the original ...
  12. [12]
    [PDF] Hashing Techniques for Mobile Device Forensics
    Cryptographic hash functions provide forensic examiners with the ability to verify the integrity of acquired data. The resulting hash value, a fixed-size bit ...
  13. [13]
    Beyond Bitcoin: Emerging Applications for Blockchain Technology
    Feb 14, 2018 · An important component of blockchain technology is the use of cryptographic hash functions. Blockchain technologies take a list of transactions ...
  14. [14]
    Cryptographic hash function - Glossary | CSRC
    A function that maps a bit string of arbitrary length to a fixed length bit string and is expected to have the following three properties.Missing: encryption | Show results with:encryption
  15. [15]
  16. [16]
    Second preimage resistance - Glossary | CSRC
    ### Definition of Second Preimage Resistance
  17. [17]
    Collision resistance - Glossary | CSRC
    ### Definition of Collision Resistance for Hash Functions
  18. [18]
    [PDF] Recommendation for Applications Using Approved Hash Algorithms
    The security strength of a hash function is determined by its collision resistance strength, preimage resistance strength or second preimage resistance strength ...
  19. [19]
    [PDF] Random Oracles are Practical: A Paradigm for Designing Efficient ...
    Abstract. We argue that the random oracle model —where all parties have access to a public random oracle— provides a bridge between cryptographic theory and ...
  20. [20]
    [PDF] FIPS PUB 180 - NIST Technical Series Publications
    May 11, 1993 · SECURE HASH STANDARD. 1. INTRODUCTION. FIPS PUB 180. The Secure Hash Algorithm (SHA) is required for use with the Digital Signature Algorithm.
  21. [21]
    [PDF] FIPS PUB 180-1 - NIST Technical Series Publications
    Apr 17, 1995 · This standard specifies a Secure Hash Algorithm (SHA-1) which can be used to generate a condensed representation of a message called a message ...
  22. [22]
    [PDF] NIST Cryptographic Standards and Guidelines Development ...
    May 13, 2014 · NIST announced a weakness in the announcement for the draft FIPS 180-1, which proposed changes to the SHA specification. This modified algorithm ...Missing: undisclosed | Show results with:undisclosed
  23. [23]
    FIPS 186, Digital Signature Standard (DSS) | CSRC
    Date Published: May 19, 1994 (Change Notice 1, 12/30/1996). Supersedes: FIPS 186 (05/19/1994). Planning Note (12/30/1996): This change notice updates ...
  24. [24]
    Transitioning the Use of Cryptographic Algorithms and Key Lengths
    This Recommendation (SP 800-131A) provides more specific guidance for transitions to the use of stronger cryptographic keys and more robust algorithms.Missing: SHA | Show results with:SHA<|separator|>
  25. [25]
  26. [26]
    [PDF] NIST Cryptographic Standards & Their Adoptions in International ...
    NIST Crypto Standards Adoptions in. ISO/IEC JTC SC 27 (Hash Functions). SHA-1 and SHA-2 (in FIPS 180-4) families are adopted in. ISO/IEC 10118-3:2004.
  27. [27]
    ISO/IEC 10118-3:2018 - IT Security techniques — Hash-functions
    2–5 day deliveryThis document specifies dedicated hash-functions, i.e. specially designed hash-functions. The hash-functions in this document are based on the iterative use ...
  28. [28]
    One Way Hash Functions and DES
    The first definition was apparently given by. Merkle [1.2] who also gave a method of constructing one-way hash functions from random block ciphers. More recent ...
  29. [29]
    A Design Principle for Hash Functions - SpringerLink
    CRYPTO' 89 Proceedings (CRYPTO 1989). A Design Principle for Hash Functions. Download book PDF. Ivan Bjerre Damgård.
  30. [30]
    [PDF] Chapter 9 - Hash Functions and Data Integrity
    Hash functions map messages to fixed-length outputs, acting as a compact representative of the input, used for data integrity and message authentication.
  31. [31]
    [PDF] fips pub 180-4 - federal information processing standards publication
    Aug 4, 2015 · Secure hash algorithms are typically used with other cryptographic algorithms, such as digital signature algorithms and keyed-hash message ...<|control11|><|separator|>
  32. [32]
    One Way Hash Functions and DES - SpringerLink
    Jul 6, 2001 · One way hash functions are a major tool in cryptography. DES is the best known and most widely used encryption function in the commercial world today.
  33. [33]
    hash-function-transition Documentation - Git
    A SHA-256 repository can communicate with SHA-1 Git servers (push/fetch). Users can use SHA-1 and SHA-256 identifiers for objects interchangeably (see "Object ...
  34. [34]
    NIST Transitioning Away from SHA-1 for All Applications | CSRC
    Background. SHA-1 was first specified in 1995 in Federal Information Processing Standard (FIPS) 180-1, Secure Hash Standard (SHS). In ...Missing: history | Show results with:history
  35. [35]
    RFC 6234 - US Secure Hash Algorithms (SHA and SHA-based ...
    ... SHA-1, as part of a Federal Information Processing Standard (FIPS), namely SHA-224, SHA-256, SHA-384, and SHA-512. This document makes open source code ...
  36. [36]
    RFC 3874 - A 224-bit One-way Hash Function: SHA-224
    First, the SHA-256 hash value is computed, except that a different initial value is used. Second, the resulting 256-bit hash value is truncated to 224 bits.
  37. [37]
    NIST Selects Winner of Secure Hash Algorithm (SHA-3) Competition
    Oct 2, 2012 · The winning algorithm, Keccak (pronounced "catch-ack"), was created by Guido Bertoni, Joan Daemen and Gilles Van Assche of STMicroelectronics and Michaël ...
  38. [38]
    IR 7896, Third-Round Report of the SHA-3 Cryptographic Hash ...
    Nov 15, 2012 · NIST announced the winning algorithm of the SHA-3 competition – Keccak. This report summarizes the evaluation of the five finalists and the selection of the ...
  39. [39]
    [PDF] The K SHA-3 submission - Keccak Team
    Jan 14, 2011 · http://keccak.noekeon.org/. Version 3. January 14, 2011. 1STMicroelectronics. 2NXP Semiconductors. Page 2. The K SHA-3 submission. 1 Defining K.
  40. [40]
    [PDF] Recommendation for Key Management: Part 1 - General
    May 5, 2020 · The security strength of a hash function is determined by the properties required by the application in which it is used. See SP 800-107 for ...
  41. [41]
    NIST Transitioning Away from SHA-1 for All Applications
    Dec 15, 2022 · NIST will transition away from the use of SHA-1 for applying cryptographic protection to all applications by December 31, 2030.Missing: timeline | Show results with:timeline
  42. [42]
    [PDF] A Quantum World and How NIST is Preparing for Future Crypto
    ▻ Hash functions: ◦ SHA-1, SHA-2 and SHA-3 FIPS 180-4, Draft FIPS 202. Page ... ▻ Grover's Algorithm. ◦ Quadratic speed-up in searching database.
  43. [43]
    SHAttered
    Is Hardened SHA-1 vulnerable? No, SHA-1 hardened with counter-cryptanalysis (see 'how do I detect the attack') will detect cryptanalytic collision attacks. In ...
  44. [44]
    Announcing the first SHA1 collision - Google Online Security Blog
    Feb 23, 2017 · While those numbers seem very large, the SHA-1 shattered attack is still more than 100,000 times faster than a brute force attack which remains ...
  45. [45]
    [PDF] The first collision for full SHA-1 - Cryptology ePrint Archive
    5 calls to SHA-1 on GPU. Based on this attack, the authors projected that a collision attack on SHA-1 may cost between US$75 K and US$120 K by renting GPU ...
  46. [46]
    Research Results on SHA-1 Collisions | CSRC
    A team of researchers from the CWI Institute in Amsterdam and Google have successfully demonstrated an attack on the SHA-1 hash algorithm.Missing: birthday bound
  47. [47]
    New Records in Collision Attacks on SHA-2
    Feb 27, 2024 · We successfully find the first practical semi-free-start (SFS) colliding message pair for 39-step SHA-256, improving the best 38-step SFS collision attack.
  48. [48]
    [PDF] Side Channel Analysis of the SHA-3 Finalists - CSRC
    In this contribution we identify side channel vulnerabilities for JH-MAC,. Keccak-MAC, and Skein-MAC and demonstrate attacks on their respec tive reference ...
  49. [49]
    [PDF] Side-channel security analysis and protection of SHA-3
    ... SHA-3 are very vulnerable to side-channel power attacks. In Chapter 4, I present the methods to mitigate side-channel power leakages of SHA-3 by using the ...
  50. [50]
    Flame malware collision attack explained - Microsoft
    Jun 6, 2012 · They had to perform a collision attack to forge a certificate that would be valid for code signing on Windows Vista or more recent versions of ...
  51. [51]
    CWI cryptanalyst discovers new cryptographic attack variant in ...
    Jun 7, 2012 · Flame uses a completely new variant of a 'chosen prefix collision attack' to impersonate a legitimate security update from Microsoft.
  52. [52]
    [PDF] Practical Free-Start Collision Attacks on 76-step SHA-1 - Marc Stevens
    The full differential path for SHA-1 collision attacks are made of two parts. ... Our initial non-linear path was thus generated using the meet-in-the-middle ...
  53. [53]
    [PDF] Keying Merkle-Damgård at the Suffix
    Mar 7, 2025 · This observation can particularly be seen as argument to step away from HMAC on top of SHA-1/SHA-2 and use KMAC instead. That said, inspired ...
  54. [54]
    [PDF] Public Comments on FIPS 180-4, Secure Hash Standard (SHS)
    Sep 9, 2022 · On June 9, 2022, NIST's Crypto Publication Review Board announced a proposal to revise Federal. Information Processing Standards (FIPS) 180-4 ...
  55. [55]
    [PDF] Side-Channel Attacks: Ten Years After Its Publication and the ...
    Side Channels are defined to be unintended output channels from a system. Paul Kocher in 1996 published the seminal paper “Timing Attacks on Implementations of ...
  56. [56]
    Constant-Time Crypto - BearSSL
    Timing attacks are a subset of a more general class of attacks known as side-channel attacks. A computer system runs operations in a conceptual abstract ...
  57. [57]
    Quantum Implementation of MD5 - Cryptology ePrint Archive - IACR
    Sep 3, 2025 · Abstract. Quantum attacks such as Grover's algorithm reduce the security of classical hash functions such as MD5. In this paper, we present ...
  58. [58]
    Password Storage - OWASP Cheat Sheet Series
    Modern hashing algorithms such as Argon2id, bcrypt, and PBKDF2 automatically salt the passwords, so no additional steps are required when using them. · Peppering ...
  59. [59]
    [PDF] Digital Signature Standard (DSS) - NIST Technical Series Publications
    Feb 5, 2024 · FIPS 186-4 approves the use of implementations of either or both of these standards and specifies additional requirements. (3) The Elliptic ...
  60. [60]
    [PDF] FIPS 186-5 - NIST Technical Series Publications
    Feb 3, 2023 · For both the signature generation and verification processes of RSA, ECDSA, and HashEdDSA, the message (i.e., the signed data) is converted to a ...
  61. [61]
    [PDF] Bitcoin: A Peer-to-Peer Electronic Cash System
    The proof-of-work involves scanning for a value that when hashed, such as with SHA-256, the hash begins with a number of zero bits.
  62. [62]
    How to verify your Ubuntu download
    1. Overview · 2. Necessary software · 3. Download checksums and signatures · 4. Retrieve the correct signature key · 5. Verify the SHA256 checksum · 6. Check the ISO.
  63. [63]
    Certificate Contents for Baseline SSL | CA/Browser Forum
    CA Certificates. Recommended key strengths are at least 2048-bit RSA using SHA-256, SHA-384 or SHA-512 or Elliptic Curve using NIST P-256, P-384, or P-521.
  64. [64]
    RFC 5487 - Pre-Shared Key Cipher Suites for TLS with SHA-256 ...
    Due to recent analytic work on SHA-1 [Wang05], the IETF is gradually moving away from SHA-1 and towards stronger hash algorithms. Related TLS cipher suites ...
  65. [65]
    [PDF] Secure Hash Algorithm- Message Digest Length = 160
    Input Message: "abc". ============================================================== Initial hash value: H[0] = 67452301. H[1] = EFCDAB89. H[2] = 98BADCFE.
  66. [66]
    Secure Hashing - Cryptographic Algorithm Validation Program | CSRC
    Oct 5, 2016 · Secure Hash Standard Validation System (SHAVS) specifies validation testing requirements for the SHA-1 and SHA-2 family of functions in FIPS 180 ...
  67. [67]
    [PDF] Description of Known Answer Test (KAT) and Monte Carlo Test ...
    Feb 20, 2008 · These KAT and MCT tests are based on tests specified in The Secure Hash Algorithm Validation System. (SHAVS) [SHAVS], which describes tests ...
  68. [68]
    hashlib — Secure hashes and message digests — Python 3.14.0 ...
    This module implements a common interface to many different hash algorithms. Included are the FIPS secure hash algorithms SHA224, SHA256, SHA384, SHA512.Missing: test | Show results with:test
  69. [69]
    Search - Cryptographic Module Validation Program | CSRC
    Use this form to search for information on validated cryptographic modules. Select the basic search type to search modules on the active validation list.