Key stretching

Key stretching is a cryptographic technique designed to strengthen weak or low-entropy inputs, such as user-chosen passwords, by deliberately increasing the computational effort required to derive cryptographic keys from them, thereby deterring brute-force and dictionary attacks. This process typically involves iteratively applying a pseudorandom function (PRF), often based on a cryptographic hash function or block cipher, multiple times to the password combined with a salt value, which expands the short input into a longer, more secure key while making exhaustive searches economically infeasible for attackers. The core goal is to balance usability (quick authentication for legitimate users) with robust security against offline attacks, where an adversary has access to hashed passwords. One of the earliest and most widely standardized key stretching methods is PBKDF2 (Password-Based Key Derivation Function 2), defined in RFC 2898 as part of PKCS #5 v2.0, which uses a PRF such as HMAC-SHA-1 or HMAC-SHA-256 iterated a configurable number of times (e.g., at least 600,000 iterations for HMAC-SHA-256, as recommended by OWASP as of 2023) on the password and salt to produce a derived key of arbitrary length. This iteration count serves as the primary stretching mechanism, multiplying the attacker's workload by the number of rounds without requiring excessive memory. PBKDF2 remains prevalent in protocols like TLS and Wi-Fi security standards, though its CPU-bound nature makes it vulnerable to acceleration via parallel hardware like GPUs. To address the limitations of CPU-only stretching, later algorithms introduced memory-hardness to raise the cost of specialized hardware attacks. Bcrypt, introduced in 1999, adapts the Blowfish block cipher's key setup into an iterative process with a tunable cost factor (originally 6-8, with modern recommendations suggesting 10 or higher for adequate security), incorporating a 128-bit salt to prevent precomputation attacks like rainbow tables. Scrypt, proposed in 2009, extends this by requiring access to large amounts of memory (parameterized by N, often 2^14 or higher), using functions like SMix and ROMix to make parallelization and hardware optimization difficult, thus increasing the economic barrier for attackers. The current state of the art, Argon2, winner of the 2015 Password Hashing Competition, combines time-cost (iterations), memory-cost, and parallelism factors in a memory-hard design resistant to side-channel attacks, with variants like Argon2id recommended for hybrid resistance to GPU and ASIC threats. These methods are essential in modern cryptography for applications like password storage, key derivation in storage encryption (e.g., full-disk encryption), and password-authenticated protocols. NIST recommends using approved PBKDFs with salts of at least 128 bits and iteration counts tuned as high as tolerable, typically inducing a delay on the order of 100-500 milliseconds per attempt on target hardware. Current guidelines, such as OWASP's 2023 recommendations, emphasize using HMAC-SHA-256 or stronger for PBKDF2 and adjusting parameters to target a derivation time of at least 100 milliseconds on the target hardware. As hardware evolves, the tunable parameters in these algorithms allow deployments to adapt, ensuring long-term viability against advancing computational power.

Introduction

Definition

Key stretching is a cryptographic technique designed to derive a cryptographically strong key from a potentially weak input, such as a password or passphrase, by deliberately increasing the computational effort required for the derivation process. This method transforms low-entropy inputs into fixed-length outputs suitable for use in encryption or other security protocols, thereby enhancing resistance to unauthorized access attempts. Unlike simple hashing, which produces a fixed-size digest primarily for verification or fast lookups, key stretching emphasizes a controlled slowdown through repeated applications of hash functions or resource-intensive computations. This intentional delay raises the cost of exhaustive searches, making it impractical for adversaries to guess the original input efficiently. The core components of key stretching include the input key material, typically a user-provided secret; a salt, which is a random value unique to each derivation that thwarts precomputation attacks like rainbow tables; and the resulting output key of a specified length. The salt ensures that identical inputs produce different outputs, while the stretching process binds these elements to yield a robust key.

Purpose

Key stretching primarily aims to protect cryptographic systems against brute-force and dictionary attacks by deliberately increasing the computational expense required to test each potential guess. This technique transforms weak or low-entropy passwords, often chosen by users for memorability, into more secure equivalents that demand significant processing time and resources to crack, thereby deterring attackers who rely on exhaustive search methods. Even a modestly tuned iteration count, such as 1,000 repetitions of a pseudorandom function, imposes a negligible burden on legitimate users during authentication while multiplying the effort for adversaries by orders of magnitude, effectively slowing attack rates to impractical levels on commodity hardware. A key benefit of key stretching is its ability to enhance the effective entropy of passwords without imposing stricter requirements on user behavior, allowing systems to tolerate shorter or predictable passphrases while maintaining robust security. By integrating salts, random values unique to each derivation, it further mitigates precomputed attacks like rainbow tables, as the salt ensures that identical passwords yield distinct outputs, rendering offline lookup tables useless. This approach not only counters dictionary-based guesses but also addresses the vulnerability of unsalted hashes to massive parallelization on specialized hardware. In the broader context of cryptography, key stretching functions as a password-based key derivation function (PBKDF or KDF), generating uniform and unpredictable cryptographic keys from human-memorable inputs that would otherwise lack the randomness needed for secure encryption or authentication. These derived keys exhibit properties essential for downstream applications, such as resistance to cryptanalysis, while the process assumes a threat model in which the attacker has obtained access to stored password hashes but faces resource constraints relative to the system's design goals for legitimate access. This makes key stretching indispensable for securing sensitive data in scenarios like disk encryption or credential storage, where direct password use would expose systems to rapid compromise.

Techniques

General Process

Key stretching involves a structured process to transform a potentially weak input, such as a password, into a stronger cryptographic key by intentionally increasing the computational effort required. The process typically begins by combining the input with a randomly generated salt to create an initial value for processing. This salt, an arbitrary string of characters or bytes, is unique to each key derivation instance and serves to ensure that identical passwords produce different outputs, thereby thwarting offline precomputation attacks like rainbow tables or dictionary assaults. Salts are generated using an approved random bit generator and should be at least 128 bits in length to provide sufficient randomness; they must be stored alongside the derived key or hash for future verification or re-derivation. Next, a cryptographic function, such as a one-way hash function, is applied iteratively or in a chained manner to the salted input for a predefined number of rounds, often denoted as the iteration count. This repetition, controlled by a parameter typically set to at least 1,000 iterations (and higher for enhanced security), amplifies the time and resources needed for each derivation, making brute-force attempts computationally expensive without significantly affecting legitimate use cases. The iteration count is chosen based on the system's performance constraints and security requirements, balancing usability with protection against parallelized attacks. The final step produces a fixed-length output key of sufficient size (e.g., 128 bits or more) suitable for use in encryption, authentication, or other cryptographic operations. This derived key is the result of aggregating or truncating the output from the iterative applications, ensuring it meets the strength and length needs of the intended application. The process as a whole relies on the underlying primitive's resistance to reversal, with the salt enhancing overall security. A high-level pseudocode outline for the basic iterative process is as follows:
Input: password (P), salt (S), iterations (N), hash function (H)
Initialize: hash = H(P || S)  // Initial hash of salted password
For i = 1 to N-1:
    hash = H(hash || P || S)  // Chain iterations with input elements
Output: derived_key = hash  // Fixed-length key (may be truncated)
This representation captures the essence of iterative hashing without specifying implementation details.
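As an illustrative sketch only, the same loop can be written in Python with the standard library's SHA-256; the function name and parameters below are hypothetical, and real systems should use a vetted construction such as PBKDF2, scrypt, or Argon2 rather than this naive chain:
import hashlib
import os

def stretch_key(password: bytes, salt: bytes, iterations: int, dklen: int = 32) -> bytes:
    # Initial hash of the salted password, then chained iterations as in the pseudocode above.
    digest = hashlib.sha256(password + salt).digest()
    for _ in range(iterations - 1):
        digest = hashlib.sha256(digest + password + salt).digest()
    return digest[:dklen]  # truncate to the desired key length

salt = os.urandom(16)  # 128-bit random salt, stored alongside the derived key
key = stretch_key(b"correct horse battery staple", salt, iterations=100_000)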

Iterative Hashing

Iterative hashing in key stretching involves repeatedly applying a one-way cryptographic hash function or pseudorandom function (PRF) in a loop to increase the computational effort required to derive a key from a password or passphrase, thereby slowing down brute-force attacks without significantly impacting legitimate users. This approach relies on CPU-intensive iterations that amplify the work factor, typically using functions like SHA-256 or HMAC-based PRFs executed thousands or millions of times to enforce a deliberate delay. The core mechanism ensures that even fast hardware cannot easily accelerate the process beyond the intended iteration count, making it a foundational technique for password-based key derivation. One prominent method is PBKDF2 (Password-Based Key Derivation Function 2), standardized in RFC 2898, which uses a PRF (often HMAC with a hash like SHA-1 or SHA-256) in an iterative chain to produce a pseudorandom output of specified length. The function takes as inputs the password P, salt S, iteration count c (a security parameter controlling the number of iterations, recommended to be at least 1,000 for new applications), and desired key length dkLen. The derivation proceeds as follows: for block index i (starting from 1), compute the initial value U_1 = PRF(P, S || INT(i)), where INT(i) is a 4-byte big-endian encoding of i and || denotes concatenation. Subsequent values are chained: U_2 = PRF(P, U_1), U_3 = PRF(P, U_2), and so on up to U_c = PRF(P, U_{c-1}). The block T_i is the XOR of all U_k for k = 1 to c: T_i = U_1 \oplus U_2 \oplus \cdots \oplus U_c. The full derived key is the concatenation T_1 || T_2 || \cdots || T_l, where l = \lceil dkLen / hLen \rceil and hLen is the PRF output length in bytes, with the final block truncated as needed to reach dkLen. This XOR-chaining design ensures that each iteration depends on the previous one, preventing parallelization of the inner loop while allowing separate output blocks to be computed independently. Bcrypt, introduced in 1999, implements iterative hashing through an adaptive variant of the Blowfish cipher, in which the expensive key schedule is modified to perform exponential work based on a configurable cost parameter. The algorithm begins by expanding the password and salt into a Blowfish key setup, then iterates the cipher's subkey generation process 2^{cost} times (with cost typically ranging from 4 to 31, doubling the time per increment). This exponential scaling, achieved by repeatedly mixing the password and salt into the cipher's S-boxes and subkeys, makes bcrypt adaptable to hardware advancements, as increasing the cost parameter raises security without redesign. The output is a 192-bit ciphertext, of which 184 bits are encoded for storage together with the cost and salt in a modular format like $2a$10$salt.hash. In comparison, PBKDF2 offers standardization and flexibility in choosing the underlying PRF, making it suitable for protocols like TLS, but its small memory footprint and independently computable guesses allow efficient acceleration on GPUs or FPGAs, reducing its effectiveness against specialized hardware. Bcrypt, by contrast, is more adaptive to evolving threats through its cost parameter and provides partial resistance to GPU acceleration due to the sequential, memory-access-heavy Blowfish setup, though it remains CPU-bound and less versatile for non-password uses. Both methods emphasize high iteration counts to achieve delays of at least roughly 100 milliseconds per derivation on target hardware, balancing usability with security.
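In practice PBKDF2 is provided by most standard libraries rather than re-implemented; as a minimal sketch, Python's hashlib.pbkdf2_hmac can be called with HMAC-SHA-256 and an iteration count in line with the figures above (the password, salt size, and parameters here are illustrative):
import hashlib
import os

password = b"user-chosen passphrase"
salt = os.urandom(16)  # 128-bit random salt

# PBKDF2-HMAC-SHA256: the iteration count is the key-stretching work factor.
derived_key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
Raising the iteration count scales the attacker's per-guess cost linearly, which is why guidance revises it upward as hardware improves.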

Memory-Hard Functions

Memory-hard functions represent an advanced class of key stretching techniques that impose significant memory requirements on the derivation process, thereby increasing the resource demands for attackers using specialized hardware such as ASICs or GPUs. Unlike traditional iterative hashing methods, which primarily rely on computational cycles and can be efficiently parallelized on high-end hardware, memory-hard functions enforce sequential access patterns and high memory usage to hinder parallelization and reduce the economic advantage of custom attack hardware. This design forces attackers either to allocate substantial memory, which is costly and power-intensive, or to resort to less efficient time-memory trade-offs. One prominent memory-hard function is scrypt, introduced in 2009 by Colin Percival as a password-based key derivation function aimed at countering parallel hardware attacks. Scrypt employs a sequential memory-hard function (SMF) parameterized by N, which controls the memory cost and typically requires O(N) space. It wraps the core SMF, known as ROMix, within PBKDF2 for additional security; ROMix fills an array of N blocks by repeatedly applying a hash function H (instantiated in scrypt with a Salsa20/8-based BlockMix) and then reads that array back in a data-dependent order. The ROMix algorithm operates as follows:
Algorithm ROMix_H(B, N)
Parameters: H (hash function), k (output length in bits), Integerify (bijective function)
Input: B (k bits), N (integer work metric)
Output: B' (k bits)
1: X ← B
2: for i = 0 to N-1 do
3:     V[i] ← X
4:     X ← H(X)
5: end for
6: for i = 0 to N-1 do
7:     j ← Integerify(X) mod N
8:     X ← H(X ⊕ V[j])
9: end for
10: B' ← X
This structure ensures that the entire array V must be stored in memory, as the random accesses prevent effective compression or reuse without full allocation. Argon2, declared the winner of the 2015 Password Hashing Competition, builds on these principles with a more flexible and secure memory-hard design tailored for password hashing and other applications. It features three variants: Argon2d, with data-dependent addressing, which prioritizes resistance to GPU/ASIC attacks but is susceptible to side-channel vulnerabilities; Argon2i, with data-independent addressing, enhancing side-channel resistance at the cost of slightly reduced GPU resistance; and Argon2id, a hybrid that combines the first pass of Argon2i with subsequent data-dependent passes for balanced protection. Tunable parameters include time cost t (iterations), memory cost m (in KiB), and parallelism p (lanes/threads), allowing trade-offs between resources. The core mechanism involves filling a memory matrix of p lanes by q columns of 1024-byte blocks using a compression function G, where indexing depends on the variant to control memory-access patterns and resist timing attacks. The block filling for a lane is:
For each lane i (0 ≤ i < p):
    B[i][0] = H′(H0 || 0 || i)
    For j from 1 to q-1:
        If Argon2d:
            refBlock = map(first 64 bits of B[i][j-1], next 64 bits of B[i][j-1])
        Else (Argon2i or Argon2id first pass):
            Run G2 with counter to get pseudo-random J1, J2
            refBlock = map(J1, J2)
        B[i][j] = G(B[i][j-1], refBlock)
Here, G(X, Y) = P(X ⊕ Y) ⊕ (X ⊕ Y), with P a Blake2b-based permutation; the reference-block indices are either computed independently of the password-derived data (Argon2i) or taken from previously computed blocks (Argon2d), mitigating different threats. These memory-hard functions provide superior resistance to custom hardware attacks compared to purely time-based methods, as the memory requirement scales poorly onto parallel architectures. Argon2, in particular, has emerged as the current standard for password hashing, recommended by OWASP in its Password Storage Cheat Sheet for its robust protection against both side-channel and parallel attacks, and endorsed by the German Federal Office for Information Security (BSI) in its 2025 Technical Guideline TR-02102-1 as a preferred memory-hard function for secure key derivation.
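As a concrete illustration of how memory-cost parameters are exposed to applications, Python's standard library includes scrypt; the values below (N = 2^15, r = 8, p = 1, roughly 32 MiB of working memory) are illustrative, not a recommendation:
import hashlib
import os

password = b"user-chosen passphrase"
salt = os.urandom(16)

# scrypt memory use is roughly 128 * r * N bytes (~32 MiB here), forcing each
# guess to allocate that much memory and limiting cheap parallelization.
key = hashlib.scrypt(password, salt=salt, n=2**15, r=8, p=1,
                     maxmem=64 * 1024 * 1024, dklen=32)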

Security Aspects

Computational Strength

The computational strength of key stretching is assessed through metrics that quantify the resources required for each derivation, and thus the cost of brute-force attacks. Key metrics include the iteration count, which measures CPU cost by repeating the underlying function a specified number of times; memory size, which enforces memory usage to deter parallel hardware attacks; and parallelism lanes, which limit the number of concurrent threads to reduce scalability on multi-core systems. These parameters are tuned to achieve a derivation time of approximately 100 milliseconds to 1 second on target user hardware, balancing usability against security while ensuring attackers face significantly higher costs. The effective strength added by key stretching can be expressed as the additional bits of security provided by the work factor, approximated by \log_2 of the number of operations per attempt. For instance, a derivation using 10^6 operations on a 1 GHz CPU (roughly one primitive operation per clock cycle, so about a millisecond per attempt) adds approximately 20 bits of effective strength, since \log_2(10^6) \approx 20, multiplying the attack space by 2^{20}. This quantifies how key stretching augments a weak password's limited entropy into a more robust defense against exhaustive search. Attack models distinguish between online attacks, limited by server rate-limiting (e.g., a few attempts per second), and offline attacks, where stolen hashes enable unlimited trials but at a per-guess cost determined by the stretching parameters. Parallelization exacerbates offline threats; for example, GPUs can raise guess throughput by up to roughly 1,000 times compared to CPUs for parallelizable functions, though memory-hard designs mitigate this by requiring large amounts of memory per guess, limiting effective parallelism. Recommendations from standards bodies emphasize minimum thresholds to ensure adequate strength. NIST SP 800-132 specifies a minimum of 1,000 iterations for PBKDF2, scalable to higher values based on hardware, while advising iteration counts large enough to impose delays approaching a second for high-security applications. As of 2025, OWASP recommends at least 600,000 iterations for PBKDF2-HMAC-SHA256 and 210,000 for PBKDF2-HMAC-SHA512, tuned to induce delays of 100-500 milliseconds on target hardware. For memory-hard functions like Argon2, RFC 9106 (2021) provides parameter options including up to 2 GiB of memory (e.g., m = 2^21 KiB) with parallelism p of 1-4 lanes to maintain sequential bottlenecks, while practical guidelines such as OWASP's suggest roughly 19-46 MiB of memory, 1-2 iterations, and p = 1 for balanced usability in contemporary implementations. The time T required for a brute-force attack is given by the equation T = \frac{2^{e} \times W}{S}, where e is the password entropy in bits, W is the work factor (e.g., iteration count or equivalent operations), and S is the attacker's attempt speed in operations per second; this formula highlights how increasing W proportionally extends T, providing a quantifiable measure of resistance to exhaustive search.
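A short worked example of this formula, using illustrative figures (a 40-bit-entropy password, a 600,000-operation work factor, and an attacker capable of 10^10 primitive operations per second):
from math import log2

e = 40            # password entropy in bits
W = 600_000       # work factor: operations per guess
S = 1e10          # attacker speed: primitive operations per second

T = (2 ** e) * W / S               # seconds to exhaust the space
print(T / (365 * 24 * 3600))       # ~2.1 years
print(log2(W))                     # ~19.2 extra bits of effective strength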

Performance Trade-offs

Key stretching enhances security by increasing computational demands, but this creates trade-offs in usability and system performance. For legitimate users, the added delay during authentication can affect login times; guidelines recommend configuring parameters so that hashing takes less than one second on production hardware to avoid noticeable slowdowns, with targets often set at 250-500 milliseconds to balance protection against offline attacks with a responsive user experience. On servers handling high volumes of concurrent logins, such as during peak hours, elevated work factors amplify CPU and memory usage, potentially leading to resource contention and requiring capacity planning to ensure throughput without compromising service availability. Hardware environments further influence these trade-offs, as mobile devices typically possess less processing power and battery life than servers, necessitating lower iteration counts or memory parameters in client-side implementations to prevent excessive battery drain or sluggish performance. Servers, benefiting from more robust resources, can employ higher parameters for stronger protection. To adapt to ongoing hardware improvements following Moore's law, systems often use dynamic tuning, such as doubling iteration counts annually or biennially during parameter updates, ensuring that security margins keep pace with advancing computational capabilities. Despite these benefits, key stretching introduces limitations, including heightened susceptibility to denial-of-service (DoS) attacks, where adversaries flood the system with authentication requests to exhaust resources through repeated expensive computations. Memory-hard functions like scrypt aim to hinder parallelization on application-specific integrated circuits (ASICs) by demanding substantial on-chip memory, which raises hardware costs and hinders efficient scaling, though subsequent ASIC developments have only partially overcome this resistance. Common mitigation strategies include implementing rate limiting to cap authentication attempts per user or IP address, thereby curbing floods while preserving access for valid sessions. Additional approaches involve offloading work, such as performing legitimate verifications on dedicated hardware, and hybrid models that distribute stretching between client and server to reduce backend load. In the 2025 landscape, quantum computing poses minimal direct threat to key stretching's classical computational focus, but integrations with post-quantum key derivation functions are emerging in protocols to combine brute-force resistance with quantum-safe key exchanges.
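One way to implement such dynamic tuning is to calibrate the work factor against a target latency on the deployment hardware; the sketch below (the target delay and starting count are illustrative assumptions) doubles the PBKDF2 iteration count until a single derivation takes at least 250 milliseconds:
import hashlib
import os
import time

def calibrate_iterations(target_seconds: float = 0.25, start: int = 100_000) -> int:
    # Double the iteration count until one derivation on this machine meets the target delay.
    password, salt = b"calibration-only", os.urandom(16)
    iterations = start
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
        if time.perf_counter() - t0 >= target_seconds:
            return iterations
        iterations *= 2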

Historical Development

Early Methods

The origins of key stretching trace back to the mid-1970s, when the Unix operating system introduced the crypt function as a means to securely store passwords. Developed by Robert Morris at Bell Labs, this function used a modified version of the Data Encryption Standard (DES) algorithm, encrypting a 64-bit constant 25 times under a key derived from the password, with a 12-bit salt perturbing the cipher, to produce a 64-bit hash output. This iteration was specifically designed to amplify the computational effort required for offline brute-force attacks on stolen password files, as single DES encryptions were deemed too fast even on contemporary hardware of the era. During the 1970s and 1980s, the primary motivation for such techniques stemmed from the rapid increase in computational capabilities, which heightened the feasibility of brute-force and dictionary attacks against unsalted or weakly protected passwords. Earlier systems had stored passwords in plaintext or with minimal protection, but as mainframe and minicomputer performance improved, cryptographers recognized the need to impose deliberate delays in verification processes to deter attackers from rapidly testing large numbers of candidate passwords offline. This approach balanced usability (ensuring login times remained acceptable) against security, marking a foundational shift toward computationally expensive one-way functions in password handling. By the early 1990s, these ad hoc methods evolved into formalized standards, with the release of PKCS #5 version 1.0 in 1991 by RSA Data Security. This specification introduced PBKDF1 (Password-Based Key Derivation Function 1), which generalized the iterative approach by applying a hash function (MD2 or MD5) multiple times, with a configurable but fixed iteration count, to combine a password and salt into a derived key of at most the hash output length. PBKDF1 aimed to provide a standardized, interoperable method for password-based encryption, building on the Unix model's principles while supporting emerging public-key infrastructures. Despite their innovations, early key stretching methods suffered from significant limitations that diminished their long-term effectiveness. Fixed iteration counts, such as the hardcoded 25 in Unix crypt or the often low counts deployed with PBKDF1, were non-adaptive, preventing adjustments in response to Moore's Law-driven hardware advancements that eventually rendered brute-force attacks feasible. Moreover, reliance on underlying primitives like DES, with its 56-bit effective key length, exposed systems to exhaustive key searches, as demonstrated by practical brute-force breaks in the late 1990s using specialized hardware. Similarly, the MD2 and MD5 hash functions used in PBKDF1 later revealed collision vulnerabilities, undermining confidence in the primitives on which secure derivation relied.

Modern Evolutions

The standardization of PBKDF2 in RFC 2898 (published in 2000) marked a significant advancement in key stretching, specifying its use with a pseudorandom function such as HMAC-SHA-1 to iteratively derive keys from passwords, thereby enhancing resistance to brute-force attacks. This function achieved widespread adoption in cryptographic protocols and standards due to its flexibility and integration with existing hash functions. In 1999, bcrypt was introduced by Niels Provos and David Mazières as a password-hashing function based on the Blowfish cipher, featuring an adaptive cost factor that allows the computational work to be tuned over time to counter improving hardware. This adaptability addressed the limitations of fixed-iteration methods by enabling future-proofing against faster processors. The development of scrypt in 2009 by Colin Percival introduced a memory-hard approach to key stretching, designed to thwart parallelization on GPUs and ASICs by requiring substantial sequential memory access. Building on this, Argon2 emerged in 2015 from the Password Hashing Competition (PHC), launched in 2013 to identify robust defenses against hardware-accelerated attacks; Argon2, designed by Alex Biryukov, Daniel Dinu, and Dmitry Khovratovich, was selected as the winner for its balanced resistance to both GPU and side-channel threats. These functions responded directly to the proliferation of specialized hardware for cryptanalysis, prioritizing memory and time costs over pure computation. Regulatory endorsements further solidified these evolutions: the 2017 edition of NIST SP 800-63B recommended PBKDF2 for secure password storage, with later revisions (SP 800-63B-4, as of 2025) also endorsing memory-hard password-hashing functions and emphasizing parameters that impose at least 0.1 seconds of computation on verifier hardware. As of 2025, in version 3.0 of its Technical Guideline TR-02102-1 (updated January 2025), the German Federal Office for Information Security (BSI) recommends Argon2id for password-based key derivation, particularly when cryptographic hardware is unavailable. Recent trends in key stretching emphasize side-channel resistance through variants like Argon2i and Argon2id, which mitigate timing and cache attacks, alongside auxiliary integration with quantum-resistant primitives for hybrid schemes. No major new key-stretching functions have emerged since Argon2, with the focus shifting to parameter tuning, such as optimizing memory and iteration counts for 2025-era hardware like multi-core CPUs and secure enclaves, to maintain practical security margins without excessive latency. This timeline of innovations, from bcrypt in 1999 and scrypt in 2009 to the PHC's culmination in Argon2 in 2015, reflects a progression toward hardware-aware, adaptive defenses in key stretching, with current guidance focused on tuning parameters against contemporary hardware threats.

Applications

Password Storage

In password storage, key stretching is applied by combining the user's password with a unique salt and processing it through a slow, iterative key derivation function to produce a derived key, which is then stored alongside the salt in the database. During authentication, the submitted password is combined with the stored salt, re-derived using the same stretching parameters, and compared to the stored value for verification. Modern systems prioritize memory-hard key stretching algorithms such as Argon2id or scrypt for new password implementations, as these provide resistance to both computational and hardware-accelerated attacks. Legacy systems using unsalted or lightly stretched hashes such as MD5 or SHA-1 should migrate by re-hashing passwords upon user login, layering a modern stretch over the old hash without requiring immediate password resets. For example, PHP's password_hash() function defaults to bcrypt with automatic salting and a configurable cost factor, producing a crypt-compatible string that encodes the algorithm identifier, cost, salt, and hash. Similarly, PostgreSQL's pgcrypto extension supports bcrypt via the crypt() function with gen_salt('bf') for per-user salting and an adjustable work factor up to 31 (corresponding to 2^31 iterations of the key setup), though policies such as parameter upgrades must be handled at the application layer. Key stretching significantly raises the cost of attacking weak passwords such as "password123" by forcing an expensive computation for every guess; for instance, bcrypt with a cost factor of 12 can require days to exhaust an 8-character lowercase password space even on high-end GPU clusters. Best practices include generating unique, random salts per user (handled automatically by the recommended algorithms), tuning parameters to keep legitimate verifications under one second while scaling with hardware advances, and following OWASP guidelines to increase work factors progressively, such as using at least 19 MiB of memory and 2 iterations for Argon2id in 2025 deployments.
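A minimal storage-and-verification sketch, assuming Python's standard-library scrypt as the stretch and a constant-time comparison; the parameters and record layout are illustrative, and production systems would more commonly rely on a maintained password-hashing library for Argon2id or bcrypt:
import hashlib
import hmac
import os

PARAMS = dict(n=2**15, r=8, p=1, maxmem=64 * 1024 * 1024, dklen=32)

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Store both the random salt and the derived key in the user record.
    salt = os.urandom(16)
    return salt, hashlib.scrypt(password.encode(), salt=salt, **PARAMS)

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    # Re-derive with the stored salt and parameters, then compare in constant time.
    candidate = hashlib.scrypt(password.encode(), salt=salt, **PARAMS)
    return hmac.compare_digest(candidate, stored_key)

salt, stored = hash_password("password123")
assert verify_password("password123", salt, stored)
assert not verify_password("password124", salt, stored)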

Key Derivation in Protocols

Key stretching plays a crucial role in authentication protocols by deriving secure keys from potentially weak user passwords, enhancing resistance to offline attacks during authentication and key establishment. In the Secure Remote Password (SRP) protocol, which enables authentication without transmitting the password over the network, a salted hash of the password (typically computed with SHA-1) is used to compute the SRP verifier and derive the shared session key. Implementations may additionally incorporate key stretching with functions like PBKDF2 for added protection. This integration ensures that even if an attacker intercepts protocol messages, brute-forcing the password remains computationally expensive. SRP, standardized for use in TLS (RFC 5054), thus leverages salted hashing to protect against dictionary and brute-force attacks in remote access scenarios. In wireless security protocols, key stretching is embedded to generate pre-shared keys (PSKs) from passphrases. For WPA2 personal mode, the IEEE 802.11i standard specifies PBKDF2 with HMAC-SHA1, employing 4,096 iterations and the network's SSID as the salt to derive a 256-bit PSK from the passphrase. WPA3 personal mode uses Simultaneous Authentication of Equals (SAE) instead, offering enhanced protection against offline attacks without relying on PBKDF2-style stretching. The WPA2 PSK serves as the pairwise master key (PMK) in the 4-way handshake, from which temporal keys are further derived for encrypting traffic. The fixed iteration count balances security against brute-force attacks with acceptable device performance, making it a foundational element in securing Wi-Fi networks against passphrase cracking. Beyond key exchange and wireless security, key stretching appears in various systems for deriving encryption keys from passphrases. OpenSSH employs a bcrypt-based key derivation function in its native private key format, the default since version 7.8, to protect stored private keys against passphrase attacks. Similarly, in TLS deployments, client certificates' private keys are often encrypted using the PKCS #8 format with PBKDF2 to derive the decryption key from a passphrase, ensuring secure handling during mutual authentication. For disk encryption, LUKS2 in Linux uses Argon2id by default as its PBKDF, applying memory-hard key stretching to convert user passphrases into master keys that unlock encrypted volumes. These implementations highlight key stretching's versatility in protecting keys at rest and in use across diverse environments. A further advantage of incorporating stretching in protocols is the use of dynamic salts derived from nonces or timestamps, which prevents reuse across sessions and thwarts precomputation attacks like rainbow tables. By tying the salt to protocol-specific ephemeral values, such as nonces exchanged during session establishment, the derived keys remain unique per interaction, enhancing freshness and resistance to replay attacks. This approach, recommended in standards for key derivation, ensures that compromised session data does not expose long-term credentials. As of 2025, key derivation related to key stretching is evolving alongside post-quantum cryptography through hybrid mechanisms in IETF drafts, combining quantum-resistant key encapsulation (e.g., ML-KEM) with traditional key exchange in protocols like HPKE and SSH. These hybrids use derived keys to mitigate harvest-now-decrypt-later threats from quantum adversaries. Such updates in ongoing drafts aim to future-proof protocol key derivation against emerging computational capabilities.
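The WPA2 derivation described above maps directly onto PBKDF2 and can be reproduced with standard-library calls; the SSID and passphrase below are made-up examples, while the algorithm, iteration count, and output length follow the 802.11i parameters:
import hashlib

ssid = b"ExampleNetwork"          # the SSID acts as the salt
passphrase = b"correct horse"     # 8-63 printable ASCII characters

# PBKDF2-HMAC-SHA1, 4096 iterations, 256-bit output: the WPA2 pre-shared key (PMK).
psk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, dklen=32)
print(psk.hex())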

    Jan 29, 2025 · This document defines Post-Quantum Traditional (PQ/T) Hybrid key exchange methods based on traditional ECDH key exchange and post-quantum ...