Lattice-based cryptography

Lattice-based cryptography is a leading paradigm in post-quantum cryptography, relying on the computational difficulty of lattice problems—such as the Shortest Vector Problem (SVP) and the Learning With Errors (LWE) problem—in high-dimensional Euclidean spaces to build cryptographic primitives that resist attacks from both classical and quantum computers. These problems involve finding short vectors in discrete lattices (additive subgroups of \mathbb{R}^n generated by integer combinations of basis vectors) or distinguishing noisy linear equations modulo a prime, assumptions that lack efficient quantum algorithms despite decades of study.

The origins of lattice-based cryptography trace back to the mid-1990s, when Miklós Ajtai introduced the Short Integer Solution (SIS) problem and demonstrated the first cryptographic constructions with provable security based on the worst-case hardness of problems like approximate SVP. Independent practical proposals, such as the NTRU cryptosystem in 1996 and the GGH scheme in 1997, highlighted lattices' potential for efficient encryption and signatures, though early schemes like GGH were later cryptanalyzed. A pivotal advancement came in 2005 with Oded Regev's formulation of the LWE problem, which provided a versatile average-case hardness assumption reducible from worst-case lattice problems and enabled the first efficient public-key encryption scheme with strong security guarantees. Subsequent innovations enhanced efficiency and applicability, including the introduction of Ring-LWE in 2010, which leverages structured polynomial rings for compact keys and faster operations, and trapdoor techniques for signatures and identity-based encryption.

Lattice-based methods support diverse primitives beyond basic encryption and signatures, such as key exchange protocols, hash functions, and advanced applications like fully homomorphic encryption (e.g., Gentry's 2009 scheme) that allow computations on encrypted data. Their versatility stems from powerful tools like worst-case to average-case reductions and the ability to base security on quantum-hard problems without relying on number-theoretic assumptions like integer factorization, which fall to quantum algorithms. In recognition of their robustness and efficiency, lattice-based schemes have been prioritized in global standardization efforts; in August 2024, NIST finalized ML-KEM (based on CRYSTALS-Kyber) for key encapsulation and ML-DSA (based on CRYSTALS-Dilithium) for digital signatures as part of its post-quantum cryptography standards, marking a shift toward quantum-resistant public-key infrastructure. These developments underscore lattice-based cryptography's role in addressing the "harvest now, decrypt later" threat from quantum adversaries, with ongoing research focusing on side-channel resistance and further optimizations.

Historical Development

Early Pioneering Work

The foundations of lattice-based cryptography trace back to early work on lattice reduction techniques, notably the 1983 paper by Jeffrey Lagarias and Andrew Odlyzko, which demonstrated how lattice reduction could solve low-density subset sum problems, a precursor to cryptographic applications by linking lattice problems to hard computational tasks. This laid groundwork for exploiting lattice hardness in cryptography, though initial uses were more aligned with cryptanalysis than construction.

A major breakthrough came in 1996 with Miklós Ajtai's seminal work, which constructed one-way functions whose average-case security rests on the worst-case hardness of lattice problems such as the shortest vector problem (SVP), establishing the first provably secure lattice-based primitive under worst-case assumptions. Ajtai showed how to generate hard random lattice instances together with a known short vector, yielding one-way functions that cannot be inverted unless approximate SVP can be solved in the worst case, connecting average-case and worst-case complexity for the first time in cryptography. In 1997, Ajtai and Cynthia Dwork extended this to the first public-key encryption scheme with provable security based on the worst-case hardness of lattice problems. Independently in 1996, Jeffrey Hoffstein, Jill Pipher, and Joseph H. Silverman proposed the NTRU cryptosystem, a practical lattice-based public-key scheme using polynomial rings, which demonstrated efficiency despite lacking initial provable security reductions. Building on this line of work, Daniele Micciancio contributed in 2002 to lattice-based collision-resistant hash functions, improving Ajtai's framework by providing constructions with worst-case/average-case equivalence based on lattice problems such as approximating the covering radius.

In 1997, Oded Goldreich, Shafi Goldwasser, and Shai Halevi introduced the Goldreich-Goldwasser-Halevi (GGH) public-key encryption scheme, one of the first lattice-based encryption systems, relying on the hardness of the closest vector problem (CVP). The GGH construction uses a trapdoor in which the public key is a "bad" basis, and encryption perturbs a lattice point encoding the message by a short error vector; decryption employs Babai's nearest plane algorithm with the private short basis to round the noisy ciphertext back to the nearest lattice point and decode the message efficiently. This scheme highlighted lattices' potential for public-key primitives despite practical challenges like large key sizes.

Finally, in 2005, Oded Regev formulated the Learning With Errors (LWE) problem, providing a versatile average-case assumption based on worst-case hardness, and applied it to an efficient public-key encryption scheme. In LWE, given a modulus q, dimension n, and error distribution \chi, the search version asks to find a secret vector \mathbf{s} \in \mathbb{Z}_q^n from samples (\mathbf{a}_i, b_i = \langle \mathbf{a}_i, \mathbf{s} \rangle + e_i \mod q), where \mathbf{a}_i is uniform and e_i \sim \chi is small noise. Regev's encryption uses this structure: the public key is a collection of LWE samples, encryption of a bit adds a random subset-sum of those samples to an encoding of the message, and decryption uses \mathbf{s} to strip the linear part and round away the small accumulated error. This work marked a shift toward more practical and generalizable primitives.

Emergence in Post-Quantum Context

Lattice-based cryptography gained prominence in the post-quantum era following Oded Regev's introduction of the Learning With Errors (LWE) problem in 2005, which established a quantum-resistant alternative to classical hard problems like integer factorization and the discrete logarithm underlying RSA and Diffie-Hellman schemes. Regev demonstrated a reduction from worst-case lattice problems, such as the approximate shortest vector problem, to the average-case LWE problem, proving that solving LWE is at least as hard as these lattice problems even for quantum adversaries. This connection positioned LWE as a foundational primitive for constructing efficient public-key cryptosystems secure against both classical and quantum attacks, spurring research into practical implementations.

A significant advancement came in 2010 with the development of Ring-LWE (RLWE) by Vadim Lyubashevsky, Chris Peikert, and Oded Regev, which adapted LWE to structured polynomial rings for faster and more efficient cryptographic schemes. RLWE leverages the algebraic structure of ideal lattices to reduce computational and storage costs while preserving worst-case hardness guarantees against quantum algorithms. This variant enabled the design of cryptosystems with smaller key sizes and quicker operations, making lattice-based methods viable for real-world deployment.

The 2010s saw growing community momentum through dedicated workshops and standardization efforts, including the PQCrypto conference series, which began in 2006 and fostered collaboration on quantum-resistant primitives. A pivotal event was the National Institute of Standards and Technology (NIST) call for post-quantum cryptographic submissions in 2016, which highlighted the urgency of transitioning from vulnerable classical systems. Efficiency milestones further propelled adoption, such as the 2012 work by Daniele Micciancio and Chris Peikert on simpler and tighter lattice trapdoors, which significantly reduced key sizes and enhanced performance in lattice-based constructions. Complementing this, the introduction of Module-LWE interpolated between LWE and RLWE by working over modules of small rank over polynomial rings, offering better scalability and flexible security-performance trade-offs while maintaining security reductions. By 2020, lattice-based schemes had emerged as the dominant paradigm, comprising five of the seven finalists in NIST's third round of evaluation, underscoring their efficiency, versatility, and robustness in the post-quantum landscape.

Mathematical Foundations

Lattice Definitions and Structures

A lattice in \mathbb{R}^n is defined as a discrete additive subgroup of \mathbb{R}^n consisting of all integer linear combinations of a set of n linearly independent basis vectors B = [\mathbf{b}_1, \dots, \mathbf{b}_n] \in \mathbb{R}^{n \times n}, formally L(B) = \{ B \mathbf{x} \mid \mathbf{x} \in \mathbb{Z}^n \}. Full-rank lattices are those that span the entire space \mathbb{R}^n, meaning the basis vectors form a complete basis for the space, whereas sublattices are subgroups of a full-rank lattice that may not span the full dimension. The dual lattice L^* of a lattice L \subseteq \mathbb{R}^n is the set of all \mathbf{y} \in \operatorname{span}(L) such that the standard inner product \langle \mathbf{y}, \mathbf{x} \rangle \in \mathbb{Z} for every \mathbf{x} \in L. This structure plays a crucial role in many lattice-based constructions and security proofs, with the property that the dual of the dual recovers the original lattice, i.e., (L^*)^* = L.

Key geometric properties of a lattice include the determinant \det(L), which equals the absolute value of the determinant of the basis matrix, |\det(B)|, and represents the volume of the fundamental parallelepiped spanned by the basis vectors. Successive minima \lambda_i(L) are defined as the smallest radii such that there exist i linearly independent vectors in L of length at most \lambda_i(L). Minkowski's first theorem guarantees that \lambda_1(L) \leq \sqrt{n} (\det L)^{1/n}, providing a bound on the shortest nonzero vector in terms of the determinant.

The integer lattice \mathbb{Z}^n serves as a fundamental example, generated by the standard basis vectors \mathbf{e}_i = (0, \dots, 1, \dots, 0) for i = 1 to n, with \det(\mathbb{Z}^n) = 1 and \lambda_1(\mathbb{Z}^n) = 1. In two dimensions, the hexagonal lattice, generated by basis vectors (1, 0) and \left(\frac{1}{2}, \frac{\sqrt{3}}{2}\right), exemplifies a denser packing structure compared to \mathbb{Z}^2, achieving the optimal sphere packing in \mathbb{R}^2 with determinant \sqrt{3}/2.

For enhanced efficiency in cryptographic applications, ideal lattices are employed, which are lattices corresponding to ideals in the ring of integers of a number field, such as cyclotomic fields. A common instantiation uses the quotient ring \mathbb{Z}_q[x]/(f(x)), where q is a prime modulus and f(x) is an irreducible polynomial, often f(x) = x^n + 1 for power-of-two n, as in cyclotomic settings based on the m-th cyclotomic ring \mathbb{Z}[\zeta_m] with \zeta_m a primitive m-th root of unity.
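
The following minimal numerical sketch (an illustration added here, not part of any standard library) computes the determinant, Minkowski bound, and shortest nonzero vector for the two-dimensional hexagonal lattice described above; the brute-force search over small coefficient vectors is only feasible because the example is tiny.

```python
# Minimal numerical sketch: determinant, Minkowski bound, and lambda_1
# for the 2-D hexagonal lattice (rows of B are the basis vectors).
import itertools
import numpy as np

B = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3) / 2]])

det_L = abs(np.linalg.det(B))                      # volume of the fundamental parallelepiped
n = B.shape[0]
minkowski_bound = np.sqrt(n) * det_L ** (1.0 / n)  # lambda_1 <= sqrt(n) * det(L)^(1/n)

# Brute-force lambda_1 over small integer coefficient vectors (toy search).
shortest = min(
    np.linalg.norm(np.array(x) @ B)
    for x in itertools.product(range(-3, 4), repeat=n)
    if any(x)
)
print(f"det(L) = {det_L:.4f}, Minkowski bound = {minkowski_bound:.4f}, lambda_1 = {shortest:.4f}")
```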

Core Hardness Problems

Lattice-based cryptography derives its security from the computational hardness of several fundamental problems on lattices, which are believed to resist both classical and quantum attacks. These problems include the Shortest Vector Problem (SVP), the Closest Vector Problem (CVP), and the Learning With Errors (LWE) problem, along with its variant over rings, Ring-LWE (RLWE). The hardness of these problems underpins the provable security of lattice-based schemes, with reductions establishing connections between worst-case lattice problems and average-case instances used in cryptography.

The Shortest Vector Problem (SVP) asks to find the shortest non-zero vector in a given lattice L. It has several variants: the search version requires outputting the vector, while the decision version determines whether the shortest vector is shorter than a given bound. SVP is typically studied in its approximation form, \gamma-SVP, where one seeks a non-zero vector no longer than \gamma times the length of the shortest one, for an approximation factor \gamma \geq 1. Approximating SVP to within constant factors below \sqrt{2} is NP-hard under randomized reductions, and the problem is conjectured to remain intractable even for polynomial approximation factors \gamma.

The Closest Vector Problem (CVP) generalizes SVP by requiring, given a lattice L and a target t \in \mathbb{R}^n, to find a lattice vector v \in L that minimizes the distance \|v - t\|. Like SVP, CVP has search and decision variants, and is often considered in the approximate setting, \gamma-CVP, where the goal is to find a vector within \gamma times the minimal distance. CVP is NP-hard even for exact solutions in the \ell_2 norm, and approximating it within constant factors is also NP-hard.

The Learning With Errors (LWE) problem provides an average-case formulation suitable for cryptographic constructions. In the search-LWE variant, given a dimension n, a modulus q, and polynomially many samples (a_i, b_i = \langle a_i, s \rangle + e_i \mod q) for a secret s \in \mathbb{Z}_q^n, where the a_i \in \mathbb{Z}_q^n are uniform and the e_i are small errors from a distribution \chi (e.g., a discrete Gaussian), the task is to recover s. The decision-LWE variant asks to distinguish such samples from uniform pairs (a_i, u_i \in \mathbb{Z}_q). LWE is conjectured hard for appropriate parameters, with decision-LWE being at least as hard as search-LWE via standard reductions.

Ring-LWE (RLWE) extends LWE to the ring R_q = \mathbb{Z}_q[x]/(x^n + 1), where n is a power of 2. Samples are pairs (a_i, b_i = a_i \cdot s + e_i) with a_i uniform in R_q, the secret s drawn uniformly or from the error distribution, and errors e_i from a discrete Gaussian \chi over the ring. Search-RLWE requires recovering s, while decision-RLWE involves distinguishing from uniform. RLWE enables more efficient constructions due to its algebraic structure while retaining worst-case hardness guarantees over ideal lattices.

A key result connecting these problems is the worst-case to average-case reduction from lattice problems like SVP to LWE, showing that solving LWE on average is at least as hard as approximating SVP (and related problems like SIVP) in the worst case to within polynomial factors. Such reductions—Regev's quantum reduction for LWE and the Micciancio–Regev reduction for the related Short Integer Solution problem—rely on Gaussian measures to map worst-case instances to average-case samples. Lattice problems exhibit quantum resistance, as no known quantum polynomial-time algorithm solves SVP or CVP, even approximately. The best quantum algorithms, such as quantum variants of sieving, give only modest improvements in the exponent over classical methods, which remain exponential in the dimension, with no analogue of Shor's algorithm for factoring. Classical algorithms like LLL run in polynomial time but achieve only exponentially large approximation factors, leaving exact or fine-grained approximation hard even quantumly.
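
As a concrete illustration of the LWE distribution just defined, the short sketch below (toy parameters chosen only for readability, far too small for security) generates search-LWE samples and the uniform values that the decision variant asks to distinguish them from.

```python
# Toy LWE sample generator (illustrative only; parameters are far too small for security).
import numpy as np

rng = np.random.default_rng(0)
n, q, m, sigma = 8, 97, 16, 1.0   # dimension, modulus, number of samples, noise width

s = rng.integers(0, q, size=n)                           # secret vector in Z_q^n
A = rng.integers(0, q, size=(m, n))                      # uniform sample vectors a_i (rows)
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)    # small (rounded Gaussian) errors
b = (A @ s + e) % q                                      # b_i = <a_i, s> + e_i mod q

# Decision-LWE asks whether (A, b) came from this process or b is uniform in Z_q^m.
u = rng.integers(0, q, size=m)
print("LWE samples:", b[:5], "uniform:", u[:5])
```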

Cryptographic Primitives and Constructions

Public-Key Encryption and Key Encapsulation

Lattice-based public-key encryption (PKE) schemes leverage the hardness of the Learning With Errors (LWE) problem to provide indistinguishability against chosen-plaintext attacks (IND-CPA). The LWE problem, introduced by Regev in 2005, posits that it is computationally difficult to distinguish random linear equations perturbed by small errors from truly random ones, forming the foundational assumption for these constructions.

A basic LWE-based PKE scheme operates over a modulus q and dimension n. The secret key is a vector \mathbf{s} \in \mathbb{Z}_q^n (in many variants sampled, like the errors, from a narrow distribution), while the public key consists of a random matrix \mathbf{A} \in \mathbb{Z}_q^{m \times n} and \mathbf{b} = \mathbf{A} \mathbf{s} + \mathbf{e}, where \mathbf{e} is a small error vector sampled from a discrete Gaussian distribution. To encrypt a message bit m \in \{0, 1\}, the sender samples a short random vector \mathbf{r} \in \mathbb{Z}_q^m (e.g., with binary or otherwise small entries) and small error terms \mathbf{e}_1, e_2, computing the ciphertext as \mathbf{u} = \mathbf{A}^T \mathbf{r} + \mathbf{e}_1 and v = \mathbf{b}^T \mathbf{r} + e_2 + m \lfloor q/2 \rfloor. Decryption recovers m by computing v - \mathbf{u}^T \mathbf{s} and checking whether the result is closer to 0 or to \lfloor q/2 \rfloor modulo q, which succeeds with overwhelming probability because the accumulated error terms remain small. This construction achieves IND-CPA security directly under the LWE assumption.

For improved efficiency, Ring-LWE (RLWE) variants replace matrix-vector operations with ring multiplication over polynomial rings, such as \mathbb{Z}_q[x]/(x^d + 1), reducing key and ciphertext sizes significantly. RLWE-based schemes maintain security based on the hardness of distinguishing ring elements perturbed by small errors from uniform ones, analogous to LWE. An example is the NewHope key encapsulation mechanism (KEM) from 2016, which uses RLWE for compact keys, with IND-CCA2 security in its later variants achieved via the Fujisaki-Okamoto transform applied to an underlying CPA-secure PKE. The Fujisaki-Okamoto transform, originally proposed in 1999 and refined in 2013, converts a CPA-secure scheme into one secure against chosen-ciphertext attacks (CCA) by incorporating message randomization and re-encryption checks.

Key encapsulation mechanisms (KEMs) differ from PKE by focusing on encapsulating a symmetric key k rather than directly encrypting arbitrary messages. In a lattice-based KEM, the sender computes a ciphertext c = \text{Enc}(pk, k) using the public key pk, while the receiver decapsulates c with the secret key sk to recover k, which can then secure a symmetric session. This encapsulation process inherits the LWE or RLWE hardness for IND-CPA security, with CCA security obtained via transformations like Fujisaki-Okamoto. Lattice-based PKE and KEMs provide semantic security under the LWE assumption, ensuring that ciphertexts reveal no information about plaintexts beyond their length. For larger messages, these schemes are typically used in hybrid encryption: the PKE or KEM encapsulates a symmetric key, which encrypts the bulk data via an efficient algorithm like AES. Most lattice-based PKE and KEM constructions achieve chosen-ciphertext security through generic transformations such as Fujisaki-Okamoto, avoiding the need for Fiat-Shamir-style proofs used in other primitives.
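
The following sketch instantiates the bit-encryption scheme described above with toy parameters (no security is claimed); the secret is drawn from the same narrow distribution as the errors—a standard small-secret variant—so that the cross term \mathbf{e}_1^T \mathbf{s} stays small and decryption succeeds.

```python
# Minimal sketch of Regev-style LWE encryption of a single bit (toy parameters,
# small-secret variant, no security claims).
import numpy as np

rng = np.random.default_rng(1)
n, m, q, sigma = 16, 64, 3329, 1.5

def small(shape):                        # rounded-Gaussian "small" values
    return np.rint(rng.normal(0, sigma, size=shape)).astype(int)

# Key generation
s = small(n)                             # small secret key (small-secret LWE variant)
A = rng.integers(0, q, size=(m, n))      # public random matrix
b = (A @ s + small(m)) % q               # public vector b = A s + e

# Encrypt a bit msg in {0, 1}
msg = 1
r = rng.integers(0, 2, size=m)           # short random vector (binary entries)
u = (A.T @ r + small(n)) % q             # u = A^T r + e1
v = (b @ r + small(1)[0] + msg * (q // 2)) % q   # v = b^T r + e2 + m*floor(q/2)

# Decrypt: v - u^T s is close to 0 (bit 0) or q/2 (bit 1)
d = (v - u @ s) % q
decoded = 1 if q // 4 < d < 3 * q // 4 else 0
print("decrypted bit:", decoded)
```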

Digital Signatures

Lattice-based digital signature schemes provide a quantum-resistant alternative to classical signatures by leveraging the hardness of lattice problems, enabling provably secure constructions for message authentication and integrity verification. These schemes typically follow either a hash-and-sign paradigm, where a trapdoor derived from lattice structures allows efficient signing, or a Fiat-Shamir paradigm built from identification protocols, in both cases maintaining security against forgery.

The foundational hash-and-sign framework was introduced by Gentry, Peikert, and Vaikuntanathan in 2008, who demonstrated how to generate signatures using lattice trapdoors. In this approach, the signer uses a secret trapdoor for a public matrix to sample a short preimage vector from a discrete Gaussian distribution satisfying a linear relation determined by H(m), a hash of the message m; because the sampled distribution is statistically independent of the particular trapdoor, signatures reveal essentially nothing about the secret key. The construction relies on the ability to find short vectors in lattices with trapdoors, which is computationally infeasible without the secret key.

To transform interactive identification protocols into non-interactive signatures, lattice-based schemes often apply the Fiat-Shamir heuristic, which replaces the verifier's random challenge with a hash of the commitment and the message. This method, originally proposed by Fiat and Shamir in 1986, is adapted to lattice settings to produce signatures from zero-knowledge proofs of knowledge of short vectors satisfying certain linear equations modulo an integer q. However, direct application can lead to signatures that leak information about the secret key due to non-uniform distributions in lattice sampling. To address this, the "with aborts" technique, developed by Lyubashevsky, introduces rejection sampling during signing: if an intermediate value exceeds a threshold, the process aborts and retries with fresh randomness, effectively masking the signer's key distribution and producing signatures whose distribution is independent of the secret.

The security of these lattice-based signatures is proven in the random oracle model, achieving existential unforgeability against adaptive chosen-message attacks under the Learning With Errors (LWE) or Short Integer Solution (SIS) assumptions, where the SIS problem involves finding short solutions to random linear equations over \mathbb{Z}_q. Rejection sampling in the with-aborts method ensures that signatures reveal no information about the private key, even after multiple signing queries. Compared to classical signatures such as ECDSA or RSA, lattice-based ones typically produce larger outputs—often several kilobytes versus hundreds of bytes for equivalent security levels—but offer resistance to quantum attacks, making them suitable for post-quantum environments.
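
To make the abort loop concrete, the sketch below implements a toy Fiat-Shamir-with-aborts signature over \mathbb{Z}_q with a one-trit challenge space; the parameters and challenge derivation are illustrative assumptions, so the scheme is not secure, but the rejection condition \|z\|_\infty \leq \gamma - \beta is exactly the mechanism that makes the output distribution independent of the secret.

```python
# Toy Fiat-Shamir-with-aborts signature over Z_q (illustrative only: tiny
# parameters and a single-trit challenge space, so it is NOT secure).
import hashlib
import numpy as np

rng = np.random.default_rng(2)
n, m, q, gamma, beta = 8, 24, 8380417, 1 << 17, 1   # beta bounds |c * s_i|

A = rng.integers(0, q, size=(n, m))
s = rng.integers(-1, 2, size=m)          # short secret
t = (A @ s) % q                          # public key

def challenge(w, msg):
    h = hashlib.sha256(w.astype(np.int64).tobytes() + msg).digest()
    return (h[0] % 3) - 1                # toy challenge c in {-1, 0, 1}

def sign(msg):
    while True:                          # rejection-sampling ("abort") loop
        y = rng.integers(-gamma, gamma + 1, size=m)
        w = (A @ y) % q
        c = challenge(w, msg)
        z = y + c * s
        if np.max(np.abs(z)) <= gamma - beta:   # otherwise abort and retry
            return z, c

def verify(msg, z, c):
    if np.max(np.abs(z)) > gamma - beta:
        return False
    w = (A @ z - c * t) % q              # equals A y for an honest signature
    return challenge(w, msg) == c

z, c = sign(b"hello")
print("valid:", verify(b"hello", z, c))
```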

Homomorphic Encryption Schemes

Lattice-based homomorphic encryption schemes enable computations on encrypted data without decryption, a property rooted in the linear structure of lattice-based ciphertexts, which lets additive and multiplicative operations on ciphertexts correspond to operations on plaintexts, albeit with accumulating noise that must be managed. These schemes, particularly fully homomorphic encryption (FHE), support circuits of arbitrary depth, making them suitable for privacy-preserving computation and secure outsourced processing.

The breakthrough in lattice-based FHE came with Craig Gentry's 2009 construction, which realized the first FHE scheme using ideal lattices and sparse secret keys. In this scheme, encryption is based on the hardness of problems related to the approximate shortest vector problem in ideal lattices, and ciphertexts include noise that grows with homomorphic operations. To achieve unlimited computation depth, Gentry introduced bootstrapping, a process that refreshes noisy ciphertexts by homomorphically evaluating the decryption circuit itself, effectively reducing noise while keeping the scheme fully homomorphic. This "bootstrappable" approach transformed somewhat homomorphic schemes—limited to bounded depth—into fully homomorphic ones, though at significant computational cost.

Building on learning with errors (LWE) encryption, which is naturally additively homomorphic—adding ciphertexts c_1 and c_2 yields a ciphertext decrypting to the sum of messages m_1 + m_2—lattice-based schemes extend to multiplication through techniques like relinearization, which decomposes multiplied ciphertexts to maintain compact size and control noise growth during repeated multiplications. In 2011, Zvika Brakerski, Craig Gentry, and Vinod Vaikuntanathan proposed the BGV scheme, advancing leveled FHE based on ring-LWE (RLWE), which supports a fixed number of multiplicative levels without bootstrapping. Key innovations include modulus switching, which scales down the modulus to bound noise while preserving correctness, and key switching, which replaces the secret key embedded in ciphertexts to enable relinearization without sparse keys. This results in more efficient leveled homomorphy for practical depths, with security reducible to standard (R)LWE assumptions.

The Gentry-Sahai-Waters (GSW) scheme from 2013 introduced an approach using matrix (gadget) encodings over LWE. It represents messages as approximate eigenvalues of ciphertext matrices, with the secret key acting as an approximate eigenvector, so that matrix multiplication of ciphertexts performs homomorphic multiplication with comparatively slow, asymmetric noise growth; it supports extension to full homomorphy via bootstrapping, though it excels in leveled settings for low-depth computations.

Noise management is central to these schemes, as homomorphic addition incurs only additive noise growth (\|e_1 + e_2\| \leq \|e_1\| + \|e_2\|), while multiplication amplifies it multiplicatively; specifically, after multiplying two ciphertexts with errors e_1 and e_2, the resulting noise satisfies \|e'\| \leq \|e_1\| \cdot \|e_2\| + B, where B bounds additional terms from the public key and message scaling. This rapid growth necessitates techniques like bootstrapping for unlimited depth or modulus reduction in leveled schemes to prevent noise from overwhelming the modulus and causing decryption errors. Practical implementations, such as the Microsoft SEAL library, provide RLWE-based schemes like BFV for efficient homomorphic operations, enabling applications in privacy-preserving machine learning and secure cloud analytics where data remains encrypted throughout processing.
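
The additive homomorphism of LWE ciphertexts described above can be seen directly in a few lines; the sketch below reuses the toy Regev-style encryption from earlier (illustrative parameters, no security) and shows that adding two ciphertexts of bits yields a ciphertext of their XOR, since 2\lfloor q/2 \rfloor \approx 0 \pmod q.

```python
# Sketch of additive homomorphism for toy Regev/LWE ciphertexts: adding two
# ciphertexts of bits m1, m2 yields a ciphertext of m1 XOR m2, with summed noise.
import numpy as np

rng = np.random.default_rng(3)
n, m, q = 16, 64, 3329

s = np.rint(rng.normal(0, 1.5, size=n)).astype(int)      # small secret
A = rng.integers(0, q, size=(m, n))
b = (A @ s + np.rint(rng.normal(0, 1.5, size=m)).astype(int)) % q

def enc(bit):
    r = rng.integers(0, 2, size=m)
    return (A.T @ r) % q, (b @ r + bit * (q // 2)) % q

def dec(u, v):
    d = (v - u @ s) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0

u1, v1 = enc(1)
u2, v2 = enc(1)
u_sum, v_sum = (u1 + u2) % q, (v1 + v2) % q               # ciphertext addition
print("Dec(c1 + c2) =", dec(u_sum, v_sum))                # 1 XOR 1 = 0
```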

Collision-Resistant Functions

Lattice-based cryptography provides constructions for collision-resistant hash functions and pseudorandom functions (PRFs), relying primarily on the hardness of the Short Integer Solution (SIS) and Learning With Errors (LWE) problems. These primitives offer post-quantum security, resisting attacks from both classical and quantum adversaries, unlike constructions built on number-theoretic assumptions such as factoring, which are vulnerable to quantum algorithms like Shor's.

A seminal construction is Ajtai's 1996 collision-resistant hash function, which takes an input \mathbf{x} \in \{0,1\}^m and computes the output \mathbf{y} = A \mathbf{x} \mod q, where A is a publicly fixed random matrix in \mathbb{Z}_q^{n \times m} with parameters n, m, q chosen such that m = \Theta(n \log q). Collisions in this function—distinct inputs \mathbf{x}, \mathbf{x}' yielding the same \mathbf{y}—imply the existence of a short nonzero vector \mathbf{z} = \mathbf{x} - \mathbf{x}' satisfying A \mathbf{z} = \mathbf{0} \mod q, solving an instance of the SIS problem. This ties collision resistance directly to the computational difficulty of finding short vectors in lattices, a core hardness assumption in lattice cryptography. The SIS problem asks, given a random A \in \mathbb{Z}_q^{n \times m}, to find a nonzero \mathbf{x} \in \mathbb{Z}^m with norm at most a bound \beta much smaller than q such that A \mathbf{x} = \mathbf{0} \mod q. Ajtai showed that the average-case hardness of SIS follows from the worst-case hardness of approximating lattice problems such as the Shortest Vector Problem (SVP) to within polynomial factors on n-dimensional lattices; this connection was refined and tightened in subsequent work using Gaussian measures. As noted in the mathematical foundations, SVP involves finding the shortest nonzero vector in a lattice, providing the underlying lattice-theoretic security.

Advanced lattice-based hashing primitives leverage trapdoor sampling techniques for applications requiring incremental or verifiable properties, such as accumulators and verifiable delay functions (VDFs). Micciancio and Peikert introduced efficient methods for generating and using strong trapdoors in lattices, enabling preimage sampling from distributions statistically close to discrete Gaussians while preserving one-wayness. These trapdoors facilitate constructions where hash values can be incrementally updated or verified after sequential computations, as seen in lattice-based accumulators that support compact membership proofs and in VDFs that enforce time-bound evaluations without trusted setups. The same family of assumptions underpins security in standardized post-quantum schemes, such as the use of Module-LWE in ML-KEM for key encapsulation.

For pseudorandom functions, lattice-based constructions from LWE and its deterministic Learning With Rounding variant provide efficient PRFs secure under the decisional LWE assumption. A key approach evaluates the PRF as F_{\mathbf{s}}(x) = \lfloor \langle \mathbf{a}_x, \mathbf{s} \rangle \rceil_p, where \mathbf{s} is the secret key, \mathbf{a}_x is a public value derived deterministically from the input x (for example via subset products of public matrices), and \lfloor \cdot \rceil_p denotes rounding from \mathbb{Z}_q down to \mathbb{Z}_p; the rounding plays the role of the noise, and indistinguishability from random functions follows from the hardness of distinguishing LWE samples. Such constructions support key-homomorphic properties, allowing evaluation under combined keys, and achieve concrete security levels suitable for symmetric-style primitives in post-quantum settings.
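
A minimal sketch of Ajtai's compression function follows (toy parameters assumed only for illustration); it makes explicit why any collision immediately yields a short SIS solution.

```python
# Sketch of Ajtai's SIS-based compression function: hash a binary string x of
# length m down to roughly n*log2(q) bits as A x mod q. A collision x != x'
# yields a short z = x - x' with A z = 0 mod q, i.e., an SIS solution.
import numpy as np

rng = np.random.default_rng(4)
n, q = 8, 257
m = int(n * np.ceil(np.log2(q))) * 2      # m > n log q, so the function compresses

A = rng.integers(0, q, size=(n, m))       # public random matrix

def ajtai_hash(x_bits):
    assert x_bits.shape == (m,) and set(np.unique(x_bits)) <= {0, 1}
    return (A @ x_bits) % q

x = rng.integers(0, 2, size=m)
print("hash:", ajtai_hash(x))
# If ajtai_hash(x) == ajtai_hash(x2) for x != x2, then z = x - x2 has entries in
# {-1, 0, 1} and satisfies A z = 0 mod q -- an SIS solution with ||z||_inf <= 1.
```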

Notable Algorithms and Implementations

CRYSTALS-Kyber

CRYSTALS-Kyber is an IND-CCA2-secure key encapsulation mechanism (KEM) based on the hardness of the Module Learning With Errors (Module-LWE) problem, submitted to the NIST post-quantum standardization process in November 2017 as part of the Cryptographic Suite for Algebraic Lattices (CRYSTALS). It was selected by NIST in July 2022 for standardization and finalized in August 2024 as ML-KEM in FIPS 203, providing efficient post-quantum key establishment suitable for protocols like TLS. Kyber's design emphasizes simplicity, speed, and small key sizes, making it practical for deployment while relying on lattice-based assumptions resistant to quantum attacks.

The core construction of Kyber builds on the Module-LWE problem over polynomial rings. Key generation samples a secret vector \mathbf{s} \in R_q^k and error \mathbf{e} \in R_q^k from centered binomial distributions, where R_q = \mathbb{Z}_q[X]/(X^n + 1), and computes the public key as \mathbf{t} = A \mathbf{s} + \mathbf{e}, with A a uniformly random k \times k matrix over R_q expanded from a seed \rho; the public key is (\rho, \hat{\mathbf{t}}), where \hat{\mathbf{t}} is the NTT representation of \mathbf{t}, while the private key is \mathbf{s}. For encapsulation, a random message m (encoding the shared secret) is hashed to derive randomness r, then the ciphertext consists of compressed versions of \mathbf{u} = A^T \mathbf{r} + \mathbf{e}_1 and v = \mathbf{t}^T \mathbf{r} + e_2 + \Delta(m), where \mathbf{r}, \mathbf{e}_1 \in R_q^k and e_2 \in R_q are small, and \Delta encodes m by scaling its bits toward q/2; the shared key is derived via a key derivation function (KDF) from a hash of m and the ciphertext. Decapsulation recovers m' by computing v - \mathbf{s}^T \mathbf{u}, decoding m' from the small result, re-encapsulating to verify, and outputting the KDF-derived key if consistent or an implicit-rejection value otherwise; this Fujisaki-Okamoto transform yields CCA security from the underlying CPA-secure PKE.

Kyber operates in a module lattice setting with polynomial degree n = 256 and q = 3329, using module rank k = 2, 3, or 4 for different security levels; compression factors d_u and d_v (e.g., 10 and 4 for k = 2) reduce ciphertext sizes to 768–1568 bytes by quantizing coefficients, introducing controlled rounding noise while preserving decodability. The parameter sets are Kyber-512 (k = 2, targeting NIST security category 1, comparable to AES-128), Kyber-768 (k = 3, category 3, comparable to AES-192), and Kyber-1024 (k = 4, category 5, comparable to AES-256), achieving decryption failure rates below 2^{-138}. To enable fast polynomial arithmetic, Kyber employs the Number Theoretic Transform (NTT) for multiplication in R_q, representing elements in the NTT domain during computations for O(n \log n) efficiency.

As of November 2025, Kyber (as ML-KEM) has been integrated into major browsers such as Chrome (starting with version 131) and Firefox for TLS key exchange, supporting hybrid post-quantum-classical modes, with a large and growing share of Cloudflare's TLS traffic protected by post-quantum key agreement. Implementations use constant-time operations, such as masked arithmetic and branch-free sampling and decoding, to resist timing and side-channel attacks like simple power analysis.
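
The sketch below illustrates two building blocks of Kyber-style schemes—negacyclic multiplication in R_q and centered binomial sampling—using schoolbook multiplication in place of the NTT; it is an illustrative reconstruction, not the reference implementation.

```python
# Two building blocks of Kyber-style schemes (toy code, no security claims):
# negacyclic polynomial multiplication in R_q = Z_q[X]/(X^n + 1), done here by
# schoolbook multiplication instead of the NTT, and centered binomial sampling.
import numpy as np

n, q, eta = 256, 3329, 2
rng = np.random.default_rng(5)

def poly_mul(a, b):
    """Schoolbook multiplication in Z_q[X]/(X^n + 1), using X^n = -1."""
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] += a[i] * b[j]
            else:
                res[k - n] -= a[i] * b[j]   # wrap-around with sign flip
    return res % q

def cbd(eta):
    """Centered binomial sample: sum of eta coin flips minus sum of eta flips."""
    bits = rng.integers(0, 2, size=(n, 2 * eta))
    return bits[:, :eta].sum(axis=1) - bits[:, eta:].sum(axis=1)

s = cbd(eta)                         # small secret polynomial
a = rng.integers(0, q, size=n)       # uniform public polynomial
e = cbd(eta)
t = (poly_mul(a, s) + e) % q         # one Ring/Module-LWE sample t = a*s + e
print("t[:5] =", t[:5])
```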

CRYSTALS-Dilithium

CRYSTALS-Dilithium is a lattice-based digital signature scheme designed for post-quantum security, relying on the hardness of the Module Learning With Errors (MLWE) and Module Short Integer Solution (MSIS) problems. It was first submitted to the NIST Post-Quantum Cryptography Standardization Process in 2017 as part of the CRYSTALS suite and selected for standardization in 2022 at the conclusion of Round 3, with final approval as FIPS 204 (renamed ML-DSA) in August 2024. The scheme employs the Fiat-Shamir with aborts paradigm to transform an identification protocol into a signature mechanism, emphasizing simplicity, efficiency, and resistance to side-channel attacks. As of November 2025, ML-DSA is seeing growing adoption in protocols and public-key infrastructures for quantum-resistant signatures.

The core construction uses module lattices over the polynomial ring \mathbb{Z}_q[x]/(x^n + 1), where vectors of polynomials serve as the underlying structure to balance security and performance. Key generation involves sampling small secret vectors s_1, s_2 with coefficients uniform in a narrow range [-\eta, \eta] and computing the public key components A (a matrix of ring elements expanded from a public seed) and t = A s_1 + s_2, with the public key consisting of the seed for A and the higher-order bits of t. This setup ensures that signatures remain short while providing provable security reductions.

Dilithium offers three parameter sets aligned with NIST security levels: Dilithium2 for approximately 128-bit security, Dilithium3 for 192-bit, and Dilithium5 for 256-bit. All variants use q = 8380417 and n = 256, with varying module dimensions (k, l) and bounds \gamma_1, \gamma_2 to scale security. For instance, Dilithium2 employs (k = 4, l = 4), \gamma_1 = 2^{17}, and \gamma_2 = (q-1)/88, resulting in public key sizes around 1.3 KB and signature sizes around 2.4 KB. The choice of q = 2^{23} - 2^{13} + 1 enables efficient Number Theoretic Transform (NTT) implementations for polynomial multiplication, since q \equiv 1 \pmod{2n} guarantees the required roots of unity.

The signing process begins with sampling a masking vector y with coefficients in [-\gamma_1, \gamma_1]. Compute w = A y \mod q, then decompose w into higher bits w_1 = \mathrm{HighBits}(w, 2\gamma_2) and lower bits w_0 = \mathrm{LowBits}(w, 2\gamma_2). The challenge c = \mathcal{H}(M \| w_1), where \mathcal{H} is a hash function and M is the message, is expanded into a sparse polynomial with small coefficients. The response is z = y + c \cdot s_1. The attempt is rejected and y resampled if \|z\|_\infty \geq \gamma_1 - \beta or if the low-order part r_0 = \mathrm{LowBits}(w - c \cdot s_2, 2\gamma_2) is too large, where \beta bounds \|c \cdot s_i\|_\infty (for example, \beta = 78 for Dilithium2). A hint h is then computed from the low-order bits so that the verifier can reconstruct w_1 without access to the secrets. The signature is (z, h, c). Verification recomputes w' = A z - c \cdot t \mod q, checks \|z\|_\infty < \gamma_1 - \beta, uses h to extract w_1', and confirms c = \mathcal{H}(M \| w_1'). This rejection sampling ensures that the distribution of signatures is independent of the secret key and that signature norms stay bounded.

Security is proven in the Existential Unforgeability under Chosen Message Attacks (EUF-CMA) model, with reductions to the hardness of MLWE and MSIS in the Quantum Random Oracle Model (QROM). Concrete security estimates place Dilithium2 above the NIST Level 2 threshold against lattice attacks, including core-SVP and MSIS solvers. To mitigate side-channel vulnerabilities, implementations incorporate shuffling of secret coefficients during sampling and constant-time operations throughout the algorithms. NIST recommends ML-DSA (FIPS 204) as the primary general-purpose post-quantum signature algorithm for widespread deployment, citing its balance of performance, security, and ease of implementation over alternatives like Falcon, despite Dilithium's larger signature sizes (e.g., about 2.4 KB vs. Falcon's roughly 0.7 KB at comparable security levels).
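
The HighBits/LowBits decomposition that drives Dilithium's hint mechanism can be sketched compactly; the version below is a simplified reading of the Decompose routine (the border case is handled in the spirit of the specification, but this is illustrative code, not the FIPS 204 reference).

```python
# Sketch of the HighBits/LowBits decomposition central to Dilithium's hint
# mechanism (simplified illustration, not the FIPS 204 reference code).
q = 8380417
gamma2 = (q - 1) // 88
alpha = 2 * gamma2

def decompose(r):
    r = r % q
    r0 = r % alpha
    if r0 > alpha // 2:
        r0 -= alpha                 # centered remainder in (-alpha/2, alpha/2]
    if r - r0 == q - 1:             # border case: mapped to r1 = 0 in the spec
        return 0, r0 - 1
    return (r - r0) // alpha, r0

def high_bits(r):
    return decompose(r)[0]

def low_bits(r):
    return decompose(r)[1]

r = 1234567
r1, r0 = decompose(r)
assert (r1 * alpha + r0) % q == r % q   # decomposition reassembles to r mod q
print("HighBits:", r1, "LowBits:", r0)
```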

NTRU and Variants

NTRU, introduced in 1996, is a ring-based public-key encryption scheme that operates over the polynomial ring \mathbb{Z}[x]/(x^N - 1) with coefficients reduced modulo a small prime p (for messages) and a larger modulus q (for ciphertexts), where p and q are coprime. Key generation selects short polynomials f and g (with small coefficients, e.g., in \{-1, 0, 1\}) such that f is invertible modulo both p and q, and computes the public key h = p * g * f^{-1} \mod q, where * denotes convolution (multiplication modulo x^N - 1). Encryption of a message polynomial m (with coefficients in [-p/2, p/2]) uses a short random blinding polynomial r to form the ciphertext c = r * h + m \mod q. Decryption computes a = f * c \mod q; assuming the coefficients of p * r * g + f * m are small enough to avoid wrapping modulo q, center-lift a to obtain the correct integer representatives, reduce b = a \mod p = f * m \mod p, and recover m = f_p^{-1} * b \mod p, where f_p^{-1} is the inverse of f modulo p.

The security of NTRU relies on the difficulty of recovering the short polynomials f and g from the public key h, which corresponds to finding short vectors in the associated NTRU lattice. This lattice is generated from h together with the modulus q and inherits the convolution ring structure; the shortest vector problem (SVP) in this 2N-dimensional lattice underpins the hardness, since an attacker able to find its short vectors could reconstruct f and g from h. Unlike Learning With Errors (LWE)-based schemes, NTRU does not rest on an LWE-style assumption but on the presumed hardness of this lattice problem, often resulting in smaller key sizes than early LWE constructions—for instance, NTRU parameter sets can achieve comparable security with public keys around 700 bytes versus over 1 KB for equivalent early LWE variants.

Variants of NTRU address vulnerabilities in the original design, particularly algebraic attacks exploiting the cyclotomic ring structure. NTRU Prime, proposed in 2016, changes the ring to use a non-cyclotomic irreducible polynomial, avoiding subfield attacks and reducing the attack surface while maintaining efficiency; it includes Streamlined NTRU Prime, an optimized key-encapsulation variant with simplified parameters for faster implementation. For digital signatures, NTRUSign extends the NTRU lattice to the approximate closest vector problem (CVP), generating signatures by finding lattice points close to a hash of the message and verifying via distance checks; later refinements addressed transcript-leakage forgeries through perturbation techniques and, eventually, provably secure Gaussian sampling as used in Falcon. These variants fix early weaknesses, such as hybrid attacks combining lattice reduction with the ring's algebraic structure, enhancing quantum resistance.

NTRUEncrypt saw commercial adoption in the 2000s through NTRU Cryptosystems, Inc., was standardized in IEEE 1363.1, and was used in embedded systems for its speed and low resource demands. As of 2025, NTRU variants appear in hybrid protocols, such as combinations with elliptic-curve key exchange in experimental TLS and IKEv2 drafts, providing diversified post-quantum security.
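
To make the encrypt/decrypt round trip concrete, the toy sketch below works in \mathbb{Z}[x]/(x^7 - 1) with textbook-style parameters (N = 7, p = 3, q = 41) and small polynomials chosen to be invertible; the prime q and the matrix-based inversion are simplifications for illustration only (real NTRU uses much larger N and typically a power-of-two q).

```python
# Toy NTRU encrypt/decrypt round trip in Z[x]/(x^N - 1) (tiny N, prime q chosen
# only to simplify inversion; illustrative values, no security claims).
import numpy as np

N, p, q = 7, 3, 41
coeff = lambda *c: np.array(c, dtype=np.int64)   # coefficients listed x^0 .. x^(N-1)

def conv(a, b):                       # multiplication modulo x^N - 1
    res = np.zeros(N, dtype=np.int64)
    for i in range(N):
        for j in range(N):
            res[(i + j) % N] += a[i] * b[j]
    return res

def center(a, modulus):               # lift to representatives in (-mod/2, mod/2]
    a = a % modulus
    return np.where(a > modulus // 2, a - modulus, a)

def inv_mod_prime(f, modulus):
    """Invert f in Z_modulus[x]/(x^N - 1) via Gaussian elimination on its
    circulant (convolution) matrix; works because modulus is prime."""
    C = np.array([[f[(i - j) % N] for j in range(N)] for i in range(N)]) % modulus
    aug = np.concatenate([C, np.eye(N, dtype=np.int64)], axis=1) % modulus
    for col in range(N):
        piv = next(r for r in range(col, N) if aug[r, col] % modulus != 0)
        aug[[col, piv]] = aug[[piv, col]]
        aug[col] = (aug[col] * pow(int(aug[col, col]), -1, modulus)) % modulus
        for r in range(N):
            if r != col:
                aug[r] = (aug[r] - aug[r, col] * aug[col]) % modulus
    return aug[:, N] % modulus        # first column of C^{-1} = f^{-1} as a polynomial

f = coeff(-1, 0, 1, 1, -1, 0, 1)      # short private polynomial (invertible mod p and q)
g = coeff(0, -1, -1, 0, 1, 0, 1)      # short private polynomial
h = conv(p * g % q, inv_mod_prime(f % q, q)) % q     # public key h = p*g*f^{-1} mod q

m = coeff(1, -1, 1, 1, 0, -1, 0)      # message with coefficients in {-1, 0, 1}
r = coeff(-1, 1, 0, 0, 0, -1, 1)      # random blinding polynomial
c = (conv(r, h) + m) % q              # ciphertext c = r*h + m mod q

a = center(conv(f, c), q)             # a = p*r*g + f*m, small so no wrap-around
b = a % p
m_rec = center(conv(inv_mod_prime(f % p, p), b) % p, p)
print("recovered:", np.array_equal(m_rec, m))
```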

Security Analysis

Provable Security Reductions

Lattice-based cryptographic schemes derive their security from the hardness of average-case problems such as Learning With Errors (LWE) and Short Integer Solution (SIS), which are linked to worst-case lattice problems like the Shortest Vector Problem (SVP) through provable reductions. A seminal quantum reduction, established by Regev, shows that solving average-case LWE with noise rate \alpha is at least as hard (for quantum algorithms) as approximating worst-case SVP or the Shortest Independent Vectors Problem (SIVP) to within \tilde{O}(n / \alpha) factors, where n is the lattice dimension. This reduction implies that any efficient algorithm for LWE would break these hard lattice problems, providing a foundation for the security of many lattice-based primitives.

For digital signatures, security reductions from SIS and LWE to the Fiat-Shamir-with-aborts paradigm have been proven in the random oracle model. Lyubashevsky demonstrated that lattice-based identification schemes can be transformed into signatures with tight security bounds, where signature size and efficiency are optimized by rejecting unsafe samples during signing to keep signatures short without relying on trapdoors. These reductions ensure that forging a signature is as hard as solving SIS or LWE instances of comparable parameters.

The hardness of LWE is preserved even against quantum adversaries. Peikert provided a classical reduction from worst-case GapSVP to LWE, and subsequent analysis indicates that known quantum attacks do not weaken LWE beyond what the established reductions already account for, supporting its post-quantum security.

Parameter selection in lattice-based schemes relies on concrete security estimates that model the cost of attacks, particularly Block Korkine-Zolotarev (BKZ) lattice reduction used to approximate SVP. Simulations of BKZ behavior, such as the core-SVP estimator, predict the block size \beta required to break a scheme of dimension n and modulus q, allowing designers to choose parameters that achieve target security levels such as 128 bits against classical or quantum attackers; the core-SVP methodology assigns an attack a cost of roughly 2^{0.292\beta} classical (or about 2^{0.265\beta} quantum) operations and checks that the implied root Hermite factor is consistent with the scheme's error and modulus bounds.

Most lattice-based schemes are proven secure in the random oracle model (ROM), where hash functions are idealized as random oracles to facilitate proofs. For post-quantum security, the quantum random oracle model (QROM) extends the ROM to adversaries that query the oracle in superposition, with work by Zhandry and others establishing that many standard ROM proof techniques carry over to the QROM with appropriate adjustments, preserving the security of Fiat-Shamir-based constructions against quantum adversaries. For decision LWE, concrete distinguishing-advantage bounds depend on the error distribution; with typical discrete Gaussian errors, the advantage of distinguishing LWE samples from uniform becomes negligible once the modulus q and the noise width are chosen sufficiently large relative to the dimension n.
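
As an illustration of how the core-SVP methodology turns an estimated BKZ block size into a bit-security figure, the snippet below applies the standard 0.292\beta (classical) and 0.265\beta (quantum) sieve exponents to hypothetical block sizes; the inputs are illustrative, not official estimates for any scheme.

```python
# Sketch of the "core-SVP" cost model used in parameter selection: given the
# BKZ block size beta estimated to break a parameter set, the attack cost is
# modeled as 2^(0.292*beta) classically and 2^(0.265*beta) quantumly.
CLASSICAL_EXP = 0.292   # classical sieve exponent
QUANTUM_EXP = 0.265     # quantum-accelerated sieve exponent

def core_svp_bits(beta):
    return CLASSICAL_EXP * beta, QUANTUM_EXP * beta

# Hypothetical block sizes, chosen only to illustrate the arithmetic.
for name, beta in [("toy parameter set A", 400), ("toy parameter set B", 625)]:
    c_bits, q_bits = core_svp_bits(beta)
    print(f"{name}: beta={beta} -> ~2^{c_bits:.0f} classical, ~2^{q_bits:.0f} quantum")
```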

Known Attacks and Vulnerabilities

Lattice reduction attacks form the cornerstone of cryptanalysis against lattice-based schemes, primarily targeting the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP), which underpin the hardness of these cryptosystems. The Lenstra–Lenstra–Lovász (LLL) algorithm, introduced in 1982, provides a polynomial-time method to find a reduced basis for a lattice, yielding vectors that are approximately orthogonal and of comparable length, though it only guarantees an exponential approximation factor for SVP. This makes LLL a foundational tool for initial basis reduction but insufficient alone for breaking practical parameters. Subsequent advancements, such as the Block Korkine–Zolotarev (BKZ) algorithm from 1994, iteratively apply an SVP solver (enumeration) to projected blocks of the basis to achieve better approximations, solving SVP and CVP instances more effectively. Improvements like BKZ 2.0, proposed in 2011, enhance simulation models to predict behavior in high dimension with block sizes of 50 and beyond, substantially increasing the feasible block size in security estimates by incorporating early termination, preprocessing, and extreme pruning to reduce computational overhead.

Algebraic and statistical attacks exploit structural properties of specific lattice schemes, often recovering small secrets embedded in the public key or in signature transcripts. A seminal example is the 2006 attack by Nguyen and Regev on GGH and NTRUSign signatures, which models each signature as a point in the fundamental parallelepiped of the secret basis and learns that parallelepiped from a moderate number of signature transcripts, revealing the secret key; it succeeds precisely because the secret basis vectors have small coefficients. Complementing this, Coppersmith's 1996 method, refined by Howgrave-Graham in 1997, finds small roots of polynomial congruences modulo an integer by applying lattice reduction to a lattice built from the polynomial's monomials and coefficients, and applies to cryptosystems with small secrets by solving the resulting SVP instance to recover bounded solutions. These techniques highlight how algebraic structure can amplify the effectiveness of lattice reduction beyond generic SVP hardness.

Quantum attacks on lattice problems offer potential speedups but remain limited compared to classical ones for the core search problems. Regev's 2005 quantum reduction connects worst-case lattice problems to average-case Learning With Errors (LWE), but known quantum algorithms provide no speedup beyond Grover-style quadratic improvements for the unstructured-search components of SVP and CVP solvers, preserving the exponential hardness of lattice-based cryptography against quantum adversaries. More targeted quantum enhancements, such as Laarhoven's 2015 application of quantum search to sieve algorithms, accelerate SVP solving to a heuristic time complexity of approximately 2^{0.268n + o(n)} in dimension n, improving on classical sieving by leveraging quantum amplitude amplification for nearest-neighbor searches in high dimensions. These quantum methods, while theoretically faster, still require enormous resources for NIST-level parameters, underscoring the post-quantum resilience of lattice schemes.

Decapsulation faults in key encapsulation mechanisms like Kyber can create oracles akin to Bleichenbacher's attacks on RSA PKCS#1 v1.5, where faulty decryptions reveal partial secret-key information through repeated queries. A 2023 analysis demonstrates that such faults allow an adversary to recover the private key by exploiting error patterns in the decoding process, requiring only a few thousand faulty decapsulations. This vulnerability is mitigated through masking techniques that randomize intermediate computations, ensuring constant-time operations and uniform failure behavior that does not expose secret structure.

As of 2025, no cryptanalytic breaks have succeeded against the NIST-standardized parameters of lattice-based schemes such as ML-KEM and ML-DSA, and their security margins remain intact against known attacks. For instance, Kyber-512 resists core-SVP attacks estimated at roughly 2^{112} operations, corresponding to a root Hermite factor of approximately 1.005 in an effective dimension around 500, where the factor measures the quality of basis reduction needed to approximate the shortest vector within the scheme's tolerance. These estimates, derived from refined BKZ simulations, confirm that practical parameters exceed the computational feasibility of current and near-term adversaries.
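
The root Hermite factor quoted above can be related to a BKZ block size with the standard asymptotic formula used in lattice estimators; the snippet below evaluates it for a few illustrative block sizes (real security analyses rely on full BKZ simulators rather than this closed form).

```python
# Sketch relating a BKZ block size beta to the root Hermite factor delta it
# achieves, using the asymptotic formula commonly quoted in the estimator
# literature: delta ~ ((beta/(2*pi*e)) * (pi*beta)^(1/beta))^(1/(2*(beta-1))).
# Values are illustrative only.
import math

def root_hermite_factor(beta):
    base = (beta / (2 * math.pi * math.e)) * (math.pi * beta) ** (1.0 / beta)
    return base ** (1.0 / (2 * (beta - 1)))

for beta in (100, 200, 400):
    print(f"BKZ-{beta}: delta ~ {root_hermite_factor(beta):.4f}")
```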

Standardization and Practical Adoption

NIST Post-Quantum Standardization

In December 2016, the National Institute of Standards and Technology (NIST) issued a formal call for proposals to standardize quantum-resistant public-key cryptographic algorithms, with a submission deadline of November 30, 2017. A total of 82 algorithms were submitted, of which 69 were deemed complete and proper after initial review, proceeding to Round 1 of evaluation in December 2017. Among these, lattice-based schemes were dominant, comprising 26 submissions, reflecting their prominence due to favorable security and performance profiles.

The evaluation progressed through multiple rounds, with Round 1 concluding in January 2019 and advancing 26 candidates to Round 2, which ran from March 2019 to July 2020. In Round 3, from October 2020 to July 2022, NIST selected CRYSTALS-Kyber for key encapsulation and CRYSTALS-Dilithium (together with Falcon and the hash-based SPHINCS+) for digital signatures to advance to standardization, while the other lattice-based key-encapsulation finalists, NTRU and SABER, were not selected. These decisions were based on NIST's criteria, prioritizing security strength against quantum attacks, computational cost (measured in CPU cycles for key generation, encapsulation or signing, and verification), and interoperability for practical deployment.

NIST finalized its first post-quantum standards in August 2024, publishing Federal Information Processing Standard (FIPS) 203 for the Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM, derived from Kyber), FIPS 204 for the Module-Lattice-Based Digital Signature Algorithm (ML-DSA, derived from Dilithium), and FIPS 205 for the Stateless Hash-Based Digital Signature Algorithm (SLH-DSA, a non-lattice scheme). The lattice-based selections, ML-KEM and ML-DSA, were chosen for their balanced security levels and efficiency, outperforming alternatives in cycle counts and key sizes while ensuring interoperability with existing protocols. FIPS 203 and 204 specify multiple parameter sets (e.g., ML-KEM-512 for security category 1) to support various use cases and security strengths.

Following these releases, NIST initiated an additional call for digital signature proposals in August 2022, with a deadline of June 1, 2023, receiving 40 submissions to diversify options beyond the initial selections. In October 2024, NIST announced 14 second-round candidates from this call, including the lattice-based scheme HAWK alongside code-based and other designs such as CROSS and LESS, continuing evaluation into 2025. As of November 2025, the second round remains under evaluation following the Sixth PQC Standardization Conference in September 2025, with no third-round advancements announced. For key encapsulation, NIST selected the code-based HQC as a backup to ML-KEM in March 2025, but lattice-based ML-KEM remains the primary recommendation due to its performance-security trade-off. NIST's standards also frame a broader transition for federal systems, with National Security Memorandum 10 and subsequent guidance targeting completion of post-quantum migration by 2035 and the phase-out of quantum-vulnerable algorithms beginning around 2030. Lattice-based algorithms like ML-KEM and ML-DSA were prioritized in this process for their ability to provide robust quantum resistance at reasonable computational cost, facilitating adoption even in constrained environments.

Challenges in Deployment

Lattice-based cryptographic schemes, such as those standardized by NIST including CRYSTALS-Kyber (now ML-KEM), face performance challenges in deployment due to their larger key and ciphertext sizes compared to classical algorithms. For instance, the public key for Kyber-512 is 800 bytes, roughly three times larger than a 2048-bit RSA public key at 256 bytes, which can strain bandwidth-limited environments and storage constraints. Optimizations like AVX2 vector instructions and the Number Theoretic Transform (NTT) have reduced computational overhead on x86 processors, enabling efficient polynomial multiplications central to these schemes. However, performance degrades notably on resource-constrained embedded devices, such as the ARM Cortex-M4, where encapsulation operations remain costly due to the intensive arithmetic over polynomial rings. Benchmarks illustrate this gap: Kyber-512 encapsulation has been reported at approximately 210,000 cycles on such embedded processors, compared to around 10,000 cycles reported for X25519 ECDH at equivalent security levels, highlighting the need for hardware acceleration in practical rollouts.

Side-channel attacks pose another deployment hurdle, particularly targeting the Gaussian sampling step used in key generation and signing operations, which can leak timing or power information revealing secret keys. For example, cache-timing attacks on schemes like BLISS exploit variations in sampling rejection rates to recover the private key from as few as 10,000 signatures. Countermeasures such as masking, which splits sensitive data into randomized shares to obscure leakage, are essential but introduce substantial overhead; first-order masking can increase runtime by up to 10 times due to the need for secure multi-party-style computations during sampling and arithmetic. These protections, while effective against differential power analysis, complicate implementations and elevate costs, especially in hardware-constrained settings like smart cards.

Interoperability and migration to lattice-based systems require careful integration to maintain backward compatibility, often through hybrid modes that combine classical and post-quantum primitives. In TLS 1.3, for instance, hybrid key exchange using X25519 alongside Kyber-768 concatenates the shared secrets from both so that the connection remains secure as long as either component holds, providing immediate quantum resistance without disrupting existing deployments. This approach necessitates API updates in libraries and protocols to support post-quantum parameters, posing engineering challenges for legacy systems. Broader quantum migration efforts, including updating public key infrastructure (PKI) with classical-plus-PQC signatures, demand significant resources; estimates for certificate reissuance and validation across large enterprises exceed $100 million, driven by the scale of revoking and reissuing millions of certificates. As of 2025, OpenSSL 3.5 and later versions include native support for ML-KEM in hybrid configurations, facilitating adoption, though only about 8.6% of web PKI endpoints had integrated post-quantum mechanisms per scans of top domains as of June 2025. NIST's standardization of these algorithms underscores the urgency, but practical barriers like these continue to slow widespread rollout.
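
The concatenate-and-derive pattern used by such hybrids can be sketched as follows; X25519 and HKDF come from the widely used `cryptography` Python package, while the ML-KEM encapsulation call is a stub standing in for whatever post-quantum library a deployment actually uses (a hypothetical placeholder, not a real API).

```python
# Sketch of the hybrid key-derivation pattern described above: the classical
# X25519 shared secret and a post-quantum KEM shared secret are concatenated
# and fed through a KDF.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def mlkem_encapsulate(public_key: bytes) -> tuple[bytes, bytes]:
    """Hypothetical placeholder for an ML-KEM binding: returns (ciphertext, shared_secret)."""
    return os.urandom(1088), os.urandom(32)   # sizes mimic ML-KEM-768

# Classical part: ephemeral X25519 exchange.
client_x = X25519PrivateKey.generate()
server_x = X25519PrivateKey.generate()
ecdh_secret = client_x.exchange(server_x.public_key())

# Post-quantum part (stubbed): encapsulate against the server's ML-KEM key.
_ct, pq_secret = mlkem_encapsulate(b"server-mlkem-public-key")

# Concatenate-and-KDF, in the spirit of X25519-plus-Kyber hybrid key exchange.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid tls sketch",
).derive(ecdh_secret + pq_secret)
print("derived", len(session_key), "byte session key")
```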

    2023.10.03: The inability to count correctly - cr.yp.to: blog
    Oct 3, 2023 · A discretization attack easily hides the fact that NTRU has smaller sizes than Kyber at intermediate security levels, but it doesn't hide NTRU- ...
  45. [45]
  46. [46]
    Cyber Centre's summary review of final candidates for NIST Post ...
    Mar 1, 2021 · Twenty‑six of the initial 69 submissions were lattice‑based, as are 5 of the 7 finalists. Several techniques used in submissions to the ...
  47. [47]
    NIST Post-Quantum Cryptography Standardization
    Post-Quantum Cryptography Standardization ... HQC was selected for standardization on March 11, 2025. NIST IR 8545, Status Report on the Fourth Round of the NIST ...Round 3 Submissions · Call for Proposals · Round 1 Submissions
  48. [48]
    [PDF] Submission Requirements and Evaluation Criteria for the Post ...
    A complete submission requires a cover sheet, algorithm specifications, optical media, and intellectual property statements. A proper submission must meet ...
  49. [49]
    NIST Releases First 3 Finalized Post-Quantum Encryption Standards
    Aug 13, 2024 · The standard uses the CRYSTALS-Dilithium algorithm, which has been renamed ML-DSA, short for Module-Lattice-Based Digital Signature Algorithm.
  50. [50]
    Cost (Evaluation Criteria) - Post-Quantum Cryptography
    Jan 3, 2017 · Schemes will be evaluated based on the sizes of the public keys, ciphertexts, and signatures that they produce.Missing: cycles interoperability
  51. [51]
    Post-Quantum Cryptography: Additional Digital Signature Schemes
    Aug 29, 2022 · NIST posted a call for additional digital signature proposals to be considered in the PQC standardization process. The call for submissions closed June 1, 2023.Round 2 Additional Signatures · Round 1 Additional Signatures · News & Updates<|separator|>
  52. [52]
    [PDF] Update on the NIST standardization of additional signature schemes
    Jan 15, 2025 · In October 2024, NIST announced 14 Second-Round candidates chosen from 40 First-Round submissions, ... Lattice-Based Digital Signature.<|control11|><|separator|>
  53. [53]
    NIST Selects HQC as Fifth Algorithm for Post-Quantum Encryption
    Mar 11, 2025 · The new algorithm, called HQC, will serve as a backup defense in case quantum computers are someday able to crack ML-KEM.Missing: mandatory federal 2035
  54. [54]
    [PDF] NIST IR 8547 initial public draft, Transition to Post-Quantum ...
    Nov 12, 2024 · National Security Memorandum 10 (NSM-10) establishes the year 2035 as the primary target for completing the migration to PQC across Federal ...
  55. [55]
    Security (Evaluation Criteria) - Post-Quantum Cryptography
    Jan 3, 2017 · Call for Proposals 4.A Security The security provided by a cryptographic scheme is the most important factor in the evaluation.Missing: cycles interoperability
  56. [56]
    Kyber | Open Quantum Safe
    Public key size (bytes), Secret key size (bytes), Ciphertext size (bytes), Shared secret size (bytes), Keypair seed size (bytes), Encapsulation seed size (bytes) ...
  57. [57]
    Goodbye ECDH, and Hello To Kyber - Medium
    Nov 4, 2023 · With ECDH, we have a private key of 32 bytes (256 bits) and a public key size of 64 bytes (512 bits). These sizes will increase for Kyber and ...
  58. [58]
    pq-crystals/kyber - GitHub
    This repository contains the official reference implementation of the Kyber key encapsulation mechanism, and an optimized implementation for x86 CPUs ...
  59. [59]
    Post-Quantum Kyber Benchmarks (ARM Cortex-M4) - wolfSSL
    Mar 6, 2024 · Note that Kyber512, from a security perspective, is comparable to ECDH at SECP256R1. The numbers speak for themselves: Kyber wins.Missing: cycles x86
  60. [60]
    Performance and Storage Analysis of CRYSTALS-Kyber (ML-KEM ...
    Aug 7, 2025 · On x86_64, Kyber's cost of approximately 210,800 cycles is roughly 25 times faster than RSA ( 5.2 million cycles) and 72 times faster than SECP ...Missing: x86 | Show results with:x86
  61. [61]
    [PDF] A Cache Attack on the BLISS Lattice-Based Signature Scheme
    Mar 4, 2016 · We present attacks on the two implemented methods for sampling from a discrete Gaussian and for both successfully obtain the secret signing key.
  62. [62]
    On the Masking-Friendly Designs for Post-Quantum Cryptography
    Nov 14, 2023 · Our results show that the design decisions have a significant impact on the efficiency of integrating masking countermeasures into lattice-based ...
  63. [63]
    [PDF] Side-channel Analysis of CRYSTALS-Kyber and A Novel Low-Cost ...
    Jan 5, 2023 · This paper proposes side-channel leakage detection on CRYSTALS-Kyber's decryption, and evaluates masking countermeasures, including a novel ...
  64. [64]
    [PDF] The Challenge of Side-Channel Countermeasures on Post ...
    Smartcards: In real life. Timing attacks are indeed important to consider. But all other classical side-channel attacks are definitely real threats!
  65. [65]
    TLS 1.3 Hybrid Key Exchange using X25519Kyber768 / ML-KEM
    Oct 31, 2024 · Browsers and cloud providers started rolling out hybrid key exchange in TLS 1.3 (primarily 1 using X25519 with Kyber768).Missing: modes | Show results with:modes
  66. [66]
    Hybrid key exchange in TLS 1.3 - IETF
    Sep 7, 2023 · Hybrid key exchange refers to using multiple key exchange algorithms simultaneously and combining the result with the goal of providing security.Missing: modes | Show results with:modes
  67. [67]
    Modernizing federal cryptography in the quantum age - Tanium
    Sep 4, 2025 · ... quantum cryptography will cost approximately $7.1 billion. This ... Quantum migration must be framed not as an IT initiative but as ...
  68. [68]
    OpenSSL 3.5.0 now contains post-quantum procedures | heise online
    Apr 8, 2025 · The version released on April 8, 2025 is also an LTS (long term stable) and will be provided with updates for five years until April 8, 2030.
  69. [69]
    The State of Post-Quantum Cryptography (PQC) on the Web | F5 Labs
    Jun 26, 2025 · For general encryption, NIST selected CRYSTALS-Kyber and for digital signatures, it chose CRYSTALS-Dilithium, FALCON, and SPHINCS+. These ...
  70. [70]
    [PDF] Module-Lattice-Based Key-Encapsulation Mechanism Standard
    Aug 13, 2024 · Other NIST-approved key establishment schemes are specified in NIST Special Publication (SP) 800-56A, Recommendafion for Pair-Wise Key- ...