
Learning with errors

Learning with Errors (LWE) is a fundamental computational problem in cryptography, defined as the task of distinguishing between samples of the form (a, b = \langle a, s \rangle + e \mod q), where a is chosen uniformly at random from \mathbb{Z}_q^n, s is a fixed secret vector, and e is a small error term drawn from a bounded distribution such as a discrete Gaussian, and uniformly random pairs (a, u) over \mathbb{Z}_q^n \times \mathbb{Z}_q. This average-case problem captures the difficulty of decoding noisy linear equations modulo q, with parameters n (dimension), q (modulus, often polynomial in n), and noise level controlled by \alpha q where \alpha = 1/\mathrm{poly}(n). Introduced by Oded Regev in 2005, LWE extends earlier ideas from learning parity with noise and provides a versatile foundation for secure cryptographic primitives resistant to quantum attacks. The hardness of LWE is tied to the worst-case complexity of approximating fundamental lattice problems, such as the shortest vector problem (SVP) and shortest independent vectors problem (SIVP), to within \tilde{O}(n / \alpha) factors, via quantum and classical reductions. Specifically, Regev's original work established a quantum reduction showing that solving LWE implies efficient approximation of these lattice problems, while subsequent results by Regev, Peikert, and others provided classical reductions for decision-LWE, confirming its robustness even against non-quantum adversaries. No subexponential-time algorithms are known for LWE; the best known attacks require 2^{\Omega(n)} time, making it a reliable assumption for security levels comparable to those based on factoring or discrete logarithms. LWE has enabled a broad range of lattice-based cryptographic constructions, including public-key schemes that achieve indistinguishability under chosen-plaintext attacks, as first demonstrated by Regev's original cryptosystem. Its applications extend to key encapsulation mechanisms (KEMs), digital signatures, and fully homomorphic encryption, with variants like Ring-LWE (RLWE) and Module-LWE optimizing efficiency by operating over ring or module structures for faster computations and smaller key sizes. Notably, the NIST Post-Quantum Cryptography Standardization Process selected CRYSTALS-Kyber, a Module-LWE-based KEM (standardized as ML-KEM in FIPS 203), as a primary quantum-resistant algorithm for key establishment, underscoring LWE's role in the transition to post-quantum cryptography.

Introduction

Definition

The Learning with Errors (LWE) problem is an average-case problem over the integers modulo q, conjectured to be computationally difficult and serving as a foundational assumption in cryptography. It involves recovering a secret from a collection of noisy linear equations, where the noise is drawn from a small error distribution, making it resistant to known efficient algorithms. Formally, the search-LWE problem is parameterized by the dimension n, a modulus q > 0, an error distribution \chi over \mathbb{Z} (typically a discrete Gaussian distribution with standard deviation \sigma), and the number of samples m. The input consists of m pairs (a_i, b_i) for i = 1 to m, where each a_i is chosen uniformly at random from \mathbb{Z}_q^n, the secret s is a fixed vector in \mathbb{Z}_q^n, and b_i = \langle a_i, s \rangle + e_i \pmod{q} with e_i sampled from \chi (ensuring e_i is small relative to q). The goal is to recover the secret vector s. This setup draws motivation from the task of learning a linear function under additive noise, where one aims to learn an underlying linear relation from perturbed observations, but adapted to modular arithmetic so as to connect to the hardness of lattice problems like the shortest vector problem. In cryptography, LWE's average-case hardness enables the construction of secure primitives resistant to quantum attacks, as its difficulty is conjectured to hold even against quantum adversaries in appropriate parameter regimes. For illustration, consider a small instance with n=2, q=7, and \chi uniform over \{-1, 0, 1\}. Suppose the secret is s = (3, 5) \pmod{7}. One possible sample might be a_1 = (2, 4), e_1 = 1, yielding b_1 = \langle (2,4), (3,5) \rangle + 1 = (6 + 20) + 1 = 27 \equiv 6 \pmod{7}, so the pair is ((2,4), 6). Another sample could be a_2 = (1, 3), e_2 = -1, giving b_2 = \langle (1,3), (3,5) \rangle - 1 = (3 + 15) - 1 = 17 \equiv 3 \pmod{7}, for the pair ((1,3), 3). Recovering s from such noisy equations becomes challenging as n and m grow, even with multiple samples.
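The sampling process just described is short enough to state directly in code. The following Python sketch is illustrative only, reproducing the toy parameters n = 2, q = 7 with the error distribution uniform over {-1, 0, 1}; the function name is ours, not from any standard library.

```python
import random

def lwe_sample(s, q):
    """Return one LWE sample (a, b) with error drawn uniformly from {-1, 0, 1}."""
    n = len(s)
    a = [random.randrange(q) for _ in range(n)]          # a uniform in Z_q^n
    e = random.choice([-1, 0, 1])                        # small error from chi
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q   # b = <a, s> + e mod q
    return a, b

s = (3, 5)   # the secret from the worked example, with q = 7
for _ in range(3):
    print(lwe_sample(s, 7))
```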

Historical Development

The learning with errors (LWE) problem was introduced by Oded Regev in 2005 as a generalization of earlier parity learning problems, such as the learning parity with noise (LPN) problem, providing a new average-case hardness assumption rooted in lattice problems. This formulation aimed to bridge the gap between worst-case lattice problems—believed to be computationally difficult—and more efficient average-case problems suitable for cryptographic applications. In 2008, Gentry, Peikert, and Vaikuntanathan introduced the dual-Regev encryption scheme, which offered improved efficiency and security guarantees compared to the original construction, solidifying LWE's role as a foundation for public-key encryption. During the following decade, the field expanded significantly through Chris Peikert's contributions, including efficient worst-case to average-case reductions that enhanced the practicality of LWE for cryptography, and novel sampling techniques for generating secure error distributions. Concurrently, in 2010, Vadim Lyubashevsky, Chris Peikert, and Oded Regev introduced ring-LWE (RLWE), a structured variant that leveraged ideal lattices to achieve greater efficiency while preserving hardness, enabling applications like practical public-key encryption. LWE gained institutional recognition starting in 2016 with its inclusion in the U.S. National Institute of Standards and Technology (NIST) post-quantum cryptography standardization process, which sought quantum-resistant algorithms amid concerns over large-scale quantum computers. This culminated in 2022 when NIST selected the LWE-based CRYSTALS-Kyber key-encapsulation mechanism as one of the first standardized post-quantum algorithms, marking LWE's transition from theoretical construct to deployed standard. From 2023 to 2025, research has intensified on practical attacks against LWE, with benchmarking frameworks evaluating solvers such as uSVP embedding and BKZ-based reduction on real-world parameters from schemes such as Kyber, revealing insights into concrete security levels. Parallel efforts have focused on quantum hardness proofs, including analyses of LWE variants resistant to quantum algorithms, such as those incorporating quantum amplitudes to strengthen oblivious sampling and maintain post-quantum security.

Problem Formulations

Search LWE

The search version of the Learning with Errors (LWE) problem, denoted Search-LWE_{n,q,\chi}, requires recovering a secret vector s \in \mathbb{Z}_q^n given m samples (a_i, b_i) for i = 1, \dots, m, where each a_i is chosen uniformly at random from \mathbb{Z}_q^n, b_i = \langle a_i, s \rangle + e_i \pmod{q}, and the errors e_i are drawn independently from a distribution \chi over \mathbb{Z}_q. Typically, s is also sampled from \chi or uniformly, and \chi is a bounded or discrete Gaussian distribution centered at 0. The hardness of Search-LWE is assumed under computational limits when m is polynomial in the dimension n, the modulus q is polynomial in n, and \chi has small width relative to q, such as a discrete Gaussian with standard deviation \sigma = \alpha q where \alpha = 1/\mathrm{poly}(n). This regime ensures the errors are large enough to obscure the secret (e.g., \alpha q \geq 2\sqrt{n}) but small enough relative to q to avoid excessive wrapping around the modulus, preserving the approximate linear structure without making the samples statistically close to uniform. The problem is believed hard even for quantum computers in this setting, with connections to approximating the shortest vector problem (SVP) in q-ary lattices to within \tilde{O}(n/\alpha) factors. Basic methods for solving Search-LWE include Gaussian elimination on the linear system formed by the samples, which succeeds in O(n^3) time in the error-free case but amplifies the errors during row operations, rendering it ineffective even for small errors once n \gtrsim 100 (recovering s by guessing error-free subsets of equations instead takes exponential time). More effective approaches use lattice reduction on the q-ary lattice generated by the a_i's and the q-scaled identity, such as the Block Korkine-Zolotarev (BKZ) algorithm, which iteratively solves bounded-distance decoding or unique-SVP instances to recover an approximate s; however, practical success requires block sizes that grow with n, leading to 2^{\Omega(n)} runtime. These methods exploit the fact that short error vectors distinguish LWE samples from uniform ones. In low dimensions, the impact of errors is evident; for example, with n=4, q=17, and secret s = (0, 13, 9, 11), samples might yield perturbed equations like 14s_1 + 15s_2 + 5s_3 + 2s_4 \equiv 8 \pmod{17} (actual inner product 7, error +1), where recovering s via elimination or reduction succeeds if errors remain below q/4, but larger noise obscures the solution entirely. This illustrates the delicate balance in parameter regimes: errors too small allow exact recovery, while those too large (\sigma comparable to q) make samples indistinguishable from uniform, shifting hardness to the decision variant.
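The arithmetic in the low-dimensional example can be checked mechanically; here is a minimal Python verification of the perturbed equation above, using only the values given in the example:

```python
# Verify the n=4, q=17 sample: the noiseless inner product reduces to 7
# mod 17, and the published value b = 8 reflects an error of e = +1.
q = 17
s = (0, 13, 9, 11)
a = (14, 15, 5, 2)
inner = sum(ai * si for ai, si in zip(a, s)) % q
print(inner)            # 7: the exact inner product <a, s> mod 17
print((inner + 1) % q)  # 8: the observed b after adding the error e = +1
```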

Decision LWE

The decision variant of the Learning With Errors (LWE) problem, denoted Decision-LWE, asks to distinguish between samples drawn from the LWE distribution and samples drawn uniformly at random. Specifically, given an integer modulus q, dimension n, number of samples m, error distribution \chi over \mathbb{Z}_q, and secret s \in \mathbb{Z}_q^n, the LWE samples consist of pairs (a_i, b_i) where a_i is chosen uniformly from \mathbb{Z}_q^n and b_i = \langle a_i, s \rangle + e_i \pmod{q} with e_i \sim \chi, for i = 1, \dots, m. In contrast, uniform random samples are pairs (a_i, u_i) where a_i and u_i are chosen uniformly from \mathbb{Z}_q^n and \mathbb{Z}_q, respectively. The problem is hard when no efficient algorithm can distinguish these two distributions with non-negligible advantage. In the standard parameter regime with small \alpha \approx 1/\mathrm{poly}(n), where errors are typically Gaussian with width \alpha q, the LWE distribution is statistically far from uniform (with distance approaching 1) yet assumed computationally indistinguishable, providing the basis for cryptographic hardness: the error term prevents efficient recovery or detection without solving hard lattice problems. The hardness of Search-LWE reduces to Decision-LWE via a hybrid argument that amplifies a non-negligible distinguishing advantage into one sufficient for secret recovery. In this approach, an algorithm that distinguishes LWE samples from uniform with advantage \epsilon is invoked iteratively on transformed sample distributions (testing guesses for the secret, coordinate by coordinate) to identify discrepancies that reveal the secret s, succeeding with probability proportional to \epsilon. Conversely, Decision-LWE reduces to Search-LWE by leveraging a search oracle, for example by recovering s and verifying consistency of the samples. This establishes polynomial-time equivalence between the search and decision variants. Decision-LWE provides semantic security for encryption schemes under chosen-plaintext attacks, as the indistinguishability of LWE samples from uniform ensures that encryptions of different messages are computationally indistinguishable. As an average-case problem, its hardness holds for random instances rather than worst-case inputs, making it suitable for cryptographic applications where parameters like n, q, \alpha are chosen to balance security and efficiency. The hardness of the Learning with Errors (LWE) problem in its average case is closely tied to the worst-case hardness of standard lattice problems, such as the approximate shortest vector problem (GapSVP) and the shortest independent vectors problem (SIVP). Specifically, solving random instances of LWE is at least as hard as solving these worst-case problems to within certain approximation factors, under quantum reductions established by Regev. These connections provide a foundational basis for the cryptographic security of LWE-based schemes, as the believed intractability of worst-case lattice problems extends to the average-case setting of LWE. The hardness of LWE depends on key parameters: the dimension n, the modulus q (with \log q representing its bit size), and the noise standard deviation \sigma. As n increases, the problem becomes harder due to the higher-dimensional structure; larger \log q allows for more samples but must be balanced against the noise rate to maintain security; and larger \sigma relative to q makes the noise harder to strip away, while too small a \sigma weakens the problem. Quantum reductions preserve this hardness across parameter regimes, ensuring that an efficient solver for average-case LWE would imply efficient algorithms for worst-case lattice problems.
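The search-to-decision direction can be sketched concretely. The following Python fragment outlines the classic coordinate-guessing reduction for a small prime q, assuming a hypothetical oracle decide that accepts LWE-distributed samples and rejects uniform ones; it is a schematic of the argument, not a practical attack.

```python
import random

def recover_first_coordinate(samples, q, decide):
    """Recover s[0] by testing each guess g in Z_q (q prime, small).

    For each sample (a, b), re-randomize the first coordinate:
    a' = a + (r, 0, ..., 0), b' = b + r*g (mod q). If g equals s[0],
    the transformed pairs are still LWE-distributed; otherwise b' is
    uniform, so the decision oracle separates the two cases. Repeating
    per coordinate recovers all of s.
    """
    for g in range(q):
        transformed = []
        for a, b in samples:
            r = random.randrange(q)
            a_new = ((a[0] + r) % q,) + tuple(a[1:])
            transformed.append((a_new, (b + r * g) % q))
        if decide(transformed):   # oracle accepts only LWE-distributed inputs
            return g
    return None
```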
A related hardness assumption is the Short Integer Solution (SIS) problem, which serves as a dual to LWE and underpins lattice-based hash functions and signatures. SIS involves finding a short nonzero vector in the kernel of a random matrix modulo q, and its average-case hardness likewise reduces from worst-case lattice problems like the approximate shortest vector problem (SVP), similar to LWE. This duality allows SIS to complement LWE in cryptographic constructions, such as combining them for collision-resistant hash functions. Reductions establishing LWE's hardness rely on assumptions about the error distribution, particularly its shape and controlled growth. The errors are typically drawn from a Gaussian centered at zero with standard deviation \sigma, ensuring that the noise remains subdominant to the modulus q while allowing the probabilistic guarantees to hold without excessive accumulation. Deviations from such stable distributions can weaken the security connections, necessitating careful parameter choices to prevent noise blow-up in iterative reductions. Empirically, no efficient algorithms beyond lattice reduction techniques have been found to solve LWE instances for moderate parameters (e.g., n \approx 256, \log q \approx 12, \sigma \approx 3.2) that are suitable for practical cryptography, supporting its conjectured hardness. These lattice reduction methods, while effective for small dimensions, scale poorly and fail to break secure parameters within feasible computational resources.

Variants

Ring-LWE

Ring-LWE (Ring Learning With Errors) is an algebraic variant of the standard LWE problem, defined over a ring of polynomials. Specifically, the ring is given by R = \mathbb{Z}[x] / \langle f(x) \rangle, where f(x) is an irreducible cyclotomic polynomial of degree k, and the quotient ring modulo q is R_q = R/qR, with q a prime modulus. A Ring-LWE instance consists of samples (a_i, b_i) \in R_q \times R_q, where each a_i is chosen uniformly at random from R_q, the secret s \in R_q is fixed, and b_i = a_i \cdot s + e_i, with \cdot denoting multiplication in the ring R_q and e_i a "small" error term sampled from a discrete Gaussian distribution over the ring elements or a bounded uniform distribution in R. The parameters for Ring-LWE include the ring degree k, which determines the polynomial degree, and the modulus q, with problem dimension n = k, analogous to the dimension in plain LWE but structured across the ring components. The error e_i is typically a ring element whose coefficients are small, with standard deviation scaled to ensure the noise remains negligible compared to q while preserving hardness. This structure contrasts with plain LWE by imposing algebraic dependencies among the samples, enabling more efficient representations. Compared to plain LWE, Ring-LWE offers significant efficiency gains through its algebraic structure. It allows encoding roughly k independent LWE samples into a single ring element, resulting in compact keys and ciphertexts that reduce communication bandwidth by a factor of approximately k. Additionally, ring multiplication and other operations can be accelerated using the Number Theoretic Transform (NTT), an analog of the fast Fourier transform adapted to finite fields, achieving O(k \log k) complexity for polynomial multiplications. These advantages have made Ring-LWE a foundation for practical post-quantum protocols, such as the NewHope key exchange. The computational hardness of Ring-LWE is tied to worst-case lattice problems over ideal lattices. Solving Ring-LWE is at least as hard as approximating the shortest vector problem (SVP) and the shortest independent vectors problem (SIVP) to within polynomial factors in the worst case over ideal lattices, under a quantum reduction. This equivalence holds for cyclotomic rings and provides strong security guarantees against both classical and quantum adversaries. As an illustrative example, consider the ring R = \mathbb{Z}[x] / (x^2 + 1), so k = 2 and elements of R_q are of the form a_0 + a_1 x with coefficients in \mathbb{Z}_q. Multiplication is defined as (a_0 + a_1 x) \cdot (b_0 + b_1 x) = (a_0 b_0 - a_1 b_1) + (a_0 b_1 + a_1 b_0) x \pmod{q}, since x^2 \equiv -1. A sample might be a = 3 + 5x, secret s = 1 + 2x, error e = 0 + 1x, yielding b = a \cdot s + e = (3\cdot1 - 5\cdot2) + (3\cdot2 + 5\cdot1)x + e = (-7) + 11x + (0 + 1x) = -7 + 12x \pmod{q}, demonstrating how the ring operation mixes components while adding component-wise error.
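Since the example leaves the modulus unspecified, the following Python sketch fixes q = 17 purely for illustration and reproduces the sample computation above; ring_mul and ring_add are ad hoc helper names, not a library API.

```python
def ring_mul(a, b, q):
    """Multiply a0 + a1*x by b0 + b1*x in Z_q[x]/(x^2 + 1), where x^2 = -1."""
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 - a1 * b1) % q, (a0 * b1 + a1 * b0) % q)

def ring_add(a, b, q):
    """Componentwise addition of ring elements mod q."""
    return ((a[0] + b[0]) % q, (a[1] + b[1]) % q)

q = 17                                # illustrative choice; the example leaves q open
a, s, e = (3, 5), (1, 2), (0, 1)      # a = 3 + 5x, s = 1 + 2x, e = x
b = ring_add(ring_mul(a, s, q), e, q)
print(b)                              # (10, 12), i.e. -7 + 12x reduced mod 17
```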

Module-LWE

Module learning with errors (Module-LWE) is a variant of the learning with errors problem defined over module lattices, serving as an intermediate structure between the unstructured LWE problem and the ring-structured Ring-LWE. In this formulation, the problem operates over a module M = R_q^d, where R_q = \mathbb{Z}_q[X]/(X^n + 1) denotes a quotient ring of polynomials of degree less than n modulo a prime q, and d is the module rank. Samples consist of a matrix \mathbf{A} \in R_q^{m \times d} chosen uniformly at random and a vector \mathbf{b} = \mathbf{A} \mathbf{s} + \mathbf{e} \pmod{q} \in R_q^m, where \mathbf{s} \in R_q^d is a secret vector, \mathbf{e} is a small error vector with entries sampled from a narrow distribution (such as a centered binomial or discrete Gaussian), and m is the number of samples. The search version of Module-LWE requires recovering \mathbf{s} from such samples, while the decision version involves distinguishing these from uniformly random pairs (\mathbf{A}, \mathbf{u}). Key parameters of Module-LWE include the module rank d, which determines the dimensionality of the secret and of the matrix columns (with d = 1 recovering Ring-LWE and d = n effectively yielding plain LWE when treating polynomials componentwise), and the polynomial degree n, often fixed at values like 256 for efficiency in cryptographic applications. These parameters enable a tunable trade-off between efficiency and security: small d enhances compactness through ring arithmetic, while larger d reduces algebraic structure to improve resilience against specialized attacks on cyclotomic rings. Module-LWE underpins the security of the CRYSTALS-Kyber KEM, selected by NIST for standardization as FIPS 203 in 2024. The computational hardness of Module-LWE is established via quantum worst-case to average-case reductions from lattice problems on module lattices, including the shortest vector problem (module-SVP) and the shortest independent vectors problem (module-SIVP). Specifically, for appropriate error rates \alpha and q satisfying \alpha q > 2\sqrt{N} \cdot \omega(\sqrt{\log N}) where N = n d, solving Module-LWE is at least as hard as approximating module-SIVP to within \gamma = O(\sqrt{N / d} / \alpha) factors in the worst case. Compared to Ring-LWE, Module-LWE produces larger public keys and ciphertexts due to the higher-dimensional module but provides more conservative security margins by reducing the algebraic structure (such as ring automorphisms) available for attacks to exploit. Ring-LWE corresponds to the special case of Module-LWE with module rank d = 1.
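A minimal sketch of this sample generation follows, with toy parameters and a uniform {-1, 0, 1} error standing in for a centered binomial distribution; the negacyclic multiplication implements reduction mod X^n + 1 directly rather than via the NTT used in practice.

```python
import random

def poly_mul(f, g, n, q):
    """Negacyclic convolution: polynomial multiplication mod X^n + 1 (X^n = -1)."""
    res = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            sign = -1 if i + j >= n else 1
            res[(i + j) % n] = (res[(i + j) % n] + sign * fi * gj) % q
    return res

def poly_add(f, g, q):
    return [(x + y) % q for x, y in zip(f, g)]

def module_lwe_row(s, n, q, d):
    """One Module-LWE sample: a row of A in R_q^d and b = <a_row, s> + e."""
    a_row = [[random.randrange(q) for _ in range(n)] for _ in range(d)]
    b = [random.choice([-1, 0, 1]) % q for _ in range(n)]   # start from the error e
    for a_poly, s_poly in zip(a_row, s):
        b = poly_add(b, poly_mul(a_poly, s_poly, n, q), q)
    return a_row, b

n, q, d = 4, 97, 2        # toy sizes; Kyber-style schemes use n = 256, d = 2..4
secret = [[random.choice([-1, 0, 1]) % q for _ in range(n)] for _ in range(d)]
print(module_lwe_row(secret, n, q, d))
```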

Other Structured Variants

Other structured variants of the Learning with Errors (LWE) problem extend the algebraic framework beyond standard rings and modules, incorporating more general number-theoretic or geometric structures to achieve specific efficiency gains or novel hardness properties. These variants maintain the core LWE hardness while introducing constraints that tie security to specialized problems, often drawing from algebraic number theory or geometry. Recent work as of 2025 unifies many such variants under algebraic frameworks over number fields and orders. Learning with errors over rings of integers in general algebraic number fields generalizes the ideal lattice problems underlying Ring-LWE to arbitrary number fields, enabling constructions that support trapdoor functions. This approach allows flexible parameter choices that enhance compactness in lattice-based schemes. Its hardness reduces from worst-case problems over ideal lattices in the field, providing provable security under structured assumptions. Cyclic Algebra LWE (CLWE), proposed in 2019, defines LWE samples over non-commutative rings derived from cyclic algebras, extending the commutative structure of Ring-LWE to introduce new algebraic dependencies. This setup leverages the non-commutativity to create distinct hardness assumptions, potentially resisting certain algebraic attacks that exploit commutativity in standard variants. CLWE's structure facilitates efficient computations in non-abelian settings, broadening the applicability of LWE to emerging cryptographic protocols. Variety-LWE (VLWE), proposed in 2025, constructs LWE instances over algebraic varieties, departing from quotient rings to incorporate geometric constraints for vectorized schemes. By embedding lattices into varieties, VLWE relaxes the constraints of prior variants, enabling more efficient handling of vector spaces while preserving average-case hardness. This geometric approach supports advanced encodings that improve upon the limitations of ring-based structures in multi-dimensional settings. These variants offer enhanced efficiency through tailored algebraic operations and potential resistance to side-channel attacks via their structured error distributions, with hardness directly linked to solving short-vector problems in their respective structured lattices. Compared to Module-LWE, they explore niche geometries for specialized post-quantum applications. However, their reliance on specific algebraic structures introduces trade-offs, including vulnerabilities to targeted algebraic attacks that may not affect the more general plain LWE security.

Hardness and Security

Worst-Case to Average-Case Reductions

The worst-case to average-case reductions for the Learning with Errors (LWE) problem establish a direct connection between the average-case hardness of LWE and the worst-case hardness of standard lattice approximation problems. Originating in Regev's foundational work, these reductions provide an algorithm that solves worst-case instances of the Gap Shortest Vector Problem (GapSVP) and the Shortest Independent Vectors Problem (SIVP) given an efficient solver for average-case LWE, encompassing both search and decision variants. This framework implies that any quantum polynomial-time algorithm for LWE would yield efficient quantum approximations for these lattice problems. The core technique embeds worst-case lattice instances into LWE samples by leveraging q-ary lattices, where the modulus q defines a lattice structure that incorporates short vectors from the original lattice problem. Short basis vectors or approximate shortest vectors are mapped into the secret vector or error components of LWE samples, transforming the task of approximating lattice problems into solving or distinguishing LWE distributions. The reduction preserves efficiency under quantum computation and incurs an approximation factor of \tilde{O}(n / \alpha), where n denotes the lattice dimension and \alpha the noise parameter, while requiring the Gaussian error to have standard deviation \sigma = \alpha q \geq 2\sqrt{n}. These reductions exploit dualities inherent to LWE, including the left-right symmetry between formulations where the error is added before or after the linear transformation (e.g., e^\top A + s^\top versus A s + e), facilitated by dual q-ary lattices and the properties of discrete Gaussians. Consequently, the assumed quantum hardness of approximating worst-case problems like GapSVP and SIVP to within polynomial factors directly implies the average-case hardness of LWE, underpinning its use in cryptographic constructions resistant to quantum attacks.
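The quantitative content of the reduction can be summarized in a single statement; the following is an informal LaTeX paraphrase using the parameter conventions of this section, not the theorem's exact wording.

```latex
% Informal statement of Regev's worst-case to average-case connection,
% paraphrased with the parameters (n, q, alpha) used in this section.
\begin{theorem}[informal; Regev, 2005]
Let $n, q \in \mathbb{N}$ and $\alpha \in (0,1)$ satisfy $\alpha q \ge 2\sqrt{n}$.
If there is an efficient algorithm solving $\mathrm{LWE}_{n,q,\alpha}$ on average,
then there is an efficient \emph{quantum} algorithm approximating
$\mathrm{GapSVP}$ and $\mathrm{SIVP}$ on arbitrary $n$-dimensional lattices
to within $\tilde{O}(n/\alpha)$ factors in the worst case.
\end{theorem}
```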

Key Hardness Results

The foundational hardness result for the Learning with Errors (LWE) problem was established by Oded Regev in 2005, who demonstrated a polynomial-time quantum reduction from worst-case lattice problems in the \ell_2 norm to average-case LWE. Specifically, this reduction shows that solving LWE instances with modulus q = \mathrm{poly}(n) and error standard deviation \sigma = O(\sqrt{n}) (where n is the dimension) is at least as hard as approximating worst-case lattice problems such as GapSVP and SIVP to within \tilde{O}(n / \alpha) factors. In 2009, Chris Peikert extended these foundations with a classical reduction from the worst-case approximate shortest vector problem (GapSVP) to LWE, albeit requiring an exponentially large modulus, and provided tools for generating LWE samples efficiently using discrete Gaussian distributions over the integers; the 2013 work of Brakerski, Langlois, Peikert, Regev, and Stehlé subsequently established classical hardness for polynomial moduli. This line of work also introduced concepts supporting the left-right sampling duality, which was formalized in the context of Ring-LWE (RLWE) to show that RLWE samples can be generated equivalently from left or right multiplication, preserving hardness under ideal lattice assumptions. These refinements further showed that LWE remains hard for a broader class of error distributions, including those with sub-Gaussian tails beyond the standard Gaussian, while maintaining stability in the reduction process; such techniques tie average-case LWE directly to worst-case lattice problems without relying on quantum computation, even when errors are drawn from distributions with only bounded moments. For the Ring-LWE variant, Lyubashevsky, Peikert, and Regev in 2010 proved quantum hardness by reducing worst-case problems on ideal lattices, such as approximate shortest independent vectors (SIVP), to average-case RLWE over the ring \mathbb{Z}[x]/(x^n + 1). This result establishes that solving RLWE is as hard as \tilde{O}(n / \alpha) approximations of ideal-SVP for quantum polynomial-time algorithms, with error parameters analogous to those in the original LWE. Recent work in 2025 has also examined exotic quantum threat models, arguing that LWE resists attacks via holographic decoders that attempt to reconstruct secrets through extremal surface measurements in dual theories, supporting its continued use in post-quantum standards like those selected by NIST.

Attacks and Security Analysis

Lattice-based attacks on the Learning with Errors (LWE) problem primarily rely on lattice reduction techniques to solve instances by approximating the shortest vector problem (SVP) in associated lattices. The BKZ 2.0 algorithm, an advanced implementation of the Block Korkine-Zolotarev (BKZ) reduction, improves upon earlier methods by incorporating progressive block size increases and better enumeration strategies, enabling more efficient solving of LWE instances in dimensions up to several hundred. Progressive BKZ variants further enhance this by starting with small block sizes and incrementally increasing them, which balances runtime and reduction quality for small-scale LWE problems (n < 100), often achieving root Hermite factors close to 1.005 in practical runs. Security estimates for these attacks frequently use the core-SVP model, which conservatively bounds the cost of LWE solving by reducing it to unique-SVP hardness and simulating BKZ performance, predicting that classical attacks require at least 2^{100} operations for secure parameters. Side-channel attacks exploit implementation details in LWE variants, such as Ring-LWE (RLWE), where timing variations in the Number Theoretic Transform (NTT) can leak secret information. Non-constant-time NTT implementations, particularly those using conditional branches in modular reduction, allow attackers to recover secrets by measuring execution time differences across thousands of encryptions, reducing effective security by 20-40 bits in vulnerable RLWE schemes. Algebraic attacks on structured LWE variants, such as those over number fields, leverage unique-SVP (uSVP) solvers to exploit algebraic dependencies, estimating that uSVP instances derived from LWE samples in cyclotomic rings can be solved 10-100 times faster than general SVP using embedding techniques. Security levels for LWE-based schemes are categorized by NIST into levels 1 through 5, where levels 1, 3, and 5 correspond to the classical security of AES-128, AES-192, and AES-256 (roughly 128, 192, and 256 bits against generic attacks), with concrete parameters tuned via core-SVP estimates. For instance, the Kyber-512 (now ML-KEM-512) variant targets NIST level 1 with n=256, q=3329, and noise standard deviation \sigma = \sqrt{2}, providing approximately 128 bits of classical security and 89 bits of quantum security under BKZ simulations. Recent 2025 developments include AI/ML-based attacks on LWE, facilitated by the TAPAS datasets, which provide standardized LWE instances across dimensions n=100-600 and noise levels for training neural networks to recover secrets. These datasets enable attacks such as SALSA and Cool and Cruel, achieving up to 10% better recovery rates than classical methods on sparse-secret LWE for n=300, though they do not alter the exponential hardness of general LWE. Claims of quantum polynomial-time algorithms for LWE, such as one proposed in early 2024, have been refuted due to errors in the underlying lattice embedding and noise-handling steps, confirming that no sub-exponential quantum speedup beyond known Grover-like enhancements is known for general LWE.
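The core-SVP accounting mentioned above reduces to a one-line calculation once a block size is fixed. The sketch below uses the standard sieving exponents 0.292 (classical) and 0.265 (quantum); the example block size of 400 is illustrative, as the precise value for any scheme depends on the estimator and cost model used.

```python
def core_svp_bits(beta):
    """Log2 cost of one SVP call with block size beta in the core-SVP model:
    2^(0.292*beta) classically, 2^(0.265*beta) quantumly (sieving exponents)."""
    return 0.292 * beta, 0.265 * beta

# A block size in the range reported for Kyber-512-style parameters
# (illustrative value; real estimates come from tools like the LWE estimator).
classical, quantum = core_svp_bits(400)
print(f"classical ~2^{classical:.0f}, quantum ~2^{quantum:.0f}")
```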

Cryptographic Applications

Public-Key Encryption

The Learning with Errors (LWE) problem provides the basis for efficient public-key encryption schemes secure under chosen-plaintext attacks (IND-CPA). The foundational construction, introduced by Oded Regev in 2005, relies on the decision variant of LWE to achieve semantic security, where an adversary cannot distinguish encryptions of different messages. This scheme uses LWE samples to form the public key and incorporates random combinations of those samples to mask the message during encryption. In the key generation process, a secret \mathbf{s} \in \mathbb{Z}_q^n is sampled uniformly at random. A matrix \mathbf{A} \in \mathbb{Z}_q^{m \times n} is generated uniformly, and an error vector \mathbf{e} \in \mathbb{Z}_q^m is drawn from \chi^m. The public key is the pair (\mathbf{A}, \mathbf{b}), where \mathbf{b} = \mathbf{A s} + \mathbf{e} \pmod{q}, and the private key is \mathbf{s}. To encrypt a bit \mu \in \{0, 1\}, a random binary vector \mathbf{u} \in \{0,1\}^m is selected, and the ciphertext is (\mathbf{c}, v), with \mathbf{c} = \mathbf{A}^{\top} \mathbf{u} \pmod{q} and v = \langle \mathbf{u}, \mathbf{b} \rangle + \mu \cdot \lfloor q/2 \rfloor \pmod{q}. Decryption recovers the bit by computing v - \langle \mathbf{c}, \mathbf{s} \rangle \pmod{q}, which simplifies to \langle \mathbf{u}, \mathbf{e} \rangle + \mu \cdot \lfloor q/2 \rfloor \pmod{q}; the small error term \langle \mathbf{u}, \mathbf{e} \rangle allows rounding to determine \mu by proximity to 0 or \lfloor q/2 \rfloor. Correctness holds with overwhelming probability when the accumulated noise remains below q/4. The IND-CPA security of this scheme follows directly from the decision-LWE assumption: the ciphertext components (\mathbf{A}^{\top}\mathbf{u}, \langle \mathbf{u}, \mathbf{b} \rangle) are computationally indistinguishable from uniform random values, so the message-dependent shift by \lfloor q/2 \rfloor for \mu = 1 is masked, preventing distinguishability in a hybrid argument. For efficiency, the basic scheme incurs O(n^2 \log q) time and space costs due to matrix operations in key generation and encryption, scaling as O(n^2) in the dimension n. Enhancements via sparse secrets, such as binary or low-Hamming-weight \mathbf{s}, reduce computational overhead and parameter sizes by lowering the secret norm, enabling faster arithmetic while preserving security. Typical parameters set m \approx 2n \log q to provide sufficient LWE samples for hardness, with \chi a centered discrete Gaussian distribution whose standard deviation is chosen so that the decryption error is negligible. For example, with n = 512 and q = 2^{16}, the scheme supports 128-bit security with a public key of roughly 16 MB, balancing correctness and resistance to known attacks—highlighting the need for structured variants like Ring-LWE for more compact keys. Variants like Ring-LWE yield more compact keys and better performance for practical deployment, relying on analogous hardness in cyclotomic rings.
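The scheme fits in a few lines of Python. The toy implementation below follows the key generation, encryption, and decryption equations above; the parameters are far too small for any security, so this is a didactic sketch, not a usable cryptosystem.

```python
import random

n, m, q = 8, 32, 257   # toy parameters only; errors stay well below q/4

def keygen():
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return (A, b), s

def encrypt(pk, mu):                  # mu in {0, 1}
    A, b = pk
    u = [random.randrange(2) for _ in range(m)]               # random binary vector
    c = [sum(u[i] * A[i][j] for i in range(m)) % q for j in range(n)]  # A^T u
    v = (sum(u[i] * b[i] for i in range(m)) + mu * (q // 2)) % q
    return c, v

def decrypt(s, ct):
    c, v = ct
    w = (v - sum(c[j] * s[j] for j in range(n))) % q   # = <u, e> + mu*floor(q/2)
    return 1 if q // 4 < w < 3 * q // 4 else 0         # round to 0 or q/2

pk, sk = keygen()
assert decrypt(sk, encrypt(pk, 0)) == 0
assert decrypt(sk, encrypt(pk, 1)) == 1
```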

Key Exchange and Secure Communication

Learning with errors (LWE)-based protocols enable two parties to establish a shared secret key over an insecure channel, leveraging the hardness of the LWE problem to resist quantum attacks. These protocols typically operate as key encapsulation mechanisms (KEMs), where one party generates a public key and the other encapsulates a shared secret using it, allowing secure key agreement without direct key transmission. As a building block, they extend LWE-based public-key encryption by focusing on symmetric key derivation rather than message encryption, facilitating post-quantum migration for protocols like TLS. A basic LWE-based KEM begins with the receiver generating a public key pair: a matrix \mathbf{A} \in \mathbb{Z}_q^{m \times n} sampled uniformly or pseudorandomly, a short secret \mathbf{s}, and \mathbf{t} = \mathbf{A} \mathbf{s} + \mathbf{e}, where \mathbf{e} is a small error from a discrete Gaussian or centered binomial distribution. The sender samples a short random vector \mathbf{u} and small errors \mathbf{e}_1, e_2, and computes the encapsulation ciphertext \mathbf{v} = \mathbf{A}^{\top} \mathbf{u} + \mathbf{e}_1 together with w = \mathbf{u}^{\top} \mathbf{t} + e_2; since w = \mathbf{u}^{\top}\mathbf{A}\mathbf{s} + \mathbf{u}^{\top}\mathbf{e} + e_2 approximates the quantity \mathbf{u}^{\top}\mathbf{A}\mathbf{s}, the shared secret K is derived by rounding it, with error reconciliation to handle the noise. The receiver decapsulates by computing \mathbf{s}^{\top} \mathbf{v} = \mathbf{u}^{\top}\mathbf{A}\mathbf{s} + \mathbf{s}^{\top}\mathbf{e}_1, which differs from the sender's value only by small cross terms, and recovers K via reconciliation mechanisms like those introduced by Ding et al., which ensure the two noisy inner products round to the same value despite errors. This protocol flow supports a client-server exchange, where the server sends the public key, the client encapsulates and responds with the ciphertext, and both derive the shared key independently, with reconciliation mitigating decryption failures from noise. Ring-LWE (RLWE) variants enhance efficiency through structured polynomial rings, reducing computational and bandwidth costs via fast polynomial multiplication. The NewHope protocol, proposed by Alkim et al., instantiates an RLWE-based KEM over the cyclotomic ring \mathbb{Z}_q[x]/(x^{1024} + 1), where parties exchange ring elements analogous to the matrix-vector operations in plain LWE, employing a reconciliation mechanism to agree on a 256-bit shared key. NewHope achieves IND-CPA security under the decision RLWE assumption, with performance metrics showing encapsulation in under 1 ms on modern CPUs, making it suitable for real-world deployment like experimental TLS integrations. To attain stronger IND-CCA security, LWE-based KEMs apply the Fujisaki-Okamoto (FO) transform, which converts an IND-CPA-secure scheme into an IND-CCA-secure one by re-encrypting ciphertexts with hash functions, proven secure in the quantum random oracle model for post-quantum settings. This transform ensures negligible decryption failure rates and multi-user security, directly relying on the decision-LWE assumption. The NIST-standardized ML-KEM (formerly CRYSTALS-Kyber), based on module-LWE, generates the public matrix pseudorandomly from a short seed via an extendable-output hash function to optimize key sizes (e.g., an 800-byte public key for Kyber-512 at the 128-bit security level) and incorporates the FO transform for IND-CCA security, with encapsulation/decapsulation times around 0.1 ms on standard hardware. Kyber's client-server flow mirrors the basic KEM but adds IND-CCA protections, establishing it as the primary post-quantum KEM for standards like TLS 1.3 hybrids.
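The approximate-agreement step at the heart of these KEMs can be demonstrated directly. The Python sketch below (toy parameters, a square matrix A for simplicity) shows that the two parties' raw values differ only by the small cross terms u^T e - e_1^T s, which is exactly what reconciliation removes.

```python
import random

n, q = 16, 3329
small = lambda: random.choice([-1, 0, 1])   # short secrets and errors

A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]

# Receiver: short secret s, public value t = A s + e
s = [small() for _ in range(n)]
e = [small() for _ in range(n)]
t = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(n)]

# Sender: short secret u, ciphertext v = A^T u + e1
u = [small() for _ in range(n)]
e1 = [small() for _ in range(n)]
v = [(sum(A[i][j] * u[i] for i in range(n)) + e1[j]) % q for j in range(n)]

# Both sides approximate u^T A s; the gap is u^T e - e1^T s (small).
k_sender = sum(u[i] * t[i] for i in range(n)) % q
k_receiver = sum(v[j] * s[j] for j in range(n)) % q
diff = (k_sender - k_receiver) % q
print(min(diff, q - diff))   # small: reconciliation rounds both to one key
```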

Digital Signatures

LWE-based digital signatures typically rely on the Fiat-Shamir heuristic applied to interactive zero-knowledge proofs of knowledge of short solutions to LWE instances, where the protocol rejects candidate signatures that fall outside predefined bounds to ensure the outputs do not leak information about the secret key. This rejection mechanism, known as Fiat-Shamir with aborts, transforms the identification protocol into a non-interactive signature scheme while maintaining statistical zero-knowledge properties. The resulting signatures are proven secure under the hardness of the Short Integer Solution (SIS) problem, the dual problem associated with LWE lattices. Ring-LWE variants extend this framework to structured lattices for efficiency gains. The BLISS (Bimodal Lattice Signature Scheme) scheme, introduced in 2013, uses ring lattices and employs discrete Gaussian sampling for the secret key, combined with bimodal Gaussian distributions during signing to facilitate rejection sampling that produces signatures statistically independent of the secret key. This approach enables compact signatures with strong zero-knowledge guarantees and EUF-CMA (existentially unforgeable under chosen-message attacks) security reduced from the ring-SIS assumption in the random oracle model. Rejection sampling in BLISS ensures short signature vectors, minimizing public key and signature sizes while preserving correctness with high probability. More advanced module-LWE-based schemes emerged around 2017, with CRYSTALS-Dilithium representing a key example that applies the Fiat-Shamir with aborts paradigm to module lattices, in contrast to the trapdoor-based sampling of hash-and-sign schemes. Dilithium was selected by NIST in 2022 for standardization as ML-DSA in FIPS 204 (August 2024), providing EUF-CMA security based on the hardness of module-LWE and module-SIS problems in the quantum random oracle model. Like its predecessors, it uses rejection sampling to generate short signatures that hide the signer's secret from the verifier. For 128-bit post-quantum security, Dilithium parameters yield public keys of approximately 1.3 kB and signatures of about 2.4 kB, while ring-based variants like BLISS achieve comparable or smaller sizes. These schemes support applications such as certificates, where the decision-LWE assumption aids in ensuring the indistinguishability of valid and invalid proofs.
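The signing loop of Fiat-Shamir with aborts can be shown schematically. The Python sketch below is structural pseudocode made runnable, with the hash-derived challenge stubbed to a caller-supplied value and all bounds chosen arbitrarily; it shows only the abort check that keeps the response z = y + c·s distributed independently of the secret.

```python
import random

N, GAMMA, BETA = 16, 1 << 17, 60   # toy dimension and bounds (arbitrary)

def sign_response(s, c):
    """Retry until z = y + c*s lands inside the smaller box [-(GAMMA-BETA), GAMMA-BETA].

    In a real scheme c would be a hash of the commitment and message; here it
    is passed in directly. Because |c*s_i| <= BETA, an accepted z is uniform
    over the box regardless of s, which is the zero-knowledge property the
    abort enforces.
    """
    while True:
        y = [random.randrange(-GAMMA, GAMMA + 1) for _ in range(N)]  # masking vector
        z = [yi + c * si for yi, si in zip(y, s)]
        if all(abs(zi) <= GAMMA - BETA for zi in z):                 # abort check
            return z

s = [random.randrange(-5, 6) for _ in range(N)]   # short secret, |c*s_i| <= BETA
print(sign_response(s, 1))
```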

Advanced Constructions

One of the most significant advanced applications of the Learning with Errors (LWE) problem is in fully homomorphic encryption (FHE), which enables arbitrary computations on encrypted data without decryption. Craig Gentry's seminal 2009 construction introduced the first FHE scheme based on ideal lattices, achieving bootstrapping—a technique that refreshes ciphertexts to manage noise growth during homomorphic operations, allowing unlimited computation depth. This breakthrough transformed lattice assumptions from foundational hardness conjectures into practical enablers for privacy-preserving computation, though initial implementations suffered from high overhead due to frequent bootstrapping. Subsequent advancements optimized FHE by leveraging structured variants of LWE, such as Ring-LWE (RLWE) and Module-LWE (MLWE), to support efficient arithmetic. The Brakerski-Gentry-Vaikuntanathan (BGV) scheme, introduced in 2012, uses RLWE to construct leveled FHE without relying on bootstrapping for shallow circuits, scaling parameters polynomially with circuit depth while maintaining exact integer computations. For real-world applications involving floating-point data, the Cheon-Kim-Kim-Song (CKKS) scheme from 2017 employs RLWE (and extensions to MLWE) for approximate homomorphic arithmetic, encoding plaintexts as polynomials and controlling error through scaling factors, which is particularly suited for tasks like machine learning inference. These schemes reduce ciphertext sizes and computation times by factors of thousands compared to Gentry's original, enabling practical deployments on consumer hardware. LWE also underpins advanced zero-knowledge proofs, particularly succinct non-interactive arguments of knowledge (zk-SNARKs) for verifiable computation. Lattice-based zk-SNARKs, such as those constructed from square span programs over ideal lattices in 2018, leverage LWE hardness to produce proofs whose size is polylogarithmic in the circuit size, allowing efficient verification of complex statements like circuit satisfiability without revealing inputs. These protocols extend LWE's utility to scenarios requiring both privacy and verifiability, such as blockchain scalability, by combining arithmetic circuit representations with lattice commitments for sublinear proof generation. In multi-party computation (MPC), LWE facilitates threshold cryptography, where secrets are shared among parties to enable collaborative operations without a trusted dealer. Threshold signatures based on LWE, as proposed in 2024 constructions like Ringtail, distribute signing keys across parties using RLWE shares, allowing any qualified subset to generate valid signatures while resisting malicious adversaries through proactive security measures. Similarly, threshold protocols from LWE, such as the R-LWE-based distributed key generation and threshold decryption schemes from 2022, enable secure group key establishment by sharing LWE secrets additively, supporting applications like secure multi-party messaging with post-quantum security. Recent advances from 2024 to 2025 have integrated LWE-based FHE approaches for privacy-preserving machine learning (PPML), combining FHE with partial decryption or threshold primitives to accelerate inference on encrypted data. For instance, frameworks like FHEMaLe optimize CKKS-based FHE for edge devices, enabling encrypted computations including on genomic datasets. Surveys of these developments highlight their role in handling real-valued computations for models like convolutional neural networks, preserving privacy guarantees under LWE assumptions. Despite this progress, advanced LWE constructions face key challenges in noise management and overhead.
In FHE schemes, noise accumulated from homomorphic multiplications can exceed decryption thresholds, necessitating bootstrapping that incurs computational costs scaling as O(\lambda^3) or higher, where \lambda is the security parameter, limiting scalability for deep circuits. Ongoing research focuses on approximate bootstrapping techniques in CKKS to mitigate this, but practical deployments still require careful parameter tuning to balance performance and security.
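The additive case of this noise growth is easy to exhibit. The following Python sketch (toy parameters, with an error range of ±2 assumed for illustration) sums encryptions of zero and tracks the hidden error magnitude, which grows roughly like the square root of the number of additions; once it approached q/4, decryption of a q/2-encoded bit would fail, which is what bootstrapping prevents.

```python
import random

n, q = 8, 65537
s = [random.randrange(q) for _ in range(n)]

def encrypt_zero():
    """Encrypt the message 0 under secret s with a small error."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice(range(-2, 3))
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def noise(ct):
    """Magnitude of the hidden error term (computable only with the secret)."""
    a, b = ct
    v = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return min(v, q - v)

def add_ct(c1, c2):
    """Homomorphic addition: componentwise sum of ciphertexts (errors add too)."""
    return ([(x + y) % q for x, y in zip(c1[0], c2[0])], (c1[1] + c2[1]) % q)

acc = encrypt_zero()
for k in range(2, 10001):
    acc = add_ct(acc, encrypt_zero())
    if k in (10, 100, 1000, 10000):
        print(f"after {k} additions, noise = {noise(acc)}")  # grows ~ sqrt(k)
```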
