
Lattice problem

In mathematics and computer science, a lattice problem refers to a family of computational challenges based on the geometric structure of point lattices in \mathbb{R}^n, where a lattice L is defined as a discrete subgroup of \mathbb{R}^n generated by integer linear combinations of a basis of n linearly independent vectors, spanning the full space in the full-rank case. The most prominent lattice problems include the Shortest Vector Problem (SVP), which requires finding the shortest non-zero vector in the lattice (i.e., a v \in L \setminus \{0\} minimizing \|v\|), and the Closest Vector Problem (CVP), which involves identifying the lattice vector closest to a given target point t \in \mathbb{R}^n. These problems are NP-hard in the worst case and remain intractable even for approximation factors beyond certain thresholds, with no known polynomial-time algorithms for exact solutions in high dimensions. Lattice problems gained prominence in cryptography due to their conjectured hardness against both classical and quantum computers, serving as the foundation for lattice-based cryptography, a field that constructs secure protocols from the difficulty of solving these problems. Unlike traditional number-theoretic schemes such as RSA, which rely on integer factorization and are vulnerable to quantum attacks via Shor's algorithm, lattice-based systems derive security from worst-case hardness assumptions, meaning that solving average-case instances is as hard as solving the hardest instances of problems like SVP or CVP. This property enables robust security guarantees, with key primitives including public-key encryption, digital signatures, and advanced functionalities such as fully homomorphic encryption, all resistant to quantum adversaries. Beyond cryptography, lattice problems have applications in optimization, number theory, and algorithm design, where efficient approximations (e.g., via techniques like the Lenstra–Lenstra–Lovász algorithm) are used for tasks including integer programming and decoding error-correcting codes. Variants such as the Learning With Errors (LWE) problem, which asks to distinguish noisy random linear equations modulo an integer q from uniformly random ones, further extend their utility by providing average-case hardness reductions to worst-case lattice problems, facilitating practical implementations in post-quantum standards such as NIST's ML-KEM and ML-DSA, finalized in 2024. Ongoing research focuses on refining parameter selections and exploring ring and module variants to balance security and efficiency.

Preliminaries

Definition of Lattices

In mathematics, particularly in the geometry of numbers, a lattice L in the Euclidean space \mathbb{R}^n is defined as a discrete additive subgroup generated by integer linear combinations of k linearly independent basis vectors \mathbf{b}_1, \dots, \mathbf{b}_k \in \mathbb{R}^n, where k \leq n. Formally, L = \{ B \mathbf{m} \mid \mathbf{m} \in \mathbb{Z}^k \}, with B denoting the n \times k matrix whose columns are the basis vectors, assuming \operatorname{rank}(B) = k. This structure ensures that L is closed under addition and additive inverses, contains the origin, and has no accumulation points, making it a free abelian group of rank k. A prominent example is the integer lattice \mathbb{Z}^n, which consists of all points with integer coordinates in \mathbb{R}^n and is generated by the standard unit vectors \mathbf{e}_i = (0, \dots, 1, \dots, 0) for i = 1, \dots, n. Lattices are classified as full-rank if k = n, in which case they span the entire space \mathbb{R}^n, or non-full-rank otherwise. In cryptographic applications, q-ary lattices play a key role; these are lattices \Lambda \subseteq \mathbb{Z}^n satisfying q \mathbb{Z}^n \subseteq \Lambda \subseteq \mathbb{Z}^n for some integer q > 1, such as the set \{ \mathbf{x} \in \mathbb{Z}^n \mid \langle \mathbf{x}, \mathbf{c} \rangle \equiv 0 \pmod{q} \} for a fixed \mathbf{c} \in \mathbb{Z}^n, which exhibit periodicity modulo q. Key properties of lattices include the successive minima \lambda_i(L), introduced in the geometry of numbers, defined for i = 1, \dots, k as the infimum of all r > 0 such that the closed ball B(\mathbf{0}, r) = \{ \mathbf{y} \in \mathbb{R}^n \mid \|\mathbf{y}\| \leq r \} (in the Euclidean norm) contains at least i linearly independent vectors from L. These minima quantify the distribution of short vectors in the lattice, with \lambda_1(L) being the length of the shortest nonzero vector. The study of lattices traces its origins to the geometry of numbers, pioneered by Hermann Minkowski in his 1896 monograph Geometrie der Zahlen, which laid the groundwork for analyzing lattice points in convex bodies.
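
As a small illustration of these definitions, the sketch below (in Python, with an arbitrarily chosen two-dimensional basis) enumerates lattice points with small integer coefficients and brute-forces the first successive minimum \lambda_1; the coefficient bound is an assumption of the example, and this exhaustive approach is only meaningful in very small dimensions.

```python
import itertools
import math

# Illustrative 2-dimensional lattice; the basis values are arbitrary example choices.
B = [(3.0, 1.0), (2.0, 2.0)]  # basis vectors b_1, b_2

def lattice_point(coeffs, basis):
    """Return the lattice point sum_i m_i * b_i for an integer coefficient vector m."""
    return tuple(sum(m * b[i] for m, b in zip(coeffs, basis)) for i in range(len(basis[0])))

# Brute-force the first successive minimum lambda_1 by scanning small coefficient vectors.
bound = 5
shortest = None
for m in itertools.product(range(-bound, bound + 1), repeat=len(B)):
    if all(c == 0 for c in m):
        continue  # lambda_1 is defined over nonzero lattice vectors only
    v = lattice_point(m, B)
    norm = math.sqrt(sum(x * x for x in v))
    if shortest is None or norm < shortest[1]:
        shortest = (v, norm)

print("shortest nonzero vector found:", shortest[0], "with length", round(shortest[1], 4))
```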

Lattice Bases and Norms

Lattices in Euclidean space are typically represented using a basis, which consists of a set of linearly independent vectors \mathbf{b}_1, \dots, \mathbf{b}_k \in \mathbb{R}^m such that the lattice L is the set of all integer linear combinations: L = \{ \sum_{i=1}^k z_i \mathbf{b}_i \mid z_i \in \mathbb{Z} \}, or equivalently, L = \operatorname{span}_\mathbb{Z} \{ \mathbf{b}_1, \dots, \mathbf{b}_k \}. This representation is not unique; any lattice admits infinitely many bases, obtained by applying unimodular transformations (integer matrices with determinant \pm 1) to a given basis. When the basis spans the full ambient space, i.e., k = m and the vectors are linearly independent over \mathbb{R}, it is called a full-rank basis, and the lattice is full-dimensional. To analyze lattice structure, the Gram–Schmidt orthogonalization process is applied to a basis B = (\mathbf{b}_1, \dots, \mathbf{b}_n), producing an orthogonal family B^* = (\mathbf{b}^*_1, \dots, \mathbf{b}^*_n) and coefficients \mu_{i,j} for i > j. The vectors are computed recursively: \mathbf{b}^*_1 = \mathbf{b}_1, and for i = 2, \dots, n, \mathbf{b}^*_i = \mathbf{b}_i - \sum_{j=1}^{i-1} \mu_{i,j} \mathbf{b}^*_j, \quad \text{where} \quad \mu_{i,j} = \frac{\langle \mathbf{b}_i, \mathbf{b}^*_j \rangle}{\| \mathbf{b}^*_j \|^2}. This yields orthogonal vectors satisfying \langle \mathbf{b}^*_i, \mathbf{b}^*_j \rangle = 0 for i \neq j, with the original basis expressible as \mathbf{b}_i = \sum_{j=1}^i \mu_{i,j} \mathbf{b}^*_j (taking \mu_{i,i} = 1). The orthogonality defect of the basis measures how far it is from being orthogonal, often defined as \delta(B) = \prod_{i=1}^n \frac{\| \mathbf{b}_i \|}{\| \mathbf{b}^*_i \|}, which equals 1 exactly for an orthogonal basis and is crucial for assessing basis quality. Vector norms provide metrics for distances in lattice problems, with the Euclidean norm \| \mathbf{x} \|_2 = \sqrt{ \sum_{i=1}^m x_i^2 } being the most commonly used due to its geometric interpretability and prevalence in hardness assumptions. Other \ell_p-norms include the infinity norm \| \mathbf{x} \|_\infty = \max_i |x_i|, which bounds the maximum coordinate, and the \ell_1-norm \| \mathbf{x} \|_1 = \sum_{i=1}^m |x_i|, though lattice cryptography and optimization primarily rely on the \ell_2-norm for defining shortest or closest vectors. General \ell_p-norms are \| \mathbf{x} \|_p = \left( \sum_{i=1}^m |x_i|^p \right)^{1/p} for 1 \leq p < \infty, with reductions showing equivalence between problems in different norms up to polynomial factors. The dual lattice L^* of a lattice L \subseteq \mathbb{R}^n is defined as L^* = \{ \mathbf{y} \in \operatorname{span}_\mathbb{R}(L) \mid \langle \mathbf{y}, \mathbf{x} \rangle \in \mathbb{Z} \ \forall \mathbf{x} \in L \}, consisting of all vectors in the span of L whose inner products with lattice points are integers. If B is a basis matrix for a full-rank lattice L (with columns as basis vectors), the dual basis matrix is D = B^{-T}, the inverse of the transpose, ensuring that L^* is generated by the columns \mathbf{d}_1, \dots, \mathbf{d}_n of D and that \det(L^*) = 1 / \det(L). This construction preserves the lattice structure and facilitates analysis of properties like successive minima through transference theorems.
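
The Gram–Schmidt computation described above can be written out directly. The following minimal Python sketch (treating the rows of B as the basis vectors, with an illustrative 2×2 basis) returns the orthogonalized vectors and the \mu coefficients, and checks the standard fact that, for a full-rank basis, the product of the Gram–Schmidt lengths equals the lattice determinant.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B (each row a basis vector).
    Returns the orthogonal vectors B* and the coefficients mu[i][j] = <b_i, b*_j>/||b*_j||^2."""
    B = np.array(B, dtype=float)
    n = B.shape[0]
    Bstar = np.zeros_like(B)
    mu = np.zeros((n, n))
    for i in range(n):
        Bstar[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
            Bstar[i] -= mu[i, j] * Bstar[j]
    return Bstar, mu

# Illustrative basis; the product of the ||b*_i|| equals |det(B)|, the lattice determinant.
B = [[3.0, 1.0], [2.0, 2.0]]
Bstar, mu = gram_schmidt(B)
print("orthogonalized vectors:\n", Bstar)
print("det(L) =", abs(np.linalg.det(np.array(B))),
      "= product of ||b*_i|| =", np.prod(np.linalg.norm(Bstar, axis=1)))
```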

Shortest Vector Problem (SVP)

Formulation

The Shortest Vector Problem (SVP) is a fundamental computational problem in the geometry of numbers, concerned with identifying the minimal length of a nonzero vector within a given lattice. Formally, given a basis B = \{\mathbf{b}_1, \dots, \mathbf{b}_n\} consisting of n linearly independent vectors in \mathbb{R}^m (with m \geq n), which generates a lattice L(B) = \{ B \mathbf{z} \mid \mathbf{z} \in \mathbb{Z}^n \}, the search version of SVP requires finding a nonzero vector \mathbf{v} \in L(B) that minimizes the Euclidean norm \|\mathbf{v}\|_2 = \sqrt{\sum_{i=1}^m v_i^2}. This norm is the most commonly used, though SVP can be defined with respect to any vector norm, such as the \ell_p-norm for p \in [1, \infty]. Geometrically, SVP seeks the radius of the smallest sphere centered at the origin that contains a nonzero lattice point, corresponding to the first successive minimum \lambda_1(L) = \min_{\mathbf{0} \neq \mathbf{v} \in L} \|\mathbf{v}\|_2. The decision version of SVP, which is often studied for its connections to complexity theory, asks whether there exists a nonzero \mathbf{v} \in L(B) such that \|\mathbf{v}\|_2 \leq r, given the basis B and a challenge value r > 0. Equivalently, it distinguishes cases where \lambda_1(L(B)) \leq r from those where \lambda_1(L(B)) > r. In practice, exact SVP is intractable for high dimensions, leading to the study of approximate variants. The approximation version SVP_\gamma, parameterized by an approximation factor \gamma \geq 1, requires finding a nonzero \mathbf{v} \in L(B) such that \|\mathbf{v}\|_2 \leq \gamma \cdot \lambda_1(L(B)). For constant \gamma > 1, this remains hard, while polynomial-time algorithms exist for \gamma = 2^{O(n)} via lattice reduction techniques, though exact solutions are only feasible up to dimension around 100 in the Euclidean norm. These formulations underpin the hardness assumptions in lattice-based cryptography and complexity theory.
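
To see why the search version is nontrivial even though the basis is given explicitly, the following Python sketch uses a deliberately skewed two-dimensional basis (values chosen for illustration): both basis vectors have length above 140, yet exhaustive search over small coefficient vectors finds a lattice vector of length 1. Brute-force search of this kind scales exponentially with the dimension and only stands in for the algorithms discussed below.

```python
import itertools, math

# Illustrative "bad" basis: both basis vectors are long, yet the lattice they generate
# contains a much shorter nonzero vector (this basis is unimodular, so the lattice is Z^2).
b1 = (101, 100)
b2 = (100, 99)

def norm(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

# Exhaustive search over small integer coefficients (z1, z2) for the search version of SVP.
best = None
for z1, z2 in itertools.product(range(-105, 106), repeat=2):
    if z1 == 0 and z2 == 0:
        continue
    v = (z1 * b1[0] + z2 * b2[0], z1 * b1[1] + z2 * b2[1])
    if best is None or norm(v) < norm(best):
        best = v

print("basis vector lengths:", round(norm(b1), 2), round(norm(b2), 2))
print("shortest lattice vector found:", best, "with length", norm(best))  # a vector of length 1.0
```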

Hardness Results

The Shortest Vector Problem (SVP) is NP-hard in the worst case, even for the exact version in the \ell_2 norm. This was first shown by Ajtai in 1998 under randomized reductions. Subsequent work by Khot (2005) and by Haviv and Regev (2007) extended this to show that approximating SVP to within any constant factor, and even to within almost-polynomial factors 2^{(\log n)^{1-\epsilon}}, is NP-hard under randomized reductions. Deterministic hardness results are known under additional number-theoretic assumptions, such as conjectures on the distribution of smooth numbers, establishing NP-hardness without randomization. SVP's hardness extends to approximation versions like GapSVP_\gamma, where the task is to distinguish \lambda_1 \leq 1 from \lambda_1 > \gamma; for constant \gamma > 1, GapSVP_\gamma is NP-hard under randomized reductions. There are also known reductions showing that SVP is at most as hard as CVP, via polynomial-time transformations that preserve the dimension and approximation factor, such as the Goldreich–Micciancio–Safra–Seifert construction. Conversely, CVP reduces to SVP in some settings (for example via Kannan's embedding), but detailed equivalences are explored in the context of norm-specific hardness. No polynomial-time quantum algorithm is known for SVP, making it a candidate for post-quantum hardness. Quantum speedups, such as Grover's algorithm applied to sieving, yield only quadratic improvements, so the best known quantum algorithms still run in time exponential in the dimension n. This resistance stems from the geometric nature of lattice problems, unlike the algebraic problems solvable by Shor's algorithm.

Algorithms

The Lenstra–Lenstra–Lovász (LLL) algorithm provides a polynomial-time approximation for the shortest vector problem (SVP), achieving an approximation factor of \gamma \approx 2^{n/2} in the Euclidean norm, where n is the lattice dimension. This seminal method performs lattice basis reduction by iteratively applying Gram–Schmidt orthogonalization and size-reduction steps to produce a reduced basis whose shortest vector approximates the lattice minimum distance within the specified factor. While efficient for low dimensions, LLL's exponential approximation deteriorates rapidly with increasing n, limiting its utility for high-precision tasks.

Block Korkine–Zolotarev (BKZ) reduction extends LLL by incorporating calls to an exact or approximate SVP oracle on projected sublattices (blocks) of tunable size \beta, yielding better approximations at the cost of running time exponential in \beta. For moderate \beta, BKZ achieves practical approximations in cryptographic dimensions (e.g., n \approx 400) with root-Hermite factors around 1.02–1.05, corresponding to \gamma \approx \beta^{O(n/\beta)}. In the theoretical regime, full BKZ (with \beta = n) solves SVP essentially exactly but requires superexponential time with enumeration-based oracles; using approximate SVP oracles enables solving \gamma-SVP for constant \gamma in 2^{O(n)} time via sieving techniques integrated into the reduction process.

Advanced exponential-time algorithms improve upon basic enumeration, such as Kannan's 1983 method, which solves exact SVP in n^{O(n)} time by recursively enumerating lattice points in successively reduced subspaces. Micciancio and Voulgaris (2010) introduced a deterministic 2^{O(n)}-time (and space) algorithm that solves exact SVP, CVP, and many related problems by computing the Voronoi cell of the lattice; this approach also applies to unique-SVP variants common in cryptography and marked a breakthrough in derandomizing prior sieving techniques, which relied on heuristic random sampling of short vectors.

SVP can also be reduced to polynomially many instances of the closest vector problem (CVP): in the Goldreich–Micciancio–Safra–Seifert reduction, each basis vector \mathbf{b}_i is in turn replaced by 2\mathbf{b}_i, a CVP oracle is queried with target \mathbf{b}_i in the resulting sublattice, and the shortest of the returned difference vectors is a nonzero shortest lattice vector. This reduction preserves approximation factors up to constants and, combined with CVP procedures such as Babai's nearest plane method on reduced bases, is particularly effective for unique-SVP instances, where the shortest vector is isolated.

Recent advancements in 2024 focus on progressive BKZ variants, which incrementally increase block sizes during reduction to balance runtime and quality, achieving improved approximation factors (e.g., root-Hermite \delta \approx 1.003, corresponding to \gamma polynomial in n) in high-dimensional cryptographic settings like n = 1000 for learning with errors (LWE) instances. These include pump-and-jump strategies that dynamically adjust enumeration pruning and sieving within BKZ tours, reducing core runtime by up to 50% compared to fixed-block BKZ while maintaining provable progress toward optimal reduction. Such optimizations are critical for estimating security in lattice-based schemes, where they enable solving approximate SVP instances that challenge proposed parameters.
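
For concreteness, the following is a compact, textbook-style Python sketch of LLL reduction (rows as basis vectors, reduction parameter \delta = 0.75). It recomputes the Gram–Schmidt data naively after every change for clarity, so it illustrates the size-reduction and swap logic rather than serving as an efficient or numerically robust implementation.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Textbook LLL reduction of the rows of B (each row a lattice basis vector).
    Educational sketch: Gram-Schmidt data is recomputed naively after every update."""
    B = np.array(B, dtype=float)
    n = B.shape[0]

    def gso(B):
        Bstar = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Bstar[i] = B[i]
            for j in range(i):
                mu[i, j] = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
                Bstar[i] -= mu[i, j] * Bstar[j]
        return Bstar, mu

    Bstar, mu = gso(B)
    k = 1
    while k < n:
        # Size reduction: make |mu[k][j]| <= 1/2 for all j < k.
        for j in range(k - 1, -1, -1):
            q = round(mu[k, j])
            if q != 0:
                B[k] -= q * B[j]
                Bstar, mu = gso(B)
        # Lovász condition on consecutive Gram-Schmidt lengths.
        if np.dot(Bstar[k], Bstar[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bstar[k - 1], Bstar[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]  # swap the two vectors and step back
            Bstar, mu = gso(B)
            k = max(k - 1, 1)
    return B

# The skewed unimodular basis from the SVP example reduces to short, nearly orthogonal vectors.
print(lll_reduce([[101, 100], [100, 99]]))
```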

Gap Shortest Vector Problem (GapSVP)

Definition

The Gap Shortest Vector Problem, denoted \gamma-GapSVP for an approximation factor \gamma \geq 1, is a decision version of the Shortest Vector Problem (SVP). Given a basis B for an n-dimensional lattice L \subseteq \mathbb{R}^n and a distance d > 0, the task is to decide whether the length of the shortest nonzero vector satisfies \lambda_1(L) \leq d (YES instance) or \lambda_1(L) > \gamma \cdot d (NO instance), where \|\cdot\| typically denotes the Euclidean (\ell_2) norm unless specified otherwise. This problem captures the hardness of approximating SVP within factor \gamma, building on the exact SVP from the previous section. The optimization version of approximate SVP seeks a nonzero lattice vector v with \|v\| \leq \gamma \cdot \lambda_1(L), but GapSVP focuses on the decisional gap, which is often used in complexity theory and cryptography due to easier reductions.

Hardness and Decision Versions

The hardness of the Gap Shortest Vector Problem (GapSVP_\gamma) has been extensively studied in computational complexity theory. In the \ell_\infty norm, the problem was shown NP-hard by van Emde Boas in 1981. In the Euclidean (\ell_2) norm, Ajtai proved in 1998 that exact SVP, and GapSVP for factors extremely close to 1, is NP-hard under randomized reductions, marking a foundational result; subsequent work extended the hardness to constant and almost-polynomial factors, while for \gamma \geq \sqrt{n / \log n} the problem lies in coAM, making NP-hardness for polynomial factors unlikely. Ajtai's earlier 1996 work connected worst-case hardness to average-case hardness through random self-reducibility techniques, showing that solving average-case instances over random lattices is as hard as solving hard worst-case instances. For decision versions of GapSVP, search-to-decision reductions exist that preserve the dimension and approximation factor up to small losses. Specifically, for \gamma \leq 1 + O(\log n / n), the search version (approximate SVP) reduces to the decision version (GapSVP_\gamma), establishing their equivalence in this regime. This equivalence holds under both classical and quantum computations, facilitating the use of decision problems in cryptography. No known quantum algorithm solves GapSVP_\gamma for polynomial factors significantly faster than classical algorithms, supporting quantum-resistant cryptographic constructions. Recent advances have strengthened hardness results for smaller \gamma. Using sampling and density arguments for lattice distributions in the style of Regev's 2004 techniques, simpler proofs have shown NP-hardness of GapSVP_\gamma for constant factors \gamma up to nearly \sqrt{2} under randomized reductions, avoiding complex number-theoretic constructions. As of 2025, no subexponential-time algorithms are known for GapSVP_\gamma with \gamma = \mathrm{poly}(n), underscoring its foundational difficulty. These hardness results underpin key lattice-based cryptographic assumptions, such as the Short Integer Solution (SIS) problem, where average-case SIS hardness follows from worst-case GapSVP_\gamma for \gamma = \mathrm{poly}(n) via Micciancio–Regev reductions.

Closest Vector Problem (CVP)

Formulation

The Closest Vector Problem (CVP) is a fundamental computational problem in the geometry of numbers, concerned with identifying the lattice vector closest to a given target point. Formally, given a basis B = \{\mathbf{b}_1, \dots, \mathbf{b}_n\} consisting of n linearly independent vectors in \mathbb{R}^m (with m \geq n), which generates a lattice L(B) = \{ B \mathbf{z} \mid \mathbf{z} \in \mathbb{Z}^n \}, and a target \mathbf{t} \in \mathbb{R}^m, the search version of CVP requires finding a lattice vector \mathbf{v} \in L(B) that minimizes the Euclidean norm \|\mathbf{v} - \mathbf{t}\|_2 = \sqrt{\sum_{i=1}^m (v_i - t_i)^2}. This norm is the most commonly used, though CVP can be defined with respect to any vector norm, such as the \ell_p-norm for p \in [1, \infty]. Geometrically, CVP seeks the lattice point whose Voronoi cell contains the target, corresponding to the minimal distance from \mathbf{t} to the lattice, denoted \mathrm{dist}(\mathbf{t}, L) = \min_{\mathbf{v} \in L} \|\mathbf{v} - \mathbf{t}\|_2. The decision version of CVP asks whether there exists a lattice vector \mathbf{v} \in L(B) such that \|\mathbf{v} - \mathbf{t}\|_2 \leq r, given the basis B, target \mathbf{t}, and a challenge value r > 0. Equivalently, it distinguishes cases where \mathrm{dist}(\mathbf{t}, L(B)) \leq r from those where \mathrm{dist}(\mathbf{t}, L(B)) > r. For full-rank lattices (m = n), the worst-case distance is bounded by the covering radius \rho(L) = \max_{\mathbf{t} \in \mathbb{R}^n} \mathrm{dist}(\mathbf{t}, L). In practice, exact CVP is intractable for high dimensions, leading to the study of approximate variants. The approximation version CVP_\gamma, parameterized by an approximation factor \gamma \geq 1, requires finding a lattice vector \mathbf{v} \in L(B) such that \|\mathbf{v} - \mathbf{t}\|_2 \leq \gamma \cdot \mathrm{dist}(\mathbf{t}, L(B)). For constant \gamma > 1, this remains hard, while polynomial-time algorithms exist for \gamma = 2^{O(n)} via lattice reduction techniques, though exact solutions are only feasible up to dimension around 50–100 depending on the norm and instance. These formulations underpin hardness assumptions in lattice-based cryptography, particularly for problems like Bounded Distance Decoding (BDD).
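
The search formulation can be made concrete by minimizing \|B\mathbf{z} - \mathbf{t}\| over small integer coefficient vectors. The Python sketch below does this for an illustrative two-dimensional instance (basis, target, and coefficient bound are arbitrary choices) and, like the earlier SVP example, does not scale beyond toy dimensions.

```python
import itertools, math

# Illustrative 2-dimensional CVP instance (values chosen only for the example).
B = [(3.0, 1.0), (2.0, 2.0)]   # basis vectors b_1, b_2
t = (4.3, 2.1)                 # target point

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Exhaustive search over small coefficient vectors z = (z1, z2): minimize ||B z - t||.
best = None
for z1, z2 in itertools.product(range(-10, 11), repeat=2):
    v = (z1 * B[0][0] + z2 * B[1][0], z1 * B[0][1] + z2 * B[1][1])
    if best is None or dist(v, t) < dist(best, t):
        best = v

print("closest lattice vector:", best, "at distance", round(dist(best, t), 4))
```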

Relationship to SVP

The Closest Vector Problem (CVP) is closely related to the Shortest Vector Problem (SVP), with CVP serving as an inhomogeneous generalization: SVP corresponds to the special case of target \mathbf{t} = \mathbf{0}, except that a nonzero vector is required (to avoid the trivial zero solution). Both problems are foundational in the geometry of numbers and in lattice-based cryptography, but CVP introduces a target-dependent challenge that amplifies hardness. A key relationship is the polynomial-time, dimension- and approximation-preserving reduction from \gamma-SVP to \gamma-CVP, introduced by Goldreich, Micciancio, Safra, and Seifert in 1999. This reduction constructs a series of CVP instances from an SVP instance by replacing each basis vector \mathbf{b}_i in turn with 2\mathbf{b}_i and querying the CVP oracle with target \mathbf{b}_i, allowing an approximate CVP oracle to recover an approximate shortest vector while the zero vector is excluded by construction. The reduction implies that CVP is at least as hard as SVP: any efficient algorithm for approximate CVP would yield one for approximate SVP with comparable factors. Conversely, reductions from CVP to SVP exist but typically incur losses in dimension or approximation factor; for example, Kannan's embedding technique reduces CVP to SVP in dimension one higher by augmenting the basis with an extra vector incorporating the target, though this increases the search space. In the worst case, CVP and SVP are closely related up to polynomial factors for approximations \gamma = O(\sqrt{n}), leveraging transference theorems such as Banaszczyk's, which relate the successive minima of a lattice to those of its dual, ensuring hardness transfers with polynomial losses. For instance, there are efficient reductions relating the hardness of \gamma-CVP and \mathrm{poly}(n) \cdot \gamma-SVP in both directions in certain norms and parameter regimes. A geometric foundation for these connections is provided by the structure of Voronoi cells and successive minima. For an n-dimensional lattice L, the covering radius \rho(L) satisfies \rho(L) \leq \frac{\sqrt{n}}{2} \cdot \lambda_n(L), where \lambda_n(L) is the last successive minimum, bounding how far targets can be from lattice points relative to short vectors. This ensures that short, linearly independent lattice vectors (from SVP or SIVP approximations) facilitate CVP solutions, though computational challenges persist.

Hardness Results

The Closest Vector Problem (CVP) is NP-hard, even in the exact formulation, as established by van Emde Boas in 1981; simple proofs proceed by reduction from the subset sum problem. This result holds in the \ell_p norm for any 1 \leq p \leq \infty. Approximating CVP within a constant factor \gamma = O(1) is also NP-hard, following from direct inapproximability proofs and the fact that exact CVP hardness extends to small constant approximations under standard complexity assumptions. Since there exists a polynomial-time, dimension- and approximation-preserving reduction from the Shortest Vector Problem (SVP) to CVP, CVP is at least as hard as SVP. This reduction, introduced by Goldreich, Micciancio, Safra, and Seifert in 1999, involves constructing multiple CVP instances from an SVP instance to identify short vectors while avoiding trivial solutions. Additionally, the unique-SVP—where the shortest nonzero vector is unique up to sign—is equivalent, up to polynomial approximation factors, to Bounded Distance Decoding (a promise variant of CVP) and to GapSVP, as shown by Lyubashevsky and Micciancio in 2009; this equivalence highlights the role of CVP-type problems in amplifying SVP's target-dependent challenges. No polynomial-time quantum algorithm is known to solve CVP, distinguishing it from problems like integer factorization where Shor's algorithm applies efficiently; Shor's period-finding technique does not apply to the geometric structure of lattice problems. Quantum algorithms for CVP, such as adaptations of sieving methods, achieve only modest speedups and remain exponential in the lattice dimension n, preserving the problem's quantum hardness. Recent work has explored hardness connections between CVP and SVP across norms, showing that CVP hardness in the \ell_p norm can imply SVP hardness in the dual \ell_q norm (where 1/p + 1/q = 1) for p \geq 2, via deterministic reductions that preserve approximation factors up to polylogarithmic terms. These results, building on norm-specific equivalences, underscore CVP's foundational difficulty relative to SVP in varied geometric settings.

Algorithms

The Lenstra–Lenstra–Lovász (LLL) algorithm provides a polynomial-time method to approximate CVP indirectly by first reducing the basis, achieving an approximation factor of \gamma \approx 2^{n/2} in the Euclidean norm when combined with decoding procedures, where n is the lattice dimension. LLL performs basis reduction via Gram–Schmidt orthogonalization and size-reduction steps to produce a "good" basis whose vectors are nearly orthogonal, enabling subsequent approximation algorithms to perform well. While efficient, LLL's exponential approximation factor limits precision in high dimensions.

A standard polynomial-time procedure for CVP is Babai's nearest plane algorithm (1986), which uses an LLL-reduced basis and greedily selects integer coefficients by projecting the target onto the successive Gram–Schmidt vectors, starting from the last one. This yields an approximate solution within a factor of 2^{n/2} of optimal in the Euclidean norm, with runtime polynomial in n after reduction. For better approximations, block Korkine–Zolotarev (BKZ) reduction extends LLL by solving exact or approximate SVP on projected blocks of size \beta, improving CVP performance; practical BKZ with \beta \approx 20–50 achieves root-Hermite factors of 1.02–1.05 for cryptographic dimensions (n \approx 400), corresponding to \gamma \approx n^{c} for small c. Full BKZ-style reduction combined with decoding can solve CVP essentially exactly but takes superexponential time and is impractical; sieving-based variants enable 2^{O(n)}-time algorithms for constant-factor CVP.

For exact CVP, Kannan's 1983 enumeration recursively searches lattice cosets by enumerating points in reduced subspaces, achieving n^{O(n)} time with polynomial space, feasible up to roughly n \approx 50. Micciancio and Voulgaris (2010) gave a deterministic 2^{O(n)}-time (and space) algorithm for exact CVP using Voronoi cell computations, derandomizing earlier sieving approaches; this extends to BDD variants critical for cryptanalysis. Kannan's embedding technique can reduce certain CVP instances to SVP by embedding the target into an augmented lattice of dimension n+1 and solving the higher-dimensional SVP to recover a close lattice vector; it preserves approximation factors up to constants and is effective for unique or bounded-distance cases.

Recent 2024–2025 advancements include optimized progressive BKZ for CVP in LWE-based schemes, achieving \delta \approx 1.003 (corresponding to \gamma polynomial in n) for n = 1000, using dynamic enumeration pruning and sieving to cut runtime by roughly 50% while estimating post-quantum security. As of November 2025, these optimizations aid in attacking weak parameters but affirm CVP's hardness for secure parameter choices.
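
Babai's nearest plane procedure is short enough to sketch directly. The Python version below (rows as basis vectors; the basis and target are illustrative) walks from the last Gram–Schmidt direction back to the first, rounding one coefficient at a time, and returns the lattice vector it reaches; the quality of the answer depends heavily on how well reduced the input basis is.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt vectors of the rows of B."""
    Bstar = np.zeros_like(B, dtype=float)
    for i in range(B.shape[0]):
        Bstar[i] = B[i]
        for j in range(i):
            Bstar[i] -= (np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])) * Bstar[j]
    return Bstar

def babai_nearest_plane(B, t):
    """Babai's nearest plane algorithm: approximate CVP for target t, given a (preferably
    LLL-reduced) basis whose rows are the basis vectors. Educational sketch only."""
    B = np.array(B, dtype=float)
    Bstar = gram_schmidt(B)
    b = np.array(t, dtype=float)
    for j in range(B.shape[0] - 1, -1, -1):
        # Round the component of the remaining error along the j-th Gram-Schmidt direction.
        c = round(np.dot(b, Bstar[j]) / np.dot(Bstar[j], Bstar[j]))
        b -= c * B[j]
    return np.array(t, dtype=float) - b  # the lattice vector found

# Example with an illustrative, already fairly orthogonal basis.
B = [[1.0, 1.0], [1.0, -2.0]]
t = [4.3, 2.1]
print(babai_nearest_plane(B, t))
```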

Gap Closest Vector Problem (GapCVP)

Definition

The Gap Closest Vector Problem (GapCVP), parameterized by an approximation factor \gamma \geq 1, is a decision version of the Closest Vector Problem (CVP). Given a basis B for a lattice L \subseteq \mathbb{R}^m of rank n and a target vector t \in \mathbb{R}^m, the \gamma-GapCVP asks to distinguish between two cases: (1) there exists a lattice vector v \in L such that \|v - t\| \leq 1, or (2) \|v - t\| > \gamma for all v \in L, where \|\cdot\| denotes the Euclidean norm. The input is typically an m \times n integer matrix representing the basis together with the target t, often normalized so that the decision threshold is 1. The value \mu(L, t) denotes the minimal distance \min_{v \in L} \|v - t\|. The approximate search version, \gamma-CVP, requires finding a vector v \in L such that \|v - t\| \leq \gamma \cdot \mu(L, t). The decision version of GapCVP is NP-hard for any constant \gamma \geq 1, which extends to the optimization problem and makes exact or constant-factor approximations intractable in the worst case for high dimensions. This hardness holds even in the \ell_2 norm, and standard reductions relate the decision and search versions up to small losses in the approximation factor. GapCVP shares similarities with GapSVP but focuses on proximity to a target rather than the origin.

Known Approximation Factors

Polynomial-time algorithms for approximating GapCVP achieve relatively poor factors. The Lenstra–Lenstra–Lovász (LLL) lattice basis reduction algorithm, when combined with Babai's nearest plane method, solves GapCVP to within an approximation factor of 2^{n/2} in polynomial time. More advanced lattice reduction techniques, such as the Block Korkine–Zolotarev (BKZ) algorithm with block size \beta, improve the approximation to roughly \beta^{O(n/\beta)} at the cost of running time exponential in the block size, \mathrm{poly}(n) \cdot 2^{O(\beta)}. Hardness results establish that no polynomial-time algorithm exists for small constant approximation factors unless P = NP. In particular, Bennett, Golovnev, and Stephens-Davidowitz established fine-grained hardness for CVP in \ell_p norms, ruling out 2^{(1-\epsilon)n}-time algorithms (under the Strong Exponential Time Hypothesis) for exact and near-exact CVP for all p that are not even integers, while NP-hardness of GapCVP is known for all constant and even almost-polynomial factors. More recent work refines the constant-factor hardness thresholds for small \gamma under stronger assumptions such as Gap-ETH. Heuristic approaches, such as sampling-based methods leveraging Kannan's embedding technique or probabilistic sampling from the lattice Voronoi cell, achieve constant approximation factors \gamma = O(1) in practice, particularly for cryptographic lattice dimensions (n \approx 100–500). These methods perform well beyond polynomial-time guarantees but lack rigorous worst-case analysis. Open questions persist regarding the precise threshold separating efficient solvability from hardness. Heuristic algorithms can attain \gamma \approx 1.1 for moderate dimensions, while average-case hardness assumptions (e.g., from LWE) suggest intractability even for \gamma = 1.0001 in high dimensions. Recent advances as of 2025, including variants of the nearest-colattice algorithm, explore subexponential-time approximations for large \gamma, but no known speedup improves the constant-factor thresholds achievable by classical enumeration and sieving.

Shortest Independent Vectors Problem (SIVP)

Formulation

The Shortest Independent Vectors Problem (SIVP) is a fundamental computational problem in the geometry of numbers, concerned with identifying a set of n linearly independent short vectors within a given lattice, where n is the lattice rank. Formally, given a basis B = \{\mathbf{b}_1, \dots, \mathbf{b}_n\} consisting of n linearly independent vectors in \mathbb{R}^m (with m \geq n), which generates a lattice L(B) = \{ B \mathbf{z} \mid \mathbf{z} \in \mathbb{Z}^n \}, the exact search version of SIVP requires finding n linearly independent vectors \mathbf{v}_1, \dots, \mathbf{v}_n \in L(B) such that \|\mathbf{v}_i\|_2 \leq \lambda_n(L(B)) for all i = 1, \dots, n, where \lambda_n(L) is the n-th successive minimum of the lattice (the smallest r such that there exist n linearly independent vectors in L of length at most r), and the Euclidean norm \|\mathbf{v}\|_2 = \sqrt{\sum_{i=1}^m v_i^2} is most commonly used, though other norms like \ell_p-norms can be considered. Geometrically, SIVP seeks the radius of the smallest sphere centered at the origin that contains n linearly independent lattice points, corresponding to the last successive minimum \lambda_n(L). The decision version of SIVP, known as GapSIVP_\gamma, asks whether \lambda_n(L(B)) \leq r or \lambda_n(L(B)) > \gamma r, given the basis B and a challenge value r > 0. In practice, exact SIVP is intractable for high dimensions, leading to the study of approximate variants. The approximation version SIVP_\gamma, parameterized by an approximation factor \gamma \geq 1, requires finding n linearly independent vectors \mathbf{v}_i \in L(B) such that \|\mathbf{v}_i\|_2 \leq \gamma \cdot \lambda_n(L(B)). For constant \gamma > 1, this remains hard, while polynomial-time algorithms exist for \gamma = 2^{O(n)} via lattice reduction techniques, though exact solutions are only feasible in low dimensions. These formulations underpin hardness assumptions in lattice-based cryptography and worst-case to average-case reductions.

Relationship to SVP

The Shortest Independent Vectors Problem (SIVP) is intimately connected to the Shortest Vector Problem (SVP), with both problems serving as foundational hardness assumptions in lattice-based cryptography and the geometry of numbers. A key relationship is established through efficient reductions that link their approximate versions. Specifically, there is a randomized reduction from \gamma-SIVP to \gamma^{O(1)}-SVP, allowing an oracle for approximate SVP to solve approximate SIVP with only a polynomial loss in the approximation factor. This reduction relies on techniques such as randomized sampling and self-reduction to generate candidate short independent vectors from multiple SVP oracle calls. The converse direction—from SVP to SIVP—can be achieved deterministically in polynomial time using covering arguments on subspaces. These arguments exploit the structure of lattice bases to construct a set of linearly independent short vectors by covering the space with balls centered at short vectors found via SVP oracles, ensuring the resulting set spans the space while bounding the vectors' lengths. Such reductions preserve the dimension and rank, showing that solving one problem implies the ability to solve the other with comparable approximation factors. In the worst case, SIVP and SVP are equivalent up to polynomial factors when the parameter \gamma is polynomial in the dimension n. This follows from dimension-preserving reductions that leverage transference theorems, such as Banaszczyk's result relating the successive minima of a lattice and its dual, implying that hardness for one problem translates to the other with only a polynomial blowup in the approximation factor. For instance, there is an efficient dimension-preserving reduction from (\sqrt{n} \cdot \gamma)-SIVP to \gamma-SVP for any \gamma \geq 1. Thus, hardness of \gamma-SVP for \gamma = \mathrm{poly}(n) implies hardness of \mathrm{poly}(n)-SIVP, and vice versa. A fundamental geometric guarantee underpinning these relationships is provided by Minkowski's second theorem on successive minima. For an n-dimensional lattice L with determinant \det(L), the product of the successive minima satisfies \prod_{i=1}^n \lambda_i(L) \leq \gamma_n^{n/2} \det(L), where \lambda_i(L) is the i-th successive minimum of L (with respect to the Euclidean norm), and \gamma_n \leq (4/3)^{(n-1)/2} is Hermite's constant, defined as the supremum of \lambda_1(L')^2 \det(L')^{-2/n} over all n-dimensional lattices L'. This theorem ensures the existence of short independent vectors whose lengths are controlled relative to the lattice volume, providing a theoretical foundation for why SIVP instances always admit solutions within certain bounds, though finding them remains computationally challenging.
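
As a worked check of Minkowski's second theorem in the smallest nontrivial case, the following evaluates both sides for the hexagonal lattice in dimension two, which attains the bound with equality.

```latex
% Hexagonal lattice in dimension n = 2, generated by b_1 = (1, 0) and b_2 = (1/2, \sqrt{3}/2).
\[
\lambda_1(L) = \lambda_2(L) = 1, \qquad
\det(L) = \frac{\sqrt{3}}{2}, \qquad
\gamma_2 = \frac{2}{\sqrt{3}},
\]
\[
\prod_{i=1}^{2} \lambda_i(L) = 1
\;\le\;
\gamma_2^{\,2/2} \, \det(L) = \frac{2}{\sqrt{3}} \cdot \frac{\sqrt{3}}{2} = 1,
\]
% so the bound holds with equality, consistent with the hexagonal lattice achieving
% Hermite's constant \gamma_2 = 2/\sqrt{3} in dimension two.
```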

Algorithms

The Lenstra–Lenstra–Lovász (LLL) algorithm provides a polynomial-time approximation algorithm for the shortest independent vectors problem (SIVP), achieving an approximation factor of \gamma \approx 2^{n/2} relative to \lambda_n(L) in the Euclidean norm, where n is the lattice dimension. This method performs lattice basis reduction by iteratively applying Gram–Schmidt orthogonalization and size-reduction steps to produce a reduced basis whose vectors approximate the successive minima within the specified factor. While efficient for low dimensions, LLL's exponential approximation deteriorates rapidly with increasing n, limiting its utility for high-precision tasks.

Block Korkine–Zolotarev (BKZ) reduction extends LLL by incorporating calls to an exact or approximate SVP oracle on projected sublattices (blocks) of tunable size \beta, yielding better approximations for SIVP at the cost of time exponential in \beta. For moderate \beta, BKZ achieves practical approximations in cryptographic dimensions (e.g., n \approx 400) with root-Hermite factors around 1.02–1.05 relative to the successive minima. In the theoretical regime, full BKZ (with \beta = n) solves SIVP essentially exactly but requires superexponential time; variants using approximate oracles enable solving \gamma-SIVP for small \gamma in 2^{O(n)} time via sieving techniques integrated into the reduction process.

Advanced exponential-time algorithms improve upon basic enumeration, such as Kannan-style methods, which solve exact SIVP in n^{O(n)} time by recursively enumerating lattice points in successively reduced subspaces to build a set of independent short vectors. Micciancio and Voulgaris (2010) introduced a deterministic 2^{O(n)}-time framework based on computing the Voronoi cell of the lattice, which covers SIVP among the "most lattice problems" of its title and constructs a set of nearly shortest independent vectors without heuristics, marking a breakthrough in derandomizing prior sieving techniques. SIVP can also be related to the closest vector problem (CVP) through embedding-style constructions, which augment the basis with targets proportional to the suspected successive minima so that CVP oracles such as Babai's nearest plane method on reduced bases recover short independent vectors, preserving approximation factors up to constants.

Recent advancements as of 2024 focus on progressive BKZ variants, which incrementally increase block sizes during reduction to balance runtime and quality, achieving improved approximation factors (e.g., root-Hermite \delta \approx 1.003) in high-dimensional cryptographic settings like n = 1000 for learning with errors (LWE) instances. These include pump-and-jump strategies that dynamically adjust enumeration pruning and sieving within BKZ tours, reducing core runtime by up to 50% compared to fixed-block BKZ while maintaining provable progress toward optimal reduction. Such optimizations are critical for estimating security in lattice-based schemes, where they enable solving approximate SIVP instances that challenge proposed parameters.

Bounded Distance Decoding (BDD)

Definition

The Bounded Distance Decoding (BDD) problem is a promise variant of the Closest Vector Problem (CVP) on lattices. Given a lattice L \subseteq \mathbb{R}^n, a target point t \in \mathbb{R}^n, and a distance bound d > 0, the task is to find the unique lattice vector v \in L such that \|t - v\| \leq d, under the promise that such a v exists and is the unique closest lattice point to t. Typically, the promise ensures d < \lambda_1(L)/2, where \lambda_1(L) is the length of the shortest non-zero vector in L, guaranteeing uniqueness by the geometry of lattices. The approximate version, BDD_\eta, requires solving instances where the error distance is at most \eta \cdot \lambda_1(L)/2 for some \eta < 1. BDD is crucial in lattice-based cryptography, as its average-case hardness under small \eta (e.g., 1/\mathrm{poly}(n)) is equivalent to worst-case hardness of problems like GapSVP via reductions.
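
The promise can be made concrete with a toy instance: the Python sketch below (illustrative basis, coefficients, and error) perturbs a lattice point by an error much shorter than \lambda_1/2 and recovers it by simple coordinate rounding against a reasonably orthogonal basis; with a highly skewed basis the same rounding would generally fail, which is why BDD solvers rely on basis reduction.

```python
import numpy as np

# Illustrative BDD instance in dimension 2.
B = np.array([[5.0, 1.0], [1.0, 4.0]])   # rows form a fairly orthogonal ("good") basis
z = np.array([3, -2])                     # hidden integer coefficients
v = z @ B                                 # hidden lattice point
e = np.array([0.6, -0.4])                 # error, well below lambda_1/2 (lambda_1 ~ 4.1 here)
t = v + e                                 # BDD target

# Babai-style rounding: express t in basis coordinates, round to the nearest integers.
coeffs = np.rint(t @ np.linalg.inv(B)).astype(int)
recovered = coeffs @ B

print("recovered coefficients:", coeffs)                     # [3, -2]
print("recovered the hidden lattice point:", np.allclose(recovered, v))
```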

Decoding Guarantees

In Bounded Distance Decoding (BDD), unique decoding is guaranteed when the promised distance d from the target t to the closest lattice point satisfies d < \lambda_1(L)/2, as this ensures no other lattice point lies within that ball around t. For random targets t, unique decoding succeeds with high probability (exponentially close to 1) when solving BDD instances over polynomially many random targets, enabling reductions from approximation problems like GapSVP to BDD with approximation factor \sqrt{n / \log n}. In the worst case, BDD is NP-hard for approximation factors \eta > 1/2 - \epsilon for any constant \epsilon > 0, particularly in \ell_p norms where the hardness threshold approaches 1/2 as p \to \infty. This establishes that even when the promise ensures near-uniqueness (close to half the minimum distance), finding the close vector remains computationally intractable. On average, BDD instances can be viewed as equivalent to the Learning With Errors (LWE) problem, where the target is a random lattice point plus small Gaussian noise. While cryptographic LWE parameters yield hard average-case BDD, solvers based on lattice reduction techniques, such as BKZ, address BDD for \eta = 1/\mathrm{poly}(n) in the average case over random lattices, achieving feasibility for moderate dimensions though not strictly polynomial time in the worst case. Quantum algorithms offer a speedup for the underlying search in BDD enumeration, roughly halving the exponent of the classical exponential runtime, but no known quantum algorithm breaks BDD in polynomial time. Recent work reports polynomial-time solvability for average-case BDD on random lattices when the error bound is on the order of \sqrt{n}, with high-probability success (at least 1 - e^{-c m} for a constant c > 0) via methods based on the singular value decomposition of the basis. These results underpin LWE-based decoding in cryptography and highlight ongoing progress toward tighter feasibility bounds, including explorations of \eta = 1/2^{O(\sqrt{\log n})} regimes.

Covering Radius Problem (CRP)

Formulation

The covering radius problem (CRP) is a computational problem in the geometry of numbers, concerned with determining the smallest radius such that spheres of that radius centered at lattice points cover the entire space. Formally, given a basis B = \{\mathbf{b}_1, \dots, \mathbf{b}_n\} of n linearly independent vectors in \mathbb{R}^n, generating the lattice L(B) = \{ B \mathbf{z} \mid \mathbf{z} \in \mathbb{Z}^n \}, the covering radius \rho(L(B)) is defined as \rho(L(B)) = \max_{\mathbf{x} \in \mathbb{R}^n} \min_{\mathbf{v} \in L(B)} \|\mathbf{x} - \mathbf{v}\|_2 = \sup_{\mathbf{x} \in \mathbb{R}^n} \mathrm{dist}(\mathbf{x}, L(B)), where \|\cdot\|_2 is the Euclidean norm and \mathrm{dist}(\mathbf{x}, L(B)) = \min_{\mathbf{v} \in L(B)} \|\mathbf{x} - \mathbf{v}\|_2. Geometrically, \rho(L) is the radius of the largest open ball that can be placed in space without containing any lattice point, equivalent to the distance of a deepest hole of the lattice from its nearest lattice points. The problem can be defined with respect to any vector norm, such as the \ell_p-norm for p \in [1, \infty]. The search version of CRP requires computing \rho(L(B)) exactly. The decision version, often studied in complexity theory, asks whether \rho(L(B)) \leq d for a given value d > 0. The approximation version, GapCRP_\gamma for \gamma \geq 1, distinguishes cases where \rho(L(B)) \leq d from those where \rho(L(B)) > \gamma \cdot d. Exact CRP is computationally intensive in high dimensions, leading to focus on approximate variants. Polynomial-time algorithms exist for large approximation factors like \gamma = 2^{O(n)}, but constant-factor approximations remain challenging.
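
Because \rho(L) is a maximum over all of space, even estimating it is more awkward than SVP or CVP. The Python sketch below (for an illustrative rectangular lattice whose covering radius is known in closed form) obtains a Monte Carlo lower bound by sampling targets in a fundamental parallelepiped and measuring their distance to the lattice by brute force; this is a didactic estimate, not one of the algorithms discussed below.

```python
import itertools, math, random

# Monte Carlo lower bound on the covering radius of a 2-dimensional lattice.
B = [(2.0, 0.0), (0.0, 1.0)]  # rectangular lattice 2Z x Z, with rho(L) = sqrt(1.25)

def dist_to_lattice(x, basis, bound=3):
    """Distance from x to the lattice, by brute force over nearby lattice points."""
    best = float("inf")
    for z1, z2 in itertools.product(range(-bound, bound + 1), repeat=2):
        v = (z1 * basis[0][0] + z2 * basis[1][0], z1 * basis[0][1] + z2 * basis[1][1])
        best = min(best, math.dist(x, v))
    return best

random.seed(0)
estimate = 0.0
for _ in range(20000):
    # Random point in the fundamental parallelepiped {a*b1 + b*b2 : 0 <= a, b < 1}.
    a, b = random.random(), random.random()
    x = (a * B[0][0] + b * B[1][0], a * B[0][1] + b * B[1][1])
    estimate = max(estimate, dist_to_lattice(x, B))

print("Monte Carlo lower bound on rho(L):", round(estimate, 3),
      "(true value sqrt(1.25) ~ 1.118)")
```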

Computational Results

The covering radius of a lattice can be computed exactly using algorithms that systematically search for the deepest holes of the Voronoi cell, running in time exponential in the dimension n. These methods extend techniques originally developed for the closest vector problem to identify the maximum distance from any point in space to the lattice. Randomized algorithms achieve a constant-factor estimate of the covering radius, within any factor \gamma > 1, in exponential time 2^{O(n)}. A notable early approach uses Voronoi cell computations to approximate the covering radius within a factor of \sqrt{2}, leveraging the geometry of the Voronoi regions to bound the maximum distance. Approximating the covering radius within small factors is believed to be hard: constant-factor approximations are hard under the assumption that the shortest vector problem (SVP) is hard to approximate within constant factors, while approximating within O(\sqrt{n / \log n}) is not NP-hard unless the polynomial hierarchy collapses. For random lattices, the covering radius satisfies sharp bounds arising from probabilistic analysis of lattice geometry, holding with high probability.

Shortest Basis Problem (SBP)

Definition

The Shortest Basis Problem (SBP) is an optimization problem on lattices that seeks a basis \{b_1, \dots, b_n\} for a given full-rank lattice L \subseteq \mathbb{R}^m of dimension n such that \lambda = \max_{1 \leq i \leq n} \|b_i\| is minimized, where \|\cdot\| denotes the Euclidean norm. The value \lambda(L) denotes this minimal maximum basis vector length over all possible bases of L. The input consists of any basis for L, typically represented by an m \times n integer matrix, and the output is a basis achieving the shortest possible \lambda, which necessarily satisfies \lambda \geq \lambda_n(L), the n-th successive minimum of L. The approximate version of the problem, denoted \gamma-SBP for \gamma \geq 1, requires finding a basis where \max_i \|b_i\| \leq \gamma \cdot \lambda(L). This formulation is equivalent to seeking a reduced basis in the sense of lattice basis reduction techniques, which aim to minimize the maximum vector length while preserving the lattice structure. The decision version of SBP, which asks whether there exists a basis with \max_i \|b_i\| \leq r for a given bound r > 0, is NP-hard even for constant approximation factors greater than 1. This hardness holds under randomized reductions and extends to the search version, making exact and near-constant approximations computationally intractable in the worst case. The problem shares conceptual similarities with the Shortest Independent Vectors Problem (SIVP), though SBP specifically requires the vectors to form a spanning basis for L.

Basis Reduction Connections

Basis reduction algorithms provide the primary practical and theoretical tools for approximating solutions to the Shortest Basis Problem (SBP), transforming an arbitrary input basis into one where the maximum vector length is minimized within guaranteed factors. The Lenstra–Lenstra–Lovász (LLL) algorithm, introduced in 1982, achieves a polynomial-time approximation for SBP with a factor of 2^{n/2}, where n is the lattice dimension, ensuring that the longest vector in the reduced basis has length at most 2^{n/2} times the optimal SBP length. This exponential approximation, while coarse for high dimensions, enables efficient computation and forms the foundation for more advanced methods. Subsequent improvements, such as the Block Korkine–Zolotarev (BKZ) algorithm with block size \beta, refine this approximation to roughly \beta^{O(n/\beta)}, offering better guarantees as \beta increases, though at higher computational cost; this bound arises from the Gaussian heuristic applied to projected sublattices during reduction. The Hermite–Korkine–Zolotarev (HKZ) reduction specifically targets SBP by ensuring each basis vector is, after projection, a shortest vector of the lattice orthogonal to the previous ones, providing an approximation factor of O(\sqrt{n}) to the optimal \lambda(L). HKZ is computationally expensive but improves upon LLL guarantees. Theoretically, the orthogonality of vectors in an optimal SBP basis is constrained by the Hadamard ratio r(B) = \det(B) / \prod_{i=1}^n \|b_i\|, which satisfies r(B) \leq 1 for any basis B by Hadamard's inequality; for the shortest basis, the maximal achievable ratio over all bases of a given lattice is at least 1 / \gamma_n^{n/2}, where \gamma_n denotes the n-dimensional Hermite constant. This lower bound on the ratio implies an upper bound on the product \prod_i \|b_i\| \leq \gamma_n^{n/2} \det(B), providing a geometric limit on how orthogonal the shortest basis can be and thus bounding the optimal maximum length above by roughly \gamma_n^{1/2} \det(B)^{1/n} \cdot n^{O(1)}. The Hermite constant \gamma_n \approx n / (2\pi e) grows linearly with dimension, highlighting the inherent non-orthogonality of high-dimensional lattices. Solving SBP exactly is computationally hard, inheriting the difficulty of the Shortest Vector Problem (SVP), as an optimal SBP basis must include a shortest vector (or one of comparable length), and no known algorithm solves exact SVP or SBP in subexponential time in the worst case. Approximations significantly better than exponential factors are not known to be achievable in polynomial time, underscoring the problem's intractability even for constant approximation ratios. Implementations in libraries like fplll enable practical approximations of SBP in high dimensions (up to around 200) using BKZ with parallelization, taking hours on standard hardware for cryptographic applications. These tools leverage enumeration and sieving techniques for solving SVP on blocks, which overlap with SVP-focused methods to achieve tighter SBP approximations in practice.
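
As an illustration of how such library-based reduction is typically invoked in practice, the sketch below uses the fpylll Python bindings to fplll; this assumes fpylll is installed, and the lattice dimension, q-ary parameters, and block size are illustrative choices rather than cryptographically meaningful ones.

```python
# Requires the fpylll Python bindings to fplll (e.g. pip install fpylll).
from fpylll import IntegerMatrix, LLL, BKZ

def first_row_norm(M):
    """Euclidean length of the first basis vector of an IntegerMatrix."""
    return sum(M[0, j] ** 2 for j in range(M.ncols)) ** 0.5

# Random q-ary-style integer lattice basis of dimension 60 (illustrative parameters).
A = IntegerMatrix.random(60, "qary", k=30, bits=20)

# LLL reduction: polynomial time, exponential approximation guarantee.
LLL.reduction(A)
print("norm of first LLL-reduced vector:", first_row_norm(A))

# BKZ reduction with block size 20: stronger reduction, cost grows with the block size.
BKZ.reduction(A, BKZ.Param(block_size=20))
print("norm of first BKZ-20 vector:", first_row_norm(A))
```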

Applications in Cryptography

Hardness Assumptions

Lattice-based cryptography relies on the computational hardness of certain average-case problems, which are shown to be at least as difficult as worst-case instances of fundamental lattice problems like the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP). These reductions enable the construction of secure cryptographic primitives under well-defined assumptions, bridging theoretical hardness with practical security. Central to these assumptions is the hypothesis that approximate SVP (SVP_\gamma) is computationally infeasible even for approximation factors as large as \gamma = 2^{o(\sqrt{n})}, where n is the lattice dimension. A key example is the Learning With Errors (LWE) problem, whose average-case hardness is established via reductions from worst-case approximations of GapSVP and the Bounded Distance Decoding (BDD) problem, a promise variant of CVP. Specifically, solving LWE is at least as hard as approximating these worst-case problems to within polynomial factors. The foundational worst-case to average-case reduction was provided by Ajtai in 1996, demonstrating that finding short vectors in random q-ary lattices—the average-case problem now known as the Short Integer Solution (SIS) problem—is as hard as worst-case instances of problems like the Shortest Independent Vectors Problem (SIVP). This was later generalized by Micciancio and Regev in 2004, who developed a framework using Gaussian measures to connect worst-case hardness directly to average-case problems over general lattices, improving efficiency and applicability. SIS thus inherits its average-case hardness from worst-case SIVP via Ajtai-style reductions, making it suitable for hash functions and signatures. Similarly, the NTRU problem, underlying the NTRU cryptosystem, concerns finding short vectors in structured (ideal or module) lattices; while the original NTRU assumption lacks a tight worst-case reduction, closely related variants have been proven secure under Ring-LWE-style assumptions tied to worst-case problems on ideal lattices. As of 2025, these hardness assumptions remain unbroken against quantum adversaries, as no efficient quantum algorithms are known for solving general lattice problems like SVP or CVP beyond the approximation factors achievable classically.
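
To make the average-case problems concrete, the Python sketch below generates a toy LWE instance (the parameters n, m, q, and the error width are illustrative and far too small for security) and notes its geometric reading as a Bounded Distance Decoding instance on a q-ary lattice.

```python
import numpy as np

# Toy LWE instance with illustrative parameters (not secure).
rng = np.random.default_rng(1)
n, m, q, sigma = 8, 16, 97, 1.0

s = rng.integers(0, q, size=n)                          # secret vector
A = rng.integers(0, q, size=(m, n))                     # public random matrix
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)   # small Gaussian-like error
b = (A @ s + e) % q                                     # noisy inner products

# Decisional LWE: distinguish (A, b) from (A, uniform). Search LWE: recover s.
# Geometrically, b lies close to the q-ary lattice {A x mod q : x in Z^n}, so
# recovering s amounts to Bounded Distance Decoding on that lattice.
print("public LWE sample (A, b) generated; error norm:", np.linalg.norm(e))
```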

Post-Quantum Developments

Lattice problems have played a pivotal role in the development of post-quantum cryptography, particularly through the standardization efforts led by the National Institute of Standards and Technology (NIST). In 2022, NIST selected CRYSTALS-Kyber as a primary key encapsulation mechanism (KEM), based on the Module Learning With Errors (Module-LWE) problem, a structured variant related to the Bounded Distance Decoding (BDD) problem—itself a promise variant of the Closest Vector Problem (CVP). CRYSTALS-Dilithium, along with Falcon (FN-DSA) for compact signatures, was similarly chosen for digital signatures, relying on Module-LWE and Short Integer Solution (SIS) assumptions derived from lattice hardness. These selections advanced from the third round of NIST's process, initiated in 2016 to identify quantum-resistant algorithms. By August 2024, NIST finalized FIPS 203 (ML-KEM, derived from Kyber), FIPS 204 (ML-DSA, from Dilithium), and FIPS 205 (SLH-DSA, from the hash-based SPHINCS+) as the first official post-quantum encryption and signature standards, with FIPS 206 (FN-DSA, from Falcon) issued in draft form in 2025 and expected to be finalized. Saber, another module-based KEM using Learning With Rounding (LWR), was a finalist in the third round but was not selected for standardization, though it remains influential in implementations seeking efficiency trade-offs.

Beyond key encapsulation and signatures, lattice problems underpin advanced cryptographic primitives in post-quantum settings. The CKKS (Cheon-Kim-Kim-Song) scheme enables fully homomorphic encryption for approximate computations on encrypted data, grounded in the Ring-LWE problem, whose security is tied to approximate lattice problems of CVP/BDD type over structured lattices. This allows operations like addition and multiplication on ciphertexts without decryption, with security relying on the hardness of finding close vectors in ring lattices. For zero-knowledge proofs, constructions leverage problems such as the Gap Shortest Vector Problem (GapSVP) and SIS, where a prover demonstrates knowledge of a short vector in a lattice without revealing it, enabling non-interactive statistical zero-knowledge proofs for lattice-based commitments and relations. These proofs support applications like verifiable computation and privacy-preserving protocols, with efficiency improvements via Fiat-Shamir transformations.

Despite their quantum resistance, lattice-based schemes have faced scrutiny from side-channel attacks, prompting mitigations from 2023 to 2025. Power-analysis and electromagnetic attacks target implementations of Module-LWE-based systems like Kyber and Dilithium, exploiting operations such as the number-theoretic transforms (NTT) used for polynomial multiplication. Researchers developed higher-order non-profiled side-channel attacks, demonstrating key recovery with reduced trace counts on masked implementations. Countermeasures include masking schemes that split secrets into randomized shares to thwart leakage, as well as shuffling and constant-time arithmetic, achieving security against first- and second-order attacks with modest overhead. No structural breaks of the underlying lattice problems have occurred, and quantum algorithms like Shor's are ineffective against them, as Shor targets integer factorization and discrete logarithms rather than lattice approximation problems. Ongoing evaluations confirm that lattice hardness remains intact even against Grover-style search acceleration, which offers only a quadratic speedup.

Looking ahead, lattice-based cryptography is expanding into multi-party computation (MPC) protocols secure against quantum adversaries, integrating LWE- and SIS-based primitives into threshold schemes that distribute computation across parties without a trusted dealer. Quantum-secure signatures continue to evolve, with lattice-based variants enabling aggregate and ring signatures for blockchain and identity systems. As of 2025, international bodies such as ISO are progressing toward adopting NIST's standards into frameworks like ISO/IEC 14888 for signatures, with PQC parts entering draft stages to facilitate global migration to post-quantum infrastructure.

  31. [31]
    [PDF] Solving Hard Lattice Problems and the Security of Lattice-Based ...
    Sep 10, 2012 · This paper is a tutorial introduction to the present state-of-the-art in the field of security of lattice- based cryptosystems.
  32. [32]
    On the complexity of computing short linearly independent vectors ...
    The shortest vector problem is NP- hard for randomized reductions, Proc. 30th Symposium on Theory of Computing 1998, pp. 10-19.
  33. [33]
    [PDF] Search-to-Decision Reductions for Lattice Problems with ... - arXiv
    Apr 24, 2017 · Specifically, γ-GapSVP asks us to approxi- mate the length of the shortest non-zero vector of a lattice up to a factor of γ, and γ-GapCVP asks ...
  34. [34]
    [PDF] Hardness of the (Approximate) Shortest Vector Problem: A Simple ...
    Feb 14, 2022 · Abstract. We give a simple proof that the (approximate, decisional) Shortest Vector Problem is NP-hard under a randomized reduction.
  35. [35]
    Finding the closest lattice vector when it's unusually close
    This paper, by Philip Klein, published in SODA '00, discusses finding the closest lattice vector when it's unusually close.
  36. [36]
    [PDF] Efficient reductions among lattice problems - UCSD CSE
    Oct 9, 2007 · Next, we reduce SVP′ to CVP. The idea underlying the reduction is that SVP′ can be regarded as a variant of CVP where the target vector can be ...Missing: unique- | Show results with:unique-
  37. [37]
    [PDF] New Lattice Based Cryptographic Constructions
    New Lattice Based Cryptographic Constructions. Oded Regev ∗. August 17, 2004. Abstract. We introduce the use of Fourier analysis on lattices as an integral ...
  38. [38]
    [PDF] Dimension-Preserving Reductions Between Lattice Problems
    Sep 6, 2016 · The reductions from uSVP to BDD and from uSVP to GapSVP only work for polynomially bounded approximation factors 7. The equiva- lence of CVP and ...
  39. [39]
    [math/0204158] Successive Minima and Lattice Points - arXiv
    Apr 12, 2002 · The main purpose of this note is to prove an upper bound on the number of lattice points of a centrally symmetric convex body in terms of the successive minima ...
  40. [40]
    Solve Approximate CVP via Variants of Nearest-Colattice
    Jun 23, 2025 · In this work, we bridge this gap by advancing the Colattice framework for solving approximate CVP with large approximation factors.
  41. [41]
    [PDF] Hardness of Bounded Distance Decoding on Lattices in `p Norms
    Mar 17, 2020 · Bounded Distance Decoding BDDp,α is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum ...
  42. [42]
    [PDF] The Learning with Errors Problem
    ... poly(n) (see Figure 1). In fact, as we shall mention later, it can be shown ... , called SVP, which over ideal lattice is more or less equivalent to SIVP).
  43. [43]
    [2506.16662] Bounded Distance Decoding for Random Lattices - arXiv
    Jun 20, 2025 · The current paper investigates the bounded distance decoding (BDD) problem for ensembles of lattices whose generator matrices have sub-Gaussian entries.Missing: guarantees | Show results with:guarantees
  44. [44]
    [PDF] The Complexity of the Covering Radius Problem on Lattices and ...
    Definition 2.3 (Covering Radius Problem) An input to GapCRPγ is a pair (B,d) where B is a rank n lattice basis and d is a rational number. In YES inputs ρ(B) ...
  45. [45]
    [PDF] On the Voronoi Regions of Certain Lattices - Neil Sloane
    Unfortunately little is known about the polytopes of these lattices. For E6* and ET*, for example, even the covering radius (the distance of the furthest vertex ...
  46. [46]
  47. [47]
    [PDF] Worst-case to Average-case Reductions based on Gaussian ...
    Dec 14, 2005 · The paper shows that solving modular linear equations is as hard as approximating lattice problems within a factor almost linear in dimension, ...
  48. [48]
    [PDF] Making NTRU as Secure as Worst-Case Problems over Ideal Lattices
    In the present work, we show how to modify NTRUEncrypt to make it provably secure in the standard model, under the assumed quantum hardness of standard worst- ...Missing: CVP | Show results with:CVP
  49. [49]
    State of the post-quantum Internet in 2025 - The Cloudflare Blog
    Oct 28, 2025 · Here, we'd like to share an anecdote from the chaotic week when it was not clear yet that Chen's quantum algorithm against lattices was flawed.