Nearest neighbor search

Nearest neighbor search (NNS) is a core problem in computer science and data processing that aims to identify, for a given query point q in a metric space, the point p in a set S of points that minimizes the distance \dist(p, q), where \dist is a predefined metric such as the Euclidean distance. The more general k-nearest neighbor search extends this by retrieving the k closest points to q from S, forming a set R such that no other point in S is closer to q than those in R. This technique underpins numerous applications across fields like machine learning, where it serves as a non-parametric method for classification and regression by assigning labels based on majority voting or averaging among the nearest neighbors of an unlabeled query. In information retrieval, multimedia search, and recommender systems, NNS enables efficient similarity matching in large datasets, such as recommending items or detecting anomalies.

Its importance stems from the need to handle massive, high-dimensional data volumes, where naive exhaustive searches, which exhibit linear time complexity in the dataset size, become computationally infeasible. Exact NNS guarantees the true closest points but can be inefficient in high dimensions due to the curse of dimensionality, prompting the development of approximate methods that trade precision for speed, achieving sublinear query times in practice. Key approaches include space-partitioning structures like k-d trees and ball trees for exact searches in low dimensions, hashing-based techniques such as locality-sensitive hashing for high-dimensional approximate retrieval, and graph-based methods, along with techniques that leverage randomized projections or clustering for scalability. These methods are broadly categorized into branch-and-bound, graph-walk, mapping-based, and epsilon-net strategies, with ongoing research focusing on enhancements for environments such as distributed systems.

Fundamentals

Definition and Problem Statement

Nearest neighbor search (NNS), also known as proximity search or similarity search, is a core problem in computer science and information retrieval that seeks to identify the data point or points in a given dataset most similar to a specified query point, measured by a distance function in a metric space. This process is fundamental for tasks requiring the retrieval of nearest matches, such as pattern recognition and database querying, where efficiency becomes critical as dataset sizes grow into millions or billions of points.

Formally, given a finite set of points P = \{p_1, p_2, \dots, p_n\} in a metric space (X, d) equipped with a distance function d: X \times X \to \mathbb{R}_{\geq 0} satisfying the properties of non-negativity, symmetry, and the triangle inequality, and a query point q \in X, the single nearest neighbor problem requires finding the point p^* \in P that minimizes d(q, p) over all p \in P. A common example occurs in Euclidean space \mathbb{R}^d, where the distance is defined as d(q, p) = \sqrt{\sum_{i=1}^d (q_i - p_i)^2}, representing the straight-line distance between vectors q and p. The problem extends to finding the k nearest neighbors, which identifies the k points in P with the smallest distances to q, often ordered by increasing distance.

The nearest neighbor search problem admits several variants depending on the dataset and query requirements. In the static case, the set P remains fixed after initial construction, allowing preprocessing into efficient data structures for repeated queries. Conversely, dynamic datasets support insertions, deletions, or updates to P, necessitating adaptive structures to maintain search performance. These variants underpin the role of NNS in similarity-based retrieval for large-scale datasets, enabling applications from recommendation systems to genomic analysis by approximately or exactly identifying close matches without exhaustive pairwise comparisons.

The origins of nearest neighbor methods trace back to the early 1950s in statistical pattern recognition, with pioneering work by Evelyn Fix and Joseph Hodges introducing nonparametric classification techniques that relied on proximity to training samples. Their 1951 unpublished report for the US Air Force School of Aviation Medicine laid the groundwork for consistency properties in nearest neighbor classifiers, influencing subsequent developments in nonparametric statistics and discriminant analysis.

Distance Metrics

In nearest neighbor search, a distance metric defines the notion of similarity between points, enabling the identification of the closest neighbors in a dataset. These metrics must typically satisfy certain properties to ensure meaningful comparisons, such as symmetry, where the distance from point q to p equals the distance from p to q; positive definiteness, meaning the distance is zero only if the points are identical and positive otherwise; and the triangle inequality, which states that the direct distance between two points is no greater than the sum of distances via an intermediate point. These properties hold in metric spaces, which form the foundation for many nearest neighbor algorithms, though some applications relax them in non-metric spaces for better domain fit.

The most widely used metrics in nearest neighbor search belong to the Minkowski family, defined for vectors \mathbf{q} = (q_1, \dots, q_d) and \mathbf{p} = (p_1, \dots, p_d) in d-dimensional space as d(\mathbf{q}, \mathbf{p}) = \left( \sum_{i=1}^d |q_i - p_i|^p \right)^{1/p}, where p \geq 1 is a parameter. For p = 2, this yields the Euclidean distance (L2 norm), which measures the straight-line distance and is the default in many implementations due to its geometric interpretability in continuous spaces. The Manhattan distance (L1 norm, p = 1) sums absolute differences along each dimension, offering robustness to outliers and lower computational cost in grid-like data, as it avoids the squaring and square-root operations required by the Euclidean distance. As p increases, the metric emphasizes the maximum deviation (approaching the Chebyshev distance as p \to \infty), while values between 1 and 2 balance sensitivity to large differences with efficiency.

Beyond Minkowski metrics, non-Euclidean distances suit specific data types. The Hamming distance counts differing bits between binary vectors, making it ideal for categorical or high-dimensional sparse data like genetic sequences, with a computational cost of O(d) via bitwise operations. Cosine similarity, often converted to a distance as 1 - \cos(\theta) where \cos(\theta) = \frac{\mathbf{q} \cdot \mathbf{p}}{||\mathbf{q}|| \, ||\mathbf{p}||}, measures angular separation and excels in high-dimensional spaces like text embeddings, ignoring magnitude differences for direction-focused similarity. For sequential data such as strings, the edit distance (Levenshtein distance) quantifies the minimum number of insertions, deletions, or substitutions needed to transform one string into another, though its O(mn) computational cost limits its use in large nearest neighbor searches. In non-metric spaces, distances may violate the triangle inequality, as seen in some kernel-induced measures, allowing flexible modeling but complicating exact search guarantees.

Metric selection depends on computational cost, sensitivity to data scaling, and domain requirements. The Euclidean distance is sensitive to feature scales, often necessitating normalization, while the Manhattan distance is less so but still benefits from it in mixed-scale data. Domain-specific choices include the Earth Mover's Distance (EMD) for comparing distributions like color histograms, which minimizes the work required to transform one distribution into the other under a ground metric and satisfies the triangle inequality if the ground distance does, though at higher computational cost than vector norms.

For example, consider vectors \mathbf{q} = (1, 3) and \mathbf{p} = (4, 1). The Euclidean distance is computed as \sqrt{(4-1)^2 + (1-3)^2} = \sqrt{9 + 4} = \sqrt{13} \approx 3.606. For cosine similarity, first compute the norms: ||\mathbf{q}|| = \sqrt{1^2 + 3^2} = \sqrt{10}, ||\mathbf{p}|| = \sqrt{4^2 + 1^2} = \sqrt{17}; the dot product is 1 \cdot 4 + 3 \cdot 1 = 7; thus \cos(\theta) = 7 / (\sqrt{10} \cdot \sqrt{17}) \approx 0.537, giving a distance of 1 - 0.537 \approx 0.463 and highlighting cosine's insensitivity to vector lengths.
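
The following minimal Python sketch (standard library only; the vectors are the ones from the worked example above) computes these metrics directly and reproduces the values just derived.
import math

def minkowski(q, p, power=2):
    # power=2 gives the Euclidean distance, power=1 the Manhattan distance.
    return sum(abs(a - b) ** power for a, b in zip(q, p)) ** (1.0 / power)

def hamming(q, p):
    # Number of positions at which two equal-length sequences differ.
    return sum(a != b for a, b in zip(q, p))

def cosine_distance(q, p):
    # 1 - cosine similarity; ignores the magnitudes of the vectors.
    dot = sum(a * b for a, b in zip(q, p))
    return 1.0 - dot / (math.sqrt(sum(a * a for a in q)) * math.sqrt(sum(b * b for b in p)))

q, p = (1, 3), (4, 1)
print(minkowski(q, p))        # ~3.606 (Euclidean)
print(minkowski(q, p, 1))     # 5.0 (Manhattan)
print(cosine_distance(q, p))  # ~0.463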

Challenges and Limitations

One of the primary challenges in nearest neighbor search arises from the curse of dimensionality, which manifests as distance concentration in high-dimensional spaces. In such spaces, the distances between a query point and data points tend to become increasingly similar, causing the nearest neighbor to appear nearly as distant as the farthest point. This effect makes it difficult to distinguish meaningful nearest neighbors, as the relative differences in distances diminish. For instance, under assumptions of uniform data distribution in a unit hypercube, empirical analyses show that by around 10-20 dimensions, the contrast between nearest and farthest distances drops dramatically, often by orders of magnitude compared to low dimensions.

A mathematical foundation for this phenomenon is the relative contrast, defined as the ratio of the expected maximum distance D_{\max} to the expected minimum distance D_{\min} from a query to the dataset points. Beyer et al. demonstrate that if the variance of the scaled distance distribution converges to zero as dimensionality increases, then for any \epsilon > 0, the probability P[D_{\max} \leq (1 + \epsilon) D_{\min}] approaches 1. This implies that with high probability, all points lie within a narrow range of distances from the query in sufficiently high dimensions. Geometrically, this concentration stems from the fact that the volume of a d-dimensional hypersphere of radius r is largely contained in a thin outer shell of thickness \epsilon; specifically, the fraction of volume between r - \epsilon and r is 1 - (1 - \epsilon/r)^d, which approaches 1 as d \to \infty.

Scalability presents another significant limitation, particularly for large datasets. The brute-force approach requires computing distances to all n points, yielding a time complexity of O(d n) per query, where d is the dimensionality; this becomes infeasible for high d and large n, such as in modern datasets with millions of points in hundreds of dimensions. While indexing structures seek to mitigate this with O(n \log n) preprocessing, their query efficiency degrades in high dimensions due to increased overlap in partitioning cells or boundary effects, often reverting to near-linear scan costs. A cost model analysis confirms that for dimensions exceeding 50-80, the overhead of index traversal outweighs sequential scanning, exacerbating computational demands.

Additional limitations include sensitivity to outliers, challenges with dynamic updates, and dependence on metric choice. Outliers can disproportionately affect search results: in high dimensions, certain points, termed hubs, frequently appear as nearest neighbors to many queries due to distance concentration, skewing outcomes. For dynamic datasets, most traditional indexing methods lack efficient support for insertions or deletions, typically necessitating full rebuilds at O(n \log n) cost, which hinders applicability in streaming or evolving data scenarios. The choice of distance metric further influences performance; for example, the Euclidean distance (L_2) intensifies concentration in dense data, while metrics with lower p in L_p norms may preserve greater contrast. These issues collectively motivate approximate techniques that achieve practical scalability and robustness.
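
The concentration effect is easy to observe empirically. The following sketch (assuming NumPy is available; the point counts and dimensions are illustrative) draws uniform points in the unit hypercube and reports the ratio of the farthest to the nearest distance from a random query, which shrinks toward 1 as the dimension grows.
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # number of database points

for d in (2, 10, 100, 1000):
    points = rng.random((n, d))   # uniform in the d-dimensional unit hypercube
    query = rng.random(d)
    dists = np.linalg.norm(points - query, axis=1)
    print(f"d={d:5d}  D_max/D_min = {dists.max() / dists.min():.2f}")

# Typical output shows the ratio dropping from tens in d=2 toward roughly 1.1 in d=1000.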

Applications

In Machine Learning and Pattern Recognition

In machine learning, nearest neighbor search underpins the k-nearest neighbors (k-NN) classifier, a foundational lazy learning algorithm that defers generalization until prediction time by assigning a class label based on the majority vote among the k closest training instances to a query point. This approach, introduced as a non-parametric method for pattern classification, relies on distance metrics to identify neighbors and is particularly effective for datasets where decision boundaries are irregular, as it stores the entire training set and computes similarities on the fly without building an explicit model.

As a core component of instance-based learning, nearest neighbor search enables algorithms that store and retrieve specific training examples to inform predictions, extending its utility to unsupervised tasks such as clustering. For instance, the DBSCAN algorithm leverages nearest neighbor queries to estimate local data density, grouping points into clusters based on density reachability within a specified neighborhood while marking low-density points as noise. This density-based paradigm, formalized in early work on spatial clustering, highlights how nearest neighbor search facilitates the discovery of arbitrarily shaped clusters without assuming spherical distributions.

Nearest neighbor search also plays a key role in outlier detection by quantifying deviation through distances to nearby points; outliers are identified as instances with unusually large distances to their k-th nearest neighbor, providing a distance-based criterion for flagging irregularities in data distributions. In practical applications, such as handwritten digit recognition on the MNIST dataset, k-NN classifiers achieve accuracies around 97% using Euclidean distance, demonstrating robust performance on grayscale image features despite the high dimensionality of the input space.

Beyond standalone use, nearest neighbor search integrates with other techniques, including feature selection methods like Relief, which iteratively evaluates feature relevance by contrasting distances to nearest neighbors of the same and different classes to prioritize discriminative attributes. In modern pipelines, it evaluates learned embedding spaces, where contrastive learning frameworks project data into low-dimensional representations and employ k-NN to assess representation quality, often yielding top-1 accuracies exceeding 70% on downstream tasks like ImageNet classification.
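
As a concrete illustration of the majority-vote rule described above, the following minimal Python sketch (standard library only; the toy 2D training set is hypothetical) classifies a query by voting among its k nearest training points under Euclidean distance.
import math
from collections import Counter

def knn_classify(query, training_data, k=3):
    # training_data: list of (vector, label) pairs.
    # Sort by distance to the query and vote among the k closest labels.
    neighbors = sorted(training_data, key=lambda item: math.dist(query, item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical two-class training set in 2D.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.9), "B"), ((4.8, 5.1), "B")]
print(knn_classify((1.1, 1.0), train, k=3))  # -> "A"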

In Information Retrieval and Databases

In information retrieval, nearest neighbor search enables content-based retrieval by representing documents or images as high-dimensional vector embeddings, typically derived from feature extraction models, and identifying the most similar items to a query vector using distance metrics like cosine similarity. This approach facilitates semantic search, where queries retrieve relevant content without relying on exact keyword matches, such as finding similar articles in a corpus based on topical embeddings. For instance, in content-based image retrieval (CBIR), nearest neighbor techniques scan embedding spaces to return visually akin images, enhancing user experience in media archives.

In databases, nearest neighbor search is integral to indexing structures for efficient similarity queries, particularly in spatial databases where it supports geographic applications. PostGIS, a spatial extension for PostgreSQL, implements k-nearest neighbor (k-NN) queries using the <-> distance operator, which leverages spatial indexes to order and retrieve the closest points to a query location, such as finding the nearest hospitals within a city grid. This method optimizes performance for large geospatial datasets by avoiding full scans, enabling real-time location-based services.

Recommendation systems employ nearest neighbor search in collaborative filtering to suggest items by computing similarities between user or item profiles, often represented as vectors of interaction histories. User-based variants identify the k nearest neighbors among users with akin preferences to predict ratings, while item-based approaches match items to those previously favored by similar users, improving recommendation relevance on platforms such as e-commerce and streaming sites. These techniques scale via approximate methods to handle sparse matrices efficiently.

At real-world scales, nearest neighbor search manages billions of vectors in search engines through approximate algorithms integrated into distributed systems. Elasticsearch's approximate k-NN search, introduced in version 8.0, uses hierarchical navigable small world (HNSW) graphs for sublinear query times on dense embeddings, supporting vector search across petabyte-scale indices. Similarly, Google's Cloud Spanner employs ScaNN for approximate nearest neighbor queries on massive datasets, achieving high recall with low latency in production environments.

Vector databases like Pinecone and Milvus specialize in nearest neighbor search for embedding storage and retrieval, optimized for high-throughput applications. Pinecone's serverless architecture integrates metadata filtering with approximate indexing to maintain accuracy during hybrid searches over millions of vectors, while Milvus supports distributed k-NN queries via IVF and HNSW indexes, enabling scalable similarity matching in machine learning pipelines. These systems underscore the shift toward vector-native infrastructure for retrieval tasks.

In Computer Graphics and Vision

In computer graphics, nearest neighbor search (NNS) plays a pivotal role in point cloud processing, particularly for tasks such as surface reconstruction and collision detection in 3D models. For meshing, algorithms like the Ball-Pivoting Algorithm (BPA) rely on identifying nearby points within a specified radius to connect seed triangles and reconstruct triangular meshes from unorganized point clouds, enabling efficient surface approximation without explicit connectivity information. This approach is especially effective for scanned objects, where NNS helps pivot a virtual ball around mesh edges to find intersecting points, forming coherent surfaces while handling noise and sparsity. In collision detection, NNS facilitates proximity queries between point clouds by constructing spatial indices like k-d trees, which quickly retrieve the k nearest points to assess overlaps in dynamic scenes such as virtual environments or simulations. For instance, in high-speed navigation systems, k-d tree-based NNS on point clouds assigns collision costs to nearby obstacles, enabling real-time avoidance in robotics and graphics applications.

In computer vision, NNS is fundamental to feature matching, as seen in descriptors like the scale-invariant feature transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB). SIFT extracts local keypoints and descriptors invariant to scale and rotation, then uses nearest neighbor matching in descriptor space to identify correspondences between images, supporting applications like object recognition and panorama stitching. This matching often employs brute-force or approximate methods to find the closest descriptor, followed by ratio tests to filter outliers, achieving robust performance across viewpoint changes. Similarly, ORB, designed for real-time efficiency, generates binary descriptors and applies nearest neighbor search, typically via libraries like FLANN, for fast matching, making it suitable for resource-constrained vision tasks such as visual SLAM. These techniques enable matching at scale by comparing high-dimensional feature vectors, prioritizing speed and accuracy in large databases.

NNS accelerates ray tracing and rendering by optimizing intersection queries in complex scenes. Traditional ray tracing traverses acceleration structures like bounding volume hierarchies (BVH) to find nearest intersections, but recent hardware advancements repurpose ray tracing cores for general NNS, treating neighbor searches as ray-geometry queries to boost throughput. For example, RTNN formulates k-nearest neighbor problems as ray tracing operations on GPUs, achieving up to 10x speedups over conventional methods in graphics workloads like particle simulations and light transport. This integration leverages dedicated RT hardware for unrestricted searches, reducing latency in rendering pipelines.

Practical implementations in graphics tools like Unity and Blender incorporate NNS within their physics and spatial query systems for efficient interactions. In Unity, the physics engine uses spatial partitioning, such as grids and BVHs, to perform nearest neighbor-like queries for collision detection and proximity checks in game scenes, enabling dynamic object placement and raycasts. Blender's physics integration similarly employs tree-based structures for broad-phase culling, where NNS identifies potential collisions among meshes and particles in animations and simulations. These tools often rely on hierarchical methods, like octrees, to prune search spaces for NNS in rendering and modeling workflows.

Advances in real-time NNS have enhanced augmented reality (AR) and virtual reality (VR) applications, where fast feature matching supports immersive tracking. In mobile AR systems, approximate NNS on binary descriptors enables loop closure detection and pose estimation by quickly retrieving similar keypoints from environmental maps, achieving sub-millisecond latencies on edge devices. This is critical for on-the-fly localization in dynamic scenes, as seen in vision-based pipelines that use hierarchical navigable small world (HNSW) graphs for k-NN searches on feature points. Such techniques ensure seamless integration of virtual overlays with real-world geometry, scaling to high-frame-rate VR applications.

Exact Methods

The brute-force search, also known as linear scan, is the simplest exact method for nearest neighbor search. It involves directly computing the distance between a query point q and every point in the dataset P of n points, then selecting the point with the minimum distance as the nearest neighbor. This approach requires no preprocessing or indexing structure, making it straightforward to implement for small to moderate-sized datasets. The algorithm proceeds in the following steps: given a query point q and dataset P = \{p_1, p_2, \dots, p_n\}, where each point is in a d-dimensional space, compute the distance d(q, p_i) for each p_i \in P using a chosen distance metric (such as the Euclidean distance); track the minimum distance encountered and the corresponding point; after scanning all points, return the point achieving the minimum distance. The following Python code illustrates the process:
import math

def nearest_neighbor(q, P):
    # Linear scan: return the point of P closest to q under Euclidean distance.
    min_dist = math.inf
    nearest = None
    for p in P:
        dist = math.dist(q, p)  # O(d) per distance computation
        if dist < min_dist:
            min_dist = dist
            nearest = p
    return nearest
This implementation assumes a distance function that computes d(q, p), with the loop iterating over all n points. In terms of complexity, the time required for a single query is O(nd), as it involves n distance computations, each taking O(d) time in the worst case for vector operations. The space complexity is O(nd), primarily for storing the dataset itself, with no additional structures needed beyond the input points.

The primary advantages of brute-force search are its simplicity, guaranteed exactness, and lack of preprocessing overhead, allowing immediate use on any dataset without building auxiliary data structures. It serves as a reliable baseline for evaluating more advanced methods and remains viable for small datasets where n is on the order of thousands. However, the method becomes infeasible for large n due to its linear scaling, particularly in high-dimensional spaces where the "curse of dimensionality" exacerbates the computational cost. For example, an exhaustive nearest neighbor workload on a dataset of 275,465 points in 60 dimensions required over 43,000 seconds (approximately 12 hours) on a dual-processor 1.6 GHz AMD Opteron system in 2003, highlighting the impracticality of naive scanning even for datasets approaching 1 million points without significant hardware advances. On modern hardware, query times for 1 million points in 128 dimensions can be reduced to under 1 second using optimized libraries, but the approach still limits scalability for real-time applications with frequent queries or billion-scale data.

One optimization is early termination of individual distance computations: because the squared Euclidean distance accumulates non-negative terms dimension by dimension, the running partial sum for a candidate can be compared against the current best distance and the computation abandoned as soon as it exceeds that bound, preserving exactness while skipping unnecessary arithmetic. If an upper bound on the nearest neighbor distance is available in advance (for example, from a prior coarse search), it can seed this bound and further reduce work.

Space Partitioning Structures

Space partitioning structures organize data points in a multi-dimensional space by recursively dividing it into non-overlapping regions, enabling efficient pruning of irrelevant areas during nearest neighbor queries. These methods preprocess the dataset to build an index, contrasting with brute-force approaches, and are particularly effective for exact searches in low- to medium-dimensional spaces. Common structures include tree-based partitions like k-d trees and ball trees, as well as grid-based ones such as quadtrees and octrees, each leveraging geometric properties to reduce the number of distance computations required.

K-d trees, introduced by Bentley in 1975 as a multidimensional binary search tree, partition the space using hyperplanes orthogonal to coordinate axes. Construction begins by selecting the dimension with the largest spread (or cycling through dimensions alternately), finding the median value in that dimension among the points, and splitting the dataset into two subsets on either side of the resulting hyperplane; this process recurses on each subset until a stopping criterion, such as a single point or maximum depth, is met, yielding a balanced tree with O(n log n) build time on average. For nearest neighbor queries, traversal starts at the root: the algorithm descends to the leaf containing the query point q by following the hyperplane splits, then backtracks up the tree while maintaining the current best distance d_best. At each internal node, the distance from q to the splitting hyperplane is computed; if this distance exceeds d_best, the opposite subtree can be pruned, as no point there can be closer than d_best. For example, in a 2D k-d tree split along the x-axis at median x_m, if q lies left of x_m and |q_x - x_m| > d_best, the right subtree is pruned since all points there satisfy x >= x_m, making their minimum possible distance at least |q_x - x_m|. This pruning relies on that geometric lower bound and ensures that only promising branches are explored. Friedman, Bentley, and Finkel in 1977 optimized this approach for nearest neighbor search, demonstrating logarithmic expected query time under uniform distributions.

Ball trees, proposed by Omohundro in 1989, represent subspaces as bounding hyperspheres (balls) rather than hyperplanes, which can be more effective when data clusters are spherical. During construction, points are randomly partitioned into two subsets, and for each node the smallest enclosing ball is computed for its points, typically by selecting a center (e.g., the centroid or a data point) and minimizing the radius; this recurses until subsets are small, resulting in a binary tree where parent balls contain child balls, with O(n log n) build time using efficient construction algorithms such as the one based on furthest-point Voronoi diagrams. Variants include the cone tree, which adds angular separation for better pruning under angular metrics. For queries, the algorithm traverses the tree, pruning a node if the minimum distance from q to the node's ball exceeds d_best (no points inside can be nearer), or updating d_best accordingly if the maximum distance to the ball shows that all points inside are nearer; the bounds between balls use the formulas min_dist = max(0, dist(centers) - r1 - r2) and max_dist = dist(centers) + r1 + r2, enabling effective culling based on spherical geometry. This structure excels when data has natural spherical clusters, as in certain pattern recognition tasks.

Quadtrees and octrees provide grid-based partitioning suited to 2D and 3D spaces, respectively. Quadtrees, developed by Finkel and Bentley in 1974, divide a bounding square into four equal quadrants at each level, assigning points to the quadrant containing their coordinates and recursing until quadrants hold few points or are empty; regions are represented as leaves, with internal nodes indicating subdivisions, and construction achieves O(n log n) time for balanced cases. Octrees extend this to three dimensions by subdividing cubes into eight octants, as formalized by Meagher in 1982 for geometric modeling, where each node stores the minimum bounding cube and points are binned accordingly. Queries in these structures locate the leaf containing q, then search neighboring cells within d_best using a priority queue ordered by minimum distance to each cell, pruning distant ones; for instance, in a quadtree, adjacent quadrants are traversed only if their bounding box intersects the search hypersphere of radius d_best centered at q. These structures are particularly useful in computer graphics for ray tracing and collision detection due to their axis-aligned regularity.

In terms of complexity, these structures offer average-case O(log n) query time for nearest neighbor search in balanced trees with n points, assuming uniform data distribution and low dimensions, while construction requires O(n log n) time. However, worst-case performance degrades to O(n) if the tree becomes unbalanced or in high dimensions, where the curse of dimensionality causes distances to become nearly uniform, reducing pruning effectiveness. These methods trade preprocessing time and space for faster queries, making them suitable for static datasets in dimensions up to around 20, beyond which alternative approaches are often needed.
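
The k-d tree pruning rule can be made concrete with a minimal Python sketch (an illustrative recursive implementation assuming points are equal-length tuples, not an exact reproduction of the 1977 algorithm): a far subtree is visited only if the splitting hyperplane is closer to the query than the best distance found so far.
import math

def build_kdtree(points, depth=0):
    # Split on the median along one axis, cycling through axes by depth.
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def kdtree_nearest(node, query, best=None):
    # Return the nearest stored point to the query, pruning far subtrees.
    if node is None:
        return best
    if best is None or math.dist(query, node["point"]) < math.dist(query, best):
        best = node["point"]
    axis, split = node["axis"], node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if query[axis] < split else (node["right"], node["left"])
    best = kdtree_nearest(near, query, best)
    # Visit the far side only if the splitting plane lies within the current best distance.
    if abs(query[axis] - split) < math.dist(query, best):
        best = kdtree_nearest(far, query, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(kdtree_nearest(tree, (9, 2)))  # -> (8, 1)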

Approximate Methods

Locality-Sensitive Hashing

Locality-sensitive hashing (LSH) is a probabilistic method for approximate nearest neighbor search that leverages families of hash functions to group similar data points into the same buckets with high probability, enabling efficient retrieval in high-dimensional spaces. Introduced as a way to sidestep the curse of dimensionality in nearest neighbor problems, LSH avoids exhaustive searches by focusing on likely candidates through randomized hashing. This approach trades off exactness for speed, achieving sublinear query times while maintaining practical accuracy for many applications.

The core idea behind LSH relies on hash functions that preserve locality: points close in the original space are more likely to collide (hash to the same value) than distant points. Formally, a family of hash functions \mathcal{H} is (r, cr, p_1, p_2)-sensitive with respect to a metric d if, for any points p and q, \Pr_{h \sim \mathcal{H}}[h(p) = h(q)] \geq p_1 when d(p, q) \leq r, and \Pr_{h \sim \mathcal{H}}[h(p) = h(q)] \leq p_2 when d(p, q) > cr, where c > 1 is the approximation factor and p_1 > p_2 > 0. Such families are constructed for specific metrics; for the Euclidean (L_2) metric, the E^2LSH scheme employs random projections onto one-dimensional subspaces using p-stable distributions, typically Gaussian for p=2. Each hash function in E^2LSH is h(\mathbf{v}) = \left\lfloor \frac{\mathbf{v} \cdot \mathbf{u} + b}{w} \right\rfloor, where \mathbf{u} is a random Gaussian vector, b is drawn uniformly from [0, w), and w controls the bin width to balance collision probabilities.

To perform a query, LSH builds multiple independent hash tables, each amplifying locality by concatenating k base hash functions into a single composite function g(\mathbf{v}) = (h_1(\mathbf{v}), \dots, h_k(\mathbf{v})), which increases the gap between p_1^k and p_2^k for close and far points. The query point is hashed across L such tables, and all points in the matching buckets are retrieved as candidate neighbors, whose distances are then computed exactly. This process ensures that nearby points collide in at least one table with high probability, while distant points rarely do, allowing irrelevant buckets to be skipped entirely.

LSH uses storage that grows near-linearly in the number of points n (with an additional factor for the L hash tables), independent of dimensionality, and achieves sublinear query time O(n^{\rho}), where \rho = \frac{\log p_1}{\log p_2} < 1 depends on c and the hash family, often on the order of 1/c. These guarantees make LSH suitable for high-dimensional data, such as embedding vectors in recommendation systems, where it efficiently identifies similar user preferences or item features in sparse, large-scale datasets.
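
A minimal sketch of an E^2LSH-style index (NumPy-based; the parameters k, L, and w below are illustrative defaults, not tuned values) builds L tables of concatenated projection hashes and verifies only colliding candidates at query time.
import numpy as np
from collections import defaultdict

class L2LSH:
    def __init__(self, dim, k=4, L=8, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        # L tables, each with k projections: h(v) = floor((v . u + b) / w).
        self.proj = rng.normal(size=(L, k, dim))
        self.bias = rng.uniform(0, w, size=(L, k))
        self.w = w
        self.tables = [defaultdict(list) for _ in range(L)]
        self.points = []

    def _keys(self, v):
        # One composite key per table: a tuple of k integer hash values.
        return [tuple(np.floor((P @ v + b) / self.w).astype(int))
                for P, b in zip(self.proj, self.bias)]

    def add(self, v):
        v = np.asarray(v, dtype=float)
        idx = len(self.points)
        self.points.append(v)
        for table, key in zip(self.tables, self._keys(v)):
            table[key].append(idx)

    def query(self, q):
        q = np.asarray(q, dtype=float)
        candidates = {i for table, key in zip(self.tables, self._keys(q))
                      for i in table.get(key, [])}
        if not candidates:
            return None  # in practice, fall back to probing nearby buckets
        return min(candidates, key=lambda i: np.linalg.norm(self.points[i] - q))

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 32))
index = L2LSH(dim=32)
for v in data:
    index.add(v)
print(index.query(data[42] + 0.01))  # most likely prints 42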

Graph-Based Search Techniques

Graph-based search techniques for approximate nearest neighbor (ANN) search leverage proximity graphs, where nodes represent data points and edges connect nearby points based on distance metrics, enabling efficient navigation through the data structure. These methods, including Delaunay triangulations and k-nearest neighbor (k-NN) graphs, exploit local connectivity to approximate global searches by performing greedy traversals starting from an entry point, such as a randomly selected node or a dedicated entry vertex. In Delaunay graphs, edges form a triangulation that includes all nearest neighbor connections without crossing links, ensuring that the nearest neighbor of any point lies among its adjacent nodes. Similarly, k-NN graphs connect each point to its k closest neighbors, forming directed or undirected structures that preserve local proximity for routing.

A prominent advancement in this domain is the Hierarchical Navigable Small World (HNSW) graph, which extends navigable small world (NSW) graphs by introducing multiple layers resembling skip lists for hierarchical navigation. In HNSW, the bottom layer contains the full dataset as a proximity graph with short-range connections, while higher layers are sparser with longer-range edges, allowing rapid coarse-grained search followed by refinement in lower layers. Construction proceeds incrementally: new points are inserted by greedily descending from the top layer to the bottom, choosing candidate neighbors via heuristics such as keeping the closest connections within a maximum degree limit (M) to balance density and navigability. This process approximates both Delaunay and NSW properties, ensuring logarithmic scaling for traversal.

Querying in HNSW involves a greedy descent: starting from the top layer's entry point, the algorithm navigates to the nearest neighbor in the query's vicinity at each layer, descending while expanding a beam of candidate nodes to control recall versus speed; wider beams improve accuracy at the cost of more distance computations. The beam search mitigates local minima by exploring multiple paths, achieving high recall through this controlled exploration. Unlike exhaustive methods, these graphs mitigate the curse of dimensionality by preserving locality in high-dimensional spaces via explicit connections that reflect the intrinsic structure of the data rather than assuming uniform distributions.

In the 2020s, HNSW has seen widespread adoption, notably integrated into the FAISS library by Meta, where it delivers approximately 95% recall for k-NN queries on million-scale datasets with latencies under 2 milliseconds on CPUs, offering up to 10x speedups over brute-force search while maintaining robustness across dimensions. The algorithmic complexity for HNSW queries is O(log n) in expectation, benefiting from the hierarchical structure's logarithmic depth, with space requirements of O(n log n) due to multiple layers and bounded connections per node. These properties position graph-based techniques as state-of-the-art for scalable ANN in applications demanding both efficiency and accuracy.
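
The greedy, beam-limited routing step shared by NSW and HNSW can be sketched on a single-layer k-NN graph (a simplified illustration; real HNSW builds the graph incrementally across multiple layers, whereas the graph here is built by brute force for clarity).
import heapq
import math
import random

def build_knn_graph(points, k=8):
    # Brute-force k-NN graph for illustration: node index -> list of neighbor indices.
    graph = {}
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        graph[i] = [j for _, j in dists[:k]]
    return graph

def greedy_beam_search(graph, points, query, entry=0, beam_width=4):
    # Best-first traversal: expand the closest frontier node, keep a bounded beam of results.
    start = (math.dist(query, points[entry]), entry)
    frontier, beam, visited = [start], [start], {entry}
    while frontier:
        dist, node = heapq.heappop(frontier)
        if dist > max(d for d, _ in beam):
            break  # every remaining frontier node is worse than the whole beam; stop
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                d = math.dist(query, points[nb])
                heapq.heappush(frontier, (d, nb))
                beam = sorted(beam + [(d, nb)])[:beam_width]
    return beam[0][1]  # index of the (approximately) nearest point

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
graph = build_knn_graph(pts)
q = (0.5, 0.5)
approx = greedy_beam_search(graph, pts, q)
exact = min(range(len(pts)), key=lambda i: math.dist(q, pts[i]))
print(approx, exact)  # frequently identical; the graph search is approximate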

Clustering and Compression Approaches

Clustering and compression approaches approximate nearest neighbor search by reducing the dimensionality or size of the dataset while preserving approximate distances, enabling efficient queries on large-scale data. These methods trade off exactness for speed and memory savings, often achieving high recall rates on compressed representations. Key techniques include random projections for dimensionality reduction and product quantization for compression, which can be combined with indexing structures like inverted files to scale to billions of vectors.

Random projections leverage the Johnson-Lindenstrauss lemma, which states that for a set of n points in high-dimensional Euclidean space, there exists a projection into a space of dimension O(\log n / \epsilon^2) that preserves pairwise distances up to a factor of 1 \pm \epsilon with high probability. This reduction facilitates faster nearest neighbor computations in the projected space, as lower dimensions mitigate the curse of dimensionality, though it introduces minor distortion in distance estimates. In practice, random projections using Gaussian matrices are applied before other indexing methods to preprocess high-dimensional embeddings, such as those produced by deep learning models.

Product quantization (PQ) compresses vectors by decomposing the D-dimensional space into m subspaces of dimension D/m, quantizing each subspace independently with a codebook of k^* centroids, resulting in a code of length m \log_2 k^* bits per vector. For nearest neighbor search, distances are approximated using precomputed lookup tables between query subvectors and codebook centroids, avoiding exhaustive computation. Asymmetric distance computation (ADC) enhances accuracy by keeping the query vector unquantized and computing distances only to the quantized database vectors, minimizing quantization error compared to symmetric variants where both are quantized.

Hierarchical clustering builds a tree of centroids using hierarchical k-means, where each level partitions the data into clusters via k-means, forming a multi-level structure for coarse-to-fine partitioning. Queries traverse the tree by finding the nearest centroid at each level, pruning distant branches to limit the search space. In the inverted file (IVF) index, this clustering assigns database vectors to the nearest leaf centroid, creating inverted lists of vectors per cluster; during search, only lists near the query's assigned centroid are probed, often with multi-probe techniques to improve recall. The FAISS library implements IVF with hierarchical k-means for efficient training and traversal on large datasets.

The inverted file with asymmetric distance computation (IVFADC) combines IVF clustering with PQ on the residual vectors in each inverted list, enabling both coarse selection and fine compressed search. This hybrid reduces memory by storing only codes and indices, while queries first identify candidate clusters and then approximate distances within them using the PQ lookup tables. On the SIFT1M dataset (1 million 128-dimensional descriptors), IVFADC with 1024 clusters and 64-bit PQ codes achieves approximately 74% recall@1 while scanning only 1% of the database, balancing compression ratios of up to 16:1 (from 512 bytes to 32 bytes per vector) against recall. These approaches exhibit clear trade-offs: stronger compression (fewer bits or clusters) reduces memory and speeds up search but lowers recall, while increasing the number of probed lists or bits per code improves accuracy at higher computational cost.
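
The asymmetric distance computation at the heart of PQ can be sketched in a few lines of NumPy (a toy quantizer whose codebooks are random data samples rather than k-means centroids, purely for brevity; all sizes below are illustrative).
import numpy as np

def train_pq(data, m=4, k_star=16, seed=0):
    # Toy codebooks: sample k_star "centroids" per subspace (real PQ trains them with k-means).
    rng = np.random.default_rng(seed)
    n, d = data.shape
    sub = d // m
    return [data[rng.choice(n, k_star, replace=False), i * sub:(i + 1) * sub] for i in range(m)]

def encode(data, codebooks):
    # Replace each subvector by the index of its nearest codebook centroid.
    m, sub = len(codebooks), data.shape[1] // len(codebooks)
    codes = np.empty((data.shape[0], m), dtype=np.uint8)
    for i, cb in enumerate(codebooks):
        chunk = data[:, i * sub:(i + 1) * sub]
        codes[:, i] = np.argmin(((chunk[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
    return codes

def adc_search(query, codes, codebooks):
    # Asymmetric distance: the query stays unquantized; distances come from lookup tables.
    m, sub = len(codebooks), len(query) // len(codebooks)
    tables = [((codebooks[i] - query[i * sub:(i + 1) * sub]) ** 2).sum(-1) for i in range(m)]
    approx = sum(tables[i][codes[:, i]] for i in range(m))  # squared-distance estimates
    return int(np.argmin(approx))

rng = np.random.default_rng(1)
base = rng.normal(size=(5000, 64)).astype(np.float32)
codebooks = train_pq(base)
codes = encode(base, codebooks)
print(adc_search(base[123], codes, codebooks))  # usually 123, up to quantization error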
In the 2020s, clustering and compression methods like IVFADC have become foundational in vector databases such as Milvus and Pinecone, supporting billion-scale approximate nearest neighbor search for applications in recommendation systems and semantic retrieval by integrating with distributed storage and GPU acceleration.

Variants

k-Nearest Neighbors

The k-nearest neighbors (k-NN) search is a generalization of nearest neighbor search that identifies the k data points in a dataset closest to a given query point, according to a specified distance metric such as the Euclidean distance, and typically returns these points sorted by increasing distance from the query. This approach allows for retrieving multiple similar instances rather than a single one, which is particularly useful in applications requiring a set of top matches. In classification contexts, the k nearest neighbors vote on the class label for the query, with the majority class assigned as the prediction; this method was formalized as providing an error rate bounded by twice the Bayes error rate under certain conditions.

To adapt existing exact or approximate nearest neighbor search algorithms for k-NN, the search process is modified to maintain a bounded priority queue (often a max-heap of size k) that tracks the k smallest distances encountered, discarding farther points as closer ones are found. For instance, in a brute-force scan, all distances are computed and the heap ensures only the k nearest are retained efficiently. In classification, this retrieved set enables distance-weighted voting schemes, where each neighbor's contribution is inversely proportional to its distance, emphasizing closer points for more accurate decisions; this distance-weighted variant improves performance over uniform voting by reducing the influence of outliers.

The complexity of k-NN search introduces only a marginal increase over single nearest neighbor search, primarily due to the additional O(k log k) cost for sorting the final k results if ordered output is needed, assuming k is much smaller than the dataset size n. For example, in heap-based implementations, inserting n candidates into a k-sized heap yields O(n log k) time overall. The parameter k is typically selected via cross-validation, evaluating accuracy across a grid of k values to find the one minimizing error on held-out data, thereby balancing overfitting (small k) and underfitting (large k).
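
The bounded max-heap adaptation described above can be sketched with the Python standard library (heapq provides a min-heap, so negated distances simulate a max-heap of size k; the toy points are illustrative).
import heapq
import math

def k_nearest(query, points, k=3):
    # Keep a max-heap (negated distances) of the k best candidates seen so far,
    # then return them sorted by increasing distance.
    heap = []  # entries are (-distance, index)
    for i, p in enumerate(points):
        d = math.dist(query, p)
        if len(heap) < k:
            heapq.heappush(heap, (-d, i))
        elif d < -heap[0][0]:              # closer than the current k-th nearest
            heapq.heapreplace(heap, (-d, i))
    return [points[i] for _, i in sorted(heap, reverse=True)]

pts = [(0, 0), (1, 1), (2, 2), (5, 5), (0.5, 0.2)]
print(k_nearest((0, 0), pts, k=2))  # -> [(0, 0), (0.5, 0.2)]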

Fixed-Radius Nearest Neighbors

Fixed-radius nearest neighbor search, also known as radius search or range query with a metric ball, involves retrieving all data points within a predefined radius of a query point. Specifically, given a query point q and a radius r, the task is to find all points p in the dataset such that the distance d(q, p) \leq r. This variant differs from standard nearest neighbor search by imposing a distance bound rather than a fixed number of closest points, making it suitable for scenarios where the number of relevant neighbors is variable and context-dependent.

Existing data structures can be adapted for fixed-radius queries to improve efficiency. In space partitioning trees like k-d trees, branches can be pruned during traversal if their bounding regions lie entirely outside the query ball of radius r, avoiding unnecessary exploration of irrelevant subspaces. For approximate methods such as locality-sensitive hashing (LSH), the hash functions are tuned so that the collision probability is high for pairs at distance less than r and low for those exceeding r, allowing buckets to concentrate potential neighbors.

This search paradigm finds applications in density estimation and physical simulation. In density estimation, fixed-radius queries enable uniform kernel methods by counting neighbors within r to approximate local data density, as seen in clustering algorithms like DBSCAN, where the radius defines core points based on neighborhood size. In computer graphics and physics, particularly for collision detection and particle simulations, it identifies all objects or particles within interaction range r of a query position, facilitating efficient pairwise checks without examining the entire dataset.

The time complexity of fixed-radius nearest neighbor search is output-sensitive, typically O(\log n + m) for tree-based structures in low dimensions, where n is the dataset size and m is the number of points returned. Preprocessing often requires O(n \log n) time to build the index. A key trade-off is the potential for early termination once all points within r are found, which can yield sublinear query times when m is small relative to n. However, in the worst case, if r is large enough to encompass the entire dataset, the complexity degrades to O(n), necessitating careful radius selection or hybrid approaches for robustness.
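
In practice, such queries are typically answered with a prebuilt spatial index; the following short sketch (assuming SciPy is available) uses scipy.spatial.cKDTree, whose query_ball_point call returns the indices of all points within a given radius.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((10_000, 3))          # 3D points in the unit cube
tree = cKDTree(points)                    # O(n log n) preprocessing

query = np.array([0.5, 0.5, 0.5])
idx = tree.query_ball_point(query, r=0.05)   # all points with d(q, p) <= r
print(len(idx), "points inside the radius")

# The output size m is data dependent: a larger radius returns proportionally more
# points, which is exactly the output-sensitive O(log n + m) behaviour noted above.
print(len(tree.query_ball_point(query, r=0.10)))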

All-Nearest Neighbors

All-nearest neighbors (ANN) search refers to the problem of determining, for each point p in a set P of n points in d-dimensional space, the nearest neighbor of p among all other points in P \setminus \{p\}, typically under the Euclidean metric. This batched variant of nearest neighbor search processes every point in P as a query, producing a directed nearest neighbor graph in which each point points to its closest counterpart.

The naive approach computes an all-pairs distance matrix, requiring O(n^2) time and space, which becomes impractical for large n. Exact algorithms improve upon this using spatial partitioning; for instance, Vaidya's divide-and-conquer method achieves O(n \log n) time in any fixed dimension by recursively partitioning the point set and merging nearest neighbor information across subproblems. Other techniques, such as Clarkson's randomized algorithm and approaches based on well-separated pair decompositions, also attain O(n \log n) expected time. Graph-based optimizations can further accelerate computation by constructing approximate nearest neighbor graphs and traversing them to identify candidates, often leveraging existing proximity structures.

ANN search finds applications in graph construction, where the resulting nearest neighbor graph serves as a foundation for clustering algorithms like mutual nearest neighbors, in which pairs of points are reciprocally closest and form dense blocks for clusters of arbitrary shape. It is also employed in spatial analysis, such as in geographic information systems (GIS) for proximity-based pattern detection, including assessing clustering or dispersion of features like urban infrastructure or ecological sites.

Efficiency varies by method: exact algorithms with spatial indexes like k-d trees or Voronoi diagrams enable O(n \log n) time while using O(n) space for the output graph, contrasting with the naive method's quadratic space overhead. Approximate variants address scalability for high-dimensional or massive datasets; for example, a simple randomized projection scheme finds all nearest neighbors in expected O(n^{3/2}) time by projecting points onto a random line and checking local vicinities, with extensions to k-nearest neighbors in O(kn^{3/2}) comparisons. These approximations are particularly useful for large-scale k-NN graph construction in machine learning pipelines.
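
With a tree index, the all-nearest-neighbors problem reduces to querying each point for its two closest points and dropping the point itself; the sketch below (assuming SciPy is available) does this and additionally extracts mutual nearest-neighbor pairs.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((1000, 2))
tree = cKDTree(points)

# k=2 because the closest hit for each point is the point itself (distance 0).
dists, idx = tree.query(points, k=2)
nn = idx[:, 1]          # index of each point's nearest other point
nn_dist = dists[:, 1]   # the corresponding distance

# Mutual nearest neighbors: pairs (i, j) with j = NN(i) and i = NN(j).
mutual = [(i, j) for i, j in enumerate(nn) if nn[j] == i and i < j]
print(len(mutual), "mutual nearest-neighbor pairs")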

Reverse Nearest Neighbors

Reverse nearest neighbor (RNN) search is a variant of nearest neighbor queries that identifies the set of points in a dataset P for which a given query point q is the nearest neighbor. Formally, the RNN set for q is defined as \{p \in P \mid \forall r \in P \setminus \{p\}, \dist(p, q) \leq \dist(p, r)\}, where \dist denotes the distance metric, such as the Euclidean distance. This can be extended to the top-k reverse nearest neighbors (RkNN), which finds points p such that q is among the k closest points to p. Unlike standard nearest neighbor search, RNN emphasizes the "influence" of q over the dataset, reversing the query direction.

Algorithms for RNN search often distinguish between monochromatic (query and dataset from the same set) and bichromatic (query from a separate set Q, dataset P) cases, with bichromatic queries being computationally simpler as they avoid self-comparisons. Seminal approaches use spatial indexes like R-trees for efficient processing via a filter-refinement framework: the filter step prunes irrelevant regions using bounding rectangles or Voronoi cell approximations, while refinement verifies candidates through exact distance computations. Branch-and-bound techniques traverse the index tree, bounding subtrees based on distance thresholds to avoid full scans, and skyline methods identify non-dominated points to further prune the search space. For dynamic datasets, updates to the index maintain query efficiency without recomputing all distances.

RNN queries find applications in location-based services, such as identifying customers for whom a particular store is the closest facility, enabling targeted marketing or facility planning in geographic information systems (GIS). In influence maximization, especially in geo-social networks, RNN helps select points of interest that maximize reach to users who consider them among their top-k nearest options, supporting tasks such as advertising placement or service planning. Post-2010 research has increasingly focused on spatial databases, incorporating obstacles, network constraints, and continuous monitoring for real-time GIS applications.

Exact RNN algorithms typically require O(n \log n) preprocessing time for index construction, with query times ranging from O(\sqrt{n} + m) to O(n) in the worst case depending on the index and dimensionality, where m is the output size. Approximations improve scalability in high dimensions or for large datasets via sampling techniques, such as estimating influence regions by sampling dataset points to approximate Voronoi cells, achieving sublinear query times with probabilistic guarantees on accuracy.
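
A brute-force sketch makes the definition concrete (O(n^2) comparisons, intended only to illustrate the semantics; the store coordinates are a hypothetical example): a point p belongs to the reverse nearest neighbors of q exactly when no other dataset point is closer to p than q is.
import math

def reverse_nearest_neighbors(q, points):
    # Return every p in points whose nearest neighbor, among points and q, is q.
    result = []
    for i, p in enumerate(points):
        d_to_q = math.dist(p, q)
        if all(math.dist(p, r) >= d_to_q for j, r in enumerate(points) if j != i):
            result.append(p)
    return result

# Hypothetical example: which existing stores would have a new store as their closest neighbor?
stores = [(0, 0), (4, 0), (10, 0)]
new_store = (2, 0)
print(reverse_nearest_neighbors(new_store, stores))  # -> [(0, 0), (4, 0)]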

Evaluation and Implementation

Algorithmic Complexity

The algorithmic complexity of nearest neighbor search varies significantly between exact and approximate methods, influenced by factors such as dataset size n, dimensionality d, and the specific technique employed. For exact methods, the brute-force approach computes distances to all n points for each query, yielding a query time complexity of O(nd) and space complexity of O(nd), with no preprocessing required. This baseline is straightforward but scales poorly for large n or high d. Tree-based exact methods, such as the k-d tree, improve average-case performance through spatial partitioning: preprocessing requires O(n \log n) time to build the structure, while query time is O(\log n) on average in low dimensions, though it degrades to O(n) in the worst case, particularly as d increases due to the curse of dimensionality, where the number of regions visited can grow exponentially as O(2^d). Space complexity remains O(n) for these structures. Lower bounds for exact nearest neighbor search confirm the inherent challenges, with worst-case query time \Omega(n) in general models, even with preprocessing, underscoring that linear-time queries cannot be avoided in adversarial cases.

Approximate methods trade precision for efficiency, often providing sublinear query times with probabilistic recall guarantees (e.g., returning neighbors within a factor of (1+\epsilon) with high probability). Locality-sensitive hashing (LSH) exemplifies this: preprocessing time is O(d n^{1+\rho}) and space is O(d n^{1+\rho}), where \rho < 1 depends on the approximation factor and distance distribution, while query time achieves a sublinear O(d n^\rho \log n), mitigating the curse of dimensionality by avoiding exhaustive searches but incurring exponential dependence on d in space terms like O((1+\epsilon)^d n) for some constructions. Graph-based techniques, such as Hierarchical Navigable Small World (HNSW) graphs, offer further improvements for approximate search: preprocessing is O(n \log n) time and O(n) space, with query time O(\log n) under typical parameter settings, largely independent of d in practice though influenced by graph connectivity factors like \log d in navigation steps.

These sublinear scalings (O(n^\rho) for LSH and O(\log n) for HNSW) enable efficient handling of high-dimensional data, where exact methods falter due to volume concentration effects that cause distances between random points to cluster around a common value that grows with d. Overall, dimensionality d amplifies complexities across methods, adding \log d factors in tree traversals or exponential terms in partitioning efficiency, and necessitating dimension reduction or approximation for scalability beyond low d.

Benchmarking and Modern Libraries

Benchmarking approximate nearest neighbor (ANN) search algorithms typically involves evaluating their performance on standard datasets using metrics such as recall@k, which measures the fraction of true k-nearest neighbors retrieved, and queries per second (QPS), which assesses throughput under latency constraints. Common datasets include SIFT1M, consisting of 1 million 128-dimensional SIFT descriptors from image features, and GIST1M, with 1 million 960-dimensional GIST descriptors from image textures, both widely used evaluation corpora for high-dimensional vector search. These datasets enable standardized comparisons of accuracy-speed trade-offs across methods.

The ANN-Benchmarks project, initiated in 2017, provides an open-source framework for reproducible evaluations of in-memory ANN libraries, focusing on recall versus QPS curves for various algorithms and hardware configurations. It has benchmarked dozens of implementations on datasets like SIFT1M and GIST1M, revealing that graph-based methods often achieve higher recall at moderate QPS compared to hashing techniques, while product quantization variants excel in memory efficiency for large-scale deployments.

Modern libraries for ANN search include FAISS, developed by Meta (formerly Facebook) AI Research since 2017, which supports product quantization (PQ) for compression and hierarchical navigable small world (HNSW) graphs for indexing, along with GPU acceleration for billion-scale datasets. Annoy, maintained by Spotify since 2013 with ongoing updates, uses random projection trees for fast approximate searches in high dimensions, emphasizing simplicity and mmap-based loading for low-latency queries. HNSWlib, an open-source C++ library implementing the HNSW algorithm, offers a header-only implementation with Python bindings and supports dynamic insertions, making it suitable for real-time applications with high recall requirements.

Recent advances from 2020 to 2025 have integrated ANN libraries with transformer-based embedding models, enabling efficient semantic search over text and multimodal data; for instance, Pinecone's managed vector database leverages HNSW indexing for approximate nearest neighbor retrieval of embeddings generated by models like Sentence Transformers from Hugging Face. To illustrate performance differences, the following table summarizes recall@10 versus QPS results from the Big ANN Benchmarks on 1-billion vector datasets (e.g., BIGANN-1B, built from SIFT descriptors), comparing a quantization-based baseline (FAISS-CPU) against a graph-based method (DiskANN). These evaluations highlight graph methods' superior recall at the cost of lower throughput on CPU hardware.
Method      Type      Dataset     Recall@10   QPS (at target recall)
FAISS-CPU   IVF/PQ    BIGANN-1B   0.634       10,000
DiskANN     Graph     BIGANN-1B   0.949       1,500
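
Recall@k itself is simple to compute once exact ground truth is available; the following NumPy sketch (entirely synthetic data, with an artificial "approximate" result created by perturbing the ground truth) shows the metric reported in the table above.
import numpy as np

def recall_at_k(approx_ids, true_ids, k=10):
    # Fraction of the true k nearest neighbors that the approximate method returned.
    hits = sum(len(set(a[:k]) & set(t[:k])) for a, t in zip(approx_ids, true_ids))
    return hits / (k * len(true_ids))

rng = np.random.default_rng(0)
base = rng.normal(size=(2000, 32)).astype(np.float32)
queries = rng.normal(size=(100, 32)).astype(np.float32)

# Brute-force ground truth: indices of the 10 smallest distances per query.
d2 = ((queries[:, None, :] - base[None]) ** 2).sum(-1)
true_ids = np.argsort(d2, axis=1)[:, :10]

# Stand-in "approximate" answers: ground truth with two entries per query replaced at random.
approx_ids = true_ids.copy()
approx_ids[:, -2:] = rng.integers(0, len(base), size=(100, 2))
print(recall_at_k(approx_ids, true_ids))  # roughly 0.8 for this synthetic perturbation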

References

  1. [1]
    (PDF) A Survey on Nearest Neighbor Search Methods - ResearchGate
    Aug 7, 2025 · In this paper, by opening a new view to this problem, a comprehensive evaluation on structures, techniques and different algorithms in this field is done.
  2. [2]
    Enhancing K-nearest neighbor algorithm: a comprehensive review ...
    Aug 11, 2024 · This paper presents a comprehensive review and performance analysis of modifications made to enhance the exact kNN techniques.
  3. [3]
    Introduction to machine learning: k-nearest neighbors - PMC
    The article introduces some basic ideas underlying the kNN algorithm, and then focuses on how to perform kNN modeling with R. The dataset should be prepared ...
  4. [4]
    [PDF] Very Large Scale Nearest Neighbor Search - LIACS Leiden University
    Similarity or more precisely nearest neighbor searches are thus crucial in the analysis, indexing and utilization of these massive multimedia databases. In this ...Missing: motivation | Show results with:motivation
  5. [5]
    K-nearest neighbor - Scholarpedia
    Feb 21, 2009 · In an unpublished US Air Force School of Aviation Medicine report in 1951, Fix and Hodges introduced a non-parametric method for pattern ...
  6. [6]
    [PDF] Near Neighbor Search in Large Metric Spaces - VLDB Endowment
    the triangle inequality (see Section 3). Hence metric spaces are a very general concept and can be applied to vectors (for example, under Euclidean distance) as.
  7. [7]
    [PDF] Pattern Recognition and Machine Learning - Microsoft
    A companion volume (Bishop and Nabney,. 2008) will deal with practical aspects of pattern recognition and machine learning, and will be accompanied by Matlab ...
  8. [8]
    [PDF] Distance Metric Learning: A Comprehensive Survey
    May 19, 2006 · Abstract. Many machine learning algorithms, such as K Nearest Neighbor (KNN), heav- ily rely on the distance metric for the input data ...
  9. [9]
    [PDF] Chapter 3 - Finding Similar Items - Stanford InfoLab
    9: Prove that the edit distance discussed in Exercise 3.5.8 is indeed a distance measure. Exercise 3.5.10: Find the Hamming distances between each pair of the ...
  10. [10]
    The Earth Mover's Distance as a Metric for Image Retrieval
    We investigate the properties of a metric between two distributions, the Earth Mover's Distance (EMD), for content-based image retrieval.
  11. [11]
    [PDF] 19960016399.pdf - NASA Technical Reports Server (NTRS)
    Space and the Curse of Dimensionality. The difficulty of dimensionality has ... (2) The volume of a hypersphere concentrates in the outside shell. The ...
  12. [12]
    [PDF] A Cost Model For Nearest Neighbor Search in High-Dimensional ...
    In this paper, we present a new cost model for nearest neigh- bor search in high-dimensional data space. We first analyze different nearest neighbor algorithms, ...
  13. [13]
    [PDF] On the Difficulty of Nearest Neighbor Search - Google Research
    Approximate nearest neighbors: towards removing the curse of dimen- sionality. In STOC, 1998. Liu, T., Moore, A.W., Gray, A., and Yang, K. An in ...
  14. [14]
    Nearest neighbor pattern classification | IEEE Journals & Magazine
    The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points.
  15. [15]
    [PDF] A Density-Based Algorithm for Discovering Clusters in Large Spatial ...
    In this paper, we present the new clustering algorithm DBSCAN relying on a density-based notion of clusters which is designed to dis- cover clusters of ...
  16. [16]
    Efficient algorithms for mining outliers from large data sets
    In this paper, we propose a novel formulation for distance-based outliers that is based on the distance of a point from its kth nearest neighbor.
  17. [17]
    None
    ### Performance Results of k-NN on MNIST Dataset (Accuracy Metrics)
  18. [18]
    [PDF] 1992-The Feature Selection Problem: Traditional Methods and a ...
    We then introduce and theoretically examine a new algorithm Relief which selects relevant features using a statistical method. Relief does not depend on ...
  19. [19]
    [PDF] arXiv:2002.05709v3 [cs.LG] 1 Jul 2020
    Jul 1, 2020 · This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive ...
  20. [20]
    A nearest-neighbor approach to relevance feedback in content ...
    Each image is ranked according to a relevance score depending on nearest-neighbor distances. This approach is proposed both in low-level feature spaces, and in ...
  21. [21]
    Medical Image Retrieval via Nearest Neighbor Search on Pre ...
    Nearest neighbor search, also known as NNS, is a technique used to locate the points in a high-dimensional space closest to a given query point.
  22. [22]
    PostGIS distance operator <->
    The <-> operator returns the 2D distance between two geometries. Used in the "ORDER BY" clause provides index-assisted nearest-neighbor result sets.
  23. [23]
    A Deep Dive into PostGIS Nearest Neighbor Search - Crunchy Data
    Apr 27, 2020 · PostGIS uses this framework and the R-tree spatial index to provide excellent performance for nearest neighbour queries on its spatial data ...
  24. [24]
    Adaptive KNN-Based Extended Collaborative Filtering ... - MDPI
    May 31, 2023 · In this paper, we enhance the K-nearest neighbor (KNN)-based collaborative filtering algorithm for a recommendation system, by considering the similarity of ...
  25. [25]
    (PDF) Recommendation system using the k-nearest neighbors and ...
    We are going to apply two algorithms: k-nearest neighbours (KNN) and the matrix factorization algorithm of collaborative filtering which are based on the ...
  26. [26]
    Introducing approximate nearest neighbor search in Elasticsearch 8.0
    Feb 7, 2022 · Elasticsearch 8.0 improves the scalability of vector search with the introduction of fast approximate nearest neighbor (ANN) search.
  27. [27]
    Spanner now supports Approximate Nearest Neighbor (ANN) search
    Aug 8, 2024 · Let us walk you through the details of the innovations in Spanner that deliver vector search at scale. Spanner leverages ScaNN (Scalable Nearest ...
  28. [28]
    Accurate and Efficient Metadata Filtering in Pinecone's Serverless ...
    This paper presents the design of metadata filtering in Pinecone's serverless vector database, which achieves high accuracy by integrating filtering into the ...
  29. [29]
    [PDF] Milvus: A Purpose-Built Vector Data Management System
    Mar 25, 2021 · This paper presents Milvus, a purpose-built data management system to efficiently manage large-scale vector data. Milvus supports easy-to ...
  30. [30]
    [PDF] The Ball-Pivoting Algorithm for Surface Reconstruction
    By pivoting a ball of a fixed radius around different edges in a point cloud, a mesh can be efficiently reconstructed. The algorithm adds a triangle when ...
  31. [31]
  32. [32]
    RTNN: accelerating neighbor search using hardware ray tracing
    Mar 28, 2022 · This paper proposes to formulate neighbor search as a ray tracing problem and leverage the dedicated ray tracing hardware in recent GPUs for acceleration.
  33. [33]
    [PDF] Nearest Neighbor Search: the Old, the New, and the Impossible
    Sep 4, 2009 · The Nearest Neighbor (NN) problem preprocesses objects so that given a query, the most similar data object can be found efficiently.
  34. [34]
    [PDF] SRS: Solving c-Approximate Nearest Neighbor Queries in High ...
    The brute-force linear scan algorithm has a trivial query time O(dn). [47]'s query complexity is O(dn^{1−ε_d}), but ε_d goes to 0 rapidly with d. [4] ...
  35. [35]
    [PDF] An Investigation of Practical Approximate Nearest Neighbor Algorithms
    Typically for MT-DFS, we observe an order of magnitude speed-up over naïve linear scan and other popular data structures such as SR-trees. However, MT-DFS ...
  36. [36]
  37. [37]
    An Algorithm for Finding Best Matches in Logarithmic Expected Time
    This paper presents an algorithm for finding best matches in logarithmic expected time, authored by Jerome H. Friedman, Jon Louis Bentley, and Raphael Ari Finkel.
  38. [38]
    Multidimensional binary search trees used for associative searching
    Sep 1, 1975 · This paper develops the multidimensional binary search tree (or k-d tree, where k is the dimensionality of the search space) as a data ...
  39. [39]
    [PDF] Five Balltree Construction Algorithms - Steve Omohundro
    Abstract. Balltrees are simple geometric data structures with a wide range of practical applications to geometric learning tasks.
  40. [40]
    Quad trees a data structure for retrieval on composite keys
    The quad tree is a data structure for storing information to be retrieved on composite keys, especially for two-dimensional retrieval.
  41. [41]
    [PDF] Geometric Modeling Using Octree Encoding
    A geometric modeling technique called Octree Encoding is presented. Arbitrary 3-D objects can be represented to any specified resolution in a hierarchical ...
  42. [42]
    [PDF] Locality-Sensitive Hashing Scheme Based on p-Stable Distributions
    ABSTRACT. We present a novel Locality-Sensitive Hashing scheme for the Approximate Nearest Neighbor Problem under the l_p norm, based on p-stable distributions.
  43. [43]
    A Locality Sensitive Hashing Based Approach for Federated ...
    In this paper, we propose a Locality Sensitive Hashing (LSH) based approach for federated recommender system.
  44. [44]
    Efficient and robust approximate nearest neighbor search using ...
    Mar 30, 2016 · We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, ...
  45. [45]
    Approximate nearest neighbor algorithm based on navigable small ...
    We propose a novel approach to solving the approximate k-nearest neighbor search problem in metric spaces. The search structure is based on a navigable small ...
  46. [46]
    Faiss: A library for efficient similarity search - Engineering at Meta
    Mar 29, 2017 · Many indexing libraries exist for around 1 million vectors, which we call small scale. For example, nmslib contains very efficient algorithms ...
  47. [47]
    [PDF] Product quantization for nearest neighbor search - l'IRISA
    Abstract— This paper introduces a product quantization based approach for approximate nearest neighbor search. The idea is to decompose the space into a ...
  48. [48]
    [PDF] The Faiss library - arXiv
    Faiss implements two non-exhaustive search approaches that operate at different memory vs. speed trade-offs: inverted file and graph-based.
  49. [49]
    [PDF] Efficient Distributed Algorithms for the K-Nearest Neighbors Problem
    The K-nearest neighbors is a basic problem in machine learning with numerous applications. In this problem, given a (training) set of n data points with ...
  50. [50]
    [PDF] Formally Verified k-d Tree Construction and Search in Coq - arXiv
    Nov 18, 2023 · Generalizing the search to K-nearest neighbors involves maintaining a K-bounded max priority queue instead of a single closest known point.
  51. [51]
    (PDF) Adaptive k -Nearest-Neighbor Classification Using a Dynamic ...
    Aug 7, 2025 · PDF | Classification based on k-nearest neighbors (kNN classification) is one of the most widely used classification methods.
  52. [52]
  53. [53]
    Improving the speed and stability of the k-nearest neighbors method
    The complexity of sorting is O(n log n), where n is the number of data ... Here we point out that with the exception of the first k closest neighbors, sorting the ...
  54. [54]
    [PDF] parallel algorithms for nearest neighbor search problems ... - PADAS
    We propose two parallel algorithms for the direct calculation of the KNN problem. The first one prioritizes computing time over memory by replicating the query ...
  55. [55]
    1.6. Nearest Neighbors — scikit-learn 1.7.2 documentation
    For dense matrices, a large number of possible distance metrics are supported. For sparse matrices, arbitrary Minkowski metrics are supported for searches.
  56. [56]
    Fast and exact fixed-radius neighbor search based on sorting - PeerJ
    Mar 29, 2024 · Fixed-radius near neighbor search is a fundamental data operation that retrieves all data points within a user-specified distance to a query ...
  57. [57]
    [PDF] exact fixed-radius nearest neighbor search - arXiv
    Dec 15, 2022 · SNN is a new fixed-radius neighbor search method that uses sorting to prune the search space, returning exact results with no parameter tuning.
  58. [58]
    [PDF] Efficient Radius Neighbor Search in Three-dimensional Point Clouds
    In robotics, kD-trees [9] and octrees [10] are widely adopted to search for nearest neighbors in three- dimensional data and research concentrated on memory- ...
  59. [59]
    [PDF] Optimizing Search Strategies in k-d Trees
    Nearest neighbor searches generally begin with the construction of a “clipping window,” which defines a region of the tree space to search. For example, a three ...
  60. [60]
    [1607.04800] Collision detection or nearest-neighbor search? On ...
    Jul 16, 2016 · Abstract: The complexity of nearest-neighbor search dominates the asymptotic running time of many sampling-based motion-planning algorithms.
  61. [61]
    [PDF] ANN Programming Manual - UMD Department of Computer Science
    Fixed-radius k-Nearest Neighbor Search: This member function is a modification of the above procedure, which searches for up to k nearest neighbors, but ...
  62. [62]
  63. [63]
  64. [64]
    [PDF] Fast Approximate kNN Graph Construction for High Dimensional ...
    Nearest neighbor graphs are widely used in data mining and machine learning. A brute-force method to compute the exact kNN graph takes Θ(dn^2) time for n data ...
  65. [65]
    [PDF] Finding All Nearest Neighbors with a Single Graph Traversal
    An important variant of nearest neighbor query is the all nearest neighbor (ANN) query, which reports all nearest neighbors for a given set of query objects.
  66. [66]
    Clustering algorithm based on mutual K‐nearest neighbor ...
    Feb 9, 2012 · We present here the notion of a pair of points being mutual K-nearest neighbors (MKNN), and based on the idea of MKNN we develop our clustering ...
  67. [67]
    [PDF] All-Nearest-Neighbors Queries in Spatial Databases
    Given two sets A and B of multidimensional objects, the all-nearest-neighbors (ANN) query retrieves for each object in A its nearest neighbor in B. Although ...
  68. [68]
    A Simple Randomized Algorithm for All Nearest Neighbors
    A Simple Randomized Algorithm for All Nearest Neighbors. Soroush Ebadian, Hamid Zarrabi-Zadeh. Published: 31 Dec 2018, Last Modified: 12 Jul 2024. CCCG 2019 ...
  69. [69]
    Nearest Neighbor Graph Construction on High-Dimensional Data ...
    Dec 4, 2021 · The k-nearest neighbor graph (KNNG) on high-dimensional data is a data structure widely used in many applications such as similarity search, dimension ...
  70. [70]
    [PDF] Influence Sets Based on Reverse Nearest Neighbor Queries
    In this paper, we formalize a novel notion of influence based on reverse neighbor queries and its variants. Since the nearest neighbor relation is not symmetric ...
  71. [71]
    [PDF] Reverse kNN Search in Arbitrary Dimensionality
    This paper focuses on conventional (i.e., monochromatic) reverse nearest neighbor queries. In addition to single RNN search, we deal with reverse k nearest ...
  72. [72]
  73. [73]
    Maximizing the Influence of Bichromatic Reverse k Nearest ... - arXiv
    Apr 21, 2022 · We explore a new problem, called Maximizing the Influence of Bichromatic Reverse k Nearest Neighbors (MaxInfBRkNN).
  74. [74]
    Summary of Time and Space Complexities for Approximate Nearest Neighbor Search Using LSH
  75. [75]
    [PDF] Nearest Neighbor Search and the Curse of Dimensionality
    Nearest neighbor search is a fundamental computational building block in computer vision, graphics, data mining, machine learning, and many other subfields.
  76. [76]
    [PDF] A Lower Bound on the Complexity of Approximate Nearest-Neighbor ...
    In this context it is worth mentioning that a stronger lower bound for exact nearest neighbor search is now known. Recent results of Borodin et al. [8] show ...
  77. [77]
    ANN-Benchmarks
    ANN-Benchmarks is a benchmarking environment for approximate nearest neighbor search algorithms. This website contains the current benchmarking results.
  78. [78]
    Evaluation of Approximate nearest neighbors: large datasets
    This page provides several evaluation sets to evaluate the quality of approximate nearest neighbors search algorithm on different kinds of data and varying ...
  79. [79]
    A Benchmarking Tool for Approximate Nearest Neighbor Algorithms
    Jul 15, 2018 · Abstract:This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms.
  80. [80]
    Benchmarks of approximate nearest neighbor libraries in Python
    This project contains tools to benchmark various implementations of approximate nearest neighbor (ANN) search for selected metrics.
  81. [81]
    spotify/annoy: Approximate Nearest Neighbors in C++/Python ...
    Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point.
  82. [82]
    nmslib/hnswlib: Header-only C++/python library for fast approximate ...
    Hnswlib - fast approximate nearest neighbor search. Header-only C++ HNSW implementation with python bindings, insertions and updates.
  83. [83]
    Sentence Transformers: Meanings in Disguise - Pinecone
    The dense embeddings created by transformer models are so much richer in information that we get massive performance benefits despite using the same final ...
  84. [84]
    Billion-Scale Approximate Nearest Neighbor ... - Big ANN Benchmarks
    In these deployment scenarios, benchmarks such as cost, preprocessing time, power consumption become just as important as the recall-vs-latency tradeoff.