
k-nearest neighbors algorithm

The k-nearest neighbors (k-NN) algorithm is a non-parametric, instance-based supervised method used for both classification and regression tasks. In classification, it assigns a label to a new data point by identifying the k closest training examples based on a distance metric, such as the Euclidean distance, and selecting the majority class among them; for regression, it predicts a continuous value by averaging the targets of those k neighbors. The algorithm is "lazy," meaning it performs no explicit training phase and stores the entire dataset, deferring computation until prediction time. Originally proposed by Evelyn Fix and Joseph Hodges in 1951 as a non-parametric approach to pattern classification, the k-NN method gained prominence through the work of Thomas M. Cover and Peter E. Hart in 1967, who analyzed its probabilistic error bounds and showed that the 1-NN classifier's asymptotic error rate is at most twice the Bayes optimal error rate as the sample size grows large. Cover and Hart extended the analysis to multi-class settings, proving that the error probability R satisfies R* ≤ R ≤ R*(2 - M R*/(M-1)), where R* is the Bayes error and M is the number of classes, establishing k-NN's consistency and admissibility among similar nonparametric rules. This theoretical guarantee underscores its reliability: in the large-sample limit, the single nearest neighbor carries at least half of the classification information available to the optimal classifier. Key hyperparameters include the value of k, which controls smoothness—small k (e.g., 1) captures fine-grained patterns but risks overfitting to noise, while larger k provides robustness but may underfit by smoothing boundaries—and the choice of distance metric, often requiring feature normalization to handle varying scales. The algorithm excels in forming complex, data-adaptive decision boundaries and handling multimodal distributions without assuming underlying data distributions, making it suitable for applications like image recognition, recommendation systems, and bioinformatics. However, it suffers from high computational cost (O(N d) per query for N samples in d dimensions), sensitivity to irrelevant features and the curse of dimensionality, and vulnerability to noisy data, often necessitating techniques like dimensionality reduction or approximate nearest neighbor search for scalability.

Fundamentals

Statistical Setting

The k-nearest neighbors (k-NN) algorithm operates within the framework of statistical classification, where the primary goal is to classify an unobserved feature vector \mathbf{x} into one of several classes by estimating the posterior probability P(y \mid \mathbf{x}), with y denoting the class label. This estimation addresses the challenge of inferring the conditional distribution from a finite sample of points \{(\mathbf{x}_i, y_i)\}_{i=1}^n, drawn independently from an underlying joint distribution over the feature space and labels. The approach treats classification as a problem of local estimation in the feature space, without assuming a parametric form for the underlying distribution. Geometrically, the data points are embedded in a p-dimensional feature space, where proximity is measured by a distance metric, such as the Euclidean distance, to identify the k nearest neighbors to a query point \mathbf{x}. These neighbors represent the most similar observations in the training set, under the intuition that points close in feature space are likely to share the same class label. This local geometry underpins the algorithm's nonparametric nature, relying on the empirical distribution of the data rather than a global model. The statistical setting of k-NN typically assumes that the training features \mathbf{x}_i are independent and identically distributed (i.i.d.) from a probability density function f(\mathbf{u}) over the feature space, with class-conditional densities f(\mathbf{x} \mid y) and possibly uniform or known prior probabilities P(y) across classes; feature independence is not strictly required but simplifies analysis unless specified otherwise. The true regression function, such as the conditional expectation E[y \mid \mathbf{x}] for binary classification, is assumed to be continuous or piecewise constant. In this context, the Bayes optimal classifier serves as the theoretical benchmark, assigning \mathbf{x} to the class \hat{y} = \arg\max_y P(y \mid \mathbf{x}) to minimize the expected misclassification risk, achieving the lowest possible error rate known as the Bayes error. k-NN approximates this optimum nonparametrically by leveraging the locality of the data, converging to the Bayes classifier as the sample size n \to \infty and k is chosen appropriately. The core estimate in k-NN for the conditional probability is the proportion of neighbors belonging to class y: \hat{P}(y \mid \mathbf{x}) \approx \frac{1}{k} \sum_{j=1}^k \mathbb{I}(y_j = y), where \mathbb{I}(\cdot) is the indicator function, and y_1, \dots, y_k are the labels of the k nearest neighbors to \mathbf{x}. The predicted class is then \hat{y} = \arg\max_y \hat{P}(y \mid \mathbf{x}). This empirical approximation captures local variations in the class-conditional densities, providing a flexible alternative to parametric methods when the true distribution is unknown or complex.
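The local estimate \hat{P}(y \mid \mathbf{x}) described above can be computed directly from a labeled sample. The following sketch (a hypothetical helper using Euclidean distance and NumPy, not a library routine) returns the empirical class proportions among the k nearest neighbors of a query point:

import numpy as np

def knn_posterior(query, X, y, k):
    # X: (n, d) feature matrix; y: length-n integer labels; returns {class: estimate}
    dists = np.linalg.norm(X - query, axis=1)      # Euclidean distances to all points
    nearest = np.argsort(dists)[:k]                # indices of the k closest points
    labels, counts = np.unique(y[nearest], return_counts=True)
    return dict(zip(labels, counts / k))           # empirical class proportions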

Algorithm Description

The k-nearest neighbors (k-NN) algorithm is a non-parametric, instance-based method for classification that operates by storing the entire training dataset and making predictions for new query points based on the labels of their closest neighbors in the feature space. The core procedure involves two main phases: during training, the algorithm simply memorizes the labeled examples without any explicit model building, resulting in trivial storage of the training data. For prediction on a query point, it computes distances from the query to all training points, identifies the k nearest ones, and assigns the label that appears most frequently among those neighbors via majority voting. The standard k-NN classification procedure can be outlined as follows, written here as runnable Python:
def knn_classify(query_point, training_data, k, distance_metric):
    # training_data is a sequence of (features, label) pairs
    # Compute distances from the query to all training points
    distances = []
    for features, label in training_data:
        dist = distance_metric(query_point, features)
        distances.append((dist, label))

    # Sort by distance (ascending) and select the k smallest
    distances.sort(key=lambda pair: pair[0])
    k_nearest = distances[:k]

    # Majority vote on the labels of the k nearest neighbors
    label_counts = {}
    for _, label in k_nearest:
        label_counts[label] = label_counts.get(label, 0) + 1
    predicted_label = max(label_counts, key=label_counts.get)  # class with highest count

    return predicted_label
This assumes a distance metric such as the Euclidean distance and handles the basic retrieval and voting mechanics. In cases where ties occur during majority voting (i.e., multiple classes have the same maximum count), common resolutions include random selection among tied classes or, less frequently, reducing k incrementally until a unique majority emerges; implementations often favor odd values of k to minimize such ties. To illustrate, consider a simple dataset with three classes (A, B, C) and training points: (1,1) labeled A, (2,2) labeled A, (4,3) labeled B, (5,1) labeled B, and (3,4) labeled C. For a query point at (3,2) with k=3 using Euclidean distance, the distances are approximately 2.24 to (1,1), 1.00 to (2,2), 1.41 to (4,3), 2.24 to (5,1), and 2.00 to (3,4). The three nearest neighbors are (2,2) A, (4,3) B, and (3,4) C; this results in one neighbor from each class, leading to a tie. In such cases, tie-breaking rules such as random selection among the tied classes may be applied. The time complexity for a single query in the naive implementation is O(n d), where n is the number of training points and d is the feature dimensionality, due to distance computations across the full dataset; training requires only O(n d) for storage. The choice of k influences prediction smoothness, with larger k reducing variance but potentially increasing bias.
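As a usage sketch, the worked example above can be run through the knn_classify function defined earlier (the euclidean helper and data layout below are illustrative assumptions, not part of the original description):

import math

training_data = [
    ((1, 1), "A"), ((2, 2), "A"),
    ((4, 3), "B"), ((5, 1), "B"),
    ((3, 4), "C"),
]

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# The three nearest neighbors to (3, 2) are (2,2) A, (4,3) B, and (3,4) C -- one per
# class -- so the outcome depends on the tie-breaking rule; here max() simply keeps
# the first label reaching the highest count.
print(knn_classify((3, 2), training_data, k=3, distance_metric=euclidean))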

Parameter Selection

Choosing k

The parameter k in the k-nearest neighbors (k-NN) algorithm is a critical hyperparameter that governs the model's flexibility and performance through the bias-variance tradeoff. Small values of k (e.g., k=1) produce low-bias estimates that closely follow the training data, but they exhibit high variance, making predictions sensitive to noise and leading to overfitting. In contrast, large values of k yield high-bias, smoothed predictions with low variance, which can result in underfitting by averaging over too many neighbors and missing local patterns. The primary method for selecting an optimal k is cross-validation, which provides a nearly unbiased estimate of the model's generalization error. In cross-validation (typically with 5 or 10 folds), the training set is partitioned into equal-sized subsets; for each candidate value of k in k-NN, the algorithm is trained on all but one subset and validated on the remaining one, repeating this process across all subsets and averaging the errors. The k value minimizing this cross-validation error is selected, ensuring a balance between bias and variance tailored to the dataset. As a practical starting point before cross-validation, a common heuristic sets k \approx \sqrt{n}, where n is the number of training samples, offering an initial guess that scales with the dataset size while avoiding extremes. In binary classification tasks, choosing an odd value of k is recommended to prevent ties in the majority vote among neighbors, ensuring a decisive prediction. Empirically, the impact of k can be visualized by plotting cross-validation or test error against different k values on a validation curve. For instance, on the Smarket financial dataset, k=1 yields a test error of about 50%, while k=3 performs only slightly better (roughly 53% accuracy), illustrating both the noisiness of the error curve and the value of systematic tuning.
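A minimal sketch of this selection procedure with scikit-learn is shown below; the Iris data, 5-fold splitting, and odd candidate grid are illustrative assumptions:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

best_k, best_score = None, -1.0
for k in range(1, 32, 2):          # odd values of k to avoid voting ties
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5)
    if scores.mean() > best_score:
        best_k, best_score = k, scores.mean()

print(best_k, round(best_score, 3))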

Distance Metrics

The k-nearest neighbors (k-NN) algorithm relies on a distance metric to quantify the similarity between data points, determining which points are considered neighbors during classification or regression. The choice of metric fundamentally shapes the algorithm's performance, as it defines the geometry of the feature space in which proximity is measured. Standard metrics assume continuous numerical features, while alternatives are suited to specific data types or structures. The most commonly used distance metric in k-NN is the Euclidean distance, which measures the straight-line distance between two points in Euclidean space. This metric is the default in many implementations and was employed in early theoretical analyses of the nearest neighbor rule. For two points \mathbf{x} = (x_1, x_2, \dots, x_d) and \mathbf{y} = (y_1, y_2, \dots, y_d) in d-dimensional space, the Euclidean distance is given by: \|\mathbf{x} - \mathbf{y}\|_2 = \sqrt{\sum_{i=1}^d (x_i - y_i)^2} It assumes that features are on comparable scales and that differences along any dimension contribute equally to dissimilarity. The Manhattan distance, also known as the L1 distance, computes the sum of absolute differences along each dimension, resembling the distance traveled along grid lines. It is less sensitive to outliers than the Euclidean distance and can be preferable in urban or grid-like data structures. The Minkowski distance generalizes both the Euclidean (p=2) and Manhattan (p=1) metrics through the parameter p, providing a flexible family of L_p norms. The general formula for the Minkowski distance between \mathbf{x} and \mathbf{y} is: \|\mathbf{x} - \mathbf{y}\|_p = \left( \sum_{i=1}^d |x_i - y_i|^p \right)^{1/p} for p \geq 1. Studies comparing these metrics on benchmark datasets show that Manhattan often yields comparable or superior accuracy to Euclidean in k-NN classification tasks with moderate dimensions. For categorical data, where features take discrete values without inherent order, the Hamming distance is appropriate, as it counts the number of positions at which the corresponding symbols differ between two strings of equal length. This metric treats mismatches equally regardless of category labels, making it suitable for nominal attributes in datasets like genetic sequences or survey responses. In mixed datasets, hybrid approaches combine Hamming for categorical features with Euclidean or Manhattan for numerical ones. In high-dimensional sparse data, such as text documents represented as term-frequency vectors, the cosine distance (derived from cosine similarity) is often preferred over Euclidean or Manhattan metrics. Cosine distance measures the angular separation between vectors, emphasizing direction over magnitude: d(\mathbf{x}, \mathbf{y}) = 1 - \frac{\mathbf{x} \cdot \mathbf{y}}{\|\mathbf{x}\|_2 \|\mathbf{y}\|_2} This mitigates the curse of dimensionality by focusing on relative orientations, leading to more robust neighbor selection in sparse spaces where many features are zero. Empirical evaluations confirm its effectiveness for text classification tasks integrated with k-NN. Feature scales can distort distances if variables have differing ranges or units, such as age in years versus income in dollars; thus, normalization or standardization is typically required before applying any metric. Z-score normalization (standardization) transforms features to have zero mean and unit variance: z_i = \frac{x_i - \mu}{\sigma} where \mu and \sigma are the mean and standard deviation. Min-max normalization scales features to a fixed range, often [0, 1]: x'_i = \frac{x_i - \min}{\max - \min} Both methods ensure equitable contributions from all features, with z-score being less sensitive to outliers and min-max preserving exact ranges.
Experiments on classification datasets demonstrate that applying such preprocessing improves k-NN accuracy by 5-15% across distance metrics such as Euclidean and Manhattan. The choice of distance metric directly influences the k-NN decision boundaries, which are the hypersurfaces separating classes based on majority votes from k neighbors. For instance, consider a 2D dataset with two classes forming concentric clusters; Euclidean distance produces smooth, circular boundaries centered on class centroids, while Manhattan distance yields diamond-shaped boundaries aligned with axes, potentially capturing elongated clusters better but introducing axis biases. Such differences can impact error rates, with Manhattan reducing misclassifications by up to 10% in grid-distorted data.
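The sketch below (illustrative, not a benchmark) compares Euclidean, Manhattan, and cosine metrics in scikit-learn with and without z-score standardization; the Wine data and k=5 are assumptions for demonstration:

from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)

for metric in ("euclidean", "manhattan", "cosine"):
    # brute-force search supports all three metrics
    knn = KNeighborsClassifier(n_neighbors=5, metric=metric, algorithm="brute")
    raw = cross_val_score(knn, X, y, cv=5).mean()
    scaled = cross_val_score(make_pipeline(StandardScaler(), knn), X, y, cv=5).mean()
    print(f"{metric:10s} raw accuracy = {raw:.3f}   standardized = {scaled:.3f}")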

Core Variants

1-Nearest Neighbor

The 1-nearest neighbor (1-NN) algorithm represents the simplest instance of the k-nearest neighbors method, where the class label of a query point is determined solely by the label of its single closest training example, measured via a specified distance metric such as the Euclidean distance. This direct assignment leverages the assumption that nearby points in the feature space are likely to share the same class, making 1-NN a nonparametric classifier that requires no explicit model training beyond storing the dataset. The origins of 1-NN trace back to early work in pattern recognition during the 1950s, when Evelyn Fix and Joseph Hodges introduced it as part of nonparametric discriminant analysis in a technical report for the U.S. Air Force School of Aviation Medicine. Their approach addressed the need for methods that did not rely on assumptions about the underlying class distributions, laying foundational groundwork for nonparametric techniques in machine intelligence. In geometric terms, the decision regions formed by the 1-NN classifier align precisely with the Voronoi tessellation constructed from the training points, partitioning the feature space into cells where each cell encompasses all locations nearer to its generating training point than to any other. Within each Voronoi cell, the classifier assigns the label of the corresponding training point, creating a piecewise constant decision surface that adapts fully to the training data's spatial arrangement. This highlights 1-NN's intuitive reliance on local proximity for boundary delineation. Theoretically, 1-NN demonstrates strong asymptotic performance, achieving an asymptotic error rate that is at most twice the Bayes error rate—the theoretical minimum error for any classifier—under certain conditions on the underlying probability densities as the training sample size grows to infinity. Cover and Hart (1967) established that the excess error of 1-NN over the Bayes risk is bounded above by the Bayes risk itself, ensuring the classifier's error rate is at most twice the optimal value in the limit. This property underscores 1-NN's appeal as a baseline for more complex methods. However, these theoretical guarantees do not mitigate practical limitations, as 1-NN often displays high variance, capturing noise in finite samples and leading to overfitting on training data. Additionally, its sensitivity to outliers is pronounced, since an anomalous training point can skew the Voronoi cell boundaries, incorrectly influencing classifications across substantial portions of the feature space. Such vulnerabilities make 1-NN particularly challenging in noisy or contaminated datasets, where even minor perturbations amplify prediction instability.

Weighted Nearest Neighbors

In weighted nearest neighbors, a modification to the standard k-nearest neighbors (k-NN) algorithm, each of the k nearest neighbors is assigned a weight based on its distance to the query point, allowing closer neighbors to contribute more to the prediction than farther ones. This approach refines the decision-making process by emphasizing local structure in the data. Common weighting schemes include inverse distance weighting, where the weight for the i-th neighbor is given by w_i = \frac{1}{d_i} and d_i is the distance to the query point, and kernel-based weighting using a Gaussian kernel, w_i = \exp\left( -\frac{d_i^2}{\sigma^2} \right), where \sigma controls the decay rate. These schemes ensure that the influence diminishes with increasing distance, with the inverse method decaying polynomially and the Gaussian offering a smoother, exponential falloff. For classification, predictions are made via a weighted vote, where the label assigned to the query is the one with the highest sum of weights from its supporting neighbors. In regression, a weighted average of the neighbors' values is computed, though detailed formulations are addressed elsewhere. These weighted predictions lead to smoother decision boundaries compared to unweighted k-NN, as the gradual influence of neighbors reduces abrupt changes near class edges. Additionally, distance weighting mitigates the impact of outliers by downplaying distant or noisy points that might otherwise skew results in unweighted voting. The parameter \sigma in kernel schemes is typically tuned using cross-validation to optimize performance on validation data, balancing under- and over-smoothing. For instance, on datasets with added noise, such as synthetic two-class problems, weighted k-NN with Gaussian weighting achieves higher accuracy (e.g., up to 10-15% improvement) than unweighted k-NN by better handling perturbations in neighbor distances.
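A minimal sketch of both weighting schemes is given below (the helper function, its defaults, and the data layout are illustrative assumptions, not a library API):

import math
from collections import defaultdict

def weighted_knn_classify(query, training_data, k, scheme="inverse", sigma=1.0):
    # training_data: list of ((x1, x2, ...), label) pairs
    dists = sorted(
        (math.dist(query, feats), label) for feats, label in training_data
    )[:k]

    votes = defaultdict(float)
    for d, label in dists:
        if scheme == "inverse":
            w = 1.0 / (d + 1e-12)              # small epsilon avoids division by zero
        else:                                  # Gaussian kernel weighting
            w = math.exp(-(d ** 2) / (sigma ** 2))
        votes[label] += w
    return max(votes, key=votes.get)           # label with the largest weight sum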

Theoretical Properties

Error Rates

The Bayes error rate serves as the fundamental lower bound on the classification error for any estimator, including the k-nearest neighbors (k-NN) classifier, representing the irreducible error due to overlapping class distributions under the optimal Bayes classifier. For the 1-NN classifier, Cover and Hart established that the error rate is at most twice the Bayes error rate in the asymptotic limit as the sample size n approaches infinity, providing an early theoretical guarantee on its performance relative to the optimum. As k increases appropriately with n, the k-NN error rate converges to the Bayes error, ensuring consistency under mild conditions on the data distribution. Under Hölder smoothness assumptions on the conditional class probabilities (typically with smoothness 2), the excess error of the k-NN classifier decomposes into bias and variance terms, yielding an asymptotic bound of \mathbb{E}[L(\hat{g}_{n,k}) - L^*] \leq C_1 / k + C_2 \left( k / n \right)^{4/d}, where L(\hat{g}_{n,k}) is the risk of the k-NN classifier, L^* is the Bayes risk, d is the feature dimension, and C_1, C_2 > 0 are constants depending on the distribution; balancing the two terms by taking k \propto n^{4/(d+4)} achieves an overall excess-risk rate of O(n^{-4/(d+4)}). This highlights the nonparametric nature of k-NN, where the error decreases slowly in high dimensions due to the curse of dimensionality. The bias-variance tradeoff in k-NN manifests in its excess risk decomposition for classification under 0-1 loss, where small k yields low bias (highly adaptive local decisions) but high variance (sensitivity to noise in sparse neighborhoods), while large k reduces variance (smoother averaging over more neighbors) at the cost of higher bias (over-generalization away from local structure). This decomposition, adapted from regression settings, underscores k's role in balancing these components to minimize total error, though the non-convex 0-1 loss complicates exact separation compared to squared loss. Empirical error estimation for k-NN often relies on resubstitution (apparent error on the training set), which severely underestimates the true error by over-optimistically assessing fit to the same data used for neighborhood construction. In contrast, leave-one-out cross-validation provides a nearly unbiased estimate by predicting each point using all others, avoiding overlap bias and yielding error rates closer to test performance, particularly for small k where resubstitution optimism is pronounced. High-dimensional data exacerbates k-NN error rates through the curse of dimensionality, as nearest-neighbor distances concentrate around the mean, rendering local neighborhoods uninformative and effectively randomizing classifications, which can inflate error substantially as d grows. Similarly, class imbalance increases error for minority classes, as majority-class points disproportionately influence neighbor votes, skewing decisions and elevating overall misclassification unless addressed by weighting or resampling.
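The gap between resubstitution and leave-one-out estimates can be seen directly in code; the sketch below (dataset choice is an illustrative assumption) evaluates a 1-NN classifier both ways with scikit-learn:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = KNeighborsClassifier(n_neighbors=1)

resub = clf.fit(X, y).score(X, y)                       # each point is typically its own nearest neighbor
loo = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"resubstitution accuracy: {resub:.3f}")          # optimistic, usually 1.000 for 1-NN
print(f"leave-one-out accuracy:  {loo:.3f}")            # a nearly unbiased estimate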

Decision Boundaries

The decision boundaries of the k-nearest neighbors (k-NN) algorithm partition the feature space into regions where each region is assigned to a class based on the majority vote of the k closest training points to any query point within it. These boundaries are inherently non-linear and piecewise, formed by the loci where the class assignment changes, which occur at the midpoints (perpendicular bisectors in Euclidean space) between sets of neighboring points from different classes. This geometric structure arises directly from the algorithm's reliance on local density and proximity in the data distribution. For the special case of k=1 (1-NN), the decision boundaries are precisely the edges of the Voronoi tessellation induced by the training points, where each Voronoi cell consists of all points closer to a particular training sample than to any other, and boundaries only appear between cells of differing classes. These boundaries are hyperplanes perpendicular to the line segments joining pairs of nearest neighbors from opposite classes, resulting in a complex, irregular partitioning that tightly follows the data geometry. In contrast, for general k > 1, the boundaries are more intricate, defined by surfaces where the k-th nearest neighbor distance ties or the majority class vote shifts, often creating overlapping influences from multiple cells. Increasing k has the effect of smoothing these decision boundaries, as the classifier incorporates a broader local neighborhood, reducing sharp jaggedness and approximating linear separators in densely sampled regions while mitigating the influence of isolated points. For instance, in two-dimensional toy datasets like the two-class "moons" or concentric circles, visualizations show that small k (e.g., k=1 or 3) yields highly fragmented, curvilinear boundaries that hug individual data points, whereas larger k (e.g., k=15 or 31) produces broader, more continuous curves that better capture overall class separations but may underfit subtle structures. The shape of these boundaries is also influenced by the distance metric used, such as Euclidean versus Manhattan, which alters the geometry of neighbor selection. A key limitation arises in datasets with high noise levels, where small k values lead to fragmented and erratic boundaries that overreact to outliers, creating spurious regions and increasing sensitivity to perturbations in the training data. This fragmentation diminishes reliability in noisy environments, though larger k can partially alleviate it by averaging over more points, at the cost of reduced resolution for fine-grained patterns.
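The smoothing effect of larger k can be reproduced with a short visualization; the sketch below (noise level, grid extent, and k values are illustrative assumptions) shades the predicted class over a grid for the "moons" dataset:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.neighbors import KNeighborsClassifier

X, y = make_moons(n_samples=200, noise=0.3, random_state=0)
xx, yy = np.meshgrid(np.linspace(-2, 3, 300), np.linspace(-1.5, 2, 300))
grid = np.c_[xx.ravel(), yy.ravel()]

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, k in zip(axes, (1, 15, 31)):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    zz = clf.predict(grid).reshape(xx.shape)
    ax.contourf(xx, yy, zz, alpha=0.3)       # shaded class regions
    ax.scatter(X[:, 0], X[:, 1], c=y, s=10)  # training points
    ax.set_title(f"k = {k}")
plt.show()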

Computational Complexity

The naive implementation of the k-nearest neighbors (k-NN) algorithm stores the entire training dataset in memory, requiring O(nd) space, where n is the number of training samples and d is the dimensionality. For prediction, it computes the distance from a query point to all n training points, followed by sorting or partial sorting to identify the k closest neighbors, yielding an O(nd + n \log k) time complexity per query in the worst case. This brute-force approach becomes prohibitive for large n or high d, as the exhaustive distance calculations dominate the runtime. To address these limitations, spatial indexing structures accelerate queries while maintaining exact results in favorable conditions. KD-trees, introduced by Friedman, Bentley, and Finkel, partition the data space using hyperplanes aligned with coordinate axes, enabling nearest neighbor searches with expected O(\log n) query time when d is low (typically d < 10). Ball trees, developed by Omohundro, organize points into nested hyperspheres, achieving similar O(\log n) average-case query efficiency and outperforming KD-trees in moderate dimensions (d \approx 10-20) by better handling non-uniform data distributions. Both structures incur O(nd \log n) preprocessing time for construction but reduce per-query costs significantly compared to the naive method. In high-dimensional settings, exact methods like KD-trees and ball trees suffer from degraded performance due to the curse of dimensionality, where query times approach linear in n. Approximate techniques, such as locality-sensitive hashing (LSH) proposed by Indyk and Motwani, circumvent this by hashing similar points into the same buckets with high probability, enabling sublinear expected query times of O(n^\rho), where 0 < \rho < 1 depends on the approximation factor and distance metric. Modern libraries such as scikit-learn leverage vectorized operations in NumPy for batch predictions, processing multiple queries in O(q n d) time for q queries through optimized distance matrix computations, further enhancing scalability for moderate-sized datasets. Despite these advances, the intrinsic O(nd) scaling in high d remains a core challenge, often necessitating complementary strategies like dimensionality reduction to preserve efficiency.
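These back-end choices are exposed directly by scikit-learn; the following sketch (sizes, dimensionality, and timings are illustrative assumptions) times exact k-NN queries under the brute-force, k-d tree, and ball tree algorithms:

import time
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.standard_normal((50_000, 8))      # n = 50,000 training points, d = 8
queries = rng.standard_normal((1_000, 8))

for algorithm in ("brute", "kd_tree", "ball_tree"):
    nn = NearestNeighbors(n_neighbors=5, algorithm=algorithm).fit(X)
    start = time.perf_counter()
    nn.kneighbors(queries)                # exact k-NN search for all queries
    print(f"{algorithm:10s} {time.perf_counter() - start:.3f} s")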

High-Dimensional Challenges

Feature Extraction

Feature extraction plays a crucial role in preparing data for the k-nearest neighbors (k-NN) algorithm by transforming raw inputs into representations that better capture underlying patterns, thereby improving classification accuracy and robustness to noise. This process focuses on generating or selecting features that enhance the discriminability of data points in the feature space, allowing k-NN to identify relevant neighbors more effectively based on distance measures. Effective feature extraction mitigates issues like irrelevant attributes that can dilute the influence of informative ones, leading to more reliable nearest neighbor retrieval. Manual feature engineering involves domain-specific transformations tailored to the problem at hand, such as computing ratios between variables to normalize scales or applying logarithmic transformations to address skewed distributions in datasets like financial or biological data. For instance, in credit risk assessment, engineers might derive debt-to-income ratios from raw financial metrics to better reflect borrower profiles for k-NN-based classification. These handcrafted features leverage expert knowledge to emphasize relationships that raw data might obscure, directly benefiting k-NN's distance-based decision-making by aligning the feature space with domain semantics. Automated methods expand the feature set through systematic generation, including polynomial features that introduce higher-degree powers of existing attributes and interaction terms that multiply pairs or groups of features to capture non-linear dependencies. Polynomial features, for example, can model quadratic relationships in datasets where linear distances fail to represent complexity. Such expansions enable k-NN to operate in a richer space without assuming a specific model form, though care must be taken to avoid overfitting through subsequent selection. Feature selection techniques further refine the extracted set by evaluating relevance, such as using mutual information to quantify the dependency between individual features and the target class while minimizing redundancy among selected features. The minimum redundancy maximum relevance (mRMR) approach, which balances relevance via mutual information against inter-feature redundancy, has been shown to boost k-NN performance on gene expression datasets; for example, it achieves 100% accuracy on leukemia data compared to lower rates with the full feature set. Similarly, the chi-squared test assesses independence between categorical features and the target, selecting those with the lowest p-values; when applied prior to k-NN, it enhances classification by reducing noise from irrelevant terms. A representative example is the extraction of texture features from images for k-NN classification, where Haralick features derived from gray-level co-occurrence matrices quantify properties like contrast, homogeneity, and entropy across spatial directions. These 14 texture descriptors, computed from pixel intensity correlations, have been effectively used with k-NN to classify X-ray images for disease detection, achieving accuracies of 87.7% by capturing spatial patterns invisible in raw pixel data. This domain-specific extraction transforms high-dimensional images into a compact, interpretable set suited for k-NN's instance-based learning. In practice, feature extraction integrates as a preprocessing step in pipelines with k-NN, where transformations and selections precede model fitting to ensure consistent application across training and testing data.
This modular approach, often implemented via sequential processing, allows iterative refinement and has been validated to improve k-NN efficiency on large-scale datasets by focusing computational resources on informative features.
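As a sketch of such a pipeline (the dataset, scaler, scoring function, and number of retained features are illustrative assumptions), feature selection by mutual information can be chained directly in front of a k-NN classifier:

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),   # keep 10 features (illustrative choice)
    KNeighborsClassifier(n_neighbors=5),
)
print(cross_val_score(pipe, X, y, cv=5).mean())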

Dimension Reduction

In high-dimensional spaces, the k-nearest neighbors (k-NN) algorithm suffers from the curse of dimensionality, where distances between points become increasingly uniform and neighbors become sparse, leading to degraded performance in identifying relevant similarities. This phenomenon arises because, as dimensionality grows, the volume of the space expands exponentially, causing most points to appear equidistant from a query point and diluting the effectiveness of distance-based neighbor selection. To mitigate these issues, linear dimension reduction techniques such as principal component analysis (PCA) project data onto lower-dimensional subspaces by identifying principal components through eigendecomposition of the covariance matrix, retaining the directions of maximum variance. PCA reduces computational demands in k-NN by lowering the dimensionality of the feature space while preserving global structure, often improving classification accuracy on datasets like handwritten digits by capturing essential variance with fewer components. For non-linear alternatives, t-distributed stochastic neighbor embedding (t-SNE) excels in visualization by preserving local neighborhoods through probabilistic modeling of pairwise similarities, mapping high-dimensional data to two or three dimensions where clusters remain discernible. Primarily used for exploratory analysis, t-SNE enhances k-NN interpretability by revealing neighbor relationships that are obscured in original high dimensions, though it is less suited for direct predictive use due to its non-deterministic nature. Uniform manifold approximation and projection (UMAP), a more general-purpose method, constructs a fuzzy topological representation via k-nearest neighbor graphs and optimizes a low-dimensional embedding that balances local and global structure preservation, making it applicable for both visualization and downstream k-NN tasks. Dimension reduction methods tailored to k-NN emphasize preserving local structure, such as the k-nearest neighbor graph, to ensure that post-reduction embeddings maintain similar neighbor relations to the original space. Techniques like UMAP explicitly build upon neighbor graphs to embed data manifolds, reducing sparsity while upholding the locality critical for accurate k-NN predictions. Evaluating these reductions involves estimating the intrinsic dimension before and after transformation, often using k-NN distance-based methods like maximum likelihood estimation on neighbor distances to quantify the minimal effective dimensionality and assess preservation of manifold structure. Such metrics confirm whether the reduced space alleviates high-dimensional challenges without losing essential data geometry for k-NN.
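A minimal sketch of pairing PCA with k-NN on the handwritten digits data is shown below (the number of components and k are illustrative assumptions):

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)        # 64-dimensional pixel features

plain = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean()
reduced = cross_val_score(
    make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=5)),
    X, y, cv=5).mean()
print(f"64-D accuracy: {plain:.3f}   20-D PCA accuracy: {reduced:.3f}")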

Metric Learning

Standard distance metrics, such as the Euclidean distance, often fail to capture the underlying structure of the data in k-nearest neighbors (k-NN) classification, leading to suboptimal neighbor selection and reduced accuracy. Metric learning addresses this by deriving task-specific metrics from labeled training data, adapting the distance function to emphasize relevant similarities and differences between points. A common approach involves learning a Mahalanobis distance metric, defined as d_M(\mathbf{x}, \mathbf{y}) = \sqrt{(\mathbf{x} - \mathbf{y})^T M (\mathbf{x} - \mathbf{y})}, where M is a positive semi-definite matrix that can be parameterized as M = L^T L for a linear transformation L. One method to learn M is through linear discriminant analysis (LDA), which derives the metric from between-class and within-class scatter matrices to maximize class separability. More advanced techniques, such as Large Margin Nearest Neighbor (LMNN), optimize M specifically for k-NN by minimizing an objective that pulls target neighbors of the same class closer while pushing impostors (nearby points of different classes) apart beyond a margin. The LMNN objective combines a pulling term \epsilon_{\text{pull}} = \sum_{i} \sum_{j \in \mathcal{T}_i} \| L(\mathbf{x}_i - \mathbf{x}_j) \|^2 to minimize distances to the k nearest same-class points and a pushing term with hinge loss \epsilon_{\text{push}} = \sum_{i} \sum_{j \in \mathcal{T}_i} \sum_{l \notin \mathcal{C}_i} [1 + \| L(\mathbf{x}_i - \mathbf{x}_j) \|^2 - \| L(\mathbf{x}_i - \mathbf{x}_l) \|^2 ]_+ to enforce large margins, balanced by a regularization parameter \mu. Another prominent algorithm is Information-Theoretic Metric Learning (ITML), which learns the Mahalanobis matrix by minimizing the differential relative entropy (LogDet divergence) between a prior metric and the learned one, subject to similarity and dissimilarity constraints derived from class labels. For k-NN, ITML enforces upper bounds on distances between same-class points and lower bounds between different-class points, enabling scalable optimization without eigenvalue decompositions. Evaluations demonstrate the benefits of learned metrics over Euclidean distance in k-NN classification. On one benchmark dataset, LMNN achieves an error rate of 3.11%, compared to 4.87% with Euclidean distance, across 100 random train-test splits (70/30 ratio). Similarly, ITML yields competitive error rates on UCI datasets, outperforming Euclidean distance and other baselines like MCML in k-NN tasks.
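A minimal sketch of the Mahalanobis idea is given below; it is an LDA-flavored illustration that uses only the within-class scatter matrix (not the LMNN or ITML algorithms described above), whitening the data before running ordinary Euclidean k-NN, and the Wine data and k=3 are assumptions:

import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)

# Within-class scatter matrix (sum of per-class covariance matrices)
Sw = sum(np.cov(X[y == c].T) for c in np.unique(y))
# Transformation L with L^T L = Sw^{-1}, via the inverse matrix square root
eigvals, eigvecs = np.linalg.eigh(Sw)
L = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T

for name, Z in (("Euclidean", X), ("Mahalanobis (within-class)", X @ L.T)):
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=3), Z, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")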

Data Reduction Methods

Class Outlier Selection

Class outlier selection refers to a preprocessing technique for the k-nearest neighbors algorithm aimed at identifying and eliminating data points that deviate significantly from the typical structure of their assigned class, thereby purifying the training dataset from intra-class noise. These outliers are typically defined as instances located far from the centroid of their class or distant from the cluster of their same-class neighbors, which can distort local decision boundaries and degrade classification performance. Detection of such class outliers often relies on computing the distance from a point to its k nearest neighbors within the same class; if this intra-class k-NN distance exceeds a predefined threshold—calibrated based on the dataset's distribution—the point is flagged as an outlier. Alternatively, the class outlier factor (COF) provides a composite score incorporating the probability that the point's class label matches the majority vote of its k nearest neighbors, its deviation from the class mean, and its k-distance to the nearest non-outlier, with higher COF values indicating greater outlierness. This approach extends general outlier detection principles but focuses exclusively on class-specific anomalies to enhance k-NN reliability. The removal strategy employs iterative editing, where flagged outliers are progressively eliminated from the training set in multiple passes, recomputing neighbor distances or scores after each round until convergence or a fixed number of iterations is reached, effectively pruning noise without drastically altering the overall class proportions. A foundational algorithm for this process is Wilson's edited nearest neighbor (ENN) rule, which builds a cleaned subset by removing samples misclassified by the k-NN classifier applied to the current reference set, prioritizing the elimination of noisy or outlier points that would otherwise mislead nearest neighbor searches. By excising these class outliers, the technique reduces storage requirements through dataset downsizing and enhances k-NN accuracy on contaminated data, with empirical studies reporting improvements in classification performance compared to unedited baselines on noisy benchmarks.
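A minimal sketch of Wilson-style editing is shown below (the helper name, integer-coded labels, and the self-neighbor handling are illustrative assumptions): every training point that disagrees with the k-NN vote among the other training points is dropped.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def edited_nearest_neighbors(X, y, k=3):
    # X: NumPy feature matrix; y: integer-coded NumPy label vector (assumption)
    clf = KNeighborsClassifier(n_neighbors=k + 1).fit(X, y)
    # kneighbors on the training set returns each point itself in column 0 (distance 0),
    # so that column is dropped to vote only among the other k neighbors.
    neighbors = clf.kneighbors(X, return_distance=False)[:, 1:]
    keep = []
    for i, idx in enumerate(neighbors):
        if np.bincount(y[idx]).argmax() == y[i]:   # point agrees with its neighborhood vote
            keep.append(i)
    return X[keep], y[keep]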

Condensed Nearest Neighbor

The Condensed Nearest Neighbor (CNN) algorithm aims to identify the smallest subset S of the original training set such that the k-nearest neighbors classifier trained on S correctly classifies all points in the full training set, thereby reducing storage requirements while preserving consistency with the 1-nearest neighbor rule. This approach addresses the inefficiency of storing the entire training dataset for nearest neighbor classification by selecting prototypes that maintain the decision boundaries defined by the original data. The procedure begins with an empty set S (referred to as the STORE) and processes the training points in an arbitrary order. The first point is added to S. For each subsequent point x, its nearest neighbor in S is found; if it is misclassified (i.e., the nearest neighbor has a different label), x is added to S; otherwise, it is placed in a temporary set (GRABBAG). Once all points are processed, the algorithm iterates over the GRABBAG, reclassifying its points against the updated S and transferring any misclassified points back to S. This iteration continues until no further transfers occur or the GRABBAG is empty, at which point S serves as the consistent prototype set, and the GRABBAG is discarded. The order of processing can affect the final size of S, but the algorithm guarantees consistency regardless. Wilson's editing can be combined with CNN to form a hybrid method that first removes outliers or noisy points using an edited nearest neighbor rule—discarding points misclassified by their k-nearest neighbors in the original set—before applying condensation to the cleaned subset, enhancing robustness to noise while further minimizing the prototype set. In the worst case, the algorithm has O(n^2) time complexity for a training set of size n, as each of up to n iterations may require classifying n points, each against up to n prototypes using naive distance computations. For large datasets, approximations such as the Fast Condensed Nearest Neighbor rule achieve sub-quadratic complexity by optimizing the search for misclassified points, often converging in fewer iterations while maintaining consistency. Empirical results demonstrate typical 20-50% reductions in dataset size with minimal accuracy loss relative to the full training set, as the consistent subset preserves training accuracy by design.
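A minimal sketch of the condensation procedure described above is given below (the helper name is an assumption; for brevity it refits a 1-NN classifier on the growing STORE rather than maintaining incremental structures):

from sklearn.neighbors import KNeighborsClassifier

def condensed_nearest_neighbor(X, y):
    # X: NumPy feature matrix, y: NumPy label vector
    store = [0]                                   # indices of retained prototypes (STORE)
    grabbag = list(range(1, len(X)))              # remaining candidates (GRABBAG)
    changed = True
    while changed and grabbag:
        changed = False
        clf = KNeighborsClassifier(n_neighbors=1).fit(X[store], y[store])
        remaining = []
        for i in grabbag:
            if clf.predict(X[i:i + 1])[0] != y[i]:        # misclassified by current STORE
                store.append(i)
                changed = True
                clf = KNeighborsClassifier(n_neighbors=1).fit(X[store], y[store])
            else:
                remaining.append(i)
        grabbag = remaining
    return X[store], y[store]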

Extensions

k-NN Regression

In k-NN regression, the algorithm extends the core nearest neighbors principle to predict continuous output values rather than discrete classes, making it suitable for tasks where the target variable is numeric. For a query point x, the model identifies the k training samples closest to x based on a distance metric, such as the Euclidean distance, and computes the prediction as the mean of their corresponding target values y_i. This non-parametric approach assumes that similar inputs produce similar outputs, leveraging local patterns in the data without assuming a global functional form. The prediction formula for unweighted k-NN regression is: \hat{y}(x) = \frac{1}{k} \sum_{i=1}^{k} y_{(i)} where y_{(i)} denotes the target value of the i-th nearest neighbor to x, ordered by increasing distance. With k = n this reduces to the global mean of the targets, but typically k is chosen much smaller to capture local structure. Weighted variants assign higher influence to nearer neighbors, often using inverse distance or kernel-based weights, which can improve performance by emphasizing proximity more sharply. A notable variant connects k-NN regression to kernel methods: the Nadaraya-Watson estimator, which uses a kernel function to weight all training points by their distance to x, can be viewed as a smooth analogue of weighted k-NN in which the bandwidth, rather than the neighbor count, controls the effective neighborhood size. Introduced independently in the 1960s, this kernel regression form smooths predictions continuously and achieves optimal rates under smoothness assumptions on the underlying regression function. Regarding error analysis, the mean squared error (MSE) of k-NN regression decomposes into bias and variance terms, similar to the misclassification rate analysis in classification, and the excess MSE vanishes as the training size n \to \infty provided k \to \infty and k/n \to 0, under regularity conditions such as smoothness of the regression function. The optimal MSE convergence rate is O(n^{-4/(d+4)}) in d dimensions when k is tuned appropriately (e.g., k \sim n^{4/(d+4)}), though this rate degrades in high dimensions, highlighting the curse of dimensionality. A representative application is predicting house prices, where input features such as square footage, number of bedrooms, and location distance to city centers are used to find k comparable listings, and the sale price is estimated by averaging their values; empirical studies on real estate datasets show k-NN regression yielding competitive MSE compared to linear models when nonlinear local effects dominate.
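A minimal sketch of k-NN regression with both uniform and distance weights is given below (the synthetic sinusoidal data and k=7 are illustrative assumptions):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=300)   # noisy nonlinear target

for weights in ("uniform", "distance"):
    reg = KNeighborsRegressor(n_neighbors=7, weights=weights)
    r2 = cross_val_score(reg, X, y, cv=5, scoring="r2").mean()
    print(f"{weights:8s} weights: mean R^2 = {r2:.3f}")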

k-NN Outlier Detection

The k-nearest neighbors (k-NN) algorithm is adapted for outlier detection by assigning anomaly scores to data points based on their similarity to nearby points in the feature space, identifying those that deviate from local patterns as potential outliers. One foundational method uses the distance to the k-th nearest neighbor, denoted as D_k(p), as the outlier score for a point p; points with larger D_k(p) values are considered outliers because they reside in sparser regions compared to the denser clusters of normal data. This approach ranks all points by their D_k(p) and selects the top n as outliers, where k and n are user-specified parameters, avoiding the need for a predefined distance threshold. A more sophisticated extension is the Local Outlier Factor (LOF), which computes scores by comparing the local density of a point to that of its neighbors, capturing varying density structures in the data. The local reachability density lrd_k(p) for a point p is defined as the inverse of the average reachability distance to its k nearest neighbors: lrd_k(p) = \frac{|N_k(p)|}{\sum_{o \in N_k(p)} reach\text{-}dist_k(p, o)} where N_k(p) is the set of k nearest neighbors of p, and the reachability distance reach\text{-}dist_k(p, o) = \max\{k\text{-}distance(o), d(p, o)\}, with k\text{-}distance(o) being the distance from o to its k-th nearest neighbor and d(p, o) the Euclidean distance between p and o. The LOF score is then the average ratio of the neighbors' densities to the point's own density: LOF_k(p) = \frac{\sum_{o \in N_k(p)} \frac{lrd_k(o)}{lrd_k(p)}}{|N_k(p)|} A value of LOF_k(p) significantly greater than 1 indicates that p has a lower density than its neighbors, marking it as an outlier; conversely, values near 1 suggest conformity to the local structure. Outlier identification typically involves thresholding these scores, where points with scores exceeding a user-defined threshold \tau (e.g., LOF_k(p) > \tau or D_k(p) > \tau) are flagged as anomalies, often combined with top-n ranking to select the most extreme cases. In applications such as fraud detection, k-NN-based methods like LOF identify anomalous transactions by their deviation from typical spending patterns, achieving high detection rates on benchmark datasets. For network intrusion detection, recent improvements to k-NN, such as entropy-weighted distance metrics, have enhanced accuracy in classifying attacks on benchmark datasets like IoTID20, with reported accuracies reaching 99.17% in 2025 studies. Performance of k-NN outlier detection is evaluated using receiver operating characteristic (ROC) curves on benchmark datasets like those in the OpenML repository, where area under the curve (AUC) scores measure the trade-off between true positive rates (detected outliers) and false positive rates; for instance, LOF often yields AUC values above 0.9 on datasets with 3-5% injected outliers, outperforming simpler distance-based methods in varying density scenarios.
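A minimal sketch of both scores is given below (the synthetic data, k=5, and scoring conventions are illustrative assumptions; scikit-learn's LocalOutlierFactor reports a negative score whose sign is flipped here so that larger means more outlying):

import numpy as np
from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),       # dense inlier cluster
               rng.uniform(-6, 6, size=(10, 2))])      # sparse points, likely outliers

# Score 1: distance to the k-th nearest neighbor, D_k(p)
k = 5
dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
d_k = dists[:, -1]                                      # column 0 is the point itself

# Score 2: Local Outlier Factor (higher = more outlying)
lof = LocalOutlierFactor(n_neighbors=k)
lof.fit_predict(X)
lof_score = -lof.negative_outlier_factor_

print("top 5 by D_k :", np.argsort(d_k)[-5:])
print("top 5 by LOF :", np.argsort(lof_score)[-5:])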

Evaluation Techniques

Validation Methods

Validation methods for the k-nearest neighbors (k-NN) algorithm are essential due to its non-parametric nature, which lacks an explicit training phase and relies on the entire training set for predictions, necessitating robust techniques to estimate performance without introducing bias. The hold-out method, a fundamental approach, involves partitioning the dataset into disjoint training and test sets, typically in ratios such as 70:30 or 80:20, to evaluate the model's performance on unseen data. This simple technique provides an unbiased estimate of error but can be sensitive to the specific split, particularly in smaller datasets where variance may be high. To address potential imbalances in class distributions during splitting, stratified sampling is employed in hold-out validation, ensuring that the training and test sets maintain the same proportion of each class as the original dataset. This is particularly crucial for k-NN classification tasks, where uneven splits could skew distance-based neighbor selection and lead to misleading performance estimates. For instance, in datasets with minority classes, stratified hold-out preserves class ratios across subsets, enhancing the reliability of the validation process. Leave-one-out cross-validation (LOOCV) is especially well-suited for k-NN, offering a nearly unbiased estimate of the expected error rate by iteratively using all but one sample for training and the held-out sample for testing. In k-NN, LOOCV computes the error exactly because the nearest neighbors for a test point are determined from the remaining points without altering the model's structure significantly, avoiding the approximations needed in parametric methods. This method is computationally intensive for large datasets but is a standard choice for small-to-medium k-NN applications. Theoretical analyses confirm that LOOCV for k-NN converges to the true risk under mild conditions, supporting its use for reliable error estimation. Bootstrap resampling addresses the need for variance estimation in k-NN validation by generating multiple training sets through sampling with replacement from the original data, allowing computation of the error's variability across resamples. This technique is valuable for quantifying uncertainty in k-NN performance estimates, particularly in scenarios with noisy or high-dimensional data, where a single hold-out or cross-validation run may not capture variability. Studies comparing bootstrap to cross-validation estimators have shown it to be effective for k-NN error estimation, providing robust confidence intervals for performance metrics. A key consideration in k-NN validation is avoiding the optimism inherent in resubstitution, which calculates the error rate using the same data for both fitting (implicitly storing neighbors) and evaluation, often underestimating the true error—especially for small values of k, where the algorithm memorizes points. For k-NN classifiers, this arises because the nearest neighbors of a training point frequently include that point itself, leading to artificially low error rates that do not reflect performance on new data. To mitigate this, validation should prioritize independent test sets or cross-validation over resubstitution, ensuring more realistic assessments. Generalized resubstitution techniques have been proposed to smooth this bias by incorporating empirical probability measures, though standard practice favors hold-out or cross-validation for k-NN.
For imbalanced datasets, where class distributions skew validation results, techniques like SMOTE (Synthetic Minority Over-sampling Technique) can be integrated into cross-validation by generating synthetic minority class samples within each training fold to balance the data before k-NN application, preventing majority class dominance in neighbor voting. This method creates new instances along line segments between minority samples and their k-nearest neighbors, improving k-NN's sensitivity to rare classes during validation without altering the test fold. Alternatively, class-weighted cross-validation assigns higher weights to minority class errors in the validation objective, adjusting the overall error computation to reflect imbalance and yielding fairer performance estimates for k-NN. These approaches ensure that validation procedures account for class disparities, enhancing the algorithm's applicability in real-world skewed scenarios.
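The sketch below pairs stratified k-fold validation with a scaled k-NN pipeline (the dataset, fold count, and k are illustrative assumptions):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)   # preserves class ratios per fold
scores = cross_val_score(model, X, y, cv=cv)
print(f"stratified 5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")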

Performance Metrics

In classification tasks, the performance of the k-nearest neighbors (k-NN) algorithm is commonly evaluated using accuracy, which measures the proportion of correctly classified instances out of the total, together with precision and recall, all derived from the confusion matrix that summarizes true positives, true negatives, false positives, and false negatives across classes. Precision quantifies the fraction of positive predictions that are actually correct, while recall captures the fraction of actual positives correctly identified by the model. These metrics are particularly useful for imbalanced datasets, where accuracy alone may mislead due to class distribution skews. For regression applications, k-NN predictions are assessed with mean squared error (MSE), mean absolute error (MAE), and R-squared (R^2). MSE calculates the average squared difference between observed and predicted values: \text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 where y_i are true values and \hat{y}_i are predictions. MAE provides the average absolute difference, offering interpretability in the same units as the target variable: \text{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i| R^2 indicates the proportion of variance in the dependent variable explained by the model, ranging from 0 to 1, with higher values denoting better fit. In outlier detection using k-NN, key metrics include the area under the receiver operating characteristic curve (AUC-ROC), which evaluates the trade-off between true positive and false positive rates across thresholds, and precision@K, measuring the fraction of the top-K predicted outliers that are true anomalies. AUC-ROC values closer to 1 signify superior discrimination between outliers and inliers. For multi-class classification scenarios, k-NN performance extends standard metrics via macro-averaging, which computes precision, recall, or F1-score per class and averages them equally, and micro-averaging, which aggregates contributions across all instances before averaging, emphasizing overall accuracy in imbalanced settings. Recent benchmarks on UCI datasets, such as Iris and Wine, demonstrate k-NN's competitive accuracy (often 85-95% in classification) against baselines like decision trees and support vector machines, though enhancements like weighted prototypes improve F1-scores by up to 5% in multi-class tasks. In regression on datasets like Boston Housing, standard k-NN achieves R^2 values around 0.7-0.8, with weighted variants reducing MSE by 10-20% compared to linear models. These evaluations highlight k-NN's robustness but underscore its sensitivity to the choice of k and distance metric.
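A minimal sketch computing these classification metrics for a k-NN model is shown below (the Wine data, 70/30 stratified split, and k=5 are illustrative assumptions):

from sklearn.datasets import load_wine
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)).fit(X_train, y_train)
y_pred = model.predict(X_test)

print(confusion_matrix(y_test, y_pred))
# Per-class precision/recall/F1 plus macro and weighted averages
print(classification_report(y_test, y_pred, digits=3))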

  42. [42]
    (PDF) Edited Nearest Neighbor Rule for Improving Neural Networks ...
    Aug 7, 2025 · Their findings suggested that while classification accuracy generally improved, aggressive instance removal by ENN had an adverse effect in some ...
  43. [43]
    The condensed nearest neighbor rule (Corresp.) - IEEE Xplore
    The condensed nearest neighbor rule (Corresp.) Published in: IEEE Transactions on Information Theory ( Volume: 14 , Issue: 3 , May 1968 )Missing: algorithm | Show results with:algorithm
  44. [44]
    Asymptotic Properties of Nearest Neighbor Rules Using Edited Data
    The asymptotic risk of the nearest neighbor rules and the nearest neighbor rules using edited preclassified samples is calculated for several problems.
  45. [45]
    [PDF] Nearest Neighbour Editing and Condensing Tools–Synergy ...
    Abstract: The objective of this study has been to explore and exploit the synergy among the Nearest Neighbour (NN) editing and.
  46. [46]
    [PDF] Fast Condensed Nearest Neighbor Rule
    The nearest neighbor decision rule (Cover & Hart,. 1967) (NN rule in the following) assigns to an un- classified sample point the classification of the near-.
  47. [47]
    Optimization of an Instance-Based GOES Cloud Classification ...
    The fast condensed nearest-neighbor (FCNN) method reduced the size of the individual training sets by 68.3% (fourfold cross-validation testing average) while ...
  48. [48]
    [PDF] Rate of Convergence of k-Nearest-Neighbor Classification Rule
    In order to have non-trivial rate of convergence of the classification error probability, one has to assume tail and smoothness conditions. We treat two ...<|control11|><|separator|>
  49. [49]
    [PDF] Predicting house prices with machine learning methods - DiVA portal
    In this study, the machine learning algorithms k-Nearest-Neighbours regres- sion (k-NN) and Random Forest (RF) regression were used to predict house prices ...
  50. [50]
    Efficient algorithms for mining outliers from large data sets
    The paper proposes a distance-based outlier detection using the kth nearest neighbor, and a partition-based algorithm that prunes partitions to save ...
  51. [51]
    [PDF] Efficient Algorithms for Mining Outliers from Large Data Sets
    Outlier detection is also aided by the computation of the minimum and maximum possible distance between a point and an MBR, which we define below. In this paper ...
  52. [52]
    [PDF] LOF: Identifying Density-Based Local Outliers
    We introduce the notion of the local outlier factor LOF, which cap-.
  53. [53]
    How to Select k for k-nearest-neighbors-based outlier detectors
    In contrast to LOF, KNN is adequate for detecting distance-based outliers, but not for density-based outliers. Instead of considering distance or density, ...
  54. [54]
    Network intrusion detection based on improved KNN algorithm
    Aug 14, 2025 · Improved KNN has superiority in accuracy, so the clustering speed in KNN does not have a significant advantage.<|control11|><|separator|>
  55. [55]
    [PDF] On the Evaluation of Outlier Detection: Measures, Datasets, and an ...
    ROC AUC scores, for each method using the best k, on the datasets with 3 to 5% of outliers, averaged over the different dataset variants where available. The ...
  56. [56]
    [PDF] Theoretical Analysis of Cross-Validation for Estimating the Risk of ...
    The present work aims at deriving theoretical guaranties on the behavior of some cross- validation procedures applied to the k-nearest neighbors (kNN) rule in ...
  57. [57]
    Parametric, bootstrap, and jackknife variance estimators for the k ...
    Dec 15, 2011 · For this study, two resampling estimators, the bootstrap and the jackknife, were investigated and compared to a parametric estimator for ...
  58. [58]
    [PDF] Generalized Resubstitution for Classification Error Estimation
    Exact representation of the second- order moments for resubstitution and leave-one-out error estimation for linear discriminant analysis in the univariate ...
  59. [59]
    SMOTE: Synthetic Minority Over-sampling Technique
    Jun 1, 2002 · This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) ...
  60. [60]
    Class Confidence Weighted kNN Algorithms for Imbalanced Data Sets
    Aug 7, 2025 · In this paper, a novel k-nearest neighbors (kNN) weighting strategy is proposed for handling the problem of class imbalance.
  61. [61]
    Comparative performance analysis of K-nearest neighbour (KNN ...
    Apr 15, 2022 · This paper presents a study on different KNN variants (Classic one, Adaptive, Locally adaptive, k-means clustering, Fuzzy, Mutual, Ensemble, Hassanat and ...
  62. [62]
    Performance Evaluation of Regression Models for the Prediction of ...
    Sep 14, 2021 · This paper aims to evaluate the performance of multiple non-linear regression techniques, such as support-vector regression (SVR), k-nearest neighbor (KNN), ...
  63. [63]
    Outlier detection algorithm based on k-nearest neighbors-local ...
    Mar 15, 2022 · We propose a novel outlier detection algorithm which is named as kNN-LOF. First, the k-nearest neighbors algorithm is applied to divide different areas for ...
  64. [64]
    [PDF] Comparative analysis of ensemble methods for outlier detection in ...
    The results of the work were evaluated by three indicators: AUC-. ROC, Precision @ Rank n and algorithm execution time. They allowed us to assess the accuracy ...<|separator|>
  65. [65]
    Comparative performance analysis of K-nearest neighbour (KNN ...
    Apr 15, 2022 · This paper presents a study on different KNN variants (Classic one, Adaptive, Locally adaptive, k-means clustering, Fuzzy, Mutual, Ensemble, Hassanat and ...Missing: seminal | Show results with:seminal
  66. [66]
    Random kernel k-nearest neighbors regression - Frontiers
    Jun 30, 2024 · This paper introduces the random kernel k-nearest neighbors (RK-KNN) regression as a novel approach that is well-suited for big data applications.