Local outlier factor

The Local Outlier Factor (LOF) is an unsupervised density-based algorithm for anomaly detection that assigns to each data point a score representing its degree of outlierness, computed as the local deviation of its density relative to the densities of its k-nearest neighbors. Introduced in 2000 by Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng, and Jörg Sander, LOF addresses limitations of global outlier detection methods by focusing on local neighborhood structures, making it particularly effective for datasets with varying cluster densities. The algorithm operates on the principle that outliers are points whose local densities are significantly lower than those of their surrounding points, enabling nuanced identification rather than binary classification. At its core, LOF relies on three key concepts: the reachability distance between two points p and o, defined as the maximum of the k-distance of o (distance to its k-th nearest neighbor) and the actual distance d(p, o); the local reachability density (lrd) of a point p, which is the inverse of the average reachability distance from p to its MinPts-nearest neighbors; and the local outlier factor of p, calculated as the average of the ratios of the lrd values of p's neighbors to p's own lrd. Points with LOF scores close to 1 are considered typical, while scores substantially greater than 1 indicate outliers, with higher values signifying greater abnormality. The computational complexity is O(n²) in the worst case due to nearest-neighbor searches, but the algorithm performs efficiently on large datasets when spatial indexing is used. LOF's advantages include its ability to detect outliers in non-uniform density distributions, where global methods like distance-based approaches fail—for instance, distinguishing sparse regions from dense clusters without misclassifying boundary points. This has led to its widespread use in applications such as fraud detection in financial transactions, intrusion detection in network security, and anomaly detection in sensor data. Over the years, variants like Incremental LOF (ILOF) and Connectivity-based Outlier Factor (COF) have extended its utility to data streams and linear structures, respectively, while maintaining the core density-based paradigm. Implemented in major libraries such as scikit-learn, LOF remains a foundational algorithm in anomaly detection, with the original paper garnering thousands of citations and influencing subsequent research in outlier detection.

Introduction

Definition and Overview

The Local Outlier Factor (LOF) is a density-based algorithm for anomaly detection that quantifies the degree to which a point deviates from the local density of its neighborhood, enabling the identification of anomalies that may not be apparent under global assumptions. Introduced by Breunig et al. in 2000, LOF measures outlierness as a continuous score rather than a binary label, allowing for nuanced ranking of potential anomalies based on their relative isolation within varying local contexts. The primary purpose of LOF is to detect outliers in high-dimensional datasets where data distributions are uneven, such as in fraud detection within electronic commerce transactions, by focusing on local deviations rather than assuming uniform density across the entire dataset. This approach is particularly valuable in unsupervised settings, where no labeled examples of outliers are available, and it addresses limitations of global methods that might overlook anomalies in dense clusters or misidentify points in sparse but consistent regions. At a high level, LOF operates by estimating the local density of each data point using its k-nearest neighbors and then comparing that density to the densities of its neighboring points to compute an outlier score. For instance, in a dataset containing clusters of varying densities, LOF would assign higher outlier scores to points located in sparser regions relative to their immediate surroundings, effectively highlighting local anomalies without requiring prior knowledge of the data's overall structure.

Historical Development

The Local Outlier Factor (LOF) algorithm was introduced in 2000 by Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng, and Jörg Sander in their seminal paper presented at the ACM SIGMOD International Conference on Management of Data. This work addressed key limitations in prior outlier detection techniques, which predominantly relied on distance-based methods—such as DB(pct, dmin)-outliers—that struggled to identify anomalies in datasets exhibiting varying local densities, often resulting in binary classifications that overlooked nuanced degrees of outlierness. By shifting focus to local density comparisons within neighborhoods, LOF provided a more flexible, density-based framework suitable for knowledge discovery in databases (KDD) applications, where rare events like fraud or errors held greater analytical value than common patterns. Following its debut, LOF rapidly gained traction in the data mining and database communities, with the original paper "LOF: Identifying Density-Based Local Outliers" becoming a cornerstone reference cited over 10,000 times by 2025. Early adoption highlighted its utility in handling real-world datasets with non-uniform cluster densities, outperforming global methods in benchmarks involving synthetic datasets and real-world data such as the NHL hockey dataset and German soccer player statistics. Its influence extended to subsequent density-based techniques. A notable milestone in LOF's evolution occurred in 2017 with its incorporation into the scikit-learn library as the LocalOutlierFactor class in version 0.19, enabling widespread accessibility for practitioners and researchers in Python-based workflows.

Core Concepts

Basic Idea

The local outlier factor (LOF) addresses the limitations of traditional outlier detection methods by identifying anomalies based on deviations in local density rather than assuming a uniform distribution across the entire dataset. In this approach, an outlier is a point whose local density is significantly lower than the densities of its surrounding points, capturing the intuitive notion that points in sparse regions relative to their neighbors are anomalous. This local perspective allows LOF to detect outliers that may be inconspicuous in a global context but stand out within their immediate vicinity, providing a continuous score of outlier-ness rather than a binary label. To illustrate, consider a two-dimensional dataset featuring a dense cluster C1 of 400 closely packed points and a sparser cluster C2 of 100 points nearby. A point o1 located far from C1 would have low local density compared to the high-density neighbors in C1, marking it as an outlier. Similarly, a point o2 positioned near C2 but in an even sparser area would exhibit lower density than the points in C2, also identifying it as anomalous. In contrast, a point on the edge of C1 maintains high local density relative to its neighbors within the cluster, classifying it as an inlier despite its peripheral position. This example highlights how LOF leverages the k-nearest neighbors to define the local context, enabling the detection of context-specific outliers that vary by region. Global outlier detection methods, such as distance-based techniques like the z-score or those using a fixed minimum distance threshold, often fail in datasets with multiple density modes or clusters of varying tightness, as they apply a uniform criterion across all points. For instance, in the aforementioned example, a global method might overlook o2 because its distance to C2 appears normal relative to the dataset's overall spread, but LOF succeeds by adapting to the local variations in density around each point. This adaptability makes LOF particularly effective for real-world applications like fraud detection, where anomalies are defined by local patterns rather than global norms. The reachability distance further refines these density estimates by accounting for the k-distance (the distance to the k-th nearest neighbor) of the neighboring points, helping to robustly measure local densities without undue influence from statistical fluctuations.
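The two-cluster scenario can be reproduced in a few lines. Below is a minimal sketch using scikit-learn's LocalOutlierFactor; the cluster sizes (400 and 100) follow the example above, while the coordinates, spreads, and the choice of n_neighbors=20 are illustrative assumptions:

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
C1 = rng.normal((0.0, 0.0), 0.1, size=(400, 2))  # dense cluster C1
C2 = rng.normal((4.0, 4.0), 1.0, size=(100, 2))  # sparser cluster C2
o1 = np.array([[2.0, 2.0]])                      # isolated point (assumed position)
X = np.vstack([C1, C2, o1])

lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(X)
scores = -lof.negative_outlier_factor_  # LOF scores: ~1 for inliers, larger for outliers

print("median score in C1:", np.median(scores[:400]))
print("median score in C2:", np.median(scores[400:500]))
print("score of o1:", scores[-1])

Points in both clusters score near 1 despite their different absolute densities, while o1 receives a markedly higher score, matching the local reasoning described above.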

Key Components

The Local Outlier Factor (LOF) algorithm relies on several foundational concepts to assess local densities in a dataset, beginning with the identification of nearest neighbors for each data point. The k-nearest neighbors of a point p, denoted as N_k(p), are defined as the set of k closest points to p in the dataset D, excluding p itself, based on a chosen distance metric d. Formally, N_k(p) = \{ q \in D \setminus \{p\} \mid d(p, q) \leq k\text{-distance}(p) \}, where the size of this set is at least k (greater than k if there are ties at the k-distance). Central to this neighborhood definition is the k-distance of p, which measures the distance to the k-th nearest neighbor of p. It is the smallest distance d(p, o) such that at least k points o' \in D \setminus \{p\} satisfy d(p, o') \leq d(p, o), and at most k-1 points satisfy d(p, o') < d(p, o). This k-distance effectively sets the radius of the local neighborhood around p, allowing for a consistent scale in density comparisons across varying data regions. These components underpin local density estimation in LOF by providing a basis for quantifying the density around each point relative to its surroundings; specifically, the local density of p is inversely proportional to the average reachability distances from p to the points in N_k(p), which adjust the distances from p by the local scales (k-distances) of its neighbors. This approach enables the detection of outliers through deviations in local densities without requiring a global density model. As prerequisites, k-nearest neighbors and k-distance facilitate scalable, localized computations by focusing on small subsets of the data, and they operate with any distance metric, not limited to Euclidean space, making LOF applicable to diverse data types such as graphs or strings.
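These two prerequisites are straightforward to compute by brute force. A minimal sketch, assuming a small NumPy array and the Euclidean metric (the helper name is hypothetical):

import numpy as np

def k_distance_and_neighbors(X, i, k):
    d = np.linalg.norm(X - X[i], axis=1)  # distances from point i to all points
    d[i] = np.inf                         # exclude the point itself
    kdist = np.sort(d)[k - 1]             # k-distance: distance to the k-th NN
    neighbors = np.where(d <= kdist)[0]   # N_k(p); may exceed k points on ties
    return kdist, neighbors

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [5.0, 5.0]])
kdist, nbrs = k_distance_and_neighbors(X, 0, k=2)
print(kdist, nbrs)  # 1.0 [1 2]: points 1 and 2 tie at the 2-distance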

Formal Definition

Reachability Distance

The reachability distance is a key concept in the local outlier factor (LOF) algorithm, defined for two objects o and p in a dataset with respect to a parameter k (a positive integer representing the neighborhood size). Formally, the reachability distance of p with respect to o is given by \text{reach-dist}_k(p, o) = \max \{ k\text{-distance}(o), d(p, o) \}, where d(p, o) denotes the pairwise distance between p and o (using any suitable distance metric, such as Euclidean distance), and k-distance(o) is the distance from o to its k-th nearest neighbor. This definition incorporates the k-distance of the reference object o as a threshold, ensuring that the reachability distance is at least as large as the extent of o's local neighborhood. The primary purpose of the reachability distance is to smooth out statistical fluctuations in pairwise distances when estimating local densities, particularly for objects that are close to the reference object o. By replacing the actual distance d(p, o) with k-distance(o) when p lies within o's k-neighborhood (i.e., when d(p, o) \leq k-distance(o)), it reduces variability in distance measures among nearby points, making density comparisons more stable across regions of varying densities. This approach mitigates the undue influence of minor distance variations or isolated points (potential outliers) in density estimation, effectively "reaching" neighbors by considering the sparser of the two local neighborhoods—either o's or the direct path to p. The smoothing effect is tunable via k: larger values of k lead to more uniform reachability distances within a neighborhood, enhancing robustness to local density differences. For illustration, consider a reference object o with k = 4, where its k-distance is 2 units. If a nearby object p_1 is 1 unit away from o, then \text{reach-dist}_4(p_1, o) = \max\{2, 1\} = 2, using o's k-distance instead of the smaller actual distance. In contrast, for a distant object p_2 that is 5 units away, \text{reach-dist}_4(p_2, o) = \max\{2, 5\} = 5, retaining the actual distance. This mechanism ensures that close neighbors contribute consistently to density calculations without underestimating influences from slightly varying local structures. The reachability distance exhibits symmetry in its practical effect for density-based comparisons within similar neighborhoods, though the measure itself is not strictly symmetric (as \text{reach-dist}_k(o, p) would use k-distance(p) instead). It is robust to local density variations by design, preventing overestimation of reachability in dense areas or underestimation in sparse ones. Additionally, it is computable using any dissimilarity measure that supports k-nearest neighbor computations, making it versatile for non-Euclidean spaces.
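The definition and the worked example translate directly into code. A minimal sketch, with the numeric values taken from the illustration above:

def reach_dist(k_distance_o, d_p_o):
    # reach-dist_k(p, o) = max{k-distance(o), d(p, o)}
    return max(k_distance_o, d_p_o)

k_distance_o = 2.0                    # k-distance of the reference object o (k = 4)
print(reach_dist(k_distance_o, 1.0))  # nearby p1: max{2, 1} = 2.0
print(reach_dist(k_distance_o, 5.0))  # distant p2: max{2, 5} = 5.0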

Local Reachability Density

The local reachability density (LRD) of a point p in a dataset quantifies the density of its local neighborhood by measuring how reachable p is from its k-nearest neighbors, providing a robust estimate that accounts for varying local densities. It is defined as the reciprocal of the average reachability distance from p to the objects in its k-nearest neighborhood N_k(p), where the reachability distance incorporates both the distance to neighbors and their own neighborhood sparsity to avoid overestimating density in sparse regions. The formal formula for LRD is given by: \text{lrd}_k(p) = \frac{|N_k(p)|}{\sum_{o \in N_k(p)} \text{reach-dist}_k(p, o)} where |N_k(p)| typically equals k, and \text{reach-dist}_k(p, o) is the reachability distance between p and each neighbor o. This formulation ensures that LRD is inversely proportional to the typical reachability distance in the neighborhood, yielding a high value for points in dense clusters and a low value for those in sparse areas. In computation, LRD is derived by first identifying the k-nearest neighbors of p and then aggregating the reachability distances to those neighbors, which serves as a normalized inverse average to estimate local density without being overly sensitive to isolated distant points. This approach provides a more stable density measure compared to simple inverse distance averages, as the reachability distance caps the influence of outliers in the neighborhood. For edge cases, such as when a point has fewer than k neighbors (e.g., in small datasets or boundary regions), the LRD uses the actual size of the available neighborhood |N_k(p)| < k in both the numerator and the summation denominator, adjusting the density estimate accordingly to reflect the limited local structure. Additionally, LRD can become infinite if all reachability distances are zero, which occurs when there are at least k duplicate points sharing the same coordinates as p, though the method assumes no such duplicates for practical datasets.
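A minimal sketch of the formula, assuming the reachability distances from p to each neighbor in N_k(p) have already been computed:

def lrd(reach_dists):
    # reach_dists: reach-dist_k(p, o) for every o in N_k(p)
    total = sum(reach_dists)
    if total == 0.0:           # at least k duplicates of p: infinite density
        return float("inf")
    return len(reach_dists) / total

print(lrd([2.0, 2.0, 2.0, 2.5]))  # 4 / 8.5 ~ 0.47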

Local Outlier Factor

The local outlier factor (LOF) quantifies the degree to which a data point deviates from the density of its local neighborhood, serving as a measure of its outlier-ness. It is computed as the average of the ratios of the local reachability densities of the point's k-nearest neighbors to the local reachability density of the point itself. This approach captures relative isolation by comparing local densities rather than relying on global statistics. Formally, for a point p and parameter k, the LOF is defined as: \text{LOF}_k(p) = \frac{\sum_{o \in N_k(p)} \frac{\text{lrd}_k(o)}{\text{lrd}_k(p)}}{|N_k(p)|} where N_k(p) denotes the k-nearest neighbors of p, and \text{lrd}_k(\cdot) is the local reachability density, which serves as the basis for these density ratios. The interpretation of the LOF score is straightforward: values approximately equal to 1 indicate that the point has a similar local density to its neighbors, marking it as an inlier. In contrast, LOF values greater than 1 signal lower density relative to neighbors, with values much greater than 1 denoting strong outliers due to significant isolation. For scoring and outlier identification in unsupervised settings, thresholds are typically set empirically; for instance, points with LOF > 1.5 are considered outliers in practical applications, as higher scores reflect progressively stronger deviations. Key properties of the LOF include its locality, which confines the density comparison to the immediate neighborhood of each point, and its relative scoring, which allows for varying degrees of outlier-ness across different density regions without assuming uniform data distribution. These characteristics enable the detection of outliers in datasets with clusters of differing densities.
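The formula reduces to a mean of density ratios. A minimal sketch, assuming the lrd values of p and its neighbors are precomputed, with two toy calls showing the inlier and outlier cases:

def lof(lrd_p, lrd_neighbors):
    # mean ratio of each neighbor's lrd to p's own lrd
    return sum(l / lrd_p for l in lrd_neighbors) / len(lrd_neighbors)

print(lof(0.5, [0.5, 0.5, 0.5]))  # 1.0: density matches neighbors (inlier)
print(lof(0.1, [0.5, 0.5, 0.5]))  # 5.0: far sparser than neighbors (outlier)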

Algorithm and Implementation

Computational Procedure

The computation of Local Outlier Factor (LOF) scores involves a multi-step procedure that relies on density-based neighborhood analysis for a dataset of n points. The procedure assumes a user-specified parameter MinPts (often denoted as k), which defines the neighborhood size, and uses a distance metric such as Euclidean distance. Preprocessing typically includes building spatial indexing structures to enable efficient nearest-neighbor queries, avoiding the naive exhaustive pairwise distance computation. The algorithm proceeds in the following steps:
  1. Compute k-distance for all points: For each point p, determine the k-distance(p), defined as the distance to its k-th nearest neighbor, where k = MinPts. This step identifies the minimum distance threshold enclosing exactly k neighbors for each point. Sorting the distances to all other points achieves this naively, but indexing structures accelerate it.
  2. Find k-nearest neighbors (k-NN) for each point: Construct the k-NN neighborhood N_k(p) for every point p, consisting of all points q such that the distance d(p, q) ≤ k-distance(p). If multiple points lie at exactly the k-distance, the neighborhood may include more than k points to ensure inclusivity. Efficient range queries via indexing retrieve these neighborhoods.
  3. Calculate reachability distances for pairs in neighborhoods: For each point p and every neighbor o in N_k(p), compute the reachability distance reach-dist_k(p, o) as the maximum of k-distance(o) and d(p, o). This measure accounts for the sparser of the two neighborhoods when assessing mutual reachability. These values are calculated only within the identified neighborhoods to limit computations.
  4. Compute local reachability density (LRD) for each point: For each p, calculate its LRD as the inverse of the average reachability distance from p to all points in N_k(p). This yields a density estimate local to p's neighborhood, with higher values indicating denser regions. The average is taken over all |N_k(p)| neighbors.
  5. Compute LOF for each point: Finally, derive the LOF score for p as the average of the ratios of the LRD of each o in N_k(p) to the LRD of p itself. This aggregates the relative density deviation, with values greater than 1 indicating outlierness.
Preprocessing enhances efficiency through data structures like KD-trees for low-dimensional data (up to 10 dimensions) or X-trees for higher dimensions, enabling approximate or exact k-NN and range queries. These structures materialize the neighborhoods in a single pass, reducing reliance on O(n^2) pairwise distances. For very large datasets, approximations such as sampling subsets of points or bounding neighborhood sizes (e.g., using MinPts^{UB} to limit upper-bound neighbors) further mitigate computational demands. The naive implementation without indexing requires O(n^2) time and space due to exhaustive distance computations and neighborhood traversals. With spatial indexing, the time complexity improves to O(n log n) for neighborhood materialization in low dimensions and O(n) for the subsequent LRD and LOF calculations, assuming constant-time queries via grids or trees. Space usage remains O(n m), where m is the average neighborhood size (bounded by MinPts). A high-level pseudocode outline for the core procedure is as follows:
# Assume dataset D with n points, parameter MinPts = k
# Preprocessing: build a KD-tree or similar index on D

for each point p in D:
    kdist[p] = k_distance(p, D, k)        # distance to p's k-th NN, via the index
    N[p] = range_query(p, D, kdist[p])    # all q != p with d(p, q) <= kdist[p]

for each point p in D:
    sum_reach = 0
    for each o in N[p]:
        sum_reach += max(kdist[o], distance(p, o))   # reach-dist_k(p, o)
    lrd[p] = |N[p]| / sum_reach           # taken as infinite if sum_reach = 0

for each point p in D:
    sum_ratio = 0
    for each o in N[p]:
        sum_ratio += lrd[o] / lrd[p]
    lof[p] = sum_ratio / |N[p]|
This outline builds all neighborhoods in a first pass, then computes densities and scores in subsequent passes, reusing the stored k-distances, neighborhoods, and LRD values rather than recomputing them.

Parameters and Selection

The primary parameter in the Local Outlier Factor (LOF) algorithm is k, the number of nearest neighbors defining the local neighborhood for density estimation. Typically, k ranges from 5 to 20, with values below 10 increasing sensitivity to noise and causing statistical fluctuations in density estimates, while larger values introduce a smoothing effect that approximates global rather than local density. Strategies for selecting k include evaluating a range of values, such as from a lower bound of at least 10 to an upper bound reflecting expected cluster sizes (e.g., 20–50), and assigning each data point the maximum LOF score across this range to reduce sensitivity issues. When labeled data is available, cross-validation can tune k via grid search, optimizing performance metrics like the separation between outlier and inlier LOF scores on validation folds. Domain knowledge also informs choices, such as using smaller k values in high-dimensional settings like intrusion detection to capture subtle local deviations. Other parameters include the distance metric for nearest neighbor computation, commonly Euclidean distance (Minkowski with p=2) for continuous data, though Manhattan distance (p=1) may better suit sparse or grid-like features. Handling ties in nearest neighbors is implementation-dependent; for instance, tree-based approximations (e.g., KD-trees) select the k closest points, resolving equal distances arbitrarily but consistently to ensure unique neighborhoods. LOF is sensitive to k, as varying it can alter outlier rankings non-monotonically, with scores stabilizing for larger k but potentially masking local anomalies. Validation often involves stability plots of LOF scores versus k, where consistent rankings across a range indicate robustness.
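The max-over-k strategy described above can be sketched with scikit-learn's LocalOutlierFactor; the candidate range below follows the 10-to-50 heuristic in the text and is otherwise an assumption:

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def max_lof_scores(X, k_values=range(10, 51, 10)):
    # assign each point its maximum LOF score over the candidate k range
    scores = np.full(len(X), -np.inf)
    for k in k_values:
        lof = LocalOutlierFactor(n_neighbors=k).fit(X)
        scores = np.maximum(scores, -lof.negative_outlier_factor_)
    return scores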

Evaluation and Performance

Advantages

The Local Outlier Factor (LOF) algorithm provides local sensitivity by detecting outliers based on their deviation from the density of surrounding neighborhoods, enabling it to identify anomalies in both dense clusters and sparse regions that global methods might overlook. This context-dependent approach captures varying outlier behaviors, such as points that appear normal in high-density areas but anomalous in low-density ones, through comparisons of local reachability densities. LOF exhibits versatility, operating in any metric space and accommodating arbitrary data dimensions without requiring assumptions of normality, Gaussian distributions, or predefined cluster structures. It has been successfully applied to diverse datasets, including 64-dimensional color histograms for image outlier detection, demonstrating its robustness across domains. Empirically, LOF has shown strong performance in real-world benchmarks; for example, it achieves approximately 99.62% accuracy in credit card fraud detection tasks. It also proves effective as part of hierarchical models for intrusion detection on the KDD Cup 1999 dataset and for identifying outliers in image data, often outperforming distance-based alternatives in capturing meaningful anomalies. The interpretability of LOF scores enhances its utility, as the factor quantifies the relative degree of outlierness—values near 1 indicate normal points, while scores substantially greater than 1 indicate outliers, with higher values signifying greater abnormality.

Limitations

The Local Outlier Factor (LOF) incurs a high computational cost, with exact computation requiring O(n²) time due to the pairwise distance calculations and k-nearest neighbor searches for all n data points. This quadratic scaling renders it impractical for very large datasets without resorting to approximations or indexing structures, which may compromise accuracy. However, optimized implementations, such as in scikit-learn using KD-tree or ball-tree indexing, enable efficient computation on datasets with hundreds of thousands of points. LOF results are sensitive to the parameter k, representing the number of nearest neighbors, as suboptimal choices can misclassify outliers or normal points by altering local density estimates. Additionally, there is no universal threshold for interpreting LOF scores, with values like 1.5 potentially signaling outliers in dense datasets but failing to do so in others, thus requiring dataset-specific tuning. The quotient-based LOF scores, derived from ratios of local reachability densities, offer a relative measure of outlierness but lack intuitive meaning, hindering the establishment of global cutoffs for outlier identification across varied applications. LOF demonstrates sensitivity to noise in high-dimensional settings, where the curse of dimensionality dilutes meaningful distance metrics and distorts density-based assessments. It also performs poorly on very sparse data, as insufficient neighbor points in low-density regions lead to unreliable local density computations and erroneous labeling.

Notable Extensions

Feature Bagging LOF, introduced by Lazarevic and Kumar in 2005, extends the original LOF by computing outlier scores across multiple random subsets of features (projections) and aggregating them to enhance robustness in high-dimensional datasets. This bagging technique reduces the effects of noise and the curse of dimensionality, where irrelevant or noisy features can degrade LOF performance, by averaging scores from diverse subspaces, leading to more stable and accurate outlier identification in noisy environments. The Local Outlier Probability (LoOP), proposed by Kriegel et al. in 2009, builds on LOF by transforming density-based scores into interpretable probabilities within the [0,1] range using a normalization based on probabilistic set distances. This normalization makes LoOP less sensitive to the parameter k, the neighborhood size, as it employs a statistical estimate of local density rather than strict k-nearest neighbor distances, providing outlier scores that directly represent the likelihood of a point being anomalous relative to its surroundings. The Incremental Local Outlier Factor (ILOF), proposed by Pokrajac, Lazarevic, and Latecki in 2007, extends LOF to data streams by incrementally updating outlier scores upon insertion or deletion of points, affecting only a limited number of neighboring points. This allows efficient processing of streaming data without recomputing scores for the entire dataset, making it suitable for real-time anomaly detection in dynamic environments like sensor networks. The Connectivity-based Outlier Factor (COF), introduced by Tang, Chen, Fu, and Cheung in 2002, addresses limitations of LOF in datasets with chain-like or linear structures by using connectivity (shortest paths in k-nearest neighbor graphs) instead of direct density estimates. COF computes an outlier score based on the average chaining distance to neighbors, improving detection of outliers in non-spherical or sparse linear clusters where traditional density measures fail. Simplified LOF (SLOF) approximates LOF computations for efficiency on large-scale data by substituting the reachability distance with the simpler k-nearest neighbor distance in the local density calculation. This modification eliminates nested neighborhood computations, reducing computational overhead while maintaining O(n²) worst-case complexity, often achieving practical efficiency close to O(n log n) with spatial indexing, and preserving the ability to detect local outliers effectively, making it particularly suitable for datasets where computational speed is a priority without substantial loss in detection quality. More recent extensions address scalability and flexibility for modern data challenges. Distributed LOF variants, such as the approach by Cao et al. in 2017, enable parallel processing in distributed computing frameworks by partitioning data and approximating k-nearest neighbors across nodes, allowing LOF to handle millions of points in distributed environments with minimal accuracy trade-offs compared to centralized implementations.
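Among these variants, the SLOF simplification is compact enough to sketch. The following brute-force version assumes the simplified density is taken as 1/k-distance, as described above; it is an illustration under that assumption, not a reference implementation:

import numpy as np

def slof(X, k):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    np.fill_diagonal(D, np.inf)                # exclude self-distances
    kdist = np.sort(D, axis=1)[:, k - 1]       # k-distance of every point
    nn = np.argsort(D, axis=1)[:, :k]          # indices of the k nearest neighbors
    density = 1.0 / kdist                      # simplified density: 1 / k-distance
    return density[nn].mean(axis=1) / density  # mean neighbor density / own density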

Comparisons with Other Methods

Distance-based outlier detection methods, such as DB(p, D) proposed by Knorr and Ng, operate on a global scale by identifying points farther than a specified distance d_{\min} from at least p\% of the dataset, assuming uniform density across the data. These approaches fail to detect local outliers in regions of varying density, such as points near dense clusters that appear isolated globally. In contrast, LOF excels in such scenarios by evaluating local densities via k-nearest neighbors, effectively identifying outliers in clusters of varying sizes and densities, as demonstrated in synthetic datasets where distance-based methods overlook anomalies adjacent to high-density regions. Isolation Forest, introduced by Liu et al. in 2008, is a tree-based ensemble method that isolates outliers through random partitioning, achieving linear time complexity O(n \log n) on average, which makes it significantly faster than LOF's O(n^2) pairwise distance computations for large datasets. While Isolation Forest performs well on global anomalies and scales to high-volume data, LOF provides superior accuracy for local outliers by directly comparing neighborhood densities, outperforming it on certain UCI datasets despite longer runtimes (e.g., over 9000 seconds for large sets versus under 16 seconds for Isolation Forest). One-Class SVM, developed by Schölkopf et al. in 2001, is a parametric method that learns a boundary around normal data assuming an underlying distribution, often using RBF kernels for non-linear separation. LOF, being non-parametric, avoids such assumptions and better handles multi-modal data with irregular densities, detecting subtle local deviations that boundary-based methods might miss. However, One-Class SVM can achieve higher clustering quality scores like Silhouette (0.38 vs. LOF's 0.06) in structured domains such as seismic data. Benchmark studies highlight LOF's strengths in density-varying datasets, such as geographic data, where it achieves higher scores (e.g., 0.91 on large sets with low outlier rates) compared to Isolation Forest (0.83), though it lags in overall speed and average score (0.22 vs. 0.38). Recent work shows hybrid approaches combining LOF with SVM, such as multi-model approaches for outlier detection, yield improved performance (up to 0.99) by leveraging LOF's local insights for feature enhancement before SVM classification.
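The practical trade-off between the two scikit-learn estimators can be seen in a side-by-side run; the synthetic data and parameter choices are illustrative assumptions, and both estimators label detected outliers as -1 in fit_predict:

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, size=(300, 2)),    # inliers
               rng.uniform(-4.0, 4.0, size=(15, 2))])  # scattered anomalies

lof_labels = LocalOutlierFactor(n_neighbors=20).fit_predict(X)
if_labels = IsolationForest(random_state=0).fit_predict(X)

print("LOF flagged:", int((lof_labels == -1).sum()))
print("Isolation Forest flagged:", int((if_labels == -1).sum()))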
