
Harris corner detector

The Harris corner detector is a fundamental algorithm in computer vision for identifying corners in digital images, defined as local regions where the image intensity exhibits significant changes in all directions; detection is achieved by analyzing the eigenvalues of the structure tensor (also known as the second-moment matrix or auto-correlation matrix) computed from spatial gradients within a Gaussian-weighted neighborhood. Introduced by Chris Harris and Mike Stephens in their 1988 paper "A Combined Corner and Edge Detector", the method was developed to enable consistent feature tracking for scene interpretation from image sequences, addressing the limitations of prior detectors like Moravec's by integrating corner and edge responses into a unified framework based on the local auto-correlation function. The core process involves computing image derivatives I_x and I_y, forming the structure tensor M with elements A = I_x^2 \otimes w, B = I_y^2 \otimes w, and C = I_x I_y \otimes w (where w is a Gaussian window), and deriving a corner response measure R = \det(M) - k \cdot \operatorname{trace}(M)^2 (with k \approx 0.04 to 0.06) to classify regions: corners yield high positive R (both eigenvalues large), edges yield negative R (one eigenvalue dominant), and flat areas yield R near zero (both eigenvalues small). Corners are then selected as local maxima of R exceeding a threshold, often followed by non-maximum suppression and sub-pixel refinement via quadratic interpolation. The detector's key strengths include rotational invariance (the eigenvalue-based measure remains unchanged under image rotation) and robustness to moderate illumination variations and noise when using appropriate Gaussian scales (\sigma_d \approx 1, \sigma_i \approx 2.5), making it effective for real-time applications such as camera calibration, stereo matching, and motion tracking in unconstrained natural scenes. However, it lacks scale invariance, performing poorly under zooming or affine transformations, and can be sensitive to parameter choices like the threshold \tau (typically 10–130) or k, which affect the number and quality of detected features. These limitations spurred variants, such as the Shi-Tomasi improvement (1994), which prioritizes the minimum eigenvalue for better stability in tracking, and integrations with methods like Harris-Laplace for multi-scale detection. Since its publication, the Harris corner detector has profoundly influenced feature extraction techniques, garnering over 22,000 citations and serving as a benchmark in libraries such as OpenCV (via cv.cornerHarris()) and MATLAB (via detectHarrisFeatures), with ongoing adaptations for event-based vision, FPGA acceleration, and deep-learning hybrids in domains including autonomous navigation.
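For concreteness, the sketch below shows a minimal end-to-end use of OpenCV's cv.cornerHarris named above; the file name, parameter values, and relative threshold are illustrative choices rather than values prescribed by the original method.

```python
# Minimal Harris corner detection sketch using OpenCV (parameters illustrative).
import cv2
import numpy as np

img = cv2.imread("scene.png")                          # hypothetical input image
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# blockSize: neighborhood for the structure tensor; ksize: Sobel aperture;
# k: sensitivity constant, commonly 0.04-0.06.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep pixels whose response exceeds a fraction of the maximum (a common heuristic).
corners = np.argwhere(response > 0.01 * response.max())   # (row, col) pairs
print(f"{len(corners)} corner candidates detected")
```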

Overview

Definition and Purpose

The Harris corner detector is a feature detection algorithm in computer vision that identifies corners, or interest points, in an image where the local intensity exhibits significant variations in all directions. These corners are distinguished from edges, which show variation primarily in one direction, and flat regions, which lack substantial change in any direction, by analyzing local image structure to locate discrete, reliable feature points suitable for further processing. The primary purpose of the Harris corner detector is to enable robust extraction of stable image features for applications such as feature matching, object tracking, and 3D reconstruction across image sequences. It supports these tasks by providing points that remain consistent under transformations, including rotational invariance (its reliance on eigenvalues of the local structure tensor makes it unaffected by orientation changes) and invariance to additive illumination shifts, as the method uses image derivatives insensitive to uniform brightness adjustments. This makes it particularly valuable for analyzing natural scenes in video from mobile cameras, facilitating the construction of scene representations through feature tracking. The algorithm outputs a collection of detected corner points, each accompanied by a corner strength score that quantifies the quality of the detection, allowing for prioritization or thresholding in subsequent steps. These keypoints serve as foundational elements in broader pipelines, such as ORB, where Harris scores refine initial corner candidates for efficient descriptor computation.

Historical Development

The Harris corner detector was introduced in 1988 by Chris Harris and Mike Stephens in their seminal paper titled "A Combined Corner and Edge Detector," presented at the Fourth Alvey Vision Conference in Manchester, UK. This work emerged within the broader context of early research on interest point detection, particularly for applications in scene reconstruction from image sequences. It built directly upon Hans Moravec's foundational 1977 corner detection method, which identified interest points based on intensity variations in local windows but suffered from sensitivity to noise and lack of rotational invariance. Harris and Stephens addressed these limitations by developing a more robust approach that improved reliability under rotation, enabling better performance in dynamic scenarios. The primary motivation for the detector stemmed from the need for stable, trackable features in tasks such as stereo matching and motion analysis across image sequences captured by mobile cameras. At the time, existing edge detectors like the Canny operator excelled at linear features but struggled with junctions and connectivity issues in complex scenes, while pure corner detectors lacked the ability to distinguish edges effectively. The combined detector aimed to extract both corners and edges consistently, facilitating interpretation of unconstrained environments by providing richer structural information from natural imagery. In the original publication, the algorithm's implementation included key parameters such as a sensitivity factor k, typically set between 0.04 and 0.06, to balance edge and corner responses. Empirical validation was conducted on real-world images, including outdoor sequences, demonstrating superior consistency in detecting corners compared to prior methods such as the Beaudet and Kitchen-Rosenfeld operators. These tests highlighted the detector's practical utility in handling noise and varying lighting, laying the groundwork for its widespread adoption in feature tracking applications.

Mathematical Foundations

Image Gradients and Derivatives

Image gradients quantify the rate of change in intensity across an image, serving as a fundamental measure of local variations that distinguish edges, textures, and corners from uniform regions. In the context of corner detection, these gradients capture how intensity evolves in different directions, enabling the identification of points where changes occur abruptly in multiple orientations. The Harris corner detector specifically relies on the first-order partial derivatives of the image function I(x, y), denoted I_x (horizontal component) and I_y (vertical component), which approximate the directional intensity shifts at each pixel. To understand why these derivatives highlight edges and corners, consider a first-order Taylor expansion of the intensity function around a point (x, y): I(x + \Delta x, y + \Delta y) \approx I(x, y) + I_x(x, y) \Delta x + I_y(x, y) \Delta y. This approximation models small displacements (\Delta x, \Delta y) and their impact on intensity; along edges, the change is large in one direction but minimal perpendicular to it, whereas at corners, significant variations occur in all directions due to intersecting edges. The detector assumes a grayscale input image, where I(x, y) represents the scalar intensity value at each pixel, simplifying the analysis to two-dimensional spatial changes without color-channel complications. Since digital images are discrete grids, continuous derivatives cannot be computed directly and must be approximated using finite differences or convolution with derivative kernels. A basic approximation of the horizontal gradient is achieved by convolving the image with the kernel [-1, 0, 1], which estimates I_x as the central difference between neighboring pixels. For improved accuracy and noise robustness, the Sobel operator is commonly employed as an alternative, using kernels that combine differentiation with averaging: I_x = I \ast \frac{1}{8} \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \quad I_y = I \ast \frac{1}{8} \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}. The division by 8 normalizes the smoothing effect of the kernel weights. These operators provide a balance between edge localization and suppression of isolated noise pixels. To further mitigate sensitivity to image noise, which can amplify spurious gradients, a Gaussian low-pass filter is often applied as preprocessing before derivative computation in practical implementations. This blurring step, with a typical standard deviation \sigma_d \approx 1 pixel, reduces high-frequency artifacts while preserving the broader intensity structures essential for detecting meaningful features. The Gaussian kernel G(x, y) = \frac{1}{2\pi\sigma_d^2} \exp\left(-\frac{x^2 + y^2}{2\sigma_d^2}\right) is convolved with the image to yield a smoothed version, to which the derivative kernels are then applied. This pre-filtering enhances the reliability of I_x and I_y for subsequent corner analysis.
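As an illustration of these steps, the following sketch computes I_x and I_y with Gaussian pre-smoothing followed by the normalized Sobel kernels; the function name is ours and SciPy is assumed available, so treat this as a sketch rather than the canonical implementation.

```python
# Gradient estimation sketch: Gaussian pre-smoothing followed by Sobel kernels.
import numpy as np
from scipy import ndimage

def image_gradients(image, sigma_d=1.0):
    """Return (Ix, Iy) for a 2-D grayscale image array."""
    smoothed = ndimage.gaussian_filter(np.asarray(image, dtype=float), sigma=sigma_d)
    # Normalized Sobel kernels, including the 1/8 factor from the text.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float) / 8.0
    ky = kx.T  # the transpose gives the vertical-derivative kernel
    return ndimage.convolve(smoothed, kx), ndimage.convolve(smoothed, ky)
```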

Structure Tensor

The structure tensor, also known as the second-moment matrix, is a fundamental 2×2 matrix that quantifies the local intensity variations in an image around a given point (x, y). It is defined as M(x,y) = \begin{bmatrix} \langle I_x^2 \rangle & \langle I_x I_y \rangle \\ \langle I_x I_y \rangle & \langle I_y^2 \rangle \end{bmatrix}, where I_x and I_y denote the partial derivatives of the image intensity with respect to the x and y directions, respectively, and the angle brackets \langle \cdot \rangle represent averaging over a local neighborhood using a Gaussian window function w(u,v) = \exp\left(-\frac{u^2 + v^2}{2\sigma_i^2}\right) with typical \sigma_i \approx 2.5. This tensor captures the second-moment information of the image gradients, providing a compact representation of the local image structure. The elements of the structure tensor are computed element-wise by convolving the squared and cross-product gradient images with the Gaussian window. Specifically, the components are given by M_{xx} = I_x^2 \ast w, \quad M_{xy} = I_x I_y \ast w, \quad M_{yy} = I_y^2 \ast w, where \ast denotes convolution, and the integration is performed over a small radius, typically 3 to 5 pixels (e.g., radius \approx 2\sigma_i), to ensure computational efficiency while capturing relevant local variations. This smoothing with the Gaussian window reduces noise sensitivity and emphasizes the dominant gradient directions within the neighborhood. The eigenvalues \lambda_1 and \lambda_2 (with \lambda_1 \geq \lambda_2) of the structure tensor correspond to the principal curvatures of the local intensity surface, offering insight into the geometric nature of the image patch. Regions where both eigenvalues are large indicate significant variation in all directions, characteristic of a corner, whereas one large and one small eigenvalue suggests an edge, and two small values point to a flat area. As a Gaussian-weighted sum of gradient outer products, the structure tensor is symmetric and positive semi-definite, ensuring non-negative eigenvalues that reflect the gradient variance along the principal directions. Its reliance on second moments confers rotation invariance, as the eigenvalues remain unchanged under image rotation, making it robust for corner detection across orientations.
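The component computation described above can be realized with Gaussian filtering of the gradient products; in this sketch the helper name and default sigma are illustrative choices.

```python
# Structure tensor components via Gaussian-weighted averaging of gradient products.
import numpy as np
from scipy import ndimage

def structure_tensor(Ix, Iy, sigma_i=2.5):
    """Return (Mxx, Mxy, Myy), each with the same shape as the gradient arrays."""
    Mxx = ndimage.gaussian_filter(Ix * Ix, sigma=sigma_i)
    Mxy = ndimage.gaussian_filter(Ix * Iy, sigma=sigma_i)
    Myy = ndimage.gaussian_filter(Iy * Iy, sigma=sigma_i)
    return Mxx, Mxy, Myy
```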

Algorithm Implementation

Preprocessing Steps

The Harris corner detector typically operates on grayscale images, requiring conversion from color inputs such as RGB to a single-channel representation. This is achieved using a weighted sum formula that accounts for human luminance perception, given by I = 0.299R + 0.587G + 0.114B, where R, G, and B are the red, green, and blue channel values, respectively. This luminance-based conversion preserves perceptual brightness while simplifying subsequent computations. To mitigate the impact of noise in the input image, an optional Gaussian smoothing step is applied prior to derivative estimation. This involves convolving the grayscale image with a Gaussian kernel of standard deviation \sigma typically in the range 0.5 to 1.5, which suppresses high-frequency noise while minimizing blurring. The original formulation recommends this smoothing to enhance the reliability of local gradient measures without significantly altering corner structures. Image gradients I_x and I_y are computed using discrete derivative approximations, such as 3×3 Sobel kernels. Preprocessing also includes defining the Gaussian window for the structure tensor computation, commonly set to a size of 7×7 pixels with \sigma = 1, to capture local intensity variations effectively. These parameters are tuned empirically based on image characteristics to optimize detection performance. The original Harris detector is designed for single-scale analysis, applying preprocessing to the full-resolution image. However, for robustness to scale variations, extensions of the method may prepare an image pyramid by downsampling and smoothing at multiple levels, enabling multi-scale corner detection.
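These preprocessing steps translate directly into code; the sketch below applies the luminance formula above plus optional Gaussian denoising (function name and default sigma are illustrative).

```python
# Preprocessing sketch: luminance-weighted grayscale conversion plus denoising.
import numpy as np
from scipy import ndimage

def preprocess(rgb, sigma=1.0):
    """rgb: H x W x 3 array with channels in R, G, B order."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return ndimage.gaussian_filter(np.asarray(gray, dtype=float), sigma=sigma)
```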

Corner Response Function

The corner response function serves as the key metric in the Harris corner detector for evaluating the likelihood of a pixel being a corner, based on the eigenvalues of the structure tensor M. It is defined as R = \det(M) - k \, [\operatorname{trace}(M)]^2, where \det(M) = \lambda_1 \lambda_2 is the determinant (product of the eigenvalues \lambda_1 and \lambda_2), \operatorname{trace}(M) = \lambda_1 + \lambda_2 is the trace (sum of the eigenvalues), and k is an empirical constant typically set to approximately 0.05 (ranging from 0.04 to 0.06). This response R distinguishes different local image structures by analyzing variations in intensity. At corners, both eigenvalues are large, indicating significant change in all directions, which produces a large positive response (R > 0). Along edges, one eigenvalue dominates while the other is small, resulting in a negative response (R < 0). In flat or uniform regions, both eigenvalues are small, yielding R \approx 0. In practice, corners are identified by applying a threshold to R, typically in the range of 10 to 130 depending on the implementation and image characteristics, to retain only strong responses while suppressing noise. The formulation enables efficient computation, as \det(M) and \operatorname{trace}(M) can be calculated directly from the elements of M without performing a full eigendecomposition.
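Because \det(M) and \operatorname{trace}(M) follow directly from the tensor components, the response can be computed per pixel with plain array arithmetic, as in this sketch (k = 0.05 is one value from the typical range):

```python
# Harris response R = det(M) - k * trace(M)^2, with no eigendecomposition needed.
def harris_response(Mxx, Mxy, Myy, k=0.05):
    det_M = Mxx * Myy - Mxy * Mxy    # equals lambda1 * lambda2
    trace_M = Mxx + Myy              # equals lambda1 + lambda2
    return det_M - k * trace_M ** 2
```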

Feature Selection

The feature selection phase in the Harris corner detector identifies robust corner candidates from the computed response map by applying non-maximum suppression to retain only local maxima, thereby preventing the selection of clustered or redundant points. This suppression is typically performed within a small neighborhood, such as an 8-connected (3×3) window around each pixel, where a point is retained if its response value R exceeds that of all its immediate neighbors; larger or adaptive windows, with radii on the order of 2\sigma (where \sigma relates to the Gaussian smoothing scale used earlier), may be employed to handle varying image resolutions or noise levels. Following suppression, a threshold is applied to the response values to discard weak candidates, ensuring only significant corners are selected; the original formulation uses R > 0 as the baseline, but practical implementations often employ a higher empirical threshold (e.g., around 10-130 depending on image content and noise level) to filter out flat or edge-like regions effectively. Optionally, sub-pixel refinement can enhance localization accuracy by interpolating the response around candidate points, commonly via quadratic surface fitting to estimate offsets from integer pixel coordinates, which improves precision in applications requiring fine-grained feature matching. The output of this stage is a list of selected corner coordinates (x, y) paired with their corresponding response strengths R, often sorted in descending order of R to facilitate top-N selection for downstream tasks like feature tracking. Computationally, the suppression and thresholding are ideally applied to a Gaussian-smoothed version of the response map to enhance stability against minor perturbations, with provisions for anisotropic windows in cases of directional variations, though isotropic smoothing suffices for most standard scenarios.
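A sketch of 3×3 non-maximum suppression with thresholding and strength-sorted output, following the description above; the helper name and the SciPy-based implementation are our choices.

```python
# Feature selection sketch: 3x3 non-maximum suppression plus a response threshold.
import numpy as np
from scipy import ndimage

def select_corners(R, threshold):
    """Return (x, y, strength) tuples sorted by descending response."""
    # A pixel survives if it equals the maximum of its 3x3 neighborhood.
    local_max = R == ndimage.maximum_filter(R, size=3)
    rows, cols = np.nonzero(local_max & (R > threshold))
    order = np.argsort(-R[rows, cols])   # strongest corners first
    return [(int(cols[i]), int(rows[i]), float(R[rows[i], cols[i]])) for i in order]
```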

Variants and Enhancements

Shi-Tomasi Improvement

The Shi-Tomasi improvement, introduced by Jianbo Shi and Carlo Tomasi in their 1994 paper "Good Features to Track," refines the Harris corner detector specifically for applications in feature tracking across image sequences. Rather than relying on the Harris cornerness response R = \det(M) - k \operatorname{trace}(M)^2, where M is the structure tensor and k is an empirically tuned parameter, the method evaluates corner quality using the eigenvalues \lambda_1 and \lambda_2 of M directly. A candidate window is selected as a good feature if \min(\lambda_1, \lambda_2) > \lambda, with the threshold \lambda chosen empirically to balance sensitivity to noise and retention of prominent features. This approach stems from analyzing the tracking error in affine motion models, ensuring selected corners correspond to stable, real-world points that minimize displacement under small transformations. A key advantage of the Shi-Tomasi method is the elimination of the k parameter, which in the original Harris detector requires manual adjustment and can lead to inconsistent results across different images or scales. By focusing on the smaller eigenvalue, the method prioritizes corners with more isotropic strength (balanced \lambda_1 and \lambda_2), which are inherently more robust for tracking, as they exhibit lower sensitivity to small image perturbations. This results in features that better support algorithms like the Lucas-Kanade tracker, enhancing overall stability in dynamic scenes without ad hoc tuning. In terms of implementation, the Shi-Tomasi variant retains the Harris detector's preprocessing steps, including Gaussian smoothing of image gradients to form the structure tensor M, and computes its eigenvalues at each window location. The primary difference lies in the response function and selection criterion: eigenvalues are extracted explicitly, and the minimum value serves as the score, with corners ranked and selected by exceeding a relative threshold, such as 1% of the maximum minimum-eigenvalue score in the image, to yield a fixed number of top features. This eigenvalue-based criterion simplifies deployment while aligning detection more closely with tracking performance metrics. Empirical evaluations in the original work demonstrate the Shi-Tomasi method's superior reliability in motion sequences compared to the Harris detector. For instance, in a 26-frame real sequence simulating forward camera motion (2 mm per frame), the approach using affine motion dissimilarity effectively identified and tracked 102 stable features while discarding ambiguous ones affected by occlusions or reflections, achieving lower tracking errors than translation-only measures or prior interest operators. These benchmarks highlight its effectiveness in real-world video analysis, where consistent feature correspondence is critical.
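The minimum eigenvalue of the 2×2 structure tensor has a closed form, so the Shi-Tomasi score can be computed without a full eigendecomposition, as this sketch shows; OpenCV exposes the same criterion through cv.goodFeaturesToTrack.

```python
# Shi-Tomasi score: smaller eigenvalue of [[Mxx, Mxy], [Mxy, Myy]] in closed form.
import numpy as np

def min_eigenvalue(Mxx, Mxy, Myy):
    # Eigenvalues are trace/2 +/- sqrt((trace/2)^2 - det); take the minus branch.
    half_trace = 0.5 * (Mxx + Myy)
    det = Mxx * Myy - Mxy * Mxy
    return half_trace - np.sqrt(np.maximum(half_trace ** 2 - det, 0.0))
```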

Modern Adaptations

To achieve scale invariance, the Harris corner detector has been extended by computing the response across multiple scales, often using Gaussian image pyramids or scale-space representations. In these approaches, the Harris response is evaluated at different pyramid levels, and keypoints are selected based on maximum response across scales to ensure repeatability under resizing. A seminal method, the Harris-Laplace detector, combines the Harris operator for precise corner localization with the Laplacian of Gaussian for scale selection, approximating the latter via difference-of-Gaussians for efficiency; this yields robust multi-scale features that served as a precursor to more advanced descriptors like SIFT. For accelerated implementations, binary approximations such as the FAST detector (2006) draw inspiration from the Harris window-based intensity-change checks but replace the full eigenvalue analysis with a rapid segment test around a candidate pixel, achieving up to 4-5 times faster detection while maintaining comparable corner quality on standard benchmarks. GPU optimizations further enable real-time processing; for instance, CUDA-based parallelization of gradient computation and structure tensor assembly on modern GPUs processes VGA-resolution images at over 100 frames per second, making Harris viable for embedded and video applications. Learning-based hybrids integrate Harris as an initialization or supervision signal for learned detectors, where classical Harris responses generate pseudo-ground-truth labels or guide initial keypoint proposals during training, enhancing repeatability and robustness. Machine learning has also enabled adaptive tuning of the Harris response parameter k, optimizing it per image via models trained on diverse datasets to balance edge and corner sensitivity. Recent trends up to 2025 emphasize Harris's role in SLAM systems, such as ORB-SLAM3, where it scores FAST candidates for keypoint selection, supporting real-time mapping in dynamic environments.
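The pyramid-based idea can be illustrated with a simple octave loop; the sketch below evaluates the Harris response per pyramid level with illustrative thresholds and level count, and is a simplification rather than the full Harris-Laplace scale selection.

```python
# Simplified multi-scale sketch: Harris response evaluated on a Gaussian pyramid.
import cv2
import numpy as np

def multiscale_harris(gray, levels=4, k=0.04):
    keypoints = []                                # (x, y, scale) triples
    level_img = np.float32(gray)
    for level in range(levels):
        R = cv2.cornerHarris(level_img, blockSize=2, ksize=3, k=k)
        ys, xs = np.nonzero(R > 0.01 * R.max())
        scale = 2 ** level                        # map back to full resolution
        keypoints += [(int(x) * scale, int(y) * scale, scale) for x, y in zip(xs, ys)]
        level_img = cv2.pyrDown(level_img)        # halve resolution for next octave
    return keypoints
```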

Applications and Limitations

Key Applications

The Harris corner detector plays a pivotal role in feature tracking for optical flow estimation, particularly when integrated with methods like the Lucas-Kanade algorithm to analyze motion in video sequences. By identifying robust corner points that exhibit significant intensity changes in multiple directions, these features serve as reliable keypoints for tracking frame-to-frame displacements, enabling accurate estimation of pixel motion patterns. This approach is widely applied in video stabilization systems, where detected corners are tracked across frames to compute global motion models and compensate for unwanted camera shake, resulting in smoother footage for applications such as handheld videography. In image matching and registration tasks, the Harris corner detector facilitates the extraction of distinctive keypoints essential for aligning images from different viewpoints. These corners are used to establish correspondences between images, supporting processes like stitching, where multiple overlapping photographs are seamlessly blended into a wide-field composite. Furthermore, in structure-from-motion pipelines for 3D reconstruction, Harris-detected corners provide initial feature points that are matched across a sequence of images to estimate camera poses and recover scene geometry, forming the basis for building scalable 3D models from image inputs. For object recognition and retrieval, Harris corners contribute as local feature detectors in bag-of-words models, where detected points are clustered and quantized into visual vocabularies to represent image content invariantly to scale and viewpoint changes. This method treats corner descriptors as "words" to index and match objects within large databases, enabling efficient search and localization of query objects in complex scenes. In mobile augmented reality systems, Harris corners are often paired with compact descriptors like BRIEF to form lightweight feature sets that support real-time tracking and overlay rendering on resource-constrained devices. In robotics and autonomous systems, the Harris corner detector underpins visual simultaneous localization and mapping (SLAM) frameworks by providing stable landmarks for real-time pose estimation and environment mapping. These corners are tracked in monocular or stereo camera feeds to build incremental maps while localizing the robot or vehicle, as demonstrated in early visual SLAM implementations for indoor and outdoor environments. Such applications enhance autonomy in unmanned vehicles by enabling robust feature-based navigation in dynamic environments.
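As a concrete pairing of detection and tracking along the lines described above, the sketch below seeds pyramidal Lucas-Kanade optical flow with Harris-scored corners via OpenCV; the video file name and parameter values are illustrative.

```python
# Tracking sketch: Harris-scored corners fed into pyramidal Lucas-Kanade flow.
import cv2

cap = cv2.VideoCapture("input.mp4")               # hypothetical video file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# useHarrisDetector=True scores candidates with the Harris response measure.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                              minDistance=10, useHarrisDetector=True, k=0.04)

while pts is not None and len(pts) > 0:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track corners from the previous frame into the current one.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)  # keep successfully tracked points
    prev_gray = gray
```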

Limitations and Comparisons

The Harris corner detector exhibits several key limitations that constrain its applicability in diverse scenarios. Primarily, it is not inherently scale-invariant, as its fixed window size leads to degraded performance when features are zoomed or resized; experimental evaluations show repeatability rates peaking near 100% at a scale factor of 1 but dropping below 50% for factors as low as 0.5 or as high as 2. Additionally, the detector relies on an empirical parameter k (typically set between 0.04 and 0.06) in the corner response function, which requires manual tuning based on image content to balance edge rejection and corner detection, potentially leading to suboptimal results without domain-specific adjustment. Computationally, it operates at O(1) complexity per pixel (O(N) for the entire image) due to Gaussian smoothing and eigenvalue computations, making it intensive for high-resolution images; for instance, the autocorrelation-matrix calculation alone consumes about 50 ms on a 1600×1200 image using optimized implementations. Further shortcomings arise in challenging imaging conditions. The detector is highly sensitive to noise, with two-directional derivatives amplifying noise effects and causing rapid declines in repeatability (e.g., from over 80% to under 20% as noise standard deviation increases from 0 to 30). It also degrades in low-contrast or uniformly textured regions, where weak gradients fail to produce distinct corner responses, and is vulnerable to illumination variations, such as multiplicative changes, which can reduce repeatability by up to 40% in structured scenes. Moreover, the original formulation lacks affine invariance, meaning it performs poorly under viewpoint distortions like shearing, necessitating extensions such as Harris-Affine for broader robustness. In comparisons with other detectors, the Harris method improves upon earlier approaches like Moravec's by considering gradient changes in all directions rather than discrete 45-degree shifts, yielding greater rotational invariance and reduced directional bias. However, it lags behind scale-invariant methods like SIFT, which offer superior robustness to scale, rotation, and affine transformations at the cost of higher computational demands; Harris achieves faster detection but lower matching correctness in dynamic scenes, such as video frame correspondence tasks. Relative to FAST, Harris provides higher accuracy in distinguishing true corners from edges but is significantly slower, with FAST enabling real-time performance through simplified segment tests while maintaining comparable repeatability in noise-free conditions. Benchmark evaluations underscore these trade-offs, with Harris demonstrating 70-90% repeatability in ideal, transformation-free scenarios on standard datasets like the Affine Covariant Regions benchmark, though this falls to 40-60% under scale or illumination perturbations. In contemporary pipelines as of 2025, its standalone use has become outdated for edge cases involving deep learning-based tasks, where hybrid integrations that combine Harris initialization with neural networks for refinement enhance robustness in applications like object tracking amid occlusions.
