References
- [1] Review of visual odometry: types, approaches, challenges ... This paper presents a review of state-of-the-art visual odometry (VO) and its types, approaches, applications, and challenges.
- [2] Obstacle Avoidance and Navigation in the Real World by a Seeing ... (Sep 2, 1980). The robot uses a TV camera, stereo vision to locate objects, and a computer to plan and adjust obstacle-avoiding paths based on new perceptions.
- [3] Two Years of Visual Odometry on the Mars Exploration Rovers. During the first two years of operations, Visual Odometry evolved from an "extra credit" capability into a critical vehicle safety system.
- [4] Approaches, Challenges, and Applications for Deep Visual Odometry (Sep 6, 2020). Visual odometry (VO) is a prevalent way to deal with the relative localization problem ...
- [5] Visual Odometry (Dec 8, 2011). The term VO was coined in 2004 by Nistér in his landmark paper [1]. The term was chosen for its similarity to wheel odometry, which ...
- [6]
- [7] Deep Monocular Visual Odometry for fixed-winged Aircraft. In the early 1980s, Moravec [3] laid the foundation for the problem which would later be called Visual Odometry (VO). Early research was motivated by National ...
- [8] Visual Odometry on the Mars Exploration Rovers (JPL Robotics). Visual odometry tracks features in stereo images to estimate position and attitude, correcting wheel odometry errors, using maximum likelihood estimation.
- [9] The DARPA Grand Challenge: Ten Years Later (Mar 13, 2014). The DARPA Grand Challenge, a first-of-its-kind race to foster the development of self-driving ground vehicles.
- [10]
- [11]
- [12] OpenVINS. The OpenVINS project houses some core computer vision code along with a state-of-the-art filter-based visual-inertial estimator.
- [13] DeepVO: Towards End-to-End Visual Odometry with Deep ... (arXiv, Sep 25, 2017). This paper presents a novel end-to-end framework for monocular VO by using deep Recurrent Convolutional Neural Networks (RCNNs).
- [14] Event-based Vision, Event Cameras, Event Camera SLAM. We develop an event-based feature tracking algorithm for the DAVIS sensor and show how to integrate it in an event-based visual odometry pipeline. Features ...
- [15] Monocular visual SLAM, visual odometry, and structure from motion ... (Sep 30, 2024). Review article. Monocular visual SLAM, visual odometry, and structure from motion methods applied to 3D reconstruction: A comprehensive survey.
- [16] Review of visual odometry: types, approaches, challenges, and ... (Oct 28, 2016). The idea of estimating a vehicle's pose from visual input alone was introduced and described by Moravec in the early 1980s (Nistér et al. 2004; ...
- [17] Monocular Visual Odometry using a Planar Road Model to Solve ... Each class of algorithms has different benefits and drawbacks. Monocular algorithms suffer from the scale ambiguity in the translational camera movement ...
- [18] Tackling The Scale Factor Issue In A Monocular Visual Odometry ... (Dec 5, 2018). This paper presents a method of resolving the scale ambiguity and drift observed in a monocular camera-based visual odometry by using the slant ...
- [19] Instant Visual Odometry Initialization for Mobile AR (arXiv, Jul 30, 2021). In this paper, we present a 6-DoF monocular visual odometry that initializes instantly and without motion parallax. Our main contribution is a ...
- [20] Parallel Tracking and Mapping for Small AR Workspaces. This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed ...
- [21] Parallel Tracking and Mapping for Small AR Workspaces. We describe a fast method to relocalise a monocular visual SLAM (simultaneous localisation and mapping) system after tracking failure. The monocular SLAM ...
- [22] Three-Point Direct Stereo Visual Odometry (BMVA Archive). In this paper, we propose a novel three-point direct method for stereo visual odometry, which is more accurate and robust to outliers. To improve both accuracy ...
- [23] KinectFusion: Real-Time Dense Surface Mapping and Tracking. KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. In Symposium on User Interface Software and Technology (UIST), 2011.
- [24] Robust RGB-D Odometry Using Point and Line Features (IEEE Xplore). Lighting variation and uneven feature distribution are main challenges for indoor RGB-D visual odometry where color information is often combined with depth ...
- [25] Real-Time Stereo Visual Odometry for Autonomous Ground Vehicles. Visual odometry, which estimates vehicle motion from a sequence of camera images, offers a natural complement to these sensors: it is insensitive to soil ...
- [26] Visual Odometry based on Stereo Image Sequences with RANSAC ... The information given by such images suffices for precise motion estimation based on visual information [1], called visual odometry (e.g., Nistér et ...
- [27] Fast visual odometry and mapping from RGB-D data (IEEE Xplore). In this paper, we present a real-time visual odometry and mapping system for RGB-D cameras. The system runs at frequencies of 30Hz and higher in a single thread ...
- [28] High Altitude Stereo Visual Odometry (Robotics). This paper presents a novel modification to stereo visual odometry for accurate pose estimation at high altitudes, even with poor calibration, using a ...
- [29] Robust RGB-D Odometry Using Point and Line Features. Lighting variation and uneven feature distribution are main challenges for indoor RGB-D visual odometry where color information is often combined with depth ...
- [30] VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator (arXiv, Aug 13, 2017), by Tong Qin and 2 ...
- [31] VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State ... (Jul 27, 2018). In this paper, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for ...
- [32] Towards Robust Multi Camera Visual Inertial Odometry (Jul 24, 2020). IMUs provide high frequency data which can give useful information about short-term dynamics, while cameras provide useful exteroceptive ...
- [33] How Visual Inertial Odometry (VIO) Works (Think Autonomous, Apr 3, 2024). Visual Inertial Odometry is the science of fusing both Visual Odometry (from camera images) with Inertial Odometry (from an IMU).
- [34] Advantages and Applications of Visual-Inertial Odometry (ALLPCB, Sep 10, 2025). The advantage of VIO stems from the complementary characteristics of cameras and IMUs. Cameras perform well in most ...
- [35] Event-based Vision: A Survey (Robotics and Perception Group). We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision ...
- [36] Event-based Visual Odometry with Full Temporal Resolution ... (arXiv, Jun 1, 2023). Event-based cameras perform better than traditional cameras in these challenging scenarios. They detect pixelwise intensity change and report ...
- [37] Event-Based Visual/Inertial Odometry for UAV Indoor Navigation. The proposed approach uses event cameras, fusing events, standard frames, and inertial measurements for indoor navigation, with a front-end and back-end thread.
- [38] Modeling Varying Camera-IMU Time Offset in Optimization-Based ... Combining cameras and inertial measurement units (IMUs) has been proven effective in motion tracking, as these two sensing ...
- [39] Embedded Event-based Visual Odometry (HAL, Feb 12, 2024). Noise elimination is essential when using DVS. Indeed, the more noise, the more computation time and the more latency. In our case, a Background ...
- [40] Event-Based Visual Simultaneous Localization and Mapping ... (MDPI). Key challenges with EMs include balancing the temporal resolution, computational load, and processing efficiency [27,38]. Efficient algorithms are essential to ...
- [41] EVO: A Geometric Approach to Event-based 6-DOF Parallel ... This paper addresses a critical challenge in Industry 4.0 robotics by enhancing Visual Inertial Odometry (VIO) systems to operate effectively in dynamic and ...
- [42] arXiv:2302.01867 [cs.RO] (Feb 3, 2023). Integration of Visual Inertial Odometry (VIO) methods into a modular control system designed for deployment of Unmanned Aerial ...
- [43] Object Recognition from Local Scale-Invariant Features. This paper presents a new method for image feature generation called the Scale Invariant Feature Transform (SIFT). This approach transforms an image into ...
- [44] ORB: An efficient alternative to SIFT or SURF (IEEE Xplore). In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise.
- [45] ORB: an efficient alternative to SIFT or SURF (ResearchGate, Aug 6, 2025). In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise.
- [46] ORB-SLAM: A Versatile and Accurate Monocular SLAM System. This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large ...
- [47] Data Flow ORB-SLAM for Real-time Performance on Embedded ... We adopted a data flow paradigm to process the images, obtaining an efficient CPU/GPU load distribution that results in a processing speed of about 30 frames ...
- [48] Direct Sparse Odometry (Jakob Engel). In this paper we propose a sparse and direct approach to monocular visual odometry. To our knowledge, it is the only fully direct method that jointly ...
- [49] LSD-SLAM: Large-Scale Direct Monocular SLAM (Jakob Engel). The main contributions of this paper are (1) a framework for large-scale, direct monocular SLAM, in particular a novel scale-aware image alignment algorithm to ...
- [50] SVO: Fast Semi-Direct Monocular Visual Odometry. We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods.
- [51] FlowNet: Learning Optical Flow with Convolutional Networks (arXiv, Apr 26, 2015). In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task.
- [52] SuperPoint: Self-Supervised Interest Point Detection and Description. This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry ...
- [53] Bi-direction Direct RGB-D Visual Odometry (Oct 11, 2020). Similar to traditional visual odometry, RGB-D visual odometry can also be divided into feature-based methods and direct methods. Feature ...
- [54] Transformer-based model for monocular visual odometry (arXiv:2305.06121, May 10, 2023). Estimating the camera pose given images of a single camera is a traditional task in mobile robots and autonomous vehicles.
- [55] Vision Transformer based Visual Odometry with Attention Supervision (Aug 14, 2024). In this paper, we develop a Vision Transformer based visual odometry (VO), called ViTVO. ViTVO introduces an attention mechanism to perform ...
- [56] Visual Odometry with Transformers (arXiv:2510.03348, Oct 2, 2025). In this work, we demonstrate that monocular visual odometry can be addressed effectively in an end-to-end manner, thereby eliminating the need ...
- [57] Can Visual Foundation Models Achieve Long-term Point ... (Aug 24, 2024). Our findings indicate that features from Stable Diffusion and DINOv2 exhibit superior geometric correspondence abilities in zero-shot settings.
- [58] Visual odometry (IEEE Conference Publication). We give examples of camera trajectories estimated purely from images over previously unseen distances and periods of time. Published in: Proceedings of the 2004 ...
- [59] An Efficient Solution to the Five-Point Relative Pose Problem. The problem is to find the possible solutions for relative camera motion between two calibrated views given five corresponding points. The algorithm consists of ...
- [60] Monocular Omnidirectional Visual Odometry for Outdoor Ground ... This paper describes an algorithm for visually computing the ego-motion of a vehicle relative to the road under the assumption of planar motion.
- [61] Summary of ORB-SLAM: Bundle Adjustment, Local Optimization, and Loop Closure with Bag-of-Words.
- [62] Summary of VINS-Mono Optimization Techniques for VIO.
- [63] High-Precision, Consistent EKF-based Visual-Inertial Odometry. This paper addresses the problem of tracking a vehicle's egomotion in GPS-denied environments, using an inertial measurement unit (IMU) and a monocular camera.
- [64] Summary of Schur Complement Use in SchurVINS for Visual Inertial Navigation/Odometry.
- [65]
- [66] Evaluating Egomotion and Structure-from-Motion Approaches Using ... Two frequently employed methods are the relative pose error (RPE) and the absolute trajectory error (ATE). The RPE measures the difference between the estimated ...
- [67] NVIDIA-ISAAC-ROS/isaac_ros_visual_slam: Visual SLAM ... (GitHub). VSLAM provides a method for visually estimating the position of a robot relative to its start position, known as VO (visual odometry). This is particularly ...
- [68] Scalability in Perception for Autonomous Driving: Waymo Open ... Most autonomous driving systems fuse sensor readings from multiple sensors, including cameras, LiDAR, radar, GPS, wheel odometry, and IMUs. Recently released ...
- [69] Full Self-Driving (Supervised) (Tesla Support). On-board cameras with 360-degree visibility check your blind spots and move your Tesla vehicle into a neighboring lane while maintaining your speed and avoiding ...
- [70] How Oculus squeezed sophisticated tracking into pipsqueak hardware (Aug 22, 2019). The term for what the headset does is simultaneous localization and mapping, or SLAM. It basically means building a map of your environment in ...
- [71] Robotic Operations During Perseverance's First Extended Mission. In AVOID ALL, the primary Autonomous Driving mode, the rover uses Visual Odometry to maintain position knowledge, and employs Terrain Mapping (stereo vision ...
- [72] Localization of the Perseverance Rover at the Van Zyl ... (NASA ADS). To assess the accuracies of our localization software, based on established Visual Odometry (VO) techniques, we analyzed sequences of stereo images acquired by ...
- [73] Evolving Visual Odometry for Autonomous Underwater Vehicles. This paper improves visual odometry for AUVs, addressing robustness issues in complex scenarios, and shows progress in vehicle displacement estimation.
- [74] EndoSLAM dataset and an unsupervised monocular visual ... In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy ...
- [75]
- [76] Robustness of State-of-the-Art Visual Odometry and SLAM Systems (Jun 11, 2023). This thesis attempts to evaluate the robustness to motion blur of two open-source state-of-the-art VIO and SLAM systems, namely Delayed ...
- [77] Online learning-based anomaly detection for positioning system (Apr 1, 2025). In this study, an online learning-based anomaly detection approach is proposed for positioning systems of autonomous mobile robots.
- [78] RGB-D SLAM Dataset and Benchmark (Computer Vision Group). The dataset contains RGB-D data, ground-truth trajectory, and Kinect accelerometer data, recorded at 30 Hz, for evaluating visual odometry and SLAM systems.
- [79] The TUM VI Benchmark for Evaluating Visual-Inertial Odometry. The TUM VI benchmark is a dataset with diverse sequences for evaluating visual-inertial odometry, featuring 1024x1024 images, IMU data, and synchronized ...