References
- [1] Motion Estimation - an overview | ScienceDirect Topics. Motion estimation (ME) is defined as the process of estimating the motion that occurs between a reference frame and the current frame in a video sequence, ...
- [2] [PDF] A review on vision-based motion estimation - arXiv. Jul 19, 2024 · Existing vision-based motion estimation methods can be classified into two branches: matching-based methods, which work from Lagrangian ...
- [3]
- [4] Motion Estimator - an overview | ScienceDirect Topics. A motion estimator is defined as a computational algorithm that calculates the optical flow by estimating the motion vectors between consecutive image frames, ...
- [5] [PDF] Motion Estimation Techniques - Marco Cagnazzo. Motion estimation plays an important role in a broad range of applications encompassing image sequence analysis, computer vision and video.
- [6] 46 Motion Estimation - Foundations of Computer Vision. Motion tells us how objects move in the world, and how we move relative to the scene. It is an important grouping cue that lets us discover new objects.
- [7] [PDF] Lecture 13: Tracking motion features – optical flow. Nov 9, 2011 · Brightness constancy: projection of the same point looks the same in every frame. Spatial coherence: points move like their neighbors.
- [8] [PDF] Motion and optical flow Announcements. Feb 2, 2017 · How to estimate pixel motion from image H to image I? Solve the pixel correspondence problem: given a pixel in H, look for nearby pixels of the ...
- [9] A review of motion estimation algorithms for video compression. For this purpose, the block-based motion estimation (BBME) technique has been successfully applied in the video compression standards from H.261 to H.264.
- [10] Motion Estimation using Optical Flow - Scaler Topics. Jul 11, 2023 · Optical flow motion estimation is a method used in computer vision to estimate the motion of objects between consecutive frames of an image or video sequence.
- [11] Multiple View Geometry in Computer Vision. Geometry of single axis motions using conic fitting. ... Richard Hartley, Australian National University, Canberra; Andrew Zisserman, University of Oxford.
- [12] [PDF] An Iterative Image Registration Technique - CMU Robotics Institute. In this paper we present a new image registration technique that uses spatial intensity gradient information to direct the search for the position that yields ...
- [13] [PDF] Multiple View Geometry in Computer Vision, Second Edition. PART 0: The Background: Projective Geometry, Transformations and Estimation; 2. Projective Geometry and Transformations of 2D.
- [14] [PDF] Algorithmic Issues in Modeling Motion - Duke Computer Science. Another possible trade-off is between efficiency and accuracy. How much efficiency can be gained by maintaining a geometric structure approximately? For example ...
- [15] Determining optical flow - ScienceDirect.com. Optical flow cannot be computed locally, requiring a second constraint. A method assumes smooth brightness pattern velocity, using an iterative implementation.
- [16] [PDF] Probabilistic and Sequential Computation of Optical Flow Using ... In this paper we present a temporal, multi-frame extension of the dense optical flow estimation formulation proposed by Horn and Schunck [1] in which we use ...
- [17] [PDF] Determining Optical Flow - Faculty. Berthold K.P. Horn and Brian G. Schunck. Abstract: Optical flow cannot be computed locally, since only one independent measurement is available from the image ...
- [18] Glacier Surface Motion Estimation from SAR Intensity Images Based ... Aug 6, 2020 · This paper proposes a robust subpixel frequency-based image correlation method for dense matching and integrates the improved matching into a ...
- [19] [PDF] Robust motion estimation under varying illumination. The basic notion behind the use of a robust estimator is to obtain a statistical characterization of the data that is immune to the outliers. Several approaches ...
- [20] [PDF] Kanade-Lucas-Tomasi (KLT) Tracker - Carnegie Mellon University. History of the Kanade-Lucas-Tomasi (KLT) Tracker; Good Features to Track, Shi and Tomasi, 1994; the original KLT algorithm; method for ...
- [21] (PDF) Block matching algorithms for motion estimation - ResearchGate. Oct 21, 2017 · This paper is a review of the block matching algorithms used for motion estimation in video compression. It implements and compares 7 different types of block ...
- [22] [PDF] Distinctive Image Features from Scale-Invariant Keypoints. Jan 5, 2004 · This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between ...
- [23] ORB: An efficient alternative to SIFT or SURF - IEEE Xplore. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise.
- [24] Random sample consensus: a paradigm for model fitting with ... Jun 1, 1981 · A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a ...
- [25] FlowNet: Learning Optical Flow with Convolutional Networks - arXiv. Apr 26, 2015 · In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task.
- [26] RAFT: Recurrent All-Pairs Field Transforms for Optical Flow - arXiv. Mar 26, 2020 · We introduce Recurrent All-Pairs Field Transforms (RAFT), a new deep network architecture for optical flow. RAFT extracts per-pixel features.
- [27] [PDF] FlowNet 2.0: Evolution of Optical Flow Estimation With Deep Networks. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are ...
- [28] Learning to Estimate Hidden Motions with Global Motion Aggregation. Apr 6, 2021 · We introduce a global motion aggregation module, a transformer-based approach to find long-range dependencies between pixels in the first image.
- [29] ICCV 2025 Open Access Repository. Leveraging the synergy between regression and diffusion, GENMO achieves accurate global motion estimation while enabling diverse motion generation. We also ...
- [30] [PDF] Fast Local and Global Projection-Based Methods for Affine Motion ... The performance of the iterative nonlinear least squares estimators depends on both the convexity of the objective function (sum of the squared image ...
- [31] Motion displacement estimation using an affine model for ... Brockett developed a least-squares approach to approximate optical flow by affine vector fields using shape gramians. A broad class of gradient-based methods ...
- [32] Parametric estimation of affine deformations of planar shapes. Aug 5, 2025 · In Section 4, we propose an iterative solution of overdetermined systems, a direct analytical solution of non-singular systems, and a ...
- [33] Homography estimation using local affine frames - IEEE Xplore. Throughout this paper we propose a simple, direct linear transformation (DLT) like solution to the problem of homography estimation using local affine frames, ...
- [34] [PDF] Planar Affine Rectification from Change of Scale - CMP. A proof of the degenerate case of collinear points can be found in Appendix A. First, the concept of local scale change under planar homography is ...
- [35] [PDF] ASIFT: An Algorithm for Fully Affine Invariant Comparison. The ASIFT feature computation complexity is therefore 13.5 times the complexity for computing SIFT features. The complexity growth is “linear” and thus marginal ...
- [36] [PDF] High Accuracy Optical Flow Estimation Based on a Theory for ... Abstract: We study an energy functional for computing optical flow that combines three assumptions: a brightness constancy assumption, a gradient ...
- [37] [PDF] Pyramidal Implementation of the Lucas Kanade Feature Tracker ... The overall pyramidal tracking algorithm proceeds as follows: first, the optical flow is computed at the deepest pyramid level Lm. Then, the result of the ...
- [38] [PDF] Hierarchical Model-Based Motion Estimation. Arguments for use of hierarchical (i.e. pyramid based) estimation techniques for motion estimation have usually focused on issues of computational efficiency.
- [39] [PDF] PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost ... We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established ...
- [40] [PDF] The Laplacian Pyramid as a Compact Image Code. Apr 4, 1983 · Abstract: We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis.
- [41] H.264 : Advanced video coding for generic audiovisual services. Summary of H.264 motion estimation and related features.
- [42] H.265 : High efficiency video coding. Summary of motion estimation in HEVC (H.265).
- [43]
- [44]
- [45] [PDF] Multiple Reference Motion Compensation - UC San Diego. Abstract: Motion compensation exploits temporal correlation in a video sequence to yield high compression efficiency. Multiple reference frame motion ...
- [46] H.262 : Information technology - Generic coding of moving pictures and associated audio information: Video. Summary of motion estimation in MPEG-2/H.262 (block-based, half-pixel accuracy).
- [47] (PDF) Motion Vector Coding and Block Merging in Versatile Video ... Sep 6, 2021 · This paper overviews the motion vector coding and block merging techniques in the Versatile Video Coding (VVC) standard developed by the Joint Video Experts ...
- [48] H.266 : Versatile video coding. Summary of H.266 (Versatile Video Coding) from ITU-T REC-H.266.
- [49] An efficient versatile video coding motion estimation hardware. Jan 29, 2024 · In this paper, we propose an efficient VVC ME hardware. It is the first VVC ME hardware in the literature. It has real time performance with small hardware ...
- [50] [PDF] Bundle Adjustment — A Modern Synthesis. Abstract: This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision ...
- [51]
- [52] [PDF] Structure-From-Motion Revisited - CVF Open Access. This paper proposes a SfM algorithm that overcomes key challenges to make a further step towards a general-purpose SfM system. The proposed components of ...
- [53] Crowdsource Drone Imagery – A Powerful Source for the 3D ... Dec 28, 2020 · In this paper, we propose the idea of using crowdsource drone images and videos which are captured by amateurs for the documentation of heritage sites.
- [54] Direct Iterative Closest Point for real-time visual odometry. Abstract: In RGB-D sensor based visual odometry the goal is to estimate a sequence of camera movements using image and/or range measurements.
- [55] Understanding Iterative Closest Point (ICP) Algorithm with Code. Apr 30, 2025 · Iterative Closest Point (ICP) is a widely used classical computer vision algorithm for 2D or 3D point cloud registration.
- [56] ORB-SLAM3: An Accurate Open-Source Library for Visual ... - arXiv. Jul 23, 2020 · This paper presents ORB-SLAM3, the first system able to perform visual, visual-inertial and multi-map SLAM with monocular, stereo and RGB-D cameras.
- [57] Loop Closure Detection for Monocular Visual Odometry - IEEE Xplore. In order to decrease monocular visual odometry drift by detecting loop closure, this paper presents a comparison between state of the art, 2-channel and ...
- [58] Multi-Object Tracking Using Kalman Filter and Historical Trajectory ... We propose a multi-object tracking method using a Kalman filter and historical trajectory correction for surveillance videos.
- [59] FusionSORT: Fusion Methods for Online Multi-object Visual Tracking. May 10, 2025 · In our tracker, we use a Kalman filter (KF) [8] with a constant-velocity model for motion estimation of object tracklets in the image plane, ...
- [60] Visual multi-object tracking with re-identification and occlusion ... This paper proposes an online visual multi-object tracking (MOT) algorithm that resolves object appearance–reappearance and occlusion.
- [61] Deep Learning Based Real-Time Object Detection on Jetson Nano ... Aug 6, 2025 · Deep Learning Based Real-Time Object Detection on Jetson Nano Embedded GPU. June 2023; Lecture Notes in Electrical Engineering. DOI: 10.1007/978 ...
- [62] Vision-Based Embedded System for Noncontact Monitoring ... - arXiv. Sep 2, 2025 · We introduce an embedded monitoring system that utilizes a quantized MobileNet model deployed on a Raspberry Pi for real-time behavioral state ...
- [63] Kitti Odometry Dataset - Andreas Geiger. For this benchmark you may provide results using monocular or stereo visual odometry, laser-based SLAM or algorithms that combine visual and LIDAR information.
- [64] An unsupervised video anomaly detection method via Optical Flow ... Our proposed method, OFST, combines optical flow reconstruction and video frame prediction to improve video anomaly detection. OFST is composed of two modules, ...
- [65] Statistical Modeling of Long-Range Drift in Visual Odometry. Aug 7, 2025 · This paper models the drift as a combination of wide-band noise and a first-order Gauss-Markov process, and analyzes it using Allan variance.
- [66] [PDF] Past Research, State of Automation Technology, and ... - NHTSA. Advanced Research Projects Agency (DARPA) challenges (e.g., Montemerlo et al., 2008; Urmson et ... 13.08 s and 15.60 s to complete primarily visual and combined ...
- [67] RD-VIO: Robust Visual-Inertial Odometry for Mobile Augmented ... We also compared the ability to eliminate outliers in visual observations using IMU pre-integration predicted poses. ... SLAM system ORB-SLAM3 and recent DynaVINS ...
- [68]
- [69] [PDF] Super Odometry: A Robust LiDAR-Visual-Inertial Estimator for ... We propose Super Odometry, a high-precision multi-modal sensor fusion framework, providing a simple but effective way to fuse multiple sensors.
- [70] Camera, LiDAR, and IMU Based Multi-Sensor Fusion SLAM: A Survey. Sep 22, 2023 · This paper can be considered as a brief guide to newcomers and a comprehensive reference for experienced researchers and engineers to explore ...
- [71] CVPR Poster On-Device Self-Supervised Learning of Low-Latency ... Online, on-device learning allows robots to “train in their test environment”. We improve the time and memory efficiency of the self-supervised contrast ...
- [72] Self-Supervised Optical Flow Estimation for Event-based Cameras. Feb 19, 2018 · We present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event based cameras.
- [73] [PDF] Unsupervised Joint Learning of Optical Flow and Intensity with Event ... Event cameras rely on motion to obtain information about scene appearance. This means that appearance and motion are inherently linked: either both are ...
- [74] Using Bayesian deep learning approaches for uncertainty-aware ... Bayesian methods can quantify that uncertainty, and deep learning models exist that follow the Bayesian paradigm. These models, namely Bayesian neural ...
- [75] Representing Model Uncertainty in Deep Learning - arXiv. Jun 6, 2015 · In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian ...
- [76] [PDF] A General Framework for Uncertainty Estimation in Deep Learning. In this paper, we propose a novel framework for uncertainty estimation of deep neural network predictions. By combining Bayesian belief networks [5], [6], [7] ...
- [77] A survey of uncertainty in deep neural networks. Jul 29, 2023 · This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, ...
- [78] Quality Scalable Quantization Methodology for Deep Learning on ... Jul 15, 2024 · The methodology uses 3-bit parameter compression and quality scalable multipliers to reduce energy and size of CNNs for edge computing, with on ...
- [79] Energy-Efficient Optical Flow Estimation using Sensor Fusion and ... In addition to accurately recovering the motion parameters of the problem, our framework produces motion-corrected edge-like images with high dynamic range ...
- [80] 100FPS@1W Dense Optical Flow For Tiny Mobile Robots - arXiv. Nov 21, 2024 · In this paper, we propose EdgeFlowNet, a high-speed, low-latency dense optical flow approach for tiny autonomous mobile robots by harnessing the ...
- [81] [PDF] Quantum Motion Segmentation. This paper introduces the first algorithm for motion segmentation that relies on adiabatic quantum optimization of the objective function. The proposed method ...
- [82] (PDF) Ethical Considerations in AI-Powered Surveillance Systems. Oct 29, 2024 · This paper examines the moral implications of AI-driven surveillance, highlighting tensions between national security, public safety, and individual privacy.
- [83] Legal and ethical implications of AI-based crowd analysis - NIH. While AI offers promise in analysing crowd dynamics and predicting escalations, its deployment raises significant ethical concerns, regarding privacy, bias, ...