
Optical flow

Optical flow is the distribution of apparent velocities of movement of brightness patterns in an image, arising from relative motion between objects and the viewer. In computer vision, it describes the 2D motion field estimated from consecutive frames of an image sequence, capturing how pixel intensities displace over time under the assumption of brightness constancy.
The concept of optical flow traces its origins to perceptual psychology, where James J. Gibson introduced it in the mid-20th century to explain how animals perceive their environment through dynamic visual patterns during self-motion, such as the radial outflow of texture during forward locomotion. In computer vision, it was formalized in 1981 through two seminal works: Berthold K. P. Horn and Brian G. Schunck proposed a global method using variational principles and a smoothness constraint to overcome the inherent aperture problem, where local intensity changes yield only one equation for two velocity unknowns. Concurrently, Bruce D. Lucas and Takeo Kanade developed a local differential approach assuming constant flow within small windows, enabling iterative estimation for applications like stereo vision.
Optical flow estimation has since evolved into a cornerstone of computer vision, with methods progressing from classical techniques, such as gradient-based and region-based matching models, to energy-based, phase-based, and learning-based frameworks that address challenges like occlusions and large displacements. Key benchmarks, including the Middlebury dataset for small-motion evaluation and the KITTI and Sintel datasets for driving and complex synthetic scenes, have driven improvements in accuracy and robustness.
The technique finds broad applications across domains, including video analysis for action recognition and compression, robotics for navigation and obstacle avoidance, biomedical imaging for tracking tissue deformation and blood flow, and video surveillance for crowd motion analysis. Recent advancements incorporate deep learning, such as convolutional neural networks trained end-to-end on large datasets, along with 2025 developments like integration with depth foundation models and event-based cameras for robust estimation in dynamic scenes, to achieve state-of-the-art performance on complex scenes with non-rigid motions.

Fundamentals

Definition and Principles

Optical flow refers to the pattern of apparent motion of objects, surfaces, and edges in a visual scene, arising from the relative motion between an observer and the environment. This phenomenon describes how the visual stimulus changes over time as the observer or scene elements move, creating a dynamic array of light patterns on the retina or image sensor. Unlike true motion, which represents the actual three-dimensional velocities of objects in space, optical flow is a two-dimensional projection influenced by perspective and depth effects in the imaging process. For instance, the same physical movement can produce different flow patterns depending on the observer's viewpoint and the scene's depth structure, emphasizing that optical flow captures perceived rather than literal motion.
In human vision, optical flow plays a crucial role in motion perception by enabling the detection of self-motion (ego-motion) and the differentiation between object movement and environmental changes. It supports depth estimation through cues like motion parallax, where nearby elements appear to move faster across the visual field than distant ones, and facilitates understanding of heading direction via patterns such as the focus of expansion during forward locomotion. These perceptual mechanisms allow observers to navigate and interact effectively with their surroundings without relying solely on static visual cues.
A foundational principle underlying optical flow is the brightness constancy assumption, which posits that the light intensity reflected from surfaces remains consistent as viewpoints change, such that observed motion in the image stems primarily from geometric transformations rather than illumination variations. However, local measurements of this flow often suffer from the aperture problem, where the motion direction is ambiguous when viewed through a small window, as only the component perpendicular to local edges can be directly inferred, necessitating integration with global contextual information to resolve full motion vectors.
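The aperture problem can be made concrete with a short numerical sketch. The following minimal Python example is illustrative only; the function name and inputs are assumptions, with Ix, Iy, It standing for precomputed spatial and temporal intensity derivatives. It recovers the normal flow, the only motion component a purely local measurement constrains:

```python
import numpy as np

def normal_flow(Ix, Iy, It, eps=1e-8):
    """Illustrative sketch: the component of flow recoverable locally.

    A single local measurement constrains only the motion along the
    intensity gradient (the "normal flow"); the component parallel to an
    edge is invisible through a small aperture. Ix, Iy, It are arrays of
    spatial and temporal derivatives of the image brightness.
    """
    grad_mag = np.sqrt(Ix**2 + Iy**2) + eps
    speed = -It / grad_mag                 # signed speed along the gradient
    nx, ny = Ix / grad_mag, Iy / grad_mag  # unit gradient direction
    return speed * nx, speed * ny          # normal-flow components (u_n, v_n)
```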

Mathematical Representation

Optical flow is mathematically represented as a dense vector field \mathbf{u}(x,y) = (u(x,y), v(x,y)) over the image domain, where u(x,y) and v(x,y) denote the horizontal and vertical components of the apparent motion of brightness patterns at each pixel (x, y). The foundational assumption underlying this representation is the brightness constancy principle, which posits that the intensity I of a point remains unchanged as it moves across the image sequence: I(x, y, t) = I(x + u \Delta t, y + v \Delta t, t + \Delta t). This equation implies that observed changes in intensity arise solely from the motion of image features.
To derive the optical flow constraint from this assumption, consider a first-order Taylor expansion of the intensity function around (x, y, t): I(x + u \Delta t, y + v \Delta t, t + \Delta t) \approx I(x, y, t) + \frac{\partial I}{\partial x} u \Delta t + \frac{\partial I}{\partial y} v \Delta t + \frac{\partial I}{\partial t} \Delta t. Setting the expanded form equal to the original intensity and dividing by \Delta t yields the differential constraint I_x u + I_y v + I_t = 0, where I_x = \frac{\partial I}{\partial x}, I_y = \frac{\partial I}{\partial y}, and I_t = \frac{\partial I}{\partial t} are the spatial and temporal intensity gradients. This constraint relates the flow components to the image derivatives but provides only one equation for the two unknowns u and v.
The optical flow field arises from the projection of 3D scene motion onto the 2D image plane under a perspective camera model. For a point at 3D position (X, Y, Z) with velocity \mathbf{V} = (V_x, V_y, V_z) relative to the camera, and focal length f, the image coordinates are x = f X / Z and y = f Y / Z. Differentiating these projections gives the flow components u = \frac{f V_x - x V_z}{Z}, \quad v = \frac{f V_y - y V_z}{Z}. This mapping highlights how depth Z and radial motion V_z influence the observed 2D flow.
Despite its elegance, the optical flow constraint equation is inherently underconstrained, offering a single linear relation for two flow variables at each point, which necessitates additional assumptions, such as spatial smoothness, for unique solutions.
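As a worked illustration of the projection equations above, the Python sketch below (the helper name projected_flow is hypothetical) evaluates u = (f V_x - x V_z)/Z and v = (f V_y - y V_z)/Z on a small grid; pure forward motion produces the radial expansion pattern around the focus of expansion described earlier:

```python
import numpy as np

def projected_flow(x, y, V, Z, f=1.0):
    """Flow induced on the image plane by 3D camera-relative motion.

    Implements u = (f*Vx - x*Vz) / Z and v = (f*Vy - y*Vz) / Z for a
    pinhole camera with focal length f, image coordinates (x, y), depth Z,
    and 3D velocity V = (Vx, Vy, Vz) of the scene point relative to the camera.
    """
    Vx, Vy, Vz = V
    u = (f * Vx - x * Vz) / Z
    v = (f * Vy - y * Vz) / Z
    return u, v

# Scene points approaching the camera (Vz < 0) yield outward radial flow
# centered on the focus of expansion at the image origin.
xs, ys = np.meshgrid(np.linspace(-1.0, 1.0, 5), np.linspace(-1.0, 1.0, 5))
u, v = projected_flow(xs, ys, V=(0.0, 0.0, -1.0), Z=10.0, f=1.0)
```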

Historical Development

Early Concepts

The concept of optical flow emerged in the mid-20th century through studies in perceptual psychology and aviation research, focusing on how patterns of visual motion inform self-motion and environmental structure. During World War II, aviation psychology research investigated pilot disorientation, contributing to early understandings of optic flow patterns, such as radial expansions during approach or contractions during climb, that could lead to spatial errors. These studies from the 1940s revealed that misinterpretation of flow fields could lead to vertigo and control loss, prompting efforts to model visual cues for safer flight.
James J. Gibson advanced these ideas in the 1950s through his framework of ecological optics, positing that optic flow provides direct information for animal navigation and perception of affordances, the action possibilities in the environment, without requiring internal representations. In his seminal book, Gibson described optic flow as the continuous transformation of the visual array during locomotion, where the entire retinal field exhibits differential velocities signaling heading, speed, and obstacles, as seen in animals maintaining balance via flow gradients. This approach emphasized the global, textured nature of visual motion over isolated cues, influencing later biological models.
Concurrently, psychophysical research on insect vision introduced correlation-based mechanisms for motion detection, laying groundwork for understanding optic flow computation. In 1956, Bernhard Hassenstein and Werner Reichardt proposed a model for the optomotor response in the beetle Chlorophanus, using temporal correlation of intensity changes across adjacent receptors to detect motion direction, which implicitly captured local motion signals in a dense manner. This work demonstrated how simple neural circuits could process motion fields for stabilization, bridging neurophysiology and early computational modeling.
By the late 1970s, computational theories began integrating optic flow into visual processing hierarchies. David Marr and Shimon Ullman, in work completed in 1979 and published in 1981, outlined directional selectivity in early vision, contributing to the computation of velocity fields from image motion, distinct from sparse feature tracking that follows only prominent points like edges. This marked an initial theoretical shift toward dense estimation, assuming brightness constancy to relate image changes to motion, enabling 3D structure recovery from 2D projections.

Key Advancements

The 1980s marked a pivotal shift toward computational methods for optical flow estimation, beginning with the seminal variational approach by Horn and Schunck in 1981. This method formulated optical flow as an energy minimization problem, combining a data fidelity term derived from the brightness constancy assumption with a smoothness regularization term, enabling the computation of dense flow fields across the entire image. It addressed the aperture problem by enforcing spatial coherence, representing a foundational strategy that influenced subsequent dense techniques. Concurrently, Lucas and Kanade introduced a local least-squares solution in 1981, focusing on sparse feature points where motion is assumed constant within small windows. This approach solved for flow parameters using spatial gradients, offering computational efficiency for tracking distinct features and laying the groundwork for pyramidal implementations to handle larger displacements in later extensions.
The late 1980s and early 1990s saw advancements in handling uncertainties and outliers, with Anandan's 1989 Bayesian framework providing a hierarchical structure for dense displacement estimation. By integrating probabilistic confidence measures and multiresolution processing, it improved robustness to noise and illumination variations, bridging local and global paradigms. Complementing this, Black and Anandan's 1993 work incorporated robust statistics, inspired in part by the Mumford-Shah model, to manage motion discontinuities and outliers, replacing quadratic penalties with robust estimators that preserved sharp boundaries while suppressing erroneous flows.
A notable shift toward multilayer representations emerged in the late 1990s and early 2000s with subspace methods for motion estimation, which decomposed complex flows into lower-dimensional subspaces to model rigid or affine transformations efficiently in structured scenes. These techniques facilitated layered motion analysis, separating foreground from background by fitting models to subspaces of image data.
In the 2000s, computational efficiency advanced through GPU-accelerated methods, exemplified by Brox et al.'s coarse-to-fine warping strategy. This variational framework combined brightness and gradient constancy assumptions with discontinuity-preserving regularization, yielding high-accuracy dense flows by iteratively refining estimates across scales and leveraging hardware for real-time performance.
The mid-2010s marked the transition to deep learning in optical flow estimation, beginning with FlowNet in 2015, which used convolutional neural networks for end-to-end prediction of flow fields. Pre-2015 trends had already integrated convolutional matching as a precursor to deep learning, as in DeepFlow (2013), which fused descriptor-based matching with variational optimization to capture large displacements robustly. This hybrid approach enhanced endpoint accuracy on benchmarks by embedding learned features into traditional pipelines, paving the way for end-to-end neural methods.

Estimation Methods

Classical Models

Classical models for optical flow estimation emerged in the 1980s and rely on optimization techniques that enforce the brightness constancy assumption alongside spatial smoothness or local constancy constraints to resolve the aperture problem. These methods typically formulate the problem as minimizing an energy functional comprising a data term derived from image derivatives and a regularization term to promote coherent flow fields. They are solved iteratively using techniques like iterative relaxation or least-squares optimization, making them suitable for dense flow computation on grayscale images.
One foundational approach is the global regularization method proposed by Horn and Schunck, which minimizes the energy functional E = \int \left( I_x u + I_y v + I_t \right)^2 + \alpha \left( |\nabla u|^2 + |\nabla v|^2 \right) \, dx \, dy, where I_x, I_y, I_t are the spatial and temporal image derivatives, u and v are the flow components, \alpha > 0 balances the fidelity and smoothness terms, and the integral is over the image domain. This functional is solved by deriving the Euler-Lagrange equations and applying iterative fixed-point methods, yielding a dense flow field that assumes smooth variations in the scene. The method excels in regions of uniform motion but can propagate errors across occlusions due to the global coupling.
In contrast, local parametric models like the Lucas-Kanade approach assume constant flow within small image windows and solve for the motion parameters by least-squares fitting. For a window of pixels, the system is formulated as \mathbf{A}^T \mathbf{A} \mathbf{d} = \mathbf{A}^T \mathbf{b}, where \mathbf{A} is the matrix of stacked image gradients [I_x, I_y] for each pixel, \mathbf{d} = [u, v]^T is the flow vector, and \mathbf{b} = -I_t collects the temporal derivatives. This yields a sparse-to-dense estimate by tracking features or averaging over overlapping windows, providing computational efficiency but sensitivity to noise and to large motions outside the small-displacement assumption.
To address limitations with large displacements, multiresolution strategies employ image pyramids for coarse-to-fine refinement, starting with low-resolution levels to estimate coarse flow and warping subsequent finer levels accordingly. The pyramidal Lucas-Kanade method, for instance, builds Gaussian pyramids of the input frames and iteratively refines the flow from the coarsest level upward, scaling the previous estimate to initialize each level. This hierarchical process extends the range of valid displacement estimation while maintaining the local constancy assumption.
Robust variants enhance these models by replacing quadratic penalties with robust penalty functions to better handle outliers from occlusions or illumination changes. For example, total variation L1 (TV-L1) formulations minimize \int |I_x u + I_y v + I_t| + \lambda (|\nabla u| + |\nabla v|) \, dx \, dy, solved efficiently via duality-based primal-dual optimization for real-time performance. Such methods reduce error propagation at discontinuities, improving accuracy in complex scenes.
Performance of classical models is commonly evaluated using metrics like average angular error (AAE), which measures the angular deviation between estimated and ground-truth flow directions, and endpoint error (EPE), the Euclidean distance between estimated and ground-truth flow vectors. These are benchmarked on datasets such as the Middlebury optical flow evaluation set, first released in 2007 with sequences featuring subpixel ground truth and diverse motions. On this dataset, Horn-Schunck typically yields EPE around 1-2 pixels for small motions, while pyramidal Lucas-Kanade reduces this for larger displacements, highlighting trade-offs in smoothness versus locality.
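The Lucas-Kanade normal equations above translate almost directly into code. The following minimal numpy sketch is a single-window, single-iteration version under the small-displacement assumption; the function name and parameters are illustrative, and real implementations add pyramids, weighting, and iteration:

```python
import numpy as np

def lucas_kanade_window(I1, I2, x, y, win=7):
    """Minimal sketch of the Lucas-Kanade least-squares step for one window.

    Solves (A^T A) d = A^T b for d = [u, v]^T using spatial gradients of I1
    and the temporal difference I2 - I1 inside a (win x win) patch centered
    at integer pixel (x, y). Assumes grayscale floating-point images and a
    small displacement between the two frames.
    """
    h = win // 2
    patch1 = I1[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    patch2 = I2[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)

    Iy, Ix = np.gradient(patch1)          # spatial derivatives of frame 1
    It = patch2 - patch1                  # temporal derivative

    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # N x 2 gradient matrix
    b = -It.ravel()                                   # right-hand side

    # Least squares is equivalent to the normal equations and guards against
    # a rank-deficient A^T A (textureless regions, severe aperture problem).
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # estimated (u, v) for the window center
```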

Learning-Based Methods

Learning-based methods for optical flow estimation represent a paradigm shift from classical optimization techniques, employing deep neural networks to directly learn motion patterns from large-scale datasets and achieving superior performance on challenging scenarios such as occlusions and large displacements. These approaches, prominent since the mid-2010s, typically involve convolutional neural networks (CNNs) that process pairs of images to predict dense pixel displacements, often incorporating specialized layers for feature correlation and refinement.
Supervised learning-based methods pioneered end-to-end optical flow estimation using CNNs trained on ground-truth flow data. The seminal FlowNet, introduced in 2015, was the first such network, featuring a correlation layer that computes dense matches between patches extracted from two input frames via multiplicative patch comparisons, followed by convolutional layers to regress the flow field. This architecture enabled direct supervision from synthetic datasets, marking a departure from hand-crafted features and iterative optimization in prior methods.
To address the scarcity of annotated real-world data, unsupervised methods emerged, relying on photometric consistency assumptions without requiring ground-truth flow labels. These techniques formulate losses based on image reconstruction errors, using backward warping to align pixels from one frame to another according to the predicted flow, thereby enforcing brightness constancy. For instance, UnFlow (2018) incorporates an occlusion-aware bidirectional loss that estimates forward and backward flows, combined with photometric terms to minimize warping discrepancies.
Subsequent architectural refinements have further advanced accuracy through iterative designs that refine initial flow estimates. RAFT (2020), a recurrent all-pairs field transform network, constructs multi-scale 4D correlation volumes from pixel-wise features and employs a GRU-based update operator for multiple iterative refinements, yielding state-of-the-art results with an endpoint error of 2.855 pixels on the Sintel (final pass) benchmark. This design effectively captures fine-grained motions and handles large displacements iteratively. Transformer-based models have integrated attention mechanisms to model long-range dependencies, enhancing robustness in complex scenes. The Global Motion Aggregation (GMA) module (2021), built atop RAFT, uses transformer-style attention to aggregate global motion cues across the image, propagating reliable flow estimates to occluded or ambiguous regions via self-attention on feature similarities.
Key datasets have facilitated the training and evaluation of these methods. The FlyingChairs dataset (2015), comprising 22,872 synthetic image pairs of rendered chairs against randomized backgrounds with ground-truth flow, served as a foundational resource for supervised training due to its controlled generation of diverse motions. The MPI-Sintel dataset (2012), derived from the animated film Sintel and featuring realistic shading, large motions, and specular reflections, provides a rigorous evaluation benchmark, particularly for assessing the handling of occlusions and non-rigid deformations. These datasets highlight persistent challenges like textureless regions and motion boundaries, where learning-based methods excel by generalizing from data patterns.
Overall, learning-based approaches have delivered significant performance gains, including sub-pixel accuracy on benchmarks like Sintel and inference speeds exceeding 20 frames per second on modern GPUs, enabling practical deployment in resource-constrained settings. Since 2021, advancements have continued with more efficient and robust architectures. For example, SEA-RAFT (2024) simplifies the RAFT design for faster inference while achieving a state-of-the-art endpoint error of 3.69 pixels on the Spring benchmark. Diffusion-based models like FlowDiffuser (2024) incorporate generative priors to improve generalization across domains, particularly in low-texture areas. Additionally, DPFlow (2025) introduces adaptive dual-path processing for high-resolution scenes, attaining top results on the MPI-Sintel and KITTI 2015 benchmarks. These developments, reviewed in recent surveys as of 2024, emphasize efficiency, cross-dataset generalization, and handling of real-world complexities.
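For practitioners, pretrained learning-based models are readily available; torchvision, for example, ships RAFT weights. The sketch below assumes torchvision 0.12 or later with the pretrained Raft_Large_Weights and inputs whose height and width are divisible by 8:

```python
import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

# Load a pretrained RAFT model (supervised, trained largely on synthetic data).
weights = Raft_Large_Weights.DEFAULT
model = raft_large(weights=weights).eval()
preprocess = weights.transforms()  # normalizes the image pair for the model

# Two dummy RGB frames, shape (N, 3, H, W) with H and W divisible by 8.
frame1 = torch.rand(1, 3, 256, 256)
frame2 = torch.rand(1, 3, 256, 256)
img1, img2 = preprocess(frame1, frame2)

with torch.no_grad():
    flow_predictions = model(img1, img2)   # list of iterative refinements
flow = flow_predictions[-1]                # final (N, 2, H, W) flow field
```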

Applications

Computer Vision Tasks

Optical flow plays a central role in various computer vision tasks by providing dense motion information that enables the analysis of dynamic scenes in images and videos.
In motion segmentation, optical flow fields are clustered to isolate independently moving objects from the static background, often using techniques like k-means on flow vector magnitudes or on residuals after egomotion compensation. For instance, k-means clustering applied to estimated optical flow vectors segments motion components by grouping pixels with similar trajectories, facilitating the separation of foreground objects in video sequences; a minimal sketch of this clustering appears at the end of this section. This approach enhances robustness in dynamic environments by leveraging the spatial coherence of flow patterns.
Video stabilization relies on optical flow to estimate unintended camera shake, followed by compensation through warping to produce smoother footage. Algorithms compute dense flow between consecutive frames to model global motion, then apply smoothing filters to the estimated camera path before warping accordingly. A neural network-based method, for example, infers per-pixel warp fields directly from the input optical flow to mitigate jitter in handheld videos. This integration ensures real-time applicability in post-processing pipelines.
In action recognition, optical flow captures temporal dynamics as stacked input channels to convolutional neural networks, complementing spatial features from RGB frames. The two-stream architecture processes optical flow separately to extract motion-specific representations, achieving state-of-the-art performance in 2014 on datasets like UCF101, where it reached 88.0% accuracy, competitive with networks pre-trained on the much larger Sports-1M dataset. This method highlights optical flow's value in modeling subtle action cues, such as limb trajectories, over single-frame analysis.
Object tracking benefits from optical flow by predicting feature displacements across frames, which is fused with Kalman filters for robust state estimation and occlusion handling. Flow propagation from Harris corner points initializes Kalman predictions, updating object positions while accounting for motion uncertainties in sequences. Such hybrid approaches improve tracking precision in cluttered scenes by combining dense motion cues with probabilistic filtering.
For scene understanding in egocentric videos, optical flow from first-person perspectives aids in analyzing wearer intent, such as gaze prediction, by modeling head and eye movements through flow patterns. Algorithms estimate angular head motion using optical flow magnitudes and directions, correlating them with gaze shifts in social interactions. This enables unsupervised prediction of attention foci without explicit eye-tracking hardware.
Optical flow also integrates into broader pipelines, notably as a front-end component in SLAM and visual odometry systems for initial pose estimation. Dense flow tracks feature correspondences to compute relative camera motion, providing uncertainty estimates that refine monocular odometry before back-end optimization. In dynamic scenes, this role ensures accurate localization by filtering outlier flows during pose recovery. Learning-based flow estimation further enhances front-ends by offering robust, end-to-end motion supervision.
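The motion-segmentation use mentioned above can be sketched in a few lines. The example below is a hedged illustration, with parameter choices and the helper name as assumptions: it computes Farneback dense flow with OpenCV and clusters per-pixel flow vectors with k-means.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def segment_motion(prev_bgr, next_bgr, n_clusters=3):
    """Illustrative sketch: cluster dense flow vectors to separate motions.

    Uses OpenCV's Farneback dense flow and k-means on the per-pixel (u, v)
    vectors; the number of clusters and the feature set (adding pixel
    coordinates, flow magnitude, etc.) are application-dependent choices.
    """
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = flow.shape[:2]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        flow.reshape(-1, 2))
    return labels.reshape(h, w)  # per-pixel motion-cluster labels
```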

Robotics and Navigation

In robotics and navigation, optical flow serves as a critical cue for estimating ego-motion and interacting with dynamic environments, enabling autonomous agents to navigate without reliance on external positioning systems. Visual odometry, a key application, integrates successive optical flow measurements over time to reconstruct a robot's trajectory, providing pose estimates in GPS-denied settings. For instance, the ORB-SLAM system employs frame-to-frame feature tracking under a constant velocity motion model to maintain map consistency and loop closure in its back-end optimization, achieving accurate performance across indoor and outdoor scenes. This approach has been foundational for wheeled robots and UAVs, where cumulative flow integration corrects for drift and supports long-term localization.
Obstacle avoidance leverages patterns in optical flow fields, particularly expansion or contraction indicating time-to-contact (TTC) with approaching surfaces, to trigger evasive maneuvers; a minimal sketch of this relation appears at the end of this subsection. Insect-inspired systems from the 1990s pioneered this by mimicking fly retinotopic processing, where radial outward flow signals imminent collisions, allowing robots to adjust speed or direction based on flow divergence without explicit depth sensing. In drone stabilization, optical flow contributes to altitude hold and velocity control in feature-rich, GPS-denied environments; the PX4 autopilot, for example, fuses flow-derived horizontal velocities with rangefinder data to maintain stable hover and prevent drift indoors.
Bio-inspired applications extend these principles to mimic insect behaviors, such as corridor centering, where flies balance lateral optic flow on both sides to maintain equidistance from walls. Robotic implementations by Franceschini and colleagues in 2007 demonstrated this on an aerial micro-robot, using paired elementary motion detectors to regulate yaw and achieve uncrewed flight through narrow passages by equalizing contralateral flow rates. In multi-agent coordination, optical flow facilitates swarm behavior by enabling local collision avoidance and alignment; for instance, drone swarms use flow-based control graphs to maintain separation and cohesive motion, ensuring collision-free dynamics even under partial communication failures.
Despite these advances, challenges persist in achieving lighting invariance and computational efficiency for onboard systems. Variations in illumination violate the brightness constancy assumption underlying most estimators, leading to erroneous motion fields in shadowed or strongly textured environments. Additionally, dense flow computation demands high processing power, constraining real-time deployment on resource-limited hardware; optimizations like sparse feature tracking or bio-inspired event-based sensors are thus essential to balance accuracy with low-latency requirements in mobile robots.
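The expansion-based time-to-contact cue described above follows from a simple relation: for pure translation toward a fronto-parallel surface, the divergence of the flow field equals 2/TTC. The numpy sketch below is an idealized, hedged version of that relation (function name assumed; real systems add robust averaging and outlier rejection):

```python
import numpy as np

def time_to_contact(flow, dt=1.0, eps=1e-6):
    """Estimate time-to-contact from the divergence of a dense flow field.

    flow has shape (H, W, 2) holding (u, v) in pixels per frame. Under pure
    translation toward a fronto-parallel surface the flow expands radially
    from the focus of expansion with divergence 2 / TTC, so
    TTC ~= 2 / mean divergence. This is the idealized relation only.
    """
    u, v = flow[..., 0], flow[..., 1]
    du_dx = np.gradient(u, axis=1)
    dv_dy = np.gradient(v, axis=0)
    divergence = np.mean(du_dx + dv_dy) / dt
    return 2.0 / (divergence + eps)   # in the same time units as dt
```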

Hardware Implementations

Optical Flow Sensors

Optical flow sensors are specialized devices designed to compute motion estimates directly from captured image data, bypassing the need for full-frame cameras or extensive post-processing. These sensors primarily operate on correlation-based principles, where local patches from consecutive frames are compared using 2D correlators to detect shifts in brightness patterns. This approach enables sub-pixel precision in flow estimation by identifying the peak offset between patches, and is often implemented in analog or mixed-signal VLSI chips for real-time, low-power performance. Such hardware directly outputs displacement vectors, making it ideal for applications requiring low latency.
A key example of commercial optical flow sensors is the ADNS series, introduced by Agilent Technologies (later acquired by Avago) in the early 2000s for use in optical computer mice. These CMOS-based chips integrate an image sensor, LED illumination, and a correlation processor to compute 2D optical flow at high speeds, achieving frame rates of up to 6400 frames per second with 30x30 pixel resolution. The ADNS-3060, for instance, supports tracking velocities up to 40 inches per second and accelerations of 15 g, providing robust tracking across varied surfaces without mechanical components. This series demonstrated the feasibility of dedicated flow computation in compact, cost-effective hardware, influencing subsequent designs in robotics and embedded vision.
Event-based optical flow sensors represent an advanced category, drawing inspiration from insect vision to produce asynchronous outputs only when motion-induced changes occur. Introduced around 2005, these neuromorphic chips, such as those mimicking retinal processing, generate sparse "events" encoding local flow directions and magnitudes rather than full images, reducing data volume and power draw. Early implementations, like insect-inspired navigation sensors, used parallel address-event representation to compute flow in real time, enabling applications in dynamic environments.
Miniaturized optical flow sensors have also been developed for constrained platforms, notably through ongoing work at EPFL since the 1990s on miniature flying robots. These bio-inspired chips, with areas as small as 1 mm², employ arrays of photodetectors and local processing elements to estimate flow via contrast changes, suitable for micro-robots where size and weight are critical. A 20x20 pixel continuous-time sensor, for example, operates at 1 kHz to provide 2D motion cues in compact form factors.
These sensors excel in power efficiency, with many designs consuming less than 1 mW, facilitating integration into battery-powered devices like drones or wearables. Neuromorphic implementations, in particular, achieve this through event-driven processing that avoids constant frame sampling. Despite these advantages, optical flow sensors face inherent limitations, including short operational ranges, typically limited to a few millimeters from the target surface due to integrated fixed-focus optics, and low spatial resolution (often under 30x30 pixels), which pales in comparison to software methods on high-resolution cameras. These constraints restrict their use to close-proximity tasks but underscore their role as efficient motion estimators in specialized hardware.
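The 2D-correlator principle used by these chips can be illustrated in software. The toy sketch below is a brute-force version under assumed inputs; hardware performs the patch comparisons in parallel and interpolates around the correlation peak for sub-pixel precision.

```python
import numpy as np

def correlate_shift(patch_prev, patch_curr, max_shift=4):
    """Toy model of the 2D correlator in optical flow sensor chips.

    Compares the previous patch against shifted overlaps of the current
    patch and returns the integer displacement (dx, dy) of the pattern
    between frames. Patches must be larger than 2 * max_shift.
    """
    prev = patch_prev.astype(np.float64)
    curr = patch_curr.astype(np.float64)
    h, w = prev.shape
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping regions when curr is shifted by (dx, dy) w.r.t. prev.
            a = prev[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            b = curr[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            score = np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if score > best_score:
                best_score, best_shift = score, (dx, dy)
    return best_shift  # pattern displacement in pixels between the two frames
```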

Integrated Systems

Integrated systems for optical flow encompass hardware architectures that embed optical flow computation directly into sensors, processors, or multi-modal platforms, enabling efficient, low-latency motion estimation in resource-constrained environments such as robotics and edge devices. These systems typically combine dedicated ASICs or FPGAs with imaging sensors and auxiliary components like inertial measurement units (IMUs), reducing data transfer overhead and power consumption compared to software-based approaches on general-purpose CPUs. By performing computations on-sensor or within a tightly coupled SoC, they achieve real-time performance while minimizing latency, often targeting applications in drones, autonomous vehicles, and augmented reality.
A prominent example is an on-sensor optical flow camera that integrates a global shutter CMOS image sensor with a custom ASIC for parallel flow computation. This design processes full-resolution frames (1124 × 1364 pixels) at up to 88 frames per second (fps) and reduced-resolution frames (280 × 336 pixels) at 240 fps, with power efficiency suitable for nano-drones and other resource-constrained platforms. The ASIC implements a gradient-based algorithm, delivering sub-pixel accuracy while consuming under 100 mW and demonstrating a 10-20× speedup over CPU implementations on embedded platforms. Such integration eliminates the need for offloading raw frames, enabling deployment where bandwidth is limited.
In visual-inertial odometry (VIO) systems, optical flow hardware is fused with IMU data to enhance robustness in dynamic environments. The VD56G3 sensor from STMicroelectronics integrates an optical flow ASIC with a global shutter camera, paired with an MPU6500 IMU and processed on a Raspberry Pi Compute Module 4. This setup modifies the VINS-Mono pipeline by replacing CPU feature tracking with on-sensor flow vectors, reducing end-to-end latency by 49.4% (from 148 ms to 75 ms), compute load by 53.7%, and power by 14.24% (630 mW savings) at 50 fps. The system maintains tracking accuracy on datasets like EuRoC, with average endpoint errors below 0.05 pixels, supporting applications in UAV navigation.
Neuromorphic integrated circuits offer bio-inspired alternatives, leveraging spiking neural networks (SNNs) on event-driven hardware for sparse, asynchronous processing. Platforms like Intel's Loihi chip implement optical flow via SNNs trained on datasets such as MVSEC, achieving real-time rates of 36 fps with a weighted average endpoint error (WAEE) reduction of up to 15.6% over conventional methods. These systems use dynamic vision sensors (DVS) like the DVS128, integrating address-event representation (AER) interfaces to process motion events directly, with power efficiencies below 1 mW per core. By compressing models to 0.32 million parameters, they enable deployment on low-power chips, ideal for always-on perception in edge devices.
Further advancements include VLSI designs for multi-core optical flow processors, such as those using directional histogram matching to generate one motion vector per clock cycle. Fabricated in 0.18 μm CMOS, these achieve 1080p resolution at 30 fps with low power draw, integrating with SoCs for automotive driver assistance systems. Overall, these integrated approaches prioritize latency and energy efficiency, with ongoing research focusing on hybrid photonic-electronic circuits to push beyond 1000 fps while handling high-dynamic-range scenes.
