Epipolar geometry

Epipolar geometry is the intrinsic projective geometry that governs the relationship between two views of a three-dimensional scene captured by cameras at distinct positions, independent of the scene's structure and solely dependent on the cameras' internal and external parameters. It provides constraints on point correspondences between the images, reducing the search space for matching features from two-dimensional areas to one-dimensional lines, thereby facilitating tasks such as stereo matching and depth estimation in computer vision. Central to epipolar geometry are the epipoles, which are the images of one camera's optical center projected onto the other camera's image plane, and the epipolar lines, which arise from the intersection of epipolar planes—formed by the baseline (the line joining the two camera centers) and a world point—with the image planes. These lines ensure that corresponding points in the two images lie on conjugate epipolar lines, embodying the epipolar constraint that simplifies correspondence finding and enforces coplanarity in the 3D-to-2D projection process. The geometry is mathematically encapsulated by the fundamental matrix \mathbf{F}, a 3×3 matrix satisfying \mathbf{p}'^\top \mathbf{F} \mathbf{p} = 0 for corresponding points \mathbf{p} and \mathbf{p}', which can be computed from at least eight point correspondences via the eight-point algorithm. Introduced formally by Longuet-Higgins in 1981 through the essential matrix for calibrated cameras, epipolar geometry has become foundational in multiview geometry, enabling applications like 3D reconstruction, structure from motion, and visual odometry by allowing recovery of relative camera pose and scene structure from image pairs without prior 3D knowledge. For uncalibrated cameras, the fundamental matrix extends this framework, while the related essential matrix \mathbf{E} = [\mathbf{t}]_\times \mathbf{R} (with 5 degrees of freedom) addresses the rotation \mathbf{R} and translation \mathbf{t} in calibrated setups. Its principles underpin algorithms in libraries such as OpenCV and continue to influence advancements in autonomous systems and robotics.

Fundamental Concepts

Epipolar Plane

The epipolar plane is defined as the plane that contains the optical centers of two cameras, denoted as O_1 and O_2, along with a point X in the three-dimensional world, and is equivalently described as the plane spanned by the baseline—the line connecting O_1 and O_2—and the point X. This plane plays a central role in epipolar geometry by establishing coplanarity among the camera centers, the world point, and the corresponding image points in each view. Geometrically, the epipolar plane provides intuition for the constraints in stereo vision: the rays from X to O_1 and from X to O_2 both lie within this plane, and its intersection with the image planes of the two cameras produces a pair of corresponding lines—known as epipolar lines—that bound the possible locations of the projections of X. This limits the search for matching points between views to one dimension along these lines, rather than across the entire two-dimensional image, thereby simplifying correspondence problems in computer vision tasks. The epipole in each image is the point where the baseline pierces that image plane, serving as the common point for all such epipolar lines. The concept of the epipolar plane originated in 19th-century studies of stereo vision and photogrammetry, with early formalization attributed to G. Hauck in 1883 in work that explored projective relations in paired images. It was later rigorously integrated into modern computer vision through the foundational work of H.C. Longuet-Higgins in 1981, who developed algorithms for scene reconstruction that highlighted the plane's geometric implications for two-view correspondences. Diagrams illustrating the epipolar geometry typically depict the baseline as an axis around which a pencil of such planes rotates, with each plane slicing through the camera centers and a specific world point X, showing the ray bundles from X to each optical center confined within the plane and their projections forming intersecting lines on the image planes. These visualizations emphasize how varying X generates a family of epipolar planes, all sharing the baseline, to constrain multi-view projections.

Epipole

In epipolar geometry, the epipole refers to the projection of one camera's optical center onto the image plane of the other camera. Specifically, for two cameras with optical centers O_1 and O_2, the epipole e_1 in the second image is the projection of O_1, while e_2 in the first image is the projection of O_2. This point arises from the epipolar plane formed by the baseline connecting the two optical centers and a point in the scene, where the epipole marks the intersection of the baseline with the respective image plane. The epipole serves as a fixed point in each image that encapsulates the geometric relationship between the two views, independent of the scene structure. It represents the vanishing point of the baseline direction in the image and is the common intersection of all epipolar lines corresponding to points along that baseline. For uncalibrated cameras, the epipole is the right null-vector of the fundamental matrix F (satisfying F e = 0) or the left null-vector (satisfying e'^T F = 0); for calibrated cameras, it similarly relates to the essential matrix E through the intrinsic matrices K and K' via E = K'^T F K. A special case occurs when the cameras are in a parallel configuration involving pure translation parallel to the image planes, positioning the epipoles at infinity and resulting in parallel epipolar lines across the images. Otherwise, the epipole occupies a finite position in the image, determined by the relative rotation and translation between the cameras. Computationally, the epipole can be found geometrically by projecting the optical center of one camera using the projection matrix of the other; for instance, the epipole e' in the second image is given by e' = P' C, where P' is the projection matrix of the second camera and C is the homogeneous representation of the first camera's center, [O_1; 1].
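A minimal numerical sketch (assuming NumPy; the function name is illustrative, not a standard API) recovers both epipoles as the null vectors of a given fundamental matrix:

import numpy as np

def epipoles_from_F(F):
    # Right null vector: F @ e = 0 gives the epipole in the first image
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]
    # Left null vector: F.T @ e_prime = 0 gives the epipole in the second image
    _, _, Vt = np.linalg.svd(F.T)
    e_prime = Vt[-1]
    # Normalize to pixel coordinates; this division degenerates for
    # epipoles at infinity, whose last homogeneous component is ~0
    return e / e[2], e_prime / e_prime[2]

For a rectified pair, the last homogeneous component of each epipole approaches zero, which is the numerical signature of the epipoles lying at infinity.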

Epipolar Line

In epipolar geometry, the epipolar line in one image is defined as the intersection of the epipolar plane with that image's plane. This line represents the set of all possible projections onto the image plane of rays emanating from a 3D point through the center of the other camera. A key property of epipolar lines is that all such lines in a given image pass through the epipole, which is the projection of the other camera's optical center. Corresponding points in the two images, say x_1 and x_2, lie on a pair of conjugate epipolar lines l_1 and l_2, ensuring that the rays from each camera center to these points are coplanar with the baseline connecting the centers. Geometrically, the epipolar line plays a crucial role in stereo vision by constraining the search for matching points: instead of searching the entire 2D plane of the second image for a correspondent to a point in the first image, the search is reduced to a 1D scan along the epipolar line. This simplification is fundamental to efficient stereo correspondence algorithms in computer vision. For example, consider a point x_1 observed in the first image; the corresponding epipolar line l_2 in the second image is determined by the relative pose of the two cameras, forming a locus of possible locations where the matching point x_2 must lie. In a typical diagram, this is illustrated as a line emanating from the epipole in the second image, highlighting the constraint imposed by the epipolar plane.
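This constraint can be applied directly in code; a short sketch (NumPy assumed, names illustrative) computes the epipolar line for a point and the perpendicular distance of a candidate match from it, which is the usual inlier test:

import numpy as np

def epipolar_line(F, x1):
    # l' = F x1: coefficients (a, b, c) of the line a*u + b*v + c = 0 in image 2
    return F @ x1

def line_point_distance(l, x):
    # Perpendicular distance from homogeneous point x to line l = (a, b, c)
    x = x / x[2]
    return abs(l @ x) / np.hypot(l[0], l[1])

A correspondence (x_1, x_2) is typically accepted when line_point_distance(epipolar_line(F, x_1), x_2) falls below a small pixel threshold.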

Mathematical Framework

Epipolar Constraint

The epipolar constraint is a fundamental algebraic relation in epipolar geometry that links corresponding points in two images captured by different cameras. For a scene point X projecting to homogeneous image points \mathbf{x} and \mathbf{x}' in the first and second images, respectively, the constraint asserts that \mathbf{x}'^T F \mathbf{x} = 0, where F is the 3×3 fundamental matrix encoding the epipolar geometry between the views. This constraint arises from the coplanarity of the camera centers, the scene point, and its two images. Consider two cameras with projection matrices P = K [I | 0] and P' = K' [R | T], where K and K' are intrinsic matrices, R is the relative rotation, and T is the translation (baseline) between camera centers C and C'. The projections are \mathbf{x} \sim P X and \mathbf{x}' \sim P' X, with \sim denoting equality up to scale. The points C, C', and X define an epipolar plane \pi, which also contains the image points \mathbf{x} and \mathbf{x}'. Geometrically, the optical rays from each camera center through \mathbf{x} and \mathbf{x}', together with the baseline, are coplanar. For the calibrated case where K = K' = I, the normalized coordinates satisfy \mathbf{x}'^T [T]_\times R \mathbf{x} = 0, or equivalently \mathbf{x}'^T E \mathbf{x} = 0 with essential matrix E = [T]_\times R. For uncalibrated cameras, incorporating the intrinsics gives the fundamental matrix F = K'^{-T} E K^{-1}, leading to the epipolar constraint \mathbf{x}'^T F \mathbf{x} = 0. This derivation holds under the assumption that the cameras are in general position, with non-coincident centers and no degenerate configurations. Geometrically, the constraint enforces that the corresponding point \mathbf{x}' must lie on the epipolar line \mathbf{l}' = F \mathbf{x} in the second image, reducing the search space for matches from the entire plane to a one-dimensional line; this again reflects the coplanarity of the optical rays with the baseline. The derivation relies on the pinhole camera model, assuming ideal perspective projection without lens distortion or other aberrations, and applies to uncalibrated cameras, with any calibration information embedded implicitly in F. The fundamental matrix F is a 3×3 homogeneous matrix of rank 2, possessing 7 degrees of freedom: it has 9 entries defined up to scale (8 degrees of freedom), minus one additional constraint from the rank deficiency \det(F) = 0. This ensures that the epipole \mathbf{e}', the image of the first camera center in the second image, satisfies F^\top \mathbf{e}' = 0, aligning with the geometric degeneracy at the epipole.
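The derivation can be checked numerically; the following sketch (NumPy assumed; the intrinsics, rotation, and translation are arbitrary example values) builds E and F from a known relative pose and verifies that a projected scene point satisfies the constraint:

import numpy as np

def skew(t):
    # Cross-product matrix [t]_x
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])  # shared intrinsics
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])  # small rotation about y
T = np.array([1.0, 0.0, 0.1])                        # baseline

E = skew(T) @ R                                # essential matrix
F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)  # fundamental matrix

X = np.array([0.5, -0.2, 4.0])   # scene point in the first camera's frame
x = K @ X                         # projection under P = K [I | 0]
x_p = K @ (R @ X + T)             # projection under P' = K [R | T]
print(x_p @ F @ x / (x_p[2] * x[2]))  # ~0 up to floating-point error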

Fundamental Matrix

The fundamental matrix F is a 3×3 homogeneous matrix that encodes the epipolar geometry between two uncalibrated cameras, satisfying the epipolar constraint \mathbf{x}'^\top F \mathbf{x} = 0 for corresponding points \mathbf{x} and \mathbf{x}' in the two images. This matrix relates the projective structures of the two views without requiring knowledge of the camera intrinsics. Key properties of F include its rank being exactly 2, which arises from the geometric constraint of the epipolar configuration. The epipoles are the null vectors of F and F^\top, satisfying F \mathbf{e} = 0 and F^\top \mathbf{e}' = 0, where \mathbf{e} and \mathbf{e}' are the epipoles in the respective images. Epipolar lines are obtained as \mathbf{l}' = F \mathbf{x} in the second image for a point \mathbf{x} in the first, and symmetrically \mathbf{l} = F^\top \mathbf{x}' in the first image for a point \mathbf{x}' in the second. Overall, F has 7 degrees of freedom due to its rank deficiency and scale ambiguity. Estimation of F typically requires at least 8 point correspondences and employs the 8-point algorithm, which solves a linear system via least squares to minimize the algebraic error \sum (\mathbf{x}'^\top F \mathbf{x})^2. The algorithm constructs a data matrix A from the correspondences, where each row encodes the outer-product terms, and solves A \mathbf{f} = 0 for the vectorized F (denoted \mathbf{f}) as the right singular vector corresponding to the smallest singular value of A. To enforce the rank-2 constraint post-estimation, singular value decomposition (SVD) is applied to set the smallest singular value to zero: if F = U \operatorname{diag}(\sigma_1, \sigma_2, \sigma_3) V^\top, then the corrected F' = U \operatorname{diag}(\sigma_1, \sigma_2, 0) V^\top. For numerical stability, points are pre-normalized by translating their centroid to the origin and scaling so the average distance from the origin is \sqrt{2}, which significantly improves conditioning. In the presence of outliers, the 8-point algorithm is often combined with RANSAC, which iteratively samples minimal subsets (8 points) to hypothesize F, then counts inliers based on a thresholded epipolar error before refitting on the consensus set. Decomposition of F allows recovery of camera matrices up to a projective transformation, with one canonical form being P = [I \mid 0] and P' = [[\mathbf{e}']_\times F \mid \mathbf{e}'], where \mathbf{e}' is the epipole and [\cdot]_\times denotes the skew-symmetric cross-product matrix. This reconstruction is ambiguous up to the 15 degrees of freedom of the projective group. In degenerate cases, estimation breaks down or simplifies: when all scene points lie on a single plane, the two views are related by a homography and F cannot be uniquely determined from the correspondences, while under pure planar motion the fundamental matrix takes a special form with only 6 degrees of freedom.
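A compact sketch of the normalized 8-point algorithm described above (NumPy assumed; the function names are illustrative, and production code would typically call an established routine such as OpenCV's cv2.findFundamentalMat):

import numpy as np

def normalize(pts):
    # Translate centroid to origin, scale mean distance to sqrt(2)
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T

def eight_point(pts1, pts2):
    # pts1, pts2: (N, 2) arrays of corresponding pixel coordinates, N >= 8
    p1, T1 = normalize(pts1)
    p2, T2 = normalize(pts2)
    # One row per correspondence: x2^T F x1 = 0 vectorizes as kron(x2, x1) . f = 0
    A = np.array([np.kron(b, a) for a, b in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value
    U, s, Vt = np.linalg.svd(F)
    F = U @ np.diag([s[0], s[1], 0.0]) @ Vt
    # Undo the normalization: x2^T T2^T F_hat T1 x1 = 0
    F = T2.T @ F @ T1
    return F / F[2, 2]  # fix scale (assumes F[2,2] != 0)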

Essential Matrix

The essential matrix, introduced by H.C. Longuet-Higgins in 1981, provides a fundamental constraint for calibrated stereo vision systems, enabling the recovery of relative camera pose from corresponding image points. In the context of two calibrated cameras, it relates normalized image coordinates \mathbf{x} and \mathbf{x}' (obtained by applying the inverse of the intrinsic matrix K to pixel coordinates) through the equation \mathbf{x}'^\top E \mathbf{x} = 0. This matrix encodes the epipolar geometry in Euclidean space and is expressed as E = [\mathbf{t}]_\times R, where R is the 3×3 rotation matrix describing the relative orientation between the cameras, \mathbf{t} is the translation vector (up to scale), and [\mathbf{t}]_\times denotes the skew-symmetric matrix formed from \mathbf{t}. For uncalibrated cameras working with pixel coordinates \tilde{\mathbf{x}} and \tilde{\mathbf{x}}', the essential matrix relates to the fundamental matrix F via F = K'^{-\top} E K^{-1}, where K and K' are the intrinsic calibration matrices of the respective cameras. The essential matrix is a 3×3 matrix of rank 2, characterized by two equal non-zero singular values and one zero singular value, reflecting its geometric constraints. It possesses 5 degrees of freedom, arising from the 3 degrees of freedom of the rotation and the 2 degrees of freedom of the translation direction (with overall scale ambiguity). To recover the camera pose, the essential matrix undergoes singular value decomposition (SVD) as E = U \Sigma V^\top, where \Sigma = \operatorname{diag}(\sigma, \sigma, 0) with \sigma > 0. The decomposition proceeds by forming E' = U \operatorname{diag}(1,1,0) V^\top, yielding the translation direction as the last column of U (up to sign) and rotation candidates via R = U W V^\top or R = U W^\top V^\top, where W is the matrix of a 90-degree rotation about the z-axis. This process generates four possible relative pose configurations, and the correct one is selected through a chirality check, ensuring that the reconstructed 3D points lie in front of both cameras (positive depth).
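The four-fold ambiguity is easy to enumerate in code; a sketch under the conventions above (NumPy assumed; the chirality test is left to a separate triangulation step):

import numpy as np

def decompose_essential(E):
    # Returns the four candidate (R, t) pairs; t is a direction only (scale is lost)
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (determinant +1); flipping signs only rescales E
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1., 0., 0.],
                  [0., 0., 1.]])  # 90-degree rotation about the z-axis
    t = U[:, 2]  # translation direction, up to sign
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

Each candidate is then tested by triangulating one or more correspondences and keeping the pair for which the points have positive depth in both views.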

Reconstruction and Applications

Triangulation

Triangulation in epipolar geometry recovers the 3D position of a point \mathbf{X} from its corresponding 2D projections \mathbf{x} and \mathbf{x}' in two calibrated images, by finding the intersection of the back-projected rays originating from the camera centers through these points, whose images lie along the epipolar lines. This process assumes known camera projection matrices P and P', and relies on the epipolar constraint to validate correspondences. The rays are typically parameterized in homogeneous coordinates as \mathbf{X} = \mathbf{C} + \lambda M^{-1} \tilde{\mathbf{x}} for the first camera, where \mathbf{C} is the camera center, M is the leading 3×3 submatrix of the camera matrix, and \lambda is a scalar, with a similar form for the second view. The linear method, known as the direct linear transformation (DLT), solves the homogeneous system A \mathbf{X} = 0, where A is a 4×4 matrix formed from the projection equations P \mathbf{X} = w \mathbf{x} and P' \mathbf{X} = w' \mathbf{x}' in homogeneous coordinates, with w and w' as scale factors. Specifically, the rows of A are derived from the cross-product forms \mathbf{x} \times (P \mathbf{X}) = 0 and \mathbf{x}' \times (P' \mathbf{X}) = 0, yielding two independent equations per view. The solution is obtained via singular value decomposition (SVD) of A, taking \mathbf{X} as the right singular vector corresponding to the smallest singular value, ensuring \| \mathbf{X} \| = 1. This approach is projective-invariant and computationally efficient but minimizes algebraic error rather than geometric reprojection error. For refinement, non-linear least-squares optimization minimizes the reprojection error: \mathbf{X} = \arg\min_{\mathbf{X}} \left\| d(\mathbf{x}, P \mathbf{X}) \right\|^2 + \left\| d(\mathbf{x}', P' \mathbf{X}) \right\|^2, where d(\cdot, \cdot) is the Euclidean distance in the image plane; this can be solved iteratively (e.g., using Levenberg-Marquardt) starting from the DLT estimate, or via a non-iterative sixth-degree polynomial root-finding method for a global optimum under Gaussian noise assumptions. Accuracy in triangulation is influenced by the baseline length between cameras: longer baselines reduce depth ambiguity and improve precision, as short baselines (e.g., 1 unit relative to scene depth) can amplify errors from noise, leading to up to 10% relative error in position for 1-pixel noise. To resolve potential ambiguities, chirality is enforced by selecting the solution where \mathbf{X} lies in front of both cameras, verified by positive depth (e.g., w > 0 and w' > 0 in the projection equations). An implementation of the linear DLT can follow this outline:
import numpy as np

def skew(v):
    # Skew-symmetric cross-product matrix: skew(v) @ u == np.cross(v, u)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def triangulate_dlt(P, x, P_prime, x_prime):
    # Form the 4x4 matrix A from two rows of each cross-product constraint
    A = np.vstack([(skew(x) @ P)[:2],
                   (skew(x_prime) @ P_prime)[:2]])

    # SVD: A = U * diag(s) * V^T; X is the right singular vector
    # associated with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]  # homogeneous coordinates, ||X|| = 1

    # Chirality check: depth must be positive in both views
    if (P @ X)[2] * X[3] > 0 and (P_prime @ X)[2] * X[3] > 0:
        return X / X[3]  # inhomogeneous result
    return None  # invalid due to chirality

Here skew computes the skew-symmetric matrix for the cross-product, and the depth test uses the sign of the third component of each projected point (weighted by the sign of the homogeneous coordinate X[3]) as a simple positive-depth check.
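The non-linear refinement described above can be sketched with SciPy's least-squares solver, starting from the DLT output (a sketch under assumed conventions; the helper name follows the code above and is illustrative):

import numpy as np
from scipy.optimize import least_squares

def refine_point(X0, P, x, P_prime, x_prime):
    # Minimize reprojection error over the inhomogeneous 3D point
    def residuals(Xw):
        Xh = np.append(Xw, 1.0)
        res = []
        for Pm, obs in ((P, x), (P_prime, x_prime)):
            proj = Pm @ Xh
            res.extend(proj[:2] / proj[2] - obs[:2] / obs[2])
        return res
    # Levenberg-Marquardt, initialized at the DLT estimate
    return least_squares(residuals, X0[:3] / X0[3], method='lm').x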

Stereo Matching

Epipolar geometry significantly simplifies stereo matching by constraining the possible locations of corresponding points between two images to epipolar lines, transforming the otherwise exhaustive 2D search into an efficient 1D search along these lines. This geometric constraint allows for the computation of disparity d = x - x', where x and x' are the coordinates of matching points in the left and right images after rectification, providing a direct measure of depth variation. To facilitate this 1D search, stereo rectification is typically applied as a preprocessing step, warping the images via homographies derived from the fundamental matrix \mathbf{F} or essential matrix \mathbf{E} to align epipolar lines horizontally across views. The Loop-Zhang method computes these homographies to achieve rectification without requiring full camera calibration, ensuring corresponding points share the same vertical coordinate and simplifying disparity estimation to horizontal shifts. Classical algorithms leverage this setup through block matching, which compares small image patches centered on each pixel in the reference image to candidates along the epipolar line in the target image, often using the sum of absolute differences (SAD) as the similarity metric to select the best match; a sketch of this approach appears below. More sophisticated local methods aggregate costs over larger windows or multiple scales to improve robustness. For global optimization, semi-global matching (SGM) extends local matching by propagating costs along multiple 1D paths across the image, minimizing an energy function that balances data and smoothness terms while approximating full global consistency. Introduced by Hirschmüller in 2005, SGM excels in handling weakly textured regions and has been widely adopted for its balance of accuracy and efficiency. Occlusions, where points visible in one view are hidden in the other, pose challenges to matching; graph-cut methods address this by formulating stereo as a minimum-cut problem in a graph, with nodes representing pixels and possible disparities, and edges encoding matching costs, smoothness penalties, and explicit occlusion labels. Kolmogorov and Zabih's 2001 approach handles occlusions explicitly, producing sharp disparity boundaries and filling invisible regions appropriately. Since 2015, deep learning has enhanced epipolar-based stereo matching by employing convolutional neural networks (CNNs) as feature extractors to compute invariant descriptors and matching costs, outperforming handcrafted methods like SIFT in varied lighting and textures. Zbontar and LeCun's MC-CNN, for example, trains a CNN on patch pairs to predict similarity scores, integrating seamlessly with traditional aggregation steps for disparity refinement. These learned approaches have driven state-of-the-art performance on benchmarks, enabling robust 1D searches along epipolar lines even in unrectified setups via geometric priors. In practice, epipolar-guided stereo matching generates dense depth maps essential for robotics, supporting tasks like obstacle avoidance and object grasping through real-time disparity analysis. By 2025, it underpins advanced driver-assistance systems (ADAS) in autonomous vehicles, providing scalable depth perception for obstacle detection and path planning at highway speeds. These correspondences can then inform triangulation for full depth recovery.
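An illustrative (deliberately unoptimized) block-matching sketch over a rectified grayscale pair, using SAD along horizontal scanlines; the window size and disparity range are arbitrary assumptions:

import numpy as np

def block_match_sad(left, right, max_disp=64, win=5):
    # left, right: rectified grayscale images as (H, W) float arrays
    H, W = left.shape
    half = win // 2
    disp = np.zeros((H, W), dtype=np.float32)
    for y in range(half, H - half):
        for x in range(half + max_disp, W - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            # Epipolar constraint after rectification: search only along row y
            costs = [np.abs(ref - right[y - half:y + half + 1,
                                        x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = np.argmin(costs)
    return disp

Real systems replace the inner loop with vectorized cost volumes and add left-right consistency checks, but the 1D scan along the row is exactly the epipolar constraint at work.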

Structure from Motion

Structure from motion (SfM) is a technique that reconstructs three-dimensional scenes and estimates camera poses from a set of two-dimensional images captured from unknown viewpoints, leveraging epipolar geometry to establish correspondences and relative orientations across multiple views. The process is inherently iterative, beginning with pairwise image analysis using the fundamental or essential matrix to initialize epipolar constraints, which confine feature matches to lines in corresponding images, thereby reducing the search space for accurate point correspondences. This enables the incremental or global estimation of camera positions and scene structure, forming the foundation for applications in 3D reconstruction where direct depth measurements are unavailable. The typical SfM pipeline commences with feature detection and matching, where robust descriptors like SIFT are extracted from images and matched pairwise, often guided by the epipolar constraint derived from the fundamental matrix to filter outliers and ensure geometric consistency. Initial camera poses are then estimated for image pairs using the essential matrix for relative orientation when intrinsics are known, followed by triangulation to recover initial 3D points, though this step builds on pairwise geometry without resolving full multi-view consistency yet. The core refinement occurs via bundle adjustment, a nonlinear least-squares optimization that jointly minimizes the reprojection error across all camera poses P_i and 3D points X_j by adjusting parameters to align observed image points with their projected counterparts. Scale ambiguity, inherent in projective reconstructions from uncalibrated cameras, is resolved by incorporating known intrinsics or ground control points to yield metric structure, preventing arbitrary scaling of the scene. Epipolar geometry plays a pivotal role in SfM by providing the epipolar constraint for initializing relative poses between views, ensuring that matched features satisfy the geometric relationship encoded in the fundamental matrix for uncalibrated setups. For calibrated cameras, the essential matrix further decomposes into rotation and translation components, enabling precise relative orientation estimation that propagates through the multi-view reconstruction. This constraint not only facilitates robust matching but also underpins the chaining of relative poses across the image set, mitigating drift in pose estimation. Advancements in SfM have distinguished incremental approaches, which sequentially add images to a growing reconstruction while performing local bundle adjustments to maintain accuracy, from global methods that jointly optimize all poses and points in a single framework for scalability. Incremental SfM, as implemented in software like COLMAP, excels in robustness and precision for unordered image collections by incrementally registering new views against the existing model and refining via repeated bundle adjustment. In contrast, global SfM techniques, such as those in GLOMAP, estimate rotations and translations across the entire view graph simultaneously, offering faster computation for large-scale datasets while addressing challenges like scale ambiguity through hybrid optimization. In modern applications as of 2025, SfM remains integral to photogrammetry for generating detailed 3D models from aerial imagery, supports augmented reality by enabling real-time scene reconstruction, and underpins simultaneous localization and mapping (SLAM) in robotics for dynamic environment navigation.
Recent developments, including GPU-accelerated frameworks like cuSfM, have dramatically improved efficiency, achieving up to 10-fold speedups over traditional CPU-based systems like COLMAP while preserving reconstruction quality, thus facilitating deployment in resource-constrained robotics and AR/VR pipelines.
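A minimal two-view initialization sketch using OpenCV (cv2.findEssentialMat, cv2.recoverPose, and cv2.triangulatePoints are real OpenCV APIs; the matched points pts1, pts2 and the intrinsics K are assumed inputs):

import cv2
import numpy as np

def initialize_two_view(pts1, pts2, K):
    # pts1, pts2: (N, 2) float arrays of matched pixel coordinates
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # Chirality-checked decomposition of E into R, t (t known only up to scale)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4 x N homogeneous
    return R, t, (Xh[:3] / Xh[3]).T  # initial structure for bundle adjustment

In a full pipeline, this pair seeds the reconstruction, after which new views are registered against the model and the whole system is refined by bundle adjustment.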

Special Configurations

Simplified Camera Setups

In simplified camera setups, epipolar geometry exhibits reduced complexity, facilitating easier correspondence matching and reconstruction in stereo vision systems. One common configuration involves parallel optical axes, where the two camera image planes are aligned parallel to each other, often achieved through rectification. In this setup, the epipoles lie at infinity, resulting in parallel epipolar lines that are typically horizontal across both images. This alignment confines corresponding points to the same image row, simplifying the search for matches to a one-dimensional disparity computation along horizontal scanlines, which directly relates to depth estimation via the formula d = \frac{b f}{Z}, where d is disparity, b is baseline length, f is focal length, and Z is depth (a short numerical sketch follows below). Another special case is pure forward translation, where the cameras undergo motion solely along the optical axis (perpendicular to the image planes) without rotation. Here, the essential matrix simplifies to the skew-symmetric form E = [\mathbf{t}]_\times, where [\mathbf{t}]_\times represents the cross-product matrix of the translation vector t. This configuration positions the epipoles at the focus of expansion, typically the principal point, and epipolar lines radiate from this point, constraining correspondences to one-dimensional searches along radial lines and easing matching algorithms by reducing the search space from two dimensions. Verging cameras, with converging optical axes, represent a setup akin to human binocular vision, where the cameras are toed-in toward a convergence point. In this arrangement, epipoles are finite and often located within or near the image centers, causing epipolar lines to fan out from these points rather than being parallel. This fanning geometry increases matching complexity due to varying line orientations but models natural visual convergence effectively. To mitigate these complexities, the rectification process transforms arbitrary camera pairs into a parallel configuration using two homographies, H_1 and H_2, computed from the fundamental matrix F. These homographies rotate and warp the images so that epipolar lines become parallel and horizontal, simulating pure lateral translation without altering the intrinsic geometry. The process typically involves finding a transformation that maps the epipoles to infinity, often via the normalized eight-point algorithm for F estimation. These simplifications offer significant benefits, including reduced computational load by constraining searches to linear paths, which accelerates stereo matching algorithms. Additionally, error analysis shows that longer baselines in parallel setups improve depth accuracy by minimizing quantization errors in disparity, though excessive baseline can introduce occlusion issues; for instance, depth uncertainty scales inversely with baseline length in rectified systems.
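Under the rectified parallel model, the disparity relation inverts directly to depth; a small sketch (NumPy assumed; the baseline and focal-length values are placeholders):

import numpy as np

def depth_from_disparity(disp, baseline_m, focal_px):
    # Z = b * f / d; disparity in pixels, baseline in metres, focal length in pixels
    d = np.asarray(disp, dtype=np.float64)
    return np.where(d > 0, baseline_m * focal_px / np.maximum(d, 1e-9), np.inf)

# Example: 0.12 m baseline, 700 px focal length, 35 px disparity -> Z = 2.4 m
print(depth_from_disparity(35, 0.12, 700.0))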

Pushbroom Sensor Geometry

Pushbroom sensors, commonly employed in satellite remote sensing systems, feature a linear array of detectors oriented perpendicular to the platform's direction of motion, capturing one-dimensional image lines that are combined to form elongated strip images as the sensor advances. This contrasts with the instantaneous full-frame capture of traditional pinhole cameras, resulting in a dynamic imaging geometry where each scan line corresponds to a distinct camera center and orientation along the trajectory. Seminal models, such as the linear pushbroom camera model, represent this process as a non-linear transformation with 11 degrees of freedom, encapsulating the sensor's forward motion and fixed attitude. In the context of epipolar geometry, the correspondence between points in two pushbroom images deviates from straight epipolar lines, manifesting instead as curved paths—typically hyperbolas or hyperbola-like curves—arising from the continuous motion and the non-coplanarity of the exposed lines relative to the baseline. These curves stem from the intersection of the epipolar plane with the varying focal planes over time, with epipoles effectively positioned at infinity along the scan direction due to the near-linear relative motion between corresponding lines. This curvature constrains feature matching to one-dimensional searches along non-linear loci, reducing computational cost compared to unconstrained two-dimensional searches while accounting for the sensor's sweeping acquisition. To adapt the epipolar constraint, a modified fundamental matrix is employed, often formulated as a 4×4 linear pushbroom (LP) fundamental matrix that operates on extended homogeneous coordinates (u, uv, v, 1) to enforce the bilinear relation between corresponding points, yielding the epipolar curve equation \begin{pmatrix} u' & u'v' & v' & 1 \end{pmatrix} F \begin{pmatrix} u \\ uv \\ v \\ 1 \end{pmatrix} = 0. This matrix, with 11 degrees of freedom, accommodates transformations specific to the pushbroom model and enables estimation from at least 11 point correspondences, facilitating robust matching along the curved paths without requiring full ephemeris data in simplified variants. Piecewise linear approximations of these curves are sometimes used for practical implementation, particularly in high-resolution imagery. Applications of pushbroom epipolar geometry are prominent in remote sensing, exemplified by Landsat satellites, which utilize pushbroom sensors like the Operational Land Imager (OLI) to acquire multispectral strips for Earth observation. Epipolar resampling techniques leverage this geometry to perform orthorectification, aligning stereo pairs by transforming images such that corresponding points lie along common horizontal lines, thereby enabling accurate digital elevation model (DEM) extraction and terrain analysis with sub-pixel precision. For instance, resampling methods based on orbital models have demonstrated mean errors below 0.3 pixels in SPOT and KOMPSAT imagery, underscoring their utility in geometric correction. Recent advancements integrate pushbroom epipolar geometry with multi-spectral and hyperspectral data, particularly in post-2019 studies focusing on terrain mapping. These efforts employ bundle adjustment and tie-point extraction tailored to hyperspectral pushbroom sensors, improving feature correspondence in aerial surveys for applications like forest monitoring and topographic mapping, with methods achieving robust georeferencing from raw spectral lines without prior geometric models.
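A small sketch of evaluating the 4×4 linear pushbroom constraint described above on lifted coordinates (NumPy assumed; F_lp stands for a given LP fundamental matrix and is not computed here):

import numpy as np

def lift(u, v):
    # Extended homogeneous coordinates (u, uv, v, 1) used by the LP model
    return np.array([u, u * v, v, 1.0])

def lp_epipolar_residual(F_lp, uv1, uv2):
    # Bilinear residual; near zero for corresponding pushbroom image points
    return lift(*uv2) @ F_lp @ lift(*uv1)

Sweeping uv2 over one image while holding uv1 fixed traces out the hyperbola-like epipolar curve along which the residual vanishes.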

References

1. Richard Hartley and Andrew Zisserman. Multiple View Geometry in Computer Vision, Second Edition. Cambridge University Press, March 2004.
2. CS231A Course Notes 3: Epipolar Geometry (PDF).
3. Introduction to Epipolar Geometry and Stereo Vision. LearnOpenCV, December 28, 2020.
4. A computer algorithm for reconstructing a scene from two projections (PDF).
5. Epipolar Geometry and the Fundamental Matrix (PDF).
6. A historical survey of geometric computer vision. HAL-Inria, November 25, 2011.
7. H. C. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature 293, 133–135 (September 10, 1981).
8. In defence of the 8-point algorithm (PDF).
9. Random Sample Consensus: A Paradigm for Model Fitting with ... (PDF).
10. An Investigation of the Essential Matrix (PDF).
11. Triangulation (PDF).
12. Multiple View Geometry in Computer Vision, Second Edition (PDF).
13. 13.2 Stereo Matching. Carnegie Mellon University (PDF).
14. A "Loop and Zhang" Reader for Stereo Rectification (PDF).
15. Semi-Global Matching – Motivation, Developments and Applications (PDF).
16. Kolmogorov and Zabih's Graph Cuts Stereo Matching Algorithm. June 16, 2015 (PDF).
17. Stereo Matching by Training a Convolutional Neural Network to ... (PDF).
18. Robust stereo depth estimation in autonomous vehicle applications ... September 23, 2025.
19. Determining the Epipolar Geometry and its Uncertainty: A Review (PDF).
20. Johannes Schönberger. Structure-from-Motion Revisited (PDF).
21. Bundle Adjustment — A Modern Synthesis (PDF).
22. Global Structure-from-Motion Revisited. arXiv:2407.20219, July 29, 2024.
23. "SLAM, SfM and Photogrammetry: What's in a Name?" June 15, 2018 (PDF).
24. Visual SLAM and Structure from Motion in Dynamic Environments.
25. CuSfM: CUDA-Accelerated Structure-from-Motion. arXiv:2510.15271, October 17, 2025.
26. Epipolar geometry. CS@Cornell, April 28, 2020 (PDF).
27. Linear Pushbroom Cameras (PDF).
28. A Study on the Epipolarity of Linear Pushbroom Images. ASPRS (PDF).
29. A New Epipolarity Model Based on the Simplified ... (PDF).
30. Landsat 8: Operational Land Imager (OLI) push-broom sensor description.
31. A linear pushbroom satellite image epipolar resampling method for ...
32. Bundle Adjustment of Aerial Linear Pushbroom Hyperspectral ... May 16, 2024.