
Bundle adjustment

Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal three-dimensional (3D) structure and viewing parameter estimates by minimizing a cost function that quantifies the error between observed image points and their positions predicted by the current model. This nonlinear least-squares optimization technique simultaneously adjusts the positions of 3D features and the parameters of multiple cameras, such as pose and calibration, to achieve the best fit to the measurements. Originating in photogrammetry for aerial mapping, it takes its name from the "bundles" of light rays connecting 3D points to their 2D projections across images, ensuring geometric consistency in the reconstruction. The method was pioneered in the late 1950s by Duane C. Brown, who developed an analytical least-squares approach for adjusting control points in multi-image photogrammetric blocks, marking the first comprehensive bundle adjustment technique. By the 1990s and 2000s, advances in sparse solvers and numerical optimization, including Levenberg-Marquardt algorithms and preconditioned conjugate gradients, enabled efficient handling of large-scale problems, carrying the technique from photogrammetry into broader computer vision applications. Modern implementations often incorporate robust cost functions to mitigate outliers and gauge-fixing constraints to resolve the scale and coordinate ambiguities inherent in the optimization. In contemporary use, bundle adjustment serves as a core refinement step in structure-from-motion (SfM) pipelines, where it optimizes sparse 3D models derived from feature correspondences in unordered image sets, as demonstrated in large-scale internet photo collections. It is equally essential in simultaneous localization and mapping (SLAM) systems for real-time robotics and augmented reality, refining pose estimates and maps from video streams to improve trajectory accuracy and loop closure detection. These applications have driven innovations in scalability, such as GPU acceleration and distributed optimization, allowing reconstructions involving millions of images while maintaining high precision. As of 2025, recent advances include deep learning-based methods and event-based photometric bundle adjustment for dynamic scenes and ultra-high-resolution imagery.

Overview

Definition

Bundle adjustment is a technique in computer vision and photogrammetry that simultaneously refines estimates of three-dimensional (3D) structure—typically represented by the positions of feature points—and camera parameters, including pose and intrinsic calibration, using observations from multiple images. This joint optimization process adjusts all parameters to produce a globally consistent reconstruction, leveraging redundant measurements across views to improve accuracy over independent estimations. The core purpose of bundle adjustment is to minimize the discrepancies between observed two-dimensional (2D) image features, such as corner or edge detections, and the corresponding projected locations of the 3D points based on the estimated camera models. By formulating this as an optimization problem—often involving reprojection error minimization—it yields estimates that are optimal in a least-squares sense, enhancing the precision of the overall 3D model. In typical reconstruction pipelines, bundle adjustment acts as the final refinement stage following initial feature matching and coarse pose estimation, such as in structure-from-motion (SfM) systems or simultaneous localization and mapping (SLAM) frameworks. It assumes that measurement errors follow a Gaussian distribution, positioning it as a maximum likelihood estimator under this noise model; robust variants extend to non-Gaussian cases by incorporating outlier-resistant cost functions.

Historical Context

Bundle adjustment originated in the field of photogrammetry in the 1950s, building on photogrammetric techniques developed since the invention of photography in 1839, which enabled the measurement of three-dimensional structures from two-dimensional images for mapping and surveying purposes. However, the technique's computational demands made it impractical until the advent of digital computers in the 1950s, allowing for the numerical solution of the complex least-squares problems inherent to multi-image adjustments. A pivotal milestone occurred in 1958 when Duane C. Brown introduced the foundational method for bundle adjustment in aerial triangulation, enabling the simultaneous estimation of three-dimensional ground points and camera parameters across multiple images, thus replacing sequential strip-based approaches with more efficient block adjustments. Brown's work, carried out in the context of U.S. defense mapping research, laid the groundwork for modern implementations by formulating the problem as a nonlinear least-squares optimization over ray bundles from image points to object points. During the 1970s and 1980s, bundle adjustment became widely adopted in analytical photogrammetry, incorporating additional-parameter models to account for lens distortions and other systematic errors, such as radial and tangential distortions, which improved accuracy in camera calibration and self-calibration techniques. Researchers like Armin Grün and Wolfgang Förstner advanced statistical reliability analysis and least-squares matching, facilitating robust handling of large photogrammetric blocks and the transition from manual to automated processing. In the 1990s and early 2000s, bundle adjustment shifted toward computer vision applications, particularly structure-from-motion (SfM) pipelines, where it refined sparse 3D reconstructions from uncalibrated images. A seminal contribution was the 2000 survey by Bill Triggs, Philip McLauchlan, Richard Hartley, and Andrew Fitzgibbon, titled "Bundle Adjustment—A Modern Synthesis," which synthesized photogrammetric principles with sparse nonlinear optimization techniques tailored for implementers. Following 2000, advances in computing power enabled larger-scale optimizations, culminating in real-time bundle adjustment for SLAM by the mid-2010s, as demonstrated in incremental methods for vision-aided navigation that supported autonomous operation in dynamic environments.

Applications

Photogrammetry

In photogrammetry, bundle adjustment serves as a primary technique for refining camera positions and orientations along with the coordinates of ground control points during aerial triangulation, enabling the creation of accurate large-scale topographic maps from overlapping aerial images. This process simultaneously optimizes the geometric relationship between image measurements and ground points across an entire block of photographs, minimizing discrepancies in the bundles of light rays projecting from cameras to observed features. Originally developed for calibrated cameras in aerial surveying, it has evolved to incorporate self-calibration, allowing estimation of interior orientation parameters without prior calibration, which is essential for handling variations in camera systems used in mapping projects. Within photogrammetric workflows, bundle adjustment typically follows initial steps of feature extraction and matching, such as identifying tie points across images, and relative orientation to establish preliminary triangulations. It then integrates these inputs to perform a block adjustment, using observation equations based on the collinearity condition to refine the entire network, often requiring only a minimal set of ground control points—typically three—for absolute orientation of large image blocks. This integration exploits the redundancy from multiple overlapping photos, distributing errors across the dataset to achieve sub-pixel accuracy in tie point measurements, which is critical for subsequent processing stages. The application of bundle adjustment significantly enhances the precision of derived products in photogrammetry, including orthophoto mosaics, digital elevation models (DEMs), and topographic surveys, by reducing systematic errors in camera geometry and feature positions. For instance, in orthophoto production it ensures geometric fidelity by correcting for distortions, leading to seamless mosaics with minimal misalignment; similarly, in DEM generation it improves elevation accuracy for terrain modeling, making it indispensable for scientific and engineering analyses. These benefits are evident in large-scale mapping efforts, such as the U.S. Geological Survey's processing of coastal imagery, where bundle adjustment in tools like Agisoft Metashape refines camera orientations for high-fidelity reconstructions. Historically, bundle adjustment transitioned from manual stereoplotter-based methods in the mid-20th century to automated computational systems during the 1970s and 1980s, with Duane C. Brown's pioneering work in 1957–1959 introducing the bundle method for U.S. aerial mapping, followed by its first large European application in 1972 over the Oberschwaben region in Germany by Bauer and Müller, yielding notable improvements in block accuracy. This evolution addressed key challenges like lens distortions and terrain-induced variations in the ray bundles, which can introduce biases in initial triangulations; for example, self-calibration techniques mitigate radial distortions, while free-network adjustments handle undulating terrains by avoiding over-constrained ground control. In mapping initiatives, such as those run by national topographic agencies, bundle adjustment has been routinely applied to integrate aerial data for cadastral and environmental surveys, ensuring compliance with standards for positional accuracy under 1 meter.

Computer Vision and Robotics

In computer vision, bundle adjustment serves as a core component of structure-from-motion (SfM) pipelines, enabling the construction of 3D models from unordered collections of photographs by jointly optimizing camera poses and 3D point positions to minimize reprojection errors. This process is particularly valuable for applications requiring high-fidelity reconstructions from diverse viewpoints, such as scanning cultural heritage sites where historical artifacts are digitized using consumer-grade cameras to preserve intricate details without physical contact. By refining initial estimates from feature matching, bundle adjustment achieves sub-pixel accuracy in point clouds, facilitating scalable scene modeling for virtual tourism and archival purposes. In robotics, bundle adjustment underpins simultaneous localization and mapping (SLAM) systems, allowing autonomous agents to navigate unknown environments by refining pose graphs in real time through windowed optimization, such as in visual-inertial odometry where camera and inertial measurements are combined to estimate trajectories robustly. This integration corrects accumulated errors in sequential pose estimates, enabling reliable mapping during motion. For instance, it supports camera tracking in augmented reality (AR) systems, where precise 3D alignment overlays virtual elements onto live video feeds for immersive experiences. Similarly, in autonomous vehicles, bundle adjustment refines maps from onboard cameras and lidars, enhancing obstacle detection and localization over extended drives. In medical imaging, it aids 3D reconstruction from endoscopic videos, generating accurate surface models of internal organs to guide minimally invasive surgeries despite challenging lighting and deformations. Since the 2010s, bundle adjustment has seen widespread integration into modern visual odometry and SLAM frameworks like ORB-SLAM, which employs it for local mapping and loop closing to handle challenges such as motion blur from fast camera movements and significant viewpoint changes in dynamic scenes. These advancements have enabled real-time performance on resource-constrained devices, with ORB-SLAM demonstrating loop closure detection that further stabilizes maps across relocalizations. A key benefit is the reduction of drift in sequential estimation processes, where unoptimized pose chains accumulate errors over time; bundle adjustment mitigates this by globally minimizing inconsistencies, yielding improvements in long-term trajectory accuracy in outdoor benchmarks. Overall, these developments have elevated bundle adjustment from its photogrammetric roots into a foundational tool for adaptive, online processing in mobile vision systems.

Mathematical Foundations

Reprojection Error

The reprojection error serves as the fundamental metric in bundle adjustment, defined as the Euclidean distance between an observed two-dimensional image point and the corresponding projection of an estimated three-dimensional point onto the image plane. This error quantifies the misalignment between the actual feature location captured in an image and the location predicted by the current estimates of camera parameters and 3D structure. Geometrically, the reprojection error arises from the projection of 3D points through a camera model, such as the pinhole model, where each point generates a "bundle" of rays from multiple camera viewpoints converging ideally at the point's location. The error measures the deviation of these projected rays from the observed image points, reflecting inaccuracies in the estimated 3D coordinates or camera poses that cause the rays to fail to intersect precisely. Camera intrinsics play a key role in computing the reprojection error, incorporating distortions such as radial (barrel or pincushion effects due to lens curvature) and tangential (decentering effects from lens misalignment) components so that 3D points are mapped to 2D image coordinates accurately. These distortions are modeled parametrically within the projection function, ensuring the error accounts for real-world imperfections beyond the ideal pinhole model. The per-observation reprojection error term for a 3D point b_i observed in view j is given by d(Q(a_j, b_i), x_{ij}), where Q denotes the projection function (including intrinsics and distortions), a_j represents the camera parameters for view j, b_i is the 3D point coordinates, x_{ij} is the observed 2D image point, and d is the Euclidean distance in the image plane. Visually, the reprojection error can be illustrated by depicting multiple cameras with rays emanating from their optical centers toward a common 3D point, forming a bundle; residual vectors then extend from the observed image points to the projected points on each image plane, highlighting the geometric misalignment to be minimized.
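The computation can be made concrete with a short sketch. The following Python fragment is a minimal illustration assuming a pinhole camera with a two-coefficient radial distortion model; the function names and parameterization are illustrative rather than taken from any particular library. It evaluates the per-observation term d(Q(a_j, b_i), x_{ij}).

```python
import numpy as np

def project(X, R, t, K, k1=0.0, k2=0.0):
    """Project 3D point X through a pinhole camera with rotation R,
    translation t, intrinsic matrix K, and a simple two-coefficient
    radial distortion model (illustrative parameterization)."""
    Xc = R @ X + t                        # world -> camera frame
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]   # perspective division
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    u = K[0, 0] * d * x + K[0, 2]         # apply focal length and principal point
    v = K[1, 1] * d * y + K[1, 2]
    return np.array([u, v])

def reprojection_error(X, R, t, K, x_obs, k1=0.0, k2=0.0):
    """Euclidean distance d(Q(a_j, b_i), x_ij) between the predicted
    and observed image points."""
    return np.linalg.norm(project(X, R, t, K, k1, k2) - x_obs)
```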

Formulation as Optimization Problem

Bundle adjustment is formulated as a nonlinear least-squares optimization problem that jointly estimates the parameters of a set of 3D points and camera poses to minimize the discrepancies between observed image features and their predicted projections. Given n 3D points \{\mathbf{X}_i\}_{i=1}^n in world coordinates and m cameras with parameters \{\mathbf{P}_j\}_{j=1}^m, the goal is to refine these variables such that the reprojection errors across all visible observations are minimized. If the cameras are uncalibrated, the intrinsic parameters (such as focal length and principal point) are included in \mathbf{P}_j as additional unknowns. The objective function is the sum of squared distances between the observed image points \mathbf{x}_{ij} and the projected points \pi(\mathbf{P}_j, \mathbf{X}_i), weighted by a binary visibility indicator v_{ij} that is 1 if point i is observed in image j and 0 otherwise:

\min_{\{\mathbf{X}_i\}, \{\mathbf{P}_j\}} \sum_{i=1}^n \sum_{j=1}^m v_{ij} \left\| \mathbf{x}_{ij} - \pi(\mathbf{P}_j, \mathbf{X}_i) \right\|^2

Here, \pi denotes the nonlinear projection function, typically based on the pinhole camera model, which maps a 3D point to its image coordinates via a camera matrix \mathbf{P}_j = \mathbf{K}_j [\mathbf{R}_j | \mathbf{t}_j], where \mathbf{K}_j is the intrinsic matrix and [\mathbf{R}_j | \mathbf{t}_j] represents the extrinsic rotation and translation. Each point \mathbf{X}_i has three coordinates, while each camera pose \mathbf{P}_j involves six degrees of freedom for extrinsics (three for rotation and three for translation), plus additional parameters for intrinsics if estimated. The nonlinearity of the problem stems primarily from the projection function \pi, which incorporates nonlinear parameterizations for rotations (e.g., via rotation matrices or quaternions) and perspective division to handle depth in perspective projection. This results in a highly nonlinear objective that cannot be solved in closed form and requires iterative numerical optimization. The visibility term v_{ij} ensures that only relevant observations contribute to the sum, reflecting the sparse structure of real-world imaging where not all points are visible in every camera view. To initiate the optimization, initial estimates for the variables are obtained from simpler linear techniques, such as the direct linear transform (DLT) for camera pose estimation or linear triangulation for 3D point reconstruction from matched features across views. These provide a starting point that is ideally close to the global minimum, as the optimization landscape can have multiple local minima due to the nonlinearity.
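To make the formulation concrete, the sketch below stacks the residuals \mathbf{x}_{ij} - \pi(\mathbf{P}_j, \mathbf{X}_i) and hands them to SciPy's general-purpose least_squares solver. It is a minimal illustration under simplifying assumptions: intrinsics K are known and shared, each camera is parameterized by an axis-angle rotation plus a translation, and the visibility indicator is encoded implicitly by listing only the observed (camera, point) pairs via parallel cam_idx and pt_idx arrays; all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def ba_residuals(params, n_cams, n_pts, cam_idx, pt_idx, observations, K):
    """Stacked reprojection residuals x_ij - pi(P_j, X_i) over all observations.
    Cameras use 6 parameters each (axis-angle rotation, translation);
    points use 3 each. Intrinsics K are assumed known and shared."""
    cams = params[:6 * n_cams].reshape(n_cams, 6)
    pts = params[6 * n_cams:].reshape(n_pts, 3)
    res = []
    for j, i, uv in zip(cam_idx, pt_idx, observations):
        R = Rotation.from_rotvec(cams[j, :3]).as_matrix()
        Xc = R @ pts[i] + cams[j, 3:]       # point in camera frame
        p = K @ Xc
        res.append(uv - p[:2] / p[2])       # observed minus projected
    return np.concatenate(res)

# Given initial estimates x0 (e.g., DLT poses and linear triangulation),
# the joint refinement is a single nonlinear least-squares solve:
# result = least_squares(ba_residuals, x0,
#                        args=(n_cams, n_pts, cam_idx, pt_idx, obs, K))
```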

Solution Methods

Nonlinear Least Squares Optimization

Bundle adjustment is formulated as a special case of nonlinear least-squares (NLS) optimization, where the objective is to minimize the sum of squared residuals between observed and predicted image measurements, typically the reprojection errors of 3D points onto the images. In this framework, the cost function is expressed as f(\mathbf{x}) = \frac{1}{2} \sum_i \| \mathbf{r}_i(\mathbf{x}) \|^2, with residuals \mathbf{r}_i(\mathbf{x}) capturing the discrepancies in projected point coordinates across multiple views. The problem is solved iteratively using the Gauss-Newton method, which linearizes the nonlinear residuals around the current parameter estimate via a first-order Taylor expansion. This approximation leads to a local quadratic model of the cost function, solved by forming the normal equations \mathbf{J}^T \mathbf{J} \, \delta = -\mathbf{J}^T \mathbf{r}, where \mathbf{J} is the Jacobian matrix of the residuals with respect to the parameters \mathbf{x}, \mathbf{r} is the vector of residuals, and \delta provides the parameter update \mathbf{x} \leftarrow \mathbf{x} + \delta. The Jacobian \mathbf{J} is computed analytically by deriving the partial derivatives of the projection functions with respect to 3D point coordinates and camera poses, enabling efficient evaluation. Because each point is observed by only a subset of cameras, the Jacobian and the resulting approximate Hessian \mathbf{J}^T \mathbf{J} exhibit a sparsity pattern dictated by the visibility graph, which can be exploited for computational efficiency. For stability, especially when far from the minimum or with poor initial estimates, damping is introduced to the normal equations, modifying the approximate Hessian to ensure descent directions. The Gauss-Newton method typically converges in 10-20 iterations for well-conditioned bundle adjustment problems, with near-quadratic convergence close to the solution when the residuals are small. In comparison, first-order methods like gradient descent, which rely solely on the gradient \mathbf{J}^T \mathbf{r}, are less efficient for bundle adjustment because they converge more slowly near the minimum and require many more iterations overall.
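A minimal dense sketch of a single Gauss-Newton update, assuming callables that return the stacked residual vector and its Jacobian, is shown below; production bundle adjusters exploit the block-sparse structure of \mathbf{J} rather than forming dense matrices as done here.

```python
import numpy as np

def gauss_newton_step(residual_fn, jacobian_fn, x):
    """One undamped Gauss-Newton update: linearize r(x) and solve the
    normal equations J^T J delta = -J^T r for the increment delta."""
    r = residual_fn(x)
    J = jacobian_fn(x)
    H = J.T @ J                  # Gauss-Newton approximation of the Hessian
    g = J.T @ r                  # gradient of 0.5 * ||r(x)||^2
    delta = np.linalg.solve(H, -g)
    return x + delta
```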

Levenberg-Marquardt Algorithm

The Levenberg-Marquardt (LM) algorithm serves as a robust iterative solver for the nonlinear least-squares optimization problem in bundle adjustment, blending the rapid local convergence of the Gauss-Newton method with the global reliability of gradient descent. It minimizes the sum of squared reprojection errors by successively linearizing the residuals around the current parameter estimates and solving a regularized system to compute updates for camera poses and 3D points. This hybrid approach ensures steady progress even when the Gauss-Newton approximation is ill-conditioned, a common challenge in bundle adjustment due to correlated parameters. At each iteration, the algorithm forms a quadratic approximation of the objective function and derives the parameter increment \delta from the damped normal equations:

(\mathbf{J}^T \mathbf{J} + \lambda \mathbf{I}) \delta = -\mathbf{J}^T \mathbf{r}

Here, \mathbf{J} denotes the Jacobian matrix of partial derivatives of the residual vector \mathbf{r} with respect to the parameters (as detailed in the nonlinear least squares formulation), \lambda \geq 0 is the scalar damping factor, and \mathbf{I} is the identity matrix. The damping term \lambda \mathbf{I} stabilizes the solution by penalizing large steps and approximating gradient descent when \lambda is large, while reducing to the undamped Gauss-Newton step as \lambda approaches zero. The resulting linear system is typically solved using direct or iterative methods tailored to the problem's sparsity. The damping parameter \lambda is adaptively tuned to balance exploration and exploitation: it is initialized at a high value to promote conservative, descent-guaranteed steps akin to steepest descent, which is particularly useful in the initial phases where the linearization may be inaccurate. Subsequent values of \lambda decrease if the proposed update yields a sufficient reduction in the residual norm (e.g., compared to a quadratic model prediction), accelerating convergence near the optimum; conversely, \lambda increases (often by a factor of 10) if the step fails to reduce the error, rejecting the update and retrying with stronger regularization. This adjustment rule, often based on a gain ratio of actual to predicted error decrease, ensures monotonic progress and prevents divergence. In the context of bundle adjustment, the LM algorithm leverages the block-sparse structure of \mathbf{J}^T \mathbf{J}—arising from independent observations per point and camera—to facilitate efficient computation without dense matrix storage or inversion, enabling scalability to thousands of images. Modern implementations, such as the sba library or Ceres Solver, incorporate these structural optimizations alongside LM's core damping mechanism for practical deployment in photogrammetry and computer vision pipelines. The primary advantages of LM over undamped Gauss-Newton in bundle adjustment include improved robustness to rank deficiency in the normal equations and to poor or noisy initial estimates, as the damping mitigates sensitivity to inaccurate linear approximations and enforces reliable convergence in underconstrained scenarios. This has made LM the de facto standard for batch bundle adjustment since its integration into photogrammetric software in the late 20th century. The high-level steps of the LM algorithm applied to bundle adjustment can be outlined as follows (a minimal code sketch is given after the list):
  1. Initialize structure and camera parameters, set initial \lambda (e.g., based on the maximum diagonal of \mathbf{J}^T \mathbf{J}), and define convergence thresholds for parameter changes or residual norms.
  2. Compute the current residuals \mathbf{r} and Jacobian \mathbf{J} by evaluating reprojection errors and their derivatives for all observations.
  3. Assemble the approximate Hessian \mathbf{H} = \mathbf{J}^T \mathbf{J} and right-hand side \mathbf{g} = \mathbf{J}^T \mathbf{r}, then solve the damped system (\mathbf{H} + \lambda \mathbf{I}) \delta = -\mathbf{g} for the step \delta, exploiting sparsity where possible.
  4. Tentatively apply the step and evaluate the resulting residual norm; compute the gain ratio \rho as the ratio of the actual error decrease to the decrease predicted by the quadratic model.
  5. If \rho exceeds a small positive threshold (e.g., 0.25), accept the step, update the parameters, and reduce \lambda (e.g., divide by 10 or scale by a factor such as 1/(1 + 2\rho)); otherwise, reject the step and increase \lambda (e.g., multiply by 10).
  6. Optionally, perform a line search along \delta to further ensure error reduction.
  7. Repeat from step 2 until convergence criteria are met, such as minimal change in parameters or residuals below a tolerance.
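A compact dense implementation of this loop, written as an illustrative sketch (callables for the residuals and Jacobian are assumed, and the damping uses \lambda \mathbf{I} as in the equation above rather than a scaled diagonal), might look as follows:

```python
import numpy as np

def levenberg_marquardt(residual_fn, jacobian_fn, x0, lam=1e-3,
                        max_iters=100, tol=1e-8):
    """Minimal dense Levenberg-Marquardt loop following the steps above.
    residual_fn(x) returns the stacked residual vector, jacobian_fn(x) its
    Jacobian; production BA solvers exploit block sparsity instead."""
    x = x0.copy()
    r = residual_fn(x)
    cost = 0.5 * r @ r
    for _ in range(max_iters):
        J = jacobian_fn(x)
        H, g = J.T @ J, J.T @ r
        delta = np.linalg.solve(H + lam * np.eye(len(x)), -g)
        if np.linalg.norm(delta) < tol * (np.linalg.norm(x) + tol):
            break                                  # negligible update: converged
        r_new = residual_fn(x + delta)
        cost_new = 0.5 * r_new @ r_new
        predicted = -(g @ delta + 0.5 * delta @ (H @ delta))  # model decrease
        rho = (cost - cost_new) / max(predicted, 1e-12)       # gain ratio
        if rho > 0.25:          # sufficient decrease: accept step, relax damping
            x, r, cost = x + delta, r_new, cost_new
            lam /= 10.0
        else:                   # poor step: reject it and increase damping
            lam *= 10.0
    return x
```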

Advanced Topics

Large-Scale Bundle Adjustment

Large-scale bundle adjustment addresses the computational demands of optimizing structure-from-motion problems involving thousands to millions of images and 3D points, where the parameter space can reach millions of dimensions. Traditional dense methods become infeasible at this scale, as forming and factorizing dense normal equations leads to O(n^3) time complexity in the number of parameters. Instead, large-scale methods exploit the inherent sparsity arising from the visibility structure, where each image observes only a subset of points, resulting in a block-sparse normal equations matrix derived from the visibility graph. Iterative solvers like the conjugate gradient (CG) method are employed to solve these sparse systems without explicit matrix storage or factorization, enabling efficient handling of massive problems. A key technique for scalability is the use of the Schur complement to reduce the system size by marginalizing out one set of variables, typically the 3D points, to focus on the camera parameters. This yields a smaller, sparser system for the cameras, formulated as the reduced Hessian \mathbf{H}_{cc}^{\mathrm{red}} = \mathbf{H}_{cc} - \mathbf{B} \mathbf{D}^{-1} \mathbf{B}^T, where \mathbf{H}_{cc} and \mathbf{H}_{pp} are the camera and point blocks of the normal matrix, \mathbf{B} = \mathbf{H}_{cp} is the off-diagonal coupling block, and \mathbf{D} = \mathbf{H}_{pp}; because the points are mutually independent, \mathbf{D} is block-diagonal (one 3×3 block per point) and can be inverted cheaply block by block. The resulting system is solved using CG, which benefits from the reduced dimensionality and sparsity. Preconditioning further accelerates CG convergence by mitigating ill-conditioning; common approaches include block-diagonal (block Jacobi) preconditioners that approximate the Hessian with its diagonal blocks for points and cameras, or incomplete factorizations like SSOR (symmetric successive over-relaxation) that capture local structure without a full factorization. These techniques can reduce iteration counts significantly, often achieving convergence in tens of iterations for problems with hundreds of thousands of parameters. Practical implementations demonstrate the efficacy of these methods on Internet-scale datasets. For instance, the Multicore Bundle Adjustment system, integrated into tools like VisualSFM, processes collections with up to 1 million images from community photo archives, achieving speedups of 5-10x on multicore CPUs through parallelized, preconditioned Schur-complement solves. Benchmarking often uses the BAL (Bundle Adjustment in the Large) dataset, which provides structured problems (e.g., Ladybug sequences) and unstructured Internet photo-collection problems (e.g., with approximately 1,000 cameras and 80,000 points) to evaluate scalability, with results showing robust performance on datasets with millions of observations. These approaches have enabled large-scale 3D reconstruction from vast image sets, balancing accuracy and efficiency.
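The block elimination behind the Schur complement can be sketched directly. In the fragment below, b_c and b_p denote the camera and point blocks of the normal-equations right-hand side -\mathbf{J}^T \mathbf{r}, and dense NumPy arrays stand in for the block-sparse structures used by real solvers; the names are illustrative only.

```python
import numpy as np

def schur_reduced_camera_system(H_cc, H_pp, H_cp, b_c, b_p):
    """Form the reduced camera system by marginalizing the point block.
    H_pp is block-diagonal (one 3x3 block per point), so in practice its
    inverse is computed block by block; a dense inverse stands in here."""
    H_pp_inv = np.linalg.inv(H_pp)
    S = H_cc - H_cp @ H_pp_inv @ H_cp.T     # Schur complement (reduced Hessian)
    rhs = b_c - H_cp @ H_pp_inv @ b_p       # reduced right-hand side
    return S, rhs

def back_substitute_points(H_pp, H_cp, b_p, delta_c):
    """Recover the point update after solving S @ delta_c = rhs
    (typically with preconditioned conjugate gradients)."""
    return np.linalg.solve(H_pp, b_p - H_cp.T @ delta_c)
```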

Extensions and Variants

Robust bundle adjustment addresses the sensitivity of traditional least-squares formulations to outliers and non-Gaussian noise by incorporating robust cost functions and estimators. Instead of minimizing squared reprojection errors, robust variants employ loss functions such as the Huber loss or Tukey biweight, which apply quadratic penalties to inliers while linearly or constantly penalizing outliers beyond a threshold, thereby reducing the influence of gross errors in feature matches. These approaches often leverage M-estimators, which generalize maximum likelihood estimation to heavy-tailed noise distributions, enabling more reliable reconstruction in challenging environments like urban scenes with dynamic occlusions. For instance, the Student's t-distribution has been used to model reprojection errors, providing a probabilistic framework that downweights outliers adaptively during optimization. Incremental bundle adjustment extends the classical batch method to support online processing, particularly in simultaneous localization and mapping (SLAM) systems where new observations arrive continuously. This variant performs localized updates to the optimization problem, avoiding full recomputation through techniques such as marginalization of fixed variables and selective relinearization to maintain efficiency. The iSAM family of algorithms exemplifies this approach, with iSAM2 using a Bayes tree representation to incrementally update the factorized information matrix, enabling real-time pose and landmark refinement with reduced computational overhead compared to global solves. Similarly, ICE-BA incorporates consistency checks and block-structured solvers tailored to SLAM's sparsity, achieving faster convergence for visual-inertial odometry. Self-calibration in bundle adjustment allows joint estimation of camera intrinsics alongside extrinsic parameters and structure, eliminating the need for prior calibration in uncalibrated setups. This is particularly useful for modeling radial lens distortion, where polynomial or division models parameterize lens imperfections, enabling recovery of focal length, principal point, and distortion coefficients from image correspondences alone. Methods integrating GNSS constraints further refine these estimates by incorporating absolute pose priors, improving accuracy in aerial and UAV applications with significant distortion. Post-2020 developments have integrated deep learning into bundle adjustment to enhance initialization, residual prediction, or end-to-end optimization, addressing limitations of traditional geometric methods. For example, DeepSFM employs neural networks to iteratively refine depth maps and camera poses via learned bundle adjustment layers, outperforming classical pipelines on datasets with sparse views. Similarly, DBARF uses bundle-adjusting neural radiance fields to jointly optimize scene geometry and poses, incorporating differentiable rendering for robust generalization across unseen environments. Earlier works like BA-Net laid the groundwork by applying dense feature-metric bundle adjustment on convolutional feature maps, but recent variants focus on hybrid models that combine learning with probabilistic priors for better uncertainty handling. Recent advances as of 2025 include event-based photometric bundle adjustment for high-dynamic-range sensors and methods for dynamic scene reconstruction using learning-based pose refinements in non-rigid environments. Other variants include graph-based formulations, which represent bundle adjustment as factor-graph optimization over camera nodes and landmark factors, facilitating scalable inference in large-scale mapping via efficient graph traversals and sparse factorization.
Extensions to multi-view stereo incorporate dense refinement by minimizing photometric or geometric residuals over pixel grids or meshes, refining both sparse structure and dense depth maps in a unified optimization. These adaptations address real-time constraints through incremental updates and uncertainty modeling, as in Bayesian bundle adjustment variants that propagate pose covariances using information matrices to quantify reliability in dynamic settings.
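As an illustration of how a robust kernel enters the optimization, the fragment below computes iteratively reweighted least-squares (IRLS) weights for the Huber loss; the threshold value and the IRLS formulation are one common choice among several, and the names are illustrative.

```python
import numpy as np

def huber_weights(r, delta=1.0):
    """IRLS weights for the Huber loss: residuals with |r| <= delta keep full
    (quadratic) weight, while larger residuals are downweighted so that
    likely outliers contribute only linearly to the cost."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

# In a robustified iteration the damped normal equations become
#   (J^T W J + lambda * I) delta = -J^T W r,  with W = diag(huber_weights(r)),
# so gross outliers have little influence on the computed update.
```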
