
Photogrammetry

Photogrammetry is the science and technology of obtaining reliable measurements and three-dimensional information about physical objects and the environment through the recording, measuring, and interpretation of photographic images. The term derives from the Greek words φῶς (phōs), 'light'; γράφω (gráphō), 'drawing'; and μέτρον (métron), 'measurement'. The process typically involves capturing overlapping images from multiple viewpoints, applying principles of geometry and triangulation to determine spatial relationships, and generating outputs such as orthomosaic maps, digital elevation models, and 3D reconstructions. The origins of photogrammetry trace back to early work on perspective geometry and optics, but it emerged as a formal discipline in the mid-19th century with the advent of photography. The term "photogrammetry" was coined by the Prussian architect Albrecht Meydenbauer in 1867, and the first comprehensive textbook on the subject was published in 1889. Key early advancements included the rigorous geometric analysis of photogrammetry by Sebastian Finsterwalder in the late 19th century, followed by the development of analytical methods in the mid-20th century that leveraged computers for precise calculations. Photogrammetry finds extensive applications across diverse fields, including topographic mapping, engineering surveys, cultural heritage documentation, and aerospace analysis. In remote sensing, it processes imagery from satellites, aircraft, and drones to create accurate geospatial data for mapping and environmental monitoring. In engineering and architecture, it supports condition assessments of structures and precise measurements for design and maintenance. Modern advancements have integrated digital sensors and software, enabling non-contact measurements in challenging environments such as hazardous or inaccessible sites.

Overview and History

Definition and Scope

Photogrammetry is the science and technology of extracting reliable three-dimensional geometric and thematic information, often over time, about physical objects and environments through the recording, measuring, and interpreting of images and patterns of electromagnetic radiant energy, with a primary emphasis on reconstructing three-dimensional (3D) models from two-dimensional (2D) images. This process enables the extraction of precise spatial information, such as object shapes, positions, and dimensions, by leveraging the geometric properties inherent in overlapping photographs taken from different viewpoints. The key objectives of photogrammetry include deriving accurate measurements of distances, areas, volumes, and coordinates without physical contact with the subject, thereby minimizing disturbance and enhancing safety in applications like hazardous terrain mapping. It differs from remote sensing, which encompasses a broader range of non-contact data acquisition using various sensors beyond photographic imagery, such as radar or multispectral devices, by focusing specifically on photographic sources for metrological precision. In contrast to computer vision, photogrammetry shares mathematical foundations but emphasizes calibrated, traceable measurements compliant with metrological standards. The scope of photogrammetry spans a wide range of scales, from macroscopic applications involving industrial components and artifacts to planetary-level analyses using satellite imagery for global topographic modeling. It encompasses both passive approaches, which rely on ambient or natural light to capture images, and active methods, such as structured light projection, which illuminate scenes to enhance feature detection in controlled environments. As an interdisciplinary field, photogrammetry integrates principles from geodesy for geospatial accuracy, optics for image formation, and computer science for advanced image processing algorithms, fostering innovations across domains like surveying, engineering, and cultural heritage.

Historical Development

Photogrammetry originated in the mid-19th century with the advent of photography, when Aimé Laussedat, a French military engineer, conducted the first topographic surveys using photographs in the late 1850s, establishing foundational techniques for mapping terrains through image-based measurements. Independently, in the 1860s, Albrecht Meydenbauer applied photographic methods to precise architectural measurements, coining the term "photogrammetry" in 1867 to describe the art of measuring from photographs, particularly for documenting historical buildings. The early 20th century marked significant advancements in stereoscopic techniques, with Carl Pulfrich developing the stereocomparator in 1901 at Carl Zeiss in Jena, enabling accurate comparisons of stereo images for topographic mapping and laying the groundwork for stereophotogrammetry. Eduard Dolezal, an Austrian professor, pioneered stereophotogrammetric methods in the 1900s and founded the International Society for Photogrammetry in 1910, promoting global standardization and instrument development. Stereoplotters emerged around this period, allowing operators to reconstruct 3D models from stereo pairs, with widespread adoption following World War I as aerial surveying proliferated for military and civilian mapping needs. The transition to digital photogrammetry began in the 1970s with the introduction of analytical plotters, which used computers to perform precise geometric computations on scanned images, overcoming limitations of purely analog instruments. By the 1980s and 1990s, charge-coupled device (CCD) sensors and digital image processing enabled fully automated workflows, reducing reliance on manual stereoplotting. Heinrich Wild, founder of Wild Heerbrugg, contributed key innovations like phototheodolites (e.g., models P30 and FT9) in the mid-20th century, integrating cameras with theodolites for accurate terrestrial measurements. In the 2000s, structure-from-motion (SfM) algorithms revolutionized the field by automating 3D reconstruction from unordered image sets, making photogrammetry accessible beyond specialized equipment. Post-2010, unmanned aerial vehicles (drones) integrated with SfM facilitated high-resolution aerial data collection, expanding applications in environmental and urban monitoring. The 2020s have seen AI enhancements, including deep learning for automated feature detection and model optimization, improving efficiency in processing large datasets from diverse sources. In 2025, advancements include AI-powered monitoring tools operating at unprecedented scales and accessible software like Artec Studio Lite for professional 3D capture.

Fundamental Principles

Geometric and Mathematical Foundations

Photogrammetry relies on the central perspective projection model, which assumes that light rays from object points pass through the camera's optical center to form images on the sensor plane. This model relates three-dimensional object coordinates (X, Y, Z) to two-dimensional image coordinates (x, y) through the collinearity equations, derived from the similarity of triangles formed by the object point, camera center, and image point. The equations are expressed as: x - x_p = -f \frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}, y - y_p = -f \frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}, where f is the focal length, (x_p, y_p) is the principal point, (X_0, Y_0, Z_0) is the camera position, and R = [r_{ij}] is the rotation matrix defining the camera's orientation. These equations, foundational since the mid-20th century, incorporate interior orientation parameters (focal length and principal point) and exterior orientation parameters (position and rotation). In stereo photogrammetry, parallax arises from the displacement of corresponding image points due to the separation between cameras, enabling depth computation. Horizontal parallax p_x in a stereo pair is given by p_x = (x_l - x_p) - (x_r - x_p), where subscripts l and r denote the left and right images, and depth Z relates inversely to parallax via Z = \frac{b f}{p_x}, with b as the baseline. Epipolar geometry constrains this matching: for a point in one image, its correspondent lies on the epipolar line in the other, defined by the fundamental matrix F such that \mathbf{m}'^T F \mathbf{m} = 0, where \mathbf{m} and \mathbf{m}' are homogeneous image coordinates. This geometry reduces search dimensionality from 2D to 1D, crucial for accurate correspondence. Resection determines camera exterior orientation from known object points and their image coordinates, often using the direct linear transformation (DLT) method, which solves a homogeneous system A \mathbf{h} = 0 for the 11 parameters of the projection matrix (up to scale), where \mathbf{h} stacks the matrix elements and the rows of A derive from the point correspondences. For n \geq 6 points, singular value decomposition yields the solution. Intersection triangulates 3D points by intersecting rays from multiple images, minimizing the distance between back-projected rays via least squares. These techniques form the basis for camera orientation and point determination. Error models in photogrammetry employ least-squares adjustment to minimize residuals between observed and computed image coordinates, formulated as \mathbf{v} = \mathbf{A} \mathbf{x} - \mathbf{l}, where \mathbf{v} are residuals, \mathbf{A} the design matrix, \mathbf{x} corrections to parameters, and \mathbf{l} observations; the solution is \mathbf{x} = ( \mathbf{A}^T \mathbf{P} \mathbf{A} )^{-1} \mathbf{A}^T \mathbf{P} \mathbf{l} with weight matrix \mathbf{P}. Ground control points (GCPs), surveyed points with known coordinates, anchor the model for absolute orientation, transforming relative coordinates to a global datum using similarity transformations involving at least three non-collinear GCPs to estimate scale, rotation, and translation. This ensures georeferencing accuracy. Scale and distortion considerations address deviations from ideal central projection. In parallel (orthographic-like) projection, used for very distant objects, rays are treated as parallel (x = -f \frac{X}{Z_0} + x_p), avoiding perspective foreshortening but proving less accurate for close-range work; central projection dominates photogrammetry for its fidelity to real camera optics. Lens distortions include radial distortion (\Delta r = k_1 r^3 + k_2 r^5 + k_3 r^7), causing barrel or pincushion effects, and tangential distortion (\Delta x_t = 2 p_1 x y + p_2 (r^2 + 2 x^2), \Delta y_t = p_1 (r^2 + 2 y^2) + 2 p_2 x y), arising from lens misalignment; both are modeled in the Brown-Conrady framework and corrected via additional parameters in the collinearity equations.
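
To make these relations concrete, the following Python sketch (NumPy only, with purely hypothetical camera values) evaluates the collinearity equations for a simple nadir-looking stereo pair and recovers depth from horizontal parallax via Z = b f / p_x; it is an illustrative sketch of the geometry, not production photogrammetric code.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Exterior-orientation rotation R from omega-phi-kappa angles (radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity(X, X0, R, f, xp=0.0, yp=0.0):
    """Image coordinates (x, y) of object point X seen from a camera at X0 with rotation R."""
    u = R @ (X - X0)               # object point expressed in the camera frame
    x = xp - f * u[0] / u[2]       # collinearity equations with focal length f
    y = yp - f * u[1] / u[2]
    return np.array([x, y])

# Hypothetical vertical stereo pair: cameras 100 m above a ground point, baseline b = 10 m.
f = 0.05                            # 50 mm focal length, in metres
b = 10.0
ground_point = np.array([4.0, 3.0, 0.0])
left  = collinearity(ground_point, np.array([0.0, 0.0, 100.0]), np.eye(3), f)
right = collinearity(ground_point, np.array([b,   0.0, 100.0]), np.eye(3), f)

p_x = left[0] - right[0]            # horizontal parallax
Z = b * f / p_x                     # depth below the cameras; recovers 100 m here
print(f"parallax = {p_x:.4f} m, recovered depth = {Z:.1f} m")
```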

Image Acquisition and Calibration

Image acquisition in photogrammetry begins with selecting appropriate cameras to ensure geometric fidelity and sufficient detail for subsequent processing. Metric cameras, designed specifically for photogrammetric applications, feature stable optics with minimal distortion and fiducial marks for precise orientation, enabling sub-pixel accuracy in measurements. In contrast, non-metric cameras, such as consumer digital single-lens reflex (DSLR) models or industrial-grade sensors, lack these built-in calibrations but can be adapted through software correction, offering higher image quality and flexibility for close-range tasks. Multispectral sensors, which capture data across multiple wavelength bands, are used for applications requiring material analysis, such as vegetation mapping, by integrating visible and near-infrared channels. Resolution requirements typically exceed 20 megapixels for high-accuracy projects to achieve ground sampling distances below 1 cm, minimizing errors during processing. Effective acquisition strategies optimize image coverage and overlap geometry for reliable depth estimation. Forward overlap between consecutive images should range from 60% to 80% to ensure sufficient tie points for bundle adjustment, while lateral overlap of 30% to 60% supports stereo pair formation across flight lines or scan paths. The baseline distance, or separation between viewpoints, directly influences depth precision; shorter baselines (e.g., 1-2 times the object distance) enhance matching reliability for small-scale features but require more images, whereas longer baselines improve overall depth accuracy but risk occlusions and matching failures. Lighting conditions must be controlled to minimize shadows and specular reflections, ideally using diffuse, uniform illumination such as overcast skies or artificial sources to maintain consistent radiometry across the scene. Camera calibration is essential to determine intrinsic and extrinsic parameters, compensating for lens distortions and sensor alignments. Zhang's method, a widely adopted technique, uses multiple views of a planar checkerboard pattern to estimate intrinsic parameters—including focal length, principal point, and radial distortion coefficients k_1, k_2, and k_3—through homography decomposition, achieving accuracies below 0.1 pixels with 10-15 images. Self-calibration via structure-from-motion (SfM) leverages natural scene features without dedicated targets, simultaneously refining camera poses (extrinsic parameters: rotation matrix R and translation vector t) and 3D structure from unordered image sets, suitable for non-metric cameras in dynamic environments. These procedures relate acquired images to geometric projections, setting the stage for post-acquisition processing. Sensor-specific issues can introduce artifacts that degrade photogrammetric accuracy if unaddressed. Rolling shutter sensors, common in consumer cameras, scan lines sequentially, causing geometric distortions (e.g., "wobble" artifacts) during motion, which can shift features by up to 5% of the image height at speeds over 1 m/s; global shutter sensors expose the entire frame simultaneously, eliminating this issue for high-speed acquisitions. Color and radiometric calibration corrects for lighting and sensor response variations, ensuring consistent reflectance values essential for generating true orthophotos; this involves flat-field corrections and reference panels to achieve radiometric errors below 2%. Captured data must be stored in formats that preserve fidelity and metadata for traceability. RAW formats retain unprocessed sensor data, avoiding the compression artifacts that can alter pixel intensities in lossy formats such as JPEG, thus supporting higher precision in feature detection. Embedded EXIF metadata captures timestamps, GPS coordinates, and camera settings (e.g., focal length, exposure), facilitating georeferencing and temporal analysis without external logs.
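
As a concrete illustration of the checkerboard-based approach described above, the sketch below runs OpenCV's implementation of Zhang's method; the image folder, board dimensions, and square size are placeholder assumptions, and a real calibration would use 10-15 well-distributed views.

```python
import glob
import cv2
import numpy as np

# Assumed checkerboard with 9x6 inner corners and 25 mm squares (placeholders).
pattern = (9, 6)
square = 0.025  # metres
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Estimates focal length, principal point, and distortion coefficients (k1, k2, p1, p2, k3).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error (px):", rms)
print("Camera matrix:\n", K)
print("Distortion coefficients:", dist.ravel())
```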

Methods and Techniques

Aerial and Satellite Photogrammetry

Aerial photogrammetry employs airborne platforms to acquire overlapping images for large-scale topographic and thematic mapping, offering flexibility in altitude and coverage compared to ground-based methods. Satellite photogrammetry, in contrast, leverages orbital sensors for global-scale observations, enabling consistent coverage over vast areas with regular revisit cycles. Both approaches rely on stereoscopic principles to reconstruct three-dimensional surfaces, but they differ in resolution, cost, and operational constraints. Key platforms in aerial photogrammetry include manned aircraft, which have facilitated image acquisition since the 1920s for applications like agricultural surveying. Unmanned aerial vehicles (UAVs), or drones, have largely supplemented manned systems due to their lower cost and accessibility; fixed-wing UAVs excel in covering extensive areas efficiently, while multirotor platforms provide high-precision imaging for targeted sites. For satellite photogrammetry, missions like the U.S. Geological Survey's Landsat series deliver multispectral data at moderate resolutions for land-cover and environmental monitoring. Commercial satellites, such as Maxar's imaging constellation, support high-resolution acquisitions using agile pointing capabilities. Effective flight planning is essential to achieve the desired accuracy, particularly through calculation of the ground sampling distance (GSD), which represents the real-world distance covered by one image pixel. The GSD is computed as GSD = \frac{H \times s}{f}, where H is the flying height, s is the sensor pixel size, and f is the focal length; this metric guides altitude selection to balance coverage and detail. Nadir-oriented imaging ensures vertical coverage for planimetric mapping, whereas oblique angles facilitate digital surface model (DSM) generation by capturing height variations across terrain and structures. Satellite-specific techniques include pushbroom scanning, where linear sensors capture images continuously along the satellite's orbital path, producing strip-like data suitable for seamless mosaicking. Along-track stereo, achieved by tilting the sensor fore and aft along the orbital path, enables parallax-based height extraction within a single pass and is particularly effective for maintaining temporal consistency in dynamic landscapes. UAV operations face unique challenges, such as wind gusts that induce motion blur and reduce image quality, necessitating robust stabilization systems. Battery constraints further limit flight durations to typically 20-30 minutes per flight, requiring multiple launches for large surveys. Primary outputs from these methods are digital elevation models (DEMs) representing terrain heights and orthomosaics providing geometrically corrected, seamless image maps. With real-time kinematic (RTK) GPS integration on UAVs, horizontal and vertical accuracies often reach root mean square errors (RMSE) below 10 cm, meeting standards for engineering-grade mapping. A notable case is the use of drone photogrammetry in post-Hurricane Helene recovery efforts in North Carolina in 2024, where rapid surveys generated orthomosaics and DEMs to assess flood damage and prioritize infrastructure repairs, demonstrating UAVs' role in accelerating disaster response timelines.
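
As a sketch of how the GSD formula drives flight planning (the sensor values below are illustrative assumptions for a small UAV camera), the helper functions convert between flying height and ground sampling distance:

```python
def gsd(height_m, pixel_size_um, focal_mm):
    """Ground sampling distance (m/pixel): GSD = H * s / f, with all terms in metres."""
    return height_m * (pixel_size_um * 1e-6) / (focal_mm * 1e-3)

def height_for_gsd(target_gsd_m, pixel_size_um, focal_mm):
    """Flying height required to achieve a target GSD (inverse of the relation above)."""
    return target_gsd_m * (focal_mm * 1e-3) / (pixel_size_um * 1e-6)

# Example: 120 m flight, 3.3 um pixels, 8.8 mm lens (assumed values).
print(round(gsd(120, 3.3, 8.8) * 100, 2), "cm/pixel")        # ~4.5 cm/pixel
print(round(height_for_gsd(0.01, 3.3, 8.8), 1), "m flying height for 1 cm GSD")
```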

Terrestrial and Close-Range Photogrammetry

Terrestrial photogrammetry involves the acquisition of images from ground-based positions to measure and model objects or scenes at close distances, typically using cameras mounted on stable platforms or held by operators. This approach is particularly suited for detailed documentation of accessible structures and artifacts, where direct line-of-sight access allows for high-resolution imaging without the need for elevated viewpoints. Close-range photogrammetry, a subset of terrestrial methods, focuses on object-scale measurements with camera-to-object distances generally less than 100 meters, enabling precise 3D reconstructions of items ranging from small components to building facades. Common platforms in terrestrial and close-range photogrammetry include total stations integrated with digital cameras for combined angular and photogrammetric measurements, handheld consumer-grade or metric cameras for flexible on-site capture, and robotic arms for controlled industrial scanning of large assemblies. Total stations with built-in imaging capabilities facilitate geo-referenced image capture, aligning visual data with survey coordinates for enhanced accuracy in measurement tasks. Handheld devices, often stabilized on tripods, support rapid deployment in field conditions, while robotic arms provide repeatable positioning in controlled environments like manufacturing facilities. These setups contrast with aerial methods by prioritizing proximity and multi-angle coverage over broad-area extent. Key techniques in this domain emphasize comprehensive object coverage and precise control. Convergent photography, where multiple images are captured from overlapping viewpoints converging toward the target, ensures full surface documentation by minimizing blind spots and improving depth estimation through varied baselines. Coded targets—distinctive markers with unique patterns, such as retroreflective circles—are placed on or around the object to serve as control points, automating feature matching and camera orientation during processing. For textured surfaces, multi-view stereo (MVS) algorithms reconstruct dense point clouds by analyzing photo-consistency across numerous images, often integrated with structure-from-motion pipelines to derive dense geometry without prior camera calibration. These methods rely on digital workflows but reference standard calibration protocols for distortion correction. Despite its strengths, terrestrial and close-range photogrammetry faces specific challenges inherent to ground-level acquisition. Occlusions from object protrusions or environmental elements can obscure parts of the scene, requiring additional viewpoints or manual interventions to achieve complete models. Scale ambiguity arises in unconstrained setups, where relative sizes must be resolved using known references like coded targets or measured baselines to prevent distorted reconstructions. Illumination inconsistencies, particularly in indoor or shadowed settings, degrade image quality and matching reliability, necessitating controlled lighting or radiometric adjustments. Vibration in handheld or mobile platforms introduces motion blur, which is mitigated through stabilization tools or high-speed shutters to maintain sub-millimeter precision. In practice, terrestrial photogrammetry excels in industrial metrology, where it supports part inspection with accuracies down to 0.01 mm, as demonstrated in multi-focus imaging for precision components. For heritage documentation, it enables non-invasive recording of artifacts and structures, capturing geometric details alongside surface conditions for conservation planning. These uses highlight its role in condition monitoring and archival preservation, often yielding models suitable for BIM integration or virtual reality. Compared to laser scanning, photogrammetry offers advantages in cost-effectiveness, requiring only cameras and software rather than expensive dedicated hardware, making it accessible for fieldwork. It also inherently captures full-color textures during reconstruction, providing visually rich models that enhance analysis in heritage and industrial contexts without secondary texturing steps.

Stereophotogrammetry

Stereophotogrammetry relies on the principle of stereoscopic parallax, analogous to human binocular vision, where two images of the same scene captured from slightly offset viewpoints are used to reconstruct three-dimensional structure. The core mechanism involves measuring the horizontal disparity d between corresponding points in the left and right images, which arises due to the separation between the viewpoints. This disparity is inversely proportional to depth, enabling the computation of depth Z using the formula Z = \frac{f \cdot B}{d}, where f is the camera's focal length and B is the baseline distance between the two viewpoints. This approach leverages parallax to triangulate object positions, forming the foundation for 3D point extraction in photogrammetric workflows. Camera setups in stereophotogrammetry typically employ either parallel optical axes, which maintain straightforward epipolar geometry for easier correspondence matching, or convergent axes, where cameras are angled inward to converge at a finite distance, potentially reducing radial distortion effects but introducing vertical disparities that require rectification. For human interpretation of stereo pairs, techniques such as anaglyph viewing—overlaying images in complementary colors (e.g., red-cyan) viewed through filtered glasses—or polarization-based separation, using orthogonally polarized filters to direct images to each eye, facilitate stereoscopic perception without mechanical aids. These methods allow operators to perceive relief and measure contours manually. Automated processing in stereophotogrammetry employs algorithms to identify correspondences, with least-squares matching being a widely adopted technique that iteratively minimizes differences between image patches through geometric and radiometric transformations, achieving sub-pixel accuracy. Enhancements include multi-baseline configurations, incorporating additional viewpoints to resolve ambiguities and improve depth precision across varying scales. Matching can be sparse, targeting distinct features for efficient tie-point generation, or dense, producing comprehensive surface models by correlating every pixel, though dense methods demand higher computational resources. In low-texture regions where natural features are scarce, artificial patterns—such as projected grids or speckle textures—are introduced to enhance matching reliability. Historically, stereoplotters served as mechanical-optical instruments for manual stereophotogrammetry, enabling operators to view stereo pairs through floating marks and trace contours or profiles directly onto maps. Modern implementations integrate these concepts into digital software, automating disparity computation and reconstruction for scalable 3D mapping. Accuracy in stereophotogrammetry is influenced by the baseline-to-depth ratio, with ratios greater than 1:10 recommended to ensure sufficient parallax for precise measurements while avoiding excessive disparities that complicate matching.
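
The disparity-to-depth relation can be exercised with OpenCV's semi-global block matcher on a rectified stereo pair; the file names, focal length in pixels, and baseline below are placeholder assumptions, and the block-matching step stands in for whichever dense matching method a given workflow uses.

```python
import cv2
import numpy as np

# Hypothetical rectified stereo pair and calibration values.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
f_px = 1200.0       # focal length in pixels (assumed)
baseline_m = 0.25   # baseline in metres (assumed)

# Semi-global matching over 128 disparity levels with 5x5 blocks.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed point x16

# Depth from disparity: Z = f * B / d (valid only where disparity > 0).
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]
print("median scene depth (m):", np.median(depth[valid]))
```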

Data Processing and Analysis

Analog and Digital Workflows

In traditional analog photogrammetry, the workflow begins with the exposure of photographic film in metric cameras during aerial or terrestrial surveys, capturing overlapping images of the target area. The exposed film undergoes chemical development in a darkroom process, where developers, stop baths, and fixers convert latent images into visible negatives, followed by drying and quality inspection to ensure uniform density and minimal distortion. These physical negatives are then mounted in stereoplotters, such as the Wild B8 or Kern PG2, where optical projection systems use lenses and mirrors to recreate the central perspective and enable stereoscopic viewing of paired images. Operators manually trace contours, measure elevations via parallax bars, and delineate features on drafting tables or scribing sheets, often producing topographic maps or terrain models through floating marks and mechanical linkages. However, analog workflows suffer from precision limitations due to film shrinkage, emulsion irregularities, and operator fatigue, resulting in errors up to 2% in stereoscopic measurements and overall accuracies typically limited to 1:2,000 scale for mapping. Scalability is further constrained by the labor-intensive manual processes, which become impractical for large datasets or high-resolution requirements, often necessitating weeks of compilation for moderate-area projects. The digital workflow, in contrast, starts with the acquisition of raw digital images from sensors like CCD or CMOS arrays in modern cameras, bypassing film entirely and enabling immediate transfer to computational pipelines. For legacy analog images, high-resolution scanning digitizes negatives into raster formats, but contemporary processes emphasize native digital capture for reduced distortion. Automated feature detection identifies keypoints using algorithms such as the scale-invariant feature transform (SIFT), which detects rotation- and scale-invariant descriptors via difference-of-Gaussians, or Speeded-Up Robust Features (SURF), which approximates Hessian matrices for faster matching. These correspondences feed into Structure-from-Motion (SfM) pipelines to estimate initial sparse 3D point clouds and camera poses through incremental bundle adjustment, followed by dense reconstruction using Multi-View Stereo (MVS) or patch-based matching to generate high-density points via semi-global optimization or patch correlation. The transition from analog to digital involved hybrid analytical plotters from the 1970s to the 1990s, such as the Kern DSR11 or Zeiss Planicomp, which combined optical stereo viewing with computer-controlled servos for automated orientation and coordinate measurement, bridging mechanical stereovision with early numerical computation. By the post-2000 era, the full shift to digital workflows was driven by large-format digital sensors like the DMC and widespread adoption of GPU acceleration for parallelized matching and dense reconstruction, enabling real-time handling of multi-gigapixel datasets. In the digital data flow, raw images are preprocessed for radiometric correction before SfM yields sparse points, which are densified into point clouds exportable in the LAS format—a binary standard supporting billions of points with intensity, color, and classification attributes for interoperability with lidar systems, and compressible via LAZ to manage file sizes often exceeding 100 GB for large scenes. These clouds are then meshed using Poisson surface reconstruction or Delaunay triangulation to form watertight polygonal surfaces, followed by UV mapping and projection from the original images to apply textures, resulting in photorealistic models suitable for visualization or simulation. Digital automation yields significant efficiency gains over analog methods, reducing processing time from weeks of manual stereoplotting to hours via parallelized algorithms and eliminating chemical development delays, as demonstrated in cases where 1,200 km² of aerial imagery is triangulated in under 7 hours on multi-core systems.
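
A minimal sketch of the automated feature detection and matching stage, using OpenCV's SIFT detector, Lowe's ratio test, and a RANSAC fundamental-matrix fit to reject outliers before the SfM step (image file names are placeholders):

```python
import cv2
import numpy as np

img1 = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical overlapping pair
img2 = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect scale- and rotation-invariant keypoints and descriptors (SIFT).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Nearest-neighbour matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC fundamental matrix: enforces the epipolar constraint and flags outlier matches.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print(len(good), "ratio-test matches,", int(inlier_mask.sum()), "epipolar inliers")
```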

Bundle Adjustment and Error Correction

Bundle adjustment (BA) is a fundamental optimization process in photogrammetry that refines the three-dimensional structure of a scene and the camera parameters by minimizing the reprojection errors across multiple images. It simultaneously estimates the positions of object points and the exterior and interior orientation parameters of all cameras involved, ensuring a globally consistent photogrammetric model. The problem is typically formulated as minimizing a cost function that sums the squared differences between observed image coordinates and those predicted by the collinearity equations. The optimization is solved iteratively using algorithms such as the Levenberg-Marquardt method, which combines gradient descent and Gauss-Newton techniques to handle the nonlinearity and ensure convergence even from approximate initial values. This approach exploits the sparsity of the normal equations derived from the Jacobian matrix of partial derivatives with respect to the unknown parameters, enabling efficient computation for large datasets. Seminal developments in BA trace back to the work of D.C. Brown in the 1950s and 1960s, where he introduced analytical methods for adjusting photogrammetric blocks, evolving from strip adjustments to full bundle solutions. BA variants include free-net adjustments, which treat the network as floating without fixed control points to focus on relative geometry, and fixed-control adjustments that anchor the model using ground control points (GCPs) for absolute positioning. Incremental BA processes images sequentially, refining the model progressively to manage computational load in structure-from-motion pipelines, while global BA optimizes all parameters simultaneously for higher precision in dense blocks. Outliers, often arising from mismatched features, are handled by integrating robust estimators like RANSAC during initial feature correspondence to exclude blunders before optimization. Error sources in photogrammetry encompass systematic distortions, such as radial and tangential lens aberrations or atmospheric refraction in aerial imagery, and random errors like image noise from quantization or thermal effects. BA corrects these by incorporating additional parameters, such as distortion models within the interior orientation, and by weighting observations according to their variance to downplay noisy measurements. Blunders, including gross measurement errors from incorrect tie points, are detected post-adjustment using iterative residual editing, which rejects residuals exceeding a multiple of the standard deviation (typically 2-3σ) derived from the adjustment's variance-covariance matrix. Accuracy assessment in BA relies on metrics like the root mean square error (RMSE) computed on independent checkpoints, quantifying the planar or vertical discrepancies between adjusted and measured coordinates, often achieving sub-pixel levels in image space (e.g., 0.2-0.5 pixels) for well-calibrated systems. Confidence intervals for parameters are derived from the variance-covariance matrix output of the solution, providing statistical reliability estimates scaled by the a posteriori variance factor. Advanced BA techniques distinguish between relative orientation, which establishes the geometric relationship between image pairs without ground control, and absolute orientation, which scales and positions the model in a world coordinate system using GCPs or direct measurements. Direct georeferencing integrates GNSS and IMU data as additional observations in the adjustment, constraining exterior orientations to reduce reliance on GCPs and mitigate drift, particularly in UAV or mobile mapping applications where boresight misalignment between sensors is modeled with extra parameters.
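
The reprojection-error minimization at the heart of BA can be sketched with SciPy's nonlinear least-squares solver; this toy version (a simplified pinhole model with a shared, known focal length, no distortion parameters, and no sparsity exploitation) jointly refines camera poses and object points, with a robust Huber loss standing in for blunder down-weighting.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points, cams, f):
    """Simple pinhole projection of object points into their observing cameras.
    cams holds one row per observation: 3 rotation-vector and 3 translation components."""
    rotated = Rotation.from_rotvec(cams[:, :3]).apply(points) + cams[:, 3:6]
    return f * rotated[:, :2] / rotated[:, 2:3]   # differs from the collinearity form
                                                  # only in sign/axis convention

def residuals(x, n_cams, n_pts, cam_idx, pt_idx, observed, f):
    """Stacked reprojection residuals for all image observations."""
    cams = x[:n_cams * 6].reshape(n_cams, 6)
    pts = x[n_cams * 6:].reshape(n_pts, 3)
    return (project(pts[pt_idx], cams[cam_idx], f) - observed).ravel()

def bundle_adjust(cams0, pts0, cam_idx, pt_idx, observed, f):
    """Joint refinement of exterior orientations and object points.
    SciPy's trust-region solver plays the role of Levenberg-Marquardt here,
    and the Huber loss down-weights blunders much as robust estimators do."""
    x0 = np.hstack([cams0.ravel(), pts0.ravel()])
    res = least_squares(residuals, x0, method="trf", loss="huber",
                        args=(len(cams0), len(pts0), cam_idx, pt_idx, observed, f))
    n = len(cams0)
    return res.x[:n * 6].reshape(n, 6), res.x[n * 6:].reshape(-1, 3)
```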

Integration with Other Technologies

With Remote Sensing and GIS

Photogrammetry synergizes with remote sensing by fusing digital elevation models (DEMs) derived from stereo imagery with hyperspectral data to enhance classification accuracy. This integration leverages the geometric precision of photogrammetric DEMs to provide topographic context, which complements the spectral richness of hyperspectral imagery for distinguishing vegetation types, soil compositions, and surface features in complex environments. For instance, fusing point clouds from photogrammetric processing with hyperspectral bands has been shown to improve semantic segmentation of scenes by incorporating both geometric and spectral signatures. Multi-sensor platforms further amplify these synergies, combining visible-light cameras for photogrammetric reconstruction with thermal infrared (IR) sensors to capture temperature variations alongside structural data. Such platforms, often deployed on unmanned aerial vehicles (UAVs), enable simultaneous acquisition of RGB imagery for 3D modeling and thermal IR for detecting heat anomalies, like moisture in agricultural fields or structural defects in infrastructure. Photogrammetric processing of these multi-spectral and thermal datasets produces orthomosaics and DEMs that reveal environmental patterns not visible in single-sensor data. Integration with geographic information systems (GIS) facilitates the importation of photogrammetric products, such as orthophotos and triangulated irregular networks (TINs), into platforms like ArcGIS and QGIS for advanced spatial analysis. Orthophotos serve as georeferenced basemaps for overlaying vector layers, while TINs model terrain surfaces to derive metrics like slope and aspect, essential for hydrological modeling and terrain analysis. This workflow supports feature extraction, where photogrammetric edges are digitized into GIS polygons for thematic mapping, enhancing the scalability of geospatial databases. Data fusion techniques in this domain emphasize co-registration of photogrammetric optical data with synthetic aperture radar (SAR) imagery to enable all-weather mapping capabilities. Co-registration aligns datasets through feature matching or georeferencing, mitigating SAR's speckle noise with photogrammetry's high-resolution texture for hybrid models that produce consistent DEMs under cloud cover or at night. These hybrid approaches have been applied in land-use classification, where fused optical-SAR products improve boundary delineation in vegetated or shadowed areas. Standards ensure interoperability between photogrammetric outputs and remote sensing/GIS ecosystems, with Open Geospatial Consortium (OGC) specifications like the Web Map Service (WMS) enabling seamless data sharing across systems. Photogrammetric datasets comply with OGC standards for encoding orthophotos and DEMs in formats such as GeoTIFF, promoting plug-and-play integration in distributed GIS environments. Metadata schemas, including ISO 19115, standardize descriptions of lineage, quality, and extent for these products, facilitating discovery and validation in multi-source fusions. The benefits of these integrations are particularly evident in temporal monitoring, where multi-temporal orthophotos from photogrammetry track surface changes like erosion rates over time. By differencing sequential DEMs, analysts quantify volumetric losses, such as soil loss in catchments, with accuracies down to centimeters, supporting predictive models for environmental management. This approach has revealed erosion dynamics in Mediterranean landscapes, aiding in the assessment of land degradation trends.
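
As a sketch of the DEM-differencing workflow for temporal monitoring (the use of rasterio, the file names, and co-registered equal grids are assumptions), the following separates volumetric loss and gain between two survey epochs:

```python
import numpy as np
import rasterio

# Two co-registered, same-grid DEMs from successive photogrammetric surveys (hypothetical files).
with rasterio.open("dem_2022.tif") as a, rasterio.open("dem_2024.tif") as b:
    z1 = a.read(1).astype(float)
    z2 = b.read(1).astype(float)
    cell_area = abs(a.res[0] * a.res[1])   # pixel footprint in map units squared
    nodata = a.nodata

diff = z2 - z1                             # elevation change per cell (later minus earlier)
valid = np.ones_like(diff, dtype=bool) if nodata is None else (z1 != nodata) & (z2 != nodata)

erosion = diff[valid & (diff < 0)].sum() * cell_area      # volume lost (negative change)
deposition = diff[valid & (diff > 0)].sum() * cell_area   # volume gained
print(f"erosion volume: {erosion:.1f} m^3, deposition volume: {deposition:.1f} m^3")
```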

With Computer Vision and AI

The integration of computer vision and artificial intelligence has significantly automated and enhanced photogrammetric workflows, enabling more robust feature extraction, matching, and reconstruction from complex image sets. Traditional methods often struggle with variability in illumination, occlusions, and viewpoint changes, but deep learning techniques address these by learning hierarchical representations directly from data. For instance, feature matching has been revolutionized through self-supervised neural networks like SuperPoint, which detects and describes interest points without manual annotation, improving repeatability and accuracy in multi-view geometry tasks. In photogrammetric applications, SuperPoint has demonstrated superior performance in aerial tie-point matching, achieving higher pose estimation accuracy than classical detectors like SIFT. Semantic segmentation further augments photogrammetry by identifying and delineating objects within images, facilitating targeted processing and reducing noise in 3D reconstructions. Deep convolutional networks, such as encoder-decoder variants, segment photogrammetric images into classes like buildings, vegetation, or ground, enabling selective feature extraction and improved model fidelity. This approach is particularly valuable for crowdsourced or heritage imagery, where it combines with structure-from-motion to monitor structural changes while classifying elements semantically. Advancements in AI-driven dense matching have shifted photogrammetry toward end-to-end neural pipelines, exemplified by MVSNet, which infers depth maps from unstructured multi-view images using cost volume regularization. This network extracts deep features and predicts disparities, yielding denser point clouds than patch-based stereo methods, with applications in aerial reconstruction. Automated ground control point (GCP) detection leverages models like YOLO variants to identify markers in imagery, streamlining georeferencing and reducing manual intervention in large surveys. Similarly, oriented bounding box adaptations of these models enable precise localization of GCPs in aerial views, enhancing initialization. In the 2020s, generative models have addressed gaps in photogrammetric outputs, with GANs enabling texture synthesis for incomplete models derived from sparse views. These networks generate plausible surface details by learning from exemplar patches, filling holes in SfM reconstructions while preserving photometric consistency, as seen in thermal texture augmentation for multi-spectral models. Machine learning also supports error prediction in bundle adjustment, where neural regressors forecast reprojection residuals to guide adaptive optimization, prioritizing high-uncertainty parameters and converging faster on datasets with outliers. This adaptive approach refines camera poses and structure iteratively, improving global consistency in challenging scenarios. Edge deployments on drones facilitate real-time photogrammetric processing, allowing onboard inference for immediate 3D mapping during flights. Lightweight models run on embedded hardware to perform feature tracking and partial reconstructions, enabling applications like dynamic obstacle avoidance without offloading data to ground stations. Such systems process image streams locally, supporting autonomous operation in surveys. These integrations tackle key challenges in photogrammetry, such as low-texture scenes where classical features fail; deep matching networks like SuperGlue paired with DISK extract reliable correspondences even in uniform areas, improving reconstruction completeness in historical or indoor imagery. For scalability with the data volumes produced by UAV swarms, distributed frameworks parallelize processing across clusters, handling terabyte-scale image volumes from coordinated flights while maintaining sub-centimeter precision in orthomosaics. Practical examples include extensions to pipelines like COLMAP, where PyTorch-based deep feature matchers integrate seamlessly via plugins, replacing hand-crafted descriptors with learned ones for enhanced robustness in diverse environments. These hybrid systems exemplify how AI augments established photogrammetric tools, fostering efficiency in large-scale deployments. As of 2025, recent advancements include the integration of neural radiance fields (NeRFs) with photogrammetry for improved novel-view synthesis and AI-driven automation in workflows, enhancing accessibility and speed in 3D reconstruction.
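
One simple way such hybrid pipelines exploit semantic segmentation is to restrict classical feature extraction to semantically relevant pixels; the sketch below assumes a binary mask produced by any segmentation network (file names are placeholders) and passes it to OpenCV's detector so that sky or vegetation pixels contribute no unstable tie points downstream.

```python
import cv2

image = cv2.imread("facade.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical input image
mask = cv2.imread("facade_mask.png", cv2.IMREAD_GRAYSCALE)    # 255 where a segmentation
                                                              # network labelled 'structure'

# Detect features only inside the semantic mask, so transient or textureless regions
# are excluded before the structure-from-motion step.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, mask)

print(len(keypoints), "keypoints retained inside the masked region")
```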

Applications

Cartography and Topographic Mapping

Photogrammetry plays a central role in cartography by enabling the production of accurate topographic maps through the extraction of elevation data from overlapping aerial images. This process begins with the generation of digital elevation models (DEMs) from stereo imagery, which serve as the foundation for deriving contour lines that represent terrain relief. Contour lines are created by interpolating elevation values across the DEM grid, connecting points of equal height to visualize slopes, valleys, and peaks on two-dimensional maps. A key step in preparing imagery for topographic mapping is ortho-rectification, which corrects geometric distortions caused by terrain relief, camera tilt, and sensor orientation. During ortho-rectification, a DEM is used to project image pixels onto a horizontal plane, effectively removing displacement effects and producing scale-consistent orthomosaics suitable for map overlays. This ensures that features like roads and boundaries align precisely with ground coordinates, facilitating reliable cartographic outputs. For large-scale mapping projects, block triangulation is employed to orient and adjust extensive blocks of overlapping photographs, determining the three-dimensional positions of tie points across vast areas. This technique minimizes errors in position and attitude parameters, achieving sub-meter accuracy over hundreds of square kilometers by solving bundle adjustments in a computational least-squares framework. Additionally, hydro-flattening adjusts water body elevations in DEMs to a constant level, simulating traditional contour-based representations where lakes and rivers appear flat, which is essential for consistent hydrologic modeling in topographic sheets. Photogrammetric mapping adheres to standardized scales ranging from 1:500 for detailed plans to 1:50,000 for regional overviews, balancing resolution with coverage efficiency. The American Society for Photogrammetry and Remote Sensing (ASPRS) Positional Accuracy Standards outline requirements for these scales, such as Class 1 accuracy for 1:1,200 mapping, which mandates a horizontal root mean square error (RMSEr) of no more than 15 cm to ensure high-fidelity representation of terrain features. These standards guide the validation of map products using independent checkpoints, promoting consistency in national and international cartographic efforts. Common outputs include topographic sheets that integrate orthorectified imagery with vectorized contours, as well as digital terrain models (DTMs) representing bare-earth surfaces by filtering out vegetation and structures, in contrast to digital surface models (DSMs) that capture the full topographic envelope including above-ground features. DTMs are preferred for contour generation and hydrological analysis, while DSMs support broader applications like line-of-sight studies. In national programs, such as the U.S. Geological Survey's (USGS) topographic mapping initiatives, aerial photogrammetry has been instrumental since the mid-20th century, producing updated 1:24,000-scale quadrangles through stereo plotting and DEM derivation for the entire country. High-resolution DEMs from photogrammetry also aid urban planning, as seen in projects generating 1-meter DTMs for infrastructure development and flood risk assessment in densely populated areas.
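
The DEM-to-contour step can be illustrated with a short script; here a synthetic elevation grid stands in for a photogrammetric DEM, and Matplotlib's contouring routine interpolates lines at a fixed 5 m interval.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic 1 m-resolution DEM standing in for a photogrammetrically derived elevation grid.
x, y = np.meshgrid(np.linspace(0, 500, 501), np.linspace(0, 500, 501))
dem = 120 + 30 * np.exp(-((x - 250) ** 2 + (y - 300) ** 2) / 2e4)   # a single hill

# Interpolate contour lines at a fixed vertical interval (here 5 m), as a
# DEM-to-contour routine would for a topographic sheet.
interval = 5
levels = np.arange(np.floor(dem.min()), np.ceil(dem.max()) + interval, interval)
cs = plt.contour(x, y, dem, levels=levels, colors="k", linewidths=0.5)
plt.clabel(cs, fmt="%d m", fontsize=6)
plt.gca().set_aspect("equal")
plt.savefig("contours.png", dpi=200)
```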

Archaeology and Cultural Heritage

Photogrammetry plays a pivotal role in archaeology and cultural heritage by enabling the non-invasive documentation and analysis of historical sites and artifacts through the generation of accurate 3D models. Structure-from-motion (SfM) techniques, which reconstruct three-dimensional geometry from overlapping two-dimensional photographs, are particularly suited for creating comprehensive site-wide models of ruins and landscapes, allowing archaeologists to capture spatial relationships and structural details without physical disturbance. Close-range photogrammetry complements this by facilitating high-resolution scanning of individual artifacts, such as pottery or sculptures, often resolving surface details down to 0.1 mm for precise metric analysis during conservation planning. Key applications include virtual reconstructions of ancient ruins, which preserve and visualize lost architectural elements for research and public presentation. For instance, a 2025 digital study in Pompeii used photogrammetry and 3D modeling to reconstruct elite residences like the House of the Thiasos, modeling original upper-floor layouts with towering structures as luxurious status symbols offering panoramic views, thereby aiding in the understanding of social hierarchies. Similarly, monitoring environmental threats such as coastal erosion at heritage sites relies on repeated photogrammetric surveys to quantify surface changes over time; at the Sabbath Point archaeological site in central Newfoundland, Canada, UAV-based photogrammetry measured erosion rates on prehistoric structures, revealing annual losses of up to approximately 60 cm in vulnerable areas. Metric documentation for restoration projects further benefits from these methods, providing baseline data for interventions while ensuring compliance with preservation standards. Notable case studies from the 2020s demonstrate photogrammetry's integration with unmanned aerial vehicles (UAVs) for large-scale surveys. In Petra, Jordan, UAV photogrammetry was used during excavations to generate orthomosaic maps and 3D models of the Nabataean city's plateau, identifying archaeological features with centimeter-level accuracy and supporting ongoing conservation efforts against natural degradation. Hybrid approaches combining photogrammetry with laser scanning have enhanced precision in such projects, as seen in documentation of complex facades where photogrammetric texturing overlays laser-derived geometry to achieve sub-millimeter fidelity for detailed inventories. The primary benefits of photogrammetry in this field stem from its non-destructive nature and cost-effectiveness, allowing for longitudinal studies without risking fragile materials, while enabling public engagement through virtual reality (VR) models that democratize access to remote or deteriorating sites. However, challenges persist, particularly at delicate locations where low-impact, lightweight drones are essential to minimize disturbance and noise, and where generating standardized documentation for legal and conservation purposes requires robust protocols to ensure consistency across institutions. These techniques, building on close-range methods for artifact-level detail, underscore photogrammetry's value in safeguarding cultural heritage for future generations.

3D Modeling and Industrial Design

Photogrammetry plays a pivotal role in 3D modeling for industrial design by generating detailed digital representations from photographic data, enabling precise texture mapping onto polygonal meshes to produce photorealistic renders. This process involves aligning multiple images to reconstruct surface geometry and then projecting photographic textures onto the resulting mesh, enhancing visual fidelity for design and simulation. For instance, photogrammetric texturing allows for the precise application of high-resolution images onto laser-scanned models, improving the accuracy of digital twins in design workflows. In reverse engineering applications, photogrammetry facilitates the conversion of physical objects into editable CAD models by capturing overlapping photographs to generate point clouds and meshes, which are then refined into surfaces suitable for CAD editing. This method is particularly effective for complex geometries, where photogrammetric data serves as a reference for reconstructing accurate CAD representations, bridging the gap between physical prototypes and digital blueprints. Combining photogrammetry with 3D scanning techniques enables the creation of scalable models from image-based scans, reducing the need for manual measurement in product development. Within manufacturing, photogrammetry supports part inspection and quality control by producing models that verify dimensional accuracy during production and prototyping. Systems like the MaxSHOT 3D photogrammetry camera achieve repeatable measurements on large components, such as aerospace and automotive assemblies, ensuring compliance with tight tolerances in production. In film and visual effects (VFX), it is employed for asset creation, including scanning actors and environments to generate digital elements with lifelike details, streamlining the integration of real-world references into digital scenes. For architecture, photogrammetry integrates with building information modeling (BIM) to create as-built models from site photographs, facilitating design updates and simulations in tools like Civil 3D. Photogrammetry delivers sub-millimeter accuracy in prototyping, with reported precisions as fine as 0.01 mm over meter-scale volumes, making it suitable for high-precision applications like mold verification and component fitting. This also enables accurate volume calculations for parts, such as assessing material displacement in prototypes, where errors are minimized through multi-image overlap and careful calibration. In game development, scanned environments created via photogrammetry provide realistic assets, as seen in titles incorporating photoscanned props and terrains to enhance immersion without extensive manual modeling. Boeing employs photogrammetry for aircraft assembly verification, using it to measure passenger entry doors on the 787 model during production stages, ensuring fit with sub-millimeter tolerances (approximately ±0.127 mm). Outputs from photogrammetric workflows commonly include OBJ and STL file formats, which support mesh geometry and, in the case of OBJ, texture data for import into CAD, animation, and rendering software. These models are also compatible with augmented reality (AR) and virtual reality (VR) platforms, allowing interactive visualization of designs in immersive environments, such as overlaying prototypes on real-world settings for review. Close-range photogrammetry techniques, often enhanced by coded targets for feature detection, further refine these outputs for industrial use.

Engineering, Surveying, and Geotechnical Analysis

In engineering and surveying, photogrammetry enables precise as-built documentation of construction sites by generating detailed models from overlapping photographs, allowing verification of completed structures against design plans with centimeter-level accuracy. This approach is particularly valuable for capturing complex geometries in urban or industrial settings, where traditional methods may be time-consuming or hazardous. For instance, non-metric cameras mounted on drones or tripods facilitate rapid data capture, producing point clouds that quantify deviations in built elements such as foundations or retaining walls. Deformation monitoring represents another critical application, especially for critical infrastructure like bridges, where repeat photogrammetric surveys detect subtle movements over time. Unmanned aerial vehicles (UAVs) equipped with high-resolution cameras capture sequential imagery to compute displacements, such as bridge deck deflections under load, achieving sub-millimeter precision through dense image-matching algorithms. This non-contact method minimizes disruption to traffic and enhances safety compared to manual instrumentation, enabling early detection of structural issues in long-span bridges. In geotechnical contexts, photogrammetry supports rock face analysis by mapping discontinuities—such as joints and fractures—on slopes or excavation walls, informing kinematic assessments via stereographic projections. Terrestrial setups, using fixed cameras, generate dense point clouds that quantify the orientation, spacing, and persistence of these features, crucial for predicting potential rockfalls. For engineering volumetric analysis, photogrammetry excels in earthworks by comparing pre- and post-excavation surfaces to calculate cut-and-fill volumes, optimizing material transport and site grading. Drone-based surveys produce orthomosaics and digital elevation models (DEMs) that integrate with geographic information systems (GIS) for automated computations, reducing errors from manual cross-sections by up to 20% in large-scale projects. Tunnel mapping similarly benefits from terrestrial photogrammetry, where stationary camera arrays document interior geometries and deformations in constrained environments, supporting alignment verification during excavation. In mining operations, slope monitoring via UAV photogrammetry tracks progressive failures by differencing sequential DEMs, alerting operators to movements exceeding 10 cm that could indicate instability. Case studies from the 2020s highlight these applications, such as drone photogrammetry for dam inspections, where UAVs inspect spillways and abutments for cracks, generating 3D models that reveal deformations as small as 5 mm without direct access. Post-construction verification often achieves cm-level accuracy, as demonstrated in road projects where photogrammetric point clouds confirm pavement alignments. Standards for integration with building information modeling (BIM) further enhance utility, allowing as-planned models to overlay as-built photogrammetric data for discrepancy analysis, streamlining quality assurance and progress tracking. This fusion supports automated deviation reporting, with tolerances typically under 2 cm for structural elements.
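
At its core, the discontinuity mapping described above amounts to fitting planes to segmented point-cloud clusters and converting each plane normal to dip and dip direction for stereographic analysis; the sketch below shows that conversion on a synthetic joint surface (coordinate conventions and noise level are assumptions).

```python
import numpy as np

def plane_orientation(points):
    """Fit a plane to an Nx3 point cluster (least squares via SVD) and
    return (dip, dip_direction) in degrees, assuming X=east, Y=north, Z=up."""
    centred = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    normal = np.linalg.svd(centred)[2][-1]
    if normal[2] < 0:                        # force an upward-pointing normal
        normal = -normal
    dip = np.degrees(np.arccos(normal[2]))   # angle between the plane and the horizontal
    dip_dir = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
    return dip, dip_dir

# Synthetic joint surface dipping ~45 degrees toward the east, with measurement noise.
rng = np.random.default_rng(0)
u, v = rng.uniform(0, 5, (2, 200))
pts = np.column_stack([u, v, -u]) + rng.normal(0, 0.01, (200, 3))
print("dip %.1f deg toward azimuth %.0f deg" % plane_orientation(pts))
```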

Software and Tools

Commercial Software Packages

Commercial photogrammetry software packages provide proprietary, enterprise-grade solutions for processing images into 3D models, orthomosaics, and geospatial data, catering to industries such as surveying, construction, and engineering. As of 2025, the global photogrammetry software market has reached approximately $2.6 billion, with significant growth driven by demand in construction and infrastructure projects that require high-precision reality modeling. Among the leading packages, Agisoft Metashape stands out for its focus on structure-from-motion (SfM) and multi-view stereo (MVS) techniques, particularly suited for drone-based aerial photogrammetry. It offers automated workflows for image alignment, dense cloud generation, and mesh and orthomosaic creation, with exports compatible with CAD and GIS formats such as DXF, OBJ, and GeoTIFF. Pricing follows a perpetual license model at around $3,499 for the professional edition, though it requires a GPU-enabled system for efficient rendering of large datasets. Its strengths include a user-friendly graphical interface and certified accuracy for surveying applications, meeting standards like those from the American Society for Photogrammetry and Remote Sensing. Pix4D, another market leader, excels in aerial mapping and supports cloud-based processing for scalable operations. Key features encompass automated orthomosaic generation, digital elevation models, and point cloud outputs, with seamless integration into GIS ecosystems like ArcGIS for geospatial analysis. Subscription pricing starts at $350 per month or $3,500 annually for PIX4Dmapper, making it accessible for professional operators while emphasizing GPU acceleration for real-time previews. The software's intuitive interface and validated precision—achieving sub-centimeter accuracy in controlled surveys—position it as a go-to for site monitoring. RealityScan (formerly RealityCapture, acquired by Epic Games in 2021 and rebranded in 2025) is renowned for its rapid scanning capabilities tailored to visual effects (VFX) and heritage documentation. It provides high-speed reconstruction of textured meshes from photographs or laser scans, supporting exports to formats like OBJ and FBX for downstream tools, and is free for individuals and businesses with annual gross revenue under $1 million USD. Annual licensing costs $1,250 per seat for larger users, with GPU-intensive processing enabling quick turnaround for large-scale projects. Its advantages lie in an accessible interface and proven reliability for high-fidelity outputs in professional pipelines. For infrastructure applications, Bentley Systems' iTwin Capture Modeler (formerly ContextCapture) delivers robust photogrammetry for engineering projects, generating multiresolution 3D models from aerial or terrestrial imagery. Features include hybrid processing of photogrammetry and laser-scan data, with direct integrations to Bentley and Esri platforms for enhanced BIM-GIS workflows. It requires GPU support for optimal performance and is priced through enterprise subscriptions, often bundled in Bentley's CONNECT platform starting at several thousand dollars annually. The tool's certified accuracy and streamlined automation make it ideal for large-scale digital twins in infrastructure. Overall, these packages reflect 2025 market trends toward deeper integrations with BIM and GIS suites, facilitating data exchange in multidisciplinary environments while prioritizing ease of use and computational efficiency.

Open-Source and Research Tools

Open-source photogrammetry tools have democratized access to advanced techniques, enabling researchers, academics, and small-scale developers to perform structure-from-motion (SfM) and multi-view stereo (MVS) without commercial licensing costs. These tools often feature modular designs with command-line interfaces (CLIs) and extensible APIs, allowing customization for specific workflows such as integrating machine-learning models for feature detection.

COLMAP stands out as a widely adopted open-source pipeline for SfM and MVS, supporting both ordered and unordered image collections through its graphical user interface (GUI) and CLI. Developed initially for computer vision research, it implements robust algorithms for feature matching, pose estimation, and dense reconstruction, with outputs including sparse and dense point clouds compatible with formats such as PLY and OBJ. Its Python bindings facilitate scripting and integration with external libraries, such as those for AI-enhanced feature extraction. In academic prototyping, COLMAP is frequently used to reconstruct cultural heritage sites from archival photos, offering a low-cost alternative to proprietary software while achieving sub-millimeter accuracy in controlled experiments. However, its CLI-heavy workflow presents a steeper learning curve for non-experts, and the GUI lacks the polished visualizations of commercial counterparts. As of mid-2025, version 3.12 introduced enhanced CUDA support for GPU-accelerated dense reconstruction, improving processing speeds by up to 5x on modern NVIDIA hardware for large datasets exceeding 10,000 images. Community extensions have also enabled integration with ROS (Robot Operating System) for real-time robotics applications, such as SLAM in autonomous drones.

OpenDroneMap (ODM) specializes in processing UAV-captured imagery, providing a toolkit for generating orthophotos, digital elevation models (DEMs), and textured models via its core engine and the web-based interface of WebODM. It employs open algorithms for georeferencing and orthorectification, with support for metadata from common drone sensors, making it well suited to tasks like forest canopy mapping. The Python API (PyODM) allows integration into batch-processing pipelines, and community plugins extend functionality to multispectral analysis. Startups leverage ODM for cost-effective mapping, processing datasets of 1,000+ images into georeferenced outputs with RMSE below 5 cm when ground control points are used. Limitations include high memory demands for high-resolution inputs (at least 128 GB of RAM is recommended for 2,500-image sets) and less intuitive error handling than user-friendly commercial tools. Updates in 2025 added distortion correction and auto-alignment for multi-temporal datasets, enhancing accuracy in dynamic scenes such as crop growth tracking.

MicMac, developed by the French National Geographic Institute (IGN), offers a comprehensive suite for dense matching and orientation in photogrammetric workflows, emphasizing research-grade precision through tools such as Tapioca for tie-point computation, AperiCloud for visualizing sparse reconstructions, and dense correlation modules. Its CLI design supports scripted processing of terrestrial and aerial imagery, with outputs tailored to geospatial applications such as ortho-rectification, and its scripting interfaces enable extensions for custom models, appealing to academic users in the geosciences for prototyping deformation analysis of glaciers. In low-budget scenarios, MicMac serves as an alternative for architectural and heritage documentation, reconstructing facades from photographs at resolutions up to 1 mm per pixel. The tool's complexity, rooted in its modular structure, results in a steeper learning curve and minimal support, often requiring familiarity with photogrammetric terminology for effective use. It features ongoing improvements in parallelization for multi-core systems but lags behind some peers in native GPU acceleration.
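
To illustrate the scripted, CLI-driven style these tools share, the following minimal sketch drives a sparse COLMAP reconstruction from Python by calling the standard COLMAP commands (feature_extractor, exhaustive_matcher, mapper, model_converter) through subprocess. It assumes the colmap executable is installed and on the system PATH; the project paths are placeholders, not part of any particular dataset.

    # Minimal sketch of a scripted COLMAP sparse reconstruction.
    # Assumes the `colmap` executable is on the PATH; paths are placeholders.
    import subprocess
    from pathlib import Path

    images = Path("project/images")      # input photographs
    workspace = Path("project/work")     # output workspace
    workspace.mkdir(parents=True, exist_ok=True)
    database = workspace / "database.db"
    sparse = workspace / "sparse"
    sparse.mkdir(exist_ok=True)

    def run(*args):
        """Run one COLMAP CLI stage and fail loudly if it errors."""
        subprocess.run(["colmap", *map(str, args)], check=True)

    # 1. Detect and describe local features in every image.
    run("feature_extractor", "--database_path", database, "--image_path", images)

    # 2. Match features exhaustively across all image pairs (fine for small sets).
    run("exhaustive_matcher", "--database_path", database)

    # 3. Incremental SfM: estimate camera poses and a sparse point cloud.
    run("mapper", "--database_path", database, "--image_path", images,
        "--output_path", sparse)

    # 4. Export the first reconstructed model as a PLY point cloud.
    run("model_converter", "--input_path", sparse / "0",
        "--output_path", workspace / "sparse.ply", "--output_type", "PLY")

Dense reconstruction follows the same pattern with the image_undistorter, patch_match_stereo, and stereo_fusion stages, and equivalent steps can be invoked through the pycolmap bindings when tighter integration with Python libraries is needed.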

Challenges and Future Directions

Current Limitations and Accuracy Issues

Photogrammetry achieves sub-centimeter accuracy in controlled, ideal conditions with high-quality imagery and sufficient ground control points (GCPs), but performance degrades significantly under suboptimal imaging conditions, introducing errors in feature detection and matching. In low-light environments, such as indoor settings, calibration errors for non-metric cameras increase, leading to higher overall determination errors in 3D reconstructions due to reduced contrast and feature visibility. Accurate georeferencing relies on GCP density; the optimal number of GCPs varies with site size, and for UAV-based surveys one study found 12 GCPs sufficient for areas up to 39 ha and 18 for areas up to 342 ha to achieve reliable absolute positioning.

Environmental factors pose substantial challenges to photogrammetric accuracy, particularly in vegetated or complex terrain where shadows and occlusions obscure key features. Dense vegetation creates partial blockages that hinder stereo matching, resulting in incomplete point clouds and elevated reconstruction errors, because photogrammetry relies on visible surface textures that foliage often conceals. In aerial surveys, atmospheric effects such as haze scatter light and reduce image clarity, necessitating dedicated correction algorithms to restore contrast and prevent systematic biases in the resulting models.

Computational demands limit the scalability of photogrammetry, especially for large datasets from modern sensors. Processing over 1,000 high-resolution images (e.g., 20 MP) typically requires at least 64 GB of RAM to handle dense point-cloud generation without excessive swapping or crashes, with real-time applications remaining infeasible without hardware acceleration.

Ethical and data-related concerns further constrain photogrammetric deployments, particularly in urban settings. Drone-based surveys in populated areas raise privacy issues, as high-resolution imagery can inadvertently capture personal details, prompting calls for stricter regulatory frameworks to balance utility with individual rights. Additionally, AI-driven feature-matching algorithms exhibit biases toward textured surfaces and perform poorly on uniform or low-contrast areas such as water or bare soil, which can amplify errors in diverse environmental datasets, underscoring the need for robust validation across varied surface textures.

In aerial photogrammetry, typical vertical errors are on the order of 1/5,000 of the flying height under standard conditions with adequate GCPs, though accuracy degrades without multi-sensor fusion to address residual uncertainties.
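
To make these rules of thumb concrete, the short Python sketch below encodes the figures quoted above (vertical error of roughly 1/5,000 of flying height and the GCP counts reported by the cited UAV study) as simple helper functions; the function names and thresholds are illustrative only and mirror the text rather than any universal standard.

    # Illustrative helpers encoding the accuracy rules of thumb quoted above.
    # The constants mirror the figures in the text and are not universal limits.

    def expected_vertical_error_m(flying_height_m: float) -> float:
        """Approximate vertical error (about 1/5,000 of the flying height)
        under standard conditions with adequate ground control."""
        return flying_height_m / 5000.0

    def suggested_gcp_count(site_area_ha: float) -> int:
        """GCP counts from the cited UAV study: 12 GCPs for sites up to
        about 39 ha and 18 GCPs for sites up to about 342 ha."""
        if site_area_ha <= 39:
            return 12
        if site_area_ha <= 342:
            return 18
        raise ValueError("outside the study's tested range; plan GCPs per site")

    # Example: a 120 m UAV flight implies roughly 0.024 m (2.4 cm) of vertical
    # error, and a 100 ha site would call for about 18 GCPs by this scheme.
    print(expected_vertical_error_m(120.0), suggested_gcp_count(100.0))

These are coarse planning heuristics only; achieved accuracy also depends on lighting, surface texture, image overlap, and calibration quality, as discussed above.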

Future Directions

The integration of artificial intelligence (AI) and machine learning (ML) into photogrammetry is poised to revolutionize feature extraction through predictive modeling, enabling systems to anticipate and reconstruct incomplete datasets with higher accuracy. By leveraging neural networks for semantic segmentation and anomaly detection, future workflows will automate the identification of geological or architectural elements in imagery, significantly reducing processing times compared with traditional methods. Autonomous drone swarms, coordinated via AI algorithms, will enhance coverage in challenging environments by dynamically optimizing flight paths for comprehensive 3D mapping. In 2025, software advancements such as Artec Studio Lite integrated AI-powered photogrammetry to broaden access to professional 3D tools, and hybrid approaches combining photogrammetry with LiDAR are gaining traction for enhanced accuracy in vegetated and complex terrain.

Hardware innovations are advancing lightweight hyperspectral cameras that capture spectral data across hundreds of bands for enhanced material identification in photogrammetric reconstructions, facilitating deployment on mobile platforms. These compact sensors, weighing under 1 kg, integrate with photogrammetric workflows to produce detailed surface models without compromising portability. Complementing this, 5G networks enable instantaneous data transmission and processing during drone missions, allowing on-the-fly reconstruction and model updates with latencies below 10 ms.

In space exploration, photogrammetry will support planetary missions through AI-assisted terrain mapping, enabling autonomous navigation on extraterrestrial surfaces such as Mars, where stereo imagery from rover cameras generates digital elevation models for hazard avoidance. For environmental monitoring, satellite constellations such as those from Planet will employ photogrammetric pipelines to derive high-resolution DEMs from multispectral imagery, tracking changes in ice sheets and vegetation cover at global scales. Ethical AI frameworks are also emerging to ensure bias-free reconstructions, incorporating fairness audits of training data to prevent distortions in 3D models derived from diverse cultural or environmental datasets.

The photogrammetry software market is forecast to expand significantly, reaching approximately $3.13 billion by 2033, driven by AI integration and UAV adoption, with a compound annual growth rate exceeding 10%.
