Geophysical survey
A geophysical survey is an indirect, non-invasive method of investigating the subsurface of the Earth by measuring physical properties at or near the surface, such as variations in gravity, magnetism, electrical conductivity, or seismic waves, to infer geological structures, material compositions, and anomalies without direct excavation.[1] These surveys employ a range of techniques, including seismic refraction and reflection, gravity measurements, magnetic field mapping, electrical resistivity, electromagnetic induction, and ground-penetrating radar, each sensitive to specific subsurface attributes such as density contrasts, magnetic susceptibility, or electrical properties.[2][3] Originating in the mid-1600s with early magnetic surveys for iron ore detection in Sweden, geophysical methods evolved significantly through the 20th century, driven by petroleum exploration, wartime applications such as magnetic and sonar detection during World War II, and adoption in civil engineering beginning in the late 1920s.[2] Today, they are routinely integrated into geological and geotechnical investigations to provide rapid, cost-effective data on site parameters, including in-place dynamic properties for infrastructure design.[2] Airborne variants, such as magnetic and radiometric surveys, enable broad-scale mapping of buried rock types and structures, often complementing ground-based efforts.[4]

Geophysical surveys serve diverse applications across resource exploration, environmental management, and hazard assessment. In natural resource sectors, they identify potential deposits of minerals, hydrocarbons, and groundwater by delineating subsurface reservoirs and geologic features.[3][4] Environmental uses include mapping contamination plumes, locating voids or tunnels, and monitoring pollutant migration at hazardous sites.[5] In engineering and construction, methods like seismic refraction determine bedrock depth and soil velocities for foundations, dams, and highways, while also detecting faults and geohazards.[2][6] Archaeological applications leverage non-destructive techniques, such as magnetometry and ground-penetrating radar, to map buried features and guide targeted excavations.[2][7]
Overview
Definition and principles
A geophysical survey is the systematic collection of geophysical data to determine the characteristics of subsurface materials indirectly, by measuring variations in physical properties such as density, magnetic susceptibility, and elastic moduli, without direct observation of the subsurface.[2] These surveys rely on contrasts in these properties between different subsurface features to image or map geological structures, such as faults, voids, or mineral deposits.[8]

The fundamental principles of geophysical surveys are grounded in established physical laws that govern the propagation and interaction of fields and waves in the Earth. For gravity surveys, the method exploits Newton's law of universal gravitation, which states that the gravitational force between two masses is proportional to the product of their masses and inversely proportional to the square of the distance between them, allowing detection of density contrasts through measured anomalies in the gravitational acceleration.[9] Electromagnetic surveys are based on Maxwell's equations, which describe the behavior of electric and magnetic fields and their propagation as electromagnetic waves, enabling the mapping of subsurface conductivity variations.[10] Seismic surveys depend on the principles of elastic wave propagation, where mechanical waves travel through the Earth and reflect or refract at interfaces due to changes in elastic properties, following the elastic wave equation derived from Hooke's law and Newton's second law.[11]

A basic expression for the vertical gravity anomaly due to a density contrast is given by \Delta g_z = G \int_V \frac{\Delta \rho \, z}{r^3} \, dV, where G is the gravitational constant, \Delta \rho is the density contrast, z is the vertical distance from the observation point to the volume element dV, and r is the distance; more precise formulations account for the full geometry.[12]

Geophysical data are broadly categorized into potential fields, which are static measurements arising from long-range forces (e.g., gravity anomalies sensitive to density, magnetics to magnetic susceptibility); wavefields, which involve dynamic propagation of energy (e.g., seismic waves measuring elasticity, electromagnetic waves probing conductivity); and electrical methods, which directly assess resistivity or induced polarization in materials.[2]

Unlike geological surveys, which involve direct sampling and observation of surface or core-extracted rocks and soils to study lithology and stratigraphy, geophysical surveys provide non-invasive, spatially extensive insights into subsurface physical properties, often complementing geological data for comprehensive site characterization.[2] Common units include the nanotesla (nT) for magnetic field variations, where the Earth's field ranges from about 25,000 to 65,000 nT, and the milligal (mGal) for gravity anomalies, with typical subsurface contrasts producing anomalies on the order of 0.1 to 10 mGal.[2]
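For a concrete instance of the volume integral above, the anomaly of a uniform buried sphere reduces to the closed form \Delta g_z = G M z / (x^2 + z^2)^{3/2} with M = \tfrac{4}{3}\pi R^3 \Delta\rho. The following minimal Python sketch (the body radius, depth, and density contrast are hypothetical illustration values) evaluates it along a surface profile:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_gravity_anomaly(x, depth, radius, drho):
    """Vertical gravity anomaly (mGal) over a uniform buried sphere.

    Closed-form case of the volume integral above:
    delta_g_z = G * M * z / r^3, with anomalous mass M = (4/3) pi R^3 * drho.

    x      : horizontal offset(s) from the point above the sphere centre (m)
    depth  : depth to the sphere centre (m)
    radius : sphere radius (m)
    drho   : density contrast (kg/m^3)
    """
    mass = (4.0 / 3.0) * np.pi * radius**3 * drho
    r = np.sqrt(x**2 + depth**2)
    dg = G * mass * depth / r**3  # m/s^2
    return dg * 1e5               # 1 mGal = 1e-5 m/s^2

# Example: a 50 m radius body with +500 kg/m^3 contrast at 100 m depth
offsets = np.linspace(-300.0, 300.0, 7)
print(sphere_gravity_anomaly(offsets, depth=100.0, radius=50.0, drho=500.0))
```

The peak value for these parameters is about 0.17 mGal, consistent with the 0.1 to 10 mGal range of typical anomalies noted above.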
The roots of geophysical surveying trace back to the 17th and 18th centuries, when scientists began systematically measuring Earth's gravitational and magnetic fields. In 1798, Henry Cavendish conducted the first laboratory experiment to measure the gravitational attraction between masses using a torsion balance, establishing a foundational method for quantifying gravity variations that later informed field surveys.[13] By the early 19th century, interest in geomagnetism grew, leading to the establishment of dedicated observatories; in 1834, Carl Friedrich Gauss founded the Göttingen Magnetic Observatory and invented a sensitive magnetometer, enabling precise measurements of Earth's magnetic field intensity and direction on a global scale.[14] These early efforts laid the groundwork for geophysical exploration by linking local measurements to broader planetary properties.

The 20th century marked the transition from theoretical measurements to practical applications in resource exploration, particularly for oil and minerals. A pivotal milestone occurred in 1921 with the first commercial seismic survey using reflection techniques, conducted by J.C. Karcher near Dougherty, Oklahoma, for an independent producer.[15] In the 1920s, Gulf Oil employed seismic refraction surveys along the Texas Gulf Coast, leading to the 1924 discovery of the Orchard salt dome; oil production there began in 1926, marking one of the first oil discoveries guided by geophysical methods and spurring widespread adoption in petroleum prospecting.[16] Gravity methods also advanced during this period, with the Eötvös torsion balance enabling high-precision horizontal gradient measurements for detecting subsurface density contrasts, as applied in early U.S. oil surveys.[17]

World War II accelerated innovations in airborne geophysical techniques, particularly magnetics. In the 1930s, the development of fluxgate magnetometers allowed for aerial detection of magnetic anomalies, which the U.S. and British militaries adapted in the early 1940s for locating submerged submarines by sensing distortions in Earth's magnetic field caused by steel hulls, a technology known as Magnetic Anomaly Detection (MAD).[18] This wartime application transitioned to civilian use postwar, enhancing regional magnetic mapping for geological studies. Meanwhile, borehole techniques emerged, including the initial development of nuclear magnetic resonance (NMR) logging in the late 1940s, which provided non-invasive insights into fluid properties within rock formations, complementing surface surveys.[19]

Post-1950s advancements were driven by the digital revolution and theoretical breakthroughs. The introduction of digital seismic recording in the late 1950s and 1960s enabled computer-based processing, transforming raw data into 2D and later 3D subsurface models through techniques like migration, vastly improving imaging accuracy for complex structures.[20] The acceptance of plate tectonics theory in the 1960s, supported by marine geophysical data on seafloor spreading and magnetic striping, spurred large-scale global surveys to map tectonic boundaries and ocean basins.[21] From the 1980s onward, the integration of the Global Positioning System (GPS) provided increasingly precise positioning for survey instruments, eventually reaching centimeter level with differential techniques, revolutionizing data georeferencing in both airborne and ground-based operations and enabling precise integration of multi-method datasets.[22]
Survey methods
Magnetic and gravity methods
Magnetic and gravity methods are passive geophysical techniques that exploit natural variations in Earth's magnetic and gravitational fields to infer subsurface geological structures without introducing artificial sources. These potential field methods measure anomalies caused by contrasts in magnetic susceptibility or density, respectively, and are particularly useful for regional-scale mapping due to their non-invasive nature and ability to cover large areas efficiently.[23][24]

Magnetic surveys
Magnetic surveys detect variations in the Earth's magnetic field induced by subsurface rocks with differing magnetic properties. The primary sources of magnetization are induced magnetization, where rocks magnetize in proportion to the ambient geomagnetic field according to their magnetic susceptibility K, giving \mathbf{M} = K \mathbf{H}_0 (with \mathbf{H}_0 as the inducing field), and remanent magnetization, a permanent "fossil" magnetism acquired during rock formation or alteration.[23][25] Induced magnetization dominates in most surveys, while remanent effects can complicate interpretations if significant.[26]

Instruments for magnetic surveys include fluxgate magnetometers, which measure vector components of the field (e.g., B_x, B_y, B_z) with resolutions of around 1 nT, and proton precession magnetometers, which determine the total field intensity from the proton precession frequency f = \gamma B / 2\pi (where the gyromagnetic ratio \gamma \approx 2.675 \times 10^8 rad s^{-1} T^{-1}) and achieve sensitivities of 0.1–1 nT.[23][27] Surveys are conducted on the ground, where portable instruments allow detailed profiling, or airborne, using aircraft-towed or helicopter-borne systems for rapid coverage over hundreds of square kilometers.[28]

Data interpretation focuses on magnetic anomalies \Delta B, the deviation from the regional field, often modeled for a point dipole with magnetic moment \mathbf{m} as \Delta \mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} \frac{3(\mathbf{m} \cdot \hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{m}}{r^3}, where \mu_0 = 4\pi \times 10^{-7} H/m is the permeability of free space, \hat{\mathbf{r}} is the unit vector in the direction of \mathbf{r}, and r is the distance from the source; this form approximates the anomalous field contribution from a localized source (with \mathbf{m} = \mathbf{M} V for magnetization \mathbf{M} and small volume V) in the far-field limit.[29] Anomalies typically range from tens to hundreds of nT and reveal structures like igneous intrusions or ore deposits.[23]
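To make the dipole formula concrete, the following minimal Python sketch evaluates the anomalous field at an observation point; the moment and geometry are hypothetical illustration values, not measured data:

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # permeability of free space, H/m

def dipole_anomaly_nt(r_vec, m_vec):
    """Anomalous magnetic field (nT) of a point dipole at the origin.

    Implements B = mu0/(4 pi) * (3 (m . r_hat) r_hat - m) / r^3
    for dipole moment m_vec (A m^2) observed at position r_vec (m).
    """
    r_vec = np.asarray(r_vec, dtype=float)
    m_vec = np.asarray(m_vec, dtype=float)
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    b = MU0 / (4 * np.pi) * (3 * np.dot(m_vec, r_hat) * r_hat - m_vec) / r**3
    return b * 1e9  # tesla -> nanotesla

# Hypothetical example: vertical moment m = M * V of 1e5 A m^2,
# observed 50 m directly above the source
m = np.array([0.0, 0.0, 1.0e5])
print(dipole_anomaly_nt([0.0, 0.0, 50.0], m))
```

For these values the vertical anomaly is about 160 nT, within the tens-to-hundreds of nT range quoted above.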
Gravity surveys
Gravity surveys measure subtle variations in the acceleration due to gravity, g \approx 9.8 m/s², arising from subsurface density contrasts; typical rock densities range from about 1800 to 3200 kg/m³.[24] Anomalies reflect mass excesses or deficits, such as denser ore bodies (positive anomalies) or sedimentary basins (negative anomalies).[12]

Key instruments are relative gravimeters, such as the LaCoste-Romberg model, which use a zero-length spring to balance the gravitational force and achieve resolutions of 0.01–0.1 mGal (1 mGal = 10^{-5} m/s²) but require drift corrections, and absolute gravimeters, which employ free-fall interferometry for drift-free measurements precise to about 2 μGal.[24][30][31]

Raw data undergo corrections: the free-air correction adjusts for elevation (+0.3086 h mGal, with h in m) to account for distance from Earth's center, while the Bouguer correction removes the gravitational attraction of the slab of material between station and datum (-0.0419 \rho h mGal, with \rho in g/cm³). Terrain corrections further refine the data for topographic effects; they are always added, are often computed via digital elevation models, and can amount to 1 mGal in rugged areas.[24] The complete Bouguer anomaly is thus \Delta g_B = g_{obs} + \delta_{FA} - \delta_B + \delta_T - \delta_{other}, where the \delta terms are the corrections and \delta_{other} collects remaining effects such as latitude and tidal reductions.[32]

Applications include delineating density contrasts in basins (e.g., 10–50 mGal deficits) and mineral deposits (1–5 mGal positive anomalies).[24]
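A minimal Python sketch of these reductions follows; the normal-gravity argument g_norm and the default slab density of 2.67 g/cm³ are common conventions assumed here for illustration, not prescribed by the corrections above:

```python
def bouguer_anomaly_mgal(g_obs, g_norm, h_m, rho_gcc=2.67, terrain=0.0):
    """Simple Bouguer anomaly from the corrections described above.

    g_obs   : observed gravity at the station (mGal)
    g_norm  : normal (latitude-corrected) gravity at the station (mGal)
    h_m     : station elevation above the datum (m)
    rho_gcc : Bouguer slab density (g/cm^3); 2.67 is a common crustal default
    terrain : terrain correction (mGal), always added
    """
    free_air = 0.3086 * h_m                 # +0.3086 mGal per metre of elevation
    bouguer_slab = 0.0419 * rho_gcc * h_m   # attraction of the infinite slab
    return g_obs - g_norm + free_air - bouguer_slab + terrain

# Hypothetical station 250 m above the datum
print(bouguer_anomaly_mgal(g_obs=978100.0, g_norm=978150.0, h_m=250.0))
```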
Field procedures
Both methods follow grid-based protocols with line spacings of 50–500 m and station intervals of 10–100 m, tailored to target depth and resolution needs; denser grids resolve shallower features.[33][34] Noise sources include diurnal variations from ionospheric currents (up to 50 nT/day for magnetics) and Earth tides (about 0.3 mGal for gravity), corrected by repeat readings at fixed base stations every 1–2 hours.[35][36] Surveys maintain accuracy through precise positioning (about 1 cm for gravity station elevations) and avoidance of cultural interference such as power lines.[24]

These methods are cost-effective for reconnaissance over vast regions, with airborne magnetic surveys covering thousands of km² affordably, but they suffer from non-uniqueness in inversion: multiple subsurface models can produce identical anomalies, necessitating integration with other data for resolution.[37][38][28]
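Base-station corrections of this kind reduce to interpolating the repeat base readings in time and subtracting the drift; a minimal Python sketch, with hypothetical times and values, follows:

```python
import numpy as np

def drift_correct(times, readings, base_times, base_readings):
    """Remove instrument drift and diurnal variation by linear interpolation
    between repeated readings at a fixed base station, as described above.

    times, readings           : survey station times (hours) and measured values
    base_times, base_readings : times and values of the repeat base readings
    Returns readings with the time-varying base level removed, referenced
    to the first base reading.
    """
    drift = np.interp(times, base_times, base_readings) - base_readings[0]
    return np.asarray(readings) - drift

# Hypothetical gravity loop: base re-read after 2 h shows 0.05 mGal of drift
print(drift_correct([0.5, 1.0, 1.5],
                    [981000.10, 981000.40, 981000.25],
                    [0.0, 2.0],
                    [981000.00, 981000.05]))
```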
Seismic methods
Seismic methods in geophysical surveys employ controlled acoustic sources to generate elastic waves that propagate through the subsurface, revealing structural and stratigraphic information based on wave reflections and refractions at interfaces with contrasting elastic properties. These active-source techniques primarily utilize compressional (P-) waves, which involve particle motion parallel to the direction of propagation and travel through solids, liquids, and gases, and shear (S-) waves, which cause transverse particle motion and are restricted to solids. The propagation of these elastic waves is governed by the material's elastic moduli and density, with velocities typically ranging from 1.5 to 6 km/s for P-waves in sedimentary rocks.[39][40]

A fundamental principle in seismic refraction is Snell's law, which describes the bending of waves at an interface between media of different velocities:

\frac{\sin \theta_1}{v_1} = \frac{\sin \theta_2}{v_2}
where \theta_1 and \theta_2 are the angles of incidence and refraction relative to the normal, and v_1 and v_2 are the velocities in the respective media. This law enables the prediction of ray paths and of the critical angle beyond which waves travel along the interface as head waves. In reflection, waves bounce back at interfaces with the angle of reflection equal to the angle of incidence, allowing imaging of deeper structures.[41][42]

Reflection seismology involves recording waves that return to the surface after reflecting from subsurface layers, producing stacked profiles that image geological formations; it is commonly used in hydrocarbon exploration to depths of several kilometers. Refraction seismology, in contrast, measures the arrival times of waves refracted along high-velocity layers and is suited to shallow investigations, such as determining bedrock depth in geotechnical applications up to tens of meters. Key instruments include geophones, land-based receivers that detect ground velocity; vibroseis trucks, which generate controlled vibrations via hydraulic baseplates for land surveys; and air guns, which release compressed-air bubbles for marine environments.[43][44][45]

Data acquisition in seismic surveys uses linear or areal arrays of sources and receivers in 2D or 3D configurations to provide comprehensive subsurface coverage, with shot-receiver geometries designed to optimize signal-to-noise ratios. A central technique is the common midpoint (CMP) gather, in which multiple traces from different source-receiver pairs reflecting at the same subsurface point are collected and processed to enhance imaging through redundancy. In 2D surveys, a single line of receivers yields cross-sectional profiles, while 3D arrays, often with hundreds of channels spaced at 10–50 m, enable volumetric models for complex reservoirs.[46][47]

Basic interpretation relies on travel-time curves, which plot first-arrival times versus source-receiver offset to identify refractors or reflectors; reflection travel-time curves appear hyperbolic in homogeneous media. Stacking velocity analysis scans CMP gathers for the velocities that maximize reflector coherency, providing interval velocity models for depth conversion. Resolution is limited by the dominant wavelength \lambda = v / f, where v is wave velocity and f is frequency; vertical resolution typically achieves \lambda/4, requiring high-frequency sources (e.g., 50–100 Hz) for fine details in shallow surveys.[48][49][50]

Variants include vertical seismic profiling (VSP), where receivers are placed in a borehole to record downgoing and upgoing waves from surface sources, improving velocity control and resolution near the wellbore for depths up to several kilometers. Microseismic methods detect low-magnitude events induced by fluid injection or stress changes to map fracture networks, using geophone arrays to locate hypocenters and infer permeability in reservoirs, often integrated with hydraulic fracturing monitoring.[51][52]
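As a worked illustration of these relations, the following Python sketch computes the critical angle from Snell's law, the refraction crossover distance for a single horizontal layer over a half-space, and the \lambda/4 vertical resolution limit; the layer velocities, thickness, and source frequency are hypothetical illustration values:

```python
import numpy as np

def critical_angle_deg(v1, v2):
    """Critical angle from Snell's law, sin(theta_c) = v1/v2, in degrees."""
    return np.degrees(np.arcsin(v1 / v2))

def crossover_distance(v1, v2, h):
    """Offset beyond which the refracted head wave arrives before the
    direct wave, for a horizontal layer of thickness h over a half-space."""
    return 2.0 * h * np.sqrt((v2 + v1) / (v2 - v1))

def vertical_resolution(v, f):
    """Quarter-wavelength vertical resolution limit: lambda/4 = v / (4 f)."""
    return v / (4.0 * f)

# Hypothetical two-layer case: 20 m of soil (600 m/s) over bedrock (3000 m/s)
print(critical_angle_deg(600.0, 3000.0))        # ~11.5 degrees
print(crossover_distance(600.0, 3000.0, 20.0))  # ~49 m
print(vertical_resolution(2000.0, 50.0))        # 10 m at 50 Hz in 2 km/s rock
```

The last line shows why high-frequency sources matter in shallow work: raising the dominant frequency from 50 Hz to 100 Hz halves the resolvable bed thickness in the same material.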