
Earthquake forecasting

Earthquake forecasting encompasses the probabilistic assessment of future seismic events' timing, location, magnitude, and potential impacts, relying on statistical models informed by tectonic physics, historical seismicity, and geophysical data rather than deterministic predictions of precise occurrences. Unlike weather forecasting, which benefits from observable atmospheric precursors, earthquake processes arise from sudden frictional failures along faults in the brittle crust, rendering short-term (days to weeks) predictions unreliable due to the absence of consistently detectable pre-rupture signals. Key achievements include operational earthquake forecasting (OEF) systems for aftershocks, which provide time-dependent probabilities of subsequent events following a mainshock, aiding response by estimating elevated risks over hours to months. Long-term forecasts, such as those in the USGS National Seismic Hazard Model updated in 2024, delineate regions prone to damaging shaking—covering nearly 75% of the U.S. population—and underpin building codes and insurance practices through maps of expected ground motion. Models like the Uniform California Earthquake Rupture Forecast (UCERF3) incorporate multi-fault ruptures and rate-state friction physics to generate branching forecasts, improving estimates of rare, large events over decades. Challenges persist in validating forecasts prospectively, as retrospective fitting often overstates skill, and probabilistic seismic hazard analysis (PSHA) faces criticism for assuming event independence, potentially underestimating cascading failures in clustered sequences. Recent integration of machine learning enhances catalog completeness and fault mapping but has yet to deliver superior short-term forecasts beyond statistical baselines, with empirical tests revealing limited gains amid data scarcity and regional variability. Despite claims of precursors such as electromagnetic anomalies or animal behavior, rigorous peer-reviewed scrutiny finds no reproducible evidence supporting their use for actionable warnings, underscoring the field's reliance on empirical realism over speculative causal chains.

Definitions and Scope

Distinction Between Forecasting and Prediction

In seismology, earthquake prediction refers to the deterministic specification of an earthquake's time, location, and magnitude with sufficient precision and reliability to enable actionable responses, such as evacuations or shutdowns. This requires defining three key elements: the date and time, the geographic coordinates (typically within tens of kilometers), and the expected magnitude range (often to within one unit). No scientifically validated method has achieved this level of reliability, as earthquakes arise from complex, chaotic processes in the Earth's crust where precursors are insufficiently understood or consistent to pinpoint events deterministically.

In contrast, earthquake forecasting involves probabilistic assessments of seismic hazard over broader regions and longer time scales, estimating the likelihood of earthquakes exceeding certain magnitudes within defined periods, such as years or decades. These forecasts resemble weather or climate outlooks, incorporating statistical models based on historical seismicity, fault slip rates, and geodetic data such as GPS measurements of tectonic strain accumulation. For instance, the U.S. Geological Survey (USGS) issues forecasts via its National Seismic Hazard Maps, indicating, for example, a 2-3% annual probability of a magnitude 6.7 or greater earthquake in parts of California. Forecasts for aftershocks following a mainshock apply similar probabilistic logic but over shorter windows, such as days or weeks, to inform temporary risk adjustments.

The distinction underscores a fundamental limitation in current geophysical knowledge: while forecasting leverages empirical data on seismic cycles and fault behavior to quantify long-term risks for insurance and building codes, prediction remains elusive due to the absence of reliable, causal precursors that could resolve the inherent variability in rupture initiation. This probabilistic approach aligns with the stochastic nature of fault dynamics, where failure thresholds are reached nonlinearly, preventing precise short-term prediction without violating observed data patterns. Claims of successful predictions, often from non-peer-reviewed sources, have consistently failed validation, reinforcing reliance on probabilistic forecasting for practical risk mitigation.

Time Scales and Probabilistic Nature

Earthquake forecasting is fundamentally probabilistic, yielding estimates of the likelihood of seismic events within specified regions, magnitudes, and time intervals rather than deterministic predictions of exact occurrences, owing to the interplay of tectonic stresses, fault heterogeneities, and nonlinear rupture dynamics. This approach acknowledges that while long-term patterns emerge from elastic rebound and seismic cycles, short-term triggers involve unpredictable stress perturbations and nucleation processes, rendering precise timing elusive even with advanced monitoring.

Forecasts span distinct time scales, each leveraging different data and models. Short-term forecasts, typically covering hours to weeks, focus primarily on aftershocks following a mainshock, where elevated probabilities arise from stress changes and Coulomb failure criteria. The U.S. Geological Survey (USGS) operational system, for instance, generates advisories approximately 20 minutes after earthquakes of magnitude 5 or greater, quoting probabilities of exceeding Modified Mercalli intensity IV (light shaking) over the next 1 day, 1 week, and 1 month using epidemic-type aftershock sequence (ETAS) models calibrated to observed decay rates. These short-term efforts achieve skill above baseline rates for aftershocks but offer limited foresight for spontaneous mainshocks, where probabilities rarely exceed random expectation on daily scales.

Intermediate-term forecasts, extending from weeks to several years, incorporate time-dependent signals such as seismicity rate anomalies, b-value changes, or geodetic transients, but empirical validation remains sparse due to the infrequency of testable events and confounding noise in observations. Models exploring non-Poissonian clustering or accelerated moment release have shown promise in some regions, yet prospective reliability lags, with success rates often indistinguishable from long-term averages. This scale bridges operational alerts and hazard planning but faces criticism for over-reliance on unverified physical mechanisms amid institutional biases toward positive reporting in academic publishing.

Long-term forecasts, over decades to centuries, dominate seismic hazard applications through probabilistic seismic hazard analysis (PSHA), integrating fault-specific recurrence distributions, slip-rate estimates from GPS and paleoseismology, and Gutenberg-Richter frequency-magnitude relations to compute exceedance probabilities for ground motions. For example, PSHA underpins national building codes by estimating, say, a 2% probability of exceedance in 50 years for a given level of ground shaking, as in USGS national hazard maps updated periodically with refined source models. These assessments, while robust for risk mitigation, assume quasi-periodic fault behavior that overlooks cascading ruptures, and their inputs draw from peer-reviewed catalogs prone to undercounting due to historical detection limits. Advances like UCERF3 incorporate multi-fault ruptures, elevating forecasted probabilities for complex systems such as California's San Andreas network.
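
As a minimal illustration of the probabilistic framing above, the sketch below converts a time-independent annual rate into exceedance probabilities over different horizons under a Poisson assumption; the 2% annual figure and the horizons are illustrative choices, not published hazard values.

```python
import math

def poisson_exceedance_probability(annual_rate: float, years: float) -> float:
    """Probability of at least one event in a window, assuming a
    time-independent Poisson process with the given annual rate."""
    return 1.0 - math.exp(-annual_rate * years)

# Illustrative only: an annual exceedance probability of 2% corresponds to
# an annual rate of -ln(1 - 0.02) ~= 0.0202 events per year.
annual_rate = -math.log(1.0 - 0.02)

for horizon in (1, 10, 30, 50):
    p = poisson_exceedance_probability(annual_rate, horizon)
    print(f"P(>=1 event in {horizon:>2} yr) = {p:.1%}")
```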

Historical Background

Early Concepts and Elastic Rebound Theory

The earliest scientific conceptions of earthquakes, dating to antiquity, viewed them primarily as manifestations of subterranean forces such as trapped winds, underground waters, or volcanic activity, without mechanisms enabling reliable forecasting. These ideas, prevalent through the 19th century, emphasized irregularity and locality, precluding systematic prediction; for instance, 19th-century geologists like Robert Mallet proposed earthquakes as elastic waves propagating through the Earth, based on observations of damage patterns from events like the 1857 Basilicata earthquake, but offered no causal model for timing or recurrence. Empirical records in regions like China and Japan documented historical earthquakes—such as recurrence cycles noted in temple chronicles spanning centuries—but lacked theoretical integration for prospective warnings, treating quakes as sporadic rather than cyclically driven.

A paradigm shift occurred with the elastic rebound theory, formulated by American geologist Harry Fielding Reid in 1910 following his investigation of the April 18, 1906, San Francisco earthquake, which ruptured approximately 296 miles (477 km) of the San Andreas Fault and produced surface offsets of 2 to 21 feet (0.6 to 6.4 meters). Reid's analysis, incorporating triangulation surveys from before and after the event, revealed that gradual tectonic displacement—estimated at 1.5 to 2 inches (3.8 to 5 cm) per year across the fault—deformed overlying rocks elastically until the accumulated strain exceeded frictional resistance, triggering brittle failure and rapid rebound to an undeformed state, releasing seismic energy. This process, detailed in Reid's 1910 report to the State Earthquake Investigation Commission, explained coseismic slip as the counterpart to aseismic creep, with energy release scaling to the deformed volume and strain magnitude.

The theory's causal realism—rooted in measurable fault geometry, strain rates from geodetic data, and elastic mechanics—provided the foundational physics for earthquake forecasting by positing quasi-periodic seismic cycles tied to interplate slip rates, rather than random events. Reid estimated recurrence intervals for the San Andreas system on the order of decades to centuries, contingent on loading rates, enabling rudimentary long-term hazard assessment; for example, post-1906 quiescence along ruptured segments implied future strain buildup, influencing early probabilistic zoning in California by the 1920s. Unlike prior descriptive models, elastic rebound integrated first-principles mechanics, validating observations like foreshocks as minor rebounds and aftershocks as adjustments, though it initially overlooked complexities such as viscoelastic relaxation or fault segmentation. This framework persists as the cornerstone of modern seismotectonics, underpinning forecasts that quantify exceedance probabilities over decades via paleoseismic and geodetic constraints.

Mid-20th Century Initiatives and Seismic Gaps

In the 1950s and early 1960s, advancements in global seismic monitoring laid groundwork for systematic earthquake forecasting efforts, driven by Cold War-era nuclear test detection needs. The U.S. Geological Survey (USGS) analyzed seismic signals from underground explosions and Soviet nuclear tests to probe crustal structure, enhancing understanding of fault dynamics and seismicity patterns. Concurrently, the establishment of the World-Wide Standardized Seismograph Network (WWSSN) in 1961, spurred by nuclear test detection initiatives, provided uniform, high-quality data from over 120 stations worldwide, enabling more precise mapping of earthquake distributions and identification of quiescent fault segments. These instrumental improvements shifted focus from ad hoc observations to data-driven analysis of long-term seismic behavior, particularly along subduction zones.

A pivotal initiative emerged from Soviet seismological research in the Kuril-Kamchatka region, where S.A. Fedotov and colleagues at the Kamchatka Volcanological Station examined historical catalogs of strong earthquakes (magnitude 7.75 or greater) from the early 20th century onward. In 1965, Fedotov documented regularities in the spatial and temporal clustering of these events, noting that subduction arcs divide into segments typically 100-400 km long, each prone to periodic great earthquakes with recurrence intervals of decades to centuries. He identified "gaps"—segments lacking major ruptures for extended periods (e.g., over 30-100 years)—as zones accumulating tectonic stress due to ongoing plate convergence, inferring higher future hazard there based on elastic strain buildup principles. This approach yielded a long-term forecast map for the Kuril-Kamchatka zone, predicting large events in specific gaps, four of which aligned with subsequent quakes by the early 1970s.

Fedotov's framework constituted an early probabilistic forecasting method, emphasizing seismic quiescence as a precursor to rupture rather than short-term precursors like foreshocks. It relied on empirical recurrence patterns and first-order tectonic loading rates, avoiding unsubstantiated claims of deterministic prediction. Though limited to regional arcs with dense historical data, it influenced global applications by highlighting how time-dependent hazard rises in locked fault sections, a concept later formalized as the seismic gap hypothesis. Critiques note that not all gaps rupture predictably, as aseismic slip or variable friction can delay or redistribute stress, underscoring the probabilistic nature over certainty. These mid-century efforts marked a transition from descriptive seismology to hazard zoning, prioritizing empirical fault segmentation amid emerging plate tectonics paradigms.

Post-1970s Developments and Characteristic Earthquakes

In the 1970s and 1980s, earthquake forecasting shifted toward targeted experiments and models emphasizing fault-specific recurrence patterns, building on earlier concepts. The U.S. National Earthquake Hazards Reduction Program (NEHRP), established by Congress in 1977, initially prioritized earthquake prediction research, allocating resources to monitor precursors and test hypotheses in seismically active regions. One prominent initiative was the Parkfield Earthquake Prediction Experiment, launched by the U.S. Geological Survey (USGS) in 1985 along the San Andreas Fault in central California. This effort targeted a segment where paleoseismic data indicated quasi-periodic magnitude 6 earthquakes recurring approximately every 22 years, with the last event in 1966; forecasters specified a magnitude 6.0 quake between 1985 and 1993, accompanied by potential precursors like foreshocks and groundwater changes. The experiment deployed dense seismic, geodetic, and strainmeter networks to capture pre-event signals, but the anticipated event occurred in 2004 as a magnitude 6.0 quake without clear precursors, exceeding the time window and highlighting limitations in deterministic short-term prediction.

Parallel advancements refined the seismic gap hypothesis, which posits heightened risk in fault segments quiescent since prior large ruptures due to stress accumulation. Reviews in the early 1990s assessed gaps along subduction zones and transform faults, such as those identified around the circum-Pacific margin, where gaps were hypothesized to mature into great earthquakes within decades based on historical catalogs and slip rates. Empirical tests, however, showed mixed results; some gaps ruptured as predicted, but others persisted without events, prompting critiques of over-reliance on time-dependent probabilities without integrating clustered seismicity or aseismic slip. These evaluations underscored a transition from optimistic short-term predictions to longer-term probabilistic assessments, informed by global catalogs revealing that gaps alone underperform in forecasting when compared to baseline models.

The characteristic earthquake model emerged in the late 1970s and early 1980s as a framework for forecasting, proposing that individual fault segments repeatedly generate earthquakes of similar magnitude, rupture area, and slip distribution—termed "characteristic" events—reflecting geometric constraints and stress thresholds on the fault plane. Seminal studies applied this to the San Andreas Fault, using paleoseismic trenching and historical records to estimate recurrence intervals of 100–300 years for magnitude 7–8 events on segments like Parkfield or the southern San Andreas. Proponents argued that such quasi-periodic behavior, evidenced by offset geomorphic features and radiocarbon-dated strata, allows time-dependent hazard maps by tracking elapsed time since the last rupture. However, subsequent analyses of global datasets, including microseismicity and geodetic measurements, revealed deviations: many faults exhibit Gutenberg-Richter frequency-magnitude distributions rather than discrete characteristic peaks, with variability in rupture styles challenging uniform recurrence assumptions. By the 1990s, the model influenced operational forecasts but faced empirical scrutiny, as non-characteristic events (e.g., along-strike jumps or triggered slips) complicated probabilistic inversions.

Underlying Physical Principles

Tectonic Stress Accumulation and Release

Tectonic plates move relative to one another at rates of 1 to 10 centimeters per year, driven by ridge push and slab pull forces, leading to differential motion across plate boundaries and intra-plate faults. Where faults lock due to frictional resistance, the surrounding brittle crust deforms elastically, accumulating strain energy proportional to the square of the accumulated strain, with stresses typically reaching 100-200 megapascals before failure. This stress buildup occurs gradually over decades to centuries, with the elastic moduli of crustal rocks (around 30-80 gigapascals) determining the stored energy, which is released abruptly during rupture as slip propagates along the fault and radiates seismic waves.

The elastic rebound theory, formulated by Harry Fielding Reid in 1910 following the 1906 San Francisco earthquake, posits that this sudden release restores the rocks to their pre-strain configuration, akin to a snapped rubber band. Reid's analysis of surface offsets—up to 6.4 meters of right-lateral displacement along a 477-kilometer rupture on the San Andreas Fault—demonstrated that the coseismic slip matched the elastic strain accumulated from ongoing plate motion since prior events. For the San Andreas system, long-term slip rates average 3.5 centimeters per year, implying that full-cycle strain accumulation for a typical magnitude 7-8 event requires 100-300 years, though partial releases and aseismic creep modulate this.

This cyclic accumulation and release underpins seismic hazard assessment, as strain left unreleased since the last rupture brings fault segments closer to failure, elevating rupture probability until equilibrium is restored. Observations from geodetic networks confirm ongoing strain accumulation at rates aligning with plate velocities, with deficits in surface slip indicating locked zones primed for future release, though viscous postseismic relaxation can redistribute stress over years. Deviations from purely elastic behavior, such as viscoelastic flow or pore pressure changes, influence the exact timing but do not negate the dominant elastic framework for stress transfer in the crust.
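
The strain-budget reasoning above can be sketched as a back-of-envelope calculation: dividing an event's coseismic slip by the long-term loading rate gives the time needed to reload the fault. The slip rate and slip values below are taken from the figures quoted in this section, and the assumption of full seismic coupling is a simplification of the example.

```python
def mean_recurrence_years(coseismic_slip_m: float, slip_rate_mm_per_yr: float,
                          coupling: float = 1.0) -> float:
    """Years of steady tectonic loading needed to accumulate one event's
    worth of slip on a fault with the given coupling fraction."""
    slip_rate_m_per_yr = slip_rate_mm_per_yr / 1000.0
    return coseismic_slip_m / (coupling * slip_rate_m_per_yr)

# Figures from the text: ~3.5 cm/yr (35 mm/yr) long-term slip rate and
# several meters of slip in a large (M7-8) rupture.
for slip_m in (4.0, 6.4, 10.0):
    t = mean_recurrence_years(slip_m, 35.0)
    print(f"{slip_m:4.1f} m of coseismic slip -> ~{t:3.0f} yr to reload")
```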

Seismic Cycles and Recurrence Intervals

The seismic cycle encompasses the full sequence of stress accumulation, failure, and relaxation on a fault between successive large earthquakes, rooted in Harry Fielding Reid's elastic rebound theory, developed in 1910 from geodetic surveys of the 1906 San Francisco earthquake. This theory explains how continuous tectonic plate motion deforms the brittle upper crust elastically across a locked fault during the dominant interseismic phase, building strain until frictional resistance is overcome, resulting in abrupt coseismic slip that rebounds the deformed rocks to a less strained configuration. Postseismic deformation follows, driven by mechanisms such as aseismic afterslip on the fault and viscoelastic relaxation in the lower crust or mantle, which can redistribute stress over years to decades and contribute a portion of the total displacement budget. Preseismic acceleration of slip or seismicity, if it occurs, remains empirically inconsistent and is not a reliable harbinger of rupture timing.

Recurrence intervals represent the characteristic time span between similar-magnitude earthquakes on a given fault segment, typically estimated through paleoseismology—via trenching to excavate and date stratigraphic evidence of past surface ruptures—or by dividing long-term geologic or geodetic slip rates by the average coseismic displacement per event. On well-studied plate-boundary faults, for example, paleoseismic records yield recurrence intervals of roughly 200–400 years for M>7 ruptures, while slip-rate methods corroborate these by integrating offset data with GPS measurements of present-day loading. Variability arises from factors including fault segmentation, stress perturbations from neighboring events, and inherent frictional heterogeneities, leading to clustered rather than uniform timing; coefficients of variation (standard deviation divided by mean interval) often range from 0.2 to 0.6 in global datasets.

In earthquake forecasting, seismic cycles inform time-dependent renewal models that reset after each rupture, computing conditional probabilities that rise nonlinearly with elapsed time relative to the mean recurrence interval—unlike time-independent models assuming constant hazard rates. The Brownian passage-time (BPT) distribution, for instance, physically grounds this by modeling stress buildup as a diffusive process with aperiodic fluctuations, fitting paleoseismic sequences better than time-independent Poisson statistics and yielding higher short-term probabilities for overdue faults (e.g., elapsed time exceeding 1.5 times the mean). Empirical validation from catalogs shows these models capture increased likelihood in late-cycle phases but falter with open intervals (where elapsed time exceeds dated records) or multi-fault interactions, underscoring that cycles provide probabilistic rather than deterministic constraints due to unresolved causal complexities in fault mechanics.
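
A minimal sketch of the renewal calculation described above, representing the Brownian passage-time model with SciPy's inverse Gaussian distribution; the 250-year mean recurrence and aperiodicity of 0.5 are illustrative values, not estimates for any particular fault.

```python
from scipy.stats import invgauss

def bpt_conditional_probability(t_elapsed, dt, mean_recurrence, alpha):
    """P(rupture within the next dt years | no rupture in the t_elapsed
    years since the last event), under a Brownian passage-time (inverse
    Gaussian) renewal model with the given mean recurrence interval and
    aperiodicity alpha (the coefficient of variation)."""
    # scipy's invgauss(mu, scale=s) has mean mu*s and CoV sqrt(mu), so
    # mu = alpha**2 and s = mean/alpha**2 reproduce the BPT parameters.
    dist = invgauss(alpha**2, scale=mean_recurrence / alpha**2)
    return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

# Illustrative values: 250-year mean recurrence, aperiodicity 0.5.
for elapsed in (50, 150, 250, 350):
    p = bpt_conditional_probability(elapsed, 30, 250.0, 0.5)
    print(f"{elapsed:>3} yr since last rupture -> 30-yr probability {p:.1%}")
```

Under these assumptions the printed 30-year probability grows as the elapsed time approaches and exceeds the mean recurrence, which is the qualitative behavior the renewal framing above describes.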

Forecasting Methods

Statistical and Probabilistic Models

Statistical and probabilistic models form the cornerstone of earthquake forecasting by quantifying the likelihood of seismic events using historical seismicity, recurrence statistics, and stochastic point processes, rather than pinpointing exact times or locations. These approaches recognize the inherent variability in earthquake occurrence, modeling it through probability distributions derived from paleoseismic records, instrumental catalogs, and fault-specific behaviors. Unlike deterministic methods, they provide hazard maps and risk assessments, such as the annual probability of exceeding a certain ground acceleration, which inform building codes and emergency planning.

Probabilistic Seismic Hazard Analysis (PSHA), introduced by C. Allin Cornell in 1968, integrates models of earthquake sources, attenuation of ground motions, and uncertainty to estimate the probability of exceeding specified intensity levels over a given time horizon. In PSHA, seismic sources are characterized by recurrence rates from frequency-magnitude relations like the Gutenberg-Richter law, where the number of earthquakes with magnitude $M$ or greater scales as $10^{a - bM}$, with $b \approx 1$. Time-independent variants assume a Poisson process, treating events as memoryless and independent, suitable for long-term regional hazards where clustering is averaged out. This yields annual exceedance rates via the integral $\lambda(I) = \sum_i \nu_i \int G_i(I \mid m, r)\, f_i(m, r)\, dm\, dr$, where $\nu_i$ is the rate of source $i$, $G_i$ the probability of ground-motion exceedance given magnitude $m$ and distance $r$, and $f_i$ the source's magnitude-distance density; exceedance probabilities then follow under the Poisson assumption.

Time-dependent models enhance accuracy for specific faults by incorporating elapsed time since the last event, aligning with elastic rebound theory's seismic cycles. Renewal processes, such as Weibull or lognormal distributions for inter-event times, model conditional probabilities that rise as a fault approaches its average recurrence interval; for instance, on a fault like the southern San Andreas, the probability of a M7+ event may increase from about 2% per year (the time-independent average) to over 10% after centuries of quiescence. The Brownian Passage Time (BPT) model, with aperiodicity parameter $\alpha$, captures quasi-periodic behavior, with smaller $\alpha$ (below about 0.5) indicating more regular recurrence, consistent with lognormal fits to paleoseismic data. These outperform time-independent models in retrospective tests on faults like the North Anatolian, where hybrid approaches blend Poisson background seismicity with renewal models for characteristic events.

For short-term forecasting, the Epidemic-Type Aftershock Sequence (ETAS) model treats seismicity as a branching point process, where each event triggers offspring aftershocks at a rate decaying with time as $(t + c)^{-p}$, typically with $p \approx 1.1$–$1.3$, superimposed on a stationary background rate. Calibrated via maximum likelihood on catalogs, ETAS excels in declustering and prospectively forecasting sequences, as demonstrated in scenarios like the 2019 Ridgecrest aftershocks, where ensemble ETAS variants improved 3-day event counts over a baseline Poisson model by accounting for Omori-Utsu clustering. However, ETAS assumes stationarity and may underperform for induced seismicity or long-range triggering without extensions like time-scaled variants. Evaluation metrics, such as likelihood scores or reliability diagrams, reveal ETAS's skill for moderate events (M < 6) but limited foresight for rare large ruptures due to parameter uncertainty from sparse data.
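
The ETAS conditional intensity described above can be written down compactly; the sketch below evaluates the rate for a toy three-event catalog with illustrative parameter values (in practice the parameters mu, K, c, p, and alpha would be fit by maximum likelihood to a regional catalog).

```python
import numpy as np

def etas_intensity(t, times, mags, mu, K, c, p, alpha, m_ref):
    """ETAS conditional intensity lambda(t): a constant background rate mu
    plus Omori-Utsu-decaying contributions from every earlier event, with
    productivity increasing exponentially with the triggering magnitude."""
    past = times < t
    triggered = (K * np.exp(alpha * (mags[past] - m_ref))
                 * (t - times[past] + c) ** (-p))
    return mu + triggered.sum()

# Toy catalog: a M6.0 mainshock at t = 0 days followed by two aftershocks.
times = np.array([0.0, 0.3, 1.2])   # days
mags = np.array([6.0, 4.5, 4.0])

# Illustrative parameter values only.
params = dict(mu=0.02, K=0.05, c=0.01, p=1.1, alpha=1.8, m_ref=3.0)

for t in (0.5, 2.0, 10.0, 30.0):
    rate = etas_intensity(t, times, mags, **params)
    print(f"day {t:5.1f}: expected rate ~ {rate:.3f} events/day above m_ref")
```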

Geodetic and Seismological Approaches

Geodetic approaches measure crustal deformation to quantify tectonic strain accumulation, providing inputs for long-term earthquake probabilities. Global Positioning System (GPS) networks and Interferometric Synthetic Aperture Radar (InSAR) detect interseismic velocities with sub-millimeter-per-year precision, revealing fault locking depths and slip rates. On the southern San Andreas Fault, InSAR data from 1992–2000 indicated near-steady strain accumulation reaching equilibrium within about 10 years post-earthquake, supporting elastic rebound theory for forecasting recurrence. These measurements estimate seismic moment deficits by comparing geodetic rates to geologic slip histories, as in California's Uniform California Earthquake Rupture Forecast (UCERF3), where they refine fault segmentation and time-dependent hazards.

Seismological approaches analyze earthquake catalogs to model event rates and detect precursors via statistical patterns. The Epidemic-Type Aftershock Sequence (ETAS) model distinguishes background seismicity from triggered aftershocks, enabling operational short-term forecasts; during the 2019 Ridgecrest sequence, USGS ensembles of ETAS variants optimized 3-day probabilities by fitting prior seismicity. Seismic gap analysis identifies quiescent zones on active margins as higher-risk for large events due to unreleased stress, a hypothesis proposed in the 1970s and applied to subduction zones, though tests in Mexico showed it underperformed random baselines for M>7 quakes from 1900–2020. Variations in the Gutenberg-Richter b-value, the slope of the magnitude-frequency distribution, serve as a seismological indicator; values near the global average of 1.0 tend to decrease prior to major ruptures, signaling increased large-event probability from stress concentrations, with Bayesian estimates improving temporal resolution in catalogs exceeding 2,000 events. Integrating geodetic with seismological models, such as updating ETAS parameters with deformation constraints from InSAR-derived slip models, enhances intermediate-term forecasts by linking deformation to dynamic triggering.
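
As a small illustration of b-value estimation from a catalog, the sketch below applies the Aki maximum-likelihood estimator to a synthetic Gutenberg-Richter sample; the completeness magnitude and catalog size are assumptions of the example, and real binned catalogs would also need a half-bin correction.

```python
import numpy as np

def b_value_aki(mags, m_c):
    """Maximum-likelihood b-value (Aki, 1965) for events at or above the
    completeness magnitude m_c, with a rough 1-sigma uncertainty."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - m_c)
    return b, b / np.sqrt(m.size)

# Synthetic catalog following a Gutenberg-Richter law with a true b-value
# of 1.0 above a completeness magnitude of 3.0.
rng = np.random.default_rng(42)
mags = 3.0 + rng.exponential(scale=np.log10(np.e), size=2000)

b, b_err = b_value_aki(mags, m_c=3.0)
print(f"estimated b-value: {b:.2f} +/- {b_err:.2f}")
```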

Machine Learning and Data-Driven Techniques

Machine learning and data-driven techniques in earthquake forecasting emphasize pattern recognition in large-scale geophysical datasets, including seismic catalogs and waveforms, strain measurements, and satellite-derived deformation, to infer probabilistic risks rather than deterministic predictions. These methods treat earthquakes as stochastic processes, employing algorithms to model spatio-temporal dependencies without presupposing complete physical understanding. Supervised models, such as random forests and gradient-boosting machines, classify seismic events or regress magnitudes using features like b-value anomalies or seismicity rates, achieving classification accuracies up to 90% in distinguishing earthquakes from explosions on historical catalogs. Deep learning architectures, including convolutional neural networks (CNNs) for waveform analysis and long short-term memory (LSTM) networks for sequential data, have been applied to predict aftershock sequences following mainshocks, with LSTM variants demonstrating superior performance over traditional models in capturing non-linear temporal clustering.

Recent advancements incorporate multimodal data fusion, as in frameworks that integrate seismic, geodetic, and environmental variables via graph neural networks to enable scalable intermediate-term (days to months) forecasting, reportedly outperforming baseline probabilistic models in retrospective tests on regional datasets. Neural point processes extend this by modeling event rates as inhomogeneous point processes conditioned on past occurrences, with transformer-based enhancements improving log-likelihood scores for regional forecasts. Gated recurrent units (GRUs) have also been trained on U.S. Geological Survey historical data to forecast event parameters, yielding prediction accuracies around 75-80% for magnitude and location in controlled validations, though these metrics derive from imbalanced datasets favoring frequent small events. Datasets like AEFA, comprising curated precursors and outcomes, facilitate benchmarking, enabling models to detect subtle anomalies such as precursory strain transients that correlate with subsequent ruptures in simulation studies.

Despite these developments, data-driven models face inherent limitations due to the rarity of large earthquakes, leading to overfitting and poor generalization; for example, while phase-picking algorithms using deep learning achieve sub-second latencies and accuracies exceeding human analysts (e.g., >95% for P-wave arrivals), their extension to prospective forecasting has not yielded verified short-term predictions beyond probabilistic baselines. Reviews of machine learning applications highlight inconsistent success in prospective testing, with many models failing rigorous out-of-sample tests across diverse tectonic regimes, as earthquakes' complexity resists purely statistical capture without underlying fault physics. Reported high accuracies, such as 70% for event anticipation in a 2025 trial using Texas-developed algorithms, remain unconfirmed in peer-reviewed operational settings and are critiqued for potential overfitting. Thus, while enhanced catalogs and hybrid physics-ML integrations show promise for refining hazard maps, true predictive power awaits validation through prospective experiments.
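
To make the supervised setup concrete, the sketch below wires hypothetical region-month catalog features into a scikit-learn classifier with time-ordered cross-validation; the feature names, label definition, and random stand-in data are assumptions for illustration, not a validated forecasting pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# Hypothetical feature table: one row per region-month with, e.g., the
# recent event count, mean depth, and a local b-value estimate; the label
# marks whether an M>=5 event followed in the next month. Random numbers
# stand in for a real catalog here, so no genuine skill should appear.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))             # [event_count, mean_depth, b_value]
y = (rng.random(600) < 0.1).astype(int)   # rare positives, as in real catalogs

# Time-ordered splits avoid leaking future information into training folds,
# a common source of inflated retrospective accuracy.
model = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                               random_state=0)
scores = cross_val_score(model, X, y, cv=TimeSeriesSplit(n_splits=5),
                         scoring="roc_auc")
print("ROC-AUC per fold:", np.round(scores, 2))
```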

Precursor-Based Methods

Precursor-based methods for earthquake forecasting attempt to identify observable anomalies or changes in geophysical, geochemical, or biological systems that precede seismic events, with the aim of providing short-term warnings. These approaches rely on the hypothesis that fault zones exhibit measurable precursors during the buildup to rupture, such as variations in seismic wave propagation, gas emissions, or electromagnetic fields. However, despite extensive research since the mid-20th century, no precursor has demonstrated consistent, verifiable reliability sufficient for deterministic prediction, as fault dynamics involve chaotic, nonlinear processes highly sensitive to initial conditions.

Seismological precursors, including foreshocks, seismic quiescence (temporary reductions in background seismicity), and changes in P-wave to S-wave velocity ratios, have been proposed as indicators of stress accumulation or dilatancy in the crust. For instance, some studies interpret quiescence as a reliable signal before moderate-to-large earthquakes, but assessments show it occurs irregularly and lacks specificity, often coinciding with non-seismic influences or catalog artifacts. Foreshocks precede only about 5% of larger events within a week, rendering them statistically indistinguishable from random swarms without hindsight. Overall, seismological precursors fail to yield repeatable patterns for prediction due to the inherent variability of fault slip.

Geochemical precursors focus on anomalies in radon levels, groundwater chemistry, or other gas emissions, attributed to crustal deformation opening pathways for fluid migration. Radon spikes have been documented before specific earthquakes, such as anomalies in southern Iceland during 1978-1979 events, but systematic reviews indicate these signals are sporadic, influenced by barometric pressure, rainfall, and seasonal effects, without causal linkage to rupture in controlled tests. Groundwater-level changes similarly show correlations in case studies, yet global compilations reveal no reliable threshold for prediction, as anomalies appear post-event or in unrelated contexts.

Electromagnetic precursors, such as ultra-low-frequency magnetic field variations or radio emissions, are theorized to arise from piezoelectric effects in stressed rocks or electrokinetic processes in fluids. Observations of pre-event signals exist in datasets from events like the 2007 Northern California earthquake, but decades of monitoring yield no convincing evidence of electromagnetic precursors, as claimed anomalies often align with solar activity, instrumentation noise, or post-hoc selection bias. Satellite and ground-based studies, including Swarm mission data, report potential anomalies, yet case-control analyses fail to confirm statistical significance over false positives.

Biological precursors, particularly unusual animal behavior, draw from anecdotal reports of agitation in livestock, pets, or wildlife hours to days before quakes. Farm animal monitoring in earthquake-prone regions has detected elevated activity in cows, sheep, and dogs up to 20 hours prior, potentially responding to infrasonic or electromagnetic cues imperceptible to humans. However, scientific reviews dismiss these as unreliable for forecasting, citing reporting bias in historical accounts and the absence of controlled, prospective verification; animals detect P-waves only seconds before shaking, not on predictive timescales.

In summary, while precursor-based methods persist in research—often challenged by paradigms deeming prediction "impossible in principle" due to deterministic chaos—they have not produced operational successes, with the U.S. Geological Survey affirming that no major earthquake has been prospectively predicted. Efforts continue, such as integrating precursors with machine learning, but empirical validation remains elusive amid high false-alarm rates and non-uniqueness of signals.

Notable Forecasts and Operational Systems

Uniform California Earthquake Rupture Forecast (UCERF) Models

The Uniform California Earthquake Rupture Forecast (UCERF) models, developed by the Working Group on California Earthquake Probabilities (WGCEP) in collaboration with the U.S. Geological Survey (USGS), Southern California Earthquake Center (SCEC), and California Geological Survey (CGS), provide long-term, time-independent estimates of earthquake rupture rates across California's fault system. These models integrate geologic, geodetic, and seismologic data to forecast the magnitude, location, and frequency of potential ruptures, serving as inputs for probabilistic seismic hazard analysis (PSHA) used in building codes and risk assessment. UCERF represents an evolution from earlier WGCEP forecasts dating to 1988, with formalized UCERF versions beginning in the early 2000s to standardize methodology statewide.

UCERF2, released in 2008 by the 2007 WGCEP, assumed fault segmentation whereby ruptures were confined to predefined sections, producing characteristic earthquakes with prescribed magnitude-frequency distributions (typically Gutenberg-Richter with b=1). It incorporated updated fault parameters and deformation models but excluded multi-fault ruptures, leading to an apparent overprediction of moderate (M6.5–7) earthquake rates relative to observed seismicity. Under UCERF2, the 30-year probability of one or more M≥6.7 earthquakes statewide was approximately 46% for the period following its release, though aggregated probabilities exceed 99% when including all events.

UCERF3, published in 2013 with a time-dependent extension in 2015, introduced significant advancements by relaxing segmentation and permitting multi-fault ruptures spanning over 350 fault sections, enumerated through systematic rules incorporating physical constraints like stress transfer. A "grand inversion" simultaneously optimizes rupture rates to match long-term deformation budgets from geodetic and geologic data, while aligning with observed seismicity rates and avoiding prescribed magnitude-frequency distributions. The framework uses 1,440 logic-tree branches to quantify epistemic uncertainty, computed via supercomputing resources. Key results include a statewide 30-year probability exceeding 99% for at least one M≥6.7 earthquake and 7% for M≥8.0, higher than UCERF2's 4.7% for the latter due to inclusion of previously unmodeled large multifault scenarios.

These models emphasize that probabilities reflect long-term averages and do not predict specific events, with the relaxation of segmentation reducing biases in moderate event rates but introducing complexity from multifault possibilities observed in events like the 1992 Landers sequence. As of 2025, UCERF3 remains the operational standard, informing the National Seismic Hazard Model and state hazard maps, though ongoing WGCEP efforts discuss updates like UCERF4 to incorporate post-2013 data and refined physics.
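
A schematic of how a logic tree carries epistemic uncertainty: each branch contributes its own probability, and the forecast reports the weight-averaged value (and the spread across branches) rather than committing to a single model. The branch weights and probabilities below are hypothetical, not UCERF3 values.

```python
# Hypothetical logic-tree branches: each pairs a weight (weights sum to 1)
# with that branch's 30-year probability of at least one M>=6.7 rupture.
branches = [
    {"weight": 0.3, "p30": 0.95},
    {"weight": 0.5, "p30": 0.99},
    {"weight": 0.2, "p30": 0.90},
]

# Epistemic uncertainty is carried by averaging over branches.
mean_p = sum(b["weight"] * b["p30"] for b in branches)
lo, hi = (min(b["p30"] for b in branches), max(b["p30"] for b in branches))
print(f"weighted 30-yr probability: {mean_p:.1%} (branch range {lo:.0%}-{hi:.0%})")
```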

Global and Regional Operational Examples

Operational earthquake forecasting remains primarily regional, with systems delivering time-dependent probabilities for sequences or short-term hazards rather than deterministic global predictions, reflecting the inherent uncertainties in seismic processes. Globally, no centralized operational system exists, though frameworks like the Collaboratory for the Study of Earthquake Predictability (CSEP) enable prospective testing of statistical models across worldwide catalogs, incorporating smoothed seismicity and geodetic strain rates to generate daily forecasts of earthquake rates per unit area, time, and magnitude. These efforts, such as the Global Earthquake Activity Rate (GEAR1) model, combine geodetic data with seismicity smoothing for prospective evaluations but serve research and hazard assessment rather than routine public advisories.

In Italy, the OEF-Italy system, launched in 2014 following lessons from the 2009 L'Aquila earthquake, operates as one of the earliest nationwide implementations, producing weekly probabilities of local magnitude 4.0 or greater events using an ensemble of epidemic-type aftershock sequence (ETAS) models calibrated to recent seismicity. Updates occur daily at midnight and immediately after magnitude 3.5 or larger events, covering the entire territory with interactive maps, probability timelines, and shaking scenarios to inform civil protection authorities; retrospective validation over 10 years confirms its reliability in capturing seismicity clustering without over- or under-predicting rates.

New Zealand's GeoNet program, managed by GNS Science, has delivered operational forecasts since the mid-2010s, focusing on sequences from major events like the 2016 Kaikōura earthquake (magnitude 7.8), with one-year probabilities for magnitude 5.0+ events in defined zones—such as a 29% chance of one or more magnitude 5.0-5.9 quakes in the affected zone as of recent assessments—using hybrid clustering models for engineers, emergency managers, and infrastructure planning. These public-facing outputs include maps, tables, and time series, tested against observed rates to refine background estimates.

In the United States, the U.S. Geological Survey (USGS) provides sequence-specific operational forecasts after major earthquakes, such as the 2019 Ridgecrest sequence (magnitudes up to 7.1), employing aftershock models to estimate probabilities over days to weeks, disseminated via interactive maps and summaries for emergency response and public awareness. These advisories, updated in near real time using catalog data, outperform time-independent baselines in capturing clustering but emphasize their probabilistic nature to avoid false alarms. Similarly, Israel's OEF system issues weekly forecasts for magnitudes above 4.0 and 5.5 nationwide, leveraging statistical models tailored to regional seismicity for civil protection integration. Emerging regional systems, such as those under development by the Swiss Seismological Service, test ETAS variants for alpine hazards, aiming for harmonized European models that blend short-term forecasts with long-term rates, though full operability lags behind established programs. Across these examples, operational forecasts prioritize aftershocks—where physics-based clustering is evident—over mainshock prediction, with performance gauged via likelihood scores and pseudo-prospective tests against benchmarks.
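
The aftershock advisories described above rest on Omori-law rate decay. The sketch below integrates a Reasenberg-Jones-type rate to get the expected number of aftershocks above a magnitude threshold, then converts it to a probability of at least one event; the parameter values are of the kind reported for generic California sequences and are used here purely for illustration, not as an operational USGS forecast.

```python
import math

def expected_aftershocks(t1, t2, a, b, p, c, m_main, m_min):
    """Expected number of aftershocks with magnitude >= m_min between t1 and
    t2 days after a mainshock, from a Reasenberg-Jones-type rate
    10**(a + b*(m_main - m_min)) * (t + c)**(-p)."""
    amplitude = 10.0 ** (a + b * (m_main - m_min))
    if p == 1.0:
        integral = math.log((t2 + c) / (t1 + c))
    else:
        integral = ((t1 + c) ** (1 - p) - (t2 + c) ** (1 - p)) / (p - 1)
    return amplitude * integral

# Illustrative generic-California-style parameters applied to a M7.1 mainshock.
n = expected_aftershocks(0.0, 7.0, a=-1.67, b=0.91, p=1.08, c=0.05,
                         m_main=7.1, m_min=5.0)
print(f"expected M>=5 aftershocks in the first week: {n:.1f}")
print(f"probability of at least one (Poisson): {1.0 - math.exp(-n):.1%}")
```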

Recent AI-Enhanced Forecasts (2020s)

In the 2020s, machine learning models have been integrated into earthquake forecasting to process vast seismic datasets, detect non-linear patterns, and generate probabilistic estimates of future events, often building on improved catalogs derived from AI-enhanced detection. These approaches aim to surpass traditional statistical models like the epidemic-type aftershock sequence (ETAS) model by leveraging convolutional neural networks for spatial feature extraction and recurrent neural networks for temporal dependencies. However, evaluations indicate only marginal gains in forecast skill, constrained by sparse data for rare large-magnitude events and the inherent stochasticity of seismic processes.

A key application involves aftershock forecasting, where deep learning models emulate or extend ETAS frameworks. In 2023, researchers at the University of California, Santa Cruz, and the Technical University of Munich developed a neural network trained on seismic sequences that outperformed ETAS in predicting aftershock rates for the 2019 Ridgecrest earthquake sequence in California, achieving higher log-likelihood scores by capturing complex triggering patterns. Similarly, neural temporal point-process (NTPP) models have been proposed for sequence forecasting, offering computational efficiency but limited superiority over ETAS in retrospective tests on global catalogs. The U.S. Geological Survey has pursued machine learning fellowships since 2023 to refine forecasts of earthquake rates, locations, and magnitudes, incorporating these techniques into operational aftershock advisories.

For intermediate-term forecasting (spanning days to months), a multimodal model introduced in a 2025 study fuses seismic catalogs from the China Earthquake Networks Center (1970–2021) and USGS (1998–2023), geologic maps, and fault data via vision transformers, ResNet-50, and LSTM networks to produce regional probability maps for magnitude 5+ events. It outperformed 13 baselines, including ETAS and CNN-BiLSTM variants, in macro F1 scores, recalling 8 of 18 magnitude 6+ regions in China (versus ETAS's 2 of 5) and 21 of 40 magnitude 5+ regions in the contiguous U.S. (versus 11 of 40), with predictions generated in seconds on a single GPU. The model identified 10 of 13 magnitude 6–7 events in a 2015–2017 Chinese test period and transferred to U.S. data after fine-tuning, though it struggles with magnitude 7+ events and lacks the precision needed for real-time alerts.

Regional studies have explored machine learning for localized forecasts. A 2024 analysis of regional catalog data (2012–2024, drawn from a national earthquake data center) employed ensemble methods such as random forests and gradient boosting, reporting retrospective accuracies up to 97.97% for occurrence prediction using features such as recent event counts and depth patterns, a marked improvement over prior 69% benchmarks—but prospective verification remains pending amid risks of overfitting to historical seismicity. Overall, while machine learning augments data-driven probabilistic models, no system has demonstrated reliable deterministic predictions, and standardized prospective testing is essential to distinguish genuine advances from artifacts of model complexity.
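
Since the studies above report recall and macro F1 over spatial regions, a small sketch of those region-level scores may help; the region sets below are hypothetical counts chosen only to mirror the scale of the numbers quoted, not data from any published evaluation.

```python
def precision_recall_f1(forecast_regions, observed_regions):
    """Region-level scores: a 'hit' is a region both flagged by the forecast
    and experiencing a qualifying event in the test window."""
    hits = len(forecast_regions & observed_regions)
    precision = hits / len(forecast_regions) if forecast_regions else 0.0
    recall = hits / len(observed_regions) if observed_regions else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical example: 12 regions flagged, 18 regions with qualifying
# events, 8 of which overlap with the flagged set.
forecast = {f"r{i}" for i in range(12)}        # r0 .. r11
observed = {f"r{i}" for i in range(4, 22)}     # r4 .. r21 (18 regions)
p, r, f1 = precision_recall_f1(forecast, observed)
print(f"precision={p:.2f}  recall={r:.2f}  F1={f1:.2f}")
```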

Evaluation, Verification, and Performance

Metrics and Testing Protocols

Evaluation of earthquake forecasts relies on probabilistic metrics suited to the inherent uncertainty of seismicity, as deterministic predictions remain infeasible due to the chaotic nature of fault dynamics. Key metrics include log-likelihood scores, which quantify how well a model's assigned probabilities align with observed event rates by evaluating the likelihood of the observed catalog under the forecast rates. Proper scoring rules, such as the Brier score, penalize overconfidence or underconfidence in probabilistic outputs, ensuring forecasts are calibrated against empirical outcomes rather than optimized for apparent sharpness alone. For spatial and temporal assessments, consistency tests like the S-test evaluate whether forecasted event locations match observed spatial patterns, while the M-test checks magnitude-distribution fidelity and the N-test verifies total event counts against Poisson or non-homogeneous expectations. Magnitude-weighted variants, such as potency-weighted log-likelihood, prioritize larger events in scoring to reflect their disproportionate impact on hazard. In alarm-based systems, which declare heightened risk zones, metrics include the probability of detection (POD)—the fraction of events occurring within alarms—and the false alarm ratio (FAR), the proportion of alarms without events, often balanced via receiver operating characteristic (ROC) curves.

Testing protocols emphasize prospective evaluation to avoid hindsight bias, as conducted in the Collaboratory for the Study of Earthquake Predictability (CSEP) experiments, where models submit gridded forecasts for predefined regions, time windows (e.g., 1-day to multi-year), and magnitude thresholds (typically M ≥ 4.0–5.0) before outcomes are revealed. These experiments use standardized catalogs, such as Advanced National Seismic System data, and employ multiscore ensembles to assess reliability, resolution, and skill relative to null hypotheses like spatially uniform or other baseline models. Pseudo-prospective tests simulate forward evaluation on historical data withheld during model development, while Bayesian frameworks incorporate uncertainty in both forecasts and observations for robust inference. Protocols also mandate open-source reproducibility, as in the pyCSEP software, to facilitate community scrutiny and iterative refinement.
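
A minimal sketch of the gridded likelihood and number (N) tests described above, using Poisson cell rates; the toy forecast grid and observed counts are invented for illustration and this is not the pyCSEP API or any real CSEP dataset.

```python
import numpy as np
from scipy.stats import poisson

def grid_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of per-cell observed counts under a gridded
    forecast of expected Poisson rates (the basis of likelihood-type tests)."""
    rates = np.asarray(forecast_rates, dtype=float)
    counts = np.asarray(observed_counts)
    return poisson.logpmf(counts, rates).sum()

def n_test(forecast_rates, observed_counts):
    """Two-sided consistency check on the total event count: returns
    P(N <= observed) and P(N >= observed) under the forecast total rate."""
    total_rate = float(np.sum(forecast_rates))
    n_obs = int(np.sum(observed_counts))
    return poisson.cdf(n_obs, total_rate), poisson.sf(n_obs - 1, total_rate)

# Toy gridded forecast (expected events per cell) and observed counts.
rates = [0.5, 1.2, 0.1, 2.0]
counts = [1, 0, 0, 3]
print("log-likelihood:", round(grid_log_likelihood(rates, counts), 2))
low, high = n_test(rates, counts)
print(f"N-test: P(N<=n_obs)={low:.2f}, P(N>=n_obs)={high:.2f}")
```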

Historical Successes and Verified Cases

The 1975 Haicheng earthquake in Liaoning Province, China, on February 4, 1975, represents the most cited instance of a successful short-term earthquake prediction, where authorities issued evacuation orders approximately one day prior to the Ms 7.3 event, reportedly averting tens of thousands of casualties. The forecast relied on a multi-stage process incorporating long-term seismicity analysis, mid-term precursory phenomena such as groundwater changes and ground deformation, short-term foreshock activity, and imminent indicators like animal behavior anomalies and radon readings, culminating in a public alert on February 3. While the prediction's reliance on foreshocks—common but not uniquely predictive—has drawn scrutiny, the timely evacuation of urban areas is credited with reducing deaths to around 2,000 despite widespread destruction, distinguishing it from subsequent unpredicted events like the 1976 Tangshan earthquake that killed over 240,000. Independent analyses, including declassified Chinese reports, confirm the sequence of precursor observations and decision-making, though some Western seismologists attribute partial success to the region's dense monitoring network rather than novel methodology.

Beyond Haicheng, verified deterministic predictions of large mainshocks remain exceedingly rare, with most claims failing rigorous post-hoc verification under controlled metrics like those proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP). One additional case involves intermediate-term alarms from the VAN seismic electric signal method, which retrospectively aligned with the Ms 6.9 Spitak (Armenia) earthquake of December 7, 1988, where precursors were reportedly detected weeks prior, though the method's specificity and replicability have been contested in peer-reviewed evaluations for lacking statistical robustness beyond chance. Probabilistic forecasts, such as the Parkfield experiment's prediction of a Mw 6.0 event on the San Andreas Fault between 1985 and 1993 with 95% confidence, did not materialize until 2004, rendering it a failure in temporal accuracy despite yielding valuable data on fault slip behavior.

Operational systems have demonstrated successes in aftershock forecasting, where short-term probabilities exceed baseline rates; for instance, U.S. Geological Survey models following the 2019 Ridgecrest sequence accurately delineated elevated risks within days, verified by observed event clustering against null hypotheses. However, for standalone mainshock predictions, no other cases meet the criteria of prospective issuance, causal linkage to precursors, and independent verification without hindsight bias, underscoring the field's emphasis on probabilistic hazard assessment over pinpoint timing. These limited verified instances highlight empirical challenges in isolating reliable signals amid noise, informing modern evaluations that prioritize logarithmic scoring of likelihood ratios over binary success/failure dichotomies.

Notable Failures and Retractions

One prominent example of a failed earthquake prediction is the Parkfield experiment in California. In 1985, the U.S. Geological Survey (USGS) issued a specific forecast for a magnitude 6.0 earthquake along the San Andreas Fault near Parkfield between 1985 and 1993, based on observed recurrence intervals of approximately 22 years from prior events in 1857, 1881, 1901, 1922, 1934, and 1966. No such event occurred within the designated window, rendering the prediction unsuccessful, although the anticipated earthquake eventually struck on September 28, 2004, with a magnitude of 6.0. This outcome challenged the characteristic earthquake model underlying the forecast, which assumed quasi-periodic ruptures on the same fault segment, and contributed to broader skepticism about short-term deterministic predictions despite yielding extensive seismic data.

The VAN method, developed by Greek researchers Panayiotis Varotsos and colleagues, proposed using low-frequency seismic electric signals (SES) as precursors to forecast earthquakes in Greece. Claims of successful predictions, particularly for events in the 1980s and 1990s, were advanced through selectivity in signal selection and magnitude estimation. However, independent analyses revealed that the method's predictions suffered from high false positive rates, with successes attributable to chance given Greece's high background seismicity, and lacked rigorous, prospective validation. A 2020 review concluded the updated VAN protocol remains unusable for reliable forecasting due to persistent ambiguities in signal interpretation and verification failures.

In China, the successful short-term prediction and evacuation for the 1975 Haicheng earthquake (magnitude 7.3) fostered optimism for precursor-based methods, including foreshocks, groundwater changes, and animal behavior anomalies. Yet, this was followed by the catastrophic failure to predict the 1976 Tangshan earthquake (magnitude 7.8), which caused approximately 240,000 deaths, as precursors were either absent or misinterpreted despite monitoring efforts. The Tangshan failure led to reduced emphasis on deterministic predictions in Chinese policy by the late 1970s, and later unpredicted events, including the 2008 Wenchuan earthquake (magnitude 7.9), underscored persistent issues with precursor reliability. These cases highlighted the challenges of generalizing isolated successes amid variable tectonic conditions.

Limitations and Scientific Challenges

Fundamental Barriers from Chaos and Nonlinearity

The dynamics of fault rupture and stress accumulation in the Earth's crust are governed by highly nonlinear equations, particularly those describing frictional sliding and stress transfer between faults, which exhibit stick-slip behavior akin to deterministic chaos. These nonlinearities arise from heterogeneities in fault properties, such as varying friction coefficients and stress distributions, causing small perturbations—whether from measurement errors, unmodeled subsurface variations, or distant seismic triggers—to amplify exponentially over time, a hallmark of chaotic systems where trajectories diverge rapidly despite deterministic underlying physics. In such systems, the Lyapunov exponent, a measure of divergence rate, is positive for earthquake fault models, indicating inherent unpredictability beyond a finite horizon, typically on the order of days for slow-slip events and even shorter for rapid ruptures.

This chaotic nature manifests in the irregularity of earthquake sequences, where self-similar patterns across scales—from microcracks to major faults—defy linear extrapolation, as evidenced by power-law distributions in event sizes and inter-event times that preclude unique deterministic precursors. Numerical simulations of fault systems, incorporating rate-and-state friction laws, demonstrate that even with perfect initial conditions, long-term forecasts diverge due to multiple stable attractors in phase space, rendering precise timing, location, and magnitude predictions infeasible for large events. Observational data from catalogs like those maintained by the USGS further support this, showing that while stress fields evolve predictably on geological timescales (e.g., plate motion rates of 2-10 cm/year), the triggering thresholds for rupture remain obscured by nonlinear feedbacks, such as dynamic weakening from thermal pressurization during slip.

Consequently, short-term deterministic forecasting faces insurmountable barriers, as the system's sensitivity precludes resolving its precise state amid observational noise; for instance, monitoring networks resolve strains to ~10^{-8} but cannot capture the full distribution of microstructural heterogeneities driving nucleation. Although probabilistic models mitigate this by averaging over ensembles of possible outcomes, they inherently sacrifice precision for reliability, underscoring that chaos enforces a fundamental limit analogous to weather prediction's two-week horizon, where ensemble methods succeed but point forecasts fail. Recent studies suggest larger earthquakes exhibit relatively less chaotic variability due to scale effects, yet even these retain sufficient nonlinearity to resist routine prediction, as confirmed by analyses of global catalogs spanning 1900-2020.

Data Quality and Observational Constraints

Earthquake catalogs, essential for statistical forecasting models such as the epidemic-type aftershock sequence (ETAS) model, frequently exhibit incompleteness, particularly in the short term following major events, when the magnitude of completeness rises temporarily as seismic network overload and dense aftershock swarms overwhelm detection capabilities. This biases parameter estimates, often leading to underforecasting of aftershock rates, as smaller events below the varying detection threshold are systematically omitted. Methods like expectation-maximization algorithms have been developed to calibrate models on incomplete catalogs by inferring missing events, improving forecasts in regions with long historical records but time-varying completeness. However, such corrections rely on assumptions about underlying seismicity patterns, which may not hold in heterogeneous tectonic settings, underscoring persistent uncertainties in catalog reliability for probabilistic forecasts.

Geodetic observations, including GPS and InSAR, provide complementary strain and deformation data but are hampered by high noise levels that obscure potential precursors. GPS time series often contain coherent noise structures, such as common-mode errors from atmospheric or orbital sources, which can mimic slow slip signals and lead to false positives in precursor detection. Denoising techniques, including deep neural networks and wavelet transforms, enable near-real-time noise reduction but introduce latency and model dependencies that limit their utility for operational short-term forecasting. InSAR data, while effective for mapping co-seismic deformation, suffer from decorrelation in vegetated or urban areas and sparse temporal sampling, constraining their ability to resolve subtle pre-seismic changes against tropospheric noise. These noise issues have fueled debates over claimed precursory deformations, with critiques highlighting that uncorrected artifacts in high-rate GPS data compromise assertions of reliable signals hours before rupture.

Broader observational constraints arise from sparse global monitoring networks, which provide inadequate coverage of offshore crustal processes and submarine faults, where over 70% of large earthquakes occur. Seismological measurements face location and magnitude uncertainties, exacerbated by phase-picking errors and site-specific effects, necessitating probabilistic treatment in forecast models but reducing precision for source parameter and stress-drop estimates. Instrumental coverage remains uneven, with dense networks in California yielding higher-quality data for models like UCERF but global gaps hindering uniform forecasting reliability. The absence of direct, consistent precursors—such as electromagnetic or hydrological anomalies verifiable across events—further limits observational constraints, as historical reviews indicate that over a century of observations has yielded no reproducible short-term predictors amid natural variability. These factors collectively impose fundamental barriers, prioritizing long-term probabilistic approaches over deterministic predictions reliant on imperfect data.
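
The short-term completeness problem can be illustrated with the maximum-curvature estimate of the completeness magnitude, which simply takes the most populated magnitude bin plus a small empirical correction; the synthetic catalog below, with progressively undersampled small events, is an assumption of the example rather than real network data.

```python
import numpy as np

def completeness_maxc(mags, dm=0.1, correction=0.2):
    """Magnitude of completeness via the maximum-curvature method: the most
    populated dm-wide magnitude bin, plus an empirical correction because
    raw maximum curvature tends to underestimate Mc."""
    m = np.asarray(mags)
    edges = np.arange(m.min(), m.max() + dm, dm)
    counts, bin_edges = np.histogram(m, bins=edges)
    return bin_edges[np.argmax(counts)] + correction

# Synthetic catalog: Gutenberg-Richter magnitudes complete above ~M3.0,
# with detection probability tapering off linearly below that level,
# roughly as networks behave in the hours after a large mainshock.
rng = np.random.default_rng(1)
full = 2.0 + rng.exponential(scale=np.log10(np.e), size=50000)
keep = rng.random(full.size) < np.clip((full - 2.5) / 0.5, 0.0, 1.0)
print(f"estimated Mc ~ {completeness_maxc(full[keep]):.1f}")
```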

Overhype and Public Misconceptions

Public expectations for earthquake forecasting frequently conflate probabilistic hazard assessments with precise, short-term predictions specifying exact time, location, and magnitude, fostering misconceptions about the field's capabilities. In reality, operational forecasts, such as those from the Uniform California Earthquake Rupture Forecast (UCERF) models, estimate long-term probabilities—like a 99.7% chance of one or more magnitude 6.7+ events in California by 2043—but cannot deliver actionable warnings for imminent quakes due to the chaotic, nonlinear dynamics of fault systems.

Media and pseudoscientific claims amplify overhype, portraying unverified methods as breakthroughs while ignoring empirical failures. For instance, the 1985 Parkfield prediction anticipated a characteristic magnitude 6.0 earthquake along California's San Andreas Fault within a window ending in 1993, based on quasi-periodic recurrence, but no such event occurred until September 28, 2004, exceeding the 95% confidence window and highlighting the limitations of "characteristic earthquake" models. Similarly, a 2005 experiment using electromagnetic precursors to forecast quakes ended in failure, underscoring that no precursor-based method has demonstrated retrospective or prospective reliability across global datasets.

Recent social media-driven hype exemplifies ongoing issues, with self-proclaimed predictors like Brent Dmitruk gaining followers by issuing vague alerts for seismically active regions where events are statistically probable but not causally foreseen. Dmitruk's October 2024 claim of an impending quake in one such region appeared vindicated when a magnitude 7.1 event struck offshore on August 8, 2025, but experts attribute such "hits" to base-rate frequency rather than predictive skill, as high-hazard zones experience frequent activity. This pattern erodes public trust when misses accumulate, and USGS data show no verified short-term prediction of any major earthquake (magnitude 7.0+).

Persistent misconceptions include the belief that animals detect precursors like electromagnetic changes or P-wave anomalies, fueled by anecdotes but lacking controlled evidence; USGS reviews find no causal link, attributing reports to coincidence or post-hoc rationalization. Another misconception holds that foreshocks reliably signal mainshocks, yet prospective identification is infeasible, as only hindsight distinguishes them—small events do not "relieve stress" to avert larger ones, per seismic energy release models. Such errors, amplified by sensational reporting, promote complacency in unprepared regions while discrediting rigorous probabilistic tools essential for mitigation planning.

Controversies and Alternative Viewpoints

Debunked Prediction Claims

Numerous individuals and groups have claimed the ability to predict specific earthquakes using methods ranging from purported precursors to astrological alignments, but these assertions have repeatedly been debunked upon scrutiny, revealing reliance on coincidence, vagueness, or pseudoscientific principles rather than verifiable mechanisms. For instance, the Dutch self-styled researcher Frank Hoogerbeets has promoted predictions based on "planetary geometry" and lunar activity, asserting that celestial alignments influence seismic events. Following the magnitude 7.8 earthquake in Turkey and Syria on February 6, 2023, Hoogerbeets highlighted a prior post warning of strong shocks in the region, yet seismologists emphasize that no causal relationship exists between celestial positions and earthquakes, with his apparent successes attributable to frequent, broad warnings in seismically active zones rather than predictive accuracy. The U.S. Geological Survey (USGS) has explicitly stated that such methods lack empirical support and that precise earthquake prediction remains impossible with current knowledge.

A prominent scientific example involves the Parkfield prediction experiment, in which the USGS in 1985 forecast a magnitude 6.0 event along the [San Andreas Fault](/page/San Andreas Fault) near Parkfield, California, before the end of 1993, with a stated 95% probability, based on historical recurrence intervals of 22 years (±7 years) from prior events in 1857, 1881, 1901, 1922, 1934, and 1966. No such earthquake occurred within the window, leading to the experiment's reevaluation as a failure of the deterministic framing; the eventual magnitude 6.0 quake of September 28, 2004—more than a decade outside the predicted timeframe—underscored the challenges of moving from statistical patterns to reliable short-term forecasts amid fault complexity and variability in stress accumulation. This case highlighted how even data-driven claims can overstate predictability when nonlinear dynamics and incomplete data prevent precise timing.

Pseudoscientific predictions, such as those of the manga artist Ryo Tatsuki, who forecast a megaquake in Japan on July 5, 2025, based on dreams and perceived patterns, also failed to materialize, prompting warnings from seismologists about the dangers of public reliance on unverified intuition over probabilistic assessments. Over a century of prediction research has yielded no reproducible successes in specific predictions, with claims often retrofitted post-event or dismissed under rigorous testing, reinforcing that earthquake occurrence stems from tectonic stress release governed by chaotic subsurface processes impervious to deterministic short-term prediction.
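For context on how recurrence-based forecasts of this kind are framed probabilistically, the sketch below evaluates a conditional rupture probability under a simple Gaussian renewal assumption. The distribution choice and parameters are illustrative and do not reproduce the original 1985 Parkfield analysis, which used a more detailed treatment.

```python
"""Sketch of a renewal-model probability calculation of the kind used for
quasi-periodic 'characteristic earthquake' forecasts. The Gaussian renewal
assumption and parameters are illustrative only."""
from scipy.stats import norm

mean_recur, sigma = 22.0, 7.0        # mean recurrence interval and scatter (years)
last_event = 1966.0
elapsed = 1985.0 - last_event        # years already elapsed without an event

def conditional_prob(horizon):
    """P(event within `horizon` yr | no event during the first `elapsed` yr)."""
    survive_elapsed = norm.sf(elapsed, mean_recur, sigma)
    survive_future = norm.sf(elapsed + horizon, mean_recur, sigma)
    return (survive_elapsed - survive_future) / survive_elapsed

print(f"Conditional probability of rupture by 1993: {conditional_prob(1993.0 - 1985.0):.0%}")
```

The same conditional-probability structure underlies time-dependent renewal models (lognormal, Brownian passage time) used in modern rupture forecasts; the failure at Parkfield reflected the variability of real recurrence intervals, not an arithmetic error.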

Skepticism Toward Precursors and Anomalies

Scientific consensus holds that proposed earthquake precursors—such as anomalous seismic patterns, electromagnetic emissions, radon gas fluctuations, groundwater level changes, animal behavior, and ionospheric disturbances—lack sufficient reliability for operational forecasting due to inconsistent empirical validation and high rates of false positives. Reviews of decades of observations indicate that while retrospective analyses often identify correlations, prospective tests fail to demonstrate statistical significance or reproducibility, as similar anomalies occur frequently in non-seismic contexts, blurring signal and noise. The U.S. Geological Survey (USGS) maintains that no such precursors have enabled confirmed short-term predictions of major earthquakes, emphasizing probabilistic hazard assessments over deterministic claims.

Electromagnetic and ionospheric anomalies, frequently cited in studies from regions such as Greece and Japan, exemplify this skepticism; for instance, the VAN method, which purported to detect seismic electric signals, generated predictions that underperformed random chance in blinded tests and ignored non-seismic sources of similar signals, such as cultural noise or solar activity. Statistical analyses of ultralow-frequency emissions prior to events reveal that purported precursors do not reliably exceed background variability thresholds, with occurrence rates often below 20-30% even in claimed successes, insufficient for practical alarms given the infrequency of large quakes. Similarly, thermal infrared anomalies detected via satellite remote sensing, linked to pre-seismic stress-induced heating, show spatial and temporal ambiguities, with critical reviews attributing most to atmospheric effects or post-hoc data selection rather than fault dynamics.

Biological and behavioral anomalies, including unusual animal agitation or mass migrations, have been anecdotally reported for centuries but fail rigorous scrutiny; USGS field studies after 1970s earthquakes, involving surveys of thousands of witnesses, found no consistent patterns distinguishable from stress-induced perceptions or unrelated environmental triggers, concluding that such claims stem from confirmation bias amid the rarity of verifiable events. Radon and groundwater level variations, proposed as indicators of crustal strain, exhibit diurnal and barometric influences that mimic seismic signals, with peer-reviewed syntheses reporting detection rates too low (often <10%) and too non-specific to justify excluding false alarms in warning protocols.

These shortcomings underscore a broader challenge: the nonlinear, chaotic evolution of fault systems precludes unambiguous precursor signatures, as small perturbations amplify unpredictably, rendering isolated anomalies probabilistically insignificant without integrated, high-fidelity monitoring unattainable at scale. Despite occasional advocacy for multimodal precursor ensembles, the absence of falsifiable mechanisms and reproducible successes perpetuates their dismissal in mainstream seismology, prioritizing empirical rigor over speculative correlations.
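A minimal way to see why low hit rates and frequent anomalies undermine precursor claims is to score alarms against the fraction of time they occupy: a method whose hit rate merely matches its alarmed-time fraction has no skill beyond chance. The example below is entirely synthetic and for illustration only.

```python
"""Toy scoring of a claimed precursor: count hits over fixed alarm windows and
compare the hit rate with the rate expected from random alarms covering the
same fraction of time. All inputs are synthetic illustrations."""
import numpy as np

rng = np.random.default_rng(0)
days = 3650
quake_days = rng.choice(days, size=5, replace=False)        # 5 target events in 10 yr
anomaly_days = rng.choice(days, size=60, replace=False)     # 60 claimed anomalies
alarm_window = 7                                             # days an alarm stays "on"

alarm_on = np.zeros(days, dtype=bool)
for d in anomaly_days:
    alarm_on[d:d + alarm_window] = True                      # switch the alarm on

hits = sum(alarm_on[q] for q in quake_days)
hit_rate = hits / len(quake_days)
time_fraction = alarm_on.mean()          # fraction of time covered by alarms
print(f"hit rate: {hit_rate:.2f}, expected by chance: {time_fraction:.2f}")
# Skill requires the hit rate to clearly exceed the alarmed time fraction.
```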

Policy and Ethical Debates on Issuing Alerts

The issuance of earthquake forecasts raises profound policy challenges, particularly in balancing the potential to mitigate casualties against the risks of public panic and eroded trust from inaccurate predictions. Policymakers must weigh the ethical imperative to warn populations—rooted in beneficence and the duty to prevent harm—against the principle of nonmaleficence, as false alarms can trigger unnecessary evacuations, economic disruptions, and psychological distress without averting actual disasters. In regions with high seismic activity, such as Italy or Japan, governments have grappled with establishing thresholds for alert issuance, often favoring conservative probabilistic statements over deterministic predictions because of the inherent uncertainty in earthquake science. This caution stems from causal realities in seismology, where precursors such as radon emissions or animal behavior have repeatedly failed verification, making reliable short-term alerts elusive.

A pivotal case illustrating these tensions occurred in L'Aquila, Italy, prior to the April 6, 2009, magnitude 6.3 earthquake that killed 309 people. A government-appointed risk commission, including seismologists, assessed ongoing small tremors and swarm activity but concluded there was no heightened likelihood of a major event, advising against widespread alarm to avoid undue panic; a 2012 court ruling found this communication negligent and convicted seven individuals of manslaughter for failing to adequately inform the public. The convictions, largely overturned on appeal in 2014, with the scientists definitively acquitted by Italy's Court of Cassation in 2015, highlighted legal liabilities for scientists and fostered a "chilling effect" in which experts hesitate to engage in public forecasting discussions for fear of prosecution, even when probabilities are low. Critics argued the convictions blurred the line between scientific assessment and predictive certainty, underscoring the policy need for legal protections that encourage transparent communication without mandating unattainable precision.

Ethical debates further intensify around the "cry wolf" phenomenon, where repeated false positives diminish public responsiveness to future alerts, potentially amplifying harm in genuine events. Studies on analogous warning systems, such as weather or tsunami alerts, quantify this via false alarm ratios—defined as the number of false alarms divided by the total forecasts issued—which can exceed 50% in probabilistic seismic models, leading to complacency, with non-compliance rates rising to 70% after multiple inaccuracies. For earthquake forecasting, this risk is amplified by the field's track record of unverified claims, prompting calls for standardized protocols that prioritize empirical validation before public dissemination. Proponents of operational forecasting, such as aftershock advisories, advocate sharing such products globally to build preparedness in vulnerable areas, but only if accompanied by clear uncertainty disclosures to preserve credibility and informed consent. Conversely, skeptics warn that without robust causal mechanisms, such policies could institutionalize hype, diverting resources from proven mitigation measures like building codes. Policy frameworks increasingly incorporate cost-benefit analyses adapted from early warning systems to forecasting contexts, estimating that a single false alarm in a major metropolitan area could incur billions of dollars in indirect costs from business interruptions and relocation, far outweighing benefits unless hit rates exceed 80%. Agencies such as the USGS emphasize "scenario-based" messaging for high-confidence situations, such as post-mainshock aftershock sequences, while avoiding routine deterministic predictions to safeguard credibility.
Ethically, this approach aligns with principles of distributive justice by spreading risk equitably, yet it demands meta-awareness of institutional biases, such as academic pressures to publish novel precursors that may inflate perceived forecast viability without rigorous testing. Ultimately, resolving these debates requires interdisciplinary input to define alert criteria grounded in verifiable data, ensuring that warnings enhance rather than undermine societal resilience.
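The false-alarm-ratio and cost-benefit arithmetic discussed above can be made explicit with a toy calculation; every number below is a hypothetical placeholder for a policy exercise, not an estimate for any real alert program.

```python
"""Sketch of a false-alarm-ratio and break-even calculation for an alert policy.
All quantities are hypothetical placeholders, not estimates for a real system."""

alerts_issued = 20
verified_events = 6                      # alerts followed by a qualifying earthquake
false_alarms = alerts_issued - verified_events
false_alarm_ratio = false_alarms / alerts_issued      # false alarms / total forecasts

cost_per_false_alarm = 2e9               # assumed indirect cost of one false alert (USD)
benefit_per_hit = 5e9                    # assumed losses averted by one correct alert (USD)

net_value = verified_events * benefit_per_hit - false_alarms * cost_per_false_alarm
print(f"false alarm ratio: {false_alarm_ratio:.0%}, net value: ${net_value / 1e9:.1f}B")
# Issuing alerts only pays off when averted losses outweigh the accumulated
# cost of false alarms, which is why hit-rate thresholds dominate the debate.
```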

Future Directions and Research Prospects

Advances in Monitoring Technology

Seismic monitoring networks have undergone significant enhancements in density and instrumental diversity over the past decade, enabling higher-resolution detection of seismic activity. Traditional seismometer arrays have been supplemented with nodal sensors and temporary deployments that achieve sub-kilometer spacing in high-risk areas, improving the resolution of aftershock sequences and microseismicity patterns essential for understanding fault dynamics. These networks now incorporate real-time telemetry for rapid data dissemination, as demonstrated by expansions in regions like California and Japan, where station counts have increased by factors of 2-5 since 2015.

Distributed Acoustic Sensing (DAS) represents a transformative advance, repurposing existing fiber-optic telecommunication cables into continuous, high-density seismic arrays with spatial resolutions down to 1-10 meters. By interrogating backscattered laser light in the fibers, DAS captures strain changes from passing seismic waves, enabling detection of earthquakes and ambient noise wavefields over tens of kilometers without deploying new hardware. Applications include submarine cable networks for offshore monitoring, where DAS has located events with accuracies comparable to those of sparse seismometer networks, and urban fiber lines for imaging shallow fault zones. Studies from 2023 onward highlight DAS's ability to reveal previously undetected low-magnitude events, enhancing catalogs for pre-event baseline data.

Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite Systems (GNSS) have advanced surface deformation monitoring, providing centimeter-scale measurements of interseismic strain accumulation across broad regions. Persistent scatterer InSAR techniques, refined since 2020, mitigate atmospheric noise through multi-temporal stacking, yielding deformation maps with weekly revisit times via constellations like Sentinel-1. GNSS networks, now exceeding 20,000 global stations, deliver continuous positioning data integrated with InSAR for three-dimensional strain tensor estimation, as applied in monitoring plate-boundary fault zones where slip rates are tracked to within 1 mm/year. Joint InSAR-GNSS workflows correct for orbital and tropospheric errors, improving reliability for identifying precursory slow-slip events.

Crowdsourced monitoring via smartphone accelerometers has emerged as a scalable complement, leveraging millions of devices for widespread P-wave detection and early warning. Systems like Google's Android Earthquake Alerts, operational globally by 2025, crowdsource accelerometer data to detect magnitudes above 4.0 within seconds, achieving detection thresholds in urban areas rivaling professional networks. While noise from human activity limits precision for small events, aggregation algorithms filter signals effectively, contributing to hybrid networks that extend coverage in data-sparse regions.
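As a simple illustration of the detection logic that dense and crowdsourced accelerometer networks rely on, the sketch below applies a classic short-term-average/long-term-average (STA/LTA) trigger to a synthetic trace. The waveform, window lengths, and threshold are assumptions; operational systems add station aggregation, filtering, and false-trigger rejection.

```python
"""Toy STA/LTA trigger of the kind used to flag P-wave arrivals on a single
accelerometer channel. The synthetic trace and threshold are illustrative."""
import numpy as np

fs = 100                                    # samples per second
t = np.arange(0, 60, 1 / fs)                # 60 s synthetic record
rng = np.random.default_rng(1)
trace = 0.01 * rng.standard_normal(t.size)  # background noise
trace[3000:3200] += 0.2 * np.sin(2 * np.pi * 5 * t[:200])   # injected "P arrival" at 30 s

def sta_lta(x, n_sta, n_lta):
    """Classic trailing-window STA/LTA computed on signal energy."""
    csum = np.concatenate(([0.0], np.cumsum(x ** 2)))
    idx = np.arange(x.size)

    def trailing_mean(n):
        lo = np.maximum(idx - n + 1, 0)
        return (csum[idx + 1] - csum[lo]) / (idx - lo + 1)

    return trailing_mean(n_sta) / np.maximum(trailing_mean(n_lta), 1e-12)

ratio = sta_lta(trace, n_sta=fs, n_lta=10 * fs)   # 1 s short window vs 10 s long window
threshold = 5.0
if ratio.max() > threshold:
    print(f"trigger at t = {t[np.argmax(ratio > threshold)]:.1f} s")
else:
    print("no trigger")
```

In a crowdsourced setting, many such single-device triggers are aggregated spatially before an event is declared, which is what suppresses the false triggers caused by handling and traffic noise.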

Integration of Multimodal Data and AI

Recent efforts in earthquake forecasting have increasingly incorporated multimodal data sources, including seismic catalogs, geologic maps, fault distributions, GNSS measurements of crustal deformation, InSAR imagery, and electromagnetic and geo-acoustic signals, to capture spatio-temporal patterns beyond traditional seismic waveforms alone. Machine learning, particularly deep architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers, enables the fusion of these heterogeneous datasets by learning nonlinear interactions and dependencies that statistical models like the Epidemic-Type Aftershock Sequence (ETAS) model struggle to resolve. This integration addresses data sparsity and chaos-induced unpredictability through feature extraction, transfer learning, and physics-informed neural networks (PINNs) that embed geophysical constraints.

The SafeNet model exemplifies multimodal fusion for intermediate-term (up to 1-year) forecasting of earthquakes with magnitudes ≥5, utilizing earthquake catalogs (e.g., 52 years of data from the China Earthquake Networks Center, comprising 1,509,163 events), generalized geologic maps, fault maps, and annual seismicity distributions, processed via ResNet-50 for spatial features and LSTM networks for temporal sequences, with fusion layers capturing global context. Trained on Chinese regions and transferable to U.S. data from the USGS, SafeNet generated probabilistic forecast maps in seconds on a single GPU, outperforming 13 benchmarks including ETAS and CNN-BiLSTM models; it achieved superior macro F1 scores and recalled 8 of 18 high-risk regions for M≥6 events, compared to ETAS's 2 of 5. Such scalability highlights AI's capacity to incorporate tectonic heterogeneity, though performance relies on dense historical data and remains probabilistic rather than deterministic.

Dedicated datasets like AEFA support AI-driven analysis by providing four years of continuous electromagnetic and geo-acoustic recordings with physics-derived features (e.g., stress-induced anomalies), enabling classifiers to identify candidate precursors for forecasts spanning days to months. Similarly, the Multimodal Deep Learning Framework (MDLFrame) integrates three-component seismic waveforms and ground-motion parameters, achieving 96.11% accuracy in real-time magnitude classification (distinguishing M<5.5 from M≥5.5) within 3 seconds of P-wave arrival, surpassing unimodal models and aiding early warning systems. Hybrid approaches, such as PINNs combined with ground-motion prediction equations (GMPEs), further refine forecasts by simulating crustal deformation and postseismic slip using GNSS/InSAR data.

These advancements suggest potential for enhanced reliability in operational forecasting, as seen in ML-augmented methods for catalog development and seismicity analysis, which improve catalog completeness by factors of ten through automated phase picking and event association. However, challenges persist in generalizing across tectonic regimes and in validating models amid noise, necessitating rigorous cross-validation against physical models.
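The fusion architecture described above can be sketched schematically in PyTorch: a convolutional branch for gridded map inputs, a recurrent branch for event-rate histories, and a head that outputs per-cell probabilities. Layer sizes, input shapes, and module names below are illustrative assumptions and are not the published SafeNet or MDLFrame designs.

```python
"""Minimal sketch of a multimodal fusion forecaster: CNN branch for map stacks,
LSTM branch for rate time series, fusion head for per-cell probabilities.
Architecture details are illustrative assumptions."""
import torch
import torch.nn as nn

class MultimodalForecaster(nn.Module):
    def __init__(self, n_map_channels=3, seq_features=8, grid_cells=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # spatial branch: gridded maps
            nn.Conv2d(n_map_channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(16 * 16, 64),
        )
        self.lstm = nn.LSTM(seq_features, 32, batch_first=True)  # temporal branch
        self.head = nn.Sequential(                      # fusion -> per-cell probability
            nn.Linear(64 + 32, 128), nn.ReLU(), nn.Linear(128, grid_cells), nn.Sigmoid(),
        )

    def forward(self, maps, seq):
        spatial = self.cnn(maps)                        # (batch, 64)
        _, (h, _) = self.lstm(seq)                      # h: (num_layers, batch, 32)
        fused = torch.cat([spatial, h[-1]], dim=1)      # concatenate the two embeddings
        return self.head(fused)                         # exceedance probability per cell

# Example: 2 samples of 3-channel 32x32 maps and 12-step, 8-feature rate histories
model = MultimodalForecaster()
probs = model(torch.randn(2, 3, 32, 32), torch.randn(2, 12, 8))
print(probs.shape)   # torch.Size([2, 64])
```

Published systems differ mainly in the depth of each branch and in how physical constraints are injected, but the concatenate-then-classify fusion pattern shown here is the common core.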

Prospects for Improved Reliability

Despite inherent challenges from the nonlinear dynamics of fault systems, advancements in monitoring and machine learning offer modest prospects for enhancing the reliability of probabilistic earthquake forecasting, particularly for aftershock and intermediate-term hazards. Recent models, such as those employing deep learning on seismic catalogs, have demonstrated skill in outperforming traditional epidemic-type aftershock sequence (ETAS) models by capturing subtle spatiotemporal patterns, achieving up to 70% accuracy in forecasting earthquakes one week in advance during retrospective tests on datasets from China. However, these gains are limited to statistical probabilities rather than deterministic predictions of exact time, location, and magnitude, as confirmed by the U.S. Geological Survey (USGS), which emphasizes that no method has reliably predicted a major earthquake.

Integration of multimodal data, including satellite observations, geodetic measurements from GPS networks, and real-time seismometer arrays, holds potential to refine operational earthquake forecasting (OEF) systems, enabling more precise short-term alerts for aftershock sequences following large events. For instance, USGS initiatives aim to develop nationwide aftershock forecasting tools that incorporate physics-based simulations and machine learning to quantify uncertainty and improve public communication of risks. Peer-reviewed studies suggest that scalable deep learning frameworks could boost forecast precision by 10-20% in high-seismicity regions through better handling of sparse historical data. Yet systematic evaluations reveal that even advanced algorithms struggle with rare large-magnitude events due to insufficient training data and the chaotic nature of rupture initiation, underscoring that reliability improvements will likely remain incremental without breakthroughs in understanding precursory fault mechanics.

Long-term prospects hinge on rigorous validation and international collaboration to test forecast models against prospective seismicity, as advocated by initiatives like the Collaboratory for the Study of Earthquake Predictability (CSEP). While hype around AI-driven "predictions" persists, credible assessments indicate that enhanced monitoring infrastructure—such as denser sensor networks and InSAR satellite interferometry—could yield more reliable probabilistic hazard maps by 2030, reducing uncertainty in seismic risk assessments used for planning and engineering. These developments prioritize empirical validation over unverified precursors, aligning with causal understandings of stress accumulation on faults, though full operational reliability for saving lives via evacuations remains improbable in the foreseeable future.
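Prospective testing of the kind CSEP coordinates typically reduces to comparing forecast likelihoods on data collected after the forecasts were fixed. The sketch below scores two synthetic gridded-rate forecasts with a Poisson joint log-likelihood and an information gain per earthquake; the rates and observations are simulated placeholders, and real CSEP experiments add further consistency tests.

```python
"""Sketch of a CSEP-style prospective comparison: score two gridded-rate
forecasts against observed counts with a Poisson log-likelihood and an
information gain per earthquake. All rates and counts are synthetic."""
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
n_cells = 100
true_rates = rng.gamma(0.5, 0.02, n_cells)          # "real" expected counts per cell
observed = rng.poisson(true_rates)                  # prospective observations

forecast_a = true_rates * rng.uniform(0.8, 1.2, n_cells)   # well-calibrated model
forecast_b = np.full(n_cells, true_rates.mean())           # uniform benchmark

def joint_log_likelihood(rates, counts):
    """Sum of Poisson log-probabilities over all spatial cells."""
    return poisson.logpmf(counts, np.maximum(rates, 1e-9)).sum()

ll_a = joint_log_likelihood(forecast_a, observed)
ll_b = joint_log_likelihood(forecast_b, observed)
gain = (ll_a - ll_b) / max(observed.sum(), 1)       # information gain per event
print(f"log-likelihoods: A={ll_a:.1f}, B={ll_b:.1f}, gain per earthquake={gain:.2f}")
```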

References

  1. [1]
    Can you predict earthquakes? | U.S. Geological Survey - USGS.gov
    No. Neither the USGS nor any other scientists have ever predicted a major earthquake. We do not know how, and we do not expect to know how any time in the ...100% Chance of an Earthquake · Learn about earthquake hazards
  2. [2]
    Developing, Testing, and Communicating Earthquake Forecasts ...
    Aug 13, 2024 · This review captures the current state of OEF worldwide and analyzes expert recommendations on the development, testing, and communication of ...Introduction · The Current State of Research... · Elicitation of Expert Views · Outlook
  3. [3]
    The pursuit of reliable earthquake forecasting - Physics Today
    Jul 16, 2025 · Another hurdle to the development of effective earthquake-forecasting models using AI is the diverse nature of earthquakes across regions.Missing: achievements | Show results with:achievements
  4. [4]
    Aftershock Forecast Overview - Earthquake Hazards Program
    This aftershock forecast can provide situational awareness of the expected number of aftershocks, as well as the probability of subsequent larger earthquakes.
  5. [5]
    New USGS map shows where damaging earthquakes are most ...
    Jan 16, 2024 · Nearly 75 percent of the US could experience damaging earthquake shaking, according to a recent US Geological Survey-led team of 50+ scientists and engineers.<|separator|>
  6. [6]
    Evaluation of a Decade‐Long Prospective Earthquake Forecasting ...
    Apr 12, 2024 · Earthquake forecasting models represent our current understanding of the physics and statistics that govern earthquake occurrence processes.
  7. [7]
    Experimental concepts for testing probabilistic earthquake ...
    This paper is concerned with testing complete probabilistic earthquake forecasting models against observational data. PSHA has been controversial, primarily ...
  8. [8]
  9. [9]
    A benchmark database of ten years of prospective next-day ... - Nature
    Aug 27, 2025 · To this end, earthquake forecasting models should be evaluated against future seismicity in a fully prospective fashion using fair, reproducible ...Missing: controversies | Show results with:controversies
  10. [10]
    What's the difference between predicting and forecasting ...
    May 23, 2016 · A prediction of an earthquake needs to state exactly where and when the event will happen, with enough specifics to be useful for response planning purposes.
  11. [11]
    The nature of earthquake prediction - USGS Publications Warehouse
    Earthquake prediction is inherently statistical. Although some people continue to think of earthquake prediction as the specification of the time, place, ...
  12. [12]
    What is the difference between earthquake early warning ...
    Early warning is a notification that is issued after an earthquake starts. Probabilities and forecasts are comparable to climate probabilities and weather ...
  13. [13]
    Operational Earthquake Forecasting Can ... - GeoScienceWorld
    Sep 1, 2014 · ... time scales from days to decades. ... Figure 1. Schematic diagram of an operational earthquake forecasting system that provides probabilistic ...
  14. [14]
    Machine learning and earthquake forecasting—next steps - Nature
    Aug 6, 2021 · Short-term deterministic earthquake prediction remains elusive and is perhaps impossible; however, probabilistic earthquake forecasting is ...
  15. [15]
    The nature of earthquake prediction - USGS Publications Warehouse
    Although some people continue to think of earthquake prediction as the specification of the time, place, and magnitude of a future earthquake, it has been clear ...
  16. [16]
    An interactive viewer to improve operational aftershock forecasts
    Nov 14, 2022 · The U.S. Geological Survey (USGS) issues forecasts for aftershocks about 20 minutes after most earthquakes above M 5 in the United States ...
  17. [17]
    Operational Aftershock Forecasting | U.S. Geological Survey
    Aug 11, 2025 · The Operational Aftershock Forecasting server runs continuously in the cloud, monitoring the USGS ComCat earthquake catalog.
  18. [18]
    [PDF] The Prediction Problems of Earthquake System Science
    Oct 20, 2014 · Probabilistic Earthquake Forecasting. “Brick-by-Brick Approach ... Forecasting on time scales of less than a decade is currently.
  19. [19]
    novel method for evaluating earthquake forecast model performance ...
    Nov 18, 2024 · Additionally, we apply the SASM-test method to the time-independent probabilistic earthquake forecasting model of the southeastern Tibetan ...
  20. [20]
    Intermediate- and long-term earthquake prediction
    In this review I exclude short-term prediction (time scales of hours to months) since very little progress has been made in that area. For lack of space I also ...
  21. [21]
    Probabilistic Seismic Hazard Analysis at Regional and National ...
    Mar 1, 2020 · ... time scales that range from years to centuries or more all ... probabilistic earthquake forecasting? Bulletin of the Seismological ...
  22. [22]
    [PDF] Revision of Time-Independent Probabilistic Seismic Hazard Maps ...
    The methodology combines estimates of the frequen- cies and magnitudes of earthquakes from potential sources with empirical relationships for the attenuation ...Missing: forecasting | Show results with:forecasting
  23. [23]
    [PDF] Earthquake probabilities in the San Francisco Bay Region
    However, WG99 is concerned with earthquake probabilities over times scales that are much shorter than the mean recurrence interval of any of the faults.
  24. [24]
    [PDF] Real-time Forecasts of Tomorrow's Earthquakes in California
    Aug 2, 2004 · The maps display probabilities of exceeding a specified level of ground shaking over a long time period, typically on the order of 50 years. We ...
  25. [25]
    [PDF] History of Seismology - Institute of Geophysics and Planetary Physics
    Early Ideas about Earthquakes. The most common explanation given for earthquakes in early cultures was the same as for any other natural disaster: they were ...
  26. [26]
    2. The Rise of Earthquake Science | Living on an Active Earth
    In the early 1800s, geology was a new scientific discipline, and most of its practitioners believed that volcanism caused earthquakes, both of which are common ...
  27. [27]
    [PDF] harry fielding reid - 1859—1944 - National Academy of Sciences
    The principal outcome of Reid's seismological work was the formulation and discussion of the theory which he named "The. Elastic Rebound Theory of Earthquakes," ...
  28. [28]
    1906 Marked the Dawn of the Scientific Revolution
    Earthquake Science in the U.S. Before 1906. The 1906 earthquake marked the dawn of modern scientific study of the San Andreas fault system in California.
  29. [29]
    Program History | U.S. Geological Survey - USGS.gov
    During the 1950s, the USGS participated in a program using the seismic signals from underground explosions, Soviet nuclear tests, to learn of the Earth's crust.
  30. [30]
    SEISMIC GAPS IN SPACE AND TIME! - Annual Reviews
    Fedotov (1965) conducted a major study of great (Ms 7.75), shallow earthquakes in the Kamchatka, Kurile, and Japan regions. He noted that these earthquakes ...
  31. [31]
    œSeismic gap hypothesis: Ten years after╚ by Y. Y. Kagan and ...
    Jun 10, 1993 · Since its introduction by Fedotov [1965] the seismic gap hypothesis has evolved as knowledge about earthquake history, prehistoric ...
  32. [32]
    Long- and short-term earthquake prediction in Kamchatka
    The map of long-term prediction for the Kurile—Kamchatka zone compiled in 1965 and supplemented in 1972 by S.A. Fedotov is in good agreement (in four of four ...
  33. [33]
    Fedotov, S.A. (1965) Regularities of the Distribution of Strong ...
    ABSTRACT: Decadal forerunning seismic activity of magnitude Mw ≥ 5.0 is mapped for all 45 mainshocks of Mw 7.7 to 9.1 at subduction zones of the world from 1993 ...Missing: gap | Show results with:gap
  34. [34]
    Seismic gaps and earthquakes - Rong - 2003 - AGU Journals - Wiley
    Oct 14, 2003 · The seismic gap hypothesis implies that earthquake hazard is small immediately following a large earthquake and increases with time thereafter on certain fault ...
  35. [35]
    Long-range synoptic earthquake forecasting: an aim for the millennium
    An early specification (Allen, 1976) laid down that the predicted location, time and magnitude should be stated as windows, and this is still relied on by some ...
  36. [36]
    Memories of the Future: The Uncertain Art of Earthquake Forecasting
    By now, you are probably unimpressed by probabilistic earthquake forecasting techniques. Although probabilistic estimates for well-known structures such as ...<|control11|><|separator|>
  37. [37]
    The Parkfield, California, Earthquake Prediction Experiment
    The Parkfield prediction experiment is designed to monitor the details of the final stages of the earthquake preparation process; observations and reports of ...Summary · A Recurrence Model For... · Crustal Deformation
  38. [38]
    The Parkfield, California, Earthquake Experiment
    Sep 28, 2004 · The Parkfield Experiment is a comprehensive, long-term earthquake research project on the San Andreas fault. Led by the USGS and the State of California.
  39. [39]
    The Earthquake Prediction Experiment at Parkfield, California
    Since 1985, a focused earthquake prediction experiment has been in progress along the San Andreas fault near the town of Parkfield in central California.
  40. [40]
    Seismic Gap Hypothesis: Ten years after - AGU Journals - Wiley
    Dec 10, 1991 · One of the earliest and clearest applications of the seismic gap theory to earthquake forecasting was by McCann et al.Missing: mid- | Show results with:mid-
  41. [41]
    Seismic Gap Hypothesis: Ten Years After
    The seismic gap hypothesis states that earthquake hazard increases with time since the last large earthquake on certain faults or plate boundaries.Missing: history century
  42. [42]
    Scientific Basis - Earthquake Hazards Program
    A model on which a scientific prediction could be based began to be developed in the late 1970's and early 1980's, and is described in three seminal papers.Missing: forecasting | Show results with:forecasting
  43. [43]
    Characteristic Earthquake Model, 1884–2011, RIP - GeoScienceWorld
    Nov 1, 2012 · Some proponents of quasi‐periodic characteristic earthquakes draw support from paleoseismic data, which provide radiometric dates of sediments ...
  44. [44]
    Characteristic Earthquake Magnitude Frequency Distributions on ...
    Nov 28, 2018 · Earthquake magnitude-frequency on faults is suggested to be distributed in an exponential law (Gurenberg-Richter) or characteristic We use ...
  45. [45]
    [PDF] Introduction San Andreas Fault: An Overview
    Slip rate estimates on the San Francisco peninsula section of the San Andreas Fault are in the range of 1.6 to 1.7 cm per year (based on fig. 5 sites I and J). ...
  46. [46]
    Reid's Elastic Rebound Theory - Earthquake Hazards Program
    This gradual accumulation and release of stress and strain is now referred to as the "elastic rebound theory" of earthquakes.
  47. [47]
    How long was the 1906 rupture? - Earthquake Hazards Program
    The total length is 296 miles (477 kilometers). For comparison, the 1989 Loma Prieta earthquake had a rupture length of about 25 miles (40 km). 296 Mile Rupture.
  48. [48]
    [PDF] UCERF3: A New Earthquake Forecast for California's Complex Fault ...
    change with time according to elastic-rebound theory. Faults are less likely to rupture (less ready) when and where there has been a recent earthquake, and ...
  49. [49]
    Fault Slip Rates - Earthquake Processes and Effects
    Their model indicates a deep slip rate of 20 mm/yr for the San Andreas fault, a deep slip rate of 13 mm/yr and shallow creep rate of 0 to 13 mm/yr on the ...
  50. [50]
    Accumulation of permanent deformation during earthquake cycles ...
    Feb 20, 2015 · Our understanding of crustal deformation at time scales of tens to thousands of years is strongly conditioned by elastic rebound theory and the ...
  51. [51]
    Earthquakes
    The seismic "cycle". Inter-seismic slip; Co-seismic slip; Post-seismic ... horizontal rate = slip during average earthquake / earthquake recurrence interval.
  52. [52]
    The seismic cycle (Chapter Five) - The Mechanics of Earthquakes ...
    Dec 17, 2018 · In terms of crustal deformation, the loading cycle is often divided into four phases: preseismic, coseismic, postseismic, and interseismic.
  53. [53]
    Estimation of Recurrence Interval of Large Earthquakes on the ...
    This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.Missing: forecasting | Show results with:forecasting
  54. [54]
    M ≥ 7 earthquake rupture forecast and time‐dependent probability ...
    Apr 2, 2016 · We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault segmentation model.
  55. [55]
    Earthquake forecasting from paleoseismic records - Nature
    Mar 2, 2024 · Here we compare five models and use Bayesian model-averaging to produce time-dependent, probabilistic forecasts of large earthquakes along 93 fault segments ...
  56. [56]
    Time‐Dependent Probabilistic Seismic Hazard Analysis for Seismic ...
    Dec 27, 2023 · We propose a time‐dependent sequence‐based probabilistic seismic hazard analysis (TD‐SPSHA) approach by combining the time‐dependent mainshock probabilistic ...
  57. [57]
    [PDF] A Physically-Based Earthquake Recurrence Model for Estimation of ...
    We have introduced a new failure time distribution for recurrent earthquake sequences, the Brownian passage time model. This model is based upon a simple ...
  58. [58]
    Comparison of Paleoearthquake Elapsed‐Times and Mean ...
    Apr 18, 2025 · ... recurrence interval is considered closed, as a full seismic-cycle is sampled. ... We also find a dependence of seismic-cycle maturity on fault ...
  59. [59]
    [PDF] Probabilistic Seismic Hazard Assessment
    The results of such an analysis are expressed as estimated probabilities per year or estimated annual frequencies.”Missing: timescales | Show results with:timescales
  60. [60]
    [PDF] Introduction to Probabilistic Seismic Hazard Analysis
    The purpose of this document is to discuss the calculations in- volved in PSHA, and the motivation for using this approach. Because many models and data sources ...
  61. [61]
    [PDF] Probabilistic Seismic Hazard Assessment Including Site Effects for ...
    Probabilistic Seismic Hazard Analysis (PSHA) is a method used to estimate the level of ground motion with a specified probability of exceedance (Cornell ...<|separator|>
  62. [62]
    Probabilistic Seismic Hazard Assessment (PSHA)
    The PSHA technique is ideally suited to compiling hazard maps. Having drawn up the zone model for a large area, the calculations are made for a grid of points ...
  63. [63]
    Time-dependent recurrence of strong earthquake shaking near plate ...
    Time-independent recurrence models are commonly adopted for describing the recurrence of small earthquakes, for analysis of regional and global seismic hazard ...
  64. [64]
    (PDF) Characteristic Earthquake Recurrence and Time-Dependent ...
    Sep 16, 2025 · Two different hybrid earthquake recurrence models are developed with time-independent (or Poissonian) and time-dependent (or renewal) characteristics.
  65. [65]
    Estimating Time-Dependent Seismic Hazard of Faults in the ...
    Lognormal, Brownian passage time, and Weibull time-dependent recurrence models are considered. The paper explores application of the method for two faults on ...
  66. [66]
    Applying the ETAS Model to Simulate Earthquake Sequences for ...
    Jan 30, 2025 · ETAS is a state‐of‐the‐art method for modeling seismic sequences that has demonstrated significant potential for forecasting future earthquakes ...ABSTRACT · DEFINING THE ETAS MODEL · ETAS MODEL... · TESTING THE...
  67. [67]
    Improved Aftershock Forecasts Using Mainshock Information in the ...
    Feb 3, 2025 · The Epidemic Type Aftershock Sequence (ETAS) model is the most widely used and powerful statistical model for aftershock forecasting.
  68. [68]
    Ensembles of ETAS models provide optimal operational earthquake ...
    Oct 1, 2019 · We develop 3-day forecasts during the swarm based on an ETAS model fit to all prior seismicity in the region as well as an ETAS model fit only to previous ...<|control11|><|separator|>
  69. [69]
    Constant strain accumulation rate between major earthquakes on ...
    Apr 11, 2018 · Our results show that strain accumulation reaches near steady state within ~10 years of an earthquake. We discuss the implications for seismic ...
  70. [70]
    [PDF] Leveraging geodetic data to reduce losses from earthquakes
    The geodetic data reflect contemporary deformation rates, provide slip rate information on additional faults that lack geologic rate estimates, help quantify ...
  71. [71]
    A Test of the Earthquake Gap Hypothesis in Mexico
    Nov 30, 2022 · We conclude that the gap hypothesis performed poorly at predicting earthquakes in Mexico and, in fact, its predictions were worse than ...
  72. [72]
    New Estimates of Magnitude‐Frequency Distribution and b‐Value ...
    Dec 28, 2023 · The b-value, derived from the MFD, is commonly used to estimate the probability that a future earthquake will exceed a specified magnitude ...
  73. [73]
    [PDF] Earthquake Prediction using Machine Learning - iarjset
    This paper uses machine learning to predict earthquakes, explosions, or no event using past seismic data, with XGBoost showing the highest performance. LIME is ...
  74. [74]
    Earthquake Prediction Using Machine Learning Techniques
    Aug 7, 2025 · This paper proposes a data-driven approach for earthquake prediction using supervised machine learning algorithms trained on historical seismic ...Missing: 2020s | Show results with:2020s<|separator|>
  75. [75]
    A Benchmark for Earthquake Forecasting with Neural Point Processes
    Sep 23, 2025 · Earthquake prediction based on spatio-temporal data mining: an lstm network approach. IEEE Transactions on Emerging Topics in Computing, 8(1): ...
  76. [76]
    Scalable intermediate-term earthquake forecasting with multimodal ...
    Mar 21, 2025 · We propose SafeNet, a scalable deep learning framework designed to address these challenges through the use of multimodal fusion neural networks.
  77. [77]
    [PDF] AI for Earthquake Prediction: A Comparative Analysis of ... - ajcse
    This research presents an AI-driven method for earthquake prediction using the gated recurrent unit (GRU) model and historical seismic data from the US ...
  78. [78]
    AEFA: an earthquake forecasting dataset for AI - ScienceDirect.com
    Earthquake forecasting is a challenging task aiming to ... earthquake prediction problem across the geophysics, statistics, and data science domains.
  79. [79]
    Forecasting future earthquakes with deep neural networks
    Earthquake forecasting focuses on providing probabilistic estimates that indicate the likelihood of earthquakes occurring in a given region within a specific ...Missing: peer- | Show results with:peer-
  80. [80]
    AI-Powered Earthquake Prediction - It's Prodigy
    Jun 20, 2025 · An algorithm developed by the University of Texas accurately predicted 70% of seismic events in a China trial.
  81. [81]
    Recent advances in earthquake seismology using machine learning
    Feb 28, 2024 · Here, we review the recent advances, focusing on catalog development, seismicity analysis, ground-motion prediction, and crustal deformation analysis.
  82. [82]
    Earthquake prediction: a critical review - Oxford Academic
    Earthquake prediction research has been conducted for over 100 years with no obvious successes. Claims of breakthroughs have failed to withstand scrutiny.
  83. [83]
    Precursory seismic quiescence: A preliminary assessment of the ...
    Some investigators have interpreted these observations as evidence that seismic quiescence is a somewhat reliable precursor to moderate or large earthquakes.
  84. [84]
    What is the probability that an earthquake is a foreshock to a larger ...
    Jul 8, 2024 · The likelihood that an earthquake will be followed by a larger earthquake nearby and within a week is about 5%.
  85. [85]
    [PDF] Real time monitoring of radon as an earthquake precursor in Iceland
    Discrete radon samples are being collected weekly from nine sta- tions in the Southern Iceland Seismic Zone (SISZ) and two stations in the. Northern Iceland- ...
  86. [86]
    [PDF] Radon as Earthquake Precursor - IntechOpen
    Mar 2, 2012 · They recorded radon anomalies before different earthquakes: June 1988 (M=6.8);. April, 26, 1986 (M=5.7); July 1986 (M=3.8); Kangra earthquake ...<|separator|>
  87. [87]
    A systematic compilation of earthquake precursors - ScienceDirect
    The earthquake precursors selected for analysis included electric and magnetic fields, gas emissions, groundwater level changes, temperature changes, surface ...Missing: reliability | Show results with:reliability
  88. [88]
    Are earthquakes associated with variations in the geomagnetic field?
    Electromagnetic variations have been observed after earthquakes, but despite decades of work, there is no convincing evidence of electromagnetic precursors ...
  89. [89]
    Slight Shifts in Magnetic Field Preceded California Earthquakes - Eos
    Oct 6, 2022 · The U.S. Geological Survey states that “despite decades of work, there is no convincing evidence of electromagnetic precursors to earthquakes.<|separator|>
  90. [90]
    Case‐Control Study on a Decade of Ground‐Based Magnetometers ...
    Sep 1, 2022 · Rather, in this work we provide evidence for the existence of electromagnetic phenomena preceding earthquakes. While it is possible that the ...
  91. [91]
    Animals & Earthquake Prediction | U.S. Geological Survey - USGS.gov
    Anecdotal evidence abounds of animals, fish, birds, reptiles, and insects exhibiting strange behavior anywhere from weeks to seconds before an earthquake.
  92. [92]
    Do Animals Really Anticipate Earthquakes? Sensors Hint They Do
    Jul 31, 2020 · Cows, sheep and dogs increased their activity before tremors, seemingly reacting, in part, to one another.
  93. [93]
    (PDF) Review: Can Animals Predict Earthquakes? - ResearchGate
    Aug 10, 2025 · PDF | In public perception, abnormal animal behavior is widely assumed to be a potential earthquake precursor, in strong contrast to the ...
  94. [94]
    Precursor-Based Earthquake Prediction Research: Proposal for a ...
    Jan 14, 2021 · The article challenges the currently dominant pessimistic view on precursor-based earthquake prediction resting on the “impossible in principle” ...Introduction · Short Summary of the State of... · Outlines of a Possible... · Discussion
  95. [95]
    Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3)
    Nov 5, 2013 · Provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California.
  96. [96]
    UCERF3: A New Earthquake Forecast for California's Complex Fault ...
    Mar 9, 2015 · Scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events.
  97. [97]
    Third Uniform California Earthquake Rupture Forecast (UCERF3)
    The development of earthquake rupture forecasts by the WGCEP (in 1988, 1990, 1995, 2003, and 2007) shows progress towards more accurate representations of the ...
  98. [98]
    The Uniform California Earthquake Rupture Forecast, Version 2 ...
    Apr 14, 2008 · This report describes a new earthquake rupture forecast for California developed by the 2007 Working Group on California Earthquake Probabilities (WGCEP 2007).
  99. [99]
    UCERF3: The Long-Term Earthquake Forecast for California
    The estimate for the likelihood that California will experience a magnitude 8 or larger earthquake in the next 30 years has increased from about 4.7% for UCERF ...
  100. [100]
    Working Group on California Earthquake Probabilities (WGCEP)
    The WGCEP has now completed the time-independent UCERF3 model (UCERF3-TI, which relaxes segmentation and includes multi-fault ruptures) and the long-term, time- ...
  101. [101]
    Operational Earthquake Forecasting – What Is It and How Is It Done?
    Aug 29, 2024 · It is important to distinguish earthquake forecasting from earthquake prediction. It is not possible to predict the exact location, time ...<|separator|>
  102. [102]
    CSEP Testing – Collaboratory for the Study of Earthquake ...
    4 sept 2025 · CSEP supports an international effort to rigorously evaluate earthquake forecasting models and conduct forecast testing experiments.
  103. [103]
    Global earthquake forecasts | Geophysical Journal International
    We have constructed daily worldwide long- and short-term earthquake forecasts. These forecasts specify the earthquake rate per unit area, time and magnitude on ...<|separator|>
  104. [104]
    Prospective evaluation of global earthquake forecast models
    The global earthquake activity rate (GEAR1) seismicity model uses an optimized combination of geodetic strain rates, hypotheses about converting strain rates ...
  105. [105]
    The Establishment of an Operational Earthquake Forecasting ...
    Sep 1, 2014 · OEF‐Italy represents the first attempt to provide an operational earthquake forecasting system in Italy. We foresee many potentially ...
  106. [106]
    New-Generation Earthquake Forecasting Swings into Operation in ...
    Aug 21, 2014 · Italy is approaching the next frontier in earthquake forecasting: an "operational" system that will make quake forecasts routine, ...<|control11|><|separator|>
  107. [107]
    Operational Earthquake Forecasting in Italy: validation after 10 yr of ...
    The system is run in real-time: every midnight and after each ML 3.5 + event, it produces the weekly forecast of earthquakes expected by an ensemble model in ...
  108. [108]
    Operational Earthquake Forecasting in Italy: validation after 10 years ...
    The system is run in real-time: every midnight and after each ML 3.5+ event, it produces the weekly forecast of earthquakes expected by an ensemble model in ...
  109. [109]
    Earthquake forecasts - GeoNet
    The earthquake forecast probabilities are really useful for engineers, infrastructure managers, private companies, Civil Defence, government planning, and ...Central New Zealand · Canterbury · Kaikōura
  110. [110]
    Canterbury earthquake forecasts - GeoNet
    Within the next year, there is a 29% probability (unlikely) of one or more earthquakes of magnitude 5.0 to 5.9 occurring in the area shown in the box in the map ...
  111. [111]
    A Software Tool for Hybrid Earthquake Forecasting in New Zealand
    Jul 26, 2024 · In New Zealand, the GeoNet program within GNS Science is the main source of geological hazard information and has publicly provided earthquake ...
  112. [112]
    Current State of New Zealand's Operational Earthquake Forecasting ...
    Sep 11, 2022 · In New Zealand, GNS Science through the GeoNet programme is the official provider of earthquake forecast information to help communities ...
  113. [113]
    Operational Earthquake Forecasting – Implementing a Real-Time ...
    This capability, known as Operational Earthquake Forecasting (OEF), could provide valuable situational awareness to emergency managers, the public, and other ...
  114. [114]
    Operational earthquake forecasting can enhance earthquake ...
    Sep 2, 2014 · Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help ...
  115. [115]
    An Operational Earthquake Forecasting Experiment for Israel
    The OEF-Israel system produces a weekly forecast for target earthquakes with local magnitudes greater than 4.0 and 5.5 in the entire State of Israel.
  116. [116]
    Developing and Testing ETAS‐Based Earthquake Forecasting ...
    May 24, 2024 · ... Switzerland, aiming to identify suitable candidate models for operational earthquake forecasting (OEF) at the Swiss Seismological Service.
  117. [117]
    Towards a harmonized operational earthquake forecasting model ...
    Mar 5, 2025 · We develop a harmonized earthquake forecasting model for Europe based on the epidemic-type aftershock sequence (ETAS) model to describe the spatiotemporal ...
  118. [118]
    [PDF] Towards a harmonized operational earthquake forecasting model ...
    Mar 5, 2025 · Retrospective and pseudo-prospective tests demonstrate that ETAS-based models outperform the time-independent benchmark model as well as an ETAS.<|separator|>
  119. [119]
    Seismologists use deep learning to forecast earthquakes - News
    Aug 31, 2023 · A team of researchers at UC Santa Cruz and the Technical University of Munich created a new model that uses deep learning to forecast aftershocks.<|separator|>
  120. [120]
  121. [121]
    23-13. Improving earthquake forecasting with machine learning
    The goal of this opportunity is to develop machine-learning approaches to improve earthquake forecasting capabilities – namely, to better predict the rate ...<|separator|>
  122. [122]
    Aftershock forecasting | U.S. Geological Survey - USGS.gov
    Aftershock forecasts use statistical models, physics-based models, and machine learning methods to aid in earthquake response and recovery.Missing: 2020s | Show results with:2020s
  123. [123]
    Scalable intermediate-term earthquake forecasting with multimodal ...
    Mar 21, 2025 · We propose SafeNet, a scalable deep learning framework designed to address these challenges through the use of multimodal fusion neural networks.<|separator|>
  124. [124]
    Improving earthquake prediction accuracy in Los Angeles ... - Nature
    Oct 18, 2024 · This research breaks new ground in earthquake prediction for Los Angeles, California, by leveraging advanced machine learning and neural network models.
  125. [125]
    Earthquake Predictability and Forecast Evaluation Using Likelihood ...
    Oct 8, 2024 · Earthquake probability forecasts are typically based on simulations of seismicity generated by statistical (point process) models or direct ...Missing: metrics | Show results with:metrics
  126. [126]
    Ranking earthquake forecasts using proper scoring rules
    Probabilistic earthquake forecasts are used to estimate the spatial and/or temporal evolution of seismicity and have potential utility during earthquake ...
  127. [127]
    Evaluations — pyCSEP v0.7.0 documentation
    PyCSEP provides two groups of evaluation metrics for grid-based earthquake forecasts. ... Computes the t-test for gridded earthquake forecasts. w_test ...Missing: experiment | Show results with:experiment
  128. [128]
    Magnitude-weighted goodness-of-fit scores for earthquake forecasting
    Here, we propose various weighted measures, weighting each earthquake by some function of its magnitude, such as potency-weighted log-likelihood, and consider ...
  129. [129]
    Efficient testing of earthquake forecasting models - ResearchGate
    We develop a novel evaluation method for alarm-based earthquake forecast, taking into account the magnitude of seismic energy and the impact area of earthquakes ...
  130. [130]
    Earthquake Prediction | Pacific Northwest Seismic Network
    Actual annual numbers since 1968 range from lows of 6-7 events/year in 1986 and 1990 to highs of 20-23 events/year in 1970, 1971 and 1992. Although we are ...Missing: developments | Show results with:developments
  131. [131]
    Predicting the 1975 Haicheng Earthquake - GeoScienceWorld
    Mar 9, 2017 · The prediction consisted of four stages: long-term (a few years), middle-term (one to two years), short-term (a few months), and imminent (hours ...
  132. [132]
    [PDF] Earthquake forecasting and its verification - NPG
    Examples of successful near-term predictions of future earthquakes have been rare. A notable exception was the prediction of the M=7.3 Haicheng earthquake in ...
  133. [133]
    Confirming a Chinese earthquake prediction - Geotimes
    Jun 26, 2006 · The Haicheng earthquake is the "first, and so far only, case where a large earthquake was predicted," says Susan Hough, a seismologist with the ...
  134. [134]
    The 2004 Parkfield Earthquake, the 1985 Prediction, and ...
    Mar 9, 2017 · The 1985 prediction of a characteristic magnitude 6 Parkfield earthquake was unsuccessful, since no significant event occurred in the 95% time window.Missing: details | Show results with:details
  135. [135]
    What Ever Happened to Earthquake Prediction?
    By 1978 it was no longer called a research program, and had become committed to predicting a magnitude 8 earthquake in a highly populated and developed part of ...
  136. [136]
    VAN method - Wikipedia
    A review of the updated VAN method in 2020 says that it suffers from an abundance of false positives and is therefore not usable as a prediction protocol. VAN ...Description of the VAN method · Earthquake prediction using... · Criticisms of VAN
  137. [137]
    VAN method lacks validity
    Varotsos and colleagues (the VAN group) claim to have successfully predicted many earthquakes in Greece. Several authors have refuted these claims, as reported ...Missing: debunked | Show results with:debunked
  138. [138]
    Studies on earthquake precursors in China: A review for recent 50 ...
    Failure of Tangshan earthquake prediction was a blow for the enthusiasm of earthquake prediction, which makes the scientists to attempt to find the shortage ( ...
  139. [139]
    The Great 1976 Tangshan Earthquake: Learning from the 1966 ...
    Jan 5, 2022 · ... Haicheng prediction success and the Tangshan prediction failure. Ahead of the Tangshan earthquake, Chinese seismologists had accumulated ...
  140. [140]
    Quake prediction fantasies mislead public - Global Times
    Mar 15, 2011 · "Our failures to predict the Sichuan earthquake in 2008 and the Qinghai earthquake ... following the successful evacuation of Haicheng ...
  141. [141]
    Are earthquakes predictable? | Geophysical Journal International
    We conclude that an empirical search for earthquake precursors that forecast the size of an impending earthquake has been fruitless.Missing: debunked | Show results with:debunked
  142. [142]
    Unpredictability of Earthquakes - AGU Publications
    Aug 4, 1998 · Chaotic attractors provide the archetype of natural systems characterized by a limited predictability [Nicolis, 1989].
  143. [143]
    The predictable chaos of slow earthquakes - PMC - PubMed Central
    Slow earthquakes result from deterministic chaos and show predictability horizon time of the order of days to weeks.
  144. [144]
    Deterministic chaos in a simulated sequence of slip events on a ...
    Understanding the origin of the variation in recurrence interval is important for the improvement of long-term earthquake forecasting. ... S.H. . ,. Nonlinear ...
  145. [145]
    Unravelling the prediction of strong earthquakes - Nature
    The lack of success in developing useful short-term forecasts for strong earthquakes is due to their self-similarity, nonlinearity, and chaotic nature.
  146. [146]
    Statistical physics approach to understanding the multiscale ...
    Dec 18, 2003 · The magnitude of the potential loss of life and property is so great that reliable earthquake forecasting has been a long-sought-for goal.
  147. [147]
    Large earthquakes are more predictable than smaller ones - Seismica
    Jun 21, 2025 · Large earthquakes are more predictable because their chaotic behavior is inversely related to their magnitude, making them less susceptible to ...
  148. [148]
    Post Seismic Catalog Incompleteness and Aftershock Forecasting
    Existing methods for aftershock forecasting are strongly affected by the incompleteness of the instrumental datasets available soon after the main shock ...
  149. [149]
    Embracing Data Incompleteness for Better Earthquake Forecasting
    Dec 9, 2021 · We propose two methods to calibrate the parameters of the epidemic-type aftershock sequence (ETAS) model based on expectation maximization (EM)
  150. [150]
    Embracing Data Incompleteness for Better Earthquake Forecasting
    May 3, 2021 · We propose two methods to calibrate the parameters of the epidemic-type aftershock sequence (ETAS) model based on expectation maximization (EM)
  151. [151]
    ETAS‐Approach Accounting for Short‐Term Incompleteness of ...
    Sep 14, 2021 · Short‐time incompleteness of earthquake catalogs can significantly bias ETAS model fits. · A closed‐form maximum‐likelihood approach is derived ...
  152. [152]
    The precursory phase of large earthquakes - Science
    Jul 20, 2023 · Coherent noise structures reminiscent of colored noise in GPS data are expected to be strongly attenuated by the stack on multiple earthquakes, ...
  153. [153]
    Denoising daily displacement GNSS time series using deep neural ...
    The flexibility of the method allows for near-real-time noise removal (with a latency of a few days), opening up the possibility of detecting and modelling ...
  154. [154]
    New insights into earthquake precursors from InSAR - Nature
    Sep 20, 2017 · We thus averaged the time series of the PSs within each basin in order to reduce oscillations and noise in the data and to obtain corresponding ...
  155. [155]
    [PDF] Earthquake precursors? Not so fast.
    Jul 24, 2023 · As far as we can tell, uncorrected noise in high-rate GPS time series data fundamentally compromises the paper's claim of discovery of two ...<|control11|><|separator|>
  156. [156]
    Earthquake likelihood model testing - USGS Publications Warehouse
    This paper describes the statistical rules of an experiment to examine and test earthquake forecasts.
  157. [157]
    Challenges in observational seismology | U.S. Geological Survey
    Our ability to collect, process, and analyze earthquake data has been accelerated by advances in electronics, communications, computers, and software (see ...Missing: issues forecasting
  158. [158]
    We May Never Predict Earthquakes, but We Can Make Them Less ...
    Feb 17, 2023 · Earthquakes happen because the slow and steady motions of tectonic plates cause stresses to build up along faults in the Earth's crust. Faults ...
  159. [159]
    (PDF) The 2004 Parkfield Earthquake, the 1985 Prediction, and ...
    Aug 6, 2025 · The 1985 prediction of a characteristic magnitude 6 Parkfield earthquake was unsuccessful, since no significant event occurred in the 95% ...
  160. [160]
    Earthquake prediction attempt fails | News - Al Jazeera
    Oct 12, 2005 · A prolonged attempt to help scientists predict when earthquakes will happen has ended in failure ...
  161. [161]
    An earthquake prediction went viral. Is it giving people false hope?
    Mar 21, 2025 · On social media a self-proclaimed earthquake predictor says he can forecast big shakes, but experts say it's pure luck.
  162. [162]
    Why earthquake predictions are usually wrong - PreventionWeb
    Mar 22, 2025 · Guessing that an earthquake would happen here is an easy bet, Dr Jones said, although a strong magnitude seven is quite rare.
  163. [163]
    Scientists cannot predict earthquakes | AP News
    May 1, 2023 · False. Earthquakes can't be predicted, experts say. Scientists do calculate the probability that earthquakes will occur in various regions ...
  164. [164]
    Earthquake Myths: Separating Fact from Fiction | Cal OES News
    Sep 28, 2023 · Another common myth is that new technology exists to predict earthquakes. And finally, a popular myth is that animals can predict them.
  165. [165]
    Dynamics and characteristics of misinformation related to ...
    Aug 17, 2023 · In our study, we focused on tweets containing misinformation about earthquake predictions and analyzed their dynamics.
  166. [166]
    Earthquake Facts & Earthquake Fantasy | U.S. Geological Survey
    A common belief is that people always panic and run around madly during and after earthquakes, creating more danger for themselves and others. Actually, ...
  167. [167]
    No, It Is Not Possible to Predict an Earthquake | Snopes.com
    Feb 9, 2023 · These claims of predicting earthquakes are false and have no basis in scientific fact. We debunked Hoogerbeets' claims back in 2017, and other ...
  168. [168]
    No, you can't predict earthquakes, the USGS says - NPR
    Feb 7, 2023 · The USGS is unequivocal: No one can predict an earthquake. "We do not know how, and we do not expect to know how any time in the foreseeable future," the ...
  169. [169]
    Turkey Earthquake 'Planetary Geometry Prediction' Slammed by ...
    Feb 6, 2023 · There is no evidence that planetary alignment—or any other method—can accurately and systematically forecast earthquakes, scientists told ...
  170. [170]
    The Parkfield prediction fallacy | Bulletin of the Seismological ...
    Mar 3, 2017 · The Parkfield earthquake prediction is generally stated as a 95% probability that the next moderate earthquake there should occur before ...
  171. [171]
    Why the Mw 6 Parkfield earthquake expected in the 1985–1993 ...
    Aug 10, 2022 · The earthquake prediction released by USGS to the public in 1985 stated that an Mw 6.0 earthquake would occur in the Parkfield area by January 1993 ...
  172. [172]
    Viral Manga quake prediction fails, but Japanese scientists warn of ...
    Jul 5, 2025 · A popular manga predicted doom, an apocalyptic earthquake ...
  173. [173]
    A Critical Review of Ground Based Observations of Earthquake ...
    The earthquake precursors observed on the ground can be generally categorized into two main groups: non-electromagnetic precursors and electromagnetic ones. In this ...
  174. [174]
    Statistical Analysis of Pre‐earthquake Electromagnetic Anomalies in ...
    Sep 24, 2020 · Assessing the statistical significance of electromagnetic anomalies in the ultralow frequency (ULF) range observed prior to earthquakes is a necessary step
  175. [175]
    Looking for Earthquake Precursors From Space: A Critical Review
    What are generically called thermal anomalies refer to anomalous fluctuations of several different parameters such as atmospheric temperature (at various ...
  176. [176]
    Baseline studies of the feasibility and reliability of using animal ...
    This project was established to determine if it were possible to advance the state of the art in earthquake prediction by learning more about claims that ...
  177. [177]
    Electromagnetic and Radon Earthquake Precursors - MDPI
    This paper is a cumulative survey on earthquake precursor research, arranged into two broad categories: electromagnetic precursors and radon precursors.
  178. [178]
    Short-term earthquake prediction: Current status of seismo ...
    May 29, 2009 · Atmospheric anomalies observed during earthquake occurrences. Geophys. Res. Lett. (2004). Geller, R., et al. Earthquakes cannot be predicted ...
  179. [179]
    A Critical Review of Geomagnetic and Ionospheric Anomalies as ...
    This chapter presents a critical review of research on geomagnetic and ionospheric anomalies as potential precursors to earthquakes.
  180. [180]
    Ethical dilemmas related to predictions and warnings of impending ...
    Ethical dilemmas include failure to warn, which can cause serious consequences, and false alarms, which may violate nonmaleficence and affect autonomy.
  181. [181]
    Ethical dilemmas related to predictions and warnings of impending ...
    Sep 1, 2013 · Abstract. Scientists and policy makers issuing predictions and warnings of impending natural disaster are faced with two major challenges, that ...
  182. [182]
    Italian scientists convicted over earthquake warning | Reuters
    Oct 22, 2012 · Six scientists and a government official were sentenced to six years in prison for manslaughter by an Italian court on Monday for failing to ...
  183. [183]
    Italy's supreme court clears L'Aquila earthquake scientists for good
    Six scientists convicted of manslaughter for advice they gave ahead of the deadly L'Aquila earthquake in 2009 today were definitively acquitted by Italy's ...
  184. [184]
    Why Italian earthquake scientists were exonerated | Science | AAAS
    Feb 10, 2015 · Six scientists convicted of manslaughter in 2012 for advice they gave ahead of the deadly L'Aquila earthquake were victims of uncertain and fallacious ...
  185. [185]
    Lessons of L'Aquila for Operational Earthquake Forecasting
    Jan 3, 2013 · The indictments appeared to blame the scientists for not alerting the local population of an impending earthquake—for failure to predict. It ...
  186. [186]
    Cry Wolf Effect? Evaluating the Impact of False Alarms on Public ...
    NOAA (2019) defines the false alarm ratio as “the number of false alarms divided by the total number of events forecast.” To assess the actual false alarm ratio ... (see the formula sketched in the note after this list)
  187. [187]
    The ethics of earthquake prediction | Request PDF - ResearchGate
    Aug 2, 2025 · Scientists and policy makers issuing predictions and warnings of impending natural disaster are faced with two major challenges, that is ...
  188. [188]
    What Are the Ethics of Sharing Earthquake Aftershock Forecasts ...
    May 1, 2024 · Sharing aftershock forecasts globally raises ethical questions: whether it's beneficial for developing countries, if it's scientific ...
  189. [189]
    [PDF] Stigma in science: the case of earthquake prediction - CORE
    Forecasting thus entailed less risk for the scientific community, offsetting the reputational damage that might result from 'false alarms' and mitigating ...
  190. [190]
    [PDF] Benefits and Costs of Earthquake Early Warning | EEW - Richard Allen
    Mar 23, 2016 · Earthquake early warning (EEW) is the rapid detection of earthquakes underway and the alerting of people and infrastructure in harm's way.
  191. [191]
    Benefits and Costs of Earthquake Early Warning - GeoScienceWorld
    Mar 23, 2016 · Individuals can use the alert time to drop, cover, and hold on, reducing injuries and fatalities, or if alert time allows, evacuate hazardous ...
  192. [192]
    Ethical Issues in the Decision-making for Earthquake Preparedness ...
    Game theory is applied as a conceptual framework for the discussion of ethical issues in the decision-making for earthquake preparedness, prediction, ...
  193. [193]
    Recent advances in earthquake monitoring I: Ongoing revolution of ...
    Recent advances include improved seismic networks and ultra-dense instruments such as nodes and fiber-optic sensing, which provide high-resolution data for ...
  194. [194]
    Seismic arrival-time picking on distributed acoustic sensing data ...
    Dec 11, 2023 · Distributed Acoustic Sensing (DAS) is an emerging technology for earthquake monitoring and subsurface imaging.
  195. [195]
    Earthquake Location with Distributed Acoustic Sensing Subarray ...
    Feb 28, 2025 · The emerging distributed acoustic sensing (DAS) technology allows array measurements of strain rate in an unprecedented spatial resolution and ...
  196. [196]
    Towards a widely applicable earthquake detection algorithm for ...
    Feb 3, 2025 · Distributed acoustic sensing (DAS) is a promising technology for providing dense (metre-scale) sampling of the seismic wavefield.
  197. [197]
    Advancements in remote sensing techniques for earthquake ...
    This review highlights the advancements in the integration of remote sensing technologies into earthquake studies.
  198. [198]
    Natural-hazard monitoring with global navigation satellite systems ...
    The review discusses advancements and limitations of GNSS for geophysical applications, its integration with other sensors (e.g., seismometers, InSAR), and ...
  199. [199]
    A joint InSAR-GNSS workflow for correction and selection of ...
    Jun 5, 2023 · Interferometric Synthetic Aperture Radar (InSAR) data provides wide-area coverage for interseismic deformation monitoring ...
  200. [200]
    Global earthquake detection and warning using Android phones
    Jul 17, 2025 · We use the global Android smartphone network to develop an earthquake detection capability, an alert delivery system, and a user feedback ...
  201. [201]
    The Role of Machine Learning in Earthquake Seismology: A Review
    Mar 28, 2024 · Convolutional and recurrent neural networks, in particular, have been used with seismic waveform data to train deep learning models for ...
  202. [202]
    A Multimodal Deep Learning Framework for Rapid Real‐Time ...
    May 29, 2025 · Recently, with the rapid increase in global seismic data, seismologists have begun exploring AI techniques for earthquake phase picking.
  203. [203]
    Can We Predict Earthquakes? - Communications of the ACM
    Dec 3, 2024 · When tested, the system was able to accurately predict 70% of earthquakes happening a week out, correctly forecasting 14 earthquakes over the ...
  204. [204]
    Seismologists use deep learning for improved earthquake forecasting
    ... which can, essentially, learn how to learn — performed slightly better than the ETAS model at forecasting ...
  205. [205]
    National Earthquake Prediction Evaluation Council (NEPEC)
  206. [206]
    A Prospect of Earthquake Prediction Research - Project Euclid
    The key for research progress in practical probability earthquake forecasting is to use a multiple prediction formula (Utsu, 1979) such that total probability ...
  207. [207]
    A Scientific Vision and Roadmap for Earthquake Rupture Forecast ...
    Sep 2, 2025 · The general rule of thumb is that every earthquake has about a 5‐10% chance of being followed by something even larger in the week that follows ...