
Uncertainty quantification

Uncertainty quantification (UQ) is the science of systematically identifying, characterizing, quantifying, and managing uncertainties arising in mathematical models, computational simulations, experimental data, and their predictions to enable more reliable decision-making. This interdisciplinary field integrates principles from statistics, probability, and computational science to address variability in inputs, model forms, and outputs, ensuring that assessments of complex systems account for potential errors or unknowns. Originating from efforts to improve the credibility of computer simulations in engineering and physics, UQ has evolved into a foundational element of predictive modeling across diverse domains. Uncertainties in UQ are broadly classified into two main types: aleatoric uncertainty, which represents inherent or irreducible variability in systems (such as stochastic processes or measurement noise), and epistemic uncertainty, which stems from a lack of knowledge and can be reduced through additional data or improved modeling (such as parameter errors or model inadequacies). Aleatoric uncertainty is often quantified using probabilistic distributions to capture natural variability, while epistemic uncertainty is addressed with techniques that identify and mitigate knowledge gaps. These distinctions, formalized in statistical frameworks, allow for targeted propagation of uncertainties through models, distinguishing between what is fundamentally unpredictable and what can be refined. Key methods in UQ include Monte Carlo simulation for forward propagation of input uncertainties, Bayesian inference for updating model parameters with data while accounting for prior knowledge, and surrogate modeling techniques like Gaussian processes or polynomial chaos expansions to efficiently approximate complex responses. A seminal advancement came with the 2001 Bayesian calibration framework by Kennedy and O'Hagan, which introduced model discrepancy terms to quantify biases between simulations and reality, setting a "gold standard" for integrating observational data into predictive uncertainty analysis. These approaches often combine forward UQ (propagating uncertainties to outputs) with inverse UQ (inferring inputs from observations), supported by verification and validation to ensure model fidelity. UQ finds critical applications in fields such as aerospace engineering for robust aircraft design, climate and weather modeling for probabilistic forecasts, and healthcare for patient-specific simulations in digital twins and clinical trials. In computational fluid dynamics, it enhances simulations of turbulence and chemical kinetics by bounding prediction errors, while in materials engineering, it supports qualification of new alloys under uncertain conditions. By providing confidence intervals and sensitivity insights, UQ mitigates risks in high-stakes decisions, from policy-making to the optimization of energy systems, ultimately fostering trust in computational predictions.

Sources of Uncertainty

Aleatoric Uncertainty

Aleatoric uncertainty, also known as statistical or irreducible uncertainty, arises from the inherent randomness in physical processes or data-generating mechanisms that cannot be eliminated through additional observations or improved modeling. This type of uncertainty reflects fundamental variability, such as noise in measurements or environmental fluctuations, and is modeled using probabilistic distributions to capture the stochastic nature of the system. Common examples illustrate this randomness: the unpredictable outcome of a coin flip, where the probability is fixed at 50% heads or tails regardless of repeated trials; the irregular timing of radioactive decay events, governed by Poisson statistics; or sensor noise in engineering systems, which introduces variability due to thermal or quantum effects. In natural systems, weather patterns exemplify aleatoric uncertainty through chaotic atmospheric dynamics that lead to unpredictable short-term variations, even with near-perfect initial conditions. These cases highlight how aleatoric uncertainty represents true stochasticity rather than limitations in knowledge. A key distinction is that aleatoric uncertainty remains invariant even with exhaustive data or perfect system knowledge, in contrast to epistemic uncertainty, which diminishes as ignorance is resolved. Mathematically, it is frequently incorporated into predictive models as an error term, such as in the linear form
y = f(\mathbf{x}) + \epsilon,
where y is the observed output, f(\mathbf{x}) is the deterministic function of inputs \mathbf{x}, and \epsilon \sim \mathcal{N}(0, \sigma^2) denotes Gaussian noise representing the irreducible variability, with \sigma^2 quantifying the dispersion. This formulation allows the uncertainty to be parameterized directly within the likelihood function.
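To make the distinction concrete, the following minimal Python sketch (illustrative only; the linear model, noise level, and sample size are assumptions, not drawn from any particular study) simulates data from y = f(x) + ε with Gaussian noise and shows that collecting data sharpens the estimate of σ, while σ itself, being aleatoric, never shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical deterministic response; any known model would do here.
    return 2.0 * x + 1.0

sigma_true = 0.5                                        # irreducible (aleatoric) noise level
x = rng.uniform(0.0, 1.0, size=10_000)
y = f(x) + rng.normal(0.0, sigma_true, size=x.size)    # y = f(x) + eps

# With f known, the residuals expose only the aleatoric part; their standard
# deviation estimates sigma, but sigma itself does not decrease with more data.
residuals = y - f(x)
print("estimated sigma:", residuals.std(ddof=1))        # close to 0.5
```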
The concept of irreducible uncertainty due to random processes originated in statistical physics during the early 20th century, with foundational probabilistic models for phenomena like Brownian motion developed by Albert Einstein in 1905. The term "aleatoric" derives from the Latin aleator, meaning "dice player," emphasizing its roots in chance, and was formalized in modern uncertainty quantification within risk and reliability analysis, notably through the 1996 guest editorial by Helton and Burmaster on treating aleatory and epistemic uncertainties in complex system assessments.

Epistemic Uncertainty

Epistemic uncertainty refers to the type of uncertainty that originates from a lack of knowledge about the underlying fundamentals of a system, such as incomplete models, unknown parameters, or insufficient data, and is characterized by its potential reducibility through additional information or experimentation. This contrasts with aleatoric uncertainty, which arises from inherent variability and cannot be reduced by further observation. The term "epistemic uncertainty" was adopted in the risk analysis literature during the 1990s to explicitly differentiate reducible knowledge gaps from irreducible randomness. In practice, epistemic uncertainty manifests in scenarios where critical aspects of a system remain poorly understood, such as the omission of specific physical laws in computational simulations, which leads to incomplete predictive models, or the reliance on sparse calibration datasets in engineering applications, resulting in broad parameter ranges. For example, in hurricane risk assessments, limited knowledge of how climate change alters wind speed distributions introduces epistemic gaps that affect reliability estimates. Similarly, in fault tree analyses for system reliability, insufficient failure rate data for rare events exemplifies epistemic ignorance about component behaviors. Mathematically, epistemic uncertainty is frequently modeled within Bayesian frameworks as the variability in unknown model parameters \theta, encapsulated by the posterior distribution p(\theta \mid \text{data}), which reflects the updated belief about \theta after incorporating observed data and prior knowledge. This posterior distribution serves as a quantitative measure of ignorance, allowing for the propagation of parameter uncertainty through the model to assess overall predictive confidence. Strategies to mitigate epistemic uncertainty focus on knowledge acquisition and model refinement, including the design of additional experiments to collect targeted data that tighten parameter estimates or the development of surrogate models to approximate complex systems and thereby narrow confidence intervals. Surrogate modeling, for instance, enables efficient exploration of parameter spaces in high-fidelity simulations, progressively reducing epistemic gaps by interpolating between known points without exhaustive computations.
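As a hedged illustration of this reducibility, the sketch below uses a conjugate Beta-Bernoulli model (a stand-in example, not a method prescribed above): the posterior over an unknown rate θ narrows as observations accumulate, which is exactly what distinguishes epistemic from aleatoric uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 0.3                          # unknown quantity we are initially ignorant about
data = rng.binomial(1, theta_true, size=1000)

# Conjugate Beta(1, 1) prior; the posterior Beta(1 + k, 1 + n - k) encodes
# epistemic uncertainty about theta and narrows as more data arrive.
for n in (10, 100, 1000):
    k = data[:n].sum()
    a, b = 1 + k, 1 + n - k
    post_mean = a / (a + b)
    post_std = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    print(f"n={n:4d}: posterior mean {post_mean:.3f}, posterior std {post_std:.3f}")
```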

Types of Uncertainty Quantification Problems

Forward Problems

Forward uncertainty quantification addresses the propagation of uncertainties from input variables through a computational model to determine the resulting variability in the output quantity of interest. Given uncertain inputs distributed according to X \sim p(X), the task is to characterize the induced output distribution Y = f(X) \sim p(Y), where f represents the forward model mapping inputs to outputs. This process assumes the model f is known and fixed, concentrating exclusively on the input-to-output uncertainty mapping without inferring model parameters. The primary goal in forward problems is to quantify key statistical features of Y, such as its mean, variance, or full probability density function (PDF), to assess output reliability under input variability. Input uncertainties may arise from aleatoric sources, reflecting inherent randomness such as stochastic processes, or epistemic sources, stemming from incomplete knowledge such as measurement errors. A fundamental mathematical tool here is the law of total variance, which decomposes the output variance as \operatorname{Var}(Y) = \mathbb{E}[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(\mathbb{E}[Y \mid X]), highlighting how input variability contributes to overall output uncertainty and enabling sensitivity analysis. In practice, forward uncertainty quantification is essential in engineering applications, such as propagating uncertainties in material properties through structural simulations to estimate failure probabilities and inform design robustness. For instance, variations in Young's modulus or yield strength due to manufacturing tolerances can be propagated through finite element models to predict stress distributions and safety margins, ensuring predictions account for real-world variability without over- or underestimating risks.
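A minimal forward-propagation sketch in Python, assuming a hypothetical cantilever-beam deflection model and an illustrative distribution for Young's modulus (all numbers are made up for demonstration), shows how sampling an uncertain input and pushing it through a fixed model f yields output statistics and an exceedance probability.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical forward model: tip deflection of a cantilever beam,
# delta = P * L^3 / (3 * E * I), with uncertainty only in Young's modulus E.
P, L, I = 1_000.0, 2.0, 8.0e-6          # load [N], length [m], second moment of area [m^4]

def deflection(E):
    return P * L**3 / (3.0 * E * I)

# Illustrative manufacturing variability in E (~5% coefficient of variation).
E_samples = rng.normal(200e9, 10e9, size=100_000)
Y = deflection(E_samples)

print("mean deflection [m]:", Y.mean())
print("std  deflection [m]:", Y.std(ddof=1))
print("P(deflection > 1.8 mm):", (Y > 1.8e-3).mean())   # exceedance probability
```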

Inverse Problems

Inverse problems in uncertainty quantification involve inferring unknown model parameters \theta or systematic biases b from observed data Y_{\text{obs}} = f(\theta, X) + \varepsilon + b, where f is the forward model, X represents uncertain inputs, \varepsilon denotes random noise, and uncertainties in X and \varepsilon must be explicitly accounted for during inference. This process contrasts with forward problems by reversing the direction of inference, aiming to estimate causes from effects while propagating and amplifying uncertainties backward through the model. Sub-types of inverse problems include bias correction, which focuses on adjusting systematic errors b in the model or data without altering parameters \theta; parameter calibration, which estimates \theta assuming negligible bias; and joint bias-parameter estimation, which simultaneously infers both to address model discrepancies and input uncertainties. These approaches are essential in scenarios where observations are indirect and corrupted, requiring careful separation of aleatoric noise from epistemic model errors. Representative examples include calibrating parameters in climate models, such as those governing convective processes, using historical observational data to reduce projection uncertainties, often via Bayesian frameworks that incorporate priors on parameter ranges. In medical imaging, inverse problems arise in correcting sensor biases, for instance in computed tomography where systematic distortions in reconstruction must be estimated from noisy scans to improve diagnostic reliability. Mathematically, inverse problems are often ill-posed, exhibiting non-existence, non-uniqueness, or instability of solutions under data perturbations, as characterized by Hadamard's criteria of existence, uniqueness, and continuous dependence. This ill-posedness is addressed through regularization techniques, such as the Tikhonov penalty term \lambda \|\theta\|^2 added to the objective function to stabilize estimates by penalizing large parameter values, where \lambda > 0 balances data fit and smoothness. A unique aspect is that inverse problems can amplify input uncertainties, exacerbating errors in propagation; this is particularly evident in the "inverse crime," where using the same numerical model to generate the data and to perform the inversion leads to overly optimistic estimates by masking biases.
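The following sketch illustrates Tikhonov regularization on a toy ill-conditioned linear inverse problem; the design matrix, noise level, and λ values are arbitrary choices made for demonstration, not taken from the sources discussed above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ill-conditioned linear forward model y = A @ theta + noise (hypothetical).
n, p = 50, 20
A = np.vander(np.linspace(0.0, 1.0, n), p, increasing=True)   # nearly collinear columns
theta_true = rng.normal(size=p)
y_obs = A @ theta_true + rng.normal(0.0, 0.01, size=n)

def tikhonov(A, y, lam):
    # Minimize ||A theta - y||^2 + lam * ||theta||^2 via the closed-form normal equations.
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

# Too little regularization amplifies the data noise; a moderate lambda stabilizes the estimate.
for lam in (1e-12, 1e-6, 1e-2):
    theta_hat = tikhonov(A, y_obs, lam)
    err = np.linalg.norm(theta_hat - theta_true) / np.linalg.norm(theta_true)
    print(f"lambda={lam:g}: relative error {err:.3f}")
```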

Methods for Forward Propagation

Sampling-Based Methods

Sampling-based methods for forward propagation involve generating multiple realizations, or samples, from the probability distributions of uncertain input parameters and propagating them through a computational model to empirically approximate the resulting output statistics and distributions. These techniques treat the model as a black-box function, making them versatile for complex, nonlinear systems where analytical solutions are intractable. By simulating the propagation of input variability, sampling methods provide estimates of moments such as the mean and variance of the output, as well as full probability density functions or confidence intervals. Among the key sampling-based approaches, Monte Carlo (MC) simulation employs simple random sampling from the input distributions, where each sample is independently drawn to represent the stochastic nature of the inputs. Latin hypercube sampling (LHS) enhances efficiency by stratifying the input space into equally probable intervals and sampling once from each, ensuring better coverage of the input space with fewer evaluations than pure random sampling; this method was introduced by McKay, Conover, and Beckman in 1979 for analyzing computer model outputs. Quasi-Monte Carlo (QMC) methods use deterministic low-discrepancy sequences, such as Sobol' or Halton sequences, to generate samples that are more uniformly distributed than random ones, reducing clustering and improving integration accuracy in multidimensional spaces. Mathematically, the estimator for the mean of an output quantity Y = f(X), where X are the random inputs, is given by \hat{\mu}_Y = \frac{1}{N} \sum_{i=1}^N y_i, \quad y_i = f(x_i), with x_i drawn from the input distribution; this unbiased estimator converges to the true mean \mathbb{E}[Y] as N \to \infty. To reduce the estimator's variance, techniques like control variates can be applied by incorporating a correlated auxiliary variable Z with known mean \mathbb{E}[Z], yielding the adjusted estimator \hat{\mu}_Y^{CV} = \hat{\mu}_Y + b \left( \mathbb{E}[Z] - \hat{\mu}_Z \right), where the optimal coefficient b = \frac{\mathrm{Cov}(Y, Z)}{\mathrm{Var}(Z)} minimizes variance and is often estimated from the samples. The convergence rate of standard MC is O(1/\sqrt{N}) in terms of root-mean-square error, independent of dimensionality but slow for high-precision needs. In contrast, quasi-Monte Carlo achieves faster convergence, approximately O((\log N)^d / N) in d dimensions for smooth integrands, outperforming MC particularly in low to moderate dimensions (e.g., d \leq 10) by exploiting the uniformity of low-discrepancy sequences. A representative application is the quantification of uncertainty in wind turbine power output, where sampling draws wind speeds from a Weibull distribution (e.g., shape parameter 2, scale 10 m/s) and feeds them into the power curve model P = 0.5 \rho A v^3 C_p (with air density \rho, rotor area A, and power coefficient C_p); simulations with N = 10^4 samples can quantify the variability in annual energy production, revealing standard deviations around 9-15% due to wind variability. The Monte Carlo method originated in the 1940s during the Manhattan Project at Los Alamos, where it was developed by Stanislaw Ulam, John von Neumann, and others to simulate neutron diffusion in nuclear fission processes, enabling solutions to previously intractable probabilistic problems in physics.
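A short Python sketch of the wind-power example, under the Weibull assumptions stated above (shape 2, scale 10 m/s) and an illustrative rotor area and power coefficient, compares plain Monte Carlo draws with a one-dimensional Latin hypercube construction; it demonstrates the sampling mechanics only and is not a validated turbine model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Power curve P = 0.5 * rho * A * v^3 * Cp (values below are illustrative).
rho, A, Cp = 1.225, 5027.0, 0.45           # air density, rotor area (~40 m radius), power coeff.

def power(v):
    return 0.5 * rho * A * Cp * v**3       # [W]

shape, scale, N = 2.0, 10.0, 10_000        # Weibull wind-speed model from the text

def weibull_ppf(u):
    # Inverse CDF of the Weibull distribution (for transforming uniform draws).
    return scale * (-np.log1p(-u)) ** (1.0 / shape)

# Plain Monte Carlo: simple random draws.
v_mc = weibull_ppf(rng.uniform(size=N))

# Latin hypercube sampling in one dimension: one draw per equal-probability stratum,
# then shuffled (the shuffle matters when columns for several inputs are combined).
u_lhs = (np.arange(N) + rng.uniform(size=N)) / N
v_lhs = weibull_ppf(rng.permutation(u_lhs))

for name, v in [("MC ", v_mc), ("LHS", v_lhs)]:
    P = power(v)
    print(f"{name}: mean power {P.mean()/1e6:.2f} MW, std {P.std(ddof=1)/1e6:.2f} MW")
```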

Expansion-Based Methods

Expansion-based methods in uncertainty quantification approximate the model output as a series expansion in terms of the input random variables, enabling the analytical computation of uncertainty metrics such as means, variances, and higher-order moments without extensive simulations. These approaches construct surrogate models that capture the statistical behavior of complex systems, particularly useful for forward propagation in moderate-dimensional problems where direct model evaluations are costly. Key methods include polynomial chaos expansion (PCE), which employs orthogonal polynomials tailored to the input distributions, and Gaussian process regression (GPR), a non-parametric surrogate that models the output as a realization of a Gaussian process to provide probabilistic predictions. In PCE, the output Y is approximated as Y \approx \sum_{k=0}^{P} \alpha_k \Psi_k(\xi), where \Psi_k are multivariate orthogonal chaos basis polynomials, \xi represents the standardized input random variables, and the coefficients \alpha_k are determined via spectral projection, such as \alpha_k = \langle Y, \Psi_k \rangle / \|\Psi_k\|^2, with \langle \cdot, \cdot \rangle denoting the inner product over the input probability measure. GPR, in contrast, posits the output as a sample from a Gaussian process prior, with the posterior providing mean predictions and variance estimates that quantify predictive uncertainty directly. These methods offer significant advantages, including the derivation of closed-form expressions for statistical moments—for instance, the variance of Y in PCE is \text{Var}(Y) = \sum_{k=1}^{P} \alpha_k^2 \|\Psi_k\|^2—and high efficiency for repeated evaluations once the surrogate is built. PCE is particularly effective for smooth responses, converging exponentially in the polynomial order for analytic functions, while GPR excels in providing calibrated uncertainty bands even with limited training points. Sampling-based methods can complement these expansions for validation in high-dimensional cases. An illustrative example is the application of PCE to uncertainty propagation in computational fluid dynamics simulations, such as plane Poiseuille flow where the kinematic viscosity is treated as a random variable following a Gaussian distribution; the expansion quantifies variations in velocity profiles and flow rates with far fewer model evaluations than sampling. The foundational concept of PCE traces back to Wiener's 1938 introduction of homogeneous chaos expansions using Hermite polynomials for Gaussian processes, later generalized in the early 2000s through the Askey scheme to accommodate non-Gaussian inputs via families of orthogonal polynomials like Legendre or Jacobi bases.
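As a minimal sketch of non-intrusive PCE, the code below projects an illustrative one-dimensional function of a standard-normal input onto probabilists' Hermite polynomials via Gauss-Hermite quadrature and recovers the mean and variance from the coefficients; the test function and truncation order are arbitrary choices, not from the cited studies.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# Illustrative model output as a function of a standard-normal input xi.
def g(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

P = 6                                                  # truncation order
nodes, weights = He.hermegauss(30)                     # Gauss-Hermite rule (probabilists' weight)
weights = weights / sqrt(2.0 * pi)                     # normalize to the N(0, 1) measure

# Spectral projection: alpha_k = <g, He_k> / ||He_k||^2, with ||He_k||^2 = k!
alpha = np.array([
    np.sum(weights * g(nodes) * He.hermeval(nodes, np.eye(P + 1)[k])) / factorial(k)
    for k in range(P + 1)
])

mean_pce = alpha[0]
var_pce = np.sum(alpha[1:] ** 2 * np.array([factorial(k) for k in range(1, P + 1)]))
print("PCE mean:    ", mean_pce)
print("PCE variance:", var_pce)

# Cross-check against plain Monte Carlo sampling.
xi = np.random.default_rng(5).normal(size=200_000)
print("MC  mean:    ", g(xi).mean())
print("MC  variance:", g(xi).var())
```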

Methods for Inverse Uncertainty Quantification

Frequentist Approaches

Frequentist approaches to uncertainty quantification in inverse problems treat model parameters as fixed but unknown quantities, relying solely on observed data to construct confidence intervals and regions that achieve desired long-run coverage probabilities without incorporating prior distributions. These methods emphasize the repeated-sampling performance of estimators over hypothetical ensembles of datasets, providing data-driven bounds on parameter values that reflect the variability inherent in the data-generating process. Key methods include least-squares estimation combined with bootstrap resampling for variance assessment and profile likelihood for deriving confidence regions. In least-squares estimation, parameters are obtained by minimizing the sum of squared residuals between observed and predicted data, with the bootstrap then used to quantify the sampling variability of these estimates. Profile likelihood refines this by profiling out nuisance parameters, maximizing the likelihood over subsets while fixing parameters of interest to form contours that delineate uncertainty. The bootstrap method approximates the standard error of an estimator \hat{\theta} through resampling: generate B bootstrap samples by drawing with replacement from the original data, compute resampled estimates \hat{\theta}^*_b for b = 1, \dots, B, and estimate \text{SE}(\hat{\theta}) \approx \mathrm{std}(\hat{\theta}^*_b), where \mathrm{std} denotes the sample standard deviation. This nonparametric technique, introduced by Efron in 1979, enables uncertainty estimation for complex statistics without assuming a specific parametric form for the sampling distribution. For maximum likelihood estimators, asymptotic normality provides another foundation: under regularity conditions, \sqrt{n} (\hat{\theta} - \theta) \xrightarrow{d} N(0, I^{-1}(\theta)), where n is the sample size and I(\theta) is the Fisher information matrix, justifying approximate confidence intervals via the inverse of the observed information. In uncertainty quantification applications, these methods support bias correction through generalized least squares, which weights observations to account for heteroscedasticity and correlation, yielding more efficient estimators for parameter calibration in models with noisy inputs. They also provide uncertainty bounds for calibrated parameters, such as in dynamical systems where profile likelihood contours ensure reliable coverage on identifiable parameter subsets. Frequentist approaches assume large sample sizes for asymptotic approximations to hold and can struggle with multimodal likelihood surfaces, where multiple local maxima complicate the identification of global confidence regions.
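A minimal sketch of the bootstrap standard-error recipe described above, applied to the slope of a toy least-squares fit (the synthetic data and the number of resamples B are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic data from a hypothetical linear model y = a + b*x + noise.
n = 200
x = rng.uniform(0.0, 1.0, size=n)
y = 1.0 + 2.5 * x + rng.normal(0.0, 0.3, size=n)

def fit_slope(x, y):
    # Least-squares estimate of the slope b.
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

b_hat = fit_slope(x, y)

# Nonparametric bootstrap: resample (x, y) pairs with replacement B times.
B = 2000
b_star = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    b_star[b] = fit_slope(x[idx], y[idx])

se = b_star.std(ddof=1)
print(f"slope estimate {b_hat:.3f}, bootstrap SE {se:.3f}")
print(f"approx. 95% CI: ({b_hat - 1.96*se:.3f}, {b_hat + 1.96*se:.3f})")
```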

Bayesian Approaches

Bayesian approaches to uncertainty quantification in inverse problems rely on Bayes' theorem to update beliefs about model parameters given observed data, thereby providing a probabilistic framework for estimating parameter distributions that incorporate both aleatoric uncertainty—arising from inherent variability or noise in the likelihood—and epistemic uncertainty—stemming from incomplete knowledge of the parameters, captured through the prior and posterior spread. The posterior distribution over θ given Y is defined as p(θ|Y) ∝ p(Y|θ) p(θ), where p(Y|θ) is the likelihood reflecting aleatoric sources and p(θ) encodes epistemic prior beliefs, enabling full propagation of uncertainties to predictions via marginalization over θ. A primary method for computing the posterior is Markov chain Monte Carlo (MCMC), which generates samples from p(θ|Y) even when direct evaluation is intractable. The Metropolis-Hastings algorithm, a cornerstone of MCMC, proposes candidate states θ' from a proposal distribution and, for a symmetric proposal, accepts them with probability α = min(1, [p(θ') L(θ')] / [p(θ) L(θ)]), ensuring the chain converges to the target posterior; here, L(θ) denotes the likelihood p(Y|θ). For joint estimation of model bias b (e.g., due to structural inadequacies) and parameters θ, hierarchical Bayesian models extend this by specifying p(b, θ|Y) ∝ p(Y|b, θ) p(b|θ) p(θ), allowing simultaneous quantification of parametric and model-form uncertainties. When MCMC is computationally prohibitive, variational inference approximates the posterior by optimizing a simpler distribution q(θ) from a variational family to minimize the Kullback-Leibler divergence to p(θ|Y), yielding scalable estimates suitable for high-dimensional problems. Advances such as Hamiltonian Monte Carlo (HMC; originally proposed in 1987) enhance sampling efficiency by simulating Hamiltonian dynamics to propose distant, low-rejection moves, facilitating exploration of complex posteriors and enabling full predictive distributions that distinguish irreducible aleatoric variability from reducible epistemic uncertainty—a key advantage over frequentist methods. Recent developments as of 2025 incorporate deep learning techniques, such as variational encoder-decoder networks, for data-driven solutions to large-scale inverse problems with efficient uncertainty propagation. An illustrative application is the Bayesian calibration of susceptible-infected-recovered (SIR) epidemiological models, where priors on the infection rate β and recovery rate γ are updated with incidence data to quantify uncertainties in transmissibility, as demonstrated in analyses of influenza-like pathogens that reveal posterior intervals for β reflecting both measurement noise and parameter ignorance.
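The sketch below implements a random-walk Metropolis sampler for a deliberately simple problem (normal likelihood with unknown mean, normal prior, synthetic data); it is meant only to show the accept/reject mechanics in log space, not a full SIR calibration, and the step size, chain length, and burn-in are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic observations from a hypothetical model y ~ N(theta, 1); prior theta ~ N(0, 10^2).
y = rng.normal(1.5, 1.0, size=30)

def log_post(theta):
    log_lik = -0.5 * np.sum((y - theta) ** 2)     # Gaussian likelihood (up to constants)
    log_prior = -0.5 * theta**2 / 10.0**2         # epistemic prior on theta
    return log_lik + log_prior

# Random-walk Metropolis: symmetric proposal, accept with prob min(1, posterior ratio).
n_iter, step = 20_000, 0.5
chain = np.empty(n_iter)
theta, lp = 0.0, log_post(0.0)
for i in range(n_iter):
    prop = theta + rng.normal(0.0, step)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # acceptance test in log space
        theta, lp = prop, lp_prop
    chain[i] = theta

samples = chain[5000:]                            # discard burn-in
print("posterior mean:", samples.mean())
print("posterior 95% credible interval:", np.percentile(samples, [2.5, 97.5]))
```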

Applications of Uncertainty Quantification

Engineering and Risk Assessment

In engineering, uncertainty quantification (UQ) plays a pivotal role in ensuring robust designs by characterizing variabilities in materials, loads, and processes to estimate risks and support reliability analysis. This involves distinguishing aleatory uncertainties, which are inherent and irreducible, from epistemic uncertainties due to limited knowledge, enabling engineers to propagate these through models and predict outcomes such as structural failure probabilities. By integrating UQ, designs achieve targeted reliability levels while minimizing excessive safety margins that could increase costs. In aerospace applications, UQ propagates manufacturing tolerances and variabilities through wing models, accounting for uncertainties in material microstructures and aerodynamic loads to evaluate structural reliability under operational conditions. For instance, forward uncertainty propagation techniques assess how input variabilities affect stress and load distributions, informing design and certification processes. In nuclear engineering, UQ quantifies uncertainties in parameters such as nuclear data and thermal-hydraulic models, enabling probabilistic risk assessments for accident scenarios and ensuring containment integrity. These analyses help identify dominant uncertainty sources and prioritize safety enhancements. Engineering workflows often combine forward and inverse UQ methods for comprehensive risk assessment; Monte Carlo simulations propagate input uncertainties to construct risk curves that visualize failure probabilities across operating ranges, while Bayesian approaches calibrate models against experimental data to update posteriors and refine epistemic uncertainties. This integration enhances decision-making in design and qualification. Standards like ASCE 7 explicitly incorporate UQ-derived uncertainties into load factors for wind and seismic effects, targeting specific reliability indices to balance safety and efficiency, thereby reducing over-conservatism relative to purely deterministic methods. The 1986 Challenger Space Shuttle disaster exemplifies the consequences of inadequate UQ, where epistemic gaps in O-ring erosion data under low-temperature conditions led to underestimation of failure risks, as the limited number of prior flights provided insufficient statistical evidence for reliable extrapolation. Bayesian analyses of pre-launch data later demonstrated how better uncertainty characterization could have flagged the high probability of joint failure. In the 2020s, surrogate models have advanced real-time UQ for autonomous vehicles, emulating physics-based propagations of sensor and environmental uncertainties to support safe planning and decision-making in dynamic scenarios.

Scientific and Environmental Modeling

Uncertainty quantification (UQ) plays a crucial role in validating scientific and environmental models by propagating uncertainties from inputs, such as emissions scenarios or initial conditions, through simulations to assess the reliability of predictions. In climate modeling, for instance, UQ helps evaluate how variations in forcing scenarios or initial atmospheric conditions affect global temperature projections, enabling modelers to distinguish between aleatory and epistemic uncertainties. This propagation process ensures that model outputs, such as projected future climate states, are accompanied by uncertainty bounds that reflect input variability, thereby enhancing the credibility of simulations used for long-term planning. Prominent examples of UQ in scientific modeling include climate projections, where the Intergovernmental Panel on Climate Change (IPCC) employs multi-model ensembles to quantify ranges in global temperature increases, accounting for uncertainties in emissions scenarios and internal climate variability. In environmental hydrology, UQ addresses parameter uncertainties in groundwater models, such as hydraulic conductivity variations, to predict contaminant transport or recharge with probabilistic outputs that inform water resource management. These applications demonstrate how UQ integrates observational data with model physics to produce robust estimates, avoiding overconfidence in deterministic simulations. Methods like polynomial chaos expansion (PCE) are integrated for efficient forward uncertainty propagation in high-fidelity environmental simulations, such as atmospheric or ocean circulation models, where PCE surrogates approximate the impact of input uncertainties on outputs without exhaustive sampling. Bayesian inverse methods complement this by facilitating data assimilation in environmental modeling, updating model parameters with observations to reduce posterior uncertainties in forecasts such as pollutant dispersion or ecosystem dynamics. UQ analyses have also revealed structural deficits in models; in climate simulations, for example, epistemic uncertainties related to model form have been found to dominate over parametric ones, highlighting limitations in representing feedbacks or subgrid dynamics. The impact of UQ extends to policy-making, as seen in sea-level rise forecasts that provide likely ranges (17th–83rd percentiles)—such as projections of 0.63–1.32 meters by 2100 under high-emission scenarios—to guide coastal adaptation strategies and infrastructure planning. Emerging approaches in the 2020s combine UQ with machine learning for scalable environmental forecasts, using hybrid models to emulate uncertainties in observational data for applications such as extreme-event prediction or risk assessment, thereby addressing high-dimensional challenges in real-time simulations.

Challenges in Uncertainty Quantification

Computational and Scalability Issues

Uncertainty quantification (UQ) methods, particularly sampling-based approaches like Monte Carlo (MC) simulation, often demand a large number of model evaluations—typically on the order of 10^6 or more—to achieve reliable statistical accuracy, rendering them computationally prohibitive for expensive forward models such as those in computational fluid dynamics or climate simulations. This high demand arises because standard MC requires samples scaling as O(1/ε²) for an accuracy of ε in the estimated mean, leading to infeasible runtimes when each evaluation involves complex numerical solvers that may take hours or days. A primary challenge is the curse of dimensionality, where the number of samples or grid points needed to resolve the input space grows exponentially with the input dimension d, becoming intractable for d > 10 as the volume of the parameter space explodes. In inverse UQ problems, this is compounded by variance explosion in posterior estimators, where ill-posedness and sparse data amplify uncertainties, necessitating even more samples to stabilize variance in Bayesian updates or optimization-based calibration. To mitigate these issues, techniques such as adaptive sampling refine sample placement based on local error estimates, multi-fidelity models leverage hierarchies of cheaper approximations to inform high-fidelity runs, and variance reduction methods like importance sampling reweight samples to focus on high-probability regions, achieving up to orders-of-magnitude efficiency gains over plain MC. For instance, multifidelity importance sampling integrates surrogate models to correct low-fidelity biases, reducing the effective number of high-fidelity evaluations while preserving statistical accuracy. Additionally, Sobol indices provide a mathematical framework for dimension reduction by decomposing output variance into contributions from individual inputs and interactions, enabling the identification and pruning of non-influential variables to lower effective dimensionality. The first-order Sobol index for input X_i is defined as S_i = \frac{\operatorname{Var}_{X_i}(\mathbb{E}[Y|X_i])}{\operatorname{Var}(Y)}, where Y is the model output, allowing prioritization of key parameters in high-dimensional UQ. In computational fluid dynamics (CFD) applications, surrogate models such as Gaussian processes or polynomial chaos expansions integrated into UQ workflows can cut runtime by half or more by approximating expensive simulations, yet challenges persist in scaling to fully turbulent, multiphysics cases where surrogate training itself demands substantial resources. Post-2020 research has explored quantum computing for UQ scalability, proposing algorithms such as quantum-accelerated Monte Carlo that promise substantial speedups in sampling high-dimensional distributions, though practical implementations remain immature due to current hardware limitations in qubit coherence and error rates.
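A minimal pick-freeze Monte Carlo estimator of first-order Sobol indices is sketched below for a toy additive model with known answers (S_1 = 0.2, S_2 = 0.8); the model and sample size are illustrative, and production use would rely on a dedicated sensitivity-analysis library.

```python
import numpy as np

rng = np.random.default_rng(8)

def model(X):
    # Toy model with known first-order Sobol indices S1 = 0.2, S2 = 0.8.
    return X[:, 0] + 2.0 * X[:, 1]

N, d = 100_000, 2
A = rng.uniform(size=(N, d))          # two independent sample matrices
B = rng.uniform(size=(N, d))
yA = model(A)
f0 = yA.mean()
V = yA.var()

for i in range(d):
    C = B.copy()
    C[:, i] = A[:, i]                 # "pick-freeze": share column i with A
    yC = model(C)
    # E[yA * yC] - f0^2 estimates Var(E[Y | X_i]) for independent inputs.
    S_i = (np.mean(yA * yC) - f0**2) / V
    print(f"S_{i+1} ~ {S_i:.3f}")
```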

Handling Model and Data Uncertainties

In uncertainty quantification (UQ), challenges extend beyond computational demands to include inadequacies in model formulation and data, such as unmodeled physical processes or biases in datasets, that lead to incomplete representations of reality. These difficulties often arise from epistemic uncertainty, which captures gaps in knowledge about the underlying system. Model-form uncertainty, for instance, involves selecting an appropriate functional form f for the system, where incorrect choices can propagate errors throughout predictions. Validation with limited or noisy data exacerbates this, frequently resulting in overconfident estimates that underestimate true variability. To address model discrepancy—the difference between the true system and the model's approximation—ensemble methods construct multiple models or parameter sets to capture variability in predictions. These approaches quantify uncertainty by analyzing the spread across ensemble members, providing a measure of model inadequacy without assuming a single best form. Robust UQ techniques further mitigate risks by focusing on worst-case scenarios, bounding predictions under extreme assumptions about model errors to ensure reliability in high-stakes applications. Mathematically, model discrepancy is often represented as an additive term \delta, defined as the difference between observed outcomes Y_{obs} and the expected model prediction: \delta = Y_{obs} - \mathbb{E}[f(\theta, X)], where \theta represents model parameters and X inputs; this \delta is treated as a random process, such as a Gaussian process, to propagate its uncertainty through the system. A prominent example occurs in climate models, where epistemic gaps in representing subgrid-scale physics, such as cloud processes, contribute to prediction uncertainties on the order of 20% in long-term projections. Looking ahead, research in the 2020s emphasizes data-driven UQ integrating physics-informed neural networks (PINNs) to bridge model gaps, embedding physical laws into learning frameworks for more accurate discrepancy handling and uncertainty calibration. These methods enhance traditional approaches by learning from sparse data while respecting governing equations, reducing overconfidence in complex systems.
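As a hedged sketch of the additive-discrepancy formulation above, the code below fits a zero-mean Gaussian process to residuals between noisy synthetic "observations" and a deliberately incomplete simulator; the kernel, length scale, noise level, and toy functions are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical "truth", an imperfect simulator, and noisy observations.
def truth(x):      return np.sin(2.0 * np.pi * x) + 0.3 * x
def simulator(x):  return np.sin(2.0 * np.pi * x)           # misses the 0.3*x trend

x_obs = np.linspace(0.0, 1.0, 15)
y_obs = truth(x_obs) + rng.normal(0.0, 0.05, size=x_obs.size)
delta = y_obs - simulator(x_obs)          # observed model discrepancy delta = Y_obs - f(x)

# Zero-mean GP prior on delta with a squared-exponential kernel.
def kernel(a, b, ell=0.2, var=0.5):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

noise = 0.05**2
K = kernel(x_obs, x_obs) + noise * np.eye(x_obs.size)
x_new = np.linspace(0.0, 1.0, 5)
K_s = kernel(x_new, x_obs)

# GP posterior mean and variance of the discrepancy at new inputs.
alpha = np.linalg.solve(K, delta)
mean = K_s @ alpha
cov = kernel(x_new, x_new) - K_s @ np.linalg.solve(K, K_s.T)

for xi, m, v in zip(x_new, mean, np.diag(cov)):
    print(f"x={xi:.2f}: discrepancy {m:+.3f} +/- {2*np.sqrt(max(v, 0)):.3f}")
```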
