
Morris method

The Morris method is a screening method for global sensitivity analysis in computational models, developed by Max D. Morris in 1991 to identify influential input parameters among many in deterministic systems. It employs a one-factor-at-a-time (OAT) approach, perturbing individual inputs sequentially across multiple randomized trajectories in the input space to capture both main effects and potential interactions or nonlinearities. The procedure begins by scaling the input domain to a k-dimensional unit hypercube discretized into a p-level grid (commonly p=4 or p=8), with perturbations \Delta taken as multiples of 1/(p-1). Elementary effects are then calculated for each input x_i at selected base points \mathbf{x} as finite differences: d_i(\mathbf{x}) = \frac{y(\mathbf{x} + \Delta \cdot \mathbf{e}_i) - y(\mathbf{x})}{\Delta}, where y is the model output and \mathbf{e}_i is the i-th unit vector. For r trajectories (typically r=10 to 20), r effects per input are generated, yielding distributions summarized by the mean \mu_i (or absolute mean \mu_i^* to mitigate sign cancellation) and standard deviation \sigma_i; large \mu_i^* signals overall importance, while elevated \sigma_i highlights sensitivity to interactions or nonlinearity. This method's efficiency—requiring approximately r(k+1) model evaluations for k inputs—makes it ideal for initial screening in high-dimensional models where full variance-based analysis is prohibitive. It assumes neither monotonicity nor additivity of the model, enabling robust application across diverse domains including environmental simulations, engineering optimization, and biomedical modeling for uncertainty quantification.

Introduction

Definition and Purpose

The Morris method is a global sensitivity analysis technique that utilizes randomized one-factor-at-a-time (OAT) designs to assess the influence of individual input factors on the output of deterministic computational models. It computes elementary effects—finite differences in model output resulting from small perturbations to a single input while holding others fixed—at multiple sampled points across the input space, providing a distribution of effects for each factor. This approach allows for the evaluation of factor importance without assuming additivity, monotonicity, or specific functional forms of the model. The primary purpose of the Morris method is to screen and rank input factors by their overall influence in preliminary computational experiments, particularly for complex models with a moderate to large number of inputs. By analyzing the mean and standard deviation of elementary effects, it identifies non-influential parameters that can be fixed or simplified to reduce model complexity, while also flagging factors that exhibit nonlinear behavior or interactions with others. This makes it valuable in applied science and modeling fields, such as environmental simulation and engineering design, where understanding key drivers is essential before more detailed analyses. Compared to variance-based global sensitivity methods like Sobol indices, the Morris method is notably efficient for high-dimensional problems, requiring only on the order of r(k+1) model evaluations—where r is the number of sampling trajectories and k is the number of inputs—versus thousands or more for Sobol approaches, enabling rapid screening even for computationally expensive models.

Historical Development

The Morris method was introduced by Max D. Morris in 1991 in his seminal paper "Factorial Sampling Plans for Preliminary Computational Experiments," published in Technometrics, where it was presented as an efficient one-at-a-time (OAT) screening technique for preliminary identification of influential input factors in complex computational models with many variables. Key refinements emerged in subsequent years to enhance the method's robustness, particularly in handling nonlinearities and interactions. In 2007, Francesca Campolongo, Jessica Cariboni, and Andrea Saltelli developed an improved screening design that introduced the μ* measure, which mitigates the cancellation of opposing elementary effects in the original mean metric, providing a more reliable indicator of factor importance. This update built directly on Morris's framework while improving its discriminatory power for large-scale models. The method's integration into comprehensive sensitivity analysis practices was advanced by Andrea Saltelli and colleagues in their 2004 book Sensitivity Analysis in Practice: A Guide to Assessing Scientific Models, which positioned the approach as a computationally efficient complement to variance-based global methods, emphasizing its role in model screening and uncertainty propagation. During the 2000s and 2010s, the method gained widespread adoption in environmental and engineering applications, valued for its low computational cost in screening models with dozens of parameters, such as those simulating ecological systems or hydrological processes. By 2025, it has become a standard feature in open-source and commercial software, including Python's SALib library for global sensitivity analysis and MATLAB's SAFE Toolbox, enabling seamless implementation across interdisciplinary research.

Background Concepts

Sensitivity Analysis Overview

Sensitivity analysis is a technique used to quantify the relationship between uncertainties in model inputs and the resulting variations in model outputs, providing insights into how input parameters influence system behavior. This approach is fundamental for understanding uncertainty propagation, validating model structures, and supporting informed decision-making across disciplines such as environmental science and engineering. In these fields, it enables practitioners to trace how input uncertainties—often represented by probability distributions—affect predictions, such as in hydrological models or structural reliability assessments. Sensitivity analysis methods are broadly categorized into local and global types. Local methods evaluate sensitivity by examining the effect of small changes in inputs around a specific nominal point, typically using partial derivatives to approximate the output response. In contrast, global methods assess sensitivity across the entire input space, accounting for parameter interactions, nonlinear effects, and full input distributions, which makes them suitable for complex, nonlinear models. Within global approaches, variance-based techniques, such as those developed by Sobol, decompose the total output variance to attribute contributions from individual inputs and their interactions. Screening methods, exemplified by one-at-a-time (OAT) approaches, focus on rapidly identifying influential parameters by varying inputs sequentially while holding others constant. The importance of sensitivity analysis lies in its ability to identify key input parameters that drive output uncertainty, thereby reducing model complexity and enhancing robustness against input variations. By prioritizing efforts on critical factors and revealing model deficiencies, it aids in model refinement and validation processes. Prerequisites for conducting sensitivity analysis include a basic understanding of probability theory to define input distributions and appropriate metrics for evaluating model outputs, ensuring that analyses reflect realistic scenarios.

One-at-a-Time vs. Global Methods

One-at-a-time (OAT) methods in sensitivity analysis involve systematically perturbing a single input factor while holding all others constant at nominal values, allowing for the direct assessment of individual effects on model outputs. These approaches are computationally inexpensive, typically requiring a number of model evaluations linear in the number of inputs (e.g., 2k evaluations for k inputs in basic designs), and are exemplified by traditional OAT experimental designs or simple derivative-based analyses. However, OAT methods are limited in their ability to detect interactions between factors or nonlinear behaviors unless perturbations are repeated extensively across the input space, often leading to incomplete insights in complex models. In contrast, global methods explore the entire range of input factors simultaneously, accounting for their distributions, variances, and interactions to provide a more comprehensive evaluation of sensitivity. Techniques such as variance-based approaches, including Sobol indices, decompose output variance into contributions from individual factors and their interactions using strategies like Monte Carlo sampling, which can require thousands of model runs (e.g., N(k+2) evaluations, where N is often 1000 or more for reliable estimates). While these methods capture higher-order effects and non-monotonic relationships effectively, their high computational demand makes them impractical for initial screening in models with many inputs. The Morris method occupies a niche as an OAT-based screening technique, achieving broader coverage than traditional OAT by generating multiple randomized trajectories across the input space to approximate overall factor influences and detect potential nonlinearities or interactions. This design requires r(k+1) evaluations (with r trajectories), offering a balance of efficiency and informativeness suitable for identifying key factors before applying more resource-intensive methods like Sobol analysis. Nonetheless, it may overlook subtle higher-order interactions, positioning it primarily as a preliminary tool rather than a substitute for full variance-based assessment.
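To make the budget comparison concrete, a back-of-the-envelope calculation in Python (the values of k, r, and N below are illustrative, not prescribed):

    # Evaluation budgets for a model with k = 20 inputs (illustrative numbers).
    k = 20

    # Morris screening: r trajectories of k + 1 points each.
    r = 10
    morris_runs = r * (k + 1)   # 210 model evaluations

    # Variance-based Sobol analysis via a typical N * (k + 2) scheme,
    # with N = 1000 or more for stable index estimates.
    N = 1000
    sobol_runs = N * (k + 2)    # 22,000 model evaluations

    print(morris_runs, sobol_runs)

For an expensive simulator, the two orders of magnitude separating these budgets are exactly why Morris screening is run first.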

Mathematical Formulation

Elementary Effects

The elementary effect in the Morris method quantifies the sensitivity of a model's output to a small perturbation in a single input factor, while keeping all other inputs constant. For a model y = f(\mathbf{x}), where \mathbf{x} = (x_1, \dots, x_k) is the vector of k input factors and y is the scalar output, the elementary effect EE_i(\mathbf{x}) for the i-th input factor x_i is defined as EE_i(\mathbf{x}) = \frac{f(\mathbf{x} + \Delta \mathbf{e}_i) - f(\mathbf{x})}{\Delta}, where \Delta is a finite increment, \mathbf{e}_i is the i-th unit vector in \mathbb{R}^k, and \mathbf{x} + \Delta \mathbf{e}_i must remain within the input domain \Omega. This one-at-a-time perturbation captures the local gradient-like change in the output attributable to x_i, serving as the foundational unit for global sensitivity assessment. The perturbation size \Delta is typically set as \Delta = \frac{p}{2(p-1)} for even p, which is a multiple of the grid spacing \frac{1}{p-1}, ensuring that the perturbed point stays on the grid. The input domain \Omega is often discretized as a k-dimensional p-level grid within the unit hypercube [0, 1]^k, with factor values at \{0, \frac{1}{p-1}, \dots, 1\}, to approximate continuous inputs or directly represent discrete ones. Elementary effects are computed at multiple randomly selected base points \mathbf{x} across \Omega, generating a sample from the distribution F_i of EE_i values for each factor i; this sampling, often repeated r times per factor, accounts for nonlinearity and interactions by exploring variability in the effects across the input space. The method assumes that inputs can be scaled to the unit hypercube and discretized on the grid for evaluation, making it applicable to both continuous distributions (via grid approximation) and inherently discrete factors. While the original formulation addresses scalar-valued models, the elementary effect concept extends naturally to vector-valued outputs by applying the definition component-wise to each output dimension.
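The definition translates directly into code. A minimal sketch computing one elementary effect for a hypothetical test function f; the function, base point, and grid settings here are illustrative assumptions, not part of the original formulation:

    import numpy as np

    def f(x):
        # Hypothetical test model: nonlinear, with an x1-x2 interaction.
        return np.sin(x[0]) + 2.0 * x[1] ** 2 + x[0] * x[1] + 0.1 * x[2]

    def elementary_effect(f, x, i, delta):
        # Finite-difference elementary effect of factor i at base point x.
        x_pert = x.copy()
        x_pert[i] += delta          # perturb only the i-th coordinate
        return (f(x_pert) - f(x)) / delta

    p = 4                               # number of grid levels
    delta = p / (2 * (p - 1))           # canonical step, here 2/3
    x0 = np.array([0.0, 1/3, 1/3])      # base point on the p-level grid
    print(elementary_effect(f, x0, i=0, delta=delta))

Note that the base point is chosen so that x0 + delta stays on the grid {0, 1/3, 2/3, 1}, as the definition requires.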

Sensitivity Measures

In the Morris method, the elementary effects computed for each input factor are aggregated into sensitivity indices to quantify and rank the importance of factors in influencing the model output. These indices provide a screening tool for identifying influential factors, particularly in high-dimensional models where computational efficiency is crucial. The primary measures are derived from the distribution of elementary effects EE_i(\mathbf{x}) for factor i, estimated empirically from multiple trajectories in the input space. The mean elementary effect, denoted \mu_i = E[EE_i(\mathbf{x})], represents the expected change in output per unit change in input x_i, averaged over the input space. It serves as a measure of the overall influence of factor i, with larger absolute values indicating greater importance. However, \mu_i can suffer from sign cancellation when positive and negative effects offset each other, particularly in nonlinear or non-monotonic models, potentially underestimating a factor's significance. In practice, \mu_i is approximated from r samples as \mu_i \approx \frac{1}{r} \sum_{j=1}^r EE_i^{(j)}(\mathbf{x}^{(j)}). The standard deviation, \sigma_i = \sqrt{\text{Var}[EE_i(\mathbf{x})]}, quantifies the variability of the elementary effects for factor i. A high \sigma_i suggests the presence of nonlinearities or interactions with other factors, as it captures how the effect changes across different base points in the input space. This measure complements \mu_i by highlighting factors with complex behaviors beyond simple additive effects. Empirically, \sigma_i is estimated as the sample standard deviation of the r elementary effects. To address the limitation of sign cancellation in \mu_i, the mean absolute elementary effect is used: \mu_i^* = E[|EE_i(\mathbf{x})|]. This index focuses on the magnitude of effects, providing a more robust ranking of factor importance without interference from opposing signs. It is particularly effective for screening in models with potential non-monotonic responses. The empirical estimate is \mu_i^* = \frac{1}{r} \sum_{j=1}^r |EE_i^{(j)}(\mathbf{x}^{(j)})|. These measures are often visualized by plotting \mu_i^* against \sigma_i for all factors, enabling classification of their roles. Factors with high \mu_i^* and low \sigma_i are typically linear and additive, exerting consistent influence. In contrast, those with high values of both \mu_i^* and \sigma_i indicate nonlinear or interactive effects, warranting further detailed analysis. This graphical approach facilitates qualitative screening and prioritization in sensitivity studies.
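Given the estimators above, the three measures reduce to simple sample statistics over an r-by-k array of elementary effects. A minimal sketch, with toy data deliberately chosen so that factor 0 exhibits the sign cancellation that motivates \mu^*:

    import numpy as np

    def morris_measures(ee):
        # ee: shape (r, k), one elementary effect per trajectory and factor.
        mu = ee.mean(axis=0)                # signed mean; subject to cancellation
        mu_star = np.abs(ee).mean(axis=0)   # mean absolute effect; robust ranking
        sigma = ee.std(axis=0, ddof=1)      # spread; flags nonlinearity/interactions
        return mu, mu_star, sigma

    # Toy data: factor 0 has strong but sign-alternating effects,
    # factor 1 has a consistent moderate effect.
    ee = np.array([[ 2.0, 0.5],
                   [-2.1, 0.6],
                   [ 1.9, 0.4],
                   [-2.0, 0.5]])
    mu, mu_star, sigma = morris_measures(ee)
    print(mu)       # factor 0 near zero despite large effects
    print(mu_star)  # factor 0 correctly flagged as influential

Here \mu would place factor 0 near the bottom of the ranking, while \mu^* and the large \sigma together reveal a strongly active, non-monotonic factor.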

Implementation Procedure

Sampling Design

The input factor space in the Morris method is scaled to the unit hypercube [0, 1]^k for k factors and discretized into a p-level grid \Omega = \{0, 1/(p-1), 2/(p-1), \dots, 1\}^k, where p is the number of discrete levels (typically p = 4 or p = 8 to balance resolution and computational cost). This grid structure approximates continuous inputs with evenly spaced levels, enabling efficient evaluation on a finite set of points while maintaining coverage of the parameter space. Sampling proceeds by constructing r independent trajectories, each comprising k+1 points, to compute elementary effects for all factors. A trajectory starts at a randomly selected base point \mathbf{x}^{(0)} from the grid \Omega. From this base, the subsequent k points are generated by sequentially perturbing one factor at a time by a step size \Delta, where \Delta = p / [2(p-1)] when p is even, to ensure perturbed values stay on the grid and provide symmetric exploration around the base point. The order in which factors are perturbed is determined by a random permutation, and the direction of each perturbation (either +\Delta or -\Delta) is randomly assigned to avoid bias in the direction of change. This process—selecting the starting point, the permutation of factors, and the perturbation directions independently for each trajectory—ensures an unbiased and representative sampling of the input space. The design requires a total of r(k+1) model evaluations, with r typically ranging from 5 to 20 depending on desired precision and available resources, resulting in computational demands that scale linearly with k. This efficiency makes the Morris method particularly suitable for screening studies involving hundreds of factors, where full factorial designs would be prohibitive.
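The construction can be sketched compactly in code. The variant below simply flips any step that would exit the unit hypercube, rather than reproducing Morris's original matrix formulation, but it still yields valid trajectories on the grid:

    import numpy as np

    def morris_trajectory(k, p, rng):
        # One trajectory of k + 1 points on a p-level grid in [0, 1]^k.
        delta = p / (2 * (p - 1))
        levels = np.arange(p) / (p - 1)
        x = rng.choice(levels, size=k)          # random base point on the grid
        signs = np.where(rng.random(k) < 0.5, 1.0, -1.0)
        # Flip any step that would leave the unit hypercube.
        signs = np.where(x + signs * delta > 1.0, -1.0, signs)
        signs = np.where(x + signs * delta < 0.0, 1.0, signs)
        order = rng.permutation(k)              # random perturbation order
        points = [x.copy()]
        for i in order:
            x = x.copy()
            x[i] += signs[i] * delta
            points.append(x)
        return np.array(points), order, signs

    rng = np.random.default_rng(42)
    points, order, signs = morris_trajectory(k=3, p=4, rng=rng)
    print(points)   # (k+1) x k; consecutive rows differ in one coordinate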

Index Computation

Once the sampling design has been generated, consisting of r trajectories, each with k+1 points in the k-dimensional input space, the model f is evaluated at all r(k+1) points to obtain the corresponding output values y^{(j)}_l for the l-th point in the j-th trajectory, where j = 1, \dots, r and l = 0, \dots, k. For each trajectory j, the k elementary effects EE_i^{(j)} for inputs i = 1, \dots, k are then extracted by differencing consecutive output pairs along the chain: if factor i is the one perturbed between points l and l+1, then EE_i^{(j)} = \frac{y^{(j)}_{l+1} - y^{(j)}_l}{\Delta} when the step is +\Delta, and the negative of that ratio when the step is -\Delta, yielding one effect per factor per trajectory. These r elementary effects per factor are aggregated to compute the sensitivity indices. The mean effect \mu_i is the sample mean \mu_i = \frac{1}{r} \sum_{j=1}^r EE_i^{(j)}, capturing the overall influence of factor i; the standard deviation \sigma_i is \sigma_i = \sqrt{ \frac{1}{r} \sum_{j=1}^r (EE_i^{(j)} - \mu_i)^2 }, indicating nonlinearity or interactions; and the mean absolute effect \mu_i^* = \frac{1}{r} \sum_{j=1}^r |EE_i^{(j)}| addresses potential sign cancellations in non-monotonic responses. Factors are typically ranked by \mu_i^* to identify influential ones, with thresholds such as \mu_i^* > 0.1 sometimes applied to screen for significance relative to model scale, though this depends on context. For models with vector-valued outputs, the indices \mu_i, \sigma_i, and \mu_i^* are computed separately for each output dimension, allowing factor importance to be assessed per response variable; if sample size permits, statistical methods like bootstrapping can evaluate index significance.
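In practice these computations are usually delegated to a library. The sketch below uses SALib's Morris module with an illustrative test model; the calls shown reflect the API of recent SALib releases and may differ across versions:

    import numpy as np
    from SALib.sample.morris import sample
    from SALib.analyze import morris

    problem = {
        "num_vars": 3,
        "names": ["x1", "x2", "x3"],
        "bounds": [[0.0, 1.0]] * 3,
    }

    # r = 10 trajectories on a p = 4 level grid -> r * (k + 1) = 40 model runs.
    X = sample(problem, N=10, num_levels=4)

    def model(X):
        # Illustrative test model, evaluated row-wise on the sample matrix.
        return np.sin(np.pi * X[:, 0]) + 2.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]

    Y = model(X)
    Si = morris.analyze(problem, X, Y, num_levels=4)
    for name, ms, s in zip(Si["names"], Si["mu_star"], Si["sigma"]):
        print(f"{name}: mu* = {ms:.3f}, sigma = {s:.3f}")

The analyze step returns the \mu, \mu^*, and \sigma estimates per factor, along with bootstrap confidence intervals for \mu^*.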

Variations and Extensions

Original Morris Method

The original Morris method, introduced by Max D. Morris in 1991, serves as a foundational screening approach in global sensitivity analysis for identifying influential input factors in complex computational models. It constructs multiple random trajectories in the standardized input space, computing elementary effects along each to derive two primary sensitivity indices: the mean effect μ, which quantifies the average influence of a factor, and the standard deviation σ, which screens for nonlinearity or interactions by measuring the variability of those effects. In practice, the method typically employs r = 10 to 20 trajectories on a grid with p = 8 levels, resulting in a total of r(k+1) model evaluations for k input factors, balancing computational cost with reliable factor ranking. Central to the method are the elementary effects, defined for each input factor x_i at a point \mathbf{x} in the k-dimensional unit hypercube as d_i(\mathbf{x}) = \frac{y(\mathbf{x} + \Delta \mathbf{e}_i) - y(\mathbf{x})}{\Delta}, where \mathbf{e}_i is the i-th unit vector and Δ is the perturbation size. The original formulation sets \Delta = \frac{p}{2(p-1)} (for even p) to enable symmetric perturbations relative to grid points, with inputs assumed to follow uniform distributions over [0,1] to facilitate grid-based sampling. Trajectories are generated by sequentially varying one factor at a time from a random starting point, ensuring each trajectory samples k+1 points while approximating the distributions of elementary effects F_i for computing μ and σ as their sample mean and standard deviation, respectively. Interactions are not explicitly modeled but are implicitly detected through elevated σ values indicating non-additive behavior. Despite its efficiency, the original method has notable limitations. The mean μ can suffer from sign cancellation, where positive and negative elementary effects offset each other, leading to underestimation of a factor's importance even when its individual effects are large in magnitude. Furthermore, reliance on randomly selected trajectories risks inefficient exploration of the input space, as overlaps between trajectories may occur, particularly for higher-dimensional problems, reducing the diversity of sampled points.

Improved Measures and Trajectories

One significant enhancement to the original Morris method involves the introduction of the measure \mu^*, defined as the mean of the absolute values of the elementary effects, which addresses the problem of effect cancellation in models with nonlinear or non-monotonic responses by focusing on magnitude rather than the signed average. This measure, proposed by Campolongo et al. in 2007, improves the reliability of factor ranking and is often combined with improved trajectory designs that enhance exploration of the input space, ensuring more uniform coverage compared to purely random trajectories. Further refinements include optimized trajectory generation by using Latin hypercube sampling (LHS) or Sobol sequences to select starting points, followed by an algorithm that chooses a reduced number of trajectories (typically r = 5 to 10) to maximize spatial spread while preserving estimates of the mean and standard deviation of effects. This optimization reduces computational demands without sacrificing accuracy in identifying influential factors, and it extends naturally to handling groups of factors by perturbing all members simultaneously. Extensions of the method address specific challenges, such as group screening for correlated inputs, where factors are bundled into groups (e.g., representing interdependent parameters) and perturbed collectively to assess joint effects, as demonstrated in applications to grouped inputs in benchmark functions. Hybrid approaches integrate the Morris method with Fourier Amplitude Sensitivity Test (FAST) techniques for sequential screening and quantification, using Morris for initial factor identification and FAST for detailed variance decomposition, which has been shown effective in building performance models by confirming key parameters with fewer overall evaluations. Adaptations for dynamic models incorporate metrics like dynamic time warping (DTW) to compute elementary effects on time-series outputs, capturing time-varying sensitivities across multiple dimensions without relying on scalar output approximations, as applied to systems like microbial kinetics models. Recent developments as of 2025 include second-order methods to quantify higher-order interactions and deep-learning-based approaches to compute elementary effects more efficiently in high-dimensional settings. These improvements are incorporated into open-source libraries such as Python's SALib, which supports multiple variants including optimized sampling, group analysis, and automated trajectory selection, facilitating efficient implementation in high-dimensional sensitivity analysis.
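In SALib, the optimized-trajectory selection described above is exposed through the optimal_trajectories argument of the Morris sampler; a brief sketch with illustrative parameter values (API as of recent SALib releases):

    from SALib.sample.morris import sample

    problem = {
        "num_vars": 3,
        "names": ["x1", "x2", "x3"],
        "bounds": [[0.0, 1.0]] * 3,
    }

    # Generate 30 candidate trajectories, then keep the 8 with maximal spread.
    # local_optimization=True uses a faster heuristic than brute-force search.
    X = sample(problem, N=30, num_levels=4,
               optimal_trajectories=8, local_optimization=True)

The returned sample then feeds into the same analyze step as the standard design, so the optimization changes only where the trajectories lie, not how the indices are computed.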

Applications and Limitations

Real-World Examples

The Morris method has been applied in environmental modeling to assess parameter importance in groundwater simulations using tools like MODFLOW. In a study coupling MODFLOW with the modular three-dimensional multispecies transport model (MT3DMS), the method screened factors influencing pollutant removal efficiency in aquifers, identifying the parameters most influential for contaminant transport dynamics. Later works, such as a 2020 study of groundwater flow modeling in a hyper-arid region, have similarly highlighted the method's role in hydrological analyses, with model outputs sensitive to inputs such as groundwater recharge and aquifer hydraulic properties. In engineering applications, particularly automotive crashworthiness, the Morris method facilitates screening of variables in finite element crash simulations to enhance vehicle safety. For example, an analysis of occupant restraint systems employed the elementary effects variant to evaluate sensitivities of kinematic and kinetic responses to material properties and structural parameters, revealing key factors like belt tension and deployment timing. Recent implementations have reduced computational demands while prioritizing design optimizations in nonlinear dynamic models. Beyond traditional domains, the method has informed public health policy during the COVID-19 pandemic by ranking intervention impacts in transmission models. An elementary effects analysis of SARS-CoV-2 spread demonstrated that interventions such as mask usage exerted the strongest effects on reducing infection rates, guiding policy prioritization over less influential factors like self-isolation compliance. In machine learning, InterpretML integrates Morris sensitivity analysis for feature selection in interpretable models, enabling global assessment of input importance in black-box systems like classifiers. In biomedical applications, a 2018 study applied the Morris method with r=10 trajectories to a physiologically based pharmacokinetic (PBPK) model for acetaminophen, screening parameters and identifying influential factors such as absorption rate and renal clearance, which helped reduce model complexity by fixing non-influential parameters. This demonstration underscored the method's efficiency in biomedical simulations for drug disposition. The Morris method has also been used in energy systems modeling, such as screening parameters in solar photovoltaic system simulations to identify key factors affecting energy yield under uncertain weather conditions.

Advantages and Drawbacks

The Morris method offers several key advantages in sensitivity analysis, particularly for preliminary screening of input factors in computational models. Its computational efficiency stands out, as the number of required model evaluations scales linearly with the number of factors k, typically requiring only r(k + 1) runs where r is the number of replications (often 10–20); for example, a model with 50 factors can be screened in roughly 500 runs at r = 10. Unlike many global methods, it imposes no strong distributional assumptions on the model or inputs, making it robust for diverse applications without presupposing linearity or additivity. The method's measures, such as the mean \mu and standard deviation \sigma of elementary effects, facilitate easy interpretation: \mu identifies average effects, while \sigma detects nonlinearities or interactions, providing qualitative insights into factor importance with minimal implementation effort. Despite these strengths, the Morris method has notable drawbacks that limit its scope. It primarily captures overall effects and qualitative rankings, offering limited precision for quantifying variance contributions or total effects, and often fails to distinguish between nonlinearities and higher-order interactions. For continuous input spaces, the method's reliance on discrete grid levels can introduce discretization error, potentially altering results for smooth functions. Additionally, it lacks built-in uncertainty propagation for the indices themselves, as the small number of trajectories typically used does not guarantee convergence of the sample statistics, reducing reliability in noisy or highly variable models. In comparisons to other global sensitivity analysis techniques, the Morris method excels in speed but sacrifices comprehensiveness. It is substantially faster than variance-based methods like Sobol indices, which may demand 10,000 or more runs for similar factor counts due to the need for extensive sampling, making Morris ideal as a preliminary screening tool before proceeding to more intensive approaches such as E-FAST. However, this efficiency comes at the cost of lower accuracy for detailed effect decomposition, positioning it best for initial factor identification rather than final quantification. From a 2025 perspective, the Morris method remains relevant for its simplicity in resource-constrained settings but is increasingly augmented with machine learning techniques, such as deep learning surrogates for computing elementary effects in large-scale models, to address scalability issues. Critiques highlight its potential oversimplification in complex systems, where undetected high-order interactions can lead to incomplete insights, prompting hybrid uses with advanced variance-based or interaction-focused extensions.

References

  1. Factorial Sampling Plans for Preliminary Computational Experiments. Technometrics. The problem of designing computational experiments to determine which inputs have important effects on an output is considered; the proposed plans are composed of individually randomized one-factor-at-a-time designs, and data analysis is based on the resulting random sample.
  2. Morris Sensitivity Analysis, InterpretML documentation. Also known as the Morris method, this is a one-step-at-a-time (OAT) global sensitivity analysis where only one input has its (discretized) level varied per step.
  3. Morris's Elementary Effects Screening Method, RDocumentation. This method, based on design of experiments, identifies the few important factors at a cost of r × (p+1) simulations (where p denotes the number of factors in the package's notation).
  4. Morris Method: An Overview, ScienceDirect Topics. The Morris method is defined as a global sensitivity analysis technique that uses repeated one-at-a-time (OAT) design experiments to evaluate the influence of inputs.
  5. An Effective Screening Design for Sensitivity Analysis of Large Models. In 1991 Morris proposed an effective screening sensitivity measure to identify the few important factors in models with many factors.
  6. Sensitivity Analysis in Practice, Andrea Saltelli et al. The sensitivity method proposed by Morris (1991) and extended by Campolongo et al. has been applied to the model.
  7. Sensitivity Analysis of Environmental Models: A Systematic Review. The most established method of this type is the method of Morris (Morris, 1991), also called the Elementary Effect Test (EET; Saltelli et al., 2008).
  8. A Matlab Toolbox for Global Sensitivity Analysis, ScienceDirect. The first release of the SAFE Toolbox includes the Elementary Effects Test (EET, or Morris method) and Regional Sensitivity Analysis (RSA).
  9. Identification and Review of Sensitivity Analysis Methods. Sensitivity analysis methods can be distinguished by whether they are local or global and by whether they focus on one input at a time.
  10. Global Sensitivity Analysis: The Primer, Andrea Saltelli et al. Sensitivity analysis involves models and uncertainty and can be local or global; it includes methods like one-at-a-time sampling.
  11. EASI RBD-FAST: An Efficient Method of Global Sensitivity Analysis. The EASI RBD-FAST method illustrated in this paper proves to be a useful, easy-to-use, and accessible tool for building performance analysis.
  12. Dynamic Time Warping as Elementary Effects Metric for Morris-Based Sensitivity Analysis. This work demonstrates the use of dynamic time warping (DTW) as a metric for the elementary effects computation in Morris-based global sensitivity analysis.
  13. Sensitivity Analysis of Factors Influencing Pollutant Removal, PubMed, 2022. The simulation experiments for Morris analysis were designed, and pollutant removal efficiency was numerically simulated by coupling MODFLOW and MT3DMS.
  14. Groundwater Flow Modeling and Sensitivity Analysis in a Hyper-Arid Region, 2020. Model sensitivity is a function of groundwater response to changes in model inputs, such as groundwater recharge and aquifer hydraulic properties.
  15. The Sensitivity of Occupant Kinematic and Kinetic Crash Responses. The Morris EE method is a one-at-a-time (OAT) analysis, typically used when the number of input factors is large.
  16. Proposing an Uncertainty Management Framework, 2022. A framework to enable sensitivity analysis and uncertainty quantification for automotive crash simulations.
  17. Elementary Effects Analysis of Factors Controlling COVID-19 Spread. This paper investigates the effectiveness of masks, social distancing, lockdown, and self-isolation for reducing the spread of SARS-CoV-2 infections.
  18. Physiologically Based Pharmacokinetic Modeling of a Homologous Series, 1997. The aims were to obtain new insights into the model used and to rank the model parameters according to their impact on the model outputs.
  19. Applying a Global Sensitivity Analysis Workflow to Improve the ..., 2018. The purpose of this study was to explore the application of global sensitivity analysis (GSA) to ascertain which parameters in the PBPK model matter.
  20. Robust Combination of the Morris and Sobol Methods in Complex ... In this framework, the Morris method is used to select the input factors to be considered later in the Sobol method.
  21. A Deep Learning Approach to Calculate Elementary Effects of Morris ..., 2024. This paper proposes frameworks for calculating the elementary effects of the Morris method using deep learning techniques to identify the most important parameters.
  22. A Modified Morris Screening Protocol for Sensitivity Analysis ..., 2025. In this study, a modified Morris screening method was established and used to evaluate the parameters of the green roof module in the SWMM model.