
Umbrella sampling

Umbrella sampling is a biased simulation technique designed to calculate free energy profiles along a predefined reaction coordinate by applying artificial biasing potentials to overcome energy barriers and enhance sampling of low-probability configurations. Introduced in 1977 by G. M. Torrie and J. P. Valleau, it addresses the limitations of standard molecular dynamics or Monte Carlo methods, which often fail to adequately explore configuration space due to ergodicity issues in complex systems like biomolecules. The method operates by dividing the reaction coordinate—such as a distance, angle, or dihedral—into overlapping "windows" and running independent simulations in each, where a biasing potential (typically harmonic) restrains the system around a reference value to flatten the free energy landscape locally. This generates biased probability distributions that are subsequently unbiased and combined to reconstruct the potential of mean force (PMF), yielding the free energy as a function of the reaction coordinate. Common analysis approaches include the weighted histogram analysis method (WHAM), developed by Kumar et al. in 1992, which optimally combines histograms from all windows to minimize statistical error, and umbrella integration, which integrates the mean bias force directly. Umbrella sampling has become a cornerstone in computational chemistry and biophysics for applications such as binding affinity predictions, conformational transitions in proteins, and solvation free energies, often implemented with software like GROMACS or AMBER. Extensions like adaptive biasing and multiple-walker variants further improve efficiency by dynamically adjusting biases or parallelizing simulations across windows. Its reliability depends on sufficient overlap between windows and adequate sampling, with errors typically reduced through WHAM's self-consistent iteration.

Overview

Definition and Purpose

Umbrella sampling is a computational technique employed in molecular dynamics and Monte Carlo simulations to enhance the exploration of configuration space by applying an artificial biasing potential that restrains the system within predefined regions, or "windows," along a chosen reaction coordinate. This method enables efficient sampling of regions separated by high free energy barriers that are typically inaccessible in unbiased simulations. Introduced as a form of importance sampling, it uses non-Boltzmann distributions to improve the accuracy and efficiency of ensemble averages. The primary purpose of umbrella sampling is to reconstruct the potential of mean force (PMF), which represents the free energy profile as a function of the reaction coordinate, and to quantify free energy differences between thermodynamic states, such as reactants and products in a chemical reaction. By running multiple simulations in overlapping windows and combining the data through reweighting techniques, the method overcomes the limitations of direct sampling and provides a continuous free energy landscape. This approach is particularly valuable in computational chemistry and biophysics for studying processes governed by rare-event thermodynamics. In unbiased simulations, rare events like protein folding or molecular binding are hindered by severe timescale separation: these transitions occur on microsecond-to-second scales, far beyond the nanosecond-to-microsecond durations feasible with standard molecular dynamics due to computational constraints. Umbrella sampling mitigates this issue by artificially populating high-energy regions, allowing observation of transitions that would otherwise require impractically long simulation times. A representative application is the calculation of protein-ligand binding free energies, where unbiased simulations often fail to adequately sample dissociated states, leading to inaccurate estimates; umbrella sampling, by contrast, systematically explores the binding pathway across windows, yielding reliable PMF profiles for affinity prediction.

Historical Context

Umbrella sampling was introduced by G. M. Torrie and J. P. Valleau in 1977 as a technique to overcome sampling limitations in Monte Carlo estimates of free-energy differences, employing nonphysical biasing distributions to constrain configurations within specified windows of a reaction coordinate. This window-based approach allowed for the efficient exploration of rare events by generating overlapping distributions that could be reweighted to recover unbiased probabilities, marking a significant advance in addressing ergodicity issues in molecular simulations. In the late 1970s and 1980s, umbrella sampling found early applications in studying simple liquids, such as Lennard-Jones fluids, where it facilitated free-energy calculations for systems of 32 to 108 particles, often validated against independent results. These implementations demonstrated its utility in probing solvation and association phenomena in condensed phases, building on foundational work in biased sampling while emphasizing the method's discrete window strategy over continuous biasing alternatives. By the 1990s, umbrella sampling had evolved from its Monte Carlo origins to integration with molecular dynamics simulations, enabling the analysis of more complex energy landscapes and phase transitions in biomolecular and condensed-matter systems. A pivotal development occurred in 1992 with the introduction of the Weighted Histogram Analysis Method (WHAM) by S. Kumar, J. M. Rosenberg, D. Bouzida, R. H. Swendsen, and P. A. Kollman, which extended umbrella sampling by combining data from multiple windows through optimal reweighting for unbiased free-energy profiles. This refinement drew from prior reweighting techniques but established umbrella sampling's structured, overlapping-window framework as essential for accurate thermodynamic estimates.

Theoretical Basis

Free Energy Landscapes

In molecular simulations, the Gibbs free energy G is a thermodynamic potential defined as G = H - TS, where H is the enthalpy, T is the absolute temperature, and S is the entropy. This quantity governs the equilibrium behavior of systems at constant temperature and pressure, serving as the criterion for spontaneity in chemical processes. The equilibrium probability distribution of molecular configurations is determined by the Boltzmann distribution, where the probability P of a state with free energy G is proportional to \exp(-G / kT), with k being Boltzmann's constant. This distribution implies that low-free-energy states are preferentially sampled, while high-free-energy regions are rarely visited, limiting the exploration of the full configurational space in simulations. The potential of mean force (PMF), denoted A(\mathbf{r}), quantifies the free energy as a function of a coordinate \mathbf{r} and is defined as A(\mathbf{r}) = -kT \ln P(\mathbf{r}), where P(\mathbf{r}) is the probability density along \mathbf{r}. Introduced by Kirkwood, the PMF effectively averages out the influence of all other degrees of freedom, providing a low-dimensional projection of the multidimensional landscape that captures the effective potential driving processes like binding or conformational changes. Free energy landscapes in complex molecular systems, such as proteins or solvated ions, are typically rugged, featuring multiple minima separated by high barriers that hinder transitions between states. These barriers cause ergodicity breaking in standard simulations, where trajectories become trapped in local minima, failing to sample the global equilibrium distribution over feasible timescales. Umbrella sampling was developed to overcome this by improving the sampling of such challenging landscapes, enabling accurate reconstruction of the PMF.
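The relation A = -kT ln P can be illustrated directly. The following is a minimal sketch, with samples drawn from an assumed harmonic well standing in for an unbiased trajectory and kT set to 1; for this Gaussian distribution the recovered PMF should track x²/2 up to a constant.

```python
import numpy as np

# Estimate a PMF A(x) = -kT ln P(x) from Boltzmann-distributed samples of a
# 1D coordinate. The samples here are drawn from a harmonic well, standing
# in for an unbiased simulation trajectory; kT = 1 (reduced units).
kT = 1.0
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=200_000)  # P(x) ~ exp(-x^2/2)

hist, edges = np.histogram(samples, bins=60, range=(-3, 3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pmf = -kT * np.log(hist)
pmf -= pmf.min()  # shift so the minimum of the profile is zero

# For this harmonic well, pmf should recover x^2/2 up to statistical noise.
```

Shifting the profile so its minimum is zero is conventional: only free energy differences along the coordinate are physically meaningful.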

Biasing Mechanisms

In umbrella sampling, the biasing potential is applied to enhance sampling along a chosen reaction coordinate, \xi, which represents the progress of the system from one state to another. The most common form of this potential is harmonic, given by V_{\text{bias}}(\xi) = \frac{1}{2} k (\xi - \xi_0)^2, where k is the force constant determining the restraint strength, and \xi_0 is the reference value at the center of the sampling window. This choice simplifies the implementation and allows for analytical reweighting, as introduced in the original formulation of the method. The force constant k is typically selected to balance confinement within the window and sufficient fluctuations, often on the order of 1000–5000 kJ mol⁻¹ nm⁻² for distance-based coordinates in biomolecular simulations. To achieve comprehensive coverage of the reaction coordinate, multiple independent simulations, or "windows," are performed, each centered at different \xi_0 values. Adjacent windows must exhibit sufficient overlap in their probability distributions P_i(\xi) to connect the sampled regions seamlessly and avoid gaps in the free energy profile. Overlap is quantified by the condition that the distributions from neighboring windows share a significant portion of \xi values, typically ensuring that the standard deviations of the Gaussian-like distributions (arising from the harmonic bias) intersect substantially. Inadequate overlap can lead to discontinuities in the reconstructed free energy, while excessive overlap increases computational cost without proportional benefits. The selection of the reaction coordinate \xi is crucial for the method's efficacy, as it should capture the essential slow degrees of freedom driving the process of interest while remaining low-dimensional to minimize the need for extensive windows. Common choices include one-dimensional collective variables such as interatomic distances, bond angles, dihedral angles, or coordination numbers, which are computationally tractable and directly relate to structural changes.
For instance, in studies of ligand binding, \xi might be the distance between a ligand and binding site atoms. Multi-dimensional \xi can be employed for more complex pathways but requires careful validation to ensure it adequately projects the free energy landscape without introducing hidden barriers. The effect of the biasing potential is removed post-simulation through reweighting to recover the unbiased free energy profile. For each window i, the unbiased probability along \xi is estimated from the biased samples using the relation P(\xi) \propto P_i(\xi) \exp\left( V_{\text{bias},i}(\xi) / k_B T \right), where P_i(\xi) is the biased distribution and the window-specific normalization constant is determined self-consistently from the overlaps between distributions of adjacent windows to yield a continuous profile. The corrected distributions from all windows are then combined to construct the full potential of mean force.
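The harmonic bias and the single-window unbiasing relation above can be exercised end to end on a toy system. This is a sketch under stated assumptions: the potential U(x) = x⁴ - 2x², the force constant, the window center, and the Metropolis step size are all illustrative choices, kT = 1, and Metropolis Monte Carlo stands in for a molecular dynamics engine.

```python
import numpy as np

# One umbrella window on a 1D double-well potential U(x) = x^4 - 2x^2.
# A harmonic bias V(x) = 0.5*k*(x - x0)^2 restrains sampling near x0; the
# biased histogram is then unbiased via A(x) = -kT ln P_biased(x) - V(x),
# valid up to an additive (window-specific) constant.
kT = 1.0
k_bias, x0 = 20.0, 0.8          # illustrative force constant and window center

def U(x):      return x**4 - 2.0 * x**2
def V_bias(x): return 0.5 * k_bias * (x - x0)**2

rng = np.random.default_rng(1)
x, samples = x0, []
for _ in range(200_000):         # Metropolis MC on the biased potential U + V
    trial = x + rng.normal(scale=0.2)
    dE = (U(trial) + V_bias(trial)) - (U(x) + V_bias(x))
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        x = trial
    samples.append(x)

hist, edges = np.histogram(samples[20_000:], bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
pmf = -kT * np.log(hist[mask]) - V_bias(centers[mask])  # unbiased PMF + const
pmf -= pmf.min()
# Within the well-sampled part of the window, pmf tracks U(x) up to a constant.
```

A single window only resolves the profile locally; stitching many such windows together is exactly what the combination methods in the analysis section do.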

Methodology

Simulation Setup

The setup of an umbrella sampling simulation begins with the selection of windows along the chosen reaction coordinate, which is divided into discrete intervals to cover the full range of interest. Typically, windows are spaced at intervals of 0.1 to 0.2 nm for distance-based coordinates to ensure adequate sampling, with neighboring windows designed to have 10-20% overlap in their probability distributions for reliable reconstruction. This overlap is achieved by positioning the centers of adjacent windows such that the standard deviation of the restrained distribution in each window extends sufficiently into the neighboring regions. The force constant for the harmonic biasing potential in each window is selected to restrain the system around the window center without overly distorting the underlying dynamics, commonly in the range of 1000 to 5000 kJ mol⁻¹ nm⁻². Lower values (e.g., around 1000 kJ mol⁻¹ nm⁻²) allow broader sampling but require closer window spacing, while higher values (up to 5000 kJ mol⁻¹ nm⁻² or more) enable wider spacing at the risk of reduced overlap if not balanced properly. In practice, the choice depends on the system's stiffness and the reaction coordinate's scale, often tested iteratively to achieve the desired overlap. Prior to biased simulations, the system undergoes initial unbiased equilibration to generate a stable starting configuration, typically for 100 ps to several nanoseconds under constant temperature and pressure conditions. Subsequent biased runs are then performed for each window, with equilibration of 100 ps to 1 ns followed by production sampling of 1 to 10 ns per window, depending on system size and convergence needs. These simulations are commonly implemented using molecular dynamics software such as GROMACS, AMBER, or NAMD, which support harmonic restraints via input parameters such as pull-coord1-k in GROMACS or NMR-style restraints (nmropt with a DISANG file) in AMBER.
Convergence of the simulations is monitored by examining the overlap of histograms from adjacent windows, ensuring uniform coverage across the reaction coordinate, and by calculating autocorrelation times to verify statistical independence of samples. If overlaps are insufficient (e.g., less than 10%), additional windows or longer production runs may be required; autocorrelation times should be sufficiently short relative to the total simulation length to ensure adequate sampling, often assessed through effective sample size or stabilization of the PMF. The biasing potential is typically harmonic, as outlined in the Biasing Mechanisms section, and the resulting trajectories are prepared for post-processing as detailed in the Data Analysis and Reweighting section.
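The overlap check described above can be made quantitative. The sketch below uses the histogram-intersection metric sum(min(p, q)) as one illustrative overlap measure; the force constant, temperature, and window spacing are assumed values, and Gaussian samples with sigma = sqrt(kT/k) stand in for the restrained distributions that a simulation would produce in a locally flat landscape.

```python
import numpy as np

# Quick overlap check between adjacent umbrella windows, using the
# histogram-intersection metric on normalized histograms: 0 = disjoint,
# 1 = identical. For a harmonic restraint on a flat landscape, the window
# distribution is Gaussian with sigma = sqrt(kT / k_force).
def window_overlap(samples_a, samples_b, bins=100):
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    p, edges = np.histogram(samples_a, bins=bins, range=(lo, hi), density=True)
    q, _     = np.histogram(samples_b, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    return np.sum(np.minimum(p, q)) * width

rng = np.random.default_rng(2)
kT, k_force = 2.494, 1000.0        # kJ/mol at ~300 K; kJ/mol/nm^2 (assumed)
sigma = np.sqrt(kT / k_force)      # ~0.05 nm
a = rng.normal(0.00, sigma, 50_000)  # window centered at 0.00 nm
b = rng.normal(0.05, sigma, 50_000)  # neighbor at 0.05 nm spacing

print(f"overlap: {window_overlap(a, b):.2f}")
```

For two equal-width Gaussians spaced about one standard deviation apart, as here, the overlap comes out near 0.6, comfortably above the 10-20% threshold quoted above; doubling the spacing would drop it sharply.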

Data Analysis and Reweighting

In umbrella sampling simulations, data analysis begins with the collection of histograms from each sampling window. For each window i, the probability distribution P_i(\xi) of the reaction coordinate \xi is obtained by binning the configurations sampled under the applied biasing potential, providing a biased estimate of the local free energy landscape around the window's reference position. These histograms capture the density of states in the biased ensemble for that window, typically requiring a bin size that balances resolution and statistical reliability. To recover unbiased free energy estimates, the biased distributions P_i(\xi) are reweighted to the unbiased ensemble using methods like the weighted histogram analysis method (WHAM) or the multistate Bennett acceptance ratio (MBAR). In WHAM, the unbiased probability density is constructed as a weighted sum of the individual histograms, where the weights are determined self-consistently by solving equations that minimize the statistical variance across all windows, accounting for the known biasing potentials. Similarly, MBAR provides a statistically optimal estimator by solving for state-dependent weights w_i(\xi) that satisfy self-consistent equations derived from maximum likelihood principles, yielding the unbiased potential of mean force (PMF) as A(\xi) = -k_B T \ln \left( \sum_i w_i(\xi) P_i(\xi) \right), where k_B is Boltzmann's constant and T is the temperature; this approach generalizes the Bennett acceptance ratio to multiple states and handles arbitrary biases efficiently. Both methods ensure that the combined PMF is continuous and free of artifacts from poor overlap, provided the input data meet sampling requirements. Error estimation in the reconstructed PMF is crucial for assessing reliability, typically employing bootstrap resampling or block averaging techniques.
Bootstrap methods involve repeatedly subsampling the original data with replacement to generate multiple PMF estimates, from which the standard deviation provides error bounds that propagate both statistical sampling errors and autocorrelation effects. Block averaging divides the trajectory into independent blocks, computing variances across blocks to quantify uncertainties, particularly useful for long correlated time series where effective sample sizes are reduced. These approaches reveal uncertainty profiles that are often higher in regions of low sampling density, guiding refinements in simulation length or window placement. Convergence of umbrella sampling data is evaluated through criteria such as sufficient histogram overlap between adjacent windows and the flatness of the resulting distributions. Overlap is quantified by ensuring that neighboring histograms share at least 10–20% of their probability mass, preventing discontinuities in the PMF; inadequate overlap leads to unreliable free energy differences between distant windows. Flat histograms in the biased ensemble, achieved when the combined P(\xi) shows uniform sampling without deep minima or maxima beyond statistical noise, indicate adequate exploration of the reaction coordinate, typically confirmed by monitoring the stabilization of the PMF over simulation time or iterations. These checks ensure the reweighted PMF accurately represents the equilibrium landscape.
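The WHAM self-consistent equations described above can be sketched compactly. In this illustration the data are synthetic: exact biased populations for an assumed double-well potential U(x) = x⁴ - 2x² (with kT = 1 and an illustrative force constant) stand in for simulation histograms, so the recovered profile should reproduce U up to a constant.

```python
import numpy as np

# Sketch of the WHAM iteration for umbrella windows:
#   P(x_k)        = sum_i n_ik / sum_i [ N_i exp((f_i - V_i(x_k)) / kT) ]
#   exp(-f_i/kT)  = sum_k P(x_k) exp(-V_i(x_k) / kT)
# where n_ik are bin counts from window i, N_i the samples per window, and
# f_i the window free energies solved self-consistently.
def wham(counts, bias, kT=1.0, n_iter=2000):
    N = counts.sum(axis=1)              # samples per window
    f = np.zeros(len(N))                # window free energies (initial guess)
    num = counts.sum(axis=0)            # total counts per bin
    for _ in range(n_iter):
        denom = (N[:, None] * np.exp((f[:, None] - bias) / kT)).sum(axis=0)
        P = num / denom
        P /= P.sum()
        f = -kT * np.log((P[None, :] * np.exp(-bias / kT)).sum(axis=1))
    return P, f

# Synthetic "histograms": exact biased populations for a toy double well,
# sampled multinomially to mimic finite statistics (20,000 samples/window).
kT = 1.0
x = np.linspace(-1.8, 1.8, 100)
U = x**4 - 2 * x**2
centers = np.linspace(-1.5, 1.5, 13)
bias = 0.5 * 15.0 * (x[None, :] - centers[:, None])**2
p_biased = np.exp(-(U[None, :] + bias) / kT)
p_biased /= p_biased.sum(axis=1, keepdims=True)
rng = np.random.default_rng(3)
counts = np.array([rng.multinomial(20_000, p) for p in p_biased])

P, f = wham(counts, bias, kT)
pmf = -kT * np.log(P)
pmf -= pmf.min()
# pmf should reproduce U - U.min(): minima near x = ±1, a ~1 kT barrier at 0.
```

Production analyses would use an established implementation (e.g., Grossfield's WHAM code, or MBAR via the pymbar package) rather than this bare iteration, but the fixed-point structure is the same.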

Applications

Molecular Dynamics Studies

Umbrella sampling has been extensively applied in molecular dynamics simulations to study conformational changes in biomolecules, particularly in proteins and nucleic acids, where it facilitates the exploration of rare transitions across high free energy barriers. In peptides, it has been used to sample helix-coil transitions, revealing the microscopic free energy costs associated with helix propagation at the N- and C-termini. For instance, simulations of alanine-based peptides demonstrated that the Zimm-Bragg propagation parameter s varies from 0.5 to 1.5 depending on length and end effects, highlighting the role of 3₁₀-helix intermediates in the process. Similarly, in enzymes, umbrella sampling has elucidated loop closure dynamics critical for catalytic function, such as in dihydrofolate reductase, where biasing the Met20 loop coordinates uncovered key intermediate conformations preceding hydride transfer, with free energy differences on the order of several kcal/mol driving the open-to-closed transition. In the context of ion channel gating, umbrella sampling computes potential of mean force (PMF) profiles along ion permeation pathways, enabling quantification of energetic barriers to conduction. For potassium channels, combined metadynamics and umbrella sampling identified optimal reaction coordinates for K⁺ permeation, yielding PMF barriers of approximately 4-6 kcal/mol at selectivity filter sites, which correlate with experimentally observed conductance rates. These studies emphasize how umbrella sampling captures dehydration and coordination changes during gating, providing insights into selectivity and voltage dependence without relying on long unbiased trajectories. For protein-ligand interactions, umbrella sampling restrains the ligand-protein distance to map association and dissociation pathways, yielding binding free energies from the PMF depth. In a seminal application to trypsin-benzamidine, path-based umbrella sampling calculated absolute binding affinities with errors under 2 kcal/mol, illustrating how the method resolves intermediate states in the unbinding process.
This approach has been pivotal for understanding ligand entry into buried pockets, where rotational restraints enhance sampling efficiency. A classic example from 2000s studies involves the conformational landscape of alanine dipeptide, where umbrella sampling along the φ and ψ dihedrals revealed PMF barriers of 5-10 kT separating major basins such as the C₇^eq, α-helical, and extended β regions, underscoring the method's utility in benchmarking force fields for secondary structure propensity.

Thermodynamic Calculations

Umbrella sampling enables the computation of free energy differences (ΔG) in chemical systems by reconstructing the potential of mean force (PMF), denoted as A(ξ), along a chosen reaction coordinate ξ, including coupling parameters in alchemical transformations where atomic interactions are gradually modified. The free energy change is obtained by integrating the derivative of the PMF along this path, given by \Delta G = \int_{\xi_a}^{\xi_b} \frac{dA(\xi)}{d\xi} \, d\xi, which quantifies the thermodynamic cost of transforming one state into another, such as mutating a residue in a protein. This approach combines umbrella sampling with thermodynamic integration or free energy perturbation methods to enhance sampling of intermediate states, providing reliable ΔG values for processes like ligand binding or conformational changes. In solvation studies, umbrella sampling computes solvation free energies by deriving the PMF from radial distribution functions (RDFs) in explicit solvent models, such as TIP3P water, to capture solute-solvent interactions accurately. The PMF relates to the RDF g(r) via A(r) = -k_B T \ln g(r), where r is the interatomic distance, allowing quantification of solvation shells and free energy penalties for solute insertion into solvent. This method is particularly useful for modeling solvation in complex environments, like ion pairing or ion transfer across membranes, by biasing simulations to sample rare configurations. For pKa predictions, umbrella sampling constructs multidimensional PMFs to evaluate free energy differences between protonated and deprotonated states of titratable residues in proteins, accounting for electrostatic and conformational effects. In proteins like staphylococcal nuclease mutants, two-dimensional umbrella sampling along the proton distance and side-chain orientation yields PMFs that directly inform protonation equilibria, with predicted pKa values aligning closely with experiments (e.g., 8.5 ± 0.2 for V66D). This approach, often using weighted histogram analysis for PMF reconstruction as detailed in the Data Analysis and Reweighting section, reveals how the pKa values of buried residues shift due to desolvation and hydrogen bonding.
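The integral expression for ΔG above can be evaluated numerically once the mean force dA/dξ is known at the window centers, as in umbrella integration. The sketch below assumes an analytic profile A(ξ) = ξ⁴ - 2ξ² as a stand-in for window-averaged forces from simulation, and recovers ΔG = A(ξ_b) - A(ξ_a) by trapezoidal quadrature.

```python
import numpy as np

# Recover a free energy difference by integrating the mean force dA/dξ
# across window centers. The profile A(ξ) = ξ^4 - 2ξ^2 is an assumed toy
# stand-in; its derivative plays the role of simulation-averaged forces.
xi = np.linspace(-1.0, 0.0, 21)        # window centers from ξ_a to ξ_b
mean_force = 4 * xi**3 - 4 * xi        # dA/dξ "measured" at each center

# Trapezoidal rule: ΔG = A(0) - A(-1) = 0 - (-1) = 1 for this profile.
delta_G = np.sum(0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi))
print(f"dG = {delta_G:.3f}")
```

With 21 windows the quadrature error here is below 0.01; in practice the dominant error comes from the statistical uncertainty of each window's mean force, not from the integration rule.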
An illustrative application is the calculation of absolute binding free energies via double decoupling, where umbrella sampling decouples ligand interactions from the environment in both the solvated and complexed states, incorporating standard state corrections. In octa-acid host-guest systems, this yields affinities with mean deviations of 1.02–1.76 kcal/mol and root-mean-square deviations of 1.33–2.09 kcal/mol compared to experiments, demonstrating accuracies typically within 1–2 kcal/mol for biomolecular contexts like protein-ligand binding.

Advantages and Limitations

Key Benefits

Umbrella sampling significantly enhances the efficiency of molecular simulations by enabling the system to overcome high free energy barriers that would otherwise trap trajectories in metastable states during unbiased sampling. In standard molecular dynamics or Monte Carlo simulations, rare events separated by barriers of several k_BT or more can require timescales on the order of milliseconds or longer to observe, whereas umbrella sampling accelerates this process by orders of magnitude through the application of biasing potentials that force exploration across the reaction coordinate. This approach is particularly valuable for processes involving conformational changes, ligand binding, or phase transitions, where unbiased methods might fail to sample relevant regions within feasible computational budgets. The method offers substantial flexibility in its implementation, as the biasing potentials can be applied to virtually any chosen collective variable—such as distances, angles, dihedrals, or more complex functions thereof—allowing adaptation to diverse chemical and biological systems. Furthermore, umbrella sampling integrates seamlessly with a wide range of molecular dynamics engines, including GROMACS, AMBER, and NAMD, through plugins like PLUMED, which facilitate the addition of biases without modifying core simulation code. This versatility has made it a staple in computational studies requiring free energy profiles along specific pathways. A key strength lies in its statistical efficiency, achieved via reweighting techniques that combine data from multiple windows to yield unbiased estimates with reduced variance. Methods like the weighted histogram analysis method (WHAM) exploit overlapping distributions across windows to maximize the use of all sampled configurations, providing robust and converged results even from individually biased trajectories. This holistic data utilization minimizes the total simulation time needed for accurate free energy estimates compared to running independent unbiased simulations for each state.
The reliability of umbrella sampling has been validated through convergence to exact analytical results in benchmark test cases, such as one-dimensional potentials with known free energy landscapes. In these controlled scenarios, reweighted potentials of mean force match theoretical expectations precisely, confirming the method's ability to recover unbiased free energies without systematic errors when windows are properly overlapped and analyzed.
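Such a benchmark can be sketched in a few lines. Here exact biased densities for an assumed double-well potential (kT = 1, illustrative force constant and window spacing) stand in for simulation histograms; each window is unbiased locally, segments are stitched by matching offsets in the overlap regions (a simple alternative to full WHAM), and the result is compared with the known analytic profile.

```python
import numpy as np

# Benchmark: umbrella windows on an analytic 1D double well U(x) = x^4 - 2x^2.
# Exact biased densities replace sampled histograms, so the stitched PMF
# should reproduce U - U.min() essentially to machine precision.
kT, k_f = 1.0, 15.0
x = np.linspace(-1.6, 1.6, 200)
U = x**4 - 2 * x**2

centers = np.linspace(-1.4, 1.4, 15)
segments = []
for c in centers:
    V = 0.5 * k_f * (x - c)**2
    p_b = np.exp(-(U + V) / kT)        # exact biased density (unnormalized)
    p_b /= p_b.max()
    # local PMF + window constant, defined only where the window samples
    a = np.where(p_b > 1e-3, -kT * np.log(p_b) - V, np.nan)
    segments.append(a)

# Stitch: shift each segment to agree with the running profile in the
# overlap region, then average the overlapping estimates.
pmf = segments[0].copy()
for seg in segments[1:]:
    both = ~np.isnan(pmf) & ~np.isnan(seg)
    seg = seg + np.mean(pmf[both] - seg[both])
    pmf = np.where(np.isnan(pmf), seg,
                   np.where(np.isnan(seg), pmf, 0.5 * (pmf + seg)))
pmf -= np.nanmin(pmf)
# pmf now matches U - U.min() across the whole grid.
```

Replacing the exact densities with finite-sample histograms turns the residual into pure statistical error, which is exactly what the bootstrap and block-averaging estimates in the analysis section quantify.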

Common Challenges

One major challenge in umbrella sampling arises from insufficient overlap between adjacent simulation windows, which can lead to gaps in the reconstructed potential of mean force (PMF) and poor convergence of free energy estimates. This issue typically stems from suboptimal choices in window positioning or bias force constants, such as using excessively high constants that fragment sampling along the collective variable, particularly near steep energy barriers. As a result, the phase space coverage becomes discontinuous, hindering the accurate combination of data from multiple windows via methods like the weighted histogram analysis method (WHAM). To address this, adaptive spacing strategies distribute windows more densely in regions of high free energy curvature, often guided by thermodynamic integration or preliminary simulations to ensure sufficient overlap (e.g., achieving about 40% probability overlap between neighboring windows). Selecting an appropriate reaction coordinate is another critical hurdle, as an inadequate choice can conceal barriers in orthogonal degrees of freedom, leading to incomplete sampling and biased PMF profiles. If the chosen collective variable does not align well with the true reaction pathway, the system may remain trapped in metastable states, underestimating transition rates or overlooking hidden energy penalties in perpendicular directions. This is especially problematic in biomolecular processes where multiple pathways contribute. Employing orthogonal collective variables, such as combining distance-based restraints with angular or torsional coordinates, helps mitigate this by enhancing exploration of coupled dimensions without introducing excessive bias. The computational expense of umbrella sampling scales unfavorably with the number of windows required, as each must run independently for adequate equilibration and sampling, often totaling hundreds of nanoseconds or more.
For instance, a typical setup with 100 windows, each simulated for 5 ns, demands approximately 0.5 μs of aggregate simulation time, which can strain resources for large systems or high-precision needs. This overhead arises from the need for parallel simulations across windows to achieve statistical reliability, exacerbating costs in multi-dimensional cases. Techniques like replica-exchange umbrella sampling can partially alleviate this by facilitating transitions between windows, though they introduce additional complexity. Sampling in high-dimensional spaces poses orthogonality challenges, where biases along one coordinate inadequately explore perpendicular degrees of freedom, resulting in slow convergence and artifacts in the free energy landscape. In such scenarios, the system struggles to decorrelate motions across dimensions, leading to inefficient barrier crossing. Partial tempering methods, such as integrated tempering sampling combined with umbrella potentials, address this by applying temperature scaling selectively to orthogonal degrees of freedom, promoting broader exploration without fully disrupting the primary bias. This hybrid approach enhances efficiency in complex energy surfaces while maintaining the method's core advantages.

References

  1. Long-Time Protein Folding Dynamics from Short-Time Molecular Dynamics Simulations
  2. Calculation of Absolute Protein–Ligand Binding Free Energy (PNAS)
  3. Torrie, G. M.; Valleau, J. P. Nonphysical Sampling Distributions in Monte Carlo Free-Energy Estimation: Umbrella Sampling (1977)
  4. Free-Energy Calculations in Condensed Matter: From Early Challenges to the Advent of Umbrella Sampling (2024)
  5. Free Energy Simulations: Applications to the Study of Liquid Water
  6. Kumar, S.; Rosenberg, J. M.; Bouzida, D.; Swendsen, R. H.; Kollman, P. A. The Weighted Histogram Analysis Method for Free-Energy Calculations on Biomolecules (1992)
  7. Molecular Dynamics-Based Approaches for Enhanced Sampling of Biomolecules (2009)
  8. Observation Time Scale, Free-Energy Landscapes, and Molecular Simulation
  9. GROMACS Tutorial: Umbrella Sampling
  10. Methods for Calculating Potentials of Mean Force
  11. Umbrella Sampling: Choice of Force Constant and Window Spacing (forum discussion, 2024)
  12. callumjd/AMBER-Umbrella_COM_restraint_tutorial (GitHub)
  13. Convergence and Sampling in Determining Free Energy Landscapes (2016)
  14. Efficient Estimation of Rare-Event Kinetics. Phys. Rev. X (2016)
  15. Kästner, J. Umbrella Sampling. Wiley Interdisciplinary Reviews: Computational Molecular Science (2011)
  16. Theory of Adaptive Optimization for Umbrella Sampling (2014)
  17. Reaction Coordinates Are Optimal Channels of Energy Flow (PMC)
  18. Sampling Free Energy Surfaces as Slices by Combining Umbrella Sampling and Metadynamics (2016)
  19. Efficient Determination of Free Energy Landscapes in Multiple Dimensions (NIH)
  20. An Experimentally Guided Umbrella Sampling Protocol for Biomolecules (2008)
  21. Combine Umbrella Sampling with Integrated Tempering Method