
Molecular dynamics

Molecular dynamics (MD) is a computational simulation technique used to study the time-dependent behavior of atomic and molecular systems by numerically solving Newton's equations of motion, providing detailed insights into the physical movements of particles at the atomic scale. These simulations model interactions via empirical force fields that approximate potential energy surfaces, enabling the prediction of trajectories for systems ranging from small molecules to large biomolecules like proteins in explicit solvent environments. The origins of MD trace back to the mid-1950s, when Alder and Wainwright developed early simulations of hard-sphere gases to explore phase transitions, marking the birth of the method as a tool for statistical mechanics. Key milestones include Rahman's 1964 simulation of liquid argon using Lennard-Jones potentials, which demonstrated MD's potential for liquids, and the 1971 study of liquid water by Rahman and Stillinger, revealing hydrogen bonding networks. The first MD simulation of a protein was achieved in 1977 by McCammon, Gelin, and Karplus, simulating the bovine pancreatic trypsin inhibitor and opening applications in biochemistry. This foundational work led to the 2013 Nobel Prize in Chemistry awarded to Martin Karplus, Michael Levitt, and Arieh Warshel for their development of multiscale models for complex chemical systems, including MD approaches that bridged quantum and classical mechanics. In practice, MD simulations integrate forces derived from force fields—such as CHARMM, AMBER, or OPLS—to propagate atomic positions at femtosecond time steps, often over nanosecond to microsecond timescales with modern hardware like GPUs. They are widely applied in studying protein folding, ligand binding, enzyme mechanisms, and membrane dynamics, complementing experimental techniques like X-ray crystallography and NMR spectroscopy by revealing dynamic processes inaccessible to static structures.
Limitations include the approximate nature of force fields, which do not account for electronic effects or chemical bond breaking without extensions, and the computational cost that restricts simulations to relatively short timescales compared to many biological events. Recent advances have significantly enhanced MD's scope and efficiency. As of 2025, software packages like OpenMM, LAMMPS, and GROMACS integrate machine learning (ML) potentials and advanced GPU acceleration for faster and more accurate simulations of large systems, such as viral particles or cellular environments, enabling longer timescales and rare-event sampling via techniques like weighted ensembles. Improved force fields, including OPLS/2020 for organic molecules and polarizable models augmented with artificial neural networks, better capture long-range interactions and enhance predictive power for biomolecular recognition and ligand binding. These developments, combined with enhanced sampling methods and AI-driven analysis, have expanded MD applications to drug discovery, materials design, and even plant stress responses, making it an indispensable tool in computational chemistry and structural biology.

Historical Development

Early Foundations

The foundations of molecular dynamics trace back to the development of statistical mechanics in the late 19th century, where theoretical frameworks began to describe the behavior of large ensembles of particles through probabilistic methods. Ludwig Boltzmann's H-theorem, introduced in 1872, provided a kinetic foundation for understanding irreversibility and the approach to equilibrium in gaseous systems by demonstrating how the H-function, defined as an integral over the velocity distribution, monotonically decreases over time, aligning molecular chaos assumptions with the second law of thermodynamics. This theorem laid essential groundwork for linking microscopic dynamics to macroscopic thermodynamic properties, influencing later approaches. Complementing Boltzmann's work, J. Willard Gibbs formalized the concept of statistical ensembles in his 1902 treatise on statistical mechanics, positing that the time average of a system's observables equals the ensemble average over phase space, thereby justifying the use of probability distributions to represent dynamical evolution. By the mid-20th century, the conceptual shift toward numerical exploration of these ideas emerged through early computational experiments that integrated classical equations of motion for many-particle systems. The 1955 Fermi-Pasta-Ulam experiment at Los Alamos represented a pioneering numerical investigation, simulating a chain of 64 nonlinearly coupled oscillators to test equipartition and ergodicity; unexpectedly, the system exhibited near-recurrence rather than rapid thermalization, highlighting challenges in the numerical study of chaotic dynamics and foreshadowing the computational demands of molecular simulations. This work demonstrated the feasibility of digitally evolving trajectories for dozens of particles using early computers like the MANIAC, bridging theoretical statistical mechanics with practical computation. In the 1950s, research transitioned toward simulation-oriented models for complex fluids, particularly through hard-sphere approximations that simplified interactions to instantaneous collisions, enabling the study of liquid structure without continuous potentials.
Researchers at Livermore, including Berni Alder and Thomas Wainwright, explored these models to investigate phase behavior in dense systems, revealing a fluid-solid transition around a packing fraction of 0.49 through numerical methods that anticipated dynamical simulations. This era marked a conceptual pivot from purely analytical treatments to discrete-event computations, setting the stage for time-dependent trajectory generation in molecular systems. At the core of these developments lies Newton's second law of motion, which governs the evolution of atomic trajectories in classical systems via the equation \mathbf{F} = m \mathbf{a}, where \mathbf{F} is the force on a particle of mass m and \mathbf{a} = d^2\mathbf{r}/dt^2 is its acceleration, with \mathbf{r} as position; forces derived from interatomic potentials thus dictate the deterministic paths that, when averaged, yield statistical properties. This formulation underpins the integration schemes essential for simulating molecular dynamics, connecting Newtonian mechanics directly to the phase-space explorations envisioned by Gibbs.

Key Milestones and Contributors

The pioneering molecular dynamics (MD) simulations began in 1957 with the work of Berni J. Alder and Thomas E. Wainwright, who performed the first computations using a hard-sphere model at the Lawrence Radiation Laboratory, University of California, Livermore, demonstrating phase transitions in simple particle systems. This approach laid the groundwork for exact numerical solution of the classical equations of motion for interacting particles, marking the birth of MD as a computational method. In 1964, Aneesur Rahman conducted the first realistic MD simulation of liquid argon using a Lennard-Jones potential for 864 atoms, validating MD's ability to reproduce experimental properties like radial distribution functions and establishing it as a tool for studying condensed-phase systems. In 1967, Loup Verlet introduced the Verlet integration scheme, a time-reversible method widely used for its stability in MD trajectories. The 1970s and 1980s saw the expansion of MD to biomolecular systems, driven by Martin Karplus, Michael Levitt, and Arieh Warshel, who developed early simulations of proteins and enzymes, such as the 1977 MD trajectory of bovine pancreatic trypsin inhibitor, enabling insights into conformational dynamics and reaction mechanisms. A key earlier advance was the 1971 simulation of liquid water by Rahman and Frank H. Stillinger, which revealed the structure of hydrogen bonding networks in a realistic molecular liquid. Their pioneering approaches, combining classical MD with quantum mechanical methods, earned Karplus, Levitt, and Warshel the 2013 Nobel Prize in Chemistry for advancing computational studies of complex chemical systems. In the 1990s, precursors to modern acceleration emerged through parallel-computing adaptations, exemplified by the development of NAMD software in 1995, which enabled scalable MD simulations of large biomolecules on distributed-memory parallel machines, paving the way for handling systems beyond thousands of atoms. Entering the 2020s, integration of machine learning has revolutionized force fields, with potentials trained on quantum data achieving near ab initio accuracy at classical speeds, as reviewed in applications for drug discovery and materials design.
By 2025, routine MD simulations encompass million-atom systems over microsecond timescales, facilitated by specialized hardware like Anton 3 and GPU clusters.

Fundamental Principles

Phase Space and Molecular Trajectories

In molecular dynamics simulations, phase space represents the complete microscopic state of a classical system of N particles in three dimensions, forming a 6N-dimensional space where each particle is characterized by three position coordinates and three momentum coordinates. This multidimensional framework encapsulates all possible states of the system, with a single point in phase space denoting the instantaneous positions \mathbf{r}_i and momenta \mathbf{p}_i for all particles i = 1, \dots, N. The concept originates from classical statistical mechanics and was foundational to early computational simulations of interacting particle systems. Molecular trajectories are deterministic paths traced through this phase space over time, generated by solving the classical equations of motion for the particles. These trajectories consist of discrete sequences of positions and velocities, approximating the continuous evolution of the system under interatomic forces. The dynamics are conservative and governed by Hamilton's equations derived from the total energy, expressed as the Hamiltonian H = T + V, where T = \sum_{i=1}^N \frac{\mathbf{p}_i^2}{2m_i} is the kinetic energy (with m_i as the mass of particle i) and V(\mathbf{r}_1, \dots, \mathbf{r}_N) is the potential energy depending on particle positions. This formulation ensures that the time evolution follows Liouville's theorem, preserving the phase space volume for ergodic systems. To initiate a simulation, initial conditions are selected by randomly sampling positions and momenta from appropriate equilibrium distributions, such as the Boltzmann distribution for configurations and the Maxwell-Boltzmann distribution for velocities, ensuring the trajectory represents statistically relevant behavior under specified thermodynamic conditions. In isolated systems without external forces or fields, the dynamics obey fundamental conservation laws: total energy remains constant due to the time-independent Hamiltonian, linear momentum is conserved if the total force sums to zero, and angular momentum is preserved in the absence of external torques. These properties validate the physical accuracy of trajectories in microcanonical simulations of closed systems.
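Velocity initialization from the Maxwell-Boltzmann distribution can be sketched in a few lines; in this illustrative Python snippet (particle count, mass, and target temperature are arbitrary example values), each Cartesian component is drawn as a Gaussian with variance k_B T / m, the net momentum is removed, and the equipartition estimate T = 2⟨KE⟩/(3Nk_B) is recovered:

```python
import random
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def init_velocities(n_atoms, mass, temperature, seed=0):
    """Draw velocities from the Maxwell-Boltzmann distribution: each
    Cartesian component is Gaussian with variance k_B T / m."""
    rng = random.Random(seed)
    sigma = math.sqrt(K_B * temperature / mass)
    v = [[rng.gauss(0.0, sigma) for _ in range(3)] for _ in range(n_atoms)]
    # Remove net linear momentum so the centre of mass stays fixed.
    for k in range(3):
        mean_k = sum(vi[k] for vi in v) / n_atoms
        for vi in v:
            vi[k] -= mean_k
    return v

def instantaneous_temperature(v, mass):
    """Equipartition estimate T = 2<KE> / (3 N k_B)."""
    ke = 0.5 * mass * sum(x * x for vi in v for x in vi)
    return 2.0 * ke / (3.0 * len(v) * K_B)
```

For ten thousand argon-like atoms (mass ~6.63e-26 kg) at 300 K, the equipartition estimate recovers the target temperature to within a few kelvin of statistical fluctuation.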

Thermodynamic Ensembles and Constraints

In molecular dynamics (MD) simulations, thermodynamic ensembles define the statistical conditions under which the system evolves, ensuring that the generated trajectories sample the appropriate distribution corresponding to physical constraints like fixed energy, temperature, or pressure. These ensembles bridge deterministic Newtonian dynamics with statistical mechanics, allowing computed properties to match experimental observables. The choice of ensemble depends on the simulation goal: conserving exact microscale dynamics or replicating macroscopic conditions. The microcanonical ensemble, denoted NVE, maintains a constant number of particles N, volume V, and total energy E, modeling an isolated system where energy is conserved exactly through the dynamics. This ensemble naturally arises in standard MD integration without external control, as pioneered in early hard-sphere simulations, providing trajectories that strictly adhere to the constant-energy hypersurface in phase space. It is ideal for studying intrinsic dynamics but limits sampling to the narrow energy shell, potentially missing ergodic exploration in complex systems. To mimic experimental conditions at constant temperature, the canonical ensemble (NVT) fixes N, V, and T, requiring a thermostat to couple the system to a heat bath. Temperature is controlled by relating it to the average kinetic energy (KE) via the equipartition theorem: T = \frac{2}{3 N k_B} \langle \text{KE} \rangle, where k_B is Boltzmann's constant and the angular brackets denote a time average. A widely used deterministic thermostat is the Nosé-Hoover method, which introduces a virtual bath variable to rescale velocities and generate the canonical distribution without stochastic perturbations. This approach preserves reversibility and is computationally efficient for biomolecular systems. For simulations under constant pressure, the isothermal-isobaric (NPT) ensemble fixes N, P, and T, allowing volume fluctuations to match experimental densities. The Parrinello-Rahman barostat extends the Nosé-Hoover framework by treating the simulation cell as a dynamic variable, coupling it to an extended Lagrangian that enforces pressure control through cell shape and size adjustments.
This method is particularly effective for studying phase transitions and condensed-phase properties, as it couples naturally with thermostats to sample the NPT ensemble. In addition to ensembles, constraints in MD simulations enforce geometric restrictions, such as fixed bond lengths or angles, to model rigid bonds (e.g., in water molecules or hydrogen-containing bonds) and enable larger integration time steps by removing high-frequency vibrations. Common algorithms include SHAKE, which iteratively solves Lagrange multipliers to satisfy constraints on positions, and RATTLE, an extension that also constrains velocities for better energy conservation. These methods are crucial for efficient simulations of biomolecules, where unconstrained high-frequency modes would require sub-femtosecond time steps. Beyond standard ensembles, generalized methods enhance sampling of rare events and rugged energy landscapes, which are challenging in NVE, NVT, or NPT due to free-energy barriers. Replica exchange molecular dynamics (REMD) runs multiple replicas at different temperatures, periodically exchanging configurations between adjacent replicas with an acceptance probability based on the Metropolis criterion to facilitate barrier crossing. Similarly, umbrella sampling biases the potential along a reaction coordinate using harmonic restraints in overlapping "windows," enabling free energy calculations via the weighted histogram analysis method after unbiased reweighting. These techniques improve convergence for processes like protein folding or ligand binding. In practice, the NVE ensemble preserves exact Newtonian trajectories for fundamental studies of intrinsic dynamics, whereas NVT and NPT ensembles, through thermostats and barostats, better replicate laboratory conditions like constant-temperature baths or atmospheric pressure, at the cost of slightly perturbing short-time dynamics.
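The replica-exchange acceptance rule can be made concrete with a short sketch; this hypothetical helper (reduced units with k_B = 1, an assumption for illustration) applies the Metropolis criterion Δ = (β_i − β_j)(E_i − E_j) for swapping configurations between two replicas:

```python
import math
import random

def remd_accept(e_i, e_j, t_i, t_j, k_b=1.0, rng=random):
    """Metropolis criterion for exchanging configurations between two
    replicas at temperatures t_i and t_j with instantaneous potential
    energies e_i and e_j. Acceptance probability:
    min(1, exp[(1/(k_b t_i) - 1/(k_b t_j)) * (e_i - e_j)])."""
    delta = (1.0 / (k_b * t_i) - 1.0 / (k_b * t_j)) * (e_i - e_j)
    # Always accept if delta >= 0; otherwise accept with probability exp(delta).
    return delta >= 0.0 or rng.random() < math.exp(delta)
```

When the colder replica holds the higher energy, the swap is always accepted (Δ ≥ 0); otherwise it succeeds only with Boltzmann-weighted probability, which is why adjacent temperatures must be close enough for their energy distributions to overlap.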

Force Fields and Interaction Potentials

Empirical and Pair Potentials

Empirical potentials, also known as classical force fields, are parameterized mathematical models used in molecular dynamics simulations to approximate the potential energy surface of molecular systems. These potentials are derived primarily from experimental data, such as spectroscopic measurements and thermodynamic properties, supplemented by quantum mechanical calculations to fit parameters for atomic interactions. Widely adopted examples include the AMBER force field, originally developed for simulations of proteins and nucleic acids, which relies on all-atom representations with parameters fitted to reproduce structural and energetic features of biomolecules. Similarly, the CHARMM force field was introduced to model macromolecular dynamics, emphasizing compatibility with experimental observables like bond lengths and vibrational frequencies. The OPLS force field, optimized for liquid simulations, extends to biomolecules through its all-atom variant (OPLS-AA), focusing on accurate reproduction of hydration free energies and conformational equilibria. A hallmark of these empirical potentials is their pairwise additive nature, where the total potential energy is computed as the sum of interactions between all pairs of atoms, excluding many-body effects to enhance computational efficiency. Non-bonded interactions, which dominate intermolecular forces, are typically modeled using pair potentials that capture van der Waals and electrostatic contributions. The van der Waals term is commonly represented by the Lennard-Jones potential, which balances a repulsive core and an attractive dispersion tail: V_{\text{LJ}}(r) = 4\epsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^6 \right] Here, \epsilon denotes the well depth (interaction strength), \sigma is the finite distance at which the potential is zero, and r is the interatomic distance; this form was originally proposed to model interactions in noble gases and remains standard in biomolecular force fields.
Electrostatic interactions are handled via the Coulomb potential: V_{\text{C}}(r) = \frac{q_i q_j}{4\pi \epsilon_0 r} where q_i and q_j are atomic partial charges, \epsilon_0 is the vacuum permittivity, and r is the separation; charges are fixed parameters derived from quantum calculations or empirical fitting. To maintain molecular topology, bonded interactions are included as pair or multi-body terms within a fixed connectivity framework. Bond stretching is modeled by a harmonic potential: V_b(r) = \frac{1}{2} k_b (r - r_0)^2 with force constant k_b and equilibrium length r_0, approximating the quantum harmonic oscillator for small vibrations around equilibrium. Angle bending uses a similar harmonic form, V_\theta(\theta) = \frac{1}{2} k_\theta (\theta - \theta_0)^2, while dihedral torsions employ a periodic potential, V_\phi(\phi) = \sum_n k_{\phi,n} [1 + \cos(n\phi - \gamma_n)], to capture rotational barriers along backbone chains. These terms ensure structural integrity during simulations of biomolecules like proteins and DNA. Despite their efficiency and broad applicability, empirical pair potentials have notable limitations, including the assumption of fixed atomic charges that neglect electronic polarization, which can lead to inaccuracies in polar environments or with highly charged systems.
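The non-bonded and bonded terms above are simple closed forms; this minimal Python sketch evaluates each contribution under illustrative unit conventions (kcal/mol and angstroms, with the Coulomb prefactor 1/(4πε₀) ≈ 332.06 kcal·Å/(mol·e²) — the specific constants here are assumptions, not tied to any particular force field's parameter set):

```python
def lj_energy(r, epsilon, sigma):
    """12-6 Lennard-Jones: V = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def coulomb_energy(r, q_i, q_j, ke=332.06):
    """Coulomb term ke * q_i * q_j / r, with ke the prefactor 1/(4 pi eps0)
    expressed in kcal*angstrom/(mol*e^2)."""
    return ke * q_i * q_j / r

def harmonic_bond(r, k_b, r0):
    """Bond-stretch term V = 0.5 * k_b * (r - r0)^2."""
    return 0.5 * k_b * (r - r0) ** 2
```

Two quick checks follow directly from the formulas: the LJ potential vanishes at r = σ and reaches its minimum value of −ε at r = 2^{1/6}σ, the positions that give the parameters their physical meaning.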

Many-Body, Semi-Empirical, and Polarizable Potentials

Many-body potentials extend beyond pairwise interactions by incorporating the influence of multiple neighboring atoms on the local environment of each atom, enabling more accurate modeling of collective effects in materials like metals. A prominent example is the embedded atom method (EAM), developed by Daw and Baskes in 1984, which describes the total energy of a system as the sum of pairwise interactions and an embedding energy that depends on the local electron density contributed by all neighboring atoms. In EAM, the embedding function F(\rho) represents the energy required to embed an atom into an electron density \rho, where \rho_i = \sum_{j \neq i} f_j(r_{ij}) and f_j is the electron density contribution from atom j at distance r_{ij}. This approach has been widely applied to simulate defects, surfaces, and mechanical properties in metallic systems, capturing many-body effects such as angular dependencies implicitly through the density. Semi-empirical potentials bridge classical force fields and quantum mechanics by incorporating approximate quantum mechanical principles with empirical parameterization, allowing for the simulation of chemical reactivity without full quantum calculations. These often rely on tight-binding approximations, where the Hamiltonian is constructed from parameterized overlap and hopping integrals derived from Hückel theory or density functional tight-binding (DFTB) methods, enabling efficient treatment of electronic structure changes during bond formation and breaking. A key example is the ReaxFF reactive force field, introduced by van Duin and colleagues in 2001, which uses a bond-order formalism to model variable connectivity in hydrocarbons and other systems, with parameters fitted to quantum mechanical data for bond dissociation and reaction pathways. ReaxFF has facilitated large-scale simulations of combustion, catalysis, and biomolecular reactions by dynamically adjusting valence terms based on interatomic distances.
Polarizable potentials account for the redistribution of electron density in response to external fields, addressing limitations of fixed-charge models in environments with varying electrostatics, such as polar solvents or ionic solutions. One common implementation uses Drude oscillators, where each polarizable atom is augmented with a charged auxiliary particle harmonically bound to the core atom, mimicking electronic polarization through the displacement of this "Drude particle" in an electric field. Another approach employs induced atomic dipoles, as in the AMOEBA (Atomic Multipole Optimized Energetics for Biomolecular Applications) force field, developed by Ponder, Ren, and coworkers starting in the early 2000s, which uses higher-order multipoles and self-consistent induced dipoles to compute polarization effects. In these models, the induction contribution to the potential energy is expressed as V_{\text{ind}} = -\frac{1}{2} \sum_i \boldsymbol{\mu}_i \cdot \mathbf{E}_i, where \boldsymbol{\mu}_i is the induced dipole moment on site i and \mathbf{E}_i is the total electric field at that site, excluding self-interaction. This term is solved iteratively or approximately to capture mutual polarization between atoms. These advanced potentials offer significant improvements over simple pairwise models by better reproducing experimental properties in complex scenarios: EAM enhances accuracy for metallic cohesion and defect energies compared to pair potentials, ReaxFF enables reactive simulations with reaction barriers matching quantum results within 5-10 kcal/mol, and polarizable models like AMOEBA and Drude-based force fields reduce errors in solvation free energies from several kcal/mol in fixed-charge simulations to under 1 kcal/mol for polar solutes.
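The mutual polarization solved "iteratively" can be illustrated on a toy system: two identical isotropic polarizable sites on the z-axis in a uniform axial field, where each induced dipole contributes the axial field 2μ/r³ at the other site. This is a simplified self-consistency sketch in reduced units, not the full multipole machinery of AMOEBA or a Drude model:

```python
def induced_dipoles(alpha, e0, r, n_iter=100):
    """Self-consistently iterate the induced dipoles of two identical
    isotropic polarizable sites (polarizability alpha) separated by r on
    the z-axis, in a uniform external field e0 along z. Each dipole feels
    the axial dipole field 2*mu/r^3 of its partner:
        mu = alpha * (e0 + 2 * mu_other / r**3)."""
    mu1 = mu2 = 0.0
    for _ in range(n_iter):
        mu1 = alpha * (e0 + 2.0 * mu2 / r ** 3)
        mu2 = alpha * (e0 + 2.0 * mu1 / r ** 3)
    return mu1, mu2
```

By symmetry the iteration converges to the analytic fixed point μ = αE₀ / (1 − 2α/r³) whenever the coupling 2α/r³ is below one; mutual induction enhances each dipole beyond the isolated-site value αE₀, the effect fixed-charge models miss.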

Ab Initio, Hybrid QM/MM, and Machine Learning Potentials

Ab initio molecular dynamics (AIMD) simulations compute potential energy surfaces and forces on-the-fly using quantum mechanical methods, enabling the study of chemical reactions and electronic effects without precomputed potentials. In the Born-Oppenheimer molecular dynamics (BOMD) approach, the electronic structure is solved iteratively at each nuclear configuration to obtain the ground-state energy, assuming the separation of nuclear and electronic timescales as per the Born-Oppenheimer approximation. Forces on the nuclei are derived from the Hellmann-Feynman theorem, given by \mathbf{F}_I = -\frac{\partial E}{\partial \mathbf{R}_I}, where E is the electronic energy and \mathbf{R}_I is the position of nucleus I. This method, often implemented with density functional theory (DFT) or Hartree-Fock, provides high accuracy for small systems but is computationally intensive, limiting simulations to hundreds of atoms over picoseconds. A seminal advancement in AIMD is the Car-Parrinello method (CPMD), which treats electronic wavefunctions as dynamical variables propagated alongside nuclear coordinates using a fictitious electronic mass, avoiding repeated electronic self-consistency at each step. This unified scheme combines molecular dynamics with DFT, allowing efficient exploration of potential energy surfaces for systems up to a few hundred atoms. CPMD has been pivotal in simulating liquid water and proton transfer processes, demonstrating near-quantum accuracy at reduced cost compared to BOMD. Hybrid quantum mechanics/molecular mechanics (QM/MM) methods address the scalability limitations of full ab initio approaches by treating a small reactive region quantum mechanically while modeling the surrounding environment with classical molecular mechanics. In QM/MM, the total energy is partitioned as E_{\text{total}} = E_{\text{QM}} + E_{\text{MM}} + E_{\text{QM-MM}}, where the coupling term accounts for electrostatic interactions across the boundary.
This enables simulations of large biomolecules, such as enzyme active sites, with quantum-level detail in the QM subsystem (typically 10-100 atoms) and empirical efficiency elsewhere. The approach was pioneered by Warshel and Levitt in 1976 for enzymatic reactions, illustrating dielectric and steric effects in lysozyme catalysis. The ONIOM (Our own N-layered Integrated molecular Orbital and Molecular Mechanics) method extends hybrid modeling by allowing multiple layers of differing accuracy, such as high-level QM for the core, low-level QM or semi-empirical methods for an intermediate shell, and molecular mechanics for the exterior, extrapolated via a subtractive scheme. Widely implemented in software like Gaussian, ONIOM facilitates geometry optimizations and energy calculations for complex organometallic systems, balancing accuracy and cost for reactions involving transition metals. Machine learning (ML) potentials represent a data-driven paradigm for approximating quantum mechanical energies and forces, trained on datasets from ab initio calculations to achieve near-DFT accuracy at classical force field speeds. These models, often using neural networks or Gaussian processes, enable predictions in O(1) time per atom, scaling to large systems. High-dimensional neural network potentials (HDNNPs) decompose interactions into atomic contributions, capturing many-body effects for elements like silicon and water. Representative examples include the SchNet architecture, a continuous-filter convolutional network that learns rotationally invariant features from atomic environments for molecular properties and dynamics. Similarly, the ANI potential employs atom-centered symmetry functions to train transferable neural networks on DFT data for organic molecules, achieving chemical accuracy (<1 kcal/mol) across diverse conformations. By 2025, advanced machine-learned force fields, such as equivariant graph neural networks like NequIP and MACE, have facilitated simulations of million-atom systems, including large biomolecular assemblies in cellular-like environments, with quantum-like fidelity at high computational efficiency.
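The subtractive extrapolation underlying two-layer ONIOM combines three separate single-point energies; a trivial sketch with placeholder energy values (purely illustrative numbers, not results from any real calculation) makes the bookkeeping explicit:

```python
def oniom2_energy(e_high_model, e_low_real, e_low_model):
    """Two-layer subtractive ONIOM: the model (core) region is treated at
    the high level, the real (full) system at the low level, and the
    doubly counted low-level model energy is subtracted:
        E(ONIOM) = E_high(model) + E_low(real) - E_low(model)."""
    return e_high_model + e_low_real - e_low_model

# Placeholder energies (hartree) purely to illustrate the arithmetic.
e_total = oniom2_energy(e_high_model=-154.80,
                        e_low_real=-310.25,
                        e_low_model=-154.55)
```

The subtraction means the low-level method only needs to describe the environment and the core-environment coupling, while the chemically critical core inherits high-level accuracy.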

Simulation Methods

Integration Algorithms and Time Evolution

In molecular dynamics (MD) simulations, integration algorithms numerically solve Newton's second law of motion to evolve the positions and velocities of atoms over discrete time steps, generating trajectories that approximate continuous dynamics. These methods must balance computational efficiency, numerical stability, and preservation of physical properties such as energy conservation and phase-space volume. The choice of algorithm significantly impacts the accuracy and duration of simulations, particularly for systems with high-frequency vibrations or constraints. The Verlet algorithm, introduced by Loup Verlet in 1967, forms the cornerstone of MD integration due to its simplicity and robustness. It propagates positions using a central difference scheme derived from Taylor expansions, given by \mathbf{r}(t + \Delta t) = 2\mathbf{r}(t) - \mathbf{r}(t - \Delta t) + \frac{\mathbf{F}(t)}{m} \Delta t^2, where \mathbf{r} is the position, \mathbf{F} is the force, m is the mass, and \Delta t is the time step; velocities are optionally computed post-update as \mathbf{v}(t) \approx [\mathbf{r}(t + \Delta t) - \mathbf{r}(t - \Delta t)] / (2 \Delta t). This method is symplectic, meaning it preserves the geometric structure of Hamiltonian systems, leading to bounded energy fluctuations over long simulations and exact time-reversibility when forces are time-independent. A widely adopted extension is the velocity Verlet algorithm, developed by Swope, Andersen, Berens, and Wilson in 1982, which explicitly updates both positions and velocities in a staggered manner for improved compatibility with velocity-dependent thermostats and constraints. The steps are:
  1. \mathbf{v}(t + \Delta t / 2) = \mathbf{v}(t) + [\mathbf{F}(t) / (2m)] \Delta t,
  2. \mathbf{r}(t + \Delta t) = \mathbf{r}(t) + \mathbf{v}(t + \Delta t / 2) \Delta t,
  3. Compute \mathbf{F}(t + \Delta t) at the new position,
  4. \mathbf{v}(t + \Delta t) = \mathbf{v}(t + \Delta t / 2) + [\mathbf{F}(t + \Delta t) / (2m)] \Delta t.
Like the original Verlet, it is second-order accurate (global error O(\Delta t^2)) and symplectic, enabling stable simulations with time steps up to about 1 fs for typical systems without constraints. Higher-order integrators, such as the Gear predictor-corrector method, offer greater accuracy for stiff systems with disparate timescales by using multi-step predictions and corrections based on Adams-Bashforth and Adams-Moulton formulas, typically up to fifth order. The predictor estimates positions and derivatives from prior steps, followed by correction using current forces; for example, the fourth-order version minimizes truncation errors for oscillatory modes. However, these methods are less common in modern MD due to their sensitivity to perturbations, which can cause energy drift in microcanonical ensembles over extended runs, and their violation of symplecticity. Time step selection is critical for balancing accuracy and efficiency: \Delta t must resolve the fastest motions, such as bond-stretching vibrations (~10 fs period), typically limiting unconstrained all-atom simulations to 0.5–1 fs to keep energy drift below 0.1% over nanoseconds. Constraints like SHAKE, introduced by Ryckaert, Ciccotti, and Berendsen in 1977, enforce rigid bonds (e.g., C-H) via Lagrange multipliers, allowing \Delta t \approx 2 fs while maintaining stability. Reversible integrators, exemplified by the Verlet family, ensure that trajectories can be integrated backward to recover prior states exactly, preserving phase-space volume (symplecticity) and enabling long-term stability in thermodynamic ensembles. Extensions to extended systems, such as those by Martyna, Tuckerman, and Tobias in 1996, maintain reversibility for Nosé-Hoover thermostats by symmetrizing updates, crucial for canonical sampling.
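The four velocity Verlet steps above map directly to code. This minimal sketch integrates a single one-dimensional harmonic oscillator (unit mass and force constant chosen for illustration) and shows the bounded energy error characteristic of symplectic integrators:

```python
def velocity_verlet(r, v, force, mass, dt, n_steps):
    """Propagate one particle with velocity Verlet:
    half-kick, drift, force update, half-kick."""
    f = force(r)
    for _ in range(n_steps):
        v += 0.5 * (f / mass) * dt   # step 1: half-step velocity update
        r += v * dt                  # step 2: full-step position update
        f = force(r)                 # step 3: forces at the new position
        v += 0.5 * (f / mass) * dt   # step 4: second half-step velocity update
    return r, v

# Harmonic oscillator F = -k r as a test system (k = m = 1, so the
# exact total energy of the initial state r=1, v=0 is 0.5).
k = 1.0
r, v = velocity_verlet(1.0, 0.0, lambda x: -k * x, mass=1.0, dt=0.01, n_steps=1000)
energy = 0.5 * v * v + 0.5 * k * r * r
```

After 1000 steps the total energy remains within a small bounded deviation of its initial value 0.5, rather than drifting — the practical payoff of symplecticity over naive or non-symplectic higher-order schemes.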

Short-Range and Long-Range Interaction Handling

In molecular dynamics simulations, interactions are typically divided into short-range and long-range components to balance computational efficiency and accuracy. Short-range interactions, such as those from Lennard-Jones (LJ) and other rapidly decaying potentials, can be truncated at a finite cutoff radius, usually around 1-2 nm, beyond which their contribution is negligible. To avoid recalculating all pairwise distances at every timestep, which would scale as O(N²) for N particles, neighbor lists are employed. These lists identify potential interacting pairs within a slightly larger "skin" or buffer region around the cutoff, reducing the number of distance computations. The Verlet neighbor list, a widely adopted method, constructs a list of neighbors for each particle by considering those within a radius r_c + r_s, where r_c is the cutoff and r_s is the skin thickness, typically chosen such that the list remains valid for 10-20 timesteps before rebuilding. This update frequency depends on the skin size and particle mobility; a larger skin reduces rebuilds but increases list size and memory usage. The approach ensures that missed interactions are prevented while minimizing overhead, making it essential for simulations of dense liquids or biomolecules. For long-range electrostatic interactions in periodic systems, simple truncation leads to significant errors due to the slow 1/r decay of Coulomb potentials, necessitating advanced summation techniques. The Ewald summation method splits the interaction into real-space (short-range, screened by a Gaussian) and reciprocal-space (long-range, via Fourier transform) components, converging both rapidly.
The potential ψ(r) at position r due to the periodic charge distribution is given by: \psi(\mathbf{r}) = \sum_{j} q_j \frac{\mathrm{erfc}(\alpha |\mathbf{r} - \mathbf{R}_j|)}{|\mathbf{r} - \mathbf{R}_j|} + \frac{4\pi}{\Omega} \sum_{\mathbf{k} \neq 0} \frac{1}{k^2} \exp(-k^2 / 4\alpha^2) \sum_{j} q_j \exp(i \mathbf{k} \cdot \mathbf{R}_j) \exp(-i \mathbf{k} \cdot \mathbf{r}), where erfc is the complementary error function, α is a tunable splitting parameter (often ~5-10 nm⁻¹), Ω is the unit cell volume, and the sums run over charge positions R_j and reciprocal lattice vectors k. A larger α makes the real-space sum converge faster but requires more reciprocal-space vectors, allowing the balance to be optimized for system size. To enhance efficiency for large systems, the particle-mesh Ewald (PME) method interpolates particle charges onto a uniform grid, computes the reciprocal sum via fast Fourier transforms (FFT), and back-interpolates forces, achieving O(N log N) scaling compared to the O(N²) of direct methods. PME uses B-spline interpolation for smooth charge spreading, reducing errors and enabling accurate simulations of systems with thousands of atoms, such as proteins in solution. Long-range van der Waals (dispersion) interactions, primarily from the attractive r⁻⁶ tail of LJ potentials in periodic boundaries, are handled via analytical tail corrections assuming uniform particle density beyond the cutoff. These corrections estimate the missing energy and pressure contributions as integrals over the truncated region, typically adding 1-5% to total energies in dense fluids. For a dispersion tail u(r) = -C_6 / r^6, the energy correction is U_tail = 2π N ρ ∫_{r_c}^∞ u(r) r² dr = -(2π/3) N ρ C_6 / r_c³, where ρ is the number density. In non-periodic systems, such as clusters or interfaces, multipolar expansions accelerate long-range computations by grouping distant charges into higher-order moments (dipole, quadrupole, etc.), evaluating interactions hierarchically in an octree structure.
The fast multipole method (FMM), scaling as O(N), translates multipole expansions between well-separated clusters using rotation and translation operators, avoiding explicit pairwise sums. This is particularly useful for vacuum or solvent-free simulations where periodic methods are inapplicable, offering speedups of 10-100x over direct summation for systems exceeding 10^4 particles.
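The analytic dispersion tail correction for an attractive -C_6/r^6 tail can be checked directly against numerical integration. A minimal sketch in Python, assuming reduced units; the values of N, ρ, C_6, and r_c in the usage below are illustrative, not taken from any particular system:

```python
import math

def lj_tail_correction(n_particles, density, c6, r_cut):
    """Analytic long-range correction for a truncated -C6/r^6 dispersion tail:
    U_tail = 2*pi*N*rho * integral_{r_c}^inf u(r) r^2 dr
           = -(2*pi/3) * N * rho * C6 / r_c**3
    The result is negative: the missing interactions are attractive."""
    return -(2.0 * math.pi / 3.0) * n_particles * density * c6 / r_cut ** 3

def tail_by_quadrature(n_particles, density, c6, r_cut,
                       r_max=100.0, steps=500_000):
    """Same quantity via midpoint-rule integration of 2*pi*N*rho*u(r)*r^2,
    truncating the rapidly vanishing integrand at r_max."""
    h = (r_max - r_cut) / steps
    total = 0.0
    for k in range(steps):
        r = r_cut + (k + 0.5) * h
        total += (-c6 / r ** 6) * r * r * h
    return 2.0 * math.pi * n_particles * density * total
```

For a dense fluid (say ρ = 0.8 and r_c = 2.5 in reduced units) the two agree to better than 0.1%, and the correction amounts to a few percent of typical total potential energies, consistent with the 1-5% figure quoted above.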

Solvent Effects and Coarse-Graining

In molecular dynamics simulations of solvated systems, explicit solvent models treat solvent molecules as discrete particles interacting via pairwise potentials, enabling the capture of specific solute-solvent hydrogen bonding and dynamic effects. Widely used all-atom representations include the TIP3P water model, which employs a three-site geometry with fixed bond lengths and angles, partial charges of +0.417 e on each hydrogen and -0.834 e on oxygen, and Lennard-Jones parameters tuned to reproduce liquid densities and radial distribution functions. The SPC/E (extended simple point charge) model is a rigid, three-site water model with partial charges of -0.8476 e on the oxygen and +0.4238 e on each hydrogen, adjusted to better reproduce experimental properties such as the self-diffusion coefficient and dielectric constant of liquid water. These models are kept rigid to simplify computations, with constraint algorithms enforcing the fixed geometry during integration. Implementing explicit solvent typically expands the total number of atoms by a factor of 10 to 100 compared to solute-only simulations, as a solvent box of several thousand water molecules is required for biomolecules like proteins. Implicit solvent models address the computational overhead of explicit representations by approximating the solvent as a structureless dielectric continuum with uniform properties, focusing on average solvation effects rather than molecular fluctuations. The Poisson–Boltzmann (PB) method solves the Poisson equation for the electrostatic potential in a dielectric medium with mobile ions, using the linearized form for low ionic strengths to compute the electrostatic contribution to the solvation free energy via integration over surface charges or potentials. This numerical approach, often implemented with finite-difference grids, accounts for the solute-solvent dielectric boundary through boundary conditions and provides reaction field forces for dynamics, though it neglects nonpolar cavity and dispersion terms that must be added empirically. The generalized Born (GB) model offers a faster, analytical alternative by extending the Born solvation energy of spherical ions to molecules through effective atomic radii.
The polar solvation energy in GB is approximated as \Delta G_{\rm GB} = -\frac{1}{2}\left(1 - \frac{1}{\epsilon}\right) \sum_{i,j} \frac{q_i q_j}{f_{\rm GB}(r_{ij})}, where q_i and q_j are partial charges, r_{ij} is the interatomic distance, \epsilon is the solvent dielectric constant, and f_{\rm GB}(r_{ij}) = \sqrt{r_{ij}^2 + R_i R_j \exp\left(-r_{ij}^2 / (4 R_i R_j)\right)} is a smooth interpolating function incorporating effective Born radii R_i and R_j that estimate each atom's distance from the molecular surface; the i = j terms recover the Born self-energies of the individual charges. This formulation efficiently mimics PB electrostatics during simulations, with nonpolar contributions often modeled via solvent-accessible surface area terms. Coarse-graining techniques reduce system complexity by grouping multiple atoms into single "beads" with effective interactions derived from higher-resolution data, allowing mesoscale simulations of solvent and solute over extended spatiotemporal scales. The MARTINI model exemplifies this by mapping roughly four heavy atoms to one bead, applying this four-to-one mapping to water, lipids, and proteins, with interaction parameters based on free-energy differences of partitioning between water and octanol phases to ensure transferability across biomolecular components. Beads are classified by chemical nature (polar, apolar, charged) to define Lennard-Jones and electrostatic forces, enabling simulations of processes like membrane formation with reduced degrees of freedom—typically by about a factor of 4—while preserving structural and thermodynamic properties. Hybrid multiscale methods bridge coarse-grained efficiency with all-atom accuracy by partitioning the system, such as using coarse-grained solvent surrounding an all-atom solute region, and employing iterative refinement or force-matching to reconcile scales. For example, protocols that backmap coarse-grained trajectories to all-atom configurations, followed by equilibration, facilitate the study of large solvated assemblies like protein-lipid complexes without full explicit solvation.
These approaches maintain solvent effects through boundary potentials or rescaling, supporting simulations up to millisecond timescales for systems exceeding 10^6 atoms.
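As a concrete illustration, the GB polar solvation energy in the widely used Still et al. functional form can be evaluated directly once effective Born radii are known. Computing those radii is the hard part in practice; in this minimal sketch they are simply supplied as inputs, and the charge, radius, and coordinate values in the test case are hypothetical.

```python
import math

def gb_polar_energy(charges, radii, coords, eps_solvent=78.5):
    """Generalized Born polar solvation energy (Still et al. form):
        dG = -0.5 * (1 - 1/eps) * sum_{i,j} q_i * q_j / f_GB(r_ij)
    with f_GB = sqrt(r^2 + R_i*R_j * exp(-r^2 / (4*R_i*R_j))).
    Units: charges in e, lengths in nm; the Coulomb constant
    ke = 138.935 kJ mol^-1 nm e^-2 gives energies in kJ/mol.
    The i == j terms (r = 0, f_GB = R_i) are the Born self-energies."""
    ke = 138.935
    pref = -0.5 * ke * (1.0 - 1.0 / eps_solvent)
    n = len(charges)
    energy = 0.0
    for i in range(n):
        for j in range(n):
            r2 = sum((coords[i][a] - coords[j][a]) ** 2 for a in range(3))
            rirj = radii[i] * radii[j]
            f_gb = math.sqrt(r2 + rirj * math.exp(-r2 / (4.0 * rirj)))
            energy += pref * charges[i] * charges[j] / f_gb
    return energy
```

A useful sanity check of the functional form: for a single charge, f_GB reduces to the Born radius and the expression collapses to the classic Born ion solvation energy.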

Specialized Techniques like Steered MD

Steered molecular dynamics (SMD) is a variant of molecular dynamics simulation designed to study non-equilibrium processes by applying external forces to specific atoms or groups, thereby driving the system along a predefined pathway at rates much faster than natural diffusion. The technique, introduced by Grubmüller et al. in 1996 to investigate ligand unbinding, allows exploration of reaction mechanisms and force profiles that are otherwise inaccessible due to long timescales. In SMD, the work W performed during the steered process is calculated as W = \int \mathbf{F} \cdot d\mathbf{x}, where \mathbf{F} is the external force and d\mathbf{x} is the displacement along the reaction coordinate. To extract equilibrium free energy differences \Delta G from these non-equilibrium trajectories, Jarzynski's equality is applied: \langle e^{-\beta W} \rangle = e^{-\beta \Delta G}, with \beta = 1/(k_B T), k_B Boltzmann's constant, and T the temperature; this relation, originally derived in 1997, enables thermodynamic insights from irreversible work distributions. Umbrella sampling enhances sampling of rare events by introducing biasing potentials, typically harmonic restraints, along a chosen reaction coordinate to flatten the free energy landscape and ensure uniform exploration. Developed by Torrie and Valleau in 1977, this method runs multiple simulations at overlapping windows of the coordinate, with the unbiased potential of mean force (PMF) reconstructed via weighted histogram analysis or similar techniques to combine data efficiently. It is particularly effective for computing free energy profiles in equilibrium settings, such as conformational transitions, by compensating for the bias post-simulation using the known form of the restraining potential. Metadynamics, proposed by Laio and Parrinello in 2002, addresses high free energy barriers by adaptively adding Gaussian-shaped "hills" to the potential as a function of selected collective variables, discouraging revisits to sampled regions and filling basins over time.
This collective-variable-driven approach reconstructs the underlying free energy surface as the negative of the accumulated bias, enabling the simulation of rare, activated processes without predefined paths. Refinements, such as well-tempered metadynamics, scale down hill deposition over time to converge faster and avoid overfilling. These techniques find applications in elucidating protein folding pathways, where enhanced sampling accelerates barrier crossing to reveal folding mechanisms and intermediates, and in ligand unbinding, where SMD quantifies rupture forces and binding affinities, as demonstrated in the seminal streptavidin-biotin dissociation study. For instance, SMD has been used to probe the mechanical unfolding of titin immunoglobulin domains, providing insights into force-dependent stability.

Applications and Challenges

Primary Areas of Application

Molecular dynamics (MD) simulations are extensively applied in materials science to investigate atomic-scale processes that govern material properties and behavior. In the study of graphene growth, MD has been used to model the formation of graphene layers on substrates, revealing how defects, such as 5-7 membered carbon rings, influence epitaxial growth mechanisms. For defects in metals and polymers, simulations elucidate defect generation during processes like nanoparticle compaction under high pressures, where atomic rearrangements lead to densification. Additionally, MD examines self-assembling peptides in polymers, providing insights into biomaterial formation and structural stability at the nanoscale. In catalysis, MD plays a crucial role in elucidating reaction mechanisms and catalytic processes at surfaces. Simulations have explained the kinetics of electrochemical reactions in electrocatalysis, such as oxygen reduction on metal surfaces, by capturing solvent dynamics and intermediate formation that traditional static methods overlook. For surface catalysis, MD combined with reactive potentials has modeled adsorption and desorption steps in heterogeneous catalysis, highlighting energy barriers and transition states in real-time atomic motions. These applications enable the design of more efficient catalysts by predicting how molecular interactions drive selectivity and activity. Biological applications of MD focus on biomolecular interactions and dynamics essential for understanding life processes. In protein-ligand binding, simulations predict binding affinities and conformational changes, aiding in the optimization of drug candidates by exploring allosteric sites and binding poses over nanosecond timescales. Membrane dynamics are probed through MD to model lipid bilayer fluidity and protein insertion, revealing how environmental factors influence channel function and transport. Drug design benefits from MD's ability to simulate receptor flexibility, as seen in relaxed complex schemes that account for induced-fit mechanisms in ligand optimization.
In physics, MD is employed to study fundamental properties of condensed matter systems. For liquids, simulations analyze radial distribution functions in water and ionic solutions, providing atomic-level details on hydrogen-bonding networks and solvation shells. Phase transitions are investigated under nanoconfinement, where MD reveals shifts in melting points and critical behavior influenced by confinement geometry and surface chemistry in fluids like water. Nanoscale flows, such as pressure-driven flow in nanochannels, are modeled to uncover slip boundaries and velocity profiles that deviate from continuum hydrodynamics. Despite these advances, MD simulations are constrained by computational limits as of 2025, typically accessing timescales from nanoseconds to microseconds with classical force fields, due to the femtosecond integration steps required for accuracy. System sizes are generally limited to about 10^6 atoms for routine all-atom simulations on modern hardware, though enhanced sampling techniques can extend effective timescales for specific processes. These boundaries restrict direct observation of slower biological or materials phenomena, necessitating hybrid methods for broader applicability.

Specific Examples and Case Studies

One prominent example in biomolecular simulations is the folding of the villin headpiece subdomain, a small protein of 35-36 amino acids known for its rapid folding kinetics on the microsecond timescale. In 2002, all-atom molecular dynamics simulations using an implicit solvent model successfully captured the folding pathway from extended conformations to the native structure, reproducing experimental NMR structures and folding rates within experimental error and demonstrating the capability of MD to model protein folding without prior knowledge of intermediates. These simulations highlighted the role of hydrophobic collapse and secondary structure formation in driving the process, providing early validation of MD for studying fast-folding proteins. Another key biomolecular application involves ion channel gating, where MD simulations elucidate the conformational changes enabling ion permeation. A landmark one-microsecond simulation of the prokaryotic pentameric ligand-gated ion channel GLIC in 2010 revealed its pH-stimulated gating mechanism, capturing transitions from closed to open states and identifying key residues involved in proton sensing and pore dilation, which aligned with crystallographic data and experimental electrophysiology. This work demonstrated how extended MD trajectories can resolve slow gating dynamics, informing models of eukaryotic channels like those in neuronal signaling. In nanomaterials, MD has been instrumental in characterizing the mechanical properties of carbon nanotubes (CNTs). Early simulations in 2001 using classical force fields computed the Young's modulus of single-walled CNTs as approximately 1 TPa, with tensile strengths exceeding 100 GPa before brittle fracture, attributing the high stiffness to sp² carbon bonding and revealing chirality-dependent variations in deformation behavior. These results established MD as a predictive tool for nanotube reinforcement in composites, guiding experimental synthesis and testing. For graphene, MD simulations have probed fracture behavior at the atomic scale.
A 2014 study combined experiments and MD to measure the fracture toughness of monolayer graphene at 4.0 ± 0.6 MPa·m^{1/2}, showing crack propagation via bond breaking under tension and validating continuum fracture theories at the nanoscale, with simulations illustrating how defects like Stone-Wales rotations influence energy dissipation during tearing. In lipid systems, MD simulations of bilayer self-assembly illustrate spontaneous organization in aqueous environments. A 2016 all-atom simulation demonstrated the self-assembly of phospholipids with negatively charged head groups, such as 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylserine, into stable bilayers over 10 μs, driven by hydrophobic interactions and head-group electrostatic repulsion, marking the first such observation for anionic lipids in all-atom MD and confirming phase behavior against neutron scattering data. This approach has advanced models of membrane formation in cellular processes. MD also plays a crucial role in simulating polymer electrolytes for lithium-ion batteries. Benchmark simulations in 2015 using OPLS-AA force fields on polyethylene oxide (PEO)-based electrolytes predicted ionic conductivities of 10^{-6} to 10^{-4} S/cm at 363 K, correlating polymer chain dynamics with lithium-ion diffusion coefficients around 10^{-7} cm²/s and highlighting the impact of salt concentration on amorphous structure and ion transport. These insights have informed the design of flexible, solid-state electrolytes with improved safety over liquid counterparts. The 2013 Nobel Prize in Chemistry recognized the development of multiscale modeling combining quantum mechanics/molecular mechanics (QM/MM) with MD for enzyme reactions. Pioneered by Karplus, Levitt, and Warshel, this hybrid approach treats the reactive enzyme active site quantum mechanically while surrounding regions use classical MD, enabling simulations of lysozyme catalysis that reproduced experimental rate enhancements of 10^6-fold through electrostatic stabilization of transition states.
Subsequent applications, such as QM/MM-MD of chorismate mutase, have quantified barrier reductions in enzyme active sites, bridging atomic details with reaction kinetics. In recent vaccine and antibody design, MD simulations generate conformational ensembles of protein antigens to predict immune epitopes. A 2024 study on the SARS-CoV-2 spike protein used MD to explore conformational ensembles beyond AlphaFold2 predictions, revealing enhanced ACE2 binding affinities for certain variants (K_d ≈ 7-13 nM) and mutational effects on antibody resistance, aiding understanding of viral fitness and the development of vaccines and therapeutics against emerging variants. Recent 2025 advances include machine learning-accelerated MD simulations of full viral capsids, enabling predictions of assembly dynamics and antiviral targeting.

Limitations and Current Frontiers

One of the primary limitations of molecular dynamics (MD) simulations is the timescale problem: rare events such as conformational transitions or ligand unbinding occur on millisecond-to-second scales, far beyond the microsecond timescales typically accessible with standard MD due to computational constraints. These infrequent processes require enhanced sampling techniques to accelerate exploration of conformational space and capture kinetic pathways effectively. Accuracy in MD is hindered by force field transferability issues, as empirical potentials often fail to generalize across diverse chemical environments, leading to systematic errors in predicted structures and dynamics. Additionally, classical force fields neglect quantum mechanical effects, particularly for light atoms like hydrogen, where zero-point energies and tunneling influence vibrational spectra and reaction barriers. This omission results in discrepancies between simulated and experimental observables, such as infrared spectra of biomolecules. The computational cost of MD scales as O(N²) for non-bonded interactions in large systems without advanced approximations, limiting simulations to systems with fewer than about a million atoms and restricting studies of complex assemblies like viruses or cellular components. At the frontiers of MD, integration with experimental techniques like cryo-electron microscopy (cryo-EM) enables validation and refinement of simulated structures, as seen in modeling transient states of HIV-1 capsids that align with resolved cryo-EM densities. Quantum MD approaches, incorporating nuclear quantum effects via path-integral methods, are advancing to better describe proton transfer and delocalization in enzymatic reactions. Exascale computing is pushing boundaries toward whole-cell models, such as simulations of the minimal cell JCVI-syn3A with over 1 million atoms, integrating the dynamics of metabolic networks and membrane proteins.
By 2025, persistent challenges include accurately quantifying entropic contributions in intrinsically disordered systems, where conformational heterogeneity complicates free energy landscapes and ensemble predictions. Sustainability concerns also arise from the energy-intensive nature of large-scale MD: supercomputing facilities emit CO₂ equivalents amounting to thousands of tons annually, prompting calls for greener algorithms and hardware-efficient methods.

Computational Implementation

Parallelization and Algorithmic Optimizations

Parallelization in molecular dynamics (MD) simulations is essential for handling large systems and long timescales, primarily through strategies that distribute the computational workload across multiple processors while minimizing communication overhead. Two fundamental approaches are domain decomposition and force decomposition, which enable efficient scaling on distributed-memory architectures using the Message Passing Interface (MPI). Domain decomposition, also known as spatial decomposition, partitions the simulation volume into subdomains assigned to individual processors, with each processor responsible for updating positions and computing forces for the atoms within its subdomain. Atoms near subdomain boundaries ("ghost" or "halo" atoms) require inter-processor communication to exchange positions and forces, ensuring accurate interactions across subdomain interfaces; this method excels in systems with uniform particle densities but can suffer load imbalances in heterogeneous environments. In contrast, force decomposition assigns subsets of pairwise force computations to processors rather than atoms or space, replicating particle positions across processors as needed for short-range interactions while using all-to-all communication for long-range forces like electrostatics. This approach reduces data replication for local interactions and leverages Newton's third law to avoid redundant calculations, making it particularly effective for short-range potentials in dense systems. Algorithmic optimizations further enhance efficiency by exploiting the separation of force timescales. Multiple-timestep methods, such as the reversible reference system propagator algorithm (r-RESPA), integrate fast-varying forces (e.g., bonded interactions) more frequently than slow ones (e.g., long-range non-bonded forces), reducing overall computational cost while maintaining time reversibility through a hierarchical propagator scheme. Popular MD software packages implement these strategies via hybrid parallelism combining MPI for inter-node distribution and OpenMP for intra-node threading.
GROMACS employs domain decomposition with MPI for large-scale parallelism and OpenMP for multi-core acceleration, achieving seamless multi-level parallelism across diverse hardware. Similarly, LAMMPS supports both domain and force decomposition through MPI-based distributed processing, augmented by OpenMP for shared-memory loops over local data, enabling flexible scaling for materials and biomolecular simulations. By 2025, these optimizations have enabled weak scaling in classical MD to over 65,000 CPU cores for systems exceeding 200 million atoms, with parallel efficiencies above 90%, demonstrating the feasibility of petascale simulations for complex biomolecular assemblies.
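The r-RESPA idea can be illustrated on a toy one-dimensional system. This is a minimal sketch, not any package's implementation: the force is split into a stiff "fast" part integrated with small velocity-Verlet substeps and a soft "slow" part applied as half-kicks at the outer timestep, and the spring constants and timesteps below are arbitrary illustrative values.

```python
def respa_step(x, v, dt_outer, n_inner, mass, f_fast, f_slow):
    """One reversible RESPA (r-RESPA) step for a 1D particle.
    The slow force is applied as half-kicks at the outer timestep, while
    the fast force is integrated with n_inner velocity-Verlet substeps
    of size dt_outer / n_inner, preserving time reversibility."""
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / mass      # slow half-kick
    for _ in range(n_inner):                    # fast velocity Verlet
        v += 0.5 * dt_inner * f_fast(x) / mass
        x += dt_inner * v
        v += 0.5 * dt_inner * f_fast(x) / mass
    v += 0.5 * dt_outer * f_slow(x) / mass      # slow half-kick
    return x, v
```

With a stiff spring (k = 100) as the fast force and a soft one (k = 1) as the slow force, the scheme conserves the total energy 0.5·v² + 0.5·101·x² to well under a percent over thousands of outer steps, even though the slow force is evaluated ten times less often than the fast one.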

Specialized Hardware and Accelerators

Graphics processing units (GPUs) have become a cornerstone for accelerating molecular dynamics (MD) simulations due to their ability to parallelize computationally intensive tasks such as force calculations. Frameworks like CUDA and OpenCL enable the offloading of pairwise interactions and short-range non-bonded computations to GPU cores, significantly reducing simulation times in popular software packages. For instance, the NVIDIA A100 GPU achieves performance of up to 720 ns/day on benchmarks with 20,000–100,000 atoms, outperforming other GPUs like the A40 by factors of 2–3x depending on system size and clock settings. Specialized application-specific integrated circuits (ASICs) represent a pinnacle of hardware optimization for MD, exemplified by the Anton machines developed by D.E. Shaw Research. Anton 3 employs custom silicon with thousands of processing pipelines for interaction potentials and bond calculations, coupled with high-bandwidth 3D torus interconnects exceeding 5 Tbps, enabling unprecedented timescales for biomolecular systems. A 512-node Anton 3 configuration delivers 121 μs/day for a 1 million-atom system and completes a 20 μs ribosome trajectory in approximately 5 hours, marking a breakthrough in accessing microsecond-scale dynamics. Compared to general-purpose hardware, a 64-node Anton 3 is 120x faster on a standard 24,000-atom benchmark. Field-programmable gate arrays (FPGAs) offer reconfigurability that allows tailoring of integrators and potential functions directly in hardware, making them suitable for niche applications requiring high throughput or low-power operation. By reprogramming logic units and lookup tables, FPGAs can implement custom algorithms like alternative integrators or potential forms without a full redesign, as demonstrated in clusters of Artix-7 devices achieving ~2.5 μs/day for 100-atom Lennard-Jones systems. These setups excel in unbiased simulations of rare events, reaching 1.9 μs in one day while consuming only about 9 W per node, surpassing GPU clusters in energy efficiency for small-scale, long-duration runs.
As of 2025, GPU accelerators are increasingly repurposed for machine learning–molecular dynamics (ML-MD) workflows, leveraging tensor cores to evaluate interatomic potentials at scale. GPUs integrated with libraries such as cuEquivariance enable end-to-end acceleration in tools such as LAMMPS, yielding 1.4x–3x speedups on multi-GPU setups for models like HIPPYNN, facilitating simulations of complex materials and proteins. Complementing this, quantum annealers enhance the sampling of rare transitions in MD by recasting path sampling as optimization problems; for example, D-Wave systems have generated uncorrelated transition paths for millisecond-scale protein conformational changes, matching classical Anton results while addressing challenges in under-sampled regimes. In comparison, GPUs provide versatile, general-purpose acceleration adaptable to diverse MD codes and parallel strategies, whereas Anton's MD-specific ASICs deliver a 20x per-chip speedup over A100 GPUs for optimized biomolecular workloads like the ACE2 protein, emphasizing the trade-off between flexibility and tailored performance.

References

  1. [1]
    Molecular dynamics simulation for all - PMC - PubMed Central - NIH
    Here we describe in practical terms the types of information MD simulations can provide and the ways in which they typically motivate further experimental work.
  2. [2]
    (PDF) The Growth of Molecular Dynamics - ResearchGate
    Molecular dynamics was thus born between 1956 and 1957. Berni Alder and Tom Wainwright proved the existence of a phase transition from the fluid to the solid ...
  3. [3]
    Molecular simulations: past, present, and future (a Topical Issue in ...
    Jan 5, 2022 · One of the aims of this Special Issue is to provide a general view of how MS is growing, complementing the nutshell description given above.
  4. [4]
    Recent Advances in Simulation Software and Force Fields
    Dec 12, 2024 · Recent advances in simulation software and force fields: their importance in theoretical and computational chemistry and biophysics.Author Information · References
  5. [5]
    Boltzmann's Work in Statistical Physics
    Nov 17, 2004 · The 1872 paper contained the Boltzmann equation and the H-theorem. Boltzmann claimed that the H-theorem provided the desired theorem from ...
  6. [6]
    [PDF] Boltzmann equation and H-theorem - Heidelberg University
    In 1872, Ludwig Boltzmann published a comprehensive paper in order to rigorously es- tablish the second law of thermodynamics. This paper introduced two key ...
  7. [7]
    Ergodic theorem, ergodic theory, and statistical mechanics - PNAS
    Feb 17, 2015 · Gibbs in his 1902 work (5) argued for his version of the hypothesis based on the fact that using it gives results consistent with experiments.
  8. [8]
    Boltzmann's Ergodic Hypothesis - jstor
    Gibbs, J. W. (1902) Elementary Principles in Statistical Mechanics, as reprinted by Dover,. New York 1960. Grad, H. (1952) ...
  9. [9]
    [PDF] The Fermi-Pasta-Ulam "numerical experiment"
    The Fermi-Pasta-Ulam (FPU) pioneering numerical experiment played a major role in the history of computer simulation because it introduced this concept for the ...
  10. [10]
    (PDF) The Fermi-Pasta-Ulam 'numerical experiment' - ResearchGate
    Aug 6, 2025 · The pioneering Fermi–Pasta–Ulam (FPU) numerical experiment played a major role in the history of computer simulation because it introduced ...
  11. [11]
    Berni Alder and the pioneering times of molecular simulation
    Jul 25, 2018 · This led to the confirmation of the existence of a phase transition in a system of hard spheres in the late 50s, and was followed by the ...
  12. [12]
    [PDF] Phase Transition for a Hard Sphere System
    Aug 13, 2004 · This paper discusses the Monte Carlo method in some detail, as well as giving computational results for Lennard-Jones molecules. 6 Kirkwood, ...
  13. [13]
    [PDF] Molecular dynamics simulation
    Oct 6, 2015 · • Newton's second law: F = ma. – where F is force on an atom, m is ... • We can thus write the equations of motion as: 13. F(x) = −∇U(x).
  14. [14]
    Studies in Molecular Dynamics. I. General Method - AIP Publishing
    A method is outlined by which it is possible to calculate exactly the behavior of several hundred interacting classical particles.
  15. [15]
    Correlations in the Motion of Atoms in Liquid Argon | Phys. Rev.
    A system of 864 particles interacting with a Lennard-Jones potential and obeying classical equations of motion has been studied on a digital computer.
  16. [16]
    The Nobel Prize in Chemistry 2013 - NobelPrize.org
    The Nobel Prize in Chemistry 2013 was awarded jointly to Martin Karplus, Michael Levitt and Arieh Warshel for the development of multiscale models for complex ...Missing: biomolecular 1970s 1980s
  17. [17]
    NAMD: a Parallel, Object-Oriented Molecular Dynamics Program
    NAMD is a molecular dynamics program designed for high performance simulations of large biomolecular systems on parallel computers.<|separator|>
  18. [18]
    Anton 3 | PSC
    The third-generation Anton is the first and only resource available to the community capable of simulating multiple millions of atoms at speeds of microseconds ...
  19. [19]
    Understanding Modern Molecular Dynamics: Techniques and ...
    A pedagogical treatment of the theoretical and numerical aspects of modern molecular dynamics simulation techniques and to show several applications.
  20. [20]
    Computer "Experiments" on Classical Fluids. I. Thermodynamical ...
    Computer "Experiments" on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules. Loup Verlet*.
  21. [21]
    Energy conservation in molecular dynamics simulations of classical ...
    Jun 12, 2012 · Classical Newtonian dynamics is time reversible and the energy of an isolated system is conserved. Newton formulated his dynamics as an analytic ...
  22. [22]
    Current Status of Protein Force Fields for Molecular Dynamics - NIH
    The current status of classical force fields for proteins is reviewed. These include additive force fields as well as the latest developments in the Drude and ...Missing: seminal | Show results with:seminal
  23. [23]
    [PDF] FORCE FIELDS FOR PROTEIN SIMULATIONS - Jay Ponder Lab
    This was called the AMBER/OPLS force field, and for some time was reasonably popular. As with Amber and CHARMM, an all-atom version. (OPLS-AA) was developed ...Missing: seminal | Show results with:seminal
  24. [24]
    Toward empirical force fields that match experimental observables
    We will use here as paradigmatic examples some of the force fields that are most used for simulating biomolecular systems, namely, AMBER, 19 CHARMM, 20 OPLS, ...Missing: pair seminal
  25. [25]
    Embedded-atom method: Derivation and application to impurities ...
    Jun 15, 1984 · Embedded-atom method: Derivation and application to impurities, surfaces, and other defects in metals. Murray S. Daw and M. I. Baskes. Sandia ...Missing: EAM original
  26. [26]
    Extended tight‐binding quantum chemistry methods - Bannwarth
    Aug 9, 2020 · This review covers a family of atomistic, mostly quantum chemistry (QC) based semiempirical methods for the fast and reasonably accurate ...Abstract · INTRODUCTION · THEORY · EXAMPLE APPLICATIONS...
  27. [27]
    ReaxFF: A Reactive Force Field for Hydrocarbons - ACS Publications
    In this paper, we develop a general bond-order-dependent potential in which the van der Waals and Coulomb forces are included from the beginning and the ...Introduction · Force Field · Results · Discussion
  28. [28]
    An Empirical Polarizable Force Field Based on the Classical Drude ...
    Jan 27, 2016 · We review the classical Drude oscillator model, in which electronic degrees of freedom are modeled by charged particles attached to the nuclei of their core ...
  29. [29]
    Current Status of the AMOEBA Polarizable Force Field
    (3d, 3g) In this paper, we review the AMOEBA model and its performance in several areas including gas phase properties against state-of-the-art quantum ...
  30. [30]
    Theoretical basis and accuracy of a non-iterative polarization ...
    The second term is for the interactions between all pairs of induced dipoles, which can be written V μ μ = - 1 2 ∑ i μ i · E i ind . The factor 1/2 in Eq. (9) ...
  31. [31]
    Accurate and Efficient Corrections for Missing Dispersion ...
    We review current methods for eliminating this cutoff-dependent behavior of the dispersion energy and identify some situations where they fail.
  32. [32]
    Quantitative predictions from molecular simulations using explicit or ...
    Jun 22, 2021 · Moreover, incorporating explicit solvent molecules in such calculations, typically increases the number of atoms by a factor of 10, which is ...
  33. [33]
    Generalized Born Implicit Solvent Models for Biomolecules - PMC
    Jul 22, 2019 · Here we review a particular class of “fast” implicit solvent models, generalized Born (GB) models, which are widely used for molecular dynamics ...
  34. [34]
    Quantitative analysis of Poisson–Boltzmann implicit solvent in ...
    In this study, we are interested in how the implicit solvent performs in molecular dynamics simulations. To guarantee sampling convergence in simulated ...
  35. [35]
    Semianalytical treatment of solvation for molecular mechanics and ...
    Journal of the American Chemical Society. Cite this: J. Am. Chem. Soc. 1990, 112, 16, 6127–6129 ... https://doi.org/10.1021/jacs.4c17622. Megan Schlinsog, ...
  36. [36]
    The MARTINI Force Field: Coarse Grained Model for Biomolecular ...
    We present an improved and extended version of our coarse grained lipid model. The new version, coined the MARTINI force field, is parametrized in a systematic ...
  37. [37]
    Hybrid All-Atom/Coarse-Grained Simulations of Proteins by Direct ...
    A hybrid AA/CG scheme based on combining all-atom CHARMM and coarse-grained PRIMO representations was validated via molecular dynamics and replica exchange ...
  38. [38]
    Free energy calculation from steered molecular dynamics ...
    Aug 8, 2003 · Jarzynski's equality is applied to free energy calculations from steered molecular dynamics simulations of biomolecules.
  39. [39]
    Nonphysical sampling distributions in Monte Carlo free-energy ...
    This paper describes the use of arbitrary sampling distributions chosen to facilitate such estimates. The methods have been tested successfully on the Lennard- ...
  40. [40]
    Escaping free-energy minima - PNAS
    We introduce a powerful method for exploring the properties of the multidimensional free energy surfaces (FESs) of complex many-body systems.
  41. [41]
  42. [42]
  43. [43]
  44. [44]
    All‐atom fast protein folding simulations: The villin headpiece
    Oct 24, 2002 · The early stage of folding of villin headpiece subdomain observed in a 200-nanosecond fully solvated molecular dynamics simulation. Proc Natl ...
  45. [45]
    All-atom fast protein folding simulations: the villin headpiece - PubMed
    We provide a fast folding simulation using an all-atom solute, implicit solvent method that eliminates the need for treating solvent degrees of freedom.
  46. [46]
    One-microsecond molecular dynamics simulation of channel gating ...
    Here we present a one-microsecond-long molecular dynamics simulation of the GLIC channel pH stimulated gating mechanism. The crystal structure of GLIC obtained ...
  47. [47]
    Mechanical properties of carbon nanotube by molecular dynamics ...
    The mechanical properties of single-walled carbon nanotubes (SWCNTs) are computed and simulated using molecular dynamics (MD) in this paper.
  48. [48]
    Fracture toughness of graphene | Nature Communications
    Apr 29, 2014 · Our molecular dynamics (MD) simulations provide validation and insight for experimental results. This work represents the first measurement of ...
  49. [49]
    Simulation of lipid bilayer self-assembly using all-atom lipid force fields
    This is the first time bilayer self-assembly of phospholipids with negatively charged head groups is demonstrated in all-atom MD simulations.
  50. [50]
    Benchmarking Classical Molecular Dynamics Simulations for ...
    Jun 20, 2025 · In this study, we benchmark the ability of molecular dynamics (MD) simulations with Class 1 force fields to model the transport and structural properties of ...
  51. [51]
    [PDF] Multiscale models for Complex Chemical Systems - Nobel Prize
    Oct 9, 2013 · The work awarded this year´s Nobel Prize in Chemistry focuses on the development of methods using both classical and quantum mechanical theory ...
  52. [52]
    [PDF] Arieh Warshel - Nobel Lecture
    The emergence of the QM/MM approach allowed one to ask for the first time, in a well-defined and logical way, what the origin of the catalytic power of ...
  53. [53]
    AlphaFold2 Modeling and Molecular Dynamics Simulations ... - MDPI
    We combined AlphaFold2-based atomistic predictions of structures and conformational ensembles of the SARS-CoV-2 spike complexes with the host receptor ACE2.
  54. [54]
    Integrating path sampling with enhanced sampling for rare-event ...
    Dec 9, 2024 · Several rare-event sampling algorithms have been developed over the past few decades to address this timescale problem in MD simulation. These ...
  55. [55]
    Deep learning the slow modes for rare events sampling - PNAS
    This means that we can interpret the enhanced sampling simulation as a dynamics that samples the Boltzmann distribution on the timescale. Thus, the VAC ...
  56. [56]
    Differentiable simulation to develop molecular dynamics force fields ...
    Feb 21, 2024 · There are two main issues with MD: the force fields used to describe atomic interactions lack accuracy, and sampling beyond the microsecond ...
  57. [57]
    Biomolecular dynamics with machine-learned quantum-mechanical ...
    Apr 5, 2024 · A disadvantage of FFs is their limited accuracy due to the neglect of important quantum-mechanical effects, such as changes to hybridization ...
  58. [58]
    On the importance of accounting for nuclear quantum effects in ab ...
    Aug 20, 2018 · We show that a force field in excellent agreement with quantum mechanical energies and forces will not produce acceptably accurate predictions ...
  59. [59]
    Scaling Molecular Dynamics Beyond 100000 Processor Cores ... - NIH
    The computational cost of the non-bonded interactions is O(N²). Because the van der Waals interactions decay rapidly according to the pairwise distance, they ...
  60. [60]
    Molecular Dynamics to Predict Cryo-EM: Capturing Transitions and ...
    We show the capability of MD to predict short-lived conformational states, finding remarkable confirmation by cryo-EM structures subsequently solved.
  61. [61]
    Molecular dynamics simulation of an entire cell - Frontiers
    Jan 17, 2023 · In this perspective, we show how an integrative approach can be employed to model an entire cell, the minimal cell, JCVI-syn3A, at full complexity.
  62. [62]
  63. [63]
    Determining accurate conformational ensembles of intrinsically ...
    Oct 10, 2025 · Determining accurate atomic resolution conformational ensembles of intrinsically disordered proteins (IDPs) is extremely challenging.
  64. [64]
  65. [65]
    Fast Parallel Algorithms for Short-Range Molecular Dynamics
    Mar 1, 1995 · Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms.
  66. [66]
    A comparison between parallelization approaches in molecular ...
    Space-decomposition. The domain of the simulated system is partitioned into smaller subsets, with each computing unit responsible for calculation of forces and ...
  67. [67]
    Molecular Dynamics - Steve Plimpton
    In a nutshell, atom-decomposition methods assign a subset of atoms permanently to each processor, force-decomposition methods assign a subset of pairwise force ...
  68. [68]
    Molecular dynamics algorithm for multiple time scales - AIP Publishing
    In this note we present a variant of the RESPA (reference system propagator algorithm), which we developed for handling systems with multiple time scales.
  69. [69]
    GROMACS: High performance molecular simulations through multi ...
    GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules.
  70. [70]
    4.4. Parallel algorithms - LAMMPS documentation
    LAMMPS is designed to enable running simulations in parallel using the MPI parallel communication standard with distributed data via domain decomposition.
  71. [71]
    Scaling of the GROMACS Molecular Dynamics Code to 65k CPU ...
    Feb 13, 2025 · We benchmarked the performance of the GROMACS 2024 molecular dynamics (MD) code on a modern high-performance computing (HPC) cluster with ...
  72. [72]
    GROMACS Performance and Speedup on NVIDIA A100 GPU
  73. [73]
    [PDF] Anton 3: Twenty Microseconds of Molecular Dynamics Simulation ...
    Nov 19, 2021 · Anton 3 sets new records (by orders of magnitude) in the speed of all-atom biomolecular simulations. Compared with the fastest general-purpose ...
  74. [74]
    Toward an FPGA-based dedicated computer for molecular dynamics ...
    Feb 4, 2025 · FPGAs contain a large matrix of versatile logical units (logical slices), which can be interconnected by freely programmable links. In that way, ...
  75. [75]
    Enabling Scalable AI-Driven Molecular Dynamics Simulations
    Oct 20, 2025 · Experience with LAMMPS or other MD simulation tools. Experience using Python, familiarity with PyTorch and machine learning models (Cython a ...
  76. [76]
    Sampling a Rare Protein Transition Using Quantum Annealing
  77. [77]
    Molecular dynamics on quantum annealers | Scientific Reports
    Oct 7, 2022 · In this work we demonstrate a practical prospect of using quantum annealers for simulation of molecular dynamics.
  78. [78]
    Anton 3 Is a 'Fire-Breathing' Molecular Simulation Beast - HPCwire
    Sep 1, 2021 · The Anton line of supercomputers, designed and built by D. E. Shaw Research (DESRES), consists of purpose-built systems specifically for molecular dynamics modeling.