
Interatomic potential

An interatomic potential is a mathematical function that approximates the potential energy of atomic interactions in a system of atoms as a function of the relative positions of the atoms' nuclei. These potentials serve as the core input for atomistic simulations, such as molecular dynamics and Monte Carlo methods, enabling the computation of forces through energy gradients to model atomic-scale behavior efficiently without relying on computationally intensive ab initio quantum mechanical calculations. By bridging atomic-scale interactions and macroscopic properties, interatomic potentials are indispensable for studying structure, dynamics, defects, phase transitions, and mechanical responses in fields like materials science, physics, and chemistry.

The development of interatomic potentials traces back to early 20th-century models inspired by van der Waals forces, with the seminal Lennard-Jones potential (introduced in 1924 and refined in 1931) providing a simple pairwise description of van der Waals interactions suitable for noble gases and non-bonded atoms. Over the decades, as computational capabilities advanced, potentials evolved to address more complex bonding: semi-empirical many-body models emerged in the 1980s, such as the Embedded-Atom Method (EAM) for metals, which incorporates electron-density effects to capture metallic bonding, and the Tersoff potential for covalent semiconductors, accounting for bond-order and coordination dependencies. These were followed by reactive potentials like the Reactive Empirical Bond-Order (REBO) framework starting in 1990, with significant updates in the 2000s, designed for bond breaking and formation in hydrocarbons. In recent years, machine learning (ML)-based interatomic potentials have revolutionized the field by leveraging large datasets from ab initio calculations to achieve near-quantum accuracy at reduced computational cost, with notable examples including the Behler-Parrinello neural network potentials (2007), Gaussian Approximation Potentials (GAP, 2010), and Moment Tensor Potentials (MTP). As of 2025, universal machine learning interatomic potentials have emerged, offering improved transferability across diverse chemical environments.

These ML approaches excel in handling diverse systems, from amorphous materials to biomolecules, and have enabled simulations of millions of atoms, predicting properties like elasticity, fracture toughness, self-diffusion, and thermal expansion with high fidelity. Despite these achievements, challenges persist, including limited transferability across chemical environments, difficulties in modeling charge transfer or long-range interactions, and the need for extensive training data to ensure robustness. Ongoing refinements continue to expand their applicability, driving innovations in alloy design, defect engineering, and predictive materials modeling.

Fundamentals

Definition and Importance

Interatomic potentials are mathematical functions that approximate the potential energy of a system of atoms as a function of their relative positions, providing a simplified representation of atomic interactions derived from quantum mechanical principles for use in classical simulations. These potentials effectively integrate out the electronic degrees of freedom, condensing complex quantum effects into an analytic or tabular form that governs atomic motion. At their foundation lies the Born-Oppenheimer approximation, introduced in 1927, which posits that electrons respond instantaneously to the slower motion of atomic nuclei due to the mass disparity (protons being approximately 1836 times heavier than electrons), thereby yielding an effective potential energy surface for nuclear dynamics with typical relative errors on the order of 10^{-5}.

The development of interatomic potentials traces back to the 1920s, when early empirical models for pairwise atomic interactions emerged from analyses of gas properties. A seminal contribution came from J.E. Jones in 1924, who derived inverse-power molecular field potentials (of the form r^{-n} and r^{-m}) by fitting to experimental data on gas viscosity, equations of state, and crystal structures, laying groundwork for subsequent refinements. By 1931, Lennard-Jones had formalized the widely influential 12-6 potential, incorporating quantum-derived dispersion attractions, which marked a shift toward more physically motivated forms. These early pair potentials evolved over decades, incorporating many-body effects and computational fitting, to support the rise of molecular dynamics simulations in the mid-20th century, enabling the study of dynamic atomic processes beyond static quantum calculations.

Interatomic potentials play a pivotal role in computational materials science by facilitating efficient simulations that bridge the gap between quantum-level accuracy and the classical treatment of large systems, allowing predictions of macroscopic properties from atomic-scale interactions. They enable simulations of hundreds of thousands to millions of atoms over timescales of nanoseconds to microseconds, far exceeding the scope of direct ab initio methods limited to hundreds of atoms and femtoseconds. Key applications include forecasting material behaviors such as elastic moduli, atomic diffusion rates, and phase transitions, which inform alloy design, defect formation, and mechanical response in solids, liquids, and gases. This efficiency has driven advancements in understanding complex phenomena like fracture and plasticity, where empirical or machine-learned potentials reproduce experimental observables with high fidelity.

Functional Form

The total potential energy U of a system of atoms is typically expressed through a many-body expansion that decomposes it into contributions from interactions involving increasing numbers of atoms. This general form is given by U = \sum_i U_\text{single}(i) + \sum_{i<j} U_\text{pair}(r_{ij}) + \sum_{i<j<k} U_\text{triplet}(r_{ij}, r_{ik}, r_{jk}) + \cdots, where r_{ij} denotes the distance between atoms i and j, and higher-order terms account for multi-atom correlations. This expansion provides a systematic way to model atomic interactions, with the choice of truncation determining the balance between accuracy and computational efficiency.

Single-body terms U_\text{single}(i) represent on-site energies for individual atoms, often incorporating external fields or embedding effects that depend on the local environment, such as electron density in metallic systems. These terms are crucial for systems where isolated atomic energies vary, but they are frequently omitted or absorbed into higher-order contributions in neutral, isolated clusters. Pair-wise interactions U_\text{pair}(r_{ij}) form the simplest and most common building block, typically decomposed into a repulsive component f_\text{repulsive}(r_{ij}) that dominates at short distances and an attractive component f_\text{attractive}(r_{ij}) that provides cohesion at longer ranges, yielding a generic form V(r) = f_\text{repulsive}(r) + f_\text{attractive}(r). To confine interactions to realistic ranges and reduce computational cost, pair potentials often incorporate cutoff functions that smoothly decay to zero beyond a specified distance, ensuring continuity in energy and forces to avoid unphysical discontinuities in simulations. Examples of such functions include polynomial or cosine-based dampers that transition interactions to zero over a narrow interval.

Many-body extensions beyond pairs introduce dependencies on the full local atomic configuration, such as triplet or higher terms that capture angular variations in bonding. In the embedded atom method (EAM), for instance, the energy includes an embedding function of the local electron density, which implicitly incorporates many-body effects through summed pair-like contributions, with angular dependencies handled in extensions like the modified EAM. These terms enable better representation of directional bonding in covalent or metallic systems. Common conventions in interatomic potentials use energy units of electronvolts (eV) and distance units of angstroms (Å), facilitating comparison across materials and alignment with experimental data like cohesive energies. Smooth cutoffs are essential in all terms to maintain differentiable potentials, preserving the accuracy of derived properties like forces in molecular dynamics simulations.
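
To make the generic pair form and smooth cutoff concrete, the following minimal Python sketch combines a Lennard-Jones-style 12-6 pair energy with a cosine-based damper of the kind described above; the parameter values are illustrative argon-like numbers and the function names are chosen for this example, not taken from any particular code:

import numpy as np

def cosine_cutoff(r, r_on, r_cut):
    # Smooth switching function: 1 below r_on, 0 beyond r_cut, C^1-continuous between.
    if r < r_on:
        return 1.0
    if r >= r_cut:
        return 0.0
    x = (r - r_on) / (r_cut - r_on)
    return 0.5 * (1.0 + np.cos(np.pi * x))

def pair_energy(r, eps=0.0104, sigma=3.40, r_on=8.0, r_cut=10.0):
    # 12-6 pair energy in eV and angstroms, tapered smoothly to zero at the cutoff.
    sr6 = (sigma / r) ** 6
    v = 4.0 * eps * (sr6 ** 2 - sr6)   # repulsive r^-12 plus attractive r^-6 term
    return v * cosine_cutoff(r, r_on, r_cut)

Because both the energy and its first derivative vanish at r_cut, forces derived from this form remain continuous, which is exactly the property the smooth-cutoff discussion above requires.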

Computation from Potentials

Energy Evaluation

The total potential energy of an atomic system is obtained by evaluating the interatomic potential function over all relevant atomic interactions, typically restricted to a finite cutoff distance to ensure computational tractability. For pair potentials of the general form described in prior sections, the direct summation method computes the energy as a double loop over all atom pairs: E = \frac{1}{2} \sum_{i=1}^N \sum_{j \neq i}^N \phi(r_{ij}), where \phi(r_{ij}) is the pair interaction energy between atoms i and j separated by distance r_{ij}, and the factor of 1/2 avoids double-counting. This naive approach scales as O(N^2) with system size N, making it feasible only for small systems of up to a few hundred atoms, as each distance must be calculated and checked against the cutoff.

To mitigate this inefficiency for larger systems common in molecular dynamics, neighbor list algorithms restrict computations to nearby atoms within the interaction cutoff plus a small "skin" distance. The Verlet list, introduced in early molecular dynamics work, constructs for each atom i a list of potential neighbors j where r_{ij} < r_c + \delta, with \delta typically 10-20% of the cutoff r_c. Lists are reused until any atom displaces by more than \delta/2, after which they are rebuilt; this reduces the effective complexity to near O(N) by minimizing redundant distance evaluations over multiple timesteps. Complementing this, cell lists partition the simulation box into a grid of cells with dimensions equal to the cutoff radius, assigning atoms to cells and only considering interactions with atoms in the same or adjacent cells (up to 27 cells in 3D), further ensuring O(N) scaling without explicit distance checks for distant pairs. The skin distance in these methods also facilitates infrequent updates, balancing list construction cost against evaluation efficiency.

For many-body potentials, energy evaluation involves additional summations over local environments, still leveraging neighbor lists for efficiency. In the embedded atom method (EAM), the total energy decomposes into an embedding term and a pair term: E = \sum_i F(\rho_i) + \frac{1}{2} \sum_i \sum_{j \neq i} \phi(r_{ij}), where \rho_i = \sum_{j \neq i} f(r_{ij}) is the host electron density at atom i from neighbor contributions f(r_{ij}), and F is the embedding function. Density \rho_i is computed first via a loop over neighbors within the cutoff, followed by evaluation of F(\rho_i) and the pair interactions, often using the same neighbor lists as for pair potentials to avoid O(N^2) overhead. Triplet or higher-order many-body terms, if present, require analogous nested loops over neighbor triplets, with complexity mitigated by restricting to short-range interactions.

Large-scale simulations demand parallel computation of these evaluations. Domain decomposition, a standard approach in codes like LAMMPS, divides the simulation domain into 3D subdomains assigned to processors, with atoms migrating across boundaries as needed; neighbor lists are built locally per subdomain, and ghost atoms from adjacent subdomains ensure complete interaction coverage during energy summation. This enables near-linear scaling up to thousands of processors for systems exceeding millions of atoms, with communication overhead managed via message passing (e.g., MPI) for list updates and force/energy exchanges. For a simple pair potential energy evaluation using a pre-built neighbor list with cutoff check, the following pseudocode illustrates the core loop (with full neighbor lists, each pair is visited twice, so the sum is halved at the end):
total_energy = 0.0
for i = 1 to N:
    for j in neighbor_list[i]:  # full lists: each pair (i, j) appears twice
        rij = distance(atoms[i], atoms[j])
        if rij < r_cut:
            total_energy += phi(rij)
total_energy /= 2.0  # halve to correct the double-counting
This structure is adapted in production codes, where vectorization and cutoff optimizations further accelerate per-pair computations.
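
As an illustration of the cell-list approach described above, the sketch below is a minimal NumPy implementation assuming an orthorhombic periodic box with each edge at least three cells long and positions already wrapped into the box; all function and variable names are hypothetical:

import numpy as np
from itertools import product

def cell_list_energy(pos, box, r_cut, phi):
    # O(N) pair-energy evaluation with cell lists; assumes box >= 3*r_cut per dimension
    # so that the 27-cell scan never visits the same cell twice.
    n_cells = np.maximum((box / r_cut).astype(int), 1)  # cells per dimension, edge >= r_cut
    cell_size = box / n_cells
    cells = {}
    idx = (pos // cell_size).astype(int) % n_cells      # cell index of each atom
    for a, c in enumerate(map(tuple, idx)):
        cells.setdefault(c, []).append(a)
    energy = 0.0
    for c, members in cells.items():
        for d in product((-1, 0, 1), repeat=3):         # this cell plus its 26 neighbors
            nb = tuple((np.array(c) + d) % n_cells)
            for i in members:
                for j in cells.get(nb, []):
                    if j <= i:                          # count each pair exactly once
                        continue
                    rij = pos[i] - pos[j]
                    rij -= box * np.round(rij / box)    # minimum-image convention
                    r = np.linalg.norm(rij)
                    if r < r_cut:
                        energy += phi(r)
    return energy

Passing pos as an (N, 3) array, box as a length-3 array, and phi as any pair function (such as the Lennard-Jones form above) reproduces the direct double-loop result while touching only nearby atoms.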

Force Calculation

In molecular dynamics simulations, the force on an atom i is defined as the negative gradient of the total potential energy U with respect to the position \mathbf{r}_i of that atom, given by \mathbf{F}_i = -\nabla_i U. This definition ensures that the forces drive the atomic motion according to Newton's second law, enabling the prediction of structural and dynamical properties from the interatomic potential. For pair potentials, where U = \sum_{i<j} V(r_{ij}) and r_{ij} = |\mathbf{r}_i - \mathbf{r}_j|, the force on atom i due to atom j is derived using the chain rule as \mathbf{F}_{ij} = -\frac{dV}{dr} \bigg|_{r_{ij}} \frac{\mathbf{r}_{ij}}{r_{ij}}, with the vector components pointing along the interatomic separation. This central force formulation maintains Newton's third law, ensuring momentum conservation in the simulation.

In many-body potentials, the force calculation becomes more involved due to explicit dependence on multi-atom configurations. An analogy to the Hellmann-Feynman theorem from quantum mechanics arises, where forces are computed as expectation values without explicit wavefunction derivatives; similarly, in empirical many-body models, \mathbf{F}_i = -\partial U / \partial \mathbf{r}_i leverages chain-rule applications to collective terms like embedding densities. For the embedded atom method (EAM), the total energy includes an embedding term F(\rho_i) depending on the local density \rho_i = \sum_{j \neq i} f(r_{ij}) at site i, plus pair interactions; the force then incorporates density gradients as \mathbf{F}_i = -\sum_{j \neq i} \left[ \left( F'(\rho_i) + F'(\rho_j) \right) f'(r_{ij}) + \phi'(r_{ij}) \right] \frac{\mathbf{r}_{ij}}{r_{ij}}, where \phi(r) is the pair potential, f(r) the density contribution, and primes denote derivatives; the F'(\rho_j) term reflects that moving atom i also changes the host density at each of its neighbors.

Analytic derivatives of the potential are preferred for force computation over finite difference approximations, as they provide exact expressions without introducing truncation errors that can destabilize long simulations. To maintain energy conservation, especially near cutoff radii where interactions are truncated, potentials incorporate smoothing functions that ensure continuity in the first and higher derivatives, preventing discontinuities in the forces that could lead to unphysical artifacts. The computational cost of force evaluation is comparable to that of energy calculation but includes additional directional vector operations for each interacting pair or group. Optimization relies on the same neighbor lists used for energy, which restrict summations to atoms within a cutoff distance, achieving near-linear scaling with system size N rather than O(N^2).
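
For a pair potential the chain-rule expression translates directly into code; the sketch below (Lennard-Jones chosen only as a familiar example, helper names hypothetical) accumulates analytic forces pairwise so that Newton's third law, and hence momentum conservation, holds by construction:

import numpy as np

def lj_pair_force(rij_vec, eps=0.0104, sigma=3.40):
    # Force on atom i from atom j for V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6),
    # computed as F_ij = -(dV/dr) * r_hat with rij_vec = r_i - r_j.
    r = np.linalg.norm(rij_vec)
    sr6 = (sigma / r) ** 6
    dV_dr = 4.0 * eps * (-12.0 * sr6 ** 2 + 6.0 * sr6) / r
    return -dV_dr * rij_vec / r

def total_forces(pos, pairs):
    # Accumulate forces over a precomputed list of unique (i, j) pairs.
    forces = np.zeros_like(pos)
    for i, j in pairs:
        f = lj_pair_force(pos[i] - pos[j])
        forces[i] += f          # action on i ...
        forces[j] -= f          # ... equal-and-opposite reaction on j
    return forces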

Empirical Parametric Potentials

Pair Potentials

Pair potentials represent the simplest class of empirical interatomic potentials, assuming that the total potential energy U of a system of atoms is the sum of pairwise interactions between all unique pairs, given by U = \sum_{i < j} V(r_{ij}), where V(r_{ij}) is the interaction energy depending solely on the distance r_{ij} between atoms i and j. This additivity ignores many-body correlations, such as angular dependencies or collective electronic effects, limiting its accuracy for complex bonding environments.

A classic example is the Lennard-Jones potential, widely used to model van der Waals interactions, with the functional form V(r) = 4\epsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^6 \right], where \epsilon is the depth of the potential well (interaction strength) and \sigma is the finite distance at which the potential is zero (atomic size parameter). The r^{-12} repulsive term approximates the steep rise due to Pauli exclusion and electron-cloud overlap, while the r^{-6} attractive term arises from dispersion forces, derived from second-order perturbation theory for induced dipole interactions between neutral atoms. This form was originally motivated by fitting to gas viscosity and equation-of-state data for noble gases.

Other common pair potentials include the Morse potential, suitable for describing covalent bonds in diatomic molecules, V(r) = D \left[ (1 - e^{-a(r - r_e)})^2 - 1 \right], where D is the dissociation energy, a controls the width of the potential well, and r_e is the equilibrium bond distance; it provides an analytically solvable model for vibrational spectra. For ionic and some metallic systems, the Buckingham potential is employed, V(r) = A e^{-r/\rho} - \frac{C}{r^6}, with A and \rho parameters for the exponential repulsion (reflecting overlap of electron clouds) and C for the dispersive attraction; the exponential form offers a more physically motivated short-range repulsion than power-law alternatives.

Pair potentials excel in modeling van der Waals-dominated systems, such as noble gases like argon and krypton, where they accurately reproduce phase diagrams, melting points, and diffusion coefficients with minimal parameters. They are also applied to simple metals, though with limitations, as the pairwise additivity fails to capture many-body screening effects in the electron gas, leading to inaccuracies in defect and elastic properties. For instance, Lennard-Jones potentials predict incorrect vacancy formation energies in metals due to the absence of many-body contributions.

Parameters in pair potentials are typically fitted to experimental data, such as lattice constants, cohesive energies, or second virial coefficients from gas-phase measurements, using least-squares optimization to minimize deviations. For argon, fitting the Lennard-Jones \epsilon and \sigma to cohesive energy and nearest-neighbor distance in the solid phase yields parameters that also align with gas-phase equation-of-state curves.
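
Such a least-squares fit can be sketched in a few lines; here SciPy's curve_fit recovers \epsilon and \sigma of the 12-6 form from synthetic dimer energies that stand in for the experimental or ab initio reference data a real fit would use:

import numpy as np
from scipy.optimize import curve_fit

def lj(r, eps, sigma):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

rng = np.random.default_rng(0)
r_ref = np.array([3.2, 3.5, 3.8, 4.2, 4.8, 5.5])   # separations (angstrom)
# Noisy "reference" energies (eV), generated here instead of measured.
e_ref = lj(r_ref, 0.0104, 3.40) + rng.normal(0.0, 1e-4, r_ref.size)

params, _ = curve_fit(lj, r_ref, e_ref, p0=[0.01, 3.0])  # least-squares fit from a guess
print("fitted eps = %.4f eV, sigma = %.3f angstrom" % tuple(params))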

Many-Body Potentials

Many-body potentials extend beyond pairwise interactions by incorporating collective effects from multiple atoms, addressing limitations of pair potentials in systems where bonding involves delocalized electrons, such as metals. In metallic systems, pair potentials inadequately capture the many-body nature of metallic bonding due to the delocalization of electrons, leading to poor predictions of properties like elastic constants and vacancy formation energies; many-body terms effectively account for these multi-atom contributions through density-dependent or angular dependencies.

The Embedded Atom Method (EAM), developed by Daw and Baskes in 1984, is a seminal many-body potential particularly suited for metals. In EAM, the total potential energy U of a system of N atoms is given by U = \sum_i F(\rho_i) + \frac{1}{2} \sum_{i \neq j} \phi(r_{ij}), where \rho_i = \sum_{j \neq i} f(r_{ij}) is the host electron density at atom i due to all neighboring atoms j, F(\rho_i) is the embedding energy function representing the cost of placing atom i in density \rho_i, \phi(r_{ij}) is a pairwise interaction term, and r_{ij} is the distance between atoms i and j. This formulation models metallic cohesion by treating the embedding term as a many-body contribution that depends on the local atomic environment.

The Modified Embedded Atom Method (MEAM), introduced by Baskes in 1992, builds on EAM by adding angular dependencies to better describe directional bonding in materials like semiconductors and alloys. MEAM modifies the atomic electron density to include angular terms, often using the cosine of bond angles between triplets of atoms, enabling accurate modeling of covalent and hybrid bonding characteristics while retaining EAM's efficiency for metals.

Other notable many-body potentials include the Finnis-Sinclair potential, proposed in 1984 for transition metals, which employs a square-root form of the embedding function similar to EAM but optimized for central-force-like behavior in bcc structures. For covalent systems, the Tersoff potential, developed in 1988, uses a bond-order concept where the strength of pairwise bonds depends on the local coordination and angles, effectively incorporating many-body effects through a multiplicative angular function. These many-body potentials find wide application in simulating metals and semiconductors, offering significant improvements over pair potentials in predicting defect structures, surface energies, and mechanical properties; for instance, EAM and MEAM accurately reproduce surface energies and defect behaviors in fcc metals, while Tersoff excels in modeling interfaces and amorphization.
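
The two-pass structure of an EAM energy evaluation, first accumulating host densities and then adding embedding and pair terms, can be sketched as follows; the exponential f, square-root F, and screened pair term are simple placeholders rather than a fitted potential:

import numpy as np

def eam_energy(pos, pairs):
    # U = sum_i F(rho_i) + (1/2) sum_{i != j} phi(r_ij), with placeholder forms
    # f(r) = exp(-r), F(rho) = -sqrt(rho) (Finnis-Sinclair-like), phi(r) = exp(-2r)/r.
    rho = np.zeros(len(pos))
    e_pair = 0.0
    for i, j in pairs:                       # unique pairs (i < j) within the cutoff
        r = np.linalg.norm(pos[i] - pos[j])
        rho[i] += np.exp(-r)                 # density contribution of j at i ...
        rho[j] += np.exp(-r)                 # ... and of i at j, symmetrically
        e_pair += np.exp(-2.0 * r) / r       # pair term, counted once per unique pair
    return -np.sqrt(rho).sum() + e_pair      # embedding energy plus pair energy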

Advanced and Non-Parametric Potentials

Spline and Tabulated Potentials

Spline and tabulated potentials represent a class of non-parametric interatomic potentials that rely on data-driven representations, such as lookup tables or interpolated curves, derived directly from first-principles calculations like density functional theory (DFT), without imposing a predefined analytical equation. These approaches allow for flexible modeling of atomic interactions by fitting or tabulating energy and force data at discrete points, enabling the potential to adapt to the specific quantum mechanical results from reference calculations.

Spline potentials typically employ piecewise polynomial functions, such as cubic splines, to interpolate the interatomic potential V(r) between radial distances where DFT energies and forces are computed. The cubic spline form ensures smoothness in both the potential and its first derivative (the forces), achieved through continuity conditions at knot points and appropriate boundary constraints, such as natural splines with zero second derivatives at the ends. For instance, in the case of silicon, a modified embedded atom method potential uses five cubic splines—each with 10 fitting parameters—to represent pair and many-body interactions, fitted via force-matching to a comprehensive DFT database covering bulk phases, defects, and high-coordination structures. This fitting process minimizes discrepancies between predicted and DFT-derived forces and energies, yielding accurate reproduction of properties like elastic constants and phonon spectra.

Tabulated potentials store the interatomic energy and forces on a discrete grid of interatomic distances, with interpolation performed during simulations to evaluate interactions at arbitrary separations. Common interpolation methods include linear schemes for simplicity or higher-order approaches like B-splines for smoother results, often with grids spaced at intervals of 0.002 nm or finer to balance accuracy and efficiency. In molecular dynamics software such as GROMACS, tabulated potentials are implemented via lookup tables that support custom user-defined functions, particularly useful for complex biomolecular systems where standard parametric forms are inadequate; the tables are interpolated using cubic splines with 500–2000 points per nanometer, allowing separation of electrostatic, dispersion, and repulsion contributions. Similarly, in LAMMPS, the pair_style table command enables tabulated pair potentials with options for linear or spline interpolation, facilitating the use of DFT-derived data for arbitrary functional forms.

These non-parametric methods offer significant advantages over traditional parametric potentials, which are limited by their rigid functional forms and may fail to capture subtle irregularities in the potential energy surface from quantum calculations. By directly tabulating or spline-fitting data, they achieve high fidelity to ab initio results, enabling accurate simulations of phenomena like defect formation or phase transitions that parametric models often approximate poorly. However, spline and tabulated potentials suffer from reduced transferability, as they are tailored to specific datasets and may not generalize well to unseen compositions, temperatures, or pressures without refitting. Additionally, they demand greater storage for the tables or parameters and can incur higher computational overhead due to interpolation lookups, though optimizations like caching mitigate this in production codes.
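
A minimal sketch of the tabulate-then-interpolate workflow with SciPy, using a Morse-like placeholder in place of actual DFT data and the natural boundary condition (zero second derivative at the end knots) mentioned above:

import numpy as np
from scipy.interpolate import CubicSpline

# Radial grid and reference energies; in practice e_tab would come from DFT.
r_tab = np.linspace(1.5, 6.0, 200)   # angstrom, roughly 0.02 A spacing
e_tab = np.exp(-2.0 * (r_tab - 2.5)) - 2.0 * np.exp(-(r_tab - 2.5))  # Morse-like placeholder

spline = CubicSpline(r_tab, e_tab, bc_type="natural")  # natural cubic spline through the table

r = 3.137                    # arbitrary query separation between grid points
energy = spline(r)           # interpolated V(r)
force_mag = -spline(r, 1)    # the spline's analytic first derivative gives -dV/dr
print(energy, force_mag)

Because the spline is piecewise cubic, the interpolated forces are continuous by construction, matching the smoothness requirements discussed in the force-calculation section.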

Machine-Learned Interatomic Potentials

Machine-learned interatomic potentials (MLIPs) are data-driven models that approximate quantum mechanical potential energy surfaces by leveraging machine learning algorithms, such as neural networks, trained on large datasets of energies, forces, and virials derived from density functional theory (DFT) calculations. These models enable simulations with near-DFT accuracy while achieving computational speeds comparable to classical force fields, bridging the gap between quantum accuracy and classical efficiency in atomistic modeling. Unlike traditional empirical potentials, MLIPs offer flexibility through non-parametric representations that can capture complex many-body interactions without predefined functional forms.

Key architectures in MLIPs often employ graph neural networks (GNNs), where atomic environments serve as nodes in a graph, and interactions are modeled via message-passing mechanisms that propagate information between neighboring atoms to account for many-body effects. For instance, equivariant GNNs ensure rotational and translational invariance, enabling efficient handling of up to three-body interactions in modern universal models. Prominent examples include Moment Tensor Potentials (MTPs), which use a basis of tensor invariants to systematically improve accuracy by expanding the representation of local environments, and NequIP, an E(3)-equivariant neural network that achieves high data efficiency through geometric symmetry enforcement. Another foundational approach is Deep Potential Molecular Dynamics (DPMD), which utilizes deep neural networks to learn smooth, transferable potentials for diverse chemical systems.

Recent advances from 2024 to 2025 have focused on universal MLIPs capable of generalizing across broad classes of materials without system-specific retraining, as demonstrated in benchmarks for materials properties and defect modeling. Models like CHGNet and MatterSim-v1, pretrained on extensive DFT datasets encompassing millions of structures, exhibit low errors (e.g., around 0.035 eV/atom) for energies across the periodic table, enabling applications in multiscale materials design. These potentials support transfer learning, where pretrained models are fine-tuned for specific tasks, accelerating discoveries in areas like thermal transport and structural defects. Overall, MLIPs provide a scalable framework for high-fidelity simulations, with ongoing developments emphasizing equivariant architectures and uncertainty quantification to enhance extrapolation beyond training data.
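
To give a flavor of how MLIPs encode local environments before any regression takes place, the sketch below computes Behler-Parrinello-style radial symmetry functions for one atom; the eta and mu grids are arbitrary illustrative choices, and a neural network would then map the resulting fixed-length vector to an atomic energy:

import numpy as np

def radial_descriptor(i, pos, r_cut=5.0, etas=(0.5, 1.0, 2.0), mus=(1.5, 2.5, 3.5)):
    # G2-type radial symmetry functions for atom i:
    # G2(eta, mu) = sum_j exp(-eta * (r_ij - mu)^2) * fc(r_ij).
    g = []
    for eta in etas:
        for mu in mus:
            total = 0.0
            for j in range(len(pos)):
                if j == i:
                    continue
                r = np.linalg.norm(pos[i] - pos[j])
                if r < r_cut:
                    fc = 0.5 * (np.cos(np.pi * r / r_cut) + 1.0)  # smooth cutoff
                    total += np.exp(-eta * (r - mu) ** 2) * fc
            g.append(total)
    return np.array(g)  # fixed-length, invariant under rotations and permutations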

Development and Fitting

Parameter Fitting Techniques

Parameter fitting techniques for empirical parametric interatomic potentials seek to optimize a small number of parameters in a predefined functional form by minimizing discrepancies between model predictions and reference data. The primary objective is to achieve low error in key materials properties, such as lattice parameters, elastic constants, vacancy formation energies, and phonon dispersion spectra, often formulated as a least-squares problem that weights contributions from multiple observables. This approach ensures the potential reproduces thermodynamic and mechanical behaviors across relevant conditions while maintaining transferability to unseen configurations.

Common optimization methods include nonlinear least-squares algorithms, such as the Levenberg-Marquardt method, which combines gradient-descent and Gauss-Newton steps to efficiently solve the nonlinear system arising from the potential's functional form. For instance, in developing embedded atom method (EAM) potentials for face-centered cubic metals, parameters are refined by minimizing a weighted sum of squared residuals for properties like cohesive energy and elastic constants. Another prominent technique is force-matching, which directly minimizes the difference between forces predicted by the empirical potential and those derived from first-principles calculations on diverse atomic snapshots, enabling the construction of transferable potentials without relying on explicit energy fits. This method, introduced for deriving classical potentials from density functional theory (DFT) data, has been widely adopted for its ability to capture many-body effects implicitly through force data.

Reference data for fitting typically draw from experimental sources, including measured cohesion energies, defect formation enthalpies, and elastic moduli, which provide benchmarks for bulk and defect properties. Complementary quantum mechanical calculations, such as DFT for short-range interactions and structural relaxations, supply high-fidelity forces and energies for configurations inaccessible experimentally, like high-pressure phases or surfaces. These datasets are curated to span the potential's intended application space, ensuring balanced representation of equilibrium and non-equilibrium states.

Fitting empirical potentials presents challenges, including overfitting, where excessive adjustment to reference data leads to poor transferability beyond the fitting set, particularly when the functional form has limited flexibility. Multi-objective optimization further complicates the process, as conflicting requirements—such as accurately reproducing both elastic constants and defect energies—necessitate trade-offs, often addressed via Pareto fronts or weighted objectives to identify robust parameter sets. Specialized software tools mitigate these issues; for example, POTFIT employs force-matching with global minimization algorithms to generate effective potentials from ab initio databases, supporting various functional forms like pair or many-body interactions. Similarly, the Atomic Simulation Environment (ASE) ecosystem provides optimization routines, including least-squares solvers, for iterative parameter refinement in Python-based workflows.

A representative workflow for EAM fitting begins with initial guesses based on physical intuition, followed by nonlinear least-squares minimization against equation-of-state data from DFT, such as energy-volume curves for perfect lattices. Subsequent iterations incorporate additional targets like phonon spectra and defect properties, with regularization to prevent overfitting, yielding potentials validated for applications in metals like aluminum or copper.
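
The force-matching step reduces to a standard nonlinear least-squares problem; the sketch below fits Morse parameters to synthetic reference forces (standing in for DFT snapshot data) with SciPy's Levenberg-Marquardt solver:

import numpy as np
from scipy.optimize import least_squares

def morse_force(r, D, a, re):
    # Magnitude of -dV/dr for V(r) = D * ((1 - exp(-a(r - re)))^2 - 1).
    ex = np.exp(-a * (r - re))
    return -2.0 * D * a * (1.0 - ex) * ex

rng = np.random.default_rng(1)
r_snap = np.linspace(2.0, 5.0, 40)   # sampled separations (angstrom)
# Noisy "reference" forces, generated here instead of extracted from DFT.
f_ref = morse_force(r_snap, 0.35, 1.4, 2.9) + rng.normal(0.0, 5e-4, r_snap.size)

def residuals(p):
    return morse_force(r_snap, *p) - f_ref   # force mismatch to be minimized

fit = least_squares(residuals, x0=[0.2, 1.0, 3.0], method="lm")
print("fitted D, a, r_e:", fit.x)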

Training Strategies for Machine Learning Models

Training machine learning interatomic potentials (MLIPs) relies on high-quality datasets typically generated from density functional theory (DFT) calculations, encompassing energies, forces, and stresses for atomic configurations. These datasets must capture diverse chemical environments to ensure model accuracy and transferability, with sizes commonly ranging from 10^4 to 10^6 configurations depending on the system's complexity and elemental composition. For multi-element systems, the dataset size scales rapidly with the number of elements, often requiring thousands of structures per element to achieve reliable predictions.

Key strategies for dataset construction include active learning, which iteratively queries uncertain regions of the configuration space by identifying configurations where model predictions exhibit high variance during simulations. On-the-fly training integrates this process into molecular dynamics (MD) simulations, where a preliminary MLIP drives the trajectory and triggers DFT calculations for high-uncertainty structures to incrementally refine the model. Data augmentation enhances coverage by generating diverse structures through techniques such as elemental substitution or perturbations to existing configurations, addressing imbalances in underrepresented phases or compositions.

Optimization during training employs loss functions that balance errors in energies and forces, such as the weighted loss \mathcal{L} = \sum (\Delta E)^2 + \lambda \sum (\Delta F)^2, where \Delta E and \Delta F denote prediction errors for energies and force components, respectively, and \lambda is a tunable weighting factor to prioritize force accuracy. Stochastic gradient descent (SGD) or Adam optimizers are commonly used to minimize this loss, with Adam's adaptive learning rates accelerating convergence for high-dimensional architectures.

Recent innovations from 2024-2025 emphasize uncertainty quantification (UQ) through ensemble methods, where multiple models trained on subsets of data provide probabilistic predictions to flag extrapolation risks. Equivariant neural networks, which inherently respect physical symmetries like rotations and translations, have improved efficiency and symmetry-aware predictions in these ensembles. Frameworks such as the Atomic Simulation Environment (ASE) for data handling and PyTorch for model implementation facilitate these workflows, though challenges persist in transferability across datasets due to variations in reference data and energy scales.
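
A hedged sketch of this weighted energy-force loss in PyTorch (the model interface is hypothetical; the essential point is that forces come from differentiating the predicted energy with respect to positions, keeping the model a conservative force field):

import torch

def energy_force_loss(model, pos, e_ref, f_ref, lam=10.0):
    # L = (E - E_ref)^2 + lam * mean(|F - F_ref|^2), with F = -dE/dpos.
    pos = pos.detach().requires_grad_(True)
    e_pred = model(pos)                       # assumed to return a scalar energy
    f_pred = -torch.autograd.grad(e_pred, pos, create_graph=True)[0]
    return (e_pred - e_ref) ** 2 + lam * ((f_pred - f_ref) ** 2).mean()

# Typical use in a training step with the Adam optimizer:
#   loss = energy_force_loss(model, pos, e_ref, f_ref)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()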

Validation and Limitations

Accuracy Assessment

The accuracy of interatomic potentials is primarily evaluated by comparing their predictions of atomic energies, forces, and derived properties against high-fidelity reference data, such as density functional theory (DFT) calculations or experimental measurements. Common metrics include the root-mean-square error (RMSE) for total energies, typically reported in meV/atom, and for atomic forces in eV/Å (or equivalently meV/Å). Additional metrics assess elastic properties, such as the bulk modulus, where relative errors are computed to gauge agreement with benchmarks. These metrics quantify the potential's fidelity in reproducing quantum-mechanical behaviors at a classical computational cost.

Benchmarks often involve DFT comparisons for challenging configurations like defect formation energies (e.g., vacancies or interstitials) and surface energies, where potentials must capture local relaxations and electronic effects accurately. For instance, empirical potentials like the embedded atom method (EAM) typically achieve energy RMSE values around 10-40 meV/atom relative to DFT for such systems, while more advanced machine-learned interatomic potentials (MLIPs) reduce this to 1-5 meV/atom. Experimental validation extends to thermodynamic properties, such as melting points determined via phase coexistence simulations, where discrepancies highlight the potential's ability to model finite-temperature behavior. Property-specific tests, including phonon dispersion relations computed through lattice dynamics, further probe vibrational accuracy; for example, MLIPs benchmarked on materials like ThO₂ show relative errors below 5% in phonon frequencies compared to DFT-derived anharmonic Hamiltonians.

Evaluation protocols emphasize robust statistical assessment, such as k-fold cross-validation during fitting to ensure generalization beyond training data, and independent test sets for unseen structures. In the context of MLIPs as of 2025, universal benchmarks like those from the Materials Project database—encompassing relaxation trajectories and diverse elemental systems—enable standardized comparisons, often revealing speed-accuracy trade-offs where models achieving sub-1 meV/atom RMSE on energies sacrifice some computational efficiency. Recent MLIPs trained on these datasets demonstrate RMSE values under 2 meV/atom for energies and 0.04 eV/Å for forces in metallic systems like iron, outperforming traditional potentials while maintaining scalability for large-scale simulations.
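
These headline metrics reduce to simple array operations; a minimal sketch with synthetic prediction/reference arrays (units assumed to be eV and eV/Å) that reports energy RMSE per atom in meV and force RMSE over all Cartesian components:

import numpy as np

def rmse(pred, ref):
    # Root-mean-square error over all (flattened) components.
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(ref)) ** 2))

rng = np.random.default_rng(2)
n_atoms = 4
e_pred = np.array([-12.01, -11.48, -12.33])          # model total energies (eV)
e_dft = np.array([-12.00, -11.50, -12.30])           # reference DFT energies (eV)
f_dft = rng.normal(0.0, 0.5, (3 * n_atoms, 3))       # reference force components (eV/angstrom)
f_pred = f_dft + rng.normal(0.0, 0.02, f_dft.shape)  # model forces with small errors

print("energy RMSE: %.1f meV/atom" % (1000.0 * rmse(e_pred / n_atoms, e_dft / n_atoms)))
print("force RMSE: %.3f eV/angstrom" % rmse(f_pred, f_dft))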

Reliability Challenges

Interatomic potentials, whether classical or machine-learned, often exhibit limited transferability when applied beyond the conditions under which they were parameterized. Potentials optimized for equilibrium structures at ambient temperatures and pressures frequently fail to accurately describe behaviors at extreme conditions, such as high temperatures or pressures, or in novel phases like amorphous states or defects. For instance, simple pair potentials, which model interactions solely between atomic pairs, neglect many-body effects and electronic contributions, leading to inaccuracies in systems involving magnetism or strong correlations, where spin-dependent interactions are crucial. This arises because training data or fitting procedures typically prioritize common configurations, resulting in poor generalization to out-of-equilibrium dynamics or phase transitions.

Extrapolation poses a significant risk, particularly for machine-learning interatomic potentials (MLIPs), which can overfit to the specific chemistries and configurations in their datasets, yielding unreliable predictions for unseen compositions or structural motifs. Even advanced MLIPs, such as those based on graph neural networks, struggle with extrapolation to new chemical spaces, where environments differ substantially from the training set, leading to amplified errors in energies and forces. Recent models developed in 2025, like enhanced versions of universal potentials such as CHGNet, have improved robustness by incorporating broader datasets across diverse materials, mitigating some issues through broader training coverage; however, they do not fully eliminate extrapolation risks, as systematic softening of potential energy surfaces persists in certain regimes. These challenges underscore the need for cautious application, especially in predictive simulations of novel alloys or biomolecules.

Classical interatomic potentials introduce systematic errors by relying on approximations that overlook quantum mechanical effects, such as zero-point energy and nuclear quantum delocalization, which become prominent at low temperatures or in light-element systems. In classical frameworks, atoms are treated as point particles following Newtonian mechanics, ignoring the quantized vibrational modes inherent to quantum systems, which can shift equilibrium properties like lattice constants by several percent in light-element materials like water ice. These omissions lead to discrepancies in thermodynamic quantities, such as free energies, and can propagate errors in long-time simulations of phase stability or transport. While MLIPs can partially capture quantum-derived labels from ab initio data, they inherit similar biases if trained exclusively on classical trajectories, highlighting a fundamental limitation in approximating full quantum fidelity.

Validation of interatomic potentials is hampered by gaps in representing rare events, such as fractures, dislocations, or chemical reactions, which occur infrequently in standard training or testing datasets and thus remain underrepresented. These events, critical for applications like materials failure or catalysis, are often sampled inadequately in MD trajectories, leading to unverified predictions and potential simulation artifacts. To address this, incorporating uncertainty estimates—such as Bayesian outputs or ensemble variance—has emerged as essential for flagging high-risk regions and guiding active learning to enrich datasets with rare configurations. Without such measures, potentials may confidently output erroneous results for low-probability states, compromising reliability in high-stakes simulations.
Looking ahead, hybrid quantum-machine learning approaches offer promising directions to bolster reliability by integrating quantum circuit evaluations with classical ML frameworks, enabling better handling of quantum effects and extrapolation. These methods, such as variational quantum circuits embedded in message-passing networks, leverage quantum computing's ability to model entangled states while retaining ML scalability, potentially reducing systematic errors in polyatomic systems. Ongoing developments aim to combine these with active learning loops for on-the-fly uncertainty quantification, paving the way for more robust, transferable potentials across broader chemical spaces.
