Euler–Maruyama method
The Euler–Maruyama method is a basic numerical scheme for approximating solutions of stochastic differential equations (SDEs) of the form dX_t = f(t, X_t) dt + g(t, X_t) dW_t, where f and g are the drift and diffusion coefficients, respectively, and W_t is a standard Wiener process. It discretizes the time interval into steps of size \Delta t and approximates the solution at each step as X_n = X_{n-1} + f(t_{n-1}, X_{n-1}) \Delta t + g(t_{n-1}, X_{n-1}) \Delta W_n, where the \Delta W_n are independent Gaussian increments simulating the Brownian motion.[1]
Introduced by Japanese mathematician Gisiro Maruyama in his 1955 paper on continuous Markov processes, the method adapts Leonhard Euler's classical polygonal approximation for deterministic ordinary differential equations to the stochastic setting by incorporating Itô stochastic integrals, providing a practical tool for simulating paths of diffusion processes where analytical solutions are unavailable.[2] Under standard Lipschitz and growth conditions on f and g, the Euler–Maruyama scheme exhibits strong convergence of order 0.5, meaning the expected root-mean-square error over paths converges as O((\Delta t)^{1/2}), and weak convergence of order 1, where the error in expectations of smooth test functions converges as O(\Delta t).[1]
Widely applied in fields such as mathematical finance for pricing derivative securities under models like geometric Brownian motion, population dynamics with environmental noise, and physical systems involving random fluctuations, the method serves as a foundational benchmark for more advanced stochastic integrators like the Milstein scheme, despite its limitations in accuracy for multiplicative noise or stiff equations.[1] Its simplicity and implementability in standard programming environments make it accessible for Monte Carlo simulations, though care must be taken with step size selection to balance computational cost and precision.[1]
Historical Development
Origins and Naming
The Euler–Maruyama method represents a stochastic analogue to the Euler method originally developed for solving ordinary differential equations (ODEs), extending it to handle the randomness inherent in stochastic differential equations (SDEs). This adaptation emerged in the mid-20th century as computational tools became available for simulating complex systems influenced by noise, building on the deterministic Euler scheme introduced by Leonhard Euler in the 18th century for numerical integration of ODEs.[3]
The method derives its name from Leonhard Euler (1707–1783), whose forward Euler technique for ODEs serves as the foundational discretization strategy, and Gisiro Maruyama (1916–1986), a Japanese mathematician who pioneered its stochastic formulation. Maruyama's contributions focused on adapting the Euler approach to incorporate the increments of Wiener processes, essential for modeling continuous-time stochastic dynamics.[3][4]
During the 1940s and 1950s, the development of numerical methods for SDEs was motivated by the need to approximate solutions in physical systems exhibiting random fluctuations, such as Brownian motion in diffusion processes, and in engineering applications involving noisy signals and control systems. These efforts coincided with the formalization of Itô calculus, providing the rigorous framework for SDEs. A pivotal advancement came in Maruyama's 1955 paper, "Continuous Markov processes and stochastic equations," which introduced the core discretization scheme for Wiener processes, laying the groundwork for practical numerical simulations of such equations.[4]
Key Theoretical Contributions
The Euler–Maruyama method extends the classical Euler method for deterministic ordinary differential equations to stochastic differential equations by incorporating an approximation for the stochastic integral.[5]
In 1955, G. Maruyama established the foundational strong convergence of order 0.5 for the method, proving uniform convergence in quadratic mean to the true solution under appropriate conditions on the coefficients.[6]
During the 1970s, researchers including G.N. Milstein demonstrated weak convergence of order 1 for the Euler–Maruyama method, providing essential error analysis that confirmed its reliability for moments and expectations.
Milstein's contributions in this period, while extending to higher-accuracy schemes like the Milstein method with strong order 1.0, also solidified the foundational error bounds for the basic Euler–Maruyama approximation through stochastic Taylor expansions.[7]
These convergence results hold when the drift and diffusion coefficients satisfy global Lipschitz continuity and linear growth bounds, ensuring the existence and uniqueness of strong solutions to the underlying SDE.
Kloeden and Platen's 1992 monograph offers a thorough treatment of these properties, deriving explicit error bounds and conditions for both strong and weak convergence in multi-dimensional settings.
Mathematical Formulation
The Euler–Maruyama method provides a discrete-time approximation to the solution of an Itô stochastic differential equation (SDE) driven by a Wiener process.[8] Consider the general scalar Itô SDE of the form
dX_t = a(t, X_t) \, dt + b(t, X_t) \, dW_t, \quad t \in [0, T],
with deterministic initial condition X_0 = x_0 \in \mathbb{R}, where W_t is a standard Wiener process and the drift coefficient a: [0, T] \times \mathbb{R} \to \mathbb{R} and diffusion coefficient b: [0, T] \times \mathbb{R} \to \mathbb{R} are measurable functions satisfying local integrability conditions that ensure the Itô integrals exist.[2]
The method proceeds by discretizing the time interval [0, T] into a partition 0 = t_0 < t_1 < \cdots < t_N = T, with time steps \Delta t_n = t_{n+1} - t_n for n = 0, 1, \dots, N-1. The approximation Y_n to X_{t_n} is then defined recursively by the scheme
\begin{equation}
Y_{n+1} = Y_n + a(t_n, Y_n) \Delta t_n + b(t_n, Y_n) \Delta W_n, \quad Y_0 = x_0,
\end{equation}
where the increments \Delta W_n = W_{t_{n+1}} - W_{t_n} are independent and normally distributed as \Delta W_n \sim \mathcal{N}(0, \Delta t_n).[8]
The sequence of approximations \{Y_n\}_{n=0}^N constitutes a discrete-time Markov chain, since each step depends solely on the previous approximation Y_n and the independent noise increment \Delta W_n.[8]
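As a simple worked illustration (the Ornstein–Uhlenbeck process, chosen here purely as an example, with constants \theta, \sigma > 0 and mean level \mu), take a(t, x) = \theta(\mu - x) and b(t, x) = \sigma; the scheme then reads
Y_{n+1} = Y_n + \theta(\mu - Y_n) \Delta t_n + \sigma \Delta W_n, \quad \Delta W_n \sim \mathcal{N}(0, \Delta t_n),
so each step combines a deterministic pull toward the mean level \mu with an independent Gaussian perturbation of standard deviation \sigma \sqrt{\Delta t_n}.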
Derivation from Itô Calculus
The Itô stochastic differential equation (SDE) for an Itô process X_t is expressed in integral form as
X_t = X_0 + \int_0^t a(s, X_s) \, ds + \int_0^t b(s, X_s) \, dW_s,
where a(t, x) and b(t, x) denote the drift and diffusion coefficients, respectively, and W_t is a standard Wiener process.
To obtain a numerical approximation, discretize the time interval [0, T] into a sequence of points 0 = t_0 < t_1 < \cdots < t_m = T, with increments \Delta t_n = t_{n+1} - t_n. Over each subinterval [t_n, t_{n+1}], the integral form yields
X_{t_{n+1}} = X_{t_n} + \int_{t_n}^{t_{n+1}} a(s, X_s) \, ds + \int_{t_n}^{t_{n+1}} b(s, X_s) \, dW_s.
The Euler–Maruyama scheme approximates these integrals by freezing the coefficients at their values at the left endpoint t_n, assuming they remain approximately constant over the small interval \Delta t_n. This leads to the recursive scheme
X_{n+1} = X_n + a(t_n, X_n) \Delta t_n + b(t_n, X_n) \Delta W_n,
where \Delta W_n = W_{t_{n+1}} - W_{t_n} is the Wiener increment, normally distributed with mean zero and variance \Delta t_n.
This freezing of coefficients is justified under the assumption that a and b are continuous (or Lipschitz continuous) in their arguments, ensuring that their variation over \Delta t_n becomes negligible as \Delta t_n \to 0. The Wiener increment \Delta W_n satisfies \mathbb{E}[\Delta W_n] = 0 and \mathbb{E}[(\Delta W_n)^2] = \Delta t_n, which matches the quadratic variation of the Wiener process in the mean-square sense, providing a consistent approximation for the stochastic integral.
The Euler–Maruyama scheme arises naturally from the Itô-Taylor expansion of the SDE solution, which generalizes the classical Taylor series to stochastic processes using Itô's lemma. The expansion includes terms involving multiple Itô integrals; the Euler–Maruyama method retains only the zeroth-order terms for both the drift (ordinary integral) and diffusion (Itô integral), truncating all higher-order multiple stochastic integrals. This truncation introduces a local truncation error of order O(\Delta t_n) in the mean-square sense for sufficiently small \Delta t_n.
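For an autonomous scalar SDE with coefficients a(x) and b(x), the leading terms of this expansion over one step can be written as
X_{t_{n+1}} = X_{t_n} + a(X_{t_n}) \Delta t_n + b(X_{t_n}) \Delta W_n + \tfrac{1}{2} b(X_{t_n}) b'(X_{t_n}) \left( (\Delta W_n)^2 - \Delta t_n \right) + \dots,
where the omitted terms involve higher-order multiple Itô integrals. The Euler–Maruyama method retains only the first three terms on the right-hand side; additionally retaining the term involving b b' yields the Milstein scheme.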
Numerical Properties
Strong Convergence
Strong convergence quantifies the pathwise approximation quality of the Euler–Maruyama method to the true solution of a stochastic differential equation (SDE), measuring the mean-square deviation of sample paths rather than of statistical moments. It is defined such that the root-mean-square of the supremum error tends to zero as the discretization step size \Delta t approaches zero:
\left( \mathbb{E} \left[ \sup_{0 \leq t \leq T} |X_t - Y_t|^2 \right] \right)^{1/2} \to 0,
where X_t denotes the exact Itô process solution and Y_t the continuous-time extension of the Euler–Maruyama approximation over the interval [0, T]. This metric captures the expected maximum deviation across paths, emphasizing reliable trajectory tracking rather than statistical moments.[8]
Under standard regularity conditions—namely, global Lipschitz continuity of the drift a(x) and diffusion b(x) coefficients, along with linear growth bounds |a(x)| + |b(x)| \leq K(1 + |x|) for some K > 0—the method exhibits a strong order of convergence of 0.5. In this setting, the error bound takes the form
\left( \mathbb{E} \left[ \sup_{0 \leq t \leq T} |X_t - Y_t|^2 \right] \right)^{1/2} \leq C \sqrt{\Delta t},
with C depending on T, the Lipschitz constant, and the second moment of the initial condition \mathbb{E}[|X_0|^2] < \infty. These assumptions ensure existence, uniqueness, and moment stability of the SDE solution, enabling the discretization to mimic the path properties effectively.[8]
The proof establishes this order by analyzing the error process e_t = X_t - Y_t, decomposing it into local truncation errors from the Itô-Taylor expansion, and bounding the accumulated discrepancies. Key tools include the Itô isometry, which controls the variance of the stochastic error term via \mathbb{E}\left[ \left( \int_0^T \big( b(X_u) - b(Y_u) \big) \, dW_u \right)^2 \right] = \mathbb{E}\left[ \int_0^T |b(X_u) - b(Y_u)|^2 \, du \right], and Gronwall's inequality, applied to the mean-square error Z(t) = \sup_{0 \leq s \leq t} \mathbb{E}[|e_s|^2] to yield Z(T) \leq C \Delta t. This framework confirms the \sqrt{\Delta t} scaling inherent to the Brownian motion increments.[8][9]
For SDEs featuring additive noise (constant b), the strong error likewise remains O(\sqrt{\Delta t}). When the drift satisfies only a one-sided Lipschitz condition, such as \langle a(x) - a(y), x - y \rangle \leq L |x - y|^2 for some L \geq 0, together with polynomial growth and a globally Lipschitz diffusion, the order 0.5 can still be recovered, although for superlinearly growing drifts this generally requires modified variants such as tamed or implicit Euler–Maruyama schemes, since the plain explicit scheme may fail to converge strongly. These extensions broaden applicability to dissipative systems without sacrificing the fundamental convergence rate.[10]
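A minimal numerical experiment can illustrate the O(\sqrt{\Delta t}) scaling by comparing Euler–Maruyama against the known exact solution of geometric Brownian motion; the sketch below uses arbitrary illustrative parameter values and drives every coarse simulation with increments summed from one fine Brownian path.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, X0, T = 0.05, 0.4, 1.0, 1.0    # illustrative parameters
M = 2000                                   # number of sample paths
N_fine = 2**10                             # finest time resolution
dt_fine = T / N_fine

# Brownian increments on the finest grid, one row per path
dW = np.sqrt(dt_fine) * rng.standard_normal((M, N_fine))
W_T = dW.sum(axis=1)
X_exact = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)

for R in (2**3, 2**4, 2**5, 2**6):         # coarsening factors
    dt = R * dt_fine
    Y = np.full(M, X0)
    for n in range(N_fine // R):
        # sum fine increments to obtain the coarse Wiener increment
        dW_coarse = dW[:, n * R:(n + 1) * R].sum(axis=1)
        Y = Y + mu * Y * dt + sigma * Y * dW_coarse
    rms_err = np.sqrt(np.mean((X_exact - Y)**2))
    print(f"dt = {dt:.5f}   RMS error at T = {rms_err:.5f}")
# Halving dt should shrink the error by roughly a factor of sqrt(2).
```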
Weak Convergence
Weak convergence assesses the Euler–Maruyama method's performance in approximating statistical properties of the solution to a stochastic differential equation (SDE), such as expectations and moments, rather than individual sample paths. Formally, the approximation Y_t converges weakly to the true solution X_t if \sup_{0 \leq t \leq T} \left| \mathbb{E}[f(X_t)] - \mathbb{E}[f(Y_t)] \right| \to 0 as the time step \Delta t \to 0, for any smooth test function f belonging to the space C_b^2 of twice continuously differentiable functions with bounded derivatives.
Under the standard assumptions that the drift a and diffusion b coefficients of the SDE are globally Lipschitz continuous, satisfy linear growth conditions, and are sufficiently smooth, and that the test function f is likewise sufficiently smooth (classical proofs assume f is four times continuously differentiable with polynomially bounded derivatives), the Euler–Maruyama method exhibits a weak order of convergence of 1.0. This implies that the error satisfies \sup_{0 \leq t \leq T} \left| \mathbb{E}[f(X_t)] - \mathbb{E}[f(Y_t)] \right| \leq C \Delta t for some constant C > 0 independent of \Delta t. This order surpasses the strong convergence order of 0.5, as errors from individual increments tend to cancel out when averaging over paths to compute expectations.
For linear SDEs, the Euler–Maruyama method achieves the full weak order of 1 exactly under the aforementioned conditions, with potential for higher accuracy when the noise is additive and f is a polynomial. In the nonlinear case, maintaining order 1 requires that f has bounded second derivatives to control the remainder terms in the Itô-Taylor expansion. Unlike strong convergence, which demands pathwise mean-square accuracy and is thus stricter, weak convergence suffices for many applications focused on distributional properties. These results were rigorously established and refined in the comprehensive analysis by Kloeden and Platen.
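A rough way to observe the first-order weak error is to compare the sample mean of Euler–Maruyama endpoints against the exact expectation \mathbb{E}[X_T] = X_0 e^{\mu T} of geometric Brownian motion. The sketch below uses arbitrary illustrative parameters, with a deliberately large drift and many paths so that the O(\Delta t) bias dominates Monte Carlo noise.

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative parameters; the drift is chosen large so the O(dt) bias
# in the mean is visible above the Monte Carlo sampling error.
mu, sigma, X0, T = 1.5, 0.25, 1.0, 1.0
M = 100_000                                # number of independent paths
exact_mean = X0 * np.exp(mu * T)           # E[X_T] for geometric Brownian motion

for N in (8, 16, 32, 64):                  # number of time steps
    dt = T / N
    Y = np.full(M, X0)
    for n in range(N):
        dW = np.sqrt(dt) * rng.standard_normal(M)
        Y = Y + mu * Y * dt + sigma * Y * dW
    print(f"N = {N:3d}   |mean(Y_N) - E[X_T]| = {abs(Y.mean() - exact_mean):.4f}")
# Doubling N should roughly halve the error, consistent with weak order 1.
```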
Stability and Error Analysis
The Euler–Maruyama method preserves mean-square stability for linear stochastic differential equations of the form dX = \lambda X \, dt + \sigma X \, dW with \lambda + \tfrac{1}{2}\sigma^2 < 0 (so that the second moment of the exact solution decays), provided the time step \Delta t satisfies the condition |1 + \lambda \Delta t|^2 + \sigma^2 \Delta t < 1.[11] This inequality ensures that the second moment of the numerical solution decays to zero as time progresses, mirroring the behavior of the exact solution.[11] For practical implementation, the condition highlights the restrictive role of the diffusion coefficient \sigma in limiting allowable step sizes when noise is prominent.[11]
The global strong error of the Euler–Maruyama method under global Lipschitz conditions on the drift and diffusion coefficients is bounded by O(\sqrt{\Delta t}), with an explicit estimate of the form \mathbb{E}[|X_T - \hat{X}_T|] \leq C \sqrt{\Delta t}, where C depends on the Lipschitz constant L, the final time T, and bounds on the coefficients. This bound arises from controlling the local truncation error accumulation over the interval [0, T], ensuring the method's reliability for sufficiently small \Delta t. Weak error analysis complements this by providing O(\Delta t) convergence for expectations, but stability considerations emphasize the strong error bound for pathwise approximations.
In stiff stochastic differential equations, characterized by large negative eigenvalues in the drift (high Lipschitz constant L), the Euler–Maruyama method exhibits heightened sensitivity to the step size \Delta t, as the stability region shrinks proportionally to 1/L.[11] This can lead to numerical instability or excessive variance growth if \Delta t is not sufficiently small, necessitating adaptive step-size strategies to dynamically adjust \Delta t based on local error estimates or stability indicators. Such adaptations maintain reliability while minimizing computational cost in systems with disparate time scales.
For geometric Brownian motion dS = \mu S \, dt + \sigma S \, dW with \mu + \tfrac{1}{2}\sigma^2 < 0, the method's preservation of second-moment decay requires \Delta t small enough to satisfy the mean-square stability condition, preventing numerical explosion in the second moment.[11]
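A brief sketch (with assumed test-equation parameters) checks the stability quantity directly and estimates the second moment of the numerical solution for a stable and an unstable step size.

```python
import numpy as np

rng = np.random.default_rng(2)
# Assumed test-equation parameters; the exact solution is mean-square stable
# because lam + sigma**2 / 2 = -5.5 < 0.
lam, sigma, X0, T = -6.0, 1.0, 1.0, 20.0
M = 50_000                                   # paths used to estimate E[X_n^2]

for dt in (0.5, 0.125):
    lhs = abs(1 + lam * dt)**2 + sigma**2 * dt   # stability quantity, must be < 1
    X = np.full(M, X0)
    for n in range(int(round(T / dt))):
        dW = np.sqrt(dt) * rng.standard_normal(M)
        X = X + lam * X * dt + sigma * X * dW
    print(f"dt = {dt}: stability quantity = {lhs:.4f}, E[X_T^2] ~ {np.mean(X**2):.3e}")
# The estimated second moment decays toward zero when the quantity is below 1
# and blows up when it exceeds 1.
```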
Examples and Implementations
Geometric Brownian Motion
The geometric Brownian motion (GBM) serves as a fundamental example for applying the Euler–Maruyama method, particularly in modeling asset prices in finance, where the process exhibits multiplicative noise. The stochastic differential equation (SDE) governing GBM is given by
dS_t = \mu S_t \, dt + \sigma S_t \, dW_t,
with initial condition S_0 > 0, where \mu is the drift parameter, \sigma > 0 is the volatility, and W_t is a standard Wiener process.[8] The exact solution to this SDE is
S_t = S_0 \exp\left\{ \left(\mu - \frac{\sigma^2}{2}\right) t + \sigma W_t \right\},
which follows a log-normal distribution, enabling direct comparison with numerical approximations.[8]
The Euler–Maruyama method discretizes this SDE over a time grid t_n = n \Delta t for n = 0, 1, \dots, N with T = N \Delta t, yielding the iterative scheme
S_{n+1} = S_n \left(1 + \mu \Delta t + \sigma \Delta W_n \right),
where \Delta W_n \sim \mathcal{N}(0, \Delta t) are independent Gaussian increments.[8] This approximation preserves the multiplicative structure of the noise term, producing sample paths that approximate the true GBM trajectories. However, this approximation does not guarantee that sample paths remain positive, unlike the exact GBM solution, and negative values may occur, particularly with larger \Delta t.[12]
In simulations, multiple paths generated via Euler–Maruyama exhibit the characteristic exponential trend and log-normal terminal distribution of GBM, with individual paths spreading away from the mean due to the random increments. The expected strong error, measured as the root-mean-square deviation from the exact solution over paths, scales as O(\sqrt{\Delta t}), reflecting the method's order-0.5 convergence for such SDEs.[8]
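The comparison can be made concrete with a short sketch that drives both the exact solution and the Euler–Maruyama recursion with the same Brownian increments; the parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, S0, T, N = 0.1, 0.3, 1.0, 1.0, 500    # illustrative parameters
dt = T / N
t = np.linspace(0, T, N + 1)

dW = np.sqrt(dt) * rng.standard_normal(N)        # shared Wiener increments
W = np.concatenate(([0.0], np.cumsum(dW)))       # Brownian path on the grid

# Exact GBM solution evaluated on the grid
S_exact = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)

# Euler-Maruyama recursion using the same increments
S_em = np.empty(N + 1)
S_em[0] = S0
for n in range(N):
    S_em[n + 1] = S_em[n] * (1 + mu * dt + sigma * dW[n])

print("max |exact - EM| along the path:", np.max(np.abs(S_exact - S_em)))
```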
Practical Implementation Guidelines
The Euler–Maruyama method is implemented by discretizing the time interval [0, T] into N equal steps of size Δt = T/N, initializing the approximation Y_0 = X_0, and iteratively updating the scheme for n = 0 to N-1 as Y_{n+1} = Y_n + f(Y_n) Δt + g(Y_n) ΔW_n, where ΔW_n = √Δt ⋅ Z_n and Z_n is a standard normal random variable N(0,1).[1][13] To generate the random increments, standard random number generators are used, such as those producing Gaussian variates, ensuring the Brownian motion approximation is accurate for small Δt.[14]
For multi-dimensional stochastic differential equations dX_t = μ(t, X_t) dt + σ(t, X_t) dW_t, where X_t ∈ ℝ^d and W_t is a p-dimensional Wiener process, the method extends to vector form: Ŷ_{n+1} = Ŷ_n + μ(t_n, Ŷ_n) Δt + σ(t_n, Ŷ_n) ΔW_n, with ΔW_n ∈ ℝ^p generated as √Δt ⋅ Z_n and Z_n ∼ N(0, I_p).[13] If the Wiener processes are correlated with covariance matrix Σ, correlated increments are obtained by applying Cholesky decomposition to Σ, yielding L such that ΔW_n = L ⋅ (√Δt ⋅ Z_n) where Z_n consists of independent N(0,1) components; this ensures the correct correlation structure without altering the diffusion matrix σ.[14]
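For correlated driving noise, a sketch of the increment construction (with an assumed 2×2 correlation matrix) looks like this:

```python
import numpy as np

rng = np.random.default_rng(4)
dt = 0.01
Sigma = np.array([[1.0, 0.6],     # assumed correlation matrix of the two
                  [0.6, 1.0]])    # Wiener processes (unit variances)
L = np.linalg.cholesky(Sigma)     # Sigma = L @ L.T

Z = rng.standard_normal(2)        # independent N(0, 1) variates
dW = np.sqrt(dt) * (L @ Z)        # correlated increments with covariance Sigma * dt

# dW can now be used in the vector Euler-Maruyama update:
#   Y_next = Y + mu(t, Y) * dt + sigma(t, Y) @ dW
```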
A reference implementation for a single-path simulation in Python using NumPy is as follows:
```python
import numpy as np

def euler_maruyama(f, g, X0, T, N, seed=None):
    """Simulate one path of dX = f(X, t) dt + g(X, t) dW on [0, T] using N steps."""
    if seed is not None:
        np.random.seed(seed)              # fix the seed for reproducible increments
    dt = T / N
    t = np.linspace(0, T, N + 1)
    Y = np.zeros(N + 1)
    Y[0] = X0
    for n in range(N):
        Z = np.random.normal(0, 1)        # standard normal variate
        dW = np.sqrt(dt) * Z              # Wiener increment ~ N(0, dt)
        Y[n + 1] = Y[n] + f(Y[n], t[n]) * dt + g(Y[n], t[n]) * dW
    return t, Y
```
This implementation leverages NumPy for efficient random number generation and array operations; analogous MATLAB code replaces np.random.normal with randn and uses vectorization where possible.[1] In R, packages such as sde provide built-in solvers that implement the Euler–Maruyama scheme.[14]
For efficiency, vectorized operations in libraries like NumPy are essential to avoid explicit Python loops over individual paths, giving a typical cost of O(N) for simulating one path and O(NM) for M independent paths.[1] To ensure reproducibility across runs, set a fixed random seed before generating increments, as in the example above. Additionally, monitor for numerical overflow, particularly when the diffusion coefficient g (or σ) is large, by checking intermediate values and using adaptive step sizes if explosions occur.[13][14]
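A vectorized sketch that advances M paths simultaneously (here for geometric Brownian motion with assumed parameters) avoids a Python loop over paths while keeping the unavoidable loop over time steps.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, X0, T, N, M = 0.05, 0.2, 1.0, 1.0, 250, 10_000   # assumed parameters
dt = T / N

Y = np.full(M, X0)                                # state of all M paths at once
for n in range(N):
    dW = np.sqrt(dt) * rng.standard_normal(M)     # one increment per path
    Y += mu * Y * dt + sigma * Y * dW             # vectorized update across paths

print("sample mean at T:", Y.mean(), "  sample std at T:", Y.std())
```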
Applications and Extensions
Financial Modeling
The Euler–Maruyama method plays a central role in financial modeling by enabling the numerical simulation of asset price dynamics governed by stochastic differential equations (SDEs), particularly for derivative pricing under the Black-Scholes framework. In this context, Monte Carlo simulations approximate paths of geometric Brownian motion using the method's discretization scheme, where option prices are estimated by generating numerous paths and averaging the discounted payoffs at maturity. This approach is justified by the method's weak convergence properties, which ensure reliable computation of expected values for European-style options.[15]
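A minimal sketch of such a Monte Carlo pricer under assumed Black–Scholes parameters (risk-free rate r, volatility σ, strike K) is shown below; a closed-form Black–Scholes price exists for this payoff, so the example only illustrates the simulation approach rather than a case where it is needed.

```python
import math
import numpy as np

rng = np.random.default_rng(6)
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0   # assumed market parameters
N, M = 252, 100_000                                  # daily steps, number of paths
dt = T / N

S = np.full(M, S0)
for n in range(N):                     # Euler-Maruyama under the risk-neutral measure
    dW = np.sqrt(dt) * rng.standard_normal(M)
    S += r * S * dt + sigma * S * dW

payoff = np.maximum(S - K, 0.0)                      # European call payoff
mc_price = math.exp(-r * T) * payoff.mean()          # discounted Monte Carlo estimate

# Closed-form Black-Scholes price for comparison
d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # standard normal CDF
bs_price = S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

print(f"Monte Carlo: {mc_price:.3f}   Black-Scholes: {bs_price:.3f}")
```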
In risk management, the Euler–Maruyama method supports Value-at-Risk (VaR) estimation by simulating scenarios for portfolio SDEs, allowing the quantification of potential losses at specified confidence levels through Monte Carlo integration of tail distributions. For instance, the method discretizes multi-asset models to generate loss paths, from which the VaR is derived as the percentile of simulated portfolio values. This application is particularly valuable for assessing systemic risks in correlated asset classes.[16]
The method's advantages in financial modeling include its straightforward implementation for high-dimensional problems, such as simulating portfolios with dozens of assets, where closed-form solutions are infeasible. Extensions, such as truncated schemes for jump-diffusion processes, further enable modeling of sudden market shocks like those in Lévy-driven SDEs. During stress testing in volatile periods, the method has been employed with time steps of approximately \Delta t \approx 1/252 to match daily trading data frequencies.[17][18][16]
However, limitations arise from the method's convergence orders: its weak order of 1 suffices for expectation-based metrics like option prices or VaR, but the strong order of 0.5 can introduce significant pathwise errors in pricing path-dependent options, such as barriers or Asians, where accurate trajectory reproduction is essential.[19]
Biological Systems
The Euler–Maruyama method has been widely applied to simulate stochastic differential equations (SDEs) modeling biological processes, particularly those involving intrinsic noise from small molecule counts or environmental fluctuations.
In gene expression modeling, the method facilitates numerical solutions for SDEs approximating bursty mRNA production, where bursts arise from intermittent promoter activity. A common formulation for mRNA concentration X_t is the SDE
dX_t = (k - \gamma X_t) \, dt + \sqrt{k + \gamma X_t} \, dW_t,
with production rate k, degradation rate \gamma, and Wiener process W_t, capturing the Poisson-like variability in transcription and degradation events. This diffusion approximation to the chemical master equation allows efficient simulation of noise-driven fluctuations in protein levels downstream of mRNA dynamics.
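A brief sketch of simulating this mRNA model with the Euler–Maruyama method follows; the rate constants are assumed for illustration, and the argument of the square root is clipped at zero because the diffusion approximation can otherwise momentarily become negative.

```python
import numpy as np

rng = np.random.default_rng(7)
k, gamma = 10.0, 0.5            # assumed production and degradation rates
X0, T, N = 0.0, 50.0, 5000
dt = T / N

X = np.empty(N + 1)
X[0] = X0
for n in range(N):
    dW = np.sqrt(dt) * rng.standard_normal()
    drift = k - gamma * X[n]
    diff = np.sqrt(max(k + gamma * X[n], 0.0))   # clip to keep the square root real
    X[n + 1] = X[n] + drift * dt + diff * dW

print("stationary mean k/gamma =", k / gamma, "  simulated late-time value:", X[-1])
```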
For population dynamics, the Euler–Maruyama method approximates solutions to SDEs incorporating environmental noise, such as the stochastic logistic growth model
dN_t = r N_t \left(1 - \frac{N_t}{K}\right) dt + \sigma N_t \, dW_t,
where N_t is population size, r is intrinsic growth rate, K is carrying capacity, \sigma quantifies noise intensity, and W_t models random perturbations like resource variability.[20] This approach reveals extinction risks and persistence thresholds under stochastic forcing, outperforming deterministic models in low-population regimes.[21]
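A sketch of simulating the stochastic logistic model with the Euler–Maruyama method, using assumed parameter values, is given below; the population is clipped at zero because the discretized scheme, unlike the exact process, can step below zero.

```python
import numpy as np

rng = np.random.default_rng(8)
r, K, sigma = 0.8, 100.0, 0.3    # assumed growth rate, carrying capacity, noise intensity
N0, T, steps = 10.0, 30.0, 3000
dt = T / steps

N_pop = np.empty(steps + 1)
N_pop[0] = N0
for n in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    drift = r * N_pop[n] * (1 - N_pop[n] / K)
    N_pop[n + 1] = max(N_pop[n] + drift * dt + sigma * N_pop[n] * dW, 0.0)

print("population at time T:", N_pop[-1])
```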
These applications leverage the method's ability to capture intrinsic noise in small biological systems, such as molecular counts in cells, where discrete stochastic effects dominate, and enable parameter inference from time-series data via likelihood-based methods. In systems and synthetic biology, Higham (2008) illustrated how applying the Euler–Maruyama method to chemical Langevin equation approximations relates to and complements τ-leaping, enabling faster simulation of bistable gene circuits such as toggle switches.[22]
More recently, during the 2020s COVID-19 outbreaks, the method has supported stochastic extensions of SEIR (susceptible-exposed-infectious-recovered) models, simulating epidemic trajectories with demographic noise to assess outbreak variability and intervention impacts.[23]
Machine Learning
In machine learning, particularly in generative modeling, the Euler–Maruyama method is extensively used to simulate the stochastic differential equations underlying score-based diffusion models. These models, popularized since the early 2020s, frame data generation as sampling from a reverse-time SDE: the forward process gradually adds noise to data, and the learned reverse process denoises random samples to generate new data. The method discretizes this reverse SDE, often of the variance-preserving or variance-exploding type, enabling efficient sample generation for tasks like image synthesis and text-to-image generation. For instance, in predictor-corrector samplers, Euler–Maruyama steps serve as the predictor, approximating the drift and diffusion terms while balancing computational speed with sample quality. As of 2025, extensions incorporate adaptive step sizes to mitigate discretization errors in high-dimensional settings.[24]
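A heavily simplified sketch of such a reverse-time sampler is shown below. A variance-preserving forward SDE with an assumed linear noise schedule and the analytic score of a standard Gaussian "data" distribution stand in for a trained score network, so this illustrates only the Euler–Maruyama discretization of the reverse SDE, not a practical generative model.

```python
import numpy as np

rng = np.random.default_rng(9)
d, M, N = 2, 20_000, 1000            # dimension, number of samples, reverse-time steps
beta = lambda t: 0.1 + 19.9 * t      # assumed linear noise schedule beta(t)
dt = 1.0 / N

def score(x, t):
    # Stand-in for a trained score network: the exact score of N(0, I), which is
    # the marginal of the toy Gaussian data under the variance-preserving SDE.
    return -x

x = rng.standard_normal((M, d))      # start from the prior N(0, I) at t = 1
for i in range(N):
    t = 1.0 - i * dt
    b = beta(t)
    drift = -0.5 * b * x - b * score(x, t)        # reverse-SDE drift
    z = rng.standard_normal((M, d))
    x = x - drift * dt + np.sqrt(b * dt) * z      # Euler-Maruyama step backward in time

# For this toy target the samples should again be approximately N(0, I).
print("sample mean ~", x.mean(axis=0), "  sample variance ~", x.var(axis=0))
```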
Comparisons to Other Methods
The Euler–Maruyama method, with its strong convergence order of 0.5 and weak order of 1.0, serves as a foundational explicit scheme for simulating stochastic differential equations (SDEs), but it is often contrasted with the Milstein method, which achieves a strong order of 1.0 by incorporating an additional Itô-Taylor expansion term that accounts for the stochastic integral of the diffusion coefficient's derivative.[1][8] This extra term in the Milstein scheme requires computing the derivative of the diffusion function b(t, X_t), increasing implementation complexity compared to the simpler Euler–Maruyama approximation, while both methods maintain the same weak order of 1.0 for expectations of smooth functions.[1][25]
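The sketch below contrasts the two updates for geometric Brownian motion, where the diffusion is b(x) = σx with derivative b'(x) = σ, so the Milstein correction can be written explicitly; the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
mu, sigma, X0, T, N = 0.1, 0.5, 1.0, 1.0, 200   # illustrative parameters
dt = T / N

Y_em, Y_mil = X0, X0
W_T = 0.0
for n in range(N):
    dW = np.sqrt(dt) * rng.standard_normal()
    W_T += dW
    # Euler-Maruyama update
    Y_em += mu * Y_em * dt + sigma * Y_em * dW
    # Milstein update adds 0.5 * b * b' * (dW^2 - dt), here with b(x) = sigma * x
    Y_mil += (mu * Y_mil * dt + sigma * Y_mil * dW
              + 0.5 * sigma**2 * Y_mil * (dW**2 - dt))

exact = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)
# For a single path the Milstein endpoint is typically (not always) closer to the exact value.
print(f"exact: {exact:.4f}  Euler-Maruyama: {Y_em:.4f}  Milstein: {Y_mil:.4f}")
```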
In comparison to stochastic Runge–Kutta methods for SDEs, the Euler–Maruyama scheme represents a first-order explicit approach, whereas higher-order variants, such as weak order 2 Runge–Kutta schemes, offer improved accuracy by evaluating multiple stages per step to better approximate the drift and diffusion integrals, though at a significantly higher computational cost due to additional evaluations of the coefficients.[26][27] These Runge–Kutta extensions are particularly beneficial for problems demanding higher weak convergence, but their derivative-free formulations still demand more resources than the baseline Euler–Maruyama method.[27]
A notable advantage of the Euler–Maruyama method arises in cases of additive noise, where the diffusion coefficient b(t) is constant and independent of the solution X_t; here, the scheme attains a strong order of 1.0, equivalent to the Milstein method, without needing the additional correction term.[28][25] Consequently, the Euler–Maruyama method is preferred for rapid simulations where weak convergence is sufficient, such as Monte Carlo estimations of expectations, while the Milstein method is recommended for path-dependent applications requiring precise strong convergence, like barrier option pricing in finance.[1][8]
For stiff SDEs, where explicit schemes like Euler–Maruyama may suffer from stability restrictions necessitating tiny time steps, Rosenbrock-type methods provide semi-implicit alternatives that enhance stability through linearized Jacobian approximations, positioning Euler–Maruyama as the efficient baseline in most numerical libraries for non-stiff problems.[29][8]
| Method | Strong Order | Weak Order | Key Trade-off |
|---|---|---|---|
| Euler–Maruyama | 0.5 (1.0 for additive noise) | 1.0 | Simple, explicit; limited strong accuracy |
| Milstein | 1.0 | 1.0 | Requires diffusion derivative; higher strong order |
| Stochastic Runge–Kutta (higher-order) | Varies (e.g., 1.0+) | Varies (e.g., 2.0) | More stages; better accuracy at higher cost |
| Rosenbrock-type (for stiff) | 0.5–1.0 | 1.0 | Improved stability; semi-implicit complexity |