Midpoint method
The midpoint method, also known as the modified Euler method or explicit midpoint rule, is a second-order explicit Runge-Kutta numerical technique for approximating solutions to initial value problems of ordinary differential equations (ODEs) of the form \frac{du}{dt} = f(t, u) with u(t_0) = u_0.[1] It improves upon the first-order Euler method by evaluating the derivative at an intermediate "midpoint" after a half-step, then using that slope to advance the full step, yielding a local truncation error of \mathcal{O}(h^3) and global error of \mathcal{O}(h^2), where h is the step size.[2]
The algorithm proceeds iteratively: starting from (t_i, u_i), compute the predictor u_{i+1/2} = u_i + \frac{h}{2} f(t_i, u_i) and t_{i+1/2} = t_i + \frac{h}{2}, then update u_{i+1} = u_i + h f(t_{i+1/2}, u_{i+1/2}) and t_{i+1} = t_i + h.[3] This two-stage process makes it computationally efficient while providing quadratic convergence, allowing more accurate trajectory tracing than first-order methods such as Euler's, especially for nonlinear ODEs over finite intervals.[1] As a member of the Runge-Kutta family, it serves as a foundational explicit integrator in scientific computing, often used in simulations of physical systems where higher-order methods may be overly costly.[2]
Definition
The midpoint method is a specific numerical technique for approximating solutions to initial value problems in ordinary differential equations, serving as an explicit second-order Runge-Kutta method.[4] It improves upon basic approaches by evaluating the derivative at the midpoint of each time interval, providing a more accurate estimate of the solution's behavior over that step compared to first-order methods.[5] This method requires two evaluations of the right-hand side function per step, balancing computational efficiency with enhanced precision.[4]
The midpoint method addresses initial value problems of the form y' = f(t, y), y(t_0) = y_0, where a fixed step size h is used to generate a sequence of approximations y_n \approx y(t_n) with t_n = t_0 + n h.[6] It incorporates a predictor step to estimate the solution at the interval's midpoint, then uses the slope there to advance the approximation, thereby capturing curvature effects that simpler linear extrapolations miss.[4]
Historically, the midpoint method emerged as part of the early development of Runge-Kutta methods in the late 19th and early 20th centuries, initially introduced by Carl Runge in 1895 as an adaptation of midpoint quadrature for differential equations.[7] It was further refined by Karl Heun in 1900 and Wilhelm Kutta in 1901, who integrated midpoint evaluations into higher-order schemes, establishing its foundational role in numerical ODE solvers.[7] This evolution addressed the limitations of the forward Euler method, a simpler predecessor that relies solely on the initial slope and achieves only first-order accuracy.[5]
The midpoint method is applied to initial value problems for ordinary differential equations (ODEs) of the form y' = f(t, y), y(t_0) = y_0, where f is a sufficiently smooth function.[3]
For a scalar ODE, the explicit midpoint method advances the numerical solution from (t_n, y_n) to (t_{n+1}, y_{n+1}) using a step size h > 0 via the following iterative formula:
k_1 = f(t_n, y_n),
\hat{y} = y_n + \frac{h}{2} k_1,
k_2 = f\left(t_n + \frac{h}{2}, \hat{y}\right),
y_{n+1} = y_n + h k_2,
where t_{n+1} = t_n + h and \hat{y} serves as a temporary predictor at the midpoint.[3][8] This formulation follows a predictor-corrector style, with the first stage predicting the midpoint value explicitly using the slope at the current point, and the second stage correcting the full step using the slope at that predicted midpoint.
This explicit midpoint method is distinct from implicit variants, such as the implicit midpoint rule, which require solving an equation involving the unknown y_{n+1} at each step and are not addressed here.[3]
The method extends naturally to systems of first-order ODEs, where \mathbf{y} \in \mathbb{R}^m is a vector, f: \mathbb{R} \times \mathbb{R}^m \to \mathbb{R}^m, and all operations apply component-wise. The iterative formula becomes
\mathbf{k}_1 = f(t_n, \mathbf{y}_n),
\hat{\mathbf{y}} = \mathbf{y}_n + \frac{h}{2} \mathbf{k}_1,
\mathbf{k}_2 = f\left(t_n + \frac{h}{2}, \hat{\mathbf{y}}\right),
\mathbf{y}_{n+1} = \mathbf{y}_n + h \mathbf{k}_2.
For well-posedness and uniqueness of solutions, f must satisfy a Lipschitz condition in \mathbf{y} uniformly in t.[3]
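As a concrete reading of these formulas, the following is a minimal Python sketch of a single explicit midpoint step for a system, using NumPy for the component-wise arithmetic; the function name and the example problem are illustrative, not part of any standard library.

import numpy as np

def midpoint_step(f, t, y, h):
    # One explicit midpoint step for y' = f(t, y), with y a NumPy array.
    k1 = f(t, y)                  # stage 1: slope at (t_n, y_n)
    y_hat = y + (h / 2) * k1      # predictor for the solution at the midpoint
    k2 = f(t + h / 2, y_hat)      # stage 2: slope at the predicted midpoint
    return y + h * k2             # advance the full step with the midpoint slope

# Example: harmonic oscillator x'' = -x written as the system (x, v)' = (v, -x)
f = lambda t, y: np.array([y[1], -y[0]])
y1 = midpoint_step(f, 0.0, np.array([1.0, 0.0]), 0.1)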
Derivation
Taylor series expansion approach
The Taylor series expansion provides a foundational approach to deriving the midpoint method for solving the initial value problem y'(t) = f(t, y(t)), y(t_0) = y_0, by matching the method's update formula to the exact solution's expansion up to second order. Consider the exact solution at t_{n+1} = t_n + h, where h is the step size. Expanding around t_n, the solution satisfies
y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{h^2}{2} y''(t_n) + O(h^3).
Since y'(t) = f(t, y(t)), the first-order term is h f(t_n, y(t_n)).[9]
To incorporate the second-order term, differentiate y'(t) = f(t, y(t)) using the chain rule, yielding
y''(t) = \frac{\partial f}{\partial t}(t, y(t)) + \frac{\partial f}{\partial y}(t, y(t)) \cdot f(t, y(t)).
This expression, evaluated at t_n, captures the quadratic contribution to the expansion. The forward Euler method approximates only up to the linear term, achieving first-order accuracy with local truncation error O(h^2).[9]
The midpoint method improves accuracy by evaluating the slope at an approximate midpoint, thereby capturing the \frac{h^2}{2} y'' term without requiring partial derivatives of f. Specifically, it uses an intermediate step to estimate the solution at t_n + h/2, given by y(t_n) + (h/2) f(t_n, y(t_n)), and evaluates the right-hand side there: f(t_n + h/2, y(t_n) + (h/2) f(t_n, y(t_n))). The update formula then becomes
y_{n+1} = y_n + h f\left(t_n + \frac{h}{2}, y_n + \frac{h}{2} f(t_n, y_n)\right).
This substitution ensures the method's expansion aligns with the exact series up to O(h^2).[9]
To verify the order, expand the method's output using Taylor series for f around (t_n, y_n). The resulting series matches the exact expansion's constant, linear, and quadratic terms, leaving a local truncation error of O(h^3), which confirms second-order accuracy. This derivation highlights the method's superiority over first-order schemes like Euler by systematically incorporating higher-order corrections via series matching.[9]
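To make this matching explicit, expand the second stage about (t_n, y_n):
f\left(t_n + \frac{h}{2}, y_n + \frac{h}{2} f(t_n, y_n)\right) = f(t_n, y_n) + \frac{h}{2}\left(\frac{\partial f}{\partial t} + \frac{\partial f}{\partial y} f\right)\bigg|_{(t_n, y_n)} + O(h^2),
so the method's update satisfies
y_{n+1} = y_n + h f(t_n, y_n) + \frac{h^2}{2}\left(\frac{\partial f}{\partial t} + \frac{\partial f}{\partial y} f\right)\bigg|_{(t_n, y_n)} + O(h^3) = y_n + h y'(t_n) + \frac{h^2}{2} y''(t_n) + O(h^3),
which agrees with the exact expansion through the quadratic term, since y'' = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial y} f along the solution.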
Runge-Kutta interpretation
The midpoint method can be viewed as a two-stage explicit Runge-Kutta method of order 2 for solving the initial value problem y' = f(t, y), y(t_0) = y_0.[10]
Explicit Runge-Kutta methods approximate the solution at t_{n+1} = t_n + h via y_{n+1} = y_n + h \sum_{i=1}^s b_i k_i, where the intermediate stages are computed as k_i = f\left( t_n + c_i h, y_n + h \sum_{j=1}^{i-1} a_{ij} k_j \right) for i = 1, \dots, s, with the coefficients b_i, c_i, and a_{ij} (for j < i) defining the method.[11]
For the midpoint method, s = 2 and the coefficients form the following Butcher tableau:
\begin{array}{c|cc}
0 & 0 & 0 \\
\frac{1}{2} & \frac{1}{2} & 0 \\
\hline
& 0 & 1
\end{array}
Thus, c = \begin{pmatrix} 0 \\ \frac{1}{2} \end{pmatrix}, A = \begin{pmatrix} 0 & 0 \\ \frac{1}{2} & 0 \end{pmatrix}, and b = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.[12]
This configuration satisfies the order conditions for a second-order method, namely \sum_{i=1}^s b_i = 1 and \sum_{i=1}^s b_i c_i = \frac{1}{2}.[11]
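For the midpoint tableau these conditions are verified directly:
\sum_{i=1}^2 b_i = 0 + 1 = 1, \qquad \sum_{i=1}^2 b_i c_i = 0 \cdot 0 + 1 \cdot \frac{1}{2} = \frac{1}{2}.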
In contrast to Ralston's method, another two-stage second-order Runge-Kutta method, which minimizes a bound on the local truncation error by setting b = \begin{pmatrix} \frac{1}{4} \\ \frac{3}{4} \end{pmatrix} and a_{21} = c_2 = \frac{2}{3}, the midpoint method places all weight on the second stage.[13]
Error analysis
Local truncation error
The local truncation error for the midpoint method is defined as the difference between the exact solution value at the next time step and the value obtained by applying a single step of the method starting from the exact solution at the current time step, denoted as y(t_n + h) - z(t_n + h), where y(t_n) = y_n is exact and z represents the numerical approximation after one step.[14]
Assuming the right-hand side function f(t, y) and its partial derivatives up to second order are continuous, the local truncation error can be derived using Taylor series expansions around t_n. The expansions for y(t_{n+1}) and for the method's single step cancel in the constant, linear, and quadratic terms, leaving a remainder of order h^3.[14]
Specifically, comparing the two expansions gives the principal error term \frac{h^3}{24} y'''(t_n) + \frac{h^3}{8} \frac{\partial f}{\partial y}(t_n, y_n)\, y''(t_n) + O(h^4), which is of order O(h^3). This principal error term, involving the third derivative of the solution, demonstrates that the midpoint method achieves second-order accuracy.[14]
Global error and convergence
The global error in the midpoint method for solving the initial value problem y' = f(t, y), y(t_0) = y_0, over a fixed time interval [t_0, T] with N = (T - t_0)/h steps of size h, is defined as e_n = y(t_n) - y_n, where y(t_n) is the exact solution at the n-th time point t_n = t_0 + n h and y_n is the numerical approximation. Under suitable conditions on f, this error satisfies \max_{0 \leq n \leq N} \|e_n\| = O(h^2) as h \to 0.[15][16]
To establish this bound, consider the error recursion derived from the method's update and the exact solution's Taylor expansion. Assuming f satisfies a Lipschitz condition in y with constant L > 0, i.e., \|f(t, y_1) - f(t, y_2)\| \leq L \|y_1 - y_2\| for all t \in [t_0, T] and relevant y_1, y_2, the global error satisfies
\|e_{n+1}\| \leq \|e_n\| (1 + L h) + O(h^3),
where the O(h^3) term arises from the local truncation error of the method. Writing this term as C' h^3 and iterating the inequality from n = 0 (with e_0 = 0) yields
\|e_n\| \leq \sum_{k=0}^{n-1} C' h^3 (1 + L h)^{n-1-k} \leq C h^2 e^{L (t_n - t_0)}
for some constant C > 0 independent of h, using the bound (1 + L h)^m \leq e^{L m h} together with the fact that the sum has n \leq (t_n - t_0)/h terms, each of size C' h^3. Thus, the error remains O(h^2) over the fixed interval.[15]
The convergence of the midpoint method follows from a general theorem for one-step methods: if the method is consistent (local truncation error O(h^{p+1})) and the problem is well-posed (Lipschitz continuous f), then the global error is O(h^p). For the explicit midpoint method, a second-order Runge-Kutta scheme with p = 2 when f is sufficiently smooth (at least twice continuously differentiable) and Lipschitz in y, the theorem implies \max_{0 \leq n \leq N} \|e_n\| \leq C h^2 for some C independent of h > 0 sufficiently small.[15][16]
In practice, this quadratic convergence means that halving the step size h reduces the global error by a factor of approximately 4, enabling efficient accuracy control in simulations by refining the grid.[15]
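This behavior is easy to check numerically. The following is a minimal Python sketch of such a convergence test on y' = -y, y(0) = 1, integrated to T = 1; the routine name is illustrative.

import math

def midpoint_solve(f, t0, y0, T, h):
    # Integrate y' = f(t, y) from t0 to T with the explicit midpoint method.
    t, y = t0, y0
    while t < T - 1e-12:          # small tolerance guards against floating-point overshoot
        k1 = f(t, y)
        k2 = f(t + h / 2, y + (h / 2) * k1)
        y, t = y + h * k2, t + h
    return y

f = lambda t, y: -y
for h in (0.1, 0.05, 0.025):
    err = abs(midpoint_solve(f, 0.0, 1.0, 1.0, h) - math.exp(-1.0))
    print(f"h = {h:5.3f}   error at T = 1: {err:.3e}")
# Each halving of h should shrink the error by roughly a factor of 4.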
Implementation and examples
Algorithm steps
The midpoint method approximates solutions to the initial value problem y' = f(t, y), y(t_0) = y_0 over the interval [t_0, T] using a fixed step size h. The algorithm initializes the current time t = t_0 and solution value y = y_0, then iteratively advances the solution until t reaches or exceeds T, producing a sequence of approximation points (t_n, y_n).[1]
The required inputs are the right-hand side function f(t, y), initial time t_0, initial value y_0, step size h > 0, and end time T > t_0; the output is the discrete solution trajectory \{(t_n, y_n)\}_{n=0}^N where t_N \approx T.[2]
The core iteration follows this pseudocode structure:
initialize t = t_0, y = y_0
while t < T:
    k1 = f(t, y)
    temp = y + (h/2) * k1
    k2 = f(t + h/2, temp)
    y = y + h * k2
    t = t + h
output sequence (t_n, y_n)
Each full step requires two evaluations of f.[1][17]
For systems of ODEs, where y \in \mathbb{R}^d and f: \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}^d, the algorithm extends naturally by treating y, k_1, k_2, and \textit{temp} as vectors, with all operations (addition, scalar multiplication) performed element-wise or via vector arithmetic in the implementation language.[17]
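The following is a minimal self-contained Python sketch of this loop for systems, with NumPy arrays carrying the vector arithmetic; the names are illustrative and no adaptive safeguards are included.

import numpy as np

def midpoint_trajectory(f, t0, y0, T, h):
    # Generate the discrete trajectory for y' = f(t, y), y(t0) = y0.
    t, y = t0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y.copy()]
    while t < T - 1e-12:
        k1 = f(t, y)                          # slope at the current state
        k2 = f(t + h / 2, y + (h / 2) * k1)   # slope at the predicted midpoint
        y = y + h * k2
        t = t + h
        ts.append(t)
        ys.append(y.copy())
    return np.array(ts), np.array(ys)

# Example: simple pendulum theta'' = -sin(theta) as a first-order system
f = lambda t, y: np.array([y[1], -np.sin(y[0])])
ts, ys = midpoint_trajectory(f, 0.0, [0.5, 0.0], 10.0, 0.01)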
The step size h must balance accuracy, which improves with smaller h, against efficiency, as smaller steps increase the total number of function evaluations and computation time. Implementations should also handle potential singularities in f, such as points leading to division by zero, by validating inputs or using adaptive safeguards before evaluation.
Numerical example
To illustrate the application of the midpoint method, consider the initial value problem y' = -y, y(0) = 1, whose exact solution is y(t) = e^{-t}. This is a standard linear test equation for evaluating numerical ODE solvers.[18]
We apply the method with step size h = 0.1 for the first two steps. At t_0 = 0, y_0 = 1, compute k_1 = f(t_0, y_0) = -1. The intermediate point is y^* = y_0 + \frac{h}{2} k_1 = 1 - 0.05 = 0.95, so k_2 = f(t_0 + \frac{h}{2}, y^*) = -0.95. Thus, y_1 = y_0 + h k_2 = 1 - 0.095 = 0.905 at t_1 = 0.1.
Next, at t_1 = 0.1, y_1 = 0.905, compute k_1 = -0.905. The intermediate point is y^* = 0.905 + 0.05 \times (-0.905) = 0.85975, so k_2 = -0.85975. Thus, y_2 = 0.905 + 0.1 \times (-0.85975) \approx 0.819 at t_2 = 0.2.
The results, including the exact values and absolute errors, are summarized in the following table:
| t_n | y_n (midpoint) | Exact y(t_n) | Absolute error |
|-----|----------------|--------------|----------------|
| 0.0 | 1.000          | 1.000000     | 0.000000       |
| 0.1 | 0.905          | 0.904837     | 0.000163       |
| 0.2 | 0.819          | 0.818731     | 0.000294       |
The absolute errors are on the order of 10^{-4} after two steps.
To demonstrate quadratic convergence, the method is applied with halved step size h = 0.05 (four steps to reach t = 0.2), yielding approximations y(0.1) \approx 0.904877 and y(0.2) \approx 0.818802. The errors reduce by a factor of approximately 4, as shown below, consistent with the second-order accuracy of the method.[19]
| t_n | y_n (midpoint, h=0.05) | Exact y(t_n) | Absolute error |
|-----|------------------------|--------------|----------------|
| 0.1 | 0.904877               | 0.904837     | 0.000039       |
| 0.2 | 0.818802               | 0.818731     | 0.000071       |
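These tables can be reproduced with a few lines of Python; the following is a self-contained sketch (the helper name is illustrative):

import math

def midpoint(f, t, y, h, steps):
    # Take a fixed number of explicit midpoint steps.
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + (h / 2) * k1)
        y, t = y + h * k2, t + h
    return y

f = lambda t, y: -y
for h, steps in ((0.1, 2), (0.05, 4)):
    y = midpoint(f, 0.0, 1.0, h, steps)
    print(f"h = {h}: y(0.2) ~ {y:.6f}, |error| = {abs(y - math.exp(-0.2)):.6f}")
# h = 0.1 gives 0.819025 (error 0.000294); h = 0.05 gives 0.818802 (error 0.000071).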
Applications and extensions
Use in scientific computing
The midpoint method is widely applied in scientific computing for modeling oscillatory systems, such as the frictionless simple pendulum, where it solves the second-order ordinary differential equation describing angular motion to simulate periodic trajectories with good accuracy over moderate time scales. In mechanics, it facilitates basic trajectory predictions, for instance, in simulating the paths of projectiles or particles under gravitational forces by integrating the equations of motion step by step.
The method's advantages lie in its simplicity and second-order accuracy, which provide a balance between computational efficiency and precision without requiring higher derivatives, making it ideal for initial prototyping in physics and engineering simulations.[20] Its straightforward algorithm is easily implemented in software environments like MATLAB or Python, often via custom functions or wrappers around libraries such as SciPy's ODE solvers for rapid deployment in educational and research settings.[21]
In practice, the explicit midpoint method has limitations, particularly its instability when applied to stiff systems, where rapid changes in solution behavior demand very small step sizes to maintain numerical stability.[22] Consequently, it serves primarily as a foundational component in adaptive solvers, contributing to error estimation and step-size control rather than standalone use in complex, long-duration simulations.[23]
Historically, the midpoint method, as an early member of the Runge-Kutta family developed in the early 1900s, played a role in 20th-century computational physics for solving differential equations in simulations before the dominance of higher-order and implicit methods with advancing computing power.[24] Due to its second-order convergence properties, it offers reliable results for non-stiff oscillatory and mechanical problems when step sizes are appropriately selected.[20]
Variants for stiff equations
The explicit midpoint method, being an explicit Runge-Kutta scheme, suffers from severe stability limitations when applied to stiff ordinary differential equations (ODEs), where the Lipschitz constant L of the right-hand side function is large; specifically, the step size h must satisfy h < 2/L to ensure stability, often leading to prohibitively small time steps for practical computations.
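This restriction can be made precise on the linear test equation y' = \lambda y with \lambda < 0 real: one explicit midpoint step multiplies the solution by the stability function
R(h\lambda) = 1 + h\lambda + \frac{(h\lambda)^2}{2},
and |R(h\lambda)| \leq 1 holds only for h\lambda \in [-2, 0], i.e. h \leq 2/|\lambda|, mirroring the step-size bound above.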
To address this, the implicit midpoint method serves as a key variant, defined by the update
\mathbf{y}_{n+1} = \mathbf{y}_n + h \mathbf{f}\left( t_n + \frac{h}{2}, \frac{\mathbf{y}_n + \mathbf{y}_{n+1}}{2} \right),
which is a one-stage Gauss-Legendre Runge-Kutta method of order 2 and is A-stable, allowing larger step sizes for stiff problems without sacrificing qualitative behavior. This nonlinear equation for \mathbf{y}_{n+1} is typically solved using fixed-point iteration for mildly nonlinear cases or Newton's method for stronger nonlinearity, with the explicit midpoint solution providing a good initial guess to accelerate convergence.
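A minimal Python sketch of one implicit midpoint step solved by fixed-point iteration, warm-started from the explicit midpoint predictor as described above; the function name and tolerances are illustrative, and for strongly stiff f the loop would be replaced by a Newton iteration, since fixed-point iteration converges only when h L / 2 < 1.

import numpy as np

def implicit_midpoint_step(f, t, y, h, tol=1e-12, max_iter=100):
    # Solve y_new = y + h * f(t + h/2, (y + y_new)/2) by fixed-point iteration.
    y_new = y + h * f(t + h / 2, y + (h / 2) * f(t, y))  # explicit midpoint as initial guess
    for _ in range(max_iter):
        y_next = y + h * f(t + h / 2, 0.5 * (y + y_new))  # one application of the fixed-point map
        if np.linalg.norm(y_next - y_new) < tol:
            return y_next
        y_new = y_next
    return y_new  # last iterate if the tolerance was not reached

# Example: mildly stiff linear problem y' = -2 y, where h*L/2 = 0.1 < 1
f = lambda t, y: -2.0 * y
y1 = implicit_midpoint_step(f, 0.0, np.array([1.0]), 0.1)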
For Hamiltonian systems, the implicit midpoint method is symplectic, preserving the symplectic structure of the flow and exactly conserving quadratic invariants, which yields excellent long-term energy behavior; this makes it particularly suitable for simulations in molecular dynamics where stiffness arises from disparate timescales.
A related implicit variant is the two-stage Gauss-Legendre Runge-Kutta method, with Butcher tableau coefficients A = \begin{bmatrix} 1/4 & 1/4 - \sqrt{3}/6 \\ 1/4 + \sqrt{3}/6 & 1/4 \end{bmatrix}, which achieves order 4 while remaining A-stable and is effective for higher-accuracy integration of stiff Hamiltonian or general systems.
These variants are commonly employed in stiff chemical kinetics models, such as reaction networks with fast and slow species, and in electrical circuit simulations involving rapid transients, where their stability enables efficient computation without excessive damping or oscillations.