
Midpoint method

The midpoint method, also known as the modified or explicit midpoint rule, is a second-order explicit Runge-Kutta numerical technique for approximating solutions to initial value problems of ordinary differential equations (ODEs) of the form \frac{du}{dt} = f(t, u) with u(t_0) = u_0. It improves upon the first-order Euler method by evaluating the derivative at an intermediate "midpoint" after a half-step, then using that slope to advance the full step, yielding a local truncation error of \mathcal{O}(h^3) and a global error of \mathcal{O}(h^2), where h is the step size. The algorithm proceeds iteratively: starting from (t_i, u_i), compute the predictor u_{i+1/2} = u_i + \frac{h}{2} f(t_i, u_i) and t_{i+1/2} = t_i + \frac{h}{2}, then update u_{i+1} = u_i + h f(t_{i+1/2}, u_{i+1/2}) and t_{i+1} = t_i + h. This two-stage process is computationally efficient while providing quadratic convergence, allowing more accurate trajectory tracing than first-order methods like Euler's, especially for nonlinear ODEs over finite intervals. As a member of the Runge-Kutta family, it serves as a foundational explicit integrator in scientific computing, often used in simulations of physical systems where higher-order methods would be overly costly.

Mathematical formulation

Definition

The midpoint method is a specific numerical technique for approximating solutions to initial value problems in ordinary differential equations, serving as an explicit second-order Runge-Kutta method. It improves upon basic approaches by evaluating the derivative at the midpoint of each time step, providing a more accurate estimate of the solution's behavior over that step compared to first-order methods. This method requires two evaluations of the right-hand side function per step, balancing computational efficiency with enhanced precision. The midpoint method addresses initial value problems of the form y' = f(t, y), y(t_0) = y_0, where a fixed step size h is used to generate a sequence of approximations y_n \approx y(t_n) with t_n = t_0 + n h. It incorporates a predictor step to estimate the solution at the interval's midpoint, then uses the slope there to advance the approximation, thereby capturing curvature effects that simpler linear extrapolations miss. Historically, the midpoint method emerged as part of the early development of Runge-Kutta methods in the late 19th and early 20th centuries, initially introduced by Carl Runge in 1895 as an adaptation of midpoint quadrature for differential equations. It was further refined by Karl Heun in 1900 and Wilhelm Kutta in 1901, who integrated midpoint evaluations into higher-order schemes, establishing its foundational role in numerical ODE solvers. This evolution addressed the limitations of the Euler method, a simpler predecessor that relies solely on the initial slope and achieves only first-order accuracy.

General form for ODEs

The midpoint method is applied to initial value problems for ordinary differential equations (ODEs) of the form y' = f(t, y), y(t_0) = y_0, where f is a sufficiently smooth function. For a scalar ODE, the explicit midpoint method advances the numerical solution from (t_n, y_n) to (t_{n+1}, y_{n+1}) using a step size h > 0 via the following iterative formula: k_1 = f(t_n, y_n), \hat{y} = y_n + \frac{h}{2} k_1, k_2 = f\left(t_n + \frac{h}{2}, \hat{y}\right), y_{n+1} = y_n + h k_2, where t_{n+1} = t_n + h and \hat{y} serves as a temporary predictor at the midpoint. This formulation follows a predictor-corrector style, with the first stage predicting the midpoint value explicitly using the slope at the current point, and the second stage correcting the full step using the slope at that predicted midpoint. This explicit midpoint method is distinct from implicit variants, such as the implicit midpoint rule, which solve an implicit equation involving the unknown y_{n+1} and are not addressed here. The method extends naturally to systems of first-order ODEs, where \mathbf{y} \in \mathbb{R}^m is a vector, f: \mathbb{R} \times \mathbb{R}^m \to \mathbb{R}^m, and all operations apply component-wise. The iterative formula becomes \mathbf{k}_1 = f(t_n, \mathbf{y}_n), \hat{\mathbf{y}} = \mathbf{y}_n + \frac{h}{2} \mathbf{k}_1, \mathbf{k}_2 = f\left(t_n + \frac{h}{2}, \hat{\mathbf{y}}\right), \mathbf{y}_{n+1} = \mathbf{y}_n + h \mathbf{k}_2. For well-posedness and uniqueness of solutions, f must satisfy a Lipschitz condition in \mathbf{y} uniformly in t.
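The two-stage update above can be sketched in a few lines of Python; this is an illustrative sketch, and the function name midpoint_step is chosen here rather than taken from any library:

```python
def midpoint_step(f, t, y, h):
    """Advance the explicit midpoint method one step from (t, y)."""
    k1 = f(t, y)                 # slope at the current point
    y_hat = y + 0.5 * h * k1     # predictor at the midpoint
    k2 = f(t + 0.5 * h, y_hat)   # slope at the predicted midpoint
    return y + h * k2            # full step using the midpoint slope

# One step for y' = -y, y(0) = 1, with h = 0.1:
# k1 = -1, y_hat = 0.95, k2 = -0.95, so y1 = 1 - 0.095 = 0.905
y1 = midpoint_step(lambda t, y: -y, 0.0, 1.0, 0.1)
```

The same function works unchanged for any scalar right-hand side f(t, y); only two evaluations of f occur per step, as the text notes.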

Derivation

Taylor series expansion approach

The Taylor series expansion provides a foundational approach to deriving the midpoint method for solving the initial value problem y'(t) = f(t, y(t)), y(t_0) = y_0, by matching the method's update formula to the exact solution's expansion up to second order. Consider the exact solution at t_{n+1} = t_n + h, where h is the step size. Expanding around t_n, the solution satisfies y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{h^2}{2} y''(t_n) + O(h^3). Since y'(t) = f(t, y(t)), the first term is h f(t_n, y(t_n)). To incorporate the second-order term, differentiate y'(t) = f(t, y(t)) using the chain rule, yielding y''(t) = \frac{\partial f}{\partial t}(t, y(t)) + \frac{\partial f}{\partial y}(t, y(t)) \cdot f(t, y(t)). This expression, evaluated at t_n, captures the quadratic contribution to the expansion. The forward Euler method approximates only up to the linear term, achieving first-order accuracy with local truncation error O(h^2). The midpoint method improves accuracy by evaluating the slope at an approximation of the midpoint, which matches the h^2/2 term. Specifically, it uses an intermediate step to estimate the solution at t_n + h/2, given by y(t_n) + (h/2) f(t_n, y(t_n)), and evaluates the right-hand side there: f(t_n + h/2, y(t_n) + (h/2) f(t_n, y(t_n))). The update formula then becomes y_{n+1} = y_n + h f\left(t_n + \frac{h}{2}, y_n + \frac{h}{2} f(t_n, y_n)\right). This substitution ensures the method's expansion aligns with the exact series up to O(h^2). To verify the order, expand the method's output using Taylor series for f around (t_n, y_n). The resulting series matches the exact expansion's constant, linear, and quadratic terms, leaving a local truncation error of O(h^3), which confirms second-order accuracy. This derivation highlights the method's superiority over first-order schemes like Euler by systematically incorporating higher-order corrections via series matching.

Runge-Kutta interpretation

The midpoint method can be viewed as a two-stage explicit Runge-Kutta method of order 2 for solving the initial value problem y' = f(t, y), y(t_0) = y_0. Explicit Runge-Kutta methods approximate the solution at t_{n+1} = t_n + h via y_{n+1} = y_n + h \sum_{i=1}^s b_i k_i, where the intermediate stages are computed as k_i = f\left( t_n + c_i h, y_n + h \sum_{j=1}^{i-1} a_{ij} k_j \right) for i = 1, \dots, s, with the coefficients b_i, c_i, and a_{ij} (for j < i) defining the method. For the midpoint method, s = 2 and the coefficients form the following Butcher tableau: \begin{array}{c|cc} 0 & 0 & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ \hline & 0 & 1 \end{array} Thus, c = \begin{pmatrix} 0 \\ \frac{1}{2} \end{pmatrix}, A = \begin{pmatrix} 0 & 0 \\ \frac{1}{2} & 0 \end{pmatrix}, and b = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. This configuration satisfies the order conditions for a second-order method, namely \sum_{i=1}^s b_i = 1 and \sum_{i=1}^s b_i c_i = \frac{1}{2}. In contrast to Ralston's method, another two-stage second-order Runge-Kutta method that minimizes a bound on the local truncation error by setting b = \begin{pmatrix} \frac{1}{4} \\ \frac{3}{4} \end{pmatrix} and a_{21} = c_2 = \frac{2}{3}, the midpoint method places all weight on the second stage.
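Treating the tableau as data makes this interpretation concrete: a generic explicit Runge-Kutta stepper driven by (A, b, c), instantiated with the midpoint coefficients, reproduces the two-stage update exactly. The sketch below is illustrative, and rk_explicit_step is a name chosen here:

```python
def rk_explicit_step(f, t, y, h, A, b, c):
    """One step of an explicit Runge-Kutta method defined by its
    Butcher tableau (A strictly lower triangular, weights b, nodes c)."""
    s = len(b)
    k = []
    for i in range(s):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(b[i] * k[i] for i in range(s))

# Midpoint tableau: c = (0, 1/2), A = [[0, 0], [1/2, 0]], b = (0, 1)
A = [[0.0, 0.0], [0.5, 0.0]]
b = [0.0, 1.0]
c = [0.0, 0.5]

# One step for y' = -y, y(0) = 1, h = 0.1: identical to the direct update
y1 = rk_explicit_step(lambda t, y: -y, 0.0, 1.0, 0.1, A, b, c)
```

Swapping in Ralston's coefficients (b = (1/4, 3/4), a_{21} = c_2 = 2/3) would yield the other second-order method mentioned above without touching the stepper.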

Error analysis

Local truncation error

The local truncation error for the midpoint method is defined as the difference between the exact solution value at the next time step and the value obtained by applying a single step of the method starting from the exact solution at the current time step, denoted as y(t_n + h) - z(t_n + h), where y(t_n) = y_n is exact and z represents the numerical approximation after one step. Assuming the right-hand side function f(t, y) and its partial derivatives up to third order are continuous, the local truncation error can be derived using Taylor series expansions around t_n. The expansions for y(t_{n+1}) and the intermediate terms in the method cancel the constant, linear, and quadratic terms, leaving a remainder of order h^3 involving third derivatives: the exact Taylor remainder contributes a term of the form \frac{h^3}{6} y'''(\xi) for some \xi \in [t_n, t_n + h], together with same-order contributions from the midpoint-stage approximation. This principal error term of order O(h^3), involving third derivatives of the solution, demonstrates that the midpoint method achieves second-order accuracy.
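The O(h^3) behavior can be checked numerically: for the linear test equation y' = -y, the one-step error should shrink by a factor of about 2^3 = 8 when the step size is halved. A minimal Python check (the helper names are chosen here for illustration):

```python
import math

def midpoint_step(f, t, y, h):
    """One explicit midpoint step from (t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    return y + h * k2

f = lambda t, y: -y   # test equation with exact solution y(t) = e^{-t}

def one_step_error(h):
    """Local error of a single step starting from the exact value y(0) = 1."""
    return abs(math.exp(-h) - midpoint_step(f, 0.0, 1.0, h))

# Local error scales like h^3, so halving h should divide it by ~8
ratio = one_step_error(0.1) / one_step_error(0.05)
```

For this problem the step from y = 1 gives 1 - h + h^2/2, so the one-step error is e^{-h} - (1 - h + h^2/2) \approx -h^3/6, and the computed ratio comes out close to 8.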

Global error and convergence

The global error in the midpoint method for solving the initial value problem y' = f(t, y), y(t_0) = y_0, over a fixed time interval [t_0, T] with N = (T - t_0)/h steps of size h, is defined as e_n = y(t_n) - y_n, where y(t_n) is the exact solution at the n-th time point t_n = t_0 + n h and y_n is the numerical approximation. Under suitable conditions on f, this error satisfies \|e_n\| = O(h^2) as h \to 0 for each fixed n, and uniformly \max_{0 \leq n \leq N} \|e_n\| = O(h^2). To establish this bound, consider the error recursion derived from the method's update and the exact solution's Taylor expansion. Assuming f satisfies a Lipschitz condition in y with constant L > 0, i.e., \|f(t, y_1) - f(t, y_2)\| \leq L \|y_1 - y_2\| for all t \in [t_0, T] and relevant y_1, y_2, the global error satisfies \|e_{n+1}\| \leq \|e_n\| (1 + L h) + C' h^3, where the C' h^3 term arises from the local truncation error of the method. Iterating this inequality from n = 0 (with e_0 = 0) yields the geometric-sum bound \|e_n\| \leq \sum_{k=0}^{n-1} C' h^3 (1 + L h)^{n-1-k} \leq C h^2 e^{L (t_n - t_0)} for some constant C > 0 independent of h, using (1 + L h)^m \leq e^{L m h} and n h \leq t_n - t_0. Thus, the error remains O(h^2) over the fixed interval. The convergence of the midpoint method follows from a general convergence theorem for one-step methods: if the method is consistent with local truncation error O(h^{p+1}) and the problem is well-posed (Lipschitz continuous f), then the global error is O(h^p). For the explicit midpoint method, which is a second-order Runge-Kutta scheme with p = 2 when f is continuously differentiable (C^1) and Lipschitz in y, the theorem implies \max_{0 \leq n \leq N} \|e_n\| \leq C h^2 for some C independent of h > 0 sufficiently small. In practice, this quadratic convergence means that halving the step size h reduces the global error by a factor of approximately 4, enabling efficient accuracy control in simulations by refining the grid.
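This quadratic convergence is easy to observe in practice. The sketch below (assuming the test problem y' = -y on [0, 1]; solve_midpoint is a name chosen here) halves the step size and compares the global errors at t = 1; their ratio should approach 4:

```python
import math

def solve_midpoint(f, t0, y0, h, n_steps):
    """Integrate y' = f(t, y) with the explicit midpoint method."""
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        y += h * k2
        t += h
    return y

f = lambda t, y: -y                 # exact solution: y(t) = e^{-t}
exact = math.exp(-1.0)

e1 = abs(solve_midpoint(f, 0.0, 1.0, 0.1, 10) - exact)   # h = 0.1
e2 = abs(solve_midpoint(f, 0.0, 1.0, 0.05, 20) - exact)  # h = 0.05

ratio = e1 / e2   # approaches 4 as h -> 0 for a second-order method
```

Repeating the halving (h = 0.025, 0.0125, ...) would show the ratio settling ever closer to 4, the empirical signature of p = 2.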

Implementation and examples

Algorithm steps

The midpoint method approximates solutions to the initial value problem y' = f(t, y), y(t_0) = y_0 over the interval [t_0, T] using a fixed step size h. The algorithm initializes the current time t = t_0 and solution value y = y_0, then iteratively advances the solution until t reaches or exceeds T, producing a sequence of points (t_n, y_n). The required inputs are the right-hand side function f(t, y), initial time t_0, initial value y_0, step size h > 0, and end time T > t_0; the output is the discrete solution trajectory \{(t_n, y_n)\}_{n=0}^N where t_N \approx T. The core iteration follows this structure:
initialize t = t_0, y = y_0
while t < T:
    k1 = f(t, y)
    temp = y + (h/2) * k1
    k2 = f(t + h/2, temp)
    y = y + h * k2
    t = t + h
output sequence (t_n, y_n)
Each full step requires two evaluations of f. For systems of ODEs, where y \in \mathbb{R}^d and f: \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}^d, the algorithm extends naturally by treating y, k_1, k_2, and \textit{temp} as vectors, with all operations (addition, scalar multiplication) performed element-wise or via vector arithmetic in the implementation language. The step size h must balance accuracy, which improves with smaller h, against efficiency, as smaller steps increase the total number of function evaluations and computation time. Implementations should also handle potential singularities in f, such as points where the derivative becomes unbounded, by validating inputs or using adaptive safeguards before evaluation.
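The pseudocode translates directly into Python for systems, here using plain lists for vectors (in practice NumPy arrays would be typical; the names are illustrative), applied to the harmonic oscillator y'' = -y rewritten as the first-order system (u, v)' = (v, -u):

```python
import math

def midpoint_solve(f, t0, y0, h, t_end):
    """Explicit midpoint solver for systems; y0 and f(t, y) are lists."""
    t, y = t0, list(y0)
    trajectory = [(t, list(y))]
    while t < t_end - 1e-12:                        # guard against float drift
        k1 = f(t, y)
        y_mid = [yi + 0.5 * h * ki for yi, ki in zip(y, k1)]
        k2 = f(t + 0.5 * h, y_mid)
        y = [yi + h * ki for yi, ki in zip(y, k2)]
        t += h
        trajectory.append((t, list(y)))
    return trajectory

# Harmonic oscillator with u(0) = 1, v(0) = 0, integrated over ~one period
traj = midpoint_solve(lambda t, y: [y[1], -y[0]], 0.0, [1.0, 0.0], 0.01, 2 * math.pi)
u_final = traj[-1][1][0]   # should land close to u = cos(full period) = 1
```

Because the loop condition is t < t_end, the final step may slightly overshoot t_end; production solvers typically shorten the last step to land exactly on T.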

Numerical example

To illustrate the application of the midpoint method, consider the initial value problem y' = -y, y(0) = 1, whose exact solution is y(t) = e^{-t}. This is a standard linear test equation for evaluating numerical solvers. We apply the method with step size h = 0.1 for the first two steps. At t_0 = 0, y_0 = 1, compute k_1 = f(t_0, y_0) = -1. The intermediate point is y^* = y_0 + \frac{h}{2} k_1 = 1 - 0.05 = 0.95, so k_2 = f(t_0 + \frac{h}{2}, y^*) = -0.95. Thus, y_1 = y_0 + h k_2 = 1 - 0.095 = 0.905 at t_1 = 0.1. Next, at t_1 = 0.1, y_1 = 0.905, compute k_1 = -0.905. The intermediate point is y^* = 0.905 + 0.05 \times (-0.905) = 0.85975, so k_2 = -0.85975. Thus, y_2 = 0.905 + 0.1 \times (-0.85975) \approx 0.819 at t_2 = 0.2. The results, including the exact values and absolute errors, are summarized in the following table:

| t_n | y_n (midpoint) | Exact y(t_n) | |y_n - y(t_n)| |
|-----|----------------|--------------|---------------|
| 0.0 | 1.000 | 1.000000 | 0.000000 |
| 0.1 | 0.905 | 0.904837 | 0.000163 |
| 0.2 | 0.819 | 0.818731 | 0.000294 |

The absolute errors are on the order of 10^{-4} after two steps. To demonstrate quadratic convergence, the method is applied with halved step size h = 0.05 (four steps to reach t = 0.2), yielding approximations y(0.1) \approx 0.904877 and y(0.2) \approx 0.818802. The errors reduce by a factor of approximately 4, as shown below, consistent with the second-order accuracy of the method.

| t_n | y_n (midpoint, h=0.05) | Exact y(t_n) | |y_n - y(t_n)| |
|-----|------------------------|--------------|---------------|
| 0.1 | 0.904877 | 0.904837 | 0.000039 |
| 0.2 | 0.818802 | 0.818731 | 0.000071 |
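The tabulated values can be reproduced with a short script; this is a sketch of the same computation, with midpoint_step a name chosen here:

```python
import math

def midpoint_step(f, t, y, h):
    """One explicit midpoint step from (t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    return y + h * k2

f = lambda t, y: -y   # exact solution: y(t) = e^{-t}

# Two steps with h = 0.1 to reach t = 0.2
y = 1.0
for n in range(2):
    y = midpoint_step(f, 0.1 * n, y, 0.1)
y_coarse = y          # 0.819025, error ~ 2.9e-4 against e^{-0.2}

# Four steps with h = 0.05 to reach t = 0.2
y = 1.0
for n in range(4):
    y = midpoint_step(f, 0.05 * n, y, 0.05)
y_fine = y            # 0.818802 (rounded), error ~ 7.1e-5

error_ratio = abs(y_coarse - math.exp(-0.2)) / abs(y_fine - math.exp(-0.2))
```

The computed error_ratio comes out near 4, matching the quadratic convergence shown in the tables.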

Applications and extensions

Use in scientific computing

The midpoint method is widely applied in scientific computing for modeling oscillatory systems, such as the frictionless simple pendulum, where it solves the second-order ODE describing angular motion to simulate periodic oscillations with good accuracy over moderate time scales. In mechanics, it facilitates basic trajectory predictions, for instance, in simulating the paths of projectiles or particles under gravitational forces by integrating the equations of motion step by step. The method's advantages lie in its simplicity and second-order accuracy, which provide a balance between computational efficiency and precision without requiring higher derivatives, making it ideal for initial prototyping in physics and engineering simulations. Its straightforward structure is easily implemented in environments like Python and MATLAB, often via custom functions or alongside library solvers such as those in SciPy, for rapid deployment in educational and research settings. In practice, the explicit midpoint method has limitations, particularly its instability when applied to stiff systems, where rapid changes in solution behavior demand very small step sizes to maintain stability. Consequently, it serves primarily as a foundational component in adaptive solvers, contributing to error estimation and step-size control rather than standalone use in complex, long-duration simulations. Historically, the midpoint method, as an early member of the Runge-Kutta family developed around the turn of the 20th century, played a role in 20th-century numerical computation for solving differential equations in simulations before the dominance of higher-order and implicit methods with advancing computing power. Due to its second-order convergence properties, it offers reliable results for non-stiff oscillatory and mechanical problems when step sizes are appropriately selected.

Variants for stiff equations

The explicit midpoint method, being an explicit Runge-Kutta scheme, suffers from severe limitations when applied to stiff ordinary differential equations (ODEs), where the Lipschitz constant L of the right-hand side function is large; specifically, the step size h must satisfy roughly h < 2/L to ensure stability, often leading to prohibitively small time steps for practical computations. To address this, the implicit midpoint rule serves as a key variant, defined by the update \mathbf{y}_{n+1} = \mathbf{y}_n + h \mathbf{f}\left( t_n + \frac{h}{2}, \frac{\mathbf{y}_n + \mathbf{y}_{n+1}}{2} \right), which is a one-stage Gauss-Legendre Runge-Kutta method of order 2 and is A-stable, allowing larger step sizes for stiff problems without sacrificing qualitative behavior. This nonlinear equation for \mathbf{y}_{n+1} is typically solved using fixed-point iteration for mildly nonlinear cases or Newton's method for stronger nonlinearity, with the explicit midpoint solution providing a good initial guess to accelerate convergence. For Hamiltonian systems, the implicit midpoint method is symplectic, preserving the symplectic structure and thus long-term energy behavior in separable cases, making it particularly suitable for simulations where stiffness arises from disparate timescales. A related implicit variant is the two-stage Gauss-Legendre Runge-Kutta method, with Butcher tableau coefficients A = \begin{bmatrix} 1/4 & 1/4 - \sqrt{3}/6 \\ 1/4 + \sqrt{3}/6 & 1/4 \end{bmatrix}, which achieves order 4 while remaining A-stable and is effective for higher-accuracy integration of stiff or general systems. These variants are commonly employed in stiff chemical kinetics models, such as reaction networks with fast and slow reactions, and in electrical circuit simulations involving rapid transients, where their stability enables efficient computation without excessive damping or oscillations.
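As a sketch of the implicit variant for a scalar problem (the helper implicit_midpoint_step and the finite-difference Newton iteration are choices made here for illustration), the implicit midpoint rule stays stable on the stiff test equation y' = -50y at a step size h = 0.1, where the explicit method would diverge since h > 2/50:

```python
def implicit_midpoint_step(f, t, y, h, tol=1e-12, max_iter=50):
    """One step of the implicit midpoint rule for a scalar ODE, solving
    g(z) = z - y - h*f(t + h/2, (y + z)/2) = 0 for z = y_{n+1} by
    Newton's method with a finite-difference slope."""
    # explicit midpoint result as the initial guess
    z = y + h * f(t + h / 2, y + h / 2 * f(t, y))
    eps = 1e-7
    for _ in range(max_iter):
        g = z - y - h * f(t + h / 2, (y + z) / 2)
        dg = 1 - h * (f(t + h / 2, (y + z + eps) / 2)
                      - f(t + h / 2, (y + z) / 2)) / eps
        z_new = z - g / dg
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Stiff test problem y' = -50 y with h = 0.1
f = lambda t, y: -50.0 * y
y = 1.0
for n in range(10):
    y = implicit_midpoint_step(f, 0.1 * n, y, 0.1)
# For this linear problem each implicit step multiplies y by
# (1 - 25h)/(1 + 25h) = -3/7, so |y| decays instead of blowing up
```

By contrast, the explicit midpoint factor here is 1 - 50h + (50h)^2/2 = 8.5 per step, which grows without bound; a fixed-point iteration would also fail at this step size (contraction factor hL/2 = 2.5 > 1), which is why the sketch uses Newton's method.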
