Heun's method
Heun's method is an explicit, second-order numerical integration technique for solving initial value problems of ordinary differential equations (ODEs) of the form y' = f(t, y), y(t_0) = y_0, named after the German mathematician Karl Heun, who introduced it in 1900 as part of early developments in Runge–Kutta methods.[1][2][3] It improves upon the first-order Euler method by employing a predictor-corrector approach: an initial prediction is made using an Euler step, followed by a correction that averages the function evaluations (slopes) at the current point and the predicted endpoint, approximating the integral of f over the step via the trapezoidal rule.[4][5]

The algorithm proceeds in two stages per step of size h: first compute the predictor \tilde{y}_{n+1} = y_n + h f(t_n, y_n), then evaluate the corrector slope f(t_{n+1}, \tilde{y}_{n+1}), and update the solution as y_{n+1} = y_n + \frac{h}{2} \left[ f(t_n, y_n) + f(t_{n+1}, \tilde{y}_{n+1}) \right].[4][3] This formulation yields a local truncation error of O(h^3) and a global error of O(h^2), making it suitable for moderately accurate approximations of non-stiff ODEs, though it requires two function evaluations per step.[5][2]

As a foundational member of the Runge–Kutta family, Heun's method bridges simple one-stage methods like Euler's and higher-order variants such as the classical fourth-order Runge–Kutta method, and it can be extended to systems of ODEs or stochastic differential equations with appropriate modifications.[2][3] While effective for educational purposes and basic simulations, its explicit formulation and conditional stability limit its efficiency for stiff problems, where implicit methods are preferred; adaptive step-size implementations, however, enhance its practical utility in computational software.[5]
Background
Numerical solution of ODEs
The initial value problem (IVP) for an ordinary differential equation (ODE) is typically formulated as y'(t) = f(t, y(t)), subject to the initial condition y(t_0) = y_0, where f is a given function and the goal is to determine the solution y(t) over some interval containing t_0.[6] This setup ensures the existence and uniqueness of a solution under suitable conditions, such as continuity and Lipschitz continuity of f with respect to y.[7]

Exact analytical solutions to IVPs are available only for a limited class of ODEs, particularly linear ones; for nonlinear ODEs, closed-form solutions are rare, often requiring special functions or approximations even when the equation is solvable.[6] Nonlinearities can also lead to phenomena like finite-time blowup or non-uniqueness, further complicating analytical resolution.[7] Consequently, numerical methods become essential for approximating solutions in practical applications, such as modeling physical systems in engineering and the sciences.

Numerical approaches to solving IVPs discretize the time domain into a grid of points t_n = t_0 + n h, where h > 0 is the step size, and approximate the solution values y(t_n) \approx y_n at these discrete nodes.[6] These methods advance the approximation step by step, using finite difference schemes to estimate the derivative y'(t_n) and effectively replacing the continuous integral with a summation over the grid.[7]

A key concept in these methods is the local truncation error, which measures the discrepancy between the exact solution and the numerical approximation over a single step, assuming exact input data from the previous step; this error arises from approximating the smooth solution curve with a linear or higher-order finite difference.[6] For instance, in simple schemes like Euler's method, the local truncation error is on the order of O(h^2). Over multiple steps, these local errors accumulate to form the global error, the total deviation of the numerical solution from the true solution at t_n; stability of the method ensures this accumulation remains bounded, typically resulting in a global error of order one less than the local truncation error for consistent schemes.[7]
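This order reduction follows from a simple accumulation argument (a heuristic, assuming the method propagates errors stably rather than amplifying them): reaching a fixed time T requires N = (T - t_0)/h steps, so per-step errors of size O(h^{p+1}) sum to

N \cdot O(h^{p+1}) = \frac{T - t_0}{h} \, O(h^{p+1}) = O(h^p).

For Euler's method, with local truncation error O(h^2), this gives a global error of O(h).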
Euler's method
Euler's method, also known as the forward Euler method, is a first-order explicit numerical technique for approximating solutions to ordinary differential equations (ODEs) of the form y' = f(t, y) with initial condition y(t_0) = y_0. It is the simplest one-step method in numerical analysis, relying solely on the derivative at the current point to advance the solution by a fixed step size h. Developed in the 18th century and formalized in modern numerical contexts, it provides a foundational approach but exhibits limitations that have spurred more accurate variants.[8] The algorithm proceeds iteratively as follows (a code sketch follows the list):
- Initialize t_0 = t_{\text{start}}, y_0 = y(t_0), and choose step size h > 0.
- For each step n = 0, 1, 2, \dots, until t_n \geq t_{\text{end}}:
- Compute y_{n+1} = y_n + h f(t_n, y_n).
- Update t_{n+1} = t_n + h.
- The approximate solution is the sequence \{y_n\} at points \{t_n\}.
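The listed steps translate directly into a few lines of code. The following is a minimal Python sketch under the stated assumptions (scalar y, fixed step size; the function name euler is illustrative):

```python
def euler(f, t0, y0, h, t_end):
    """Forward Euler: advance y' = f(t, y) from t0 to t_end with fixed step h."""
    t, y = t0, y0
    ys = [y0]                    # sequence of approximations {y_n}
    while t < t_end:
        y = y + h * f(t, y)      # advance along the tangent at (t_n, y_n)
        t = t + h
        ys.append(y)
    return ys
```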
Formulation
Algorithm steps
Heun's method is an explicit two-stage numerical integration technique for approximating solutions to initial value problems of the form y' = f(t, y), y(t_0) = y_0, where y is the solution function and f is a given Lipschitz continuous function.[4] The method proceeds iteratively over discrete time steps, producing a sequence of approximations \{ y_n \} at grid points t_n = t_0 + n h, where h > 0 is the fixed step size and n = 0, 1, \dots, N with t_N approximating the desired end time.[11]

To implement Heun's method, the required inputs are the step size h > 0, the initial condition y_0, the initial time t_0, the end time T, and the function f(t, y).[3] The algorithm begins at n = 0 with y_0 and advances step by step until t_n \geq T, outputting the sequence \{ y_n \}. The core update at each iteration n consists of a predictor step to estimate an intermediate value \hat{y}_{n+1} = y_n + h f(t_n, y_n), followed by a corrector step that computes the final approximation as y_{n+1} = y_n + \frac{h}{2} \left[ f(t_n, y_n) + f(t_{n+1}, \hat{y}_{n+1}) \right], where t_{n+1} = t_n + h.[4] The predictor step coincides with the forward Euler update, providing a provisional estimate before refinement.[11]

The following pseudocode outlines the iterative procedure for a scalar ODE:

```
function HeunMethod(f, t0, y0, h, T)
    n = 0
    t = t0
    y = y0
    while t < T do
        k1 = f(t, y)
        y_hat = y + h * k1
        k2 = f(t + h, y_hat)
        y = y + (h / 2) * (k1 + k2)
        t = t + h
        n = n + 1
        store y as y_n    // optional, for output sequence
    end while
    return sequence {y_n for n = 0 to N}
end function
```

This implementation assumes uniform steps and stores approximations for later use if needed.[3] For systems of ODEs, where y \in \mathbb{R}^m and f: \mathbb{R} \times \mathbb{R}^m \to \mathbb{R}^m, the method extends component-wise by treating vectors throughout: the predictor becomes \hat{y}_{n+1} = y_n + h f(t_n, y_n) with vector addition and scalar multiplication, and the corrector averages the vector-valued slopes similarly.[3] The pseudocode adapts directly by replacing scalar operations with vector equivalents, enabling the solution of coupled systems such as predator-prey models.[3]
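As a concrete illustration of the vector-valued case, the following is a minimal Python/NumPy sketch (the names heun_system and lotka_volterra are illustrative, and the predator-prey parameters are arbitrary assumptions):

```python
import numpy as np

def heun_system(f, t0, y0, h, T):
    """Heun's method for a system y' = f(t, y), with y a NumPy vector."""
    t, y = t0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y.copy()]
    while t < T - 1e-12:                  # tolerance guards against FP drift
        k1 = f(t, y)                      # slope at the current point
        y_hat = y + h * k1                # vector-valued Euler predictor
        k2 = f(t + h, y_hat)              # slope at the predicted endpoint
        y = y + (h / 2) * (k1 + k2)       # trapezoidal corrector
        t += h
        ts.append(t)
        ys.append(y.copy())
    return np.array(ts), np.array(ys)

# Example: a Lotka-Volterra predator-prey system (illustrative parameters).
def lotka_volterra(t, y):
    prey, pred = y
    return np.array([1.0 * prey - 0.5 * prey * pred,
                     0.2 * prey * pred - 0.6 * pred])

ts, ys = heun_system(lotka_volterra, 0.0, [2.0, 1.0], 0.01, 20.0)
```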
Predictor-corrector approach
Heun's method functions as a predictor-corrector scheme, designed to enhance the accuracy of numerical solutions to initial value problems for ordinary differential equations of the form y' = f(t, y), y(t_0) = y_0. Introduced by Karl Heun in 1900, this approach refines the first-order Euler method by incorporating an explicit prediction followed by a correction step. The predictor phase employs the Euler step to generate an initial approximation \hat{y}_{n+1} at the end of the interval [t_n, t_n + h]:

\hat{y}_{n+1} = y_n + h f(t_n, y_n).
This estimate assumes a constant slope equal to the derivative at the starting point, providing a linear extrapolation along the tangent.[1][12] In the corrector phase, the prediction is adjusted using the trapezoidal rule, which averages the slopes at the current and predicted points to better approximate the integral of f over the step:
y_{n+1} = y_n + \frac{h}{2} \left[ f(t_n, y_n) + f(t_n + h, \hat{y}_{n+1}) \right].
Geometrically, the predictor traces the tangent from (t_n, y_n), while the corrector constructs a secant line averaging the tangents at the start and the predicted endpoint, yielding a path that more closely follows the solution curve's average direction. This averaging captures curvature effects neglected by the Euler method's single tangent, resulting in second-order local accuracy by mimicking the trapezoidal integration's quadratic error term.[13][12][14] Although the standard formulation of Heun's method applies the corrector once for computational efficiency, iterative variants repeat the corrector step, re-evaluating the slope with the updated estimate, until the iterates converge; this can reduce the truncation error further at the cost of additional function evaluations. Iterating to convergence makes the step a fixed point of the implicit trapezoidal rule, so these variants align the method more closely with implicit schemes, but they are not part of the classical explicit version.
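The iterated variant can be sketched in a few lines of Python (a minimal illustration, assuming scalar y; the name heun_iterated, the tolerance, and the iteration cap are illustrative choices, not part of the classical method):

```python
def heun_iterated(f, t, y, h, tol=1e-10, max_iter=10):
    """One Heun step with the corrector applied repeatedly until it settles.

    The classical explicit method performs exactly one corrector pass;
    iterating to convergence approaches the implicit trapezoidal rule.
    """
    k1 = f(t, y)
    y_next = y + h * k1                                # Euler predictor
    for _ in range(max_iter):
        y_new = y + (h / 2) * (k1 + f(t + h, y_next))  # corrector pass
        if abs(y_new - y_next) < tol:                  # stop when stabilized
            return y_new
        y_next = y_new
    return y_next
```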
Derivation
Taylor series expansion
The Taylor series expansion of the exact solution to the initial value problem y' = f(t, y), y(t_n) = y_n, around t_n is given by

y(t_n + h) = y_n + h y'(t_n) + \frac{h^2}{2} y''(t_n) + O(h^3),

where higher-order terms are neglected for the local analysis up to second order.[15][16][17] Since y'(t_n) = f(t_n, y_n), the second derivative y''(t_n) is obtained via the chain rule as

y''(t_n) = \frac{\partial f}{\partial t}(t_n, y_n) + \frac{\partial f}{\partial y}(t_n, y_n) f(t_n, y_n),

where the partial derivatives f_t and f_y account for the dependence of f on both t and y.[15][16][17]

Heun's method approximates the solution at the next step as

y_{n+1} = y_n + \frac{h}{2} \left[ f(t_n, y_n) + f(t_n + h, y_n + h f(t_n, y_n)) \right],

which incorporates an average of the function value at the current point and at a predicted point one step ahead.[15][16][17] To verify the order of accuracy, the predictor term f(t_n + h, \hat{y}_{n+1}), where \hat{y}_{n+1} = y_n + h f(t_n, y_n), is expanded using the multivariate Taylor series:

f(t_n + h, \hat{y}_{n+1}) = f(t_n, y_n) + h \frac{\partial f}{\partial t}(t_n, y_n) + h f(t_n, y_n) \frac{\partial f}{\partial y}(t_n, y_n) + O(h^2).

This expansion captures the first-order changes in both arguments of f.[15][16][17] Substituting this expansion into the Heun approximation yields

y_{n+1} = y_n + h f(t_n, y_n) + \frac{h^2}{2} \left[ \frac{\partial f}{\partial t}(t_n, y_n) + \frac{\partial f}{\partial y}(t_n, y_n) f(t_n, y_n) \right] + O(h^3),

which matches the exact Taylor series up to and including the second-order term; thus, the local truncation error is O(h^3).[15][16][17]
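This conclusion can be spot-checked symbolically. The following sympy sketch verifies the O(h^3) local error for the concrete test equation y' = y, whose exact one-step advance is y_n e^h (an illustrative check for one equation, not a general proof):

```python
import sympy as sp

h, y = sp.symbols('h y')

# Heun update for f(t, y) = y: k1 = y, k2 = y + h*y
heun = y + (h / 2) * (y + (y + h * y))   # simplifies to y * (1 + h + h**2/2)
exact = y * sp.exp(h)                    # exact solution advance over one step

# Leading term of the local error: y*h**3/6 + O(h**4)
print(sp.series(exact - heun, h, 0, 4))
```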
Butcher tableau representation
Heun's method belongs to the family of explicit Runge-Kutta methods and is compactly represented using the Butcher tableau, which organizes the coefficients \mathbf{c}, A, and \mathbf{b} in a standardized format.[18] In general, an s-stage explicit Runge-Kutta method computes intermediate stages as

k_i = f\left(t_n + c_i h,\; y_n + h \sum_{j=1}^{i-1} a_{ij} k_j\right), \quad i = 1, \dots, s,

followed by the solution update

y_{n+1} = y_n + h \sum_{i=1}^s b_i k_i.

Heun's method corresponds to s = 2, with the Butcher tableau

\begin{array}{c|cc} 0 & 0 & \\ 1 & 1 & 0 \\ \hline & \frac{1}{2} & \frac{1}{2} \end{array}

where \mathbf{c} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, the matrix A = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, and \mathbf{b}^\top = \left( \frac{1}{2}, \frac{1}{2} \right).[18] This tableau yields the stage values k_1 = f(t_n, y_n) and k_2 = f(t_n + h, y_n + h k_1), with the update

y_{n+1} = y_n + h \left( \frac{1}{2} k_1 + \frac{1}{2} k_2 \right).[18]

These steps precisely reproduce the explicit predictor-corrector formulation of Heun's method, where the predictor uses the Euler step and the corrector averages the slopes.[18] The coefficients in the tableau satisfy the simplifying assumptions and order conditions for a second-order method, specifically \sum_{i=1}^2 b_i = 1 and \sum_{i=1}^2 b_i c_i = \frac{1}{2}.[18]
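The tableau view suggests a direct generic implementation. Below is a minimal Python/NumPy sketch of an explicit Runge-Kutta step driven by arbitrary tableau coefficients, instantiated with Heun's values (the name explicit_rk_step is illustrative, and the code assumes a scalar ODE):

```python
import numpy as np

def explicit_rk_step(f, t, y, h, A, b, c):
    """One step of an explicit Runge-Kutta method given by a Butcher tableau (A, b, c)."""
    s = len(b)
    k = np.zeros(s)
    for i in range(s):
        # Stage i uses only previously computed stages (A is strictly lower-triangular).
        k[i] = f(t + c[i] * h, y + h * np.dot(A[i, :i], k[:i]))
    return y + h * np.dot(b, k)

# Heun's tableau: c = (0, 1), A = [[0, 0], [1, 0]], b = (1/2, 1/2).
A = np.array([[0.0, 0.0],
              [1.0, 0.0]])
b = np.array([0.5, 0.5])
c = np.array([0.0, 1.0])

# One Heun step for y' = -y from y(0) = 1 with h = 0.1.
print(explicit_rk_step(lambda t, y: -y, 0.0, 1.0, 0.1, A, b, c))  # ~0.905
```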
Properties
Order of accuracy
Heun's method is a second-order explicit Runge-Kutta method, meaning it achieves a global error of order O(h^2) for step size h. This order arises from its design, which matches the first two terms of the Taylor series expansion of the exact solution, leading to a local truncation error of O(h^3). The local truncation error is derived by expanding the exact solution and the numerical approximation around a grid point, revealing that the principal error term involves third derivatives of the solution or of f, assuming sufficient smoothness.[19][20]

Under standard assumptions for initial value problems (IVPs) of the form y' = f(t,y) with y(t_0) = y_0, where f satisfies a Lipschitz condition in y (ensuring uniqueness and bounded growth) and is sufficiently differentiable, the global error over a fixed interval [t_0, T] is bounded by C h^2 for some constant C > 0 independent of h. This follows from the convergence theorem for one-step methods, which links the global error order to the local truncation error order minus one, provided the method is consistent and zero-stable. The method is consistent because the local truncation error approaches zero as h \to 0, satisfying the necessary condition for convergence.[19][20]

In practical implementations, the choice of step size h involves a trade-off between truncation error (which decreases with smaller h) and round-off error (which accumulates and dominates for excessively small h due to floating-point arithmetic limitations in evaluating f). For Heun's method, as an explicit two-stage process, round-off propagation remains moderate, but optimal accuracy requires balancing these errors to minimize the total computed error.[21]
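The second-order convergence is easy to observe numerically. The following Python sketch solves the test problem y' = -y, y(0) = 1 on [0, 1] (an illustrative choice) and prints the error at t = 1 for successively halved step sizes; each halving should cut the error by roughly a factor of four:

```python
import math

def heun_solve(f, t0, y0, T, n_steps):
    """Integrate y' = f(t, y) from t0 to T with n_steps uniform Heun steps."""
    h = (T - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y += (h / 2) * (k1 + k2)
        t += h
    return y

rhs = lambda t, y: -y
exact = math.exp(-1.0)                 # exact solution value y(1) = e^{-1}
prev = None
for n in (10, 20, 40, 80):
    err = abs(heun_solve(rhs, 0.0, 1.0, 1.0, n) - exact)
    ratio = f"   ratio = {prev / err:.2f}" if prev else ""
    print(f"n = {n:3d}   error = {err:.3e}{ratio}")   # ratios approach 4
    prev = err
```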
Stability analysis
The stability of Heun's method is analyzed using the linear test equation y' = \lambda y, where \lambda \in \mathbb{C} with \operatorname{Re}(\lambda) < 0, which models the behavior of dissipative systems. Applying the method to this equation yields the recurrence y_{n+1} = R(z) y_n, where z = h \lambda and the stability function is the quadratic polynomial

R(z) = 1 + z + \frac{1}{2} z^2.[22][23]

The region of absolute stability is the set of z \in \mathbb{C} such that |R(z)| \leq 1, ensuring that the numerical solution remains bounded as n \to \infty for fixed h > 0. This region lies in the left half of the complex plane, symmetric about the real axis, and forms a bounded area that includes the segment of the negative real axis extending to z = -2. Compared to the forward Euler method's stability region, a disk of radius 1 centered at z = -1, Heun's region is larger overall, particularly in directions away from the real axis, but it remains finite and does not encompass the entire left half-plane.[22][23]

As an explicit two-stage Runge-Kutta method, Heun's method is not A-stable: its stability region does not contain the entire left half-plane, and in particular it excludes the negative real axis beyond z = -2, leading to instability when the step size h is too large relative to 1/|\lambda|. In the context of the Dahlquist test equation, the method achieves second-order accuracy but is not L-stable, as |R(z)| grows unbounded for large |z| due to the polynomial nature of the stability function, unlike implicit L-stable methods where R(z) \to 0 as |z| \to \infty.[22][23]

For practical applications in dissipative systems with real negative eigenvalues, stability requires h < 2 / |\lambda|, the same restriction as forward Euler, which limits the method's utility for stiff problems, where large |\lambda| demands very small steps to avoid oscillations or divergence. This conditional stability underscores the need for adaptive step sizing or for switching to implicit methods in such scenarios.[22][23]
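A quick numerical check of the real-axis stability interval, using the stability function above (a minimal sketch; the sample points are arbitrary):

```python
# Stability function of Heun's method applied to y' = lambda*y, with z = h*lambda.
def R(z):
    return 1 + z + z**2 / 2

for z in (-0.5, -1.0, -2.0, -2.5):
    status = "stable" if abs(R(z)) <= 1 else "unstable"
    print(f"z = {z:5.2f}   |R(z)| = {abs(R(z)):.4f}   {status}")
# |R(z)| <= 1 holds on the real axis exactly for -2 <= z <= 0; at z = -2.5
# the recurrence y_{n+1} = R(z) y_n grows by a factor of 1.625 per step.
```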
Applications
Example computation
To illustrate the application of Heun's method, consider the initial value problem y' = -y + 1 - t, with initial condition y(0) = 3, over the interval [0, 1]. The exact solution is y(t) = 2 - t + e^{-t}.[24] This test problem is linear and solvable analytically, making it suitable for verifying numerical approximations. We apply Heun's method with step size h = 0.1, computing the first two steps explicitly to demonstrate the predictor-corrector process.

For the first step, at t_0 = 0, y_0 = 3:
The predictor is \tilde{y}_1 = y_0 + h f(t_0, y_0), where f(t, y) = -y + 1 - t.
Thus, f(t_0, y_0) = -3 + 1 - 0 = -2, so \tilde{y}_1 = 3 + 0.1 \cdot (-2) = 2.8.
The corrector is y_1 = y_0 + \frac{h}{2} \left[ f(t_0, y_0) + f(t_0 + h, \tilde{y}_1) \right].
Here, f(t_0 + h, \tilde{y}_1) = f(0.1, 2.8) = -2.8 + 1 - 0.1 = -1.9,
so y_1 = 3 + 0.05 \cdot (-2 + (-1.9)) = 3 + 0.05 \cdot (-3.9) = 3 - 0.195 = 2.805.[24]

For the second step, at t_1 = 0.1, y_1 = 2.805:
The predictor is \tilde{y}_2 = 2.805 + 0.1 \cdot f(0.1, 2.805).
f(0.1, 2.805) = -2.805 + 1 - 0.1 = -1.905, so \tilde{y}_2 = 2.805 + 0.1 \cdot (-1.905) = 2.6145.
The corrector is y_2 = 2.805 + 0.05 \cdot [f(0.1, 2.805) + f(0.2, 2.6145)].
f(0.2, 2.6145) = -2.6145 + 1 - 0.2 = -1.8145,
so y_2 = 2.805 + 0.05 \cdot (-1.905 - 1.8145) = 2.805 + 0.05 \cdot (-3.7195) = 2.805 - 0.185975 = 2.61903.[24]

Continuing this process for additional steps yields the approximations shown in the table below, including the exact values and absolute errors |y_n - y(t_n)|. The computations were performed to five decimal places for precision. (A short script reproducing the table follows it.)
| t_n | Heun approximation y_n | Exact y(t_n) | Absolute error |
|---|---|---|---|
| 0.0 | 3.00000 | 3.00000 | 0.00000 |
| 0.1 | 2.80500 | 2.80484 | 0.00016 |
| 0.2 | 2.61903 | 2.61873 | 0.00030 |
| 0.3 | 2.44122 | 2.44082 | 0.00040 |
| 0.4 | 2.27080 | 2.27032 | 0.00048 |
| 0.5 | 2.10708 | 2.10653 | 0.00055 |
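The table values can be reproduced with a short Python script (a minimal sketch; the function and variable names are illustrative):

```python
import math

def heun(f, t0, y0, h, n_steps):
    """Heun's method with fixed step size; returns the list of (t_n, y_n)."""
    t, y = t0, y0
    out = [(t, y)]
    for _ in range(n_steps):
        k1 = f(t, y)                    # slope at (t_n, y_n)
        k2 = f(t + h, y + h * k1)       # slope at the Euler-predicted point
        y += (h / 2) * (k1 + k2)        # trapezoidal corrector
        t += h
        out.append((t, y))
    return out

f = lambda t, y: -y + 1 - t
exact = lambda t: 2 - t + math.exp(-t)

print(" t_n    y_n      exact    error")
for t, y in heun(f, 0.0, 3.0, 0.1, 5):
    print(f"{t:4.1f}  {y:.5f}  {exact(t):.5f}  {abs(y - exact(t)):.5f}")
```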