Linear multistep method
A linear multistep method (LMM) is a class of numerical integration techniques for solving initial value problems of ordinary differential equations (ODEs), in which the approximation of the solution at a future time step is obtained as a linear combination of previous solution values and their corresponding derivatives, weighted by the step size.[1] These methods, written in general form as \sum_{j=0}^{k} \alpha_j y_{n+j} = h \sum_{j=0}^{k} \beta_j f(t_{n+j}, y_{n+j}) for a k-step method, contrast with single-step methods like Runge-Kutta by reusing past computed values to improve efficiency, particularly for non-stiff problems.[1] The origins of LMMs trace back to the explicit Adams methods, first developed by John Couch Adams in 1855 and published in 1883 in collaboration with Francis Bashforth for applications in capillary action theory, where polynomial interpolation of the integrand enables higher-order approximations without resolving the full ODE at each step.[2] Subsequent advances by Germund Dahlquist in the 1950s established the foundational convergence and stability theory, proving that consistent and zero-stable LMMs converge to the true solution, while introducing barriers on attainable order: although a k-step method can formally reach order 2k, a zero-stable k-step method cannot exceed order k+1 for odd k or k+2 for even k, and no explicit linear multistep method can be A-stable.[3] Key examples include the explicit Adams-Bashforth methods, which predict the next solution using past derivatives (e.g., the second-order form y_{n+1} = y_n + \frac{h}{2}(3f_n - f_{n-1})), and the implicit Adams-Moulton methods, which incorporate the future derivative for greater accuracy (e.g., third-order: y_{n+1} = y_n + \frac{h}{12}(5f_{n+1} + 8f_n - f_{n-1})).[1] For stiff ODEs, where explicit methods suffer from severe stability restrictions, backward differentiation formulas (BDFs), initially proposed by Charles F. Curtiss and Joseph O. Hirschfelder in 1952, offer implicit alternatives that are A-stable at orders 1 and 2 and A(\alpha)-stable up to order 6, such as the second-order BDF y_{n+1} - \frac{4}{3}y_n + \frac{1}{3}y_{n-1} = \frac{2}{3} h f_{n+1}.[4] These methods' order and stability properties, analyzed via characteristic polynomials, remain central to their design and application in scientific computing.[3]

Basic Concepts
Definition and Formulation
Linear multistep methods are a class of numerical integrators designed to approximate the solution of initial value problems for ordinary differential equations (ODEs) of the form y' = f(t, y), where y(t_0) = y_0 and the integration is performed over the interval [t_0, T]. These methods advance the solution by using a linear combination of previous solution values and their corresponding right-hand side evaluations, enabling efficient computation by reusing information from several time steps. Unlike single-step methods, they incorporate data from multiple prior points to achieve higher accuracy while keeping the number of new function evaluations per step small.

The general formulation of a linear multistep method is \sum_{j=0}^k \alpha_j y_{n+j} = h \sum_{j=0}^k \beta_j f(t_{n+j}, y_{n+j}), where h > 0 is the uniform step size, t_n = t_0 + n h, and the coefficients \{\alpha_j\}_{j=0}^k and \{\beta_j\}_{j=0}^k are real numbers satisfying the normalization condition \alpha_k = 1. Here k denotes the number of steps involved in the linear combination, and the coefficients determine the method's order of accuracy p. The left-hand side advances the solution values, while the right-hand side approximates the integral contribution of the ODE's right-hand side. To initiate the method beyond the initial condition y_0, the additional starting values y_1, \dots, y_{k-1} are typically obtained with single-step methods such as Runge-Kutta schemes.

Linear multistep methods are classified as explicit if \beta_k = 0, in which case y_{n+k} can be computed directly without iteration, or implicit if \beta_k \neq 0, in which case a generally nonlinear equation must be solved at each step, often via fixed-point iteration or Newton's method. The characteristic polynomials associated with the method are the first characteristic polynomial \rho(\zeta) = \sum_{j=0}^k \alpha_j \zeta^j, built from the solution coefficients, and the second characteristic polynomial \sigma(\zeta) = \sum_{j=0}^k \beta_j \zeta^j, built from the right-hand side coefficients; these polynomials play a central role in analyzing the method's consistency and stability.
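To make the general formulation concrete, the following sketch (in Python with NumPy; the function name explicit_lmm_step and its argument layout are illustrative choices, not a standard library interface) solves the general relation for y_{n+k} in the explicit case \beta_k = 0.

```python
import numpy as np

def explicit_lmm_step(alpha, beta, ys, fs, h):
    """Advance one step of an explicit k-step linear multistep method.

    alpha, beta : coefficient arrays of length k+1 with alpha[k] = 1 and beta[k] = 0
    ys          : the k most recent solution values [y_n, ..., y_{n+k-1}]
    fs          : the matching derivative values [f_n, ..., f_{n+k-1}]
    h           : step size
    Returns the new value y_{n+k}.
    """
    k = len(alpha) - 1
    # Rearranging sum_j alpha_j y_{n+j} = h sum_j beta_j f_{n+j} for y_{n+k},
    # using alpha_k = 1 and beta_k = 0 (explicit method):
    return h * np.dot(beta[:k], fs) - np.dot(alpha[:k], ys)
```

For example, taking alpha = [0, -1, 1] and beta = [-0.5, 1.5, 0] reproduces the two-step Adams-Bashforth update discussed later in this article.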
Historical Background

The origins of linear multistep methods can be traced to the 19th century. Leonhard Euler's forward Euler method of 1768 served as an early single-step precursor for the numerical integration of ordinary differential equations (ODEs), but the first explicit multistep approaches were developed by the British astronomer John Couch Adams around 1855 and published in 1883 in collaboration with Francis Bashforth, who used them to solve the ODEs arising in his theory of capillary action; the work was motivated by astronomical and physical computations demanding greater efficiency than single-step methods. These early methods laid the groundwork for using multiple previous points to improve accuracy for non-stiff problems, long before electronic computing.[5]

The formalization of linear multistep methods as a general class occurred in the mid-20th century through the pioneering work of the Swedish mathematician Germund Dahlquist. In his 1956 paper, Dahlquist established the foundational theory of convergence and stability for these methods, analyzing the zero-stability and consistency conditions essential for reliable numerical solutions of ODEs.[6] This was expanded in his 1958 doctoral dissertation at Stockholm University, titled "Stability and error bounds in the numerical integration of ordinary differential equations," which developed the general linear multistep framework and emphasized stability concepts for practical implementation.[7] Dahlquist's contributions were driven by the need to complement single-step methods like Runge-Kutta with more efficient integrators for scientific computing, particularly as electronic computers emerged.

During the 1950s and 1960s, work built on Adams' explicit methods by incorporating predictor-corrector pairs, in which an explicit predictor (like Adams-Bashforth) estimates the next value and an implicit corrector (like Adams-Moulton) refines it, enhancing accuracy and stability for non-stiff ODEs in engineering applications. Concurrently, backward differentiation formulas (BDFs), first proposed by Charles F. Curtiss and Joseph O. Hirschfelder in 1952, were further developed and popularized in the 1960s by C. William Gear to address stiff ODEs, where traditional explicit methods fail due to severe stability restrictions; Gear's 1966 publications and 1971 monograph popularized variable-order BDF implementations for simulations in chemical kinetics and circuit analysis.[4][8] These developments were motivated by the computational demands of post-World War II engineering, including stiff systems arising in reactor modeling and control theory.

Key milestones also include Dahlquist's 1963 paper, which analyzed stability for stiff problems and introduced A-stability together with the second Dahlquist barrier (no A-stable linear multistep method exceeds order two), complementing the first barrier on the attainable order of zero-stable methods established in his earlier work.[9] Practical testing of these methods gained traction in the ENIAC era of the late 1940s and 1950s, when early computers enabled large-scale ODE solutions for ballistics and weather prediction, underscoring the shift from hand calculation to automated numerical analysis.[10]

Introductory Examples
Forward Euler Method
The forward Euler method represents the simplest linear multistep method for solving initial value problems of the form y' = f(t, y), y(t_0) = y_0, functioning as a one-step explicit scheme with k = 1. In the general linear multistep framework it takes the form y_{n+1} - y_n = h f(t_n, y_n), corresponding to coefficients \alpha_0 = -1, \alpha_1 = 1, \beta_0 = 1, and \beta_1 = 0.[11][12] The method derives from the first-order Taylor expansion of the exact solution around t_n: y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{1}{2} h^2 y''(\xi_n) for some \xi_n \in (t_n, t_{n+1}), where y'(t_n) = f(t_n, y(t_n)). Truncating the higher-order term yields the approximation y(t_{n+1}) \approx y(t_n) + h f(t_n, y(t_n)), which discretizes the ODE using a forward difference.[13][12] The local truncation error, assuming the solution is exact at t_n, arises from neglecting the O(h^2) remainder in the Taylor expansion, giving an error of O(h^2) per step and confirming the method's first-order accuracy. Over multiple steps spanning a fixed interval, this accumulates to a global error of O(h).[13][11]

Implementation proceeds iteratively from the initial condition, advancing the solution at each discrete time point t_n = t_0 + n h. The following pseudocode illustrates the basic algorithm for computing the solution up to a final time t_f, with N = \lceil (t_f - t_0)/h \rceil steps:

```
function forward_euler(y0, t0, tf, h, f)
    N = ceil((tf - t0) / h)
    t = t0
    y = y0
    for n = 0 to N-1
        y = y + h * f(t, y)
        t = t + h
    end
    return y, t   // approximate y(tf)
end
```

This requires only the function f and the initial data, with no solver for nonlinear equations.[13] The method's primary advantages lie in its simplicity and ease of implementation, requiring no prior solution values beyond the initial condition, which eliminates the startup procedures needed by genuine multistep methods. However, its low order results in a global error of O(h), causing significant error accumulation for moderate step sizes and limiting its use to problems tolerant of reduced accuracy.[12][13]
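As a quick illustration of the first-order global error, the short Python sketch below (the test problem y' = -y with y(0) = 1 is chosen here purely for illustration) integrates to t = 1 with successively halved step sizes; the observed error shrinks roughly in proportion to h.

```python
import math

def forward_euler(f, y0, t0, tf, h):
    """Forward Euler integration of y' = f(t, y) from t0 to tf with step h."""
    n_steps = math.ceil((tf - t0) / h)
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -y                       # exact solution: y(t) = exp(-t)
for h in (0.1, 0.05, 0.025):
    err = abs(forward_euler(f, 1.0, 0.0, 1.0, h) - math.exp(-1.0))
    print(f"h = {h:.3f}   error = {err:.2e}")   # error roughly halves as h halves
```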
Two-Step Adams-Bashforth Method
The two-step Adams-Bashforth method is an explicit linear multistep technique designed for numerically solving non-stiff initial value problems of the form y' = f(t, y), y(t_0) = y_0. It advances the solution from two prior points by extrapolating the right-hand side function linearly to approximate the integral over the next step interval. This method belongs to the Adams family and is valued for its simplicity and efficiency when the underlying ODE lacks stiffness, allowing straightforward evaluation of f without solving nonlinear equations.[14] The method's formula is

y_{n+1} = y_n + h \left( \frac{3}{2} f(t_n, y_n) - \frac{1}{2} f(t_{n-1}, y_{n-1}) \right),
where h is the step size. This formula arises from approximating y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} f(s, y(s)) \, ds by fitting a linear interpolating polynomial to f at the points t_{n-1} and t_n. Let u = (s - t_n)/h, so the interpolation points sit at u = -1 and u = 0. The polynomial is p(u) = f_n + (f_n - f_{n-1}) u, and integrating gives \int_0^1 p(u) \, du = \frac{3}{2} f_n - \frac{1}{2} f_{n-1}, which is multiplied by h to obtain the update.[15] The two-step Adams-Bashforth method achieves second-order accuracy, meaning the global error is O(h^2) under suitable conditions, with a local truncation error of \frac{5}{12} h^3 y'''(\xi) for some \xi \in (t_{n-1}, t_{n+1}). To start the iteration, an initial approximation y_1 is needed in addition to the given y_0; it is typically computed with a single-step method such as the forward Euler method, after which subsequent steps follow the formula iteratively. The explicit nature makes the method computationally inexpensive for non-stiff problems, for which its bounded stability region, covering a portion of the left half-plane near the origin, is sufficient.[14]
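A minimal sketch of this procedure in Python (the function name adams_bashforth2 is illustrative; a single forward Euler step supplies the required starting value y_1, as described above):

```python
import math

def adams_bashforth2(f, y0, t0, tf, h):
    """Two-step Adams-Bashforth method with a forward Euler startup step."""
    n_steps = math.ceil((tf - t0) / h)
    f_prev = f(t0, y0)                  # f(t_0, y_0)
    t, y = t0 + h, y0 + h * f_prev      # startup: forward Euler gives y_1
    for _ in range(1, n_steps):
        f_curr = f(t, y)
        # y_{n+1} = y_n + h * (3/2 * f_n - 1/2 * f_{n-1})
        y, f_prev = y + h * (1.5 * f_curr - 0.5 * f_prev), f_curr
        t += h
    return y
```

Applied to a smooth test problem such as y' = -y, halving h reduces the error at a fixed time by roughly a factor of four, consistent with second-order accuracy; the single Euler startup step does not degrade the overall order because its O(h^2) error is incurred only once.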
Major Families
Adams Methods
The Adams methods form a key family of linear multistep methods designed for the numerical integration of non-stiff ordinary differential equations (ODEs), comprising the explicit Adams-Bashforth (AB) methods and the implicit Adams-Moulton (AM) methods. These methods approximate the solution by integrating an interpolating polynomial fitted to the right-hand side function f of the ODE y' = f(t, y). The AB methods interpolate using values at previous time points only, while the AM methods also incorporate the value at the new time point, making them implicit. Developed initially for astronomical and physical computations, the methods provide high-order accuracy with efficient use of function evaluations.[5]

The k-step Adams-Bashforth method is explicit and given by y_{n+1} = y_n + h \sum_{j=0}^{k-1} \beta_j f_{n-j}, where the coefficients \beta_j are obtained by Lagrange interpolation of f at the points t_n, t_{n-1}, \dots, t_{n-k+1}, so that the resulting quadrature is exact for polynomials of degree up to k-1, yielding local truncation error O(h^{k+1}) and global order k.[14] The k-step Adams-Moulton method is implicit and formulated as y_{n+1} = y_n + h \left( \beta_k f_{n+1} + \sum_{j=0}^{k-1} \beta_j f_{n-j} \right), derived similarly but interpolating over the points t_{n+1}, t_n, \dots, t_{n+1-k}, including the new point, to achieve order k+1.[14] The implicit nature of AM requires solving a nonlinear equation at each step, typically via fixed-point iteration or Newton's method.

A common strategy pairs AB and AM in a predictor-corrector framework, where the explicit AB method provides an initial guess \tilde{y}_{n+1} (predictor) used to evaluate f_{n+1}, which is then inserted into the AM formula for refinement (corrector); a minimal sketch of one such step follows the coefficient table below. The corrector may be applied once, as in PEC or PECE modes (the trailing E denotes a final re-evaluation of f at the corrected value), or iterated, as in P(EC)^m modes, for closer agreement with the implicit formula; the AM corrector is often taken one order higher than the AB predictor to optimize efficiency while maintaining the overall order. This pairing, which combines explicit efficiency with implicit accuracy, was formalized by Forest Ray Moulton for ballistic trajectory computations.[16]

The coefficients for low-order Adams methods (orders 1 to 4) are summarized below; for AB the coefficients multiply f_n, f_{n-1}, \dots, while for AM they multiply f_{n+1}, f_n, f_{n-1}, \dots. These values make the underlying quadrature exact whenever f is a polynomial of degree up to the method's order minus one.[14]

| Order | AB Coefficients | AM Coefficients |
|---|---|---|
| 1 | 1 | 1 |
| 2 | \frac{3}{2}, -\frac{1}{2} | \frac{1}{2}, \frac{1}{2} |
| 3 | \frac{23}{12}, -\frac{16}{12}, \frac{5}{12} | \frac{5}{12}, \frac{8}{12}, -\frac{1}{12} |
| 4 | \frac{55}{24}, -\frac{59}{24}, \frac{37}{24}, -\frac{9}{24} | \frac{9}{24}, \frac{19}{24}, -\frac{5}{24}, \frac{1}{24} |
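To make the predictor-corrector pairing concrete, here is a minimal PECE step in Python (an illustrative sketch, not a standard library routine; the name pece_step and its argument layout are hypothetical), combining the order-2 Adams-Bashforth predictor with the order-3 Adams-Moulton corrector whose coefficients appear in the table above.

```python
def pece_step(f, t_n, y_n, f_n, f_nm1, h):
    """One PECE step: order-2 Adams-Bashforth predictor, order-3 Adams-Moulton corrector.

    f_n = f(t_n, y_n) and f_nm1 = f(t_{n-1}, y_{n-1}) are the two stored derivatives.
    Returns (y_{n+1}, f_{n+1}) so the caller can shift the history forward.
    """
    # Predict:  y*_{n+1} = y_n + h * (3/2 f_n - 1/2 f_{n-1})            (Adams-Bashforth)
    y_pred = y_n + h * (1.5 * f_n - 0.5 * f_nm1)
    # Evaluate: use the predicted value in place of the unknown f_{n+1}
    f_pred = f(t_n + h, y_pred)
    # Correct:  y_{n+1} = y_n + h/12 * (5 f_{n+1} + 8 f_n - f_{n-1})    (Adams-Moulton)
    y_next = y_n + h / 12.0 * (5.0 * f_pred + 8.0 * f_n - f_nm1)
    # Evaluate again so the returned derivative is consistent with the corrected value
    f_next = f(t_n + h, y_next)
    return y_next, f_next
```

The final evaluation is what distinguishes PECE from PEC: it keeps the stored derivative consistent with the corrected solution value, at the cost of one extra function evaluation per step.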