
Linear multistep method

A linear multistep method (LMM) is a class of techniques for solving initial value problems of ordinary differential equations (ODEs), where the approximation of the solution at a future time step is obtained as a linear combination of previous solution values and their corresponding derivatives, weighted by the step size. These methods, written in general form as \sum_{j=0}^{k} \alpha_j y_{n+j} = h \sum_{j=0}^{k} \beta_j f(t_{n+j}, y_{n+j}) for a k-step method, contrast with single-step methods like Runge-Kutta by reusing previously computed values to improve efficiency, particularly for non-stiff problems. The origins of LMMs trace back to the explicit Adams methods, first developed by John Couch Adams around 1855 and published in 1883 in collaboration with Francis Bashforth for applications in capillary action theory, where polynomial interpolation of the integrand enables higher-order approximations without re-solving the full ODE at each step. Subsequent advances by Germund Dahlquist in the 1950s established foundational convergence and stability theories, proving that consistent and zero-stable LMMs converge to the true solution, while introducing barriers such as the first Dahlquist barrier, which limits the order of a zero-stable k-step method to k+1 for odd k and k+2 for even k, and the impossibility of A-stability for explicit methods. Key examples include the explicit Adams-Bashforth methods, which predict the next solution using past derivatives (e.g., the second-order form y_{n+1} = y_n + \frac{h}{2}(3f_n - f_{n-1})), and the implicit Adams-Moulton methods, which incorporate the future derivative for greater accuracy (e.g., third-order: y_{n+1} = y_n + \frac{h}{12}(5f_{n+1} + 8f_n - f_{n-1})). For stiff ODEs, where explicit methods suffer from severe step-size restrictions, the backward differentiation formulas (BDFs), initially proposed by Charles F. Curtiss and Joseph O. Hirschfelder in 1952, offer implicit alternatives that are A-stable for orders 1 and 2 and A(\alpha)-stable up to order 6, such as the second-order BDF y_{n+1} - \frac{4}{3}y_n + \frac{1}{3}y_{n-1} = \frac{2}{3} h f_{n+1}. The order and stability properties of these methods, analyzed via their characteristic polynomials, remain central to their design and application in scientific computing.

Basic Concepts

Definition and Formulation

Linear multistep methods are a class of numerical integrators designed to approximate the solution of initial value problems for ordinary differential equations (ODEs) of the form y' = f(t, y), where y(t_0) = y_0 and the integration is performed over the interval [t_0, T]. These methods advance the solution by taking a linear combination of previous solution values and their corresponding right-hand side evaluations, enabling efficient computation by reusing information from multiple time steps. Unlike single-step methods, they incorporate data from several prior points to achieve higher accuracy while maintaining computational efficiency. The general formulation of a linear multistep method is \sum_{j=0}^k \alpha_j y_{n+j} = h \sum_{j=0}^k \beta_j f(t_{n+j}, y_{n+j}), where h > 0 is the uniform step size, t_n = t_0 + n h, and the coefficients \{\alpha_j\}_{j=0}^k and \{\beta_j\}_{j=0}^k are real numbers satisfying the normalization condition \alpha_k = 1. Here, k denotes the number of steps involved in the method, and the method has an associated order p. The left-hand side advances the solution values, while the right-hand side approximates the integral contribution from the ODE's forcing term. To initiate the method beyond the initial value y_0, the starting values y_1, \dots, y_{k-1} are typically obtained using single-step methods such as Runge-Kutta schemes. Linear multistep methods are classified as explicit if \beta_k = 0, in which case y_{n+k} can be solved for directly without iteration, or implicit if \beta_k \neq 0, requiring the solution of a (generally nonlinear) equation at each step, often via fixed-point iteration or Newton's method. The characteristic polynomials associated with the method are the first characteristic polynomial \rho(\zeta) = \sum_{j=0}^k \alpha_j \zeta^j for the solution values and the second characteristic polynomial \sigma(\zeta) = \sum_{j=0}^k \beta_j \zeta^j for the right-hand side evaluations; these polynomials play a central role in analyzing the method's qualitative behavior.
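To make the general formulation concrete, the following minimal Python sketch (an illustration, not a production integrator; the function name lmm_explicit_step and the argument layout are choices made here) performs one step of an arbitrary explicit method, with alpha and beta holding \alpha_0, \dots, \alpha_k and \beta_0, \dots, \beta_k as defined above:

def lmm_explicit_step(alpha, beta, ys, fs, h):
    # One step of sum_j alpha[j]*y_{n+j} = h * sum_j beta[j]*f_{n+j} with beta[k] = 0.
    # ys and fs hold the k known values y_n, ..., y_{n+k-1} and f_n, ..., f_{n+k-1}.
    k = len(alpha) - 1
    assert beta[k] == 0, "explicit method required (beta_k = 0)"
    rhs = h * sum(beta[j] * fs[j] for j in range(k))
    lhs = sum(alpha[j] * ys[j] for j in range(k))
    return rhs - lhs  # y_{n+k}, using the normalization alpha[k] = 1

For instance, the two-step Adams-Bashforth method introduced later corresponds to alpha = [0, -1, 1] and beta = [-0.5, 1.5, 0], since its update reads y_{n+2} - y_{n+1} = h(\frac{3}{2} f_{n+1} - \frac{1}{2} f_n).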

Historical Background

The origins of linear multistep methods can be traced to the eighteenth century, when Leonhard Euler's method of 1768 served as an early single-step precursor for the numerical solution of ordinary differential equations (ODEs). However, the first explicit multistep approaches were developed by the British astronomer John Couch Adams around 1855 and published in 1883; Adams created formulas, in collaboration with Francis Bashforth, to compute solutions of ODEs modeling capillary action, motivated by astronomical and physical computations requiring greater efficiency than single-step methods. These early methods laid the groundwork for using multiple previous points to approximate derivatives, improving accuracy for non-stiff problems well before the advent of electronic computing. The formalization of linear multistep methods as a general class occurred in the mid-20th century through the pioneering work of the Swedish mathematician Germund Dahlquist. In his 1956 paper, Dahlquist established the foundational theory of convergence and stability for these methods, analyzing the zero-stability and consistency conditions essential for reliable numerical solutions of ODEs. This was expanded in his 1958 doctoral dissertation in Stockholm, titled "Stability and error bounds in the numerical integration of ordinary differential equations," which developed the broad linear multistep framework and emphasized stability concepts for practical implementation. Dahlquist's contributions were driven by the need to complement single-step methods like Runge-Kutta with more efficient integration in scientific computing, particularly as electronic computers emerged. During the 1950s and 1960s, advancements built on Adams' explicit methods by incorporating predictor-corrector pairs, where an explicit predictor (like Adams-Bashforth) estimates the next value and an implicit corrector (like Adams-Moulton) refines it, enhancing accuracy and efficiency for non-stiff ODEs in applications. Concurrently, backward differentiation formulas (BDFs), first proposed by Charles F. Curtiss and Joseph O. Hirschfelder in 1952, were further developed and popularized in the late 1960s and early 1970s by C. William Gear to address stiff ODEs, where traditional explicit methods fail due to stability restrictions; Gear's publications beginning in 1966 and his 1971 monograph popularized variable-order BDF implementations for simulations in chemical kinetics and circuit analysis. These developments were motivated by the computational demands of post-World War II scientific computing, including adaptations for stiff systems arising in reactor modeling and combustion chemistry. Key milestones include Dahlquist's 1963 paper, which analyzed a special stability problem and introduced A-stability, together with the associated barrier limiting A-stable methods to order 2; the first Dahlquist barrier, bounding the order of zero-stable methods, had emerged from his earlier work. Practical testing of these methods gained traction in the early electronic computing era of the late 1940s and 1950s, when computers facilitated large-scale solutions for ballistics and weather prediction, underscoring the shift from hand calculations to automated numerical integration.

Introductory Examples

Forward Euler Method

The forward Euler method represents the simplest linear multistep method for solving initial value problems of the form y' = f(t, y), y(t_0) = y_0, functioning as a one-step explicit scheme with k=1. In the general linear multistep framework, it takes the form y_{n+1} - y_n = h f(t_n, y_n), corresponding to coefficients \alpha_0 = -1, \alpha_1 = 1, \beta_0 = 1, and \beta_1 = 0. This method derives from the first-order Taylor expansion of the exact solution around t_n: y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{1}{2} h^2 y''(\xi_n) for some \xi_n \in (t_n, t_{n+1}), where y'(t_n) = f(t_n, y(t_n)). Truncating the higher-order term yields the approximation y(t_{n+1}) \approx y(t_n) + h f(t_n, y(t_n)), which discretizes the ODE using a forward difference. The local truncation error, assuming the solution is exact at t_n, arises from neglecting the O(h^2) term in the expansion, resulting in an error of O(h^2) per step and confirming the method's first-order accuracy. Over multiple steps spanning a fixed interval, this leads to a global error of O(h) due to error accumulation. Implementation proceeds iteratively from the initial condition, advancing the solution at each discrete time point t_n = t_0 + n h. The following Python routine illustrates the basic algorithm for computing the solution up to a final time t_f, with N = \lceil (t_f - t_0)/h \rceil steps:
import math

def forward_euler(y0, t0, tf, h, f):
    N = math.ceil((tf - t0) / h)
    t = t0
    y = y0
    for _ in range(N):
        y = y + h * f(t, y)   # explicit update: no nonlinear solve needed
        t = t + h
    return y, t  # approximate y(tf) and the final time reached
This requires only the function f and the initial data, with no solver for nonlinear equations. The method's primary advantages lie in its simplicity and ease of implementation, requiring no prior solution values beyond the initial condition, which eliminates the need for startup procedures common in multistep methods. However, its low order results in a global error of O(h), causing significant error accumulation for moderate step sizes and limiting its use to problems tolerant of reduced accuracy.
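As a quick sanity check, one can apply the routine above to y' = -y, y(0) = 1, whose exact solution is e^{-t}; halving the step size roughly halves the error, consistent with the O(h) global accuracy (a minimal sketch reusing the forward_euler listing above):

import math
err = lambda h: abs(forward_euler(1.0, 0.0, 1.0, h, lambda t, y: -y)[0] - math.exp(-1.0))
print(err(0.01) / err(0.005))  # approximately 2, confirming first-order convergence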

Two-Step Adams-Bashforth Method

The two-step Adams-Bashforth method is an explicit linear multistep technique designed for numerically solving non-stiff initial value problems of the form y' = f(t, y), y(t_0) = y_0. It advances the solution from two prior points by extrapolating the right-hand side function linearly to approximate its integral over the next step interval. This method belongs to the broader Adams family of explicit methods and is valued for its simplicity and efficiency when the underlying problem lacks stiffness, allowing straightforward evaluation of f without solving nonlinear equations. The method's formula is
y_{n+1} = y_n + h \left( \frac{3}{2} f(t_n, y_n) - \frac{1}{2} f(t_{n-1}, y_{n-1}) \right),
where h is the step size. This arises from deriving an approximation to y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} f(s, y(s)) \, ds by fitting a linear interpolant to f at the points t_{n-1} and t_n. Let u = (s - t_n)/h, so the interpolation points are at u = -1 and u = 0. The interpolant is p(u) = f_n + (f_n - f_{n-1}) u, and integrating yields \int_0^1 p(u) \, du = \frac{3}{2} f_n - \frac{1}{2} f_{n-1}, which is then multiplied by h to obtain the update.
The two-step Adams-Bashforth method achieves second-order accuracy, meaning the global error is O(h^2) under suitable conditions, with a local truncation error of \frac{5}{12} h^3 y'''(\xi) for some \xi \in (t_{n-1}, t_{n+1}). To start the iteration, an initial approximation y_1 is needed beyond the given y_0; this is typically computed using a single-step method such as the forward Euler method, after which subsequent steps follow the formula iteratively. The explicit nature makes it computationally inexpensive for non-stiff problems, where the stability region encompasses a portion of the left half-plane sufficient for many practical applications.
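A minimal Python sketch of the complete iteration, with a single forward Euler step supplying the starting value y_1 as described above (the function name and variable layout are illustrative choices):

import math

def adams_bashforth2(f, y0, t0, tf, h):
    N = math.ceil((tf - t0) / h)
    t = t0
    f_prev = f(t, y0)
    y = y0 + h * f_prev          # forward Euler startup for y_1
    t += h
    for _ in range(N - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)   # AB2 update
        f_prev = f_curr
        t += h
    return y  # approximates y(tf)

Because one Euler step incurs only an O(h^2) local error, the startup does not degrade the method's O(h^2) global accuracy.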

Major Families

Adams Methods

The Adams methods form a key family of linear multistep methods designed for the numerical integration of non-stiff ordinary differential equations (ODEs), comprising the explicit Adams-Bashforth (AB) methods and the implicit Adams-Moulton (AM) methods. These methods approximate the solution by integrating an interpolating polynomial fitted to the right-hand side function f of the ODE y' = f(t, y). The AB methods interpolate using values at previous time points, while the AM methods also incorporate the value at the new time point, making them implicit. Developed initially for astronomical and physical computations, the methods provide high-order accuracy with efficient use of function evaluations. The k-step Adams-Bashforth method is explicit and given by y_{n+1} = y_n + h \sum_{j=0}^{k-1} \beta_j f_{n-j}, where the coefficients \beta_j are chosen via polynomial interpolation of f at the points t_{n-k+1}, \dots, t_n so as to exactly integrate polynomials of degree up to k-1, yielding local truncation error O(h^{k+1}) and global order k. The k-step Adams-Moulton method is implicit and formulated as y_{n+1} = y_n + h \left( \beta_k f_{n+1} + \sum_{j=0}^{k-1} \beta_j f_{n-j} \right), derived similarly but interpolating over t_{n+1-k}, \dots, t_{n+1}, including the new point, to achieve order k+1. The implicit nature of AM requires solving a nonlinear equation at each step, typically via fixed-point iteration or Newton-type methods. A common strategy pairs AB and AM in a predictor-corrector framework, where the explicit AB method provides an initial guess \tilde{y}_{n+1} (predictor) used to evaluate f_{n+1}, which is then inserted into the AM formula for refinement (corrector). The correction can be applied once (PECE mode) for simplicity or iterated (P(EC)^m modes) for higher accuracy, with the AM corrector often one order higher than the predictor to optimize accuracy while maintaining overall stability. This pairing, which leverages the efficiency of explicit methods and the accuracy of implicit ones, was formalized by Forest Ray Moulton for ballistic trajectory computations. The coefficients for low-order Adams methods (orders 1 to 4) are summarized below, where for AB the coefficients apply to f_n, f_{n-1}, \dots and for AM they apply to f_{n+1}, f_n, f_{n-1}, \dots. These values ensure the methods are exact for polynomials up to the specified order minus one.
| Order | AB coefficients | AM coefficients |
|-------|-----------------|-----------------|
| 1 | 1 | 1 |
| 2 | \frac{3}{2}, -\frac{1}{2} | \frac{1}{2}, \frac{1}{2} |
| 3 | \frac{23}{12}, -\frac{16}{12}, \frac{5}{12} | \frac{5}{12}, \frac{8}{12}, -\frac{1}{12} |
| 4 | \frac{55}{24}, -\frac{59}{24}, \frac{37}{24}, -\frac{9}{24} | \frac{9}{24}, \frac{19}{24}, -\frac{5}{24}, \frac{1}{24} |
Adams methods are particularly effective for non-stiff ODEs, where the explicit variants avoid nonlinear solves and the implicit AM variants deliver superior accuracy per function evaluation in predictor-corrector mode, often one order higher than AB for comparable computational cost. Extensions to variable step sizes incorporate local error estimates from the predictor-corrector difference to adjust h, enabling adaptive integration in solvers like MATLAB's ode113.
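As an illustration of the predictor-corrector idea, the sketch below pairs the two-step Adams-Bashforth predictor with the second-order Adams-Moulton (trapezoidal) corrector in single-pass PECE mode; a same-order pairing is used here for brevity, whereas production codes often use a corrector one order higher (names and structure are illustrative):

import math

def pece_ab2_am2(f, y0, t0, tf, h):
    N = math.ceil((tf - t0) / h)
    t = t0
    f_prev = f(t, y0)
    y = y0 + h * f_prev                                  # Euler startup for y_1
    t += h
    for _ in range(N - 1):
        f_curr = f(t, y)                                 # E: evaluate at current point
        y_pred = y + h * (1.5 * f_curr - 0.5 * f_prev)   # P: AB2 prediction
        f_pred = f(t + h, y_pred)                        # E: evaluate at prediction
        y = y + 0.5 * h * (f_curr + f_pred)              # C: AM2 (trapezoidal) correction
        f_prev = f_curr
        t += h
    return y

The difference between the predicted and corrected values furnishes a cheap local error estimate, which adaptive implementations use to control the step size h.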

Backward Differentiation Formulas

The backward differentiation formulas (BDFs) constitute a family of implicit linear multistep methods designed for the numerical integration of ordinary differential equations (ODEs), particularly those exhibiting stiffness. These methods approximate the solution by fitting an interpolating polynomial to previous solution values and evaluating the derivative at the current time step using backward differences, which enhances stability for systems with widely varying timescales. Unlike explicit methods, BDFs require solving an implicit equation at each step, making them suitable for stiff problems where explicit schemes would necessitate prohibitively small time steps. The general formulation of a k-step BDF is \sum_{j=0}^k \alpha_j y_{n+j} = h \beta_k f_{n+k}, where h is the step size, \alpha_k = 1, \beta_k > 0, and the coefficients \alpha_j (for j=0,\dots,k-1) and \beta_k are determined by requiring the method to have order k, meaning the local truncation error is O(h^{k+1}). These coefficients arise from a polynomial approximation to the solution, specifically using backward differences to express y'_{n+k} in terms of past and current solution values. The derivation of BDFs relies on the backward difference operator \nabla y_n = y_n - y_{n-1}, with higher-order differences defined recursively as \nabla^k y_n = \nabla (\nabla^{k-1} y_n). The method approximates y'_{n+k} by interpolating a polynomial through the points y_{n+k}, y_{n+k-1}, \dots, y_n and differentiating it at t_{n+k}, or equivalently using a generating-function approach to satisfy the order conditions. This backward-focused differencing provides inherent damping of high-frequency oscillations, a key property for stiff systems. BDF methods are zero-stable for orders k \leq 6, A-stable for k=1,2, and A(\alpha)-stable for k=3 to 6 (with \alpha decreasing from approximately 86^\circ to 18^\circ), allowing reliable integration along the negative real axis in the complex plane, but they become zero-unstable for k \geq 7 as roots of the characteristic polynomial leave the unit disk. A commonly used example is the second-order BDF (BDF2), formulated as y_{n+1} - \frac{4}{3} y_n + \frac{1}{3} y_{n-1} = \frac{2}{3} h f_{n+1}, which balances computational efficiency and accuracy for many stiff problems. Due to their implicit nature, BDF equations are solved iteratively using nonlinear solvers such as Newton's method, often with approximate Jacobians to handle the resulting systems efficiently. For stiff ODEs, BDFs offer significant advantages over explicit methods like Adams-Bashforth by providing better damping of high-frequency modes, enabling larger step sizes without numerical instability. This makes them ideal for applications in chemical kinetics, circuit simulation, and other stiff scenarios. Modern implementations, such as those in the SUNDIALS suite, employ variable-order BDFs (typically up to order 5) with adaptive step control to optimize performance across problem scales.
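The following minimal sketch (a scalar illustration with a hand-coded Newton iteration; the function names and the user-supplied derivative dfdy are assumptions of this example, not a standard API) solves the BDF2 equation above for the new value:

def bdf2_step(f, dfdy, y_nm1, y_n, t_np1, h, tol=1e-12, max_iter=20):
    # Solve y_new - (4/3) y_n + (1/3) y_nm1 = (2/3) h f(t_np1, y_new) by Newton's method.
    hist = (4.0 / 3.0) * y_n - (1.0 / 3.0) * y_nm1   # known history contribution
    y = y_n                                           # initial guess: previous value
    for _ in range(max_iter):
        g = y - hist - (2.0 / 3.0) * h * f(t_np1, y)  # residual of the implicit equation
        dg = 1.0 - (2.0 / 3.0) * h * dfdy(t_np1, y)   # derivative of the residual
        delta = g / dg
        y -= delta
        if abs(delta) < tol:
            break
    return y

For systems, the division g / dg becomes a linear solve with the matrix I - \frac{2}{3} h \, \partial f/\partial y, which is where the approximate-Jacobian strategies mentioned above pay off.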

Theoretical Properties

Consistency and Order Conditions

In linear multistep methods, consistency refers to the property that the local truncation error approaches zero as the step size h tends to zero, ensuring the method approximates the exact solution of the underlying differential equation. For a general k-step method given by \sum_{j=0}^k \alpha_j y_{n+j} = h \sum_{j=0}^k \beta_j f(t_{n+j}, y_{n+j}), with \alpha_k = 1 and not all \alpha_j, \beta_j zero, consistency requires the first-order conditions derived from Taylor expansion: \sum_{j=0}^k \alpha_j = 0 and \sum_{j=0}^k j \alpha_j = \sum_{j=0}^k \beta_j. These conditions ensure that constant and linear solutions are reproduced exactly, corresponding to the method having order at least 1. The order p of the method is defined such that the local truncation error satisfies \tau = O(h^{p+1}), meaning the method matches the exact solution up to terms of order p in the Taylor expansion. This is achieved when the characteristic polynomials \rho(\zeta) = \sum_{j=0}^k \alpha_j \zeta^j and \sigma(\zeta) = \sum_{j=0}^k \beta_j \zeta^j satisfy \rho(e^h) - h \sigma(e^h) = O(h^{p+1}) as h \to 0. Specifically, the C(q) conditions for q = 0, 1, \dots, p are C_q = \frac{1}{q!} \sum_{j=0}^k \alpha_j j^q - \frac{1}{(q-1)!} \sum_{j=0}^k \beta_j j^{q-1} = 0 \quad (q \geq 1), with C_0 = \sum_{j=0}^k \alpha_j = 0, and C_{p+1} \neq 0 serving as the principal error constant. These conditions generalize the consistency requirements, ensuring higher-order accuracy by equating coefficients in the Taylor expansions. To derive these conditions, assume the exact solution y(t) satisfies the method with y_{n+j} = y(t_{n+j}). Expand y(t_{n+j}) and y'(t_{n+j}) in Taylor series around t_n: y(t_{n+j}) = \sum_{m=0}^\infty \frac{(j h)^m}{m!} y^{(m)}(t_n), \quad y'(t_{n+j}) = \sum_{m=0}^\infty \frac{(j h)^m}{m!} y^{(m+1)}(t_n). Substituting into the method yields the local truncation error h \tau_n = \sum_{j=0}^k \alpha_j y(t_{n+j}) - h \sum_{j=0}^k \beta_j y'(t_{n+j}) = \sum_{q=0}^\infty h^q C_q y^{(q)}(t_n), where the coefficients C_q must vanish for q = 0 to p to achieve order p. This derivation shows that solving the system of equations for the \alpha_j and \beta_j so that C_q = 0 for all q \leq p determines methods of the desired order. For example, the forward Euler method, a 1-step explicit method with \alpha_0 = -1, \alpha_1 = 1, \beta_0 = 1, \beta_1 = 0, satisfies C_0 = 0 and C_1 = 0, confirming order p=1, with error constant C_2 = 1/2. Higher-order methods require solving larger systems; for instance, a 2-step method of order 2 involves setting C_0 = C_1 = C_2 = 0 to find appropriate coefficients, as in the two-step Adams-Bashforth method.
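These conditions can be checked mechanically in exact rational arithmetic; the short Python sketch below verifies that the two-step Adams-Bashforth coefficients satisfy C_0 = C_1 = C_2 = 0 and recovers the principal error constant C_3 = 5/12 quoted in the earlier example:

from fractions import Fraction as Fr
from math import factorial

alpha = [Fr(0), Fr(-1), Fr(1)]        # rho coefficients of AB2
beta = [Fr(-1, 2), Fr(3, 2), Fr(0)]   # sigma coefficients of AB2

def C(q):
    if q == 0:
        return sum(alpha)
    a = sum(aj * j**q for j, aj in enumerate(alpha))
    b = sum(bj * j**(q - 1) for j, bj in enumerate(beta))
    return a / factorial(q) - b / factorial(q - 1)

print([C(q) for q in range(4)])  # fractions 0, 0, 0, 5/12: order 2, error constant 5/12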

Stability and Convergence

Zero-stability is a crucial property for linear multistep methods, ensuring that the numerical solution does not diverge as the step size h approaches zero due to accumulated errors or perturbations in the starting values. It is defined by the roots of the first characteristic polynomial \rho(\zeta) = \sum_{j=0}^k \alpha_j \zeta^j = 0. The method satisfies the root condition if all roots \zeta obey |\zeta| \leq 1 and any root with |\zeta| = 1 is simple; consistency forces \zeta = 1 to be a root, and zero-stability requires it to be a simple one so that the constant solution is preserved. This condition prevents parasitic solutions from growing exponentially, thereby bounding error propagation over many steps. Convergence of a linear multistep method, which guarantees that the numerical solution approaches the exact solution as h \to 0, is characterized by Dahlquist's equivalence theorem: the method converges if and only if it is both consistent and zero-stable. For a method of order p that is also zero-stable, the global error satisfies |y_n - y(t_n)| = O(h^p) uniformly on any finite interval [t_0, T], assuming sufficiently accurate starting values. The proof relies on perturbation analysis, where the numerical solution is viewed as a small deviation from the exact solution; zero-stability ensures that local truncation errors, which are O(h^{p+1}) per step by consistency of order p, accumulate to a global error of O(h^p) without amplification. Consistency serves as a prerequisite, providing the local accuracy needed for this error bound. While zero-stability addresses asymptotic behavior as h \to 0, absolute stability examines the method's performance for fixed h > 0, particularly in problems with widely varying timescales. For the scalar test equation y' = \lambda y with \operatorname{Re}(\lambda) < 0, applying the method with step size h yields the linear recurrence \sum_{j=0}^k (\alpha_j - z \beta_j) y_{n+j} = 0 with z = h\lambda, governed by the roots of the stability polynomial \pi(\zeta; z) = \rho(\zeta) - z \sigma(\zeta). The method is absolutely stable at z if all roots of \pi(\zeta; z) lie in the closed unit disk, with any roots of modulus one simple; the region of absolute stability S is the set of all such z \in \mathbb{C}, which for a useful method covers part of the left half of the complex plane. This region determines where the method mimics the exact solution's decay. The implications of absolute stability are particularly evident in distinguishing stiff from non-stiff problems. Explicit linear multistep methods, such as those in the Adams-Bashforth family, have bounded stability regions that exclude parts of the negative real axis for large |z|, leading to instability when |h\lambda| is large, a common situation in stiff equations with eigenvalues far in the left half-plane. Implicit methods, like the Adams-Moulton and BDF formulas, often possess larger stability regions encompassing significant portions of the left half-plane, enabling reliable integration of stiff systems without excessive step-size restrictions.
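The boundary of the region of absolute stability can be traced numerically via the boundary locus z(\theta) = \rho(e^{i\theta})/\sigma(e^{i\theta}), since boundary points are exactly those z for which \pi(\zeta; z) has a root of modulus one. A minimal NumPy sketch for the two-step Adams-Bashforth method:

import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 400)
zeta = np.exp(1j * theta)               # candidate roots of modulus one
rho = zeta**2 - zeta                    # AB2: rho(zeta) = zeta^2 - zeta
sigma = 1.5 * zeta - 0.5                # AB2: sigma(zeta) = (3/2) zeta - 1/2
z = rho / sigma                         # boundary locus of the stability region
print(z.real.min())                     # about -1.0: AB2 is stable on roughly (-1, 0)

The small extent of this region along the negative real axis is exactly the limitation that makes explicit Adams methods unsuitable for stiff problems.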

Advanced Limitations

Second Dahlquist Barrier

The second Dahlquist barrier establishes that no linear multistep method of order p > 2 can be A-stable, meaning it cannot remain stable for all step sizes h > 0 when applied to the test equation y' = \lambda y with \operatorname{Re}(\lambda) < 0. This limitation arises because A-stability requires the absolute stability region to encompass the entire left half of the complex plane, a property incompatible with higher-order accuracy in this class of methods. The theorem was proved by Germund Dahlquist in his 1963 paper, marking a pivotal result in the stability analysis of numerical ODE solvers. The proof confronts two competing demands: near z = 0, order-p accuracy forces the method's stability behavior to match that of e^z up to O(z^{p+1}), while A-stability forces boundedness along the entire imaginary axis, the boundary of the left half-plane. For p > 2 these demands are contradictory, which is made precise by mapping the left half-plane to the unit disk via the bilinear transformation z = (\xi + 1)/(\xi - 1) and applying properties of analytic functions with positive real part, such as the Riesz-Herglotz representation. This barrier has profound implications for solving stiff ordinary differential equations, where eigenvalues with large negative real parts demand A-stability to avoid severe step-size restrictions. Explicit linear multistep methods, lacking implicit terms, cannot achieve A-stability at any order, as their stability regions are bounded and exclude parts of the left half-plane. Consequently, while implicit methods enable A-stability up to order 2, higher-order approximations for stiff problems must compromise on full A-stability, often relying on related concepts like A(\alpha)-stability. An important special case is the trapezoidal rule, a second-order implicit method given by y_{n+1} = y_n + \frac{h}{2} (f(t_n, y_n) + f(t_{n+1}, y_{n+1})), which is A-stable and attains the smallest error constant among A-stable linear multistep methods. This makes it a popular choice for balanced accuracy and stability in non-stiff to mildly stiff contexts.
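The trapezoidal rule sits exactly on the barrier: its stability function is R(z) = (1 + z/2)/(1 - z/2), and a quick numerical check confirms |R(iy)| = 1 on the imaginary axis, the borderline behavior that order 2 still permits (plain Python, no external libraries):

R = lambda z: (1 + z / 2) / (1 - z / 2)   # stability function of the trapezoidal rule
for y in (0.1, 1.0, 10.0, 1000.0):
    print(abs(R(1j * y)))                  # 1.0 in every case: marginal damping
print(abs(R(-1e6)))                        # about 1.0 as z -> -infinity: not L-stable

The last line foreshadows the next subsection: the trapezoidal rule damps very stiff components only marginally, even though it is A-stable.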

L-Stability Barrier

The L-stability barrier establishes a fundamental limitation on the design of linear multistep methods for stiff ordinary differential equations, stating that no L-stable method of order p > 2 exists. L-stability requires not only A-stability, where the stability function R(z) satisfies |R(z)| \leq 1 for all z in the left half-plane \Re(z) \leq 0, but also that |R(z)| \to 0 as |z| \to \infty with \Re(z) \leq 0, ensuring rapid damping of transient components in stiff systems. This barrier extends the earlier restriction on A-stability alone, emphasizing the stricter demands for handling severe stiffness, where eigenvalues have large negative real parts. Dahlquist revisited these stability limits in his 1985 retrospective on numerical instability, highlighting their role in refining method design for stiff ODEs. The argument relies on the structure of the stability function R(z), a rational function whose numerator and denominator are polynomials of degrees typically equal to the number of steps k. L-stability requires R(z) \to 0 as z \to -\infty, which forces the numerator degree to be strictly less than the denominator degree. However, achieving order p > 2 imposes conditions on R(z) around z = 0 that conflict with this requirement together with A-stability across the entire left half-plane; higher-order accuracy demands more matching terms in the rational approximation to e^z, which disrupts the form needed for decay at infinity. This limitation has significant implications for practical methods like the backward differentiation formulas (BDFs), which are widely used for stiff problems owing to their strong damping properties. The BDF methods of orders 1 and 2 are both A-stable and L-stable, allowing effective treatment of stiff systems with fast-decaying transients. In contrast, BDF methods of orders 3 through 6 are only A(\alpha)-stable, with \alpha decreasing as the order rises (from about 86° for order 3 to about 18° for order 6), meaning they are not fully A-stable and thus cannot be L-stable; this can lead to slower decay of numerical transients in highly stiff scenarios, potentially requiring smaller steps or causing oscillations. BDFs of order 7 and higher lose zero-stability entirely. A common practical remedy is order reduction near infinity, where the solver effectively switches to a lower-order, strongly damped formula for components with large |\lambda| h, achieved through techniques such as blending or variable-order selection; this mitigates the barrier's impact without sacrificing overall efficiency for moderately stiff problems.
