
Backward differentiation formula

The backward differentiation formula (BDF) is a family of implicit multistep numerical methods designed for solving initial value problems in ordinary differential equations (ODEs), where the approximation of the derivative at the current time step relies on a backward difference fitted to past solution values. These methods generate a sequence of approximations y_n to the exact solution y(t_n) at discrete times t_n = t_0 + n h, using the general form \sum_{j=0}^k \alpha_{kj} y_{n-j} = h \beta_k f(t_n, y_n), where h is the step size, k is the method order, and the coefficients \alpha_{kj} and \beta_k are determined by the requirement that the formula be exact for polynomials up to degree k. For example, the first-order BDF (the backward Euler method) is y_n = y_{n-1} + h f(t_n, y_n), while higher orders incorporate more previous points for increased accuracy. Introduced in the early 1950s as part of efforts to address stiff ODEs, systems in which components evolve on vastly different time scales, BDFs were first proposed by Charles F. Curtiss and Joseph O. Hirschfelder in their work on integrating stiff equations, which recognized the need for stable implicit schemes to avoid the restrictive step sizes imposed by explicit methods. The methods gained prominence through the contributions of C. William Gear, who formalized and analyzed them in the late 1960s, including variable-order and variable-step implementations that adapt to the problem's stiffness. Gear's seminal 1971 book further established BDFs as a cornerstone of stiff solvers, emphasizing their stability properties for orders 1 through 6, which allow large step sizes without oscillations in stiff components. A key advantage of BDFs lies in their stability regions, which include the entire negative real axis of the complex plane for orders up to 6 (the methods are A-stable for orders 1-2 and A(α)-stable for orders 3-6), making them particularly effective for stiff systems arising in chemical kinetics, circuit simulation, and mechanical vibrations. However, orders 7 and higher are zero-unstable, limiting practical use to lower orders in most implementations.
BDFs are also applicable to differential-algebraic equations (DAEs) of index at most 1, where they handle algebraic constraints alongside differential components. Modern solvers, such as those in scientific software, often employ variable-order BDFs with error control to balance accuracy and efficiency.

Overview

Definition and Purpose

The backward differentiation formula (BDF) is a family of implicit linear multistep methods designed for the numerical solution of initial value problems in ordinary differential equations (ODEs) of the form y' = f(t, y). These methods generate approximations to the solution by leveraging information from multiple previous time steps to advance the solution at the current step. BDFs approximate the derivative y'(t_n) through backward differences, which arise from fitting a polynomial to the solution values at the current point t_n and the k preceding points, then differentiating this interpolant and equating it to f(t_n, y_n). This interpolation-based approach ensures the method's consistency with the underlying ODE while incorporating implicitness to handle challenging dynamics. The primary purpose of BDFs is to offer robust stability for stiff ODEs, where the system's eigenvalues have widely varying magnitudes, leading to rapid transients that explicit methods cannot capture without impractically small step sizes. In stiff systems, explicit methods such as forward Euler or explicit Runge-Kutta schemes become unstable unless the step size is restricted to h < O(1 / |\lambda_{\max}|), where \lambda_{\max} is the eigenvalue with the largest magnitude, often resulting in inefficient computation. BDFs, being implicit and A-stable at orders 1-2 (with strong damping of fast components at low orders), permit larger steps by suppressing these fast components effectively, making them suitable for applications like chemical kinetics or circuit simulation. To illustrate the need for such stable implicit methods, consider the prototypical stiff problem y' = -1000 y, y(0) = 1, with exact solution y(t) = e^{-1000 t}, which exhibits extremely rapid decay. Explicit Euler fails to integrate this stably for step sizes beyond h = 0.002, whereas BDFs maintain accuracy and stability with steps up to h \approx 0.1 or larger, highlighting their utility in avoiding the stiffness-induced step-size bottleneck.
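The step-size bottleneck can be demonstrated directly. The following minimal sketch (not any particular library's implementation) compares forward Euler with backward Euler (BDF1) on the test problem above, at a step size five times the explicit stability limit:

```python
# Stiff test problem y' = -1000*y, y(0) = 1, exact solution exp(-1000*t).
lam = -1000.0

def explicit_euler(h, n):
    y = 1.0
    for _ in range(n):
        y = y + h * lam * y          # y_{k+1} = (1 + h*lam) * y_k
    return y

def backward_euler(h, n):
    y = 1.0
    for _ in range(n):
        y = y / (1.0 - h * lam)      # solve y_{k+1} = y_k + h*lam*y_{k+1}
    return y

# With h = 0.01 (beyond the explicit limit h < 0.002) forward Euler blows up,
# since |1 + h*lam| = 9 > 1, while backward Euler (BDF1) decays like the
# true solution, since |1/(1 - h*lam)| = 1/11 < 1.
print(abs(explicit_euler(0.01, 100)))   # astronomically large
print(abs(backward_euler(0.01, 100)))   # essentially zero
```

Both methods do one function evaluation per step; the implicit method simply places the evaluation at the new time level, which is what buys the stability.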

Historical Development

The backward differentiation formulas (BDFs) emerged as a specialized class of linear multistep methods for solving ordinary differential equations (ODEs), building on foundational work in multistep integration techniques developed in the late 19th and early 20th centuries. Early multistep methods, such as the explicit Adams-Bashforth formulas introduced by John Couch Adams in 1883 and extended by Forest Ray Moulton in the 1920s into the implicit Adams-Moulton corrector methods, provided efficient ways to approximate solutions by leveraging multiple past points, particularly for non-stiff problems. These approaches laid the groundwork for implicit methods like BDFs, which shift the focus from quadrature to direct differentiation in response to the growing recognition of stiffness in chemical kinetics and other applications. The specific introduction of BDFs is credited to Charles F. Curtiss and Joseph O. Hirschfelder in their 1952 paper, where they proposed implicit multistep formulas based on backward differences to address the challenges of stiff equations arising in reaction kinetics simulations. This work marked an early formal recognition of stiffness issues and advocated for methods with improved stability over explicit approaches, though the formulas were not yet termed "backward differentiation" at the time. Their contribution highlighted the need for stable implicit methods capable of resolving disparate timescales in physical systems without severe step-size restrictions. BDFs gained widespread adoption through the efforts of C. William Gear, who in 1967 formalized and popularized these methods in his paper on the automatic integration of stiff ODEs, emphasizing their suitability for variable-step implementations in scientific computing and engineering. Gear's subsequent 1971 book further detailed the algorithms, including predictor-corrector variants and stability analysis, solidifying BDFs as a cornerstone for stiff system solvers and influencing early software like DIFSUB.
In the 1980s, advancements focused on enhancing BDF flexibility, with J.R. Cash introducing extended backward differentiation formulas (EBDFs) and modified extended variants (MEBDFs) to support variable-order and variable-step computations, improving efficiency for complex stiff problems without sacrificing stability up to order six. These developments addressed limitations in fixed-order BDFs and paved the way for robust implementations in modern numerical libraries.

Mathematical Formulation

General Formula

The general k-step backward differentiation formula (BDF) for numerically integrating the ordinary differential equation y' = f(t, y) is expressed as \sum_{j=0}^{k} \alpha_j y_{n+j} = h \beta f(t_{n+k}, y_{n+k}), where h > 0 is the uniform step size, t_{n+j} = t_n + j h, and the coefficients \{\alpha_j\}_{j=0}^k and \beta are normalized so that \alpha_k = 1 and determined by the requirement of consistency and order k. This implicit linear multistep method is particularly suited to stiff systems because of its favorable stability properties. The backward difference operator provides a foundational basis for the coefficients and is defined recursively as \nabla y_n = y_n - y_{n-1} for the first difference, with higher-order differences given by \nabla^m y_n = \nabla (\nabla^{m-1} y_n) = \nabla^{m-1} y_n - \nabla^{m-1} y_{n-1} for m = 2, \dots, k. The k-th backward difference \nabla^k y_{n+k} approximates h^k y^{(k)}(\xi) for some \xi in the interval [t_n, t_{n+k}], linking the differences to the continuous derivatives underlying the method. The formula arises from polynomial interpolation: consider the unique polynomial \pi_k(t) of degree at most k that interpolates the points (t_{n+j}, y_{n+j}) for j = 0, \dots, k; the method then approximates y'(t_{n+k}) \approx \pi_k'(t_{n+k}), yielding the implicit formula above after substitution and normalization. This perspective ensures the method's order by matching the Taylor expansion up to order k. In variable step-size formulations, the BDF relates to Newton's divided-difference interpolation, where the coefficients are computed using divided differences y[t_{n}, t_{n-1}, \dots, t_{n-k}] = y^{(k)}(\xi)/k! to maintain accuracy without uniform spacing. This connection facilitates extensions to non-uniform grids while preserving the method's core approximation properties.
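The approximation \nabla^k y \approx h^k y^{(k)} can be checked numerically. A small sketch using y(t) = e^t (so every derivative equals e^t, and each scaled k-th difference at t = 1 should approach e \approx 2.71828):

```python
import math

# Backward differences of y(t) = exp(t) at t_n = 1 with step h.
# nabla^k y_n / h^k should approximate y^{(k)}(t_n) = e.
h = 1e-2
vals = [math.exp(1 + j * h) for j in range(-4, 1)]  # y_{n-4}, ..., y_n
for k in range(1, 4):
    d = vals[:]
    for _ in range(k):
        d = [d[i] - d[i - 1] for i in range(1, len(d))]  # apply nabla once
    print(k, d[-1] / h ** k)   # each value is close to e
```

The scaled differences agree with e to within O(h), consistent with the one-sided nature of the backward-difference approximation.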

Derivation of Coefficients

The derivation of the coefficients in the backward differentiation formula (BDF) relies on polynomial interpolation to approximate the solution of the ODE y' = f(t, y). Consider the k+1 equally spaced points (t_{n+j}, y_{n+j}) for j = 0, 1, ..., k, where the step size h is constant, so t_{n+j} = t_n + j h. The interpolating polynomial π_k(t) of degree at most k is constructed to pass through these points, satisfying π_k(t_{n+j}) = y_{n+j} for each j. The BDF equation is then formed by imposing the condition that the derivative of this interpolant at the most recent point equals the right-hand side of the ODE: π_k'(t_{n+k}) = f(t_{n+k}, y_{n+k}). This condition yields a linear relation among the y_{n+j} values and f_{n+k}, with coefficients determined by the interpolation. To obtain the explicit form, express the interpolating polynomial in the Lagrange basis: \pi_k(t) = \sum_{j=0}^k y_{n+j} \, l_j(t), where the basis polynomials are l_j(t) = \prod_{\substack{m=0 \\ m \neq j}}^k \frac{t - t_{n+m}}{t_{n+j} - t_{n+m}} = \prod_{\substack{m=0 \\ m \neq j}}^k \frac{t - t_n - m h}{j h - m h}. Differentiating gives \pi_k'(t) = \sum_{j=0}^k y_{n+j} \, l_j'(t). Evaluating at t = t_{n+k} produces the characteristic equation f_{n+k} = \sum_{j=0}^k y_{n+j} \, l_j'(t_{n+k}), where the coefficients are the evaluated basis derivatives l_j'(t_{n+k}). Since the points are equally spaced, these derivatives scale with 1/h, reflecting the geometric progression in the denominators. The general BDF form is thus \sum_{j=0}^k \alpha_j y_{n+j} = h f_{n+k}, with \alpha_j = h \, l_j'(t_{n+k}), normalized such that the coefficient of f is h. This interpolation construction ensures the method is exact for polynomials of degree at most k. The coefficients \alpha_j can be derived using properties of the Lagrange basis under equal spacing. Standard values for low orders are as follows (with indexing shifted to match the common backward form, where the sum runs over y_n to y_{n-k}, but equivalent here):
| Order k | β_k | α_k (current) | α_{k-1} | α_{k-2} | α_{k-3} | α_{k-4} | α_{k-5} | α_{k-6} |
|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | -1 | | | | | |
| 2 | 2/3 | 1 | -4/3 | 1/3 | | | | |
| 3 | 6/11 | 1 | -18/11 | 9/11 | -2/11 | | | |
| 4 | 12/25 | 1 | -48/25 | 36/25 | -16/25 | 3/25 | | |
| 5 | 60/137 | 1 | -300/137 | 300/137 | -200/137 | 75/137 | -12/137 | |
| 6 | 60/147 | 1 | -360/147 | 450/147 | -400/147 | 225/147 | -72/147 | 10/147 |

With this normalization (coefficient of the current value equal to 1), the k-step formula reads \alpha_k y_{n+1} + \alpha_{k-1} y_n + \cdots + \alpha_0 y_{n+1-k} = \beta_k h f(t_{n+1}, y_{n+1}).
(Note: these values use forward indexing; standard tables often use backward indexing with α_0 = 1 for the current point.) Verification for the first-order case (k=1) gives α_0 = -1 (for y_n) and α_1 = 1 (for y_{n+1}), yielding y_{n+1} - y_n = h f_{n+1}, matching the backward Euler method. Higher-order coefficients follow from the expansion of the differentiation operator in terms of finite differences. The local truncation error in this approximation stems from the interpolation remainder term. By Taylor expansion around t_{n+k}, the exact solution y(t) satisfies y(t_{n+j}) = y(t_{n+k}) + \sum_{m=1}^\infty \frac{((j - k) h)^m}{m!} y^{(m)}(t_{n+k}), and the interpolant π_k reproduces terms up to m = k exactly. The resulting error in π_k'(t_{n+k}) - y'(t_{n+k}) is O(h^k), but when incorporated into the multistep method over a step h, the local error for BDF_k is O(h^{k+1}), as the higher-order terms in the expansion contribute accordingly. This order is confirmed by substituting the exact solution into the BDF equation and verifying the cancellation of lower-order terms up to h^k.
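The tabulated coefficients can be reproduced by solving the order conditions \sum_j \alpha_j j^m = \beta \, m k^{m-1} (exactness on the monomials t^m, m = 0, \dots, k, with nodes at t = j) in exact rational arithmetic. A sketch, with illustrative helper names:

```python
from fractions import Fraction

def solve_exact(A, b):
    """Gauss-Jordan elimination over exact rationals."""
    n = len(b)
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                fct = A[r][col] / A[col][col]
                A[r] = [a - fct * p for a, p in zip(A[r], A[col])]
                b[r] -= fct * b[col]
    return [b[i] / A[i][i] for i in range(n)]

def bdf_coefficients(k):
    """alpha_0..alpha_k and beta for sum_j alpha_j y_{n+j} = beta h f_{n+k},
    normalized so alpha_k = 1."""
    n = k + 2                                  # unknowns: alpha_0..alpha_k, beta
    A = [[Fraction(0)] * n for _ in range(n)]
    b = [Fraction(0)] * n
    for m in range(k + 1):
        for j in range(k + 1):
            A[m][j] = Fraction(j) ** m         # sum_j alpha_j * j^m ...
        if m >= 1:
            A[m][k + 1] = -m * Fraction(k) ** (m - 1)   # ... = beta * (t^m)'(k)
    A[k + 1][k] = Fraction(1)                  # normalization alpha_k = 1
    b[k + 1] = Fraction(1)
    sol = solve_exact(A, b)
    return sol[:-1], sol[-1]

alphas, beta = bdf_coefficients(2)
print(alphas, beta)   # alpha = [1/3, -4/3, 1], beta = 2/3, as in the table
```

Because `Fraction` reduces automatically, entries such as -360/147 print in lowest terms (-120/49), but they are the same rationals as in the table.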

Specific Formulas

First- to Third-Order Formulas

The first-order backward differentiation formula (BDF1), also known as the backward Euler method, approximates the solution of the ODE y' = f(t, y) at step n+1 by y_{n+1} - y_n = h f(t_{n+1}, y_{n+1}), where h is the step size. This implicit equation requires solving for y_{n+1}, typically via Newton's method for nonlinear problems, and provides first-order accuracy with a local truncation error of O(h^2). BDF1 is zero-stable, meaning small perturbations in initial conditions do not grow unboundedly as the number of steps increases. The second-order BDF (BDF2) extends this to a two-step method: \frac{3}{2} y_{n+1} - 2 y_n + \frac{1}{2} y_{n-1} = h f(t_{n+1}, y_{n+1}). This formula achieves second-order accuracy with a local truncation error of O(h^3) and is also zero-stable. To start BDF2, one common procedure uses a single step of BDF1 (or another one-step method) to obtain y_1 from y_0 before switching to the two-step formula. For third-order accuracy, the BDF3 formula is \frac{11}{6} y_{n+1} - 3 y_n + \frac{3}{2} y_{n-1} - \frac{1}{3} y_{n-2} = h f(t_{n+1}, y_{n+1}), with a local truncation error of O(h^4) and zero-stability. Starting BDF3 similarly requires lower-order BDF or one-step methods for the first two steps to provide y_1 and y_2. These low-order BDFs derive from polynomial interpolation through the current and previous points, with coefficients obtained via backward differences. As an illustrative example, consider the linear test equation y' = \lambda y with \operatorname{Re}(\lambda) < 0, whose exact solution decays exponentially. For BDF1 applied over one step from y_n, the update is y_{n+1} = y_n / (1 - h \lambda), which approximates the exact factor e^{h \lambda} with error O(h^2) and ensures decay since |1 / (1 - h \lambda)| < 1 for h > 0. Similar updates hold for BDF2 and BDF3, preserving the qualitative decay behavior while increasing accuracy.
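The stated orders can be confirmed empirically. A sketch on the linear test problem (here with the illustrative choice λ = -2 on [0, 1], BDF2 started with one BDF1 step), where halving h should divide the global error by about 2^p:

```python
import math

lam = -2.0   # test problem y' = lam*y, y(0) = 1, integrated to T = 1

def bdf1(h):
    n = round(1.0 / h)
    y = 1.0
    for _ in range(n):
        y = y / (1 - h * lam)                 # y_{k+1} - y_k = h*lam*y_{k+1}
    return y

def bdf2(h):
    n = round(1.0 / h)
    y0, y1 = 1.0, 1.0 / (1 - h * lam)         # start with one BDF1 step
    for _ in range(n - 1):
        # (3/2) y_{k+1} - 2 y_k + (1/2) y_{k-1} = h*lam*y_{k+1}
        y0, y1 = y1, (2 * y1 - 0.5 * y0) / (1.5 - h * lam)
    return y1

exact = math.exp(lam)
for solver, p in ((bdf1, 1), (bdf2, 2)):
    ratio = abs(solver(0.01) - exact) / abs(solver(0.005) - exact)
    print(solver.__name__, ratio)   # approaches 2**p as h -> 0
```

The observed ratios are close to 2 for BDF1 and 4 for BDF2, matching global orders 1 and 2.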

Fourth- to Sixth-Order Formulas

The fourth-order backward differentiation formula (BDF4) is given by \frac{25}{12} y_{n+1} - 4 y_n + 3 y_{n-1} - \frac{4}{3} y_{n-2} + \frac{1}{4} y_{n-3} = h f(t_{n+1}, y_{n+1}), where h is the step size. The fifth-order formula (BDF5) takes the form \frac{137}{60} y_{n+1} - 5 y_n + 5 y_{n-1} - \frac{10}{3} y_{n-2} + \frac{5}{4} y_{n-3} - \frac{1}{5} y_{n-4} = h f(t_{n+1}, y_{n+1}). For the sixth-order formula (BDF6), \frac{49}{20} y_{n+1} - 6 y_n + \frac{15}{2} y_{n-1} - \frac{20}{3} y_{n-2} + \frac{15}{4} y_{n-3} - \frac{6}{5} y_{n-4} + \frac{1}{6} y_{n-5} = h f(t_{n+1}, y_{n+1}). These higher-order methods provide increased accuracy for stiff equations compared to lower-order variants, with local truncation errors of O(h^5), O(h^6), and O(h^7) for orders 4, 5, and 6, respectively. However, their stability regions in the complex plane become more restricted as the order increases; specifically, BDF4, BDF5, and BDF6 are A(\alpha)-stable with \alpha \approx 73.35^\circ, 51.84^\circ, and 17.84^\circ, respectively, meaning they remain stable for eigenvalues within a sector of that angle about the negative real axis. Unlike the first- and second-order BDF methods, which are L-stable (A-stable with damping as |z| \to \infty), these higher orders lack A-stability, leading to potential oscillations in highly stiff components. In practice, BDF methods beyond order 5 are rarely employed because of their narrow stability angles, which limit applicability to problems with eigenvalues closely aligned to the negative real axis, and because of the increased sensitivity of their coefficients to round-off errors in implementation. Software packages for stiff solvers, such as those in SUNDIALS, typically cap the BDF order at 5 to balance accuracy and robustness. Orders greater than 6 violate the root condition for zero-stability and are unstable even for non-stiff problems.
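These angles can be estimated numerically from the boundary locus of each method. Using the backward-difference form of the BDF, \sum_{m=1}^{k} \frac{1}{m} \nabla^m y_{n+k} = h f_{n+k}, the locus for the test equation is z(\theta) = \sum_{m=1}^{k} (1 - e^{-i\theta})^m / m, and the A(α) angle is 180° minus the largest |arg z| along the curve. A sketch:

```python
import numpy as np

def bdf_alpha_deg(k, n=200_000):
    """Estimate the A(alpha) angle (degrees) of BDFk from its boundary locus
    z(theta) = sum_{m=1..k} (1 - exp(-i*theta))**m / m."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)[1:]  # skip z = 0
    zeta_inv = np.exp(-1j * theta)
    z = sum((1 - zeta_inv) ** m / m for m in range(1, k + 1))
    return 180.0 - np.degrees(np.max(np.abs(np.angle(z))))

for k in range(1, 7):
    print(k, round(bdf_alpha_deg(k), 2))
# published values: 90, 90, 86.03, 73.35, 51.84, 17.84 degrees
```

For k = 1 and 2 the locus stays in the closed right half-plane (A-stability, α = 90°); from k = 3 on it bulges into the left half-plane, shrinking the sector.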

Numerical Analysis

Local Truncation Error

The local truncation error (LTE) for a backward differentiation formula (BDF) of order k measures the approximation error introduced in a single step of the method, assuming exact values from previous steps. It is defined as \tau_{n+k} = \frac{1}{h \beta_k} \left[ \sum_{j=0}^k \alpha_j y(t_{n+j}) - h \beta_k y'(t_{n+k}) \right], where y is the exact solution of the differential equation, h is the step size, and \alpha_j, \beta_k are the method coefficients satisfying the order conditions up to order k. This \tau_{n+k} represents the amount by which the exact solution fails to satisfy the difference equation, normalized by the scaling factor h \beta_k. To derive the form of the error, Taylor expansions of y(t_{n+j}) and y'(t_{n+j}) are performed around the point t_{n+k}. Specifically, y(t_{n+j}) = y(t_{n+k} + (j - k)h) = \sum_{m=0}^\infty \frac{((j - k)h)^m}{m!} y^{(m)}(t_{n+k}), and similarly for the derivative terms. Substituting these into the defining equation and collecting powers of h, the coefficients are chosen such that the expansions agree up to order k, ensuring the residual vanishes for polynomials of degree at most k. The lowest-order non-vanishing term arises from the (k+1)-th derivative, yielding \tau_{n+k} = O(h^k). The leading term of this expansion is c_k h^k y^{(k+1)}(\xi), for some \xi \in (t_n, t_{n+k}) and a method-specific constant c_k, reflecting the inherent approximation quality of the method. This derivation confirms that the methods achieve their designed order for sufficiently smooth solutions. For BDF methods of orders 1 through 6, the leading error terms are -\frac{h}{2} y''(\xi), -\frac{2 h^2}{9} y'''(\xi), -\frac{3 h^3}{22} y^{(4)}(\xi), -\frac{12 h^4}{125} y^{(5)}(\xi), -\frac{10 h^5}{137} y^{(6)}(\xi), and -\frac{20 h^6}{343} y^{(7)}(\xi), as verified by explicit computation of the coefficients and residual terms. These constants shrink in magnitude as the order increases, quantifying the error committed at each step.
The BDF methods are consistent, meaning \tau_{n+k} \to 0 as h \to 0 for any fixed order k \geq 1 and sufficiently differentiable y, which is a prerequisite for convergence of the numerical solution to the exact solution as the step size decreases. This consistency holds uniformly across orders 1 to 6, supporting their use in practical solvers.
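The predicted O(h^k) scaling of τ can be checked by inserting a smooth exact solution into the low-order formulas. A sketch with y(t) = e^t (so y' = y), using the specific BDF1-BDF3 coefficients:

```python
import math

# alpha coefficients of sum_j alpha_j y_{n+j} = h f_{n+k}, listed for j = 0..k
ALPHA = {
    1: [-1.0, 1.0],
    2: [0.5, -2.0, 1.5],
    3: [-1.0 / 3.0, 1.5, -3.0, 11.0 / 6.0],
}

def residual(k, h, t=0.0):
    """Residual of the exact solution y = exp(t) in BDFk, scaled by 1/h.
    Should behave like (constant) * h**k."""
    ys = [math.exp(t + j * h) for j in range(k + 1)]
    return (sum(a * y for a, y in zip(ALPHA[k], ys)) - h * math.exp(t + k * h)) / h

for k in (1, 2, 3):
    print(k, residual(k, 1e-2) / residual(k, 5e-3))   # approaches 2**k
```

Halving h divides the scaled residual by about 2, 4, and 8 for k = 1, 2, 3, confirming the order of consistency.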

Stability and Convergence

The zero-stability of backward differentiation formulas (BDF) is determined by the root condition on their first characteristic polynomial, \rho(\zeta) = \sum_{j=0}^k \alpha_j \zeta^j: all roots must satisfy |\zeta| \leq 1, and any roots with |\zeta| = 1 must be simple. This condition ensures that perturbations in initial values do not grow unboundedly as the step size h approaches zero. For BDF methods of orders k = 1 to 6, the root condition holds, confirming their zero-stability. However, for k > 6, at least one root lies outside the unit circle, rendering these higher-order BDF methods divergent and unsuitable for practical use. A-stability for BDF methods is assessed using the Dahlquist test equation y' = \lambda y with \operatorname{Re}(\lambda h) < 0, requiring the numerical solution to remain bounded as t \to \infty. BDF methods of orders 1 and 2 satisfy A-stability, as their regions of absolute stability encompass the entire left half of the complex plane. These low-order methods are also L-stable, meaning the stability function R(z) satisfies \lim_{z \to -\infty} |R(z)| = 0, which damps high-frequency components effectively in stiff systems. For orders 3 to 6, BDF methods exhibit A(\alpha)-stability with \alpha decreasing from approximately 86° to 18°, indicating progressively smaller stability regions that still include the negative real axis but exclude parts of the left half-plane. In comparison, the trapezoidal rule, an A-stable order-2 method, offers full left-half-plane coverage without the order limitations of higher BDF methods, but its lack of L-stability means stiff components are damped only weakly. Convergence of BDF methods follows from Dahlquist's equivalence theorem, which states that a linear multistep method is convergent if and only if it is consistent and zero-stable. For BDF of order k \leq 6, consistency ensures a local truncation error of O(h^k), and under Lipschitz continuity of the right-hand side function f, the global error is O(h^k).
This theorem guarantees reliable approximation of solutions to ordinary differential equations whenever the method's stability properties align with the problem's eigenvalues. Stability regions in the complex plane illustrate these properties: for the first-order BDF (backward Euler), the region covers the entire left half-plane, providing unconditional stability for dissipative systems. The second-order region likewise contains the whole left half-plane, extending slightly into the right half-plane. For the sixth-order BDF, the region is fan-shaped, extending along the negative real axis out to large |z| but narrowing toward the imaginary axis, which limits its applicability to problems whose eigenvalues are confined to this sector.

Implementation and Extensions

Practical Implementation

Applying the backward differentiation formula (BDF) to a system of ordinary differential equations results in a nonlinear algebraic system that must be solved at each time step. The equation takes the form g(y_{n+1}) = \sum_{j=0}^{k} \alpha_j y_{n+1-j} - h \beta_0 f(t_{n+1}, y_{n+1}) = 0, where h is the step size, k is the order, and the coefficients \alpha_j and \beta_0 are determined by the method's order. This system is typically solved using the Newton-Raphson iteration, which iteratively updates an initial guess y^{(0)} via y^{(m+1)} = y^{(m)} - [Dg(y^{(m)})]^{-1} g(y^{(m)}), where Dg is the Jacobian matrix of g, until convergence within a specified tolerance. The iteration usually requires 3–5 steps per time step for stiff problems, depending on the nonlinearity. The Jacobian of g with respect to y_{n+1} is \frac{\partial g}{\partial y_{n+1}} = \alpha_0 I - h \beta_0 \frac{\partial f}{\partial y}(t_{n+1}, y_{n+1}), where I is the identity matrix and \alpha_0 = 1 in normalized form. Computing the exact \frac{\partial f}{\partial y} may be costly or unavailable, so approximations are common, such as finite-difference perturbations or a "frozen" Jacobian reused from previous steps to reduce evaluations of f. Modified Newton methods, which reuse the same Jacobian factorization across iterations, further improve efficiency while maintaining stability for moderately nonlinear systems. To initialize the multistep nature of BDF, which requires k prior solution values, the first k steps are often computed using an explicit Runge-Kutta method of comparable order, such as the classical fourth-order Runge-Kutta method for orders up to 4. This starting procedure provides accurate initial history values without solving additional implicit systems, though care must be taken to match the step size h for consistency. Error estimation and step-size control are essential for reliable integration. Embedded BDF variants, which compute two solutions of different orders simultaneously, allow estimation of the local error as the difference between them.
Alternatively, Milne's device, adapted for BDF, uses a predictor based on an explicit multistep formula to estimate the error via the difference between predicted and corrected values. The step size h is then adjusted based on this estimate to meet a user-specified tolerance, typically by h_{\text{new}} = h \cdot ( \text{tol} / |\text{error}| )^{1/(p+1)}, where p is the order. The computational cost per successful step is dominated by the Newton iterations and associated linear solves, which scale with the system dimension n depending on the solver (e.g., O(n^3) for dense factorization in direct methods, or better for sparse or iterative approaches); the multistep overhead is O(k n) with small fixed k (1–5), making BDF efficient for stiff systems relative to explicit methods.
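The Newton loop for the simplest case (BDF1) can be sketched as follows; `bdf1_step` and the test system are illustrative, not any particular solver's API:

```python
import numpy as np

def bdf1_step(f, jac, t1, y_prev, h, tol=1e-10, max_iter=20):
    """One backward Euler (BDF1) step: solve g(y) = y - y_prev - h*f(t1, y) = 0
    by Newton's method. `jac` returns df/dy; a frozen or finite-difference
    Jacobian could be substituted here."""
    y = y_prev.copy()                       # predictor: previous value
    I = np.eye(len(y))
    for _ in range(max_iter):
        g = y - y_prev - h * f(t1, y)
        if np.linalg.norm(g) < tol:
            break
        Dg = I - h * jac(t1, y)             # Jacobian of g
        y = y - np.linalg.solve(Dg, g)      # Newton update
    return y

# Stiff linear test system y' = A y with eigenvalues -1 and -1000.
A = np.array([[-1.0, 0.0], [0.0, -1000.0]])
f = lambda t, y: A @ y
jac = lambda t, y: A
y = np.array([1.0, 1.0])
for i in range(10):
    y = bdf1_step(f, jac, 0.1 * (i + 1), y, 0.1)
print(y)   # fast component damped to ~0; slow component near exp(-1)
```

For a linear system the Newton iteration converges in a single update; for nonlinear f the same loop runs a few iterations per step, as described above.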

Variable-Order and Step-Size Variants

Variable-order implementations of the backward differentiation formula (BDF) enable the method order to vary dynamically between 1 and 5, adjusting based on local error estimates to optimize the trade-off between accuracy and computational efficiency. This adaptability is particularly useful for problems where the solution's smoothness varies over the integration interval, allowing higher orders in smooth regions and lower orders during rapid changes. The Nordsieck form represents the solution history as a vector of scaled derivatives, which supports seamless order changes by updating this representation without storing all past solution values or recomputing them. Step-size control in these variants relies on embedded error estimation, often derived from the difference between solutions of the current order and a lower-order formula. The new step size h_{n+1} is then selected as h_{n+1} = h_n \left( \mathrm{tol} / \mathrm{err} \right)^{1/(k+1)}, where \mathrm{tol} is the user-specified tolerance, \mathrm{err} is the estimated local error, and k is the current order; this formula keeps the error within tolerance while accounting for the method's order-dependent convergence rate. Order changes are triggered if the error estimate suggests that increasing or decreasing k would improve efficiency, with safeguards to maintain stability. Prominent implementations include the LSODE and VODE solvers in the ODEPACK library, developed in the 1980s, which employ variable-order, variable-step BDF for stiff systems using Nordsieck storage and Newton iteration for the nonlinear solves. The modern SUNDIALS suite's CVODE module, introduced in the 1990s and refined through 2005, extends this approach with fixed-leading-coefficient formulas of orders 1 to 5, supporting variable steps and orders via similar error-based adaptation in a C implementation. MATLAB's ode15s solver, available since the 1990s, implements a variable-order (1 to 5) variant using numerical differentiation formulas (NDFs) by default but switches to BDF with the 'BDF' option, incorporating step-size control for stiff ODEs and DAEs.
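The controller formula above can be sketched in isolation. Here k = 1 (adaptive backward Euler), with step doubling standing in for an embedded error estimate; the problem and tolerances are illustrative:

```python
import math

# Adaptive BDF1 on y' = lam*y over [0, 1], using step doubling
# (one h-step vs. two h/2-steps) as the local error estimate.
lam, tol = -5.0, 1e-6

def be_step(y, h):
    return y / (1 - h * lam)        # one backward Euler step

t, y, h, steps = 0.0, 1.0, 1e-3, 0
while 1.0 - t > 1e-12:
    h = min(h, 1.0 - t)             # do not overshoot the endpoint
    y_full = be_step(y, h)
    y_half = be_step(be_step(y, h / 2), h / 2)
    err = abs(y_full - y_half)      # estimated local error
    if err <= tol:
        t, y, steps = t + h, y_half, steps + 1   # accept the better value
    # controller: h_new = h * (tol/err)^(1/(k+1)), k = 1, with safety factor 0.9
    h *= 0.9 * (tol / max(err, 1e-16)) ** 0.5
print(steps, abs(y - math.exp(lam)))
```

Rejected steps simply shrink h and retry; accepted steps grow h as the solution decays, so the step count is far below what a fixed step at the initial size would need.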
These adaptive features excel in handling transitions between stiff and nonstiff dynamics, as seen in ODEPACK's LSODA solver, which seamlessly switches from Adams methods to variable-order BDF when stiffness is detected, minimizing overhead during phase changes. They also facilitate integration through events or discontinuities by aggressively reducing step sizes near singularities based on error spikes, ensuring robustness without manual intervention. Post-2000 developments have focused on scalability for large-scale systems. Parallel block BDF methods distribute computations across processors by generating multiple solution points per step, achieving speedups on multicore architectures for stiff ODEs while preserving variable order and step adaptation. GPU-accelerated variants, such as a solver introduced in 2021 that ports the BDF loop to graphics hardware, yield up to 36-fold performance gains over CPU implementations for chemistry models involving thousands of stiff equations. Recent SUNDIALS releases have added scalable linear solvers and GPU offloading, and hybrid BDF variants continue to be developed for problems with oscillatory solutions.
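In Python, SciPy's `solve_ivp` exposes a variable-order BDF implementation of this kind (orders 1 to 5 with quasi-constant step size). A usage sketch on the stiff Van der Pol oscillator, assuming SciPy is available:

```python
from scipy.integrate import solve_ivp

# Van der Pol oscillator with mu = 1000, a standard stiff benchmark.
mu = 1000.0

def vdp(t, y):
    return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

sol = solve_ivp(vdp, (0.0, 1000.0), [2.0, 0.0], method="BDF",
                rtol=1e-6, atol=1e-9)
print(sol.success, sol.t.size)
```

The solver chooses order and step size internally; an explicit method at the same tolerances would require vastly more steps on this problem.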
