
Stiff equation

In numerical analysis, a stiff equation is an ordinary differential equation (ODE) for which certain numerical integration methods become unstable unless an impractically small step size is used, typically because the solution components evolve on widely disparate time scales. These equations commonly arise in systems whose Jacobian matrix has eigenvalues of widely varying magnitude, particularly eigenvalues with large negative real parts, producing rapid transients that decay quickly alongside slower, smooth solution components. Stiffness is not an intrinsic property of the equation alone but depends on the integration interval and the chosen numerical method; for instance, a system may be stiff over a long time horizon where stability requirements impose severe restrictions on explicit solvers.

The concept of stiffness originated in chemical kinetics modeling in the early 1950s, where researchers encountered difficulties integrating systems with both fast and slow reaction rates using explicit methods. C. F. Curtiss and J. O. Hirschfelder introduced the term "stiff" in 1952 to describe such equations, emphasizing the need for integration techniques that maintain stability without excessive computational cost. In 1963, Germund Dahlquist formalized the analysis through the notion of A-stability, a property ensuring that the numerical method's stability region includes the entire left half of the complex plane, which is crucial for handling the negative eigenvalues characteristic of stiff problems. C. William Gear further advanced practical solutions in 1971 by developing variable-order, variable-step backward differentiation formula (BDF) methods tailored for stiff systems, which have since become standard in software libraries such as MATLAB's ode15s and SUNDIALS.

Stiff equations are prevalent in applications such as chemical reaction networks, electrical circuits, and other models in which physical processes operate on multiple time scales. A key challenge is the inefficiency of explicit Runge-Kutta methods, which require step sizes proportional to the reciprocal of the largest eigenvalue magnitude, in contrast to the robustness of implicit methods such as backward Euler or the trapezoidal rule, whose A-stable or L-stable properties allow much larger steps. L-stability, an extension of A-stability, additionally ensures strong damping of fast transients, preventing numerical oscillations. Modern approaches often employ adaptive solvers that detect stiffness and switch between explicit and implicit schemes, balancing accuracy and efficiency.

Basic Concepts

Motivating Example

A fundamental example that motivates the study of stiff equations is the scalar linear test equation y' = \lambda y, where \lambda \in \mathbb{C} with \operatorname{Re}(\lambda) < 0 and |\lambda| large. The exact solution is y(t) = y(0) e^{\lambda t}, which exhibits rapid exponential decay toward zero on a short timescale of order 1/|\lambda|. This test equation, introduced by Germund Dahlquist in 1963, serves as a benchmark for analyzing the stability behavior of numerical methods on stiff problems. Applying the forward Euler method to this equation yields the recurrence y_{n+1} = y_n + h \lambda y_n = (1 + h \lambda) y_n, where h > 0 is the step size. For the numerical solution to remain bounded and mimic the decaying exact solution, the amplification factor must satisfy |1 + h \lambda| < 1. When \lambda is real and negative, this stability condition simplifies to h < 2 / |\lambda|. If |\lambda| is large, the permissible h becomes extremely small, on the order of the rapid decay timescale, necessitating many steps to integrate over the longer intervals of interest and rendering the method computationally inefficient. In contrast, for non-stiff scenarios where |\lambda| is moderate (e.g., |\lambda| \approx 1), the stability bound allows h to be chosen comparably large relative to the solution's natural timescale, enabling efficient simulation with explicit methods like forward Euler. This disparity highlights the core challenge of stiffness: explicit schemes impose severe step-size restrictions for stable solutions in stiff regimes, despite the underlying smooth behavior after initial transients.
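
The step-size restriction can be checked directly. The following sketch (plain Python; the value \lambda = -50 and the step sizes are illustrative choices) applies forward Euler to the test equation and shows the blow-up once h exceeds 2/|\lambda|:

```python
def forward_euler(lam, h, t_end, y0=1.0):
    """Integrate y' = lam*y with forward Euler and return the final value."""
    y = y0
    for _ in range(int(round(t_end / h))):
        y = (1.0 + h * lam) * y   # amplification factor 1 + h*lambda
    return y

lam = -50.0   # Re(lambda) < 0; stability needs |1 + h*lam| < 1, i.e. h < 0.04
for h in [0.1, 0.039, 0.01]:
    print(f"h = {h:5.3f} -> y(1) ~ {forward_euler(lam, h, 1.0): .3e}")
# h = 0.1 violates the bound and the iterates grow without bound;
# the two smaller step sizes produce a bounded, decaying solution.
```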

Definition of Stiffness

In the context of systems of ordinary differential equations (ODEs) of the form \mathbf{y}' = \mathbf{f}(t, \mathbf{y}), stiffness arises when the Jacobian matrix J = \frac{\partial \mathbf{f}}{\partial \mathbf{y}} has eigenvalues whose real parts exhibit widely varying magnitudes, such that the largest in absolute value significantly exceeds the reciprocal of the integration time interval \tau = t_f - t_0, i.e., \max_i |\operatorname{Re}(\lambda_i)| \gg 1/\tau. This disparity implies disparate time scales in the system's dynamics: some components decay or oscillate rapidly, while others evolve slowly, demanding numerical methods that can handle both without instability or excessive computational cost. For linear constant-coefficient systems \mathbf{y}' = A \mathbf{y}, stiffness is present when all eigenvalues have negative real parts (ensuring stability) but the stiffness ratio S = \max_i |\operatorname{Re}(\lambda_i)| / \min_i |\operatorname{Re}(\lambda_i)| is large, reflecting a broad spread in the rates of exponential decay across solution modes. In nonlinear systems, stiffness is analogously assessed through the local Jacobian at points along the solution trajectory or via the Lipschitz constant of \mathbf{f}, which bounds the function's sensitivity and corresponds to the norm of the Jacobian, providing a conservative measure of potential rapid variations. Alternatively, frozen Jacobian analysis linearizes the system by fixing the Jacobian at an initial point t_0, yielding \mathbf{u}' = J(t_0) \mathbf{u}, where the eigenvalue spread of this approximation indicates transient stiffness near t_0. Stiff systems must be distinguished from singular perturbation problems, which are characterized by a small parameter \epsilon > 0 multiplying the highest derivative (e.g., \epsilon \mathbf{y}'' + \mathbf{a}(t, \mathbf{y}) \mathbf{y}' + \mathbf{b}(t, \mathbf{y}) = \mathbf{0}), leading to stiffness as \epsilon \to 0 due to boundary or internal layers; however, general stiffness occurs without such an explicit small parameter and emphasizes multi-scale behavior over asymptotic analysis.
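
As a concrete illustration of the frozen Jacobian analysis described above, the following sketch (assuming NumPy; the Van der Pol system, the evaluation point, and the finite-difference Jacobian are illustrative choices) estimates J(t_0) and inspects its eigenvalue spread:

```python
import numpy as np

def f(t, y, mu=1000.0):
    """Van der Pol oscillator, a classic nonlinear system that is stiff for large mu."""
    return np.array([y[1], mu * (1 - y[0]**2) * y[1] - y[0]])

def frozen_jacobian(f, t, y, eps=1e-7):
    """Approximate J(t) = df/dy at one point by forward finite differences."""
    n = len(y)
    J = np.empty((n, n))
    f0 = f(t, y)
    for j in range(n):
        yp = y.copy()
        yp[j] += eps
        J[:, j] = (f(t, yp) - f0) / eps
    return J

t0, y0, tau = 0.0, np.array([2.0, 0.0]), 10.0   # tau: interval of interest
J = frozen_jacobian(f, t0, y0)
eigs = np.linalg.eigvals(J)
print("eigenvalues of J(t0):", eigs)            # roughly -3000 and -3.3e-4
print("max |Re(lambda)| * tau =", np.max(np.abs(eigs.real)) * tau)
# A value far above 1 signals stiffness near t0 by the max|Re| >> 1/tau test.
```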

Etymology

The term "stiff equation" first appeared in the chemical kinetics literature during the early 1950s to characterize ordinary differential equations (ODEs) modeling systems with vastly different timescales, such as those governing the rapid formation and decay of free radicals in complex reactions. In a seminal 1952 paper, C. F. Curtiss and J. O. Hirschfelder used the phrase to highlight equations where small changes in parameters could lead to numerical instability unless integrated with sufficiently small steps, emphasizing the challenges posed by transient components that decay quickly relative to the overall solution. The term gained prominence in in the 1960s through the work of C. W. Gear, who applied it to a broader class of ODEs exhibiting similar stability issues during . In his report, Gear introduced "stiff ordinary differential equations" and the notion of "stiff " for numerical methods, describing how fast-decaying solution components impose a resistance to change akin to in mechanical systems. Over time, the terminology shifted from Gear's emphasis on "stiffly stable" methods—referring to algorithms robust for such equations—to the contemporary standard of "stiff ," which now universally denotes the equations themselves in numerical methods texts and software documentation. This evolution reflects the growing recognition of stiffness as a fundamental property tied to the eigenvalues of the system's , distinct from but building on its roots.

Quantification of Stiffness

Stiffness Ratio

The stiffness ratio serves as a primary quantitative measure for assessing the degree of stiffness in linear systems of ordinary differential equations (ODEs), particularly those arising from linearization around an equilibrium or along a trajectory via the Jacobian. For a linear constant-coefficient system \mathbf{y}' = A \mathbf{y}, where A is a constant matrix, the stiffness ratio \rho is defined as \rho = \frac{\max_i |\operatorname{Re}(\lambda_i)|}{\min_j |\operatorname{Re}(\lambda_j)|}, with \lambda_i denoting the eigenvalues of A, all of which have negative real parts for stability; a large \rho (typically \rho \gg 1) indicates significant stiffness due to disparate timescales governed by the fast-decaying modes (large |\operatorname{Re}(\lambda_i)|) and slow modes (small |\operatorname{Re}(\lambda_j)|). In the context of the motivating test equation y' = \lambda y with \operatorname{Re}(\lambda) < 0 and integration over a time interval [0, T], the stiffness ratio simplifies to \rho \approx |\lambda| T, where T represents the timescale of interest (corresponding to the slow variation, akin to 1 / \min_j |\operatorname{Re}(\lambda_j)|); stiffness arises when \rho \gg 1, as the transient e^{\lambda t} decays rapidly compared to the interval length. For illustration, consider a simple 2×2 linear system with Jacobian eigenvalues \lambda_1 = -10^6 and \lambda_2 = -1; here \rho = 10^6 / 1 = 10^6, an extreme case in which the fast mode decays within 10^{-6} units of time while the slow mode persists over order-1 timescales. This ratio has direct implications for numerical methods: explicit schemes such as forward Euler impose a stability restriction h = O(1 / \max_i |\operatorname{Re}(\lambda_i)|) on the step size to keep the fast modes stable, rendering integration inefficient when \rho is large, since thousands or millions of steps may be needed to cover the slow timescale.
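
A minimal computation of these quantities for the 2×2 example above (a sketch using NumPy):

```python
import numpy as np

lams = np.array([-1.0e6, -1.0])   # eigenvalues of the 2x2 example

rho = np.max(np.abs(lams.real)) / np.min(np.abs(lams.real))
print(f"stiffness ratio rho = {rho:.1e}")        # 1.0e+06

# Forward Euler stability bound imposed by the fastest mode:
h_max = 2.0 / np.max(np.abs(lams.real))
print(f"forward Euler needs h < {h_max:.1e}")    # 2.0e-06
# Covering the slow timescale T = 10 at that step size:
print(f"steps required over T = 10: {int(10.0 / h_max):,}")   # 5,000,000
```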

Characterization of Stiffness

Stiffness in ordinary differential equations (ODEs) can be characterized asymptotically through singular perturbation theory, where the system exhibits multiple time scales due to a small parameter ε ≪ 1. Consider the prototypical semi-explicit system y' = f(y, z), ε z' = g(y, z), where y represents slow variables and z captures fast transients that decay rapidly on the scale ε, dominating the initial behavior before the solution settles onto the slow manifold. This structure highlights stiffness as a transient phenomenon, with fast components (rates scaled by ε^{-1}) enforcing quick relaxation to equilibrium while the overall solution remains smooth over longer intervals. Such asymptotic views emphasize that stiffness arises from disparate eigenvalues or rates, but focus on finite-time dynamics rather than long-term stability. In nonlinear settings, stiffness manifests through the condition number of the Jacobian matrix or variations in Lipschitz constants across the domain, quantifying ill-conditioning in the vector field. The condition number κ(J) = ||J|| ⋅ ||J^{-1}|| measures sensitivity to perturbations, becoming large when eigenvalues span wide scales, leading to regions where small changes in the state produce widely divergent rates of response. Similarly, the Lipschitz constant L = sup ||∂f/∂y|| bounds the rate of change, and its spatial variation indicates locally stiff regions where explicit methods fail due to stability constraints rather than truncation errors. These metrics extend linear analyses by capturing nonlinear amplification of transients, as seen in pseudospectral characterizations where non-normal Jacobians allow temporary growth despite stable eigenvalues. Stiff ODEs must be distinguished from oscillatory problems, which involve eigenvalues near the imaginary axis requiring fine resolution for accuracy in periodic components, and from higher-index differential-algebraic equations (DAEs), where index greater than 1 introduces algebraic constraints that cannot be resolved as pure differentials without index reduction. In stiff ODEs, the issue stems from rapidly decaying transients (eigenvalues with large negative real parts), not undamped oscillations or implicit relations. Practically, stiffness is detected when adaptive solvers reduce step sizes to satisfy stability bounds (e.g., h < 2/|λ_max| for explicit methods) while accuracy alone would permit larger steps, often confirmed by monitoring eigenvalue spreads or solver diagnostics in applications like chemical kinetics. The stiffness ratio provides one quantitative measure, but these advanced indicators offer deeper qualitative insights.
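
The practical detection criterion mentioned above can be sketched as a simple heuristic (the indicator function and the accuracy-limited step are assumptions for illustration, not a standard solver diagnostic):

```python
import numpy as np

def stiffness_flag(J, h_accuracy):
    """Ratio of the accuracy-limited step to the explicit stability-limited step.

    J          : (frozen) Jacobian at the current state
    h_accuracy : step size local error control alone would allow
    Values well above 1 indicate that stability, not accuracy, is binding.
    """
    eigs = np.linalg.eigvals(J)
    decaying = eigs.real[eigs.real < 0]
    if decaying.size == 0:
        return 0.0
    h_stability = 2.0 / np.max(np.abs(decaying))   # forward-Euler-type bound
    return h_accuracy / h_stability

J = np.array([[-1.0e4, 0.0],
              [0.0, -0.5]])
print("stiffness indicator:", stiffness_flag(J, h_accuracy=0.1))   # 500.0
```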

Stability Concepts

A-Stability

A-stability is a fundamental property in the numerical solution of ordinary differential equations (ODEs), especially stiff ones, as it ensures that the method remains stable without imposing severe restrictions on the step size for problems with eigenvalues in the left half of the complex plane. Introduced by Germund Dahlquist in 1963, a numerical method is defined to be A-stable if its region of absolute stability includes the entire left half-plane, that is, the set \{ z \in \mathbb{C} \mid \operatorname{Re}(z) \leq 0 \}. This property allows the method to integrate stable linear systems over long time intervals without the numerical solution growing unbounded, even when the step size h exceeds the reciprocal of the largest eigenvalue magnitude. To analyze A-stability, consider the scalar test equation y' = \lambda y with \operatorname{Re}(\lambda) < 0, which models the behavior of stiff components in linear systems. Applying a one-step method with step size h to this equation yields the iteration y_{n+1} = R(h\lambda) y_n, where R(z) is the stability function of the method. The method is A-stable if and only if |R(z)| \leq 1 for all z \in \mathbb{C} with \operatorname{Re}(z) \leq 0. This condition guarantees that the numerical solution decays or remains bounded, mimicking the analytical solution's behavior. For linear multistep methods of the form \sum_{j=0}^k \alpha_j y_{n+j} = h \sum_{j=0}^k \beta_j f_{n+j}, A-stability requires a generalized root condition on the associated characteristic polynomial. Specifically, for every z with \operatorname{Re}(z) \leq 0, all roots \xi of the equation \rho(\xi) - z \sigma(\xi) = 0 must satisfy |\xi| \leq 1, where \rho(\xi) = \sum_{j=0}^k \alpha_j \xi^j and \sigma(\xi) = \sum_{j=0}^k \beta_j \xi^j are the characteristic polynomials of the method; moreover, roots with |\xi| = 1 must be simple. Dahlquist's second barrier theorem establishes a fundamental limitation: no consistent linear multistep method of order greater than 2 can be A-stable. The proof involves expanding the stability condition using power series and applying order conditions, showing that higher-order terms inevitably cause the stability region to exclude parts of the left half-plane. A related result holds for explicit Runge-Kutta methods: no such method can be A-stable, since the stability function R(z) of an explicit method is a nonconstant polynomial with R(0) = 1, and a nonconstant polynomial satisfies |R(z)| \to \infty as |z| \to \infty in the left half-plane, violating the boundedness that A-stability requires.
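
A numerical check of the condition |R(z)| \leq 1 can be sketched as follows (assuming NumPy; since the trapezoidal stability function is analytic for \operatorname{Re}(z) < 0 and the RK4 stability polynomial is entire, sampling the imaginary axis suffices by the maximum modulus principle):

```python
import numpy as np

def R_trapezoidal(z):
    """Stability function of the trapezoidal rule."""
    return (1 + z / 2) / (1 - z / 2)

def R_rk4(z):
    """Stability polynomial of the classical explicit 4th-order Runge-Kutta method."""
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# R_trapezoidal has its only pole at z = 2, so both functions are analytic
# in the left half-plane and |R| <= 1 on the imaginary axis decides A-stability.
z = 1j * np.linspace(-100, 100, 20001)
print("trapezoidal: max |R(iy)| =", np.max(np.abs(R_trapezoidal(z))))  # 1.0
print("RK4:         max |R(iy)| =", np.max(np.abs(R_rk4(z))))          # ~4e6
```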

L-Stability

L-stability is a desirable property for numerical methods solving ordinary differential equations (ODEs), particularly those with stiff components, as it builds upon A-stability by incorporating strong damping of rapidly decaying transients. A one-step method is defined as L-stable if it is A-stable—meaning its region of absolute stability includes the entire left half of the complex plane—and if \lim_{z \to -\infty} |R(z)| = 0, where R(z) is the stability function of the method applied to the scalar test equation y' = \lambda y with \operatorname{Re}(\lambda) < 0. This additional condition ensures that the numerical solution mimics the exact solution's exponential decay for large negative eigenvalues, preventing persistent oscillations or slow convergence that can occur with merely A-stable methods. The practical importance of L-stability is most evident in variable-step-size integrators for stiff ODEs, where initial transients dominated by fast modes require small steps for accuracy, but subsequent smooth behavior allows larger steps once these modes are adequately damped. Without the damping provided by L-stability, methods may remain constrained to small steps to avoid instability or oscillations, leading to inefficient computation; L-stable methods, by contrast, permit efficient step-size adaptation after the transients subside, improving overall performance on problems like chemical kinetics or electrical circuits with disparate timescales. A classic example illustrating L-stability is the backward Euler method, an implicit first-order scheme whose stability function is given by R(z) = \frac{1}{1 - z}. For large negative z (corresponding to stiff components with \lambda h \ll -1), |R(z)| \to 0, confirming that the method damps transients rapidly while maintaining A-stability across the left half-plane. In the context of embedded Runge-Kutta methods designed for adaptive integration of stiff systems, L-stability is frequently paired with stiff accuracy, a structural property where the coefficients of the last stage match the weights of the solution formula (a_{sj} = b_j for all j). This ensures that the method not only damps stiff modes effectively but also avoids order reduction in the presence of large Lipschitz constants, as the final stage directly yields the accurate solution without additional evaluations; examples include diagonally implicit Runge-Kutta (DIRK) schemes like ESDIRK3(2)4LSA, which achieve both properties for enhanced efficiency on nonequilibrium stiff problems.
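
The contrast between L-stability and mere A-stability is visible directly in the limit of the stability functions (a small sketch in plain Python; the sample points are arbitrary):

```python
def R_backward_euler(z):
    return 1.0 / (1.0 - z)

def R_trapezoidal(z):
    return (1.0 + z / 2) / (1.0 - z / 2)

# Behavior as z -> -infinity along the real axis:
for z in (-1e2, -1e4, -1e8):
    print(f"z = {z:.0e}:  |R_BE(z)| = {abs(R_backward_euler(z)):.2e},  "
          f"R_trap(z) = {R_trapezoidal(z):+.6f}")
# Backward Euler damps stiff modes completely (|R| -> 0: L-stable), while the
# trapezoidal rule gives R(z) -> -1, so a very stiff mode is carried along
# with nearly undamped, sign-alternating amplitude despite A-stability.
```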

Runge–Kutta Methods

The Euler Methods

The Euler methods represent the simplest one-step approaches for numerically solving ordinary differential equations, consisting of the explicit forward Euler method and the implicit backward Euler method, both of first-order accuracy. These methods are particularly illustrative when applied to stiff problems, where stability constraints dominate the choice of step size h. For the scalar test equation y' = \lambda y with \operatorname{Re}(\lambda) < 0 and large |\lambda| characterizing stiffness, the forward Euler method imposes severe restrictions on h, while the backward Euler method permits much larger steps without compromising stability. The forward Euler method is defined by the update formula y_{n+1} = y_n + h f(y_n), which is explicit and requires no iterative solution at each step. Its stability is analyzed via the test equation, yielding the amplification factor 1 + h\lambda, with the stability region given by |1 + z| \leq 1 where z = h\lambda, a disk in the left half of the complex plane centered at -1 with radius 1. For stiff problems with large |\lambda|, this confines h to values of order 1/|\lambda| (specifically h < 2/|\lambda| for real negative \lambda), often leading to inefficient computation over the integration interval [0, T]. Consequently, the forward Euler method is not A-stable, as its stability region does not encompass the entire left half-plane, making it unsuitable for stiff equations without excessively small steps. In contrast, the backward Euler method uses the implicit formula y_{n+1} = y_n + h f(y_{n+1}), with amplification factor 1/(1 - h\lambda) for the test equation, resulting in a stability region that is the exterior of the circle |1 - z| = 1, which covers the entire left half-plane \operatorname{Re}(z) < 0. This renders the method A-stable and L-stable, allowing step sizes h \approx T even for highly stiff components with large |\lambda|, as the numerical solution remains bounded and damps parasitic modes appropriately. Both methods achieve first-order accuracy with local truncation error O(h^2), but the backward Euler method requires solving a (generally nonlinear) equation at each step, typically via fixed-point iteration or Newton's method, adding computational cost per step that is offset by the larger feasible h. Numerical comparisons on the test equation highlight these differences: for \lambda = -1000 over [0, 1], the forward Euler method demands h < 2/1000 = 0.002 to avoid instability, requiring at least hundreds of steps, whereas the backward Euler method maintains stability and accuracy with h = 0.1 or larger, using far fewer steps.
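
The comparison in the last sentence can be reproduced in a few lines (a sketch; \lambda = -1000 and h = 0.1 as in the text):

```python
import math

lam, t_end, h = -1000.0, 1.0, 0.1
n = int(round(t_end / h))

y_fwd = y_bwd = 1.0
for _ in range(n):
    y_fwd = (1 + h * lam) * y_fwd    # forward Euler: multiplied by -99 per step
    y_bwd = y_bwd / (1 - h * lam)    # backward Euler: multiplied by 1/101 per step

print(f"forward Euler : y(1) ~ {y_fwd:.3e}")    # ~ 9e19, catastrophic growth
print(f"backward Euler: y(1) ~ {y_bwd:.3e}")    # ~ 9e-21, correctly negligible
print(f"exact         : y(1) = {math.exp(lam * t_end):.3e}")  # underflows to 0
```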

The Trapezoidal Method

The implicit trapezoidal method, also known as the Crank–Nicolson method in the context of time discretization of partial differential equations, is a second-order accurate numerical integrator for ordinary differential equations (ODEs) of the form y' = f(t, y). The method approximates the solution at the next time step t_{n+1} = t_n + h using the average of the right-hand side evaluated at the current and future points: y_{n+1} = y_n + \frac{h}{2} \left( f(t_n, y_n) + f(t_{n+1}, y_{n+1}) \right). This formulation arises from applying the trapezoidal rule to the integral form of the ODE and is implicit, requiring the solution of an equation involving y_{n+1}. The stability function for the trapezoidal method applied to the test equation y' = \lambda y is R(z) = \frac{1 + z/2}{1 - z/2}, where z = h \lambda. It is A-stable, with the region of absolute stability encompassing the entire left half of the complex plane (\operatorname{Re}(z) \leq 0), making it suitable for mildly stiff problems where eigenvalues lie in this region. However, it is not L-stable, as |R(z)| \to 1 as z \to -\infty along the negative real axis, so high-frequency components are preserved rather than damped rapidly. This property can lead to persistent oscillations in solutions with rapidly decaying components, limiting its efficiency for highly stiff systems with strong transients. To solve the implicit equation, the method typically employs fixed-point iteration or, for stiff nonlinear f, Newton's method, which linearizes around an initial guess using the Jacobian J = \frac{\partial f}{\partial y}. The resulting linear system is (I - \frac{h}{2} J) \Delta y = r, where r is the residual, and convergence of the iteration benefits from the contractivity of the problem in the stable region. In applications, the trapezoidal method is particularly valuable for time discretizations of parabolic partial differential equations (PDEs), such as the heat equation, whose spatial discretization yields semi-discrete stiff ODE systems. Originally developed for such diffusion problems, it provides unconditional stability and second-order accuracy in time, enabling larger step sizes compared to explicit methods while maintaining accuracy for smooth solutions.
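
A sketch of one trapezoidal step with a Newton inner iteration (the scalar test problem, the forward Euler predictor, and the tolerances are illustrative assumptions):

```python
import numpy as np

def trapezoidal_step(f, dfdy, t, y, h, tol=1e-12, maxit=20):
    """One step of the implicit trapezoidal rule, solved by Newton's method.

    Solves g(Y) = Y - y - (h/2)*(f(t, y) + f(t + h, Y)) = 0 for Y = y_{n+1}.
    """
    fn = f(t, y)
    Y = y + h * fn                            # forward Euler predictor as guess
    for _ in range(maxit):
        g = Y - y - 0.5 * h * (fn + f(t + h, Y))
        dg = 1.0 - 0.5 * h * dfdy(t + h, Y)   # scalar analogue of (I - h/2 J)
        dY = -g / dg
        Y += dY
        if abs(dY) < tol:
            break
    return Y

# Stiff scalar problem y' = lam*(y - sin t) + cos t, y(0) = 1,
# with exact solution y(t) = sin t + exp(lam*t).
lam = -1000.0
f = lambda t, y: lam * (y - np.sin(t)) + np.cos(t)
dfdy = lambda t, y: lam

t, y, h = 0.0, 1.0, 0.05
while t < 1.0 - 1e-12:
    y = trapezoidal_step(f, dfdy, t, y, h)
    t += h
print(f"y(1) = {y:.6f}, exact = {np.sin(1.0):.6f}")
# With lam*h = -50 the transient is multiplied by R(-50) = -24/26 per step,
# so it decays only slowly with alternating sign: the lack of L-stability
# discussed above shows up as a lingering O(0.2) error at t = 1.
```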

General Theory

Runge–Kutta methods provide a general framework for one-step numerical integration of initial value problems y' = f(t, y), y(t_0) = y_0, through intermediate stages defined by a Butcher tableau with coefficients a_{ij}, b_i, and c_i. For an s-stage method, the stages are computed as k_i = f\left( t_n + c_i h, \, y_n + h \sum_{j=1}^{s} a_{ij} k_j \right), \quad i = 1, \dots, s, and the update is y_{n+1} = y_n + h \sum_{i=1}^{s} b_i k_i. Explicit methods have strictly lower triangular A = (a_{ij}), while implicit methods allow general a_{ij}, requiring iterative solvers for the nonlinear stage equations. The order of the method is determined by the satisfaction of order conditions on the elementary weights, ensuring local truncation error O(h^{p+1}) for order p. For stiff equations, stability is analyzed using the scalar test equation y' = \lambda y with \operatorname{Re}(\lambda) < 0, yielding the stability function R(z) = 1 + z b^T (I - z A)^{-1} \mathbf{e}, where z = h \lambda, b = (b_i), and \mathbf{e} is the vector of ones. The region of absolute stability S is the set of z where |R(z)| \leq 1; for linear systems y' = J y, stability requires h \lambda \in S for all eigenvalues \lambda of J. A method is A-stable if the stability region includes the entire left half-plane \operatorname{Re}(z) \leq 0, crucial for stiff problems with widely varying negative eigenvalues, as it allows step sizes independent of the stiff scales. Explicit RK methods have polynomial R(z), leading to bounded stability regions unsuitable for stiff ODEs, whereas implicit RK methods yield rational R(z) that can achieve A-stability. Examples include the Gauss–Legendre collocation methods, which are A-stable and attain the maximal order 2s for s stages. L-stability requires A-stability plus \lim_{z \to -\infty} R(z) = 0, ensuring damping of transients; methods like Radau IIA are L-stable and preferred for highly stiff systems. Unlike linear multistep methods, there is no fundamental order barrier for A-stability in RK methods, allowing high-order implicit schemes, though computational cost increases with stages and implicit solves, often mitigated by diagonally implicit or singly implicit variants for efficiency in stiff applications.
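
The stability function formula can be evaluated directly from a Butcher tableau (a sketch assuming NumPy; the two tableaux shown, the implicit midpoint rule and the classical explicit RK4 method, are standard examples):

```python
import numpy as np

def stability_function(A, b, z):
    """R(z) = 1 + z * b^T (I - z A)^{-1} e for a Runge-Kutta tableau (A, b)."""
    s = len(b)
    e = np.ones(s)
    return 1 + z * b @ np.linalg.solve(np.eye(s) - z * A, e)

# Implicit midpoint rule (1-stage Gauss method, order 2, A-stable):
A_mid, b_mid = np.array([[0.5]]), np.array([1.0])
# Classical explicit RK4:
A_rk4 = np.array([[0.0, 0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.0, 0.0],
                  [0.0, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
b_rk4 = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0

for z in (-1.0, -10.0, -1000.0):
    print(f"z = {z:8.1f}: "
          f"|R_midpoint| = {abs(stability_function(A_mid, b_mid, z)):.4f}, "
          f"|R_RK4| = {abs(stability_function(A_rk4, b_rk4, z)):.3e}")
# The implicit midpoint rule reproduces R(z) = (1 + z/2)/(1 - z/2), bounded
# by 1 for Re(z) < 0 (A-stable); the explicit RK4 polynomial blows up.
```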

Linear Multistep Methods

The Adams–Bashforth Method

The Adams–Bashforth method represents a class of explicit linear multistep methods designed primarily for the numerical solution of non-stiff initial value problems of the form y' = f(t, y). Developed in the 19th century by John Couch Adams and Francis Bashforth for applications like capillary action modeling, it approximates the solution by extrapolating from previous values of y and f, leveraging historical data to achieve higher efficiency than one-step methods for smooth, non-stiff dynamics. The method's derivation relies on polynomial interpolation of the integrand f(t, y(t)) at prior time points to approximate the integral form of the differential equation over the step interval [t_n, t_{n+1}]. For the second-order variant, a linear polynomial is fitted through the points (t_n, f(t_n, y_n)) and (t_{n-1}, f(t_{n-1}, y_{n-1})), and integrating this interpolant yields the update formula: y_{n+1} = y_n + \frac{h}{2} \left( 3 f(t_n, y_n) - f(t_{n-1}, y_{n-1}) \right). This formula has a local truncation error of order O(h^3), making it suitable for moderately accurate integration when starting values are available from a one-step method such as a Runge–Kutta scheme. However, the explicit construction limits the method's stability properties, resulting in a bounded region of absolute stability in the complex plane that does not encompass the entire left half-plane, meaning it is not A-stable. For the linear test equation y' = \lambda y with \operatorname{Re}(\lambda) < 0, stability requires the step size h to satisfy h = O(1 / |\lambda|_{\max}), where |\lambda|_{\max} reflects the scale of the fastest transient components. In stiff systems, where this maximum eigenvalue magnitude is large due to disparate time scales, the method demands impractically small steps to avoid oscillations or divergence, rendering it unsuitable for such problems. Despite these limitations, the Adams–Bashforth method remains integral to variable-step implementations for non-stiff equations, often serving as the predictor in predictor-corrector schemes paired with implicit Adams–Moulton correctors, where it enables efficient advancement without solving nonlinear systems at each step. On stiff test equations, such as y' = -k y with large k > 0, it exhibits explosive instability even at step sizes adequate for accuracy, underscoring the need for implicit alternatives in stiff contexts.
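
That instability on a stiff test equation can be reproduced directly (a sketch; the values of k and h and the exact starting value are illustrative choices):

```python
import math

k, h, t_end = 1000.0, 0.01, 0.2   # stiff decay y' = -k*y; note h > 2/k
f = lambda y: -k * y

# AB2 needs one starting value; take the exact solution at t = h.
y_prev, y = 1.0, math.exp(-k * h)
for _ in range(1, int(round(t_end / h))):
    # y_{n+1} = y_n + h/2 * (3 f_n - f_{n-1})
    y, y_prev = y + 0.5 * h * (3 * f(y) - f(y_prev)), y

print(f"AB2 at t = {t_end}: {y:.3e}   (exact: {math.exp(-k * t_end):.3e})")
# Although h = 0.01 easily resolves the smooth long-time behavior, h*k = 10
# lies far outside AB2's small stability region and the iterates explode.
```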

Backward Differentiation Formulas

The Backward Differentiation Formulas (BDFs) constitute a family of implicit linear multistep methods that are particularly effective for numerically integrating stiff ordinary differential equations (ODEs), owing to their robust stability characteristics in the left half of the complex plane. Developed by C. William Gear as part of his foundational work on stiff systems, BDFs approximate the solution by backward-differencing the values at previous time steps, resulting in fully implicit schemes that require solving nonlinear equations at each step, typically via Newton's method. Unlike explicit methods, which suffer from severe step-size restrictions for stiff problems, BDFs allow larger steps while maintaining accuracy and damping the spurious oscillations associated with stiff components. The general k-step BDF takes the form \sum_{j=0}^k \alpha_j y_{n+j} = h \beta_k f(y_{n+k}), where h is the step size and the coefficients \alpha_j and \beta_k > 0 are determined by differentiating the polynomial interpolating the stored solution values, yielding a method of order k with local truncation error O(h^{k+1}). This formulation ensures zero-stability and convergence for orders up to 6, making BDFs suitable for variable-step implementations. For illustration, the second-order BDF (k=2) is y_{n+1} - \frac{4}{3} y_n + \frac{1}{3} y_{n-1} = \frac{2}{3} h f(y_{n+1}), which balances computational cost with improved accuracy over the backward Euler method while preserving strong stability. BDF methods exhibit favorable stability properties for stiff ODEs: orders 1 and 2 are A-stable, meaning their stability regions encompass the entire left half-plane, while orders 3 through 6 are A(\alpha)-stable with \alpha decreasing from approximately 86° for order 3 to roughly 18° for order 6, providing partial coverage of the left half-plane sufficient for many practical stiff problems. Additionally, the low-order BDFs (1 and 2) are L-stable, damping fast modes completely as h\lambda \to -\infty, which is crucial for efficiently handling rapidly decaying components in stiff systems without introducing oscillations. These properties follow from the root behavior of the characteristic polynomials in the absolute-stability analysis. In practical applications, BDFs are widely implemented in numerical software for both ODEs and differential-algebraic equations (DAEs), with variable order selection typically capped at 5 to avoid the reduced stability of order 6. For instance, the SUNDIALS suite employs variable-order, variable-coefficient BDFs in its solvers for implicit ODEs and index-1 DAEs, supporting adaptive stepping on large-scale stiff problems in scientific computing and leveraging the methods' strong stability at low orders to handle algebraic constraints robustly.
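
A minimal BDF2 integration of the stiff linear example from the stiffness-ratio discussion (a sketch assuming NumPy; because the system is linear, each implicit step reduces to a linear solve with a constant matrix):

```python
import numpy as np

# Stiff linear system y' = A y with eigenvalues -1e6 and -1.
A = np.array([[-1.0e6, 0.0],
              [0.0, -1.0]])
I = np.eye(2)

h = 0.01
y0 = np.array([1.0, 1.0])
y1 = np.linalg.solve(I - h * A, y0)      # one backward Euler step to start

# BDF2: y_{n+1} - (4/3) y_n + (1/3) y_{n-1} = (2/3) h A y_{n+1}
M = I - (2.0 / 3.0) * h * A              # constant; factor once in practice
y_prev, y = y0, y1
for _ in range(int(round(1.0 / h)) - 1):
    rhs = (4.0 / 3.0) * y - (1.0 / 3.0) * y_prev
    y, y_prev = np.linalg.solve(M, rhs), y

print("BDF2  y(1) ~", y)
print("exact y(1) =", np.array([np.exp(-1.0e6), np.exp(-1.0)]))
# With h = 0.01 (h*|lambda_max| = 1e4) the implicit BDF2 step stays stable and
# accurate for the slow component; an explicit method would need h < 2e-6.
```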

General Theory

Linear multistep methods approximate solutions to the initial value problem y' = f(t, y), y(t_0) = y_0, through the general k-step formula \sum_{j=0}^k \alpha_j y_{n+j} = h \sum_{j=0}^k \beta_j f(t_{n+j}, y_{n+j}), where h > 0 is the step size, t_n = t_0 + n h, \alpha_k = 1, and not all \beta_j are zero. This form is associated with the characteristic polynomials \rho(z) = \sum_{j=0}^k \alpha_j z^j and \sigma(z) = \sum_{j=0}^k \beta_j z^j, which determine the method's local accuracy and stability properties. Zero-stability requires that all roots of \rho(z) = 0 satisfy |z| \leq 1, with any roots of modulus 1 being simple; this Dahlquist root condition ensures bounded propagation of perturbations in the numerical solution for the test equation y' = 0. Consistency, necessary for the local truncation error to vanish as h \to 0, holds if \rho(1) = 0 and \rho'(1) = \sigma(1). Dahlquist's convergence theorem establishes that a linear multistep method converges to the exact solution as h \to 0 if and only if it is zero-stable and consistent, with the global error bounded by terms involving the maximum local truncation error. For stiff equations, absolute stability is analyzed using the linear test equation y' = \lambda y with \operatorname{Re}(\lambda) < 0, where the region of absolute stability consists of those z = h\lambda for which all roots of \rho(\zeta) - z \sigma(\zeta) = 0 satisfy |\zeta| \leq 1. The boundary of this region can be determined via the boundary locus method, which traces the curve in the complex z-plane obtained by setting \zeta = e^{i\theta} for \theta \in [0, 2\pi) and evaluating z = \rho(e^{i\theta}) / \sigma(e^{i\theta}). A method is A-stable if its region of absolute stability contains the entire left half of the complex plane, ensuring stability for arbitrarily large h|\lambda| in stiff components. Dahlquist's second barrier theorem proves that no zero-stable linear multistep method of order greater than 2 can be A-stable. To address stiff decay in long-time integrations, implicit methods with \beta_k > 0 are essential, as this condition promotes damping of the modes associated with stiff eigenvalues, preventing oscillations and keeping errors bounded as t \to \infty. L-stability extends A-stability by requiring complete damping as |z| \to \infty in the left half-plane, further enhancing performance for very stiff problems.
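
The boundary locus computation can be sketched directly from the characteristic polynomials (assuming NumPy; the coefficient normalizations below are illustrative):

```python
import numpy as np

def boundary_locus(alpha, beta, n=256):
    """Trace z(theta) = rho(e^{i theta}) / sigma(e^{i theta}) for an LMM.

    alpha, beta: coefficients alpha_j, beta_j for j = 0..k, ascending powers.
    """
    zeta = np.exp(1j * np.linspace(0.0, 2 * np.pi, n, endpoint=False))
    rho = np.polyval(alpha[::-1], zeta)    # sum_j alpha_j * zeta^j
    sigma = np.polyval(beta[::-1], zeta)
    return rho / sigma

# BDF2, normalized as (3/2) y_{n+2} - 2 y_{n+1} + (1/2) y_n = h f_{n+2}:
alpha_bdf2 = np.array([0.5, -2.0, 1.5])
beta_bdf2 = np.array([0.0, 0.0, 1.0])
# AB2: y_{n+2} - y_{n+1} = h ((3/2) f_{n+1} - (1/2) f_n):
alpha_ab2 = np.array([0.0, -1.0, 1.0])
beta_ab2 = np.array([-0.5, 1.5, 0.0])

for name, a, b in (("BDF2", alpha_bdf2, beta_bdf2), ("AB2", alpha_ab2, beta_ab2)):
    z = boundary_locus(a, b)
    print(f"{name}: Re(z) on the locus spans [{z.real.min():.2f}, {z.real.max():.2f}]")
# BDF2's locus stays in Re(z) >= 0, so its stability region contains the whole
# left half-plane (A-stability); AB2's locus reaches only to z = -1 on the
# real axis, matching its small real stability interval (-1, 0).
```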
