Stiff equation
In numerical analysis, a stiff equation is an ordinary differential equation (ODE) for which certain numerical integration methods become unstable unless an impractically small step size is used, typically because the solution components evolve on widely disparate time scales. Such equations commonly arise in systems whose Jacobian matrix has eigenvalues of widely varying magnitudes, in particular eigenvalues with large negative real parts, which produce rapidly decaying transients alongside slower, smooth solution components.[1] Stiffness is not an intrinsic property of the equation alone but depends on the integration interval and the chosen numerical method; for instance, a system may be stiff only over a long time horizon, where stability rather than accuracy imposes severe step-size restrictions on explicit solvers.[2]

The concept of stiffness originated in chemical kinetics modeling in the early 1950s, where researchers encountered difficulties integrating systems with both fast and slow reaction rates using explicit methods. C. F. Curtiss and J. O. Hirschfelder introduced the term "stiff" in 1952 to describe such equations, emphasizing the need for integration techniques that maintain stability without excessive computational cost. In 1963, Germund Dahlquist formalized the analysis through the notion of A-stability, the property that a method's stability region contains the entire left half of the complex plane, which is crucial for handling the large negative eigenvalues characteristic of stiff problems.[2] C. William Gear further advanced practical solution techniques in 1971 by developing variable-order, variable-step backward differentiation formula (BDF) methods tailored to stiff systems, which have since become standard in software libraries such as MATLAB's ode15s and SUNDIALS.[3]
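As a minimal sketch (an illustration under assumptions, not an example taken from the sources above), the following Python code integrates the simple stiff linear test equation y'(t) = -λ(y - cos t) with λ = 50. The step size is chosen just beyond the explicit stability limit 2/λ, so the forward (explicit) Euler iterates grow without bound while the backward (implicit) Euler iterates remain close to the smooth solution.

```python
# Sketch: forward vs. backward Euler on the stiff test equation y' = -lam*(y - cos t).
# With lam = 50 the explicit stability limit is h <= 2/lam = 0.04; h = 0.05 violates it.
import math

lam = 50.0          # fast decay rate; the stiff eigenvalue of this problem is -lam
h = 0.05            # step size slightly larger than the explicit stability limit 2/lam
t_end = 2.0
n_steps = int(t_end / h)

def f(t, y):
    return -lam * (y - math.cos(t))

y_fe = 0.0          # forward Euler iterate
y_be = 0.0          # backward Euler iterate

for n in range(n_steps):
    t = n * h
    # Forward (explicit) Euler: y_{n+1} = y_n + h*f(t_n, y_n)
    y_fe = y_fe + h * f(t, y_fe)
    # Backward (implicit) Euler: y_{n+1} = y_n + h*f(t_{n+1}, y_{n+1});
    # for this linear ODE the implicit update can be solved in closed form.
    y_be = (y_be + h * lam * math.cos(t + h)) / (1.0 + h * lam)

print(f"forward Euler at t={t_end}:  {y_fe:.3e}")   # oscillates and grows without bound
print(f"backward Euler at t={t_end}: {y_be:.3e}")   # stays near the smooth solution ~cos(t)
```

Refining the explicit step to h < 0.04 restores stability, which is exactly the "impractically small step size" behavior described above: the step is limited by the fast, already-decayed transient rather than by the accuracy needed for the slow solution.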
Stiff equations are prevalent in applications such as chemical reaction networks, electrical circuits, and combustion modeling, where physical processes operate on multiple scales.[1] The central trade-off is between explicit Runge-Kutta methods, whose stable step size is inversely proportional to the largest eigenvalue magnitude, and implicit methods such as the backward Euler or trapezoidal rule, whose A-stable or L-stable properties permit much larger steps at the cost of solving an algebraic system at each step.[4] L-stability, a strengthening of A-stability, additionally ensures that fast transients are damped rather than producing numerical oscillations.[4] Modern approaches often employ adaptive solvers that detect stiffness and switch between explicit and implicit schemes, balancing accuracy and efficiency.[1]
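The cost gap between explicit and implicit solvers can be seen on a stiff chemical-kinetics benchmark. The sketch below uses SciPy's solve_ivp (SciPy is an assumption for illustration, not a library cited above) to integrate the Robertson reaction system with an explicit Runge-Kutta method (RK45) and an implicit BDF method, comparing how many right-hand-side evaluations each needs.

```python
# Sketch (illustration only): explicit RK45 vs. implicit BDF on the Robertson
# chemical-kinetics system, a classic stiff benchmark. The explicit solver's step
# size is limited by stability, not accuracy, so it needs far more evaluations.
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
             0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2**2,
             3.0e7 * y2**2]

y0 = [1.0, 0.0, 0.0]
t_span = (0.0, 100.0)

for method in ("RK45", "BDF"):
    sol = solve_ivp(robertson, t_span, y0, method=method, rtol=1e-6, atol=1e-8)
    print(f"{method:>5}: {sol.nfev:8d} RHS evaluations, "
          f"{sol.t.size:6d} accepted time points, success={sol.success}")
```

In a run of this kind the implicit BDF solver typically finishes in a few hundred steps, while the explicit solver takes orders of magnitude more, mirroring the behavior of stiffness-aware production codes such as ode15s and the SUNDIALS solvers mentioned above.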