
Backward Euler method

The Backward Euler method, also known as the implicit Euler method, is a first-order numerical integration technique used to approximate solutions to initial value problems for ordinary differential equations (ODEs) of the form y' = f(t, y) with initial condition y(t_0) = y_0. It advances the solution from y_n to y_{n+1} using the formula y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}), where h is the time step, making it an implicit one-step method that requires solving a generally nonlinear equation for y_{n+1} at each step, typically via root-finding algorithms such as Newton's method. This implicit formulation contrasts with the explicit forward Euler method, y_{n+1} = y_n + h f(t_n, y_n), by evaluating the right-hand side at the future time t_{n+1}, which enhances numerical stability. A key advantage of the Backward Euler method is its unconditional stability for stiff ODEs, particularly linear systems with eigenvalues in the left half of the complex plane; for example, in the test equation y' = \lambda y with \Re(\lambda) < 0, the amplification factor satisfies |1 / (1 - h\lambda)| < 1 for all h > 0, preventing oscillations or blow-up even with large step sizes. However, its global error is O(h), arising from a local truncation error of O(h^2), which limits accuracy compared to higher-order methods like Runge-Kutta schemes. The method's computational cost is higher due to the need for iterative solvers per step, but it remains widely used in applications involving stiff systems, such as chemical kinetics or electrical circuits, where stability outweighs efficiency concerns. Extensions include modified versions for improved accuracy or adaptation to partial differential equations via finite differences.

Overview

Definition and Basic Concept

The Backward Euler method is an implicit numerical integration technique for solving initial value problems of ordinary differential equations (ODEs) given by y' = f(t, y). It approximates the solution at the next time step t_{n+1} = t_n + h using the formula y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}), where h > 0 is the fixed step size and y_n denotes the approximation at t_n. This approach requires solving an algebraic equation for y_{n+1} at each step, distinguishing it from explicit methods. The core idea behind the Backward Euler method lies in evaluating the derivative f at the endpoint of the integration interval rather than the starting point, which introduces the implicit dependence on the unknown y_{n+1}. This design makes the method particularly suitable for stiff ODEs, where the solution components vary on vastly different time scales and explicit methods often fail due to stability restrictions. In contrast to explicit Euler, which uses f(t_n, y_n), the backward evaluation enhances damping of high-frequency modes, providing unconditional stability for linear test problems with negative eigenvalues. A simple illustration is the linear ODE y' = -y with exact solution y(t) = e^{-t} and initial condition y(0) = 1. Applying one Backward Euler step with h = 0.1 from t_0 = 0, y_0 = 1 gives y_1 = 1 + 0.1 (-y_1), or y_1 (1 + 0.1) = 1, so y_1 = 1 / 1.1 \approx 0.9091; the exact value at t = 0.1 is approximately 0.9048. This example illustrates the method's first-order accuracy once the implicit equation has been solved.
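The single step above can be reproduced in a few lines of Python (a minimal sketch; because f is linear in y here, the implicit equation has a closed-form solution):

```python
# One Backward Euler step for y' = -y, y(0) = 1, with h = 0.1.
# The implicit equation y1 = y0 + h*(-y1) is linear, so it solves in
# closed form: y1 = y0 / (1 + h).
import math

h = 0.1
y0 = 1.0
y1 = y0 / (1 + h)       # solves y1 = y0 - h*y1 exactly
exact = math.exp(-h)    # exact solution e^{-t} at t = 0.1

print(f"backward Euler: {y1:.4f}")    # 0.9091
print(f"exact:          {exact:.4f}") # 0.9048
```

For a nonlinear right-hand side the division by (1 + h) would be replaced by a root-finding step, as discussed under Practical Implementation.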

Historical Development

The Backward Euler method, also known as the implicit Euler method, has roots in the late 18th and early 19th centuries, emerging from early efforts in finite difference approximations and discrete calculus for solving ordinary differential equations (ODEs). In 1768, Leonhard Euler introduced elementary difference methods for approximating solutions to ODEs, providing the foundational framework for both explicit and implicit variants. These ideas were later extended by 19th-century developments in finite difference theory. A key milestone came in 1824 when Augustin-Louis Cauchy proved the convergence of Euler's method, explicitly employing the implicit Euler formulation in his analysis to establish error bounds. This work highlighted the method's theoretical underpinnings, though it remained largely theoretical at the time. By the late 19th century, Carl Runge advanced numerical integration techniques in his 1895 paper on solving differential equations. The method's implicit nature gained prominence in the 1950s amid growing awareness of stiff ODEs in chemical kinetics and engineering, where explicit methods failed due to stability issues. In their seminal 1952 paper, C.F. Curtiss and J.O. Hirschfelder analyzed stiff systems and advocated implicit methods as effective for integrating equations with widely varying timescales. This marked a shift toward recognizing its utility beyond simple non-stiff problems. In the 1960s, the method evolved through C. William Gear's development of backward differentiation formulas (BDFs), where the first-order BDF coincides exactly with Backward Euler; Gear's 1967 paper and 1971 book formalized variable-order BDFs for stiff problems, leading to widespread adoption in computational chemistry. Often overshadowed by explicit methods prior to World War II due to computational costs, Backward Euler's value surged with electronic computers, enabling efficient handling of stiff systems.
Since the late 20th century, it has been integrated into software like MATLAB's ode15s solver, which can optionally employ low-order BDF variants including Backward Euler for stiff ODEs.

Mathematical Foundation

Derivation from Taylor Series

The Backward Euler method can be derived by considering the Taylor series expansion of the solution y(t) to the ordinary differential equation y'(t) = f(t, y(t)) around the point t_{n+1}, where t_{n+1} = t_n + h and h > 0 is the step size. The expansion yields y(t_n) = y(t_{n+1}) - h y'(t_{n+1}) + \frac{h^2}{2} y''(\xi) for some \xi \in (t_n, t_{n+1}); dropping the remainder term gives a first-order approximation. To arrive at the backward form, approximate the derivative at the endpoint t_{n+1} using the backward finite difference formula, which states y'(t_{n+1}) \approx \frac{y(t_{n+1}) - y(t_n)}{h}. This approximation introduces an error of order O(h), as confirmed by expanding y(t_n) around t_{n+1} via Taylor series, but it aligns with evaluating the right-hand side function at the future point for the method's implicit structure. Substituting the ODE y'(t_{n+1}) = f(t_{n+1}, y(t_{n+1})) into the backward difference gives f(t_{n+1}, y(t_{n+1})) \approx \frac{y(t_{n+1}) - y(t_n)}{h}, which rearranges to the implicit equation y(t_{n+1}) = y(t_n) + h f(t_{n+1}, y(t_{n+1})). Discretizing with approximations y_n \approx y(t_n) and y_{n+1} \approx y(t_{n+1}), the Backward Euler formula becomes y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}), where the function evaluation at the endpoint (t_{n+1}, y_{n+1}) contrasts with the forward Euler's evaluation at the starting point (t_n, y_n), leading to the method's implicit nature while preserving first-order accuracy.

Formulation for General ODEs

The Backward Euler method extends naturally to initial value problems (IVPs) for systems of ordinary differential equations (ODEs), where the unknown \mathbf{y}(t) is a vector in \mathbb{R}^d. Consider the general IVP given by \mathbf{y}'(t) = \mathbf{f}(t, \mathbf{y}(t)) with initial condition \mathbf{y}(t_0) = \mathbf{y}_0, where \mathbf{f}: \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}^d. The method approximates the solution at discrete time points t_n = t_0 + n h (for step size h > 0) via the implicit scheme \mathbf{Y}_{n+1} = \mathbf{Y}_n + h \mathbf{f}(t_{n+1}, \mathbf{Y}_{n+1}), where \mathbf{Y}_n \approx \mathbf{y}(t_n). For linear systems of the form \mathbf{y}'(t) = A \mathbf{y}(t) + \mathbf{c}(t), where A is a constant d \times d matrix and \mathbf{c}(t) is a known forcing term, the Backward Euler scheme rearranges into the linear system (I - h A) \mathbf{Y}_{n+1} = \mathbf{Y}_n + h \mathbf{c}(t_{n+1}). This matrix equation can be solved for \mathbf{Y}_{n+1} using standard linear algebra techniques, such as Gaussian elimination or iterative solvers, depending on the size and structure of A. In the nonlinear case, where \mathbf{f}(t, \mathbf{y}) is nonlinear in \mathbf{y}, the scheme \mathbf{Y}_{n+1} = \mathbf{Y}_n + h \mathbf{f}(t_{n+1}, \mathbf{Y}_{n+1}) defines \mathbf{Y}_{n+1} implicitly as a fixed point of the nonlinear map \mathbf{g}(\mathbf{z}) = \mathbf{Y}_n + h \mathbf{f}(t_{n+1}, \mathbf{z}). Under suitable conditions on \mathbf{f} (e.g., Lipschitz continuity), the fixed-point iteration converges to a unique solution. For boundary value problems (BVPs), where conditions are specified at multiple points (e.g., \mathbf{y}(t_0) = \mathbf{y}_a and \mathbf{y}(t_f) = \mathbf{y}_b), the Backward Euler method can be adapted by embedding it within a shooting approach: treat the BVP as an IVP with guessed initial values, integrate forward using the scheme, and adjust the guess iteratively to match the terminal boundary conditions.
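The linear-system case above can be sketched in pure Python (the function names are illustrative, not from any library; a 2x2 solve via Cramer's rule stands in for the LU factorization a real implementation would use):

```python
# Backward Euler for a linear 2x2 system y' = A y + c(t): each step solves
# (I - h A) y_new = y_old + h c(t_new).

def solve2(M, b):
    """Solve a 2x2 linear system M x = b by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

def backward_euler_linear(A, c, y0, t0, h, n_steps):
    """Integrate y' = A y + c(t) with a fixed step h; c maps t to a vector."""
    M = [[1 - h * A[0][0], -h * A[0][1]],
         [-h * A[1][0], 1 - h * A[1][1]]]   # I - h A, constant for all steps
    y, t = list(y0), t0
    for _ in range(n_steps):
        t += h
        ct = c(t)
        y = solve2(M, [y[0] + h * ct[0], y[1] + h * ct[1]])
    return y

# Stiff example: eigenvalues -1 and -1000; the fast mode is damped at once
# even though h = 0.1 is far larger than the fast time scale 1/1000.
A = [[-1.0, 0.0], [0.0, -1000.0]]
y = backward_euler_linear(A, lambda t: [0.0, 0.0], [1.0, 1.0], 0.0, 0.1, 10)
```

Because A is constant, the matrix I - h A is formed once and reused every step, which is what makes LU factorization pay off in larger systems.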

Numerical Properties

Stability and Stiff Equation Handling

The Backward Euler method exhibits absolute stability for the scalar test equation y' = \lambda y with \operatorname{Re}(\lambda) < 0, where the amplification factor satisfies \left| \frac{1}{1 - h \lambda} \right| < 1 for any step size h > 0. This condition ensures that numerical solutions decay without amplification, mirroring the analytical behavior of the exact solution y(t) = y(0) e^{\lambda t}, which approaches zero as t \to \infty. The region of absolute stability thus encompasses the entire left half of the complex plane, qualifying the method as A-stable—a property first formalized by Germund Dahlquist in his analysis of stability for linear multistep methods. In stiff systems, characterized by eigenvalues with widely varying magnitudes (e.g., ratios of 10^3 or more), explicit methods like forward Euler impose severe restrictions on the step size h to maintain stability, often requiring h < 2 / |\lambda_{\max}| to prevent explosive growth or oscillations. The Backward Euler method circumvents this by permitting larger h values without instability, as its implicit formulation inherently damps spurious oscillations arising from stiff components. This makes it particularly effective for problems in chemical kinetics or circuit simulation, where disparate time scales dominate. Although the method is only first-order accurate, it overcomes the stiffness-handling limitations of higher-order explicit schemes, which possess stability regions that exclude portions of the left half-plane; this aligns with Dahlquist's second barrier, which prohibits A-stability for linear multistep methods of order greater than two. For linear problems, the Backward Euler method provides unconditional stability, effectively damping high-frequency modes that correspond to large negative eigenvalues. Consider the stiff test equation y' = -100 y, y(0) = 1, whose exact solution decays rapidly as y(t) = e^{-100 t}.
With a step size h = 0.025 (40 steps over [0,1]), the forward Euler method produces oscillatory solutions that deviate wildly from the smooth decay, while Backward Euler yields a monotonically decreasing approximation closely tracking the exact curve, demonstrating robust damping of the stiff mode without step-size refinement.
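The comparison above is easy to verify numerically; for this linear test problem both updates have closed forms, so the sketch below needs no solver:

```python
# Forward vs. backward Euler on the stiff problem y' = -100 y, y(0) = 1,
# with h = 0.025 (40 steps over [0, 1]).  Forward Euler multiplies by
# (1 + h*lam) each step; backward Euler divides by (1 - h*lam).
lam, h, n = -100.0, 0.025, 40

fe, be = 1.0, 1.0
for _ in range(n):
    fe *= 1 + h * lam   # factor -1.5: |factor| > 1, oscillating growth
    be /= 1 - h * lam   # factor 1/3.5: monotone decay toward zero

print(fe)   # grows to ~1.1e7 in magnitude, oscillating in sign along the way
print(be)   # a tiny positive number, qualitatively matching exact e^{-100}
```

The forward iterate flips sign every step because 1 + h*lam = -1.5, exactly the oscillatory blow-up the text describes, while the backward iterate stays positive and decreasing.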

Order of Accuracy and Error Analysis

The Backward Euler method is a first-order numerical scheme for solving initial value problems of ordinary differential equations (ODEs), characterized by its local truncation error (LTE) of order O(h^2), where h is the step size. To derive this, consider the exact solution y(t) satisfying y'(t) = f(t, y(t)). Substituting into the method's update formula y(t_{n+1}) = y(t_n) + h f(t_{n+1}, y(t_{n+1})) yields the residual. Using Taylor expansion of y(t_n) around t_{n+1}, y(t_n) = y(t_{n+1}) - h y'(t_{n+1}) + \frac{h^2}{2} y''(\xi) for some \xi \in (t_n, t_{n+1}), the difference y(t_{n+1}) - y(t_n) - h y'(t_{n+1}) simplifies to \frac{h^2}{2} y''(\xi), confirming the LTE as O(h^2). Dividing by h gives the per-step truncation error rate of O(h), but the absolute local error per step remains O(h^2). The global error of the Backward Euler method, defined as the difference between the exact solution and the numerical approximation over a fixed interval, is O(h) under standard assumptions, including the Lipschitz continuity of f with respect to its arguments. Convergence to this order follows from consistency and stability: the local errors accumulate, but bounded propagation ensures the total error scales linearly with h. A standard proof outlines this by expressing the global error e_n = y(t_n) - y_n via the variation-of-constants formula, then applying the discrete Gronwall inequality to bound \max_n |e_n| \leq C h for some constant C > 0 independent of h, assuming f satisfies Lipschitz conditions. The method is consistent because the LTE approaches zero as h \to 0, satisfying the necessary condition for convergence in the Lax equivalence theorem for linear problems. In stiff systems, where explicit methods like forward Euler suffer from error explosion due to instability, the Backward Euler method's unconditional stability ensures that error propagation remains bounded, preserving the O(h) global accuracy even for large step sizes.
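The O(h) global error can be confirmed empirically: halving the step size should roughly halve the error at a fixed endpoint. A minimal sketch on the test problem y' = -y (where each implicit step solves in closed form):

```python
# Empirical order check: global error of backward Euler on y' = -y,
# y(0) = 1, measured at t = 1 for successively halved step sizes.
import math

def be_error(h):
    n = round(1 / h)
    y = 1.0
    for _ in range(n):
        y /= 1 + h      # closed-form backward Euler step for y' = -y
    return abs(y - math.exp(-1))

errs = [be_error(h) for h in (0.1, 0.05, 0.025)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]
# Each ratio is close to 2, consistent with global error O(h).
```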

Practical Implementation

Solving the Implicit Equation

The Backward Euler method leads to an implicit equation of the form Y_{n+1} = Y_n + h f(t_{n+1}, Y_{n+1}), which generally requires iterative solvers to resolve the nonlinear algebraic system for Y_{n+1}. For nonlinear problems, common approaches include fixed-point iteration and the Newton-Raphson method, while linear problems allow direct algebraic solutions. The choice among these techniques depends on the problem's stiffness and the acceptable computational cost. Fixed-point iteration reformulates the implicit equation as a fixed point Y_{n+1} = g(Y_{n+1}), where g(Y) = Y_n + h f(t_{n+1}, Y), and iterates via Y^{(k+1)} = Y_n + h f(t_{n+1}, Y^{(k)}) starting from an initial guess, such as Y^{(0)} = Y_n. Convergence is guaranteed locally if the mapping g is contractive, which holds when the step size h satisfies h L < 1, with L being the Lipschitz constant of f with respect to Y; this condition is typically met for sufficiently small h, though the method may diverge for larger steps in stiff systems. The Newton-Raphson method provides quadratic convergence and is more robust for larger steps, solving the nonlinear system F(Y) = Y - Y_n - h f(t_{n+1}, Y) = 0 through iterations J \delta Y = -F(Y^{(k)}), Y^{(k+1)} = Y^{(k)} + \delta Y, where J = I - h \frac{\partial f}{\partial Y}(t_{n+1}, Y^{(k)}) is the Jacobian matrix. The Jacobian can be computed analytically if f permits, or approximated via finite differences, such as directional derivatives along coordinate axes to avoid full matrix evaluations. This approach requires solving a linear system at each iteration, often via LU factorization, and with suitable damping or line-search strategies it converges reliably for well-posed problems.
In the linear case, where f(t, Y) = A Y + b(t) with constant matrix A, the implicit equation simplifies to the linear system (I - h A) Y_{n+1} = Y_n + h b(t_{n+1}), which can be solved directly by matrix inversion or, more efficiently, by LU factorization of the coefficient matrix, especially for repeated solves in multi-step contexts. This avoids iteration altogether and highlights the method's suitability for linear stiff systems. For efficiency in stiff nonlinear systems, the modified Newton method, or simplified Newton iteration, reuses a "frozen" Jacobian from a previous step or an approximate one, updating it less frequently to reduce computational overhead while maintaining good convergence properties; this variant is particularly effective in variable-step solvers where Jacobian recomputation is costly.
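The Newton approach can be sketched for a scalar nonlinear problem (the test ODE y' = -y^3 and all names below are illustrative assumptions, not from any library; in the scalar case the Jacobian of F reduces to the number 1 - h df/dy):

```python
# One backward Euler step for a nonlinear scalar ODE, solving
# F(Y) = Y - Y_n - h f(t_{n+1}, Y) = 0 with Newton's method,
# sketched for y' = -y**3 with f and df/dy supplied analytically.

def f(t, y):
    return -y**3

def dfdy(t, y):
    return -3 * y**2

def backward_euler_step(yn, tn, h, tol=1e-12, max_iter=20):
    t1 = tn + h
    y = yn                        # initial Newton guess: previous value
    for _ in range(max_iter):
        F = y - yn - h * f(t1, y)
        J = 1 - h * dfdy(t1, y)   # scalar Jacobian of F
        dy = -F / J
        y += dy
        if abs(dy) < tol:
            break
    return y

y1 = backward_euler_step(1.0, 0.0, 0.1)
# y1 satisfies y1 = 1 - 0.1 * y1**3 up to the tolerance.
```

Using Y^{(0)} = Y_n as the starting guess, as here, is the naive choice; a predictor step (see Modified Backward Euler Schemes) usually reduces the iteration count.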

Computational Efficiency and Tools

The computational cost per step of the Backward Euler method for an n-dimensional system arises primarily from forming the Jacobian matrix, which has O(n²) entries for dense systems, and from factorizing and solving the associated linear system, which costs up to O(n³) with dense direct methods. These costs can be reduced dramatically, often to near O(n), by exploiting sparsity in the Jacobian, as is common in many physical models where dependencies are local, allowing for efficient storage and computation using sparse linear algebra routines. Despite the elevated per-step expense compared to explicit methods, which avoid implicit solves, the Backward Euler approach offers substantial efficiency gains for stiff ordinary differential equations by enabling larger time steps and thus fewer overall iterations to reach a given simulation endpoint. This trade-off is particularly advantageous in stiff regimes, where explicit schemes would demand prohibitively small steps to maintain stability. Furthermore, integration with adaptive step-size control—often via embedded error estimators that monitor local truncation errors—allows dynamic adjustment of the time step to optimize the balance between accuracy and total computational work. Implementations of the Backward Euler method are readily available in established numerical libraries, facilitating its use in scientific computing workflows. In MATLAB, stiff problems are typically handled by ode15s, whose BDF option includes Backward Euler as its first-order member, while ode23s offers a related low-order Rosenbrock method for moderately stiff problems. Python's SciPy suite supports it through the solve_ivp function with the 'BDF' method option, which employs backward differentiation formulae starting from order one (equivalent to Backward Euler) and adapts orders as needed. Legacy Fortran tools like ODEPACK's LSODE routine also provide robust support, using variable-order BDF methods that encompass Backward Euler for order-one integration of stiff systems.
Since the 2000s, the Backward Euler method has demonstrated strong potential for parallelization in large-scale simulations via domain decomposition strategies, which partition the spatial domain into subdomains solvable concurrently, thereby scaling efficiency with available processors while preserving the method's implicit stability.

Applications and Comparisons

Use in Scientific Computing

The Backward Euler method is widely employed in chemical kinetics simulations, particularly for modeling reaction-diffusion systems characterized by stiff rate equations, such as those encountered in combustion processes. In these applications, the method's unconditional stability allows for larger time steps without numerical instability, enabling efficient resolution of fast reaction timescales alongside slower diffusion. For instance, in reactive-flow simulations of combustion, backward Euler has been integrated with GPU acceleration to handle moderately stiff ODE systems derived from detailed chemical mechanisms, achieving significant speedups over explicit alternatives while maintaining accuracy. In electrical engineering, the Backward Euler method plays a central role in transient analysis of circuits, especially within SPICE-like simulation tools for RLC networks exhibiting fast transients. It is used to compute time-domain responses by solving the implicit system formed from circuit equations, providing numerical damping that prevents oscillations and ensures convergence in stiff scenarios like those involving capacitors and inductors. This approach is particularly valuable when initial DC operating point calculations fail, allowing simulations to ramp sources gradually and stabilize to equilibrium. For fluid dynamics, the Backward Euler method is applied to parabolic partial differential equations through the method of lines, where spatial discretization reduces the PDE to a system of stiff ODEs that the method solves implicitly. This is essential for handling diffusion-dominated flows, such as viscous heat transfer or solute transport, where explicit schemes would require prohibitively small time steps due to stability constraints. The resulting tridiagonal systems are efficiently solved, supporting simulations of phenomena like boundary layer development in low-Reynolds-number flows. 
In climate modeling, the Backward Euler method has been utilized since the 1990s for simulating ocean circulation, addressing stiffness arising from multi-scale physics including fast vertical mixing and slow large-scale currents. Models like the Finite Element Ocean Model (FEOM) employ it for time-stepping vertical viscosity terms, ensuring stability in unstructured grid representations of global ocean dynamics coupled to atmospheric components. This facilitates long-term projections of thermohaline circulation patterns critical to climate variability.

Comparison with Explicit Methods

The forward Euler method, an explicit scheme given by y_{n+1} = y_n + h f(t_n, y_n), is unstable for stiff ordinary differential equations (ODEs), where the stability condition |1 + h \lambda| \leq 1 imposes severe restrictions on the step size h, often requiring h < 2 / |\lambda| for eigenvalues \lambda with large negative real parts. In contrast, the backward Euler method is unconditionally stable for such problems due to its implicit formulation, allowing larger step sizes without instability. Explicit methods like forward Euler are computationally cheaper per step, as they avoid solving nonlinear equations, but their stability constraints necessitate many small steps for stiff systems, leading to higher overall costs. The backward Euler method, while more expensive per step owing to the need for iterative solvers (e.g., Newton methods), permits larger h, reducing the total number of steps and making it more efficient for stiff ODEs despite the per-step overhead. Consider the stiff test problem y' = -50 y + 51 \cos t + 49 \sin t, y(0) = 1, whose exact solution is y(t) = \sin t + \cos t. With forward Euler and h = 0.5, the solution explodes to errors on the order of 10^{11} by t = 5 due to instability, whereas backward Euler maintains errors below 10^{-2} even at t = 10. This disparity highlights how explicit methods demand h \approx 0.01 or smaller for accuracy in stiff cases, while backward Euler achieves comparable results with h = 0.5. A key advantage of backward Euler is its L-stability, which ensures that as h \to \infty, the numerical solution damps stiff components (e.g., rapidly decaying exponentials) to zero in one step, providing superior handling of transient decay in stiff problems compared to explicit methods.
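The stiff test problem above can be integrated with backward Euler in a few lines; because f is linear in y, each implicit step again solves in closed form (a sketch, with the step count and tolerance chosen for illustration):

```python
# Backward Euler on y' = -50 y + 51 cos t + 49 sin t, y(0) = 1,
# exact solution y(t) = sin t + cos t, with the large step h = 0.5.
# Each implicit step y_new = y + h*(-50 y_new + g(t_new)) rearranges to
# y_new = (y + h g(t_new)) / (1 + 50 h).
import math

def g(t):
    return 51 * math.cos(t) + 49 * math.sin(t)

h, y, t = 0.5, 1.0, 0.0
while t < 10 - 1e-12:        # 20 steps: t = 0.5, 1.0, ..., 10.0
    t += h
    y = (y + h * g(t)) / (1 + 50 * h)

err = abs(y - (math.sin(10) + math.cos(10)))
# err stays small despite the large step; forward Euler with the same h
# would blow up, since |1 + h*(-50)| = 24 > 1.
```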

Extensions and Variants

Backward Differentiation Formulae Connection

The Backward Differentiation Formulae (BDF) constitute a family of implicit multi-step methods for the numerical solution of ordinary differential equations, where the derivative is approximated using finite backward differences based on previous solution values. These methods are particularly valued for their stability in solving stiff systems. The first-order BDF, known as BDF1, is identical to the Backward Euler method, reducing to the single-step implicit formula y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}). In general, a k-th order BDF approximates the solution at step n+k via the linear combination \sum_{j=0}^k \alpha_j y_{n+j} = h \beta_k f(t_{n+k}, y_{n+k}), where the coefficients satisfy \alpha_k = 1 and, for k=1, \alpha_0 = -1 and \beta_1 = 1, yielding precisely the Backward Euler scheme. This formulation leverages backward differences to ensure strong stability at low orders, with BDF1 being L-stable, making it suitable for stiff problems. The Backward Euler method extends naturally within the BDF framework to higher-order variants, such as BDF2, which incorporates two prior steps for second-order accuracy while maintaining an implicit structure optimized for stiffness. Higher-order BDFs up to order 6 are employed for enhanced accuracy in stiff contexts, beyond which stability deteriorates. The BDF family, with Backward Euler as its foundational member, was introduced by Curtiss and Hirschfelder in 1952 and brought into widespread use through C. W. Gear's work around 1971, including its application to differential-algebraic equations, for which it has since become a standard approach.
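The relation between BDF1 and BDF2 can be illustrated on the linear test problem y' = -y, where both implicit updates solve in closed form (a sketch; the BDF2 run is started with one backward Euler step, a common bootstrapping choice):

```python
# BDF1 (backward Euler) vs. BDF2 on y' = -y, y(0) = 1, integrated to t = 1.
# BDF2: y_{n+2} - (4/3) y_{n+1} + (1/3) y_n = (2/3) h f(t_{n+2}, y_{n+2}).
import math

def bdf_errors(h, lam=-1.0):
    n = round(1 / h)
    # BDF1: y_{k+1} = y_k / (1 - h*lam)
    y = 1.0
    for _ in range(n):
        y /= 1 - h * lam
    e1 = abs(y - math.exp(-1))
    # BDF2, bootstrapped with one backward Euler step:
    y0, y1 = 1.0, 1.0 / (1 - h * lam)
    for _ in range(n - 1):
        y2 = ((4 / 3) * y1 - (1 / 3) * y0) / (1 - (2 / 3) * h * lam)
        y0, y1 = y1, y2
    e2 = abs(y1 - math.exp(-1))
    return e1, e2

# Halving h roughly halves the BDF1 error but quarters the BDF2 error,
# reflecting their first- and second-order convergence.
```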

Modified Backward Euler Schemes

Modified Backward Euler schemes seek to mitigate the limitations of the standard Backward Euler method, particularly its first-order accuracy, by introducing modifications that enhance computational efficiency or achieve higher-order approximations while maintaining the inherent L-stability essential for stiff ordinary differential equations (ODEs). These variants typically involve linearization techniques, hybrid explicit-implicit strategies, or multi-stage structures that reduce the cost of solving implicit systems without compromising stability. Such improvements make them suitable for applications requiring robust performance on large-scale, nonlinear problems. Rosenbrock methods, also known as ROW (Rosenbrock-Wanner) methods, represent a class of linearly implicit Runge-Kutta schemes derived from the Backward Euler framework by linearizing the nonlinear function at each step using its (possibly approximate) Jacobian, thereby avoiding full nonlinear solves at each stage. This linearization replaces the costly Newton iterations of fully implicit methods with a fixed number of linear solves per step. These methods retain the damping properties of Backward Euler for stiff components while enabling larger step sizes through reduced per-step costs. Predictor-corrector approaches combine an explicit predictor step with the Backward Euler method as the corrector to accelerate convergence of the implicit solve. An explicit scheme, such as forward Euler, generates an initial guess \tilde{y}_{n+1} for the next time level, which serves as the starting point for the Newton iterations in the Backward Euler corrector y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}). This hybrid strategy typically requires fewer Newton iterations—often just one or two—compared to solving the implicit equation from a naive initial guess, thereby lowering overall computational expense while preserving first-order accuracy and stability.
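The predictor-corrector idea can be sketched for a scalar problem (the test ODE y' = -50(y - cos t) and all names here are illustrative assumptions; since f is linear in y, a single Newton correction already solves the implicit equation exactly):

```python
# Forward Euler predictor followed by one Newton correction of the
# backward Euler equation, for the stiff scalar ODE y' = -50 (y - cos t).
import math

def f(t, y):
    return -50 * (y - math.cos(t))

def dfdy(t, y):
    return -50.0

def pc_step(yn, tn, h):
    t1 = tn + h
    y_pred = yn + h * f(tn, yn)           # explicit forward Euler predictor
    # One Newton iteration on F(Y) = Y - yn - h f(t1, Y), from y_pred:
    F = y_pred - yn - h * f(t1, y_pred)
    J = 1 - h * dfdy(t1, y_pred)
    return y_pred - F / J                 # corrected backward Euler value
```

For a genuinely nonlinear f the correction would be iterated to a tolerance, but the predictor usually leaves only one or two iterations of work, which is the efficiency argument made above.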
Diagonally implicit Runge-Kutta (DIRK) methods generalize the Backward Euler approach by employing a lower triangular Butcher tableau with nonzero diagonal entries, allowing the stages to be solved sequentially, each involving only a system of the ODE's dimension rather than one coupling all stages. The Backward Euler method corresponds precisely to a one-stage DIRK scheme, where the single stage satisfies y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}). Extending to multiple stages enables higher-order accuracy (up to order equal to the number of stages) through careful coefficient selection, with the diagonal structure ensuring computational costs scale linearly with the stage count rather than cubically as in fully implicit Runge-Kutta methods. Singly diagonally implicit Runge-Kutta (SDIRK) schemes, a subclass of DIRK with identical diagonal elements, emphasize stiff accuracy—where the final stage matches the Backward Euler update—to minimize trailing error in stiffly dominated flows. These methods have been integral to modern computational fluid dynamics (CFD) simulations since the 1980s, providing higher-order alternatives to the low-accuracy Backward Euler for resolving unsteady, stiff phenomena like turbulence and shock interactions in Navier-Stokes equations.

References

  1. [1]
    1.3: Backward Euler method - Mathematics LibreTexts
    Jul 26, 2022 · The backward Euler method is an iterative method which starts at an initial point and walks the solution forward using the iteration y n + 1 − h ...Backward Euler algorithm · Stability of backward Euler for...
  2. [2]
    MATHEMATICA Tutorial for first course, part 1.3: Backward Euler ...
    The backward Euler formula is an implicit one-step numerical method for solving initial value problems for first order differential equations.
  3. [3]
    Forward and Backward Euler Methods - MIT
    A convergent numerical method is the one where the numerically computed solution approaches the exact solution as the step size approaches 0. Once again, if the ...
  4. [4]
    [PDF] Explicit and Implicit Methods In Solving Differential Equations
    May 9, 2010 · The backward Euler's method is an implicit one which contrary to explicit methods finds the solution by solving an equation involving the ...<|control11|><|separator|>
  5. [5]
    [PDF] Unit III: Numerical Calculus Chapter III.3: ODE Initial Value Problems
    Backward Euler is an implicit method, we have to solve for yk+1 which requires some extra work: yk+1 = yk + hf (tk+1,yk+1). 11 / 58. Page 12. Implicit vs.Missing: integration | Show results with:integration
  6. [6]
    [PDF] The STIFF ODE Backward Euler and implicit ODE solvers
    To make an implicit version of the Euler method, we start out by writing the Euler update equation again, except that we evaluate the right hand side of the ...Missing: definition integration
  7. [7]
    [PDF] A DISCUSSION ON EULER METHOD: A REVIEW
    Abstract. Notwithstanding the efforts of earlier workers some fundamental aspects of an introductory course on numerical methods have been overlooked.
  8. [8]
    integration of stiff equations - PNAS
    MATHEMATICS: CURTISS AND HIRSCHFELDER PRoC. N. A. S.. The right-hand side of this equation represents a general function of x and y which for each value of x ...
  9. [9]
    Backward differentiation formulas - Scholarpedia
    Aug 8, 2007 · These are numerical integration methods based on Backward Differentiation Formulas (BDFs). They are particularly useful for stiff differential equations.Missing: 1960s | Show results with:1960s
  10. [10]
    ode15s - Solve stiff differential equations and DAEs - MathWorks
    ode15s and ode23t can solve problems with a mass matrix that is singular, known as differential-algebraic equations (DAEs).Missing: euler | Show results with:euler
  11. [11]
    [PDF] Euler's Method, Taylor Series Method, Runge Kutta Methods, Multi ...
    Taylor Series Methods: To derive these methods we start with a Taylor Expansion: y(t + ∆t) ≈ y(t)+∆ty0(t) + 1 2 ∆t2y00(t) + ... + 1 r! y(r)(t)∆tr. ∆tf0.
  12. [12]
    [PDF] ODEs: Backward Euler - TarHeels.live
    We can derive Forward Euler's Method by using Taylor series: yn+1 = y(tn + ... For Backward Euler, we use Taylor series, but expand differently: y(tn) ...
  13. [13]
    Solution Methods for IVPs: Backward (Implicit) Euler Method
    The backward Euler method is termed an “implicit” method because it uses the slope \frac{\mathrm{d}x}{\mathrm{d}t}=F(x,t) at the unknown point x_{iBackward (Implicit) Euler Method · Example 1 · Example 2 · Example 3
  14. [14]
    [PDF] ( ) 3 ( ) ( )
    Note: the backward Euler method is an implicit method. In the backward Euler method, yn+1 is not given as an explicit expression. Instead, yn+1 is given.
  15. [15]
    [PDF] An Introduction to Numerical Methods for ODEs
    4, we arrive at the Backward Euler method: yn+1 = yn + hf (tn+1,yn+1) (4) Notice however that this equation defines yn+1 implicitly.
  16. [16]
    [PDF] Computational science Organization Ordinary differential equations
    Systems of ode's. • Consider a system of coupled ... Backward-Euler. • We ... (I-hA)u(t+h) = u(t)+hc(t+h). • Backward-Euler is example of implicit method of.
  17. [17]
    [PDF] AM 213B Prof. Daniele Venturi Numerical methods to solve ...
    . Shooting method. The shooting method is a method for solving a boundary value problem by reducing it an to initial value problem which is then solved ...
  18. [18]
    [PDF] ODE Stability
    Backward Euler's Method yk+1 = yk + h f (tk+1,yk+1). Apply Backward Euler's Method to dy dt. = λy, with y (0) = y0 we get yk+1 = yk + h λyk+1.
  19. [19]
    [PDF] A special stability problem for linear multistep methods - Math-Unipd
    BIT 3 (1963), 27--43. A SPECIAL STABILITY PROBLEM FOR LINEAR. MULTISTEP METHODS*. GERMUND G. DAHLQUIST. Abstract. The trapezoidal formula has the smallest ...
  20. [20]
    [PDF] 4 Stiffness and Stability
    We already have seen one A-stable method earlier: the backward (or implicit) Euler method y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}). In general, only implicit methods are ...
  21. [21]
    [PDF] numerical methods for ordinary differ- ential equations - UiO
    Figure 3.2: Forward Euler (top row) and backward Euler (bottom row) for the stiff problem in Example 3.1 up to time T = 1. To understand why the explicit Euler ...
  22. [22]
    [PDF] - 1 - AM213B Numerical Methods for the Solution of Differential ...
    (Show the region of absolute stability for the backward Euler method.) ⇒ The backward Euler method is A-stable. In addition, it satisfies φ(z) = 1/(1 − z).
  23. [23]
    [PDF] NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS
    The book intro- duces the numerical analysis of differential equations, describing the mathematical background for understanding numerical methods and giving ...
  24. [24]
    [PDF] Numerical Analysis and Simulation I — Ordinary Differential Equations
    Similarly, the local truncation error of the backward Euler scheme (4.3) is δ^{BE}_{n+1} = y^{BE}_{n+1} − y(t_{n+1}) = y(t_n) + h f(t_{n+1}, y(t_{n+1})) − y(t_{n+1}). (4.8)
  25. [25]
    The Modified Newton Method in the Solution of Stiff Ordinary ... - jstor
    This paper presents an analysis of the modified Newton method as it is used in codes implementing implicit formulae for integrating stiff ordinary differential ...
  26. [26]
    Numerical integration methods and layout improvements in the ...
    May 30, 2017 · ... Backward Euler method over the Forward Euler method. These ... Since the run-time for the algorithm at every iteration is on the order of O(N²) ...
  27. [27]
    solve_ivp — SciPy v1.16.2 Manual
    This function numerically integrates a system of ordinary differential equations given an initial value.
  28. [28]
    ode23s - Solve stiff differential equations — low order method
    This MATLAB function, where tspan = [t0 tf], integrates the system of differential equations y'=f(t,y) from t0 to tf with initial conditions y0.
  29. [29]
    [PDF] Parallel Domain Decomposition
    Parallel Domain Decomposition Preconditioning for Computational Fluid ... Backward Euler time integration: let Δt vary with spatial position ...
  30. [30]
    [PDF] Accelerating moderately stiff chemical kinetics in reactive-flow ...
    Nov 4, 2013 · They handled the stiff chemical kinetics terms in parallel on the GPU using a first-order implicit method (the backward Euler method), employing ...
  31. [31]
    [PDF] 4.4 Applications of Transient Analysis - Designer's Guide Community
    To tell Spice2 to use backward Euler, set the undocumented option mu to zero. With Spectre, set the transient analysis options method=euler. Occasionally, this ...
  32. [32]
    Method of Lines Using Backward Euler for Parabolic PDEs in Fluid Dynamics
  33. [33]
    [PDF] CHAPTER 3: Basic methods, basic concepts
    e.g., backward Euler is A-stable. But there are two problems with this definition: no distinction made between the cases Re(λ) ≪ −1 and −1 ≤ Re(λ) ≤ 0, |Im(λ)| ≫ 1.
  34. [34]
    [PDF] 8. Stiff Ordinary Differential Equations
    For this purpose a method like backward Euler will perform better than the trapezoidal method. The backward Euler method is said to be L-stable. Definition ...
  35. [35]
    Solving Ordinary Differential Equations II - SpringerLink
    This second volume treats stiff differential equations and differential algebraic equations. It contains three chapters.
  36. [36]
  37. [37]
    [PDF] Rosenbrock-Wanner Methods: Construction and Mission - arXiv
    Feb 27, 2020 · This paper is concerned with the history of Rosenbrock–Wanner methods, first initiated by Rosenbrock in 1963. His original ideas are ...
  38. [38]
    [PDF] An adaptive homotopy tracking algorithm for solving nonlinear ...
    In this paper, we consider two predictor/corrector methods; one is a first-order scheme using the forward Euler as the predictor and the backward Euler as ...
  39. [39]
    [PDF] Diagonally Implicit Runge-Kutta Methods for Ordinary Differential ...
    The goal of this review is to summarize the characteristics, assess the potential, and then design several nearly optimal, general purpose, DIRK-type methods.
  40. [40]
    Diagonally Implicit Runge–Kutta Methods for Stiff O.D.E.'s - SIAM.org
    In this paper we consider diagonally split Runge–Kutta methods for the numerical solution of initial value problems for ordinary differential equations. This ...
  41. [41]
    [PDF] High-Order Implicit Time Integration for Unsteady Compressible ...
    Jun 25, 2013 · This paper presents an overview of high-order implicit time integration methods and their associated properties with a specific focus on ...