
Euler method

The Euler method, also known as the forward Euler method or Euler-Cauchy method, is a first-order numerical technique for approximating solutions to initial value problems in ordinary differential equations (ODEs) of the form \frac{dy}{dx} = f(x, y) with y(x_0) = y_0. It proceeds by discretizing the domain into small steps of size h and iteratively estimating the next value using the formula y_{n+1} = y_n + h \cdot f(x_n, y_n), where x_{n+1} = x_n + h, effectively approximating the solution curve with tangent line segments at each point. This method is particularly useful when exact analytical solutions are unavailable, such as for nonlinear or complex ODEs, and serves as a foundational approach in numerical analysis. Named after the Swiss mathematician Leonhard Euler (1707–1783), the method was first described in his 1768 work Institutionum calculi integralis, specifically in Volume I, Chapter 7, Sections 650–653, where he outlined a polygonal technique for integrating equations by assuming the integrand remains nearly constant over small intervals. Euler's approach built on the foundational calculus of Newton and Leibniz, emphasizing practical computation for equations lacking closed-form solutions, and he noted that errors diminish with smaller intervals but accumulate over many steps, particularly if the slope varies significantly. Although sometimes associated with Cauchy, who later analyzed the method rigorously, the forward method is primarily attributed to Euler's original formulation. The Euler method's simplicity makes it easy to implement programmatically or by hand, requiring only evaluations of f(x, y) at discrete points, and it forms the basis for the more advanced Runge-Kutta methods, which achieve higher accuracy through additional slope evaluations.
However, its local truncation error is O(h^2) per step, leading to a global error of O(h) over the interval, which can grow substantially for larger step sizes or rapidly oscillating solutions, often necessitating smaller h for accuracy at the cost of increased computational effort. Despite these limitations, it remains a staple in educational contexts and introductory simulations in fields like physics, engineering, and biology, for modeling phenomena such as population growth or radioactive decay.

Overview

Definition and purpose

The Euler method is a numerical procedure for solving initial value problems of the form y' = f(t, y), y(t_0) = y_0, by iteratively approximating the solution curve with linear segments based on the slope at each point. This approach generates a sequence of points (t_n, y_n), where t_n = t_0 + n h and h is a fixed step size, providing discrete estimates of the continuous solution y(t). The primary purpose of the Euler method is to estimate solutions to ordinary differential equations when exact analytical solutions are unavailable or computationally impractical, offering a straightforward iterative technique to advance from the initial condition to desired time points. It serves as a foundational tool in numerical analysis, enabling approximations that can be refined by reducing the step size h, though larger steps introduce noticeable errors. For the method to yield a unique solution that the approximations converge to, the function f(t, y) must be continuous in t and Lipschitz continuous in y, ensuring the existence and uniqueness of the true solution under the Picard-Lindelöf theorem. The method was described by Leonhard Euler in 1768 as part of his work on differential equations in St. Petersburg.

Geometric interpretation

The Euler method provides an intuitive geometric interpretation through the lens of a slope field, also known as a direction field, for the differential equation y' = f(t, y). In this visualization, the t-y plane is overlaid with short line segments at various points (t, y), each having a slope equal to f(t, y), representing the instantaneous direction of the solution curve at that point. The true solution curve to the initial value problem passes through the given initial point and remains tangent to these segments wherever it intersects them, effectively weaving through the field while following the local directions dictated by the differential equation. To approximate this solution numerically, the Euler method constructs a polygonal path by starting at the initial point and, at each step, drawing a straight line segment spanning width h in the t-direction with the slope f(t_n, y_n) of the current point (t_n, y_n). This segment serves as a linear extrapolation to the next point (t_{n+1}, y_{n+1}), after which the process repeats by adopting the slope at the new point. Geometrically, this mimics tracing the solution by "riding along" the tangent lines in the slope field, building a piecewise linear approximation that locally aligns with the curve's direction. The method works effectively when the step size h is small because, over such a short interval, the curvature of the true curve is negligible, allowing the tangent line to closely match the curve's path without significant deviation. In essence, the curve bends gradually compared to the straight-line step, making the tangent segment a reliable local stand-in for the actual trajectory. This accuracy arises intuitively from the fact that each step ignores the higher-order bending of the curve (changes in slope governed by the second derivative), leading to a small error per step that accumulates over multiple iterations. For illustration, consider the simple equation y' = y with y(0) = 1.
The slope field consists of line segments whose slope equals the y-coordinate: horizontal at y = 0, increasingly steep upward for positive y, and downward for negative y, forming a pattern of arrows radiating from the t-axis. Starting at (0, 1) with slope 1, the Euler method draws a 45-degree segment for the first step of size h, then adjusts to the slope at the new point, producing a polygonal chain of tangent segments that roughly outlines the true solution y = e^t, with closer alignment for smaller h.
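As a quick numerical check of this picture (a sketch, not part of the original text), the tangent-line polygon for y' = y can be traced in a few lines of Python; the helper name euler_path is illustrative:

```python
import math

def euler_path(h, t_end=1.0):
    """Trace the Euler polygon for y' = y, y(0) = 1 up to t_end."""
    n = round(t_end / h)
    t, y = 0.0, 1.0
    pts = [(t, y)]
    for _ in range(n):
        y += h * y        # the slope at the current point is simply y
        t += h
        pts.append((t, y))
    return pts

# Coarse steps deviate visibly from e; finer steps hug the curve.
err_coarse = abs(euler_path(0.5)[-1][1] - math.e)    # 2 steps
err_fine = abs(euler_path(0.05)[-1][1] - math.e)     # 20 steps
```

With h = 0.5 the polygon reaches 2.25 at t = 1 (versus e ≈ 2.718), while h = 0.05 already lands near 2.653, illustrating the closer alignment for smaller h.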

Mathematical Formulation

Derivation from Taylor series

Consider the initial value problem for a first-order ordinary differential equation given by y' = f(t, y) with y(t_0) = y_0, where the solution y(t) is assumed to be twice continuously differentiable on the interval of interest, and f(t, y) is sufficiently smooth to ensure the existence and uniqueness of the solution, typically requiring f to be continuous and satisfy a Lipschitz condition in y. To derive the Euler method, expand the true solution y(t) in a Taylor series around the point t_n, where t_{n+1} = t_n + h and h > 0 is the step size. The expansion yields y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{h^2}{2} y''(\xi_n) for some \xi_n between t_n and t_{n+1}. Since y'(t_n) = f(t_n, y(t_n)), truncating the series after the first-order term provides the approximation y(t_{n+1}) \approx y(t_n) + h f(t_n, y(t_n)), which, when applied iteratively with approximate values y_n \approx y(t_n), gives the Euler method formula y_{n+1} = y_n + h f(t_n, y_n). The remainder term \frac{h^2}{2} y''(\xi_n) is of order O(h^2) under the stated assumptions, establishing the local truncation error of the method as O(h^2) and confirming its first-order accuracy (global error O(h)).

Iterative algorithm

The Euler method provides a straightforward iterative procedure for approximating the solution to an initial value problem (IVP) of the form y' = f(t, y), y(t_0) = y_0, over an interval [t_0, t_f]. To implement it, initialize the approximation at the starting point with t_0 and y_0. Then, for each step n = 0, 1, \dots, N-1, where N is the chosen number of steps, update the time as t_{n+1} = t_n + h and the solution as y_{n+1} = y_n + h f(t_n, y_n), with h denoting the step size. The values (t_n, y_n) serve as the numerical approximations to the true solution (t, y(t)) at the discrete points. The step size h is typically selected as h = (t_f - t_0)/N, ensuring the grid covers the desired interval. A larger N (smaller h) yields higher accuracy by taking more steps, but this increases computational effort, as the method requires one evaluation of f per step. In practice, h is chosen based on a balance between precision needs and available resources, often starting with trial values like 0.1 or 0.01 and refining as necessary. The following pseudocode outlines the basic implementation in plain English for a uniform step size:
Given: f(t, y), t_0, y_0, h > 0, N (number of steps)
Set t = t_0
Set y = y_0
For n = 0 to N-1:
    Output (t, y)  // approximation at current step
    Set slope = f(t, y)
    Set t = t + h
    Set y = y + h * slope
Output (t, y)  // final approximation
This procedure assumes access to the function f and produces N+1 output points. The Euler method is specifically designed for initial value problems, where the solution is determined by a condition at t_0; it does not directly apply to boundary value problems, which require conditions at multiple points.
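The pseudocode above translates directly into a short program. Here is a minimal Python sketch (the function name euler is illustrative):

```python
def euler(f, t0, y0, h, n_steps):
    """Approximate the solution of y' = f(t, y), y(t0) = y0,
    taking n_steps forward Euler steps of size h."""
    ts, ys = [t0], [y0]
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)   # advance along the tangent line
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys

# Example: y' = -y, y(0) = 1 on [0, 1] with h = 0.1 (N = 10 steps)
ts, ys = euler(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
```

The returned lists hold the N+1 grid points and approximations, matching the pseudocode's output sequence.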

Examples

First-order ODE example

To illustrate the basic application of the Euler method, consider the first-order ordinary differential equation (ODE) \frac{dy}{dt} = -y, \quad y(0) = 1, which describes exponential decay with the exact analytical solution y(t) = e^{-t}. This example is solved numerically over the interval [0, 2] using a step size of h = 1, following the iterative scheme in which each approximation is given by y_{n+1} = y_n + h f(t_n, y_n) with f(t, y) = -y. The computation proceeds as follows. Start with the initial values t_0 = 0 and y_0 = 1. The slope at this point is f(0, 1) = -1, so the first approximation is y_1 = y_0 + h \cdot (-y_0) = 1 + 1 \cdot (-1) = 0, at t_1 = t_0 + h = 1. The next slope is f(1, 0) = 0, yielding y_2 = y_1 + h \cdot (-y_1) = 0 + 1 \cdot (0) = 0, at t_2 = t_1 + h = 2. In general, for this ODE and h = 1, the recurrence simplifies to y_{n+1} = (1 - h) y_n = 0 \cdot y_n, so y_n = 0 for all n \geq 1. The table below compares the Euler approximations y_n with the exact values y(t_n) and the absolute errors |y(t_n) - y_n|, where e^{-1} \approx 0.3679 and e^{-2} \approx 0.1353.
t_n    y_n (Euler)    y(t_n) (exact)    Absolute error
0      1              1                 0
1      0              0.3679            0.3679
2      0              0.1353            0.1353
With such a large step size h = 1, the Euler method produces substantial deviation from the exact solution, immediately dropping to zero after the first step and remaining there, which underestimates the true gradual exponential decay.
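The computation above, and the improvement from a smaller step size, can be reproduced in a few lines of Python (a sketch; the helper euler is not from the original text):

```python
import math

def euler(f, t0, y0, h, n_steps):
    """Return the final Euler approximation after n_steps of size h."""
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -y

# h = 1 reproduces the table: the approximation collapses to 0 after one step.
y_h1 = euler(f, 0.0, 1.0, 1.0, 2)      # y_2 at t = 2

# h = 0.1 tracks the gradual decay much more closely.
y_h01 = euler(f, 0.0, 1.0, 0.1, 20)    # y_20 at t = 2

exact = math.exp(-2.0)                 # ≈ 0.1353
```

With h = 0.1 the approximation at t = 2 is 0.9^20 ≈ 0.1216, an error of about 0.014 instead of 0.1353.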

Higher-order ODE example

To apply the Euler method to higher-order ordinary differential equations (ODEs), the equation is first reduced to an equivalent system of first-order ODEs. A classic example is the second-order linear ODE for the simple harmonic oscillator: y''(t) + y(t) = 0, with initial conditions y(0) = 0, y'(0) = 1. The exact solution is y(t) = \sin t. To solve this numerically using the Euler method over the interval [0, \pi] with step size h = 0.5, introduce the auxiliary variable v(t) = y'(t). This yields the system: y'(t) = v(t), \quad v'(t) = -y(t), with initial vector \begin{pmatrix} y(0) \\ v(0) \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. The Euler method then proceeds iteratively by updating the vector at each step: for the n-th step, \begin{pmatrix} y_{n+1} \\ v_{n+1} \end{pmatrix} = \begin{pmatrix} y_n \\ v_n \end{pmatrix} + h \begin{pmatrix} v_n \\ -y_n \end{pmatrix}. Starting from t_0 = 0, the first update gives y_1 = 0 + 0.5 \cdot 1 = 0.5, v_1 = 1 + 0.5 \cdot (-0) = 1 at t_1 = 0.5. Continuing this process for six steps (reaching t_6 = 3.0, close to \pi \approx 3.1416) produces the approximations shown in the table below, alongside exact values for comparison.
Step n    t_n    Approximate y_n    Approximate v_n    Exact y(t_n) = \sin t_n
0         0.0    0.0000             1.0000             0.0000
1         0.5    0.5000             1.0000             0.4794
2         1.0    1.0000             0.7500             0.8415
3         1.5    1.3750             0.2500             0.9975
4         2.0    1.5000             -0.4375            0.9093
5         2.5    1.2813             -1.1875            0.5985
6         3.0    0.6875             -1.8281            0.1411
The approximation at t = 3.0 gives y(3.0) \approx 0.6875, much greater than the exact \sin 3 \approx 0.1411; the exact value at t = \pi is y(\pi) = 0. A linear extrapolation from the last two points to t = \pi yields roughly y(\pi) \approx 0.52, still positive, illustrating how the method's accumulated error overestimates the amplitude in oscillatory systems. Intermediate points show similar deviations, such as the overshoot at t = 1.5, where y_3 = 1.375 exceeds both the exact value \sin(1.5) \approx 0.9975 and the true amplitude of 1, highlighting the accumulation of local truncation errors.
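The vector iteration for this system is short enough to verify directly. The following Python sketch reproduces the table's values exactly, since every quantity involved is a dyadic rational and the floating-point arithmetic is exact here:

```python
# Forward Euler for the first-order system y' = v, v' = -y
# (simple harmonic oscillator) with y(0) = 0, v(0) = 1, h = 0.5.
h = 0.5
t, y, v = 0.0, 0.0, 1.0
history = [(t, y, v)]
for n in range(6):
    # Tuple assignment evaluates both right-hand sides with the OLD
    # values, which is essential for a correct simultaneous update.
    y, v = y + h * v, v - h * y
    t += h
    history.append((t, y, v))
```

A design note: each Euler step multiplies y^2 + v^2 by exactly 1 + h^2, since (y + hv)^2 + (v - hy)^2 = (1 + h^2)(y^2 + v^2), which is precisely the amplitude growth visible in the table.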

Error Analysis

Local truncation error

The local truncation error of the Euler method quantifies the discrepancy introduced by a single step of the algorithm when the input at step n is the exact solution value y(t_n). For the initial value problem y' = f(t, y), y(t_0) = y_0, with step size h > 0, the numerical approximation at the next point is y_{n+1} = y(t_n) + h f(t_n, y(t_n)), while the exact solution satisfies y(t_{n+1}) = y(t_n + h). The local truncation error is thus defined as e_{n+1} = y(t_{n+1}) - y_{n+1}. To derive the form of this error, apply Taylor's theorem to expand the exact solution around t_n, assuming y is twice continuously differentiable on the relevant interval: y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{h^2}{2} y''(\xi_n), where \xi_n \in (t_n, t_{n+1}). Substituting y'(t_n) = f(t_n, y(t_n)) and subtracting the Euler step yields e_{n+1} = \frac{h^2}{2} y''(\xi_n). Higher-order terms appear if further differentiability is assumed, giving e_{n+1} = \frac{h^2}{2} y''(t_n) + O(h^3), but the leading term confirms that the local truncation error is of order O(h^2). Under the assumption that |y''(t)| \leq M for some constant M > 0 on the integration interval, the magnitude of the error satisfies |e_{n+1}| \leq \frac{M h^2}{2}. This O(h^2) behavior per step establishes the Euler method as a first-order accurate scheme overall, as the local errors accumulate to produce a global error of order O(h). The analysis requires f to be Lipschitz continuous in y (i.e., |f(t, y_1) - f(t, y_2)| \leq K |y_1 - y_2| for some K > 0) to guarantee existence, uniqueness, and boundedness of the solution, alongside the twice continuous differentiability of y.
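The O(h^2) behavior is easy to observe numerically. For y' = -y from the exact value y(0) = 1, one Euler step gives 1 - h while the true solution is e^{-h}, so the single-step error e^{-h} - (1 - h) should scale like h^2/2 (here y''(0) = 1). A sketch:

```python
import math

def one_step_error(h):
    """Local truncation error of one Euler step for y' = -y, y(0) = 1."""
    euler_val = 1.0 + h * (-1.0)   # y_1 = y_0 + h f(t_0, y_0)
    exact_val = math.exp(-h)       # true solution after one step
    return exact_val - euler_val

for h in (0.1, 0.05, 0.025):
    print(h, one_step_error(h) / h ** 2)   # ratios approach y''(0)/2 = 0.5
```

Halving h should roughly quarter the one-step error, consistent with the O(h^2) bound.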

Global truncation error

The global truncation error for the Euler method, denoted E_n = y(t_n) - y_n, measures the discrepancy between the exact solution y(t_n) of the initial value problem and the numerical approximation y_n produced after n iteration steps, where t_n = t_0 + n h and h is the step size. This error encapsulates the cumulative effect of the approximations made throughout the integration over a fixed time interval. Unlike the local truncation error, which is confined to a single step, the global error reflects the overall accuracy of the method after many steps. Under suitable regularity conditions on the right-hand side f(t, y) of y' = f(t, y), including satisfaction of a Lipschitz condition with respect to y with constant L > 0 and a bound |y''(t)| \leq M on the second derivative of the exact solution, the global error admits the bound |E_n| \leq \frac{M h}{2 L} \left( e^{L t_n} - 1 \right). This estimate arises from analyzing the error recursion and applying techniques such as the Gronwall inequality, confirming that the global truncation error is of order O(h) as h \to 0 for fixed t_n. The exponential factor e^{L t_n} highlights how the Lipschitz constant amplifies errors over longer integration times. The propagation of errors in the Euler method occurs because each local truncation error, introduced at a given step, acts as a perturbation to the initial condition for all subsequent steps; owing to the sensitivity of solutions to such perturbations, quantified by the Lipschitz constant L, these perturbations accumulate roughly linearly, leading to the O(h) global behavior over O(1/h) steps. The local truncation error per step, of order O(h^2), thus contributes to the overall O(h) degradation when summed across the interval. A concrete illustration is provided by the test equation y' = -y, y(0) = 1, with exact solution y(t) = e^{-t}. For step size h = 0.1 over [0, 1] (10 steps), the Euler iteration yields y_{n+1} = y_n (1 - h), so y_{10} = 0.9^{10} \approx 0.3487, compared to the exact y(1) \approx 0.3679, resulting in a global error of approximately 0.019.
Halving the step size to h = 0.05 (20 steps) produces y_{20} = 0.95^{20} \approx 0.3585, with error ≈0.0093, demonstrating the O(h) scaling empirically.
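The quoted numbers follow from the closed form of the Euler iteration for this test equation, and the empirical O(h) scaling can be checked directly:

```python
import math

def euler_error(h):
    """Global error of forward Euler for y' = -y, y(0) = 1 at t = 1."""
    n = round(1.0 / h)
    y = (1.0 - h) ** n            # closed form of the Euler iteration
    return abs(math.exp(-1.0) - y)

e1 = euler_error(0.1)    # ≈ 0.0192
e2 = euler_error(0.05)   # ≈ 0.0094
print(e1 / e2)           # close to 2, consistent with O(h)
```

Halving h roughly halves the global error, which is the signature of a first-order method.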

Numerical Properties

Stability analysis

In numerical methods for ordinary differential equations (ODEs), stability refers to the property that small perturbations or errors in the computed solution remain bounded and do not grow without limit as the integration proceeds. For the explicit Euler method, absolute stability is assessed using the scalar linear test equation y' = \lambda y, where \lambda \in \mathbb{C} represents the eigenvalues of the Jacobian in linearized systems. Applying the Euler method to this equation yields the recurrence y_{n+1} = (1 + h \lambda) y_n, so the method is absolutely stable if the amplification factor satisfies |1 + h \lambda| \leq 1. This condition defines the region of absolute stability in the z = h \lambda-plane as the closed disk centered at -1 with radius 1, given by |z + 1| \leq 1. The region lies entirely within the left half-plane except for the origin on the imaginary axis, covering the negative real axis from -2 to 0. In stiff ODEs, characterized by widely varying eigenvalues with large negative real parts (large |\lambda| with \mathrm{Re}(\lambda) < 0), the explicit Euler method requires a step size h < 2 / |\lambda|_{\max} to remain within the stability region, often necessitating impractically small steps that prioritize stability over efficiency. This conditional stability limits its applicability to such problems, where implicit methods with larger stability regions are preferred. Germund Dahlquist's second barrier theorem establishes that no explicit linear multistep method, including the forward Euler method, can be A-stable, meaning its stability region cannot encompass the entire left half-plane, whereas certain implicit methods achieve this property.
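The stability boundary h < 2/|\lambda| is easy to exhibit numerically. For \lambda = -10 the threshold is h = 0.2; a step just below it decays (with sign flips, since the amplification factor is negative), while a step just above it diverges. A sketch:

```python
# Forward Euler on the stiff test equation y' = -10 y, y(0) = 1.
# Absolute stability requires |1 + h*(-10)| <= 1, i.e. h <= 0.2.
def run(h, steps=50):
    y = 1.0
    for _ in range(steps):
        y += h * (-10.0 * y)      # amplification factor 1 - 10 h per step
    return y

y_stable = run(0.15)    # |1 - 1.5| = 0.5 < 1: decays toward zero
y_unstable = run(0.25)  # |1 - 2.5| = 1.5 > 1: blows up geometrically
```

After 50 steps the stable run has magnitude near (0.5)^{50} \approx 10^{-15}, while the unstable run has grown to roughly (1.5)^{50} \approx 6 \times 10^{8}.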

Rounding errors

In computer implementations of the Euler method, rounding errors stem from finite-precision floating-point arithmetic, primarily during the computation and addition of the increment h f(t_n, y_n) to the current approximation y_n. The machine epsilon \epsilon, which quantifies the relative rounding precision, is approximately 2^{-52} \approx 2.22 \times 10^{-16} in double-precision arithmetic. Each iteration introduces a local roundoff error bounded by O(\epsilon |y_n|), often modeled as a small perturbation added to the update. Over N = T/h steps to integrate from initial time 0 to final time T, these errors accumulate, yielding a roundoff error of order O(\epsilon N) = O(\epsilon T / h). For sufficiently small h, this roundoff component surpasses the method's truncation error of O(h), degrading overall accuracy. The total numerical error can thus be approximated as O(h) + O(\epsilon T / h). To balance these terms and minimize the total error, an optimal step size of roughly h \approx \sqrt{\epsilon T} is selected, ensuring neither term dominates excessively.
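Treating the two error terms as C_1 h + C_2 \epsilon T / h with order-one constants, the minimum sits where the terms balance, at h = \sqrt{\epsilon T}. A quick sketch of this arithmetic (constants assumed to be 1):

```python
import math

# Total error model: truncation O(h) plus roundoff O(eps*T/h).
# Minimizing h + eps*T/h over h gives h_opt = sqrt(eps*T).
eps = 2.0 ** -52          # double-precision machine epsilon
T = 1.0                   # length of the integration interval
h_opt = math.sqrt(eps * T)
print(h_opt)              # about 1.5e-08 for T = 1
```

In double precision over a unit interval, shrinking h below roughly 10^{-8} therefore stops improving the computed solution, even though the truncation error alone would keep decreasing.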

Extensions and Variations

Improved Euler methods

The improved Euler methods, also known as modified Euler methods, enhance the accuracy of the basic Euler method by incorporating additional evaluations of the slope function f to better approximate the solution curve's behavior across each step. These explicit one-step methods achieve second-order accuracy, addressing the first-order limitation of the basic Euler method, whose global truncation error is O(h). Heun's method, introduced by Karl Heun in 1900, operates as a predictor-corrector scheme. It first predicts an intermediate value using the Euler step and then corrects it by averaging the slopes at the current and predicted points. The algorithm is given by: \begin{align*} y_p &= y_n + h f(t_n, y_n), \\ y_{n+1} &= y_n + \frac{h}{2} \left[ f(t_n, y_n) + f(t_{n+1}, y_p) \right], \end{align*} where y_p is the predictor. This method yields a local truncation error of O(h^3) and global error of O(h^2), requiring two function evaluations per step. The midpoint method provides another second-order improvement by estimating the slope at the interval's midpoint after a half-step Euler prediction. Its formula is: y_{n+1} = y_n + h f\left( t_n + \frac{h}{2}, y_n + \frac{h}{2} f(t_n, y_n) \right). This explicit approach also achieves a local truncation error of O(h^3) and global error of O(h^2), with two function evaluations per step, offering a balance between simplicity and improved fidelity over the basic Euler method. Both Heun's method and the midpoint method average or sample slopes across the step to capture the solution's curvature more effectively than the basic Euler's single tangent approximation, thereby reducing the local truncation error from O(h^2) to O(h^3). While explicit and straightforward to implement, they incur higher computational cost due to the additional slope evaluation per step compared to the single evaluation in the basic Euler method.
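The accuracy gap between the two orders shows up immediately on the test problem y' = -y. The following sketch implements one step of each scheme from the formulas above and compares global errors at t = 1 with h = 0.1:

```python
import math

f = lambda t, y: -y   # test problem y' = -y, y(0) = 1, solved on [0, 1]

def euler_step(t, y, h):
    return y + h * f(t, y)

def heun_step(t, y, h):
    y_p = y + h * f(t, y)                            # Euler predictor
    return y + (h / 2) * (f(t, y) + f(t + h, y_p))   # averaged slopes

def integrate(step, h, n):
    t, y = 0.0, 1.0
    for _ in range(n):
        y = step(t, y, h)
        t += h
    return y

exact = math.exp(-1.0)
err_euler = abs(integrate(euler_step, 0.1, 10) - exact)   # O(h), ~0.019
err_heun = abs(integrate(heun_step, 0.1, 10) - exact)     # O(h^2), ~0.0007
```

At the same step size and only twice the work per step, Heun's method is more than an order of magnitude more accurate here; for this linear problem the midpoint method produces the identical update factor 1 - h + h^2/2.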

Applications in modern contexts

The Euler method remains a foundational tool in simulating dynamical systems, particularly in models like the Lotka-Volterra predator-prey system, where it provides simple numerical approximations for both deterministic and stochastic formulations of ecological interactions and population dynamics. In basic electrical circuit analysis, such as RC or RLC networks, the method is applied to solve the ordinary differential equations governing voltage and current transients, offering an accessible way to predict system responses under time-varying inputs. Its computational simplicity also makes it an ideal introductory teaching tool in numerical methods curricula, where it illustrates core concepts of discretization before advancing to more sophisticated solvers. In modern extensions, the fractional Euler method addresses non-integer order derivatives in viscoelastic models, enabling accurate simulations of materials exhibiting memory-dependent behaviors, such as in Euler-Bernoulli beams under dynamic loads. This approach has seen application in 2020s research on anomalous diffusion, where fractional variants capture subdiffusive phenomena in complex media like porous structures or biological tissues. For stochastic environments, the Euler-Maruyama method extends the original scheme to stochastic differential equations, playing a key role in finance for option pricing under volatility models like Ornstein-Uhlenbeck processes, where it approximates asset paths for Monte Carlo evaluations. Recent developments from 2020 to 2025 highlight integrations with machine learning, such as the Deep Euler method, which enhances neural ordinary differential equations (ODEs) by learning local truncation errors to improve accuracy in solving complex ODEs for tasks like time-series forecasting. Due to its low computational overhead, the Euler method is favored in embedded and real-time systems, including hardware-in-the-loop simulations of motors and switched systems, where rapid execution is critical despite potential stability trade-offs.
In practice, however, more accurate adaptive solvers like MATLAB's ode45, which employs higher-order Runge-Kutta integration, are often used to refine solutions and mitigate the accumulation of global errors over long simulations.