Euler method
The Euler method, also known as the forward Euler method or Euler-Cauchy method, is a first-order numerical technique for approximating solutions to initial value problems in ordinary differential equations (ODEs) of the form \frac{dy}{dx} = f(x, y) with y(x_0) = y_0. It proceeds by discretizing the domain into small steps of size h and iteratively estimating the next value using the formula y_{n+1} = y_n + h \cdot f(x_n, y_n), where x_{n+1} = x_n + h, effectively approximating the solution curve with tangent line segments at each point. This method is particularly useful when exact analytical solutions are unavailable, such as for nonlinear or complex ODEs, and serves as a foundational approach in numerical analysis.[1]
Named after the Swiss mathematician Leonhard Euler (1707–1783), the method was first described in his 1768 work Institutionum calculi integralis, specifically in Volume I, Chapter 7, Sections 650–653, where he outlined a polygonal approximation technique for integrating differential equations by assuming the integrand remains nearly constant over small intervals.[2] Euler's approach built on the foundational calculus of Newton and Leibniz, emphasizing practical computation for equations lacking closed-form solutions, and he noted that errors diminish with smaller intervals but accumulate over many steps, particularly if the derivative varies significantly.[2] Although sometimes associated with Augustin-Louis Cauchy due to his later refinements (including the backward Euler variant), the forward method is primarily attributed to Euler's original formulation.
The Euler method's simplicity makes it easy to implement programmatically or by hand, requiring only evaluations of f(x, y) at discrete points, and it forms the basis for the more advanced Runge-Kutta methods, which achieve higher-order accuracy by combining several slope evaluations per step.[1] However, its local truncation error is O(h^2) per step, leading to a global error of O(h) over the interval; this error can grow substantially for larger step sizes or rapidly oscillating solutions, often necessitating smaller h for accuracy at the cost of increased computational effort. Despite these limitations, it remains a staple in educational contexts and introductory simulations in fields like physics, engineering, and biology for modeling phenomena such as population dynamics or projectile motion.[1]
Overview
Definition and purpose
The Euler method is a first-order numerical procedure for solving initial value problems of the form y' = f(t, y), y(t_0) = y_0, by iteratively approximating the solution curve with linear segments based on the tangent at each point.[3] This approach generates a sequence of points (t_n, y_n) where t_n = t_0 + n h and h is a fixed step size, providing discrete estimates of the continuous solution y(t).[3]
The primary purpose of the Euler method is to estimate solutions to ordinary differential equations when exact analytical solutions are unavailable or computationally impractical, offering a straightforward iterative technique to advance from the initial condition to desired time points.[4] It serves as a foundational tool in numerical analysis, enabling approximations that can be refined by reducing the step size h, though larger steps introduce noticeable errors.[3]
For the method to yield a unique and stable solution that the approximations converge to, the function f(t, y) must be continuous in t and Lipschitz continuous in y, ensuring the existence and uniqueness of the true solution under the Picard-Lindelöf theorem.[4] The method was described by Leonhard Euler in 1768 as part of his work on differential equations in St. Petersburg.[3]
Geometric interpretation
The Euler method provides an intuitive geometric interpretation through the lens of a slope field, also known as a direction field, for the ordinary differential equation y' = f(t, y). In this visualization, the t-y plane is overlaid with short line segments at various points (t, y), each having a slope equal to f(t, y), representing the instantaneous direction of the solution curve at that point. The true solution curve to the initial value problem passes through the given initial point and remains tangent to these segments wherever it intersects them, effectively weaving through the field while following the local directions dictated by the differential equation.[5][6][7]
To approximate this solution numerically, the Euler method constructs a polygonal path by starting at the initial point and, at each step, drawing a straight line segment that advances a horizontal distance h in the t-direction using the slope f(t_n, y_n) at the current point (t_n, y_n). This segment serves as a linear extrapolation to the next point (t_{n+1}, y_{n+1}), after which the process repeats with the slope at the new point. Geometrically, this mimics tracing the solution by "riding along" the tangent lines in the slope field, building a piecewise linear approximation that locally aligns with the curve's direction.[5][7]
The method works effectively when the step size h is small because, over such a short interval, the curvature of the true solution curve is negligible, allowing the tangent line to closely match the curve's path without significant deviation. In essence, the solution curve bends gradually compared to the straight-line step, making the linear approximation a reliable local stand-in for the actual trajectory.[6][5]
This first-order accuracy arises intuitively from the fact that each step ignores the higher-order bending of the curve—such as changes in slope due to the second derivative—leading to a small error per step that accumulates over multiple iterations. For illustration, consider the simple equation y' = y with initial condition y(0) = 1. The slope field consists of line segments where the slope equals the y-coordinate: horizontal at y = 0, increasingly steep upward for positive y, and downward for negative y, forming a pattern of arrows radiating from the t-axis. Starting at (0, 1) with slope 1, the Euler method draws a 45-degree line segment for step h, then adjusts to the new slope, producing a zigzag of tangents that roughly outlines the true exponential solution y = e^t, with closer alignment for smaller h.[6][7]
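To make this concrete in code, the following minimal Python sketch traces a few Euler steps for y' = y from (0, 1) and prints them next to the exact exponential; the step size h = 0.25 is an arbitrary illustrative choice.

```python
import math

def f(t, y):
    return y                 # slope field of y' = y depends only on y

# Forward Euler steps for y' = y, y(0) = 1, versus the exact solution e^t.
t, y, h = 0.0, 1.0, 0.25     # h chosen only for illustration

for n in range(5):
    print(f"t = {t:.2f}  Euler y = {y:.4f}  exact e^t = {math.exp(t):.4f}")
    y += h * f(t, y)         # follow the tangent segment at the current point
    t += h
```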
Derivation from Taylor series
Consider the initial value problem for a first-order ordinary differential equation given by y' = f(t, y) with initial condition y(t_0) = y_0, where the solution y(t) is assumed to be twice continuously differentiable on the interval of interest, and f(t, y) is sufficiently smooth to ensure the existence and uniqueness of the solution, typically requiring f to be continuous and satisfy a Lipschitz condition in y.[8][9]
To derive the Euler method, expand the true solution y(t) using Taylor's theorem around the point t_n, where t_{n+1} = t_n + h and h > 0 is the step size. The expansion yields
y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{h^2}{2} y''(\xi_n)
for some \xi_n between t_n and t_{n+1}.[8][10]
Since y'(t_n) = f(t_n, y(t_n)), truncating the series after the first-order term provides the approximation
y(t_{n+1}) \approx y(t_n) + h f(t_n, y(t_n)),
which, when applied iteratively with approximate values y_n \approx y(t_n), gives the Euler method formula
y_{n+1} = y_n + h f(t_n, y_n).
[9][10]
The remainder term \frac{h^2}{2} y''(\xi_n) is of order O(h^2) under the stated smoothness assumptions, establishing the local truncation error of the method as O(h^2) and confirming its first-order accuracy (global error O(h)).[8][9]
Iterative algorithm
The Euler method provides a straightforward iterative procedure for approximating the solution to an initial value problem (IVP) of the form y' = f(t, y), y(t_0) = y_0, over an interval [t_0, t_f].[1] To implement it, initialize the approximation at the starting point with t_0 and y_0. Then, for each step n = 0, 1, \dots, N-1, where N is the chosen number of steps, update the time as t_{n+1} = t_n + h and the solution approximation as y_{n+1} = y_n + h f(t_n, y_n), with h denoting the step size.[11] The values (t_n, y_n) serve as the numerical approximations to the true solution (t, y(t)) at the discrete points.[12]
The step size h is typically selected as h = (t_f - t_0)/N, ensuring the grid covers the desired interval.[1] A larger N (smaller h) yields higher accuracy by taking more steps, but this increases computational effort, as the method requires one evaluation of f per step.[11] In practice, h is chosen based on a balance between precision needs and available resources, often starting with trial values like 0.1 or 0.01 and refining as necessary.[12]
The following pseudocode outlines the basic implementation in plain English for a uniform step size:
Given: f(t, y), t_0, y_0, h > 0, N (number of steps)
Set t = t_0
Set y = y_0
For n = 0 to N-1:
    Output (t, y) // approximation at current step
    Set slope = f(t, y)
    Set t = t + h
    Set y = y + h * slope
Output (t, y) // final approximation
This procedure assumes access to the function f and produces N+1 output points, including the initial condition.[1]
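A direct translation of the pseudocode into Python might look as follows; this is a minimal sketch, assuming f is supplied as an ordinary function of (t, y).

```python
def euler(f, t0, y0, h, N):
    """Forward Euler for y' = f(t, y), y(t0) = y0, over N uniform steps."""
    t, y = t0, y0
    points = [(t, y)]                 # N+1 points, including the initial condition
    for _ in range(N):
        y = y + h * f(t, y)           # advance along the tangent line
        t = t + h
        points.append((t, y))
    return points

# Example usage: y' = -y, y(0) = 1, integrated over [0, 2] with N = 20 steps.
approx = euler(lambda t, y: -y, 0.0, 1.0, h=0.1, N=20)
print(approx[-1])                     # Euler value at t = 2; exact e^{-2} ≈ 0.1353
```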
The Euler method is specifically designed for initial value problems, where the solution is determined by an initial condition at t_0; it does not directly apply to boundary value problems, which require conditions at multiple points.[11]
Examples
First-order ODE example
To illustrate the basic application of the Euler method, consider the first-order ordinary differential equation (ODE)
\frac{dy}{dt} = -y, \quad y(0) = 1,
which describes exponential decay with the exact analytical solution y(t) = e^{-t}.[13] This example is solved numerically over the interval [0, 2] using a step size of h = 1, following the iterative algorithm where each approximation is given by y_{n+1} = y_n + h f(t_n, y_n) and f(t, y) = -y.[13]
The computation proceeds as follows. Start with the initial values t_0 = 0 and y_0 = 1. The slope at this point is f(0, 1) = -1, so the first approximation is
y_1 = y_0 + h \cdot (-y_0) = 1 + 1 \cdot (-1) = 0,
at t_1 = t_0 + h = 1. The next slope is f(1, 0) = 0, yielding
y_2 = y_1 + h \cdot (-y_1) = 0 + 1 \cdot (0) = 0,
at t_2 = t_1 + h = 2. In general, the recurrence for this ODE is y_{n+1} = (1 - h) y_n, which for h = 1 gives y_{n+1} = 0 \cdot y_n; hence y_n = 0 for all n \geq 1.[13]
The table below compares the Euler approximations y_n with the exact values y(t_n) and the absolute errors |y(t_n) - y_n|, where e^{-1} \approx 0.3679 and e^{-2} \approx 0.1353.[13]
| t_n | y_n (Euler) | y(t_n) (exact) | Absolute error |
|---|---|---|---|
| 0 | 1 | 1 | 0 |
| 1 | 0 | 0.3679 | 0.3679 |
| 2 | 0 | 0.1353 | 0.1353 |
With such a large step size h = 1, the Euler method produces substantial deviation from the exact solution, immediately dropping to zero after the first step and remaining there, which underestimates the true gradual exponential decay.[13]
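The three rows of the table can be reproduced directly from the iteration; a minimal sketch:

```python
import math

# Iterate y_{n+1} = y_n + h * (-y_n) with h = 1 for y' = -y, y(0) = 1,
# printing each approximation next to the exact value e^{-t}.
t, y, h = 0.0, 1.0, 1.0
for n in range(3):
    exact = math.exp(-t)
    print(f"t = {t:.0f}  Euler = {y:.4f}  exact = {exact:.4f}  error = {abs(exact - y):.4f}")
    y += h * (-y)    # with h = 1 this collapses to y = 0 after the first step
    t += h
```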
Higher-order ODE example
To apply the Euler method to higher-order ordinary differential equations (ODEs), the equation is first reduced to an equivalent system of first-order ODEs.[14]
A classic example is the second-order linear ODE for the simple harmonic oscillator:
y''(t) + y(t) = 0,
with initial conditions y(0) = 0, y'(0) = 1. The exact solution is y(t) = \sin t. To solve this numerically using the Euler method over the interval [0, \pi] with step size h = 0.5, introduce the auxiliary variable v(t) = y'(t). This yields the system:
y'(t) = v(t), \quad v'(t) = -y(t),
with initial vector \begin{pmatrix} y(0) \\ v(0) \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.[14]
The Euler method then proceeds iteratively by updating the vector at each step: for the n-th step,
\begin{pmatrix} y_{n+1} \\ v_{n+1} \end{pmatrix} = \begin{pmatrix} y_n \\ v_n \end{pmatrix} + h \begin{pmatrix} v_n \\ -y_n \end{pmatrix}.
Starting from t_0 = 0, the first update gives y_1 = 0 + 0.5 \cdot 1 = 0.5 and v_1 = 1 + 0.5 \cdot (-y_0) = 1 + 0.5 \cdot 0 = 1 at t_1 = 0.5. Continuing this process for six steps (reaching t_6 = 3.0, close to \pi \approx 3.1416) produces the approximations shown in the table below, alongside exact values for comparison.
| Step n | t_n | Approximate y_n | Approximate v_n | Exact y(t_n) = \sin t_n |
|---|---|---|---|---|
| 0 | 0.0 | 0.0000 | 1.0000 | 0.0000 |
| 1 | 0.5 | 0.5000 | 1.0000 | 0.4794 |
| 2 | 1.0 | 1.0000 | 0.7500 | 0.8415 |
| 3 | 1.5 | 1.3750 | 0.2500 | 0.9975 |
| 4 | 2.0 | 1.5000 | -0.4375 | 0.9093 |
| 5 | 2.5 | 1.2813 | -1.1875 | 0.5985 |
| 6 | 3.0 | 0.6875 | -1.8281 | 0.1411 |
The approximation at t = 3.0 gives y(3.0) \approx 0.6875, well above the exact \sin 3 \approx 0.1411; the exact value at t = \pi is y(\pi) = 0. A linear extrapolation from the last two points to t = \pi yields roughly y(\pi) \approx 0.52, still positive. The overestimation is systematic: the eigenvalues of this undamped system are \pm i, so each Euler step multiplies the amplitude by |1 \pm i h| = \sqrt{1 + h^2} > 1, and the numerical oscillation grows. Intermediate points show the same effect, such as overshooting at t = 1.5, where y_3 = 1.375 exceeds both the exact \sin(1.5) \approx 0.9975 and the true amplitude 1, highlighting the accumulation of local truncation errors in oscillatory systems.
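The vector update is straightforward to mechanize; the following minimal sketch reproduces the table above (note that the tuple assignment updates y and v simultaneously from the previous values).

```python
import math

# Forward Euler for y'' + y = 0 rewritten as the system y' = v, v' = -y,
# with y(0) = 0, v(0) = 1 and step size h = 0.5.
y, v, t, h = 0.0, 1.0, 0.0, 0.5

for n in range(7):
    print(f"n = {n}  t = {t:.1f}  y = {y:.4f}  v = {v:.4f}  sin(t) = {math.sin(t):.4f}")
    y, v = y + h * v, v + h * (-y)   # both right-hand sides use the old (y, v)
    t += h
```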
Error Analysis
Local truncation error
The local truncation error of the Euler method quantifies the discrepancy introduced by a single iteration of the algorithm when the input at step n is the exact solution value y(t_n). For the initial value problem y' = f(t, y), y(t_0) = y_0, with step size h > 0, the numerical approximation at the next point is y_{n+1} = y(t_n) + h f(t_n, y(t_n)), while the exact solution satisfies y(t_{n+1}) = y(t_n + h). The local truncation error is thus defined as e_{n+1} = y(t_{n+1}) - y_{n+1}.[8][15]
To derive the form of this error, apply Taylor's theorem to expand the exact solution around t_n, assuming y is twice continuously differentiable on the relevant interval:
y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{h^2}{2} y''(\xi_n),
where \xi_n \in (t_n, t_{n+1}). Substituting y'(t_n) = f(t_n, y(t_n)) and subtracting the Euler step yields
e_{n+1} = \frac{h^2}{2} y''(\xi_n).
If further differentiability of y is assumed, the expansion can be carried one term further, giving e_{n+1} = \frac{h^2}{2} y''(t_n) + O(h^3); in either form, the leading term confirms that the local truncation error is of order O(h^2).[8][16]
Under the assumption that |y''(t)| \leq M for some constant M > 0 on the integration interval, the magnitude of the error satisfies |e_{n+1}| \leq \frac{M h^2}{2}. This O(h^2) behavior per step establishes the Euler method as a first-order accurate scheme overall, as the local errors accumulate to produce a global error of order O(h). The analysis requires f to be Lipschitz continuous in y (i.e., |f(t, y_1) - f(t, y_2)| \leq K |y_1 - y_2| for some K > 0) to guarantee existence, uniqueness, and boundedness of the solution, alongside the twice continuous differentiability of y.[8][15]
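The O(h^2) one-step behavior is easy to check numerically; the minimal sketch below uses y' = y, whose exact one-step value from y(0) = 1 is e^h, so halving h should roughly quarter the error.

```python
import math

# One-step (local truncation) error of Euler for y' = y, y(0) = 1.
# The ratio error / h^2 should approach y''(0)/2 = 0.5 as h shrinks.
for h in (0.1, 0.05, 0.025, 0.0125):
    euler_step = 1.0 + h * 1.0        # y_1 = y_0 + h f(t_0, y_0) with y_0 = 1
    error = abs(math.exp(h) - euler_step)
    print(f"h = {h:<7} one-step error = {error:.3e}  error/h^2 = {error / h**2:.3f}")
```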
Global truncation error
The global truncation error for the Euler method, denoted E_n = y(t_n) - y_n, measures the discrepancy between the exact solution y(t_n) of the initial value problem and the numerical approximation y_n produced after n iteration steps, where t_n = n h and h is the step size. This error encapsulates the cumulative effect of approximations made throughout the integration over a fixed time interval. Unlike the local truncation error, which is confined to a single step, the global error reflects the overall accuracy of the method after many steps.[17]
Under suitable regularity conditions on the right-hand side function f(t, y) of the ODE y' = f(t, y), including satisfaction of a Lipschitz condition with respect to y with constant L > 0 and a bound |y''(t)| \leq M on the second derivative of the exact solution, the global error admits the bound
|E_n| \leq \frac{M h}{2 L} \left( e^{L t_n} - 1 \right).
This estimate arises from analyzing the error recurrence relation and applying techniques such as the discrete Gronwall inequality, confirming that the global truncation error is of order O(h) as h \to 0 for fixed t_n. The exponential factor e^{L t_n} highlights how the Lipschitz constant amplifies errors over longer integration times.[9]
The propagation of errors in the Euler method occurs because each local truncation error, introduced at a given step, acts as a perturbation to the initial condition for all subsequent steps; due to the inherent sensitivity of ODE solutions—quantified by the Lipschitz constant L—these perturbations accumulate roughly linearly, leading to the O(h) global behavior over O(1/h) steps. The local truncation error per step, of order O(h^2), thus contributes to the overall O(h) degradation when summed across the interval.[17]
A concrete illustration is provided by the test equation y' = -y, y(0) = 1, with exact solution y(t) = e^{-t}. For step size h = 0.1 over [0, 1] (10 steps), the Euler iteration yields y_{n+1} = y_n (1 - h), so y_{10} = 0.9^{10} \approx 0.3487, compared to the exact y(1) \approx 0.3679, giving a global error of approximately 0.019. Halving the step size to h = 0.05 (20 steps) produces y_{20} = 0.95^{20} \approx 0.3585, with error \approx 0.0094, roughly half the previous value, demonstrating the O(h) scaling empirically.[9]
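The same closed form y_N = (1 - h)^N makes the O(h) scaling easy to verify over a range of step sizes; a minimal sketch:

```python
import math

# Global error at t = 1 for y' = -y, y(0) = 1; Euler gives y_N = (1 - h)^N.
# The ratio error / h should settle near a constant, consistent with O(h).
for h in (0.1, 0.05, 0.025, 0.0125):
    N = int(round(1.0 / h))
    error = abs(math.exp(-1.0) - (1.0 - h) ** N)
    print(f"h = {h:<7} global error = {error:.3e}  error/h = {error / h:.3f}")
```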
Numerical Properties
Stability analysis
In numerical methods for ordinary differential equations (ODEs), stability refers to the property that small perturbations or errors in the computed solution remain bounded and do not grow without limit as the integration proceeds.[13] For the explicit Euler method, absolute stability is assessed in the complex plane using the scalar test equation y' = \lambda y, where \lambda \in \mathbb{C} represents the eigenvalues of the Jacobian in linearized systems.[13] Applying the Euler method to this equation yields the recurrence y_{n+1} = (1 + h \lambda) y_n, so the method is absolutely stable if the amplification factor satisfies |1 + h \lambda| \leq 1.[13]
This condition defines the region of absolute stability in the z = h \lambda-plane as the closed disk centered at -1 with radius 1, given by
|z + 1| \leq 1.
[18] The region lies in the closed left half-plane, touching the imaginary axis only at the origin, and covers the interval [-2, 0] of the negative real axis.[18]
In stiff ODEs, characterized by widely varying eigenvalues with large negative real parts (large |\lambda| with \mathrm{Re}(\lambda) < 0), the explicit Euler method requires a step size h < 2 / |\lambda|_{\max} to remain within the stability region, often necessitating impractically small steps that prioritize stability over efficiency.[13] This conditional stability limits its applicability to such problems, where implicit methods with larger stability regions are preferred.[19] Germund Dahlquist's second barrier theorem establishes that no explicit linear multistep method, including the forward Euler method, can be A-stable—meaning its stability region cannot encompass the entire left half-plane—whereas certain implicit methods achieve this property.[20]
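The step-size restriction can be observed directly on a stiff scalar test problem; in the minimal sketch below, \lambda = -50 is an arbitrary illustrative choice, so stability requires h \leq 2/50 = 0.04.

```python
# Forward Euler on y' = lam * y, y(0) = 1, for a stable and an unstable step size.
lam = -50.0
for h in (0.03, 0.05):
    y = 1.0
    for _ in range(20):
        y *= 1.0 + h * lam                 # amplification factor per step
    print(f"h = {h}: |1 + h*lam| = {abs(1 + h * lam):.2f}  y_20 = {y:.3e}")
```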
Rounding errors
In computer implementations of the Euler method, rounding errors stem from finite-precision floating-point arithmetic, primarily during the computation and addition of the increment h f(t_n, y_n) to the current approximation y_n. The machine epsilon \epsilon, which quantifies the relative rounding precision, is approximately 2^{-52} \approx 2.22 \times 10^{-16} in IEEE 754 double-precision arithmetic.[21]
Each iteration introduces a local roundoff error bounded by O(\epsilon |y_n|), often modeled as a small perturbation added to the update. Over N = T/h steps to integrate from initial time 0 to final time T, these errors accumulate, yielding a global roundoff error of order O(\epsilon N) = O(\epsilon T / h). For sufficiently small h, this roundoff component surpasses the method's global truncation error of O(h), degrading overall accuracy.[21][22]
The total numerical error can thus be approximated as O(h) + O(\epsilon T / h). To balance these terms and minimize the total error, an optimal step size of roughly h \approx \sqrt{\epsilon T} is selected, ensuring neither dominates excessively.[23]
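The balance can be made explicit with an idealized model of the total error, E(h) = C_1 h + C_2 \epsilon T / h, where C_1 and C_2 are problem-dependent constants (a modeling assumption rather than a sharp bound). Setting the derivative to zero gives
E'(h) = C_1 - \frac{C_2 \epsilon T}{h^2} = 0 \quad \Longrightarrow \quad h^\ast = \sqrt{\frac{C_2 \epsilon T}{C_1}} \propto \sqrt{\epsilon T},
recovering the stated optimal step size up to constants.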
Extensions and Variations
Improved Euler methods
The improved Euler methods, also known as modified Euler methods, enhance the accuracy of the basic Euler method by incorporating additional evaluations of the derivative function to better approximate the solution curve's curvature. These explicit one-step methods achieve second-order accuracy, addressing the first-order limitations of the basic Euler method, which suffers from a global truncation error of O(h).[8]
Heun's method, introduced by Karl Heun in 1900, operates as a predictor-corrector scheme. It first predicts an intermediate value using the Euler step and then corrects it by averaging the slopes at the current and predicted points. The algorithm is given by:
\begin{align*}
y_p &= y_n + h f(t_n, y_n), \\
y_{n+1} &= y_n + \frac{h}{2} \left[ f(t_n, y_n) + f(t_{n+1}, y_p) \right],
\end{align*}
where y_p is the predictor. This method yields a local truncation error of O(h^3) and global error of O(h^2), requiring two function evaluations per step.[24][8][25]
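One step of the scheme translates directly into code; a minimal sketch:

```python
def heun_step(f, t, y, h):
    """One step of Heun's predictor-corrector method (minimal sketch)."""
    k1 = f(t, y)                      # slope at the current point
    y_pred = y + h * k1               # Euler predictor
    k2 = f(t + h, y_pred)             # slope at the predicted point
    return y + (h / 2.0) * (k1 + k2)  # corrector: average of the two slopes
```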
The midpoint method provides another second-order improvement by estimating the slope at the interval's midpoint after a half-step Euler prediction. Its formula is:
y_{n+1} = y_n + h f\left( t_n + \frac{h}{2}, y_n + \frac{h}{2} f(t_n, y_n) \right).
This explicit approach also achieves a local truncation error of O(h^3) and global error of O(h^2), with two function evaluations, offering a balance between simplicity and improved fidelity over the basic Euler method.[8][26]
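The corresponding one-step update, again as a minimal sketch:

```python
def midpoint_step(f, t, y, h):
    """One step of the explicit midpoint method (minimal sketch)."""
    y_half = y + (h / 2.0) * f(t, y)       # half-step Euler predictor
    return y + h * f(t + h / 2.0, y_half)  # full step using the midpoint slope
```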
Both Heun's and midpoint methods average or evaluate slopes across the step to capture curvature more effectively than the basic Euler's tangent approximation, thereby reducing the local truncation error from O(h^2) to O(h^3). While explicit and straightforward to implement, they incur higher computational cost due to the additional function evaluations per step compared to the single evaluation in the basic Euler method.[8][25]
Applications in modern contexts
The Euler method remains a foundational tool in simulating population dynamics, particularly in models like the Lotka-Volterra predator-prey system, where it provides simple numerical approximations for both deterministic and stochastic variants to study ecological interactions and stability. In basic electrical circuit analysis, such as RC or RLC networks, the method is applied to solve ordinary differential equations governing voltage and current transients, offering an accessible way to predict system responses under time-varying inputs.[27] Its computational simplicity also makes it an ideal introductory teaching tool in computational science curricula, where it illustrates core concepts of numerical integration before advancing to more sophisticated solvers.[28]
In modern extensions, the fractional Euler method addresses non-integer order derivatives in viscoelastic models, enabling accurate simulations of materials exhibiting memory-dependent behaviors, such as in Euler-Bernoulli beams under dynamic loads.[29] This approach has seen application in 2020s research on anomalous diffusion processes, where fractional variants capture subdiffusive phenomena in complex media like porous structures or biological tissues.[30] For stochastic environments, the Euler-Maruyama method extends the original to stochastic differential equations, playing a key role in finance for option pricing under volatility models like Ornstein-Uhlenbeck processes, where it approximates asset paths for Monte Carlo evaluations.[31]
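As a concrete illustration of the stochastic extension, the following minimal Euler-Maruyama sketch simulates an Ornstein-Uhlenbeck process dX_t = \theta(\mu - X_t)\,dt + \sigma\,dW_t; all parameter values are arbitrary choices for illustration.

```python
import random

# Euler-Maruyama for the Ornstein-Uhlenbeck SDE
#   dX_t = theta * (mu - X_t) dt + sigma * dW_t
# Parameter values below are illustrative assumptions.
theta, mu, sigma = 1.0, 0.0, 0.3
x, h, N = 1.0, 0.01, 1000
random.seed(0)                                # reproducible sample path

for _ in range(N):
    dW = random.gauss(0.0, h ** 0.5)          # Brownian increment ~ N(0, h)
    x += theta * (mu - x) * h + sigma * dW    # drift step plus diffusion step
print(f"X at t = {N * h:.0f}: {x:.4f}")
```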
Recent developments from 2020 to 2025 highlight integrations with machine learning, such as the Deep Euler method, which enhances neural ordinary differential equations (ODEs) by learning local truncation errors to improve accuracy in solving complex ODEs for tasks like time-series forecasting.[32] Due to its low computational overhead, the Euler method is favored in embedded systems for real-time control applications, including hardware-in-the-loop simulations of motors and switched systems, where rapid execution is critical despite potential stability trade-offs.[33] In practice, however, more accurate adaptive solvers like MATLAB's ode45, which employs higher-order Runge-Kutta integration, are often used to refine solutions and mitigate the accumulation of global errors over long simulations.[28]