
Initial value problem

An initial value problem (IVP) is a mathematical formulation consisting of an ordinary differential equation (ODE) together with one or more initial conditions that specify the value of the solution and its derivatives at a designated initial point, typically denoted as t = t_0. For a first-order ODE of the form y' = f(t, y), the initial condition is usually y(t_0) = y_0, while higher-order equations require correspondingly more conditions to determine the constants in the general solution. This setup ensures the problem is well-posed, aiming to find a solution that satisfies both the equation and the initial specifications over some interval containing the initial point.

The theory of IVPs is grounded in results guaranteeing the existence and uniqueness of solutions under suitable conditions on the function f. The Picard–Lindelöf theorem establishes that if f(t, y) is continuous in t and Lipschitz continuous in y on a rectangular domain around (t_0, y_0), then there exists a unique solution to the IVP on some interval |t - t_0| < h. This theorem, proved via successive approximations (Picard iteration), provides a local existence result and forms the basis for analyzing the behavior of solutions, including their maximal intervals of validity where they remain defined and unique. For linear ODEs or those satisfying the Lipschitz condition globally, solutions can extend over the entire real line.

IVPs are ubiquitous in scientific and engineering applications, modeling dynamic systems with known starting states, such as projectile motion in physics, population dynamics in biology, and circuit analysis in electrical engineering. When closed-form solutions are unavailable, numerical methods are essential for approximation; common approaches include the explicit Euler method, which advances the solution using a first-order Taylor expansion, and higher-order Runge–Kutta methods, which improve accuracy by evaluating the derivative at multiple points within each step. These techniques, implemented in software like MATLAB or Python's SciPy library, enable simulations of complex phenomena while respecting stability and error control.

Definition and Formulation

General Form for ODEs

The initial value problem (IVP) for ordinary differential equations (ODEs) is fundamentally concerned with solving a differential equation subject to specified conditions at an initial point. In its most basic scalar form, an IVP consists of a first-order ODE given by y'(t) = f(t, y(t)), together with the initial condition y(t_0) = y_0, where t is the independent variable often interpreted as time, y(t) is the scalar-valued dependent variable representing the state, t_0 is the initial time, y_0 is the initial state value, and f is a given function. This formulation extends naturally to vector-valued functions, where y: \mathbb{R} \to \mathbb{R}^n for some dimension n \geq 1, yielding the system y'(t) = f(t, y(t)), \quad y(t_0) = y_0, with y_0 \in \mathbb{R}^n and f: \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n. Such systems arise in modeling multi-component phenomena, maintaining the same structural principles as the scalar case.

Higher-order ODEs are routinely reformulated as equivalent first-order systems to align with this general IVP structure. For instance, a second-order linear ODE of the form y''(t) + p(t) y'(t) + q(t) y(t) = g(t), with initial conditions y(t_0) = y_0 and y'(t_0) = y_1, can be converted by introducing the vector z(t) = (z_1(t), z_2(t))^T where z_1(t) = y(t) and z_2(t) = y'(t), resulting in the first-order system z'(t) = \begin{pmatrix} 0 & 1 \\ -q(t) & -p(t) \end{pmatrix} z(t) + \begin{pmatrix} 0 \\ g(t) \end{pmatrix}, \quad z(t_0) = \begin{pmatrix} y_0 \\ y_1 \end{pmatrix}. This reduction applies analogously to ODEs of arbitrary order m, transforming them into an m-dimensional first-order system.

In more abstract settings, IVPs appear in infinite-dimensional spaces, such as evolution equations in Banach spaces, where the general form is \frac{dy}{dt} = A y + f(t), \quad y(0) = y_0, with y taking values in a Banach space, A a linear operator (often unbounded), and f a forcing term. This framework accommodates partial differential equations recast as abstract ODEs in function spaces, preserving the initial value specification at t = 0.
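With constant coefficients and zero forcing, this reduction can be carried out directly in code. The following is a minimal sketch, assuming illustrative values p = 0.5, q = 4, initial data y(0) = 1, y'(0) = 0, and SciPy's solve_ivp as the integrator:

```python
# A minimal sketch (assumed coefficients p = 0.5, q = 4, g = 0) reducing
# y'' + p*y' + q*y = 0 to the first-order system z' = [[0, 1], [-q, -p]] z.
from scipy.integrate import solve_ivp

p, q = 0.5, 4.0

def rhs(t, z):
    # z[0] = y, z[1] = y'; matches the matrix form above with g = 0.
    return [z[1], -q * z[0] - p * z[1]]

# Initial vector z(0) = (y_0, y_1) = (1, 0).
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], max_step=0.05)
print(sol.y[0, -1])  # approximate y(10) of the damped oscillator
```

The first component of the solver's output recovers y(t) itself, while the second recovers y'(t), exactly as the substitution z_1 = y, z_2 = y' prescribes.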

Initial Conditions and Variations

In the context of ordinary differential equations (ODEs), the initial condition specifies the value of the solution at a designated starting point, typically formulated as y(t_0) = y_0, where t_0 denotes the initial time and y_0 represents the initial state, which may be a scalar for single equations or a vector for systems. This condition anchors the solution within the family of possible functions satisfying the differential equation, enabling the determination of a unique trajectory forward in time. For systems of ODEs comprising multiple interdependent equations, the initial conditions extend accordingly, such as y_1(t_0) = y_{10} and y_2(t_0) = y_{20} for a two-component system, providing starting values for each dependent variable.

Variations in initial conditions arise to accommodate diverse problem setups. The initial time t_0 need not be zero or any standard value; it can be any point within the domain where the equation is defined, allowing flexibility in modeling phenomena starting from arbitrary moments. In higher-dimensional contexts, such as partial differential equations (PDEs), the initial value problem for ODEs relates to the broader Cauchy problem, where initial data is prescribed on a hypersurface rather than a single point; however, initial value problems remain a specialized subclass confined to ODEs with pointwise initial data.

A key criterion for the formulation of initial value problems is well-posedness, as defined by Jacques Hadamard, which requires that a solution exists for given initial data, that this solution is unique, and that small perturbations in the initial data lead to continuously small changes in the solution. This ensures the problem is practically meaningful, avoiding instabilities or ambiguities in physical or mathematical interpretations. The term "initial value problem" gained prominence in the 19th century through Augustin-Louis Cauchy's foundational work on differential equations, including his 1842 memoir on partial differential equations that emphasized initial data, and Henri Poincaré's contributions to qualitative analysis and celestial mechanics, which highlighted the role of initial conditions in dynamical systems.

Theoretical Foundations

Existence Theorems

The study of existence theorems for initial value problems (IVPs) in ordinary differential equations (ODEs) originated in the late 19th century, addressing whether solutions to equations of the form y' = f(t, y), y(t_0) = y_0 are guaranteed under suitable conditions on f. Giuseppe Peano introduced the first such theorem in 1886, establishing local existence based solely on the continuity of f. This result marked a foundational step, though Peano's initial proof contained gaps later corrected in subsequent works.

Peano's existence theorem states that if f(t, y) is continuous on a rectangular domain R = \{ (t, y) \mid |t - t_0| \leq a, |y - y_0| \leq b \} containing the initial point (t_0, y_0), then there exists h > 0 such that the IVP has at least one solution on the interval [t_0 - h, t_0 + h], where h = \min(a, b/M) and M = \sup_{(t,y) \in R} |f(t,y)|. The proof typically relies on constructing a sequence of polygonal (Euler) approximations or using the integral equation form and compactness arguments, such as the Arzelà–Ascoli theorem, to extract a convergent subsequence. This theorem highlights that continuity alone suffices for existence, without requiring stronger regularity for uniqueness.

Building on Peano's work, Émile Picard in 1890 and Ernst Lindelöf in 1894 developed a refined theorem incorporating a Lipschitz condition, which ensures both local existence and uniqueness, though the existence part aligns closely with Peano's under the additional assumption. The Picard–Lindelöf theorem posits that if f is continuous in t and y on the rectangle R, and Lipschitz continuous in y uniformly in t (i.e., there exists K > 0 such that |f(t, y_1) - f(t, y_2)| \leq K |y_1 - y_2| for all (t, y_1), (t, y_2) \in R), then the IVP admits a unique solution on [t_0 - h, t_0 + h] for some h > 0. The proof employs successive approximations (Picard iterations) converging via the Banach fixed-point theorem applied to the integral equation y(t) = y_0 + \int_{t_0}^t f(s, y(s)) \, ds.

For broader applicability, especially when the Lipschitz condition fails, Carathéodory's theorem relaxes the assumptions to measurability and integrability, accommodating functions that are discontinuous in t. Specifically, if f(t, y) is measurable in t for fixed y, continuous in y for almost all t, and satisfies |f(t, y)| \leq g(t) where g is integrable on some interval around t_0, then a local solution exists in the Carathéodory sense (absolutely continuous y satisfying the integral equation almost everywhere). This framework, developed in the early 20th century, extends Peano's result to non-smooth right-hand sides prevalent in applications like control theory.

Local solutions can often be extended to maximal intervals unless they exhibit finite-time blow-up. If a solution y on [t_0, \tau) remains bounded as t \to \tau^- with \tau < \infty, it can be continued beyond \tau under the existence theorem's conditions; otherwise, the maximal interval of existence is [t_0, \tau). Global existence on [t_0, \infty) follows if f satisfies linear growth bounds, preventing escape to infinity in finite time. Extensions to stochastic IVPs, such as stochastic differential equations driven by Brownian motion, adapt these theorems under analogous Lipschitz or monotonicity conditions on the drift and diffusion coefficients, with post-2000 research emphasizing weak solutions and applications to financial modeling and physics.
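To make finite-time blow-up and maximal intervals concrete, consider y' = y^2, y(0) = 1, whose exact solution y(t) = 1/(1 - t) exists only on [0, 1). A minimal numerical sketch (the test problem is an illustrative choice, not drawn from the sources):

```python
# A minimal sketch of finite-time blow-up: y' = y^2, y(0) = 1 has exact
# solution y(t) = 1/(1 - t), so the maximal interval of existence is [0, 1).
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, y: y**2, (0.0, 2.0), [1.0], rtol=1e-10)
# An adaptive solver cannot integrate past the singularity: it typically
# halts with status -1 (step size underflow) as t approaches 1.
print(sol.status, sol.t[-1])
```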

Uniqueness and Stability

The Picard–Lindelöf theorem establishes uniqueness for solutions of the initial value problem y'(t) = f(t, y(t)), y(t_0) = y_0, where f is continuous and satisfies a Lipschitz condition in the y-variable on a rectangular domain around (t_0, y_0). Specifically, if there exists L > 0 such that |f(t, y_1) - f(t, y_2)| \leq L |y_1 - y_2| for all (t, y_1), (t, y_2) in the domain, then there is a unique solution on some interval [t_0 - h, t_0 + h]. The proof relies on transforming the ODE into an equivalent integral equation y(t) = y_0 + \int_{t_0}^t f(s, y(s)) \, ds and applying the Banach fixed-point theorem in the space of continuous functions on [t_0 - h, t_0 + h], where the associated integral operator is a contraction under the Lipschitz assumption.

Weaker conditions for uniqueness relax the global Lipschitz requirement. Osgood's criterion guarantees uniqueness if f is continuous and satisfies |f(t, y_1) - f(t, y_2)| \leq \omega(|y_1 - y_2|), where \omega is a continuous, strictly increasing function with \omega(0) = 0 such that \int_0^\epsilon \frac{du}{\omega(u)} = \infty for some \epsilon > 0. This integrability condition on 1/\omega prevents "funneling" of solutions, ensuring they do not intersect.

Non-uniqueness arises when the Lipschitz or Osgood conditions fail, even when Peano's existence theorem guarantees at least one solution for continuous f. For example, the IVP y' = 3 y^{2/3}, y(0) = 0 has the zero solution y(t) = 0 and infinitely many others, including y(t) = t^3 for t \geq 0 (and zero elsewhere), as f(t, y) = 3 y^{2/3} is continuous but not Lipschitz near y = 0. In the Carathéodory setting, where f is measurable in t and continuous in y, Okamura's theorem (1942) provides uniqueness if solutions exhibit "variational stability," meaning small perturbations in initial data lead to solutions that remain close in a suitable sense.

Stability concepts complement uniqueness by analyzing solution behavior under perturbations. Continuous dependence on initial data holds under the Picard–Lindelöf assumptions: if y_n(t) solves the IVP with initial condition y_n(t_0) = y_0 + \delta_n where \delta_n \to 0, then \| y_n(t) - y(t) \| \to 0 uniformly on compact intervals within the existence domain. For autonomous systems y' = f(y) with equilibrium f(y^*) = 0, Lyapunov stability requires that for every neighborhood U of y^*, there exists a neighborhood V such that solutions starting in V remain in U for all t \geq 0; this is characterized by the existence of a Lyapunov function V(y) that is positive definite and non-increasing along trajectories.

Extensions to stochastic differential equations address uniqueness in noisy settings. The Yamada–Watanabe theorem (1971) establishes that pathwise uniqueness (almost sure uniqueness of sample paths) combined with weak existence implies strong existence and uniqueness in law for Itô SDEs under conditions like Hölder continuity with exponent 1/2 in the diffusion coefficient.
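The practical consequence of a failed Lipschitz condition can be observed numerically. In the following minimal sketch (the perturbation size 10^{-12} is an illustrative assumption), an infinitesimally perturbed initial state for y' = 3 y^{2/3} diverges from the zero solution toward the cubic branch:

```python
# A minimal sketch: y' = 3*y**(2/3) with y(0) = 0 vs. a tiny perturbation.
# With y(0) = 0 the numerical solution stays at zero, but any eps > 0 sends
# it toward the cubic branch y ~ t**3, reflecting non-uniqueness at y = 0.
from scipy.integrate import solve_ivp

rhs = lambda t, y: 3.0 * abs(y) ** (2.0 / 3.0)
for y0 in (0.0, 1e-12):
    sol = solve_ivp(rhs, (0.0, 2.0), [y0], rtol=1e-10, atol=1e-14)
    print(y0, sol.y[0, -1])  # ~0 for y0 = 0; near t**3 = 8 for y0 > 0
```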

Solution Methods

Analytical Approaches

Analytical approaches to solving initial value problems (IVPs) for ordinary differential equations (ODEs) seek closed-form exact solutions, typically applicable to equations of specific forms. These methods rely on transforming the equation into an integrable form, often through algebraic manipulation or recognition of particular structures in the function f defining the derivative. While powerful for simple cases, they are limited to equations where the right-hand side permits explicit integration.

One fundamental technique is separation of variables, applicable when the ODE can be written as y' = f(t) g(y), where the dependence on the independent variable t and dependent variable y can be isolated. The method involves rearranging to \frac{dy}{g(y)} = f(t) \, dt, followed by integration: \int \frac{dy}{g(y)} = \int f(t) \, dt + C. Applying the initial condition y(t_0) = y_0 determines the constant C, yielding the explicit or implicit solution. This approach, a cornerstone of exact solvability, traces its systematic use to the early development of calculus.

For linear first-order ODEs of the form y' + p(t) y = q(t), the integrating factor method provides an exact solution. An integrating factor \mu(t) = \exp\left( \int p(t) \, dt \right) is constructed, and multiplying through the equation gives \mu(t) y' + \mu(t) p(t) y = \mu(t) q(t), whose left-hand side is the derivative of the product, \frac{d}{dt} [\mu(t) y]. Integrating both sides yields y(t) = \frac{1}{\mu(t)} \left[ \int_{t_0}^t \mu(s) q(s) \, ds + y_0 \mu(t_0) \right]. This technique, introduced by Leonhard Euler in the 18th century, reduces the equation to an exact form amenable to integration.

Picard's iteration method offers a successive approximation scheme for proving existence and constructing solutions to IVPs y' = f(t, y), y(t_0) = y_0, particularly under a Lipschitz condition on f with respect to y, ensuring convergence. Starting with an initial guess y_0(t) = y_0, subsequent iterates are defined by y_{n+1}(t) = y_0 + \int_{t_0}^t f(s, y_n(s)) \, ds. The sequence \{ y_n(t) \} converges to the unique solution on an interval determined by the Lipschitz constant and bounds on f. Developed by Émile Picard in the late 19th century, this method also ties into existence and uniqueness theorems by demonstrating contraction in a suitable function space.

Exact solutions extend to specific nonlinear forms, such as autonomous equations y' = f(y), which are separable and solvable by integrating \int \frac{dy}{f(y)} = t + C, with the initial condition fixing C. Similarly, for equations expressible as M(t, y) \, dt + N(t, y) \, dy = 0, exactness holds if \frac{\partial M}{\partial y} = \frac{\partial N}{\partial t}, allowing integration to find a potential function \Psi(t, y) = C whose level sets give the solution. These criteria identify integrable cases without additional integrating factors. Comprehensive catalogs of such solvable forms are detailed in handbooks of exact solutions.

Despite their elegance, analytical methods are feasible only for ODEs with simple, structured right-hand sides f(t, y), as explicit integration often fails for complex or nonlinear dependencies. Euler's 18th-century advancements, including integrating factors and early systematic treatments of ODEs, laid the groundwork but highlighted the need for alternatives in more general cases.
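The Picard iterates described above can also be generated symbolically. The following is a minimal sketch using SymPy, with the test problem y' = y, y(0) = 1 chosen for illustration; its iterates are the Taylor partial sums of the exact solution e^t:

```python
# A minimal sketch of Picard iteration y_{n+1}(t) = y0 + int_{t0}^t f(s, y_n(s)) ds.
import sympy as sp

t, s = sp.symbols("t s")

def picard_iterates(f, t0, y0, n):
    """Return the iterates y_0, ..., y_n for y' = f(t, y), y(t0) = y0."""
    y = sp.sympify(y0)  # y_0(t) = y0, the constant initial guess
    iterates = [y]
    for _ in range(n):
        y = y0 + sp.integrate(f(s, y.subs(t, s)), (s, t0, t))
        iterates.append(sp.expand(y))
    return iterates

# Example: y' = y, y(0) = 1; prints 1, 1 + t, 1 + t + t**2/2, ...
for k, yk in enumerate(picard_iterates(lambda u, v: v, 0, 1, 4)):
    print(k, yk)
```

Each pass applies the integral operator once, so the printed sequence converges toward the fixed point, here the exponential series.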

Numerical Techniques

Numerical techniques are essential for approximating solutions to initial value problems (IVPs) when analytical methods are infeasible, particularly for nonlinear or complex ordinary differential equations (ODEs). These methods discretize the continuous problem into a sequence of algebraic equations, balancing accuracy, stability, and computational efficiency. Common approaches include single-step methods like Euler's and Runge-Kutta, which use information from the current point to advance the solution, and multistep methods that incorporate previous points for higher efficiency. Error analysis plays a crucial role, distinguishing local truncation errors (per step) from global errors (accumulated over the interval), with stability considerations vital for stiff systems where rapid changes demand implicit formulations.

Euler's method, one of the simplest explicit single-step techniques, approximates the solution by advancing from y_n to y_{n+1} using the forward difference: y_{n+1} = y_n + h f(t_n, y_n), where h is the step size and f(t, y) = \frac{dy}{dt}. This method derives from the tangent line approximation and has a local truncation error of O(h^2), leading to a global error of O(h) under suitable assumptions. While easy to implement, its first-order accuracy often requires small h for reliability, making it suitable for introductory purposes but less efficient for production use.

Runge-Kutta methods improve accuracy through multiple internal stages, evaluating the derivative at intermediate points to better approximate the solution increment over each step. The classical fourth-order Runge-Kutta (RK4) method, developed around 1900, computes four stages k_i: \begin{align*} k_1 &= h f(t_n, y_n), \\ k_2 &= h f\left(t_n + \frac{h}{2}, y_n + \frac{k_1}{2}\right), \\ k_3 &= h f\left(t_n + \frac{h}{2}, y_n + \frac{k_2}{2}\right), \\ k_4 &= h f(t_n + h, y_n + k_3), \end{align*} then updates y_{n+1} = y_n + \frac{h}{6} (k_1 + 2k_2 + 2k_3 + k_4). This can be represented compactly via the Butcher tableau:
0    |
1/2  | 1/2
1/2  | 0    1/2
1    | 0    0    1
-----+---------------------
     | 1/6  1/3  1/3  1/6
The method achieves a local truncation error of O(h^5) and global error of O(h^4), offering a significant improvement over Euler's while remaining explicit and straightforward. It stems from foundational work by Carl Runge in 1895 and Wilhelm Kutta in 1901, who systematized higher-order approximations for numerical integration.

Multistep methods leverage solutions from multiple prior steps to reduce function evaluations, enhancing efficiency for nonstiff problems. The explicit Adams-Bashforth methods predict the next value using past derivatives; for the second-order variant (AB2), y_{n+1} = y_n + \frac{h}{2} (3 f(t_n, y_n) - f(t_{n-1}, y_{n-1})), with local truncation error O(h^3) and global error O(h^2). Implicit counterparts like the Adams-Moulton methods require solving a nonlinear equation at each step but offer better stability; the one-step Adams-Moulton method (the trapezoidal rule) is y_{n+1} = y_n + \frac{h}{2} (f(t_{n+1}, y_{n+1}) + f(t_n, y_n)). These methods, originating from Bashforth and Adams' 1883 work on capillary action computations, are often paired in predictor-corrector schemes for practical implementation. Higher-order variants up to order 12 exist, but orders beyond 6 can exhibit instability without careful step control.

To mitigate accumulation of errors, modern implementations incorporate adaptive step sizing based on local error estimates, typically requiring the error per step to stay below a user-specified tolerance. Embedded Runge-Kutta methods, such as the Dormand-Prince pair from 1980, compute two approximations of differing orders (e.g., fourth and fifth) from the same stages, using their difference to estimate and control the local error: if excessive, reduce h and retry; if comfortably low, increase it. This enables efficient traversal of the integration interval while maintaining accuracy.

For stiff equations, where eigenvalues have widely varying scales, explicit methods like forward Euler suffer instability unless h is impractically small; implicit methods like backward Euler, y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}), offer larger stability regions via A-stability, though at the cost of solving implicit systems per step. Stability is analyzed through regions in the complex plane where the numerical solution mimics the exact one's boundedness.

Software libraries widely adopt these techniques for robust IVP solving. MATLAB's ode45 solver, for instance, implements the Dormand-Prince embedded Runge-Kutta (4,5) pair with adaptive stepping and continuous output via dense interpolation, making it suitable for medium-order nonstiff problems and handling tolerances from 10^{-14} to 10^{-6} efficiently. Such tools automate error control and method selection, bridging theoretical algorithms to practical computations.
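The single-step methods above are simple to implement directly. The following minimal sketch compares forward Euler and classical RK4 on the illustrative test problem y' = -2ty, y(0) = 1, with exact solution e^{-t^2}; the observed errors shrink roughly like O(h) and O(h^4), respectively:

```python
# A minimal sketch of forward Euler vs. classical RK4; the test problem and
# step sizes are illustrative choices, not taken from the text.
import math

def euler_step(f, t, y, h):
    # Forward Euler: first-order accurate, per the O(h) global error above.
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    # Classical fourth-order Runge-Kutta, matching the tableau above.
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(step, f, t0, y0, t_end, h):
    t, y = t0, y0
    for _ in range(round((t_end - t0) / h)):
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: -2.0 * t * y
exact = math.exp(-1.0)  # y(1) of the exact solution exp(-t**2)
for h in (0.1, 0.05, 0.025):
    e = abs(integrate(euler_step, f, 0.0, 1.0, 1.0, h) - exact)
    r = abs(integrate(rk4_step, f, 0.0, 1.0, 1.0, h) - exact)
    print(f"h={h}: Euler error {e:.2e}, RK4 error {r:.2e}")
```

Halving h roughly halves the Euler error but cuts the RK4 error by about a factor of sixteen, matching the stated orders of convergence.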

Examples and Applications

Basic Examples

A fundamental example of an initial value problem (IVP) arises in modeling exponential growth, such as in population dynamics where the rate of change is proportional to the current value. The IVP is formulated as \frac{dy}{dt} = k y, \quad y(0) = y_0 where k > 0 is the growth rate constant. This separable equation can be solved using the method of separation of variables, yielding the analytical solution y(t) = y_0 e^{k t}. For illustrative purposes, consider k = 0.85 and y_0 = 19, so y(t) = 19 e^{0.85 t}. To verify, substitute into the initial condition: at t = 0, y(0) = 19 e^{0} = 19 = y_0. Differentiating the solution gives y'(t) = 19 \cdot 0.85 e^{0.85 t} = 0.85 y(t), confirming it satisfies the differential equation.

Another basic IVP is the simple harmonic oscillator, which describes periodic motion like a mass on a spring without damping. The second-order IVP is \frac{d^2 y}{dt^2} + \omega^2 y = 0, \quad y(0) = A, \quad y'(0) = 0 where \omega > 0 is the angular frequency. The general solution is a linear combination of sine and cosine functions, and applying the initial conditions yields y(t) = A \cos(\omega t). Verification confirms the initial conditions: y(0) = A \cos(0) = A and y'(t) = -A \omega \sin(\omega t), so y'(0) = 0. The second derivative is y''(t) = -A \omega^2 \cos(\omega t) = -\omega^2 y(t), satisfying the equation.

An instructive example highlighting non-uniqueness is the first-order IVP \frac{dy}{dt} = \sqrt{|y|}, \quad y(0) = 0. This admits multiple solutions, including the trivial solution y(t) = 0 for all t and the non-trivial solution y(t) = \left(\frac{t}{2}\right)^2 for t \geq 0 (with y(t) = 0 for t < 0 to satisfy the initial condition). Both solutions satisfy the initial condition: y(0) = 0. For the trivial solution, y'(t) = 0 = \sqrt{|0|}. For the non-trivial solution, y'(t) = \frac{t}{2} = \sqrt{|y(t)|} for t > 0, and at t = 0, the derivative is 0, matching the right-hand side. This demonstrates existence via Peano's theorem but lack of uniqueness due to the failure of the Lipschitz condition at y = 0.

For systems of IVPs, a basic example is the simplified predator-prey model from the Lotka–Volterra equations, where x(t) represents prey population and y(t) predators: \frac{dx}{dt} = a x - b x y, \quad \frac{dy}{dt} = -c y + d x y, \quad x(0) = x_0, \quad y(0) = y_0 with positive constants a, b, c, d. The solution curve satisfies the initial conditions by construction: x(0) = x_0 and y(0) = y_0. While no elementary closed-form solution exists, the system exhibits periodic orbits around the equilibrium (c/d, a/b) for appropriate initial values.
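Because the predator-prey system has no elementary closed form, it is usually integrated numerically. A minimal sketch using SciPy's solve_ivp, with illustrative parameter values and initial populations (not drawn from the text):

```python
# A minimal sketch of the Lotka-Volterra IVP above; a, b, c, d and the
# initial populations are assumed illustrative values.
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.1, 1.5, 0.075

def lotka_volterra(t, z):
    x, y = z  # x = prey, y = predators
    return [a * x - b * x * y, -c * y + d * x * y]

sol = solve_ivp(lotka_volterra, (0.0, 15.0), [10.0, 5.0], max_step=0.01)
print(sol.y[:, -1])  # populations at t = 15; orbits cycle around (c/d, a/b)
```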

Practical Applications

Initial value problems (IVPs) are fundamental in modeling dynamical systems across physics, where Newton's second law of motion often leads to second-order equations of the form m \frac{d^2 \mathbf{r}}{dt^2} = \mathbf{F}(t, \mathbf{r}, \frac{d\mathbf{r}}{dt}), with initial position and velocity specifying the conditions. A classic example is projectile motion under constant gravity, ignoring air resistance, where the vertical displacement satisfies \frac{d^2 y}{dt^2} = -g, with initial conditions y(0) = 0 and \frac{dy}{dt}(0) = v_0 \sin \theta; the exact solution is y(t) = v_0 \sin \theta \, t - \frac{1}{2} g t^2, which predicts the trajectory and time of flight for applications like ballistics or satellite launches.

In electrical engineering, IVPs arise in the analysis of RLC circuits, governed by the second-order equation L \frac{d^2 q}{dt^2} + R \frac{dq}{dt} + \frac{1}{C} q = V(t), where q(t) is the charge on the capacitor, L is the inductance, R is the resistance, C is the capacitance, and V(t) is the applied voltage, solved with initial charge q(0) and initial current q'(0) = I(0). This formulation enables the prediction of transient behaviors, such as oscillations in series circuits, which is crucial for designing filters, oscillators, and power systems.

Biological systems frequently employ first-order IVPs for population dynamics, exemplified by the logistic equation \frac{dy}{dt} = r y \left(1 - \frac{y}{K}\right), with initial population y(0) = y_0, where r is the intrinsic growth rate and K is the carrying capacity; this models S-shaped growth curves observed in microbial cultures or wildlife populations limited by resources. The solution y(t) = \frac{K y_0 e^{r t}}{K + y_0 (e^{r t} - 1)} provides insights into equilibrium states and extinction risks, informing ecological management and epidemiology.

In engineering control systems, linear IVPs describe feedback loops, where the state evolution follows equations like \frac{d\mathbf{x}}{dt} = A \mathbf{x} + B u, with initial state \mathbf{x}(0) and control input u(t), enabling stabilization of processes such as robotic arms or aircraft autopilots through proportional-integral-derivative (PID) controllers. Stiff IVPs emerge in chemical kinetics, particularly in enzyme reactions modeled by the Michaelis-Menten mechanism, which produces systems of ODEs with widely varying timescales due to rapid binding and slow catalysis, requiring specialized solvers to simulate reaction rates accurately in pharmaceutical design.

Since 2018, IVPs have gained prominence in machine learning through neural ordinary differential equations (Neural ODEs), which parameterize continuous-depth models as \frac{dz}{dt} = f(z(t), t, \theta), with initial hidden state z(0), allowing differentiable solvers to train on time-series data for tasks like time-series modeling and generative modeling, offering memory-efficient alternatives to discrete-layer networks. In practice, these applications often rely on numerical methods to approximate solutions when analytical forms are unavailable.
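As a simple consistency check for such models, the logistic IVP can be integrated numerically and compared against its closed-form solution. A minimal sketch, assuming illustrative values of r, K, and y_0:

```python
# A minimal sketch verifying the logistic IVP against its closed form;
# r, K, y0 are assumed illustrative values, not taken from the text.
import numpy as np
from scipy.integrate import solve_ivp

r, K, y0 = 0.5, 100.0, 5.0
sol = solve_ivp(lambda t, y: r * y * (1 - y / K), (0.0, 20.0), [y0],
                rtol=1e-8, atol=1e-10)

t = sol.t
exact = K * y0 * np.exp(r * t) / (K + y0 * (np.exp(r * t) - 1))
print(np.max(np.abs(sol.y[0] - exact)))  # small residual vs. exact solution
```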
