Picard–Lindelöf theorem

The Picard–Lindelöf theorem, also known as Picard's existence and uniqueness theorem or the Cauchy–Lipschitz theorem, is a foundational result in the theory of ordinary differential equations that guarantees the local existence and uniqueness of solutions to first-order initial value problems when the right-hand side satisfies appropriate continuity and Lipschitz conditions. Formally, consider the initial value problem y'(t) = f(t, y(t)), y(t_0) = y_0, where f is defined on a rectangular domain D = [a, b] \times [c, d] containing the point (t_0, y_0). The theorem asserts that if f is continuous on D and Lipschitz continuous with respect to the second variable y (i.e., there exists a constant K \geq 0 such that |f(t, y_1) - f(t, y_2)| \leq K |y_1 - y_2| for all (t, y_1), (t, y_2) \in D), then there exists a unique continuously differentiable solution y(t) defined on some interval [t_0 - h, t_0 + h] with h > 0, such that y(t) remains within D and satisfies the equation and initial condition.

The Lipschitz condition is what secures uniqueness; it is often verified by checking that the partial derivative \partial f / \partial y exists and is continuous (hence bounded) on D. Without the Lipschitz requirement, existence may still hold by Peano's theorem, but uniqueness can fail, as in cases where multiple solutions emanate from the initial point.

The proof relies on reformulating the differential equation as the equivalent integral equation y(t) = y_0 + \int_{t_0}^t f(s, y(s)) \, ds and applying the Banach fixed-point theorem in the complete metric space of continuous functions on the interval with the supremum norm. Starting from the initial guess y_0(t) \equiv y_0, successive Picard iterates are defined by y_{n+1}(t) = y_0 + \int_{t_0}^t f(s, y_n(s)) \, ds; the Lipschitz condition makes the integral operator a contraction mapping for sufficiently small h (specifically, when h K < 1), ensuring the sequence converges uniformly to a unique fixed point, which is the desired solution. This approach not only proves existence and uniqueness but also provides a constructive method for approximating solutions numerically.

The theorem generalizes to systems of first-order ODEs and has profound implications for the qualitative analysis of dynamical systems, stability theory, and applications in physics and engineering, where it underpins the predictability of solutions to models such as population dynamics or electrical circuits. Historically, the result builds on 19th-century work of Augustin-Louis Cauchy and Rudolf Lipschitz; Émile Picard (1856–1941) formalized the iteration method around 1890 to establish existence and uniqueness, while Ernst Lindelöf (1870–1946) contributed extensions to more general settings, leading to the combined naming.

Theorem Statement

Local Existence and Uniqueness

The Picard–Lindelöf theorem establishes sufficient conditions for the local existence and uniqueness of solutions to initial value problems for first-order ordinary differential equations. Consider the initial value problem y'(t) = f(t, y(t)), \quad y(t_0) = y_0, where f: D \to \mathbb{R} and D \subseteq \mathbb{R}^2 is an open domain containing the point (t_0, y_0). Suppose there exist a > 0 and b > 0 such that the closed rectangle R = \{ (t, y) \in \mathbb{R}^2 : |t - t_0| \leq a, \, |y - y_0| \leq b \} is contained in D, and f is continuous on R. Additionally, assume f satisfies the Lipschitz condition with respect to y uniformly on R: there exists a constant K \geq 0 such that |f(t, y_1) - f(t, y_2)| \leq K |y_1 - y_2| for all t \in [t_0 - a, t_0 + a] and all y_1, y_2 \in [y_0 - b, y_0 + b].

Under these assumptions, there exists h > 0 such that the initial value problem has a unique continuously differentiable solution y defined on the interval I = [t_0 - h, t_0 + h]. One admissible value is h = \min\left( a, \frac{b}{M}, \frac{\alpha}{K} \right) for any fixed \alpha \in (0, 1), where M = \sup_{(t,y) \in R} |f(t,y)| (with the convention b/M = \infty when M = 0, in which case f vanishes on R and the constant function y \equiv y_0 solves the problem). The constraint h \leq b/M guarantees that the graph of the solution remains inside the rectangle R over I, since the increment in y over any subinterval of length at most h is bounded by M h \leq b, while h \leq \alpha/K < 1/K makes the Picard integral operator a strict contraction.

The theorem's local nature implies that the solution can be extended beyond the initial interval I as long as it remains in the domain D where the hypotheses hold, leading to a maximal interval of existence (T_-, T_+) with t_0 \in (T_-, T_+), T_- \leq t_0 - h, and T_+ \geq t_0 + h, on which the solution is unique but may cease to exist at the endpoints if it approaches the boundary of D or becomes unbounded. Locally, the solution is constructed via Picard iteration, which converges to the unique fixed point of the associated integral operator; the maximal solution is obtained by piecing such local solutions together.
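As a concrete illustration, the following minimal sketch computes the guaranteed half-width h for the initial value problem y' = y^2, y(0) = 1; the rectangle sizes a = b = 1 and the factor \alpha = 1/2 are illustrative choices made here, not part of the theorem.

```python
# Sketch: guaranteed half-width h = min(a, b/M, alpha/K) for y' = y^2, y(0) = 1,
# on the rectangle |t| <= 1, |y - 1| <= 1. All numeric choices are illustrative.

a, b = 1.0, 1.0      # rectangle half-widths around (t0, y0) = (0, 1)
alpha = 0.5          # any fixed alpha < 1 makes the Picard operator a contraction

M = (1.0 + b) ** 2   # sup of |f(t, y)| = y^2 over |y - 1| <= b is (1 + b)^2 = 4
K = 2.0 * (1.0 + b)  # sup of |df/dy| = |2y| over the same range is 2(1 + b) = 4

h = min(a, b / M, alpha / K)
print(M, K, h)       # 4.0, 4.0, h = min(1, 0.25, 0.125) = 0.125
```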

Lipschitz Condition

The Lipschitz condition plays a central role in the Picard–Lindelöf theorem by ensuring the uniqueness of solutions to the initial value problem y' = f(t, y), y(t_0) = y_0, where f: D \to \mathbb{R}^n and D \subseteq \mathbb{R} \times \mathbb{R}^n is open. The function f is said to satisfy a local Lipschitz condition with respect to y if, for every compact set C \subset D, there exists a constant K > 0 such that |f(t, y_1) - f(t, y_2)| \leq K |y_1 - y_2| for all (t, y_1), (t, y_2) \in C. This bound, uniform in t, controls how much f can vary as y changes, and it typically holds locally around the initial point (t_0, y_0). Unlike mere continuity, which suffices for existence but allows multiple solutions, continuity combined with the Lipschitz condition guarantees both existence and uniqueness. Peano's existence theorem establishes that continuous f ensures at least one local solution, but without the Lipschitz restriction, solutions may not be unique. The stricter Lipschitz requirement prevents distinct solutions from pulling apart by bounding their rate of separation linearly in their separation.

A practical way to verify the Lipschitz condition is to examine the partial derivative \partial f / \partial y; if f is continuously differentiable and \| \partial f / \partial y \| is bounded on a compact set, the mean value theorem yields a Lipschitz constant equal to that bound. For instance, f(y) = y^2 satisfies the condition locally on bounded domains, as its derivative 2y is bounded there, but fails globally on \mathbb{R} because |y_1^2 - y_2^2| = |y_1 + y_2| \, |y_1 - y_2| and the factor |y_1 + y_2| is unbounded, so no single K works. Linear functions f(y) = a y + b, however, are globally Lipschitz with constant |a|. By bounding the difference in velocities linearly in the difference of states, the Lipschitz condition forces any two candidate solutions through the same initial point to coincide locally: by Gronwall's inequality, their separation can grow at most exponentially from an initial separation of zero, hence remains zero. This quantitative control underpins the contraction mapping argument in proofs of the theorem.
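The derivative-bound criterion lends itself to a quick numerical check. The sketch below estimates a local Lipschitz constant for f(y) = y^2 on [-r, r] by densely sampling |f'(y)| = |2y|; dense sampling is a heuristic estimate of the supremum rather than a rigorous bound, and the helper name and radius are illustrative.

```python
# Sketch: estimating a local Lipschitz constant for f(y) = y^2 on [-r, r] via the
# derivative bound K = sup |f'(y)| = sup |2y|. Dense sampling is a heuristic
# estimate of the supremum, not a rigorous bound; names and values are illustrative.
import numpy as np

def lipschitz_estimate(f_prime, lo, hi, samples=100_001):
    """Estimate sup |f'(y)| on [lo, hi] by dense sampling."""
    ys = np.linspace(lo, hi, samples)
    return float(np.max(np.abs(f_prime(ys))))

r = 5.0
K_local = lipschitz_estimate(lambda y: 2.0 * y, -r, r)
print(K_local)  # ~10.0 = 2r: valid on [-5, 5], but grows with r, so no global K exists
```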

Proof Techniques

Outline of Successive Approximations

The method of successive approximations, introduced by Émile Picard, offers an intuitive framework for demonstrating the existence and uniqueness of solutions to the initial value problem y' = f(t, y), y(t_0) = y_0, by iteratively refining an initial guess until it converges to the solution. The process begins with the zeroth approximation y_0(t) \equiv y_0, a constant function matching the initial condition. Subsequent approximations are generated recursively via the integral form of the equation: y_{n+1}(t) = y_0 + \int_{t_0}^t f(s, y_n(s)) \, ds, \quad n = 0, 1, 2, \dots The iterates are continuous functions defined on a bounded interval [t_0 - h, t_0 + h]. The sequence \{ y_n \} can be viewed as repeated applications of the operator T(y)(t) = y_0 + \int_{t_0}^t f(s, y(s)) \, ds, acting on the space of continuous functions on the interval. Under the assumption that f satisfies a Lipschitz condition with respect to y (with constant K), the operator T is a contraction in a suitable complete metric space, provided h is chosen small enough that the contraction constant q = K h is less than 1. Intuitively, starting from the constant function y_0(t) \equiv y_0, the iterates y_n = T^n(y_0) draw closer together, converging uniformly to a limit function y that is the unique fixed point of T, satisfying T(y) = y. This fixed point solves the equivalent integral equation, and by differentiation, it fulfills the original differential equation and initial condition.
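The recursion is mechanical enough to run symbolically. The minimal sketch below implements the generic iteration y_{n+1}(t) = y_0 + \int_{t_0}^t f(s, y_n(s)) \, ds with sympy and applies it to the illustrative problem y' = t y, y(0) = 1, whose iterates are partial sums of the series for e^{t^2/2}; the helper name and the example ODE are choices made here, not part of the method.

```python
# Sketch: generic symbolic Picard iteration y_{n+1}(t) = y0 + int_{t0}^t f(s, y_n(s)) ds.
# The helper name and the example ODE are illustrative choices.
import sympy as sp

t, s = sp.symbols("t s")

def picard_iterates(f, t0, y0, n):
    """Return the Picard iterates y_0, ..., y_n as sympy expressions in t."""
    iterates = [sp.sympify(y0)]                    # y_0(t) = y0, a constant function
    for _ in range(n):
        integrand = f(s, iterates[-1].subs(t, s))  # f(s, y_n(s))
        iterates.append(sp.expand(y0 + sp.integrate(integrand, (s, t0, t))))
    return iterates

# Example: y' = t*y, y(0) = 1; the iterates are 1, 1 + t^2/2, 1 + t^2/2 + t^4/8, ...,
# the partial sums of the series for e^{t^2/2}.
for k, yk in enumerate(picard_iterates(lambda s_, y: s_ * y, 0, 1, 3)):
    print(k, yk)
```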

Rigorous Proof via Contraction Mapping

Consider the initial value problem \mathbf{y}'(t) = \mathbf{f}(t, \mathbf{y}(t)), \quad \mathbf{y}(t_0) = \mathbf{y}_0, where \mathbf{y}: I \to \mathbb{R}^n and \mathbf{f}: D \to \mathbb{R}^n is continuous on a domain D containing the rectangle R = \{(t, \mathbf{y}) : |t - t_0| \leq a, |\mathbf{y} - \mathbf{y}_0| \leq b\}, Lipschitz continuous in the second variable with constant K > 0, and bounded by M > 0 on R (i.e., |\mathbf{f}(t, \mathbf{y})| \leq M). Let I_h = [t_0 - h, t_0 + h] with h > 0 to be specified, and denote by C(I_h, \mathbb{R}^n) the Banach space of continuous functions \mathbf{y}: I_h \to \mathbb{R}^n equipped with the supremum norm \|\mathbf{y}\| = \sup_{t \in I_h} |\mathbf{y}(t)|; this space is complete because uniform limits of continuous functions are continuous. Define the closed ball B = \{\mathbf{y} \in C(I_h, \mathbb{R}^n) : \|\mathbf{y} - \mathbf{y}_0\| \leq b \}, where \mathbf{y}_0 also denotes the constant function with value \mathbf{y}_0; as a closed subset of a complete space, B is itself a complete metric space.

Consider the integral operator T: C(I_h, \mathbb{R}^n) \to C(I_h, \mathbb{R}^n) given by (T\mathbf{y})(t) = \mathbf{y}_0 + \int_{t_0}^t \mathbf{f}(s, \mathbf{y}(s)) \, ds for \mathbf{y} \in C(I_h, \mathbb{R}^n); note that T\mathbf{y} is indeed continuous, being the integral of a bounded continuous function. First, T maps B into itself provided h \leq b/M, since for \mathbf{y} \in B, |(T\mathbf{y})(t) - \mathbf{y}_0| \leq \left| \int_{t_0}^t |\mathbf{f}(s, \mathbf{y}(s))| \, ds \right| \leq M |t - t_0| \leq M h \leq b. Thus, restrict to h \leq \min\{a, b/M\} so that T: B \to B.

Next, T is a contraction on B. For \mathbf{y}, \mathbf{z} \in B, |(T\mathbf{y})(t) - (T\mathbf{z})(t)| \leq \left| \int_{t_0}^t |\mathbf{f}(s, \mathbf{y}(s)) - \mathbf{f}(s, \mathbf{z}(s))| \, ds \right| \leq K \left| \int_{t_0}^t |\mathbf{y}(s) - \mathbf{z}(s)| \, ds \right| \leq K h \|\mathbf{y} - \mathbf{z}\|, so \|T\mathbf{y} - T\mathbf{z}\| \leq K h \|\mathbf{y} - \mathbf{z}\|. Any h < 1/K makes K h < 1 and hence T a contraction; in total, take h = \min\{a, b/M, \alpha/K\} for a fixed \alpha \in (0, 1), so that the contraction constant is at most \alpha < 1.

By the Banach fixed-point theorem, since B is a complete metric space and T: B \to B is a contraction, there exists a unique fixed point \mathbf{y}^* \in B such that \mathbf{y}^* = T \mathbf{y}^*, i.e., \mathbf{y}^*(t) = \mathbf{y}_0 + \int_{t_0}^t \mathbf{f}(s, \mathbf{y}^*(s)) \, ds. Since the right-hand side is the integral of a continuous function, \mathbf{y}^* is continuously differentiable; differentiating both sides via the fundamental theorem of calculus yields \mathbf{y}^{*\prime}(t) = \mathbf{f}(t, \mathbf{y}^*(t)) for t \in I_h, and evaluating at t = t_0 gives \mathbf{y}^*(t_0) = \mathbf{y}_0. Thus, \mathbf{y}^* solves the initial value problem on I_h.

For uniqueness, suppose \boldsymbol{\phi}: I_h \to \mathbb{R}^n is another solution of the initial value problem. Then \boldsymbol{\phi} satisfies the integral equation \boldsymbol{\phi} = T \boldsymbol{\phi}, and while its graph remains in R one has |\boldsymbol{\phi}(t) - \mathbf{y}_0| \leq M |t - t_0| \leq M h \leq b; a continuity argument shows the graph therefore remains in R throughout I_h, so \boldsymbol{\phi} \in B. By uniqueness of the fixed point, \boldsymbol{\phi} = \mathbf{y}^*. Hence, the solution is unique in C(I_h, \mathbb{R}^n).
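The contraction estimate at the heart of this argument can be watched numerically. The sketch below discretizes T on a grid for the illustrative choice \mathbf{f}(t, y) = \sin y (Lipschitz constant K = 1) with t_0 = 0, \mathbf{y}_0 = 0, applies it to two arbitrary continuous test functions, and checks \|T\mathbf{y} - T\mathbf{z}\| \leq K h \|\mathbf{y} - \mathbf{z}\|; the choice of f, h, grid, and test functions are all assumptions of this sketch.

```python
# Sketch: observing the contraction estimate ||Ty - Tz|| <= K h ||y - z|| for the
# Picard operator with f(t, y) = sin(y) (K = 1), t0 = 0, y0 = 0, on [0, h].
# f, h, the grid, and the test functions are illustrative choices.
import numpy as np

K, h, n = 1.0, 0.5, 2001
ts = np.linspace(0.0, h, n)              # right half [t0, t0 + h] of I_h
y0 = 0.0

def T(y):
    """(Ty)(t) = y0 + int_0^t sin(y(s)) ds, approximated by the trapezoid rule."""
    g = np.sin(y)                        # f(s, y(s)) sampled on the grid
    increments = 0.5 * (g[1:] + g[:-1]) * np.diff(ts)
    return y0 + np.concatenate(([0.0], np.cumsum(increments)))

y, z = np.cos(ts), ts ** 2               # two arbitrary continuous test functions
lhs = np.max(np.abs(T(y) - T(z)))        # sup-norm distance of the images
rhs = K * h * np.max(np.abs(y - z))      # contraction bound K h ||y - z||
print(lhs, "<=", rhs, ":", lhs <= rhs)   # True: T contracts distances by K h = 0.5
```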

Illustrative Examples

Picard Iteration for a Simple ODE

To illustrate the Picard iteration method, consider the initial value problem y' = y, y(0) = 1, defined on \mathbb{R}. Here, the function f(t, y) = y satisfies the global Lipschitz condition with constant K = 1, since |f(t, y_1) - f(t, y_2)| = |y_1 - y_2| \leq 1 \cdot |y_1 - y_2| for all t \in \mathbb{R} and y_1, y_2 \in \mathbb{R}. The Picard iterates are defined by y_0(t) = 1 and y_{n+1}(t) = 1 + \int_0^t y_n(s) \, ds for n \geq 0. The first few iterates are:
  • y_1(t) = 1 + t,
  • y_2(t) = 1 + t + \frac{t^2}{2},
  • y_3(t) = 1 + t + \frac{t^2}{2} + \frac{t^3}{6}.
In general, the n-th iterate is the partial sum of the Taylor series for the exponential function:
y_n(t) = \sum_{k=0}^n \frac{t^k}{k!}.
As n \to \infty, the sequence \{y_n(t)\} converges to y(t) = e^t, pointwise on \mathbb{R} and uniformly on every bounded interval; the limit satisfies the original initial value problem because \frac{d}{dt} e^t = e^t and e^0 = 1. This convergence demonstrates the effectiveness of the iteration in producing the unique solution under the Lipschitz condition. The error between the n-th iterate and the exact solution is the tail of the Taylor series for e^t:
|y_n(t) - e^t| = \left| \sum_{k=n+1}^\infty \frac{t^k}{k!} \right| \leq \frac{|t|^{n+1}}{(n+1)!} e^{|t|},
which follows from the Lagrange form of the remainder in Taylor's theorem, where the (n+1)-th derivative of e^t is bounded by e^{|t|} on the interval between 0 and t. This bound shows that the approximation improves rapidly as n increases, especially for bounded t.
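A short numerical check of this bound follows; the evaluation point t = 1.5 and the values of n are arbitrary illustrative choices.

```python
# Sketch: checking |y_n(t) - e^t| <= |t|^{n+1}/(n+1)! * e^{|t|} for the Picard
# iterates of y' = y, y(0) = 1, which are the partial sums of the series for e^t.
# The test point t = 1.5 and the values of n are illustrative.
import math

def picard_iterate(t, n):
    """n-th Picard iterate: the degree-n partial sum of the exponential series."""
    return sum(t ** k / math.factorial(k) for k in range(n + 1))

t = 1.5
for n in (2, 5, 10):
    err = abs(picard_iterate(t, n) - math.exp(t))
    bound = abs(t) ** (n + 1) / math.factorial(n + 1) * math.exp(abs(t))
    print(n, err, bound, err <= bound)   # the error stays below the Lagrange bound
```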

Case of Non-Uniqueness Without Lipschitz

A classic counterexample illustrating the failure of uniqueness in the Picard–Lindelöf theorem occurs when the right-hand side function is continuous but fails to satisfy the Lipschitz condition. Consider the initial value problem y'(t) = \sqrt{|y(t)|}, \quad y(0) = 0. The function f(y) = \sqrt{|y|} is continuous everywhere, including at y = 0, since \lim_{y \to 0} \sqrt{|y|} = 0 = f(0). However, f(y) is not Lipschitz continuous in any neighborhood of y = 0. To see this, note that the derivative f'(y) = \frac{1}{2\sqrt{|y|}} for y \neq 0 becomes unbounded as y \to 0, implying that the slope of f grows arbitrarily steep near the origin, violating the Lipschitz requirement. Despite the continuity of f, this initial value problem admits multiple solutions. One solution is the trivial constant function y(t) = 0 for all t \in \mathbb{R}, which satisfies y'(t) = 0 = \sqrt{|0|} and the initial condition y(0) = 0. Another solution is y(t) = \begin{cases} \frac{t^2}{4} & t \geq 0, \\ 0 & t < 0. \end{cases} This function is differentiable everywhere: for t < 0, y'(t) = 0 = \sqrt{0}; for t > 0, y'(t) = \frac{t}{2} = \sqrt{\frac{t^2}{4}} = \sqrt{|y(t)|}; and at t = 0, the left derivative is 0 and the right derivative is \lim_{t \to 0^+} \frac{t}{2} = 0, matching \sqrt{|y(0)|} = 0. It also satisfies y(0) = 0. The existence of at least these two distinct solutions demonstrates non-uniqueness for the initial value problem, underscoring the necessity of the Lipschitz condition in the Picard–Lindelöf theorem to ensure a unique solution. In fact, infinitely many solutions exist, as one can construct similar functions that remain zero up to an arbitrary time \tau \geq 0 before following the parabola \left(\frac{t - \tau}{2}\right)^2 for t > \tau, but the two explicit examples suffice to illustrate the point.
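Both candidate solutions can also be checked numerically. The sketch below approximates y' by central differences on a grid and confirms that the zero function and the piecewise parabola each satisfy y' = \sqrt{|y|} up to discretization error; the grid is an illustrative choice.

```python
# Sketch: numerically verifying that two distinct functions solve y' = sqrt(|y|),
# y(0) = 0. Central differences approximate y'; the grid is an illustrative choice.
import numpy as np

ts = np.linspace(-1.0, 1.0, 4001)
dt = ts[1] - ts[0]

y_zero = np.zeros_like(ts)                      # trivial solution y(t) = 0
y_para = np.where(ts >= 0, ts ** 2 / 4.0, 0.0)  # 0 for t < 0, t^2/4 for t >= 0

for label, y in (("zero", y_zero), ("parabola", y_para)):
    residual = np.max(np.abs(np.gradient(y, dt) - np.sqrt(np.abs(y))))
    print(label, residual)  # both residuals ~ 0: two distinct solutions of the IVP
```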

Advanced Topics

Maximizing the Existence Interval

In the Picard–Lindelöf theorem, the length h of the local existence interval for the solution to the initial value problem y' = f(t, y), y(t_0) = y_0, is chosen as h = \min\left(a, \frac{b}{M}, \frac{\alpha}{K}\right) for some \alpha < 1 (often \alpha = 1/2), where the rectangle R = \{ (t, y) : |t - t_0| \leq a, |y - y_0| \leq b \} contains (t_0, y_0), f is continuous and Lipschitz continuous in y with constant K \geq 0 on R, M = \sup_{(t,y) \in R} |f(t, y)|, and b > 0. This selection ensures both that the graph of the solution remains within R for |t - t_0| \leq h (since the increment in y over this interval is bounded by \int_{t_0}^{t_0 + h} |f(s, y(s))| \, ds \leq M h \leq b) and that the Picard integral operator is a contraction mapping (with constant at most \alpha < 1), guaranteeing convergence of the iterates.

To extend the solution beyond this initial interval, the local theorem is reapplied at the endpoint: the value y(t_0 + h) serves as a new initial condition for a rectangle centered there, producing a continuation that, by uniqueness, agrees with the original solution where the intervals overlap. Iterating this procedure continues the solution curve (t, y(t)) uniquely for as long as it remains in a region of the domain D where the continuity and Lipschitz hypotheses hold; the process terminates only if the curve approaches the boundary of D or the solution becomes unbounded.

The same optimization and extension principles apply to vector-valued initial value problems in \mathbb{R}^n, where y: \mathbb{R} \to \mathbb{R}^n and f: \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n satisfies the continuity and Lipschitz conditions (with constant K, measured in the Euclidean norm) on a suitable rectangular domain, yielding a unique local solution on |t - t_0| \leq h = \min\left(a, \frac{b}{M}, \frac{\alpha}{K}\right) with M = \sup \|f\| and b a bound on \|y - y_0\|. Despite these extensions, the maximal existence interval may remain finite even if f is defined on a larger domain, as the solution can blow up (i.e., |y(t)| \to \infty as t approaches the endpoint) or exit the region where the hypotheses hold.
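The shrinking of the guaranteed steps near a blow-up can be made concrete. In the sketch below, for y' = y^2, y(0) = 1 (exact solution 1/(1 - t), blowing up at t = 1), each step recomputes M and K on a local rectangle using the illustrative choices b = y and \alpha = 1/2, advances by the guaranteed h, and follows the exact solution; the guaranteed steps contract geometrically, and the accumulated time never passes the blow-up point.

```python
# Sketch: stepwise continuation for y' = y^2, y(0) = 1, exact solution 1/(1 - t).
# Local rectangle |y_new - y| <= b with the illustrative choice b = y, so
# M = sup y^2 = (2y)^2 and K = sup |2y| = 4y; then h = min(b/M, alpha/K) = 1/(8y).
alpha = 0.5
t, y = 0.0, 1.0

for step in range(30):
    b = y
    M = (y + b) ** 2            # sup of y^2 over the local rectangle
    K = 2.0 * (y + b)           # sup of |df/dy| = |2y| over the local rectangle
    h = min(b / M, alpha / K)   # = min(1/(4y), 1/(8y)) = 1/(8y)
    t += h
    y = 1.0 / (1.0 - t)         # advance along the exact solution
    print(f"step {step:2d}: t = {t:.6f}, guaranteed h = {h:.2e}")
# 1 - t shrinks by the factor 7/8 each step, so t -> 1 but never crosses it:
# continuation cannot carry the solution past the maximal interval (-inf, 1).
```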

Global Existence Conditions

The Picard–Lindelöf theorem guarantees local existence and uniqueness of solutions to the initial value problem y' = f(t, y), y(t_0) = y_0, under suitable continuity and Lipschitz conditions on f. To extend this to global existence on the entire real line \mathbb{R}, additional growth restrictions on f are required to prevent solutions from escaping to infinity in finite time. A standard sufficient condition is the linear growth bound: there exist nonnegative constants A and B such that |f(t, y)| \leq A |y| + B for all t \in \mathbb{R} and y \in \mathbb{R}^n. Under this condition, along with the Lipschitz assumption in y, the solution exists and is unique on all of \mathbb{R}. The proof proceeds by first applying the local theorem to obtain a maximal interval of existence (T^-, T^+), then showing that T^+ = +\infty and T^- = -\infty using a priori estimates. Suppose the solution y(t) is defined on [t_0, T) with T < +\infty. Integrating the ODE yields |y(t)| \leq |y_0| + \int_{t_0}^t |f(s, y(s))| \, ds \leq |y_0| + \int_{t_0}^t (A |y(s)| + B) \, ds. Applying Gronwall's inequality to this integral inequality gives |y(t)| \leq (|y_0| + B (t - t_0)) e^{A (t - t_0)}, so |y| is bounded by (|y_0| + B (T - t_0)) e^{A (T - t_0)} on all of [t_0, T). A solution that remains bounded on a finite maximal interval can be continued past its right endpoint, contradicting maximality; hence T^+ = +\infty, and a symmetric argument backward in time gives T^- = -\infty.

An illustrative counterexample where global existence fails is the scalar equation y' = y^2, y(0) = 1, which satisfies the Lipschitz condition locally but violates linear growth since |f(y)| = y^2 grows quadratically. The explicit solution is y(t) = \frac{1}{1 - t}, which blows up at t = 1, demonstrating a finite-time singularity.

More generally, a priori bounds can be established using energy estimates or comparison theorems to ensure solutions remain confined. For instance, in systems where a Lyapunov-like function V(y) satisfies V' \leq C V + D along trajectories, Gronwall's inequality again provides uniform bounds on bounded time intervals, preventing escape to infinity and implying global existence. Comparison principles for scalar inequalities further refine these estimates by bounding solutions against known global ones.
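The contrast between quadratic and linear growth can be seen numerically. The sketch below integrates both equations with scipy's solve_ivp; the stopping point 0.999 for the quadratic case and the tolerances are illustrative choices made to stay just short of the singularity at t = 1.

```python
# Sketch: finite-time blow-up for y' = y^2 versus global existence for y' = y.
# The window endpoints and tolerances are illustrative choices.
from scipy.integrate import solve_ivp

# Quadratic growth: the solution 1/(1 - t) blows up at t = 1; stop just short of it.
quad = solve_ivp(lambda t, y: y ** 2, (0.0, 0.999), [1.0], rtol=1e-10)
print(quad.y[0, -1], 1.0 / (1.0 - quad.t[-1]))  # both ~1000: tracks 1/(1 - t)

# Linear growth |f| <= |y|: Gronwall keeps the solution finite on any interval.
lin = solve_ivp(lambda t, y: y, (0.0, 50.0), [1.0], rtol=1e-10)
print(lin.t[-1], lin.y[0, -1])                  # reaches t = 50 with y = e^50
```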

Comparisons to Peano and Carathéodory Theorems

The Picard–Lindelöf theorem emerged in the late 19th century as a refinement of earlier existence results for ordinary differential equations (ODEs). In 1886, Giuseppe Peano established an existence theorem for the scalar initial value problem y' = f(t, y), y(t_0) = y_0, requiring only that f be continuous on a rectangular domain around (t_0, y_0); he extended the result to systems in 1890. This result, known as Peano's theorem, guarantees the existence of at least one local solution but does not ensure uniqueness, as the continuity of f alone permits multiple solutions in some cases, such as the example of non-uniqueness under non-Lipschitz conditions discussed earlier. Émile Picard extended this in 1890 by incorporating a Lipschitz condition on f with respect to y, yielding both existence and uniqueness, while Ernst Lindelöf provided a key refinement in 1894 that strengthened the theorem's applicability under local Lipschitz continuity.

In contrast to the Picard–Lindelöf theorem's stringent Lipschitz requirement, Peano's theorem relaxes the assumptions by dropping the need for differentiability or bounded variation in f, relying instead on the Arzelà–Ascoli theorem to extract a convergent subsequence from a family of approximate solutions, such as polygonal paths or Euler approximations. This compactness argument ensures existence on a small interval but allows for non-unique solutions when f fails to satisfy Lipschitz continuity, highlighting a trade-off: broader applicability at the cost of uniqueness. The Picard–Lindelöf theorem, by imposing the additional Lipschitz condition, provides the strongest local guarantee, namely both existence and uniqueness, making it the preferred tool when such regularity holds, though it applies to a narrower class of functions than Peano's.

Further weakening the hypotheses, the Carathéodory existence theorem, developed by Constantin Carathéodory in the early 20th century, establishes local existence for ODEs where f(t, y) is measurable in t for fixed y, continuous in y for almost every t, and bounded by an integrable function of t. These Carathéodory conditions generalize Peano's continuity requirement, accommodating functions that are discontinuous in t but integrable, such as those arising in control theory or stochastic processes, while still forgoing uniqueness unless supplemented by additional conditions such as a Lipschitz-type bound. Thus, while Peano's theorem suffices for continuous f and Picard–Lindelöf demands Lipschitz continuity for uniqueness, Carathéodory's framework offers the most permissive existence result among these classical local theorems, prioritizing robustness over precision in solution properties.
