
Lyapunov function

A Lyapunov function, also known as a Lyapunov energy function, is a scalar-valued, continuously differentiable function defined on the state space of a dynamical system that serves as a tool to assess the stability of equilibrium points without explicitly solving the system's differential equations. Introduced by the Russian mathematician Aleksandr Lyapunov in his 1892 doctoral dissertation The General Problem of the Stability of Motion, it provides a direct method—often called Lyapunov's second method—for proving stability properties in both continuous-time and discrete-time systems. Formally, for a system \dot{x} = f(x) with equilibrium at the origin, a function V: \mathbb{R}^n \to \mathbb{R} is a Lyapunov function candidate if it is positive definite, meaning V(x) > 0 for x \neq 0 and V(0) = 0; for global conclusions its sublevel sets \{x \mid V(x) \leq c\} must additionally be compact. The time derivative \dot{V}(x) = \nabla V(x)^T f(x) along system trajectories determines stability: if \dot{V}(x) \leq 0 for all x, the equilibrium is stable in the sense of Lyapunov; if \dot{V}(x) < 0 for x \neq 0, it is asymptotically stable; and if, in addition, V is bounded above and below by constant multiples of \|x\|^p and \dot{V}(x) \leq -\alpha \|x\|^p for some \alpha > 0 and p > 0, exponential stability holds. These conditions ensure that trajectories remain bounded or converge to the equilibrium, mimicking the decrease of energy in physical systems like damped oscillators. Lyapunov functions have broad applications across mathematics and engineering, particularly in control theory for designing stabilizing controllers for nonlinear systems and in robotics. For instance, in robotic systems they certify the region of attraction around stable equilibria or verify invariance of safe sets, and for linear systems they are often constructed via quadratic forms V(x) = x^T P x, where P is positive definite. Extensions such as LaSalle's invariance principle further refine the theory by allowing \dot{V} \leq 0, with additional conditions to prove asymptotic stability when the derivative vanishes on invariant sets. This framework remains foundational for analyzing complex, high-dimensional systems where analytical solutions are infeasible.

Historical Context

Origins in Lyapunov's Work

Aleksandr Mikhailovich Lyapunov, a Russian mathematician born in 1857, developed the foundational concepts of what are now known as Lyapunov functions during his academic career at the University of Kharkov, where he had taught since 1885. In 1892 he defended his doctoral dissertation, titled The General Problem of the Stability of Motion, at Moscow University. This 250-page work, published by the Kharkov Mathematical Society, systematically addressed the stability of solutions to systems of differential equations, introducing a novel analytical framework that avoided the need to solve the equations explicitly. Lyapunov's motivation stemmed from challenges in celestial mechanics, particularly the stability of motion in non-integrable systems such as the three-body problem, where predicting trajectories analytically proved impossible. Influenced by Pafnuty Chebyshev's 1882 inquiry into the equilibrium shapes of rotating fluid masses—relevant to planetary formation and figure-of-equilibrium problems—Lyapunov sought general criteria for stability that could apply broadly without relying on specific solutions. His approach was driven by the limitations of earlier methods, such as those of Joseph-Louis Lagrange and Siméon Denis Poisson, which were case-specific and inadequate for complex celestial dynamics. In the dissertation, Lyapunov proposed what became known as the direct method (or second method) of stability analysis, utilizing positive definite functions whose time derivatives along system trajectories provided stability insights. This innovation marked a methodological turning point, enabling assessments of stability in arbitrary dynamical systems. Initially, the work received acclaim in mathematical circles, earning Lyapunov recognition by age 35, including a positive review in the Bulletin Astronomique. A French translation was published in 1907, but the work remained largely overlooked in Western Europe and the United States until the ideas were popularized in the mid-20th century; a full English translation appeared in 1992.

Evolution in Stability Theory

Following Lyapunov's original contributions in 1892, his methods experienced a period of relative obscurity in the West until their rediscovery and popularization in the early to mid-20th century, largely through the efforts of émigré scholars. Nicolas Minorsky, a Russian-born engineer who emigrated to the United States, played a pivotal role by integrating Lyapunov's concepts into the engineering literature. In his 1947 book Introduction to Non-Linear Mechanics, Minorsky applied Lyapunov's energy-like functions to analyze nonlinear oscillations and control systems, such as ship steering, thereby facilitating the adoption of these ideas in American naval and aeronautical research during and after World War II. This work bridged the gap between Lyapunov's theoretical framework and practical automatic control problems, marking an early step in the method's integration into control engineering. Key milestones in this evolution included early applications by Nikolai G. Chetaev in the 1930s, who extended Lyapunov's direct method to problems of stability and instability of motion at the Kazan Aviation Institute. Chetaev's 1934 instability theorem demonstrated the utility of Lyapunov-type functions for nonlinear systems beyond linear approximations, influencing subsequent Soviet and international work on dynamic stability. The term "Lyapunov function" itself emerged in the mid-20th century, with English-language usage becoming standardized around the 1950s as translations and applications proliferated; it came to denote scalar functions that certify stability without solving the governing equations. After World War II, Lyapunov methods gained traction in aerospace engineering and automatic control, where they informed system design amid the rise of nonlinear dynamics analysis. Richard Bellman's dynamic programming, developed in the 1950s, incorporated Lyapunov-like value functions to address optimal control, linking stability analysis to decision processes in multistage systems.
Lyapunov's approach fundamentally bridged classical stability theory—rooted in linear stability criteria like those of Lagrange and Poincaré—to modern control theory by enabling rigorous analysis of nonlinear behaviors without linearization. This shift was evident in post-1940s control engineering, where Lyapunov functions provided sufficient conditions for stability in feedback loops and adaptive systems, influencing fields from aerospace to process control. By the 1980s, extensions such as control Lyapunov functions further solidified this transition, allowing direct synthesis of stabilizing controllers for nonlinear dynamical systems.

Definition and Properties

Formal Definition

A Lyapunov function is fundamentally associated with the analysis of equilibria in autonomous dynamical systems, that is, equations whose right-hand side does not depend explicitly on time. Consider an autonomous system given by \dot{x} = f(x), where x \in \mathbb{R}^n and f: \mathbb{R}^n \to \mathbb{R}^n is continuously differentiable with an equilibrium point at the origin, meaning f(0) = 0. A continuously differentiable function V: D \to \mathbb{R}, defined on a neighborhood D of the origin, qualifies as a Lyapunov function if it is positive definite, satisfying V(0) = 0 and V(x) > 0 for all x \in D \setminus \{0\}, and if its derivative along system trajectories, \dot{V}(x) = \nabla V(x) \cdot f(x), is non-positive, i.e., \dot{V}(x) \leq 0 for all x \in D. Positive definiteness ensures that V(x) acts like an energy-like measure that is minimized uniquely at the equilibrium, while the non-positivity condition implies that this measure does not increase along trajectories. For global analysis, V(x) is additionally required to be radially unbounded, meaning V(x) \to \infty as \|x\| \to \infty.
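The definition lends itself to a direct numerical spot-check. The sketch below is illustrative only: the helper `lyapunov_derivative` and the choice of system f(x) = -x with candidate V(x) = ½‖x‖² are assumptions made for the example, not part of any standard API. It approximates the orbital derivative \dot{V}(x) = \nabla V(x) \cdot f(x) by central finite differences and verifies positive definiteness and strict decrease on random sample points:

```python
import numpy as np

def lyapunov_derivative(V, f, x, eps=1e-6):
    """Approximate the orbital derivative V'(x) = grad V(x) . f(x) by central differences."""
    grad = np.array([(V(x + eps * e) - V(x - eps * e)) / (2 * eps)
                     for e in np.eye(len(x))])
    return grad @ f(x)

# Illustrative system: x' = -x, with candidate V(x) = 0.5 * ||x||^2.
f = lambda x: -x
V = lambda x: 0.5 * np.dot(x, x)

# Spot-check positive definiteness and V' < 0 on random nonzero samples.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-2.0, 2.0, size=3)
    if np.linalg.norm(x) > 1e-3:
        assert V(x) > 0                          # V positive away from the origin
        assert lyapunov_derivative(V, f, x) < 0  # V strictly decreasing along the flow
print("Lyapunov conditions hold at all sampled points")
```

Such sampling cannot prove the conditions hold everywhere, but it is a quick way to rule out a bad candidate before attempting an analytical proof.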

Key Properties and Interpretations

Lyapunov functions possess several important properties that enhance their utility in stability analysis. A Lyapunov function V(x) is said to be proper if its sublevel sets \{x \mid V(x) \leq c\} are compact for every c > 0, which ensures that trajectories remain confined within bounded regions as the system evolves. This property is particularly valuable for local studies, as it implies that the function behaves well near the equilibrium without allowing unbounded excursions. Additionally, for time-varying systems, V(t, x) is decrescent if there exists a continuous, strictly increasing function W_2 with W_2(0) = 0 such that V(t, x) \leq W_2(\|x\|) for all t \geq 0 and x in the domain, meaning the function's growth is controlled uniformly in time and increases with distance from the equilibrium. For global stability results, V(x) is radially unbounded if V(x) \to \infty as \|x\| \to \infty, guaranteeing that the stability implications extend across the entire state space. These properties support intuitive interpretations of Lyapunov functions as generalized energy measures in dynamical systems. The function V(x) can be viewed as an "energy-like" quantity, where the condition \dot{V}(x) \leq 0 along system trajectories implies that this energy is non-increasing, thereby preventing trajectories from escaping bounded regions around the equilibrium. This analogy draws from physical systems, such as conservative mechanical systems where total energy (kinetic plus potential) remains constant, or dissipative systems where it decreases; Lyapunov functions extend the concept to arbitrary nonlinear dynamics without requiring explicit conservation laws. Central to Lyapunov's direct method is the use of such functions for qualitative stability assessment, circumventing the need to solve the underlying ordinary differential equations explicitly. By constructing V(x) and verifying the sign of its derivative, one gains insights into system behavior solely through algebraic inequalities, making the approach broadly applicable to complex nonlinear problems.

Stability Theorems for Autonomous Systems

Conditions for Stability

In the context of autonomous dynamical systems \dot{x} = f(x) with equilibrium at the origin, where f(0) = 0 and f is locally Lipschitz continuous, Lyapunov's direct method provides sufficient conditions for stability through the choice of a suitable Lyapunov function V(x). Specifically, if V(x) is continuously differentiable and positive definite in a neighborhood of the origin—meaning V(0) = 0 and V(x) > 0 for x \neq 0—and its time derivative along system trajectories satisfies \dot{V}(x) = \frac{\partial V}{\partial x} f(x) \leq 0 in that neighborhood, then the equilibrium is stable. Stability in the Lyapunov sense means that for every \epsilon > 0, there exists a \delta > 0 such that if the initial condition satisfies \|x(0)\| < \delta, then the trajectory remains bounded by \|x(t)\| < \epsilon for all t \geq 0. This condition ensures that solutions starting sufficiently close to the equilibrium do not escape a prescribed neighborhood over time, without requiring convergence to the origin. The proof relies on the sublevel sets of V(x), defined as \Omega_c = \{x \in D : V(x) \leq c\} for some c > 0, where D is the domain containing the origin. Since V(x) is positive definite, these sets are compact for small c and contain a ball around the origin. With \dot{V}(x) \leq 0, V(x(t)) is non-increasing along trajectories, implying that \Omega_c is invariant: any solution starting in \Omega_c remains there for all t \geq 0. Choosing \delta such that V(x(0)) < c for \|x(0)\| < \delta, and selecting c small enough so that \Omega_c \subset \{x : \|x\| < \epsilon\}, bounds the trajectory within the desired \epsilon-neighborhood, establishing stability. The condition \dot{V}(x) \leq 0 guarantees stability by preventing V from increasing but permits trajectories to evolve along invariant sets where \dot{V}(x) = 0, potentially without approaching the equilibrium; this contrasts with stricter conditions that enforce convergence.
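The sublevel-set invariance argument can be observed numerically. The sketch below, an illustrative assumption rather than a canonical benchmark, uses the harmonic oscillator \dot{x}_1 = x_2, \dot{x}_2 = -x_1 with V(x) = \frac{1}{2}(x_1^2 + x_2^2), for which \dot{V} \equiv 0: the origin is stable but not attractive, and a trajectory integrated with a standard RK4 step should never leave the sublevel set \Omega_c it starts in (up to a small numerical tolerance):

```python
import numpy as np

# Harmonic oscillator x1' = x2, x2' = -x1.  With V(x) = 0.5*(x1^2 + x2^2),
# V' = 0 everywhere, so the origin is stable (trajectories stay on level
# sets of V) though not asymptotically stable.
def f(x):
    return np.array([x[1], -x[0]])

def rk4_step(f, x, h):
    k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

V = lambda x: 0.5 * np.dot(x, x)

x = np.array([1.0, 0.0])
c = V(x)                        # the trajectory should never leave {V <= c}
for _ in range(10_000):
    x = rk4_step(f, x, 0.01)
    assert V(x) <= c + 1e-8     # the sublevel set is (numerically) invariant
print("trajectory stayed inside the sublevel set")
```

The small tolerance absorbs floating-point and discretization error; for this step size the RK4 scheme is very slightly dissipative on the oscillator, so the check holds comfortably.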

Conditions for Asymptotic Stability

In the context of autonomous dynamical systems of the form \dot{x} = f(x), where x \in \mathbb{R}^n and f(0) = 0, the conditions for asymptotic stability extend the basic stability criteria by ensuring that trajectories not only remain bounded near the equilibrium but also converge to it over time. A fundamental theorem states that if there exists a continuously differentiable function V: U \to \mathbb{R}, defined on a neighborhood U of the origin, such that V is positive definite (i.e., V(0) = 0 and V(x) > 0 for x \in U \setminus \{0\}) and its time derivative along system trajectories satisfies \dot{V}(x) = \nabla V(x) \cdot f(x) < 0 for all x \in U \setminus \{0\}, then the equilibrium at x = 0 is locally asymptotically stable. This means there exists a neighborhood \mathcal{V} \subset U of the origin such that every solution starting in \mathcal{V} satisfies \lim_{t \to \infty} x(t) = 0. The proof relies on the strict decrease of V along trajectories. Since V is positive definite, its sublevel sets \{x \in U \mid V(x) \leq c\} for small c > 0 are compact and contain the origin as the unique minimum. For an initial condition x(0) with V(x(0)) = c, the trajectory remains confined to this compact set because \dot{V} < 0 prevents V from increasing. Moreover, V(x(t)) is monotonically decreasing and bounded below by 0, so it converges to some limit l \geq 0. If l > 0, the trajectory would eventually enter a region where \dot{V} < -\epsilon < 0 for some \epsilon > 0, implying V continues to decrease below l, a contradiction. Thus, l = 0, and since V is positive definite, x(t) \to 0 as t \to \infty. Additionally, the strict inequality ensures that the set E = \{x \in U \mid \dot{V}(x) = 0\} = \{0\}, so no nontrivial trajectories lie entirely within E, preventing convergence to other points. 
For global asymptotic stability, the conditions are strengthened: V must be positive definite and radially unbounded on \mathbb{R}^n (i.e., V(x) \to \infty as \|x\| \to \infty), with \dot{V}(x) < 0 for all x \neq 0. Under these assumptions, every trajectory starting in \mathbb{R}^n converges to the origin as t \to \infty. The radial unboundedness guarantees that sublevel sets remain compact, allowing the local argument to extend globally without boundary issues.
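A compact numerical illustration of the global case, under assumed toy dynamics of my choosing: for the scalar system \dot{x} = -x^3, the function V(x) = \frac{1}{2}x^2 is radially unbounded with \dot{V}(x) = -x^4 < 0 for x \neq 0, so every trajectory converges to the origin regardless of how far away it starts (the `simulate` helper below is illustrative):

```python
# Scalar system x' = -x^3 with V(x) = 0.5*x^2.  V is radially unbounded and
# V'(x) = -x^4 < 0 for x != 0, so the origin is globally asymptotically
# stable, although convergence is slower than exponential (x(t) ~ 1/sqrt(2t)).
def simulate(x0, h=0.01, steps=100_000):
    x = x0
    for _ in range(steps):
        x += h * (-x**3)        # forward Euler step
    return x

# Trajectories from widely separated initial conditions all approach 0.
for x0 in (5.0, -3.0, 0.5):
    assert abs(simulate(x0)) < 0.05
print("all trajectories converged toward the origin")
```

Note that this equilibrium is globally asymptotically stable but not exponentially stable, since the linearization at the origin is \dot{x} = 0.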

Construction and Examples

Methods for Constructing Lyapunov Functions

Constructing Lyapunov functions often begins with trial-and-error approaches, particularly for linear systems where quadratic forms prove effective. For a linear autonomous system \dot{x} = Ax, a standard technique involves selecting a candidate Lyapunov function V(x) = x^T P x, with P a positive definite symmetric matrix, and solving the associated Lyapunov equation A^T P + P A = -Q for a chosen positive definite Q. This method ensures \dot{V}(x) = x^T (A^T P + P A) x = -x^T Q x < 0 for x \neq 0, confirming asymptotic stability at the origin when a suitable P > 0 exists. For nonlinear systems, linearization around the equilibrium provides a practical starting point. The system \dot{x} = f(x) is approximated by its Jacobian A = \frac{\partial f}{\partial x}(0) at the origin, and the quadratic function from the linear case is used as a candidate, with subsequent verification that V(x) remains positive definite and \dot{V}(x) \leq 0 in a neighborhood of the equilibrium. This approach leverages the stability of the linearized system to infer local stability of the nonlinear one, though global properties require additional checks. Energy-like functions offer an intuitive method for mechanical or physical systems, drawing on energy-conservation principles. In such cases, V(x) is formulated as the sum of kinetic and potential energies, which typically yields positive definiteness and a negative semi-definite derivative due to dissipative terms like friction. For systems with damping, this construction naturally aligns with the system's physics, facilitating proofs of stability without extensive computation. In feedback control design for nonlinear systems, backstepping enables recursive construction of Lyapunov functions, particularly for systems in strict-feedback or lower-triangular form. Starting from the innermost subsystem, a term is added to V at each step to stabilize virtual controls, ensuring the overall \dot{V} < 0 through adaptive or robust gains. This method systematically builds V while simultaneously deriving stabilizing controllers.
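The linear construction is routine in practice. A minimal sketch using SciPy (the matrices A and Q are illustrative choices): SciPy's `solve_continuous_lyapunov(a, q)` solves a·X + X·aᴴ = q, so passing a = Aᵀ and q = -Q yields the equation A^T P + P A = -Q from the text:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hurwitz system matrix (eigenvalues -1 and -2) and a chosen Q > 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# SciPy solves a.X + X.a^H = q; with a = A^T and q = -Q this is the
# continuous-time Lyapunov equation A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

assert np.allclose(A.T @ P + P @ A, -Q)       # equation residual
assert np.all(np.linalg.eigvalsh(P) > 0)      # P > 0, so V(x) = x^T P x is valid
print("P =\n", P)
```

If the solver returns a P that is not positive definite, the candidate fails, which for a linear system indicates that A is not Hurwitz.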
The Krasovskii–LaSalle approach employs a quadratic candidate V(x) = x^T P x, with P > 0 often derived from the system's linearization, combined with the invariance principle to handle cases where \dot{V} \leq 0. Here, trajectories are shown to converge to the largest invariant set within \{x \mid \dot{V}(x) = 0\}, establishing asymptotic stability if that set contains only the equilibrium. This technique extends quadratic methods to nonlinear systems by focusing on invariant-set analysis rather than strict negativity of \dot{V}. Despite these methods, constructing Lyapunov functions remains challenging owing to their non-uniqueness—multiple valid V may exist—and the computational difficulty in high-dimensional or highly nonlinear systems, where trial candidates often fail. Converse Lyapunov theorems assure existence for asymptotically stable systems, motivating search algorithms, but practical construction typically relies on domain-specific intuition or optimization.

Illustrative Examples

A classic illustrative example of a Lyapunov function is provided by the simple linear scalar system \dot{x} = -x, where x = 0 is the equilibrium point. Consider the candidate Lyapunov function V(x) = \frac{1}{2} x^2. This function is positive definite because V(0) = 0 and V(x) > 0 for all x \neq 0. To assess stability, compute the time derivative along the system trajectories: \dot{V}(x) = \frac{dV}{dx} \dot{x} = x \cdot (-x) = -x^2. Since \dot{V}(x) \leq 0 for all x and \dot{V}(x) < 0 for x \neq 0, the origin is asymptotically stable by Lyapunov's theorem, as V decreases strictly except at the equilibrium. Another example arises in the analysis of a nonlinear damped oscillator governed by the second-order equation \ddot{q} + \dot{q} + q + q^3 = 0, which can be rewritten in state-space form as \dot{q} = p, \dot{p} = -p - q - q^3 with state vector (q, p). A suitable Lyapunov function is the total "energy" V(q, p) = \frac{1}{2} p^2 + \frac{1}{2} q^2 + \frac{1}{4} q^4, which is positive definite, consisting of a quadratic kinetic-energy term and a radially unbounded potential-energy term. The time derivative is \dot{V}(q, p) = p \dot{p} + (q + q^3) \dot{q} = p(-p - q - q^3) + (q + q^3) p = -p^2. Here, \dot{V}(q, p) \leq 0 for all (q, p), with equality only when p = 0. By LaSalle's invariance principle, trajectories converge to the largest invariant set where p = 0 and \dot{p} = -q - q^3 = 0, implying q = 0, so the origin is globally asymptotically stable. These examples highlight how Lyapunov functions often mimic physical energy dissipation in mechanical systems, where \dot{V} \leq 0 captures the decay due to damping.
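The damped-oscillator analysis can be confirmed by simulation. The sketch below (the RK4 integrator, step size, and initial condition are illustrative assumptions) integrates \dot{q} = p, \dot{p} = -p - q - q^3 and checks two things: the energy V(q, p) never increases along the trajectory, and the state eventually reaches a small neighborhood of the origin, as LaSalle's invariance principle predicts:

```python
import numpy as np

# Damped nonlinear oscillator q'' + q' + q + q^3 = 0 in state form x = (q, p).
def f(x):
    q, p = x
    return np.array([p, -p - q - q**3])

# Total "energy" V = 0.5*p^2 + 0.5*q^2 + 0.25*q^4, with orbital derivative -p^2.
V = lambda x: 0.5*x[1]**2 + 0.5*x[0]**2 + 0.25*x[0]**4

def rk4_step(f, x, h):
    k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

x = np.array([2.0, 0.0])
prev = V(x)
for _ in range(20_000):            # integrate out to t = 200
    x = rk4_step(f, x, 0.01)
    assert V(x) <= prev + 1e-8     # energy never increases along the flow
    prev = V(x)

# LaSalle: the only invariant set with p = 0 is the origin, so x(t) -> 0.
assert np.linalg.norm(x) < 1e-3
print("energy decayed monotonically; final state:", x)
```

The momentary equality \dot{V} = 0 at turning points (where p = 0) shows up numerically as steps where the energy barely changes, which is exactly why the invariance principle, rather than the strict-decrease theorem, is the right tool here.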

Extensions and Advanced Topics

Application to Non-Autonomous Systems

Lyapunov functions can be extended to non-autonomous systems of the form \dot{x} = f(t, x), where the dynamics explicitly depend on time t, by allowing the Lyapunov function itself to be time-varying, denoted as V(t, x). In this setup, V(t, x) must be continuously differentiable and positive definite uniformly in t, meaning there exist positive definite functions \alpha_1 and \alpha_2 such that \alpha_1(\|x\|) \leq V(t, x) \leq \alpha_2(\|x\|) for all t \geq 0 and x in a domain D. Additionally, V(t, x) is required to be decrescent, ensuring it is bounded above by a positive definite function of x only, independent of t, to guarantee uniformity across initial times. The time derivative of V along system trajectories is given by \dot{V}(t, x) = \frac{\partial V}{\partial t}(t, x) + \nabla V(t, x) \cdot f(t, x), which now includes an explicit partial derivative with respect to time to account for the time-varying nature of both V and f. For uniform stability of the equilibrium at the origin, \dot{V}(t, x) \leq 0 must hold for all t \geq 0 and x \in D, with the conditions on V ensuring that the sublevel sets remain bounded independently of initial time t_0. This contrasts with the autonomous case, where V is time-independent and \dot{V} simplifies to \nabla V \cdot f(x) \leq 0, without needing uniformity in t. A variant of the stability theorem states that if V(t, x) is positive definite, decrescent, and \dot{V}(t, x) \leq -W_3(x) where W_3 is positive definite, then the origin is uniformly asymptotically stable, with trajectories converging to the origin at a rate uniform in t_0 \geq 0. For exponential stability, stricter bounds are required, such as c_1 \|x\|^2 \leq V(t, x) \leq c_2 \|x\|^2 and \dot{V}(t, x) \leq -c_3 \|x\|^2 for positive constants c_1, c_2, c_3. These uniform conditions ensure similar stability conclusions as in the autonomous case but adapted to handle time-varying perturbations in the dynamics. 
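A small numerical illustration of the uniformity requirement, using an assumed toy system of my choosing: for \dot{x} = -(1 + \sin^2 t)\,x with V(x) = x^2, the derivative satisfies \dot{V}(t, x) = -2(1 + \sin^2 t)x^2 \leq -2x^2, i.e., \dot{V} \leq -W_3(x) with W_3(x) = 2x^2, so the decay bound |x(t_0 + T)| \leq |x_0| e^{-T} should hold no matter when the trajectory starts:

```python
import numpy as np

# Time-varying scalar system x' = -(1 + sin^2 t) * x.  With V(x) = x^2,
# V'(t, x) = -2(1 + sin^2 t) x^2 <= -2 x^2, so the origin is uniformly
# (indeed exponentially) asymptotically stable.
def simulate(x0, t0, h=1e-3, steps=5_000):
    x, t = x0, t0
    for _ in range(steps):
        x += h * (-(1 + np.sin(t)**2) * x)   # forward Euler step
        t += h
    return x

# Uniformity: the bound |x(t0 + T)| <= |x0| * e^{-T} holds for every start time t0.
T = 5.0
for t0 in (0.0, 1.7, 10.0):
    assert abs(simulate(1.0, t0)) <= np.exp(-T) + 1e-3
print("uniform exponential decay observed across initial times")
```

The loop over several values of t_0 is the point of the example: for a merely (non-uniformly) stable time-varying system, the decay rate could deteriorate as t_0 grows.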
In periodic non-autonomous systems, where f(t + T, x) = f(t, x) for some period T > 0, additional challenges arise from the oscillatory time dependence, often requiring Floquet theory to analyze linearizations and complement Lyapunov methods by determining the Floquet multipliers.

Converse Lyapunov Theorems

Converse Lyapunov theorems establish the existence of Lyapunov functions for dynamical systems that satisfy certain stability properties, thereby providing a theoretical justification for Lyapunov's direct method in the reverse direction. These results confirm that if a system's equilibrium is stable in an appropriate sense, a suitable Lyapunov function exists, even though they do not supply an explicit construction for practical analysis. They apply primarily to autonomous ordinary differential equations of the form \dot{x} = f(x), where f is locally Lipschitz continuous and f(0) = 0. The foundational converse theorem, due to Massera in 1949, states that if the origin of an autonomous system is asymptotically stable, then there exist a neighborhood U of the origin and a Lyapunov function V: U \to \mathbb{R} that is continuously differentiable and positive definite, with \dot{V}(x) \leq 0 along system trajectories and strictly negative except at the origin. This local existence result holds under the assumption of asymptotic stability, ensuring that V serves as a strict Lyapunov function in U. Massera's theorem addresses the core question of whether Lyapunov functions are guaranteed to exist for stable systems, resolving a key gap in Lyapunov's original framework. For global asymptotic stability, Kurzweil's 1956 theorem extends this result by proving that if the origin is globally asymptotically stable, then there exists a radially unbounded Lyapunov function V: \mathbb{R}^n \to \mathbb{R} that is continuously differentiable, positive definite, and proper (i.e., its sublevel sets are compact), with \dot{V}(x) < 0 for x \neq 0. Radial unboundedness means V(x) \to \infty as \|x\| \to \infty, which aligns the function with global attractivity properties. This global converse ensures the Lyapunov function captures the behavior over the entire state space. Proofs of these theorems typically involve constructing the Lyapunov function via integrals along system trajectories.
For instance, one defines V(x) = \int_0^\infty g(\|\phi(t; x)\|) \, dt, where \phi(t; x) denotes the solution trajectory starting at x, and g is a continuous, positive definite function (e.g., g(s) = s^2) chosen to ensure convergence of the integral and the required properties of V, such as positive definiteness and negative definiteness of \dot{V}. In the linear case \dot{x} = A x with A Hurwitz, this simplifies to V(x) = \int_0^\infty \|\phi(t; x)\|^2 \, dt, which yields a quadratic form V(x) = x^T P x with P solving the Lyapunov equation A^T P + P A = -I. Nonlinear extensions adapt this integral construction by selecting appropriate gauges for g to handle local or global behavior, verifying the derivative conditions through trajectory estimates. These converse theorems have profound implications, providing a rigorous foundation for numerical and search-based methods to approximate Lyapunov functions, since existence is guaranteed for asymptotically stable systems. They underscore the completeness of Lyapunov's second method by affirming that asymptotic stability implies the availability of a certifying function, facilitating theoretical analysis and controller design without solving the differential equations explicitly.
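The linear integral construction is easy to verify numerically. The sketch below (matrix A, the truncation horizon, and the helper `V_integral` are illustrative assumptions) approximates \int_0^\infty \|e^{At}x\|^2 \, dt by the trapezoid rule and compares it with x^T P x, where P solves A^T P + P A = -I:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])               # Hurwitz (eigenvalues -1, -2)

# P from the Lyapunov equation A^T P + P A = -I (SciPy solves a.X + X.a^H = q).
P = solve_continuous_lyapunov(A.T, -np.eye(2))

def V_integral(x, h=0.01, T=20.0):
    """Trapezoid approximation of the integral of ||e^{At} x||^2 dt, truncated at t = T."""
    E = expm(A * h)                        # one-step propagator, reused below
    total, xt, t = 0.0, x.copy(), 0.0
    while t < T:
        xn = E @ xt
        total += 0.5 * h * (xt @ xt + xn @ xn)
        xt, t = xn, t + h
    return total

x = np.array([1.0, -0.5])
assert abs(V_integral(x) - x @ P @ x) < 1e-3   # the two constructions agree
print("integral construction matches x'Px =", x @ P @ x)
```

Because A is Hurwitz, the tail of the integral beyond t = 20 is negligible, so the truncated quadrature agrees with the exact quadratic form to well within the tolerance.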
