
Lyapunov stability

Lyapunov stability is a cornerstone of stability theory for dynamical systems and control, providing a rigorous framework to analyze whether an equilibrium point of a system remains close to its initial state under small perturbations or returns to it over time. Introduced by the Russian mathematician Aleksandr Mikhailovich Lyapunov in his 1892 doctoral thesis, The General Problem of the Stability of Motion, the concept addresses the qualitative behavior of solutions to ordinary differential equations without necessarily solving them explicitly. In continuous-time dynamical systems described by \dot{x} = f(x), where x \in \mathbb{R}^n and f(0) = 0 defines the equilibrium at the origin, an equilibrium is Lyapunov stable if for every \epsilon > 0, there exists \delta > 0 such that initial conditions satisfying \|x(0)\| < \delta yield trajectories with \|x(t)\| < \epsilon for all t \geq 0. It is asymptotically stable if it is Lyapunov stable and trajectories converge to the equilibrium, i.e., \lim_{t \to \infty} x(t) = 0 for initial conditions within some \delta > 0. These definitions extend to discrete-time systems and more general settings, including partial differential equations and stochastic processes. Central to Lyapunov's approach is the direct method (or second method), which employs a Lyapunov function V(x): a continuously differentiable, positive definite scalar function (V(0) = 0, V(x) > 0 for x \neq 0) whose derivative \dot{V}(x) = \nabla V(x) \cdot f(x) along system trajectories is negative semi-definite (\dot{V} \leq 0) to certify stability, or negative definite (\dot{V} < 0 for x \neq 0) to certify asymptotic stability. This method contrasts with the first method, which relies on linearization around the equilibrium to assess stability via eigenvalues, but the direct method handles nonlinearities more robustly. Extensions like LaSalle's invariance principle refine the analysis by examining the largest invariant set where \dot{V} = 0. Lyapunov stability has profound applications in control theory, robotics, and engineering, enabling the design of stabilizing controllers for nonlinear systems—such as feedback laws that render closed-loop dynamics stable—without requiring explicit global solutions. It underpins modern tools like sum-of-squares optimization for verifying Lyapunov functions in polynomial systems and informs stability analysis in complex phenomena, from mechanical oscillators to biological networks.

Introduction and Basic Concepts

Notions of Stability

In dynamical systems, stability describes how trajectories behave in the vicinity of an equilibrium under small perturbations to the initial conditions. Intuitively, a stable system maintains trajectories that remain close to the reference behavior if started from nearby points, ensuring that minor disturbances do not cause large deviations over time. Consider a general autonomous dynamical system governed by the ordinary differential equation \dot{x} = f(x), where x \in \mathbb{R}^n and f: \mathbb{R}^n \to \mathbb{R}^n is continuously differentiable. An equilibrium point x^* is a point where \dot{x} = 0, so f(x^*) = 0, representing a state where the system remains at rest if started there. The equilibrium x^* is Lyapunov stable if, for every \epsilon > 0, there exists \delta > 0 such that whenever the initial state satisfies \|x(0) - x^*\| < \delta, the trajectory remains bounded by \|x(t) - x^*\| < \epsilon for all t \geq 0. This formal definition, introduced by Lyapunov, ensures that solutions starting sufficiently close to the equilibrium stay arbitrarily close indefinitely. Lyapunov stability focuses on the equilibrium point itself, whereas stability of solutions addresses whether trajectories initiated close to one another remain proximate throughout their evolution. Related notions extend this idea: asymptotic stability requires that trajectories not only stay close but also approach the equilibrium as t \to \infty, while exponential stability strengthens this by demanding convergence at an exponential rate, i.e., \|x(t) - x^*\| \leq K \|x(0) - x^*\| e^{-\alpha t} for some constants K > 0 and \alpha > 0. These stability properties can be local, holding within a bounded neighborhood of x^*, or global, applying to the entire state space.

Equilibrium Points

In dynamical systems described by ordinary differential equations of the form \dot{x} = f(x), where x \in \mathbb{R}^n and f: \mathbb{R}^n \to \mathbb{R}^n is a continuously differentiable vector field, an equilibrium point x^* is defined as a point satisfying f(x^*) = 0. This condition implies that if the system starts at x^*, the state remains stationary at that point for all time, representing a constant solution where the state does not evolve. Equilibrium points are identified by solving the algebraic equation f(x^*) = 0, which can be done analytically for simple systems or numerically using methods such as Newton's method or continuation techniques for more complex cases. In low-dimensional systems, explicit solutions may be feasible, but higher-dimensional or nonlinear systems often require computational tools to locate all equilibria within a bounded region. Equilibria can be classified as isolated or non-isolated. An isolated equilibrium x^* is one where there exists a neighborhood around x^* containing no other equilibria, allowing local analysis such as linearization via the Jacobian matrix. Non-isolated equilibria, in contrast, form continua or manifolds where f(x) = 0 holds along a set of points, often arising in systems with symmetries or degenerate cases, complicating local stability assessments. Stability analysis in dynamical systems is typically performed relative to a specific equilibrium point, as the long-term behavior of trajectories is evaluated with respect to perturbations from that point; notions like Lyapunov stability apply directly to these points to determine whether nearby trajectories remain bounded or converge back. For instance, in the scalar system \dot{x} = -x, the origin x^* = 0 is the sole equilibrium, found by solving -x^* = 0. In the Lotka-Volterra predator-prey model \dot{x} = ax - bxy, \dot{y} = -cy + dxy (with positive parameters a, b, c, d), equilibria occur at (0,0) and the coexistence point (c/d, a/b), obtained by setting both equations to zero, as the numerical sketch below illustrates.
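
For systems where the algebraic equation f(x^*) = 0 cannot be solved by hand, a numerical root-finder can locate equilibria from initial guesses. The following is a minimal sketch using SciPy's fsolve on the Lotka-Volterra model; the parameter values a, b, c, d are illustrative assumptions, not values from the text.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative Lotka-Volterra parameters (assumed for this example).
a, b, c, d = 1.0, 0.5, 0.75, 0.25

def f(z):
    x, y = z
    return [a * x - b * x * y, -c * y + d * x * y]

# Seed the root-finder near each expected equilibrium.
for guess in [(0.1, 0.1), (2.5, 1.5)]:
    eq = fsolve(f, guess)
    print(eq)  # approximately (0, 0) and (c/d, a/b) = (3, 2)
```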

Historical Background

Lyapunov's Original Contributions

Aleksandr Lyapunov's foundational contributions to stability theory are detailed in his 1892 doctoral dissertation, The General Problem of the Stability of Motion, presented at Kharkov University. This work systematically addressed the challenge of assessing the stability of solutions to nonlinear differential equations, where solving the equations explicitly is typically infeasible. Lyapunov developed two primary methods to determine stability without requiring complete solutions, establishing criteria based on the linearization and on auxiliary scalar functions of the system. In what is now termed the first or indirect method, Lyapunov proposed linearizing the system around an equilibrium point and analyzing the eigenvalues of the Jacobian matrix evaluated at that point. This applies to hyperbolic equilibria, where the Jacobian has no eigenvalues with zero real part; the equilibrium of the nonlinear system shares the stability properties of the linearized system—if all eigenvalues have negative real parts, the equilibrium is asymptotically stable, whereas a positive real part indicates instability. The method thus offers a practical local test for stability via linear algebra, though it fails for non-hyperbolic cases. Lyapunov's second or direct method introduced the use of Lyapunov functions, positive definite scalar functions V(\mathbf{x}) defined in a neighborhood of the equilibrium, such that their time derivative along trajectories satisfies \dot{V}(\mathbf{x}) \leq 0. This condition ensures that trajectories remain bounded near the equilibrium, implying stability. If \dot{V}(\mathbf{x}) < 0 for \mathbf{x} \neq 0, the equilibrium is asymptotically stable, as the function strictly decreases, driving states toward the origin. A central theorem from the dissertation asserts that if V(\mathbf{x}) is positive definite and radially unbounded (i.e., V(\mathbf{x}) \to \infty as \|\mathbf{x}\| \to \infty), with \dot{V}(\mathbf{x}) < 0 for \mathbf{x} \neq 0, then the equilibrium is globally asymptotically stable over the entire state space. Lyapunov emphasized that both methods pertain to autonomous systems, characterized by differential equations without explicit external inputs or time dependence, limiting their immediate use for forced or non-autonomous dynamics.

Post-Lyapunov Developments

Lyapunov's seminal work on stability, published in 1892, initially received limited attention in the early 20th century, primarily due to its emphasis on nonlinear dynamical systems at a time when linear approximations dominated analysis. The French translation of his dissertation in 1907 facilitated some recognition in Western Europe, but widespread adoption occurred only after World War II, spurred by the need to analyze complex nonlinear systems in aerospace and control engineering, alongside the emergence of early computers for simulating trajectories. Key extensions came from Soviet mathematicians in the interwar and wartime periods. Nikolay G. Chetaev introduced what are now called Chetaev functions in the 1930s to study instability, identifying regions near equilibria from which trajectories are repelled, thereby providing criteria complementary to Lyapunov's stability conditions. Independently, I.G. Malkin advanced the field through rigorous treatments of asymptotic stability and perturbation methods, culminating in his 1952 book Theory of Stability of Motion, which synthesized Lyapunov's ideas for practical engineering applications. A major theoretical breakthrough arrived with converse theorems, which addressed the existence of Lyapunov functions for known stable systems. In 1949, José Luis Massera proved that for asymptotically stable continuous-time systems satisfying certain smoothness conditions, a Lyapunov function exists and can be constructed via integration along trajectories, resolving a key open question from Lyapunov's original framework. Subsequent works by researchers like Jaroslav Kurzweil refined these results for broader classes of systems. The 1960s saw further innovations, including Joseph P. LaSalle's invariance principle, which extended Lyapunov's direct method by showing that trajectories converge to the largest invariant set within the region where the Lyapunov derivative vanishes, serving as a precursor to more advanced asymptotic stability tools. Paralleling these developments, Lyapunov stability profoundly influenced control theory in the 1950s and 1960s; A.I. Lur'e and A.M. Letov applied it to derive absolute stability criteria for nonlinear feedback systems, enabling robust controller synthesis for applications like servomechanisms and autopilots.

Stability in Continuous-Time Systems

Definitions and Classifications

In continuous-time systems, stability is analyzed for nonlinear dynamical systems of the form \dot{x} = f(x), where x \in \mathbb{R}^n denotes the state, and f: D \subseteq \mathbb{R}^n \to \mathbb{R}^n is a continuous function defined on a domain D containing the origin. An equilibrium point x^* satisfies f(x^*) = 0; without loss of generality, the analysis often shifts coordinates so that x^* = 0 and f(0) = 0. These systems model phenomena with smooth time evolution, such as mechanical systems or chemical reactions, where trajectories follow continuous flows governed by differential equations. Lyapunov stability of the equilibrium x^* = 0 requires that, for every \epsilon > 0, there exists \delta > 0 such that if the initial state satisfies \|x(0)\| < \delta, then \|x(t)\| < \epsilon for all t \geq 0. This \epsilon-\delta definition ensures that solutions starting sufficiently close to the equilibrium remain arbitrarily close thereafter, capturing local boundedness of the continuous trajectory. Asymptotic stability extends Lyapunov stability by additionally requiring that \lim_{t \to \infty} x(t) = 0 whenever \|x(0)\| < \delta' for some \delta' > 0. This property implies not only persistence near the equilibrium but also convergence to it over time. Uniform stability strengthens the definition by making \delta independent of the initial time t_0, ensuring the stability margin does not degrade with time shifts. For autonomous systems like \dot{x} = f(x), this uniformity holds inherently due to time-invariance. Global asymptotic stability applies the asymptotic stability condition to all initial states x(0) \in \mathbb{R}^n, ensuring convergence from any starting point without reliance on a local neighborhood. Continuous-time notions emphasize behavior over real-time intervals, with trajectories defined for all t \geq t_0 under standard existence and uniqueness assumptions. Uniform exponential stability provides a quantitative rate of convergence: there exist constants K > 0 and \alpha > 0 such that \|x(t)\| \leq K e^{-\alpha t} \|x(0)\| for all t \geq 0 and \|x(0)\| < \delta'' with some \delta'' > 0. This exponential decay bound implies asymptotic stability and uniformity, highlighting robust attraction in continuous dynamics.

Lyapunov's Direct Method

Lyapunov's direct method, also known as the second method of Lyapunov, enables the analysis of stability for equilibrium points of nonlinear autonomous systems of the form \dot{x} = f(x), where x \in \mathbb{R}^n and f is locally Lipschitz with f(0) = 0, without requiring explicit solutions to the differential equations. This approach relies on identifying a special scalar function V(x), termed a Lyapunov function, whose level sets behave analogously to energy contours in physical systems, decreasing or remaining constant along system trajectories to infer stability properties. A Lyapunov function V: \mathbb{R}^n \to \mathbb{R} must be continuously differentiable and positive definite in a neighborhood of the equilibrium x = 0, meaning V(0) = 0 and V(x) > 0 for all x \neq 0 within that neighborhood. More precisely, there exist a neighborhood B_r = \{x : \|x\| < r\} for some r > 0 and a class \mathcal{K} function \alpha (continuous, strictly increasing, \alpha(0) = 0) such that \alpha(\|x\|) \leq V(x) for x \in B_r. The time derivative of V along the system trajectories, known as the orbital derivative, is given by \dot{V}(x) = \nabla V(x) \cdot f(x), which represents the rate of change of V as the state evolves. The fundamental theorem of Lyapunov's direct method states that if V(x) is positive definite and \dot{V}(x) \leq 0 for all x in some neighborhood of the origin, then the equilibrium x = 0 is (locally) stable in the sense of Lyapunov. Furthermore, if \dot{V}(x) < 0 for all x \neq 0 in that neighborhood, then the equilibrium is locally asymptotically stable, as the strict negativity ensures that trajectories converge to the origin. These conditions guarantee that solutions starting sufficiently close to the equilibrium remain bounded and, in the asymptotic case, approach it over time. For global results, V(x) must be radially unbounded, satisfying V(x) \to \infty as \|x\| \to \infty. Under this property, if V(x) is positive definite and radially unbounded with \dot{V}(x) \leq 0 for all x \in \mathbb{R}^n, the equilibrium is globally stable; if additionally \dot{V}(x) < 0 for x \neq 0, it is globally asymptotically stable. This extension applies the local criteria over the entire state space, ensuring attractivity from any initial condition. An important extension, LaSalle's invariance principle, relaxes the strict negativity condition: if \dot{V}(x) \leq 0, trajectories converge to the largest invariant set contained in \{x : \dot{V}(x) = 0\}, and when that set consists solely of the equilibrium, asymptotic stability holds. Constructing a suitable Lyapunov function remains an art, with no universal algorithm guaranteed to succeed for arbitrary nonlinear systems. For intuitive purposes, especially in systems close to linear, quadratic forms V(x) = x^T P x—where P is a positive definite matrix—provide a starting point, as their derivatives can be analyzed via matrix inequalities to verify the stability conditions. However, for general nonlinearities, candidate functions are often derived from physical interpretations, such as kinetic plus potential energy in mechanical systems.
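
Because verifying the sign conditions is mechanical once a candidate is chosen, computer algebra can check them symbolically. The following is a minimal sketch using SymPy on an illustrative two-dimensional system (an assumed example, not one discussed above), confirming that a quadratic candidate has a negative definite orbital derivative.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# Illustrative system (assumed for this example): xdot = f(x).
f = sp.Matrix([-x1 + x2, -x1 - x2**3])

# Quadratic Lyapunov candidate V = (x1^2 + x2^2)/2.
V = (x1**2 + x2**2) / 2
grad_V = sp.Matrix([V.diff(x1), V.diff(x2)])

# Orbital derivative along trajectories: Vdot = grad(V) . f.
Vdot = sp.simplify(grad_V.dot(f))
print(Vdot)  # -x1**2 - x2**4: negative definite, so the origin is asymptotically stable
```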

Lyapunov's Indirect Method

Lyapunov's indirect method, also referred to as the first method of stability analysis, provides a technique for determining the local stability properties of an equilibrium point in a continuous-time nonlinear dynamical system by examining the stability of its linear approximation. This approach, originally developed by Lyapunov, approximates the nonlinear dynamics near the equilibrium through linearization, leveraging the eigenvalues of the Jacobian matrix to infer qualitative behavior. It is particularly effective for hyperbolic equilibria, where the linearized system captures the essential local topology of the nonlinear flow. Consider a nonlinear autonomous system described by \dot{x} = f(x), where x \in \mathbb{R}^n and f is continuously differentiable, with an equilibrium point x^* satisfying f(x^*) = 0. The linearization around x^* yields the approximate system \dot{z} = A z, where z = x - x^* and A = \frac{\partial f}{\partial x}(x^*) is the Jacobian matrix evaluated at the equilibrium. The stability of this linear system is determined by the eigenvalues \lambda_i of A: if all \operatorname{Re}(\lambda_i) < 0, the origin of the linear system is asymptotically stable; if any \operatorname{Re}(\lambda_i) > 0, it is unstable. For hyperbolic equilibria—those where no eigenvalue of A has zero real part—the Hartman-Grobman theorem guarantees that the nonlinear system is locally topologically equivalent to its linearization in a neighborhood of x^*. This equivalence implies that if the linearized system is asymptotically stable, so is the nonlinear equilibrium locally; conversely, instability in the linearization implies local instability of the equilibrium. The theorem, independently established by Grobman in 1959 and Hartman in 1960, ensures a homeomorphism conjugating the flows of the nonlinear and linear systems near the equilibrium, preserving their qualitative dynamics. In cases where the equilibrium is non-hyperbolic, meaning at least one eigenvalue has \operatorname{Re}(\lambda_i) = 0, the indirect method is inconclusive, as the linearization may exhibit neutral stability (e.g., centers) while the nonlinear terms determine the actual behavior. Such scenarios often require advanced techniques like center manifold reduction to analyze the stability along the center directions. For instance, in the scalar equation \dot{x} = a x + o(|x|) near x^* = 0, the equilibrium is locally asymptotically stable if a < 0, unstable if a > 0, and the method fails to decide if a = 0. The indirect method's utility lies in its computational simplicity, as it reduces local stability assessment to eigenvalue analysis of the Jacobian, making it a foundational tool for preliminary investigations in nonlinear dynamics despite its limitation to local behavior.
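
In practice the indirect method reduces to evaluating the Jacobian at each equilibrium and inspecting its eigenvalues. The sketch below, assuming NumPy and an illustrative damping value, applies this test to the damped pendulum treated in the Examples section, classifying the downward and inverted equilibria.

```python
import numpy as np

b = 0.5  # damping coefficient (an assumed illustrative value)

def jacobian(theta):
    # Jacobian of f(theta, omega) = (omega, -b*omega - sin(theta))
    # with respect to (theta, omega), evaluated at (theta, 0).
    return np.array([[0.0, 1.0],
                     [-np.cos(theta), -b]])

for theta_eq in (0.0, np.pi):
    eigs = np.linalg.eigvals(jacobian(theta_eq))
    verdict = "stable" if np.all(eigs.real < 0) else "unstable or inconclusive"
    print(theta_eq, eigs, verdict)  # downward: stable; inverted: unstable
```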

Stability in Discrete-Time Systems

Definitions and Classifications

In discrete-time systems, stability is analyzed for nonlinear dynamical systems of the form x_{k+1} = f(x_k), where x_k \in \mathbb{R}^n denotes the state at discrete time step k \geq 0, and f: D \subseteq \mathbb{R}^n \to \mathbb{R}^n is a continuous map defined on a domain D containing the origin. An equilibrium point x^* satisfies f(x^*) = x^*; without loss of generality, the analysis often shifts coordinates so that x^* = 0 and f(0) = 0. These systems model iterative maps, such as sampled-data control systems or digital filters, where trajectories evolve in discrete steps rather than continuous flows. Lyapunov stability of the equilibrium x^* = 0 requires that, for every \epsilon > 0, there exists \delta > 0 such that if the initial state satisfies \|x_0\| < \delta, then \|x_k\| < \epsilon for all k \geq 0. This \epsilon-\delta definition ensures that solutions starting sufficiently close to the equilibrium remain arbitrarily close thereafter, capturing local boundedness of the discrete iterates. Asymptotic stability extends Lyapunov stability by additionally requiring that \lim_{k \to \infty} x_k = 0 whenever \|x_0\| < \delta' for some \delta' > 0. This property implies not only persistence near the equilibrium but also convergence to it over successive iterations. Uniform stability strengthens the definition by making \delta independent of the initial time step k_0, though for autonomous systems like x_{k+1} = f(x_k), this uniformity holds inherently due to time-invariance. Global asymptotic stability applies the asymptotic stability condition to all initial states x_0 \in \mathbb{R}^n, ensuring convergence from any starting point without reliance on a local neighborhood. Unlike continuous-time systems, discrete-time notions lack explicit dependence on continuous time intervals, focusing instead on integer-step uniformity and the absence of intermediate interpolation. Uniform exponential stability provides a quantitative rate of convergence: there exist constants K > 0 and 0 < \rho < 1 such that \|x_k\| \leq K \rho^k \|x_0\| for all k \geq 0 and \|x_0\| < \delta'' with some \delta'' > 0. This geometric decay bound implies asymptotic stability and uniformity, highlighting robust attraction in discrete dynamics.

Discrete Lyapunov Functions

In discrete-time dynamical systems described by x_{k+1} = f(x_k), where f: \mathbb{R}^n \to \mathbb{R}^n is continuous and the origin is an equilibrium point (i.e., f(0) = 0), a discrete Lyapunov function V: D \to \mathbb{R} (with D an open neighborhood of the origin) is defined as a continuous function satisfying V(0) = 0 and V(x) > 0 for all x \in D \setminus \{0\}. This ensures V serves as a measure of the system's deviation from equilibrium, analogous to energy-like functions in continuous systems but adapted to finite time steps. The core stability condition involves the change in V along system trajectories, given by \Delta V(x_k) = V(f(x_k)) - V(x_k). If there exists such a V with \Delta V(x) \leq 0 for all x \in D, then the origin is (Lyapunov) stable, meaning trajectories starting nearby remain nearby. For asymptotic stability, the stricter condition \Delta V(x) < 0 for all x \in D \setminus \{0\} suffices, implying trajectories converge to the origin as k \to \infty. These conditions parallel the continuous-time case but replace the time derivative \dot{V} with the finite difference \Delta V, reflecting the discrete nature of the dynamics, where evolution occurs via the composition V \circ f rather than infinitesimal changes. For exponential stability, where \|x_k\| \leq \alpha \|x_0\| \rho^k for some \alpha \geq 1 and 0 < \rho < 1, a suitable V must satisfy quadratic bounds c_1 \|x\|^2 \leq V(x) \leq c_2 \|x\|^2 (with c_1, c_2 > 0) and \Delta V(x) \leq -\gamma \|x\|^2 for some \gamma > 0 and all x \in D \setminus \{0\}. An alternative formulation requires V(f(x)) \leq \lambda V(x) with 0 < \lambda < 1, ensuring a uniform contraction rate in the Lyapunov level sets. Construction of discrete Lyapunov functions follows methods similar to the continuous case, often starting with quadratic forms for linear systems x_{k+1} = A x_k. Here, V(x) = x^T P x with symmetric positive definite P solves the discrete Lyapunov equation A^T P A - P = -Q for any positive definite Q, guaranteeing \Delta V(x) = -x^T Q x < 0 for x \neq 0 if A is Schur stable (all eigenvalues inside the unit circle). For nonlinear systems, candidates may be derived from linearizations or energy-based functions, but verifying the decrease condition requires direct evaluation along f. The absence of derivatives in discrete analysis simplifies some computations but demands careful handling of the mapping f to ensure global or local validity.
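
For the linear case, the discrete Lyapunov equation can be solved directly with standard numerical libraries. The following is a minimal sketch assuming SciPy and an arbitrary Schur-stable matrix; note that scipy.linalg.solve_discrete_lyapunov solves M X M^H - X + Q = 0, so passing A^T matches the convention A^T P A - P = -Q used above.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# A Schur-stable example matrix (eigenvalues 0.6 and 0.7; assumed illustrative values).
A = np.array([[0.5, 0.2],
              [-0.1, 0.8]])
Q = np.eye(2)

P = solve_discrete_lyapunov(A.T, Q)  # solves A^T P A - P = -Q

print(np.all(np.linalg.eigvals(P) > 0))   # True: P is positive definite
print(np.allclose(A.T @ P @ A - P, -Q))   # True: verifies the discrete Lyapunov equation
```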

Linear Systems

State-Space Models

In linear systems theory, the state-space representation provides a fundamental framework for modeling and analyzing dynamic behavior, particularly for stability properties. For continuous-time systems, the homogeneous linear time-invariant model is expressed as \dot{x}(t) = A x(t), where x(t) \in \mathbb{R}^n is the state vector and A is an n \times n constant matrix. The unique solution to this initial value problem, assuming x(0) = x_0, is given by x(t) = e^{A t} x_0, where e^{A t} denotes the matrix exponential, ensuring well-posedness through existence and uniqueness of solutions for all t \geq 0. Due to the homogeneity of the equation (absence of forcing terms), the equilibria are the solutions to A x = 0, i.e., the kernel of A; the origin x^* = 0 is always an equilibrium point and the unique one if A is invertible, while for singular A, there is a nontrivial subspace of equilibria. A key property of this representation is the principle of superposition, which follows from the linearity of the system: if x_1(t) and x_2(t) are solutions corresponding to initial conditions x_{1,0} and x_{2,0}, then \alpha x_1(t) + \beta x_2(t) is the solution for initial condition \alpha x_{1,0} + \beta x_{2,0} for any scalars \alpha, \beta. The matrix exponential e^{A t} is well-defined via its power series e^{A t} = I + A t + \frac{(A t)^2}{2!} + \cdots, which converges for all t and guarantees the solution's smoothness and global existence. This formulation facilitates stability analysis by focusing on the evolution of states from arbitrary initial conditions. For discrete-time systems, the analogous homogeneous model is x_{k+1} = A x_k, where x_k \in \mathbb{R}^n and the subscript denotes the time step k \in \mathbb{Z}_{\geq 0}. The solution is x_k = A^k x_0, with A^k representing the k-th matrix power, again ensuring well-posedness through iterative application. In the discrete case, the equilibria are the fixed points satisfying A x = x, i.e., the kernel of A - I; the origin x^* = 0 is always an equilibrium and the unique one if A - I is invertible (equivalently, if 1 is not an eigenvalue of A), while otherwise the equilibria form a nontrivial subspace. Superposition holds: linear combinations of solutions yield solutions to linear combinations of initial states. In both continuous and discrete settings, Lyapunov stability for these linear state-space models equates to the boundedness of solutions for all initial conditions x_0, meaning \|x(t)\| (or \|x_k\|) remains finite for all t \geq 0 (or k \geq 0), which aligns with the general notions of stability in linear dynamical systems. This boundedness criterion serves as the foundation for further Lyapunov-based methods to assess qualitative behavior without explicit solution computation.
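
The closed-form solution and the superposition principle are easy to check numerically via the matrix exponential. The sketch below, assuming SciPy and an illustrative Hurwitz matrix, evaluates x(t) = e^{At} x_0 and confirms linearity of the solution map in the initial condition.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative system matrix (assumed; eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
y0 = np.array([0.0, 1.0])

t = 1.5
x_t = expm(A * t) @ x0  # closed-form solution x(t) = e^{At} x0
print(x_t)

# Superposition: the solution map is linear in the initial condition.
lhs = expm(A * t) @ (2 * x0 + 3 * y0)
rhs = 2 * (expm(A * t) @ x0) + 3 * (expm(A * t) @ y0)
print(np.allclose(lhs, rhs))  # True
```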

Eigenvalue-Based Stability Criteria

For linear time-invariant systems in continuous time, described by the state-space model \dot{x} = A x, asymptotic stability of the origin holds if and only if all eigenvalues \lambda of the system matrix A satisfy \operatorname{Re}(\lambda) < 0, a condition known as Hurwitz stability. This spectral criterion arises from the fact that the solution trajectories decay exponentially when the real parts of all eigenvalues are negative, ensuring convergence to the equilibrium. In discrete time, for systems of the form x_{k+1} = A x_k, asymptotic stability requires that all eigenvalues \lambda of A satisfy |\lambda| < 1, referred to as Schur stability. Under this condition, the powers of A diminish in norm, leading to trajectories that approach the origin as k \to \infty. Computing eigenvalues directly can be computationally intensive for high-dimensional systems, so the Routh-Hurwitz criterion provides an alternative test for Hurwitz stability by examining the coefficients of the characteristic polynomial \det(sI - A) = 0 without solving for roots explicitly. The criterion constructs a Routh array from these coefficients; the system is Hurwitz stable if all elements in the first column of the array have the same sign, with the number of sign changes indicating the number of unstable roots. This method, originally developed by Edward Routh and Adolf Hurwitz, is particularly useful for low-order systems where eigenvalue computation is feasible but tedious. Another equivalent condition for asymptotic stability in continuous-time linear systems involves the existence of a positive definite matrix P > 0 solving the Lyapunov equation A^T P + P A = -Q for any positive definite Q > 0. The solution P can be found analytically or numerically, and its positive definiteness confirms stability without eigenvalue computation. Marginal stability occurs when all eigenvalues have non-positive real parts with at least one pair of purely imaginary eigenvalues (and no repeated imaginary roots), resulting in bounded oscillations rather than asymptotic convergence to the origin. In such cases, trajectories remain bounded but do not decay to zero, as seen in undamped oscillators.
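
The Lyapunov-equation test can be carried out numerically without hand computation. The following is a minimal sketch assuming SciPy and an illustrative Hurwitz matrix; scipy.linalg.solve_continuous_lyapunov solves M X + X M^H = Q, so A^T and -Q are passed to match the convention A^T P + P A = -Q used above.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative Hurwitz matrix (assumed; eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

P = solve_continuous_lyapunov(A.T, -Q)  # solves A^T P + P A = -Q

print(np.linalg.eigvals(A))               # [-1., -2.]: Hurwitz
print(np.all(np.linalg.eigvals(P) > 0))  # True: P > 0 confirms asymptotic stability
```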

Systems with Inputs

Bounded-Input Bounded-State Stability

Bounded-input bounded-state (BIBS) stability extends the notion of stability to linear systems subject to external inputs, ensuring that bounded inputs produce bounded state trajectories without requiring convergence to an equilibrium. Consider the continuous-time linear time-invariant system given by \dot{x}(t) = A x(t) + B u(t), where x(t) \in \mathbb{R}^n is the state, u(t) \in \mathbb{R}^m is the input, A \in \mathbb{R}^{n \times n} is the system matrix, and B \in \mathbb{R}^{n \times m} is the input matrix. An input is bounded if there exists M > 0 such that \|u(t)\| \leq M for all t \geq 0. The system is BIBS stable if, for any initial state x(0) and any bounded input u, the state satisfies \sup_{t \geq 0} \|x(t)\| < \infty. For such systems, BIBS stability holds if and only if the matrix A is Hurwitz, i.e., all eigenvalues of A have strictly negative real parts. The explicit solution is x(t) = e^{A t} x(0) + \int_0^t e^{A (t - \tau)} B u(\tau) \, d\tau. When A is Hurwitz, there exist constants K > 0 and \alpha > 0 such that \|e^{A t}\| \leq K e^{-\alpha t} for all t \geq 0, ensuring the first term decays exponentially to zero regardless of x(0). The integral term remains bounded for bounded u, as its norm is upper-bounded by M \int_0^t \|e^{A (t - \tau)}\| \|B\| \, d\tau \leq M \|B\| K / \alpha, yielding an overall state bound proportional to \|x(0)\| and M. If A is not Hurwitz, there exist bounded inputs that cause unbounded states. In the frequency domain, BIBS stability is equivalently characterized by the finiteness of the H_\infty norm of the transfer function G(s) = (sI - A)^{-1} B, defined as \|G\|_\infty = \sup_{\omega \in \mathbb{R}} \bar{\sigma}(G(j\omega)), where \bar{\sigma} denotes the largest singular value. This norm is finite precisely when A is Hurwitz, and it quantifies the worst-case amplification from bounded inputs to states in the L_2 sense, aligning with BIBS for linear systems. The discrete-time counterpart is the system x_{k+1} = A x_k + B u_k, where BIBS stability requires all eigenvalues of A to have magnitude strictly less than one, i.e., A must be Schur stable. The solution x_k = A^k x_0 + \sum_{j=0}^{k-1} A^{k-1-j} B u_j is bounded under bounded u_k due to the geometric decay of the powers of A, analogous to the continuous case.
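
The decomposition of the solution into a decaying free response and a bounded forced response can be observed in simulation. The sketch below is a rough forward-Euler check under assumed illustrative matrices and a bounded switching input, confirming that the state norm stays finite.

```python
import numpy as np

# Illustrative Hurwitz system (assumed; eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
M = 1.0  # input bound, |u(t)| <= M

dt, T = 1e-3, 20.0
x = np.array([1.0, -1.0])  # arbitrary initial state
sup_norm = 0.0
for k in range(int(T / dt)):
    t = k * dt
    u = M * np.sign(np.sin(0.5 * t))      # bounded switching input
    x = x + dt * (A @ x + B.ravel() * u)  # forward-Euler step
    sup_norm = max(sup_norm, np.linalg.norm(x))
print(sup_norm)  # finite, consistent with BIBS stability for Hurwitz A
```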

Input-to-State Stability

Input-to-state stability (ISS) extends Lyapunov stability analysis to nonlinear dynamical systems influenced by external inputs, capturing how the magnitude and persistence of inputs bound the state trajectories while preserving asymptotic convergence toward the equilibrium. For continuous-time systems of the form \dot{x}(t) = f(x(t), u(t)), where x \in \mathbb{R}^n denotes the state and u \in \mathbb{R}^m the input, ISS quantifies the robustness of the system's behavior against disturbances or control signals. This property ensures that bounded inputs produce bounded states, with the state decay rate independent of the input history beyond its supremum norm. The system is input-to-state stable if there exist a class \mathcal{KL} function \beta and a class \mathcal{K} function \gamma such that \|x(t)\| \leq \beta(\|x(0)\|, t) + \gamma\left( \sup_{0 \leq \tau \leq t} \|u(\tau)\| \right) holds for all t \geq 0, all initial states x(0), and all measurable locally essentially bounded inputs u. This definition, introduced by Sontag, highlights the transient response via \beta (which captures decay from initial conditions) and the steady-state gain via \gamma (which bounds the effect of the input supremum). When the input vanishes (u \equiv 0), the inequality simplifies to \|x(t)\| \leq \beta(\|x(0)\|, t), implying global asymptotic stability of the origin for the unforced system. ISS admits a Lyapunov characterization: the system is ISS if and only if there exists a continuous ISS-Lyapunov function V: \mathbb{R}^n \to \mathbb{R}_{\geq 0}, proper and positive definite, satisfying \dot{V}(x, u) \leq -\alpha(\|x\|) + \sigma(\|u\|) along system trajectories for class \mathcal{K}_\infty functions \alpha, \sigma. This dissipation inequality balances state contraction against input forcing, enabling constructive stability proofs and controller design. A significant implication is that ISS guarantees uniform global asymptotic stability when inputs converge to zero, as the state then follows the unforced decay bound. For discrete-time systems x_{k+1} = f(x_k, u_k), an analogous ISS property holds, defined by \|x_k\| \leq \beta(\|x_0\|, k) + \gamma\left( \sup_{0 \leq j < k} \|u_j\| \right) with \beta \in \mathcal{KL} and \gamma \in \mathcal{K}, alongside a corresponding Lyapunov difference condition \Delta V(x_k, u_k) \leq -\alpha(\|x_k\|) + \sigma(\|u_k\|). This extension preserves the core insights for sampled-data and hybrid systems analysis.
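
For a concrete scalar case, \dot{x} = -x + u admits the explicit ISS estimate \|x(t)\| \leq e^{-t} \|x(0)\| + \sup |u|, i.e., \beta(r, t) = r e^{-t} and \gamma(r) = r. The following sketch is a forward-Euler simulation, with an assumed bounded input and a small tolerance for discretization error, checking this bound along one trajectory.

```python
import numpy as np

dt, T = 1e-3, 10.0
x0 = 2.0
x = x0
sup_u = 0.5
ok = True
for k in range(int(T / dt)):
    t = k * dt
    u = sup_u * np.cos(3.0 * t)  # bounded input, sup|u| = 0.5
    x += dt * (-x + u)           # forward-Euler step for xdot = -x + u
    t_next = (k + 1) * dt
    # ISS estimate with beta(r,t) = r e^{-t} and gamma(r) = r, plus Euler slack.
    ok = ok and abs(x) <= np.exp(-t_next) * abs(x0) + sup_u + 1e-2
print(ok)  # True: the trajectory respects the ISS bound
```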

Advanced Topics

Barbalat's Lemma

Barbalat's lemma provides a key tool for establishing asymptotic convergence in stability analysis, particularly when the time derivative of a Lyapunov function is non-positive but not strictly negative, allowing conclusions about the limit of that derivative. In the context of Lyapunov's direct method, where a positive definite function V(t, x) satisfies \dot{V}(t, x) \leq 0 along system trajectories, the lemma extends the analysis to show that \dot{V} \to 0 under additional conditions, facilitating proofs of asymptotic stability in systems where direct negativity of \dot{V} is unavailable. The standard statement of Barbalat's lemma is as follows: Consider a function \phi: [0, \infty) \to \mathbb{R} that is uniformly continuous on [0, \infty). If the limit \lim_{t \to \infty} \int_0^t \phi(\tau) \, d\tau exists and is finite, then \lim_{t \to \infty} \phi(t) = 0. This result, originally established by Barbalat in 1959, relies on uniform continuity to control the oscillations of \phi. A proof sketch proceeds by contradiction. Suppose \phi(t) does not tend to 0. Then there exist \epsilon > 0 and a sequence t_k \to \infty such that |\phi(t_k)| \geq \epsilon. Due to uniform continuity, \phi remains bounded away from zero on intervals of fixed length around each t_k, producing nonvanishing increments of the integral that contradict its convergence to a finite limit. Alternatively, a direct argument uses the Cauchy criterion for the integral's convergence to show that \sup_{s > t} \left| \int_t^s \phi(\tau) \, d\tau \right| \to 0 as t \to \infty, and combines this with the uniform continuity of \phi to bound |\phi(t)|. To apply the lemma in Lyapunov stability, uniform continuity of \dot{V} must hold. This is often ensured by assuming the existence and boundedness of the second time derivative \ddot{V}, which implies that \dot{V} is Lipschitz continuous and thus uniformly continuous; for instance, if the system dynamics and V are sufficiently smooth, \ddot{V} = \frac{\partial \dot{V}}{\partial t} + \frac{\partial \dot{V}}{\partial x} f(t, x) remains bounded on compact sets. Such assumptions are common in non-autonomous or time-varying systems, where \dot{V} \leq 0 implies V is bounded below (e.g., V \geq 0), so \int_0^\infty \dot{V}(\tau) \, d\tau = \lim_{t \to \infty} V(t) - V(0) exists and is finite. The lemma then yields \dot{V} \to 0, proving asymptotic stability when combined with further arguments such as invariance-type reasoning or persistency of excitation. In adaptive control, Barbalat's lemma is frequently used to demonstrate parameter convergence. For example, in model reference adaptive schemes for uncertain linear systems \dot{x} = A x + B (u + \theta^T \phi(x)), a Lyapunov function V = e^T P e + \tilde{\theta}^T \Gamma^{-1} \tilde{\theta} (with tracking error e, parameter error \tilde{\theta}, and positive definite matrices P, \Gamma) satisfies \dot{V} \leq -e^T Q e for some Q > 0, implying boundedness of states and parameters. Uniform continuity of \dot{V} follows from bounded signals, so \dot{V} \to 0 and e \to 0; persistency of excitation then ensures \tilde{\theta} \to 0, achieving parameter convergence.
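
A minimal scalar version of such an adaptive scheme makes the role of Barbalat's lemma concrete. In the sketch below (an illustrative construction under assumed values, not the general MRAC design above), the plant \dot{x} = \theta x + u with unknown \theta is controlled by u = -x - \hat{\theta} x with adaptation \dot{\hat{\theta}} = x^2; the function V = x^2/2 + \tilde{\theta}^2/2 then satisfies \dot{V} = -x^2 \leq 0, so V is bounded and Barbalat's lemma yields x \to 0, even though \hat{\theta} need not converge to \theta.

```python
import numpy as np

theta = 1.3            # true (unknown) parameter, assumed illustrative value
x, theta_hat = 1.0, 0.0
dt = 1e-3
for _ in range(int(30.0 / dt)):
    u = -x - theta_hat * x        # certainty-equivalence control
    x += dt * (theta * x + u)     # plant: xdot = theta*x + u
    theta_hat += dt * x**2        # gradient adaptation law
print(abs(x) < 1e-3)   # True: error driven to zero, as Barbalat's lemma predicts
print(theta_hat)       # bounded, but not guaranteed to equal theta
```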

LaSalle's Invariance Principle

LaSalle's invariance principle provides a method to establish asymptotic stability for autonomous dynamical systems when the time derivative of a Lyapunov function is non-positive rather than strictly negative. Consider a continuous-time system \dot{x} = f(x), where f is locally Lipschitz continuous on a domain D \subseteq \mathbb{R}^n. Suppose there exists a continuously differentiable function V: D \to \mathbb{R} such that V(x) is positive definite and \dot{V}(x) \leq 0 for all x \in D. Let \Omega \subset D be a compact and positively invariant set with respect to the system dynamics. Define E = \{x \in \Omega \mid \dot{V}(x) = 0\} and let M be the largest invariant set contained in E. Then, every trajectory starting in \Omega converges to M as t \to \infty, meaning the \omega-limit set of the trajectory is contained in M. If V(x) is radially unbounded (i.e., V(x) \to \infty as \|x\| \to \infty) and D = \mathbb{R}^n, then the sublevel sets \{x \in \mathbb{R}^n \mid V(x) \leq c\} are compact for all c > 0, ensuring positive invariance and containment of any given initial condition for large enough c. In this case, if the only invariant trajectory in E is the equilibrium point x = 0, the origin is globally asymptotically stable. This refines Lyapunov's method by allowing analysis of the invariant structure within the zero set of \dot{V} to conclude convergence. The principle extends to discrete-time systems x_{k+1} = T(x_k), where T: \mathbb{R}^m \to \mathbb{R}^m is continuous on a set G. Assume a continuous function V: G \to \mathbb{R} satisfies V(T(x)) - V(x) \leq 0 for all x \in G. Let E = \{x \in G \mid V(T(x)) = V(x)\} and M the largest invariant set in E. For any x_0 \in G such that the sequence \{x_k\} remains in G and is bounded, the trajectory converges to M \cap V^{-1}(c), where c is a limit point of \{V(x_k)\}. Continuity of V and compactness of level sets ensure conclusions similar to those in the continuous case.

Examples

Nonlinear Oscillator

A paradigmatic example of Lyapunov stability analysis in nonlinear systems is the damped pendulum, governed by the second-order equation \ddot{\theta} + b \dot{\theta} + \sin \theta = 0, where b > 0 is the damping coefficient and \theta represents the angular displacement from the vertical. This autonomous system has an equilibrium at \theta = 0, \dot{\theta} = 0, corresponding to the downward hanging position. To assess stability, consider the Lyapunov function V(\theta, \dot{\theta}) = \frac{1}{2} \dot{\theta}^2 + (1 - \cos \theta), which represents the total energy (kinetic plus potential, normalized for simplicity). This function is positive definite in a neighborhood of the equilibrium, with V(0, 0) = 0. The time derivative along system trajectories is \dot{V} = \dot{\theta} \ddot{\theta} + \sin \theta \, \dot{\theta} = \dot{\theta} (-b \dot{\theta} - \sin \theta) + \sin \theta \, \dot{\theta} = -b \dot{\theta}^2 \leq 0, which is negative semi-definite. Thus, V is non-increasing, implying Lyapunov stability of the equilibrium. For asymptotic stability, LaSalle's invariance principle applies: the largest invariant set within \{\dot{V} = 0\} consists of the equilibria where \theta = k \pi, \dot{\theta} = 0 for integers k; however, in a neighborhood of the origin, such as |\theta| < \pi, it reduces to the singleton \{\theta = 0, \dot{\theta} = 0\}. Trajectories therefore converge asymptotically to the equilibrium. Numerically, the phase portrait in the (\theta, \dot{\theta})-plane reveals spirals inward toward the origin, capturing the oscillatory decay characteristic of underdamped motion. For b > 0, the pendulum thus exhibits local asymptotic stability, with trajectories spiraling inward due to the energy dissipated by damping.
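
The energy argument can be checked in simulation. The following sketch is a forward-Euler integration with an assumed damping value and a small per-step tolerance for discretization error, verifying that V never increases along the trajectory and that the state approaches the origin.

```python
import numpy as np

b = 0.25  # damping coefficient (an assumed illustrative value)

def V(theta, omega):
    # Total energy: kinetic plus normalized potential.
    return 0.5 * omega**2 + (1.0 - np.cos(theta))

dt, T = 1e-3, 60.0
theta, omega = 1.0, 0.0  # released from rest inside |theta| < pi
v_prev, monotone = V(theta, omega), True
for _ in range(int(T / dt)):
    theta, omega = theta + dt * omega, omega + dt * (-b * omega - np.sin(theta))
    v = V(theta, omega)
    monotone = monotone and v <= v_prev + 1e-6  # slack for Euler discretization error
    v_prev = v
print(monotone)        # True: V is non-increasing along the trajectory
print(theta, omega)    # both near zero, consistent with asymptotic stability
```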

Linear Control System

In linear systems, stability analysis often involves state-space representations where the dynamics are given by \dot{x} = A x + B u, with x \in \mathbb{R}^n as the state, u \in \mathbb{R}^m as the input, A as the system matrix, and B as the input matrix. To achieve stabilization, a common approach is full-state feedback u = -K x, where K \in \mathbb{R}^{m \times n} is the gain matrix, transforming the open-loop system into the closed-loop form \dot{x} = (A - B K) x. The closed-loop matrix A_{cl} = A - B K determines stability, and for asymptotic stability, all eigenvalues of A_{cl} must lie in the open left half of the complex plane (Hurwitz stability). Eigenvalue placement via state feedback allows arbitrary assignment of the closed-loop poles if the pair (A, B) is controllable, enabling the design of K to position eigenvalues in the left half-plane for desired response characteristics such as damping and settling time. This pole-placement technique ensures exponential stability, as the solution to the closed-loop system x(t) = e^{A_{cl} t} x(0) decays to zero for any initial condition when \operatorname{Re}(\lambda_i(A_{cl})) < 0 for all eigenvalues \lambda_i. A representative example is the double integrator, modeling systems like position control in mechanics, with dynamics \ddot{x} = u or in state-space form \dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u, where x = [x, \dot{x}]^T. Without control (u=0), the eigenvalues are at the origin, rendering the system marginally stable but not asymptotically stable. Applying state feedback u = -K x with K = [k_1, k_2] yields A_{cl} = \begin{bmatrix} 0 & 1 \\ -k_1 & -k_2 \end{bmatrix}, whose characteristic equation is s^2 + k_2 s + k_1 = 0; choosing k_1 > 0 and k_2 > 0 places poles at Hurwitz locations, such as -1 \pm j for k_1 = 2, k_2 = 2, ensuring oscillatory decay to equilibrium. Lyapunov stability for the closed-loop system can be verified using a Lyapunov function V(x) = x^T P x, where P > 0 solves the Lyapunov equation A_{cl}^T P + P A_{cl} = -Q for some Q > 0, guaranteeing \dot{V}(x) = -x^T Q x < 0 for x \neq 0 and thus asymptotic stability. In optimal control contexts like the linear quadratic regulator (LQR), P is obtained by solving the algebraic Riccati equation (ARE) A^T P + P A - P B R^{-1} B^T P + Q = 0, with Q \geq 0 and R > 0 as weighting matrices; the resulting K = R^{-1} B^T P stabilizes the system while minimizing a quadratic cost. For the double integrator with Q = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} and scalar R > 0, the positive definite solution P yields closed-loop eigenvalues \lambda_{1,2} = -\frac{1}{\sqrt{2}} R^{-1/4} \pm j \frac{1}{\sqrt{2}} R^{-1/4}, confirming stability. Simulations of the double integrator illustrate the impact: without feedback, trajectories from initial conditions like x(0) = [1, 0]^T exhibit constant position, or linear drift in position if the initial velocity is nonzero, showing the lack of asymptotic stability. With LQR (R=1), the state response decays exponentially, reaching near-zero within seconds, as \|x(t)\| decays like e^{\alpha t} with \alpha < 0 determined by the dominant eigenvalue, highlighting the control's role in enforcing Lyapunov stability.
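
The LQR design for the double integrator can be reproduced numerically. The sketch below, assuming SciPy, solves the ARE with the weights quoted above (Q = diag(1, 0), R = 1), forms K = R^{-1} B^T P, and confirms the closed-loop eigenvalues -1/\sqrt{2} \pm j/\sqrt{2}.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator with the LQR weights from the text.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.0])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # solves A^T P + P A - P B R^{-1} B^T P + Q = 0
K = np.linalg.inv(R) @ B.T @ P         # optimal gain K = R^{-1} B^T P
eigs = np.linalg.eigvals(A - B @ K)
print(K)      # approximately [[1.0, 1.414]]
print(eigs)   # approximately -0.707 +/- 0.707j, matching the formula above for R = 1
```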
