
State-transition matrix

The state-transition matrix, often denoted as \Phi(t, t_0), is a fundamental mathematical construct in control theory and linear dynamical systems that describes the evolution of a system's state vector from an initial time t_0 to a later time t in the absence of external inputs. For linear time-invariant (LTI) systems governed by the homogeneous state-space equation \dot{x}(t) = A x(t), the state at time t is given by x(t) = \Phi(t, t_0) x(t_0), where A is the system matrix and \Phi(t, t_0) = e^{A(t - t_0)} is the matrix exponential. This matrix encapsulates the intrinsic dynamics of the system, enabling predictions of future states solely from initial conditions and system parameters. In discrete-time systems, the state-transition matrix \Phi(k, k_0) similarly governs state evolution according to x(k) = \Phi(k, k_0) x(k_0), where the system follows x(k+1) = A x(k) and \Phi(k, k_0) = A^{k - k_0} for LTI cases. For time-varying systems, where A = A(t), the matrix satisfies the differential equation \dot{\Phi}(t, t_0) = A(t) \Phi(t, t_0) with the initial condition \Phi(t_0, t_0) = I, the identity matrix, highlighting its role in more general linear dynamics.

Key properties include the semigroup property \Phi(t, t_1) \Phi(t_1, t_0) = \Phi(t, t_0) for t \geq t_1 \geq t_0, ensuring consistent composition of transitions, and nonsingularity, which guarantees invertibility and unique solutions to state equations. Additionally, the determinant follows the Jacobi-Liouville formula \det(\Phi(t, t_0)) = \exp\left( \int_{t_0}^t \operatorname{trace}(A(\tau)) \, d\tau \right), quantifying the evolution of volume in phase space for continuous-time systems.

Computation of the state-transition matrix varies by system type: for LTI continuous-time systems, methods include the matrix exponential series e^{At} = \sum_{k=0}^\infty \frac{(At)^k}{k!}, Laplace transform inversion of (sI - A)^{-1}, or the Cayley-Hamilton theorem using the characteristic polynomial. In discrete-time LTI systems, it is simply the powers of A, while time-varying cases often require numerical integration of the matrix differential equation. These approaches underpin stability analysis, where asymptotic stability occurs if all eigenvalues of A have negative real parts, leading to \lim_{t \to \infty} \Phi(t, t_0) = 0. The state-transition matrix is indispensable in control system design, facilitating the solution of nonhomogeneous equations x(t) = \Phi(t, t_0) x(t_0) + \int_{t_0}^t \Phi(t, \tau) B(\tau) u(\tau) \, d\tau to incorporate inputs u(t), and enabling analyses such as controllability and observability assessments. Its applications extend to fields such as aerospace engineering for trajectory prediction, robotics for motion planning, and electrical engineering for circuit analysis, providing a unified framework for modeling and simulating complex dynamic behaviors.
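
As a concrete illustration of these relations, the short Python sketch below propagates a state with the continuous-time matrix exponential and with powers of a sampled discrete-time matrix; the system matrix, initial state, and sampling period are illustrative assumptions, not taken from any particular system:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative LTI system matrix (values chosen arbitrarily, stable).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])          # initial state x(t0)
t, t0 = 1.5, 0.0

# Continuous time: Phi(t, t0) = e^{A (t - t0)} maps x(t0) to x(t).
Phi = expm(A * (t - t0))
x_t = Phi @ x0

# Discrete time: sampling with period T gives x[k+1] = A_d x[k],
# so Phi(k, k0) = A_d^{k - k0}.
T = 0.1
A_d = expm(A * T)
x_15 = np.linalg.matrix_power(A_d, 15) @ x0   # state after 15 samples
print(x_t, x_15)   # both equal x(1.5) for this zero-input system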

Fundamentals

Definition and notation

In linear dynamical systems described by the state-space model \dot{x}(t) = A(t) x(t) + B(t) u(t), where x(t) \in \mathbb{R}^n is the state vector, u(t) \in \mathbb{R}^m is the input vector, A(t) is the n \times n system matrix, and B(t) is the n \times m input matrix, the state-transition matrix \Phi(t, \tau) is defined as the unique n \times n matrix solution to the homogeneous matrix differential equation \dot{\Phi}(t, \tau) = A(t) \Phi(t, \tau) with the initial condition \Phi(\tau, \tau) = I, the n \times n identity matrix. This matrix maps the state at initial time \tau to the state at subsequent time t via the relation x(t) = \Phi(t, \tau) x(\tau) for the homogeneous case (i.e., when u(t) = 0). Standard notation for the state-transition matrix includes \Phi(t, \tau) to emphasize dependence on both times t and \tau, reflecting its role in time-varying systems. In time-invariant systems where A(t) = A is constant, it is often simplified to \Phi(t - \tau), or further to \Phi(t) when \tau = 0, assuming evolution from the initial time origin. A key property is that \Phi(t, t) = I for all t, ensuring no change in state when initial and final times coincide. The concept of the state-transition matrix, also known as the fundamental matrix in the theory of linear ordinary differential equations, emerged in the late 19th century through foundational work on solving systems of linear ODEs, including the variation of constants formula, with later advancements by Aleksandr Lyapunov in the stability analysis of linear systems around 1892. Its formalization in modern control theory occurred in the mid-20th century as part of state-space methods.
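
A minimal numerical check of this definition, assuming a hypothetical time-varying A(t) chosen only for illustration: the sketch integrates the defining matrix ODE from \Phi(\tau, \tau) = I and confirms that x(t) = \Phi(t, \tau) x(\tau) matches a direct simulation of the state.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical time-varying system matrix A(t), for illustration only.
def A(t):
    return np.array([[0.0, 1.0],
                     [-1.0, -0.5 * np.cos(t)]])

def phi_ode(t, phi_flat):
    # d/dt Phi(t, tau) = A(t) Phi(t, tau), flattened for the solver.
    Phi = phi_flat.reshape(2, 2)
    return (A(t) @ Phi).ravel()

tau, t_end = 0.0, 2.0
sol = solve_ivp(phi_ode, (tau, t_end), np.eye(2).ravel(),  # Phi(tau, tau) = I
                rtol=1e-10, atol=1e-12)
Phi = sol.y[:, -1].reshape(2, 2)

# Check x(t) = Phi(t, tau) x(tau) against a direct simulation of the state.
x_tau = np.array([1.0, -1.0])
sol_x = solve_ivp(lambda t, x: A(t) @ x, (tau, t_end), x_tau,
                  rtol=1e-10, atol=1e-12)
print(np.allclose(Phi @ x_tau, sol_x.y[:, -1], atol=1e-6))  # True
```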

Relation to system solutions

The state-transition matrix provides the explicit solution to the homogeneous linear state-space equation \dot{x}(t) = A(t) x(t) with initial condition x(0) = x_0, given by x(t) = \Phi(t, 0) x(0), where \Phi(t, 0) is the state-transition matrix. This matrix itself satisfies the matrix differential equation \frac{d}{dt} \Phi(t, 0) = A(t) \Phi(t, 0) with the initial condition \Phi(0, 0) = I, the identity matrix, ensuring that the state evolution is fully determined by the system matrix A(t) and the initial state. For the non-homogeneous case \dot{x}(t) = A(t) x(t) + B(t) u(t) with x(0) = x_0, the complete solution incorporates the input u(t) through variation of parameters, yielding x(t) = \Phi(t, 0) x(0) + \int_0^t \Phi(t, \tau) B(\tau) u(\tau) \, d\tau. Here, the integral term represents the forced response, propagated forward from each input instant \tau using the state-transition matrix \Phi(t, \tau). Under standard conditions where A(t) is continuous, or where the right-hand side satisfies a Lipschitz condition in the state variable (i.e., \|A(t)(y - x)\| \leq k(t) \|y - x\| for some piecewise continuous k(t)), the Picard-Lindelöf theorem guarantees the existence and uniqueness of solutions to the state equation, implying that the state-transition matrix is unique for the given A(t). To illustrate, consider the scalar system \dot{x}(t) = -a x(t) with a > 0 and initial condition x(0) = x_0. The solution is x(t) = e^{-a t} x_0, so the state-transition matrix (a scalar in this case) is \Phi(t, 0) = e^{-a t}, which satisfies \frac{d}{dt} \Phi(t, 0) = -a \Phi(t, 0) and \Phi(0, 0) = 1.
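
The scalar example and the variation-of-parameters formula lend themselves to numerical verification; in the sketch below (system constants and the input signal are illustrative assumptions), the integral term is evaluated by quadrature and compared against a direct simulation of the forced equation:

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

a, b, x0, t_end = 2.0, 1.0, 1.0, 1.0
u = lambda t: np.sin(t)                     # illustrative input signal

# Direct simulation of xdot = -a x + b u(t).
sol = solve_ivp(lambda t, x: -a * x + b * u(t), (0.0, t_end), [x0],
                rtol=1e-10, atol=1e-12)

# Variation of parameters with Phi(t, tau) = exp(-a (t - tau)).
taus = np.linspace(0.0, t_end, 2001)
forced = trapezoid(np.exp(-a * (t_end - taus)) * b * u(taus), taus)
x_formula = np.exp(-a * t_end) * x0 + forced

print(sol.y[0, -1], x_formula)              # the two values agree closely
```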

Applications in Linear Systems

Time-invariant systems

In linear time-invariant (LTI) systems, where the system matrix A is constant, the state-transition matrix simplifies to \Phi(t, \tau) = e^{A(t - \tau)}. The matrix exponential e^{At} is defined through its power series expansion: e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots, which converges for all finite t and square matrices A. This form arises directly from solving the homogeneous state equation \dot{x}(t) = Ax(t), where the transition matrix maps the initial state x(\tau) to x(t). Unique to LTI systems, the state-transition matrix exhibits time-shift invariance, satisfying \Phi(t + s, \tau + s) = \Phi(t, \tau) for all t, s, \tau, reflecting the system's lack of dependence on absolute time. It also obeys the semigroup property \Phi(t, 0) \Phi(s, 0) = \Phi(t + s, 0), with \Phi(0, 0) = I and \Phi(t, \tau) invertible as \Phi^{-1}(t, \tau) = \Phi(\tau, t). These properties facilitate analysis of system evolution over arbitrary intervals. For the complete state equation \dot{x}(t) = Ax(t) + Bu(t), the solution is x(t) = e^{A(t - \tau)} x(\tau) + \int_{\tau}^{t} e^{A(t - \sigma)} B u(\sigma) \, d\sigma, combining the unforced response with the convolution integral for the input effect. In discrete-time LTI systems, modeled as x(k+1) = Ax(k) + Bu(k), the state-transition matrix is \Phi(k, j) = A^{k-j}, raising the matrix A to the power k - j to advance the state by k - j steps. This discrete form often derives from sampling continuous-time LTI systems, where A = e^{A_c T} for sampling period T, linking the two domains. Z-transform analysis further connects it to (zI - A)^{-1}, enabling frequency-domain solution of the state equations and analysis of the system response.
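
These LTI properties can be verified directly; the following sketch (with an arbitrary stable A and a sampling period chosen for illustration) checks the semigroup law, the inverse relation, and the sampled-data correspondence A_d = e^{A T}:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # illustrative constant A
t, s = 0.7, 1.3

# Semigroup property: e^{A(t+s)} = e^{At} e^{As}.
assert np.allclose(expm(A * (t + s)), expm(A * t) @ expm(A * s))

# Inverse: Phi(t, tau)^{-1} = Phi(tau, t) = e^{-A(t - tau)}.
assert np.allclose(np.linalg.inv(expm(A * t)), expm(-A * t))

# Sampling a continuous-time system: A_d = e^{A T} for period T.
T = 0.05
A_d = expm(A * T)
# Advancing k discrete steps matches continuous evolution over k*T.
k = 20
assert np.allclose(np.linalg.matrix_power(A_d, k), expm(A * k * T))
```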

Time-varying systems

In linear time-varying (LTV) systems of the form \dot{x}(t) = A(t) x(t) + B(t) u(t), the state-transition matrix \Phi(t, \tau) relates the state at time t to the state at an earlier time \tau via x(t) = \Phi(t, \tau) x(\tau) for the homogeneous case. Unlike the linear time-invariant (LTI) scenario, where \Phi(t, \tau) = e^{A(t-\tau)} provides a closed-form solution, the LTV case generally lacks such an explicit expression due to the dependence of A(t) on time, complicating direct computation and analysis. The state-transition matrix for LTV systems satisfies the fundamental matrix differential equations \frac{\partial \Phi(t, \tau)}{\partial t} = A(t) \Phi(t, \tau) with respect to the upper limit and \frac{\partial \Phi(t, \tau)}{\partial \tau} = -\Phi(t, \tau) A(\tau) with respect to the lower limit, subject to the initial condition \Phi(\tau, \tau) = I. These partial differential equations arise from differentiating the definition of \Phi(t, \tau) and substituting the system dynamics, reflecting the dependence of \Phi on both of its time arguments. The complete solution to the LTV state equation incorporates these properties as x(t) = \Phi(t, 0) x(0) + \int_0^t \Phi(t, \sigma) B(\sigma) u(\sigma) \, d\sigma, where the integral term accounts for the forced response. This formulation underscores the heightened sensitivity to time variations in A(t) and B(t), as small changes in these matrices can propagate nonlinearly through the convolution, often demanding numerical approximation techniques like Runge-Kutta methods for practical evaluation rather than analytical forms. A representative example is the damped harmonic oscillator with time-varying damping, governed by the state-space model \dot{x}_1 = x_2, \dot{x}_2 = -\omega^2 x_1 - \gamma(t) x_2, yielding A(t) = \begin{pmatrix} 0 & 1 \\ -\omega^2 & -\gamma(t) \end{pmatrix}. For a damping coefficient such as \gamma(t) = \gamma_0 + \alpha \sin(\beta t) with \gamma(t) > 0 ensuring underdamping at each instant, the state-transition matrix \Phi(t, \tau) resists closed-form derivation due to the oscillatory damping term, illustrating the necessity of numerical methods to compute state trajectories and assess response characteristics. Stability analysis for LTV systems reveals significant departures from LTI behavior: the system may be unstable even if A(t) is Hurwitz (all eigenvalues with negative real parts) for every t, as the time variations can accumulate to drive trajectories to infinity. This contrasts sharply with LTI systems, where a constant Hurwitz A ensures asymptotic stability; in LTV cases, the lack of uniform eigenvalue decay allows instability, as demonstrated by examples where the real parts of eigenvalues remain negative but the solution grows unbounded due to parametric excitation.
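
Since no closed form is available, \Phi(t, \tau) for this kind of system is obtained by integrating the matrix ODE numerically; a minimal sketch, assuming illustrative parameter values for the sinusoidally modulated damping:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped oscillator with sinusoidal damping (illustrative parameters).
omega, gamma0, alpha, beta = 1.0, 0.3, 0.2, 2.0
def A(t):
    gamma = gamma0 + alpha * np.sin(beta * t)
    return np.array([[0.0, 1.0],
                     [-omega**2, -gamma]])

# Integrate dPhi/dt = A(t) Phi with Phi(0, 0) = I; no closed form
# is generally available for this time-varying A(t).
sol = solve_ivp(lambda t, p: (A(t) @ p.reshape(2, 2)).ravel(),
                (0.0, 10.0), np.eye(2).ravel(), rtol=1e-9, atol=1e-12)
Phi_10 = sol.y[:, -1].reshape(2, 2)

# Any trajectory then follows x(t) = Phi(t, 0) x(0).
print(Phi_10 @ np.array([1.0, 0.0]))
```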

Mathematical Properties

Basic properties

The state-transition matrix \Phi(t, \tau) for a linear time-varying system \dot{x}(t) = A(t) x(t) exhibits several fundamental properties that ensure its utility in describing system evolution. If A(t) is continuous, then \Phi(t, \tau) is continuously differentiable with respect to both t and \tau. Specifically, the partial derivative with respect to the first argument satisfies \frac{\partial \Phi(t, \tau)}{\partial t} = A(t) \Phi(t, \tau), while the partial derivative with respect to the second argument is \frac{\partial \Phi(t, \tau)}{\partial \tau} = -\Phi(t, \tau) A(\tau). These differential relations follow directly from the matrix equation defining \Phi(t, \tau) and confirm its smooth dependence on time parameters. A key algebraic property is the invertibility of \Phi(t, \tau) for all t and \tau, which holds because the homogeneous linear system has unique solutions forward and backward in time. The inverse is given by \Phi(t, \tau)^{-1} = \Phi(\tau, t), reflecting the reciprocal nature of state propagation. This nonsingularity is quantified by the determinant formula \det \Phi(t, \tau) = \exp\left( \int_{\tau}^{t} \operatorname{trace}(A(s)) \, ds \right), which is nonzero and provides insight into volume preservation or expansion in state space. At the initial time, \Phi(\tau, \tau) = I_n, where I_n is the n \times n identity matrix for an n-dimensional state space; this normalization ensures that the state remains unchanged instantaneously. Additionally, \Phi(t, \tau) acts linearly on the state, satisfying \Phi(t, \tau) (c \mathbf{x}_1 + \mathbf{x}_2) = c \Phi(t, \tau) \mathbf{x}_1 + \Phi(t, \tau) \mathbf{x}_2 for scalars c and vectors \mathbf{x}_1, \mathbf{x}_2, as it represents a linear transformation of initial conditions. These traits underpin the matrix's role in composing solutions for both time-invariant and time-varying systems.
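
The determinant and inverse properties can be checked numerically for a hypothetical A(t); in the sketch below, the Jacobi-Liouville formula and \Phi(t, \tau)^{-1} = \Phi(\tau, t) are both verified by forward and backward integration of the defining ODE:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

def A(t):   # hypothetical continuous A(t), for illustration only
    return np.array([[np.sin(t), 1.0],
                     [-1.0, -0.2 * t]])

def propagate(t_from, t_to):
    """Integrate dPhi/dt = A(t) Phi from t_from to t_to (either direction)."""
    sol = solve_ivp(lambda t, p: (A(t) @ p.reshape(2, 2)).ravel(),
                    (t_from, t_to), np.eye(2).ravel(),
                    rtol=1e-11, atol=1e-13)
    return sol.y[:, -1].reshape(2, 2)

t, tau = 2.0, 0.5
Phi = propagate(tau, t)

# Jacobi-Liouville: det Phi(t, tau) = exp( int_tau^t trace A(s) ds ).
tr_int, _ = quad(lambda s: np.trace(A(s)), tau, t)
print(np.linalg.det(Phi), np.exp(tr_int))        # agree closely

# Inverse property: Phi(t, tau)^{-1} = Phi(tau, t).
print(np.allclose(np.linalg.inv(Phi), propagate(t, tau), atol=1e-6))
```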

Advanced properties

One key advanced property of the state-transition matrix \Phi(t, \tau) for linear time-varying systems \dot{x}(t) = A(t) x(t) is the norm bound derived from the Bellman-Gronwall lemma, which states that \|\Phi(t, \tau)\| \leq \exp\left( \int_{\tau}^{t} \|A(s)\| \, ds \right). This inequality provides an explicit upper estimate on the growth or decay of solutions, essential for stability margins in control design without computing \Phi explicitly. For linear time-invariant (LTI) systems where A(t) = A is constant, the eigenvalues of \Phi(t, 0) = e^{A t} are e^{\lambda_i t}, with \lambda_i denoting the eigenvalues of A; this relation directly links the spectral properties of A to the transient behavior of states. In contrast, for linear time-varying (LTV) systems with periodic A(t + T) = A(t), Floquet theory decomposes the transition over one period (the monodromy matrix \Phi(t + T, t)) as \Phi(t + T, t) = P(t) e^{R T} P(t)^{-1}, where P(t) is periodic and R is constant; stability depends on the Floquet exponents obtained from the eigenvalues of R. The adjoint state-transition matrix \Phi^*(t, \tau), governing the backward propagation of the costate in optimal control problems, satisfies \dot{p}(t) = -A(t)^T p(t) and relates to the original via \Phi^*(t, \tau) = \Phi(\tau, t)^T, enabling efficient gradient computation and duality arguments in linear quadratic regulators. Uniform asymptotic stability of the origin requires \lim_{h \to \infty} \sup_{\tau \geq 0} \|\Phi(\tau + h, \tau)\| = 0, ensuring trajectories converge uniformly regardless of initial time; for linear systems, this is equivalent to uniform exponential stability with \|\Phi(t, \tau)\| \leq k e^{-\gamma (t - \tau)} for constants k, \gamma > 0. In the 1960s, modern control theory, pioneered by Kalman and colleagues, integrated stability criteria with state-space representations, using the state-transition matrix to characterize internal stability and design optimal regulators, shifting the focus from frequency-domain to time-domain methods.
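
The following sketch illustrates two of these results for a hypothetical periodic system (parameter values are assumptions chosen for illustration): it computes the monodromy matrix with its Floquet multipliers, and checks the Bellman-Gronwall norm bound:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Hypothetical periodic system matrix with period T (illustrative values).
T = 2.0 * np.pi
def A(t):
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.3 * np.cos(t)), -0.1]])

# Monodromy matrix Phi(T, 0): integrate dPhi/dt = A(t) Phi over one period.
sol = solve_ivp(lambda t, p: (A(t) @ p.reshape(2, 2)).ravel(),
                (0.0, T), np.eye(2).ravel(), rtol=1e-11, atol=1e-13)
M = sol.y[:, -1].reshape(2, 2)

# Floquet multipliers: eigenvalues of M; all |mu| < 1 means decay per period.
print(np.abs(np.linalg.eigvals(M)))

# Bellman-Gronwall bound: ||Phi(T, 0)|| <= exp( int_0^T ||A(s)|| ds ).
integral, _ = quad(lambda s: np.linalg.norm(A(s), 2), 0.0, T)
print(np.linalg.norm(M, 2), "<=", np.exp(integral))
```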

Series Expansions and Derivations

Peano-Baker series

The Peano-Baker series offers a formal infinite series representation for the state-transition matrix \Phi(t, \tau) in time-varying linear systems governed by the homogeneous equation \dot{x}(t) = A(t) x(t), where A(t) is an n \times n matrix-valued function. This expansion arises from successive Picard iterations of the integral form of the matrix differential equation \frac{d}{dt} \Phi(t, \tau) = A(t) \Phi(t, \tau) with \Phi(\tau, \tau) = I. The explicit form of the Peano-Baker series is given by \Phi(t, \tau) = I + \sum_{k=1}^{\infty} \int_{\tau}^{t} A(s_1) \int_{\tau}^{s_1} A(s_2) \cdots \int_{\tau}^{s_{k-1}} A(s_k) \, ds_k \cdots ds_2 \, ds_1, where I is the identity matrix and the nested integrals are ordered such that \tau \leq s_k \leq \cdots \leq s_2 \leq s_1 \leq t. This series directly solves for the state evolution x(t) = \Phi(t, \tau) x(\tau). The series was originally developed by Giuseppe Peano in his 1888 work on integrating linear differential equations via series methods, with further refinements by Henry Frederick Baker in 1905, who extended the approach to more general linear systems of ordinary differential equations (ODEs). The Peano-Baker series converges absolutely and uniformly on any compact interval where A(t) is continuous, or more generally, where \|A(t)\| is locally integrable; a sufficient condition for absolute convergence over [\tau, t] is \int_{\tau}^{t} \|A(s)\| \, ds < \infty, ensuring the series defines a bounded linear operator. Truncation of the series to the first m terms yields an approximation with remainder bounded by \frac{ \left( \int_{\tau}^{t} \|A(s)\| \, ds \right)^{m+1} }{ (m+1)! } \exp\left( \int_{\tau}^{t} \|A(s)\| \, ds \right), providing quantifiable error estimates for practical computations. For slowly varying A(t), where higher-order integrals contribute negligibly, the first few terms of the series approximate the state-transition matrix effectively; for instance, in the scalar case n=1 with A(t) nearly constant, the truncation \Phi(t, \tau) \approx I + \int_{\tau}^{t} A(s) \, ds closely mimics the exponential form \exp\left( \int_{\tau}^{t} A(s) \, ds \right), with the error diminishing as the variation in A(t) decreases.
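
A truncated Peano-Baker series can be evaluated by computing the iterated integrals term by term on a grid; the sketch below (with an illustrative A(t), grid resolution, and truncation order) compares the truncation against direct integration of the matrix ODE:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, solve_ivp

def A(t):  # hypothetical time-varying system matrix, for illustration
    return np.array([[0.0, 1.0], [-1.0, -0.5 * np.sin(t)]])

tau, t_end, n_pts, n_terms = 0.0, 1.0, 2001, 10
s = np.linspace(tau, t_end, n_pts)
A_s = np.array([A(si) for si in s])                 # shape (n_pts, 2, 2)

# Term-by-term Peano-Baker sum: T_0 = I and
# T_k(t) = int_tau^t A(s) T_{k-1}(s) ds, accumulated on the grid.
term = np.broadcast_to(np.eye(2), (n_pts, 2, 2)).copy()
Phi = term.copy()
for _ in range(n_terms):
    term = cumulative_trapezoid(A_s @ term, s, axis=0, initial=0)
    Phi = Phi + term

# Reference value from direct integration of dPhi/dt = A(t) Phi.
sol = solve_ivp(lambda t, p: (A(t) @ p.reshape(2, 2)).ravel(),
                (tau, t_end), np.eye(2).ravel(), rtol=1e-11, atol=1e-13)
Phi_ref = sol.y[:, -1].reshape(2, 2)
print(np.max(np.abs(Phi[-1] - Phi_ref)))  # small truncation/quadrature residual
```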

Exponential matrix form

For time-invariant linear systems described by \dot{x}(t) = A x(t), where A is a constant matrix, the state-transition matrix \Phi(t) takes the closed-form exponential expression \Phi(t) = e^{A t}, which maps the initial state x(0) to x(t) = e^{A t} x(0). This form arises directly from solving the homogeneous system, generalizing the scalar case where the solution to \dot{x} = a x is x(t) = e^{a t} x(0). The matrix exponential is defined via its power series expansion: e^{A t} = I + A t + \frac{(A t)^2}{2!} + \frac{(A t)^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{(A t)^k}{k!}, analogous to the scalar exponential series. This series converges absolutely for all finite matrices A and all real or complex t, due to the uniform convergence of the scalar series and properties of matrix norms. Alternative representations facilitate computation or analysis. One approach uses the inverse Laplace transform: e^{A t} = \mathcal{L}^{-1} \left\{ (s I - A)^{-1} \right\}, where \mathcal{L}^{-1} denotes the inverse Laplace operator applied entrywise; this follows from the Laplace transform of the system solution. For matrices with known Jordan canonical form A = P J P^{-1}, where J is block-diagonal with Jordan blocks, the exponential simplifies to e^{A t} = P e^{J t} P^{-1}, and each Jordan block exponential e^{J_k t} expands as e^{\lambda t} \left( I + t N + \frac{(t N)^2}{2!} + \cdots + \frac{(t N)^{m-1}}{(m-1)!} \right) for eigenvalue \lambda and nilpotent part N of size m, terminating finitely since N^m = 0. Key properties include the semigroup law e^{A (t_1 + t_2)} = e^{A t_1} e^{A t_2} for t_1, t_2 \geq 0, reflecting the additive structure of time in the system dynamics. For non-commuting matrices A and B, the product e^{A t} e^{B t} does not equal e^{(A + B) t} in general, but expands via the Baker-Campbell-Hausdorff formula as e^{A t} e^{B t} = \exp\left( (A + B) t + \frac{[A, B]}{2} t^2 + \cdots \right), where [A, B] = A B - B A is the Lie bracket and the omitted terms involve higher-order nested commutators; this series converges for sufficiently small t depending on the norms of A and B. The concept traces to the late nineteenth century, extending the scalar exponential function to non-commuting objects in the work of Sophus Lie on continuous transformation groups, where it serves as the exponential map from the Lie algebra to the Lie group.
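
The power-series definition translates directly into code; the sketch below (matrix values are illustrative assumptions) compares a truncated series against SciPy's expm, used here as the reference:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # illustrative constant matrix
t = 0.8

def expm_series(M, m=30):
    """Truncated power series sum_{k=0}^{m} M^k / k!."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, m + 1):
        term = term @ M / k                 # builds M^k / k! incrementally
        result = result + term
    return result

# The truncation agrees with SciPy's expm to round-off for this matrix.
print(np.max(np.abs(expm_series(A * t) - expm(A * t))))
```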

Computation and Estimation

Analytical computation

For linear time-invariant (LTI) systems, the state-transition matrix \Phi(t) = e^{At} admits a closed-form analytical expression via the matrix exponential, which can be computed exactly using the Cayley-Hamilton theorem. This theorem states that a matrix A satisfies its own characteristic equation, allowing the exponential to be reduced to a finite polynomial in A of degree at most n-1, where n is the system dimension: e^{At} = \alpha_0(t) I + \alpha_1(t) A + \cdots + \alpha_{n-1}(t) A^{n-1}, with scalar coefficients \alpha_i(t) determined by solving the scalar equations e^{\lambda_j t} = \alpha_0(t) + \alpha_1(t) \lambda_j + \cdots + \alpha_{n-1}(t) \lambda_j^{n-1} at the eigenvalues \lambda_j of the characteristic polynomial (derivative conditions supply the additional equations for repeated eigenvalues). This approach is particularly useful for low-dimensional systems where the powers of A can be explicitly calculated. In linear time-varying (LTV) systems, exact analytical computation of the state-transition matrix \Phi(t, t_0) is generally challenging, but closed-form solutions exist for systems with special structure, such as when the matrices A(t) at different times commute, i.e., [A(t_1), A(t_2)] = 0 for all t_1, t_2. In such cases, the time-ordered exponential simplifies to the ordinary matrix exponential: \Phi(t, 0) = \exp\left( \int_0^t A(s) \, ds \right), enabling direct integration of the system matrix. Another symbolic method for analytical computation, applicable primarily to LTI systems, involves the Laplace transform: the transform of \Phi(t) is \Phi(s) = (sI - A)^{-1}, and the matrix exponential is recovered via the inverse Laplace transform of each entry. This technique leverages partial fraction decomposition or residue methods for explicit inversion when the eigenvalues of A are known. Despite these methods, analytical computation is rarely feasible for high-dimensional systems or general LTV cases without special structure, as the required integrations or inversions become intractable symbolically, often necessitating numerical alternatives.
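
For a 2x2 system with distinct eigenvalues, the Cayley-Hamilton construction reduces to solving a small Vandermonde system for \alpha_0(t) and \alpha_1(t); a minimal sketch with illustrative values:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 2x2 system with distinct eigenvalues -1 and -2.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 1.0
lam = np.linalg.eigvals(A)

# Cayley-Hamilton: e^{At} = alpha_0(t) I + alpha_1(t) A, with the
# coefficients fixed by e^{lam_i t} = alpha_0(t) + alpha_1(t) lam_i
# for each eigenvalue (a Vandermonde system when they are distinct).
V = np.vander(lam, N=2, increasing=True)    # rows [1, lam_i]
alpha = np.linalg.solve(V, np.exp(lam * t))
Phi = alpha[0] * np.eye(2) + alpha[1] * A

print(np.max(np.abs(Phi - expm(A * t))))    # agrees to round-off
```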

Numerical estimation

Numerical methods for estimating the state-transition matrix \Phi(t) are essential when analytical solutions are unavailable or impractical, particularly for linear time-varying (LTV) systems where the matrix A(t) depends on time. For LTV systems, the state-transition matrix satisfies the matrix ordinary differential equation (ODE) \frac{d\Phi}{dt} = A(t) \Phi(t) with \Phi(0) = I. One common approach is direct numerical integration using explicit Runge-Kutta methods, such as the fourth-order scheme, which approximates the solution by evaluating the derivative at intermediate points within each time step. These methods are implemented in standard numerical libraries and provide controllable accuracy through adaptive step sizing. For instance, a fixed step size of 5 seconds has been shown to achieve position errors below 0.001 ft and velocity errors below 0.00001 ft/s over intervals of 1500 seconds in simulations. Error analysis for Runge-Kutta integration of such matrix ODEs reveals that the local truncation error is on the order of O(h^{p+1}), where h is the step size and p is the method order (e.g., p=4 for the classical scheme), while the global error accumulates as O(h^p) over the interval, assuming Lipschitz continuity of A(t). For stiff systems or long-time integrations, the Magnus expansion offers a more efficient alternative, approximating \Phi(t) = \exp(\Omega(t)) where \Omega(t) is a series expansion of integrated commutators of A(t); truncated versions preserve the underlying Lie group structure (for example, orthogonality when A(t) is skew-symmetric) and exhibit superior stability compared to standard Runge-Kutta methods for certain problem classes. These integrators are particularly useful in control applications like Kalman filtering, where the state-transition matrix propagates covariance. For linear time-invariant (LTI) systems, where \Phi(t) = e^{At}, eigenvalue decomposition provides an efficient numerical estimation if A is diagonalizable: compute eigenvalues \lambda_i and eigenvectors v_i via A = V \Lambda V^{-1}, then e^{At} = V \exp(\Lambda t) V^{-1} with \exp(\Lambda t) = \operatorname{diag}(e^{\lambda_i t}). This method leverages spectral properties and is implemented in libraries like MATLAB's expm function, which combines scaling and squaring with Padé approximation for high accuracy, achieving relative errors near machine precision for well-conditioned matrices. Challenges arise with ill-conditioned eigenvectors or non-diagonalizable cases, where alternative decompositions such as the Schur decomposition are required, though they increase computational cost. Data-driven estimation of the discrete-time state-transition matrix \Phi_k is possible from input-output measurements using subspace identification algorithms, which avoid explicit model assumptions. The N4SID (Numerical algorithms for Subspace State Space System IDentification) method constructs an extended observability matrix from Hankel matrices of past and future outputs, performs singular value decomposition (SVD) to estimate the system order, and recovers \Phi_k via least-squares regression on the state sequences derived from oblique projections. Similarly, the MOESP (Multivariable Output-Error State sPace) algorithm uses orthogonal projections to separate deterministic and stochastic components, estimating \Phi_k through RQ factorization of block-Hankel matrices followed by SVD-based state realization, yielding consistent estimates under persistent excitation.
These techniques, available in toolboxes like MATLAB's System Identification Toolbox, produce a discrete \Phi_k \approx I + A \Delta t for small sampling intervals \Delta t, with estimation errors scaling as O(1/\sqrt{N}) for N data points. Post-2000 advances incorporate machine learning for approximating \Phi(t) from trajectories, especially in high-dimensional or partially observed settings. Recurrent neural networks (RNNs) can learn linear dynamics by minimizing prediction errors via gradient descent, parameterizing the state transition as \hat{A} in a canonical form and achieving excess risk bounds of O(\sqrt{n^5} + \sigma^2 n^3 / (T N)), where n is the state dimension, T the trajectory length, N the number of trajectories, and \sigma^2 the noise variance; errors are measured in mean squared prediction loss, implicitly bounding the Frobenius norm \|\hat{A} - A\|_F. Feedforward neural networks have also been used to parameterize time-varying \Phi(t) directly, trained on simulation data to approximate solutions of the matrix ODE with high fidelity and reduced computational overhead compared to traditional integrators. As an illustrative example, consider estimating the state-transition matrix for the LTI system with A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} (an undamped harmonic oscillator) over t = 1 using fourth-order Runge-Kutta with step size h = 0.1. The integration yields \Phi(1) \approx \begin{pmatrix} 0.5403 & 0.8415 \\ -0.8415 & 0.5403 \end{pmatrix}, closely matching the analytical e^{At} = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix} evaluated at t = 1, with a relative error below 10^{-6} due to the method's fourth-order accuracy.
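
The fixed-step RK4 computation in this example can be reproduced in a few lines; the sketch below integrates the matrix ODE for the oscillator and compares against the analytical rotation-matrix solution:

```python
import numpy as np

# Undamped oscillator from the example: dPhi/dt = A Phi, Phi(0) = I.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
h, n_steps = 0.1, 10                        # classical RK4 up to t = 1

Phi = np.eye(2)
f = lambda P: A @ P                         # right-hand side of the matrix ODE
for _ in range(n_steps):
    k1 = f(Phi)
    k2 = f(Phi + 0.5 * h * k1)
    k3 = f(Phi + 0.5 * h * k2)
    k4 = f(Phi + h * k3)
    Phi = Phi + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# For this A, e^{At} is the rotation matrix [[cos t, sin t], [-sin t, cos t]].
exact = np.array([[np.cos(1.0), np.sin(1.0)],
                  [-np.sin(1.0), np.cos(1.0)]])
print(np.max(np.abs(Phi - exact)))          # on the order of 1e-7
```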
