
Matrix differential equation

A matrix differential equation, also known as a matrix system of differential equations, is a mathematical model that expresses a system of ordinary differential equations in compact vector-matrix form, where the unknowns are components of a vector function and the relationships involve matrix coefficients. The general form for a linear homogeneous system is \dot{\mathbf{x}}(t) = A \mathbf{x}(t), where \mathbf{x}(t) is an n \times 1 vector of functions, A is an n \times n constant matrix, and the dot denotes differentiation with respect to time t; for nonhomogeneous cases, it extends to \dot{\mathbf{x}}(t) = A \mathbf{x}(t) + \mathbf{f}(t), incorporating an external forcing term \mathbf{f}(t). This formulation arises naturally from converting higher-order scalar equations or coupled first-order systems into first-order vector form, facilitating analysis of multidimensional dynamics such as those in physics and engineering.

Solutions to homogeneous matrix differential equations are constructed using the eigenvalues and eigenvectors of A, yielding expressions like \mathbf{x}(t) = c_1 e^{\lambda_1 t} \mathbf{v}_1 + \cdots + c_n e^{\lambda_n t} \mathbf{v}_n for distinct real eigenvalues, with adjustments for repeated or complex eigenvalues involving generalized eigenvectors or oscillatory terms. Alternatively, the solution is given by the matrix exponential e^{At}: the general solution is \mathbf{x}(t) = e^{At} \mathbf{x}(0) for initial value problems, and this encapsulates the system's evolution operator. For nonhomogeneous systems, methods like variation of parameters integrate the matrix exponential against the forcing function.

Matrix differential equations are pivotal in modeling coupled phenomena, including mechanical systems like mass-spring oscillators where coupled equations describe interactions via coefficient matrices, electrical networks such as RLC circuits reduced to state-space forms, and biological processes like predator-prey dynamics or chemical kinetics represented by stoichiometric matrices. Their study underpins stability analysis through eigenvalues (e.g., negative real parts indicate asymptotic stability) and extends to nonlinear variants via linearization, influencing a wide range of scientific and engineering fields.

Fundamental Concepts

Definition and Formulation

A matrix differential equation, also known as a linear system of first-order ordinary differential equations (ODEs), is formulated as \dot{\mathbf{x}}(t) = A \mathbf{x}(t) + \mathbf{b}(t), where \mathbf{x}(t) is an n-dimensional vector function representing the state variables, A is an n \times n constant matrix, and \mathbf{b}(t) is an n-dimensional forcing vector that may depend on time t. This equation encapsulates a system of n coupled linear ODEs, with the matrix A encoding the linear interactions and dependencies among the state variables. The components play distinct roles: \mathbf{x}(t) tracks the evolution of the system's states over time, such as positions or concentrations in a physical model; A governs the intrinsic dynamics through its entries, which represent coefficients of proportionality between states; and \mathbf{b}(t) accounts for external inputs or non-homogeneous effects. To specify a unique solution, an initial condition \mathbf{x}(0) = \mathbf{x}_0 is typically imposed, forming an initial value problem. Standard notation conventions include boldface for vectors like \mathbf{x}(t), an overdot for the time derivative \dot{\mathbf{x}}(t) = \frac{d\mathbf{x}}{dt}, and t as the independent variable, often representing time in applications. These equations arise naturally in modeling multivariable dynamical systems, such as electrical circuits, coupled mechanical oscillators, or chemical reaction networks, where multiple interacting variables evolve linearly over time, as in the sketch below.
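For concreteness, here is a minimal sketch (in Python with NumPy, which this article does not itself assume) of rewriting a second-order scalar ODE in the vector-matrix form above; the oscillator coefficients are arbitrary choices for illustration:

```python
import numpy as np

# Hypothetical example: rewrite y'' + 3y' + 2y = u(t) in vector-matrix form.
# With state variables x1 = y and x2 = y', the scalar equation becomes
#   x1' = x2
#   x2' = -2*x1 - 3*x2 + u(t),
# i.e. x'(t) = A x(t) + b(t).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

def b(t):
    # Forcing vector b(t); here a constant unit input entering the x2 equation.
    return np.array([0.0, 1.0])

def rhs(t, x):
    # Right-hand side of the matrix differential equation x' = A x + b(t).
    return A @ x + b(t)
```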

Homogeneous versus Non-Homogeneous Systems

Matrix differential equations, or systems of linear ordinary differential equations in vector form, are classified as homogeneous or non-homogeneous according to the presence of external forcing terms. This distinction is fundamental to understanding their structure and solvability, extending the concepts from scalar ordinary differential equations (ODEs) to vector-valued systems.

A homogeneous system takes the form \dot{\mathbf{x}}(t) = A \mathbf{x}(t), where \mathbf{x}(t) is an n-dimensional vector function, A is an n \times n constant coefficient matrix, and the forcing term \mathbf{b}(t) = \mathbf{0}. In this case, the system's evolution depends solely on the initial condition \mathbf{x}(0) and the intrinsic properties of A, such as its eigenvalues, which determine whether solutions decay, grow, or oscillate over time. The zero vector \mathbf{x}(t) \equiv \mathbf{0} is always a solution, known as the trivial solution, and the set of all solutions forms a vector space under linear combinations.

In contrast, a non-homogeneous system is given by \dot{\mathbf{x}}(t) = A \mathbf{x}(t) + \mathbf{b}(t), where \mathbf{b}(t) is a non-zero vector function representing external inputs or forcing. This term introduces influences beyond the system's internal dynamics, such as constant vectors (e.g., \mathbf{b}(t) = \mathbf{c} for steady external loads) or time-varying functions like sinusoidal inputs (e.g., \mathbf{b}(t) = \sin(\omega t) \mathbf{d}). The solutions of a non-homogeneous system no longer form a vector space, and finding them requires both the homogeneous solution and a particular solution that accounts for \mathbf{b}(t).

The superposition principle applies to linear systems, stating that the general solution of the non-homogeneous equation is the sum of the general solution of the associated homogeneous equation and any particular solution of the non-homogeneous one. This additive structure leverages the linearity of the differential operator, allowing separate treatment of internal dynamics and external effects, much as in scalar linear ODEs.
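The decomposition can be checked numerically; the following sketch assumes NumPy and SciPy are available, and the matrix A and forcing b are arbitrary choices rather than examples from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
b = np.array([1.0, 1.0])            # constant forcing
x0 = np.array([2.0, -1.0])
t_eval = np.linspace(0.0, 5.0, 50)

# Full non-homogeneous solution with initial condition x0.
full = solve_ivp(lambda t, x: A @ x + b, (0.0, 5.0), x0,
                 t_eval=t_eval, rtol=1e-9, atol=1e-12)

# Constant particular solution x_p from A x_p + b = 0, and the homogeneous
# solution started from the remaining offset x0 - x_p.
xp = np.linalg.solve(A, -b)
hom = solve_ivp(lambda t, x: A @ x, (0.0, 5.0), x0 - xp,
                t_eval=t_eval, rtol=1e-9, atol=1e-12)

# Superposition: x(t) = x_h(t) + x_p, up to solver tolerance.
assert np.allclose(full.y, hom.y + xp[:, None])
```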

Analytical Solutions

Matrix Exponential Method

The matrix exponential serves as the fundamental solution to the homogeneous linear matrix differential equation \dot{\mathbf{x}} = A \mathbf{x}. The matrix exponential e^{At} is defined by the power series e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!}, which converges absolutely for every A and every scalar t. Differentiating the series term by term yields \frac{d}{dt} e^{At} = A e^{At}, with e^{A \cdot 0} = I, so \mathbf{x}(t) = e^{At} \mathbf{x}(0) solves the initial value problem. For the non-homogeneous equation \dot{\mathbf{x}} = A \mathbf{x} + \mathbf{b}(t), the solution is \mathbf{x}(t) = e^{At} \mathbf{x}(0) + \int_{0}^{t} e^{A(t-s)} \mathbf{b}(s) \, ds.

To compute e^{At}, one approach is the power series itself, though it is primarily of theoretical interest due to slow convergence for large \|At\|. If A is diagonalizable, A = P D P^{-1} with D diagonal, then e^{At} = P e^{Dt} P^{-1}, where e^{Dt} is diagonal with entries e^{\lambda_i t}. For non-diagonalizable matrices, the Jordan canonical form is used, where the exponential respects the block structure. Key properties include the semigroup property e^{A(t+s)} = e^{At} e^{As} for all t, s, and the inverse (e^{At})^{-1} = e^{-At}. For matrices that are not diagonalizable, alternative computational methods such as the Putzer algorithm provide a practical way to evaluate the matrix exponential.
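Both computational routes can be compared directly; a minimal sketch, assuming SciPy (whose expm routine implements a scaling-and-squaring Padé method) and an arbitrarily chosen diagonalizable matrix:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2, diagonalizable
x0 = np.array([1.0, 0.0])
t = 1.5

# Direct evaluation of the propagator e^{At}.
x_t = expm(A * t) @ x0

# Equivalent route for diagonalizable A: e^{At} = P e^{Dt} P^{-1}.
lam, P = np.linalg.eig(A)
x_diag = (P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)) @ x0

assert np.allclose(x_t, x_diag)
```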

Putzer Algorithm

The Putzer algorithm, introduced by E. J. Putzer in 1966, offers a practical method for computing the matrix exponential e^{At} in the solution of linear systems of differential equations \dot{\mathbf{x}} = A\mathbf{x}, where A is an n \times n constant matrix, without needing to diagonalize A or compute its Jordan canonical form. This approach leverages the eigenvalues \lambda_1, \dots, \lambda_n of A (counted with algebraic multiplicity) to express e^{At} = \sum_{i=1}^n p_i(t) P_i, where the matrices P_i are defined recursively as P_1 = I and P_{i+1} = (A - \lambda_i I) P_i for i = 1, \dots, n-1. The scalar functions p_i(t) are obtained by solving a chain of first-order ordinary differential equations: \dot{p}_1 = \lambda_1 p_1 with p_1(0) = 1, and \dot{p}_{i+1} = \lambda_{i+1} p_{i+1} + p_i with p_{i+1}(0) = 0 for i = 1, \dots, n-1, which correspond to the initial conditions p_j(0) = \delta_{1j}. These scalar ODEs can be solved explicitly and recursively as p_1(t) = e^{\lambda_1 t} and p_i(t) = e^{\lambda_i t} \int_0^t e^{-\lambda_i s} p_{i-1}(s) \, ds, though numerical integration may be used in complicated cases.

The derivation of the algorithm relies on the Cayley-Hamilton theorem, which states that A satisfies its own characteristic equation \det(A - \lambda I) = 0, implying that the n-th and higher powers of A can be reduced to linear combinations of lower powers. Extending this, every entry of e^{At} satisfies the scalar ODE whose characteristic polynomial equals that of A, allowing representation in the basis \{I, (A - \lambda_1 I), (A - \lambda_1 I)(A - \lambda_2 I), \dots \} spanned by the recursive P_i, with the coefficients p_i(t) evolving according to the companion form of the characteristic polynomial. This framework ensures the expression holds even when eigenvalues are repeated, by repeating the eigenvalue in the recursion according to its multiplicity.

A key advantage of the Putzer algorithm is its efficiency in handling non-diagonalizable matrices, as it requires only the eigenvalues (computable via standard methods such as the QR algorithm) and a sequence of n-1 recursive matrix products, avoiding the need for generalized eigenvectors or Jordan blocks. It is particularly useful in theoretical analyses or when n is small, providing a closed-form representation that illuminates the structure of solutions without a full spectral decomposition. The algorithm assumes the eigenvalues are known; while it extends naturally to multiple eigenvalues by appropriate ordering in the chain, ill-conditioned eigenvalue problems or nearly repeated roots can amplify numerical errors in the recursion. For large n, more scalable methods like Padé approximation may be preferred over this eigenvalue-dependent approach.
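A sketch of the algorithm in Python (NumPy assumed; the function name putzer_expm is our own). For brevity the triangular scalar chain for the p_i is integrated with a fixed-step RK4 loop rather than via the closed-form integrals; this is an illustrative choice, not part of Putzer's presentation:

```python
import numpy as np
from scipy.linalg import expm  # used only to check the result

def putzer_expm(A, eigvals, t, steps=2000):
    """Evaluate e^{At} via Putzer's algorithm.

    eigvals: eigenvalues of A in any fixed order, repeated by multiplicity.
    """
    n = A.shape[0]
    # Recursive matrices: P_1 = I, P_{i+1} = (A - lambda_i I) P_i.
    P = [np.eye(n)]
    for i in range(n - 1):
        P.append((A - eigvals[i] * np.eye(n)) @ P[-1])

    # Scalar chain p_1' = lam_1 p_1, p_{i+1}' = lam_{i+1} p_{i+1} + p_i,
    # written as p' = L p with L lower bidiagonal and p(0) = e_1.
    L = np.diag(np.asarray(eigvals, dtype=complex)) + np.diag(np.ones(n - 1), -1)
    p = np.zeros(n, dtype=complex)
    p[0] = 1.0
    h = t / steps
    for _ in range(steps):  # classical fixed-step RK4 on the linear chain
        k1 = L @ p
        k2 = L @ (p + 0.5 * h * k1)
        k3 = L @ (p + 0.5 * h * k2)
        k4 = L @ (p + h * k3)
        p += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    return sum(p[i] * P[i] for i in range(n))

# Non-diagonalizable check: a 2x2 Jordan block with repeated eigenvalue 2,
# for which e^{At} = e^{2t} [[1, t], [0, 1]].
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
E = putzer_expm(A, [2.0, 2.0], 0.7)
assert np.allclose(E, expm(0.7 * A))
```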

Stability and Equilibrium

Steady-State Analysis

In the context of matrix differential equations, the steady state, also known as the equilibrium point, refers to a constant solution \mathbf{x}_{ss} at which the time derivative vanishes, satisfying \dot{\mathbf{x}} = 0. For a homogeneous system \dot{\mathbf{x}} = A \mathbf{x}, this condition simplifies to A \mathbf{x}_{ss} = 0, yielding the trivial steady state \mathbf{x}_{ss} = 0 unless A has a nontrivial null space. For a non-homogeneous system \dot{\mathbf{x}} = A \mathbf{x} + \mathbf{b} with constant forcing term \mathbf{b}, the steady-state equation becomes A \mathbf{x}_{ss} + \mathbf{b} = 0.

To solve for the steady state in the non-homogeneous case, rearrange to \mathbf{x}_{ss} = -A^{-1} \mathbf{b}, provided that A is invertible (i.e., \det(A) \neq 0) and \mathbf{b} is a constant vector. This explicit formula arises from the algebraic resolution of the equilibrium condition and represents the particular solution that balances the linear dynamics against the external input. In applications, such as electrical circuits or mechanical systems under constant loads, this corresponds to the long-term operating condition where transients have died out.

The convergence of solutions to the steady state describes the long-time behavior of the system, where \mathbf{x}(t) \to \mathbf{x}_{ss} as t \to \infty, contingent on the stability of the homogeneous component. The general solution decomposes into the homogeneous part plus a particular solution, and for convergence the homogeneous terms must vanish over time, which occurs when all eigenvalues of A have negative real parts. This asymptotic approach to equilibrium is fundamental in modeling persistent states under constant influences.

Special cases arise when A is singular (\det(A) = 0), meaning zero is an eigenvalue, in which case the steady state may not exist uniquely or at all for the non-homogeneous system. If \mathbf{b} lies outside the column space of A, no steady state exists; otherwise, the solutions form an affine set offset by the null space of A, leading to non-uniqueness. Such scenarios are common in conservative systems, like closed networks without external inputs, where conserved quantities allow multiple equilibria.

In physical models, steady states often represent balances between competing processes; for instance, in chemical reaction networks governed by linear rate laws, \mathbf{x}_{ss} equates the production and consumption rates of concentrations, achieving a dynamic equilibrium. This interpretation extends to ecological or pharmacokinetic models, where constant steady states signify sustainable or therapeutic balances.
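In practice the steady state is obtained from a single linear solve rather than an explicit inverse; a minimal NumPy sketch with an arbitrarily chosen stable A and constant b:

```python
import numpy as np

# Arbitrary example: x' = A x + b with constant forcing b.
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
b = np.array([1.0, 0.0])

# Steady state from A x_ss = -b; solving is preferable to forming A^{-1}.
x_ss = np.linalg.solve(A, -b)

# Here tr(A) < 0 and det(A) > 0, so both eigenvalues have negative real
# parts and every trajectory converges to x_ss as t -> infinity.
print(x_ss)  # [0.6, 0.2]
```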

Stability in Linear Systems

In linear systems of matrix differential equations, stability refers to the long-term behavior of solutions relative to an equilibrium point, typically the origin for homogeneous systems or a steady-state solution for non-homogeneous ones. Asymptotic stability occurs when all solutions \mathbf{x}(t) converge to the equilibrium (e.g., \mathbf{x}(t) \to 0 or \mathbf{x}_{ss} as t \to \infty), neutral stability when solutions remain bounded without converging, and instability when solutions diverge. These definitions extend scalar stability concepts to vector-valued systems, where perturbations around the equilibrium must not grow unbounded.

For homogeneous systems \dot{\mathbf{x}} = A \mathbf{x}, stability is determined by the eigenvalues \lambda_i of the matrix A. The system is asymptotically stable if all eigenvalues satisfy \operatorname{Re}(\lambda_i) < 0, ensuring exponential decay of solutions via the matrix exponential \mathbf{x}(t) = e^{At} \mathbf{x}(0). If all \operatorname{Re}(\lambda_i) \leq 0 and every eigenvalue with \operatorname{Re}(\lambda_i) = 0 is semisimple (only 1×1 Jordan blocks), the system is neutrally stable, with solutions bounded but possibly oscillating. Instability arises if any \operatorname{Re}(\lambda_i) > 0, leading to unbounded growth.

The Jordan canonical form of A provides deeper insight into stability, decomposing the system into blocks corresponding to each eigenvalue. For eigenvalues with \operatorname{Re}(\lambda_i) < 0, solutions decay; for \operatorname{Re}(\lambda_i) > 0, they grow. Purely imaginary eigenvalues (\operatorname{Re}(\lambda_i) = 0) produce persistent oscillations if the corresponding Jordan blocks are 1×1, resulting in neutral stability with bounded elliptical or circular trajectories. However, non-trivial Jordan blocks for such eigenvalues introduce polynomially growing terms (e.g., t e^{i \omega t}), rendering the system unstable. Non-zero real parts dominate the behavior, with negative parts causing decay and positive parts causing divergence, modulated by the imaginary components into oscillatory decay or growth.

In non-homogeneous systems \dot{\mathbf{x}} = A \mathbf{x} + \mathbf{f}(t), the general solution is \mathbf{x}(t) = e^{At} \mathbf{x}(0) + \int_0^t e^{A(t-\tau)} \mathbf{f}(\tau) \, d\tau. Stability of the homogeneous part \dot{\mathbf{x}} = A \mathbf{x} governs convergence: if it is asymptotically stable, solutions approach a particular solution bounded by the input \mathbf{f}(t); otherwise, the homogeneous terms may cause divergence regardless of \mathbf{f}(t).

For two-state-variable systems (n=2), stability can be assessed using the trace \tau = \operatorname{tr}(A) and determinant D = \det(A) of the 2×2 matrix A, since the eigenvalues are \lambda_{1,2} = \frac{\tau \pm \sqrt{\tau^2 - 4D}}{2}. The origin is a sink (an asymptotically stable node or spiral) if \tau < 0 and D > 0, ensuring both eigenvalues have negative real parts. A center (neutrally stable oscillations) occurs if \tau = 0 and D > 0, yielding purely imaginary eigenvalues. Unstable behavior arises if \tau > 0 with D > 0 (an unstable node or spiral) or if D < 0 (a saddle with one positive and one negative eigenvalue, regardless of \tau). Degenerate cases with D = 0 involve a zero eigenvalue, precluding asymptotic stability.

For higher-dimensional systems (n > 2), the Routh-Hurwitz criterion extends the eigenvalue analysis without computing roots explicitly, applying to the characteristic polynomial \det(sI - A) = s^n + a_{n-1} s^{n-1} + \cdots + a_0. The system is asymptotically stable if all coefficients a_i > 0 and the Hurwitz determinants satisfy specific positivity conditions, equivalent to all \operatorname{Re}(\lambda_i) < 0.
This algebraic test is particularly useful for parameter-dependent matrices, confirming no right-half-plane roots.
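The trace-determinant test for n = 2 is easy to mechanize; a minimal sketch (Python/NumPy assumed, the function name classify_2x2 is our own, and the exact-zero comparisons are idealized for exposition):

```python
import numpy as np

def classify_2x2(A):
    """Classify the origin of x' = A x by trace and determinant."""
    tau, D = np.trace(A), np.linalg.det(A)
    disc = tau**2 - 4.0 * D          # discriminant of the characteristic quadratic
    if D < 0:
        return "saddle (unstable)"
    if D == 0:
        return "degenerate (zero eigenvalue)"
    if tau < 0:
        return "stable spiral" if disc < 0 else "stable node"
    if tau > 0:
        return "unstable spiral" if disc < 0 else "unstable node"
    return "center (neutrally stable)"

print(classify_2x2(np.array([[0.0, 1.0], [-2.0, -3.0]])))  # stable node
print(classify_2x2(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # center
```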

Applications and Examples

Two-Variable State System

The two-variable state system represents the simplest non-trivial case of a matrix differential equation, modeling the evolution of a two-dimensional state vector \mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} governed by the homogeneous linear equation \dot{\mathbf{x}} = A \mathbf{x}, where A is a constant 2×2 matrix. This formulation arises in applications such as linearized predator-prey models, where x_1 and x_2 might represent population densities, or coupled harmonic oscillators describing mechanical systems with mutual interactions. The system's behavior is determined by the eigenvalues of A, which dictate the qualitative dynamics in the phase plane without requiring numerical simulation.

Eigenvalue analysis begins with the characteristic equation \det(A - \lambda I) = 0, which simplifies to the quadratic \lambda^2 - \operatorname{tr}(A) \lambda + \det(A) = 0, where \operatorname{tr}(A) is the trace and \det(A) is the determinant. The roots \lambda_1, \lambda_2 classify the equilibrium at the origin: if the discriminant D = [\operatorname{tr}(A)]^2 - 4 \det(A) > 0 with both eigenvalues real and of the same sign, the system exhibits a stable or unstable node, where trajectories converge to or diverge from the origin along straight lines or curved paths aligned with the eigenvectors. For D > 0 with eigenvalues of opposite signs, a saddle point emerges, featuring hyperbolic trajectories that approach along one eigendirection and depart along the other. When D < 0, complex conjugate eigenvalues produce spiral sinks or sources if the real part is negative or positive, respectively, resulting in oscillatory trajectories winding toward or away from the origin; purely imaginary eigenvalues (\operatorname{tr}(A) = 0 with \det(A) > 0) yield closed orbits around a center. Degenerate cases, such as D = 0, lead to improper nodes with repeated eigenvalues, where the behavior depends on the geometric multiplicity.

In the phase plane, trajectories illustrate the system's qualitative dynamics: for a stable node, solution curves approach the origin asymptotically, often tangent to the eigenvector corresponding to the slower-decaying eigenvalue, while saddle points show separatrices dividing the plane into regions of inflow and outflow. Nullclines, defined as the lines where \dot{x}_1 = 0 or \dot{x}_2 = 0, coincide with the coordinate axes only if the off-diagonal elements of A vanish; in general, they are straight lines through the origin that guide the flow. This visualization reveals global flow patterns, such as convergence in spirals for damped oscillations or divergence in unstable nodes, providing intuition for long-term behavior without explicit integration.

If A has distinct real eigenvalues \lambda_1 \neq \lambda_2, the system is diagonalizable as A = P D P^{-1}, where D = \operatorname{diag}(\lambda_1, \lambda_2) and P collects the eigenvectors \mathbf{v}_1, \mathbf{v}_2. Substituting \mathbf{y} = P^{-1} \mathbf{x} transforms the equation to \dot{\mathbf{y}} = D \mathbf{y}, yielding the explicit solution \mathbf{x}(t) = c_1 e^{\lambda_1 t} \mathbf{v}_1 + c_2 e^{\lambda_2 t} \mathbf{v}_2, where the constants c_1, c_2 are fixed by the initial conditions. This linear combination traces straight-line solutions along the eigenvectors in the eigenbasis, confirming the phase-plane classifications, as in the sketch below.

For the non-homogeneous extension \dot{\mathbf{x}} = A \mathbf{x} + \mathbf{b} with constant \mathbf{b} \neq \mathbf{0} and A invertible, the equilibrium shifts to \mathbf{x}_{eq} = -A^{-1} \mathbf{b}, a fixed point where \dot{\mathbf{x}} = \mathbf{0}.
The general solution becomes \mathbf{x}(t) = \mathbf{x}_h(t) + \mathbf{x}_p(t), with \mathbf{x}_h(t) the homogeneous part and particular solution \mathbf{x}_p(t) = -A^{-1} \mathbf{b}, effectively translating the phase portrait by \mathbf{x}_{eq} while preserving the original qualitative dynamics around the new equilibrium. This shift models forced systems with constant external inputs, where stability mirrors that of the homogeneous case.
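The diagonalization route can be carried out concretely; a short sketch (NumPy assumed, with an arbitrary matrix whose eigenvalues are 3 and -2, hence a saddle):

```python
import numpy as np

# Arbitrary two-variable system x' = A x with distinct real eigenvalues.
A = np.array([[1.0, 2.0],
              [3.0, 0.0]])
x0 = np.array([1.0, 1.0])

lam, V = np.linalg.eig(A)      # columns of V are the eigenvectors v1, v2
c = np.linalg.solve(V, x0)     # constants from x(0) = c1 v1 + c2 v2

def x(t):
    # Explicit solution x(t) = c1 e^{lam1 t} v1 + c2 e^{lam2 t} v2.
    return V @ (c * np.exp(lam * t))

# The eigenvalues are 3 and -2 (opposite signs), so the origin is a saddle:
# generic trajectories eventually escape along the eigendirection of 3.
print(np.sort(lam), x(1.0))
```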

Step-by-Step Solved Example

Consider the non-homogeneous matrix differential equation \dot{\mathbf{x}} = A \mathbf{x} + \mathbf{g}(t), where A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{pmatrix}, \mathbf{g}(t) = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, and the initial condition is \mathbf{x}(0) = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}. This system models scenarios like multi-compartment chemical reactions or electrical networks with constant input.

To solve, first determine the homogeneous solution \mathbf{x}_h(t) by finding the eigenvalues and eigenvectors of A. The characteristic equation is \det(\lambda I - A) = \lambda^3 + 6\lambda^2 + 11\lambda + 6 = 0, with roots \lambda_1 = -1, \lambda_2 = -2, \lambda_3 = -3. The corresponding eigenvectors are \mathbf{v}_1 = \begin{pmatrix} -1 \\ 1 \\ -1 \end{pmatrix}, \mathbf{v}_2 = \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}, and \mathbf{v}_3 = \begin{pmatrix} 1 \\ -3 \\ 9 \end{pmatrix}. Thus, the homogeneous solution is \mathbf{x}_h(t) = c_1 \mathbf{v}_1 e^{-t} + c_2 \mathbf{v}_2 e^{-2t} + c_3 \mathbf{v}_3 e^{-3t}.

Next, find a particular solution \mathbf{x}_p(t). Since \mathbf{g}(t) is constant, assume a constant vector \mathbf{x}_p = \mathbf{u} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}. Substituting yields A \mathbf{u} + \mathbf{g} = \mathbf{0}, or A \mathbf{u} = -\mathbf{g}. Solving this system gives b = 0, c = 0, and -6a = -1, so a = \frac{1}{6}. Hence, \mathbf{x}_p = \begin{pmatrix} \frac{1}{6} \\ 0 \\ 0 \end{pmatrix}.

The general solution is \mathbf{x}(t) = \mathbf{x}_h(t) + \mathbf{x}_p. Applying the initial condition, \mathbf{x}(0) = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + c_3 \mathbf{v}_3 + \mathbf{x}_p = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, which simplifies to c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + c_3 \mathbf{v}_3 = \begin{pmatrix} \frac{5}{6} \\ 0 \\ 0 \end{pmatrix}. Solving this linear system yields c_1 = -\frac{5}{2}, c_2 = -\frac{5}{2}, c_3 = \frac{5}{6}. Therefore, \mathbf{x}(t) = -\frac{5}{2} \begin{pmatrix} -1 \\ 1 \\ -1 \end{pmatrix} e^{-t} - \frac{5}{2} \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix} e^{-2t} + \frac{5}{6} \begin{pmatrix} 1 \\ -3 \\ 9 \end{pmatrix} e^{-3t} + \begin{pmatrix} \frac{1}{6} \\ 0 \\ 0 \end{pmatrix}. Component-wise: x_1(t) = \frac{5}{2} e^{-t} - \frac{5}{2} e^{-2t} + \frac{5}{6} e^{-3t} + \frac{1}{6}, \quad x_2(t) = -\frac{5}{2} e^{-t} + 5 e^{-2t} - \frac{5}{2} e^{-3t}, \quad x_3(t) = \frac{5}{2} e^{-t} - 10 e^{-2t} + \frac{15}{2} e^{-3t}.

To verify, differentiate the solution and evaluate at t = 0: \dot{x}_1(0) = -\frac{5}{2} + 5 - \frac{5}{2} = 0, \dot{x}_2(0) = \frac{5}{2} - 10 + \frac{15}{2} = 0, and \dot{x}_3(0) = -\frac{5}{2} + 20 - \frac{45}{2} = -5, so \dot{\mathbf{x}}(0) = \begin{pmatrix} 0 \\ 0 \\ -5 \end{pmatrix}. Meanwhile, A \mathbf{x}(0) + \mathbf{g}(0) = A \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ -6 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ -5 \end{pmatrix}, confirming the solution satisfies the equation at t = 0; the initial condition holds by construction. Because the eigenvalues -1, -2, -3 are all negative, the system is asymptotically stable, and \mathbf{x}(t) approaches the steady state \mathbf{x}_p = \begin{pmatrix} \frac{1}{6} \\ 0 \\ 0 \end{pmatrix} as t \to \infty.
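The closed-form answer can also be checked numerically; a sketch assuming SciPy's expm, using the constant-forcing identity \mathbf{x}(t) = e^{At}(\mathbf{x}(0) - \mathbf{x}_p) + \mathbf{x}_p that follows from the equilibrium shift described earlier:

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of the worked example above.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
g = np.array([0.0, 0.0, 1.0])
x0 = np.array([1.0, 0.0, 0.0])
xp = np.linalg.solve(A, -g)        # particular solution (1/6, 0, 0)

def x_closed(t):
    # Closed-form solution derived in the text.
    return np.array([
        2.5 * np.exp(-t) - 2.5 * np.exp(-2 * t) + (5 / 6) * np.exp(-3 * t) + 1 / 6,
        -2.5 * np.exp(-t) + 5.0 * np.exp(-2 * t) - 2.5 * np.exp(-3 * t),
        2.5 * np.exp(-t) - 10.0 * np.exp(-2 * t) + 7.5 * np.exp(-3 * t),
    ])

def x_expm(t):
    # x(t) = e^{At}(x0 - xp) + xp for constant forcing g.
    return expm(A * t) @ (x0 - xp) + xp

for t in (0.0, 0.5, 2.0):
    assert np.allclose(x_closed(t), x_expm(t))
```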
