
Jordan matrix

In linear algebra, a Jordan matrix, also known as a matrix in Jordan normal form and named after the mathematician Camille Jordan, who introduced the form in 1870, is a block diagonal matrix composed of Jordan blocks, where each Jordan block is a square matrix featuring a single eigenvalue along its main diagonal and ones on the superdiagonal, with all other entries being zero. This structure provides a canonical representation for any square matrix over an algebraically closed field, such as the complex numbers, revealing the matrix's eigenvalue multiplicities and the dimensions of its generalized eigenspaces. The Jordan canonical form theorem states that every square matrix A is similar to a Jordan matrix J that is unique up to permutation of the blocks, meaning there exists an invertible matrix P such that A = P J P^{-1}, where the columns of P consist of eigenvectors and generalized eigenvectors forming Jordan chains. For an eigenvalue \lambda, the number of Jordan blocks corresponds to the geometric multiplicity (the dimension of the eigenspace), while the sizes of the blocks determine the algebraic multiplicity and the nilpotency index of the nilpotent part. A matrix is diagonalizable if and only if all its Jordan blocks are of size 1, reducing the form to a diagonal matrix of eigenvalues. Jordan matrices are fundamental for analyzing linear transformations, computing matrix powers and functions (e.g., via the binomial expansion on blocks), and solving systems of linear ordinary differential equations, as the form simplifies the matrix exponential and reveals stability properties. The minimal polynomial of a matrix is the least common multiple of the minimal polynomials of its Jordan blocks, so its factors are (\lambda - \mu)^{k_\mu}, where k_\mu is the size of the largest block for eigenvalue \mu. Over fields that are not algebraically closed, the rational canonical form serves as an alternative, but over \mathbb{C} the Jordan form is always achievable.

Background Concepts

Eigenvalues and Diagonalizability

Eigenvalues of a square matrix A are the scalars \lambda that satisfy the characteristic equation \det(A - \lambda I) = 0, where I is the identity matrix of the same dimension; these values are the roots of the characteristic polynomial p_A(\lambda) = \det(A - \lambda I). For each eigenvalue \lambda, a corresponding eigenvector is a non-zero vector v such that Av = \lambda v. The set of all such eigenvectors for a fixed \lambda, together with the zero vector, forms the eigenspace, which is the null space of A - \lambda I; the dimension of this eigenspace defines the geometric multiplicity of \lambda. The algebraic multiplicity of an eigenvalue \lambda is its multiplicity as a root of the characteristic polynomial, counting repetitions in the factorization. For each eigenvalue, the geometric multiplicity is always less than or equal to the algebraic multiplicity. The concept of eigenvalues traces back to Augustin-Louis Cauchy, who introduced the characteristic equation in the context of systems of linear equations and quadratic forms around 1829. A square matrix A is diagonalizable if there exists an invertible matrix P such that P^{-1} A P = D, where D is a diagonal matrix with the eigenvalues of A on its main diagonal. By the diagonalizability theorem, A is diagonalizable if and only if, for every eigenvalue \lambda, its algebraic multiplicity equals its geometric multiplicity; equivalently, the minimal polynomial of A factors into distinct linear terms over the base field. This condition ensures the existence of a basis of eigenvectors that spans the entire space, allowing the matrix to be represented in diagonal form. The related canonical-form theory was developed by Camille Jordan in 1870, building on earlier work by mathematicians such as Karl Weierstrass. For example, consider the 2×2 rotation matrix by an angle \theta that is not a multiple of \pi: A = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}. The characteristic equation is \lambda^2 - 2\cos\theta \, \lambda + 1 = 0, with roots \lambda = e^{i\theta} and \lambda = e^{-i\theta}, which are distinct complex conjugates. Each eigenvalue has algebraic and geometric multiplicity one, so A is diagonalizable over \mathbb{C}: an invertible matrix P exists such that P^{-1} A P = D = \operatorname{diag}(e^{i\theta}, e^{-i\theta}).
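
As a small numerical illustration (not part of the original text), the following Python sketch, assuming NumPy is available, diagonalizes the rotation matrix over \mathbb{C} and checks A = P D P^{-1}.

```python
# Sketch (not from the original text): diagonalizing a 2x2 rotation matrix over C with NumPy.
import numpy as np

theta = 0.7  # any angle that is not a multiple of pi
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Eigenvalues come out as the complex conjugate pair e^{i theta}, e^{-i theta}.
eigvals, P = np.linalg.eig(A)          # columns of P are eigenvectors
D = np.diag(eigvals)

# Check the diagonalization A = P D P^{-1} numerically.
assert np.allclose(A, P @ D @ np.linalg.inv(P))
print(eigvals)                         # approx cos(theta) +/- i sin(theta)
```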

Limitations of Diagonalization

A matrix is defective if, for at least one eigenvalue, its geometric multiplicity is strictly less than its algebraic multiplicity. This discrepancy means that the eigenspace for that eigenvalue does not span the full generalized eigenspace, preventing the matrix from having a complete basis of eigenvectors. A classic example is the 2×2 matrix A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, which has eigenvalue 0 with algebraic multiplicity 2 but geometric multiplicity 1. Here, the characteristic polynomial is \det(A - \lambda I) = \lambda^2, confirming the algebraic multiplicity, while the eigenspace consists of vectors of the form \begin{pmatrix} a \\ 0 \end{pmatrix}, yielding geometric multiplicity 1. Consequently, no basis of eigenvectors exists, rendering A non-diagonalizable. Diagonalization also fails whenever the minimal polynomial of the matrix has repeated factors: a matrix over a field F is diagonalizable if and only if its minimal polynomial factors into distinct linear factors over F. Repeated factors indicate that higher-order generalized eigenvectors are needed beyond standard eigenvectors. Over algebraically closed fields such as the complex numbers, every square matrix has at least one eigenvalue, as the characteristic polynomial splits completely. However, over the real numbers, a matrix may lack real eigenvalues, necessitating an extension to the complex numbers for full analysis. A matrix is non-diagonalizable precisely when it is not similar to a diagonal matrix; in such cases, the Jordan canonical form serves as the next best canonical form, capturing the structure via Jordan blocks.
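
A brief computational check of this example (a sketch, not the source's own code, assuming SymPy is available) compares the algebraic and geometric multiplicities of the eigenvalue 0.

```python
# Sketch (assumption: SymPy available): algebraic vs geometric multiplicity for a defective matrix.
from sympy import Matrix, symbols

lam = symbols('lambda')
A = Matrix([[0, 1],
            [0, 0]])

char_poly = (A - lam * Matrix.eye(2)).det()   # lambda**2, so algebraic multiplicity of 0 is 2
eigenspace = A.nullspace()                    # eigenspace for 0 is the null space of A - 0*I = A

print(char_poly)              # lambda**2
print(len(eigenspace))        # 1 -> geometric multiplicity 1 < algebraic multiplicity 2
print(A.is_diagonalizable())  # False
```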

Definition and Structure

Jordan Blocks

A Jordan block J_k(\lambda) of size k associated with an eigenvalue \lambda is a k \times k matrix with \lambda along the main diagonal and 1's strictly on the superdiagonal, with all other entries equal to zero. This structure makes it the fundamental building block for representing matrices that are not diagonalizable. The matrix admits the explicit decomposition J_k(\lambda) = \lambda I_k + N_k, where I_k denotes the k \times k identity matrix and N_k is the strictly upper triangular matrix with 1's on the superdiagonal and zeros elsewhere. Here, N_k satisfies N_k^k = 0 but N_k^{k-1} \neq 0, establishing the nilpotency index of N_k as exactly k, the smallest positive integer m such that N_k^m = 0. The only eigenvalue of J_k(\lambda) is \lambda, possessing algebraic multiplicity k (the size of the matrix) and geometric multiplicity 1 (the dimension of the eigenspace). Powers of a Jordan block leverage the nilpotency of N_k through the binomial theorem, yielding J_k(\lambda)^m = \sum_{i=0}^{k-1} \binom{m}{i} \lambda^{m-i} N_k^i for any nonnegative integer m, since higher terms vanish as N_k^k = 0. This formula simplifies computations of functions of the matrix and highlights the "shift" behavior induced by the superdiagonal 1's. For illustration, consider the 3×3 Jordan block with \lambda = 2: J_3(2) = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}. This matrix embodies a chain of generalized eigenvectors: there is a basis v_1, v_2, v_3 with (J_3(2) - 2I)v_1 = 0, (J_3(2) - 2I)v_2 = v_1, and (J_3(2) - 2I)v_3 = v_2, so J_3(2) - 2I shifts each basis vector to the previous one in the chain, underscoring the one-dimensional eigenspace and the defective nature of the matrix.
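
The binomial formula for powers of a Jordan block can be verified numerically; the sketch below (not from the original text, assuming NumPy is available) compares the truncated binomial sum with a direct matrix power for J_3(2).

```python
# Sketch (assumption: NumPy available): powers of a Jordan block via the binomial formula.
import numpy as np
from math import comb

def jordan_block(lam, k):
    """k x k Jordan block: lam on the diagonal, 1's on the superdiagonal."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

k, lam, m = 3, 2.0, 5
J = jordan_block(lam, k)
N = np.diag(np.ones(k - 1), 1)          # nilpotent part, N**k = 0

# J^m = sum_{i=0}^{k-1} C(m, i) * lam^(m-i) * N^i  (the series stops because N^k = 0)
J_power = sum(comb(m, i) * lam**(m - i) * np.linalg.matrix_power(N, i)
              for i in range(k))

assert np.allclose(J_power, np.linalg.matrix_power(J, m))
print(J_power)
```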

Full Jordan Canonical Form

The Jordan canonical form theorem asserts that every square matrix A \in M_n(\mathbb{C}) is similar to a block-diagonal matrix J, known as the Jordan canonical form of A, where J consists of Jordan blocks along the diagonal, and this form is unique up to the ordering of the blocks. This representation provides a canonical structure for matrices that are not diagonalizable, capturing the deviation from diagonality through the sizes of the blocks and the off-diagonal 1's within them. To construct the Jordan canonical form, the vector space is first decomposed into a direct sum of generalized eigenspaces corresponding to each eigenvalue \lambda, where the generalized eigenspace for \lambda is the kernel of (A - \lambda I)^n. Within each generalized eigenspace, a basis is formed by identifying Jordan chains of generalized eigenvectors, obtained through successive iterations of the kernel of (A - \lambda I)^k for increasing k, starting from the eigenspace (where k = 1). These chains determine the structure of the Jordan blocks for that eigenvalue, with the length of each chain corresponding to the size of a block. The uniqueness of the Jordan canonical form stems from the fact that, for each eigenvalue \lambda, the sizes of the associated Jordan blocks are uniquely specified by the dimensions of the kernels \dim \ker((A - \lambda I)^m) for m = 1, 2, \dots, which reveal the number and lengths of the chains via differences in these dimensions. For instance, consider a 4 \times 4 matrix with eigenvalues \lambda = 1 (algebraic multiplicity 3, consisting of one block of size 2 and one of size 1) and \lambda = 2 (multiplicity 1); its Jordan canonical form is the block-diagonal matrix J = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 \end{pmatrix}, where the first 2 \times 2 block corresponds to the chain for \lambda = 1, the 1 \times 1 block is the remaining eigenvector for \lambda = 1, and the final block is for \lambda = 2. For matrices over the real numbers \mathbb{R}, the Jordan canonical form is adapted to the real Jordan form, where non-real eigenvalues occur in complex conjugate pairs \lambda, \overline{\lambda}, and the corresponding blocks are replaced by real 2 \times 2 blocks of the form \begin{pmatrix} a & -b \\ b & a \end{pmatrix} (for \lambda = a + bi), with 2 \times 2 identity blocks on the block superdiagonal accounting for generalized eigenvectors, ensuring the entire form remains real. A standard algorithmic approach to obtain the Jordan structure begins by computing the characteristic polynomial \det(\lambda I - A) to identify the eigenvalues and their algebraic multiplicities. For each eigenvalue \lambda, the ascending chain of kernel dimensions \dim \ker((A - \lambda I)^k) for k = 1, 2, \dots is computed until it stabilizes; the differences \dim \ker((A - \lambda I)^{k}) - \dim \ker((A - \lambda I)^{k-1}) give the number of Jordan blocks of size at least k, thereby determining the complete block partition.
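
The kernel-dimension procedure described above can be sketched in code; the following example (illustrative only, assuming NumPy and SymPy are available) recovers the block sizes of the 4 \times 4 example and cross-checks them against SymPy's symbolic jordan_form.

```python
# Sketch (assumption: NumPy and SymPy available): Jordan block sizes from kernel dimensions.
import numpy as np
from sympy import Matrix

A = np.array([[1., 1., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 2.]])
n = A.shape[0]

def block_sizes(A, lam, n):
    """Map {block size: count} for eigenvalue lam, from kernel-dimension differences."""
    dims = [0]
    for k in range(1, n + 1):
        M = np.linalg.matrix_power(A - lam * np.eye(n), k)
        dims.append(n - np.linalg.matrix_rank(M))
        if dims[-1] == dims[-2]:          # kernel dimensions have stabilized
            break
    at_least = [dims[k] - dims[k - 1] for k in range(1, len(dims))]  # blocks of size >= k
    exact = {}
    for k in range(1, len(at_least) + 1):
        count = at_least[k - 1] - (at_least[k] if k < len(at_least) else 0)
        if count:
            exact[k] = count
    return exact

print(block_sizes(A, 1.0, n))    # {1: 1, 2: 1}: one block of size 1 and one of size 2
print(block_sizes(A, 2.0, n))    # {1: 1}
print(Matrix(A.astype(int).tolist()).jordan_form()[1])   # symbolic J for comparison
```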

Algebraic Properties

Similarity Invariants

Two matrices A and B are similar if there exists an invertible matrix P such that B = P^{-1} A P. Similarity preserves several fundamental properties, including eigenvalues, determinant, trace, and rank. The Jordan canonical form provides a complete set of similarity invariants through the sizes of its Jordan blocks for each eigenvalue. Specifically, the number and sizes of these blocks are encoded by the Segre characteristic, which lists the sizes of the blocks in non-increasing order for each eigenvalue, or equivalently by the Weyr characteristic, a non-increasing sequence w_k(\lambda) where w_k(\lambda) counts the number of blocks of size at least k for eigenvalue \lambda. The Segre and Weyr characteristics are conjugate partitions of each other, and both fully determine the Jordan form up to permutation of blocks. Over fields that are not algebraically closed, where the characteristic polynomial may not split into linear factors, the Jordan form may not exist, and the rational canonical form serves as an alternative invariant, based on invariant factors rather than elementary divisors. However, when the minimal polynomial splits completely (as over algebraically closed fields like the complex numbers), the Jordan form is preferred for its direct revelation of eigenvalue multiplicities and generalized eigenspace structures. For example, consider two 4×4 matrices with the same single eigenvalue: one given directly with Jordan blocks of sizes 3 and 1, and another that appears unrelated but turns out to have the same block sizes after reduction to Jordan form; their common Segre characteristic (3, 1) ensures they are similar, despite their differing initial forms. A fundamental theorem states that two matrices are similar if and only if they share the same eigenvalues and identical Segre (or Weyr) characteristics for each eigenvalue, thereby classifying the similarity class uniquely by the Jordan block sizes.
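
Since the Segre and Weyr characteristics are conjugate partitions, each can be computed from the other; the short sketch below (illustrative only, with hypothetical helper names not taken from the original text) performs the conversion for the Segre characteristic (3, 1).

```python
# Sketch (not from the original text): converting between conjugate partitions.
def segre_to_weyr(segre):
    """segre: block sizes in non-increasing order, e.g. (3, 1).
    Returns w where w[k-1] = number of blocks of size >= k."""
    largest = max(segre)
    return [sum(1 for s in segre if s >= k) for k in range(1, largest + 1)]

def weyr_to_segre(weyr):
    """Inverse conversion: the conjugate partition of the Weyr characteristic."""
    largest = max(weyr)
    return [sum(1 for w in weyr if w >= j) for j in range(1, largest + 1)]

segre = (3, 1)                      # one block of size 3, one of size 1
weyr = segre_to_weyr(segre)
print(weyr)                         # [2, 1, 1]
print(weyr_to_segre(weyr))          # [3, 1]  (round trip)
```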

Polynomials Associated with Jordan Form

The characteristic polynomial of a matrix A coincides with that of its Jordan canonical form J, as similar matrices share the same characteristic polynomial. Since J is a block diagonal matrix consisting of Jordan blocks, the characteristic polynomial \chi_A(t) = \det(tI - A) factors as \prod_{\lambda} (t - \lambda)^{m_{\lambda}}, where the product is over the distinct eigenvalues \lambda of A, and m_{\lambda} denotes the algebraic multiplicity of \lambda, equal to the dimension of the generalized eigenspace for \lambda or, equivalently, the sum of the sizes of all Jordan blocks corresponding to \lambda. For an individual Jordan block of size k associated with eigenvalue \lambda, the characteristic polynomial is (t - \lambda)^k. The minimal polynomial m_A(t) of A is the monic polynomial of least degree such that m_A(A) = 0, and it too is invariant under similarity, matching that of J. For the Jordan form, m_A(t) = \prod_{\lambda} (t - \lambda)^{k_{\lambda}}, where k_{\lambda} is the size of the largest Jordan block for eigenvalue \lambda, known as the index of \lambda. Thus, the exponent k_{\lambda} in the minimal polynomial directly reflects the longest chain of generalized eigenvectors for \lambda. For a single Jordan block of size k with eigenvalue \lambda, both the minimal and characteristic polynomials are (t - \lambda)^k. By the Cayley-Hamilton theorem, the characteristic polynomial annihilates A, so the minimal polynomial divides the characteristic polynomial in the ring of polynomials; specifically, k_{\lambda} \leq m_{\lambda} for each \lambda, with equality for every eigenvalue if and only if there is exactly one Jordan block per eigenvalue. Equality of the minimal and characteristic polynomials therefore holds precisely when each k_{\lambda} = m_{\lambda}, meaning the Jordan form has a single block of full multiplicity for each eigenvalue. The structure encoded by these polynomials relates to the primary decomposition theorem, which decomposes the underlying vector space as a direct sum of the generalized eigenspaces \bigoplus_{\lambda} \ker((A - \lambda I)^{k_{\lambda}}), where the exponents k_{\lambda} are exactly those from the minimal polynomial. On each generalized eigenspace, the restriction of A has minimal polynomial (t - \lambda)^{k_{\lambda}}, and the overall minimal polynomial is the least common multiple of these factors, yielding the product form since the linear factors (t - \lambda) are distinct. This decomposition highlights how the Jordan block structure determines the polynomial annihilators of A and its invariant subspace structure.
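
A small symbolic check (a sketch, not the source's code, assuming SymPy is available) reads the characteristic and minimal polynomials off a Jordan form with eigenvalue 1 in blocks of sizes 2 and 1.

```python
# Sketch (assumption: SymPy available): characteristic vs minimal polynomial from a Jordan form.
from sympy import Matrix, eye, symbols

t = symbols('t')
J = Matrix([[1, 1, 0],
            [0, 1, 0],
            [0, 0, 1]])                    # blocks of sizes 2 and 1 for eigenvalue 1

print(((t * eye(3) - J).det()).factor())   # (t - 1)**3: characteristic polynomial
print((J - eye(3))**1)                     # nonzero, so (t - 1) does not annihilate J
print((J - eye(3))**2)                     # zero matrix: minimal polynomial is (t - 1)**2
```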

Applications in Computation

Matrix Powers and Functions

One of the primary advantages of the Jordan canonical form is its utility in computing powers and analytic functions of a matrix. For a matrix A \in \mathbb{C}^{n \times n} with Jordan decomposition A = P J P^{-1}, where J is the Jordan form, any analytic function f applied to A satisfies f(A) = P f(J) P^{-1}. Since J is block-diagonal with Jordan blocks J_k(\lambda) along the diagonal, f(J) is also block-diagonal, with each block given by f(J_k(\lambda)). For a single Jordan block J_k(\lambda) of size k \times k, the entries of f(J_k(\lambda)) are determined by the Taylor series expansion of f around \lambda. Specifically, f(J_k(\lambda)) is an upper triangular Toeplitz matrix where the main diagonal consists of f(\lambda), the first superdiagonal has entries f'(\lambda), the second superdiagonal has f''(\lambda)/2!, and in general, the i-th superdiagonal (for i = 0, \dots, k-1) has entries f^{(i)}(\lambda)/i!, with zeros beyond the (k-1)-th superdiagonal. This structure arises because J_k(\lambda) = \lambda I_k + N_k, where N_k is the nilpotent shift matrix with N_k^k = 0, allowing f(J_k(\lambda)) to be expressed via the Taylor series of f about \lambda, truncated at degree k-1. A key application is the computation of the matrix exponential e^A, defined by the power series e^A = \sum_{m=0}^\infty A^m / m!. Using the Jordan form yields e^A = P e^J P^{-1}, where for each block, e^{J_k(\lambda)} = e^\lambda e^{N_k} = e^\lambda \sum_{i=0}^{k-1} N_k^i / i!, since higher powers of N_k vanish. This results in a closed-form expression that is particularly efficient for small block sizes. Matrix powers A^m follow similarly: A^m = P J^m P^{-1}, with J^m computed block-wise. For a Jordan block J_k(\lambda), J_k(\lambda)^m = \sum_{i=0}^{k-1} \binom{m}{i} \lambda^{m-i} N_k^i, again leveraging the nilpotency of N_k. This binomial expansion simplifies iterative power computations, especially for large m, by reducing to polynomial evaluations on the eigenvalues and nilpotent parts. As an illustrative example, consider a nilpotent Jordan block of size 3, J_3(0) = N_3 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}. The exponential is e^{N_3} = I_3 + N_3 + N_3^2 / 2! = \begin{pmatrix} 1 & 1 & 1/2 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}, a finite sum terminating at the second power due to N_3^3 = 0. This demonstrates how the Jordan structure yields exact, compact expressions without infinite series. In numerical linear algebra, the Jordan form facilitates these computations theoretically but is often impractical due to its sensitivity to perturbations in A, as small changes can alter the block structure dramatically. Consequently, the Schur canonical form, which is more stable and computable via unitary transformations, is frequently preferred for approximating matrix functions and powers in practice.
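
The terminating series for e^{N_3} can be checked directly; the sketch below (not from the original text, assuming NumPy and SciPy are available) compares the finite sum with scipy.linalg.expm.

```python
# Sketch (assumption: NumPy and SciPy available): exponential of a nilpotent Jordan block
# by the finite series, cross-checked against scipy.linalg.expm.
import numpy as np
from math import factorial
from scipy.linalg import expm

N = np.diag(np.ones(2), 1)          # 3x3 nilpotent shift matrix, N**3 = 0

# e^N = I + N + N^2/2!  (the series terminates because N^3 = 0)
series = sum(np.linalg.matrix_power(N, i) / factorial(i) for i in range(3))

print(series)
# [[1.  1.  0.5]
#  [0.  1.  1. ]
#  [0.  0.  1. ]]
assert np.allclose(series, expm(N))
```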

Numerical Computation of Jordan Form

Computing the Jordan canonical form numerically presents significant challenges due to its inherent ill-posedness. The form is not unique, as Jordan blocks corresponding to the same eigenvalue can be permuted arbitrarily, and even small perturbations in the matrix entries can drastically alter the block structure or merge and split blocks, leading to discontinuity of the Jordan form with respect to the original matrix. This sensitivity arises because the Jordan form relies on the precise identification of algebraic and geometric multiplicities, which are highly unstable under rounding errors in floating-point arithmetic. The standard approach to numerical computation begins with a Schur decomposition, which triangularizes the matrix via a unitary similarity transformation into an upper triangular form with eigenvalues on the diagonal, providing a starting point. From this form, the Jordan structure is determined by analyzing the ranks of certain submatrices or using deflation techniques to identify invariant subspaces corresponding to each eigenvalue cluster. For generalized eigenvalue problems, the QZ algorithm computes a generalized Schur decomposition, yielding block upper triangular forms for the pair of matrices. The process typically involves the following steps: first, compute the eigenvalues using the QR algorithm, which iteratively applies orthogonal transformations to produce the Hessenberg form and then the Schur form. Next, group the diagonal entries by eigenvalue proximity and compute the dimensions of the generalized eigenspaces via rank determinations of deflating subspaces, often using rank-revealing QR decompositions on the Schur blocks; the sizes of the Jordan blocks are then inferred from the Weyr characteristic, which counts the number of blocks of size at least k for each k. Finally, a change-of-basis matrix is constructed by solving for the Jordan chains within these subspaces. Early numerical methods were pioneered by J. H. Wilkinson in his 1965 monograph, which laid the groundwork for analyzing the conditioning of eigenvalue computations and highlighted the difficulties in obtaining the Jordan form. Modern implementations often employ staircase algorithms, originally developed in the 1970s by Ruhe and Kågström, which progressively deflate the matrix by computing staircase forms that reveal the Kronecker or Jordan structure through orthogonal transformations and rank computations. In software libraries, direct computation of the Jordan form is rarely provided due to instability; instead, routines such as LAPACK's DGEES (for the Schur decomposition) and DGGES (for the generalized Schur decomposition via the QZ algorithm) are used as proxies, allowing users to extract approximate Jordan information post hoc. MATLAB's jordan function is primarily intended for symbolic matrices and issues warnings for numerical inputs, recommending the schur function for practical computations. When the exact Jordan form proves too unstable, alternatives include the real Schur form, which uses orthogonal transformations to produce a block upper triangular matrix with 1×1 or 2×2 blocks on the diagonal for real matrices, or the Hessenberg form as an intermediate step in the QR process, both of which preserve eigenvalues while avoiding the full Jordan structure.
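
The sensitivity discussed here can be demonstrated with a tiny perturbation of a single Jordan block; the following sketch (illustrative only, assuming NumPy and SciPy are available) contrasts the discontinuous Jordan structure with the stably computed Schur form.

```python
# Sketch (assumption: NumPy and SciPy available): the Jordan structure is discontinuous
# under perturbation, while the Schur form is computed stably by unitary transformations.
import numpy as np
from scipy.linalg import schur

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])            # one 2x2 Jordan block for eigenvalue 0
eps = 1e-12
A_pert = A + np.array([[0.0, 0.0],
                       [eps, 0.0]])   # tiny perturbation of one entry

# The perturbed matrix has two distinct eigenvalues +/- sqrt(eps), so it is
# diagonalizable: the single size-2 block has split into two size-1 blocks.
print(np.linalg.eigvals(A))           # [0, 0]
print(np.linalg.eigvals(A_pert))      # approx [1e-6, -1e-6]

# The Schur form, by contrast, is obtained by a unitary similarity and varies continuously.
T, Z = schur(A_pert, output='complex')
print(T)                              # upper triangular, eigenvalues on the diagonal
```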

Applications in Systems

Discrete Dynamical Systems

In linear discrete dynamical systems, the evolution of a state vector \mathbf{x}_{n+1} = A \mathbf{x}_n is given by \mathbf{x}_n = A^n \mathbf{x}_0, where A is an m \times m matrix. The Jordan canonical form A = P J P^{-1} transforms the solution to \mathbf{x}_n = P J^n P^{-1} \mathbf{x}_0, decoupling the dynamics into independent Jordan blocks. For a Jordan block J_k(\lambda) of size k associated with eigenvalue \lambda, the power J_k(\lambda)^n consists of entries \binom{n}{i} \lambda^{n-i} on the i-th superdiagonal, yielding solutions that are linear combinations of terms n^j \lambda^n for j = 0, \dots, k-1. This structure reveals the qualitative behavior: if |\lambda| < 1, the block contributes exponentially decaying terms; if |\lambda| > 1, it produces exponentially growing terms; and if |\lambda| = 1 with k > 1, polynomial growth of degree up to k-1 emerges, as in shear-augmented rotations. The asymptotic stability of the origin is determined solely by the eigenvalues, independent of block structure: the origin is asymptotically stable if and only if all eigenvalues satisfy |\lambda| < 1, ensuring \|\mathbf{x}_n\| \to 0 as n \to \infty for any initial \mathbf{x}_0. For |\lambda| = 1 and k = 1, bounded oscillatory or constant behavior occurs, but larger blocks introduce unbounded polynomial terms, leading to instability. In applications like Markov chains, where A is a stochastic transition matrix, the eigenvalue 1 corresponds to stationary distributions and is semisimple (its Jordan blocks have size 1, since the powers of a stochastic matrix are bounded); convergence to equilibrium is instead governed by the subdominant eigenvalues with |\lambda| < 1, whose Jordan blocks of size greater than 1 produce slower transients in which polynomial factors multiply the exponential decay. Similarly, in population models using nonnegative matrices (e.g., Leslie matrices), the Jordan form distinguishes transient populations (tied to |\lambda| < 1) from recurrent or dominant ones. For nonnegative irreducible matrices, the Perron-Frobenius theorem ensures the dominant eigenvalue r (the spectral radius) is real, positive, and simple, implying a single Jordan block of size 1 for r, which dictates the long-term growth rate \mathbf{x}_n \sim c r^n \mathbf{v} for positive eigenvector \mathbf{v}. This simplifies analysis in positive discrete systems, such as economic or ecological models, where the block structure for r reveals asymptotic proportionality without polynomial complications. For primitive matrices (a subclass where some power is positive), r > |\lambda| for all other eigenvalues, guaranteeing unique long-term dominance. In multi-step discrete systems, such as linear recurrences \mathbf{x}_{n+k} = \sum_{i=1}^k a_i \mathbf{x}_{n+k-i}, the companion-matrix formulation links the Jordan form to the rational canonical form via invariant factors, where each companion block corresponds to an invariant factor, and the Jordan decomposition provides the modal (eigenvalue-based) behavior for stability assessment. This connection is particularly useful for analyzing higher-order recurrences, as the elementary divisors in the Jordan form refine the cyclic structure given by the rational canonical form.
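
The qualitative behavior described above can be illustrated numerically; the sketch below (not from the original text, assuming NumPy is available) iterates a size-2 Jordan block for |\lambda| < 1 and |\lambda| = 1 and reports the growth of the state norm.

```python
# Sketch (not from the original text): qualitative behavior of x_{n+1} = A x_n
# for a 2x2 Jordan block, contrasting |lambda| < 1 with |lambda| = 1.
import numpy as np

def trajectory_norms(lam, steps=50):
    """Norms of A^n x0 for a size-2 Jordan block with eigenvalue lam."""
    A = np.array([[lam, 1.0],
                  [0.0, lam]])
    x = np.array([1.0, 1.0])
    norms = []
    for _ in range(steps):
        norms.append(np.linalg.norm(x))
        x = A @ x
    return norms

decaying = trajectory_norms(0.5)    # |lambda| < 1: n * lambda**n -> 0, asymptotically stable
marginal = trajectory_norms(1.0)    # |lambda| = 1 with block size 2: grows roughly linearly in n

print(decaying[-1])                 # close to 0
print(marginal[-1])                 # approximately 50 (polynomial growth of degree 1)
```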

Continuous Linear Differential Equations

The Jordan canonical form provides a powerful tool for solving systems of linear ordinary differential equations (ODEs) with constant coefficients of the form \dot{x} = A x, where A is an n \times n matrix and x(t) \in \mathbb{R}^n. The general solution is given by x(t) = e^{A t} x(0), where the matrix exponential e^{A t} can be computed using the Jordan decomposition A = P J P^{-1}, yielding e^{A t} = P e^{J t} P^{-1}. This approach transforms the system into a block-diagonal form, allowing explicit computation of the exponential for each Jordan block. For a single Jordan block J_k(\lambda) of size k corresponding to eigenvalue \lambda, we have J_k(\lambda) = \lambda I_k + N, where N is the nilpotent matrix with 1s on the superdiagonal and zeros elsewhere. The exponential is then e^{J_k(\lambda) t} = e^{\lambda t} \sum_{i=0}^{k-1} \frac{(t N)^i}{i!} = e^{\lambda t} \begin{pmatrix} 1 & t & \frac{t^2}{2!} & \cdots & \frac{t^{k-1}}{(k-1)!} \\ 0 & 1 & t & \cdots & \frac{t^{k-2}}{(k-2)!} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & t \\ 0 & 0 & \cdots & 0 & 1 \end{pmatrix}, since higher powers of N vanish. This structure produces solution components that are generalized modes of the form t^j e^{\lambda t} for j = 0, 1, \dots, k-1, reflecting the algebraic multiplicity and defectiveness of the eigenvalue. The full solution x(t) is a superposition of such terms across all blocks in J, with polynomial degrees bounded by the size of each block minus one. Consider a 2×2 system \dot{x} = A x where A has a repeated eigenvalue \lambda = -1 with geometric multiplicity 1, so J = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix}. The solution is x(t) = P e^{-t} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} P^{-1} x(0), which expands to terms like (a + b t) e^{-t} and b e^{-t} for constants a, b. If instead \lambda = 1 > 0, the solution becomes P e^{t} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} P^{-1} x(0), producing growing solutions amplified by the linear polynomial factor, illustrating unstable behavior. For complex conjugate eigenvalues with negative real part, real Jordan blocks yield damped oscillatory solutions, modulated by polynomials if the eigenvalue is defective. The asymptotic stability of the origin is determined by the eigenvalues: the system is asymptotically stable if and only if all eigenvalues satisfy \operatorname{Re}(\lambda) < 0, as the exponential decay dominates any polynomial growth from the blocks. If some eigenvalue has \operatorname{Re}(\lambda) > 0, the corresponding exponential growth, further amplified by polynomial factors from larger Jordan blocks, renders the system unstable. If \operatorname{Re}(\lambda) = 0 for some eigenvalues, bounded oscillations may occur, but the polynomial factors lead to secular growth unless the corresponding block size is 1. For non-homogeneous systems \dot{x} = A x + f(t), the general solution combines the homogeneous part with a particular solution via variation of parameters: x(t) = e^{A t} x(0) + \int_0^t e^{A (t-s)} f(s) \, ds, where the fundamental matrix e^{A t} = P e^{J t} P^{-1} leverages the Jordan form for efficient evaluation. This integral, known as Duhamel's formula, inherits the modal structure from the homogeneous solution, allowing the influence of f(t) to be analyzed through the same exponential and polynomial terms.
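
The modal structure of the solutions can be checked numerically; the sketch below (not from the original text, assuming NumPy and SciPy are available) evolves the 2 \times 2 example with repeated eigenvalue -1 and compares the result with the closed form e^{Jt} = e^{-t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.

```python
# Sketch (assumption: NumPy and SciPy available): solving x' = J x for a 2x2 Jordan
# block with repeated eigenvalue -1, showing the (a + b t) e^{-t} structure of solutions.
import numpy as np
from scipy.linalg import expm

J = np.array([[-1.0,  1.0],
              [ 0.0, -1.0]])
x0 = np.array([1.0, 1.0])

for t in [0.0, 1.0, 2.0, 5.0]:
    x_t = expm(J * t) @ x0
    # Closed form from e^{Jt} = e^{-t} [[1, t], [0, 1]]:
    exact = np.exp(-t) * np.array([1.0 + t, 1.0])
    assert np.allclose(x_t, exact)
    print(t, x_t)
```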
