
Companion matrix

In linear algebra, a companion matrix is a square matrix constructed from the coefficients of a monic polynomial such that the polynomial serves as both its characteristic and minimal polynomial. This matrix, often associated with the work of Georg Frobenius from 1878, provides a concrete representation linking polynomials to linear transformations, and the term "companion matrix" was introduced by C.C. MacDuffee in 1946 as a translation of the German "Begleitmatrix." For a monic polynomial p(x) = x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0 of degree n, the standard Frobenius companion matrix C is the n \times n matrix with 1's on the subdiagonal (positions (i+1, i) for i = 1 to n-1), the negated coefficients -a_0, -a_1, \dots, -a_{n-1} down the last column, and zeros elsewhere. For example, when n=2 and p(x) = x^2 + a_1 x + a_0, the companion matrix is \begin{pmatrix} 0 & -a_0 \\ 1 & -a_1 \end{pmatrix}. An alternative convention, often used in the rational canonical form, is the transpose of this structure, with 1's on the superdiagonal and the negated coefficients in the last row. In either form, the eigenvalues of the companion matrix are precisely the roots of p(x); for the transposed form, the right eigenvectors are the Vandermonde vectors [1, \lambda, \lambda^2, \dots, \lambda^{n-1}]^T for each root \lambda, and the same vectors serve as left eigenvectors (as rows) of the standard form. Key properties of the companion matrix include its nonderogatory nature—meaning the geometric multiplicity of each eigenvalue is 1—and the fact that it is similar to any n \times n matrix whose minimal polynomial is p(x), i.e., whose minimal and characteristic polynomials coincide. It exhibits low-rank structure, expressible as a cyclic permutation matrix plus a rank-1 update, which aids numerical computations. Companion matrices play a fundamental role in the rational canonical form, where any square matrix over a field is similar to a block-diagonal matrix whose diagonal blocks are the companion matrices of its invariant factors.
They are widely applied in computing polynomial roots by solving the associated eigenvalue problem, as implemented in algorithms like MATLAB's roots function. In the context of linear recurrence relations, such as the Fibonacci recurrence, powers of the companion matrix generate the sequence terms from the initial conditions, enabling closed-form solutions via diagonalization. Additionally, companion matrices facilitate the reduction of higher-order linear differential or difference equations to systems of first-order equations, with applications in control theory and signal processing.
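The eigenvalue-based root-finding approach can be sketched as follows (a minimal NumPy illustration of the idea behind functions like MATLAB's roots; the function name poly_roots is hypothetical):

```python
import numpy as np

def poly_roots(coeffs):
    """Roots of the monic polynomial x^n + c[n-1]x^(n-1) + ... + c[0],
    computed as the eigenvalues of its companion matrix.
    `coeffs` lists c[0], ..., c[n-1] (lowest degree first)."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)        # 1's on the subdiagonal
    C[:, -1] = -np.asarray(coeffs)    # negated coefficients in the last column
    return np.linalg.eigvals(C)

# x^2 + 3x + 2 = (x + 1)(x + 2) has roots -1 and -2.
print(sorted(poly_roots([2, 3]).real))
```

Production root finders add balancing and exploit the matrix's sparsity, but the core computation is exactly this eigenvalue problem.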

Definition and Construction

Standard Companion Matrix

The standard companion matrix is a square matrix constructed from the coefficients of a monic polynomial of degree n, p(\lambda) = \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_1 \lambda + a_0, where the coefficients a_i belong to a field F. This matrix provides a representation that links polynomials to linear transformations on the vector space F^n. The explicit construction of the n \times n companion matrix C places the shifted standard basis vectors e_2, \dots, e_n in the first n-1 columns and the negated coefficients in the last column: C = \begin{pmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{pmatrix}. This form, known as the Frobenius companion matrix, was introduced by Georg Frobenius in 1878 to construct matrices with prescribed characteristic polynomials over arbitrary fields. The term "companion matrix" was later coined by C.C. MacDuffee in 1946 as a translation of the German "Begleitmatrix." For example, consider the monic quadratic polynomial p(\lambda) = \lambda^2 + 3\lambda + 2. The corresponding standard companion matrix is C = \begin{pmatrix} 0 & -2 \\ 1 & -3 \end{pmatrix}. This matrix exemplifies the structure, with the subdiagonal filled by 1's and the last column containing the negated coefficients. Variant forms exist with the coefficients placed differently, such as in the first row or first column.
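The construction above translates directly into code; a small sketch (NumPy, with the helper name companion chosen here for illustration) reproduces the worked quadratic example:

```python
import numpy as np

def companion(coeffs):
    """Standard (Frobenius) companion matrix of the monic polynomial
    x^n + c[n-1] x^(n-1) + ... + c[0], in the article's convention:
    1's on the subdiagonal, negated coefficients in the last column.
    `coeffs` lists c[0], ..., c[n-1] (lowest degree first)."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)                 # subdiagonal shift
    C[:, -1] = -np.asarray(coeffs, dtype=float)  # last column
    return C

# p(x) = x^2 + 3x + 2 gives the matrix [[0, -2], [1, -3]].
print(companion([2, 3]))
```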

Variant Forms

The transposed companion matrix C^T for a monic polynomial p(x) = x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0 is obtained by taking the transpose of the standard companion matrix C, resulting in a matrix with 1s on the superdiagonal (positions (i, i+1) for i = 1 to n-1) and the negated coefficients -a_0, -a_1, \dots, -a_{n-1} in the last row. This form places the identity shift in the first n-1 rows and the polynomial coefficients in the bottom row, making it particularly convenient for certain iterative computations involving matrix powers. The explicit form of the transposed companion matrix is C^T = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{pmatrix}. Frobenius companion matrices encompass two primary forms, often denoted as types A and B, which differ in the placement of the shift identities and coefficients to enhance numerical properties in eigenvalue algorithms. Form A (the "column companion") features 1s on the subdiagonal and the negated coefficients in the last column, while form B (the "row companion") has 1s on the superdiagonal and the negated coefficients in the last row—corresponding to the transposed variant above. These forms are preferred in numerical linear algebra for computing polynomial roots via eigenvalue solvers, as their structured sparsity (only the n-1 shift entries besides the n coefficients) reduces fill-in during QR iterations and improves backward stability compared to denser representations. All variant forms of the companion matrix—standard, transposed, and the Frobenius types A and B—are similar to one another over the base field, since each is nonderogatory with characteristic polynomial p(x); conjugation by a permutation such as the reversal matrix R (with 1s on the anti-diagonal) interchanges some of the variants, while similarity between a companion matrix and its transpose is realized by a suitable Hankel symmetrizer. Either way, the similarity preserves the characteristic polynomial and eigenvalues.
For instance, the Frobenius form A is similar to form B (its transpose), and transformations between variants do not alter the spectral properties essential for root-finding applications. As an example, consider the quadratic polynomial p(x) = x^2 + a x + b. The standard companion matrix is C = \begin{pmatrix} 0 & -b \\ 1 & -a \end{pmatrix}, while the transposed form is C^T = \begin{pmatrix} 0 & 1 \\ -b & -a \end{pmatrix}. These are similar via the Hankel symmetrizer H = \begin{pmatrix} a & 1 \\ 1 & 0 \end{pmatrix}, which satisfies C H = H C^T, confirming they share the eigenvalues -a/2 \pm \sqrt{(a/2)^2 - b}.
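A quick numerical check of this equivalence (NumPy; the Hankel symmetrizer H = ((a, 1), (1, 0)) is a standard construction for the quadratic case, used here as an illustration):

```python
import numpy as np

a, b = 3.0, 2.0                        # p(x) = x^2 + 3x + 2
C  = np.array([[0., -b], [1., -a]])    # standard (column) form
CT = C.T                               # transposed (row) form

# Both forms share the eigenvalues of p, here -1 and -2.
print(sorted(np.linalg.eigvals(C)), sorted(np.linalg.eigvals(CT)))

# The Hankel symmetrizer H conjugates C into its transpose: C H = H C^T.
H = np.array([[a, 1.], [1., 0.]])      # invertible since det(H) = -1
print(np.allclose(C @ H, H @ CT))
```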

Algebraic Properties

Characteristic and Minimal Polynomials

The characteristic polynomial of the companion matrix C for the monic polynomial p(\lambda) = \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_1 \lambda + a_0 is p(\lambda) itself. This equality can be established by direct computation of \det(\lambda I - C) via recursive cofactor expansion along the first row (or equivalently, the last column in some conventions). To sketch the proof, consider the base case n=1: C = [-a_0] and \det(\lambda I - C) = \lambda + a_0 = p(\lambda). For the inductive step, assume the result holds for degree n-1. The matrix \lambda I - C has \lambda on the diagonal, -1 on the subdiagonal (from the +1 entries in C), and last column [a_0, a_1, \dots, a_{n-2}, \lambda + a_{n-1}]^T. Expanding the determinant along the first row yields \lambda times the determinant of an (n-1) \times (n-1) matrix of the same companion form, which by induction is p_{n-1}(\lambda) = \lambda^{n-1} + a_{n-1} \lambda^{n-2} + \cdots + a_1, plus the cofactor term (-1)^{1+n} a_0 times the determinant of a lower triangular matrix with -1's on the diagonal, which equals (-1)^{n-1}; the signs multiply to +1, so this term contributes a_0. Combining these gives \det(\lambda I - C) = \lambda \, p_{n-1}(\lambda) + a_0 = \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_1 \lambda + a_0 = p(\lambda). The minimal polynomial of C is also p(\lambda). By the Cayley-Hamilton theorem, C satisfies its characteristic polynomial, so p(C) = 0, and thus the minimal polynomial m_C(\lambda) divides p(\lambda). To show equality, note that p(\lambda) is monic of degree n, and no nonzero polynomial of lower degree annihilates C: the vectors e_1, Ce_1, \dots, C^{n-1} e_1 (where e_1 is the first standard basis vector) are exactly e_1, e_2, \dots, e_n, a basis for the whole space, implying linear independence of the powers I, C, \dots, C^{n-1} applied to e_1. If a nonzero polynomial f(\lambda) = \sum_{k=0}^{m} b_k \lambda^k with m < n satisfied f(C) = 0, then f(C) e_1 = 0 would force all coefficients b_k = 0 by this independence, a contradiction. Thus, \deg m_C(\lambda) = n, so m_C(\lambda) = p(\lambda).
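Both facts can be spot-checked numerically; a small NumPy sketch for an illustrative cubic (np.poly recovers characteristic-polynomial coefficients from the eigenvalues, so equality holds up to roundoff):

```python
import numpy as np

# Companion matrix of p(x) = x^3 + 2x^2 - 5x + 6, standard form:
# 1's on the subdiagonal, negated coefficients in the last column.
C = np.zeros((3, 3))
C[1:, :-1] = np.eye(2)
C[:, -1] = [-6., 5., -2.]       # -a_0, -a_1, -a_2

# Characteristic polynomial (highest degree first): [1, 2, -5, 6].
print(np.poly(C))

# Cayley-Hamilton: p(C) = 0, consistent with the minimal polynomial
# having full degree 3.
pC = C @ C @ C + 2 * (C @ C) - 5 * C + 6 * np.eye(3)
print(np.allclose(pC, 0))
```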

Diagonalizability Conditions

A companion matrix C of degree n, associated with a monic polynomial p(\lambda) of degree n, is diagonalizable over an algebraically closed field such as the complex numbers if and only if p(\lambda) has n distinct roots, meaning all eigenvalues are distinct. This condition ensures the existence of a full set of linearly independent eigenvectors, as the eigenspaces span the entire space when there are no repeated eigenvalues. When p(\lambda) has repeated roots, C is not diagonalizable. In such cases, the Jordan canonical form of C consists of exactly one Jordan block per distinct eigenvalue, with the block size equal to the algebraic multiplicity of that eigenvalue. Since the geometric multiplicity of each eigenvalue is always 1 for a companion matrix (due to its cyclic nature), the geometric multiplicity equals the algebraic multiplicity only for simple roots (multiplicity 1). For example, consider p(\lambda) = (\lambda - 1)^2 = \lambda^2 - 2\lambda + 1. The corresponding 2×2 companion matrix is C = \begin{pmatrix} 0 & -1 \\ 1 & 2 \end{pmatrix}, which has a repeated eigenvalue 1 with algebraic multiplicity 2 but geometric multiplicity 1. Its Jordan form is the single block J = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, confirming it is not diagonalizable. Over the complex numbers, every companion matrix is triangulable via the Schur decomposition, but it is diagonalizable precisely when the eigenvalues (roots of p(\lambda)) are distinct.
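The multiplicity gap in the example above can be verified numerically: the geometric multiplicity of the eigenvalue 1 is the dimension of ker(C - I), computed here via a rank calculation (NumPy):

```python
import numpy as np

# Companion matrix of p(λ) = (λ - 1)^2 = λ^2 - 2λ + 1.
C = np.array([[0., -1.],
              [1.,  2.]])

# Geometric multiplicity of λ = 1:
#   dim ker(C - I) = n - rank(C - I) = 2 - 1 = 1,
# so there is a single 2x2 Jordan block and C is not diagonalizable.
geom_mult = 2 - np.linalg.matrix_rank(C - np.eye(2))
print(geom_mult)

# Both eigenvalues are (numerically) 1, the double root.
print(np.linalg.eigvals(C))
```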

Similarity Transformations

The rational canonical form provides a canonical representation under similarity transformations for any square matrix over a field, consisting of a block diagonal matrix whose diagonal blocks are companion matrices corresponding to the invariant factors of the matrix. Specifically, every matrix A \in M_n(F) is similar to a direct sum of companion matrices C_{q_1}(x) \oplus \cdots \oplus C_{q_k}(x), where the q_i(x) are the monic invariant factors satisfying q_i(x) \mid q_{i+1}(x) for each i, the product q_1(x) \cdots q_k(x) is the characteristic polynomial of A, and q_k(x) is the minimal polynomial. In the special case where the underlying module is cyclic—meaning there exists a single cyclic vector generating the entire space under powers of A—the rational canonical form reduces to a single companion matrix block C_m(x), where m(x) is the minimal polynomial of A, which coincides with its characteristic polynomial. A matrix A is similar to the companion matrix C_p(x) of its characteristic polynomial p(x) if and only if A and C_p(x) share the same minimal and characteristic polynomials, which occurs precisely when A admits a cyclic vector. More generally, two matrices are similar if and only if they possess the same rational canonical form, ensuring that the companion matrix blocks determine the similarity class uniquely up to the ordering of the blocks. To construct the similarity transformation, suppose v is a cyclic vector for A, so that the Krylov subspace basis \mathcal{B} = \{ v, Av, A^2 v, \dots, A^{n-1} v \} forms a basis for the whole space. The change-of-basis matrix P has columns given by the coordinates of these vectors in the standard basis, satisfying P^{-1} A P = C_p(x), where p(x) is the characteristic (equivalently, minimal) polynomial of A. This construction leverages the fact that the action of A on \mathcal{B} mimics the shift structure of the companion matrix, with the Cayley-Hamilton theorem ensuring the relation closes appropriately.
A concrete example arises with the monic polynomial p(x) = x^n - 1, whose companion matrix in the Frobenius form is precisely the permutation matrix corresponding to an n-cycle, such as \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}, which represents a cyclic shift and has characteristic polynomial x^n - 1. In this case, the matrix is already in companion form, illustrating a direct instance of similarity to itself under the identity transformation. The companion matrix blocks in the rational canonical form are unique up to the ordering of the blocks along the diagonal, as determined by the unique invariant factors of the matrix, providing an invariant characterization of similarity classes over the base field.
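The Krylov construction described above can be sketched numerically (NumPy; the 3×3 matrix A below is an arbitrary example chosen for illustration, with e_1 as cyclic vector):

```python
import numpy as np

# Example matrix with cyclic vector v = e_1.
A = np.array([[2., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
v = np.array([1., 0., 0.])

# Columns of P are the Krylov vectors v, Av, A^2 v.
P = np.column_stack([v, A @ v, A @ A @ v])

# P^{-1} A P is the companion matrix of A's characteristic polynomial
# x^3 - 4x^2 + 5x - 3: 1's on the subdiagonal, negated coefficients
# [3, -5, 4] in the last column.
C = np.linalg.inv(P) @ A @ P
print(np.round(C, 10))
```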

Representations

Cyclic Shift Interpretation

The cyclic shift matrix S, also known as the permutation matrix of the standard n-cycle (1\ 2\ \dots\ n), is defined by placing 1's on the subdiagonal (positions (i+1, i) for i = 1 to n-1) and in the top-right corner (position (1, n)), with zeros elsewhere. This matrix cyclically shifts the standard basis vectors: S e_i = e_{i+1} for i < n and S e_n = e_1, where the e_i are the standard basis vectors. For the monic polynomial p(\lambda) = \lambda^n - 1, the companion matrix C coincides exactly with this cyclic shift S. In this case, C permutes the basis vectors in a single cycle, reflecting the roots-of-unity structure of p(\lambda). If the transpose convention is used instead (1's on the superdiagonal and a 1 in the bottom-left corner), the result remains similar to S via a suitable permutation. This equivalence extends through similarity transformations; specifically, C for p(\lambda) = \lambda^n - 1 is diagonalized by the Vandermonde matrix V whose columns are the powers of a primitive n-th root of unity \omega (i.e., columns (1, \omega^j, \omega^{2j}, \dots, \omega^{(n-1)j})^T for j = 0 to n-1), yielding V^{-1} C V = \operatorname{diag}(1, \omega, \omega^2, \dots, \omega^{n-1}) up to the labeling of the roots. The Fourier-like basis provided by V interprets the action of C as a cyclic permutation in the frequency domain, akin to the discrete Fourier transform. In the general case of a monic polynomial p(\lambda) = \lambda^n + a_{n-1} \lambda^{n-1} + \dots + a_0, the companion matrix C acts on the polynomial basis \{1, \lambda, \lambda^2, \dots, \lambda^{n-1}\} in the quotient space \mathbb{F}[\lambda]/\langle p(\lambda) \rangle as multiplication by \lambda, which corresponds to a weighted shift: the coordinates shift upward, with the highest-degree term wrapping around via the relation \lambda^n = -a_{n-1} \lambda^{n-1} - \dots - a_0. This operation is known as the companion shift, preserving the cyclic nature but incorporating the polynomial coefficients as weights in the feedback.
This interpretation finds applications in the study of circulant matrices, which are precisely the polynomials in the basic circulant S (the companion matrix of \lambda^n - 1), and thus share its diagonalization under the same Vandermonde transformation. In representation theory, the cyclic shift matrix S realizes the regular representation of the cyclic group \mathbb{Z}/n\mathbb{Z}, where the generator acts by permutation on the group elements, providing a concrete matrix model for the group's action on itself.
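The Vandermonde diagonalization of the cyclic shift can be checked directly (NumPy, n = 4; the eigenvalues come out as the 4th roots of unity, in an order fixed by the column convention below):

```python
import numpy as np

n = 4
# Cyclic shift S = companion matrix of λ^n - 1:
# 1's on the subdiagonal, 1 in the top-right corner.
S = np.zeros((n, n))
S[1:, :-1] = np.eye(n - 1)
S[0, -1] = 1.0

# Vandermonde matrix in the n-th roots of unity (a DFT-type matrix).
w = np.exp(2j * np.pi / n)
V = np.array([[w ** (i * j) for j in range(n)] for i in range(n)])

# V^{-1} S V is diagonal; column j of V is an eigenvector with
# eigenvalue w^{-j}, so the diagonal holds all 4th roots of unity.
D = np.linalg.inv(V) @ S @ V
print(np.round(np.diag(D), 10))
```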

Multiplication in Field Extensions

In a simple algebraic field extension L = K(\alpha)/K of degree n, where \alpha is algebraic over K with irreducible minimal polynomial p(\lambda) = \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_1 \lambda + a_0 \in K[\lambda], the vector space structure over K admits the power basis \{1, \alpha, \alpha^2, \dots, \alpha^{n-1}\}. This basis spans L as a K-vector space, and every element \beta \in L can be uniquely expressed as \beta = b_0 + b_1 \alpha + \cdots + b_{n-1} \alpha^{n-1} with b_i \in K. The map m_\alpha: L \to L defined by m_\alpha(\beta) = \alpha \beta is a K-linear endomorphism of L. With respect to the power basis, the matrix of m_\alpha is the companion matrix C of p(\lambda), which takes the form C = \begin{pmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{pmatrix}. This representation arises because multiplication by \alpha shifts the basis elements: m_\alpha(1) = \alpha = 0 \cdot 1 + 1 \cdot \alpha + 0 \cdot \alpha^2 + \cdots + 0 \cdot \alpha^{n-1}, m_\alpha(\alpha^k) = \alpha^{k+1} for 1 \leq k \leq n-2, and m_\alpha(\alpha^{n-1}) = \alpha^n. Since p(\alpha) = 0, the relation \alpha^n = -a_{n-1} \alpha^{n-1} - \cdots - a_1 \alpha - a_0 reduces higher powers to lower-degree terms in the basis, filling the last column of C. Thus, C encodes the action of multiplication by the root \alpha modulo p(\lambda). For a concrete illustration, consider the quadratic extension \mathbb{Q}(\sqrt{2})/\mathbb{Q} with minimal polynomial p(\lambda) = \lambda^2 - 2, so a_1 = 0 and a_0 = -2. The power basis is \{1, \sqrt{2}\}, and m_{\sqrt{2}}(1) = \sqrt{2} = 0 \cdot 1 + 1 \cdot \sqrt{2}, while m_{\sqrt{2}}(\sqrt{2}) = 2 = 2 \cdot 1 + 0 \cdot \sqrt{2}. The resulting matrix is the companion form C = \begin{pmatrix} 0 & 2 \\ 1 & 0 \end{pmatrix}, where the last column uses -a_0 = 2 and -a_1 = 0. This matrix satisfies p(C) = 0, confirming its role in the extension.
The trace and determinant of C connect directly to field-theoretic invariants. For the monic minimal polynomial, \operatorname{tr}(C) = -a_{n-1}, which equals the field trace \operatorname{Tr}_{L/K}(\alpha) of \alpha in the extension; when the \lambda^{n-1} term is absent (i.e., a_{n-1} = 0), this yields \operatorname{tr}(C) = 0. Similarly, \det(C) = (-1)^n a_0 = N_{L/K}(\alpha), where N_{L/K}(\alpha) is the field norm, the product of the conjugates of \alpha. These relations hold because the eigenvalues of C are the conjugates of \alpha, with trace and determinant matching the elementary symmetric functions of the roots of p(\lambda).
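The \mathbb{Q}(\sqrt{2}) example above can be carried out as matrix arithmetic (NumPy sketch): an element b_0 + b_1\sqrt{2} is represented by b_0 I + b_1 C, and products of field elements become matrix products.

```python
import numpy as np

# Companion matrix of p(λ) = λ^2 - 2: multiplication by sqrt(2)
# on the power basis {1, sqrt(2)} of Q(sqrt(2)).
C = np.array([[0., 2.],
              [1., 0.]])

# (1 + sqrt(2))^2 = 3 + 2*sqrt(2), i.e. (I + C)^2 = 3I + 2C.
X = np.eye(2) + C
print(np.allclose(X @ X, 3 * np.eye(2) + 2 * C))

# Trace and determinant recover the field trace and norm of sqrt(2):
# Tr(sqrt(2)) = 0 and N(sqrt(2)) = sqrt(2) * (-sqrt(2)) = -2 = (-1)^2 a_0.
print(np.trace(C), round(np.linalg.det(C)))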

Applications

Linear Recurrence Relations

A linear homogeneous recurrence relation of order n with constant coefficients is given by the equation u_{k+n} = -a_{n-1} u_{k+n-1} - \cdots - a_1 u_{k+1} - a_0 u_k for k \geq 0, where the a_i are constants and the sequence \{u_k\} is determined by initial conditions u_0, u_1, \dots, u_{n-1}. This recurrence can be expressed in state-space form by defining the vector \mathbf{v}_k = \begin{pmatrix} u_k \\ u_{k+1} \\ \vdots \\ u_{k+n-1} \end{pmatrix}, which satisfies \mathbf{v}_{k+1} = C \mathbf{v}_k, where C is the n \times n companion matrix associated with the characteristic polynomial p(\lambda) = \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_1 \lambda + a_0. The matrix C has the transposed companion structure, with 1's on the superdiagonal and the negated coefficients -a_0, -a_1, \dots, -a_{n-1} in the bottom row, ensuring the linear transformation advances the state vector according to the recurrence. Iterating this relation yields the solution \mathbf{v}_k = C^k \mathbf{v}_0, with \mathbf{v}_0 formed from the initial conditions. If the companion matrix C is diagonalizable, which occurs when the roots r_i of p(\lambda) = 0 are distinct, then C = P D P^{-1} for some invertible P and diagonal D = \operatorname{diag}(r_1, \dots, r_n), leading to the closed form \mathbf{v}_k = P D^k P^{-1} \mathbf{v}_0. This implies that each term in the sequence admits the explicit form u_k = \sum_{i=1}^n c_i r_i^k for suitable constants c_i determined by the initial conditions, providing a Binet-like formula based on powers of the roots of the characteristic polynomial. A representative example is the Fibonacci recurrence F_{k+2} = F_{k+1} + F_k for k \geq 0, with characteristic polynomial p(\lambda) = \lambda^2 - \lambda - 1 (so a_1 = -1, a_0 = -1) and typical initial conditions F_0 = 0, F_1 = 1. Here, the state vector is \mathbf{v}_k = \begin{pmatrix} F_k \\ F_{k+1} \end{pmatrix}, and \mathbf{v}_{k+1} = C \mathbf{v}_k with C = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}, the companion matrix in the form described.
The powers C^k generate the sequence terms, and diagonalization yields the closed form F_k = \frac{\phi^k - (-\phi)^{-k}}{\sqrt{5}}, where \phi = \frac{1 + \sqrt{5}}{2} is the golden ratio (the larger root of p(\lambda) = 0). The connection to generating functions arises because the ordinary generating function G(x) = \sum_{k=0}^\infty u_k x^k for a sequence satisfying the recurrence is a rational function of the form G(x) = q(x) / \tilde{p}(x), where \tilde{p}(x) = x^n p(1/x) = 1 + a_{n-1} x + \cdots + a_0 x^n is the reciprocal polynomial and q(x) is a polynomial of degree less than n determined by the initial conditions. In particular, when the initial conditions are chosen such that q(x) = 1, the generating function simplifies to 1 / \tilde{p}(x). For the Fibonacci sequence, this yields G(x) = x / (1 - x - x^2), the reciprocal of the characteristic polynomial adjusted for the initial conditions.
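Computing terms via matrix powers also gives an O(log k) algorithm when binary exponentiation is used; a plain-Python sketch for the Fibonacci case:

```python
# Fibonacci numbers from powers of the companion matrix C = [[0, 1], [1, 1]]:
# C^k maps the state (F_0, F_1) to (F_k, F_{k+1}).

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(M, k):
    """Binary (fast) exponentiation: O(log k) 2x2 matrix products."""
    R = [[1, 0], [0, 1]]
    while k:
        if k & 1:
            R = mat_mult(R, M)
        M = mat_mult(M, M)
        k >>= 1
    return R

def fib(k):
    Ck = mat_pow([[0, 1], [1, 1]], k)
    # v_k = C^k v_0 with v_0 = (F_0, F_1) = (0, 1); F_k is the first entry.
    return Ck[0][0] * 0 + Ck[0][1] * 1

print([fib(k) for k in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Because the arithmetic is exact integer arithmetic, this avoids the floating-point error that the Binet formula incurs for large k.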

Systems of Differential Equations

A higher-order linear homogeneous ordinary differential equation (ODE) with constant coefficients can be expressed as y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0, where y is the dependent variable and the a_i are constants. To solve this, the scalar equation is reduced to an equivalent vector system using the companion matrix. Define the state vector \mathbf{x}(t) = \begin{pmatrix} y(t) \\ y'(t) \\ \vdots \\ y^{(n-1)}(t) \end{pmatrix}. The derivatives satisfy \mathbf{x}'(t) = C \mathbf{x}(t), where C is the n \times n companion matrix given by C = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{pmatrix}. This transformation preserves the solutions of the original ODE, as the components of \mathbf{x}(t) recover y(t) and its derivatives up to order n-1. The solution to the system is \mathbf{x}(t) = e^{C t} \mathbf{x}(0), where e^{C t} is the matrix exponential, which can be computed using methods such as Laplace transforms or diagonalization via the eigenvalues of C. The roots of the characteristic polynomial of C, which match those of the original ODE, determine the solution modes of the form e^{r t}. For example, consider the second-order ODE y'' + 3y' + 2y = 0. The state vector is \mathbf{x}(t) = \begin{pmatrix} y(t) \\ y'(t) \end{pmatrix}, and the system becomes \mathbf{x}'(t) = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix} \mathbf{x}(t), where the matrix is the companion matrix for this equation. In the non-homogeneous case, y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_0 y = f(t), the system extends to \mathbf{x}'(t) = C \mathbf{x}(t) + \mathbf{b} f(t), with \mathbf{b} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}, though the homogeneous case remains the primary focus for understanding the companion matrix structure. 
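The worked example y'' + 3y' + 2y = 0 can be solved numerically through this companion-matrix reduction; a NumPy sketch that computes e^{Ct} by diagonalization (valid here because the eigenvalues -1 and -2 are distinct) and checks it against the exact solution y(t) = 2e^{-t} - e^{-2t} for y(0) = 1, y'(0) = 0:

```python
import numpy as np

# Companion system for y'' + 3y' + 2y = 0 with y(0) = 1, y'(0) = 0.
C = np.array([[ 0.,  1.],
              [-2., -3.]])
x0 = np.array([1., 0.])

def expm_via_eig(A, t):
    """Matrix exponential e^{At} via diagonalization (distinct eigenvalues)."""
    vals, P = np.linalg.eig(A)
    return (P @ np.diag(np.exp(vals * t)) @ np.linalg.inv(P)).real

# First component of x(t) = e^{Ct} x(0) recovers y(t).
t = 1.0
y = (expm_via_eig(C, t) @ x0)[0]
print(abs(y - (2 * np.exp(-1) - np.exp(-2))) < 1e-10)
```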
This representation has been integral to control theory since the 1960s, facilitating state-space models for linear time-invariant systems.
