
Matrix polynomial

A matrix polynomial, also referred to as a polynomial matrix, is a matrix-valued function P(\lambda) = \sum_{k=0}^d A_k \lambda^k, where each A_k is an n \times n constant matrix over a field (typically the complex numbers), \lambda is a scalar variable, d is the degree, and A_d \neq 0. This generalizes scalar polynomials by allowing matrix coefficients, leading to non-commutative multiplication and richer spectral properties. The study of matrix polynomials encompasses their algebraic structure, spectral theory, and numerical methods, building on linear algebra to address higher-degree eigenvalue problems. A key concept is regularity, meaning \det P(\lambda) is not identically zero, which ensures that the finite eigenvalues are well defined as the roots of \det P(\lambda) = 0. Linearizations transform a degree-d matrix polynomial into a linear pencil (a degree-1 form) while preserving eigenvalues, facilitating computation via standard generalized eigenvalue solvers like the QZ algorithm. Canonical forms, such as the Smith form over polynomial rings, reveal invariant factors and structural indices, analogous to the Jordan form for matrices. Matrix polynomials arise in diverse applications, particularly in systems and control theory, where they model differential equations through the eigenvalue problem P(\lambda) v = 0 in stability studies. In vibration analysis, they describe multi-degree-of-freedom systems, with eigenvalues corresponding to natural frequencies. More recently, in signal processing, parahermitian matrix polynomials (satisfying R^H(1/\bar{z}) = R(z)) enable broadband beamforming, source separation, and polynomial eigenvalue decompositions. Numerical challenges, including ill-conditioning at high degrees, drive ongoing research in structured linearizations and related numerical methods.

Fundamentals

Definition

A matrix polynomial, also known as a polynomial matrix, is a matrix whose entries are polynomials in a scalar variable \lambda over a field \mathbb{F} (typically the real or complex numbers). Formally, it is given by P(\lambda) = \sum_{k=0}^d A_k \lambda^k, where each A_k is an n \times n constant matrix with entries in \mathbb{F}, d is the degree (with A_d \neq 0), and \lambda is the scalar variable. This results in an n \times n matrix where each entry is a scalar polynomial of degree at most d. This construction generalizes scalar polynomials by replacing scalar coefficients with matrices, leading to non-commutative algebra in operations like multiplication. For instance, the characteristic matrix \lambda I_n - A of a square matrix A is a linear matrix polynomial, and its determinant \det(\lambda I_n - A) is the scalar characteristic polynomial p(\lambda), which annihilates A via the Cayley-Hamilton theorem: p(A) = 0. The study of matrix polynomials emerged in the late 19th century as part of linear algebra developments, with canonical forms for linear matrix polynomials (pencils) established by Weierstrass in 1868 and Kronecker in 1890.

Examples

A constant matrix polynomial of degree 0 is P(\lambda) = A_0, where A_0 is a constant n \times n matrix, independent of \lambda. For instance, with n=2, P(\lambda) = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}. Each entry is a constant scalar, representing the simplest case. The zero matrix polynomial P(\lambda) = O, the n \times n zero matrix, corresponds to all coefficients A_k = O for k \geq 0. It trivially satisfies P(\lambda) v = 0 for any v, and serves as the zero element in the ring of matrix polynomials. Consider a linear matrix polynomial P(\lambda) = A_0 + A_1 \lambda. For concreteness, let A_0 = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. Then P(\lambda) = \begin{pmatrix} 1 & \lambda \\ \lambda & 2 \end{pmatrix}. This example shows how the entries become linear polynomials in \lambda, combining the matrices via scalar multiplication by powers of \lambda. For a quadratic matrix polynomial, consider P(\lambda) = A_0 + A_1 \lambda + A_2 \lambda^2 with the same A_0, A_1 and A_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2. Then P(\lambda) = \begin{pmatrix} 1 + \lambda^2 & \lambda \\ \lambda & 2 + \lambda^2 \end{pmatrix}. The entries are quadratic polynomials, illustrating the general form in which matrix addition and scalar multiplication by powers \lambda^k are combined. Matrix polynomials P(\lambda) are well-defined because the scalar \lambda commutes with all matrix coefficients under scalar multiplication, allowing unambiguous evaluation of the sum regardless of whether the A_k commute among themselves. However, when multiplying two matrix polynomials P(\lambda) Q(\lambda), the non-commutativity of the coefficients of P and Q must be accounted for.
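
To make the quadratic example concrete in code, here is a minimal sketch (assuming NumPy is available; the names A0, A1, A2 mirror the coefficient matrices above) that builds the coefficients and evaluates P(\lambda) at a sample scalar.

```python
import numpy as np

# Coefficients of the quadratic example P(lambda) = A0 + A1*lambda + A2*lambda^2
A0 = np.array([[1.0, 0.0], [0.0, 2.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.eye(2)

def P(lam):
    """Evaluate the matrix polynomial at the scalar lam."""
    return A0 + A1 * lam + A2 * lam**2

print(P(3.0))
# [[10.  3.]
#  [ 3. 11.]]
```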

Algebraic Properties

Operations

Matrix polynomials are expressions of the form P(\lambda) = \sum_{k=0}^d A_k \lambda^k, where each A_k is an n \times n matrix over a field (typically the complex numbers), \lambda is a scalar variable, d is the degree, and A_d \neq 0. Addition of two matrix polynomials P(\lambda) = \sum_{k=0}^d A_k \lambda^k and Q(\lambda) = \sum_{k=0}^e B_k \lambda^k is defined termwise: (P + Q)(\lambda) = \sum_{k=0}^{\max(d,e)} (A_k + B_k) \lambda^k, where coefficients are zero-padded if necessary. This operation is associative and commutative, as matrix addition is, with the zero polynomial (all A_k = 0) serving as the additive identity. Scalar multiplication by a field element \alpha is (\alpha P)(\lambda) = \sum_{k=0}^d (\alpha A_k) \lambda^k, which distributes over addition. Multiplication of matrix polynomials is defined analogously to scalar polynomials but using matrix multiplication for the coefficients: if R(\lambda) = P(\lambda) Q(\lambda), then R(\lambda) = \sum_{k=0}^{d+e} C_k \lambda^k, where C_k = \sum_{i+j=k} A_i B_j. This is the Cauchy product with matrix coefficient products. Unlike scalar polynomials, multiplication is not commutative in general, since P(\lambda) Q(\lambda) \neq Q(\lambda) P(\lambda) unless the coefficients commute. It is, however, associative and distributive over addition: P(\lambda) (Q(\lambda) + S(\lambda)) = P(\lambda) Q(\lambda) + P(\lambda) S(\lambda). The constant matrix polynomial I (the identity matrix, with all higher coefficients zero) serves as the multiplicative identity. The degree of a matrix polynomial P(\lambda) is the highest k such that A_k \neq 0 (the zero polynomial has degree -\infty by convention). For sums, \deg(P + Q) \leq \max(\deg P, \deg Q), with equality unless leading coefficients cancel. For products, \deg(P Q) = \deg P + \deg Q provided the product of leading coefficients is nonzero. The set of all n \times n matrix polynomials over the field forms a non-commutative ring under addition and multiplication, and together with scalar multiplication it is an algebra over the field. This structure generalizes the scalar polynomial ring and supports algebraic manipulations such as division with remainder and greatest common divisors in certain cases.
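
The termwise sum and the Cauchy product with matrix coefficient products can be sketched directly on coefficient lists. The following illustrative Python functions (poly_add and poly_mul are names introduced here, not from any particular library) also demonstrate the non-commutativity of the product.

```python
import numpy as np

def poly_add(P, Q):
    """Termwise sum of two matrix polynomials given as coefficient lists [A0, A1, ...]."""
    n = P[0].shape[0]
    d = max(len(P), len(Q))
    padded = lambda C: C + [np.zeros((n, n))] * (d - len(C))  # zero-pad the shorter list
    return [a + b for a, b in zip(padded(P), padded(Q))]

def poly_mul(P, Q):
    """Cauchy product with matrix coefficient products: C_k = sum_{i+j=k} A_i B_j."""
    n = P[0].shape[0]
    C = [np.zeros((n, n)) for _ in range(len(P) + len(Q) - 1)]
    for i, Ai in enumerate(P):
        for j, Bj in enumerate(Q):
            C[i + j] += Ai @ Bj
    return C

# Non-commutativity: P*Q and Q*P generally differ when the coefficients do not commute.
P = [np.array([[1.0, 0.0], [0.0, 2.0]]), np.array([[0.0, 1.0], [1.0, 0.0]])]
Q = [np.array([[0.0, 1.0], [0.0, 0.0]]), np.eye(2)]
print(np.allclose(poly_mul(P, Q)[1], poly_mul(Q, P)[1]))  # False
print([c.tolist() for c in poly_add(P, Q)])               # termwise sums A_k + B_k
```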

Evaluation

Evaluation of a matrix polynomial P(\lambda) = \sum_{k=0}^d A_k \lambda^k at a scalar \lambda yields an n \times n matrix P(\lambda). The direct method computes the powers \lambda^k (inexpensive, since they are scalars) and performs d scalar-matrix multiplications and d matrix additions, costing O(d n^2) operations. Horner's method provides an equally efficient and numerically stable approach by rewriting P(\lambda) in nested form: P(\lambda) = A_0 + \lambda (A_1 + \lambda (A_2 + \cdots + \lambda A_d \cdots )). Starting from the highest degree, initialize B_d = A_d, then B_{k-1} = A_{k-1} + \lambda B_k for k = d, \dots, 1, yielding P(\lambda) = B_0. This requires exactly d scalar-matrix multiplications and d matrix additions, avoids the explicit formation of powers, and limits error propagation in floating-point arithmetic. For example, for a quadratic P(\lambda) = A_0 + A_1 \lambda + A_2 \lambda^2, compute B_2 = A_2, B_1 = A_1 + \lambda B_2, P(\lambda) = A_0 + \lambda B_1. Even for very high degrees, Horner's method remains essentially optimal at O(d n^2) operations. Numerical stability follows from the backward stability of Horner's rule in the scalar case, applied entrywise to matrices: the computed \hat{P}(\lambda) satisfies \hat{P}(\lambda) = P(\lambda) + \Delta, where the perturbation \Delta is bounded by a small multiple of the unit roundoff times \sum_{k=0}^d |\lambda|^k \|A_k\|. Advanced methods like Paterson–Stockmeyer are unnecessary here, as no matrix powers are involved; they become relevant when evaluating P at a matrix argument.
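
A minimal sketch of the Horner scheme at a scalar argument, assuming NumPy; the helper name horner_eval is introduced here for illustration.

```python
import numpy as np

def horner_eval(coeffs, lam):
    """Evaluate P(lam) = sum_k coeffs[k] * lam**k by Horner's nested form.

    coeffs is the list [A0, A1, ..., Ad]; the loop uses exactly d scalar-matrix
    multiplications and d matrix additions.
    """
    B = coeffs[-1].copy()           # B_d = A_d
    for Ak in reversed(coeffs[:-1]):
        B = Ak + lam * B            # B_{k-1} = A_{k-1} + lam * B_k
    return B

A0 = np.array([[1.0, 0.0], [0.0, 2.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.eye(2)
lam = 3.0
assert np.allclose(horner_eval([A0, A1, A2], lam), A0 + A1 * lam + A2 * lam**2)
```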

Annihilating Polynomials

Characteristic Polynomial

For an n \times n polynomial matrix A(\lambda) with entries in the polynomial ring F[\lambda] over a field F, the characteristic polynomial is defined as \chi_{A}(\mu; \lambda) = \det(\mu I - A(\lambda)), where \mu is an indeterminate and I is the n \times n identity matrix. This is a monic polynomial in \mu of degree n, with coefficients that are polynomials in \lambda. The roots in \mu for fixed \lambda relate to the eigenvalues, but more importantly, the Cayley–Hamilton theorem extends to matrices over commutative rings, including polynomial rings: \chi_{A}(A(\lambda); \lambda) = 0, the zero matrix over F[\lambda]. This means substituting \mu = A(\lambda) into the polynomial yields the zero matrix. A proof follows from the general case using the adjugate: (\mu I - A(\lambda)) \operatorname{adj}(\mu I - A(\lambda)) = \chi_{A}(\mu; \lambda) I, and evaluating at \mu = A(\lambda) gives the result, with care taken to handle non-commutativity in the matrix polynomial ring. The coefficients of \chi_{A}(\mu; \lambda) can be computed using generalizations of Leverrier's algorithm adapted for polynomial entries, avoiding full determinant evaluations at multiple points. This involves recursive computation of traces and matrix products over the polynomial ring. The characteristic polynomial is invariant under similarity transformations over the polynomial ring: if B(\lambda) = S(\lambda)^{-1} A(\lambda) S(\lambda) for a unimodular polynomial matrix S(\lambda), then \chi_{B}(\mu; \lambda) = \chi_{A}(\mu; \lambda). This invariance highlights its role in classifying polynomial matrices up to similarity. Historically, the generalization of Cayley–Hamilton to commutative rings builds on Cayley's original 1858 result for matrices over fields. For a regular matrix polynomial P(\lambda) of degree d, the determinant \det P(\lambda) agrees, up to a nonzero constant factor, with the determinant of any linearization, since a linearization is unimodularly equivalent to \operatorname{diag}(P(\lambda), I_{n(d-1)}). This connection facilitates eigenvalue computation.
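
The extension of Cayley–Hamilton to F[\lambda] can be checked symbolically. The following sketch, assuming SymPy, forms \chi_A(\mu; \lambda) for a small illustrative example and substitutes \mu = A(\lambda) via a Horner-style loop over the coefficients.

```python
import sympy as sp

lam, mu = sp.symbols('lambda mu')

# A 2x2 polynomial matrix A(lambda) with entries in Q[lambda] (an illustrative choice)
A = sp.Matrix([[lam, 1], [0, lam**2]])

# Characteristic polynomial chi(mu; lambda) = det(mu*I - A(lambda)): monic of degree 2 in mu,
# with coefficients that are polynomials in lambda
chi = sp.expand((mu * sp.eye(2) - A).det())

# Substitute mu = A(lambda) via a Horner-style loop over the coefficients (highest degree first)
coeffs = sp.Poly(chi, mu).all_coeffs()
result = sp.zeros(2, 2)
for c in coeffs:
    result = result * A + c * sp.eye(2)

print(result.applyfunc(sp.expand))  # the 2x2 zero matrix, as Cayley-Hamilton guarantees over F[lambda]
```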

Minimal Polynomial

The minimal polynomial of a polynomial matrix A(\lambda) \in F[\lambda]^{n \times n}, denoted m_A(\mu; \lambda), is the unique monic polynomial in \mu of least degree with coefficients in F[\lambda] such that m_A(A(\lambda); \lambda) = 0, the zero matrix. It generates the annihilator ideal in F[\lambda][\mu] and divides any other annihilating polynomial, including the characteristic polynomial \chi_A(\mu; \lambda), in the ring F[\lambda][\mu]. Similar matrices over the polynomial ring share the same minimal polynomial. The minimal polynomial encodes the invariant factor structure of the polynomial matrix via its factorization into irreducibles. The roots in \mu are related to the eigenvalues, but multiplicities reflect the module structure over F[\lambda]: the degree in \mu of each irreducible factor corresponds to the highest power appearing in the invariant factors. For regular matrix polynomials, it relates to the finite and infinite elementary divisors. The degree of m_A equals the maximum degree among the invariant factors. To compute m_A(\mu; \lambda), one approach is to compute the Smith normal form to find the invariant factors and take their least common multiple in \mu. Alternatively, factor \chi_A over F[\lambda][\mu] and test divisors, or use Krylov-like methods adapted to the polynomial setting by generating powers of A(\lambda) and finding linear dependencies among them. These methods reveal the non-diagonalizable structure analogous to Jordan blocks but in terms of torsion modules. In the context of matrix polynomials, the minimal polynomial distinguishes the algebraic structure, with the exponent of each irreducible factor giving the index of nilpotency in the associated module, capturing deviations from free modules.
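
As a simple illustration of the dependency-search (Krylov-like) idea in the special case of a constant matrix over a field, the sketch below (the helper minimal_polynomial_coeffs is a name introduced here, not a library routine) finds the monic minimal polynomial by locating the first linear dependency among the vectorized powers; adapting it to genuinely polynomial entries would require exact arithmetic over F[\lambda].

```python
import numpy as np

def minimal_polynomial_coeffs(A, tol=1e-10):
    """Return monic coefficients [c_0, ..., c_{m-1}, 1] (ascending order) of the minimal
    polynomial of a constant matrix A, by searching for the first linear dependency
    among vec(I), vec(A), vec(A^2), ..."""
    n = A.shape[0]
    powers = [np.eye(n).ravel()]
    for m in range(1, n + 1):
        powers.append(np.linalg.matrix_power(A, m).ravel())
        # Try to write A^m as a combination of lower powers via least squares
        M = np.column_stack(powers[:-1])
        coeffs, *_ = np.linalg.lstsq(M, powers[-1], rcond=None)
        if np.linalg.norm(M @ coeffs - powers[-1]) < tol:
            return np.append(-coeffs, 1.0)   # A^m - sum_k c_k A^k = 0
    raise RuntimeError("no dependency found (cannot happen for m = n, by Cayley-Hamilton)")

# Example: the Jordan block [[2,1],[0,2]] has minimal polynomial (x-2)^2 = 4 - 4x + x^2
print(minimal_polynomial_coeffs(np.array([[2.0, 1.0], [0.0, 2.0]])))  # [ 4. -4.  1.]
```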

Series and Approximations

Geometric Series

The geometric series for scalars is given by \sum_{k=0}^\infty r^k = (1 - r)^{-1} for |r| < 1. In the matrix setting, the analogous infinite series is S = \sum_{k=0}^\infty A^k = (I - A)^{-1}, where A is a square matrix and I is the identity matrix of compatible dimension, provided that the spectral radius \rho(A) < 1. This condition ensures convergence in any matrix norm, as the powers A^k tend to the zero matrix. For finite truncations, the partial sum is S_n = \sum_{k=0}^n A^k = (I - A^{n+1})(I - A)^{-1}, assuming I - A is invertible. This formula follows from the scalar case by direct substitution and verification: multiplying both sides by I - A yields (I - A) S_n = I - A^{n+1}, which holds by telescoping the sum \sum_{k=0}^n A^k - \sum_{k=1}^{n+1} A^k = I - A^{n+1}. If \rho(A) \geq 1, the infinite series diverges. More generally, the resolvent (zI - A)^{-1} admits a Laurent series expansion as (zI - A)^{-1} = z^{-1} \sum_{k=0}^\infty \left( \frac{A}{z} \right)^k for |z| > \rho(A), where the series converges absolutely. A concrete example arises when A is nilpotent with A^m = 0 for some positive integer m; here, \rho(A) = 0 < 1, and the series terminates exactly at the finite sum \sum_{k=0}^{m-1} A^k = (I - A^m)(I - A)^{-1} = (I - A)^{-1}. For instance, the 2 \times 2 nilpotent Jordan block A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} satisfies A^2 = 0, so \sum_{k=0}^1 A^k = I + A = (I - A)^{-1}.
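
A short numerical check of these identities, assuming NumPy; the matrices A (spectral radius below one) and N (nilpotent) are illustrative choices.

```python
import numpy as np

# Matrix with spectral radius < 1 (eigenvalues 0.5 and 0.3)
A = np.array([[0.5, 0.2], [0.0, 0.3]])
assert max(abs(np.linalg.eigvals(A))) < 1

# Partial sums S_n = I + A + ... + A^n approach (I - A)^{-1}
S, term = np.zeros_like(A), np.eye(2)
for _ in range(200):
    S += term
    term = term @ A
print(np.allclose(S, np.linalg.inv(np.eye(2) - A)))              # True

# Nilpotent example: the series terminates after finitely many terms
N = np.array([[0.0, 1.0], [0.0, 0.0]])                           # N^2 = 0
print(np.allclose(np.eye(2) + N, np.linalg.inv(np.eye(2) - N)))  # True
```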

Convergence and Applications

The convergence of the matrix geometric series \sum_{k=0}^\infty A^k for a square matrix A is determined by the spectral radius \rho(A), defined as the maximum modulus of the eigenvalues of A. The series converges (in any matrix norm) if and only if \rho(A) < 1; it diverges if \rho(A) > 1. A sufficient condition for convergence is that \|A\| < 1 for some consistent matrix norm, though this is not necessary, as convergence can occur even if \|A\| \geq 1 provided \rho(A) < 1. In the boundary case \rho(A) = 1, the series diverges, but summation methods such as Abel summation can assign a meaningful sum through the limit of \sum_{k=0}^\infty r^k A^k as r \to 1^-, under additional conditions like the absence of nontrivial Jordan blocks for eigenvalues on the unit circle. For the partial sum S_n = \sum_{k=0}^n A^k, the error relative to the infinite sum S = (I - A)^{-1} when \|A\| < 1 satisfies \|S - S_n\| \leq \frac{\|A\|^{n+1}}{1 - \|A\|}, providing a practical truncation bound for computations. This Neumann series representation ties directly to the resolvent operator (zI - A)^{-1}: for z = 1 and \rho(A) < 1, the inverse (I - A)^{-1} is given by the Neumann series, connecting the geometric series to the broader theory of matrix functions. A primary application arises in solving linear systems of the form (I - A)x = b, where the fixed-point iteration x_{k+1} = A x_k + b with x_0 = 0 generates the partial sums of the Neumann series applied to b, converging to the unique solution x = (I - A)^{-1} b if and only if \rho(A) < 1. This iterative approach underpins basic stationary methods in numerical linear algebra, with the asymptotic convergence rate governed by \rho(A), ensuring linear convergence when the condition holds. Matrix polynomials extend this framework by serving as finite-degree approximants to the resolvent (I - A)^{-1}, particularly useful when the spectral radius condition prevents direct series summation; for instance, Krylov subspace and polynomial preconditioning methods generate such polynomials, minimizing the approximation error over low-degree polynomial spaces. Rational functions, built from ratios of matrix polynomials, further improve accuracy for resolvent approximation near the spectrum, though they require careful pole placement to maintain numerical stability.
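
The fixed-point iteration described above can be verified numerically. In the following sketch (assuming NumPy, with A and b chosen arbitrarily so that \rho(A) < 1), the iterates reproduce the solution of (I - A)x = b.

```python
import numpy as np

A = np.array([[0.4, 0.1], [0.2, 0.3]])
b = np.array([1.0, 2.0])
rho = max(abs(np.linalg.eigvals(A)))
assert rho < 1   # guarantees convergence of the iteration below

# Fixed-point iteration x_{k+1} = A x_k + b, starting from x_0 = 0;
# the k-th iterate equals the partial sum (I + A + ... + A^{k-1}) b.
x = np.zeros(2)
for _ in range(100):
    x = A @ x + b

exact = np.linalg.solve(np.eye(2) - A, b)
print(np.allclose(x, exact))   # True: linear convergence at asymptotic rate rho(A)
```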

Broader Applications

In Linear Algebra

Matrix polynomials extend classical linear algebra concepts, particularly spectral theory, to higher-degree eigenvalue problems. For a regular matrix polynomial P(\lambda) = \sum_{k=0}^d A_k \lambda^k, the eigenvalues are the roots of the characteristic equation \det P(\lambda) = 0, where \det P(\lambda) is a scalar polynomial of degree at most nd. This generalizes the characteristic polynomial of a single matrix, allowing analysis of infinite eigenvalues (at \lambda = \infty) and structural indices that describe the polynomial's nullity. Canonical forms provide invariant decompositions analogous to the Jordan canonical form. The spectral theory of matrix polynomials introduces canonical triples (A, B, C) for monic polynomials, where A and B form a block structure revealing Jordan chains for finite eigenvalues and Kronecker blocks for infinite ones. Standard pairs (E, A) linearize the polynomial while preserving eigenvalues, facilitating similarity invariants like the Smith–McMillan form over the field of rational functions, which reduces P(\lambda) to a diagonal matrix of invariant factors. These forms classify polynomials up to equivalence and highlight spectral divisors, enabling factorization into coprime spectral factors for stability analysis. Invariant subspaces for matrix polynomials are generalized through the kernels of spectral divisors. If Q(\lambda) divides P(\lambda), the generalized eigenspace associated with Q is invariant under the companion linearization, decomposing the space into primary components in analogy with the primary decomposition theorem. This structure is crucial for understanding non-diagonalizable cases, where the sizes of Jordan-like blocks are determined by the partial multiplicities of the elementary divisors of the polynomial.
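
For the quadratic example from the Examples section, the eigenvalues can be obtained directly as roots of \det P(\lambda); a brief sketch assuming SymPy:

```python
import sympy as sp

lam = sp.symbols('lambda')

# The quadratic example from the Examples section: P(lambda) = A0 + A1*lambda + I*lambda^2
A0 = sp.Matrix([[1, 0], [0, 2]])
A1 = sp.Matrix([[0, 1], [1, 0]])
P = A0 + A1 * lam + sp.eye(2) * lam**2

det_P = sp.expand(P.det())
print(det_P)              # lambda**4 + 2*lambda**2 + 2, a scalar polynomial of degree n*d = 4
print(sp.nroots(det_P))   # its four roots are the (finite) eigenvalues of the regular polynomial P
```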

In Numerical Methods

Numerical methods for matrix polynomials primarily address the polynomial eigenvalue problem (PEP): finding \lambda and v \neq 0 such that P(\lambda) v = 0. A standard approach is linearization, which converts a degree-d matrix polynomial into a linear pencil L(\lambda) = \lambda X - Y of size nd \times nd, preserving the eigenvalues and allowing use of generalized eigenvalue solvers like the QZ algorithm. Common linearizations include the companion form \lambda \begin{pmatrix} I & & & \\ & \ddots & & \\ & & I & \\ & & & A_d \end{pmatrix} - \begin{pmatrix} 0 & I & & \\ & \ddots & \ddots & \\ & & 0 & I \\ -A_0 & -A_1 & \cdots & -A_{d-1} \end{pmatrix}, which is straightforward to form but can be ill-conditioned for high degrees or structured polynomials (e.g., palindromic polynomials arising in applications). Structured linearizations, such as those preserving symmetry or palindromic structure, mitigate this by maintaining sparsity and spectral symmetries. For large-scale sparse PEPs, projection methods based on Krylov subspaces are essential. The Arnoldi algorithm builds an orthonormal basis for the Krylov subspace \mathcal{K}_m(L, v), projecting the pencil onto a smaller Hessenberg form for eigenvalue approximation via the implicitly restarted Arnoldi method (IRAM). For Hermitian definite PEPs, the Lanczos algorithm produces a tridiagonal projection, and convergence is accelerated with shift-and-invert strategies targeting interior eigenvalues. These methods handle non-normal cases through block Krylov approaches, estimating eigenvectors via residual minimization, with locking and purging of converged eigenpairs used to prevent their reappearance. Function approximation and perturbation analysis are also key. For evaluating P(\lambda) at specific points, Horner's rule extends to matrices with backward stability in floating-point arithmetic, bounding errors by O(\epsilon\, d\, \max_k \|A_k\|), where \epsilon is machine precision. Perturbation theory quantifies the sensitivity of eigenvalues to coefficient changes, with condition numbers involving the norms of the coefficient matrices and the left and right eigenvectors, guiding robust computations in applications like vibration analysis. Ongoing research focuses on scalable solvers for structured high-degree problems, incorporating spectral transformations to isolate eigenvalues of interest.
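
A minimal sketch of the linearization approach for a quadratic PEP, assuming NumPy and SciPy; it builds a companion-type pencil for the quadratic example used earlier and solves the resulting generalized eigenvalue problem with SciPy's QZ-based eig routine (the residual check at the end is illustrative).

```python
import numpy as np
from scipy.linalg import eig

# Quadratic eigenvalue problem P(lambda) v = (A0 + A1*lambda + A2*lambda^2) v = 0
A0 = np.array([[1.0, 0.0], [0.0, 2.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.eye(2)
n = A0.shape[0]
Zero = np.zeros((n, n))

# Companion linearization L(lambda) = lambda*X - Y of size nd x nd (here 4 x 4)
X = np.block([[A2, Zero], [Zero, np.eye(n)]])
Y = np.block([[-A1, -A0], [np.eye(n), Zero]])

# Generalized eigenvalue problem Y z = lambda X z, solved by the QZ-based routine
eigvals, Zvec = eig(Y, X)

# Eigenvectors of L have the form z = [lambda*v; v]; the bottom block recovers v
V = Zvec[n:, :]
residuals = [np.linalg.norm((A0 + lam * A1 + lam**2 * A2) @ V[:, j]) / np.linalg.norm(V[:, j])
             for j, lam in enumerate(eigvals)]
print(eigvals)          # the nd = 4 finite eigenvalues of P
print(max(residuals))   # close to machine precision: the linearization preserves eigenpairs
```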
