
Krylov subspace

In linear algebra, the Krylov subspace of order m, denoted K_m(A, v), is the linear subspace spanned by the vectors \{v, Av, A^2 v, \dots, A^{m-1} v\}, where A is an n \times n matrix and v is a nonzero vector in \mathbb{R}^n or \mathbb{C}^n. These subspaces are nested, with K_k(A, v) \subseteq K_{k+1}(A, v) for each k, and their dimension grows by at most one at each step until reaching the grade of v with respect to A, which is the degree of the minimal polynomial of A restricted to the cyclic subspace generated by v. The concept originates from the work of Russian mathematician Alexei Nikolaevich Krylov in 1931, who used matrix powers to approximate solutions to systems of linear ordinary differential equations via the characteristic polynomial. It gained prominence in numerical linear algebra during the 1950s through developments such as the Lanczos algorithm (1950) for tridiagonalization and the conjugate gradient (CG) method (1952) of Magnus Hestenes and Eduard Stiefel, which minimizes the error over these subspaces for symmetric positive definite systems. Subsequent extensions addressed nonsymmetric problems, leading to methods such as the biconjugate gradient method (BiCG, 1975) and GMRES (1986). Krylov subspaces underpin a class of efficient iterative projection methods for solving the large-scale sparse linear systems Ax = b that arise in scientific computing, such as those produced by finite element or finite difference discretizations of partial differential equations. These methods exploit the structure of A through matrix-vector products alone, avoiding full matrix factorizations, and are also applied to eigenvalue problems, optimization, and preconditioning across scientific and engineering disciplines. They are counted among the most influential numerical algorithms of the 20th century because of their convergence properties and their adaptability to large, sparse problems.

Definition and History

Definition

The Krylov subspace of order r, denoted \mathcal{K}_r(A, b), is defined as the linear span of the set of vectors generated by successively applying powers of the matrix A to the starting vector b: \mathcal{K}_r(A, b) = \operatorname{span} \{ b, Ab, A^2 b, \dots, A^{r-1} b \}, where A is an n \times n square matrix and b is a nonzero vector in \mathbb{R}^n or \mathbb{C}^n. The assumption that b \neq 0 ensures the subspace has dimension at least 1, while A being square allows for consistent powers up to order r-1. An equivalent formulation expresses the Krylov subspace as the column space (or range) of the associated Krylov matrix, which collects the generating vectors as its columns: K_r(A, b) = \begin{bmatrix} b & Ab & A^2 b & \cdots & A^{r-1} b \end{bmatrix}, \quad \mathcal{K}_r(A, b) = \operatorname{range} \left( K_r(A, b) \right). This matrix form highlights the role of K_r(A, b) in explicitly generating the subspace through its columns. In practical contexts, such as projection methods for solving linear systems Ax = b, the starting vector b often takes the form of the initial residual r_0 = b - A x_0, where x_0 is an initial approximation to the solution; this choice aligns the subspace with error reduction in iterative algorithms.
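
As a concrete illustration, the following Python sketch builds the Krylov matrix K_r(A, b) column by column and orthonormalizes it with a QR factorization; the helper name krylov_matrix and the small random example are purely illustrative, not part of any standard library.

```python
import numpy as np

def krylov_matrix(A, b, r):
    """Return K_r(A, b) = [b, Ab, ..., A^{r-1} b] as an n-by-r matrix."""
    K = np.empty((len(b), r))
    K[:, 0] = b
    for j in range(1, r):
        K[:, j] = A @ K[:, j - 1]      # reuse the previous column: A^j b = A (A^{j-1} b)
    return K

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))        # generic 6x6 matrix
b = rng.standard_normal(6)             # nonzero starting vector

K4 = krylov_matrix(A, b, 4)
Q, _ = np.linalg.qr(K4)                # columns of Q form an orthonormal basis of K_4(A, b)
print(np.linalg.matrix_rank(K4))       # dimension of K_4(A, b); equals 4 for generic A and b
```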

Historical Development

The Krylov subspace was first introduced by the Russian mathematician and naval engineer Aleksei Nikolaevich Krylov in his 1931 paper, where he developed a method for computing the coefficients of the characteristic polynomial in order to determine the eigenvalues of a matrix, specifically applied to finding the frequencies of small vibrations in mechanical systems. In this work, Krylov employed the subspace—generated by iteratively applying the matrix to an initial vector—to approximate solutions efficiently without expanding the full characteristic determinant. In the 1940s, connections emerged between these subspaces and the theory of cyclic subspaces, notably in Karl Hessenberg's 1942 dissertation, which explored the Hessenberg form and its implications for cyclic subspaces generated by matrix-vector powers. Following World War II, Krylov subspaces gained prominence in numerical linear algebra during the 1950s, particularly through iterative methods for solving large linear systems; for instance, Magnus R. Hestenes and Eduard Stiefel popularized their use in the conjugate gradient method introduced in 1952, building on earlier efforts at the Institute for Numerical Analysis. Implicit applications appeared even earlier, such as in Cornelius Lanczos's 1950 iteration method for eigenvalue problems, which relied on approximations within these subspaces without naming them explicitly. The subspaces were explicitly attributed to Krylov and termed "Krylov subspaces" in the literature starting in the late 1970s and early 1980s, as seen in works on iterative solvers, despite their prior implicit use in mid-20th-century developments.

Properties

Basic Properties

The Krylov subspace \mathcal{K}_r(A, b) exhibits a nested structure, satisfying \mathcal{K}_r(A, b) \subseteq \mathcal{K}_{r+1}(A, b) for each r \geq 1, which ensures that larger subspaces incorporate all preceding ones. Additionally, the action of the matrix A respects this nesting through the inclusion A \mathcal{K}_r(A, b) \subseteq \mathcal{K}_{r+1}(A, b): applying A to any vector in \mathcal{K}_r(A, b) yields a result within the next subspace. This gives a form of A-invariance in which each subspace is mapped into itself or into its one-step extension under the linear transformation defined by A, which facilitates iterative constructions in linear algebra. A core algebraic property links the subspace to polynomial actions: \mathcal{K}_r(A, b) = \{ p(A) b \mid p \text{ is a polynomial with } \deg(p) < r \}, the set of all vectors obtainable by applying polynomials of degree less than r to the initial vector b. This polynomial characterization provides a natural framework for analyzing the subspace's structure and its role in approximation theory. Finally, once r reaches the grade of b, \mathcal{K}_r(A, b) is the smallest A-invariant subspace containing b, so it captures the minimal extension needed to include all iterates of b under powers of A. This minimality underscores its efficiency as a building block for iterative methods in numerical computation.
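
The polynomial characterization can be checked numerically: any vector p(A)b with deg(p) < r must lie in the span of the power basis. The sketch below (illustrative sizes and values, NumPy only) applies a random polynomial to b via Horner's rule and verifies that the result is in the column space of the Krylov matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 8, 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Power basis of K_r(A, b)
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(r)])

# Random polynomial p(z) = c0 + c1 z + ... + c_{r-1} z^{r-1}, applied to b by Horner's rule
coeffs = rng.standard_normal(r)
v = np.zeros(n)
for c in reversed(coeffs):
    v = A @ v + c * b                   # v <- A v + c b accumulates p(A) b

# p(A) b must be a combination of the Krylov columns: the least-squares fit is exact
c_ls, *_ = np.linalg.lstsq(K, v, rcond=None)
print(np.linalg.norm(K @ c_ls - v))     # ~ 0 up to rounding
```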

Dimension and Independence

The dimension of the Krylov subspace \mathcal{K}_r(A, b) satisfies \dim \mathcal{K}_r(A, b) \leq \min(n, r), where n is the dimension of the ambient space, since it is spanned by at most r vectors. This bound is attained up to the grade r_0 of the vector b with respect to the matrix A, defined as the smallest integer such that A^{r_0} b lies in the span of \{b, Ab, \dots, A^{r_0-1} b\}; equivalently, r_0 = \deg m_A(b), where m_A(b) is the monic polynomial of lowest degree annihilating b, i.e., m_A(b)(A)\, b = 0. Beyond this point the dimension stabilizes, and \dim \mathcal{K}_r(A, b) = r_0 for all r \geq r_0. The generating vectors \{b, Ab, \dots, A^{r-1} b\} are linearly independent precisely when r \leq r_0, so the Krylov subspace grows by exactly one dimension at each step up to the grade. For r > r_0, linear dependence arises, as the higher powers A^k b for k \geq r_0 can be expressed as linear combinations of the preceding vectors via the minimal polynomial relation. This independence property underpins the utility of Krylov subspaces in iterative methods, where the basis vectors provide a structured approximation space until saturation. Krylov's theorem establishes that the subspace \mathcal{K}_{r_0}(A, b) coincides with the cyclic subspace generated by b under A, meaning A \mathcal{K}_{r_0}(A, b) \subseteq \mathcal{K}_{r_0}(A, b), and further iterations do not enlarge it. In this stabilized subspace, the action of A is fully captured by the lower-dimensional structure. The grade r_0, and thus the maximal dimension of the Krylov subspace, is tied to the Jordan canonical form of A: m_A(b) divides the minimal polynomial of A, whose degree equals the sum, over the distinct eigenvalues, of the sizes of their largest Jordan blocks, and this degree bounds r_0 for every starting vector b. This connection shows how the grade reflects the algebraic structure and defectiveness of the spectral components that contribute to b.
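
The saturation of the dimension at the grade can be observed directly. In the illustrative example below, the starting vector excites only three eigendirections of a diagonal matrix, so the rank of the Krylov matrix stops growing at r_0 = 3; the specific matrix and vector are chosen only to make the effect visible.

```python
import numpy as np

# Block-diagonal matrix with a starting vector supported on only the first block,
# so the grade of b is smaller than n and the Krylov dimension saturates early.
A = np.zeros((6, 6))
A[:3, :3] = np.diag([1.0, 2.0, 3.0])     # b has components only along these eigenvalues
A[3:, 3:] = np.diag([4.0, 5.0, 6.0])
b = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

for r in range(1, 7):
    K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(r)])
    print(r, np.linalg.matrix_rank(K))
# Prints (1,1), (2,2), (3,3), (4,3), (5,3), (6,3): the dimension stops growing at the
# grade r0 = 3, the degree of the minimal polynomial of b with respect to A.
```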

Applications

In Numerical Linear Algebra

Krylov subspaces play a central role in iterative methods for solving large-scale linear systems of the form Ax = b, where A is a large, often sparse or structured matrix. These methods generate a sequence of approximating subspaces \mathcal{K}_r(A, r_0) = \operatorname{span}\{ r_0, Ar_0, A^2 r_0, \dots, A^{r-1} r_0 \}, with initial residual r_0 = b - A x_0, and seek an approximate solution x_r \in x_0 + \mathcal{K}_r(A, r_0) that minimizes the residual norm \| b - A x_r \| or a related error measure over this low-dimensional subspace. This projection approach enables efficient approximation even when the full solution space is high-dimensional, exploiting the fact that good approximations often lie in small Krylov subspaces when A has clustered or extremal eigenvalues. In eigenvalue problems, Krylov subspaces facilitate the computation of extremal eigenvalues by projecting the original matrix A onto \mathcal{K}_r(A, q), resulting in a reduced matrix of Hessenberg form for general A or tridiagonal form for symmetric A. The eigenvalues of this projected matrix, known as Ritz values, serve as approximations to those of A, with typically rapid convergence for eigenvalues at the spectral extremes. For nonsymmetric matrices, the Arnoldi process constructs an orthonormal basis for \mathcal{K}_r, yielding the Hessenberg reduction; for symmetric matrices, the Lanczos algorithm produces the tridiagonal form; both exploit the subspace structure to focus computational effort on the dominant spectral components. A key advantage of Krylov subspace methods in practice is their matrix-free nature: they require only evaluations of matrix-vector products Av rather than explicit storage or formation of A. This makes them particularly suitable for sparse matrices, where A may arise from discretizations of partial differential equations, or for black-box operators where the matrix is implicit or too costly to assemble. For instance, in the GMRES method the residual at each iteration is orthogonal to A\,\mathcal{K}_r(A, r_0), which is equivalent to minimizing the residual norm over the subspace and ensures its monotonic decrease and robust convergence behavior for nonsymmetric systems.
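
A short example of the matrix-free, Ritz-value viewpoint: SciPy's eigsh (a Lanczos-type implicitly restarted method from ARPACK) is given only the action v -> Av through a LinearOperator and returns approximations to the extremal eigenvalues. The diagonal operator below is a toy stand-in for any black-box operator and is purely illustrative.

```python
import numpy as np
import scipy.sparse.linalg as spla

# A "black-box" operator: only the action v -> A v is exposed, never the matrix itself.
# Here the action is diagonal scaling by d (eigenvalues 1..1000), chosen for illustration.
n = 1000
d = np.arange(1.0, n + 1.0)
Aop = spla.LinearOperator((n, n), matvec=lambda v: d * np.ravel(v), dtype=float)

# Ritz values from a Krylov (Lanczos) subspace approximate the extremal eigenvalues
lam_max = spla.eigsh(Aop, k=3, which="LA", return_eigenvectors=False)
lam_min = spla.eigsh(Aop, k=3, which="SA", return_eigenvectors=False)
print(lam_max)   # approximately the three largest eigenvalues 998, 999, 1000 (order may vary)
print(lam_min)   # approximately the three smallest eigenvalues 1, 2, 3
```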

In Control Theory and Other Fields

In control theory, Krylov subspaces play a fundamental role in analyzing the controllability of linear time-invariant systems described by the state-space model \dot{x} = A x + B u, where A is the system matrix and B is the input matrix. The controllable subspace is spanned by the Krylov subspace \mathcal{K}(A, B) = \operatorname{span}\{B, AB, A^2 B, \dots \}, extended to handle the multi-column B by generating block Krylov subspaces. A system is controllable if the rank of this subspace equals the state dimension n, ensuring that any initial state can be driven to any target state using the input u. This rank condition leverages the properties of Krylov subspaces: full rank guarantees complete controllability without the need to form and inspect the full controllability matrix explicitly. Dually, observability assesses whether the system's internal states can be inferred from the output y = C x, where C is the output matrix. The observable subspace is determined by the Krylov subspace \mathcal{K}(A^T, C^T), and the system is observable if its rank equals n. Algorithms such as the Arnoldi process applied to A^T and C^T compute an orthonormal basis for the orthogonal complement of the unobservable subspace, enabling efficient verification without forming the full observability matrix. This duality mirrors the controllability analysis and is foundational for state estimation in control systems. Krylov subspaces are also central to model reduction techniques, which generate low-order approximations of high-dimensional systems while preserving key dynamical properties. In balanced truncation, the controllability and observability Gramians are approximated via projections onto Krylov subspaces using methods like the Arnoldi or Lanczos algorithms, yielding low-rank factors without solving large Lyapunov equations. A balancing transformation is then derived from the singular value decomposition of these factors, allowing truncation of states corresponding to small Hankel singular values to produce a reduced balanced realization that maintains stability and error bounds relative to the original system. This approach is particularly effective for large-scale systems in control design, where reduced models make controller synthesis and simulation tractable. Beyond core control applications, Krylov subspaces enable moment-matching methods for simulating large-scale electronic circuits, where transfer functions are approximated by matching Taylor series moments around a frequency expansion point. These methods, based on block Arnoldi or Lanczos processes, generate reduced-order models that preserve the first several moments of the original interconnect system, improving computational efficiency in time-domain simulations without sacrificing accuracy, passivity, or stability. In control design, Krylov-based Padé approximants extend this idea by providing rational approximations to the system transfer function, matching both low- and high-frequency behavior to support robust controller tuning and frequency-domain analysis.
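
A minimal sketch of the rank test described above, using the block Krylov (controllability) matrix of a small illustrative system; the matrices and the helper name controllability_matrix are hypothetical examples, not a library API.

```python
import numpy as np

def controllability_matrix(A, B):
    """Block Krylov matrix [B, AB, ..., A^{n-1} B] for the pair (A, B)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Example: a chain of three integrators driven through the last state only
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])

C_mat = controllability_matrix(A, B)
print(np.linalg.matrix_rank(C_mat) == A.shape[0])   # True: the pair (A, B) is controllable

# Observability is checked dually on (A.T, C.T): full rank of that block Krylov
# matrix means every state can be reconstructed from the output.
```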

Algorithms and Methods

Orthogonalization Techniques

Orthogonalization techniques are essential for generating bases of Krylov subspaces, because the power basis \{v, Av, A^2 v, \dots, A^{r-1} v\} becomes increasingly ill-conditioned for larger r due to the rapid growth or decay in the norms of higher powers, leading to numerical instability in computations. By enforcing orthogonality through processes like Gram-Schmidt, these methods produce well-conditioned orthonormal bases that allow reliable projections onto the subspace. The Arnoldi process is the foundational algorithm for computing an orthonormal basis V_r = [v_1, v_2, \dots, v_r] of the Krylov subspace \mathcal{K}_r(A, v_1), where A is a general matrix and v_1 is the starting vector with \|v_1\| = 1. At each step j = 1, \dots, r, it applies A to v_j and orthogonalizes the result against the previous basis vectors using modified Gram-Schmidt to obtain v_{j+1}, simultaneously building an upper Hessenberg matrix H_r \in \mathbb{R}^{r \times r}. The key relation is A V_r = V_r H_r + h_{r+1,r} v_{r+1} e_r^T, where e_r is the r-th unit vector and h_{r+1,r} = \| w_{r+1} \|, with w_{r+1} the orthogonalized vector before normalization. For symmetric or Hermitian matrices A, the Lanczos algorithm specializes the Arnoldi process, exploiting symmetry to produce a tridiagonal matrix T_r via a three-term recurrence that reduces the orthogonalization to interactions with only the two most recent basis vectors. This yields the relation A V_r = V_r T_r + \beta_r v_{r+1} e_r^T, where \beta_r > 0 is the recurrence coefficient and V_r remains orthonormal. The tridiagonal structure arises because symmetry forces h_{i,j} = 0 whenever |i - j| > 1. These techniques maintain orthogonality incrementally, mitigating the conditioning problems of the power basis and allowing efficient storage of the reduced matrices—tridiagonal for Lanczos (O(r) space) or Hessenberg for Arnoldi (O(r^2) space)—which provide compact projections of the original problem onto the subspace. The same bases are then used to approximate eigenvalues through Ritz pairs computed from the eigenvalues of the reduced matrices.
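
The following is a compact NumPy sketch of the Arnoldi process with modified Gram-Schmidt; the function name and breakdown tolerance are illustrative. It returns V_{r+1} and the (r+1)-by-r Hessenberg matrix satisfying the Arnoldi relation quoted above, which the final print verifies.

```python
import numpy as np

def arnoldi(A, v1, r):
    """Arnoldi process: returns V (n x (r+1)) with orthonormal columns and
    H ((r+1) x r) upper Hessenberg such that A @ V[:, :r] == V @ H."""
    n = len(v1)
    V = np.zeros((n, r + 1))
    H = np.zeros((r + 1, r))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(r):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt against previous vectors
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # "happy breakdown": the subspace is A-invariant
            return V[:, : j + 1], H[: j + 1, : j + 1]   # note: smaller shapes in this case
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 50))
v1 = rng.standard_normal(50)
V, H = arnoldi(A, v1, 10)
print(np.linalg.norm(A @ V[:, :-1] - V @ H))   # ~ 0: the Arnoldi relation holds
```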

Iterative Solvers

Krylov subspace methods form the foundation of many iterative solvers for large-scale linear systems Ax = b, where the solution is sought by projecting onto successively larger Krylov subspaces \mathcal{K}_k(A, r_0) = \operatorname{span}\{ r_0, Ar_0, \dots, A^{k-1} r_0 \} generated from the initial residual r_0 = b - A x_0. These methods exploit the subspace's ability to approximate the solution efficiently, often converging in far fewer operations than direct methods require for sparse or structured matrices. The conjugate gradient (CG) method, developed for symmetric positive definite matrices A, constructs an A-orthogonal basis for \mathcal{K}_r(A, r_0) and minimizes the quadratic functional x^T A x - 2 b^T x over this subspace at each iteration. This minimization ensures that the error is A-orthogonal to the current subspace (equivalently, the residual is orthogonal to it in the standard inner product), leading to rapid convergence for well-conditioned systems. The method requires only matrix-vector products and is particularly effective for elliptic partial differential equations discretized on grids. For nonsymmetric matrices A, the generalized minimal residual (GMRES) method orthogonalizes the Krylov basis using the Arnoldi process to build an orthonormal basis for \mathcal{K}_r(A, r_0), then minimizes the Euclidean norm of the residual \| b - A x \|_2 over the affine subspace x_0 + \mathcal{K}_r(A, r_0). This least-squares problem is solved via a small QR factorization of the Hessenberg matrix from Arnoldi, making GMRES robust but storage-intensive due to the growing basis. GMRES is widely used in applications such as computational fluid dynamics and circuit simulation, where A lacks symmetry. Other prominent Krylov-based solvers include the biconjugate gradient stabilized (BiCGSTAB) method for nonsymmetric systems, which extends the biconjugate gradient approach by introducing local minimal residual steps to smooth convergence and reduce irregular behavior while implicitly working with a pair of Krylov subspaces. For symmetric indefinite matrices, the minimal residual (MINRES) method applies Lanczos tridiagonalization to generate an orthonormal basis for \mathcal{K}_r(A, r_0) and minimizes \| b - A x \|_2 over the subspace, avoiding the breakdowns that can occur when CG is applied to indefinite systems. In all these methods, the dimension of the Krylov subspace directly influences convergence, with faster reduction in residual norms for matrices with clustered eigenvalues. To address practical limitations such as memory constraints from expanding subspaces, restarting techniques periodically truncate the Krylov basis—e.g., GMRES(m) restarts every m steps—though this can slow convergence by losing subspace information. Preconditioning transforms the system to \tilde{A} = M^{-1} A or \tilde{A} = A M^{-1} (with nonsingular M \approx A) to improve the condition number and accelerate convergence, commonly using incomplete LU factorizations or multigrid cycles for sparse matrices in scientific computing.
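
In practice these solvers are usually called through a library. The sketch below assembles a standard 2-D Poisson matrix and solves it with SciPy's cg and gmres; solver defaults are used, and tolerance keywords are omitted because their names vary between SciPy versions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson matrix (symmetric positive definite) from the 5-point finite-difference stencil
m = 30
T = sp.diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(m, m))
I = sp.identity(m)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()    # m*m unknowns
b = np.ones(A.shape[0])

# Conjugate gradients: appropriate because A is symmetric positive definite
x_cg, info_cg = spla.cg(A, b)

# Restarted GMRES: also applicable to general nonsymmetric matrices
x_gm, info_gm = spla.gmres(A, b, restart=30)

print(info_cg, info_gm)                         # 0 signals convergence to the default tolerance
print(np.linalg.norm(A @ x_cg - b), np.linalg.norm(A @ x_gm - b))
```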

Challenges and Extensions

Numerical Issues

In finite precision arithmetic, the Gram-Schmidt orthogonalization used in the Arnoldi and Lanczos algorithms to generate an orthonormal basis for the Krylov subspace suffers a gradual loss of orthogonality due to rounding errors, which can degrade the accuracy of subsequent projections and approximations. This phenomenon is particularly pronounced in long iterations, where accumulated errors cause the basis vectors to deviate from orthogonality, often requiring remedial measures such as modified Gram-Schmidt or selective reorthogonalization to restore it. The underlying power basis of the Krylov subspace, consisting of successive matrix powers applied to the starting vector, can become severely ill-conditioned when the eigenvalues of the matrix span a wide range of magnitudes, leading to exponential growth or decay of the basis vectors and substantial amplification of rounding errors in the subspace approximation. This ill-conditioning stems from the Vandermonde-like structure of the basis, which is notoriously sensitive to perturbations, prompting the preference for orthonormal bases in practice despite the added computational overhead. Maintaining a full basis for a Krylov subspace of dimension r in an n \times n setting demands O(n r) storage for the orthonormal vectors, posing significant challenges for large-scale problems where r may approach n; restarting techniques address this by truncating the subspace periodically and reinitializing, though they can impede convergence by discarding potentially useful information. The efficacy of Krylov subspace methods is also sensitive to the choice of starting vector b: an inappropriate selection—such as one lacking components in dominant eigenspaces—generates a deficient subspace that fails to approximate critical directions, resulting in slow convergence or missed spectral features. In the Lanczos process, near-degenerate eigenvalues exacerbate the loss of orthogonality, potentially leading to breakdowns where linear dependence among the basis vectors halts progress.
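
The ill-conditioning of the raw power basis is easy to observe numerically. The sketch below (illustrative sizes and spectrum) normalizes each power A^j b but does not orthogonalize, and the resulting basis matrix has an enormous condition number; Arnoldi and Lanczos avoid this by orthogonalizing each new vector as it is generated.

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 200, 30
A = np.diag(np.logspace(0, 6, n))        # eigenvalues spanning six orders of magnitude
b = rng.standard_normal(n)

# Normalized (but not orthogonalized) power basis: columns collapse onto the dominant eigenvector
K = np.empty((n, r))
K[:, 0] = b / np.linalg.norm(b)
for j in range(1, r):
    w = A @ K[:, j - 1]
    K[:, j] = w / np.linalg.norm(w)

print(f"condition number of the power basis: {np.linalg.cond(K):.2e}")   # astronomically large

# Orthonormalizing the same columns (as Arnoldi/Lanczos do step by step) yields a basis
# with condition number ~1, at the price of extra inner products and storage.
Q, _ = np.linalg.qr(K)
print(f"condition number after orthonormalization: {np.linalg.cond(Q):.2e}")
```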

Generalizations and Modern Uses

Rational Krylov subspaces extend the classical formulation by incorporating shifts or poles, defined as \mathcal{K}_r(A, b; \xi) = q_{r-1}(A)^{-1} \operatorname{span}\{ b, A b, \dots, A^{r-1} b \}, where q_{r-1}(z) = \prod_{i=1}^{r-1} (z - \xi_i) and \xi = (\xi_1, \dots, \xi_{r-1}) are the chosen poles. This generalization improves convergence for matrix function approximations, particularly when eigenvalues are clustered, by placing poles near spectral clusters to enhance resolution in those regions. Such methods have been applied in model order reduction and in exponential integrators for stiff systems, offering superior performance over polynomial Krylov spaces on targeted spectral intervals. Block Krylov subspaces generalize the standard approach to handle multiple starting vectors simultaneously, defined as \mathcal{K}_r(A, B) = \operatorname{span}\{ B, A B, \dots, A^{r-1} B \}, where B is a matrix whose columns are the initial block of vectors. This structure is particularly advantageous in parallel computing environments, as it enables the concurrent solution of systems with multiple right-hand sides, reducing communication overhead and improving scalability on distributed architectures. Block methods have been employed in seismic imaging and fluid dynamics simulations, where they accelerate the approximation of matrix functions and eigenvalue problems for large-scale data. In machine learning, Krylov subspaces facilitate kernel approximations through methods like the Nyström approach combined with low-rank singular value decomposition (SVD), enabling efficient handling of massive kernel matrices without full computation. Randomized block Krylov techniques enhance this by providing stronger low-rank approximations with fewer iterations, achieving near-optimal error bounds for approximate SVD and Gaussian processes on datasets exceeding millions of points. Krylov-based algorithms, such as quantum adaptations of the Lanczos method, have advanced ground- and excited-state energy estimation in quantum computing since the 2010s by constructing subspaces from sparse Hamiltonians on noisy intermediate-scale quantum devices. These methods use block encodings to iteratively build the Krylov basis, estimating low-lying eigenvalues with polylogarithmic query complexity relative to system size and outperforming phase estimation for many-body problems. Recent developments in the 2020s integrate Krylov subspaces into solvers for tensor and multilinear systems, as in tensor Krylov methods based on the T-product, which approximate solutions to large tensor equations with reduced tensor ranks. In randomized algorithms for large-scale data, block Krylov variants enable low-rank approximations of matrix functions and related problems, reducing the number of inner products required so that they scale efficiently to high-dimensional datasets while preserving accuracy.
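
As one concrete modern use, the sketch below implements a basic randomized block Krylov low-rank approximation in the spirit of the randomized SVD literature cited here; the function name, block size, and number of Krylov steps are illustrative assumptions rather than a fixed algorithm specification.

```python
import numpy as np

def randomized_block_krylov_svd(A, k, q=3, rng=None):
    """Approximate the top-k SVD of A from the block Krylov subspace
    spanned by [A*Omega, (A A^T) A*Omega, ..., (A A^T)^q A*Omega]."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    Omega = rng.standard_normal((n, k))          # random starting block
    blocks = [A @ Omega]
    for _ in range(q):
        blocks.append(A @ (A.T @ blocks[-1]))    # one more block Krylov step
    Q, _ = np.linalg.qr(np.hstack(blocks))       # orthonormal basis of the block subspace
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(4)
A = rng.standard_normal((1000, 300)) @ np.diag(np.linspace(1.0, 0.01, 300))
U, s, Vt = randomized_block_krylov_svd(A, k=10, rng=rng)

exact = np.linalg.svd(A, compute_uv=False)[:10]
print(np.max(np.abs(s - exact) / exact))         # small relative error on the leading values
```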

References

  1. Overview of Krylov subspace methods with applications to …
  2. A Brief Introduction to Krylov Space Methods for Solving Linear …
  3. Extrapolation and Krylov Subspace Methods: A Historical Approach.
  4. Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, 2003.
  5. The Origin and Development of Krylov Subspace Methods.
  6. Krylov subspaces and annihilating polynomials, arXiv:1412.1538 [cs.IT], December 2014.
  7. M. R. Hestenes and E. Stiefel, Methods of Conjugate Gradients for Solving Linear Systems, Research Paper 2379, December 1952.
  8. C. Lanczos, An Iteration Method for the Solution of the Eigenvalue Problem of …, 1950.
  9. GMRES: A Generalized Minimal Residual Algorithm for Solving …
  10. Krylov Space Methods on State-Space Control Models.
  11. Efficient Balance-and-Truncate Model Reduction for Large Scale …, November 2000.
  12. Electronic Circuit Simulation and the Development of New Krylov …
  13. Numerical Methods for Large Eigenvalue Problems, revised edition.
  14. Krylov Subspace Methods for the Eigenvalue Problem, UCSD CSE.
  15. Krylov Subspace Solvers and Preconditioners.
  16. M. R. Hestenes and E. Stiefel, Methods of Conjugate Gradients for Solving Linear Systems, Research Paper 2379, December 1952.
  17. H. A. van der Vorst, Bi-CGSTAB: A Fast and Smoothly Converging Variant of Bi-CG for the Solution of Nonsymmetric Linear Systems.
  18. C. C. Paige and M. A. Saunders, Solution of Sparse Indefinite Systems of Linear Equations, SIAM.
  19. Preconditioners for Krylov Subspace Methods: An Overview, October 2020.
  20. The Influence of Orthogonality on the Arnoldi Method, ScienceDirect, April 2000.
  21. On the Loss of Orthogonality in the Gram-Schmidt …, CERFACS.
  22. Error Analysis of Krylov Methods in a Nutshell, SIAM.
  23. Krylov Subspace Residual and Restarting for Certain Second Order …
  24. Convergence of Polynomial Restart Krylov Methods for Eigenvalue …
  25. Convergence of Restarted Krylov Subspaces to Invariant …, SIAM.
  26. S. Güttel, Rational Krylov Approximation of Matrix Functions, March 2012.
  27. Rational Krylov subspaces in dimension reduction, arXiv:2103.04054 [math.NA], December 2021.
  28. Block Krylov Space Methods for Linear Systems with Multiple Right …
  29. A Review of Block Krylov Subspace Methods for Multisource …, June 2015.
  30. Large-Scale Nyström Kernel Matrix Approximation Using …
  31. Randomized Block Krylov Methods for Stronger and Faster …
  32. Quantum Krylov Subspace Algorithms for Ground and Excited State …, September 2021.
  33. Exact and Efficient Lanczos Method on a Quantum Computer, May 2023.
  34. Tensor Krylov Subspace Methods via the T-Product for Large …, arXiv, May 2024.
  35. Randomized Block-Krylov Subspace Methods for Low-Rank …, arXiv, February 2025.