Lanczos algorithm

The Lanczos algorithm is an iterative procedure for approximating the extremal eigenvalues and corresponding eigenvectors of large, sparse, symmetric matrices by constructing an orthonormal basis for the Krylov subspace generated from an initial vector and the matrix. Developed by Hungarian-American mathematician Cornelius Lanczos in 1950, it transforms the original matrix into a tridiagonal form through successive orthogonalization steps, enabling efficient computation of spectral information without full matrix diagonalization. The method relies on three-term recurrence relations to generate Lanczos vectors, which span the subspace and yield a smaller tridiagonal matrix whose eigenvalues approximate those of the original. At its core, the algorithm begins with an initial unit vector q_1 and iteratively computes subsequent vectors via A q_j = \beta_{j-1} q_{j-1} + \alpha_j q_j + \beta_j q_{j+1}, where A is the symmetric matrix, \alpha_j = q_j^T A q_j, and \beta_j ensures normalization, producing a tridiagonal matrix T_k that captures the projected action of A. This process, a specialization of the Arnoldi iteration for Hermitian matrices, converges rapidly to extreme eigenvalues when the initial vector has components in the corresponding eigenvectors, with theoretical guarantees derived from Chebyshev polynomial approximations. Early analyses by Kaniel (1966) and Paige (1971) addressed loss of orthogonality due to finite-precision arithmetic, leading to practical implementations with selective reorthogonalization.

Variations of the Lanczos algorithm extend its utility: the block Lanczos method, introduced by Cullum and Donath in 1974, processes multiple vectors simultaneously to handle clustered eigenvalues and improve parallelism. Randomized block variants, developed in the 2010s by researchers like Musco and Musco (2015), incorporate random projections for low-rank approximations in high-dimensional data applications such as principal component analysis and matrix compression. Additionally, adaptations like the Golub-Kahan-Lanczos bidiagonalization (1965) apply similar Krylov techniques to the singular value decomposition of non-symmetric matrices.

Widely used in scientific computing, spectral graph theory, and quantum mechanics simulations, the algorithm remains foundational for solving large-scale eigenvalue problems where direct methods are infeasible.

Background and Motivation

Symmetric Eigenvalue Problems

The symmetric eigenvalue problem involves determining scalars λ, known as eigenvalues, and corresponding nonzero vectors v, known as eigenvectors, that satisfy the equation Av = λv, where A is a real symmetric n × n matrix. This problem arises frequently in scientific and engineering applications, where the eigenvalues often represent physically meaningful quantities such as energies or frequencies, and the eigenvectors describe associated modes or states.

Real symmetric matrices possess several advantageous properties that simplify the eigenvalue problem. All eigenvalues are real numbers, and the matrix is diagonalizable with an orthonormal basis of eigenvectors. The spectral theorem formalizes this by asserting that any real symmetric matrix A can be decomposed as A = QΛQᵀ, where Q is an orthogonal matrix whose columns are the eigenvectors, and Λ is a diagonal matrix containing the eigenvalues on its diagonal. These properties ensure that the eigenvectors corresponding to distinct eigenvalues are orthogonal, facilitating numerical computations and theoretical analysis.

Direct methods for solving the symmetric eigenvalue problem, such as the QR algorithm, incur a computational cost of O(n³) floating-point operations, which becomes infeasible for large n, especially when A is sparse and only a subset of eigenvalues is needed. This high cost motivates the development of iterative approaches that exploit sparsity and target specific spectral regions, enabling efficient solutions for matrices arising in high-dimensional simulations.

The significance of symmetric eigenvalue problems emerged prominently in the early 20th century, fueled by advances in quantum mechanics—where the eigenvalues of the Hamiltonian operator correspond to discrete energy levels—and in vibration analysis, where they determine the natural frequencies and modes of mechanical structures like beams and plates. These applications underscored the need for robust computational techniques to handle increasingly complex systems.
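As a minimal numerical illustration of the spectral theorem (a sketch using NumPy on a small random symmetric matrix; the seed and size are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                               # symmetrize to get a real symmetric matrix

lam, Q = np.linalg.eigh(A)                      # real eigenvalues, orthonormal eigenvectors
print(np.allclose(Q.T @ Q, np.eye(5)))          # Q is orthogonal
print(np.allclose(Q @ np.diag(lam) @ Q.T, A))   # A = Q Λ Qᵀ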

Connection to Iterative Methods

The power method, first described by Richard von Mises and Hilda Pollaczek-Geiringer in 1929, serves as a foundational iterative technique for approximating the dominant eigenvalue and its associated eigenvector of a matrix A. The algorithm initializes with an arbitrary nonzero starting vector q_1, then iteratively updates q_{k+1} = A q_k / \|A q_k\|, where \|\cdot\| denotes the Euclidean norm, generating a sequence that converges to the eigenvector corresponding to the eigenvalue of largest magnitude, assuming it is real, simple, and strictly dominant. The approximate eigenvalue at each step can be obtained via the Rayleigh quotient q_k^T A q_k. This method relies solely on matrix-vector multiplications, making it computationally efficient for large sparse matrices, but its linear convergence rate is governed by the ratio |\lambda_2 / \lambda_1|, where \lambda_1 and \lambda_2 are the dominant and subdominant eigenvalues, respectively.

Despite its simplicity, the power method has notable limitations that motivated subsequent advancements. Convergence slows dramatically when eigenvalues cluster near the dominant one, as the ratio |\lambda_2 / \lambda_1| approaches unity, potentially requiring an impractically large number of iterations. Moreover, the method does not inherently maintain orthogonality among generated vectors, leading to accumulated numerical errors in finite precision, and it computes only a single extremal eigenpair, offering no direct means to access interior eigenvalues or the full spectrum. These shortcomings highlighted the need for enhanced iterative strategies that could accelerate convergence and produce multiple eigenpairs while preserving orthogonality.

The Arnoldi iteration, introduced by William E. Arnoldi in 1951, emerged as a key precursor to more sophisticated methods, particularly for nonsymmetric eigenvalue problems. It generalizes the power method by constructing an orthonormal basis for the Krylov subspace spanned by successive powers of A applied to the starting vector, resulting in a Hessenberg-matrix approximation whose eigenvalues serve as Ritz values. For symmetric matrices, the symmetry imposes additional structure, enabling a three-term recurrence that reduces storage and computational costs compared to the general case. This symmetry exploitation distinguishes symmetric iterative methods from their nonsymmetric counterparts.

The conceptual foundations of these iterative approaches trace back to earlier 19th-century developments, including Carl Friedrich Gauss's work on quadrature rules in the 1810s, which utilized orthogonal polynomials to approximate integrals, and Lord Rayleigh's 1877 introduction of the variational Rayleigh quotient for bounding eigenvalues in vibration problems. Lanczos himself drew analogies between matrix iterations and the recurrence relations of orthogonal polynomials, akin to those in Gauss-Christoffel quadrature, to motivate efficient eigenvalue extraction. These historical contributions, spanning mathematics and physics, underscored the potential of polynomial-based iterations to overcome the power method's deficiencies by leveraging projections and orthogonalization.
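A minimal NumPy sketch of the power method (assuming a symmetric A with a simple, strictly dominant eigenvalue; the function name and tolerances are illustrative):

import numpy as np

def power_method(A, q, tol=1e-10, max_iter=10_000):
    """Approximate the dominant eigenpair of a symmetric matrix A."""
    q = q / np.linalg.norm(q)
    theta = q @ A @ q                        # Rayleigh quotient estimate
    for _ in range(max_iter):
        z = A @ q                            # one matrix-vector product per step
        q = z / np.linalg.norm(z)
        theta_new = q @ A @ q
        if abs(theta_new - theta) <= tol * max(1.0, abs(theta_new)):
            return theta_new, q
        theta = theta_new
    return theta, q

The error contracts by roughly |\lambda_2 / \lambda_1| per step, which is exactly the slow-convergence regime that the Krylov subspace methods described below improve on.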

Description of the Algorithm

Core Steps and Recurrence Relations

The Lanczos algorithm begins with the initialization of the process by selecting an arbitrary starting vector q_1 normalized such that \|q_1\|_2 = 1, setting \beta_1 = 0, and computing the first diagonal entry \alpha_1 = q_1^T A q_1, where A is the symmetric matrix of interest. This setup establishes the foundation for generating an orthonormal basis of the Krylov subspace spanned by powers of A applied to q_1.

The core iteration proceeds for k = 1, 2, \dots, m, where m is the desired number of steps, typically much smaller than the matrix dimension. At each step, a vector is computed as w = A q_k - \beta_k q_{k-1}, followed by the extraction of the diagonal entry \alpha_k = q_k^T w. The vector w is then orthogonalized against q_k, yielding \beta_{k+1} q_{k+1} = w - \alpha_k q_k with q_{k+1} normalized so \|q_{k+1}\|_2 = 1 and \beta_{k+1} = \|w - \alpha_k q_k\|_2 > 0. This three-term recurrence, \beta_{k+1} q_{k+1} = A q_k - \alpha_k q_k - \beta_k q_{k-1}, directly enforces the short recurrence property unique to symmetric matrices, allowing efficient computation without full Gram-Schmidt orthogonalization in exact arithmetic.

For practical implementation, the algorithm can be outlined in pseudocode as follows:
Initialize: Choose q_1 with ||q_1||_2 = 1, β_1 = 0, α_1 = q_1^T A q_1
For k = 1 to m:
    w = A q_k - β_k q_{k-1}
    α_k = q_k^T w
    w = w - α_k q_k  // Short recurrence orthogonalization (against q_k; prior step handled q_{k-1})
    β_{k+1} = ||w||_2
    If β_{k+1} = 0, stop (exact [invariant subspace](/page/Invariant_subspace) found)
    q_{k+1} = w / β_{k+1}
    // Optional: Reorthogonalize q_{k+1} against previous q_j for j < k to mitigate rounding errors
This outline highlights the iterative generation of Lanczos vectors q_k, with reorthogonalization recommended as an optional step to preserve numerical stability by countering loss of orthogonality due to finite-precision arithmetic. Upon completion after m steps, the algorithm produces the matrix Q_m = [q_1, q_2, \dots, q_m] whose columns form an orthonormal basis for the Krylov subspace, and the symmetric tridiagonal matrix T_m with diagonal entries \alpha_1, \alpha_2, \dots, \alpha_m and subdiagonal (and superdiagonal) entries \beta_2, \beta_3, \dots, \beta_m, satisfying the relation A Q_m = Q_m T_m + \beta_{m+1} q_{m+1} e_m^T.
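The pseudocode translates nearly line for line into NumPy. The sketch below is a minimal version without reorthogonalization (the function name and return convention are ours; A can be any object supporting the @ product, such as a dense array or a SciPy sparse matrix):

import numpy as np

def lanczos(A, q1, m):
    """Run m Lanczos steps; return Q (n x m), alphas (m), betas (m-1), and beta_{m+1}."""
    n = q1.shape[0]
    Q = np.zeros((n, m))
    alphas = np.zeros(m)
    betas = np.zeros(m - 1)
    q = q1 / np.linalg.norm(q1)
    q_prev = np.zeros(n)
    beta = 0.0
    for k in range(m):
        Q[:, k] = q
        w = A @ q - beta * q_prev          # w = A q_k - beta_k q_{k-1}
        alphas[k] = q @ w                  # alpha_k = q_k^T w
        w = w - alphas[k] * q              # short-recurrence orthogonalization against q_k
        beta = np.linalg.norm(w)
        if beta == 0.0:                    # exact invariant subspace found
            return Q[:, :k + 1], alphas[:k + 1], betas[:k], 0.0
        if k < m - 1:
            betas[k] = beta
        q_prev, q = q, w / beta
    return Q, alphas, betas, beta          # final beta is the residual coefficient beta_{m+1}

T_m can then be assembled as np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1), and the relation A Q_m = Q_m T_m + \beta_{m+1} q_{m+1} e_m^T checked with np.allclose; later snippets in this article reuse this lanczos sketch.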

Computation of Eigenvalues and Eigenvectors

The Lanczos algorithm computes approximations to the eigenvalues and eigenvectors of a large symmetric matrix A by leveraging the Rayleigh-Ritz procedure applied to the tridiagonal matrix T_m generated during the iteration. The eigenvalues of T_m, denoted as \theta_j^{(m)} for j = 1, \dots, m, are called Ritz values and serve as approximations to the eigenvalues of A. Similarly, the corresponding eigenvectors y_j of T_m are used to form Ritz vectors v_j = Q_m y_j, where Q_m is the orthonormal basis of the Krylov subspace spanned by the Lanczos vectors; these v_j approximate the eigenvectors of A. This approach ensures that the Ritz pairs (\theta_j^{(m)}, v_j) are optimal approximations within the subspace, minimizing the residual norm \|A V_m - V_m \Theta_m\|_2, where V_m collects the Ritz vectors and \Theta_m is diagonal with Ritz values.

The largest and smallest Ritz values converge rapidly to the extreme eigenvalues of A and bound them from within: by the Courant-Fischer min-max theorem and Cauchy interlacing, the Ritz values satisfy \lambda_i(A) \leq \theta_i^{(m)} \leq \lambda_{i+n-m}(A) when the eigenvalues are ordered ascending, so in particular \theta_{\min}^{(m)} \geq \lambda_{\min}(A) and \theta_{\max}^{(m)} \leq \lambda_{\max}(A), with convergence accelerating for well-separated extreme eigenvalues. This behavior makes the Lanczos method particularly effective for extremal eigenproblems, as interior eigenvalues typically require more iterations to approximate accurately.

To manage computational resources when the subspace dimension m reaches a storage limit, restarting strategies are employed, such as the thick-restart Lanczos method. In this approach, after m steps, converged Ritz pairs are identified based on small residual norms (e.g., \|A v_j - \theta_j v_j\| \leq 10^{-8} |\theta_j|) and deflated by retaining a subset of k Ritz vectors forming an orthonormal basis \hat{Q}_k = Q_m Y_k, where Y_k are the corresponding eigenvectors of T_m. The algorithm then restarts the Lanczos process in the deflated subspace orthogonal to these converged vectors, preserving previously found eigenpairs while focusing on remaining ones, thus maintaining efficiency without full reorthogonalization.

Error bounds for the approximations are derived from the residual norms of the Ritz pairs. For a Ritz pair (\theta_j, v_j), the residual satisfies \|A v_j - \theta_j v_j\|_2 = \beta_{m+1} |y_{j,m}|, where \beta_{m+1} is the subdiagonal element from the Lanczos recurrence and y_{j,m} is the last component of y_j; small values of this norm indicate convergence, and the distance from \theta_j to the nearest eigenvalue of A is bounded by this residual norm, hence by \beta_{m+1}. These bounds guide the selection of reliable eigenpairs and inform restarting decisions, ensuring numerical reliability in practical implementations.
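A sketch of the Rayleigh-Ritz step, reusing the lanczos sketch from the previous subsection (SciPy's eigh_tridiagonal solves the small tridiagonal eigenproblem; sizes and seed are arbitrary):

import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(1)
B = rng.standard_normal((200, 200))
A = (B + B.T) / 2
Q, alphas, betas, beta_next = lanczos(A, rng.standard_normal(200), 30)

theta, Y = eigh_tridiagonal(alphas, betas)       # Ritz values and eigenvectors of T_m
V = Q @ Y                                        # Ritz vectors v_j = Q_m y_j
resid = beta_next * np.abs(Y[-1, :])             # ||A v_j - theta_j v_j|| = beta_{m+1} |y_{j,m}|
print(theta[-1], np.linalg.eigvalsh(A)[-1])      # largest Ritz value vs largest eigenvalue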

Tridiagonalization Procedure

The tridiagonalization procedure of the Lanczos algorithm reduces a real symmetric matrix A \in \mathbb{R}^{n \times n} to symmetric tridiagonal form T via an orthogonal similarity transformation, yielding an orthogonal matrix Q such that Q^T A Q = T. This process leverages the three-term recurrence relation inherent to the algorithm, where each new Lanczos vector q_k satisfies A q_k = \beta_k q_{k-1} + \alpha_k q_k + \beta_{k+1} q_{k+1}, with \alpha_k = q_k^T A q_k on the diagonal of T and \beta_k on the subdiagonal. The goal is to construct T such that its eigenvalues approximate those of A, facilitating subsequent eigenvalue computations.

In the partial tridiagonalization phase, after m < n iterations starting from an initial unit vector q_1, the columns of Q_m = [q_1, \dots, q_m] form an orthonormal basis for the Krylov subspace \mathcal{K}_m(A, q_1) = \operatorname{span}\{q_1, A q_1, \dots, A^{m-1} q_1\}, and the projected matrix T_m = Q_m^T A Q_m is symmetric tridiagonal of order m. This yields the low-rank approximation A \approx Q_m T_m Q_m^T, with the residual captured by the rank-one term \beta_{m+1} q_{m+1} e_m^T in the relation A Q_m = Q_m T_m + \beta_{m+1} q_{m+1} e_m^T, where e_m is the m-th standard basis vector in \mathbb{R}^m, \beta_{m+1} = \|A q_m - \alpha_m q_m - \beta_m q_{m-1}\|, and q_{m+1} is the normalized direction orthogonal to the current subspace. The magnitude of \beta_{m+1} measures the deviation from exact reproduction within the subspace, decreasing as m increases for well-converged extremal eigenvalues.

For the full tridiagonalization, the process continues iteratively until m = n, at which point \beta_{n+1} = 0 in exact arithmetic, resulting in the exact decomposition Q^T A Q = T. This full reduction is mathematically equivalent to classical Householder tridiagonalization of symmetric matrices in exact arithmetic, producing the same T up to signs in the columns of Q, but the Lanczos approach is non-direct and iterative, making it suitable for large sparse matrices where only matrix-vector products are feasible without destroying sparsity or introducing fill-in. In contrast to Householder's dense operations requiring O(n^3) time and full storage, Lanczos exploits sparsity for efficiency in applications like structural analysis.

Storage requirements are minimal: the tridiagonal matrix T is represented compactly by its n diagonal elements \alpha_k and n-1 subdiagonal elements \beta_k, requiring O(n) space, while the Lanczos vectors comprising Q—each of length n—are stored only if full eigenvectors are needed for reconstructing approximate eigenpairs from T; otherwise, they can be discarded after computing the scalars. This efficiency enables the method's application to matrices too large for dense storage.
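A small check of the full reduction, reusing the lanczos sketch from the algorithm description (at m = n the projected matrix reproduces the spectrum of A; size and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((8, 8))
A = (B + B.T) / 2
Q, alphas, betas, beta_next = lanczos(A, rng.standard_normal(8), 8)   # m = n: full reduction
T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
print(np.allclose(Q.T @ A @ Q, T))                                    # Q^T A Q = T
print(np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(A)))      # identical spectra
print(abs(beta_next))                                                 # ≈ 0 at m = n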

Derivation

Extension of the Power Method

The power method is an iterative technique for approximating the dominant eigenvalue and corresponding eigenvector of a symmetric matrix A. Starting from an initial vector q_1, it repeatedly computes q_{k+1} = A q_k / \|A q_k\|, converging to the eigenvector associated with the eigenvalue of largest magnitude. However, this approach operates along a single direction: each step forms a new product A q_k but discards the information contained in all earlier iterates.

To address this inefficiency, the power method can be modified by generating multiple orthogonal directions through orthogonalization of residuals, which reuses every matrix-vector product and builds a richer approximation space. This extension computes successive residuals and orthogonalizes them against previous vectors, spanning the Krylov subspace K_m(A, q_1) = \operatorname{span}\{q_1, A q_1, \dots, A^{m-1} q_1\}, a low-dimensional subspace that captures the action of A on the initial vector.

The initial steps of this modified process begin with a normalized starting vector q_1 (with \|q_1\| = 1). Compute v_1 = A q_1, then \alpha_1 = q_1^T v_1, and the residual r_1 = v_1 - \alpha_1 q_1. Normalize the residual to obtain \beta_1 = \|r_1\| and q_2 = r_1 / \beta_1, ensuring q_2 is orthogonal to q_1. These steps project out the component of A q_1 in the direction of q_1, yielding an orthogonal basis for the subspace.

This subspace-oriented enhancement motivates the Lanczos algorithm by enabling better approximations to multiple extremal eigenvalues compared to single-vector iterations, as the projected problem on the Krylov subspace reveals spectral information more efficiently for large sparse matrices.
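In NumPy terms, the first step reads as follows (a fragment, assuming a symmetric array A and a starting vector q1 are already defined):

import numpy as np

q1 = q1 / np.linalg.norm(q1)      # normalized starting vector
v1 = A @ q1                       # one matrix-vector product
alpha1 = q1 @ v1                  # alpha_1 = q_1^T A q_1
r1 = v1 - alpha1 * q1             # residual, orthogonal to q_1
beta1 = np.linalg.norm(r1)
q2 = r1 / beta1                   # second basis vector: q1 @ q2 ≈ 0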

Generation of Orthogonal Lanczos Vectors

The Lanczos algorithm generates an orthonormal basis for the Krylov subspace \mathcal{K}_k(A, q_1) = \operatorname{span}\{q_1, A q_1, \dots, A^{k-1} q_1\}, where A is a symmetric matrix and q_1 is a unit initial vector, by applying a three-term recurrence relation that leverages the symmetry of A to avoid the computational expense of full Gram-Schmidt orthogonalization at each step. In contrast to the power method, which produces successive powers of A applied to q_1 without orthogonalization, the Lanczos process ensures orthogonality through a short recurrence, enabling efficient construction of the basis vectors q_k.

The mathematical basis for this can be established by induction on the step k. Assume that the vectors q_1, \dots, q_k form an orthonormal set, i.e., q_j^T q_i = \delta_{ij} for 1 \leq i, j \leq k, and set \beta_1 = 0. The next vector q_{k+1} is derived from the recurrence \beta_{k+1} q_{k+1} = A q_k - \alpha_k q_k - \beta_k q_{k-1}, where \alpha_k = q_k^T A q_k and \beta_{k+1} = \| A q_k - \alpha_k q_k - \beta_k q_{k-1} \| for k \geq 1 (for k = 1, the \beta_1 q_0 term is absent). To show q_{k+1}^T q_j = 0 for j \leq k, take the inner product of the recurrence with q_j: \beta_{k+1} q_{k+1}^T q_j = q_j^T A q_k - \alpha_k q_j^T q_k - \beta_k q_j^T q_{k-1}. For j \leq k-2, the induction hypothesis implies q_j^T q_k = 0 and q_j^T q_{k-1} = 0, so both last terms vanish, leaving q_j^T A q_k. Due to the symmetry of A, q_j^T A q_k = q_k^T A q_j, and since A q_j lies in the span of \{q_{j-1}, q_j, q_{j+1}\} by the recurrence (for j < k), q_k^T A q_j = 0 for |k - j| > 1. For j = k-1, q_{k-1}^T q_k = 0 so the \alpha_k term vanishes, and q_{k-1}^T A q_k - \beta_k q_{k-1}^T q_{k-1} = q_{k-1}^T A q_k - \beta_k = 0, since q_{k-1}^T A q_k = \beta_k by symmetry and the recurrence for A q_{k-1}. For j = k, the \beta_k term vanishes because q_k^T q_{k-1} = 0, and the remaining terms cancel since q_k^T A q_k = \alpha_k. Thus, orthogonality propagates.

This short recurrence exploits the structure imposed by symmetry, as the action of A on q_k projects onto at most three consecutive basis vectors: A q_k \in \operatorname{span}\{q_{k-1}, q_k, q_{k+1}\}. Consequently, the inner products satisfy q_j^T A q_k = 0 for |j - k| > 1, a sparsity pattern arising directly from the orthogonality argument above and reflected in the symmetric tridiagonal form of the projected operator (without deriving the explicit coefficients here).

The Lanczos vectors admit a polynomial representation that underscores their connection to orthogonal polynomials. Specifically, each q_k = p_{k-1}(A) q_1, where \{p_m\} is a sequence of polynomials of degree m orthonormal with respect to the inner product \langle p, r \rangle = q_1^T p(A) r(A) q_1 (corresponding to the spectral measure induced by A and q_1). These polynomials satisfy a three-term recurrence analogous to that of classical orthogonal polynomials, such as those underlying Gauss quadrature, ensuring the generated vectors remain orthonormal as the subspace expands. This perspective highlights the Lanczos process as a matrix analogue of generating orthogonal polynomials via moment-matching in the spectral distribution of A.
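Both claims (orthonormality of the basis, and vanishing of q_j^T A q_k for |j - k| > 1) can be spot-checked numerically, reusing the lanczos sketch from the algorithm description:

import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((100, 100))
A = (B + B.T) / 2
Q, alphas, betas, _ = lanczos(A, rng.standard_normal(100), 12)

print(np.linalg.norm(Q.T @ Q - np.eye(12)))              # ≈ 0: orthonormal basis
M = Q.T @ A @ Q                                          # projected operator
mask = np.abs(np.subtract.outer(np.arange(12), np.arange(12))) > 1
print(np.max(np.abs(M[mask])))                           # ≈ 0: entries beyond the tridiagonal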

Formation of the Tridiagonal Matrix

In the general case of the Arnoldi iteration applied to a nonsymmetric matrix, the projection T_m = Q_m^T A Q_m yields an upper Hessenberg matrix, where all entries below the first subdiagonal are zero due to the orthogonalization process that ensures the residual lies in the direction of the next basis vector. However, when A is symmetric, the resulting Hessenberg form must also be symmetric, which forces all entries above the first superdiagonal to vanish, producing a real symmetric tridiagonal matrix T_m with nonzero elements only on the main diagonal and the adjacent sub- and super-diagonals.

The explicit entries of this tridiagonal matrix T_m are derived directly from the Lanczos recurrence relations and the orthonormality of the columns of Q_m = [q_1, \dots, q_m]. The diagonal elements are given by \alpha_k = q_k^T A q_k, \quad k = 1, \dots, m, which represent the Rayleigh quotients at each step. The subdiagonal (and superdiagonal, by symmetry) elements are \beta_{k+1} = q_k^T A q_{k+1}, \quad k = 1, \dots, m-1, where \beta_{k+1} emerges as the norm of the residual vector in the recurrence A q_k = \beta_k q_{k-1} + \alpha_k q_k + \beta_{k+1} q_{k+1}, ensuring the three-term structure (with \beta_1 = 0). All other entries of T_m are zero because the orthonormality conditions q_i^T q_j = \delta_{ij} imply that q_k^T A q_j = 0 for |k - j| > 1, as nonadjacent vectors are orthogonal and the action of A on q_j only involves q_{j-1}, q_j, and q_{j+1}.

The coefficients of T_m also connect to the spectral properties of A through the moment matrix interpretation. Specifically, the entries \alpha_k and \beta_k match the recursion coefficients for the orthonormal polynomials associated with the spectral measure \mu defined by the initial vector q_1, where the moments are m_j = q_1^T A^j q_1 = \int \lambda^j \, d\mu(\lambda). This relation arises because the Lanczos vectors satisfy a three-term recurrence that mirrors the orthogonal polynomial expansion with respect to \mu, allowing T_m to approximate the action of functions of A via quadrature rules on the projected spectrum.

As the iteration proceeds to m = n (the dimension of A), the basis Q_n becomes a full orthonormal matrix, and the similarity transformation yields Q_n T_n Q_n^T = A exactly, recovering the original matrix from its tridiagonal projection. This exactness holds in exact arithmetic under the assumption of no breakdowns in the recurrence (i.e., \beta_k \neq 0).
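The moment connection can be illustrated concretely: the first 2m moments of the spectral measure are reproduced exactly by the projected matrix, i.e. q_1^T A^j q_1 = e_1^T T_m^j e_1 for j = 0, \dots, 2m-1 (a sketch reusing the lanczos function from the algorithm description):

import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((50, 50))
A = (B + B.T) / 2
q1 = rng.standard_normal(50)
q1 /= np.linalg.norm(q1)

m = 5
Q, alphas, betas, _ = lanczos(A, q1, m)
T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
e1 = np.zeros(m)
e1[0] = 1.0
for j in range(2 * m):                                   # moments m_0 .. m_{2m-1} agree
    mA = q1 @ np.linalg.matrix_power(A, j) @ q1
    mT = e1 @ np.linalg.matrix_power(T, j) @ e1
    print(j, mA, mT)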

Theoretical Properties

Convergence Behavior

The convergence of the Lanczos algorithm is primarily driven by the distribution of the eigenvalues of the symmetric matrix A. Extreme eigenvalues—those at the largest and smallest ends of the spectrum—tend to be approximated rapidly when there is a significant gap separating them from the rest of the eigenvalues, as the algorithm leverages the rapid growth of Chebyshev polynomials outside the spectral interval to accelerate convergence in such regions. In contrast, densely clustered interior eigenvalues converge more slowly due to the reduced resolution in those areas. This behavior aligns with the algorithm's tendency to prioritize regions of the spectrum that exhibit "too little charge" relative to an equilibrium distribution determined by the overall eigenvalue spread and the ratio of matrix size to iteration count.

The growth of the Krylov subspace plays a central role in determining the accuracy of approximations, with the dimension m of the subspace roughly needing to match or exceed the number of desired eigenvalues for reliable convergence. As the subspace expands through successive iterations, it captures increasingly refined projections of A, enabling better eigenvalue estimates via the associated Ritz values. Empirically, these Ritz values interlace with the true eigenvalues of A, meaning that for each step k, the Ritz values from the k-step tridiagonal matrix lie between those of the full spectrum, ensuring a monotonic approach to the exact values. Convergence typically proceeds from the outside in, with the largest and smallest eigenvalues stabilizing first, while interior ones require more steps to resolve accurately.

Several factors can impede convergence, including the presence of multiple or closely spaced eigenvalues, which lead to slower resolution and potential misconvergence where Ritz values temporarily average over clusters rather than isolating individuals. A poorly chosen starting vector, with small projections onto the dominant eigenvectors, further delays progress by limiting the initial subspace's informativeness. To mitigate these issues, techniques such as thick restarting have been developed, which retain a block of previous Ritz vectors (thicker than a single restart vector) to restart the process, preserving valuable spectral information and accelerating convergence for clustered or degenerate cases without excessive computational overhead.
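The outside-in pattern is easy to observe by tracking the extreme Ritz values as the subspace grows (a sketch reusing the lanczos function from earlier; each projected problem is solved with SciPy's eigh_tridiagonal):

import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(5)
B = rng.standard_normal((500, 500))
A = (B + B.T) / 2
lam = np.linalg.eigvalsh(A)
Q, alphas, betas, _ = lanczos(A, rng.standard_normal(500), 60)

for k in (5, 10, 20, 40, 60):
    theta = eigh_tridiagonal(alphas[:k], betas[:k - 1], eigvals_only=True)
    print(k, lam[-1] - theta[-1], theta[0] - lam[0])     # errors at both spectral ends shrink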

Kaniel–Paige Theory

The Kaniel–Paige theory establishes rigorous bounds on the convergence of the Lanczos algorithm for approximating the extreme eigenvalues of a large symmetric matrix A, emphasizing the rapid approximation of the largest and smallest eigenvalues under certain spectral conditions. This framework, developed in the late 1960s and early 1970s, demonstrates that the algorithm's Ritz values \theta^{(m)} from the m-step tridiagonal matrix converge to the corresponding eigenvalues of A at a rate influenced by the spectral gap and the final Lanczos coefficient \beta_{m+1}. The theory assumes A is symmetric with distinct eigenvalues, ensuring the eigenvectors form an orthogonal basis, though subsequent extensions address cases with clustered spectra by grouping nearby eigenvalues into effective gaps.

In his 1966 paper, Shmuel Kaniel derived explicit error bounds for the eigenvalue approximations, focusing on the largest eigenvalue \lambda_1 > \lambda_2 \geq \cdots \geq \lambda_n. A central result is the bound on the approximation error for the largest Ritz value \theta_1^{(m)}: |\lambda_1 - \theta_1^{(m)}| \leq \frac{(\lambda_1 - \lambda_n) \beta_{m+1}^2}{(\lambda_1 - \lambda_2)^2}, where the gap \delta = \lambda_1 - \lambda_2 measures the separation from the next eigenvalue, and \beta_{m+1} is the subdiagonal element in the tridiagonal matrix after m steps, which decreases as convergence progresses. This inequality highlights quadratic convergence in terms of \beta_{m+1}/\delta, showing that small residuals lead to tight eigenvalue estimates, provided the initial vector has a nonzero projection onto the dominant eigenvector. Kaniel's analysis also extends to eigenvector errors, bounding the deviation by similar residual terms scaled by the spectral diameter.

C. C. Paige refined these bounds in his 1971 thesis, introducing residual norm estimates that directly tie the algorithm's accuracy to the off-diagonal element \beta_{m+1}. For a converged Ritz pair (\theta^{(m)}, y^{(m)}), the residual satisfies \|r^{(m)}\|_2 = \beta_{m+1} |e_m^T y^{(m)}|, where e_m is the last unit vector, and Paige showed this implies error bounds of the form |\lambda_i - \theta_i^{(m)}| \leq C \cdot (\beta_{m+1}/\delta_i)^2 for extreme indices i, with C a constant depending on the global spectrum. This refinement draws an analogy to Chebyshev polynomial acceleration in iterative methods, where the minimax properties of Chebyshev polynomials on the spectral interval explain the superlinear convergence for well-separated extremes: the error diminishes like the square of the polynomial evaluated outside the remaining spectrum. Paige's work assumes simple extreme eigenvalues but notes that for clustered interiors, the bounds hold by treating clusters as pseudo-eigenvalues with widened effective gaps.

These bounds underscore the Lanczos algorithm's efficiency for extreme eigenvalues, with convergence accelerating quadratically once \beta_{m+1} becomes small relative to the gap, though the theory requires careful handling of multiple eigenvalues through deflation or blocking in practice.

Numerical Stability Considerations

In finite-precision floating-point arithmetic, the Lanczos algorithm encounters significant numerical stability challenges primarily due to the gradual loss of orthogonality among the generated Krylov basis vectors. Rounding errors accumulate during the computation of each new vector q_k through the recurrence relation, causing the inner products q_k^T q_j for j < k to deviate from zero, rather than remaining exactly orthogonal as in exact arithmetic. This degradation worsens as the iteration progresses, particularly for large matrices, and can lead to the off-diagonal coefficients \beta_k becoming spuriously small or approaching machine epsilon, resulting in premature breakdown of the algorithm where no further meaningful vectors can be generated.

To mitigate this loss of orthogonality, several reorthogonalization strategies have been developed. Selective reorthogonalization monitors the projected components of the new vector q_k onto the previous subspace spanned by Q_{k-1} = [q_1, \dots, q_{k-1}], typically by computing the norm of Q_{k-1}^T (A q_{k-1} - \alpha_{k-1} q_{k-1} - \beta_{k-1} q_{k-2}); if this norm exceeds a small tolerance \epsilon (often around machine precision times the matrix norm), reorthogonalization is performed only against the recent vectors that contribute most to the error, preserving semiorthogonality at a cost of O(n k) overall rather than per step. In contrast, full reorthogonalization explicitly orthogonalizes q_k against the entire previous basis Q_{k-1} at every step using procedures like modified Gram-Schmidt, ensuring near-perfect orthogonality but incurring a high computational expense of O(n k^2) total operations, which limits its practicality for very large-scale problems. These techniques, pioneered in the late 1970s, balance accuracy and efficiency by intervening only when necessary to prevent severe instability.

A direct consequence of non-orthogonality is the emergence of ghost eigenvalues, which are spurious values approximating already-converged eigenvalues of the original matrix, often appearing as duplicates due to the algorithm effectively restarting in a contaminated subspace. These artifacts do not reflect true spectral multiplicities and can mislead eigenvalue estimation, but they are identifiable through validation: the residuals \| A y_m - \theta_m y_m \| for the associated pairs (\theta_m, y_m) remain large (on the order of the orthogonality error) compared to true approximations, where residuals decrease rapidly.

Modern analyses have quantified these stability issues, particularly for sparse matrices where matrix-vector products dominate computation. Meurant (2006) derives bounds on the propagation of rounding errors in the Lanczos process, demonstrating that while orthogonality loss is inevitable, the eigenvalues of the tridiagonal matrix T_k remain reliable approximations to those of the original matrix as long as reorthogonalization is judiciously applied, with error growth controlled by the condition number and sparsity structure rather than exploding uncontrollably. This framework underscores the algorithm's robustness in practice for well-conditioned problems, informing implementations that prioritize monitoring over exhaustive corrections.
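The phenomenon and its remedy can be reproduced with a small experiment: the same recurrence run with and without full reorthogonalization (a self-contained sketch; the full variant applies modified Gram-Schmidt against all previous vectors at every step):

import numpy as np

def lanczos_basis(A, q1, m, reorth=False):
    """Lanczos basis only; optional full reorthogonalization at every step."""
    n = q1.shape[0]
    Q = np.zeros((n, m))
    q = q1 / np.linalg.norm(q1)
    q_prev = np.zeros(n)
    beta = 0.0
    for k in range(m):
        Q[:, k] = q
        w = A @ q - beta * q_prev
        w -= (q @ w) * q
        if reorth:                        # modified Gram-Schmidt against all previous vectors
            for j in range(k + 1):
                w -= (Q[:, j] @ w) * Q[:, j]
        beta = np.linalg.norm(w)
        q_prev, q = q, w / beta
    return Q

rng = np.random.default_rng(6)
B = rng.standard_normal((400, 400))
A = (B + B.T) / 2
q1 = rng.standard_normal(400)
for flag in (False, True):
    Q = lanczos_basis(A, q1, 80, reorth=flag)
    print(flag, np.linalg.norm(Q.T @ Q - np.eye(80)))    # O(1) loss without reorthogonalization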

Variations and Extensions

Block Lanczos Algorithm

The block Lanczos algorithm extends the standard Lanczos method, which operates on a single starting vector to approximate extreme eigenvalues, by processing multiple starting vectors simultaneously to target invariant subspaces or clusters of eigenvalues more efficiently. This variant is particularly motivated by the need to handle symmetric matrices with multiple eigenvalues or closely spaced clusters, where the single-vector approach may converge slowly or require multiple independent runs. By employing a block of orthogonal vectors, it accelerates the discovery of subspaces corresponding to groups of eigenvalues, making it suitable for large-scale problems in scientific computing.

In the procedure, the algorithm begins with an initial block of p orthogonal vectors forming the n \times p matrix Q_1, where n is the matrix dimension and p is the small block size (typically 2–10). Subsequent blocks are generated via a recurrence relation: for step k, compute R_k = A Q_k, where A is the n \times n symmetric matrix; then extract the block tridiagonal coefficients as \alpha_k = Q_k^T R_k (a p \times p diagonal block) and orthogonalize the residual block P_k = R_k - Q_k \alpha_k - Q_{k-1} \beta_{k-1}^T via QR factorization to yield Q_{k+1} \beta_k = P_k, where \beta_k is the p \times p subdiagonal block. After m steps, the accumulated basis Q_m (n \times mp) and the mp \times mp block-tridiagonal matrix T_m (with p \times p blocks \alpha on the diagonal and \beta on the sub- and superdiagonals) approximate the spectral properties, from which Ritz values and vectors are obtained by eigendecomposition of T_m.

The primary advantages include faster convergence for spectra with clustered eigenvalues, as the block structure captures multiple Ritz pairs in parallel per iteration, reducing the total number of matrix-vector products compared to running multiple single-vector Lanczos instances. Additionally, block operations enhance parallelizability on modern architectures, leveraging level-3 BLAS for matrix-matrix multiplications, and the method integrates well with restarting techniques to manage storage for very large matrices. These features, building on extensions in Golub and Van Loan's framework, have made it a standard tool for computing extremal eigenspaces in high-dimensional symmetric problems.
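A compact dense sketch of the block recurrence (our own minimal formulation, assuming NumPy, a full-rank residual block at every step, and QR factorization for the block orthogonalization; production codes add reorthogonalization, breakdown handling, and restarting):

import numpy as np

def block_lanczos(A, Q1, m):
    """m block steps; returns basis Q (n x m*p) and block-tridiagonal T (m*p x m*p)."""
    n, p = Q1.shape
    Q = np.zeros((n, m * p))
    T = np.zeros((m * p, m * p))
    Qk, _ = np.linalg.qr(Q1)                  # orthonormalize the starting block
    Qprev = np.zeros((n, p))
    Beta = np.zeros((p, p))
    for k in range(m):
        Q[:, k * p:(k + 1) * p] = Qk
        R = A @ Qk - Qprev @ Beta.T           # R_k = A Q_k - Q_{k-1} beta_{k-1}^T
        Alpha = Qk.T @ R                      # p x p diagonal block alpha_k
        T[k * p:(k + 1) * p, k * p:(k + 1) * p] = Alpha
        P = R - Qk @ Alpha                    # residual block
        Qnext, Beta = np.linalg.qr(P)         # Q_{k+1} beta_k = P_k (assumes full rank)
        if k < m - 1:
            T[(k + 1) * p:(k + 2) * p, k * p:(k + 1) * p] = Beta
            T[k * p:(k + 1) * p, (k + 1) * p:(k + 2) * p] = Beta.T
        Qprev, Qk = Qk, Qnext
    return Q, T

Ritz approximations then follow from the eigendecomposition of T, e.g. np.linalg.eigh(T), with Ritz vectors formed as Q @ Y.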

Applications Over Finite Fields

The Lanczos algorithm has been adapted for use over finite fields, particularly to compute the nullspace of large sparse matrices in contexts where real or complex arithmetic is inapplicable, such as cryptography and coding theory. Over fields like GF(2) or GF(q) for small q, the core challenge is the absence of a natural inner product that ensures orthogonality as in the real case; instead, adaptations rely on field-specific operations, randomized starting vectors, and techniques to generate linearly independent sequences without explicit normalization or division-prone steps. Pivoting and block-structured approaches are often employed to detect rank deficiencies and maintain reliability in the discrete setting, enabling the identification of kernel bases for matrices arising from linear dependencies.

A seminal adaptation is the Wiedemann algorithm, introduced in 1986 as a probabilistic method inspired by the Lanczos process for solving sparse linear systems over finite fields. It generates a Krylov-like sequence from a random starting vector, computes the minimal polynomial of the matrix restricted to the spanned subspace using the Berlekamp-Massey algorithm, and derives solutions or nullspace elements from the polynomial factors, achieving success with high probability after a small number of iterations. For an n × n sparse matrix with w nonzeros per row, the algorithm requires O(n) matrix-vector multiplications, each costing O(nw) field operations, for a total time complexity of O(n² w) while using O(n) space, making it suitable for systems too large for direct Gaussian elimination. This approach has been widely adopted for its efficiency in handling ill-conditioned or singular systems over GF(q).

Further refinements include block Lanczos variants tailored for finite fields, notably Montgomery's 1995 block algorithm for GF(2), which processes multiple starting vectors simultaneously to sample nullspace elements and find dependencies in sparse matrices. By generating blocks of vectors and using reorthogonalization only when necessary to avoid breakdowns from linear dependence, it reliably computes a basis for the nullspace in O(n² w) time, with practical implementations demonstrating scalability to matrices with millions of rows.

These methods excel in applications requiring repeated nullspace computations, such as integer factorization via the general number field sieve (GNFS), where the linear algebra step involves solving enormous sparse systems over GF(2) to identify relations among sieved values—a bottleneck that block Lanczos variants have optimized for records like the factorization of RSA-768. In coding theory, similar adaptations aid in decoding linear codes by solving sparse equations or finding low-weight codewords, enhancing error-correcting capabilities in finite-field-based systems.
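To convey the flavor of these methods, the sketch below is a toy Wiedemann-style nullspace computation over GF(2) using dense NumPy arrays: it finds the minimal polynomial of a Krylov sequence by online Gaussian elimination instead of Berlekamp-Massey, then peels off factors of x to reach a kernel vector (function names are ours; real implementations use bit-packed sparse arithmetic and the block methods described above):

import numpy as np

def min_poly_gf2(A, b):
    """Coefficients c_0..c_d over GF(2) with sum_i c_i A^i b = 0 and c_d = 1."""
    n = len(b)
    basis = []                                  # (pivot index, reduced vector, combination)
    v = b.copy()
    for j in range(n + 1):
        w = v.copy()
        combo = np.zeros(n + 1, dtype=np.uint8)
        combo[j] = 1
        for piv, u, c in basis:                 # reduce against earlier Krylov vectors
            if w[piv]:
                w = w ^ u
                combo = combo ^ c
        nz = np.flatnonzero(w)
        if nz.size == 0:
            return combo[:j + 1]                # first linear dependency = minimal polynomial
        basis.append((nz[0], w, combo))
        v = ((A.astype(np.int64) @ v) % 2).astype(np.uint8)   # next Krylov vector

def nullspace_vector_gf2(A, b):
    """A nonzero kernel vector of a singular A over GF(2), if the chosen b cooperates."""
    c = min_poly_gf2(A, b)
    k = int(np.flatnonzero(c)[0])               # m(x) = x^k g(x) with g(0) = 1
    if k == 0:
        return None                             # unlucky b: retry with another random vector
    Ai = A.astype(np.int64)
    u = np.zeros(len(b), dtype=np.int64)
    power = b.astype(np.int64)
    for i in range(k, len(c)):                  # u = g(A) b, so A^k u = 0 and u != 0
        if c[i]:
            u = (u + power) % 2
        power = (Ai @ power) % 2
    while np.any((Ai @ u) % 2):                 # peel: last nonzero iterate lies in ker A
        u = (Ai @ u) % 2
    return u.astype(np.uint8)

A = np.array([[1, 1, 0], [1, 1, 0], [0, 1, 1]], dtype=np.uint8)   # singular over GF(2)
v = nullspace_vector_gf2(A, np.array([1, 0, 1], dtype=np.uint8))
print(v, (A.astype(int) @ v) % 2)               # v = [1 1 1], A v = 0 (mod 2)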

Relation to Arnoldi Iteration

The Arnoldi iteration is a generalization of the Krylov subspace method that constructs an orthonormal basis for the Krylov subspace generated by a matrix A and an initial vector, applicable to nonsymmetric matrices. It employs the full Gram-Schmidt process to ensure orthogonality of the basis vectors, resulting in an upper Hessenberg matrix that approximates the action of A on the subspace. This process yields the relation A Q_k = Q_k H_k + h_{k+1,k} q_{k+1} e_k^T, where Q_k collects the orthonormal vectors, H_k is the Hessenberg matrix, and e_k is the k-th standard basis vector. The method was originally proposed by Arnoldi in 1951 as a means to minimize iterations for eigenvalue problems.

In contrast, the Lanczos algorithm simplifies this framework when A is symmetric (or Hermitian in the complex case), exploiting the matrix's symmetry to produce a tridiagonal matrix instead of the full Hessenberg form. The symmetry implies a three-term recurrence for the orthogonalization step: A q_j = \beta_{j-1} q_{j-1} + \alpha_j q_j + \beta_j q_{j+1}, which avoids the need for full j-term orthogonalization required in the general Arnoldi process at each step. This reduction in computational complexity arises because the Lanczos vectors satisfy a shorter recurrence due to the self-adjoint property of A, making the algorithm more efficient for symmetric problems while still generating an orthonormal basis for the Krylov subspace. The original Lanczos method from 1950 laid the foundation for this symmetric case.

Fundamentally, the Lanczos algorithm is equivalent to the Arnoldi iteration restricted to symmetric matrices, where the Hessenberg matrix reduces to tridiagonal form and the full orthogonalization collapses to the three-term relation. This equivalence highlights Lanczos as a specialized instance of Arnoldi, inheriting its properties but benefiting from symmetry-induced simplifications. For practical use, the Lanczos algorithm is preferred for Hermitian matrices to compute extremal eigenvalues or solve symmetric systems efficiently, whereas the Arnoldi iteration is essential for nonsymmetric matrices. Additionally, the Arnoldi process serves as the basis for methods like GMRES, which uses the Arnoldi relation to minimize residuals in nonsymmetric linear systems.
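A brief sketch of the Arnoldi process makes the contrast concrete: for a general matrix the projection is upper Hessenberg, while feeding in a symmetric matrix collapses it to tridiagonal form (a minimal dense NumPy version with full Gram-Schmidt):

import numpy as np

def arnoldi(A, q1, m):
    """m Arnoldi steps with full Gram-Schmidt; returns Q (n x m) and H (m x m)."""
    n = q1.shape[0]
    Q = np.zeros((n, m))
    H = np.zeros((m, m))
    Q[:, 0] = q1 / np.linalg.norm(q1)
    for k in range(m - 1):
        w = A @ Q[:, k]
        for j in range(k + 1):                # orthogonalize against ALL previous vectors
            H[j, k] = Q[:, j] @ w
            w -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        Q[:, k + 1] = w / H[k + 1, k]
    w = A @ Q[:, m - 1]                       # fill in the last column of H
    for j in range(m):
        H[j, m - 1] = Q[:, j] @ w
    return Q, H

rng = np.random.default_rng(7)
G = rng.standard_normal((50, 50))
_, H = arnoldi(G, rng.standard_normal(50), 6)              # nonsymmetric: upper Hessenberg
_, T = arnoldi((G + G.T) / 2, rng.standard_normal(50), 6)  # symmetric: tridiagonal
print(np.round(H, 2))
print(np.round(T, 2))                                      # zeros above the superdiagonal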

Practical Applications

In Scientific Computing

The Lanczos algorithm plays a pivotal role in scientific computing for solving large-scale eigenvalue problems arising from discretized partial differential equations (PDEs) in physics and engineering simulations. By iteratively constructing a Krylov subspace from sparse symmetric matrices, it enables efficient extraction of dominant eigenpairs without full storage or dense factorization, which is essential for systems with millions of degrees of freedom. This approach is particularly valuable in handling the sparsity inherent in finite element or finite difference discretizations, allowing computations on matrices up to 10^6 × 10^6 in size through matrix-vector multiplications alone.

In structural dynamics, the Lanczos algorithm is widely employed for modal analysis of sparse mass and stiffness matrices generated by the finite element method (FEM). These matrices represent the generalized eigenvalue problem K \phi = \lambda M \phi, where K is the stiffness matrix and M is the mass matrix, both sparse and symmetric due to the underlying mesh structure. The algorithm computes the lowest-frequency modes critical for vibration analysis, earthquake engineering, and design optimization of structures like bridges or aircraft components. For instance, parallel implementations of the implicitly restarted Lanczos method have been used to solve eigenproblems for systems with over 100,000 degrees of freedom, demonstrating convergence to a few dozen accurate modes in under an hour on distributed systems. This efficiency stems from the algorithm's ability to target interior or extreme eigenvalues via spectral shifts, avoiding the need to compute the full spectrum.

In computational fluid dynamics, Lanczos facilitates eigenvalue problems in the stability analysis of flows governed by discretized Navier-Stokes equations. The linearized Navier-Stokes operator, often resulting in large sparse nonsymmetric matrices after spatial discretization, is analyzed to identify unstable modes that predict transition to turbulence or flow instabilities in applications like aerodynamics and turbomachinery. A Lanczos-based procedure applied to Euler/Navier-Stokes solvers computes complete eigenspectra for stability analysis of flow over airfoils, providing eigenvectors for modal decomposition and reduced-order modeling, which outperforms partial-spectrum methods in completeness and accuracy for two-dimensional flows. Krylov methods incorporating Lanczos principles have been integrated into time-stepping frameworks for global stability computations, enabling analysis of high-Reynolds-number flows where direct solvers are infeasible.

For optimization in scientific computing, the Lanczos algorithm underpins trust-region methods by approximating solutions to the trust-region subproblem involving the Hessian matrix. In large-scale nonlinear optimization, such as parameter estimation in physical models, the subproblem minimizes a quadratic model m(p) = f + g^T p + \frac{1}{2} p^T H p subject to \|p\| \leq \Delta, where H is the sparse Hessian approximation. The Lanczos method generates an orthonormal basis for the Krylov subspace to solve this exactly in the subspace, yielding a near-optimal step that respects the trust-region boundary and promotes global convergence. This approach, detailed in seminal work on symmetric indefinite Hessians, reduces computational cost compared to full eigenvalue decompositions while maintaining quadratic model fidelity for ill-conditioned problems in engineering design.

The algorithm's scalability extends to climate modeling, where it processes massive sparse covariance matrices from atmospheric data for empirical orthogonal function (EOF) analysis, equivalent to principal component analysis on global fields. In such applications, Lanczos eigensolvers handle datasets representing millions of grid points, extracting leading modes that capture variability in temperature and other climate fields without dense matrix operations. This has enabled efficient dimensionality reduction in ensemble simulations, supporting predictions of climate phenomena like El Niño with reduced memory footprint.

In Spectral Graph Theory

In spectral graph theory, the Lanczos algorithm plays a central role in analyzing the eigenvalues of the graph Laplacian L = D - A, where D is the diagonal degree matrix and A is the adjacency matrix of an undirected graph. This matrix is symmetric and positive semidefinite, with eigenvalues satisfying 0 = \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n, where the smallest eigenvalues encode structural properties such as connectivity. The Lanczos algorithm efficiently computes these smallest eigenpairs by generating an orthonormal basis for the Krylov subspace, exploiting the sparsity of L to reveal insights into graph bottlenecks and expansion.

A key application is spectral partitioning, where the Fiedler vector—the eigenvector corresponding to the second smallest eigenvalue \lambda_2, known as the algebraic connectivity—guides the division of graphs into balanced components with minimal edge cuts. The Lanczos method computes this vector iteratively, often requiring only a modest number of steps for practical accuracy, by orthogonalizing against the constant eigenvector and using the graph's global structure to produce high-quality partitions superior to local heuristics. This approach leverages the rapid convergence of Lanczos to extreme eigenvalues, ensuring reliable approximations even for large sparse graphs.

The algorithm also supports approximations of PageRank in symmetric settings, such as undirected web graphs, by applying Lanczos-type iterations to the dominant eigenpairs of the normalized Laplacian, yielding personalized rankings with reduced computational overhead compared to power methods. In community detection for social networks, local variants like the Local Lanczos Spectral Approximation (LLSA) subsample neighborhoods around seed nodes, then use Lanczos to approximate the leading eigenvector of a localized graph matrix, identifying cohesive groups with high precision on datasets such as social and co-purchase networks. These techniques trace their origins to the 1980s, when advancements in Lanczos implementations enabled practical eigenvalue computations for bounding the Cheeger constant, which relates \lambda_2 to the graph's edge expansion and informed early spectral partitioning heuristics.
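A minimal SciPy sketch of Fiedler-vector partitioning (the two-clique test graph and parameters are illustrative; eigsh calls ARPACK's Lanczos-based solver for symmetric matrices):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Two 4-cliques joined by a single edge: the Fiedler vector separates them.
A = sp.lil_matrix((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1                         # bridge edge
A = A.tocsr()

deg = np.asarray(A.sum(axis=1)).ravel()
L = sp.diags(deg) - A                         # graph Laplacian L = D - A
vals, vecs = eigsh(L, k=2, which='SM')        # two smallest eigenpairs
fiedler = vecs[:, 1]                          # eigenvector for lambda_2
print(np.sign(fiedler))                       # sign pattern splits the two cliques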

In Quantum Mechanics Simulations

In quantum mechanics simulations, the Lanczos algorithm plays a crucial role in addressing the computational challenges posed by large, sparse, symmetric matrices derived from the second quantization formalism. In this representation, the Hamiltonian is expressed as a sum of one- and two-body terms involving creation and annihilation operators, leading to a matrix representation in which most elements are zero due to selection rules that limit interactions to nearby configurations in the many-body basis. This sparsity makes the Lanczos method efficient, as it iteratively builds an orthonormal basis in the Krylov subspace without requiring full storage or dense operations.

For ground state computation, the Lanczos algorithm is employed to approximate the lowest eigenvalue and eigenvector of the Hamiltonian, corresponding to the ground-state energy and wavefunction. Starting from a trial vector close to the expected ground state, the method generates a tridiagonal matrix whose eigenvalues provide Ritz approximations that converge rapidly to the extremal eigenvalues. This approach is integrated into hybrid methods like the density matrix renormalization group (DMRG), where Lanczos is applied to small effective Hamiltonians in the renormalized basis during iterative sweeps, enabling accurate treatment of strongly correlated systems with thousands of sites.

The Lanczos algorithm also facilitates simulations of quantum time evolution by approximating the matrix exponential e^{-iHt}, essential for computing real-time dynamics. By reducing the Hamiltonian to a tridiagonal form in the Krylov subspace, the exponential can be evaluated efficiently on this low-dimensional representation, yielding accurate short-time propagators with controlled error bounds based on the subspace dimension. This technique, known as iterative Lanczos reduction, is particularly effective for unitary evolution in molecular systems, avoiding the need for full diagonalization while preserving key dynamical features like autocorrelation functions.

Historically, the Lanczos algorithm originated in a 1950 paper motivated by eigenvalue problems for linear differential and integral operators in physical contexts, including vibration analyses where the spectral properties of oscillatory systems are central. In contemporary quantum chemistry packages, such as those implementing configuration interaction or coupled-cluster methods, the Lanczos method remains a standard tool for extracting spectral information and enabling scalable simulations of molecular Hamiltonians.
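A sketch of the Krylov propagator for a real symmetric Hamiltonian (our own minimal version; for complex Hermitian H the inner products would use conjugation, and practical codes choose the subspace dimension m adaptively):

import numpy as np
from scipy.linalg import expm

def krylov_expm(H, v, t, m):
    """Approximate exp(-i H t) v for a real symmetric H via m-step Lanczos."""
    n = v.shape[0]
    Q = np.zeros((n, m))
    alphas = np.zeros(m)
    betas = np.zeros(m - 1)
    nrm = np.linalg.norm(v)
    q, q_prev, beta = v / nrm, np.zeros(n), 0.0
    for k in range(m):
        Q[:, k] = q
        w = H @ q - beta * q_prev
        alphas[k] = q @ w
        w -= alphas[k] * q
        beta = np.linalg.norm(w)
        if k < m - 1:
            betas[k] = beta
            q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    e1 = np.zeros(m)
    e1[0] = 1.0
    return nrm * (Q @ (expm(-1j * t * T) @ e1))   # propagate within the Krylov subspace

rng = np.random.default_rng(8)
B = rng.standard_normal((100, 100))
H = (B + B.T) / 2
v = rng.standard_normal(100)
approx = krylov_expm(H, v, 0.5, 20)
exact = expm(-1j * 0.5 * H) @ v
print(np.linalg.norm(approx - exact))             # small for modest t and m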

Implementations and Software

Standard Library Routines

The Lanczos algorithm is supported in major numerical libraries through routines that address symmetric eigenvalue problems, either via direct solvers for the resulting tridiagonal form or iterative methods for large-scale applications. LAPACK includes driver routines such as SSTEV and DSTEV for computing all eigenvalues and, optionally, eigenvectors of real symmetric tridiagonal matrices, which are produced by the Lanczos tridiagonalization process. For iterative solutions of large sparse symmetric eigenproblems, ARPACK provides the Implicitly Restarted Lanczos Method (IRLM), a variant that uses restarting to focus on a subset of eigenvalues while integrating LAPACK subroutines for efficient matrix operations and orthogonalization. MATLAB's eigs function employs ARPACK under the hood for symmetric problems, performing Lanczos iterations to approximate a specified number of eigenvalues, with adjustable subspace dimensions to aid convergence.

These routines are optimized for matrices up to dimensions of approximately 10^5, particularly in sparse settings, and include parameters for shifts to target interior eigenvalues and tolerances for controlling accuracy and iteration counts. LAPACK's core implementation traces to Fortran 77 standards for portability, with LAPACK 3.0 (released in 1999) introducing enhancements to eigenvalue solvers for improved performance and robustness against rounding errors.

Open-Source and Custom Implementations

The SciPy library, part of the Python scientific computing ecosystem alongside NumPy, provides the scipy.sparse.linalg.eigsh function as a wrapper around the ARPACK library, implementing the implicitly restarted Lanczos method for computing a few extreme eigenvalues and eigenvectors of large sparse symmetric or Hermitian matrices. This open-source implementation supports user-specified parameters for the number of Lanczos vectors and restart iterations, enabling customization for specific convergence needs in applications like spectral graph analysis.

SLEPc, the Scalable Library for Eigenvalue Problem Computations, offers robust open-source implementations of Lanczos-based solvers built on the PETSc toolkit, including shifted block Lanczos for symmetric generalized eigenproblems and support for distributed-memory parallelism. It incorporates GPU acceleration through PETSc's vector and matrix operations, allowing efficient handling of large-scale problems on heterogeneous architectures, with options for thick-restart variants to improve convergence.

Custom and educational implementations in Python often emphasize pedagogical clarity and flexibility, such as standalone Lanczos codes that include toggles for selective reorthogonalization to mitigate loss of orthogonality in finite-precision arithmetic. For instance, the TeNPy framework provides a simple Lanczos routine tailored for quantum many-body simulations, allowing users to adjust reorthogonalization frequency for better numerical stability. Similarly, Julia's IterativeSolvers.jl package delivers pure-Julia iterative eigensolvers, incorporating Lanczos-based procedures like Golub-Kahan-Lanczos bidiagonalization for singular value computations, which extends naturally to symmetric eigenvalue problems with customizable restart mechanisms.

In the 2020s, open-source efforts have increasingly targeted high-performance computing, with the Trilinos framework's Anasazi package providing distributed-memory Lanczos implementations, including block and thick-restart variants, optimized for parallel execution across thousands of nodes in HPC environments. These enhancements in Trilinos focus on scalability for massive sparse matrices, integrating with other packages for preconditioning and supporting hybrid CPU-GPU workflows to address challenges in scientific simulations at exascale.
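Typical usage of the SciPy wrapper looks as follows (a sketch; k selects the number of eigenpairs, which the targeted end of the spectrum, and ncv the size of the Lanczos subspace):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 10_000
main = np.arange(1, n + 1, dtype=float)          # a large sparse symmetric test matrix
off = np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format='csr')

vals, vecs = eigsh(A, k=5, which='LA', ncv=40)   # five largest algebraic eigenvalues
print(vals)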

  45. [45]
    Using a Lanczos Eigensolver in the Computation of Empirical ...
    This paper focuses on this part of the EOF analysis, the computation of some singular values, and the associated vectors of the data matrix D . After recalling ...
  46. [46]
    [PDF] PARTITIONING SPARSE MATRICES WITH EIGENVECTORS OF ...
    The spectral algorithm for computing vertex separators considered in this paper has three features that distinguish it from previous algorithms thatare worthy ...<|separator|>
  47. [47]
    [PDF] Computing the smallest eigenpairs of the graph Laplacian - Unipd
    Convergence to the smallest eigenvalues is much slower. Hence, to compute the leftmost part of the spectrum, it is more usual to apply the Lanczos process to ...
  48. [48]
    [PDF] Lanczos type method for computing PageRank - Keio University
    Mar 9, 2015 · In this paper, a new algorithm for computing the. PageRank vector is proposed, using a combination of the Lanczos bi-orthogonalization algorithm ...
  49. [49]
    [PDF] Local Lanczos Spectral Approximation for Community Detection
    There are three key steps in the proposed algorithm: heat kernel sampling, local Lanczos spectral approximation and community boundary truncation. The heat ...
  50. [50]
    [PDF] 7 Exact Diagonalization and Lanczos Method - Erik Koch J¨ulich ...
    Exact diagonalization and Lanczos method are discussed, with exact diagonalization including variational principles and matrix eigenvalue problem. Lanczos ...
  51. [51]
    A tale of two vectors: A Lanczos algorithm for calculating RPA mean ...
    Jan 3, 2022 · It belongs to the group of molecular properties in quantum chemistry called sum-over-states properties, whose computation requires calculating ...
  52. [52]
    LAPACK: dstev
    DSTEV computes all eigenvalues and, optionally, eigenvectors of a !> real symmetric tridiagonal matrix A. !> Parameters. [in], JOBZ !> JOBZ is CHARACTER*1 ...Missing: xSTEV | Show results with:xSTEV
  53. [53]
    [PDF] ARPACK Users Guide
    May 5, 2025 · hen the matrix A is symmetric it reduces to a variant of the Lanczos process called the Implicitly Restarted Lanczos Method (IRLM ) . These ...
  54. [54]
    eigs - Subset of eigenvalues and eigenvectors - MATLAB
    If you do not specify a 'SubspaceDimension' value, then the default algorithm uses at least 20 Lanczos vectors. For problems where eigs fails to converge ...
  55. [55]
    Numerical comparison of iterative eigensolvers for large sparse ...
    The comparison is performed on a number of matrices, with dimensions up to a few hundred thousands, arising from the discretization of the diffusion equation by ...
  56. [56]
    Summary of improvements since LAPACK 3.0 - The Netlib
    Nov 14, 2017 · 1. Improvements in LAPACK 3.1.0 · Hessenberg QR algorithm with the small bulge multi-shift QR algorithm together with aggressive early deflation.
  57. [57]
    LAPACK Users' Guide | SIAM Publications Library
    Release 3.0 of LAPACK introduces new routines and extends the functionality of existing routines. The most significant new routines and functions include: 1 ...<|separator|>
  58. [58]
    eigsh — SciPy v1.16.2 Manual
    In this case, eigsh() will call eigs() and return the real parts of the eigenvalues thus obtained. Parameters: Andarray, sparse matrix or LinearOperator. A ...1.12.0 · 1.13.0 · Scipy.sparse.linalg.eigsh · 1.15.0
  59. [59]
    [PDF] SLEPc Users Manual
    This manual describes SLEPc, the Scalable Library for Eigenvalue Problem Computations, a software package for the solution of large sparse eigenproblems on ...
  60. [60]
    [PDF] A note on the GPU acceleration of eigenvalue computations
    We present improved implementations in CUDA and OpenCL, which are now available in ViennaCL, PETSc and SLEPc, and demonstrate an up to two-fold performance gain ...
  61. [61]
    zachtheyek/Lanczos-Algorithm: Python implementation of ... - GitHub
    The Lanczos algorithm is a direct algorithm devised by Cornelius Lanczos that is an adaptation of power methods to find the m "most useful" (tending towards ...
  62. [62]
    lanczos.py — TeNPy 1.0.7.dev85+234bd41 documentation
    lanczos.py . on github (download). """Simple Lanczos diagonalization routines. These functions can be used in d_dmrg.py and e_tdvp to replace the eigsh and ...
  63. [63]
    SVDL - IterativeSolvers.jl
    Compute some singular values (and optionally vectors) using Golub-Kahan-Lanczos bidiagonalization with thick restarting. If log is set to true is given, method ...
  64. [64]
    [PDF] An Overview of Trilinos
    Supports construction and use of parallel distributed memory precondition- ers such as overlapping Schwarz domain decomposition, Jacobi scaling and local Gauss- ...Missing: exascale | Show results with:exascale
  65. [65]
    Anasazi | Trilinos
    Anasazi is an extensible and interoperable framework for large-scale eigenvalue algorithms. The motivation for this framework is to provide a generic interface.Missing: Lanczos | Show results with:Lanczos