
Jacobi eigenvalue algorithm

The Jacobi eigenvalue algorithm is an iterative method for computing the eigenvalues and eigenvectors of a real symmetric matrix by applying a sequence of orthogonal plane rotations to progressively diagonalize the matrix, with the diagonal elements converging to the eigenvalues and the accumulated rotations yielding the eigenvectors. It operates exclusively on symmetric matrices, preserving symmetry throughout the process, and is particularly valued for its ability to achieve high relative accuracy in the computed eigenvalues.

Proposed by the German mathematician Carl Gustav Jacob Jacobi in 1846 as a hand-computation technique to solve systems of equations arising in the theory of secular perturbations in celestial mechanics, the algorithm was originally manual but laid the foundation for modern numerical linear algebra. It remained largely theoretical until the 1950s, when it was adapted for electronic computers by researchers including Goldstine, Murray, and von Neumann, sparking renewed interest in its practical implementation. Today, variants of the method are implemented in numerical libraries such as LAPACK, often for the singular value decomposition (SVD) via extensions to symmetric forms such as H = A^T A.

At its core, the method proceeds in sweeps: in each step it identifies a large off-diagonal element a_{pq} (the pivot; the classical variant picks the largest), computes a rotation angle \theta for the corresponding 2×2 submatrix to zero that element (using formulas like \tan(2\theta) = 2a_{pq}/(a_{pp} - a_{qq})), and applies the rotation to update the entire matrix and an accumulator matrix for the eigenvectors. Rotations cycle through off-diagonal pairs (e.g., row-wise or in a round-robin ordering), with convergence typically requiring fewer than 10 sweeps for matrices up to order 1000 and exhibiting quadratic reduction in the off-diagonal norm. The computational cost is O(n^3 \log\log(1/\epsilon)) for an n \times n matrix diagonalized to precision \epsilon, making it competitive with the QR algorithm for moderate sizes but slower for very large n without optimization.

Key strengths include its simplicity, which facilitates implementation and debugging, and its inherent parallelism, as rotations on disjoint index pairs can be performed simultaneously on modern architectures. It often delivers superior componentwise accuracy compared to other methods, especially for ill-conditioned matrices, and avoids the need for prior reduction to tridiagonal form. Applications span scientific computing, including principal component analysis, quantum mechanical simulations of molecular systems, and extensions to generalized eigenproblems or complex Hermitian matrices. Despite these advantages, it is less favored for extremely large-scale problems, where divide-and-conquer or randomized techniques predominate.

Overview and Background

Core Concept

The Jacobi eigenvalue algorithm is an iterative, rotation-based method designed to compute the eigenvalues and eigenvectors of a real symmetric matrix by transforming it into a diagonal form through successive plane rotations. This approach exploits the property that symmetric matrices possess real eigenvalues and can be diagonalized by an orthogonal similarity transformation, enabling the isolation of eigenvalues on the diagonal while preserving the matrix's spectral properties. The core process involves repeatedly applying Givens rotations (orthogonal transformations acting in a two-dimensional coordinate plane) to target and zero out individual off-diagonal elements of the matrix, progressively reducing its off-diagonal content until it becomes sufficiently diagonal. Each rotation zeroes a specific off-diagonal entry of the corresponding 2×2 submatrix and affects only the rows and columns involved, but the cumulative effect across sweeps reduces the off-diagonal norm, ultimately quadratically, while the product of the rotations tracks the evolving eigenvector basis. As the algorithm converges, the diagonal entries represent the eigenvalues, and the accumulated product of the rotation matrices yields the corresponding eigenvectors, which form an orthonormal basis. A principal advantage of the Jacobi algorithm lies in its asymptotically quadratic convergence, which ensures rapid and accurate determination of all eigenvalues, particularly for well-conditioned symmetric matrices whose eigenvalues are moderately separated. This convergence rate stems from the squared reduction in the sum of the off-diagonal elements' magnitudes per sweep in the asymptotic regime, making it reliable for obtaining high-precision results across the spectrum. In its basic workflow, the algorithm identifies the off-diagonal element with the largest magnitude (the pivot), computes the appropriate rotation angle to eliminate it, applies this rotation symmetrically to the matrix, and updates an eigenvector accumulator, iterating this selection-annihilation cycle until the off-diagonal norm falls below a predefined tolerance. This strategy prioritizes the most significant contributors to non-diagonality, enhancing efficiency and stability.
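
The selection-annihilation cycle fits in a few lines of NumPy. The sketch below is illustrative only; the helper name jacobi_rotation and the sign conventions are choices made here, not standard library API. It applies one rotation to zero a single off-diagonal entry while leaving the spectrum unchanged:

import numpy as np

def jacobi_rotation(A, p, q):
    """Return (c, s) so that the similarity J^T A J zeroes A[p, q], where
    J embeds the block [[c, s], [-s, c]] in the (p, q) plane."""
    if A[p, q] == 0.0:
        return 1.0, 0.0              # already zero: no rotation needed
    theta = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
    if theta == 0.0:
        t = 1.0                      # equal diagonals: rotate by pi/4
    else:
        t = np.sign(theta) / (abs(theta) + np.hypot(1.0, theta))
    c = 1.0 / np.sqrt(1.0 + t * t)
    return c, t * c

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 3.0, -1.0],
              [0.0, -1.0, 2.0]])
c, s = jacobi_rotation(A, 0, 1)
J = np.eye(3)
J[0, 0] = J[1, 1] = c
J[0, 1], J[1, 0] = s, -s
A1 = J.T @ A @ J
print(np.round(A1, 3))                                # entry (0, 1) is now zero
print(np.linalg.eigvalsh(A), np.linalg.eigvalsh(A1))  # spectra agree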

Historical Development

The Jacobi eigenvalue algorithm originated with the work of German mathematician Carl Gustav Jacob Jacobi, who introduced the method in 1846 as an iterative technique employing plane rotations to solve systems of linear equations encountered in astronomical computations, particularly the theory of secular perturbations. Although initially developed for equation solving, Jacobi applied the approach to compute the eigenvalues of real symmetric matrices by successively eliminating off-diagonal elements through rotations, carrying out the diagonalization by hand. This marked the first systematic use of rotation-based iterations for eigenvalue extraction, though the computational demands limited its practicality to small matrices at the time.

The method saw limited adoption in the late 19th and early 20th centuries, overshadowed by emerging direct techniques for eigenvalue problems, and was largely forgotten amid the rise of power methods and determinant-based approaches. Theoretical interest persisted sporadically, but no major advancements occurred until the computational landscape shifted post-World War II. By 1947, the method was independently implemented on mechanical desk calculators at the UK's National Physical Laboratory for solving symmetric eigenvalue problems, demonstrating its viability for moderately sized matrices in scientific applications.

The advent of electronic computers in the early 1950s catalyzed the algorithm's revival and widespread use. Pioneering implementations appeared on machines such as the ILLIAC I at the University of Illinois and the SEAC at the U.S. National Bureau of Standards, where the method's simplicity and numerical robustness proved advantageous for real symmetric problems. John von Neumann and collaborators, including Herman H. Goldstine and Francis J. Murray, contributed significantly during this period by analyzing convergence rates, rounding errors, and optimized pivot strategies in papers from 1947 to 1955, adapting the technique for high-order matrix inversion and eigenvalue computation on nascent computing hardware. Their work, including detailed studies on the Jacobi process for real symmetric matrices, established rigorous foundations for its automated execution.

A pivotal milestone came in 1965 with James H. Wilkinson's authoritative textbook The Algebraic Eigenvalue Problem, which systematically expounded the Jacobi method's theory, variants, and implementation details for symmetric tridiagonal and full matrices. Wilkinson highlighted its robustness against rounding errors compared to alternative reductions in certain cases, while providing proofs and practical guidelines that propelled its integration into standard numerical libraries. This publication not only popularized the method among researchers but also bridged its historical roots to modern computational practice, influencing software like EISPACK in the late 1960s.

Mathematical Prerequisites

Eigenvalues of Symmetric Matrices

A real matrix A is symmetric if it equals its transpose, A = A^T. Such matrices possess several key properties regarding their eigenvalues and eigenvectors: all eigenvalues are real numbers, and it is possible to select a complete set of orthonormal eigenvectors corresponding to these eigenvalues. These characteristics ensure that the eigensystem is well-behaved, with no complex eigenvalues or non-orthogonal eigenvector sets complicating the analysis. The spectral theorem provides a foundational result for real symmetric matrices: any such matrix is orthogonally diagonalizable. Specifically, there exists an orthogonal matrix Q (satisfying Q^T Q = I) and a diagonal matrix \Lambda such that A = Q \Lambda Q^T, where the diagonal entries of \Lambda are the real eigenvalues of A, and the columns of Q form an orthonormal basis of eigenvectors. This decomposition highlights the matrix's ability to be expressed in a basis aligned with its natural directions of stretching or compression, facilitating applications in diverse fields. Symmetric matrices commonly model physical systems, including observables in quantum mechanics (where real Hermitian operators correspond to symmetric matrices), stiffness and mass matrices in vibration analysis of structures, and covariance matrices in principal component analysis for data reduction. These contexts often require precise eigenvalue computations to interpret energy levels, natural frequencies, or variance directions. Numerical computation of eigenvalues for symmetric matrices faces challenges with ill-conditioned cases, where eigenvalues cluster closely or the condition number is large, leading to sensitivity to rounding errors in standard methods. In such scenarios, stable algorithms like the Jacobi method provide accurate results by preserving symmetry and orthogonality throughout the iteration, making them suitable for applications demanding high eigenvector accuracy. The Jacobi eigenvalue algorithm exploits these symmetric properties to achieve such accuracy through successive rotations.
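
The spectral theorem is easy to check numerically. A minimal sketch using NumPy's np.linalg.eigh, which calls LAPACK's symmetric drivers rather than a Jacobi routine, but verifies the same decomposition the Jacobi algorithm produces:

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2.0                          # symmetrize: A == A.T

w, Q = np.linalg.eigh(A)                     # real eigenvalues, orthogonal Q
print(np.allclose(Q.T @ Q, np.eye(4)))       # True: orthonormal eigenvectors
print(np.allclose(Q @ np.diag(w) @ Q.T, A))  # True: A = Q Lambda Q^T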

Givens Rotations

A Givens rotation, also known as a Jacobi rotation in this context, is an orthogonal transformation that operates on a pair of rows and columns indexed by p and q in an n \times n real symmetric matrix A, with the specific goal of zeroing out the off-diagonal element a_{pq}. This rotation is represented by an n \times n matrix G_{pq}(\theta) that is identical to the identity matrix except in the p-th and q-th rows and columns, where it takes the form of a rotation matrix: \begin{pmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{pmatrix} in the (p, q) subspace. The angle \theta is determined by the relation \tan(2\theta) = \frac{2 a_{pq}}{a_{pp} - a_{qq}}, ensuring that the transformed matrix A' = G_{pq}^T(\theta) A G_{pq}(\theta) has a'_{pq} = 0 while preserving the eigenvalues of A. This choice of \theta solves the equation derived from setting the off-diagonal entry to zero after the similarity transformation; when a_{pp} = a_{qq}, one takes \theta = \pi/4. Key properties of the Givens rotation make it integral to the Jacobi algorithm. It is orthogonal, satisfying G_{pq}^T(\theta) G_{pq}(\theta) = I, which implies that the transformation is a similarity and thus preserves the eigenvalues of the matrix. The symmetry of A is maintained in A', as symmetry is invariant under orthogonal conjugation. Furthermore, successive applications of such rotations accumulate into a full orthogonal matrix Q, whose columns converge to the eigenvectors of the original matrix. These rotations also leave the Frobenius norm invariant, providing a measure of progress by shifting weight from the off-diagonal elements onto the diagonal. Givens rotations are particularly suitable for the Jacobi eigenvalue algorithm because they localize their effect to the selected indices p and q, minimally disturbing elements distant from these positions. This targeted zeroing enables an iterative process that progressively diagonalizes the matrix without introducing widespread numerical instability, leveraging the structure of symmetric matrices to compute eigenvalues and eigenvectors efficiently.
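
The formula for \theta follows from a short computation on the 2×2 pivot block. Writing c = \cos\theta, s = \sin\theta and expanding the similarity transformation gives

a'_{pq} = a_{pq} \cos 2\theta + \tfrac{1}{2}(a_{pp} - a_{qq}) \sin 2\theta,

and setting a'_{pq} = 0 yields \tan(2\theta) = 2a_{pq}/(a_{qq} - a_{pp}), which is the stated relation up to the sign of \theta (the sign flips with the transposed embedding of the rotation block, and both conventions appear in the literature). In floating-point practice one never forms \theta explicitly; instead t = \tan\theta is computed as the smaller-magnitude root of t^2 + 2\tau t - 1 = 0 with \tau = \cot 2\theta, from which c = 1/\sqrt{1+t^2} and s = tc, as in the pseudocode given under Implementation Details.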

Algorithm Description

Classical Jacobi Method

The classical Jacobi method is an iterative procedure designed to compute the eigenvalues and eigenvectors of a real symmetric matrix by successively applying orthogonal similarity transformations that reduce the off-diagonal elements to near zero. It operates on an n \times n symmetric matrix A, leveraging plane rotations to diagonalize A while preserving its eigenvalues. This approach ensures numerical stability due to the orthogonality of the transformations, and symmetry is maintained throughout the process. The algorithm initializes with A^{(0)} = A, the input symmetric matrix, and an n \times n identity matrix Q = I to accumulate the product of rotations for eigenvector computation. Iterations are organized into sweeps, where each sweep processes all unique off-diagonal pairs (i,j) with i < j exactly once, typically in a cyclic order such as row-wise. For each pair, a Givens rotation G is computed to zero the element a_{ij}^{(k)} in the current iterate A^{(k)}; this rotation is then applied via the similarity transformation A^{(k+1)} = G^T A^{(k)} G, and Q is updated as Q^{(k+1)} = Q^{(k)} G to track the accumulated orthogonal matrix. A full sweep thus involves n(n-1)/2 such rotations, progressively concentrating the matrix's "mass" on the diagonal. The Givens rotation parameters are determined using the standard formula for the (i,j) pivot block, ensuring the updated a_{ij} becomes zero while preserving symmetry; the update formulas for the affected entries are given below. Convergence is monitored by checking the maximum absolute off-diagonal element after each sweep. The iteration terminates when \max_{i \neq j} |a_{ij}^{(k)}| < \epsilon, where \epsilon is a tolerance often set to machine epsilon times a measure of the matrix's scale, such as \operatorname{trace}(A)/n, to account for the matrix's magnitude and floating-point precision. Upon termination, the diagonal entries of A^{(k)} yield the eigenvalues (in no particular order; they are typically sorted afterward together with the columns of Q^{(k)}, which form the corresponding eigenvectors). Because each transformation is orthogonal, the computed eigenvalues are those of a symmetric matrix within the off-diagonal norm of the final iterate, and the method provides accurate results for well-conditioned problems, with empirical evidence showing rapid quadratic convergence in the final stages.
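
For reference, with c = \cos\theta, s = \sin\theta, and t = s/c chosen as above, only rows and columns i and j change under a rotation in the (i,j) plane, and (in the sign convention of the pseudocode below) the affected entries update as

a'_{ii} = a_{ii} - t a_{ij}, \quad a'_{jj} = a_{jj} + t a_{ij}, \quad a'_{ij} = a'_{ji} = 0,
a'_{ik} = c a_{ik} - s a_{jk}, \quad a'_{jk} = s a_{ik} + c a_{jk} \quad (k \neq i, j),

with the symmetric entries a'_{ki} and a'_{kj} updated identically. Two sanity checks follow directly: a'_{ii} + a'_{jj} = a_{ii} + a_{jj} (the trace is preserved), and a'^2_{ii} + a'^2_{jj} = a^2_{ii} + a^2_{jj} + 2a^2_{ij}, showing that each rotation moves exactly the weight 2a_{ij}^2 from the off-diagonal onto the diagonal.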

Variant Choices: One-Sided vs. Two-Sided

The two-sided variant of the Jacobi eigenvalue algorithm applies orthogonal rotations simultaneously to both the left and right sides of the matrix, transforming a symmetric matrix A via A \leftarrow G^T A G, where G is a Givens rotation matrix. This approach preserves the symmetry of A throughout the iteration process, making it particularly suitable for computing the full eigendecomposition of real symmetric or complex Hermitian matrices. By zeroing out off-diagonal elements in pairs, the method ensures that the transformed matrix remains symmetric at each step, facilitating quadratic convergence per sweep under appropriate ordering strategies. In contrast, the one-sided variant applies rotations only to one side of the matrix, typically updating A \leftarrow A G for a general matrix A, without inherently preserving symmetry. This formulation is commonly employed in the computation of the singular value decomposition of arbitrary matrices, where it implicitly diagonalizes A^T A to obtain the right singular vectors and singular values, with left singular vectors recoverable via additional steps such as U = A V \Sigma^{-1}. The one-sided method, introduced by Hestenes, avoids the need for bilateral updates, simplifying implementation for non-symmetric cases but requiring careful handling of numerical stability when the matrix is ill-conditioned or singular. Key trade-offs between the variants arise in convergence speed and computational overhead. The two-sided approach generally converges faster for symmetric eigenvalue problems, as it directly exploits symmetry to reduce the squared off-diagonal norm by a_{pq}^2 + a_{qp}^2 per rotation pair, leading to higher accuracy in fewer iterations for dense symmetric matrices. However, it demands maintenance of symmetry, which can increase floating-point operations compared to the one-sided method's unilateral updates. The one-sided variant is computationally simpler and more efficient for SVD computation on general matrices, but it may exhibit slower convergence when adapted to eigenvalue problems, as it does not leverage symmetry and can require more sweeps to achieve comparable accuracy. Experimental evaluations on small to medium-sized matrices confirm that two-sided implementations often achieve shorter runtimes and superior relative accuracy for symmetric cases. Selection of the variant depends on the problem structure: the two-sided Jacobi is preferred for Hermitian eigenvalue problems where symmetry preservation ensures robust and efficient full decomposition. Conversely, the one-sided variant serves as a foundational step in SVD algorithms, such as the Hestenes-Jacobi method, making it ideal for general rectangular matrices in applications like least-squares solving or low-rank approximation.
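
A compact sketch of the one-sided (Hestenes-style) variant, written from the description above rather than from any library source; the helper name and tolerance handling are choices made for illustration, and the final normalization assumes A has full column rank:

import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Orthogonalize column pairs of A with plane rotations, implicitly
    diagonalizing A^T A; returns U, sigma, V with A ~ U @ diag(sigma) @ V.T."""
    U = np.array(A, dtype=float)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        rotated = False
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]      # (p, q) entry of A^T A
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue                   # columns already orthogonal
                zeta = (beta - alpha) / (2.0 * gamma)
                t = 1.0 if zeta == 0.0 else np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = t * c
                R = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ R   # rotate the column pair
                V[:, [p, q]] = V[:, [p, q]] @ R   # accumulate right vectors
                rotated = True
        if not rotated:
            break
    sigma = np.linalg.norm(U, axis=0)          # singular values
    return U / sigma, sigma, V                 # normalize left vectors

Because the rotations act on columns only, the inner loop parallelizes naturally over disjoint pairs, which is the property exploited by the block and GPU variants discussed later.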

Implementation Details

Pseudocode Outline

The classical Jacobi eigenvalue algorithm can be implemented by iteratively applying plane rotations to a symmetric matrix until its off-diagonal elements are sufficiently small, with the diagonal entries converging to the eigenvalues and an accumulated orthogonal matrix providing the eigenvectors. The following pseudocode outlines the core procedure for an n \times n real symmetric input matrix A, assuming double-precision floating-point arithmetic and a tolerance \epsilon > 0 for the maximum off-diagonal entry. This implementation follows the classical variant, which selects the pair of indices corresponding to the largest off-diagonal magnitude at each step to ensure efficient convergence.
function jacobi_eigenvalues(A: symmetric n x n matrix, tol: float) -> (eigenvalues: vector, eigenvectors: n x n matrix)
    if n == 1:
        return [A[1,1]], eye(1)  // Trivial case: single eigenvalue, identity eigenvector
    
    Q = eye(n)  // Initialize orthogonal matrix for eigenvectors
    A_current = copy(A)  // Working copy of symmetric matrix
    
    max_offdiag = maximum(|A_current[i,j]| for i < j)
    rotations = 0
    max_rotations = 100 * n * n  // Heuristic safeguard; the classical variant performs one rotation per loop iteration
    
    while max_offdiag > tol and rotations < max_rotations:
        // Find indices p < q maximizing |A_current[p,q]|
        (p, q) = argmax_{i<j} |A_current[i,j]|
        
        if |A_current[p,q]| <= tol:
            break
        
        apq = A_current[p,q]
        app = A_current[p,p]
        aqq = A_current[q,q]
        
        // Stable computation of rotation parameters (Rutishauser-style; avoids
        // cancellation and yields theta = pi/4 when the diagonals are equal)
        theta = (aqq - app) / (2 * apq)
        if theta == 0:
            t = 1.0  // Equal diagonals: rotate by pi/4
        else if theta*theta would overflow:
            t = 1.0 / (2 * theta)  // First-order root for very large |theta|
        else:
            t = copysign(1.0, theta) / (abs(theta) + sqrt(1 + theta*theta))  // Smaller-magnitude root of t^2 + 2*theta*t - 1 = 0
        
        c = 1.0 / sqrt(1 + t*t)
        s = t * c
        
        // Apply right multiplication A := A J (update columns p and q)
        for i = 1 to n:
            temp1 = c * A_current[i,p] - s * A_current[i,q]
            A_current[i,q] = s * A_current[i,p] + c * A_current[i,q]
            A_current[i,p] = temp1
        
        // Apply left multiplication A := J^T A (update rows p and q)
        for k = 1 to n:
            temp1 = c * A_current[p,k] - s * A_current[q,k]
            temp2 = s * A_current[p,k] + c * A_current[q,k]
            A_current[p,k] = temp1
            A_current[q,k] = temp2
        
        // Force symmetry and zero the pivot (numerical cleanup)
        A_current[p,q] = 0
        A_current[q,p] = 0
        
        // Accumulate Q := Q J (update columns p and q of Q)
        for i = 1 to n:
            temp1 = c * Q[i,p] - s * Q[i,q]
            Q[i,q] = s * Q[i,p] + c * Q[i,q]
            Q[i,p] = temp1
        
        // Update max_offdiag for the next iteration (O(n^2) scan; see Optimization Techniques for cheaper bookkeeping)
        max_offdiag = maximum(|A_current[i,j]| for i < j)
        rotations += 1
    
    // Extract eigenvalues from diagonal (unsorted)
    eigenvalues = [A_current[i,i] for i = 1 to n]
    
    // Optional: Sort eigenvalues and corresponding eigenvectors in decreasing order
    // indices = argsort(eigenvalues, descending)
    // eigenvalues = eigenvalues[indices]
    // eigenvectors = Q[:, indices]
    
    return eigenvalues, Q
This pseudocode emphasizes key elements such as the outer loop conditioned on the maximum off-diagonal magnitude (updated after each rotation) and the accumulation of the orthogonal matrix Q via right-multiplication by each rotation J(p,q,\theta), ensuring A = Q \Lambda Q^T at convergence, where \Lambda is diagonal. A heuristic upper bound on the number of rotations serves as a practical safeguard, as the method exhibits quadratic convergence, typically within a small number of sweeps for well-conditioned symmetric matrices of moderate size. For positive definite matrices, convergence is often faster due to the bounded and positive eigenvalues, reducing the number of required rotations. The output eigenvalues are the final diagonal entries, which may be sorted in decreasing order along with the corresponding eigenvector columns for standard presentation.
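
For concreteness, here is a direct NumPy rendering of the pseudocode. It is a sketch, not library code: it forms the rotation J explicitly and uses full matrix products per step for clarity, where a production implementation would perform the O(n) in-place row and column updates shown above.

import numpy as np

def jacobi_eigenvalues(A, tol=1e-12):
    """Classical Jacobi sketch; returns (w, Q) with A ~ Q @ diag(w) @ Q.T."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q = np.eye(n)
    if n == 1:
        return np.diag(A).copy(), Q
    iu = np.triu_indices(n, k=1)
    for _ in range(100 * n * n):            # heuristic safeguard, as above
        k = np.argmax(np.abs(A[iu]))        # classical pivot: largest |a_pq|
        p, q = iu[0][k], iu[1][k]
        if abs(A[p, q]) <= tol:
            break
        theta = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
        t = 1.0 if theta == 0.0 else np.sign(theta) / (abs(theta) + np.hypot(1.0, theta))
        c = 1.0 / np.sqrt(1.0 + t * t)
        s = t * c
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J                     # two-sided similarity update
        A[p, q] = A[q, p] = 0.0             # numerical cleanup of the pivot
        Q = Q @ J                           # accumulate eigenvectors
    return np.diag(A).copy(), Q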

Numerical Example

To illustrate the execution of the classical Jacobi eigenvalue algorithm, consider the 3×3 symmetric matrix A = \begin{pmatrix} 4 & -1 & 0 \\ -1 & 3 & -1 \\ 0 & -1 & 2 \end{pmatrix}. The exact eigenvalues of this matrix are 3, 3 + \sqrt{3} \approx 4.732, and 3 - \sqrt{3} \approx 1.268. The algorithm proceeds by successively applying plane rotations to zero the off-diagonal elements, starting with a cyclic sweep over the pairs (1,2), (1,3), and (2,3). The rotation matrices are accumulated in an orthogonal matrix Q, initialized as the identity, such that the final A \approx Q \Lambda Q^T, where \Lambda is diagonal with the eigenvalues on the diagonal. The columns of Q are the corresponding eigenvectors. (The signs of intermediate off-diagonal entries, and of the eigenvector columns, depend on the sign convention adopted for each rotation; the values below follow one consistent choice.)

For the first rotation on the (1,2) pair, the off-diagonal element a_{12} = -1 is the pivot. The rotation angle is \theta \approx 0.554 radians, with \cos\theta \approx 0.851 and \sin\theta \approx 0.526 (using the stable computation t = 1/(w + \sqrt{1 + w^2}) \approx 0.618, where w = (a_{22} - a_{11})/(2 a_{12}) = 0.5). The rotation matrix J has the 2×2 block \begin{pmatrix} 0.851 & 0.526 \\ -0.526 & 0.851 \end{pmatrix} in the (1,2) plane, with 1's elsewhere on the diagonal. Applying A^{(1)} = J^T A J yields A^{(1)} \approx \begin{pmatrix} 4.618 & 0 & -0.526 \\ 0 & 2.382 & -0.851 \\ -0.526 & -0.851 & 2 \end{pmatrix}, where a_{12} is now zero (up to machine precision).

The sweep continues with the (1,3) pair, where |a_{13}| \approx 0.526. Here, w \approx 2.490, t \approx 0.193, \theta \approx 0.191 radians, \cos\theta \approx 0.982, \sin\theta \approx 0.190. The updated matrix is A^{(2)} \approx \begin{pmatrix} 4.720 & -0.162 & 0 \\ -0.162 & 2.382 & -0.836 \\ 0 & -0.836 & 1.899 \end{pmatrix}. Next, the (2,3) pair has |a_{23}| \approx 0.836. For this rotation, w \approx 0.289, t \approx 0.752, \theta \approx 0.646 radians, \cos\theta \approx 0.799, \sin\theta \approx 0.601. The matrix after this rotation is A^{(3)} \approx \begin{pmatrix} 4.720 & -0.129 & 0.097 \\ -0.129 & 3.010 & 0 \\ 0.097 & 0 & 1.270 \end{pmatrix}.

A second sweep repeats the process on the now-smaller off-diagonals, further reducing them (e.g., the maximum |a_{ij}| drops below 0.1 after the next (1,2) rotation). After 2–3 full sweeps (typically 6–9 rotations for n=3), the matrix is sufficiently diagonal, with entries approximately [4.732, 3.000, 1.268], matching the exact eigenvalues. The accumulated Q at convergence satisfies A \approx Q \Lambda Q^T (verifiable numerically to within 10^{-10} relative error using standard linear algebra libraries). The eigenvectors are the columns of Q, orthonormal by construction. This example demonstrates how the method systematically deflates off-diagonals while preserving symmetry and accumulating the modal matrix.
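
The end state of this walkthrough is easy to reproduce. Assuming the jacobi_eigenvalues sketch from the previous section is in scope (a hypothetical helper, not a library routine):

import numpy as np

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 3.0, -1.0],
              [0.0, -1.0, 2.0]])

w, Q = jacobi_eigenvalues(A)                 # sketch from the previous section
print(np.sort(w))                            # ~ [1.268, 3.000, 4.732]
print(np.linalg.eigvalsh(A))                 # cross-check: 3 -+ sqrt(3), 3
print(np.allclose(Q @ np.diag(w) @ Q.T, A))  # True: A = Q Lambda Q^T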

Convergence Analysis

Theoretical Guarantees

The Jacobi eigenvalue algorithm for real symmetric matrices demonstrates quadratic convergence in the sense that, once the off-diagonal elements are sufficiently small, the squared Frobenius norm of the off-diagonal part decreases quadratically toward zero. A key measure of progress is the off-diagonal Frobenius norm squared, defined as E(A) = \sum_{i \neq j} a_{ij}^2, which quantifies the deviation from diagonality. Each Jacobi rotation applied to the largest off-diagonal element (in the classical variant) strictly decreases E(A) by exactly 2a_{pq}^2, where a_{pq} is the targeted element, ensuring monotonic progress toward a diagonal matrix. Because the classical pivot is the largest of the n(n-1) off-diagonal entries, each rotation satisfies E(A^{(k+1)}) \leq \rho E(A^{(k)}) with \rho = 1 - \frac{2}{n(n-1)} < 1, establishing linear convergence in E with a factor independent of the matrix entries (the derivation is sketched below). The total number of rotations required to reduce E(A) below a tolerance \varepsilon is bounded by \frac{\log(E(A^{(0)})/\varepsilon)}{\log(1/\rho)}, which scales logarithmically with the desired precision; since \log(1/\rho) \approx 2/(n(n-1)), this amounts to O(n^2) rotations per constant-factor reduction. In practice, experience shows that 5 to 20 sweeps suffice for most symmetric matrices of practical interest, regardless of dimension. Upon convergence, when E(\hat{A}) \leq \varepsilon, the eigenvalues (given by the diagonal entries) are accurate in the sense that \sum_{i=1}^n (\lambda_i - \hat{\lambda}_i)^2 \leq \varepsilon, a consequence of the Hoffman–Wielandt inequality, allowing eigenvalues to be computed to machine precision by setting \varepsilon appropriately small relative to the norm of A. The eigenvectors, accumulated via the product of rotation matrices, converge to the exact ones for simple eigenvalues, but their accuracy is governed by standard perturbation theory and depends on the conditioning of the matrix, particularly the eigenvalue gaps.
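
The per-rotation bound follows in two lines: since a_{pq}^2 is the largest of the n(n-1) squared off-diagonal entries, a_{pq}^2 \geq E(A)/(n(n-1)), so

E(A') = E(A) - 2a_{pq}^2 \leq \left(1 - \frac{2}{n(n-1)}\right) E(A) = \rho E(A),

and iterating gives E(A^{(N)}) \leq \rho^N E(A^{(0)}); solving \rho^N E(A^{(0)}) \leq \varepsilon for N yields the logarithmic iteration bound stated above.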

Factors Affecting Speed

The speed of the Jacobi eigenvalue algorithm is influenced by the strategy for selecting off-diagonal elements to zero at each iteration. In the classical variant, the pair with the largest absolute off-diagonal entry is chosen, which accelerates convergence by prioritizing the most significant perturbations but incurs an O(n^2) cost per rotation to identify the maximum (unless cached maxima are maintained, as discussed under Optimization Techniques). Conversely, the cyclic approach follows a predetermined order, such as sweeping through all upper-triangular pairs row by row, eliminating the search overhead and enabling O(1) selection per step, though it typically demands more rotations (often by a constant factor) for equivalent accuracy. This makes the classical method preferable for smaller matrices where search costs are manageable, while cyclic variants suit larger-scale implementations prioritizing simplicity and parallelism. Matrix properties play a crucial role in determining the number of sweeps required for convergence. For nearly diagonal symmetric matrices, where off-diagonal elements are already small relative to the diagonal, the algorithm requires only a few sweeps, as the initial Frobenius norm of the off-diagonals is low, leading to rapid reduction. In contrast, matrices with clustered eigenvalues exhibit slower progress, as plane rotations intended to zero one off-diagonal may inadvertently couple nearby eigenvalues, necessitating additional iterations to isolate them without reintroducing perturbations. These structural characteristics can vary the iteration count substantially, with well-separated eigenvalues generally yielding faster overall execution compared to densely coupled spectra. The choice of convergence tolerance directly impacts both accuracy and runtime. A tolerance near machine epsilon, approximately 10^{-16} for double-precision arithmetic, ensures eigenvalues accurate to working precision but may prolong computation if the matrix requires many sweeps to meet this strict criterion. Setting a looser threshold risks premature termination and suboptimal eigenvalue estimates, particularly for ill-conditioned problems, while tightening beyond machine precision offers no benefit due to rounding errors. Sweep strategies further modulate practical performance by balancing thoroughness and efficiency. Full sweeps process all off-diagonal pairs in sequence before reassessing convergence, guaranteeing global reduction in the off-diagonal norm but potentially wasting effort on negligible elements late in the process. Partial sweeps, which target only the largest remaining off-diagonals, or threshold variants that skip rotations whose pivots fall below the current tolerance, can reduce total rotations while preserving the algorithm's quadratic convergence. These approaches are especially effective in later stages when the matrix is close to diagonal, allowing early termination without compromising reliability.
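
A threshold-skipping cyclic sweep takes only a few lines. The sketch below assumes an apply_rotation helper performing the in-place O(n) update from the pseudocode section (a hypothetical name, introduced here for illustration):

def cyclic_sweep(A, Q, thresh, apply_rotation):
    """One row-cyclic sweep: visit every pair (p, q) in fixed order and
    rotate only when the pivot exceeds the threshold."""
    n = A.shape[0]
    skipped = 0
    for p in range(n - 1):
        for q in range(p + 1, n):
            if abs(A[p, q]) <= thresh:
                skipped += 1          # negligible pivot: no rotation performed
            else:
                apply_rotation(A, Q, p, q)
    return skipped                    # all pairs skipped => the sweep has converged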

Computational Efficiency

Time and Space Complexity

The Jacobi eigenvalue algorithm's computational cost is dominated by the repeated application of plane rotations to progressively diagonalize the symmetric input matrix A \in \mathbb{R}^{n \times n}. Each individual rotation, which targets a specific off-diagonal pair (p, q), requires updating the relevant rows and columns of A while preserving symmetry, along with an optional update to the eigenvector accumulator matrix Q (initialized as the identity). The cost of applying a single Jacobi rotation G to both A (yielding A' = G^T A G) and Q (yielding Q' = Q G) is approximately 8n + O(1) floating-point operations (flops): roughly 4n flops each for the update of A (exploiting symmetry) and for the full orthogonal update of two columns of Q, plus O(1) flops to compute the rotation parameters (the cosine and sine values). A full sweep of the algorithm cyclically applies rotations to all n(n-1)/2 unique off-diagonal pairs, ensuring each is processed once. With each rotation costing O(n) flops, the total expense per sweep is O(n^3), specifically approximately 4n^3 flops in the cyclic variant when symmetry is exploited and eigenvectors are accumulated. In practice, convergence is achieved after a small fixed number of sweeps (typically 3 to 6 for well-conditioned matrices), yielding an overall time complexity of O(n^3). However, in the worst case, the number of sweeps required can reach O(n), leading to a total complexity of O(n^4). Regarding space complexity, the algorithm requires O(n^2) storage for the symmetric matrix A (which can be represented compactly using n(n+1)/2 entries) and an additional O(n^2) for the orthogonal eigenvector matrix Q, for a total of O(n^2) space. Temporary storage for rotation parameters and intermediate computations adds only O(1) overhead. Compared to the QR algorithm, which achieves O(n^3) total flops with strong asymptotic efficiency, the Jacobi method is generally slower due to its iterative sweeps, but it offers greater robustness and accuracy when computing the full set of eigenvectors, particularly for ill-conditioned problems.
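
Putting the counts together:

\underbrace{\tfrac{1}{2}n(n-1)}_{\text{rotations per sweep}} \times \underbrace{O(n)}_{\text{flops per rotation}} = O(n^3) \text{ flops per sweep},

so the total work is O(n^3) times the number of sweeps: \Theta(n^3) when the sweep count is bounded by a constant, as is typical, and O(n^4) in the worst case.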

Optimization Techniques

To reduce the computational overhead of the Jacobi eigenvalue algorithm, particularly the costly search for the largest off-diagonal element in each rotation step, several optimization techniques have been developed that maintain the method's quadratic convergence while improving practical efficiency. These strategies focus on efficient pivot selection, parallelization, and handling of converged components, often amortizing search costs across sweeps and enabling concurrent operations on modern hardware.

A key optimization for the classical (greedy) variant involves caching the largest off-diagonal element per row, which eliminates the need for a full O(n^2) scan to identify the global maximum pivot at every rotation. Instead, the maximum among the n row maxima can be found in O(n) time, and after applying a rotation to rows p and q, only the affected caches need refreshing in O(n) time (a sketch of this bookkeeping follows below). With approximately n^2/2 rotations per sweep, this amortizes the total search cost to O(n^3) per sweep, matching the order of the rotation costs and making the approach viable for larger matrices without sacrificing the greedy selection's rapid reduction of the off-diagonal Frobenius norm.

The cyclic Jacobi method further simplifies pivot selection by eliminating searches altogether, applying rotations in a fixed order, such as cycling through all pairs involving row k for k = 1 to n, rather than targeting the largest off-diagonal element each time. This row-cyclic strategy avoids the O(n^2) comparison overhead of the classical method while preserving similar convergence properties, typically requiring only a few sweeps (often fewer than 10 for n up to 1000) to achieve machine precision, and it facilitates better locality and predictability in implementations.

For parallelization, the Jacobi algorithm lends itself naturally to concurrent execution by assigning disjoint rotation pairs to separate threads or processors, allowing up to n/2 simultaneous updates per sweep since rotations on non-overlapping indices commute. Threshold-based variants enhance load balancing by skipping rotations where off-diagonal elements fall below a small tolerance, reducing idle time on processors and improving scalability on multiprocessor systems; for instance, implementations on systolic arrays or distributed-memory architectures have demonstrated near-linear speedup for dense symmetric matrices up to moderate sizes. Block Jacobi methods extend this parallelism by applying rotations to larger 2r × 2r subblocks via techniques like symmetric QR decompositions, avoiding numerous small 2×2 rotations and enabling more granular task distribution across processors when n/(2r) ≥ p (with p processors). This approach is particularly effective for very large matrices, as it reduces communication overhead in parallel settings while maintaining the algorithm's numerical stability.

Deflation techniques further optimize by identifying and isolating rows or columns with sufficiently small off-diagonal elements (e.g., below a tolerance like machine epsilon times the matrix norm), treating the corresponding diagonal entry as a converged eigenvalue and effectively reducing the active dimension. This early deflation prevents unnecessary rotations on nearly diagonal components, accelerating convergence in later sweeps and conserving arithmetic operations, especially in matrices with clustered eigenvalues.
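
The cached-pivot bookkeeping can be sketched as follows; this is an illustrative fragment (the cache layout and helper names are choices made here). One subtlety the prose glosses over: a rotation changes rows p and q everywhere, but it also touches columns p and q of every other row, so those rows' caches need a cheap comparison, with a full rescan only when a cached maximum itself lived in the changed columns:

import numpy as np

def refresh_row_cache(A, i, cache):
    """Full O(n) rescan of row i's off-diagonal entries; cache[i] = (value, column)."""
    n = A.shape[0]
    j = max((k for k in range(n) if k != i), key=lambda k: abs(A[i, k]))
    cache[i] = (abs(A[i, j]), j)

def update_caches_after_rotation(A, p, q, cache):
    """Amortized cache maintenance after rotating the (p, q) pair."""
    refresh_row_cache(A, p, cache)
    refresh_row_cache(A, q, cache)
    for i in range(A.shape[0]):
        if i in (p, q):
            continue
        val, j = cache[i]
        if j in (p, q):
            refresh_row_cache(A, i, cache)   # the cached entry itself changed
        else:
            for col in (p, q):               # only these two entries of row i moved
                if col != i and abs(A[i, col]) > val:
                    val, j = abs(A[i, col]), col
            cache[i] = (val, j)

The pivot is then int(np.argmax([v for v, _ in cache.values()])), an O(n) selection in place of an O(n^2) scan; the occasional full rescans keep the cost amortized near O(n) per rotation.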

Applications

In Scientific Computing

The Jacobi eigenvalue algorithm plays a significant role in scientific computing as an alternative to the QR algorithm for solving dense symmetric eigenvalue problems, particularly when computing the full set of eigenvalues and eigenvectors is essential. Unlike the QR method, which relies on orthogonal transformations to reduce the matrix to tridiagonal form before iterating, the Jacobi approach uses successive plane rotations to directly diagonalize the matrix, offering advantages in scenarios requiring high relative accuracy for small or clustered eigenvalues. This makes it suitable for dense matrices where the QR algorithm may introduce larger relative errors in the smaller eigenvalues of positive definite systems. For instance, studies have shown that with an appropriate stopping criterion based on off-diagonal elements, Jacobi achieves uniformly better relative accuracy for small eigenvalues compared to standard QR implementations. In modern scientific libraries, the Jacobi algorithm is integrated as a robust option for symmetric eigensystems, notably through the DSYEVJ routine distributed via NetLib/LAPACK, which implements a one-sided Jacobi method following an initial symmetric indefinite decomposition. This routine is designed for high precision and is particularly effective for matrices up to moderate sizes (e.g., n ≤ 2000), where it can serve as a reliable driver for full eigendecomposition. Additionally, the method's rotation-based structure allows it to act as a building block in hybrid eigensolvers, for example handling dense subproblems that arise within divide-and-conquer or iterative schemes. Its availability in standard numerical toolkits underscores its value for tasks demanding orthogonal eigenvectors without the deflation issues common in partial QR computations. The algorithm exhibits superior numerical stability for symmetric matrices with nearly degenerate eigenvalues. Jacobi's systematic zeroing of off-diagonal elements via rotations maintains eigenvector orthogonality and bounds error growth even in ill-conditioned cases, as demonstrated in analyses of Hermitian matrices with clustered spectra. This stability stems from the method's backward error properties, where perturbations are small relative to the matrix's Frobenius norm, making it preferable for comprehensive eigenanalysis. Historically, the Jacobi algorithm, proposed in 1846, has served as a foundational technique for understanding rotation-based eigensolvers in fields like quantum chemistry, where early computations of molecular Hamiltonians benefited from its direct approach to small dense systems, and structural mechanics, aiding analysis of vibration modes in finite element models. Its emphasis on plane rotations influenced subsequent developments in both domains, providing a conceptual basis for handling symmetric problems before the dominance of QR and iterative methods in the mid-20th century. In structural mechanics, for example, Jacobi was among the methods evaluated for large-order eigenvalue problems in the 1970s, highlighting its role in early simulations.

Real-World Use Cases

The Jacobi eigenvalue algorithm finds application in vibration analysis of mechanical structures, where it computes the eigenvalues representing natural frequencies from the generalized eigenvalue problem formed by the mass and stiffness matrices. This approach is particularly useful for determining mode shapes and frequencies in modal analysis, enabling engineers to predict and mitigate resonance in designs such as bridges or rotating machine components. For instance, a generalized Jacobi iteration solves the eigenvalue problem directly, without reduction to standard form, providing accurate results for large structural systems.

In principal component analysis (PCA), the Jacobi algorithm performs eigen-decomposition of the covariance matrix to identify principal components for dimensionality reduction in datasets. It iteratively applies rotations to diagonalize the symmetric covariance matrix, yielding eigenvalues that indicate the variance explained by each component and eigenvectors defining the transformation directions. This method is implemented in computational frameworks for efficient processing of high-dimensional data, such as in machine-learning preprocessing, where it ensures orthogonal components for feature extraction (a sketch follows below).

The algorithm is employed in quantum chemistry to solve the time-independent Schrödinger equation for molecular orbitals by diagonalizing symmetric Hamiltonian-type matrices, with eigenvalues corresponding to energy levels. In computational practice, Jacobi rotations iteratively zero off-diagonal elements of the Fock or Hamiltonian matrix, facilitating the self-consistent field (SCF) procedure to obtain occupied and virtual orbitals for electronic structure calculations. This application supports simulations of molecular properties, such as bond energies and spectra, in methods like Hartree-Fock theory.

For image processing, the one-sided Jacobi variant computes the singular value decomposition (SVD) of image matrices to enable compression and denoising. By applying rotations to obtain low-rank approximations, it retains the dominant singular values for reconstructing compressed images with minimal loss in quality, as used in storage-efficient archiving. In denoising, thresholding small singular values removes noise while preserving structural features, as demonstrated in hardware implementations for real-time processing. Recent advancements include GPU-accelerated versions, such as NVIDIA's cuSOLVER gesvdj routines, enabling faster SVD computation for large images as of 2023.
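
A minimal PCA sketch in NumPy: any symmetric eigensolver works for the covariance step, so np.linalg.eigh stands in here for a Jacobi routine such as the jacobi_eigenvalues sketch from the implementation section.

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))       # 200 samples, 5 features
Xc = X - X.mean(axis=0)                 # center the data
C = (Xc.T @ Xc) / (len(Xc) - 1)         # symmetric covariance matrix

w, Q = np.linalg.eigh(C)                # eigenpairs of the covariance
order = np.argsort(w)[::-1]             # sort by explained variance
components, variances = Q[:, order], w[order]
scores = Xc @ components[:, :2]         # project onto the top-2 principal axes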

Extensions and Variants

Generalizations to Non-Symmetric Matrices

The Jacobi eigenvalue algorithm has been extended to general real matrices, which are typically non-symmetric, through variants that compute the real Schur form rather than a full diagonalization. In this approach, a sequence of orthogonal rotations is applied to progressively annihilate subdiagonal elements, resulting in a block upper triangular matrix whose diagonal blocks contain the eigenvalues; for real matrices, complex eigenvalues appear as 2×2 blocks with conjugate pairs. This two-sided transformation enables simultaneous processing of the matrix structure, though the presence of complex eigenvalues limits direct diagonalization to cases where all eigenvalues are real.

For complex Hermitian matrices, the algorithm generalizes naturally by replacing orthogonal rotations with unitary transformations, preserving the Hermitian structure while zeroing off-diagonal elements. Each iteration involves a complex plane rotation U_k = R(i_k, j_k, \phi_k, \alpha_k), where the phase \alpha_k and angle \phi_k are chosen to annihilate the pivot a_{i_k j_k}^{(k)}, leading to the update A^{(k+1)} = U_k^* A^{(k)} U_k. This extension maintains quadratic convergence under appropriate pivot strategies and finds applications in quantum logic synthesis for Hermitian gate decomposition.

The method also adapts to compute the singular value decomposition (SVD) of rectangular or non-symmetric matrices. In the one-sided Jacobi variant, rotations are applied to the columns of A to orthogonalize them, effectively diagonalizing A^T A and yielding the right singular vectors and singular values; a two-sided variant (Kogbetliantz's method) processes 2×2 blocks directly to zero off-diagonals of triangular or bidiagonal forms. These approaches converge quadratically in the off-diagonal Frobenius norm, providing singular values as the square roots of the eigenvalues of A^T A.

Despite these extensions, the Jacobi approach exhibits limitations for non-normal matrices, where convergence can degrade to linear rates under certain ordering schemes, compared to the quadratic rates attainable in the normal (e.g., symmetric or Hermitian) case. This slower performance often leads to its replacement by more robust algorithms, such as the QR method for general non-symmetric eigenproblems or divide-and-conquer techniques in symmetric contexts.
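
In the Hermitian case, one common choice of the unitary plane rotation (conventions vary across references) embeds the 2×2 block

\begin{pmatrix} \cos\phi & -e^{i\alpha} \sin\phi \\ e^{-i\alpha} \sin\phi & \cos\phi \end{pmatrix}

in the (p,q) plane, where \alpha is the argument of the pivot a_{pq}, chosen so that the pivot of U^* A U becomes real, and \phi is then determined exactly as in the real symmetric case via \tan(2\phi) = 2|a_{pq}|/(a_{pp} - a_{qq}), the diagonal of a Hermitian matrix being real. Setting \alpha = 0 recovers the real Givens rotation.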

Modern Implementations

Modern implementations of the Jacobi eigenvalue algorithm are integrated into established numerical libraries, providing robust, optimized routines for computing eigendecompositions of symmetric matrices. The DSYEVJ routine distributed via NetLib serves as a dedicated Jacobi driver for symmetric eigenvalue problems, implemented in Fortran and designed for integration with BLAS for efficient matrix operations. As noted above, it combines a symmetric indefinite decomposition with an implicit one-sided Jacobi iteration, targeting small to medium-sized matrices where Jacobi's accuracy and inherent parallelism offer advantages over QR-based or divide-and-conquer drivers such as DSYEV and DSYEVD.

High-level language wrappers extend accessibility across ecosystems. In Julia, the JacobiEigen.jl package implements a cyclic Jacobi algorithm, including variants for mixed-precision computation to enhance accuracy and speed on modern hardware. While Julia's base LinearAlgebra module primarily relies on LAPACK for eigenvalue decompositions, specialized packages like JacobiEigen.jl enable direct use of Jacobi methods for symmetric matrices. For Python, SciPy's linalg.eigh function defaults to QR or divide-and-conquer drivers via LAPACK, so Jacobi-specific needs require interfacing with routines such as DSYEVJ through lower-level bindings.

Parallel and GPU-accelerated versions address scalability demands. NVIDIA's cuSOLVER library includes the syevj solvers (e.g., cusolverDnDsyevj), Jacobi-based routines for symmetric matrices on CUDA-enabled GPUs that exploit the method's parallelism across rotation pairs. Such implementations can outperform QR-based methods for batched small-to-medium problems, with reported speedups of up to roughly 10x in proximal optimization contexts, owing to the rotation-based structure.

Recent advancements in the 2020s focus on mixed-precision strategies to balance computational efficiency and accuracy. For instance, a 2024 mixed-precision preconditioned Jacobi algorithm computes a low-precision preconditioner before applying high-precision rotations, yielding smaller relative forward error bounds than standard Jacobi while reducing runtime by factors of 2 to 5 on symmetric matrices up to order 1000. Similarly, a 2025 mixed-precision Jacobi eigensolver leverages low-precision initial decompositions followed by high-precision updates, improving accuracy without incurring full high-precision overhead. These enhancements, often implemented in packages like Julia's JacobiEigen.jl, support multi-language deployments and align with parallel techniques such as block rotations for scalability.

References

  1. [1]
    Jacobi Eigenvalue Algorithm for Symmetric Real Matrices
    In summary, here are the steps in each iteration of the Jacobi algorithm: Find the off-diagonal element $a_{ij}$ ( $i ...
  2. [2]
    [PDF] Lecture 30. Other Eigenvalue Algorithms
One of the oldest ideas for computing eigenvalues of matrices is the Jacobi algorithm, introduced by Jacobi in 1845. This method has attracted attention.
  3. [3]
    Über ein leichtes Verfahren die in der Theorie der Säcularstörungen ...
    Über ein leichtes Verfahren die in der Theorie der Säcularstörungen vorkommenden Gleichungen numerisch aufzulösen. · Volume: 30, page 51-94 · ISSN: 0075-4102; ...
  4. [4]
    [PDF] new fast and accurate jacobi svd algorithm: ii. lapack working note 170
[23] C. G. J. Jacobi, Über ein leichtes Verfahren die in der Theorie der Säcularstörungen vorkommenden Gleichungen numerisch aufzulösen, Crelle's Journal ...
  5. [5]
    The Jacobi Eigenvalue Algorithm for Computing the ... - arXiv
May 24, 2024 · The Jacobi eigenvalue algorithm, originally proposed by Jacobi in 1846 [9], is an iterative method used for computing eigenvalues and ...
  6. [6]
    [PDF] Jacobi Methods
    The primary advantage of the Jacobi method over the symmetric QR algorithm is its parallelism. As each Jacobi update consists of a row rotation that affects ...
  7. [7]
    A parallel algorithm for the eigenvalues and eigenvectors for a ...
    It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic ...
  8. [8]
    [PDF] 0.0.1 Jacobi eigenvalue algorithm - UCCS
    This algorithm uses planar rotations to systematically decrease the size of off-diagonal elements while increasing the size of diagonal elements. A bit of ...
  9. [9]
    Eigenvalue computation in the 20th century - ScienceDirect.com
    The Jacobi method which was originally proposed in 1846 [65], reduces a real symmetric matrix to diagonal form by a sequence of plane rotations. Jacobi, however ...
  10. [10]
    The algebraic eigenvalue problem : Wilkinson, J. H. (James Hardy)
    Jul 29, 2019 · Publication date: 1965. Topics: Algebras, Linear, Equations -- Numerical solutions, Matrices. Publisher: Oxford, Clarendon Press.
  11. [11]
    [PDF] Symmetric matrices and positive definiteness - MIT OpenCourseWare
Symmetric matrices are good – their eigenvalues are real and each has a complete set of orthonormal eigenvectors. Positive definite matrices are even better.
  12. [12]
    [PDF] 1 Some Facts on Symmetric Matrices
    Definition: Matrix A is symmetric if A = AT . Theorem: Any symmetric matrix 1) has only real eigenvalues; 2) is always diagonalizable; 3) has orthogonal ...
  13. [13]
    [PDF] ES.1803: Symmetric Matrices - MIT OpenCourseWare
    Our ultimate goal is to prove the following theorem. Spectral Theorem. A real n × n symmetric matrix has n orthogonal eigenvectors with real eigenvalues.
  14. [14]
    [PDF] SYMMETRIC MATRICES Math 21b, O. Knill
    SYMMETRIC MATRICES. A matrix A with real entries is symmetric, if AT = A ... or in quantum mechanics as observables or in neural networks as learning ...
  15. [15]
    [PDF] Circulant Matrices and Their Application to Vibration Analysis
Jun 19, 2014 · of vibration analysis for over 40 yr. Early work considered ... are described by symmetric matrices. Then the usual orthogonality ...
  16. [16]
    [PDF] Principal Component Analysis
    Theorem 5.3 (Spectral theorem for symmetric matrices). If A ∈ Rd×d is ... Algorithm 6.2 (Principal component analysis (PCA)). Given a dataset X ...
  17. [17]
    [PDF] Notes for Ma221 Lecture 10, Oct 27, 2023 Chapter 5 on symmetric ...
    (2) Based on Jacobi's Method, the historically oldest algorithm. The modern version is by Drmac and Veselic, called sgesvj for the SVD (not yet for the ...
  18. [18]
    [PDF] Contents
    If we choose θ so that tan 2θ = 2apq/(app −aqq), the plane rotation annihilates the (p, q) element of A. (If app − aqq = 0, we choose θ = π/4). Remark 6.2 ...
  19. [19]
    Jacobi's Method is More Accurate than QR - SIAM.org
Jacobi algorithm for symmetric eigenvalue problem and integrable gradient system of Lax form. Japan Journal of Industrial and Applied Mathematics, Vol. 14 ...
  20. [20]
    [PDF] On an Implementation of Two-Sided Jacobi Method
Experimental results confirmed that the two-sided Jacobi method has shorter computation time and higher accuracy than the one-sided Jacobi method for small ...
  21. [21]
    [PDF] Chapter 3 of Calculus++: The Symmetric Eigenvalue Problem
    In general, the Jacobi algorithm does not produce an exactly diagonal matrix in any finite number of iterations, so the formulation of the algorithm that we ...
  22. [22]
    [PDF] CS 450 – Numerical Analysis Chapter 4: Eigenvalue Problems
Jan 28, 2019 · One of oldest methods for computing eigenvalues is Jacobi method, ... Other Methods for Symmetric Matrices. Bisection or ...
  23. [23]
    [PDF] 11.1 Jacobi Transformations of a Symmetric Matrix
The Jacobi method consists of a sequence of orthogonal similarity transformations of the form of equation (11.0.14). Each transformation (a Jacobi rotation) ...
  24. [24]
    [PDF] Eigenvalue Problem for Symmetric Matrices 1 The Jacobi Algorithm
    Since the target is to find an “approximation” to (1.1), the Jacobi algorithm is a combination of the factorization methods and the iterative methods we've seen ...
  25. [25]
    [PDF] Convergence of Eigenvectors in the Block Classical Jacobi Method ...
    In the block version of the classical Jacobi method for the symmetric eigenvalue problem, the off-diagonal elements of the iterate matrix A(k) converge to zero.
  26. [26]
    The Symmetric Eigenvalue Problem | SIAM Publications Library
    The aim of the book is to present mathematical knowledge that is needed in order to understand the art of computing eigenvalues of real symmetric matrices, ...
  27. [27]
    On the Convergence of the Jacobi Method for Arbitrary Orderings
    This paper proves that the two-sided Jacobi method computes the eigenvalues ... This paper presents a new one-sided Jacobi SVD algorithm for triangular matrices ...
  28. [28]
    [PDF] chapter 3 : jacobi–davidson method
    It is effective when the matrix is nearly diagonal, i.e. if the matrix of eigenvectors is close to the identity matrix. It is mainly used for problems of.
  29. [29]
    On convergence to eigenvalues and eigenvectors in the block ...
    Aug 1, 2021 · In this paper, we study the convergence of appropriate sequences computed in the block-Jacobi EVD algorithm towards eigenvalues and eigenvectors ...
  30. [30]
    Golub and Van Loan, Matrix Computations (3rd Ed.), Johns Hopkins University Press
    Van Loan (1988). Handbook for Matrix Computations, SIAM Publications ... Overview of Eigenvalue Algorithms. Reduction to Hessenberg/Tridiagonal Form ...
  31. [31]
    A sorted partial jacobi method and its convergence analysis
    Jacobi methods for computing the eigendecomposition of a class of so-called low-rank-plus-shift symmetric matrices are investigated.
  32. [32]
    [PDF] new fast and accurate jacobi svd algorithm: i. lapack working note 169
    Thus, if only the singular values are needed, the QR factorization as preprocessor is paid off (in terms of flops) in one full sweep if m > 7n/3, and it will be ...
  33. [33]
    Golub, G. H., and Van Loan, C. F., Matrix Computations, 4th ed., Johns Hopkins University Press, 2013 (Chapter 8.5 on the Jacobi method for the symmetric eigenvalue problem).
  34. [34]
    On the Convergence of Cyclic Jacobi Methods - Oxford Academic
Abstract. In a cyclic Jacobi method for calculating the eigenvalues and eigenvectors of a symmetric matrix, the pivots are chosen in any fixed cyclic order.
  35. [35]
    Parallel Jacobi algorithms for the algebraic eigenvalue problem
    This thesis investigates some Jacobi-like algorithms for solving eigenvalue problems on parallel computers.First considered is the (easier) case of Hermitian ...
  36. [36]
    Parallel block Jacobi eigenvalue algorithms using systolic arrays
    In this paper we consider the questions of whether such a systolic device for symmetric eigenvalue problems can be used to solve larger problems.
  37. [37]
    syevj - NetLib.org
DSYEVJ implements symmetric indefinite decomposition (subroutine DGJGT) followed by implicit (one-sided) Jacobi method (subroutine DJGJF).
  38. [38]
    [PDF] A NOTE ON THE ACCURACY OF SYMMETRIC EIGENREDUCTION ...
    Abstract. We present some experimental results illustrating the fact that on highly ill–conditioned. Hermitian matrices the relative accuracy of computed ...
  39. [39]
    [PDF] Solution methods for eigenvalue problems in structural mechanics
    As only one eigenvalue is required, for large order systems the Householder-QR inverse iteration solution and the Jacobi method are obviously inefficient.
  40. [40]
    Solution methods for eigenvalue problems in structural mechanics
    The solution methods include the QR method, a generalized Jacobi iteration, a new determinant search technique, and an automated sub-space iteration.
  41. [41]
    Fast principal component analysis using fixed-point algorithm
    Jul 15, 2007 · ... eigenvalue decomposition (EVD) based PCA method. The mean squared ... Thus it avoids Cyclic Jacobi's method for eigendecomposition. We ...
  42. [42]
    High Level Synthesis FPGA Implementation of the Jacobi Algorithm ...
    Mar 2, 2015 · In this paper, we have described a high level synthesis implementation of the Jacobi algorithm for eigenvalue and eigenvector calculations.
  43. [43]
    Computational Quantum Chemistry - Books
    Jun 17, 2013 · The Born-Oppenheimer approximation is the first of our key approximations in dealing with the Schrödinger equation. It is the central starting ...
  44. [44]
    [PDF] Eigenvalue problems from electronic structure theory
    This is done using molecular orbital methods. The orbital concept is an approximation and it is used for the qualitative discussion of the chemistry of atoms ...
  45. [45]
    Lossy Image Compression Using Singular Value Decomposition ...
    Aug 5, 2025 · Two most widely used techniques in the hardware implementation of the SVD are Jacobi (or two sided) SVD and Hestenes-Jacobi (or one sided) SVD.
  46. [46]
    [PDF] Fixed-Point Hestenes SVD Algorithm for Computing Eigen Faces
    Hestenes' algorithm is a variant of one-sided Jacobi's algorithm and is discussed in. [7]-[8]. Being a one-sided version the time of computation is lesser ...
  47. [47]
    On Asymptotic Convergence of Nonsymmetric Jacobi Algorithms
    In this paper, it is shown that the cyclic nonsymmetric Jacobi method converges locally and asymptotically quadratically under mild hypotheses if special ...
  48. [48]
    The method of Jacobi for unsymmetric eigenproblems - AIAA ARC
This paper offers a complete solution of the unsymmetric problem via a variation of the Jacobi method. The eigenvalues emerge in algebraic order, and there is ...
  49. [49]
    [PDF] arXiv:1810.12720v1 [math.NA] 30 Oct 2018
    Oct 30, 2018 · In this paper we prove the global convergence of the complex Jacobi method for. Hermitian matrices for a large class of generalized serial pivot ...
  50. [50]
    Quantum-Logic Synthesis of Hermitian Gates - ACM Digital Library
    To this end, an extended version of the Jacobi approach for calculating the eigenvalues of Hermitian matrices in linear algebra is considered as the basis of ...
  51. [51]
    New Fast and Accurate Jacobi SVD Algorithm. I
Jacobi methods started with the two-sided method of Kogbetliantz and the one-sided method of Hestenes. They have likewise had many developments, including ...
  52. [52]
    Computing accurate eigenvalues using a mixed-precision Jacobi ...
    Jan 7, 2025 · We provide a rounding error analysis of a mixed-precision preconditioned Jacobi algorithm, which uses low precision to compute the preconditioner.
  53. [53]
    eigh — SciPy v1.16.2 Manual
Solve a standard or generalized eigenvalue problem for a complex Hermitian or real symmetric matrix. Find eigenvalues array w and optionally eigenvectors array ...
  54. [54]
    [PDF] Faster proximal algorithms for matrix optimization using Jacobi ...
A significant computational bottleneck in many of these algorithms is the need to compute a full eigenvalue or singular value decomposition at each iteration.
  55. [55]
    A Mixed Precision Eigensolver Based on the Jacobi Algorithm - arXiv
Aug 30, 2025 · The Jacobi algorithm is aiming to reduce the off-diagonal entries iteratively using Givens rotations. We investigate how to use the low ...