
Bidiagonal matrix

A bidiagonal matrix is a matrix in linear algebra characterized by non-zero entries confined to the main diagonal and exactly one adjacent off-diagonal, either the superdiagonal (resulting in an upper bidiagonal matrix) or the subdiagonal (resulting in a lower bidiagonal matrix). This structure is a special case of a banded matrix with bandwidth 1, enhancing computational efficiency. Bidiagonal matrices are fundamental in numerical linear algebra, serving as intermediates in key decomposition algorithms for dense and sparse matrices. Notably, the Golub-Kahan-Lanczos bidiagonalization process reduces a general matrix to bidiagonal form as a precursor to computing its singular value decomposition (SVD), enabling stable and efficient spectral analysis for applications in data compression, signal processing, and pseudoinverse computation. They also arise from QR-like factorizations via Householder transformations, where alternating left and right applications yield bidiagonal structures used in solving least squares problems and eigenvalue computations. Beyond general matrices, bidiagonal forms facilitate specialized solutions for structured systems, such as totally positive matrices and Vandermonde-like arrays, by exploiting factorizations that maintain stability and accuracy in high-precision environments. Products of bidiagonal matrices often exhibit desirable properties such as total nonnegativity, enhancing their utility in optimization and approximation theory.

Fundamentals

Definition

A bidiagonal matrix is a matrix with all entries equal to zero except those on the main diagonal and exactly one adjacent off-diagonal band, either the superdiagonal (for upper bidiagonal matrices) or the subdiagonal (for lower bidiagonal matrices). This structure results in at most 2k - 1 nonzero entries for a square k \times k matrix, making it a particularly efficient form for certain numerical computations. Unlike tridiagonal matrices, which permit nonzero entries on both the superdiagonal and subdiagonal in addition to the main diagonal, a bidiagonal matrix restricts nonzeros to only one off-diagonal band, yielding a narrower band. Bidiagonal matrices form a subclass of banded matrices, where nonzeros are confined within a fixed distance from the main diagonal, but with only two nonzero bands, the minimum for a matrix that is not diagonal. The concept extends naturally to rectangular matrices. For an upper bidiagonal matrix B \in \mathbb{R}^{m \times n}, the general entry satisfies b_{ij} = 0 if |i - j| > 1 or i > j, so nonzeros appear only on the main diagonal (for i = j \leq \min(m,n)) and the superdiagonal (for i = j - 1, up to the dimensions). An analogous definition holds for lower bidiagonal matrices, with nonzeros on the main diagonal and subdiagonal (i = j + 1).

Notation and Examples

A bidiagonal matrix is specified by its non-zero entries along the main diagonal and one adjacent off-diagonal, either the superdiagonal (for upper bidiagonal) or the subdiagonal (for lower bidiagonal). In standard notation, an upper bidiagonal matrix B \in \mathbb{R}^{m \times n} (with m \geq n) has diagonal elements d_i for i = 1, \dots, n and superdiagonal elements e_i for i = 1, \dots, n-1, with all other entries zero. Similarly, a lower bidiagonal matrix has diagonal elements d_i and subdiagonal elements f_i in positions (i, i-1) for i = 2, \dots, \min(m, n+1), assuming m \geq n. For a square 3 \times 3 upper bidiagonal matrix, the structure takes the form \begin{pmatrix} a & b & 0 \\ 0 & c & d \\ 0 & 0 & e \end{pmatrix}, where a, c, e are the diagonal entries and b, d are the superdiagonal entries. A corresponding 3 \times 3 lower bidiagonal matrix is \begin{pmatrix} a & 0 & 0 \\ f & c & 0 \\ 0 & g & e \end{pmatrix}, with f, g on the subdiagonal. Rectangular bidiagonal matrices follow analogous patterns, with the off-diagonal truncated at the matrix boundaries. For example, a 4 \times 3 upper bidiagonal matrix is \begin{pmatrix} a & b & 0 \\ 0 & c & d \\ 0 & 0 & e \\ 0 & 0 & 0 \end{pmatrix}, where the superdiagonal ends after the second row because there are only three columns, and the fourth row is entirely zero. In larger matrices, this bidiagonal structure implies a sparse, band-like form concentrated along the specified diagonals, facilitating efficient storage and computation.
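
As a concrete illustration, the following is a minimal sketch (assuming NumPy; the entry values are arbitrary) showing how such matrices can be assembled directly from their diagonal and off-diagonal vectors:

import numpy as np

# Diagonal and superdiagonal entries of a 4 x 4 upper bidiagonal matrix.
d = np.array([1.0, 2.0, 3.0, 4.0])   # main diagonal d_1, ..., d_n
e = np.array([5.0, 6.0, 7.0])        # superdiagonal e_1, ..., e_{n-1}

# np.diag(v, k) places the vector v on the k-th diagonal.
B_upper = np.diag(d) + np.diag(e, 1)

# A lower bidiagonal matrix uses the subdiagonal (k = -1) instead.
f = np.array([8.0, 9.0, 10.0])
B_lower = np.diag(d) + np.diag(f, -1)

print(B_upper)
print(B_lower)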

Properties

Structural Properties

A bidiagonal matrix is a sparse matrix with non-zero entries confined to the main diagonal and one adjacent off-diagonal (either the superdiagonal for upper bidiagonal or the subdiagonal for lower bidiagonal). For an n \times n square bidiagonal matrix, there are at most 2n - 1 non-zero entries: n on the main diagonal and n-1 on the off-diagonal. This sparsity pattern gives bidiagonal matrices a bandwidth of 1 (or semi-bandwidth of 1), meaning the non-zeros are limited to positions where the row and column indices differ by at most 1. The rank of a square bidiagonal matrix is at most n, and it achieves full rank exactly when every diagonal element is non-zero, since the matrix is triangular. Rank deficiency occurs if one or more diagonal elements are zero, as the determinant, being the product of the diagonal entries, then vanishes. Bidiagonal matrices are not closed under multiplication: the product of an upper and a lower bidiagonal matrix is generally tridiagonal, with non-zeros on the main diagonal and both adjacent off-diagonals, while the product of two upper (or two lower) bidiagonal matrices spreads onto a second superdiagonal (or subdiagonal). However, certain operations preserve the bidiagonal structure; for instance, the product of a bidiagonal matrix and a diagonal matrix remains bidiagonal, since the diagonal matrix scales the rows or columns without introducing new non-zeros. The trace of a square bidiagonal matrix is simply the sum of its n diagonal entries. The determinant of a square bidiagonal matrix (upper or lower) is the product of its diagonal elements, as the matrix is triangular and the off-diagonal elements do not affect the determinant computation.
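
These closure, trace, and determinant statements are easy to check numerically; the following sketch (assuming NumPy, with arbitrary illustrative entries) verifies the trace, the triangular determinant, the widened band of a product, and the preservation of the pattern under diagonal scaling:

import numpy as np

d = np.array([2.0, 3.0, 4.0, 5.0])
e = np.array([1.0, 1.0, 1.0])
Bu = np.diag(d) + np.diag(e, 1)    # upper bidiagonal
Bl = np.diag(d) + np.diag(e, -1)   # lower bidiagonal

# Trace is the sum of the diagonal; the determinant is the product of the
# diagonal because a square bidiagonal matrix is triangular.
assert np.isclose(np.trace(Bu), d.sum())
assert np.isclose(np.linalg.det(Bu), d.prod())

# The product of an upper and a lower bidiagonal matrix is tridiagonal:
# entries with |i - j| > 1 vanish, but both adjacent off-diagonals fill in.
P = Bu @ Bl
i, j = np.indices(P.shape)
assert np.allclose(P[np.abs(i - j) > 1], 0.0)

# Scaling by a diagonal matrix preserves the upper bidiagonal pattern.
D = np.diag([1.0, 2.0, 3.0, 4.0])
DBu = D @ Bu
assert np.allclose(DBu[(j < i) | (j > i + 1)], 0.0)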

Spectral Properties

A real symmetric tridiagonal matrix, with nonzeros restricted to the main diagonal and the two adjacent off-diagonals, possesses real eigenvalues, a fundamental property of all real symmetric matrices. These eigenvalues can be determined using specialized numerical methods that exploit the matrix's banded structure, though such methods are beyond the scope of the theoretical properties discussed here. For a general bidiagonal matrix B, the singular values \sigma_i(B) are defined as the square roots of the eigenvalues of either B^\top B or B B^\top, both of which are symmetric tridiagonal matrices whose spectral structure inherits the sparsity of B. This connection facilitates analysis of the singular value distribution through the well-studied eigenvalues of tridiagonal matrices. If all diagonal and off-diagonal entries of a bidiagonal matrix are nonzero, its singular values are distinct. The Gershgorin circle theorem provides a localization for the eigenvalues of any bidiagonal matrix: each eigenvalue lies within at least one of the disks in the complex plane centered at a diagonal entry a_i with radius equal to the absolute value of the single adjacent off-diagonal entry in that row (or column, by transposition). For an upper bidiagonal matrix, the disks have centers a_1, a_2, \dots, a_{n-1}, a_n and radii |e_1|, |e_2|, \dots, |e_{n-1}|, 0, where e_i are the superdiagonal entries; this often yields tight bounds due to the minimal off-diagonal support. In the simple case of a 2 \times 2 upper bidiagonal matrix \begin{pmatrix} a & b \\ 0 & c \end{pmatrix}, the matrix is upper triangular, so its eigenvalues are precisely the diagonal entries a and c. For an upper bidiagonal matrix with positive diagonal entries, the singular values are positive, and they are distinct (hence strictly decreasing when ordered) whenever the superdiagonal entries are also nonzero. The structural sparsity of bidiagonal matrices aids in efficient computations by reducing the complexity of associated operations.
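
These relationships can be verified numerically; in the sketch below (assuming NumPy, with arbitrary entries), the tridiagonal form of B^\top B, the singular values obtained from its eigenvalues, and the Gershgorin localization of the eigenvalues of B are all checked:

import numpy as np

d = np.array([3.0, 2.0, 1.0, 0.5])
e = np.array([0.4, 0.3, 0.2])
B = np.diag(d) + np.diag(e, 1)          # upper bidiagonal

# B^T B is symmetric tridiagonal; its eigenvalues are the squared singular values of B.
T = B.T @ B
i, j = np.indices(T.shape)
assert np.allclose(T[np.abs(i - j) > 1], 0.0)

sv_from_T = np.sqrt(np.sort(np.linalg.eigvalsh(T))[::-1])
sv_direct = np.linalg.svd(B, compute_uv=False)
assert np.allclose(sv_from_T, sv_direct)

# Gershgorin disks for the upper bidiagonal B: centers d_i, radii |e_i| (0 for the last row).
radii = np.append(np.abs(e), 0.0)
eigs = np.linalg.eigvals(B)             # equal to the diagonal, since B is triangular
tol = 1e-12                             # slack for roundoff in the computed eigenvalues
assert all(np.any(np.abs(lam - d) <= radii + tol) for lam in eigs)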

Algorithms

Bidiagonalization

Bidiagonalization refers to the process of reducing a general m \times n matrix A (with m \geq n) to upper bidiagonal form through a series of orthogonal transformations, primarily to simplify the computation of its singular value decomposition (SVD). This reduction preserves the singular values of A and is a foundational step in algorithms for the SVD. The resulting bidiagonal form facilitates subsequent iterative methods for singular value extraction by concentrating the non-zero structure along the main diagonal and the superdiagonal. The seminal Golub-Kahan bidiagonalization algorithm, introduced in 1965, achieves this reduction using Householder reflections applied alternately from the left and right sides of the matrix. It produces orthogonal matrices U \in \mathbb{R}^{m \times m} and V \in \mathbb{R}^{n \times n}, along with an upper bidiagonal matrix B \in \mathbb{R}^{m \times n}, such that A = U B V^T, where B has non-zero entries only on its main diagonal and superdiagonal. The process begins with the first column of A: a left Householder transformation U_1 is constructed to zero out all entries below the (1,1) position in that column, effectively performing an initial QR-like step. Then, a right Householder transformation V_1 is applied to the first row of the transformed matrix to zero out the entries to the right of the (1,2) position, leaving only the superdiagonal entry in that row. This alternation continues iteratively for subsequent columns and rows: for each stage k = 1, \dots, n-1, a left reflector U_k zeros elements below the diagonal in column k, followed by a right reflector V_k that zeros elements beyond the superdiagonal in row k. The final stage for k = n applies only a left reflector if necessary. These transformations are accumulated to form the full U and V. The algorithm's steps can be outlined in pseudocode as follows, assuming real arithmetic and focusing on the core Householder applications:
for k = 1 to n
    Form the Householder vector u_k for the subcolumn A(k:m, k) to zero below the diagonal
    Apply left reflection: A(k:m, k:n) ← (I - 2 u_k u_k^T) A(k:m, k:n)
    Accumulate U ← U (I - 2 u_k u_k^T)
    if k < n
        Form the Householder vector v_k for the subrow A(k, k+1:n) to zero beyond the superdiagonal
        Apply right reflection: A(k:m, k+1:n) ← A(k:m, k+1:n) (I - 2 v_k v_k^T)
        Accumulate V ← V (I - 2 v_k v_k^T)
    end if
end for
This iterative QR-like factorization builds the bidiagonal structure progressively while maintaining numerical stability through the orthogonal nature of the reflectors. The computational cost of the Golub-Kahan procedure is O(m n^2) floating-point operations for m \geq n, dominated by the rank-one Householder updates applied to the trailing submatrices. While the focus here is on general rectangular matrices, a related procedure for symmetric matrices applies the same Householder reflectors from both sides to reduce them to tridiagonal form, since symmetry is preserved under such similarity transformations; the Golub-Kahan method remains the standard for the nonsymmetric and rectangular case.
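
The following is a minimal dense-matrix sketch of this procedure (assuming NumPy; the function names, the sign convention in the reflector, and the random test matrix are illustrative choices rather than a library interface, and production codes such as LAPACK's store the reflectors compactly instead of accumulating full U and V). The sketch skips the right reflector in the last two columns, where there is nothing beyond the superdiagonal to eliminate:

import numpy as np

def householder(x):
    # Return a unit vector v with (I - 2 v v^T) x = -sign(x_1) ||x|| e_1.
    v = np.array(x, dtype=float)
    norm_x = np.linalg.norm(v)
    if norm_x == 0.0:
        v = np.zeros_like(v)
        v[0] = 1.0
        return v
    v[0] += (1.0 if v[0] >= 0 else -1.0) * norm_x
    return v / np.linalg.norm(v)

def golub_kahan_bidiagonalize(A):
    # Reduce A (m x n, m >= n) to upper bidiagonal B with A = U B V^T.
    B = np.array(A, dtype=float)
    m, n = B.shape
    U, V = np.eye(m), np.eye(n)
    for k in range(n):
        # Left reflector: zero the entries below the diagonal in column k.
        u = householder(B[k:, k])
        B[k:, k:] -= 2.0 * np.outer(u, u @ B[k:, k:])
        U[:, k:] -= 2.0 * np.outer(U[:, k:] @ u, u)
        if k < n - 2:
            # Right reflector: zero the entries beyond the superdiagonal in row k.
            v = householder(B[k, k + 1:])
            B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ v, v)
            V[:, k + 1:] -= 2.0 * np.outer(V[:, k + 1:] @ v, v)
    return U, B, V

# Illustrative check on a random 6 x 4 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
U, B, V = golub_kahan_bidiagonalize(A)
i, j = np.indices(B.shape)
assert np.allclose(U @ B @ V.T, A)                      # reconstruction
assert np.allclose(B[(j < i) | (j > i + 1)], 0.0)       # upper bidiagonal pattern
assert np.allclose(np.linalg.svd(A, compute_uv=False),  # singular values preserved
                   np.linalg.svd(B, compute_uv=False))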

Singular Value Computation

Computing the singular value decomposition (SVD) of a bidiagonal matrix represents the primary computational bottleneck in the full SVD of a general matrix after bidiagonalization, often accounting for a significant portion of the total runtime due to the need for high accuracy across all singular values. For an n \times n upper bidiagonal matrix B with diagonal entries d_1, d_2, \dots, d_n and superdiagonal entries e_1, e_2, \dots, e_{n-1}, the SVD takes the form B = U \Sigma V^T, where U and V are orthogonal matrices, and \Sigma = \operatorname{diag}(\sigma_1, \sigma_2, \dots, \sigma_n) contains the nonnegative singular values \sigma_i in nonincreasing order. Equivalently, this is expressed via iterative orthogonal transformations as Q^T B P = \Sigma, where Q and P accumulate the left and right singular vectors, respectively. The classical algorithm for this computation is a bidiagonal variant of the QR iteration, which applies successive QR decompositions with implicit shifts to deflate and converge to the singular values. In each iteration, a shift is chosen (often the Wilkinson shift computed from the trailing 2 \times 2 submatrix of B^T B) to accelerate convergence, followed by a bulge-chasing phase that propagates the transformation through the bidiagonal structure, leading to deflation as superdiagonal elements become negligibly small. This process isolates singular values one by one from the bottom, with the accumulated transformations yielding the singular vectors upon request. The method ensures quadratic convergence near the singular values and is implemented in the LAPACK routine DBDSQR; the bidiagonal factor itself is real even when the original matrix is complex. To improve accuracy in floating-point arithmetic, particularly for small singular values prone to underflow or cancellation errors, zero-shift QR iterations are employed, avoiding explicit shifts and relying on the inherent deflation properties of the bidiagonal form. These zero-shift variants compute singular values to high relative accuracy independent of their magnitude, as demonstrated in analyses showing bounded relative errors after a fixed number of steps. For enhanced performance, especially on parallel architectures, divide-and-conquer approaches recursively split the bidiagonal matrix into smaller subproblems, solve them independently, and combine the results via low-rank updates; algorithms based on multiple relatively robust representations (MR³), building on the earlier zero-shift techniques, offer a further alternative that reduces the asymptotic cost while remaining competitive with QR iteration in speed and accuracy. Overall, these methods achieve a complexity of O(n^2) operations for the singular values of an n \times n bidiagonal matrix, dominated by O(n) iterations each costing O(n) time due to the sparse structure, contrasting with the O(n^3) cost of the preceding bidiagonalization. LAPACK's DBDSDC routine implements the divide-and-conquer variant, which is typically faster than DBDSQR when the singular vectors are required. Numerical experiments confirm that these methods deliver singular values accurate to nearly full machine precision across a wide range of matrix conditionings.
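
As an illustration of a single shifted QR step with bulge chasing, the sketch below (assuming NumPy and an unreduced upper bidiagonal matrix with nonzero diagonal and superdiagonal; the function names and test data are illustrative, and real implementations such as DBDSQR work on the two defining vectors with careful scaling and deflation tests rather than on a dense matrix) applies the Wilkinson shift implicitly through Givens rotations:

import numpy as np

def givens(a, b):
    # Return c, s with [[c, s], [-s, c]] @ [a, b] = [r, 0].
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def golub_kahan_svd_step(B):
    # One implicit Wilkinson-shift QR step on an unreduced upper bidiagonal matrix B.
    B = np.array(B, dtype=float)
    n = B.shape[0]
    d, e = np.diag(B).copy(), np.diag(B, 1).copy()
    # Wilkinson shift from the trailing 2 x 2 block of T = B^T B.
    t11 = d[n - 2] ** 2 + (e[n - 3] ** 2 if n > 2 else 0.0)
    t22 = d[n - 1] ** 2 + e[n - 2] ** 2
    t12 = d[n - 2] * e[n - 2]
    delta = 0.5 * (t11 - t22)
    mu = t22 - t12 ** 2 / (delta + np.copysign(np.hypot(delta, t12), delta))
    # The first right rotation is determined by the first column of T - mu*I (implicit shift).
    y, z = d[0] ** 2 - mu, d[0] * e[0]
    for k in range(n - 1):
        # Right rotation on columns k, k+1; a bulge appears at (k+1, k).
        c, s = givens(y, z)
        B[:, k:k + 2] = B[:, k:k + 2] @ np.array([[c, -s], [s, c]])
        # Left rotation on rows k, k+1 chases the bulge out of (k+1, k).
        c, s = givens(B[k, k], B[k + 1, k])
        B[k:k + 2, :] = np.array([[c, s], [-s, c]]) @ B[k:k + 2, :]
        if k < n - 2:
            # The bulge reappears at (k, k+2) and is removed by the next right rotation.
            y, z = B[k, k + 1], B[k, k + 2]
    return B

# Repeated steps drive the last superdiagonal entry toward zero, deflating one singular value.
d = np.array([3.0, 2.0, 1.0, 0.5])
e = np.array([0.9, 0.7, 0.4])
B = np.diag(d) + np.diag(e, 1)
target = np.linalg.svd(B, compute_uv=False)
for _ in range(50):
    B = golub_kahan_svd_step(B)
    if abs(B[-2, -1]) <= 1e-14 * (abs(B[-2, -2]) + abs(B[-1, -1])):
        break
assert np.any(np.isclose(abs(B[-1, -1]), target))       # a converged singular value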

Applications

In Numerical Linear Algebra

Bidiagonal matrices serve as a crucial intermediate form in the computation of the singular value decomposition (SVD) of general matrices, particularly through the Golub-Reinsch algorithm, which reduces an arbitrary matrix to bidiagonal form using Householder transformations before applying iterative methods to extract the singular values. This approach enables stable and efficient SVD computation by simplifying the problem to a form where only the diagonal and one superdiagonal (or subdiagonal) elements are nonzero, reducing the cost of the subsequent iterations. The foundational work on this bidiagonalization procedure and its application to the SVD was developed by Golub and Kahan in the 1960s, with their 1965 paper introducing methods to calculate singular values and the pseudoinverse directly from the bidiagonal form. In the context of eigenvalue problems, bidiagonal matrices integrate with variants of the QR algorithm, as the squares of the singular values of a bidiagonal matrix are the eigenvalues of an associated symmetric tridiagonal matrix, allowing eigenvalue techniques to be adapted for singular value computation on the bidiagonal form. This connection facilitates the use of QR iterations on the tridiagonal counterpart, enhancing the efficiency of singular value refinement while maintaining the structure's sparsity. For least squares problems, the bidiagonal form aids in solving overdetermined systems by leveraging the SVD, where the reduced matrix allows for straightforward computation of the minimum-norm solution via the pseudoinverse, avoiding direct formation of the normal equations that can amplify conditioning issues. This is particularly valuable in applications requiring the solution to \min \| Ax - b \|_2, as the SVD of the bidiagonal intermediate yields the necessary decompositions with controlled error propagation. The use of bidiagonal forms preserves numerical accuracy for ill-conditioned matrices, as the Householder-based reduction ensures backward stability, with perturbations bounded by a modest multiple of machine epsilon times the matrix norm, preventing loss of information in nearly singular cases. For instance, consider a bidiagonal matrix B with SVD B = U \Sigma V^T; the pseudoinverse is then B^+ = V \Sigma^+ U^T, where \Sigma^+ inverts the nonzero singular values, enabling direct computation of the least squares solution x = B^+ b for a corresponding system, often with relative errors on the order of the condition number times machine precision. This example illustrates how the bidiagonal SVD not only simplifies pseudoinverse calculation but also supports stable handling of rank-deficient problems in numerical linear algebra.
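
A small worked sketch of this pseudoinverse route (assuming NumPy; the matrix and right-hand side are arbitrary illustrations, and a general rectangular A would first be reduced to such a B):

import numpy as np

# Illustrative upper bidiagonal system.
d = np.array([4.0, 3.0, 2.0, 1.0])
e = np.array([0.5, 0.5, 0.5])
B = np.diag(d) + np.diag(e, 1)
b = np.array([1.0, 2.0, 3.0, 4.0])

# Pseudoinverse via the SVD: B^+ = V Sigma^+ U^T, inverting only the nonzero singular values.
U, s, Vt = np.linalg.svd(B)
tol = max(B.shape) * np.finfo(float).eps * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)
B_pinv = Vt.T @ np.diag(s_inv) @ U.T

# Minimum-norm least squares solution of min ||B x - b||_2.
x = B_pinv @ b
assert np.allclose(x, np.linalg.lstsq(B, b, rcond=None)[0])
assert np.allclose(B_pinv, np.linalg.pinv(B))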

In Optimization and Other Fields

Bidiagonal matrices play a significant role in trust-region methods for solving large-scale sparse problems. In these methods, the LSQR algorithm, which relies on bidiagonalization of the coefficient matrix, is used to approximately solve the trust-region subproblem by generating an efficient path for the step computation. This approach maintains numerical stability and satisfies the necessary conditions for global convergence, outperforming alternatives such as the conjugate gradient (CGLS) method in terms of iteration count and residual reduction on benchmark problems. In modern machine learning, particularly in online convex optimization for training deep neural networks, bidiagonal matrices facilitate structured sparsity in second-order methods such as the Sparsified Online Newton (SONew) optimizer. Here, a unit lower bidiagonal matrix forms part of the tridiagonal preconditioner obtained by minimizing the LogDet divergence, enabling linear time and memory costs per step while achieving optimal regret bounds of O(\sqrt{T}) for T iterations. This structure enhances scalability to billion-parameter models, yielding up to 30% faster convergence and 3.4% accuracy improvements over first-order optimizers on tasks involving vision transformers and graph neural networks. In signal processing, bidiagonal factorizations of Fourier matrices enable systolic-array implementations of discrete Fourier transforms (DFTs), requiring 2n-1 cycles on arrays of n² processors while maintaining numerical accuracy.
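
For the sparse least squares setting described above, SciPy's implementation of LSQR can serve as a small illustration (the problem size, density, and tolerances here are arbitrary; LSQR builds a Golub-Kahan lower bidiagonalization implicitly from matrix-vector products, so A is never modified or factored):

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Hypothetical sparse overdetermined system, for illustration only.
rng = np.random.default_rng(0)
A = sparse_random(500, 80, density=0.02, format="csr", random_state=rng)
b = rng.standard_normal(500)

# LSQR solves min ||A x - b||_2 using one product with A and one with A^T per
# iteration, extending the bidiagonal factorization as it goes.
x, istop, itn, r1norm = lsqr(A, b, atol=1e-10, btol=1e-10)[:4]
print(f"stop flag {istop}, {itn} iterations, residual norm {r1norm:.3e}")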
