Tridiagonal matrix

A tridiagonal matrix is a square matrix in which all entries are zero except those on the main diagonal, the subdiagonal (immediately below the main diagonal), and the superdiagonal (immediately above the main diagonal). This structure makes it a special case of a band matrix with bandwidth 1, allowing for compact storage using only O(n) space for an n×n matrix, compared to O(n²) for a dense matrix. Tridiagonal matrices possess several advantageous properties that facilitate efficient numerical computations. For instance, a system of linear equations Ax = b with a tridiagonal coefficient matrix can be solved in O(n) time using specialized algorithms like the Thomas algorithm, a variant of Gaussian elimination tailored to this sparse structure that avoids fill-in during factorization. Additionally, for symmetric tridiagonal matrices, eigenvalues can be computed using stable O(n²) methods such as the QR algorithm with implicit shifts, which is particularly useful in spectral computations. Determinants of tridiagonal matrices can also be evaluated recursively in O(n) time via a continued fraction-like formula, leveraging the matrix's banded form. These matrices arise naturally in numerous applications across science and engineering. In numerical analysis, they model finite difference approximations to second-order boundary value problems for ordinary and partial differential equations, such as the Poisson equation in one dimension. Tridiagonal forms also appear in the theory of orthogonal polynomials, Markov chains for modeling sequential processes, and Tikhonov regularization for ill-posed problems, where their Toeplitz variants (constant diagonals) enable closed-form solutions and fast computations. Furthermore, in physical modeling, such as diffusion spread or particle deposition on lattices, tridiagonal matrices capture nearest-neighbor interactions efficiently.

Definition and Notation

Formal Definition

A tridiagonal matrix is a square n \times n matrix in which all entries are zero except those on the main diagonal, the subdiagonal (immediately below the main diagonal), and the superdiagonal (immediately above the main diagonal). This structure confines the nonzero elements to a narrow band around the main diagonal, distinguishing it from denser matrices. In terms of bandwidth, a tridiagonal matrix has a lower bandwidth of 1 and an upper bandwidth of 1, meaning the nonzero entries extend at most one position below and above the main diagonal. More formally, for an n \times n matrix A = (a_{i,j}), it satisfies a_{i,j} = 0 whenever |i - j| > 1. This contrasts with a diagonal matrix, which has bandwidth (0,0) and only nonzero main diagonal entries, and with bidiagonal matrices, which have bandwidth (1,0) or (0,1) by including only one off-diagonal. Broader banded matrices, such as pentadiagonal ones, allow nonzeros up to two positions away, resulting in bandwidth (2,2). Under certain conditions, such as the matrix being irreducible and satisfying diagonal dominance (where each diagonal entry's absolute value is at least the sum of the absolute values of the off-diagonal entries in its row, with strict inequality in at least one row), tridiagonal matrices exhibit desirable properties like nonsingularity. A tridiagonal matrix is irreducible if all subdiagonal and superdiagonal entries are nonzero, ensuring the associated directed graph is strongly connected. These matrices are particularly valuable in numerical contexts, where their structure permits efficient storage and computation.

Matrix Representation and Examples

A tridiagonal matrix T of order n is typically denoted in block form using vectors for its three nonzero diagonals: the main diagonal consists of elements b_i for i = 1, \dots, n, the subdiagonal (immediately below the main diagonal) consists of elements a_i for i = 2, \dots, n, and the superdiagonal (immediately above the main diagonal) consists of elements c_i for i = 1, \dots, n-1. This notation compactly represents the sparse structure where all other entries are zero, as seen in the general form: T = \begin{pmatrix} b_1 & c_1 & 0 & \cdots & 0 \\ a_2 & b_2 & c_2 & \cdots & 0 \\ 0 & a_3 & b_3 & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & c_{n-1} \\ 0 & 0 & \cdots & a_n & b_n \end{pmatrix}. For a concrete illustration, consider a 3×3 tridiagonal matrix with specific numerical values, such as b_1 = 4, b_2 = 5, b_3 = 6, a_2 = 1, a_3 = 2, c_1 = 3, and c_2 = 1: T = \begin{pmatrix} 4 & 3 & 0 \\ 1 & 5 & 1 \\ 0 & 2 & 6 \end{pmatrix}. This example highlights the banded structure, with nonzero entries confined to the specified diagonals. Similarly, a 4×4 tridiagonal matrix can be formed with values b_1 = 2, b_2 = 3, b_3 = 4, b_4 = 5, a_2 = 1, a_3 = 1, a_4 = 2, c_1 = 2, c_2 = 3, and c_3 = 1: T = \begin{pmatrix} 2 & 2 & 0 & 0 \\ 1 & 3 & 3 & 0 \\ 0 & 1 & 4 & 1 \\ 0 & 0 & 2 & 5 \end{pmatrix}. These numerical instances demonstrate how varying the diagonal elements allows for diverse applications while preserving the tridiagonal form. In the symmetric case, the subdiagonal and superdiagonal elements are equal, i.e., a_{i+1} = c_i for i = 1, \dots, n-1, resulting in a matrix that equals its own transpose. For example, a 3×3 symmetric tridiagonal matrix might take the form: T = \begin{pmatrix} b_1 & c_1 & 0 \\ c_1 & b_2 & c_2 \\ 0 & c_2 & b_3 \end{pmatrix}, where the off-diagonal symmetries a_2 = c_1 and a_3 = c_2 ensure T = T^T. This structure is prevalent in applications such as orthogonal polynomials and discretized boundary value problems, where self-adjoint operators are modeled. A special subclass is the tridiagonal Toeplitz matrix, characterized by constant values along each diagonal: a fixed \delta on the main diagonal, a fixed \sigma on the subdiagonal, and a fixed \tau on the superdiagonal. For n = 4, such a matrix appears as: T = \begin{pmatrix} \delta & \tau & 0 & 0 \\ \sigma & \delta & \tau & 0 \\ 0 & \sigma & \delta & \tau \\ 0 & 0 & \sigma & \delta \end{pmatrix}. This constant-diagonal property simplifies analytical computations, such as finding closed-form expressions for eigenvalues, and arises in time series analysis and difference equations. Tridiagonal matrices also arise naturally in the discretization of second-order linear recurrence relations. Specifically, solving a tridiagonal system A x = d, where A has subdiagonal elements a_k, diagonal b_k, and superdiagonal c_k, yields components x_k that satisfy a second-order inhomogeneous linear recurrence of the form x_k = p_{k-1} x_{k-1} + q_{k-1} x_{k-2} + r_{k-1}, with coefficients derived from the matrix entries and initial conditions from the boundary equations. In the Toeplitz case with constant coefficients, this recurrence admits explicit solutions via characteristic equations.
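These examples can be reproduced directly; the following NumPy sketch (illustrative, with array names chosen to mirror the b_i, a_i, c_i convention above) assembles the 3×3 example from its three diagonals:
import numpy as np

b = np.array([4.0, 5.0, 6.0])   # main diagonal (b_1, b_2, b_3)
a = np.array([1.0, 2.0])        # subdiagonal (a_2, a_3)
c = np.array([3.0, 1.0])        # superdiagonal (c_1, c_2)

T = np.diag(b) + np.diag(a, k=-1) + np.diag(c, k=1)
print(T)
# [[4. 3. 0.]
#  [1. 5. 1.]
#  [0. 2. 6.]]
Storing only the three diagonal vectors, rather than the full array, realizes the O(n) storage mentioned earlier.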

Algebraic Properties

Basic Structure and Operations

A tridiagonal matrix of order n is sparse, with at most 3n - 2 nonzero entries: n on the main diagonal, n-1 on the superdiagonal, and n-1 on the subdiagonal. This sparsity structure facilitates efficient storage and computation, requiring only O(n) space compared to O(n^2) for a dense matrix of the same order. The set of tridiagonal matrices forms a vector subspace of the space of all n \times n matrices, as it is closed under addition and scalar multiplication. Specifically, the sum of two tridiagonal matrices is tridiagonal, since the addition of corresponding entries preserves zeros outside the three central diagonals. Similarly, multiplying a tridiagonal matrix by a scalar yields another tridiagonal matrix, with all nonzero entries scaled accordingly. The product of two tridiagonal matrices is generally pentadiagonal, with nonzero entries potentially extending to the second superdiagonal and second subdiagonal, resulting in a total bandwidth of up to 5 (or semi-bandwidth 2). This follows from the property that the product of two band matrices with semi-bandwidths p and q has semi-bandwidth at most p + q. The product remains tridiagonal if the off-diagonal contributions cancel out, such as when one matrix is diagonal or the matrices commute in a way that avoids fill-in on the outer diagonals. The transpose of a tridiagonal matrix is also tridiagonal, with the superdiagonal and subdiagonal interchanged while the main diagonal remains unchanged. If the original matrix is symmetric (superdiagonal equals subdiagonal), then it equals its transpose. The trace of a tridiagonal matrix, which equals the sum of its eigenvalues, is simply the sum of its diagonal entries and is independent of the off-diagonal elements. For norms, the Frobenius norm is the square root of the sum of the squares of all entries, which for a tridiagonal matrix involves only the three central diagonals; the infinity norm, as the maximum absolute row sum, is bounded by the maximum over rows of the absolute value of the diagonal entry plus those of the two adjacent off-diagonal entries. These properties underpin the use of tridiagonal matrices in solving sparse linear systems efficiently.
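A small numerical check of these closure properties (a sketch with randomly generated matrices; the semibandwidth helper is ad hoc) confirms that sums stay tridiagonal while products become pentadiagonal:
import numpy as np

rng = np.random.default_rng(0)

def random_tridiagonal(n):
    # Assemble a random tridiagonal matrix from its three diagonals
    return (np.diag(rng.standard_normal(n))
            + np.diag(rng.standard_normal(n - 1), -1)
            + np.diag(rng.standard_normal(n - 1), 1))

def semibandwidth(M, tol=1e-12):
    # Largest |i - j| over the nonzero entries of M
    i, j = np.nonzero(np.abs(M) > tol)
    return int(np.max(np.abs(i - j)))

A, B = random_tridiagonal(6), random_tridiagonal(6)
print(semibandwidth(A + B))    # 1: the sum is tridiagonal
print(semibandwidth(2.5 * A))  # 1: scalar multiples are tridiagonal
print(semibandwidth(A @ B))    # 2: the product is pentadiagonal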

Determinant Computation

The determinant of an n \times n tridiagonal matrix A_n, with main diagonal entries a_1, \dots, a_n, subdiagonal entries b_1, \dots, b_{n-1}, and superdiagonal entries c_1, \dots, c_{n-1}, satisfies the three-term recurrence \det(A_n) = a_n \det(A_{n-1}) - b_{n-1} c_{n-1} \det(A_{n-2}), with base cases \det(A_0) = 1 and \det(A_1) = a_1. This relation arises from expanding the determinant along the last row and applying properties of determinants for bordered matrices, enabling computation in O(n) time. For the special case of a Toeplitz tridiagonal matrix with constant diagonal a, constant subdiagonal b, and constant superdiagonal c, the recurrence becomes a linear homogeneous recurrence with constant coefficients, D_n = a D_{n-1} - bc D_{n-2}. The characteristic equation is r^2 - a r + bc = 0, with roots \lambda_1, \lambda_2. Assuming \lambda_1 \neq \lambda_2, the closed-form solution is D_n = \det(A_n) = \frac{\lambda_1^{n+1} - \lambda_2^{n+1}}{\lambda_1 - \lambda_2}. If the discriminant a^2 - 4bc < 0, the roots are complex, and the expression can be rewritten in trigonometric form, such as D_n = \frac{\sin((n+1)\theta)}{\sin \theta} when the diagonal entries are a = 2 \cos \theta and the off-diagonal entries are b = c = 1. In the case of a repeated root, the form adjusts to D_n = (n+1) \lambda_1^n. For constant diagonals, the ratio D_n / D_{n-1} admits a continued fraction representation, a - bc / (a - bc / (a - \cdots )), which converges to the root of larger modulus for large n. The Gershgorin circle theorem provides bounds on the eigenvalues of a tridiagonal matrix, with each disk centered at a_i and radius |b_{i-1}| + |c_i| (or adjusted at the boundaries); since the determinant equals the product of eigenvalues, these bounds indirectly constrain the possible magnitude of the determinant. However, direct recursive computation via the three-term relation can exhibit numerical instability for large n when |\lambda_1| > |\lambda_2|, due to amplification of errors in the recessive mode. The closed-form expression can suffer from cancellation when the roots are close, leading to loss of significant digits in floating-point arithmetic.
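The three-term relation translates into a short O(n) loop; the sketch below (illustrative function name) checks it against a dense determinant computation:
import numpy as np

def tridiag_det(a, b, c):
    # a: main diagonal (length n); b: subdiagonal, c: superdiagonal (length n-1)
    # Recurrence: det(A_k) = a_k det(A_{k-1}) - b_{k-1} c_{k-1} det(A_{k-2})
    d_prev2, d_prev1 = 1.0, a[0]
    for k in range(1, len(a)):
        d_prev2, d_prev1 = d_prev1, a[k] * d_prev1 - b[k - 1] * c[k - 1] * d_prev2
    return d_prev1

rng = np.random.default_rng(1)
n = 8
a = rng.standard_normal(n)
b = rng.standard_normal(n - 1)
c = rng.standard_normal(n - 1)
A = np.diag(a) + np.diag(b, -1) + np.diag(c, 1)
print(np.isclose(tridiag_det(a, b, c), np.linalg.det(A)))  # True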

Eigenvalues and Eigenvectors

The characteristic polynomial of a tridiagonal matrix T_n of order n, with diagonal entries a_1, \dots, a_n, subdiagonal entries b_1, \dots, b_{n-1}, and superdiagonal entries c_1, \dots, c_{n-1}, is defined as p_n(\lambda) = \det(T_n - \lambda I_n). This polynomial satisfies the three-term recurrence relation p_n(\lambda) = (a_n - \lambda) p_{n-1}(\lambda) - b_{n-1} c_{n-1} p_{n-2}(\lambda), with initial conditions p_0(\lambda) = 1 and p_1(\lambda) = a_1 - \lambda. The eigenvalues of T_n are the roots of p_n(\lambda) = 0, and these roots inherit the recursive structure through the continued fraction representation or generating function solutions of the recurrence. For the special case of a symmetric tridiagonal Toeplitz matrix with constant diagonal entries a and constant off-diagonal entries b (i.e., a_i = a, b_i = c_i = b for all i), the eigenvalues admit an explicit formula: \lambda_k = a + 2b \cos\left( \frac{k \pi}{n+1} \right) for k = 1, 2, \dots, n. The corresponding eigenvectors have components v_{j,k} = \sin\left( \frac{j k \pi}{n+1} \right) for j = 1, \dots, n. Since the matrix is symmetric, its eigenvectors form an orthogonal basis for \mathbb{R}^n. This orthogonality follows from the spectral theorem for symmetric matrices and is preserved in the explicit sine form, where the inner product between distinct eigenvectors v_k and v_m vanishes due to the orthogonality of sine functions over the discrete grid. As n \to \infty, the eigenvalues \lambda_k of the symmetric tridiagonal Toeplitz matrix densely fill the interval [a - 2|b|, a + 2|b|], approximating the continuous spectrum of the associated infinite Toeplitz operator. The asymptotic distribution of the normalized eigenvalues (\lambda_k - a)/(2|b|) follows the arcsine law with density \frac{1}{\pi \sqrt{1 - x^2}} for x \in (-1, 1), reflecting the uniform spacing in the cosine argument. Perturbation theory for nearly tridiagonal matrices, such as those where small fill-in entries are introduced outside the tridiagonal band, shows that eigenvalues remain close to those of the original tridiagonal form if the perturbations are block-structured or localized. For a symmetric tridiagonal matrix perturbed by setting one off-diagonal entry to zero, the eigenvalue shifts are bounded by the square root of the perturbation magnitude, with explicit bounds derived from interlacing properties. More generally, for Hermitian block tridiagonal matrices under blockwise perturbations, well-separated eigenvalues exhibit insensitivity, with perturbation bounds scaling as the norm of the off-block entries relative to eigenvalue gaps.
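For the symmetric Toeplitz case the closed-form eigenvalues and sine eigenvectors can be verified numerically; the following sketch uses the illustrative values a = 2 and b = -1:
import numpy as np

n, a, b = 10, 2.0, -1.0
T = a * np.eye(n) + b * (np.eye(n, k=1) + np.eye(n, k=-1))

k = np.arange(1, n + 1)
analytic = a + 2 * b * np.cos(k * np.pi / (n + 1))        # closed-form eigenvalues
print(np.allclose(np.sort(analytic), np.linalg.eigvalsh(T)))   # True

# Eigenvector components v[j, k] = sin(j k pi / (n+1)); columns are mutually orthogonal
j = np.arange(1, n + 1)[:, None]
V = np.sin(j * k[None, :] * np.pi / (n + 1))
G = V.T @ V
print(np.allclose(G, np.diag(np.diag(G))))                     # True: off-diagonal inner products vanish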

Advanced Properties

Matrix Inversion

The inverse of a nonsingular tridiagonal matrix is generally a full (dense) matrix, but its entries possess explicit closed-form expressions that can be derived using the Green's function approach, which interprets the inverse as the discrete Green's function for the second-order linear difference equation associated with the matrix. For a general n \times n tridiagonal matrix A with diagonal entries a_1, \dots, a_n, subdiagonal entries b_1, \dots, b_{n-1}, and superdiagonal entries c_1, \dots, c_{n-1}, the entries of the inverse A^{-1} are given by ratios involving the determinants of appropriate principal submatrices. Define the leading principal minors \Delta_k = \det(A[1:k, 1:k]) for k = 0, \dots, n, where \Delta_0 = 1 and \Delta_1 = a_1, satisfying the three-term recurrence \Delta_k = a_k \Delta_{k-1} - b_{k-1} c_{k-1} \Delta_{k-2} for k \geq 2. Similarly, define the trailing principal minors \Theta_k = \det(A[n-k+1:n, n-k+1:n]) for k = 0, \dots, n, where \Theta_0 = 1 and \Theta_1 = a_n, with the recurrence \Theta_k = a_{n-k+1} \Theta_{k-1} - b_{n-k+1} c_{n-k+1} \Theta_{k-2} for k \geq 2. Then, for i \leq j, (A^{-1})_{i,j} = (-1)^{i+j} \frac{\Delta_{i-1} \Theta_{n-j}}{\Delta_n} \prod_{k=i}^{j-1} c_k, and for i > j, (A^{-1})_{i,j} = (-1)^{i+j} \frac{\Delta_{j-1} \Theta_{n-i}}{\Delta_n} \prod_{k=j}^{i-1} b_k. In the symmetric case (b_k = c_k for all k), the inverse is symmetric and the lower-triangle entries follow by transposing the upper-triangle formula. The matrix A is invertible if and only if \Delta_n \neq 0, which ensures the existence of the expressions above; the stronger condition that all leading principal minors \Delta_k \neq 0 for k = 1, \dots, n guarantees that Gaussian elimination without pivoting encounters no zero pivots. In the special case of a symmetric positive definite tridiagonal matrix (where b_i = c_i > 0 for all i and all leading principal minors are positive), the entries of the inverse decay monotonically away from the main diagonal, with the rate of decay depending on the minimum eigenvalue and the off-diagonal magnitudes; specifically, for strictly diagonally dominant matrices, |(A^{-1})_{i,j}| \leq C \rho^{|i-j|} for some constants C > 0 and 0 < \rho < 1. The leading and trailing minors can be computed in O(n) time via their respective recurrence relations, enabling the full inverse to be assembled in O(n^2) arithmetic operations by evaluating the products \prod_{k=i}^{j-1} c_k (which can be accelerated using prefix products).
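The minor recurrences and the closed-form entries can be checked against a dense inverse; the sketch below (illustrative, 0-based arrays, with a diagonally dominant test matrix so that \Delta_n \neq 0) follows the formulas above:
import numpy as np

def tridiag_inverse(a, b, c):
    # a: diagonal (length n), b: subdiagonal (n-1), c: superdiagonal (n-1)
    n = len(a)
    D = np.zeros(n + 1)            # leading principal minors Delta_0..Delta_n
    D[0], D[1] = 1.0, a[0]
    for k in range(2, n + 1):
        D[k] = a[k - 1] * D[k - 1] - b[k - 2] * c[k - 2] * D[k - 2]
    T = np.zeros(n + 1)            # trailing principal minors Theta_0..Theta_n
    T[0], T[1] = 1.0, a[n - 1]
    for k in range(2, n + 1):
        m = n - k                  # 0-based index of the block's top-left entry
        T[k] = a[m] * T[k - 1] - b[m] * c[m] * T[k - 2]
    inv = np.zeros((n, n))
    for i in range(1, n + 1):      # 1-based indices, matching the formulas above
        for j in range(1, n + 1):
            if i <= j:
                inv[i - 1, j - 1] = (-1) ** (i + j) * np.prod(c[i - 1:j - 1]) * D[i - 1] * T[n - j] / D[n]
            else:
                inv[i - 1, j - 1] = (-1) ** (i + j) * np.prod(b[j - 1:i - 1]) * D[j - 1] * T[n - i] / D[n]
    return inv

rng = np.random.default_rng(2)
n = 6
a = 4 + rng.standard_normal(n)     # diagonally dominant, hence nonsingular
b = rng.standard_normal(n - 1)
c = rng.standard_normal(n - 1)
A = np.diag(a) + np.diag(b, -1) + np.diag(c, 1)
print(np.allclose(tridiag_inverse(a, b, c), np.linalg.inv(A)))   # True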

Similarity to Symmetric Forms

A similarity transformation preserves the eigenvalues of a matrix, as the transformed matrix P^{-1} A P shares the same spectrum as the original matrix A. This property holds because the characteristic polynomials of A and P^{-1} A P are identical, ensuring that the eigenvalues and their algebraic multiplicities remain unchanged under such transformations. For real symmetric or Hermitian matrices, orthogonal (respectively unitary) similarity transformations can reduce the matrix to a real symmetric tridiagonal form, which simplifies subsequent computations while preserving the spectrum. The Householder reduction method achieves this by applying a sequence of Householder reflections (orthogonal matrices that zero out the elements below the subdiagonal in successive columns, starting from the first). This process eliminates entries outside the tridiagonal band in a stable manner, requiring approximately \frac{4}{3}n^3 floating-point operations for an n \times n matrix. Alternatively, the Lanczos algorithm, originally proposed by Lanczos in 1950, performs tridiagonalization through an iterative process that generates an orthonormal basis via three-term recurrence relations, effectively projecting the matrix onto a Krylov subspace to yield a symmetric tridiagonal representation. For real nonsymmetric matrices, the standard similarity reduction for eigenvalue computations is to upper Hessenberg form using Householder reflections or Givens rotations, which eliminates elements below the subdiagonal. Reduction to tridiagonal form is possible but less common due to numerical stability issues and higher costs. In the QR algorithm, Hessenberg (or tridiagonal for symmetric cases) reduction accelerates iterations by banding the matrix. Post-reduction, the tridiagonal matrix may be unreduced, meaning all subdiagonal elements are nonzero, which implies it has no multiple eigenvalues and corresponds to an irreducible structure. In contrast, a block tridiagonal form arises if some subdiagonal elements are exactly zero, indicating the presence of invariant subspaces or multiple eigenvalues that partition the matrix into smaller tridiagonal blocks. This distinction is important for further analysis, as block forms allow divide-and-conquer strategies in eigenvalue solvers.
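As a quick illustration, SciPy's general Hessenberg reduction produces a tridiagonal matrix when applied to a symmetric input (a sketch using scipy.linalg.hessenberg, not LAPACK's dedicated symmetric tridiagonalization driver):
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                          # symmetrize the test matrix

# Householder reduction; for symmetric input the Hessenberg form is tridiagonal
T, Q = hessenberg(A, calc_q=True)

off_band = T - np.triu(np.tril(T, 1), -1)  # entries outside the tridiagonal band
print(np.allclose(off_band, 0))            # True (up to rounding)
print(np.allclose(Q @ T @ Q.T, A))         # similarity: A = Q T Q^T
print(np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(A)))   # spectra agree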

Solution of Linear Systems

Solving a linear system Ax = b, where A is an n \times n tridiagonal matrix with subdiagonal entries b_i (for i=2,\dots,n), diagonal entries a_i (for i=1,\dots,n), and superdiagonal entries c_i (for i=1,\dots,n-1), and b \in \mathbb{R}^n, requires ensuring the existence and uniqueness of the solution. A sufficient condition for A to be nonsingular, and thus for the system to have a unique solution, is strict diagonal dominance, where |a_1| > |c_1|, |a_n| > |b_n|, and |a_i| > |b_i| + |c_i| for i=2,\dots,n-1. This property guarantees invertibility for general matrices, including tridiagonal ones, via the Gershgorin circle theorem, as all eigenvalues lie within disks centered at the diagonal entries that exclude the origin. One theoretical approach to solving the system adapts Cramer's rule, which expresses each component x_j as the ratio of two determinants: the determinant of the matrix obtained by replacing the j-th column of A with b, divided by \det(A). For tridiagonal matrices, these determinants (and subdeterminants) can be computed efficiently using a three-term recurrence, reducing the cost from the O(n!) of naive cofactor expansion to O(n^2) overall, as each of the n determinants requires O(n) operations. However, this method remains inefficient compared to direct techniques, as it does not exploit the structure for linear-time performance and can accumulate errors in finite precision. More efficient resolution leverages factorization, such as LU decomposition, which for tridiagonal matrices can be performed in O(n) time due to the limited bandwidth. The resulting lower and upper triangular factors then allow solution via forward substitution to solve Ly = b followed by backward substitution to solve Ux = y, each also requiring O(n) operations, yielding an overall complexity of O(n) for the system. This approach underscores the structural advantage of tridiagonal systems over dense ones, where substitution alone costs O(n^2). The well-posedness of the problem depends on the condition number \kappa(A), typically measured in the 2-norm as \kappa(A) = \|A\|_2 \|A^{-1}\|_2, which bounds, to first order, the relative error amplification: \frac{\| \delta x \| / \|x\|}{\| \delta A \| / \|A\| + \| \delta b \| / \|b\|} \leq \kappa(A). For tridiagonal matrices, explicit O(n)-time algorithms exist to compute \kappa_1(A) or \kappa_\infty(A), exploiting the QR factorization or recurrence relations for norms of A and A^{-1}. Well-posedness holds when \kappa(A) remains bounded independently of n, as in certain Toeplitz tridiagonal matrices where \kappa_\infty(A) \approx 3 regardless of the dimension; otherwise, growing \kappa(A) (e.g., exponentially in some cases) indicates sensitivity to perturbations. Perturbation analysis reveals vulnerabilities in ill-conditioned tridiagonal systems, where small changes in the entries or the right-hand side can produce large deviations in x. For instance, consider a symmetric tridiagonal M-matrix with diagonal entries 2 + \epsilon and off-diagonals -1, where a small \epsilon > 0 (e.g., \epsilon = 0.009) yields \kappa(A) \approx 300 for n=50, amplifying relative errors by a factor of hundreds despite the matrix being diagonally dominant. Wilkinson's seminal work on error analysis, originally for eigenvalue problems and later extended to linear systems, highlights such cases, showing that Gaussian elimination on near-singular tridiagonal systems incurs small backward errors (on the order of n times the unit roundoff), while forward errors can approach \kappa(A) times the backward error, emphasizing the need for careful numerical handling.
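The dimension-independent conditioning mentioned for the Toeplitz case can be observed directly; a small sketch using the diagonal-4, off-diagonal-1 matrix:
import numpy as np

# Infinity-norm condition number of the tridiagonal Toeplitz matrix with
# diagonal 4 and off-diagonals 1: it approaches 3 and stays bounded as n grows
for n in (5, 50, 500):
    A = 4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    print(n, np.linalg.cond(A, np.inf))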

Numerical Algorithms

LU Decomposition

The LU decomposition of a tridiagonal matrix A with subdiagonal entries a_i (i=2,\dots,n), diagonal entries b_i (i=1,\dots,n), and superdiagonal entries c_i (i=1,\dots,n-1) factors A = LU, where L is a unit lower bidiagonal matrix with 1's on the main diagonal and subdiagonal entries l_i (i=2,\dots,n), and U is an upper bidiagonal matrix with diagonal entries u_i (i=1,\dots,n) and superdiagonal entries c_i (i=1,\dots,n-1). The factors are computed using the following forward recurrence relations without pivoting: \begin{align*} u_1 &= b_1, \\ l_i &= \frac{a_i}{u_{i-1}}, \quad i = 2, \dots, n, \\ u_i &= b_i - l_i c_{i-1}, \quad i = 2, \dots, n. \end{align*} This process requires O(n) arithmetic operations and O(n) storage, as only the nonzero elements of L and U need to be retained. No pivoting is required if A is strictly diagonally dominant, i.e., |b_1| \geq |c_1|, |b_i| > |a_i| + |c_i| for i=2,\dots,n-1, and |b_n| \geq |a_n|, ensuring all u_i \neq 0 and numerical stability in floating-point arithmetic. Otherwise, the algorithm may break down if some u_i = 0, necessitating pivoting strategies that disrupt the bidiagonal structure of the factors. This factorization is closely related to the evaluation of continued fractions, where the recurrence relations mirror the iterative computation of convergents in a continued fraction expansion for solving tridiagonal systems.
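A direct transcription of these recurrences (illustrative names, 0-based arrays) reconstructs the factors and verifies A = LU on a diagonally dominant example:
import numpy as np

def tridiag_lu(a, b, c):
    # b: diagonal (length n), a: subdiagonal (n-1, entries a_2..a_n), c: superdiagonal (n-1)
    n = len(b)
    u = np.zeros(n)        # diagonal of U
    l = np.zeros(n - 1)    # subdiagonal of L (unit diagonal implied)
    u[0] = b[0]
    for i in range(1, n):
        l[i - 1] = a[i - 1] / u[i - 1]
        u[i] = b[i] - l[i - 1] * c[i - 1]
    return l, u

rng = np.random.default_rng(4)
n = 7
b = 5 + rng.standard_normal(n)        # diagonally dominant, so no pivoting is needed
a = rng.standard_normal(n - 1)
c = rng.standard_normal(n - 1)

l, u = tridiag_lu(a, b, c)
L = np.eye(n) + np.diag(l, -1)
U = np.diag(u) + np.diag(c, 1)
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
print(np.allclose(L @ U, A))          # True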

Thomas Algorithm

The Thomas algorithm, also known as the tridiagonal matrix algorithm (TDMA), is a direct method for solving a system of linear equations Ax = d, where A is an n \times n tridiagonal matrix with subdiagonal elements a_i (for i = 2, \dots, n), diagonal elements b_i (for i = 1, \dots, n), and superdiagonal elements c_i (for i = 1, \dots, n-1), and d is the right-hand side vector. Named after the physicist Llewellyn Thomas, who introduced it in 1949, the algorithm exploits the banded structure of A to perform Gaussian elimination without fill-in, reducing the computational complexity from O(n^3) for general dense matrices to O(n). It consists of a forward elimination phase that transforms the system into an upper triangular form and a subsequent back-substitution phase to recover the solution x. In the forward elimination phase, the algorithm computes the diagonal elements u_i of the upper triangular factor U in the LU factorization of A (with unit diagonal in L), while updating the right-hand side to a modified vector z. Specifically, set u_1 = b_1 and z_1 = d_1 / u_1; then, for i = 2 to n, compute the multiplier w_i = a_i / u_{i-1}, u_i = b_i - w_i c_{i-1}, and z_i = (d_i - a_i z_{i-1}) / u_i. This phase performs the elimination and row scaling in a single sweep, avoiding storage of the full L or U matrices beyond the necessary scalars. The back-substitution phase then recovers the solution by setting x_n = z_n and, for i = n-1 down to 1, x_i = z_i - (c_i / u_i) x_{i+1}. The following pseudocode outlines the algorithm:
# Forward elimination
u[1] = b[1]
z[1] = d[1] / u[1]
for i = 2 to n:
    w = a[i] / u[i-1]
    u[i] = b[i] - w * c[i-1]
    z[i] = (d[i] - a[i] * z[i-1]) / u[i]

# Back-substitution
x[n] = z[n]
for i = n-1 downto 1:
    x[i] = z[i] - (c[i] / u[i]) * x[i+1]
This implementation requires O(n) floating-point operations (roughly 8n), confirming the linear complexity. The algorithm is numerically stable when the tridiagonal matrix is strictly diagonally dominant, i.e., |b_i| > |a_i| + |c_i| for all i, or when it is symmetric positive definite, as these conditions prevent growth of rounding errors during elimination. However, it can be sensitive to rounding errors in non-diagonally dominant cases, potentially leading to instability without pivoting, though pivoting introduces fill-in and reduces the sparsity advantage. Due to its sequential dependencies (each step relies on the previous computation), the Thomas algorithm poses challenges for parallelization on multi-core or distributed systems when solving a single tridiagonal system, often requiring domain decomposition or alternative methods like cyclic reduction for effective parallelism. In contrast to general Gaussian elimination, which demands O(n^3) operations and does not exploit the banded structure, the Thomas algorithm's efficiency makes it ideal for large-scale tridiagonal systems arising in numerical simulations.
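A runnable NumPy translation of the pseudocode above (0-based indexing; the check against a dense solver is for illustration only) might look like this:
import numpy as np

def thomas(a, b, c, d):
    # a: subdiagonal (n-1), b: diagonal (n), c: superdiagonal (n-1), d: right-hand side (n)
    n = len(b)
    u = np.empty(n); z = np.empty(n)
    u[0] = b[0]
    z[0] = d[0] / u[0]
    for i in range(1, n):                       # forward elimination
        w = a[i - 1] / u[i - 1]
        u[i] = b[i] - w * c[i - 1]
        z[i] = (d[i] - a[i - 1] * z[i - 1]) / u[i]
    x = np.empty(n)
    x[-1] = z[-1]
    for i in range(n - 2, -1, -1):              # back-substitution
        x[i] = z[i] - (c[i] / u[i]) * x[i + 1]
    return x

rng = np.random.default_rng(5)
n = 8
b = 5 + rng.standard_normal(n)                  # diagonally dominant test matrix
a = rng.standard_normal(n - 1); c = rng.standard_normal(n - 1)
d = rng.standard_normal(n)
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))   # True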

Eigenvalue Computation Methods

The QR algorithm, adapted for tridiagonal matrices, is a cornerstone method for computing eigenvalues by iteratively performing QR decompositions while preserving the tridiagonal structure. For symmetric tridiagonal matrices, the algorithm maintains this form exactly, reducing the computational cost to O(n) per iteration compared to O(n^3) for dense matrices, with a total cost of O(n^2) for convergence. Implicit shifts, such as the Rayleigh-quotient shift \sigma_k = h_{n,n}^{(k-1)} or the more robust Wilkinson shift (the eigenvalue of the bottom 2\times 2 submatrix closer to h_{n,n}^{(k-1)}), accelerate convergence to at least quadratic rates without explicitly forming the shifted matrix, avoiding fill-in beyond the tridiagonal band. Deflation occurs when a subdiagonal element |b_k| falls below a tolerance like \epsilon(|a_{k-1}| + |a_k|), isolating converged eigenvalues and decoupling the matrix into smaller independent tridiagonal blocks, which can then be treated recursively. This approach, originating from Francis's work, ensures numerical stability and efficiency for real symmetric cases. For symmetric tridiagonal matrices, the divide-and-conquer method offers an O(n^2) alternative to the QR method, particularly advantageous for parallelization. It recursively splits the matrix at an interior point, say index m \approx n/2, into two smaller tridiagonal submatrices T_1 and T_2 via a rank-one update: T = \begin{bmatrix} T_1 & 0 \\ 0 & T_2 \end{bmatrix} + \rho u u^T, where \rho = \pm b_m and u has \pm 1 at positions m and m+1. The eigenvalues of the subproblems are computed recursively, and the full spectrum is obtained by solving a secular equation for the rank-one perturbation. Originally proposed by Cuppen, the method was refined by Gu and Eisenstat for stability, achieving O(n^2) complexity for eigenvalues (with O(n^3) total if eigenvectors are needed due to the orthogonal transformations). The bisection method, leveraging Sturm sequences, provides a reliable way to isolate and refine individual eigenvalues of symmetric tridiagonal matrices without computing the full spectrum. A Sturm sequence \{p_0(\lambda), \dots, p_n(\lambda)\} for the tridiagonal matrix is generated via the three-term recurrence: p_0(\lambda) = 1, p_1(\lambda) = a_1 - \lambda, and p_k(\lambda) = (a_k - \lambda) p_{k-1}(\lambda) - b_{k-1}^2 p_{k-2}(\lambda) for k \geq 2, where a_k are the diagonal and b_k the subdiagonal elements. The number of sign changes in the sequence equals the number of eigenvalues less than \lambda, by Sturm's theorem adapted to matrices. Bisection proceeds by bracketing eigenvalues using Gershgorin bounds and iteratively halving intervals until convergence, with each evaluation costing O(n) operations, making it suitable for computing selected eigenvalues. This technique, popularized in Givens's and Wilkinson's frameworks, ensures isolation even for clustered spectra. In the symmetric case, particularly within divide-and-conquer, eigenvalues after rank-one updates are found by solving the secular equation f(\lambda) = 1 + \rho \sum_{k=1}^m \frac{v_k^2}{\lambda_k - \lambda} = 0, where \{\lambda_k\} and \{v_k\} derive from the subproblem eigensystems. Roots lie in the intervals between consecutive \lambda_k, and each is located using a quadratic interpolation or bisection-like solver in O(1) iterations per root, totaling O(n) per update. This step, critical for merging sub-spectra, maintains backward stability as shown in refined implementations.
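The Sturm-sequence counting idea can be sketched compactly using the pivots of the LDL^T factorization of T - xI, whose number of negative values equals the Sturm sign-change count by Sylvester's law of inertia; the function names, the zero-pivot guard, and the test matrix below are illustrative:
import numpy as np

def eig_count_below(a, b, x):
    # Number of eigenvalues of the symmetric tridiagonal matrix (diagonal a,
    # off-diagonal b) that are strictly less than x, via the pivots of T - xI.
    count, d = 0, a[0] - x
    if d < 0:
        count += 1
    for k in range(1, len(a)):
        if d == 0.0:
            d = 1e-300                       # crude guard against an exact zero pivot
        d = (a[k] - x) - b[k - 1] ** 2 / d
        if d < 0:
            count += 1
    return count

def bisect_eigenvalue(a, b, m, tol=1e-12):
    # Locate the m-th smallest eigenvalue (m = 1, 2, ...) by bisection on a
    # Gershgorin interval that contains the whole spectrum.
    r = np.abs(np.concatenate(([0.0], b))) + np.abs(np.concatenate((b, [0.0])))
    lo, hi = np.min(a - r), np.max(a + r)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if eig_count_below(a, b, mid) < m:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a = np.array([2.0, 2.0, 2.0, 2.0, 2.0])
b = np.array([-1.0, -1.0, -1.0, -1.0])
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
print(bisect_eigenvalue(a, b, 1))            # smallest eigenvalue
print(np.linalg.eigvalsh(T)[0])              # agreement with a dense solver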
Software libraries implement these methods efficiently; for instance, LAPACK's DSTEVD routine computes all eigenvalues, and optionally eigenvectors, of a real symmetric tridiagonal matrix using a divide-and-conquer approach, offering O(n^2) performance for eigenvalues and scalability on parallel architectures. Similarly, DSTEQR employs the implicitly shifted QL/QR algorithm for the same task, balancing speed and reliability in standard numerical environments.
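In SciPy these LAPACK drivers are accessible through scipy.linalg.eigh_tridiagonal; a minimal sketch (the exact LAPACK routine dispatched depends on the lapack_driver argument and the SciPy version):
import numpy as np
from scipy.linalg import eigh_tridiagonal

d = np.array([2.0, 2.0, 2.0, 2.0])   # main diagonal
e = np.array([-1.0, -1.0, -1.0])     # off-diagonal
w, V = eigh_tridiagonal(d, e)        # eigenvalues (ascending) and orthonormal eigenvectors
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
print(np.allclose(V @ np.diag(w) @ V.T, T))   # True: spectral decomposition reconstructs T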

Applications

Finite Difference Methods

Tridiagonal matrices frequently arise in the discretization of second-order ordinary differential equations using finite difference methods. For the boundary value problem -u''(x) = f(x) on an interval [a, b] with Dirichlet conditions u(a) = u_a and u(b) = u_b, the second-order central difference approximation at interior grid points x_i = a + i h (where h = (b - a)/n) yields the equations \frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} = f_i, leading to a tridiagonal system A \mathbf{u} = \mathbf{f} where A has 2 on the main diagonal and -1 on the sub- and superdiagonals (scaled by 1/h^2). In the context of the one-dimensional Poisson equation -\frac{d^2 u}{dx^2} = f(x) subject to homogeneous Dirichlet boundaries on [0, 1], the discretization on a uniform grid with n interior points produces the explicit tridiagonal form \frac{1}{h^2} \begin{pmatrix} 2 & -1 & & \\ -1 & 2 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 2 \end{pmatrix} \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix} = \begin{pmatrix} f_1 \\ \vdots \\ f_n \end{pmatrix}, where h = 1/(n+1). This structure exploits the locality of the second derivative operator, enabling efficient solution of the resulting linear system. For time-dependent problems, such as the one-dimensional heat equation u_t = \alpha u_{xx} on [0, L] \times [0, T] with appropriate initial and boundary conditions, the implicit Backward Time Centered Space (BTCS) scheme at each time step \Delta t discretizes the spatial derivative centrally, resulting in a tridiagonal system of the form (I - r A) \mathbf{u}^{k+1} = \mathbf{u}^k, where r = \alpha \Delta t / h^2, A is the second-difference tridiagonal matrix with -2 on the diagonal and 1 on the off-diagonals, and \mathbf{u}^k is the solution vector at time level k. This approach ensures unconditional stability for any \Delta t > 0, with one tridiagonal solve per time step. The accuracy of these central difference approximations for the second derivative stems from Taylor series expansions, where the truncation error of \frac{u(x+h) - 2u(x) + u(x-h)}{h^2} \approx u''(x) is O(h^2), derived from the even-order terms in the expansion. Consequently, the global error in the solution to problems like the Poisson equation is also O(h^2), assuming sufficient smoothness of u and f. In multigrid methods for solving the linear systems from discretizations of elliptic PDEs such as the 1D Poisson equation, tridiagonal solvers serve as exact coarse-grid correctors, leveraging the banded structure on coarser levels to accelerate convergence. Used as preconditioners or coarse-grid solvers, they reduce the effective condition number, achieving near-optimal O(n) work complexity for the overall solver. The Thomas algorithm can be employed to solve these tridiagonal systems efficiently in O(n) time.
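A compact sketch of the 1D Poisson discretization (using the manufactured solution u(x) = sin(pi x), so f = pi^2 sin(pi x); parameter choices are illustrative) shows both the banded solve and the O(h^2) convergence:
import numpy as np
from scipy.linalg import solve_banded

for n in (10, 20, 40, 80):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)                  # interior grid points
    f = np.pi ** 2 * np.sin(np.pi * x)

    # Banded storage for solve_banded: rows hold superdiagonal, diagonal, subdiagonal
    ab = np.zeros((3, n))
    ab[0, 1:] = -1.0 / h ** 2                     # superdiagonal
    ab[1, :] = 2.0 / h ** 2                       # main diagonal
    ab[2, :-1] = -1.0 / h ** 2                    # subdiagonal

    u = solve_banded((1, 1), ab, f)
    err = np.max(np.abs(u - np.sin(np.pi * x)))
    print(n, err)                                 # error shrinks roughly by 4x as h halves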

Physical Systems and Models

Tridiagonal matrices arise prominently in the modeling of physical systems where interactions are limited to nearest neighbors, such as in one-dimensional chains or discretized continuous media. In condensed matter physics, the tight-binding model represents the Hamiltonian of electrons in a periodic lattice as a tridiagonal matrix for a one-dimensional chain, capturing hopping between adjacent sites while neglecting longer-range interactions. The diagonal elements correspond to on-site energies, and the off-diagonal elements represent the hopping amplitudes, typically denoted as t. The eigenvalues of this matrix yield the energy bands of the system, providing insights into electronic properties like conductivity and band gaps. This approach originated from approximations in quantum chemistry, particularly extensions of Hückel theory in the 1950s, where tridiagonal forms were used to simplify calculations for linear conjugated systems, such as polyenes, by assuming \pi-electron interactions only between adjacent atoms. In quantum mechanics, tridiagonal matrices facilitate numerical solutions to the time-independent Schrödinger equation for potentials like the finite square well. The Numerov method discretizes the second-order differential equation on a spatial grid, resulting in a tridiagonal eigenvalue problem where the matrix incorporates the potential on the diagonal and finite-difference approximations of the kinetic-energy (second-derivative) operator on the off-diagonals. This formulation allows efficient computation of bound-state energies and wavefunctions, with higher-order accuracy compared to standard finite-difference schemes due to the Numerov integrator's error reduction. For a finite well of width a and depth V_0, the resulting matrix eigenvalues approximate the quantized energy levels E_n, enabling analysis of tunneling and confinement effects without full analytic solutions. Classical mechanical systems also employ tridiagonal matrices to describe vibrations in continuous media or discrete chains. For a vibrating string governed by the wave equation \frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}, spatial discretization via finite differences yields tridiagonal mass and stiffness matrices in the modal analysis, where the eigenvalues correspond to squared frequencies of the normal modes. Similarly, for a discretized beam under Euler-Bernoulli theory, the stiffness matrix takes a banded form reflecting bending moments between adjacent segments. In systems of coupled oscillators, such as a chain of masses connected by springs, the coupling matrix is tridiagonal, with diagonal terms from individual spring constants and off-diagonals from inter-mass couplings; diagonalizing this matrix reveals the normal modes, where each mode represents collective oscillation at a distinct frequency, facilitating understanding of energy transfer and wave propagation. These representations underscore the efficiency of tridiagonal structures in capturing localized interactions in both quantum and classical models.
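As an illustration of the quantum-mechanical case, a plain second-order finite-difference discretization (simpler than the Numerov scheme described above; units with \hbar = m = 1 and the well parameters are assumed for this sketch) reduces the finite square well to a symmetric tridiagonal eigenvalue problem:
import numpy as np
from scipy.linalg import eigh_tridiagonal

L, n = 20.0, 1000                       # domain [-L/2, L/2] and grid size (assumed)
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

V0, a = 1.0, 2.0                        # well depth and half-width (assumed)
V = np.where(np.abs(x) < a, -V0, 0.0)   # finite square well potential

diag = 1.0 / h ** 2 + V                 # from -(1/2) u'': 1/h^2 on the diagonal, plus V
off = -0.5 / h ** 2 * np.ones(n - 1)    # nearest-neighbor coupling on the off-diagonals

E = eigh_tridiagonal(diag, off, eigvals_only=True)
print(E[E < 0])                         # negative eigenvalues approximate the bound-state energies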

Graph Laplacians and Combinatorics

In graph theory, the path graph P_n on n vertices has an adjacency matrix that is tridiagonal, featuring zeros along the main diagonal and ones along the subdiagonal and superdiagonal, reflecting the linear chain structure where each vertex connects only to its immediate neighbors. This sparse form facilitates analytical study of graph properties, such as spectral and connectivity characteristics. The Laplacian matrix L of P_n, defined as L = D - A where D is the diagonal degree matrix and A is the adjacency matrix, is also tridiagonal. It has entries of 1 on the diagonal for the endpoint vertices (degree 1), 2 on the diagonal for internal vertices (degree 2), and -1 on the off-diagonals, capturing the second differences along the path. By Kirchhoff's matrix-tree theorem, the number of spanning trees in any graph equals any cofactor (the determinant of the submatrix obtained by removing one row and column) of its Laplacian matrix; for the tree P_n, this yields 1, confirming that the path has a unique spanning tree. Tridiagonal determinants arise prominently in combinatorial enumerations related to path graphs. The matching polynomial of P_n, which generates the number of matchings of each size (subsets of non-adjacent edges), satisfies the recurrence \alpha(P_n, x) = x \alpha(P_{n-1}, x) - \alpha(P_{n-2}, x), with initial conditions \alpha(P_0, x) = 1 and \alpha(P_1, x) = x; this three-term relation mirrors the recursive computation of determinants for tridiagonal matrices with constant off-diagonal entries, enabling closed-form expressions via Chebyshev polynomials of the second kind. Similarly, rook polynomials, which enumerate non-attacking rook placements on Ferrers boards (staircase shapes akin to path-induced boards), exhibit recurrences solvable through tridiagonal matrix determinants, linking them to orthogonal polynomials whose Jacobi matrices are tridiagonal. In probabilistic combinatorics, random walks on the path graph P_n model diffusion along a line, with the transition matrix being tridiagonal: endpoints transition with probability 1 to their single neighbor, while internal vertices transition with probability 1/2 to each adjacent vertex. This structure allows efficient computation of mixing times and stationary distributions, highlighting the role of tridiagonal matrices in analyzing combinatorial processes on linear graphs. Eigenvalues of these tridiagonal matrices further inform the graph spectrum, providing insights into expansion properties.
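A short sketch ties these statements together for a small path graph (illustrative size), checking the spanning-tree cofactor and the adjacency spectrum:
import numpy as np

n = 6
# Adjacency matrix of the path graph P_n: ones on the sub- and superdiagonal
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
D = np.diag(A.sum(axis=1))                 # degree matrix: 1 at the endpoints, 2 inside
L = D - A                                  # tridiagonal graph Laplacian

# Matrix-tree theorem: any cofactor of L counts the spanning trees (= 1 for a tree)
cofactor = np.linalg.det(L[1:, 1:])
print(round(cofactor))                     # 1

# Adjacency spectrum of P_n matches the closed form 2 cos(k pi / (n+1))
k = np.arange(1, n + 1)
print(np.allclose(np.sort(2 * np.cos(k * np.pi / (n + 1))), np.linalg.eigvalsh(A)))   # True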
