Tridiagonal matrix
A tridiagonal matrix is a square matrix in which all entries are zero except those on the main diagonal, the subdiagonal (immediately below the main diagonal), and the superdiagonal (immediately above the main diagonal).[1] This structure makes it a special case of a band matrix with bandwidth 1, allowing for compact storage using only O(n) space for an n×n matrix, compared to O(n²) for a dense matrix.[2] Tridiagonal matrices possess several advantageous properties that facilitate efficient numerical computations. For instance, systems of linear equations involving a tridiagonal coefficient matrix Ax = b can be solved in O(n) time using specialized algorithms like the Thomas algorithm, which is a variant of Gaussian elimination tailored to this sparse structure and avoids fill-in during factorization.[3] Additionally, for symmetric tridiagonal matrices, eigenvalues and eigenvectors can be computed using stable O(n²) methods such as the QR algorithm, which is particularly useful in spectral analysis.[4] Determinants of tridiagonal matrices can also be evaluated recursively in O(n) time via a continued fraction-like formula, leveraging the matrix's banded form.[2] These matrices arise naturally in numerous applications across mathematics and engineering. In numerical analysis, they model finite difference approximations to second-order boundary value problems for ordinary and partial differential equations, such as the Poisson equation in one dimension.[5] Tridiagonal forms also appear in the theory of orthogonal polynomials, Markov chains for modeling sequential processes, and Tikhonov regularization for ill-posed inverse problems, where their Toeplitz variants (constant diagonals) enable closed-form solutions and fast computations.[6] Furthermore, in stochastic modeling, such as epidemic spread or particle deposition on lattices, tridiagonal matrices capture nearest-neighbor interactions efficiently.[7]
Definition and Notation
Formal Definition
A tridiagonal matrix is a square matrix of order n in which all entries are zero except those on the main diagonal, the subdiagonal (immediately below the main diagonal), and the superdiagonal (immediately above the main diagonal).[2] This structure confines the nonzero elements to a narrow band around the main diagonal, distinguishing it from denser matrices.[8] In terms of bandwidth, a tridiagonal matrix has a lower bandwidth of 1 and an upper bandwidth of 1, meaning the nonzero entries extend at most one position below and above the main diagonal.[9] More formally, for an n \times n matrix A = (a_{i,j}), it satisfies a_{i,j} = 0 whenever |i - j| > 1.[2] This contrasts with a diagonal matrix, which has bandwidth (0,0) and nonzero entries only on the main diagonal, and with bidiagonal matrices, which have bandwidth (1,0) or (0,1) by including only one off-diagonal. Broader banded matrices, such as pentadiagonal ones, allow nonzeros up to two positions away, resulting in bandwidth (2,2).[9] Under certain conditions, such as the matrix being irreducible and satisfying diagonal dominance (where each diagonal entry's absolute value is at least the sum of the absolute values of the off-diagonal entries in its row), tridiagonal matrices exhibit desirable properties like nonsingularity.[2] A tridiagonal matrix is irreducible if all subdiagonal and superdiagonal entries are nonzero, ensuring the associated directed graph is strongly connected.[2] These matrices are particularly valuable in sparse matrix contexts for efficient storage and computation in numerical linear algebra.[8]
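The defining condition a_{i,j} = 0 for |i - j| > 1 translates directly into an index test. The following NumPy sketch (the helper names is_tridiagonal and bandwidths are illustrative, not taken from any library) checks this condition and reports the lower and upper bandwidths of a square matrix.

```python
import numpy as np

def is_tridiagonal(A, tol=0.0):
    """Return True if every entry with |i - j| > 1 is (numerically) zero."""
    n, m = A.shape
    assert n == m, "matrix must be square"
    i, j = np.indices(A.shape)
    return bool(np.all(np.abs(A[np.abs(i - j) > 1]) <= tol))

def bandwidths(A, tol=0.0):
    """Lower and upper bandwidth: farthest nonzero diagonal below/above the main one."""
    i, j = np.indices(A.shape)
    nz = np.abs(A) > tol
    lower = int(np.max((i - j)[nz], initial=0))
    upper = int(np.max((j - i)[nz], initial=0))
    return lower, upper

A = np.array([[4., 3., 0.],
              [1., 5., 1.],
              [0., 2., 6.]])
print(is_tridiagonal(A))   # True
print(bandwidths(A))       # (1, 1): lower and upper bandwidth of a tridiagonal matrix
```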
Matrix Representation and Examples
A tridiagonal matrix T of order n is typically denoted in block form using vectors for its three nonzero diagonals: the main diagonal consists of elements b_i for i = 1, \dots, n, the subdiagonal (immediately below the main diagonal) consists of elements a_i for i = 2, \dots, n, and the superdiagonal (immediately above the main diagonal) consists of elements c_i for i = 1, \dots, n-1.[10] This notation compactly represents the sparse structure where all other entries are zero, as seen in the general form: T = \begin{pmatrix} b_1 & c_1 & 0 & \cdots & 0 \\ a_2 & b_2 & c_2 & \cdots & 0 \\ 0 & a_3 & b_3 & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & c_{n-1} \\ 0 & 0 & \cdots & a_n & b_n \end{pmatrix}.
For a concrete illustration, consider a 3×3 tridiagonal matrix with specific numerical values, such as b_1 = 4, b_2 = 5, b_3 = 6, a_2 = 1, a_3 = 2, c_1 = 3, and c_2 = 1: T = \begin{pmatrix} 4 & 3 & 0 \\ 1 & 5 & 1 \\ 0 & 2 & 6 \end{pmatrix}. This example highlights the banded structure, with nonzero entries confined to the specified diagonals. Similarly, a 4×4 tridiagonal matrix can be formed with values b_1 = 2, b_2 = 3, b_3 = 4, b_4 = 5, a_2 = 1, a_3 = 1, a_4 = 2, c_1 = 2, c_2 = 3, and c_3 = 1: T = \begin{pmatrix} 2 & 2 & 0 & 0 \\ 1 & 3 & 3 & 0 \\ 0 & 1 & 4 & 1 \\ 0 & 0 & 2 & 5 \end{pmatrix}. These numerical instances demonstrate how varying the diagonal elements allows for diverse applications while preserving the tridiagonal form.[10]
In the symmetric case, the subdiagonal and superdiagonal elements are equal, i.e., a_{i+1} = c_i for i = 1, \dots, n-1, resulting in a matrix that equals its own transpose. For example, a 3×3 symmetric tridiagonal matrix might take the form: T = \begin{pmatrix} b_1 & c_1 & 0 \\ c_1 & b_2 & c_2 \\ 0 & c_2 & b_3 \end{pmatrix}, where the off-diagonal symmetries a_2 = c_1 and a_3 = c_2 ensure T = T^T. This structure is prevalent in applications like quantum mechanics and orthogonal polynomials, where self-adjoint operators are modeled.[11]
A special subclass is the tridiagonal Toeplitz matrix, characterized by constant values along each diagonal: a fixed \delta on the main diagonal, fixed \sigma on the subdiagonal, and fixed \tau on the superdiagonal. For n = 4, such a matrix appears as: T = \begin{pmatrix} \delta & \tau & 0 & 0 \\ \sigma & \delta & \tau & 0 \\ 0 & \sigma & \delta & \tau \\ 0 & 0 & \sigma & \delta \end{pmatrix}. This constant-diagonal property simplifies analytical computations, such as finding closed-form expressions for eigenvalues, and arises in signal processing and difference equations.[6]
Tridiagonal matrices also arise naturally in the discretization of second-order linear recurrence relations. Specifically, solving a tridiagonal system A x = d, where A has subdiagonal elements a_k, diagonal b_k, and superdiagonal c_k, yields components x_k that satisfy a second-order inhomogeneous linear recurrence of the form x_k = p_{k-1} x_{k-1} + q_{k-1} x_{k-2} + r_{k-1}, with coefficients derived from the matrix entries and initial conditions from the boundary equations. In the Toeplitz case with constant coefficients, this recurrence admits explicit solutions via characteristic equations.[12]
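The three-diagonal notation maps directly onto dense-matrix construction. A minimal NumPy sketch (the helper name tridiag is our own) assembles a tridiagonal matrix from its diagonals, reproducing the 3×3 example above and a tridiagonal Toeplitz matrix with arbitrarily chosen constants:

```python
import numpy as np

def tridiag(sub, main, sup):
    """Assemble a dense tridiagonal matrix from its three diagonals.
    sub has length n-1 (entries a_2..a_n), main has length n, sup has length n-1."""
    return np.diag(main) + np.diag(sub, k=-1) + np.diag(sup, k=1)

# The 3x3 example from the text: b = (4, 5, 6), a = (1, 2), c = (3, 1)
T3 = tridiag([1, 2], [4, 5, 6], [3, 1])
print(T3)

# A 4x4 tridiagonal Toeplitz matrix: delta on the main diagonal,
# sigma on the subdiagonal, tau on the superdiagonal (values chosen for illustration).
delta, sigma, tau = 2.0, -1.0, -1.0
n = 4
T_toeplitz = tridiag([sigma] * (n - 1), [delta] * n, [tau] * (n - 1))
print(T_toeplitz)
```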
Algebraic Properties
Basic Structure and Operations
A tridiagonal matrix of order n is sparse, with at most 3n - 2 nonzero entries: n on the main diagonal, n-1 on the superdiagonal, and n-1 on the subdiagonal.[13] This sparsity structure facilitates efficient storage and computation, requiring only O(n) space compared to O(n^2) for a dense matrix of the same order.[13] The set of tridiagonal matrices forms a vector subspace of the space of all n \times n matrices, as it is closed under addition and scalar multiplication. Specifically, the sum of two tridiagonal matrices is tridiagonal, since the addition of corresponding entries preserves zeros outside the three central diagonals. Similarly, multiplying a tridiagonal matrix by a scalar yields another tridiagonal matrix, with all nonzero entries scaled accordingly.[13] The product of two tridiagonal matrices is generally pentadiagonal, with nonzero entries potentially extending to the second superdiagonal and second subdiagonal, so that up to five diagonals may be nonzero (semi-bandwidth 2). This follows from the property that the product of two band matrices with semi-bandwidths p and q has semi-bandwidth at most p + q. The product remains tridiagonal if the off-diagonal contributions cancel out, such as when one matrix is diagonal or the matrices commute in a way that avoids fill-in on the outer diagonals.[13] The transpose of a tridiagonal matrix is also tridiagonal, with the superdiagonal and subdiagonal interchanged while the main diagonal remains unchanged. If the original matrix is symmetric (superdiagonal equals subdiagonal), then it equals its transpose.[13] The trace of a tridiagonal matrix, the sum of its main diagonal entries (and hence of its eigenvalues), is independent of the off-diagonal elements. For norms, the Frobenius norm is the square root of the sum of squares of all entries, which for a tridiagonal matrix involves only the three nonzero diagonals; the infinity norm, as the maximum absolute row sum, equals the maximum over rows of the absolute value of the diagonal entry plus the absolute values of the (at most two) adjacent off-diagonal entries.[13] These properties underpin the use of tridiagonal matrices in solving sparse linear systems efficiently.[13]
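These closure properties are easy to confirm numerically. The sketch below (the helper name band_limits is illustrative) builds two random tridiagonal matrices and checks that their sum and scalar multiples stay tridiagonal, that their product is pentadiagonal, that transposition swaps the off-diagonals, and that the trace depends only on the main diagonal:

```python
import numpy as np

def band_limits(A, tol=1e-12):
    """Return the (lower, upper) bandwidth of A."""
    i, j = np.indices(A.shape)
    nz = np.abs(A) > tol
    return int(np.max((i - j)[nz], initial=0)), int(np.max((j - i)[nz], initial=0))

rng = np.random.default_rng(0)
n = 6
A = np.diag(rng.normal(size=n)) + np.diag(rng.normal(size=n-1), -1) + np.diag(rng.normal(size=n-1), 1)
B = np.diag(rng.normal(size=n)) + np.diag(rng.normal(size=n-1), -1) + np.diag(rng.normal(size=n-1), 1)

print(band_limits(A + B))      # (1, 1): the sum stays tridiagonal
print(band_limits(2.5 * A))    # (1, 1): scalar multiples stay tridiagonal
print(band_limits(A @ B))      # (2, 2): the product is generally pentadiagonal
print(band_limits(A.T))        # (1, 1): the transpose is tridiagonal as well
print(np.allclose(np.diag(A.T, -1), np.diag(A, 1)))     # transpose swaps sub- and superdiagonal
print(np.isclose(np.trace(A), np.sum(np.diag(A))))      # trace = sum of diagonal entries
```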
Determinant Computation
The determinant of an n \times n tridiagonal matrix A_n, with main diagonal entries a_1, \dots, a_n, subdiagonal entries b_1, \dots, b_{n-1}, and superdiagonal entries c_1, \dots, c_{n-1}, satisfies the three-term recurrence relation \det(A_n) = a_n \det(A_{n-1}) - b_{n-1} c_{n-1} \det(A_{n-2}), with base cases \det(A_0) = 1 and \det(A_1) = a_1.[14] This relation arises from expanding the determinant along the last row and applying properties of determinants for bordered matrices, enabling computation in linear time complexity O(n).[15]
For the special case of a Toeplitz tridiagonal matrix with constant main diagonal a, constant subdiagonal b, and constant superdiagonal c, the recurrence becomes a linear homogeneous relation with constant coefficients, D_n = a D_{n-1} - bc D_{n-2}. The characteristic equation is r^2 - a r + bc = 0, with roots \lambda_1, \lambda_2. Assuming \lambda_1 \neq \lambda_2, the closed-form solution is D_n = \det(A_n) = \frac{\lambda_1^{n+1} - \lambda_2^{n+1}}{\lambda_1 - \lambda_2}. If the discriminant a^2 - 4bc < 0, the roots are complex, and the expression can be rewritten in trigonometric form, such as D_n = \frac{\sin((n+1)\theta)}{\sin \theta} where the diagonal entries are a = 2 \cos \theta and the off-diagonal entries are b = c = 1.[16] In cases of repeated roots, the form adjusts to D_n = (n+1) \lambda_1^n. For constant diagonals, the ratio D_n / D_{n-1} admits a continued fraction representation, a - bc / (a - bc / (a - \cdots )), which converges to the root of larger modulus for large n when |\lambda_1| \neq |\lambda_2|.[16]
The Gershgorin circle theorem provides bounds on the eigenvalues of a tridiagonal matrix, with each disk centered at a_i and radius |b_{i-1}| + |c_i| (or adjusted at boundaries); since the determinant equals the product of eigenvalues, these bounds indirectly constrain the possible magnitude of the determinant. However, direct recursive computation via the three-term relation can exhibit numerical instability for large n when |\lambda_1| > |\lambda_2|, due to amplification of roundoff errors in the recessive mode. The closed-form expression can suffer from cancellation when the roots are close, leading to loss of significant digits in floating-point arithmetic.
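The three-term recurrence gives a linear-time determinant evaluation. Below is a short NumPy sketch (the function name tridiag_det is our own) that runs the recurrence and, for the Toeplitz case a = 2 cos θ, b = c = 1, compares the result with the closed form sin((n+1)θ)/sin θ and with a dense determinant:

```python
import numpy as np

def tridiag_det(a, b, c):
    """Determinant via the recurrence det(A_n) = a_n det(A_{n-1}) - b_{n-1} c_{n-1} det(A_{n-2}).
    a: main diagonal (length n); b: subdiagonal, c: superdiagonal (each length n-1)."""
    d_prev2, d_prev1 = 1.0, a[0]          # det(A_0) = 1, det(A_1) = a_1
    for k in range(1, len(a)):
        d_prev2, d_prev1 = d_prev1, a[k] * d_prev1 - b[k-1] * c[k-1] * d_prev2
    return d_prev1

# Toeplitz case a = 2*cos(theta), b = c = 1: determinant equals sin((n+1)theta)/sin(theta).
n, theta = 8, 0.7
a = [2.0 * np.cos(theta)] * n
det_rec = tridiag_det(a, [1.0] * (n - 1), [1.0] * (n - 1))
det_closed = np.sin((n + 1) * theta) / np.sin(theta)
print(det_rec, det_closed)                 # the two values agree

A = np.diag(a) + np.diag(np.ones(n - 1), -1) + np.diag(np.ones(n - 1), 1)
print(np.linalg.det(A))                    # cross-check against a dense determinant
```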
Eigenvalues and Eigenvectors
The characteristic polynomial of a tridiagonal matrix T_n of order n, with diagonal entries a_1, \dots, a_n, subdiagonal entries b_1, \dots, b_{n-1}, and superdiagonal entries c_1, \dots, c_{n-1}, is defined as p_n(\lambda) = \det(T_n - \lambda I_n). This polynomial satisfies the three-term recurrence relation p_n(\lambda) = (a_n - \lambda) p_{n-1}(\lambda) - b_{n-1} c_{n-1} p_{n-2}(\lambda), with initial conditions p_0(\lambda) = 1 and p_1(\lambda) = a_1 - \lambda.[17] The eigenvalues of T_n are the roots of p_n(\lambda) = 0, and these roots inherit the recursive structure through the continued fraction representation or generating function solutions of the recurrence.[17]
For the special case of a symmetric tridiagonal Toeplitz matrix with constant diagonal entries a and constant off-diagonal entries b (i.e., a_i = a, b_i = c_i = b for all i), the eigenvalues admit an explicit closed-form expression: \lambda_k = a + 2b \cos\left( \frac{k \pi}{n+1} \right) for k = 1, 2, \dots, n.[6] The corresponding eigenvectors have components v_{j,k} = \sin\left( \frac{j k \pi}{n+1} \right) for j = 1, \dots, n.[6] Since the matrix is symmetric, its eigenvectors form an orthogonal basis for \mathbb{R}^n. This orthogonality follows from the spectral theorem for symmetric matrices and is preserved in the explicit sine form, where the inner product between distinct eigenvectors v_k and v_m vanishes due to the orthogonality of sine functions over the discrete grid.[18]
As n \to \infty, the eigenvalues \lambda_k of the symmetric tridiagonal Toeplitz matrix densely fill the interval [a - 2|b|, a + 2|b|], approximating the continuous spectrum of the associated infinite Toeplitz operator. The asymptotic distribution of the normalized eigenvalues (\lambda_k - a)/(2|b|) follows the arcsine law with density \frac{1}{\pi \sqrt{1 - x^2}} for x \in (-1, 1), reflecting the uniform spacing in the cosine argument.[6][19]
Perturbation theory for nearly tridiagonal matrices, such as those where small fill-in entries are introduced outside the tridiagonal band, shows that eigenvalues remain close to those of the original tridiagonal form if the perturbations are block-structured or localized. For a symmetric tridiagonal matrix perturbed by setting one off-diagonal entry to zero, the eigenvalue shifts are bounded by the square root of the perturbation magnitude, with explicit bounds derived from interlacing properties.[20] More generally, for Hermitian block tridiagonal matrices under blockwise perturbations, well-separated eigenvalues exhibit insensitivity, with perturbation bounds scaling as the norm of the off-block entries relative to eigenvalue gaps.[21]
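For the symmetric tridiagonal Toeplitz case, the closed-form eigenvalues and sine eigenvectors can be checked directly against a numerical eigensolver, as in the following sketch (the parameter values are arbitrary):

```python
import numpy as np

n, a, b = 10, 2.0, -1.0   # symmetric tridiagonal Toeplitz: a on the diagonal, b off-diagonal
A = np.diag([a] * n) + np.diag([b] * (n - 1), -1) + np.diag([b] * (n - 1), 1)

k = np.arange(1, n + 1)
lam_closed = a + 2.0 * b * np.cos(k * np.pi / (n + 1))   # lambda_k = a + 2b cos(k pi/(n+1))
lam_numeric = np.linalg.eigvalsh(A)                       # numerically computed spectrum
print(np.allclose(np.sort(lam_closed), lam_numeric))      # True

# Eigenvector check for one index k: components v_j = sin(j k pi/(n+1))
k0 = 3
v = np.sin(np.arange(1, n + 1) * k0 * np.pi / (n + 1))
print(np.allclose(A @ v, lam_closed[k0 - 1] * v))         # A v = lambda_k v
```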
Advanced Properties
Matrix Inversion
The inverse of a nonsingular tridiagonal matrix is generally a full (dense) matrix, but its entries possess explicit closed-form expressions that can be derived using the Green's function approach, which interprets the inverse as the discrete Green's function for the second-order linear difference equation associated with the matrix.[22] For a general n \times n tridiagonal matrix A with diagonal entries a_1, \dots, a_n, subdiagonal entries b_1, \dots, b_{n-1}, and superdiagonal entries c_1, \dots, c_{n-1}, the entries of the inverse A^{-1} are given by ratios involving the determinants of appropriate principal submatrices. Define the leading principal minors \Delta_k = \det(A[1:k, 1:k]) for k = 0, \dots, n, where \Delta_0 = 1 and \Delta_1 = a_1, satisfying the three-term recurrence \Delta_k = a_k \Delta_{k-1} - b_{k-1} c_{k-1} \Delta_{k-2} for k \geq 2. Similarly, define the trailing principal minors \Theta_k = \det(A[n-k+1:n, n-k+1:n]) for k = 0, \dots, n, where \Theta_0 = 1 and \Theta_1 = a_n, with the recurrence \Theta_k = a_{n-k+1} \Theta_{k-1} - b_{n-k+1} c_{n-k+1} \Theta_{k-2} for k \geq 2. Then, for i \leq j, (A^{-1})_{i,j} = (-1)^{i+j} \frac{\Delta_{i-1} \Theta_{n-j}}{\Delta_n} \prod_{k=i}^{j-1} c_k, and for i > j, (A^{-1})_{i,j} = (-1)^{i+j} \frac{\Delta_{j-1} \Theta_{n-i}}{\Delta_n} \prod_{k=j}^{i-1} b_k. In the symmetric case (b_k = c_k for all k), the lower-triangular entries are obtained by transposing the upper-triangular formula.[23][24]
The matrix A is invertible if and only if \Delta_n \neq 0, which ensures the existence of the expressions above; the stronger condition that all leading principal minors \Delta_k \neq 0 for k = 1, \dots, n guarantees in addition that Gaussian elimination without pivoting encounters no zero pivots.[23] In the special case of a symmetric positive definite tridiagonal matrix (where b_i = c_i > 0 for all i and all leading principal minors are positive), the entries of the inverse decay monotonically away from the main diagonal, with the rate of decay depending on the minimum eigenvalue and the off-diagonal magnitudes; specifically, for strictly diagonally dominant matrices, |(A^{-1})_{i,j}| \leq C \rho^{|i-j|} for some constants C > 0 and 0 < \rho < 1.[25] The leading and trailing minors can be computed in O(n) time via their respective recurrence relations, enabling the full inverse to be assembled in O(n^2) arithmetic operations by evaluating the products \prod_{k=i}^{j-1} c_k (which can be accelerated using prefix products).[26]
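A direct transcription of these formulas yields an O(n^2) inverse once the minors are available. The sketch below (the function name tridiag_inverse is our own, and it makes no attempt to guard against overflow in the minors or to reuse prefix products) computes the leading and trailing minors by their recurrences, assembles A^{-1}, and checks the result against numpy.linalg.inv on a small example:

```python
import numpy as np

def tridiag_inverse(a, b, c):
    """Closed-form inverse of a nonsingular tridiagonal matrix via leading/trailing minors.
    a: main diagonal a_1..a_n; b: subdiagonal b_1..b_{n-1}; c: superdiagonal c_1..c_{n-1}."""
    n = len(a)
    # Leading principal minors Delta_0..Delta_n
    D = np.zeros(n + 1)
    D[0], D[1] = 1.0, a[0]
    for k in range(2, n + 1):
        D[k] = a[k-1] * D[k-1] - b[k-2] * c[k-2] * D[k-2]
    # Trailing principal minors Theta_0..Theta_n
    T = np.zeros(n + 1)
    T[0], T[1] = 1.0, a[n-1]
    for k in range(2, n + 1):
        # Theta_k = a_{n-k+1} Theta_{k-1} - b_{n-k+1} c_{n-k+1} Theta_{k-2}
        T[k] = a[n-k] * T[k-1] - b[n-k] * c[n-k] * T[k-2]
    inv = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i <= j:
                prod = np.prod(c[i-1:j-1]) if j > i else 1.0   # product c_i ... c_{j-1}
                inv[i-1, j-1] = (-1) ** (i + j) * prod * D[i-1] * T[n-j] / D[n]
            else:
                prod = np.prod(b[j-1:i-1])                     # product b_j ... b_{i-1}
                inv[i-1, j-1] = (-1) ** (i + j) * prod * D[j-1] * T[n-i] / D[n]
    return inv

a = np.array([4.0, 5.0, 6.0, 7.0])   # diagonal
b = np.array([1.0, 2.0, 1.0])        # subdiagonal
c = np.array([3.0, 1.0, 2.0])        # superdiagonal
A = np.diag(a) + np.diag(b, -1) + np.diag(c, 1)
print(np.allclose(tridiag_inverse(a, b, c), np.linalg.inv(A)))  # True
```

For large n the minors themselves can overflow or underflow in floating point, so practical implementations typically work with scaled or logarithmic quantities rather than the raw recurrences.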
Similarity to Symmetric Forms
A similarity transformation preserves the eigenvalues of a matrix, as the transformed matrix P^{-1} A P shares the same spectrum as the original matrix A. This property holds because the characteristic polynomials of A and P^{-1} A P are identical, ensuring that the eigenvalues and their algebraic multiplicities remain unchanged under such transformations. For Hermitian matrices, unitary similarity transformations (orthogonal ones in the real symmetric case) can reduce the matrix to a symmetric tridiagonal form, which simplifies subsequent computations while preserving the spectrum. The Householder reduction method achieves this by applying a sequence of Householder reflections (orthogonal matrices that zero out the elements below the subdiagonal in successive columns, starting from the first). This process eliminates entries outside the tridiagonal band in a stable manner, requiring approximately \frac{4}{3}n^3 floating-point operations for an n \times n matrix. Alternatively, the Lanczos algorithm, originally proposed in 1950, performs tridiagonalization through an iterative process that generates an orthonormal basis via three-term recurrence relations, effectively projecting the matrix onto a Krylov subspace to yield a symmetric tridiagonal representation.[27]
For real nonsymmetric matrices, the standard similarity reduction for eigenvalue computations is to upper Hessenberg form using Householder reflections or Givens rotations, which eliminates elements below the subdiagonal. Reduction to tridiagonal form is possible but less common due to numerical stability issues and higher costs. In the QR algorithm, Hessenberg (or tridiagonal for symmetric cases) reduction accelerates iterations by banding the matrix.[28]
Post-reduction, the tridiagonal matrix may be unreduced, meaning all subdiagonal elements are nonzero, which implies it has no multiple eigenvalues and corresponds to an irreducible structure. In contrast, a block tridiagonal form arises if some subdiagonal elements are exactly zero, indicating the presence of invariant subspaces or multiple eigenvalues that partition the matrix into smaller tridiagonal blocks. This distinction is important for further analysis, as block forms allow divide-and-conquer strategies in eigenvalue solvers.[29]
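As an illustration of tridiagonalization by similarity, here is a minimal Lanczos-style sketch with full reorthogonalization (a simplification for clarity: it does not handle breakdown when an invariant subspace is encountered, and production codes would normally use Householder-based library routines instead):

```python
import numpy as np

def lanczos_tridiagonalize(A, v0):
    """Full Lanczos run on a symmetric matrix A: builds an orthonormal basis Q and a
    symmetric tridiagonal T with Q.T @ A @ Q = T. Full reorthogonalization is used for
    robustness; breakdown (a vanishing beta_k) is not handled in this sketch."""
    n = A.shape[0]
    Q = np.zeros((n, n))
    alpha = np.zeros(n)
    beta = np.zeros(n - 1)
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for k in range(n):
        w = A @ Q[:, k]
        alpha[k] = Q[:, k] @ w
        w = w - alpha[k] * Q[:, k]
        if k > 0:
            w = w - beta[k - 1] * Q[:, k - 1]
        w = w - Q[:, :k + 1] @ (Q[:, :k + 1].T @ w)   # reorthogonalize against all previous vectors
        if k < n - 1:
            beta[k] = np.linalg.norm(w)
            Q[:, k + 1] = w / beta[k]
    T = np.diag(alpha) + np.diag(beta, -1) + np.diag(beta, 1)
    return Q, T

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6))
A = (M + M.T) / 2                      # a random symmetric matrix
Q, T = lanczos_tridiagonalize(A, rng.normal(size=6))
print(np.allclose(Q.T @ A @ Q, T))     # similarity to a symmetric tridiagonal form
print(np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(A)))  # same spectrum
```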
Solution of Linear Systems
Solving a linear system Ax = b, where A is an n \times n tridiagonal matrix with subdiagonal entries b_i (for i=2,\dots,n), diagonal entries a_i (for i=1,\dots,n), and superdiagonal entries c_i (for i=1,\dots,n-1), and b \in \mathbb{R}^n, requires ensuring the existence and uniqueness of the solution. A sufficient condition for A to be nonsingular, and thus for the system to have a unique solution, is strict diagonal dominance, where |a_1| > |c_1|, |a_n| > |b_n|, and |a_i| > |b_i| + |c_i| for i=2,\dots,n-1.[30] This property guarantees invertibility for general matrices, including tridiagonal ones, via the Gershgorin circle theorem, as all eigenvalues lie within disks centered at the diagonal entries that do not include the origin.[30]
One theoretical approach to solving the system adapts Cramer's rule, which expresses each component x_j as the ratio of two determinants: the determinant of the matrix obtained by replacing the j-th column of A with b, divided by \det(A). For tridiagonal matrices, these determinants (and subdeterminants) can be computed efficiently using a three-term recurrence relation, reducing the cost from O(n!) for dense matrices to O(n^2) overall, as each of the n determinants requires O(n) operations.[31] However, this method remains inefficient compared to direct factorization techniques, as it does not exploit the structure for linear-time performance and can accumulate rounding errors in finite precision.[32]
More efficient theoretical resolution leverages factorization, such as LU decomposition, which for tridiagonal matrices can be performed in O(n) time due to the limited bandwidth.[33] The resulting lower and upper triangular factors then allow solution via forward substitution to solve Ly = b followed by backward substitution to solve Ux = y, each also requiring O(n) operations, yielding an overall complexity of O(n) for the system.[33] This approach underscores the structural advantage of tridiagonal systems over dense ones, where substitution alone costs O(n^2).
The well-posedness of the problem depends on the condition number \kappa(A), typically measured in the 2-norm as \kappa(A) = \|A\|_2 \|A^{-1}\|_2, which bounds the relative error amplification: \frac{\| \delta x \| / \|x\|}{\| \delta A \| / \|A\| + \| \delta b \| / \|b\|} \leq \kappa(A). For tridiagonal matrices, explicit O(n)-time algorithms exist to compute \kappa_1(A) or \kappa_\infty(A), exploiting the QR factorization or recurrence relations for norms of A and A^{-1}.[34] Well-posedness holds when \kappa(A) remains bounded independently of n, as in certain Toeplitz tridiagonal matrices where \kappa_\infty(A) = 3 regardless of dimension; otherwise, growing \kappa(A) (e.g., exponentially in some cases) indicates sensitivity to perturbations.[35][34]
Perturbation analysis reveals vulnerabilities in ill-conditioned tridiagonal systems, where small changes in entries or the right-hand side can produce large deviations in x. For instance, consider a symmetric tridiagonal M-matrix with diagonal entries 2 + \epsilon and off-diagonals -1, where small \epsilon > 0 (e.g., \epsilon = 0.009) yields \kappa(A) \approx 300 for n=50, amplifying relative errors by a factor of hundreds despite the matrix being diagonally dominant.
Wilkinson's seminal work on error analysis, developed for eigenvalue problems and extended to linear systems, highlights such cases, showing that Gaussian elimination on near-singular tridiagonals incurs small backward errors on the order of n \epsilon (times a modest growth factor), while forward errors can approach \kappa(A) times the backward error, emphasizing the need for careful numerical handling.[36]
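A small NumPy check of the conditioning behavior described above, using the symmetric tridiagonal M-matrix with diagonal 2 + ε and off-diagonals -1 (the ε and n values quoted in the text), is sketched below:

```python
import numpy as np

# Symmetric tridiagonal M-matrix with diagonal 2 + eps and off-diagonals -1.
n, eps = 50, 0.009
A = np.diag([2.0 + eps] * n) + np.diag([-1.0] * (n - 1), -1) + np.diag([-1.0] * (n - 1), 1)

kappa = np.linalg.cond(A)          # 2-norm condition number
print(kappa)                       # roughly 3e2, consistent with the kappa ~ 300 quoted above

# Amplification of a small right-hand-side perturbation is bounded by kappa.
rng = np.random.default_rng(0)
x_true = rng.normal(size=n)
b = A @ x_true
db = 1e-8 * rng.normal(size=n)
x_pert = np.linalg.solve(A, b + db)
amplification = (np.linalg.norm(x_pert - x_true) / np.linalg.norm(x_true)) / (
    np.linalg.norm(db) / np.linalg.norm(b))
print(amplification)               # observed amplification; never exceeds about kappa
```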
Numerical Algorithms
LU Decomposition
The LU decomposition of a tridiagonal matrix A with subdiagonal entries a_i (i=2,\dots,n), diagonal entries b_i (i=1,\dots,n), and superdiagonal entries c_i (i=1,\dots,n-1) factors A = LU, where L is a unit lower bidiagonal matrix with 1's on the main diagonal and subdiagonal entries l_i (i=2,\dots,n), and U is an upper bidiagonal matrix with diagonal entries u_i (i=1,\dots,n) and superdiagonal entries c_i (i=1,\dots,n-1).[37] The factors are computed using the following forward recurrence relations without pivoting: \begin{align*} u_1 &= b_1, \\ l_i &= \frac{a_i}{u_{i-1}}, \quad i = 2, \dots, n, \\ u_i &= b_i - l_i c_{i-1}, \quad i = 2, \dots, n. \end{align*} This process requires O(n) arithmetic operations and O(n) storage, as only the nonzero elements of L and U need to be retained.[37] No pivoting is required if A is strictly diagonally dominant, i.e., |b_1| > |c_1|, |b_i| > |a_i| + |c_i| for i=2,\dots,n-1, and |b_n| > |a_n|, which ensures all u_i \neq 0 and numerical stability in floating-point arithmetic.[38] Otherwise, the algorithm may encounter breakdown if some u_i = 0, necessitating pivoting strategies that disrupt the bidiagonal structure.[38] This factorization is closely related to the evaluation of continued fractions, where the recurrence relations mirror the iterative computation of convergents in a continued fraction expansion for solving tridiagonal systems.[39]
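The recurrences above translate into a few lines of code. The following NumPy sketch (the function name tridiag_lu is our own; it assumes no zero pivots are encountered) computes the bidiagonal factors and verifies A = LU on a small example:

```python
import numpy as np

def tridiag_lu(a, b, c):
    """LU factors of a tridiagonal matrix without pivoting.
    a: subdiagonal a_2..a_n, b: diagonal b_1..b_n, c: superdiagonal c_1..c_{n-1}.
    Returns l (subdiagonal of the unit lower bidiagonal L) and u (diagonal of the
    upper bidiagonal U); the superdiagonal of U is c itself."""
    n = len(b)
    u = np.zeros(n)
    l = np.zeros(n - 1)
    u[0] = b[0]
    for i in range(1, n):
        l[i-1] = a[i-1] / u[i-1]          # l_i = a_i / u_{i-1}
        u[i] = b[i] - l[i-1] * c[i-1]     # u_i = b_i - l_i c_{i-1}
    return l, u

b = np.array([4.0, 5.0, 6.0, 7.0])    # diagonal
a = np.array([1.0, 2.0, 1.0])         # subdiagonal (a_2, a_3, a_4)
c = np.array([3.0, 1.0, 2.0])         # superdiagonal
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)

l, u = tridiag_lu(a, b, c)
L = np.eye(4) + np.diag(l, -1)
U = np.diag(u) + np.diag(c, 1)
print(np.allclose(L @ U, A))          # True: A = LU with bidiagonal factors
```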
Thomas Algorithm
The Thomas algorithm, also known as the tridiagonal matrix algorithm (TDMA), is a direct method for solving a system of linear equations Ax = d, where A is an n \times n tridiagonal matrix with subdiagonal elements a_i (for i = 2, \dots, n), diagonal elements b_i (for i = 1, \dots, n), and superdiagonal elements c_i (for i = 1, \dots, n-1), and d is the right-hand side vector.[40] Named after physicist Llewellyn Thomas, who introduced it in 1949, the algorithm exploits the banded structure of A to perform Gaussian elimination without fill-in, reducing computational complexity from O(n^3) for general dense matrices to O(n).[41] It consists of a forward elimination phase that transforms the system into an upper triangular form and a subsequent back-substitution phase to recover the solution x.[40]
In the forward elimination phase, the algorithm first computes the diagonal elements u_i of the upper triangular factor U in the LU decomposition of A (with unit diagonal in L), followed by updating the right-hand side to a modified vector z. Specifically, set u_1 = b_1 and z_1 = d_1 / u_1; then, for i = 2 to n, compute the multiplier w_i = a_i / u_{i-1}, u_i = b_i - w_i c_{i-1}, and z_i = (d_i - a_i z_{i-1}) / u_i.[40] This phase performs the forward substitution for L while constructing U, scaling by the pivots u_i as it proceeds, and avoids storing the full L or U matrices beyond the necessary scalars. The back-substitution phase then recovers the solution by setting x_n = z_n and, for i = n-1 down to 1, x_i = z_i - (c_i / u_i) x_{i+1}.[40] The following pseudocode outlines the algorithm:

```
# Forward elimination
u[1] = b[1]
z[1] = d[1] / u[1]
for i = 2 to n:
    w = a[i] / u[i-1]
    u[i] = b[i] - w * c[i-1]
    z[i] = (d[i] - a[i] * z[i-1]) / u[i]

# Back-substitution
x[n] = z[n]
for i = n-1 downto 1:
    x[i] = z[i] - (c[i] / u[i]) * x[i+1]
```

This approach requires on the order of 8n floating-point operations (8n - 7 in the classical operation count), confirming the O(n) complexity.[40] The algorithm is numerically stable when the tridiagonal matrix is strictly diagonally dominant, i.e., |b_i| > |a_i| + |c_i| for all i, or when it is symmetric positive definite, as these conditions prevent growth in rounding errors during elimination.[40] However, it can be sensitive to rounding errors in non-diagonally dominant cases, potentially leading to instability without pivoting, though pivoting introduces fill-in and reduces the sparsity advantage.[42] Due to its sequential dependencies (each step relies on the previous computation), the Thomas algorithm poses challenges for parallelization on multi-core or distributed systems when solving a single tridiagonal system, often requiring domain decomposition or alternative methods like cyclic reduction for effective parallelism.[42] In contrast to general Gaussian elimination, which demands O(n^3) operations when the banded structure is not exploited, the Thomas algorithm's efficiency makes it ideal for large-scale tridiagonal systems arising in numerical simulations.[40]