
Numerical stability

Numerical stability is a fundamental concept in numerical analysis that evaluates the robustness of algorithms to perturbations, ensuring that small errors, such as those from rounding in floating-point arithmetic, do not propagate to produce large inaccuracies in the computed results. It distinguishes between the inherent conditioning of a problem (its sensitivity to perturbations in the data) and the algorithm's ability to handle computational errors without amplifying them. This property is crucial for reliable computations in fields like scientific computing, simulations, and engineering, where finite precision arithmetic inevitably introduces errors on the order of machine epsilon, typically 10^{-16} for double precision.

There are two primary notions of stability: forward stability and backward stability. An algorithm is forward stable if its output is close to the exact solution of the original problem, meaning the forward error bound is small, often on the order of the unit roundoff times the condition number of the problem. In contrast, backward stability requires that the computed result is the exact solution to a slightly perturbed version of the input problem, where the perturbation is also bounded by a modest multiple of the unit roundoff; this is often easier to analyze and proves more attainable for many algorithms. For example, Gaussian elimination with partial pivoting for solving linear systems is backward stable, ensuring the computed solution satisfies a nearby system exactly.

In the context of solving differential equations, stability takes on an additional meaning related to time-stepping methods, where the numerical scheme must not amplify high-frequency error modes or allow spurious growth in solutions. Techniques like von Neumann stability analysis assess this by examining amplification factors for Fourier modes, requiring them to be bounded by 1 to prevent exponential error growth.

Foundational work on numerical stability, particularly backward error analysis, was advanced by J. H. Wilkinson in the 1960s, whose contributions to error analysis in algebraic processes earned him the Turing Award in 1970 and remain central to modern numerical methods. Subsequent texts, such as Nicholas J. Higham's Accuracy and Stability of Numerical Algorithms (2002), provide comprehensive frameworks for analyzing and improving stability across diverse algorithms.

Fundamental Concepts

Definition of Numerical Stability

Numerical stability refers to the property of a numerical algorithm that ensures small perturbations in the input data or small errors introduced during computation, such as rounding errors, result in only small changes in the computed output. This robustness is essential because computers perform calculations with finite precision, inevitably introducing approximations that could otherwise propagate and amplify.

A problem is well-posed if it satisfies three criteria: a solution exists for every input, the solution is unique, and the solution varies continuously with respect to the input data, meaning small changes in the input lead to small changes in the solution. Conversely, an ill-posed problem fails at least one of these criteria, often exhibiting extreme sensitivity to perturbations, which can render numerical solutions unreliable even for stable algorithms. Numerical stability thus complements well-posedness by addressing how discretely implemented algorithms behave under the constraints of computational arithmetic.

The concept of numerical stability originated in the 1940s, during the early development of electronic computers, when John von Neumann and collaborators began systematically analyzing error propagation in algorithms for scientific computing, particularly in finite difference methods for differential equations. This work laid the groundwork for modern error analysis in numerical methods.

Stability is fundamentally analyzed through perturbation and backward error analysis, which examines how small changes in the problem, such as perturbed inputs, affect the solution; an algorithm is considered stable if its computed result is close to the exact solution of a slightly perturbed but nearby problem. The sensitivity of this process is often influenced by the condition number of the problem, a measure of how much errors can be amplified (detailed in subsequent sections).

Forward and Backward Stability

In numerical analysis, forward stability refers to the property of an algorithm that computes an approximate solution \hat{y} to the problem y = f(x) such that the forward error \|\hat{y} - y\| / \|y\| is small, typically on the order of the condition number of the problem times the unit roundoff u. This measures how closely the output matches the exact solution of the original problem, despite rounding errors introduced during computation. Forward stability focuses directly on the accuracy of the result relative to the true solution, making it a direct indicator of practical reliability.

Backward stability, in contrast, assesses an algorithm by considering whether the computed \hat{y} is the exact solution to a nearby problem, specifically \hat{y} = f(x + \delta x) where the relative perturbation \|\delta x\| / \|x\| is small, often bounded by a modest multiple cu of the unit roundoff. This backward error \eta = \min \{ \|\delta x\| / \|x\| : \hat{y} = f(x + \delta x) \} quantifies how much the input must be perturbed to exactly produce the computed output, providing insight into the algorithm's robustness against data perturbations. Backward stability is often easier to establish than forward stability because it reformulates errors as input perturbations rather than output inaccuracies.

A classic example illustrating the distinction is the Gram-Schmidt process for computing a QR factorization. The classical Gram-Schmidt algorithm is unstable in finite precision arithmetic, as it can lead to significant loss of orthogonality due to rounding error accumulation, failing to produce a small backward error. In contrast, the modified Gram-Schmidt algorithm is backward stable, ensuring that the computed factorization satisfies the exact conditions for a slightly perturbed input matrix.

For well-conditioned problems, where the condition number is modest, backward stability guarantees forward stability, as the small input perturbation translates to a small output error via the problem's inherent sensitivity. This implication makes backward stability a desirable and often sufficient criterion for ensuring overall algorithmic reliability in practice.
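The contrast between the two Gram-Schmidt variants is easy to observe experimentally. The following sketch (using NumPy; the matrix dimensions and the roughly 10^{10} condition number are illustrative choices, not part of the standard presentation) orthogonalizes the columns of an ill-conditioned matrix with both variants and measures the departure from orthogonality \|Q^T Q - I\|; the classical variant typically loses far more orthogonality.

    import numpy as np

    def classical_gram_schmidt(A):
        """Each column is orthogonalized against the previously computed q's
        all at once, using the original column (prone to loss of orthogonality)."""
        m, n = A.shape
        Q = np.zeros((m, n))
        for j in range(n):
            v = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
            Q[:, j] = v / np.linalg.norm(v)
        return Q

    def modified_gram_schmidt(A):
        """Projections are subtracted one at a time from a running vector,
        which behaves much better in floating-point arithmetic."""
        m, n = A.shape
        Q = A.astype(float).copy()
        for j in range(n):
            Q[:, j] /= np.linalg.norm(Q[:, j])
            for k in range(j + 1, n):
                Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
        return Q

    rng = np.random.default_rng(0)
    # Build an ill-conditioned 50 x 10 matrix via an SVD-like construction
    # with singular values spread over ten orders of magnitude.
    U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
    V, _ = np.linalg.qr(rng.standard_normal((10, 10)))
    S = np.diag(np.logspace(0, -10, 10))
    A = U[:, :10] @ S @ V.T

    for name, gs in [("classical", classical_gram_schmidt),
                     ("modified", modified_gram_schmidt)]:
        Q = gs(A)
        loss = np.linalg.norm(Q.T @ Q - np.eye(10))
        print(f"{name:>9} Gram-Schmidt: ||Q^T Q - I|| = {loss:.2e}")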

Role of Condition Number

The condition number of a matrix A in the solution of the linear system Ax = b is defined as \kappa(A) = \|A\| \cdot \|A^{-1}\|, where \|\cdot\| denotes a matrix norm consistent with a vector norm. This measure quantifies the sensitivity of the solution to perturbations in the input data. A condition number \kappa(A) \approx 1 indicates that the problem is well-conditioned, meaning small relative changes in A or b lead to comparably small relative changes in x, promoting numerical stability. Conversely, a large \kappa(A) signifies an ill-conditioned problem, where errors in the input can be amplified dramatically in the output, leading to potential instability even with precise computations.

The condition number provides an upper bound on error growth: the relative error in the computed solution satisfies \frac{\|\delta x\|}{\|x\|} \leq \kappa(A) \left( \frac{\|\delta A\|}{\|A\|} + \frac{\|\delta b\|}{\|b\|} \right), to first order, assuming the perturbation is small enough that \kappa(A) \frac{\|\delta A\|}{\|A\|} < 1. This bound shows that the maximum amplification of relative input errors by the problem itself is governed by \kappa(A).

Condition numbers can be computed using the singular value decomposition (SVD) of A = U \Sigma V^T, where for the 2-norm, \kappa_2(A) = \sigma_{\max}/\sigma_{\min}, with \sigma_{\max} and \sigma_{\min} being the largest and smallest singular values, respectively. Other norms, such as the 1-norm or \infty-norm, can be estimated via direct computation of the relevant matrix norms, though the SVD-based approach is particularly reliable for the 2-norm.

Even algorithms that are backward stable, meaning they produce solutions close to those of slightly perturbed inputs, can yield large forward errors if the condition number is high, as the problem's inherent sensitivity dominates. Thus, numerical stability requires both a well-conditioned problem and a stable algorithm.
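As a concrete illustration (a minimal NumPy sketch; the Hilbert matrix and the size of the right-hand-side perturbation are illustrative choices), the 2-norm condition number can be read off the singular values and compared with the observed error amplification when only b is perturbed:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 8
    # The Hilbert matrix is a standard example of an ill-conditioned matrix.
    A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    x_true = np.ones(n)
    b = A @ x_true

    # 2-norm condition number from the singular values.
    sigma = np.linalg.svd(A, compute_uv=False)
    kappa2 = sigma[0] / sigma[-1]

    # Perturb b slightly (dA = 0) and compare the relative error in x with
    # the bound kappa(A) * ||db|| / ||b||.
    db = 1e-10 * rng.standard_normal(n)
    x_pert = np.linalg.solve(A, b + db)

    rel_err = np.linalg.norm(x_pert - x_true) / np.linalg.norm(x_true)
    bound = kappa2 * np.linalg.norm(db) / np.linalg.norm(b)
    print(f"kappa_2(A)        = {kappa2:.2e}")
    print(f"relative error    = {rel_err:.2e}")
    print(f"kappa-based bound = {bound:.2e}")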

Stability in Numerical Linear Algebra

Stability of Direct Solvers

Direct methods for solving linear systems Ax = b, such as Gaussian elimination, aim to compute an exact factorization or decomposition of the matrix A, but floating-point arithmetic introduces rounding errors that can propagate and amplify during the process. Without pivoting, Gaussian elimination is prone to severe numerical instability, where small perturbations in the input can lead to exponential growth in errors. A classic example involves an n \times n matrix with elements of order 1 for which the elements in the upper triangular factor U grow exponentially with n, resulting in a growth factor \rho(A) that can reach 2^{n-1} or worse, rendering the computed solution useless for even moderate n on machines with limited precision.

To mitigate this instability, partial pivoting is employed during elimination, where at each step the row with the largest absolute value in the pivot column is selected and swapped to the pivot position. This strategy bounds the growth factor \rho(A) = \max |u_{ij}| / \max |a_{ij}| by 2^{n-1} for an n \times n matrix, ensuring that the elements in the factors do not grow uncontrollably in the worst case. Although this bound is exponential, in practice for random or typical matrices \rho(A) remains small (often O(1) or O(n)), making partial pivoting reliable for most applications.

The LU decomposition with partial pivoting, which factorizes a permuted matrix PA = LU where P is a permutation matrix, L is unit lower triangular, and U is upper triangular, is backward stable. Specifically, the computed factors satisfy PA + \Delta A = \hat{L}\hat{U} with |\Delta A| \leq O(n) \epsilon |A|, where \epsilon is the machine precision, leading to a relative forward error in the solution bounded by O(n) \epsilon \kappa(A), with \kappa(A) the condition number amplifying the inherent sensitivity of the problem. This stability holds under the assumption of bounded growth, which partial pivoting provides empirically for typical matrices.

For symmetric positive definite matrices, the Cholesky decomposition A = R^T R (with R upper triangular) requires no pivoting and is backward stable, producing a relative backward error bounded by O(n) \epsilon, independent of \kappa(A) for the factorization itself, though the forward error in the solution still depends on the condition number. Overall, direct solvers with appropriate pivoting strategies achieve backward stability for dense linear systems, ensuring computed solutions are accurate to nearly machine precision times the condition number, but their cubic O(n^3) time complexity limits applicability to large-scale problems.
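A small experiment (using NumPy and SciPy's lu routine; the matrix sizes are illustrative choices) reproduces both behaviors described above: the classic worst-case matrix attains the 2^{n-1} growth factor under partial pivoting, while a typical random matrix exhibits modest growth and a factorization residual near machine precision.

    import numpy as np
    from scipy.linalg import lu

    def growth_matrix(n):
        """Classic worst-case matrix for partial pivoting: 1 on the diagonal,
        -1 below it, 1 in the last column; its growth factor is 2^(n-1)."""
        A = np.eye(n)
        A[np.tril_indices(n, -1)] = -1.0
        A[:, -1] = 1.0
        return A

    n = 20
    A = growth_matrix(n)
    P, L, U = lu(A)                       # SciPy returns A = P @ L @ U
    growth = np.abs(U).max() / np.abs(A).max()
    print(f"worst-case growth factor     = {growth:.0f}  (2^(n-1) = {2**(n-1)})")

    # For a typical (random) matrix the growth factor stays small and the
    # factorization residual is close to machine precision.
    rng = np.random.default_rng(2)
    B = rng.standard_normal((200, 200))
    P, L, U = lu(B)
    growth_B = np.abs(U).max() / np.abs(B).max()
    backward = np.linalg.norm(P @ L @ U - B) / np.linalg.norm(B)
    print(f"random-matrix growth factor  = {growth_B:.1f}")
    print(f"relative factorization error = {backward:.2e}")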

Stability of Iterative Methods

Iterative methods approximate solutions to linear systems Ax = b through repeated applications of an update rule, typically of the form x_{k+1} = G x_k + c, where the iteration matrix G determines convergence via its spectral radius \rho(G) < 1. Stability in these methods encompasses both convergence reliability and robustness to perturbations in A, b, or finite-precision arithmetic, as errors can accumulate and amplify if the process diverges or stagnates. A central challenge is that small perturbations may degrade the effective condition number \kappa(A), slowing convergence or inducing divergence, particularly in ill-conditioned systems where iterative refinement becomes essential.

The Jacobi and Gauss-Seidel methods are classical stationary iterations for sparse systems. In the Jacobi method, the iteration matrix is T_J = I - D^{-1}A, where D is the diagonal part of A, and convergence requires \rho(T_J) < 1, which is guaranteed for strictly diagonally dominant matrices. The Gauss-Seidel method uses the splitting A = D + L + U, with L and U the strictly lower and upper triangular parts, yielding T_{GS} = -(D + L)^{-1} U = (D + L)^{-1} (D + L - A), and also converges under diagonal dominance, often faster than Jacobi since \rho(T_{GS}) \leq [\rho(T_J)]^2 for important classes of matrices such as consistently ordered ones. Both methods are sensitive to perturbations that violate dominance, potentially pushing \rho above 1 and causing divergence, though they remain stable for well-conditioned, positive definite matrices.

Krylov subspace methods like the conjugate gradient (CG) and generalized minimal residual (GMRES) methods offer improved efficiency for large systems. The CG method, designed for symmetric positive definite A, is well behaved in exact arithmetic, with the A-norm of the error after m iterations reduced by a factor of at least 2 \left( \frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1} \right)^m. However, finite-precision effects can cause loss of orthogonality in the Krylov basis, leading to breakdown where the algorithm halts prematurely or loses accuracy. For non-symmetric systems, GMRES minimizes the residual norm over the Krylov subspace and is stable in exact arithmetic, with convergence governed by the spectrum and non-normality of A rather than by \kappa(A) alone. Restarting GMRES after m steps controls memory but can introduce instability by discarding subspace information, exacerbating sensitivity to \kappa(A).

A primary strategy to enhance stability across iterative methods is preconditioning, which transforms the system to \tilde{A} = M^{-1} A (left) or A M^{-1} (right), reducing the effective \kappa(\tilde{A}) and mitigating perturbation effects that worsen conditioning. For instance, incomplete LU or incomplete Cholesky preconditioners can cluster eigenvalues, accelerating convergence and improving robustness without altering the solution. Without such measures, perturbations in ill-conditioned problems can amplify errors, rendering iterations ineffective even if theoretically convergent.
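The stationary iterations above are simple to state in code. The sketch below (NumPy; the strictly diagonally dominant test matrix is an illustrative construction) implements the Jacobi update x_{k+1} = x_k + D^{-1}(b - A x_k), verifies that \rho(T_J) < 1, and checks the final residual.

    import numpy as np

    def jacobi(A, b, x0, iters):
        """Jacobi iteration: x_{k+1} = x_k + D^{-1} (b - A x_k)."""
        d = np.diag(A)
        x = x0.copy()
        for _ in range(iters):
            x = x + (b - A @ x) / d
        return x

    rng = np.random.default_rng(3)
    n = 100
    A = rng.standard_normal((n, n))
    np.fill_diagonal(A, 0.0)
    # Make A strictly diagonally dominant, which guarantees rho(T_J) < 1.
    np.fill_diagonal(A, np.abs(A).sum(axis=1) + 1.0)
    b = rng.standard_normal(n)

    # Spectral radius of the Jacobi iteration matrix T_J = I - D^{-1} A.
    T_J = np.eye(n) - A / np.diag(A)[:, None]
    rho = max(abs(np.linalg.eigvals(T_J)))
    print(f"rho(T_J) = {rho:.3f}   (< 1 guarantees convergence)")

    x = jacobi(A, b, np.zeros(n), iters=200)
    print(f"final residual ||b - Ax|| = {np.linalg.norm(b - A @ x):.2e}")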

Stability in Eigenvalue Computations

Computing the eigenvalues and eigenvectors of a matrix is a fundamental task in numerical linear algebra, but stability concerns arise due to rounding errors and the inherent sensitivity of spectral information to perturbations. The QR algorithm, a cornerstone method for this purpose, is backward stable for eigenvalue computations, meaning the computed eigenvalues are exact for a slightly perturbed input matrix with perturbation bounded by a modest multiple of machine epsilon times the matrix norm. However, the accuracy of computed eigenvectors can degrade significantly when eigenvalues are close or multiple, as small perturbations may cause mixing within degenerate eigenspaces, amplifying errors in the eigenvector directions.

A key preprocessing step in the QR algorithm is the reduction to upper Hessenberg form using Householder transformations, which is backward stable with relative perturbation on the order of machine epsilon. This reduction preserves the eigenvalues and simplifies subsequent iterations without introducing instability. The overall QR process, including shifts for acceleration, maintains this backward stability for the spectrum, but eigenvector computation relies on the problem's conditioning.

The Bauer-Fike theorem provides a fundamental perturbation bound for eigenvalues of diagonalizable matrices: if A = V \Lambda V^{-1} and B = A + E has eigenvalue \mu, then there exists an eigenvalue \lambda of A such that |\lambda - \mu| \leq \kappa(V) \|E\|, where \kappa(V) = \|V\| \cdot \|V^{-1}\| is the condition number of the eigenvector matrix V, and the norm is subordinate to the vector norm used. This bound quantifies how eigenvalue errors scale with matrix perturbations, modulated by \kappa(V). For normal matrices, which are diagonalized by unitary matrices, \kappa(V) = 1, ensuring eigenvalues are well-conditioned and stable under perturbation; in contrast, non-normal matrices can have large \kappa(V), leading to error amplification.

Pseudospectra offer insight into instability for non-normal matrices, defined as the set of points z where \|(zI - A)^{-1}\| > 1/\epsilon for small \epsilon > 0, representing approximate eigenvalues under perturbations of size \epsilon. These regions extend beyond the true spectrum, explaining why non-normal matrices exhibit transient growth or sensitivity despite well-separated eigenvalues, as perturbations can shift the spectrum substantially within the pseudospectral contour. This phenomenon underscores the challenges in eigenvector stability for such matrices, where invariant subspaces may be poorly conditioned even if individual eigenvalues appear stable. The eigenvector condition number extends the scalar case, measuring sensitivity via angles between perturbed and exact subspaces, often large when eigenvalues cluster.
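The role of \kappa(V) in the Bauer-Fike bound can be seen numerically. In the sketch below (NumPy; the matrices and the perturbation size 10^{-8} are illustrative, and the second example is a defective Jordan-type block, so \kappa(V) reported for it is effectively infinite and the bound is vacuous), a random perturbation barely moves the eigenvalues of a symmetric (normal) matrix but displaces those of the non-normal matrix by many orders of magnitude more than \|E\|.

    import numpy as np

    rng = np.random.default_rng(4)

    def eigenvalue_shift(A, eps):
        """Largest distance from a perturbed eigenvalue to the nearest exact one."""
        E = eps * rng.standard_normal(A.shape)
        mu = np.linalg.eigvals(A + E)
        lam = np.linalg.eigvals(A)
        shift = max(min(abs(m - l) for l in lam) for m in mu)
        return shift, np.linalg.norm(E, 2)

    # Normal (symmetric) matrix: kappa(V) = 1, eigenvalues move by at most ||E||.
    S = rng.standard_normal((6, 6))
    S = S + S.T

    # Non-normal, defective matrix: a 6x6 Jordan-type block with eigenvalue 1.
    J = np.eye(6) + np.diag(np.ones(5), 1)

    for name, A in [("normal", S), ("non-normal", J)]:
        lam, V = np.linalg.eig(A)
        kappa_V = np.linalg.cond(V)
        shift, normE = eigenvalue_shift(A, 1e-8)
        print(f"{name:>11}: kappa(V) = {kappa_V:.1e}, ||E|| = {normE:.1e}, "
              f"max eigenvalue shift = {shift:.1e}")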

Stability in Numerical Differential Equations

Stability for Ordinary Differential Equations

Absolute stability in the numerical solution of ordinary differential equations (ODEs) refers to the behavior of a numerical method when applied to the scalar test equation y' = \lambda y with \Re(\lambda) < 0, where the exact solution decays to zero as t \to \infty. For a method with step size \Delta t, the numerical solution remains bounded (and ideally decays) if \Delta t \lambda lies within the method's stability region in the complex plane. This framework, developed by Germund Dahlquist in the early 1960s, assesses whether the method mimics the asymptotic stability of the exact solution for linear problems with negative real eigenvalues.

The stability region is the set of complex values z = \Delta t \lambda for which the method's stability function satisfies |R(z)| \leq 1. For the explicit Euler method, R(z) = 1 + z, yielding a stability region that is the closed disk |1 + z| \leq 1, centered at -1 with radius 1 and thus confined on the real axis to -2 \leq \Re(z) \leq 0. In contrast, the implicit (backward) Euler method has R(z) = \frac{1}{1 - z}, with a stability region encompassing the entire left half-plane \Re(z) < 0, making it unconditionally stable for such test problems. These regions highlight how explicit methods impose restrictive step-size limits to avoid instability, while implicit methods do not.

Dahlquist's second barrier, established in 1963, proves that no A-stable linear multistep method (one whose stability region includes the entire left half-plane) can have order greater than 2; the trapezoidal rule achieves this bound. For explicit methods, stability regions are inherently bounded because their stability functions are polynomials, precluding A-stability even at order 1, whereas implicit methods like backward Euler attain it. The convergence and stability theory of linear multistep methods for general ODEs was pioneered by Dahlquist in 1956.

Stiff ODEs, characterized by widely varying eigenvalues with large negative real parts, were first identified by Curtiss and Hirschfelder in 1952; explicit methods require prohibitively small steps to remain stable, leading to inefficiency or oscillations. Implicit A-stable methods mitigate this but may damp high-frequency components slowly; L-stability strengthens A-stability by requiring \lim_{z \to -\infty} |R(z)| = 0, ensuring rapid damping for large |z| and suitability for stiff solvers like the backward differentiation formulas.
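These stability regions translate directly into step-size behavior. The following sketch (plain Python; the choice \lambda = -50 and the two step sizes are illustrative) integrates the test equation y' = \lambda y to t = 1 with explicit and backward Euler. When \Delta t \lambda leaves the disk |1 + z| \leq 1 the explicit solution grows, while backward Euler decays for any step size.

    lam = -50.0          # test equation y' = lam * y; the exact solution decays
    y0 = 1.0

    def explicit_euler(dt, steps):
        y = y0
        for _ in range(steps):
            y = y + dt * lam * y            # amplification factor R(z) = 1 + z
        return y

    def backward_euler(dt, steps):
        y = y0
        for _ in range(steps):
            y = y / (1.0 - dt * lam)        # amplification factor R(z) = 1/(1 - z)
        return y

    # Stability limit for explicit Euler on this problem: dt <= 2/|lam| = 0.04.
    for dt in (0.01, 0.05):
        steps = int(round(1.0 / dt))
        z = dt * lam
        print(f"dt = {dt}: z = {z:+.1f}, |1+z| = {abs(1 + z):.2f}, "
              f"explicit Euler y(1) ~ {explicit_euler(dt, steps):.3e}, "
              f"backward Euler y(1) ~ {backward_euler(dt, steps):.3e}")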

Stability for Partial Differential Equations

Numerical stability for partial differential equations (PDEs) extends the concepts from ordinary differential equations by incorporating spatial discretization, where errors can propagate both temporally and spatially, often leading to wave-like instabilities if not controlled. Finite difference schemes for PDEs must ensure that perturbations in initial or boundary data do not grow unboundedly over time and space. A key tool for assessing this is von Neumann stability analysis, which applies to linear PDEs with constant coefficients by decomposing solutions into Fourier modes and examining the amplification factor g for each mode; stability requires |g| \leq 1 for all wavenumbers, a condition that is necessary but not in general sufficient for overall scheme stability.

For hyperbolic PDEs, such as the advection equation \partial_t u + c \partial_x u = 0, explicit finite difference schemes are prone to instability unless the time step \Delta t and spatial step \Delta x satisfy the Courant-Friedrichs-Lewy (CFL) condition \Delta t \leq \frac{\Delta x}{|c|}, which ensures that information does not propagate faster than the numerical domain of dependence allows. Violating this condition produces amplification factors exceeding unity for high-frequency modes, causing oscillatory blow-up. The forward-time, centered-space scheme for advection is unconditionally unstable under von Neumann analysis, as its amplification factor exceeds unity for nearly all wavenumbers regardless of the step sizes, so it typically requires artificial viscosity to stabilize. Upwind schemes address these issues for advection-dominated problems by biasing the spatial differencing in the direction of wave propagation, yielding stable explicit methods when the CFL condition holds; for instance, the first-order upwind scheme has |g| \leq 1 for 0 < c \Delta t / \Delta x \leq 1.

For parabolic PDEs like the heat equation \partial_t u = \kappa \partial_{xx} u, implicit methods such as the Crank-Nicolson scheme are unconditionally stable, with amplification factors satisfying |g| \leq 1 for all \Delta t > 0, allowing larger time steps at the cost of solving linear systems per step; however, Crank-Nicolson with large time steps can produce non-physical oscillations despite its formal unconditional stability, because it damps high-frequency modes only weakly. The Lax equivalence theorem establishes that, for a well-posed linear initial value problem, a consistent finite difference scheme converges to the true solution if and only if it is stable, linking these notions rigorously.
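The effect of the CFL condition on the first-order upwind scheme can be demonstrated in a few lines (NumPy; the periodic grid, Gaussian initial bump, and step count are illustrative choices): for Courant numbers at or below 1 the solution stays bounded, while a value above 1 lets high-frequency error modes grow without bound.

    import numpy as np

    def upwind_advection(cfl, nx=200, nt=400):
        """First-order upwind scheme for u_t + c u_x = 0 (c > 0) on a periodic grid.
        Stable for 0 < cfl = c*dt/dx <= 1; unstable otherwise."""
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.exp(-200.0 * (x - 0.5) ** 2)          # smooth initial bump, max = 1
        for _ in range(nt):
            u = u - cfl * (u - np.roll(u, 1))        # upwind difference for c > 0
        return np.abs(u).max()

    for cfl in (0.5, 1.0, 1.2):
        print(f"CFL = {cfl:3.1f}: max|u| after 400 steps = {upwind_advection(cfl):.3e}")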

Illustrative Examples

Rounding Error Propagation in Arithmetic

In floating-point arithmetic, rounding errors arise whenever a real number cannot be exactly represented or when the result of an operation exceeds the precision limits of the format. A classic illustration of how these errors can propagate and lead to loss of accuracy is subtractive cancellation, where the subtraction of two nearly equal numbers amplifies relative errors in the result. Consider computing (1 + \epsilon) - 1, where \epsilon is a small positive value smaller than the machine epsilon \epsilon_m. In exact arithmetic the result is \epsilon, but in floating-point, if \epsilon < \epsilon_m / 2, then 1 + \epsilon rounds to 1, yielding (1 + \epsilon) - 1 = 0. This demonstrates catastrophic cancellation, where significant digits are lost and the computed result carries no accurate information about \epsilon.

A more complex example of rounding error propagation occurs in the evaluation of Wilkinson's polynomial, p(x) = \prod_{i=1}^{20} (x - r_i), the monic polynomial of degree 20 with roots r_i = i at the consecutive integers 1 through 20. In exact arithmetic the coefficients are integers, but even a tiny perturbation in a single coefficient, on the order of the unit roundoff, can cause the roots to deviate dramatically, with some moving by factors of 10 or more due to the ill-conditioned nature of the root-finding problem. This forward instability highlights how rounding errors during coefficient formation or polynomial evaluation can accumulate, leading to grossly inaccurate roots despite the polynomial appearing well-behaved. Wilkinson's analysis showed that such perturbations, typical in floating-point representation of the coefficients, result in root sensitivities that grow rapidly with the degree, emphasizing the need for evaluation methods like Horner's scheme to mitigate, but not eliminate, the issue.

Error propagation is particularly evident in summation, where naive recursive addition of n floating-point numbers can accumulate errors linearly: in the worst case, the relative error in the sum grows as O(n \epsilon_m), where \epsilon_m \approx 2^{-52} is the machine epsilon for double-precision arithmetic, representing the relative error bound for operations near 1. For double precision, with a 52-bit fraction, \epsilon_m = 2^{-52} \approx 2.22 \times 10^{-16}, and each rounded operation introduces a relative error of at most \epsilon_m / 2. In naive summation, cancellation or unfortunate ordering of terms can push the accumulated error toward this bound times n, potentially overwhelming the result for large n. To counteract this, compensated summation algorithms, such as Kahan's method, track and correct the lost low-order bits from each addition, reducing the error growth to nearly O(\epsilon_m) independent of n. Kahan's algorithm computes the sum s iteratively with a compensation term c, where s \leftarrow s + x_i + c and c accumulates the rounding error committed in forming s + x_i, achieving accuracy close to exact in many cases.

Recursive computations provide a vivid demonstration of exponential error growth caused by running a recurrence in the unstable direction. For the Fibonacci sequence defined by f_0 = 0, f_1 = 1, f_n = f_{n-1} + f_{n-2} for n \geq 2, forward computation (starting from small indices) is stable, with errors growing at most slowly relative to the terms. However, running the recurrence backward from large indices toward smaller ones amplifies relative errors exponentially: the true values shrink roughly like powers of the golden ratio \phi \approx 1.618 as the index decreases, while rounding errors introduced in the large terms persist and quickly dominate the small terms being computed. This illustrates how the direction of computation affects stability, with error analysis indicating backward-pass relative errors on the order of \phi^n \epsilon_m for large n.
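A short sketch (plain Python with NumPy for the random data; math.fsum serves as a correctly rounded reference, and the array length is an illustrative choice) compares naive recursive summation with Kahan's compensated summation:

    import math
    import numpy as np

    def naive_sum(xs):
        s = 0.0
        for x in xs:
            s += x
        return s

    def kahan_sum(xs):
        """Compensated (Kahan) summation: c carries the low-order bits lost
        in each addition and feeds them back into the next one."""
        s, c = 0.0, 0.0
        for x in xs:
            y = x - c                # apply the running compensation
            t = s + y                # low-order digits of y may be lost here ...
            c = (t - s) - y          # ... and are recovered algebraically into c
            s = t
        return s

    rng = np.random.default_rng(5)
    xs = rng.uniform(0.0, 1.0, 1_000_000)
    exact = math.fsum(xs)            # correctly rounded reference sum

    print(f"naive summation error : {abs(naive_sum(xs) - exact):.3e}")
    print(f"Kahan summation error : {abs(kahan_sum(xs) - exact):.3e}")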

Instability in Stiff Systems

A classic test problem illustrating instability in stiff systems is the ordinary differential equation (ODE) y' = -1000(y - \cos t) - \sin t, where the stiffness originates from the large negative coefficient -1000, corresponding to a fast transient mode with eigenvalue approximately -1000, while the forcing terms \cos t and \sin t drive a slow oscillatory response on a timescale of order 1. This equation models scenarios where rapid adjustment to a slowly varying equilibrium is required, such as in certain reaction-diffusion processes. The exact solution with initial condition y(0) = 1 is y(t) = \cos t.

When solved using the explicit Euler method, y_{n+1} = y_n + \Delta t \, f(t_n, y_n), stability imposes a severe restriction on the step size, \Delta t < 0.002, to prevent oscillations or divergence, as dictated by the stability condition for the test equation y' = \lambda y with \lambda = -1000, where |1 + \lambda \Delta t| \leq 1 yields \Delta t \leq 2/|\lambda|. For larger steps, such as \Delta t = 0.01, the numerical solution rapidly becomes unstable, exhibiting growing oscillations that fail to capture the underlying slow dynamics, rendering the method computationally inefficient for long-time integrations.

In contrast, the implicit trapezoidal method, given by y_{n+1} = y_n + \frac{\Delta t}{2} \left( f(t_n, y_n) + f(t_{n+1}, y_{n+1}) \right), remains stable for much larger \Delta t, such as 0.01 or greater, due to its A-stable nature, allowing accurate resolution of the slow component y(t) = \cos t without resolving the fast transient. This method requires solving an implicit (generally nonlinear) equation at each step but avoids the restrictive step-size barrier of explicit schemes. In real-world applications like chemical kinetics, stiffness ratios (the ratio of the fastest to slowest timescales, often quantified by the spread in eigenvalues of the Jacobian) can reach up to 10^6, necessitating implicit methods to maintain efficiency.

Numerical trajectories for this problem highlight the contrast: with \Delta t = 0.01, the explicit Euler solution diverges after a few steps, showing wild oscillations around the true path, whereas the implicit trapezoidal method produces a smooth trajectory closely following \cos t, demonstrating the practical cost of stiffness-induced instability.
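The behavior described above can be reproduced with a few lines of Python (NumPy only; the final time t = 1 and the step sizes are illustrative choices). Because this test equation is linear in y, the trapezoidal update can be solved for y_{n+1} in closed form rather than with a nonlinear solver.

    import numpy as np

    def f(t, y):
        # y' = -1000*(y - cos t) - sin t; exact solution with y(0) = 1 is y = cos t
        return -1000.0 * (y - np.cos(t)) - np.sin(t)

    def explicit_euler(dt, t_end=1.0):
        n = int(round(t_end / dt))
        t, y = 0.0, 1.0
        for _ in range(n):
            y = y + dt * f(t, y)
            t += dt
        return y

    def trapezoidal(dt, t_end=1.0):
        # Write y' = a*y + g(t) with a = -1000 and g(t) = 1000*cos(t) - sin(t);
        # the trapezoidal update is then linear in y_{n+1}.
        a = -1000.0
        g = lambda t: 1000.0 * np.cos(t) - np.sin(t)
        n = int(round(t_end / dt))
        t, y = 0.0, 1.0
        for _ in range(n):
            rhs = y + 0.5 * dt * (a * y + g(t) + g(t + dt))
            y = rhs / (1.0 - 0.5 * dt * a)
            t += dt
        return y

    exact = np.cos(1.0)
    # dt = 0.01 violates the explicit Euler limit dt <= 2/1000 = 0.002.
    for dt in (0.001, 0.01):
        print(f"dt = {dt}: explicit Euler y(1) = {explicit_euler(dt):.3e}, "
              f"trapezoidal y(1) = {trapezoidal(dt):.6f}, exact = {exact:.6f}")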
