Condition number

In numerical analysis, the condition number of a function or matrix quantifies the sensitivity of its output to small perturbations in the input, serving as an indicator of how much relative error in the input can be amplified in the output due to rounding errors or data inaccuracies. For a square nonsingular matrix A, the condition number \kappa(A) is defined as \kappa(A) = \|A\| \cdot \|A^{-1}\|, where \|\cdot\| denotes a consistent matrix norm such as the 2-norm, in which case it equals the ratio of the largest to the smallest singular value of A. This value is always at least 1 for nonsingular matrices, with \kappa(A) = 1 indicating a well-conditioned problem where input and output errors are roughly equal in magnitude, while large values (e.g., exceeding 10^6 in floating-point computations) signal an ill-conditioned problem prone to numerical instability.

The concept originated in the 1940s. Von Neumann and Goldstine used eigenvalue ratios as a measure for symmetric positive definite matrices in 1947, and Alan Turing introduced the term "condition number" and the general definition in his 1948 paper on rounding-off errors in matrix processes; the concept was later popularized by James H. Wilkinson in his 1963 book Rounding Errors in Algebraic Processes.

In the specific case of solving the linear system Ax = b, the condition number bounds the relative error in the solution x by \frac{\| \Delta x \| / \| x \|}{\| \Delta b \| / \| b \|} \leq \kappa(A) for perturbations \Delta b in the right-hand side, or similarly for perturbations in A itself, highlighting its role in assessing algorithmic stability. Beyond matrices, condition numbers extend to nonlinear functions: for a differentiable function f: \mathbb{R}^n \to \mathbb{R}^m, the condition number measures the maximum amplification of relative input errors via the Lipschitz constant or Jacobian singular values, aiding in the analysis of optimization and root-finding problems.

Condition numbers are dimensionless and problem-intrinsic, depending only on the mathematical formulation and data rather than the specific algorithm used, though computing them exactly can be costly (comparable in complexity to inverting the matrix). They play a critical role in fields like scientific computing, engineering simulations, and machine learning, where ill-conditioning can render solutions unreliable; for example, in finite element methods, high condition numbers due to mesh distortion may necessitate preconditioning techniques to improve solvability. Recent advancements include structured condition numbers for specific matrix classes (e.g., Toeplitz or low-rank) and probabilistic analyses for random matrices, which often show milder conditioning in high dimensions.
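For concreteness, the matrix condition number can be evaluated directly from its definition. The following minimal sketch (assuming NumPy is available; the matrix is an arbitrary illustrative choice) computes \kappa_2(A) both as \|A\|_2 \|A^{-1}\|_2 and as the ratio of extreme singular values.

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# kappa(A) = ||A||_2 * ||A^{-1}||_2 ...
kappa_product = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# ... which equals sigma_max / sigma_min in the 2-norm
s = np.linalg.svd(A, compute_uv=False)
kappa_svd = s[0] / s[-1]

print(kappa_product, kappa_svd, np.linalg.cond(A))  # all three agree (about 14.9)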

Fundamentals

Definition

The condition number of a function quantifies its sensitivity to small perturbations in the input data. In numerical analysis, for a function f: X \to Y between normed spaces, the relative condition number \kappa(f, x) at a point x \in X with f(x) \neq 0 is formally defined as \kappa(f, x) = \lim_{\epsilon \to 0} \sup_{\|\delta\| \leq \epsilon} \frac{\|f(x + \delta) - f(x)\| / \|f(x)\|}{\|\delta\| / \|x\|}, where the supremum is taken over all nonzero perturbations \delta with \|\delta\| \leq \epsilon, and \|\cdot\| denotes compatible norms on the two spaces. This limit measures the maximum amplification factor of relative input errors into relative output errors as the perturbation size approaches zero. For differentiable functions, the relative condition number admits the first-order approximation \kappa(f, x) \approx \|Df(x)\| \cdot \frac{\|x\|}{\|f(x)\|}, where Df(x) is the derivative (the Jacobian matrix in finite dimensions) and \|\cdot\| is the induced operator norm. The absolute condition number is the analogous quantity without the relative scaling by the input and output magnitudes: \kappa_{\text{abs}}(f, x) = \lim_{\epsilon \to 0} \sup_{\|\delta\| \leq \epsilon} \frac{\|f(x + \delta) - f(x)\|}{\|\delta\|} \approx \|Df(x)\|.

A problem is considered well-posed if its condition number is finite, ensuring continuous dependence of the solution on the data; an infinite condition number signals an ill-posed problem, where arbitrarily small input changes can produce disproportionately large output variations. The forward condition number, as defined above, assesses the propagation of input errors to output errors, while the backward error measures the relative size of the input perturbation needed to reproduce the observed output. The forward error of a computed solution is typically bounded by the product of the (forward) condition number and the backward error.
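The limiting definition can be checked numerically against the first-order formula. The sketch below (illustrative, assuming NumPy) uses f(x) = Ax, for which Df(x) = A, and compares \|Df(x)\| \|x\| / \|f(x)\| with the worst amplification ratio observed over many tiny random perturbations.

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, -1.0])
fx = A @ x

# first-order formula: kappa(f, x) = ||Df(x)|| * ||x|| / ||f(x)||, with Df(x) = A here
kappa_formula = np.linalg.norm(A, 2) * np.linalg.norm(x) / np.linalg.norm(fx)

# empirical estimate: worst relative amplification over many small perturbations
eps = 1e-8
worst = 0.0
for _ in range(10000):
    d = rng.normal(size=2)
    d *= eps / np.linalg.norm(d)                       # ||delta|| = eps
    rel_out = np.linalg.norm(A @ (x + d) - fx) / np.linalg.norm(fx)
    rel_in = np.linalg.norm(d) / np.linalg.norm(x)
    worst = max(worst, rel_out / rel_in)

print(kappa_formula, worst)   # the empirical supremum approaches the formula value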

Interpretation

The condition number \kappa of a function quantifies the maximum possible amplification of relative errors from input to output. Specifically, if the input x is perturbed by a small relative error \epsilon, the relative error in the output f(x) is bounded above by \kappa \epsilon, assuming \epsilon is sufficiently small. This bound arises from the definition of the condition number as the supremum of the ratio of relative changes in output to relative changes in input over all possible small perturbations.

Problems with a condition number \kappa \approx 1 are well-conditioned, meaning small changes in the input produce proportionally small changes in the output, rendering the problem inherently stable to perturbations. In contrast, ill-conditioned problems exhibit large \kappa, where even minuscule input variations can lead to dramatically larger output discrepancies, highlighting intrinsic sensitivity. This distinction is crucial for evaluating whether a computational problem is amenable to accurate solution on digital hardware.

In practical computations, rounding errors inherent to finite-precision arithmetic are magnified by the condition number, potentially leading to significant loss of accuracy in the results. For instance, in solving a linear system, floating-point roundoff during operations can introduce errors equivalent to a small relative perturbation in the input, which the condition number then scales up in the solution. The condition number plays a key role in assessing the numerical stability of problems, independent of the specific algorithm used, by indicating the worst-case error growth due to data uncertainties or computational imperfections. It thus guides practitioners in identifying inherently challenging problems that may require specialized techniques, such as regularization, to mitigate instability.
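The amplification bound can be observed in a small ill-conditioned linear system. In the sketch below (assuming NumPy; the matrix is an illustrative choice), a tiny relative change in b produces a relative change in the solution close to \kappa(A) times larger.

import numpy as np

# nearly parallel rows make this system ill-conditioned
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])              # exact solution x = (1, 1)
x = np.linalg.solve(A, b)

db = np.array([1e-6, -1e-6])             # tiny perturbation of the right-hand side
x_pert = np.linalg.solve(A, b + db)

rel_in = np.linalg.norm(db) / np.linalg.norm(b)
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
print(np.linalg.cond(A))                 # about 4e4
print(rel_out / rel_in)                  # amplification factor, bounded by cond(A)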

Linear Operators

Condition Number of Matrices

The condition number of an invertible square matrix A \in \mathbb{R}^{n \times n} is defined as \kappa(A) = \|A\| \cdot \|A^{-1}\|, where \|\cdot\| denotes a consistent matrix norm induced by a vector norm, such as the 1-norm, 2-norm (spectral norm), or \infty-norm. This measure quantifies the sensitivity of the matrix to perturbations, with \kappa(A) \geq 1 always holding; in the 2-norm, equality holds if and only if A is a scalar multiple of a unitary matrix. Common choices include the 1-norm \|A\|_1 = \max_j \sum_i |a_{ij}|, the \infty-norm \|A\|_\infty = \max_i \sum_j |a_{ij}|, and the 2-norm \|A\|_2 = \sigma_{\max}(A), the largest singular value of A. For the 2-norm, the condition number simplifies to \kappa_2(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}, where \sigma_{\max}(A) and \sigma_{\min}(A) are the largest and smallest singular values of A, respectively. This follows from the fact that \|A\|_2 = \sigma_{\max}(A) and \|A^{-1}\|_2 = 1 / \sigma_{\min}(A), since the singular values of A^{-1} are the reciprocals of those of A. If A is a normal matrix (i.e., A A^H = A^H A), then \kappa_2(A) = \frac{|\lambda_{\max}(A)|}{|\lambda_{\min}(A)|}, where \lambda_{\max}(A) and \lambda_{\min}(A) are the eigenvalues of largest and smallest modulus, since the singular values coincide with the absolute values of the eigenvalues in this case.

The value of \kappa(A) depends on the chosen norm, as different norms yield equivalent but not identical measures; specifically, for any two consistent norms \|\cdot\|_a and \|\cdot\|_b, there exists a constant c > 0 (often bounded by n or n^2) such that \frac{1}{c} \kappa_a(A) \leq \kappa_b(A) \leq c \kappa_a(A). For example, the 1-norm and \infty-norm condition numbers emphasize column or row sums, making them suitable for problems with structured perturbations, while the 2-norm provides a geometrically intuitive measure tied to the matrix's stretching factors. In practice, the 2-norm is often preferred because it can be computed and interpreted via the singular value decomposition (SVD).

For singular or near-singular matrices, where \sigma_{\min}(A) = 0 or is very small, \kappa(A) is undefined or effectively infinite, rendering standard computations unstable. In such cases, a pseudo-condition number can be defined using regularization techniques like the truncated SVD (TSVD), in which the matrix is approximated by retaining only the k largest singular values (k < n) to form A_k = U_k \Sigma_k V_k^H; the pseudo-condition number is then \kappa(A_k) = \frac{\sigma_1}{\sigma_k}. This approach mitigates ill-conditioning by filtering out small singular values associated with noise or null-space components, effectively bounding the condition number at the cost of approximation error. Alternatively, Tikhonov regularization replaces A with the stacked matrix A_\lambda = [A; \sqrt{\lambda} I] for small \lambda > 0, whose singular values are \sqrt{\sigma_i(A)^2 + \lambda}, yielding the finite condition number \kappa(A_\lambda) = \sqrt{\sigma_{\max}(A)^2 + \lambda} / \sqrt{\sigma_{\min}(A)^2 + \lambda} \approx \sigma_{\max}(A) / \max(\sigma_{\min}(A), \sqrt{\lambda}). The choice of truncation level k or regularization parameter \lambda balances bias and variance in the approximation.
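The norm dependence and the truncated-SVD pseudo-condition number described above can be illustrated on a small nearly singular matrix (a minimal sketch assuming NumPy; the matrix and truncation level are illustrative only).

import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0 + 1e-8, 1.0],
              [1.0, 1.0, 2.0]])

# condition numbers in different induced norms are comparable but not identical
for p in (1, 2, np.inf):
    print(p, np.linalg.cond(A, p))

# pseudo-condition number via truncated SVD: keep only the k largest singular values
U, s, Vt = np.linalg.svd(A)
k = 2
print("kappa(A)   =", s[0] / s[-1])      # huge, since sigma_min is tiny
print("kappa(A_k) =", s[0] / s[k - 1])   # modest after truncation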

Applications to Linear Systems

In the context of solving linear systems Ax = b, where A is an n \times n nonsingular matrix, the condition number \kappa(A) quantifies the sensitivity of the solution x to small perturbations in the input data. Specifically, consider perturbed inputs \tilde{A} = A + \delta A and \tilde{b} = b + \delta b, leading to the perturbed system \tilde{A} \tilde{x} = \tilde{b}. Assuming \|\delta A\| / \|A\| < 1 / \kappa(A), the relative error in the solution satisfies \frac{\|\tilde{x} - x\|}{\|x\|} \leq \frac{\kappa(A)}{1 - \kappa(A) \|\delta A\| / \|A\|} \left( \frac{\|\delta b\|}{\|b\|} + \frac{\|\delta A\|}{\|A\|} \right). For sufficiently small perturbations, this bound simplifies to approximately \kappa(A) times the sum of the relative perturbations in b and A, indicating that errors in the data can be amplified by a factor up to \kappa(A).

Backward error analysis provides insight into the numerical stability of algorithms for solving linear systems. When computing an approximate solution \hat{x} using methods like Gaussian elimination with partial pivoting, the result is typically the exact solution to a slightly perturbed system (A + \Delta A) \hat{x} = b + \Delta b, where the backward errors satisfy \|\Delta A\| \leq c n \epsilon \|A\| and \|\Delta b\| \leq c n \epsilon \|b\|, with \epsilon the machine epsilon, n the matrix dimension, and c a modest constant independent of \kappa(A) (provided the pivot growth factor remains modest, as it typically does in practice). The corresponding forward relative error is then bounded by approximately c n \epsilon \kappa(A), showing that well-conditioned problems (\kappa(A) \approx 1) yield accurate solutions even in floating-point arithmetic, while ill-conditioned ones may lose significant precision.

For overdetermined least squares problems \min_x \|Ax - b\|_2 with A an m \times n matrix (m > n) of full column rank, the condition number plays a key role in error analysis. The solution \hat{x} satisfies the normal equations A^T A \hat{x} = A^T b, and the effective condition number with respect to perturbations in b is at most \kappa_2(A) / \cos \theta, where \kappa_2(A) = \sigma_1 / \sigma_n is the 2-norm condition number of A (with \sigma_1 \geq \cdots \geq \sigma_n > 0 its singular values) and \theta is the angle between b and the column space of A. When the normal equations are solved directly, the relevant condition number is that of the Gram matrix A^T A, namely [\kappa_2(A)]^2, which can lead to severe error amplification if \kappa_2(A) is large; this motivates stable alternatives like the QR decomposition, whose backward error remains O(n \epsilon) independent of \kappa(A).

A classic example of an ill-conditioned matrix is the n \times n Hilbert matrix H_n with entries (H_n)_{ij} = 1/(i + j - 1), which arises in problems like polynomial least squares approximation. Its condition number \kappa(H_n) grows exponentially with n, asymptotically as \kappa(H_n) \sim C (1 + \sqrt{2})^{4n} / \sqrt{n} for some constant C > 0, making even moderate-sized systems (e.g., n = 10) numerically challenging due to error amplification exceeding 10^{13}. This exponential growth underscores the practical importance of assessing \kappa(A) before solving linear systems.
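The Hilbert matrix example can be reproduced in a few lines (a sketch assuming NumPy): with the exact solution chosen as a vector of ones, the relative error of the computed solution grows roughly like \kappa(H_n) times the machine epsilon.

import numpy as np

def hilbert(n):
    # Hilbert matrix with entries 1 / (i + j - 1)
    i = np.arange(1, n + 1)
    return 1.0 / (i[:, None] + i[None, :] - 1.0)

for n in (4, 8, 12):
    H = hilbert(n)
    x_true = np.ones(n)
    b = H @ x_true                        # right-hand side with known solution
    x = np.linalg.solve(H, b)
    rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(n, np.linalg.cond(H), rel_err)  # error grows roughly like cond(H) * eps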

Nonlinear Functions

Scalar Functions

For scalar functions f: \mathbb{R} \to \mathbb{R} that are differentiable, the local relative condition number at a point x where f(x) \neq 0 is defined as \kappa(f, x) = \left| \frac{x f'(x)}{f(x)} \right|. This quantity measures the relative change in the output f(x) produced by a relative perturbation of the input x, and is derived from a first-order Taylor expansion of f around x. A value of \kappa(f, x) close to 1 indicates well-conditioned behavior, while larger values signal increased sensitivity to input errors. An absolute condition number can also be considered, given by |f'(x)|, which measures the absolute sensitivity of f(x) to absolute changes in x; this is particularly useful when relative measures are inappropriate, such as near zero inputs or outputs.

When f is evaluated at a zero, where f(x) = 0, the relative condition number \kappa(f, x) becomes infinite because the denominator vanishes, implying extreme ill-conditioning, unless f'(x) also vanishes, as happens at a multiple root. The associated root-finding problem is itself severely ill-conditioned at multiple roots, often requiring specialized analysis or deflation techniques for practical computation. As illustrative examples, consider f(x) = x^2: here f'(x) = 2x, so \kappa(f, x) = 2 for all x \neq 0, showing moderate and uniform conditioning away from the origin. For f(x) = e^x, with f'(x) = e^x, the condition number simplifies to \kappa(f, x) = |x|, which grows linearly with |x| and highlights increasing ill-conditioning for large inputs.
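A short numeric check of these formulas follows (illustrative, assuming NumPy); the cancellation example f(x) = x - 1 evaluated near x = 1 is added to show how large \kappa(f, x) becomes when f(x) is nearly zero.

import numpy as np

def rel_cond(f, fprime, x):
    # relative condition number kappa(f, x) = |x f'(x) / f(x)|
    return abs(x * fprime(x) / f(x))

print(rel_cond(lambda t: t**2, lambda t: 2 * t, 3.0))                 # 2.0 for any nonzero x
print(rel_cond(np.exp, np.exp, 1.0), rel_cond(np.exp, np.exp, 50.0))  # kappa = |x|
print(rel_cond(lambda t: t - 1.0, lambda t: 1.0, 1.0 + 1e-10))        # about 1e10 near the zero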

Multivariate Functions

For nonlinear functions f: \mathbb{R}^n \to \mathbb{R}^m, the condition number extends the scalar case by quantifying local sensitivity through the function's differential approximation. The local relative condition number at a point x \in \mathbb{R}^n is defined as \kappa(f, x) = \|J_f(x)\| \cdot \|x\| / \|f(x)\|, where J_f(x) denotes the Jacobian matrix of f at x, and \|\cdot\| denotes consistent norms on \mathbb{R}^n and \mathbb{R}^m. This measure approximates the maximum relative change in the output, \|\delta f\| / \|f(x)\|, for a relative perturbation \|\delta x\| / \|x\| \leq \epsilon as \epsilon \to 0, under the linear approximation \delta f \approx J_f(x) \delta x. When f(x) = 0, the expression is undefined, reflecting potential instability in problems like root-finding near such points.

The choice of norms is crucial for consistency and interpretability. Typically, the matrix norm \|J_f(x)\| is the operator norm induced by the underlying vector norms, defined as \|J_f(x)\| = \sup_{\|v\|=1} \|J_f(x) v\|, ensuring submultiplicativity and compatibility with the vector spaces involved. Common choices include the 2-norm (spectral norm, related to singular values) for its geometric insight or the \infty-norm for computational convenience in error bounds. This induced norm aligns the condition number with the amplification factor of the Jacobian, providing a dimensionally homogeneous measure of ill-conditioning.

In optimization, condition numbers for objectives and constraints often invoke second-order information via Hessians for twice-differentiable functions. For an unconstrained minimization problem \min g(y) with y \in \mathbb{R}^p, the local condition number at a critical point is \kappa(g, y) = \lambda_{\max}(H_g(y)) / \lambda_{\min}(H_g(y)), where H_g(y) is the Hessian matrix, assuming positive definiteness; large values indicate slow convergence in methods like gradient descent. For constrained problems, such as \min h(z) subject to c(z) = 0, the Hessian of the Lagrangian \mathcal{L}(z, \lambda) = h(z) + \lambda^T c(z) similarly governs second-order conditioning, with its eigenvalues reflecting the curvature and potential ill-conditioning in directions tangent to the feasible set.

A key example of instability arises near points where the output vanishes or the Jacobian has reduced rank, i.e., \operatorname{rank}(J_f(x)) < \min(n, m). Consider f: \mathbb{R}^2 \to \mathbb{R} given by f(x_1, x_2) = x_1 x_2; the Jacobian is J_f(x) = [x_2, x_1], which loses rank only at the origin, while f itself vanishes along both coordinate axes. In the 2-norm, \|J_f(x)\|_2 = \sqrt{x_1^2 + x_2^2}, so \kappa(f, x) = (x_1^2 + x_2^2) / |x_1 x_2|, which grows without bound as the point approaches either axis (where f vanishes but x does not), dramatically amplifying relative errors, whereas along the diagonal x_1 = x_2 it remains equal to 2. This highlights how vanishing outputs and low-rank Jacobians signal critical regions of high sensitivity in nonlinear mappings.
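The product example can be checked numerically (a sketch assuming NumPy): the relative condition number equals (x_1^2 + x_2^2)/|x_1 x_2| in the 2-norm, which stays moderate along the diagonal but blows up near the coordinate axes where f vanishes.

import numpy as np

def f(x):
    return x[0] * x[1]

def jac(x):
    return np.array([x[1], x[0]])        # 1x2 Jacobian of f(x1, x2) = x1 * x2

def rel_cond(x):
    x = np.asarray(x, dtype=float)
    return np.linalg.norm(jac(x)) * np.linalg.norm(x) / abs(f(x))

print(rel_cond([1.0, 1.0]))       # 2.0: well-conditioned away from the axes
print(rel_cond([1.0, 1e-6]))      # ~1e6: blows up as the point nears an axis
print(rel_cond([1e-3, 1e-3]))     # 2.0: approaching the origin diagonally is harmless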

Advanced Topics

Condition Numbers for Special Structures

In the context of eigenvalue problems, the condition number of a simple eigenvalue \lambda of a matrix A measures its sensitivity to perturbations in A. For a simple eigenvalue with associated unit right eigenvector v and unit left eigenvector y, the condition number is given by \kappa(\lambda) = 1 / |y^* v|, where y^* denotes the conjugate transpose of y. This formula arises from perturbation theory: to first order, small changes in A shift the eigenvalue by at most \kappa(\lambda) times the perturbation size, and for unit eigenvectors |y^* v| \leq 1. The quantity |y^* v| measures the angle between the left and right eigenvectors, with values near 1 indicating well-conditioned eigenvalues and smaller values signaling increased sensitivity.

For structured matrices, such as Toeplitz or low-rank matrices, specialized condition numbers account for the matrix structure and can provide tighter bounds than general norms. For Toeplitz matrices arising in signal processing, the condition number can be estimated from the generating symbol, with \kappa(T) \approx \sup_\theta |\phi(\theta)| / \inf_\theta |\phi(\theta)| for the generating function \phi, and fast algorithms such as the Levinson-Durbin recursion exploit this structure. Low-rank matrices, common in data compression and machine learning, have condition numbers determined by the effective rank, with \kappa \approx \sigma_1 / \sigma_r for a rank-r approximation, and structured perturbations that preserve the low-rank form permit stable computations. Probabilistic analyses for random matrices, such as Gaussian or Wishart ensembles, show that condition numbers typically grow only polynomially in the dimension n, far milder than the unbounded worst case, aiding reliability in random matrix theory applications like principal component analysis.

For polynomial root-finding, the condition number of a simple root \rho of a monic polynomial p(x) = \prod_i (x - \rho_i) assesses how perturbations in the coefficients affect \rho. The absolute condition number is proportional to 1 / |p'(\rho)|, reflecting that sensitivity explodes near multiple roots, where the derivative vanishes, while the relative condition number incorporates the coefficient scaling \sum_k |a_k| |\rho|^k / (|\rho| |p'(\rho)|), where the a_k are the coefficients. Wilkinson's polynomial w(x) = \prod_{k=1}^{20} (x - k), introduced in 1963, exemplifies extreme ill-conditioning: perturbing the coefficient of x^{19} by 2^{-23} moves the root at 20 to roughly 20.85 and turns several nearby roots into complex conjugate pairs, a consequence of root condition numbers as large as about 10^{13}, highlighting floating-point vulnerabilities in root computations. Alternative assessments use the resultant, which vanishes at multiple roots and bounds root separation, or the set of \epsilon-pseudozeros \{z : |p(z)| < \epsilon \|p\| \}, which visualizes sensitivity regions; in Wilkinson's case these contours reveal the instability of the larger roots.

In graph theory, the condition number of the Laplacian matrix L = D - A, where D is the degree matrix and A the adjacency matrix, governs network stability in processes like consensus and synchronization. For a connected undirected graph, the eigenvalues satisfy 0 = \mu_1 < \mu_2 \leq \cdots \leq \mu_n, and the relevant (effective) condition number is \kappa(L^\dagger) = \mu_n / \mu_2, where L^\dagger is the pseudoinverse; \mu_2 (the algebraic connectivity) measures how well connected the graph is, while \mu_n is controlled by the maximum degree \Delta (lying between \Delta + 1 and 2\Delta for a graph with at least one edge). Large \kappa implies slow convergence in diffusion dynamics or vulnerability to node failures, as seen in optimizing Laplacian spectra for stable multi-agent systems. This spectral ratio directly influences network robustness, with well-conditioned Laplacians (small \kappa) ensuring rapid information propagation. Recent work as of 2025 explores distributed optimization to minimize the finite condition number of Laplacians in multi-agent systems.

For integral operators and differential equations, condition numbers are often evaluated via spectral norms of their discretized approximations, bridging continuous and discrete analyses. In boundary integral methods for elliptic problems, discretizations of the single-layer potential operator yield matrices whose condition numbers can be estimated from the continuous spectrum and are typically bounded independently of the mesh size for low-frequency acoustics, using \kappa \approx \sigma_{\max} / \sigma_{\min}, where the singular values reflect the compactness of the operator. Similarly, for discretized differential equations such as the Poisson problem -\Delta u = f, finite difference or collocation methods produce matrices with \kappa \sim 1/h^2 in the 2-norm for second-order problems, where h is the mesh spacing; suitably preconditioned or integral-equation reformulations can keep the discrete condition number bounded independently of the mesh, tied to the continuous operator, preserving stability for well-posed boundary value problems. These estimates help ensure that numerical solutions inherit the continuous problem's conditioning, avoiding artificial ill-posedness introduced by the discretization.
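The \kappa \sim 1/h^2 growth for the discretized Laplacian is easy to verify (a sketch assuming NumPy): for the standard second-order finite-difference matrix on (0, 1) with Dirichlet boundary conditions, the 2-norm condition number tracks the classical estimate 4/(\pi h)^2.

import numpy as np

def laplacian_1d(n):
    # tridiagonal finite-difference matrix for -u'' on (0, 1), with h = 1 / (n + 1)
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2

for n in (10, 20, 40, 80):
    h = 1.0 / (n + 1)
    print(n, np.linalg.cond(laplacian_1d(n)), 4.0 / (np.pi * h) ** 2)
    # the computed condition number grows like 1/h^2, close to 4/(pi*h)^2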

Computation and Estimation

The condition number of a matrix A in the 2-norm can be computed directly from the singular value decomposition (SVD), A = U \Sigma V^T, as \kappa_2(A) = \sigma_1 / \sigma_n, with \sigma_1 and \sigma_n the largest and smallest singular values, respectively. This method provides an exact value but requires O(n^3) time for an n \times n matrix, making it suitable for dense matrices of moderate size.

For large or sparse matrices, a full SVD is often impractical, leading to approximation methods. Power iteration can estimate the dominant singular value \sigma_1 by repeatedly applying A^T A (or A A^T) to a starting vector, converging to the direction of the leading singular vector; the smallest singular value \sigma_n can be approximated analogously by inverse iteration, though this requires solves with A. The Lanczos algorithm extends this idea for symmetric matrices (or, via bidiagonalization, for general matrices), generating a tridiagonal or bidiagonal approximation whose extreme Ritz values estimate the extreme singular values, with cost scaling as O(k n^2) for k iterations of dense matrix-vector products, where a small k (e.g., 10-20) often gives good approximations. Probabilistic methods, such as randomized SVD, project the matrix onto a low-dimensional random subspace to approximate the top singular values, achieving near-optimal error with high probability in roughly O(m n \log k + (m + n) k^2) time for an m \times n matrix with m \geq n, and are particularly effective for ill-conditioned or rank-deficient cases.

Recent advancements as of 2025 include quantum algorithms for estimating condition numbers in high-dimensional settings, such as quantum singular value transformation (QSVT) with numerical angle estimation for matrices with large condition numbers, aimed at efficient inversion on quantum hardware. In machine learning, automatic precision-estimation techniques integrate condition number bounds into neural network libraries to select floating-point formats, reducing computational cost while maintaining accuracy.

For nonlinear functions, computing the condition number involves estimating the Jacobian matrix J_f(x) at the point of interest, followed by evaluating its condition number as for linear operators. Finite differences approximate J_f(x) column by column by perturbing inputs and measuring output changes, such as the forward difference [f(x + h e_i) - f(x)] / h for the i-th column, where h is a small step size (e.g., on the order of \sqrt{\epsilon} \|x\| with machine epsilon \epsilon); this is simple but introduces truncation and rounding errors, requiring careful choice of h for accuracy. Automatic differentiation (AD) provides exact Jacobian entries by propagating derivatives through the function's computational graph, avoiding approximation errors and enabling efficient computation for complex, high-dimensional functions, though it requires code instrumentation or an AD framework.

Software libraries implement these techniques robustly. LAPACK's DGECON routine estimates the reciprocal condition number in the 1-norm or \infty-norm from an LU factorization, using a norm estimator for \|A^{-1}\| rather than forming the inverse explicitly, and returns a value RCOND between 0 and 1, with values near zero indicating near singularity. MATLAB's cond(A) computes the 2-norm condition number via the SVD, with options for other norms such as cond(A,1); for cheaper estimates, condest(A) applies a block 1-norm estimator (using LU solves with A) to estimate the 1-norm condition number.
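The power-iteration idea sketched above can be written in a few lines (illustrative, assuming NumPy; estimate_cond2 is a hypothetical helper, not a library routine): the largest singular value is obtained by iterating with A^T A, and the smallest by inverse iteration, which requires solves with A and A^T.

import numpy as np

def estimate_cond2(A, iters=100, seed=0):
    # rough 2-norm condition number via power iteration (sigma_max)
    # and inverse iteration (sigma_min); assumes A is square and nonsingular
    rng = np.random.default_rng(seed)
    n = A.shape[1]

    v = rng.normal(size=n)
    for _ in range(iters):                # power iteration on A^T A
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    sigma_max = np.linalg.norm(A @ v)

    w = rng.normal(size=n)
    for _ in range(iters):                # inverse iteration: apply (A^T A)^{-1}
        w = np.linalg.solve(A, np.linalg.solve(A.T, w))
        w /= np.linalg.norm(w)
    sigma_min = np.linalg.norm(A @ w)

    return sigma_max / sigma_min

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(estimate_cond2(A), np.linalg.cond(A))   # the estimate should roughly match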
