
Interval arithmetic

Interval arithmetic is a computational framework that represents real numbers as closed intervals of the form [a, b], where a \leq b, and defines operations on these intervals to produce enclosures that contain all possible outcomes of the corresponding operations on the real numbers within the intervals, thereby providing rigorous bounds on computational errors such as rounding and measurement error. This approach ensures inclusion isotonicity, meaning the result interval always contains the exact range of the operation, even in finite-precision arithmetic, where outward rounding accounts for floating-point limitations. For basic operations, addition and subtraction are straightforward—e.g., [a, b] + [c, d] = [a + c, b + d]—while multiplication and division require evaluating endpoint combinations to determine the tightest enclosing interval, with special handling for division by intervals containing zero, which yields the entire real line or appropriate unions. Developed in the mid-20th century to address uncertainties in numerical computations, interval arithmetic traces its modern origins to Teruo Sunaga's 1958 work on inclusion theory, but it was Ramon E. Moore who formalized and popularized it through his 1962 dissertation and 1966 book Interval Analysis, motivated by the need for rigorous error bounds in numerical computation. Subsequent advancements by researchers including Eldon R. Hansen, Karl Nickel, and Ulrich Kulisch refined its theoretical foundations and implementations, leading to standardization efforts such as the IEEE 1788-2015 interval arithmetic standard and software libraries such as INTLAB and PROFIL/BIAS. Despite its strengths, interval arithmetic suffers from the dependency problem, where expressions lose information about variable correlations (e.g., x - x is identically 0, but the interval computation X - X is not [0, 0]), resulting in potentially pessimistic bounds that can be mitigated through centered forms or monotonicity exploitation.
Interval arithmetic finds broad applications in verified numerical computing, where it guarantees enclosures for solutions to equations, optimizations, and simulations, enabling computer-assisted proofs such as the 1998 resolution of Kepler's conjecture. In engineering, it bounds uncertainties from measurements and models to ensure safety and reliability. It also supports solving nonlinear systems via methods like the interval Newton or Krawczyk algorithms, with error guarantees, and global optimization by partitioning search spaces into subintervals. These capabilities make it indispensable for high-stakes computations requiring certified accuracy, though challenges like computational overhead persist in real-time systems.

Introduction and Basics

Definition and Motivation

Interval arithmetic provides a framework for performing computations with quantities that are known only within certain bounds, representing uncertainty or error in numerical values. An interval X is defined as a closed set [a, b] of real numbers where a \leq b, encapsulating the range of all possible values for a given quantity. This representation allows for the propagation of bounds through arithmetic operations, ensuring that results enclose the true value without requiring exact inputs. The primary motivation for interval arithmetic arises from the inherent limitations of floating-point arithmetic, which introduces rounding errors and complicates the tracking of uncertainty in computations. In fields such as scientific computing and engineering design, where precise error bounds are essential, traditional methods often fail to guarantee containment of all possible outcomes due to accumulated inaccuracies. Interval arithmetic addresses this by systematically bounding errors and uncertainties, enabling reliable analysis even with approximate data. The set of all closed real intervals forms a lattice under the partial order of inclusion, where the meet and join operations correspond to intersection and the interval hull of the union, respectively. Arithmetic operations on intervals are defined to yield enclosures—intervals that contain the set of all possible results from applying the operation to values within the input intervals—thus preserving the inclusion property. A key measure is the width of an interval X = [a, b], given by w(X) = b - a, which quantifies the uncertainty or spread represented by the interval. Interval arithmetic originated in the mid-20th century to meet the growing needs of error analysis in digital computing, with foundational developments by R. E. Moore in his 1962 dissertation.

Illustrative Examples

To illustrate the basic principle of interval arithmetic, consider the addition of two intervals representing approximate values with known bounds. Suppose the value of π is known to lie within the interval [3.14, 3.15], and an additional correction term is bounded by [0.001, 0.002]. The sum of these intervals is computed as [3.14 + 0.001, 3.15 + 0.002] = [3.141, 3.152]. This result encloses all possible sums of values from the original intervals, providing a guaranteed bound on the uncertainty without requiring exact inputs. Interval arithmetic extends naturally to more complex formulas by propagating bounds through multiple operations. For example, the area of a circle is given by A = \pi r^2, where the radius r is uncertain and lies in the interval [r_1, r_2] with 0 < r_1 < r_2. Applying interval arithmetic yields the interval extension \pi [r_1, r_2]^2 = [\pi r_1^2, \pi r_2^2], which contains the true area for any r \in [r_1, r_2]. This demonstrates how uncertainties in input parameters are tracked to bound the output, useful in applications like engineering tolerances. In contrast to point arithmetic, which uses single representative values and may accumulate untracked rounding errors, interval arithmetic produces wider results that rigorously enclose the true value. For instance, point evaluation of the earlier addition using midpoints 3.145 + 0.0015 = 3.1465 might suggest false precision, whereas the interval [3.141, 3.152] ensures containment and reflects the actual uncertainty range. This enclosure property makes interval methods reliable for verified computations, though at the cost of broader bounds due to the dependency problem when the same variable appears more than once. Visual representations, such as diagrams depicting the Minkowski sum of input intervals for addition, or the monotonic expansion of the area interval with increasing radius bounds, can aid understanding of these enclosures.
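The two examples above can be reproduced in a few lines of Python. This is an illustrative sketch only: the helper names (`add`, `circle_area`) are ad hoc, intervals are plain `(lo, hi)` tuples, and no outward rounding is applied, so the code mirrors the exact endpoint formulas rather than a validated implementation.

```python
import math

def add(x, y):
    # [a, b] + [c, d] = [a + c, b + d], with intervals as (lo, hi) tuples
    return (x[0] + y[0], x[1] + y[1])

def circle_area(r):
    # A = pi * r^2 is increasing for r >= 0, so endpoint evaluation suffices
    assert r[0] >= 0
    return (math.pi * r[0] ** 2, math.pi * r[1] ** 2)

pi_bounds = (3.14, 3.15)
correction = (0.001, 0.002)
total = add(pi_bounds, correction)   # encloses [3.141, 3.152]
area = circle_area((1.0, 1.1))       # encloses [pi, 1.21*pi]
```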

Interval Representation

Standard Notation

In interval arithmetic, an interval X over the real numbers is typically denoted by X = [\underline{x}, \overline{x}], where \underline{x} and \overline{x} represent the lower and upper bounds, respectively, satisfying \underline{x} \leq \overline{x}. This closed notation encompasses all real numbers x such that \underline{x} \leq x \leq \overline{x}, providing a compact representation for sets of possible values in computations involving uncertainty or error bounds. Special cases include degenerate intervals, which are singletons of the form [a, a] equivalent to the real number a itself, representing precise values without width. The empty set is denoted by \emptyset, arising when no real numbers satisfy the interval condition, such as in inconsistent bounds where the lower exceeds the upper. Unbounded intervals, such as [-\infty, b] or [a, \infty), are occasionally considered but rarely emphasized in standard treatments, as the focus remains on finite, closed intervals. These notations assume familiarity with basic properties of the real numbers, including ordering and completeness. The set of all such intervals forms a complete lattice under the partial order of inclusion \subseteq, where X \subseteq Y if and only if \underline{y} \leq \underline{x} and \overline{x} \leq \overline{y}. In this structure, the meet operation corresponds to the intersection X \cap Y, and the join is the convex hull (or interval hull) of the union, ensuring the smallest interval containing both sets. For a finite set of real numbers \{x_1, \dots, x_n\}, the convex hull is given by \operatorname{conv}(\{x_1, \dots, x_n\}) = [\min\{x_1, \dots, x_n\}, \max\{x_1, \dots, x_n\}], which extends naturally to intervals as subsets of the power set of the reals. This lattice framework underpins the algebraic properties essential for rigorous error analysis in numerical computations.

Real and Complex Intervals

In interval arithmetic, complex intervals extend the real interval concept to the complex plane, primarily through rectangular representations. A rectangular complex interval Z is defined as the Cartesian product of two real intervals, expressed as Z = [a, b] + i [c, d], where [a, b] and [c, d] are closed real intervals, and i is the imaginary unit. This form encloses all complex numbers z = x + i y such that x \in [a, b] and y \in [c, d], forming an axis-aligned rectangle in the complex plane. Such representations are straightforward to compute using real interval arithmetic on the real and imaginary components separately. Alternative representations for complex intervals include disk or circular forms, which enclose sets as disks centered at a complex point with a radius given by a nonnegative real interval. A circular complex interval can be denoted as Z = z_c + r \mathbb{D}, where z_c is a complex center, r is a real interval radius, and \mathbb{D} is the closed unit disk \{ w \in \mathbb{C} : |w| \leq 1 \}. These forms often provide tighter enclosures for operations involving magnitude or rotation, such as multiplication, but may overestimate for addition or subtraction compared to rectangular forms, leading to trade-offs in enclosure tightness depending on the application. Interval arithmetic extends naturally to higher dimensions via vector and matrix intervals. An interval vector in \mathbb{R}^n is the Cartesian product \mathbf{X} = [a_1, b_1] \times \cdots \times [a_n, b_n], bounding component-wise ranges for vectors. Similarly, an interval matrix consists of entries that are real or complex intervals, enabling enclosures for linear systems and transformations in multi-dimensional spaces. The operations on real, complex, vector, and matrix intervals inherit key properties from real interval arithmetic, including inclusion monotonicity and subdistributivity.
Inclusion monotonicity ensures that if X \subseteq Y and Z \subseteq W, then the result of any operation satisfies X \oplus Z \subseteq Y \oplus W, where \oplus denotes addition, subtraction, multiplication, or division (when defined). Subdistributivity holds for multiplication, such that X(Y + Z) \subseteq XY + XZ, though equality does not always obtain due to dependency issues. These properties guarantee that computed enclosures contain all possible values from input uncertainties. For complex addition in rectangular form, the operation proceeds component-wise: given Z_1 = [a_1, b_1] + i [c_1, d_1] and Z_2 = [a_2, b_2] + i [c_2, d_2], Z_1 + Z_2 = [a_1 + a_2, b_1 + b_2] + i [c_1 + c_2, d_1 + d_2]. This preserves the rectangular structure exactly. Unlike real intervals, which have unique closed bounded representations, complex intervals exhibit non-unique enclosures due to the two-dimensional nature and multiple representational choices (e.g., rectangular versus disk), potentially leading to different bounding sets for the same points. Tighter enclosures can often be achieved using alternative forms like polar or disk representations, which better capture rotational symmetries or circular uncertainties in complex computations.
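Component-wise rectangular arithmetic can be illustrated by building complex interval multiplication from real interval operations, following the usual formula (a + bi)(c + di) = (ac - bd) + (ad + bc)i. This is a sketch with ad hoc names and no directed rounding:

```python
def iadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def imul(x, y):
    p = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(p), max(p))

def cmul(z1, z2):
    # rectangular complex intervals as (real_interval, imag_interval) pairs
    re1, im1 = z1
    re2, im2 = z2
    return (isub(imul(re1, re2), imul(im1, im2)),
            iadd(imul(re1, im2), imul(im1, re2)))

one_plus_i = ((1.0, 1.0), (1.0, 1.0))   # degenerate box for 1 + i
re, im = cmul(one_plus_i, one_plus_i)   # (1 + i)^2 = 2i
```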

Core Operations

Arithmetic Operators

Interval arithmetic defines the basic operations on closed real intervals to ensure that the result contains all possible values arising from pointwise applications of the corresponding real operations on elements of the intervals. For two intervals X = [\underline{x}, \overline{x}] and Y = [\underline{y}, \overline{y}], where \underline{x} \leq \overline{x} and \underline{y} \leq \overline{y}, the operations are specified as follows. Addition is given by X + Y = [\underline{x} + \underline{y}, \overline{x} + \overline{y}], which directly sums the lower and upper bounds, preserving the inclusion property. Subtraction follows as X - Y = [\underline{x} - \overline{y}, \overline{x} - \underline{y}], subtracting the upper bound of Y from the lower bound of X and the lower bound of Y from the upper bound of X to account for the range of possible differences. Multiplication requires evaluating all combinations of endpoint products and selecting the minimum and maximum: X \cdot Y = [\min\{\underline{x}\underline{y}, \underline{x}\overline{y}, \overline{x}\underline{y}, \overline{x}\overline{y}\}, \max\{\underline{x}\underline{y}, \underline{x}\overline{y}, \overline{x}\underline{y}, \overline{x}\overline{y}\}], ensuring the result encloses the full range of products. Division is defined as X / Y = X \cdot (1/Y), where the reciprocal interval is \frac{1}{Y} = \left[\frac{1}{\overline{y}}, \frac{1}{\underline{y}}\right] provided 0 \notin Y (i.e., \underline{y} > 0 or \overline{y} < 0); if 0 \in Y, the operation is undefined in standard interval arithmetic or requires extended intervals incorporating infinity. These operations exhibit inclusion monotonicity: if X \subseteq X' and Y \subseteq Y', then X + Y \subseteq X' + Y', and similarly for subtraction, multiplication, and division (when defined).
Additionally, multiplication satisfies subdistributivity: for intervals X, Y, and Z, X(Y + Z) \subseteq XY + XZ; the interval obtained by multiplying X by the sum Y + Z contains the pointwise range but is contained within the sum of the separate products, so evaluating the factored form yields bounds at least as tight as the expanded form.
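The endpoint formulas above translate directly into code. The following minimal Python class is a sketch under simplifying assumptions: exact endpoint arithmetic (no outward rounding), and division rejected outright when the divisor interval contains zero rather than using extended intervals.

```python
class Interval:
    """Closed real interval [lo, hi] with the textbook endpoint formulas."""

    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # all four endpoint products; min/max give the tightest enclosure
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        if other.lo <= 0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains 0")
        return self * Interval(1 / other.hi, 1 / other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"
```

Note that `Interval(1, 2) - Interval(1, 2)` returns `[-1, 1]` rather than `[0, 0]`, a first glimpse of the dependency problem discussed later.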

Elementary Functions

In interval arithmetic, the extension of a continuous unary function f: \mathbb{R} \to \mathbb{R} to an interval X = [\underline{x}, \overline{x}] is defined as the range f(X) = \{f(x) \mid x \in X\} = [\inf_{x \in X} f(x), \sup_{x \in X} f(x)], which provides a tight enclosure of all possible function values over X. This range can be computed exactly for simple cases but often requires approximation methods to ensure inclusion. For monotonic functions, the range simplifies to evaluation at the endpoints: if f is increasing, f(X) = [f(\underline{x}), f(\overline{x})]; if decreasing, f(X) = [f(\overline{x}), f(\underline{x})]. Non-monotonic functions demand more sophisticated techniques, such as subdivision of X into monotonic subintervals or bounding via derivatives (e.g., using the mean value theorem to estimate deviations from a point). The exponential function \exp(X) is monotonic increasing, so its interval extension is straightforward: \exp(X) = [\exp(\underline{x}), \exp(\overline{x})]. For example, \exp([0, 1]) = [1, e], where e \approx 2.718, enclosing all values from \exp(0) to \exp(1). Similarly, the natural logarithm \log(X) is monotonic increasing on X > 0, yielding \log(X) = [\log(\underline{x}), \log(\overline{x})] for \underline{x} > 0; for instance, \log([2.5, 3.5]) \approx [0.916, 1.253] when using endpoint evaluations with rounding adjustments. The square root function, defined for X \geq 0, is also monotonic increasing: \sqrt{X} = [\sqrt{\underline{x}}, \sqrt{\overline{x}}]. An example is \sqrt{[0, 4]} = [0, 2], which precisely bounds the range. For non-monotonic functions like sine, the extension cannot rely solely on endpoints and may require subdivision or auxiliary bounds. For \sin(X), the exact range is determined by the minimum and maximum values over X, considering the function's periodicity and critical points (e.g., at \pi/2 + 2k\pi).
For example, \sin([0, \pi]) = [0, 1], obtained by evaluating the extrema within the interval; for narrower intervals like [0.1, 0.3], where sine is monotonic increasing, endpoint evaluation suffices, yielding enclosures such as \sin([0.1, 0.3]) \approx [0.0998, 0.2955]. The power function X^n for integer n handles cases based on the sign of the base and the parity of n. If X \geq 0, monotonicity gives X^n = [\underline{x}^n, \overline{x}^n] directly. When X includes negative values, even powers yield non-negative results: if 0 \in X and n is even, X^n = [0, \max(\underline{x}^n, \overline{x}^n)], e.g., [-2, 3]^4 = [0, 81], accounting for the minimum at zero and the maximum at the farthest endpoint; if X < 0 and n is even, monotonicity reverses, e.g., [-2, -1]^2 = [1, 4]. Odd powers preserve sign and monotonicity, so endpoint evaluation applies directly, e.g., [-2, -1]^3 = [-8, -1]. A key property of these extensions is inclusion isotonicity: for continuous f and nested intervals X \subseteq Y, the extended function satisfies f(X) \subseteq f(Y), ensuring wider inputs produce wider (or equal) output enclosures. This holds because taking the range preserves subset relations for continuous functions. Compositions with the arithmetic operations from prior sections can form more complex expressions while maintaining rigorous enclosures.
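The monotonic extensions and the case analysis for integer powers can be sketched as follows; the names are illustrative, intervals are `(lo, hi)` tuples, and no rounding adjustments are made.

```python
import math

def iexp(x):
    # exp is increasing, so the range is attained at the endpoints
    return (math.exp(x[0]), math.exp(x[1]))

def isqrt(x):
    # sqrt is increasing on [0, inf)
    assert x[0] >= 0
    return (math.sqrt(x[0]), math.sqrt(x[1]))

def ipow(x, n):
    # integer power: split by parity of n and the sign of the interval
    lo, hi = x
    if n % 2 == 1 or lo >= 0:            # odd n, or nonnegative base: increasing
        return (lo ** n, hi ** n)
    if hi <= 0:                           # nonpositive base, even n: decreasing
        return (hi ** n, lo ** n)
    return (0.0, max(lo ** n, hi ** n))  # straddles 0, even n: minimum is 0
```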

Extensions to General Functions

Interval arithmetic extends to general functions through the principle of interval extension, which requires constructing an interval-valued function F such that f(x) \in F(X) for all x \in X, where X is an interval containing the domain points of interest. This ensures that the image of any point in X under f lies within the computed interval enclosure, providing guaranteed bounds despite the loss of exact point information. The extension F is typically derived from properties of f, such as differentiability, to minimize the overestimation inherent in naive pointwise evaluations. A fundamental approach is the mean value form, which leverages the mean value theorem for differentiable functions. For a differentiable function f on an interval X, the mean value extension is given by F(X) = f(c) + f'(X)(X - c), where c \in X is a chosen point (often the midpoint m(X)) and f'(X) is an interval enclosure of the derivative f' over X. This form guarantees enclosure because, by the mean value theorem, f(y) - f(c) = f'(\xi)(y - c) for some \xi between y and c, so the difference lies in f'(X)(X - c). The mean value form is particularly effective for continuously differentiable functions, where the derivative is bounded, allowing tighter enclosures compared to natural extensions. The Lipschitz form provides a coarser but simpler enclosure for functions satisfying a Lipschitz condition, i.e., |f(x) - f(y)| \leq L |x - y| for some constant L > 0 and all x, y \in X. In this case, the extension simplifies to f(X) \subseteq f(c) + L (X - c), where L bounds the absolute value of f' over X, ensuring the width of the enclosure is at most L times the width of X. This form is useful when computing the full range of f' is expensive, as it relies only on a scalar bound rather than an interval for the derivative. For higher accuracy, the Taylor form extends the mean value approach using higher-order derivatives.
For an n-times differentiable function f, the Taylor expansion around c \in X is f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(c)}{k!} (x - c)^k + R_n(x), where the remainder term R_n encloses the Lagrange form \frac{f^{(n+1)}(\xi)}{(n+1)!} (x - c)^{n+1} for some \xi \in X, typically bounded by an interval extension of f^{(n+1)} over X. This provides a polynomial approximation plus a verified remainder, reducing overestimation for smooth functions by incorporating curvature information. The full Taylor form inclusion is thus F(X) = \sum_{k=0}^{n-1} \frac{f^{(k)}(c)}{k!} (X - c)^k + f^{(n)}(X) \frac{(X - c)^n}{n!}, ensuring enclosure via the generalized mean value theorem for higher derivatives. These extensions are applied to differentiable functions to obtain narrower enclosures than those from arithmetic operations alone, especially when dependencies between variables are present or for non-elementary functions lacking explicit interval rules. The choice depends on available derivative information: mean value or Lipschitz forms when only first-order information is available, Taylor forms for higher-order precision, with computational cost increasing with the order of derivatives required.
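The mean value form is concrete enough to sketch for a univariate f. In this illustration the derivative enclosure `fprime_enc` is assumed supplied by the caller, the names are ad hoc, and exact endpoint arithmetic is used (no outward rounding); for f(x) = x - x^2 on [0, 0.5] the natural extension gives [-0.25, 0.5], while the mean value form below is narrower.

```python
def mean_value_form(f, fprime_enc, X):
    """Mean value extension F(X) = f(c) + f'(X)(X - c), c = midpoint.

    fprime_enc maps an interval to an enclosure of f' over it.
    """
    lo, hi = X
    c = 0.5 * (lo + hi)
    fc = f(c)
    dlo, dhi = fprime_enc(X)
    # interval product f'(X) * (X - c) via endpoint combinations
    p = [d * e for d in (dlo, dhi) for e in (lo - c, hi - c)]
    return (fc + min(p), fc + max(p))

# f(x) = x - x^2 on [0, 0.5]; f'(x) = 1 - 2x, so f'(X) = [1 - 2*hi, 1 - 2*lo]
enc = mean_value_form(lambda x: x - x * x,
                      lambda X: (1 - 2 * X[1], 1 - 2 * X[0]),
                      (0.0, 0.5))
```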

Computational Techniques

Rounded Interval Arithmetic

Rounded interval arithmetic extends standard interval arithmetic by incorporating directed rounding to account for finite-precision computations on digital computers, ensuring that the resulting intervals rigorously enclose the exact mathematical ranges despite rounding errors. This technique, also known as outward rounding, rounds the lower bound of an interval downward and the upper bound upward, guaranteeing containment of the true interval. Rounding modes in rounded interval arithmetic utilize directed rounding provided by floating-point hardware standards, such as rounding downward (to the nearest representable number not exceeding the exact value) for lower bounds and upward (to the nearest representable number not less than the exact value) for upper bounds. These modes leverage the processor's ability to switch rounding directions on a per-operation basis, enabling precise control over enclosure accuracy. The error introduced by rounding is bounded by the machine epsilon \varepsilon, the smallest relative spacing between floating-point numbers around 1, typically 2^{-52} for double precision. For a computed interval \hat{X} enclosing the exact interval X, outward rounding ensures X \subseteq \hat{X} with the width satisfying w(\hat{X}) \leq w(X) + 2\varepsilon \max(| \underline{x} |, | \overline{x} |), where \underline{x} and \overline{x} are the endpoints, limiting the excess width to roughly one unit in the last place (ulp) per bound. Implementation relies on floating-point units supporting the directed rounding modes specified by IEEE 754, where lower and upper bounds are computed using dedicated rounding instructions. Software libraries, such as INTLAB, automate this process by enforcing outward rounding for all operations, minimizing overhead while preserving enclosure properties.
For basic operations like addition of intervals X = [\underline{x}, \overline{x}] and Y = [\underline{y}, \overline{y}], the rounded result is computed as \hat{X} + \hat{Y} = \left[ \mathrm{fl}_\downarrow(\underline{x} + \underline{y}), \mathrm{fl}_\uparrow(\overline{x} + \overline{y}) \right], where \mathrm{fl}_\downarrow denotes rounding downward and \mathrm{fl}_\uparrow rounding upward to the nearest representable floating-point number. This ensures the exact sum range is contained within the computed interval. The importance of rounded interval arithmetic lies in its foundation for validated numerics, where it guarantees that computed enclosures contain all possible solutions, enabling reliable error analysis and computer-assisted proofs in scientific and engineering applications. By bounding rounding errors explicitly, it supports the development of trustworthy computational methods without underestimation risks.
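CPython does not expose the processor's rounding mode, so a common software substitute is to nudge each computed bound outward by one ulp with `math.nextafter` (available since Python 3.9). This over-widens results that happened to be exactly representable, but it preserves the containment guarantee; the sketch below illustrates the idea for addition.

```python
import math

def round_down(x):
    # next float toward -inf: a conservative stand-in for fl-down
    return math.nextafter(x, -math.inf)

def round_up(x):
    # next float toward +inf: a conservative stand-in for fl-up
    return math.nextafter(x, math.inf)

def add_outward(x, y):
    """Interval addition with outward rounding: the exact sum range
    [x.lo + y.lo, x.hi + y.hi] is guaranteed to lie inside the result."""
    return (round_down(x[0] + y[0]), round_up(x[1] + y[1]))
```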

Handling Dependencies

In interval arithmetic, the dependency problem arises when a function involves the same variable multiple times, but the arithmetic treats each occurrence as independent, leading to enclosures that are wider than necessary. For instance, consider the function f(x) = x - x, which evaluates exactly to 0 for any real x. If x is represented by the interval X = [a, b] with a < b, standard interval arithmetic computes f(X) = X - X = [a - b, b - a], an interval of width 2(b - a) that contains 0 but overestimates the true range \{0\}. This overestimation stems from the fundamental design of interval operators, which compute bounds assuming no correlation between input intervals, thereby discarding information about dependencies between variables. In expressions where variables appear repeatedly, such as in subtraction or multiplication, the containment properties of interval arithmetic still hold, but they fail to exploit the fact that correlated occurrences (e.g., the same x in x - x) constrain the result more tightly. The dependency effect is particularly evident in basic operations like subtraction: for an interval X, the width w(X - X) = 2w(X), whereas the true width of the range is 0, highlighting how multiple uses amplify uncertainty without reflecting actual functional behavior. To address dependencies, alternative representations such as affine arithmetic and centered forms have been developed, which track linear correlations or mean-value expansions to produce narrower enclosures while maintaining guaranteed bounds. Affine forms represent quantities as affine combinations of noise symbols to model dependencies explicitly, while centered forms evaluate functions around a midpoint to reduce the widening caused by repeated variables. These approaches mitigate but do not fully eliminate the issue in nonlinear or higher-order contexts.
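The widening is easy to demonstrate with the endpoint formula for subtraction (a sketch without rounding control; names are illustrative):

```python
def isub(x, y):
    # [a, b] - [c, d] = [a - d, b - c]
    return (x[0] - y[1], x[1] - y[0])

def width(iv):
    return iv[1] - iv[0]

X = (1.0, 2.0)
result = isub(X, X)   # both occurrences of X are treated as independent
# result is (-1.0, 1.0): width 2 * w(X), although x - x is identically 0
```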

Widening and Overestimation

In interval arithmetic, overestimation occurs when the computed enclosure of a function's range over an input interval is wider than the true range, leading to a loss of sharpness in bounds. This widening is distinct from but can compound the dependency effects handled through variable correlation techniques. The excess width, defined as e(X) = w(F(X)) - w(f(X)), where F(X) is the interval extension and w(\cdot) denotes the width, measures the degree of overestimation for a given evaluation. A fundamental metric for quantifying overestimation is the Hausdorff distance d_H(R(f; X), F(X)), which captures the maximum deviation between the true range R(f; X) and the enclosure F(X). For continuous functions, this distance converges linearly to zero as the width w(X) shrinks, with d_H \leq \gamma w(X) for some constant \gamma depending on the function's properties over a reference interval. The wrapping effect represents a key source of overestimation, arising when the geometric shape of the function's image—often curved or non-axis-aligned in multiple dimensions—is projected onto an interval vector, inflating the enclosure to include unnecessary regions. For instance, the evaluation \sin([0, 2\pi]) = [-1, 1] is exact, but for a nonlinear mapping like f(x, y) = (x^2 - y^2, 2xy) over a square domain, the circular image wraps into a larger bounding box, exaggerating the width beyond the true range. This effect intensifies with dimensionality and curvature, as interval arithmetic assumes independent variation along each axis. Beyond dependency issues, overestimation stems from function non-monotonicity, where interior extrema fragment the range into disconnected components that interval evaluations merge conservatively, and from multiple variable occurrences amplifying artificial correlations not captured in basic forms. These factors contribute to quadratic or higher-order excess width in higher-degree polynomials. 
Overestimation can be bounded using the function's Lipschitz constant L, which satisfies w(f(X)) \leq L \, w(X) for the true range width, with interval extensions achieving w(F(X)) \leq (1 + \kappa) L \, w(X), where \kappa is a condition number reflecting derivative variation (e.g., \kappa = \sup |f'| / \inf |f'| - 1). This slope-based bound highlights how steep or varying gradients exacerbate widening, as in the mean-value form F(X) = f(c) + f'(X)(X - c), yielding w(F(X)) \leq \sup |f'| \cdot w(X). To minimize the wrapping effect, subdivision techniques partition the input interval into smaller subdomains, where local linearity or alignment reduces geometric distortion in enclosures. While computationally intensive, this yields tighter overall bounds by unioning refined sub-enclosures, often achieving near-optimal sharpness for moderately complex functions.
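Subdivision is straightforward to sketch for a univariate function. Below, `f_enc` is the natural interval extension of f(x) = x(1 - x), whose true range on [0, 1] is [0, 0.25] but whose naive enclosure is [0, 1]; unioning enclosures over equal subintervals tightens the bound as the partition is refined. Names and the choice of f are illustrative.

```python
def imul(x, y):
    p = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(p), max(p))

def f_enc(X):
    # natural extension of f(x) = x * (1 - x); overestimates on wide inputs
    one_minus = (1 - X[1], 1 - X[0])
    return imul(X, one_minus)

def subdivide_enc(f, X, n):
    """Union of enclosures over n equal subintervals: tighter as n grows."""
    lo, hi = X
    h = (hi - lo) / n
    pieces = [f((lo + i * h, lo + (i + 1) * h)) for i in range(n)]
    return (min(p[0] for p in pieces), max(p[1] for p in pieces))
```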

Solution Methods

Linear Interval Systems

Linear interval systems concern the computation of enclosures for the solution set of equations \mathbf{A} x = \mathbf{b}, where \mathbf{A} is an n \times n interval matrix and \mathbf{b} is an interval vector. The solution set is defined as S = \{ x \in \mathbb{R}^n \mid \exists A \in \mathbf{A}, \, b \in \mathbf{b} \text{ s.t. } A x = b \}, representing all possible solutions over the ranges of coefficients in \mathbf{A} and \mathbf{b}. This set is typically non-convex and computing its exact hull is NP-hard, so methods focus on outer approximations using interval arithmetic. Interval matrix arithmetic, which extends pointwise operations to matrices, underpins these computations. A fundamental technique for enclosing S is the interval adaptation of Gaussian elimination, which transforms the augmented interval matrix [\mathbf{A} \mid \mathbf{b}] into row echelon form via elementary row operations carried out in interval arithmetic. To mitigate excessive width growth from dependency ignorance and potential division by zero-containing pivots, partial pivoting selects the candidate pivot interval with the minimal width in the current column, thereby minimizing overestimation in subsequent elimination steps. Pivot tightening can further refine this by computing exact ranges for pivots using determinants of leading principal submatrices, ensuring nonsingularity for classes like inverse nonnegative matrices. The resulting upper triangular system is then solved via back-substitution, yielding a verified enclosure, though widths may still expand quadratically in the worst case. The Hansen-Bliek method offers a preconditioning-based approach to achieve sharper enclosures. It first computes a preconditioner M \approx (\mathrm{mid}(\mathbf{A}))^{-1}, the inverse of the midpoint matrix of \mathbf{A}, and applies it to form the transformed system (M \mathbf{A}) x = M \mathbf{b}.
Assuming the preconditioned matrix M \mathbf{A} is regular, the method then uses iterative refinement—often intersecting enclosures from the Hansen hull procedure and linear programming relaxations—to compute a tight interval hull of solutions. This yields an enclosure X satisfying S \subseteq X, with optimality in the sense of minimal width for certain inverse-positive systems. The approach reduces dependency effects but may introduce slight overestimation if M is inexact. An important property concerns regularity and unique solvability. Every realization of the system has a unique solution if \mathrm{mid}(\mathbf{A}) is nonsingular and \rho( |\mathrm{mid}(\mathbf{A})^{-1}| \, \mathrm{rad}(\mathbf{A}) ) < 1, with \rho denoting the spectral radius, |\cdot| the entrywise absolute value, and \mathrm{rad}(\mathbf{A}) the radius matrix (half-widths); this condition guarantees that every matrix A \in \mathbf{A} is nonsingular, so each choice of A and b yields exactly one solution, and in the degenerate case of zero radii the solution set reduces to the single point x = \mathrm{mid}(\mathbf{A})^{-1} \mathrm{mid}(\mathbf{b}). For a general interval \mathbf{b}, analogous bounds apply using \mathrm{rad}(\mathbf{b}). Such conditions guarantee bounded width in the solution enclosure.
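The spectral-radius regularity test takes only a few lines with NumPy. The function name `unique_solution_test` is illustrative; the interval matrix is passed as separate lower- and upper-bound arrays, and the criterion shown is the sufficient condition \rho(|\mathrm{mid}(\mathbf{A})^{-1}| \, \mathrm{rad}(\mathbf{A})) < 1 from the text.

```python
import numpy as np

def unique_solution_test(A_lo, A_hi):
    """Sufficient regularity condition for the interval matrix [A_lo, A_hi]:
    rho(|mid(A)^{-1}| rad(A)) < 1, with mid(A) assumed nonsingular."""
    mid = 0.5 * (A_lo + A_hi)
    rad = 0.5 * (A_hi - A_lo)
    M = np.abs(np.linalg.inv(mid)) @ rad
    spectral_radius = np.max(np.abs(np.linalg.eigvals(M)))
    return bool(spectral_radius < 1.0)
```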

Interval Newton Method

The interval Newton method extends the classical Newton-Raphson iteration to interval arithmetic, providing a means to rigorously enclose the zeros of nonlinear functions f: \mathbb{R}^n \to \mathbb{R}^n within an initial interval vector X^0. For a differentiable function f, the interval Newton operator is defined as N(X) = \{ x - f'(X)^{-1} f(x) \mid x \in X \}, where f'(X) is an invertible interval extension of the Jacobian over X, ensuring that if a zero x^* \in X exists, then x^* \in N(X). In practice, this set-valued operator is often computed using the midpoint m(X) of X for efficiency, yielding N(X) = m(X) - f'(X)^{-1} f(m(X)), and the iteration proceeds as X^{k+1} = X^k \cap N(X^k). This approach guarantees that all zeros in the initial box remain enclosed throughout the process, with the method terminating when X^k is sufficiently narrow or empty (indicating no zeros). Under suitable conditions, such as f' satisfying a Lipschitz condition on X (i.e., \|f'(x_1) - f'(x_2)\| \leq L \|x_1 - x_2\| for some L > 0), the interval Newton method exhibits quadratic convergence, meaning the width w(X^{k+1}) satisfies w(X^{k+1}) \leq C [w(X^k)]^2 for some constant C > 0 once the iteration enters a neighborhood of the zero. A key convergence criterion is that the width reduces contractively: if w(N(X)) < \beta w(X) with 0 \leq \beta < 1, then the sequence of intervals nests and converges to the unique zero in X, provided 0 \notin f'(X). This ensures at least linear width reduction per iteration, contrasting with the potentially slower convergence of simpler interval methods. A prominent variant is the Krawczyk method, which modifies the interval Newton operator to improve reliability and enclosure tightness: K(X) = x - f'(x)^{-1} f(x) + [I - f'(x)^{-1} f'(X)] (X - x) for some x \in X, where the point Jacobian f'(x) serves as a preconditioner.
If K(X) \subseteq \operatorname{int}(X), then X contains exactly one zero of f, and the method converges quadratically under the same Lipschitz assumptions on f'. This preconditioned form often requires fewer iterations than the basic interval Newton, especially for ill-conditioned problems, while maintaining guaranteed enclosures. In applications, the interval Newton method is primarily used to enclose roots of nonlinear equations, such as in solving f(x) = 0 where point estimates from the classical iteration may miss multiple or ill-behaved solutions due to rounding errors. Unlike the point-based classical method, which converges quadratically to a single root without enclosure guarantees, the interval version provides validated bounds on all solutions in the domain, making it essential for verified computing in fields like optimization and control systems. For linear systems, it reduces to a special case of the interval methods for linear systems discussed above, but its strength lies in handling nonlinearity iteratively.
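A one-dimensional version of the iteration X^{k+1} = X^k \cap N(X^k) can be sketched as follows. This is an illustrative implementation under simplifying assumptions: the derivative enclosure `fprime_enc` is supplied by the caller and must exclude zero, and exact endpoint arithmetic is used (no directed rounding), so the bounds are not formally verified at the last few ulps.

```python
def interval_newton(f, fprime_enc, X, tol=1e-12, max_iter=100):
    """Contract the interval X = (lo, hi) around a zero of f.

    Requires 0 not in fprime_enc(X); raises ValueError if X contains no zero.
    """
    lo, hi = X
    for _ in range(max_iter):
        c = 0.5 * (lo + hi)
        fc = f(c)
        dlo, dhi = fprime_enc((lo, hi))
        assert dlo > 0 or dhi < 0, "derivative enclosure must exclude 0"
        # N(X) = c - f(c) / f'(X); the quotient endpoints are fc/dlo, fc/dhi
        q_lo, q_hi = sorted([fc / dlo, fc / dhi])
        n_lo, n_hi = c - q_hi, c - q_lo
        # intersect N(X) with the current interval
        new_lo, new_hi = max(lo, n_lo), min(hi, n_hi)
        if new_hi < new_lo:
            raise ValueError("no zero in X")
        if (new_lo, new_hi) == (lo, hi) or new_hi - new_lo < tol:
            return (new_lo, new_hi)
        lo, hi = new_lo, new_hi
    return (lo, hi)

# enclose sqrt(2) as the zero of f(x) = x^2 - 2 on [1, 2]; f'(X) = 2X
root = interval_newton(lambda x: x * x - 2,
                       lambda X: (2 * X[0], 2 * X[1]),
                       (1.0, 2.0))
```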

Bisection and Covering

Bisection methods in interval arithmetic provide a reliable means to enclose the roots of nonlinear equations or functions within specified regions by systematically refining enclosures through subdivision. The process begins with an initial interval or box X over which the function f is evaluated using interval arithmetic to compute an enclosure of its range f(X). Subintervals are discarded if f(X_i) \not\ni 0, indicating no root is present, while promising subintervals are further bisected, typically by halving along the longest edge to maintain balanced refinement. This adaptive strategy prioritizes dimensions with the greatest width, reducing the overall enclosure width more efficiently than uniform subdivision across all dimensions. Algorithms for bisection vary in their splitting criteria: uniform bisection divides the box equally in all directions, suitable for symmetric problems but potentially inefficient for elongated domains, whereas adaptive bisection selects the direction of maximum width w(X) = \max_j (b_j - a_j) for each split, where [a_j, b_j] are the bounds in coordinate j. This is implemented in packages like INTBIS, which combines bisection with existence tests to isolate all solutions within a bounded region. Termination occurs when the width of all remaining subintervals satisfies w(X_i) < \epsilon for a user-specified \epsilon, or after a fixed number of iterations to bound computational effort. The trade-offs in bisection involve balancing computational cost against enclosure tightness; while adaptive methods converge faster in practice for most nonlinear systems, they require more evaluations of the inclusion function, making them up to 200 times slower than point-based methods but guaranteed to find all roots without missing any. Overestimation from interval arithmetic, which can inflate ranges due to dependency issues, motivates finer bisections to achieve tighter bounds. 
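The discard-and-subdivide loop above can be sketched as follows. This is an illustrative sketch, not a library implementation: F is a hand-written interval extension of f(x) = x^2 - 2 (exact on each monotone piece), the names are hypothetical, and outward rounding is omitted.

```python
# Bisection with an interval range test, isolating all real roots of
# f(x) = x^2 - 2 in [-3, 3].

def F(X):
    """Enclosure of {x^2 - 2 : x in X} for X = (lo, hi)."""
    lo, hi = X
    sq_hi = max(lo * lo, hi * hi)
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return sq_lo - 2.0, sq_hi - 2.0

def bisect_roots(X0, eps=1e-6):
    work, kept = [X0], []
    while work:
        X = work.pop()
        f_lo, f_hi = F(X)
        if f_lo > 0.0 or f_hi < 0.0:
            continue                      # 0 not in F(X): no root, discard
        if X[1] - X[0] < eps:
            kept.append(X)                # narrow enough: report as enclosure
        else:
            m = 0.5 * (X[0] + X[1])       # halve the interval
            work.extend([(X[0], m), (m, X[1])])
    return kept

boxes = bisect_roots((-3.0, 3.0))
```

All surviving boxes cluster around the two roots \pm\sqrt{2}; no root can be lost, since a box is discarded only when the range enclosure provably excludes zero.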
Covering techniques extend interval arithmetic to represent non-convex or disconnected sets that cannot be enclosed by a single interval or box, approximating them as unions of multiple intervals or boxes (often termed pavings or subpavings). For a non-convex set S, the covering C = \bigcup C_i satisfies S \subseteq C, where each C_i is a box computed via subdivision and validated using interval evaluations to ensure containment. This is particularly useful in global optimization or viability analysis, where the feasible region may consist of disjoint components, and boxes are retained only if they intersect S based on tests like f(C_i) \ni 0. Algorithms for covering often integrate bisection by recursively partitioning the initial box and selecting sub-boxes that cover viable portions, discarding inconsistent ones to minimize excess volume. For instance, in set inversion problems, the union of surviving boxes forms an outer approximation, with inner approximations obtained by further refining to exclude undecided boxes. Termination mirrors bisection, halting when the maximum box width falls below \epsilon or the covering achieves a desired volume ratio relative to the original box. Trade-offs in covering methods include increased storage for the list of boxes, which grows with the geometric complexity of the non-convex set, versus improved accuracy over single-box enclosures that suffer from excessive overestimation. These approaches ensure rigorous guarantees for non-convex domains, though at higher computational expense, as demonstrated in applications such as dynamical systems analysis, where unions of boxes enclose basins of attraction.
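A paving computation in the spirit of set inversion can be sketched as below. This is an illustrative sketch with hypothetical names: boxes of [-2, 2]^2 are classified against the annulus S = {(x, y) : 1 \leq x^2 + y^2 \leq 2}, a non-convex set no single box represents tightly; outward rounding is omitted.

```python
# Classify boxes as proven-inside, proven-outside (discarded), or boundary,
# producing a subpaving (union of boxes) that covers the annulus S.

def sq(I):
    """Enclosure of {t^2 : t in I}."""
    lo, hi = I
    hi2 = max(lo * lo, hi * hi)
    lo2 = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return lo2, hi2

def pave(box0, eps=0.25):
    inner, boundary = [], []
    work = [box0]
    while work:
        bx, by = work.pop()
        rlo = sq(bx)[0] + sq(by)[0]       # lower bound of x^2 + y^2 on the box
        rhi = sq(bx)[1] + sq(by)[1]       # upper bound of x^2 + y^2 on the box
        if rhi < 1.0 or rlo > 2.0:
            continue                       # box proven outside S: discard
        if rlo >= 1.0 and rhi <= 2.0:
            inner.append((bx, by))         # box proven inside S
        elif max(bx[1] - bx[0], by[1] - by[0]) < eps:
            boundary.append((bx, by))      # undecided but small: keep in cover
        elif bx[1] - bx[0] >= by[1] - by[0]:
            m = 0.5 * (bx[0] + bx[1])      # bisect along the widest direction
            work += [((bx[0], m), by), ((m, bx[1]), by)]
        else:
            m = 0.5 * (by[0] + by[1])
            work += [(bx, (by[0], m)), (bx, (m, by[1]))]
    return inner, boundary

inner, boundary = pave(((-2.0, 2.0), (-2.0, 2.0)))
```

The inner boxes form an inner approximation of S, and inner plus boundary boxes form an outer covering; shrinking eps trades storage for a tighter fit, as described above.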

Applications

Rounding Error Analysis

Interval arithmetic provides a rigorous framework for verifying and bounding rounding errors in floating-point computations by representing values as intervals that enclose all possible results due to rounding. In this approach, arithmetic operations are performed with outward rounding, ensuring that the computed interval contains the exact mathematical result despite finite precision limitations. This method systematically tracks the propagation of uncertainties introduced by rounding at each step, offering guaranteed enclosures rather than probabilistic estimates. A key application is backward error analysis, where the computed floating-point result \mathrm{fl}(f(x)) is interpreted as the exact evaluation of the function on a perturbed input, f(x + \delta x), with the perturbation \delta x bounded by an interval derived from the rounding errors. For instance, in floating-point arithmetic, each operation satisfies \mathrm{fl}(x \oplus y) = (x \oplus y)(1 + \delta) where |\delta| \leq u (the unit roundoff), and interval arithmetic extends this to enclose \delta x within an interval such as [-u |x|, u |x|], providing a tight bound on the backward error. This allows assessment of an algorithm's stability by checking whether small input perturbations lead to acceptable output changes, with widening intervals signaling potential instability. Forward error propagation complements this by enclosing the accumulation of all possible rounding errors throughout a computation, often modeled as a sequence of operations in which errors flow from inputs to outputs. For example, in the summation s = \sum_{i=1}^n x_i, the computed interval [\underline{s}, \overline{s}] contains the true sum together with an error term bounded by an interval representing cumulative rounding error, such as [s - e, s + e] where |e| \leq (n-1) u \max_i |x_i| for naive recursive summation, revealing order-dependent error growth in series like \sum_i (-1)^i / i. 
The relative rounding-error bound for a basic operation is given by |\mathrm{fl}(\mathrm{op}(x,y)) - \mathrm{op}(x,y)| \leq \varepsilon \cdot |\mathrm{op}(x,y)|, enclosed within the interval [-\varepsilon \cdot |\mathrm{op}(x,y)|, \varepsilon \cdot |\mathrm{op}(x,y)|] with \varepsilon typically the unit roundoff, ensuring the enclosure captures all feasible outcomes. Compared to classical error bounds, which require manual derivation and often yield loose estimates for nonlinear or complex codes, interval arithmetic automates the process and produces tighter enclosures, particularly for programs with dependencies or branches, by inherently accounting for worst-case rounding without additional analysis. This makes it especially valuable for verifying the reliability of numerical software, as demonstrated in detecting instabilities that classical methods might overlook due to subtle error accumulation.
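Outward rounding can be emulated in ordinary Python using math.nextafter (Python 3.9+): each bound is nudged one float outward after the operation, a slightly coarser but safe substitute for switching the FPU rounding mode. The sketch below (illustrative names, not a library API) encloses the sum of ten copies of the float 0.1, which is not exactly 1/10.

```python
import math

def add_outward(a, b):
    """Interval addition with each bound nudged one float outward."""
    lo = math.nextafter(a[0] + b[0], -math.inf)   # lower bound rounded down
    hi = math.nextafter(a[1] + b[1], math.inf)    # upper bound rounded up
    return lo, hi

# Enclose 0.1 + 0.1 + ... (ten terms) starting from the degenerate interval [0, 0].
acc = (0.0, 0.0)
for _ in range(10):
    acc = add_outward(acc, (0.1, 0.1))
```

The resulting interval is guaranteed to contain the exact real sum of the ten stored values, and it also brackets 1.0, even though the naively rounded float sum differs from 1.0; the interval's nonzero width makes the accumulated rounding uncertainty visible.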

Tolerance and Design Analysis

Interval arithmetic plays a crucial role in tolerance and design analysis by providing a rigorous framework for propagating uncertainties arising from manufacturing tolerances in engineered systems. In this context, dimensional variables are represented as intervals to capture nominal values along with their allowable deviations, enabling designers to compute guaranteed bounds on performance without relying on probabilistic assumptions. This approach is particularly valuable in mechanical design and manufacturing, where component variations can accumulate and affect fit, function, or reliability. Tolerance stack-up refers to the propagation of these interval uncertainties through sums, products, or other operations on dimensional variables. For instance, the total length of an assembly formed by two components with lengths l_1 \pm t_1 and l_2 \pm t_2 is computed as the interval sum [l_1 + l_2 - (t_1 + t_2), l_1 + l_2 + (t_1 + t_2)], yielding the worst-case range of possible outcomes. Similar interval operations apply to products or more complex geometric functions, ensuring that an enclosure of all feasible values is obtained deterministically. This method contrasts with statistical techniques like root-sum-square (RSS), which assume normal distributions and provide probabilistic estimates; interval arithmetic delivers conservative, guaranteed bounds suitable for safety-critical designs where overestimation is preferable to underestimation. A key tool in this analysis is sensitivity propagation, which quantifies how tolerances in inputs affect the output through the first-order approximation w(f(X_1, \dots, X_n)) \leq \sum_{i=1}^n \left| \frac{\partial f}{\partial x_i} \right| w(X_i). Here, w(\cdot) denotes the width of an interval, f is the design function (e.g., a clearance or gap), and the partial derivatives represent sensitivities at nominal points. This provides an upper bound on the total output variation, facilitating optimization of component specifications to minimize variation while respecting design constraints. 
In mechanical assemblies, interval arithmetic is applied to evaluate fit tolerances, such as clearance or interference in mating parts. Consider a piston-cylinder pair in which the gap g = d_c - d_p (cylinder bore diameter minus piston diameter) has tolerances leading to an interval gap [g_{\min}, g_{\max}]; if d_c \in [50, 50.1] mm and d_p \in [49.8, 49.9] mm, then g \in [0.1, 0.3] mm, ensuring no binding occurs across the tolerance zone. Modal interval extensions handle geometric deviations (e.g., form tolerances) by incorporating small deviation parameters, providing tighter bounds than classical intervals for multidimensional effects. This deterministic analysis outperforms statistical methods in cases with non-normal distributions or unknown correlations, as demonstrated in tolerance charting for stack-up chains. Interval-based tolerance analysis is often integrated into computer-aided design (CAD) software, where tools like tolerance zones and geometric dimensioning and tolerancing (GD&T) standards facilitate automated propagation. Libraries such as those implementing modal intervals allow seamless embedding in design workflows, supporting iterative optimization of tolerances to balance manufacturability and performance.
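The piston-cylinder numbers above can be worked directly with interval subtraction. A minimal sketch without outward rounding, with illustrative variable names:

```python
# Worst-case gap of a piston-cylinder fit via interval subtraction.

def interval_sub(a, b):
    """[a] - [b] = [a_lo - b_hi, a_hi - b_lo]."""
    return a[0] - b[1], a[1] - b[0]

d_c = (50.0, 50.1)            # cylinder bore diameter with tolerance, mm
d_p = (49.8, 49.9)            # piston diameter with tolerance, mm
gap = interval_sub(d_c, d_p)  # worst-case clearance range, here [0.1, 0.3] mm

# First-order sensitivity bound: for f = d_c - d_p both sensitivities are 1,
# so w(gap) <= w(d_c) + w(d_p) = 0.1 + 0.1 = 0.2, attained exactly here.
width = gap[1] - gap[0]
```

The subtraction pairs the smallest bore with the largest piston (and vice versa), which is exactly the worst-case reasoning behind tolerance stack-up.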

Computer-Assisted Proofs

Interval arithmetic facilitates computer-assisted proofs by delivering rigorous enclosures for solutions to equations and inequalities, thereby certifying the validity of mathematical theorems through verified numerical computations. These methods bound errors and uncertainties, ensuring that computed intervals contain all possible exact values, which is essential for proving existence, uniqueness, or nonexistence of solutions in complex systems. In verified computations, interval arithmetic encloses zeros of nonlinear equations or values of definite integrals with guaranteed error bounds, transforming approximate numerical results into provable statements. For instance, the image of an interval domain under a function can be rigorously enclosed to confirm that no root lies within it, or to bound values such as \int_0^{1} e^x \, dx \in [1.71828, 1.718282], which contains the exact value e - 1 \approx 1.718281828. This approach underpins the certification of properties like fixed points or equilibria in partial differential equations. A prominent application appears in the proof of the Kepler conjecture, where interval arithmetic verified hundreds of nonlinear inequalities in geometric optimization problems, establishing that the face-centered cubic packing achieves the maximum density for equal spheres in three dimensions. By computing rigorous bounds on the relevant scoring functions over subdomains, the method confirmed that no denser configuration exists, with each inequality checked via outward rounding to handle floating-point errors. In dynamical systems, interval methods have rigorously verified chaotic behavior, such as the existence of transversal homoclinic points in discrete maps, which imply chaotic dynamics and sensitive dependence on initial conditions. For example, enclosures of orbits in the Hénon map confirmed chaos by isolating invariant sets containing dense orbits, a feat requiring global coverage of phase space that manual analysis could not achieve. 
To establish globality, domain covering techniques partition the search space into subintervals, applying local verifiers exhaustively to ensure complete analysis without gaps. Uniqueness is often certified using interval adaptations of the Kantorovich theorem, which provide existence and isolation balls around approximate solutions by bounding the derivative's Lipschitz constant and the residual via interval evaluations. A fundamental exclusion test in these proofs, used within the interval Newton method, states that if N(X) \cap X = \emptyset—where X is an interval vector and N(X) is its interval Newton image—then X contains no zero of the system. This criterion discards subdomains without solutions, accelerating branch-and-bound searches while maintaining rigor. The impact of these techniques is profound, enabling proofs in analysis and dynamical systems that surpass human computational limits, such as certifying singularity formation in the Muskat problem or global regularity results for the surface quasi-geostrophic equation, where traditional methods fail due to high dimensionality. Recent advances as of 2025 include computer-assisted proofs involving Picard-Fuchs equations and slowly oscillating solutions to delay differential equations, leveraging interval methods for rigorous bounds.
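The integral enclosure mentioned above can be reproduced with elementary interval reasoning. The following sketch (illustrative, not a verified library: it exploits the monotonicity of e^x and omits the outward rounding a fully rigorous version would add) brackets \int_0^1 e^x \, dx between lower and upper Riemann sums.

```python
import math

def enclose_exp_integral(a=0.0, b=1.0, n=500_000):
    """Bracket the integral of e^x over [a, b]: since e^x is increasing,
    e^{a_i} <= e^x <= e^{b_i} on each subinterval [a_i, b_i]."""
    h = (b - a) / n
    lo = hi = 0.0
    for i in range(n):
        lo += math.exp(a + i * h) * h         # infimum on the i-th piece
        hi += math.exp(a + (i + 1) * h) * h   # supremum on the i-th piece
    return lo, hi

lo, hi = enclose_exp_integral()
```

The bracket width is h (e^b - e^a), so refining n tightens the enclosure around the exact value e - 1; a production implementation would also round the partial sums outward so the bracket survives floating-point error.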

Uncertain and Fuzzy Reasoning

Fuzzy intervals extend the concept of interval arithmetic to handle gradual uncertainties through fuzzy set theory, where membership functions assign degrees of belonging to elements rather than binary inclusion. A fuzzy interval is defined by a membership function μ_X(x) that maps real numbers to [0,1], representing the degree to which x belongs to the fuzzy set X. Unlike crisp intervals with hard bounds, fuzzy intervals allow for smooth transitions in membership, enabling the modeling of linguistic variables such as "approximately 5" or "around 10 to 20." The α-cut of a fuzzy interval X, denoted [X]^α, is the crisp set {x | μ_X(x) ≥ α} for α ∈ (0,1], forming a family of nested intervals that fully represent the fuzzy set. This decomposition theorem allows fuzzy arithmetic to leverage interval operations: for a continuous function f, the α-cut of the result is [f(X)]^α = f([X]^α), where f is applied using standard interval arithmetic. This approach ensures the fuzzy output membership is reconstructed from the family of these α-cuts. Operations on fuzzy intervals follow Zadeh's extension principle, which generalizes crisp functions to fuzzy arguments by maximizing membership over preimages: for fuzzy sets A and B, the membership of z in A ⊕ B (e.g., addition) is sup_{x+y=z} min(μ_A(x), μ_B(y)), often computed efficiently via α-cuts and sup-min decomposition to avoid direct optimization. This method preserves the convexity and normality of fuzzy numbers under arithmetic operations such as addition, multiplication, and inversion, though it can introduce overestimation similar to interval arithmetic. In contrast to crisp intervals, which provide guaranteed bounds but no gradation, fuzzy operations yield results with varying membership levels across the support. Type-2 fuzzy intervals address higher-order uncertainties by allowing membership degrees themselves to be fuzzy, forming intervals of possible membership values rather than single points. 
Introduced to model uncertainty in the membership functions themselves, such as inter-expert disagreement on what counts as "high temperature," an interval type-2 fuzzy set has a footprint of uncertainty bounded by upper and lower membership functions. Arithmetic on type-2 fuzzy intervals extends the α-cut decomposition to secondary memberships, applying the extension principle on both primary and secondary levels, often simplified via interval type-2 representations for computational tractability. This provides finer modeling of imprecise parameters compared to type-1 fuzzy intervals. These extensions find applications in decision-making under uncertainty, where fuzzy intervals rank alternatives with partial preferences, and in control systems, such as adaptive fuzzy controllers that adjust gains based on gradual sensor imprecision. Recent developments in semi-type-2 fuzzy structures and interval type-2 methods, blending type-1 and type-2 elements for imprecise parameters, have enhanced uncertainty handling in multi-criteria optimization and decision analysis as of 2025.
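The α-cut decomposition makes fuzzy addition a family of ordinary interval additions. The sketch below (illustrative names, triangular fuzzy numbers written as (l, m, r)) applies interval addition per level and recovers the cuts of the triangular sum.

```python
# Fuzzy addition via α-cuts for triangular fuzzy numbers (l, m, r).

def alpha_cut(tri, alpha):
    """α-cut of the triangular fuzzy number (l, m, r): a crisp interval."""
    l, m, r = tri
    return l + alpha * (m - l), r - alpha * (r - m)

def fuzzy_add(A, B, levels=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Interval addition applied level by level (extension principle via cuts)."""
    cuts = {}
    for a in levels:
        (alo, ahi), (blo, bhi) = alpha_cut(A, a), alpha_cut(B, a)
        cuts[a] = (alo + blo, ahi + bhi)
    return cuts

cuts = fuzzy_add((1.0, 2.0, 3.0), (2.0, 3.0, 4.0))
# the cuts are exactly those of the triangular number (3, 5, 7),
# nesting as α increases toward the core at α = 1
```

The α = 0 cut gives the support [3, 7], the α = 1 cut collapses to the modal value 5, and intermediate cuts interpolate linearly, matching the sup-min result for this shape.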

Historical and Practical Aspects

Development History

The development of interval arithmetic emerged from efforts to rigorously bound numerical errors in early digital computing. Contemporaneous advancements in error analysis, such as James H. Wilkinson's pioneering work on backward error analysis in the 1960s—which provided insights into how floating-point computations perturb problems to nearby ones with exact solutions, as detailed in his 1963 monograph Rounding Errors in Algebraic Processes—complemented these efforts. However, the formal foundations of interval arithmetic were laid earlier by Teruo Sunaga in his 1958 paper on inclusion theory, and further developed by Ramon E. Moore in his 1959 technical report "Automatic Error Analysis in Digital Computation," where he introduced interval representations to enclose all possible rounding errors in arithmetic operations. This work, stemming from Moore's PhD dissertation at Stanford University completed in 1962, shifted focus from statistical error estimates to deterministic enclosures. Moore's seminal 1966 book Interval Analysis established the theoretical framework, defining interval operations and demonstrating their use in automatic error bounding for computational problems, including applications to ordinary differential equations (ODEs) via enclosure methods for initial value problems. In the 1970s, the field evolved from manual error analysis to computational implementations, with significant attention to mitigating the dependency problem—where multiple occurrences of a variable in an expression lead to overly wide intervals—through techniques like the mean value extension and Hansen's forms. Key contributors during this period included Götz Alefeld and Jürgen Herzberger, whose 1983 book Introduction to Interval Computations advanced the algebraic theory and practical algorithms for interval methods. 
The 1980s marked a milestone in software development, with early packages implemented in languages such as Pascal and Fortran to support verified computations, as showcased in international symposia such as the 1980 Interval Mathematics conference. Innovations like E. Kaucher's extended interval arithmetic, introduced in 1980, incorporated improper intervals to enhance enclosure precision and address certain dependency issues. More recently, incremental advances have focused on efficient methods for interval linear systems, such as the generalized interval accelerated overrelaxation (GIAOR) solver proposed in 2023, which improves convergence for interval linear equations while maintaining enclosure guarantees. Ongoing research through 2025 continues to refine these solvers for applications in verified numerics, emphasizing tighter bounds and integration with modern computing paradigms.

IEEE 1788 Standard

The IEEE 1788-2015 standard specifies basic interval arithmetic operations for floating-point numbers, adhering to one of the commonly used mathematical interval models while supporting multiple floating-point formats. It establishes a framework that serves as an intermediary layer between hardware implementations and programming languages, without requiring dedicated hardware support. Published on June 30, 2015, the standard emphasizes portability and reliability in verified numerical computations. Key features of the standard include a defined set of required operations—arithmetic operations, elementary functions, and mixed-type comparisons—along with directed (outward) rounding requirements to ensure enclosure properties. Exceptional behavior is managed through a decoration mechanism rather than global flags, allowing for exception-free computations that track reliability without interrupting execution. The standard is flavor-independent, accommodating different interval models such as the set-based flavor (classical intervals, possibly with unbounded endpoints) or modal variants, provided they meet core compliance criteria. Decorated intervals extend basic intervals by attaching a decoration—a metadata tag indicating the status and reliability of the computation—to each interval result. The five possible decorations, ordered by increasing reliability, are: ill (ill-formed, indicating an undefined construction), trv (trivial, carrying no information), def (the function is defined on the input), dac (defined and continuous), and com (common: defined, continuous, and bounded with finite endpoints). This system enhances trustworthiness in applications like rigorous error analysis by propagating decoration information through operations, enabling users to assess result validity without additional checks. 
Compliance with the standard requires that for input intervals X and Y, the computed result \operatorname{op}(X, Y) encloses the exact range \{\operatorname{op}(x, y) \mid x \in X, y \in Y\}, i.e., \operatorname{Rng}(\operatorname{op}; X, Y) \subseteq \operatorname{op}(X, Y), with implementations expected to keep the excess width of the enclosure small to control overestimation. This enclosure property, combined with directed rounding, guarantees that the result contains all possible values while limiting unnecessary expansion. As of November 2025, no major revisions to IEEE 1788-2015 have been published beyond the related IEEE 1788.1-2017 simplified subset for binary64 floating-point intervals, though a full revision is mandated before the end of 2025 to address evolving needs in verified computing. The standard continues to influence the development of libraries for reliable numerical software.
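The decoration propagation rule can be sketched in a few lines. This is a simplified illustration of the standard's scheme, not a conforming implementation: the ordering follows IEEE 1788, but the helper names and the division rule below are illustrative simplifications.

```python
# IEEE 1788-style decoration propagation: a binary operation's result takes the
# weakest (least reliable) of the operand decorations and the operation's own
# local decoration, so unreliability is never silently upgraded.

ORDER = {"ill": 0, "trv": 1, "def": 2, "dac": 3, "com": 4}

def propagate(dec_x, dec_y, local_dec):
    """Result decoration = minimum of operand and local decorations."""
    return min((dec_x, dec_y, local_dec), key=ORDER.get)

def division_local_decoration(Y):
    """Division is undefined at 0, so a denominator interval containing 0
    downgrades the local decoration to trv (illustrative simplification)."""
    lo, hi = Y
    return "trv" if lo <= 0.0 <= hi else "com"

# [1,2] / [3,4]: defined, continuous, bounded everywhere -> stays com
d1 = propagate("com", "com", division_local_decoration((3.0, 4.0)))
# [1,2] / [-1,1]: denominator contains 0 -> result decorated trv
d2 = propagate("com", "com", division_local_decoration((-1.0, 1.0)))
```

Because the minimum is taken at every step, a single questionable operation (here, dividing by an interval containing zero) permanently marks the downstream results, which is exactly how decorations replace global exception flags.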

Software Implementations

Several software libraries implement interval arithmetic across various programming languages, providing tools for validated numerical computations with guarantees on enclosure and error bounds. These implementations vary in their focus on performance, precision, and compliance with standards like IEEE 1788, enabling applications in scientific computing, optimization, and dynamical systems analysis. In MATLAB, INTLAB serves as a prominent toolbox for high-performance interval arithmetic, supporting real and complex intervals, sparse matrices, and self-validating methods for rigorous error estimation. It is designed for speed, with operations like interval matrix computations achieving near-point-arithmetic performance through optimized rounding-mode switches. Similarly, VSDP extends these capabilities to verified semidefinite-quadratic-linear programming, using interval arithmetic to compute rigorous error bounds on optimal values and enclosures of epsilon-optimal solutions, particularly useful in optimization tasks requiring certified feasibility. For C-based implementations, the MPFI library provides arbitrary-precision interval arithmetic built on the MPFR library for multiple-precision floating-point operations, ensuring portable and correctly rounded computations suitable for high-accuracy enclosures in scientific simulations. In C++, the CAPD library offers a flexible framework for rigorous analysis of dynamical systems, incorporating interval arithmetic with support for double, long double, and multiprecision types to compute validated trajectories and enclosures for initial value problems. In Julia, the JuliaIntervals suite (notably IntervalArithmetic.jl) delivers comprehensive interval arithmetic for validated numerics, including operations on intervals, root-finding, and integration with Julia's generic-programming facilities for efficient handling of arrays and functions. It supports complex intervals and is extensible for specialized solvers. 
Many of these libraries emphasize features like IEEE 1788 compliance for standardized interval operations, support for complex-valued intervals, and integration with validated solvers for tasks such as global optimization and nonlinear equation solving. For instance, INTLAB and the JuliaIntervals packages include tools for enclosing solutions to nonlinear systems with guaranteed inclusion properties. A key challenge in these implementations is balancing computational performance with guaranteed correctness, as interval arithmetic inherently widens enclosures due to dependency problems and rounding errors, often making it 10-100 times slower than point arithmetic for large-scale computations. Optimized techniques, such as centered forms or affine and Taylor models in libraries like INTLAB and CAPD, mitigate this by reducing overestimation while preserving enclosure guarantees. Recent developments by 2025 include enhanced Python integrations, such as the pyinterval library, which provides an algebraically closed interval system on the extended reals with support for arithmetic operations, unions, and intersections, filling gaps in open-source tools for verification and uncertainty propagation. In Rust, the inari crate offers a high-performance, IEEE 1788-conforming implementation of interval arithmetic, leveraging Rust's safety features for reliable enclosures in performance-sensitive applications.
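The dependency problem that all of these libraries must work around is easy to reproduce with a minimal interval type (an illustrative class, not any library's API):

```python
# Evaluating x - x with intervals ignores that both operands are the same
# variable, so the enclosure is wider than the true range.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # worst case over *independent* choices from each operand
        return Interval(self.lo - other.hi, self.hi - other.lo)

x = Interval(1.0, 2.0)
d = x - x          # true range of x - x is [0, 0] ...
# ... but interval subtraction yields [-1, 1]: the dependency is lost.
```

Centered forms, affine arithmetic, and Taylor models all amount to keeping track of such correlations symbolically so that expressions like x - x collapse back toward their true range.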

Conferences and Recent Advances

The International Symposium on Scientific Computing, Computer Arithmetic, and Validated Numerics (SCAN) remains a premier venue for advancing interval arithmetic research, with the 20th edition held September 22-26, 2025, focusing on reliable computing and verified numerics. This event featured sessions on interval methods in tensor frameworks, integrating interval arithmetic into tensor computations for enhanced numerical verification. The 19th edition, in 2023, emphasized verified numerical computations and interval-based enclosures. The IEEE International Symposium on Computer Arithmetic (ARITH) frequently includes tracks on interval arithmetic, as seen in the 32nd edition, held May 4-7, 2025, where presentations addressed robust interval operations amid overflows and floating-point challenges. Earlier, ARITH 2023 highlighted interval arithmetic enhancements for reliability in scientific computing. SIAM workshops have occasionally incorporated interval topics within broader events, such as uncertainty quantification sessions at a 2023 SIAM meeting, though no dedicated interval arithmetic workshops occurred between 2023 and 2025. Recent advances from 2023 to 2025 have extended iterative methods for interval linear systems, with the generalized interval accelerated overrelaxation (GIAOR) method introduced in 2023, converging for strictly diagonally dominant matrices, M-matrices, and H-matrices while reducing computational overhead in numerical examples. In 2025, the generalized interval two-parameter overrelaxation (GITOR) method extended the approach from diagonal to band matrices, ensuring convergence for similar matrix classes and providing spectral-radius bounds for efficient solving of interval linear systems. Innovations in interval representations include the semi-type-2 interval approach, proposed in 2025, which models imprecise parameters by allowing flexible bounds and defining operations with order relations, for applications in optimization and decision-making. 
Asymmetric interval numbers, also from 2025, incorporate expected values to mitigate overestimation in uncertainty modeling, with proven properties for arithmetic operations and theorems enhancing enclosure accuracy. For powered intervals, recent work has refined enclosure formulas through pseudo-complex extensions of interval arithmetic, enabling exact resolutions of related nonlinear systems. Emerging trends integrate interval arithmetic with machine learning for uncertainty quantification, exemplified by interval neural networks in 2025, which apply interval arithmetic to LSTM and other neural-network parameters to generate prediction intervals without probabilistic assumptions, improving reliability in forecasting tasks. In quantum computing, interval methods support verified enclosures for simulations, with 2025 advances in computational performance bounds using interval techniques to predict quantum circuit reliability amid noise. These developments underscore interval arithmetic's role in handling uncertainty across computational domains.

References

  1. [1]
    [PDF] Introduction, Uses, and Resources - Interval Computations
    This article introduces interval arithmetic and its interaction with established math- ematical theory.
  2. [2]
    [PDF] Interval Arithmetic: from Principles to Implementation - MIT Fab Lab
    Interval arithmetic op- erations (addition, subtraction, multiplication and division) are likewise defined mathematically and we provide algo- rithms for ...
  3. [3]
    [PDF] Introduction to - INTERVAL ANALYSIS
    The first applications of interval arithmetic appear in Chapter 3. ... applications, described in various scholarly articles, generally are more practical today.
  4. [4]
    None
    ### Summary of Interval Arithmetic
  5. [5]
    Interval Arithmetic - an overview | ScienceDirect Topics
    Interval arithmetic is defined as a method of representing real numbers by intervals, whose endpoints are floating-point numbers, allowing for the ...
  6. [6]
    [PDF] stanford university
    R. E. MOORE. TECHNICAL REPORT NO. 25. NOVEMBER 15, 1962. PREPARED UNDER ... study the methods based on interval arithmetic are much more general and ...Missing: original paper
  7. [7]
    Complex Interval Arithmetic Using Polar Form | Reliable Computing
    In this paper, the polar representation of complex numbers is extended to complex polar intervals or sectors; detailed algorithms are derived for performin.
  8. [8]
    [PDF] Interval Analysis − Basics
    Elementary functions. ♢ Elementary functions are extended to intervals using the same idea. ♢ Examples: ♢. ♢. ♢ The absolute value of an interval is defined by.
  9. [9]
    [PDF] Interval Arithmetic – An Elementary Introduction and Successful ...
    What is Interval Arithmetic? Interval arithmetic is based on defining the four elementary arithmetic operations on intervals.
  10. [10]
    Interval Arithmetic Library - Boost
    The functions sqrt , log , exp , sin , cos , tan , asin , acos , atan , sinh , cosh , tanh , asinh , acosh , atanh are also defined. There is not much to say; ...
  11. [11]
    None
    Summary of each segment:
  12. [12]
    [PDF] Algorithm for Evaluation of the Interval Power Function of ... - arXiv
    We describe an algorithm for evaluation of the interval extension of the power function of variables x and y given by the expression xy. Our algorithm reduces ...
  13. [13]
    [PDF] Mean Value and Taylor Forms in Interval Analysis. - DTIC
    These theorems provide ways to construct accurate interval inclusions of operators, called mean value and Taylor forms. The forms resulting from expansion ...Missing: Lipschitz | Show results with:Lipschitz
  14. [14]
    Hardware Support for Interval Arithmetic
    The lower bound of the result is computed with rounding downwards and the upper bound with rounding upwards by parallel units simultaneously. The rounding mode.Missing: outward | Show results with:outward
  15. [15]
    [PDF] Interval Computations: Introduction, Uses, and Resources
    Interval arithmetic is an arithmetic defined on sets of intervals, rather than sets of real numbers. A form of interval arithmetic perhaps first appeared in.
  16. [16]
    [PDF] Affine Arithmetic: Concepts and Applications
    Solution: • Write γ(t)=(x(t),y(t)). • Represent t ∈ T with an affine form: t= t0+ t1 ε1, t0 = (b + a)/2, t1 = (b − a)/2. • Compute coordinate functions x ...
  17. [17]
    Centered Forms
    The exact range is if(X)= [0, ]. For attaining realizable outer approximations of f(X), interval arithmetic offers the following approach" From all arithmetic ...
  18. [18]
    Interval analysis: theory and applications - ScienceDirect
    R.E. Moore, Interval Arithmetic and Automatic Error Analysis in Digital Computing, Thesis, Stanford University, October 1962. Google Scholar. [64]. R.E. Moore.
  19. [19]
    [PDF] The wrapping effect, ellipsoid arithmetic, stability and confidence ...
    Abstract. The wrapping effect is one of the main reasons that the application of interval arithmetic to the enclosure of dynamical systems is difficult.
  20. [20]
    [PDF] Prof. W. Kahan's Comments on SORN Arithmetic
    Jul 15, 2016 · Since SORN arithmetic produces only coffins, it cannot avoid the wrapping effect. To attenuate this effect the coffin X0 must be subdivided ...Missing: reduce | Show results with:reduce
  21. [21]
    [PDF] Methods For Interval Linear Equations
    Abstract. We discuss one known and five new interrelated methods for bounding the hull of the solution set of a system of interval linear equations.
  22. [22]
    [PDF] Interval Gaussian Elimination with Pivot Tightening - HTWG Konstanz
    Abstract. We present a method by which the breakdown of the interval Gaussian elimina- tion caused by division of an interval containing zero can be avoided ...Missing: selection | Show results with:selection
  23. [23]
    [PDF] Institute of Computer Science The Hansen-Bliek Optimality Result as ...
    Feb 22, 2011 · Abstract: We give an alternative proof of the Hansen-Bliek optimality result relying on the general theory of interval linear equations.<|control11|><|separator|>
  24. [24]
    [PDF] On Full-Rank Interval Matrices
    ρ. (. |(midA)+| · radA. ) < 1, where ρ(·) means taking the spectral radius of the square matrix. Then A has full rank. Proof. Let us assume ...
  25. [25]
  26. [26]
    [PDF] Some Tests of Generalized Bisection - R. Baker Kearfott
    (If interval arithmetic is implemented in hardware, it runs at roughly the same speed as floating-point arithmetic.) Thus, to get a (conservative) “equivalent” ...
  27. [27]
    [PDF] Interval Robotics, 2011-2012 - ENSTA Bretagne
    Interval arithmetic makes it possible to contract the domains [xi] ... The set W+ is a subpaving (i.e., a union of boxes) ... Figure 8.1: Illustration of the impact, ...
  28. [28]
    [PDF] Capture basin approximation using interval analysis - ENSTA - HAL
    This process gives rise to the so-called interval arithmetic (see ... union of boxes covering K ... union of boxes covering the set K, that is K ⊆ C+.
  29. [29]
    [PDF] Application of Interval Analysis to Error Control. - DTIC
    A formal treatment of interval analysis on finite-precision number spaces may be found in Kulisch [1]. In the case of ordinary computer arithmetic, roundoff ...
  30. [30]
    Basic Issues in Floating Point Arithmetic and Error Analysis
    Rounding up and down are useful for interval arithmetic, which can provide guaranteed error bounds; unfortunately most languages and/or compilers provide no ...
  31. [31]
    [PDF] An Interval Arithmetic for Robust Error Estimation
    Jan 5, 2022 · Interval arithmetic is a simple way to compute a mathematical expression to an arbitrary accuracy, widely used for verifying floating-point ...
  32. [32]
    Tolerance analysis of mechanical assemblies based on modal ...
    Mar 12, 2010 · The uncertainty of dimensions and geometrical form of features due to tolerances is mathematically described using modal interval arithmetic.
  33. [33]
    (PDF) Tolerance analysis of mechanical assemblies based on ...
    PDF | Tolerance analysis is a key analytical tool for estimation of accumulating effects of the individual part tolerances on the design specifications.
  34. [34]
    An Interval Arithmetic Approach to Sensitivity Analysis in Geometric ...
    Dec 1, 1987 · We use interval arithmetic to generate an interval of solution values associated with an interval of parameter values. These results indicate a ...
  35. [35]
    [PDF] Computer-assisted proofs in PDE: a survey - arXiv
    Oct 1, 2018 · We detail some typical examples in Table 1: It is now clear where the interval arithmetic takes place. In order to enclose the value of the.
  36. [36]
    [PDF] A proof of the Kepler conjecture - Annals of Mathematics
    Suppose that we have a discontinuous piecewise linear function on the unit interval [−1,1], as in Figure 4.1. It is continuous, except at x = 0. Figure 4.1: A ...
  37. [37]
    Rigorous chaos verification in discrete dynamical systems
    In this paper we show how interval analysis can be used to calculate rigorously valid enclosures of transversal homoclinic points in discrete dynamical ...
  38. [38]
    Review of Type-1 and Type-2 Fuzzy Numbers - IntechOpen
    Mar 29, 2023 · In recent years, type-2 fuzzy number theory has been developing and is being studied in principle as well as in application. Applications to ...
  39. [39]
    IEEE 1788-2015 - IEEE SA
    Jun 30, 2015 · This standard specifies basic interval arithmetic (IA) operations selecting and following one of the commonly used mathematical interval models.
  40. [40]
    [PDF] Introduction to the IEEE 1788-2015 Standard for Interval Arithmetic
    The IEEE 1788-2015 standard for interval arithmetic, developed from 2008-2015, uses levels, different models, and a decoration system. Interval arithmetic is ...
  41. [41]
    [PDF] Introduction to the IEEE 1788-2015 Standard for Interval Arithmetic
    Jul 23, 2017 · the standard. Anyway, the standard will incur a revision for 2025. Interval arithmetic in a nutshell. Precious features of interval arithmetic.
  42. [42]
    [PDF] The Forthcoming IEEE1788 Standard for Interval Arithmetic
    IEEE Working Group P1788 “A standard for interval arithmetic”. Officers ... Exception handling—decorations. So one needs a mechanism to track whether a ...
  43. [43]
    IEEE 1788.1-2017 - IEEE SA
    Jan 31, 2018 · IEEE Std 1788.1-2017 specifies interval arithmetic operations based on intervals whose endpoints are IEEE Std 754™-2008 binary64 floating-point numbers.
  44. [44]
    [PDF] About the ”accurate mode” of the IEEE 1788-2015 standard for ...
    As the revision of the IEEE 1788-2015 must be performed before the end of 2025, these remarks are intended as food for thought and will be brought to the ...
  45. [45]
    INTLAB - INTerval LABoratory - TUHH
    INTLAB - Fast INTerval LABoratory. Matlab Octave toolbox. Self-validating methods. Interval arithmetic for real and complex data. Full and sparse matrices.
  46. [46]
    Home · IntervalArithmetic.jl - JuliaIntervals
    IntervalArithmetic.jl is a Julia package for validated numerics in Julia. All calculations are carried out using interval arithmetic where quantities are ...
  47. [47]
    Getting started — VSDP 2020 manual
    VSDP provides functions for computing rigorous error bounds of the true optimal value, verified enclosures of epsilon-optimal solutions, and verified ...
  48. [48]
    CAPD library
    interval - template-based interval arithmetic, supports double, long double and multiprecision. It can be extended to any arithmetic type for which we can ...
  49. [49]
    CAPD::DynSys: A flexible C++ toolbox for rigorous numerical ...
    We present the CAPD::DynSys library for rigorous numerical analysis of dynamical systems. The basic interface is described together with several interesting ...
  50. [50]
    1788-2015 - IEEE Standard for Interval Arithmetic
    Jun 30, 2015 · This standard specifies basic interval arithmetic (IA) operations selecting and following one of the commonly used mathematical interval models.
  51. [51]
    [PDF] implementation and evaluation of interval arithmetic software - DTIC
    Dec 10, 2024 · Interval arithmetic should be used (as any tool would be) where accuracy is of critical importance. Interval arithmetic is expensive to use in ...
  52. [52]
    [PDF] A Cross-Platform Benchmark for Interval Computation Libraries
    While the formal correctness of interval computation has been proven [22], ensuring that an implementation of interval arithmetic is correct is a daunting ...
  53. [53]
    PyInterval — Interval Arithmetic in Python — PyInterval 1.2.1.dev0 ...
    This library provides a Python implementation of an algebraically closed interval system on the extended real number set. Interval objects, as defined in this ...
  54. [54]
    PyInterval — Interval arithmetic in Python - GitHub
    This library provides a Python implementation of an algebraically closed interval system on the extended real number set.
  55. [55]
    unageek/inari: A Rust implementation of interval arithmetic ... - GitHub
    inari is a Rust implementation of interval arithmetic. It conforms to IEEE Std 1788.1-2017. It also implements a subset of IEEE Std 1788-2015.
  56. [56]
    SCAN2025 // University of Oldenburg
    20th International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computations. Conference Dates: September 22-26, 2025.
  57. [57]
    [PDF] Towards Interval Arithmetic in TensorFlow
    SCAN 2025, Oldenburg, Germany. Towards Interval Arithmetic in TensorFlow: A Comparison of Approaches. Page 2. •. Introduction & Motivation. •. Interval ...
  58. [58]
    Preface | Acta Cybernetica
    May 23, 2023 · The 19th International Symposium on Scientific Computing, Computer Arithmetic and Verified Numerical Computation (SCAN) was originally ...
  59. [59]
    ARITH 2025: Main page
    Welcome to the web site of the 32nd IEEE International Symposium on Computer Arithmetic, to be held in El Paso, TX, USA. May 4-7, 2025.
  60. [60]
    [PDF] Making Interval Arithmetic Robust to Overflow - ARITH 2023
    Compared to Mathematica, our movability-flag-enhanced interval arithmetic library resolves 60.3% more challenging inputs, returns 7.6× fewer completely ...
  61. [61]
    Conferences & Events - SIAM.org
    SIAM conferences focus on timely topics in applied and computational mathematics and applications. These conferences provide a place for members to exchange ...
  62. [62]
    Generalized interval AOR method for solving interval linear equations
    In this paper we introduce a generalization of IAOR method and termed it as generalized interval accelerated overrelaxation (GIAOR) method.
  63. [63]
    Generalization of the interval TOR method for solving interval linear ...
    The generalized interval TOR method (GITOR) uses interval band matrices to solve interval linear systems, and converges under certain conditions.
  64. [64]
    Developments of Semi-Type-2 Interval Approach with Mathematics ...
    This paper aims to introduce a new interval approach called the Semi-Type-2 interval to represent imprecise parameters in uncertain decision-making.
  65. [65]
    Asymmetric interval numbers: A new approach to modeling uncertainty
    Jan 15, 2025 · The paper defines basic arithmetic operations for A I N s , provides proofs of their properties, and presents theorems on symmetry and asymmetry ...
  66. [66]
    Non-Linear Extension of Interval Arithmetic and Exact Resolution of ...
    Jan 24, 2025 · This paper introduces a novel extension of interval arithmetic through the formulation of pseudo- complex numbers, a mathematical framework ...
  67. [67]
    Introducing Interval Neural Networks for Uncertainty-Aware System ...
    By employing interval arithmetic throughout the network, INNs can generate Prediction Intervals (PIs) that capture target coverage effectively.
  68. [68]
    [PDF] Computational Performance Bounds Prediction in Quantum ... - arXiv
    Jul 22, 2025 · Abstract—Quantum computing has significantly advanced in recent years, boasting devices with hundreds of quantum bits.