Numerical integration

Numerical integration, also known as numerical quadrature, refers to a collection of algorithms designed to approximate the value of a definite integral \int_a^b f(x) \, dx for a given function f(x) over an interval [a, b], particularly when the integral lacks a closed-form antiderivative or when the integrand is defined only numerically, such as through experimental data or simulations. These methods approximate the integral by dividing the interval into subintervals and summing weighted evaluations of the integrand at selected points, leveraging geometric interpretations like rectangles, trapezoids, or higher-order polynomials to estimate areas under the curve. The importance of numerical integration stems from its broad applicability in scientific and engineering disciplines, where analytical solutions are often infeasible due to the complexity of the integrand or the need for rapid computation in real-time applications. Common techniques include the trapezoidal rule, which approximates the integral over a single interval as \frac{b-a}{2} [f(a) + f(b)], and its composite variant over multiple subintervals to improve accuracy; Simpson's rule, which uses quadratic interpolation for even higher precision, yielding an error proportional to h^4 where h is the subinterval width; and Gaussian quadrature, a more advanced method that optimally chooses evaluation points and weights to achieve exact results for polynomials up to a certain degree. These approaches, the first two rooted in the Newton-Cotes formulas, ensure convergence to the true integral as the number of subintervals increases, with error estimates guiding practical implementation. In computational practice, numerical integration facilitates solutions to problems in physics, statistics, and engineering, such as evaluating probabilities or simulating physical systems, by enabling efficient approximations on digital computers even for irregular or discontinuous functions. Adaptive strategies, which refine subintervals based on local error estimates, further enhance reliability, while specialized variants like Monte Carlo integration address high-dimensional integrals prevalent in modern simulations. Overall, these methods balance computational cost and accuracy, forming a cornerstone of numerical analysis.

Fundamentals

Definition and Motivation

Numerical integration, also known as quadrature, is the process of approximating the value of a definite integral \int_a^b f(x) \, dx using a discrete weighted sum of function evaluations at specific points, typically in the form \sum_{i=1}^n w_i f(x_i), where x_i are nodes and w_i are weights. This approach is essential when the antiderivative of f(x) cannot be expressed in terms of elementary functions, such as for integrals involving special functions like the error function \text{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} \, dt or non-elementary expressions like \int_0^1 \cos(x^3) \, dx. The mathematical motivation stems from the fact that while exact antiderivatives exist for polynomials and basic transcendental functions, many real-world problems involve functions derived from experimental data or complex models without closed-form solutions, necessitating numerical approximation to obtain practical results. In practice, numerical integration is widely applied in physics for computing quantities like work done by variable forces or probability densities in quantum mechanics, in engineering for signal processing tasks such as convolution in image analysis, and in finance for option pricing under stochastic volatility models where integrals over probability distributions must be evaluated numerically. From a foundational perspective, the definite integral \int_a^b f(x) \, dx is defined as the limit of Riemann sums \sum_{i=1}^n (x_{i} - x_{i-1}) f(\xi_i) as the partition size approaches zero, for sample points \xi_i in each subinterval [x_{i-1}, x_i]; numerical methods approximate this limit using finite partitions to balance computational cost and accuracy. A simple motivating example is the trapezoidal rule, derived by averaging Riemann sums: for a single interval [a, b] treated as one subinterval, the left Riemann sum is (b-a) f(a), the right is (b-a) f(b), and averaging them yields \frac{b-a}{2} [f(a) + f(b)], which geometrically interprets the area under f as a trapezoid and provides an exact result for linear functions. This rule extends to multiple subintervals, illustrating how numerical integration refines the Riemann limit for broader applicability.
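To make the averaging derivation concrete, the minimal sketch below (assuming NumPy; the integrand e^x on [0, 1], with exact value e - 1, is purely illustrative) computes the left and right Riemann sums and confirms that their average coincides with the composite trapezoidal value:

```python
import numpy as np

f, a, b, n = np.exp, 0.0, 1.0, 100
x = np.linspace(a, b, n + 1)
h = (b - a) / n

left = h * f(x[:-1]).sum()        # left Riemann sum
right = h * f(x[1:]).sum()        # right Riemann sum
trap = 0.5 * (left + right)       # their average is the trapezoidal rule

print(left, right, trap, np.e - 1)
```

For this monotone integrand the left and right sums bracket the true value, while their average cancels the leading error term.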

Historical Overview

The origins of numerical integration trace back to ancient approximations of areas and volumes, with Archimedes (c. 250 BCE) employing the method of exhaustion to bound the value of π by inscribing and circumscribing polygons around a circle, establishing a precursor to quadrature techniques that relied on iterative geometric refinements. In the 14th century, Indian mathematician Madhava of Sangamagrama developed infinite series expansions for π and trigonometric functions, enabling precise numerical approximations through partial sums, which anticipated later European developments in series-based integration. During the 17th century, the method of indivisibles emerged as a key advancement, formalized by Bonaventura Cavalieri in 1635, who treated areas as sums of infinitely thin lines to compute quadratures without explicit limits, influencing subsequent approaches. John Wallis built on this in 1655 with his infinite product formula for π, derived via indivisibles to find the quadrature of curves like hyperbolas and cycloids, marking an early systematic use of infinite processes for integration. Isaac Newton's divided-difference interpolation formula, introduced in 1687, provided a foundation for polynomial-based quadrature by fitting curves to discrete points, paving the way for closed-form approximations of integrals. In the 18th century, Thomas Simpson formalized Simpson's rule in 1743, a parabolic method for numerical integration that improved accuracy over linear approximations, though its core idea echoed earlier work by Newton and others. The 19th century saw Carl Friedrich Gauss advance quadrature significantly in the early 1800s, developing Gaussian quadrature rules around 1814 that optimally chose nodes and weights for polynomial exactness, integrating least-squares principles from his astronomical computations. The 20th century brought computational innovations, with Nicholas Metropolis and Stanislaw Ulam introducing Monte Carlo methods in 1949 for probabilistic integration via random sampling, initially applied to neutron diffusion but broadly impactful for high-dimensional problems. Werner Romberg proposed his extrapolation technique in 1955, enhancing accuracy through Richardson extrapolation for smoother error reduction. Charles Clenshaw and A. R. Curtis developed Clenshaw-Curtis quadrature in 1960, using Chebyshev points for efficient spectral integration, particularly suited to analytic functions. Sergey Smolyak introduced sparse grid methods in 1963, constructing multidimensional quadrature via combinations of tensor products to mitigate the curse of dimensionality in high dimensions. Post-1950s, the advent of digital computers accelerated these techniques, enabling adaptive and automated implementations across scientific computing.

One-Dimensional Methods

Step Function Quadrature Rules

Step function quadrature rules approximate the integrand f(x) over an interval [a, b] by partitioning the domain into n subintervals of equal width h = (b - a)/n and representing f(x) as a piecewise constant function, or step function, that takes a constant value on each subinterval. This approach, rooted in the definition of the Riemann integral, treats the integral as the limit of sums of areas of rectangles whose heights are determined by sampled values of f. The rectangle rule, also known as the rectangular method, encompasses several variants based on the choice of sampling point within each subinterval to set the constant height. In the left rectangle rule, the height on the i-th subinterval [x_{i-1}, x_i] is f(x_{i-1}), yielding the approximation \sum_{i=1}^n h f(x_{i-1}). The right rectangle rule uses f(x_i) instead, giving \sum_{i=1}^n h f(x_i). The midpoint rule, often preferred for its improved accuracy, samples at the midpoint x_i^* = x_{i-1} + h/2, resulting in the composite formula h \sum_{i=1}^n f(a + (i - 1/2)h). These rules derive directly from Riemann sums, where the step function approximation corresponds to choosing tags (sampling points) in each subinterval of a partition. As the mesh size h approaches zero (i.e., n \to \infty), the approximation converges to the definite integral \int_a^b f(x) \, dx for any Riemann-integrable f, provided the partition is regular and the tags are consistently chosen. For the composite rectangle rule with n subintervals, the left and right variants exhibit first-order accuracy, with a global error bound of |E| \leq \frac{(b-a)^2}{2n} \max_{x \in [a,b]} |f'(x)|, assuming f is continuously differentiable. In contrast, the midpoint rule achieves second-order accuracy, with local error O(h^3) per subinterval due to the centering of the sample point, leading to a global error of O(h^2). The primary advantages of step function quadrature rules lie in their simplicity, requiring only function evaluations at n points, and their ease of implementation and parallelization across subintervals. However, they suffer from low accuracy for smooth functions, as the piecewise constant approximation ignores curvature, resulting in larger truncation errors compared to higher-order methods. As an illustrative example, consider approximating \int_0^\pi \sin(x) \, dx = 2 using the midpoint rule with n=4 subintervals, so h = \pi/4. The midpoints are \pi/8, 3\pi/8, 5\pi/8, 7\pi/8, and the approximation is (\pi/4) [\sin(\pi/8) + \sin(3\pi/8) + \sin(5\pi/8) + \sin(7\pi/8)] \approx 2.05234, with an absolute error of about 0.052. This demonstrates the rule's reasonable performance even for modest n, though refinement is needed for higher precision.
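The composite midpoint formula is only a few lines of code; this minimal sketch (assuming NumPy; the helper name midpoint_rule is illustrative) reproduces the \sin example above:

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    """Composite midpoint rule with n equal subintervals."""
    h = (b - a) / n
    mids = a + (np.arange(n) + 0.5) * h   # midpoints a + (i - 1/2) h
    return h * f(mids).sum()

# n = 4 gives ~2.05234 for the integral of sin on [0, pi] (exact value 2)
print(midpoint_rule(np.sin, 0.0, np.pi, 4))
```

Doubling n repeatedly shrinks the error by roughly a factor of four per step, consistent with the O(h^2) global rate.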

Interpolation-Based Quadrature Rules

Interpolation-based rules approximate the integrand f(x) over a finite interval [a, b] by a polynomial p(x) of degree at most n-1 that interpolates f at n distinct nodes x_i, and then compute the exact integral of p(x). This approach yields an interpolatory quadrature formula \int_a^b f(x) \, dx \approx \sum_{i=1}^n w_i f(x_i), where the weights w_i are the integrals of the Lagrange basis polynomials associated with the nodes. Unlike zeroth-order step function approximations, interpolation of degree at least 1 provides higher accuracy for smooth functions by capturing the slope and curvature of the integrand. Newton-Cotes formulas derive from this interpolation framework using equally spaced nodes x_i = a + i h with h = (b-a)/n. Closed Newton-Cotes rules include the endpoints a and b as nodes, while open rules exclude them and use interior points. The two-point closed formula is the trapezoidal rule: \int_a^b f(x) \, dx \approx \frac{h}{2} [f(a) + f(b)], with error O(h^3) proportional to the second derivative. The three-point closed formula is Simpson's rule: \int_a^b f(x) \, dx \approx \frac{h}{3} [f(a) + 4f(a+h) + f(b)], achieving error O(h^5) involving the fourth derivative. Higher-order Newton-Cotes rules exist, but for large n, they suffer from the Runge phenomenon, where the interpolating polynomial oscillates wildly near the endpoints, leading to exponential error growth even for smooth f. Gaussian quadrature improves precision by choosing nodes as the roots of orthogonal polynomials tailored to the interval and weight function, rather than equally spaced points. For the standard interval [-1, 1] with weight w(x) = 1, Gauss-Legendre quadrature uses the roots \xi_i of the nth Legendre polynomial P_n(x) as nodes, with weights w_i = \frac{2}{(1 - \xi_i^2) [P_n'(\xi_i)]^2}. This rule is exact for all polynomials of degree up to 2n-1, roughly doubling the precision of interpolatory rules based on n equally spaced points, with error E_n(f) = \frac{2^{2n+1} (n!)^4}{(2n+1) [(2n)!]^3} f^{(2n)}(\xi) for some \xi \in (-1,1). To apply Gauss-Legendre quadrature on an arbitrary interval [a, b], use the affine transformation x = \frac{b-a}{2} t + \frac{a+b}{2} with t \in [-1,1], yielding \int_a^b f(x) \, dx \approx \frac{b-a}{2} \sum_{i=1}^n w_i f\left( \frac{b-a}{2} \xi_i + \frac{a+b}{2} \right). Variants like Gauss-Lobatto quadrature incorporate the endpoints as nodes (with interior nodes at the roots of P_{n-1}'(x) = 0), exact for polynomials up to degree 2n-3, and useful when endpoint evaluations are required. The following table lists nodes \xi_i and weights w_i for Gauss-Lobatto quadrature on [-1,1] for small n:
n | Nodes \xi_i | Weights w_i
2 | -1, 1 | 1, 1
3 | -1, 0, 1 | \frac{1}{3}, \frac{4}{3}, \frac{1}{3}
4 | -1, -\sqrt{\frac{1}{5}}, \sqrt{\frac{1}{5}}, 1 | \frac{1}{6}, \frac{5}{6}, \frac{5}{6}, \frac{1}{6}
5 | -1, -\frac{\sqrt{21}}{7}, 0, \frac{\sqrt{21}}{7}, 1 | \frac{1}{10}, \frac{49}{90}, \frac{32}{45}, \frac{49}{90}, \frac{1}{10}
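The affine map makes an n-point Gauss-Legendre rule on any finite interval a one-liner; this sketch (assuming NumPy, whose leggauss routine returns the nodes and weights on [-1, 1]) checks the degree-(2n-1) exactness claim with n = 5 points on a degree-9 monomial:

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature on [a, b] via the affine map."""
    xi, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * xi + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(w, f(x))

# Exact for polynomials of degree <= 2n-1: five points integrate x^9 exactly
print(gauss_legendre(lambda x: x**9, 0.0, 1.0, 5))  # 0.1 up to roundoff
```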

Adaptive Quadrature Algorithms

Adaptive quadrature algorithms address the limitations of fixed-grid methods when integrands exhibit non-uniform behavior, such as rapid variations or singularities within the integration interval, by dynamically refining the partition in regions where higher accuracy is required. This adaptivity is essential for functions with near-singular behavior at the endpoints, where a uniform partition would waste computational effort on smooth areas while failing to capture details in problematic regions. The basic strategy involves recursive subdivision of the interval, typically starting with an initial approximation using a higher-order rule like Simpson's on the full interval and then bisecting subintervals to compare two estimates of differing accuracy, such as a higher-order rule against a lower-order one, on each half. If the difference between these approximations exceeds a local tolerance, further subdivision occurs, allowing the algorithm to concentrate evaluations where the function changes most abruptly. Error estimation in these algorithms relies on the difference between the coarse and refined approximations on a subinterval, which provides a local error indicator approximately proportional to the actual error; specifically, when Simpson's rule on a subinterval is compared with its two-half refinement, the local error of the refined value is estimated as |I_{S_2} - I_{S_1}| / 15, where I_{S_1} and I_{S_2} are the coarse and refined Simpson integrals, respectively, and refinement proceeds if this exceeds the allocated tolerance. This approach ensures that the error is controlled locally before aggregating to meet a global requirement. Key algorithms include the adaptive Simpson method, introduced by Kuncir in 1962, which employs recursive bisection with Simpson's rule for efficiency on moderately smooth functions. Another prominent method is adaptive Gauss-Kronrod quadrature, which pairs an n-point Gaussian rule, exact to degree 2n-1, with an embedded (2n+1)-point Kronrod extension of substantially higher degree that reuses the Gaussian nodes, obtaining both the approximation and error estimate without discarding any function evaluations, making it particularly efficient for higher accuracy needs. Global error control is achieved by summing the local error estimates across all subintervals and ensuring the total does not exceed a user-specified tolerance \epsilon, often with a safety factor to account for estimation inaccuracies. This allows the algorithm to balance refinement across the interval while preventing over-refinement in isolated areas. Implementation considerations include defining clear stopping criteria, such as terminating when local errors fall below a prescribed fraction of the global tolerance or when a maximum recursion depth is reached to avoid infinite loops near intractable singularities. Additionally, function evaluations are cached during recursion to minimize redundant computations, enhancing overall efficiency. An illustrative example is the integration of f(x) = 1/\sqrt{x} over (0, 1], where the singularity at x=0 demands fine subdivision near the lower limit; an adaptive algorithm might initially approximate the full interval coarsely but recursively refine the leftmost subintervals, achieving the exact value of 2 with far fewer evaluations than a uniform grid of equivalent accuracy.
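A compact recursive implementation of the adaptive Simpson strategy described above, using the |I_{S_2} - I_{S_1}|/15 indicator (a minimal sketch; the function names are illustrative, and evaluations at subinterval endpoints are passed down to avoid recomputation):

```python
def adaptive_simpson(f, a, b, tol):
    """Adaptive Simpson quadrature with recursive bisection."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m, lm, rm = 0.5 * (a + b), 0.75 * a + 0.25 * b, 0.25 * a + 0.75 * b
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        # |S2 - S1| / 15 estimates the error of the refined value S2
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol) +
                recurse(m, b, fm, frm, fb, right, 0.5 * tol))

    fa, fm, fb = f(a), f(0.5 * (a + b)), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

# Subdivisions concentrate near x = 0, where sqrt(x) has unbounded slope
print(adaptive_simpson(lambda x: x**0.5, 0.0, 1.0, 1e-8))  # ~2/3
```

(The 1/\sqrt{x} example from the text needs an open-endpoint variant, since this closed rule evaluates f at x = 0.)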

Extrapolation Methods

Extrapolation methods in numerical integration accelerate the convergence of a sequence of approximations by combining them to cancel leading-order error terms, assuming the error expansion is known asymptotically. These techniques exploit the asymptotic error behavior of basic rules, such as the trapezoidal rule, to achieve higher-order accuracy without deriving new interpolation formulas. Richardson extrapolation, introduced by Lewis Fry Richardson, forms the foundation of these methods. For a quadrature approximation I_h to the integral I with asymptotic error I - I_h \sim c h^k as the step size h \to 0, where k is the order of the leading error term, two approximations I_h and I_{h/2} can be combined to eliminate the h^k term: I \approx \frac{2^k I_{h/2} - I_h}{2^k - 1}. This yields an improved estimate with error of order O(h^{k+1}) or higher, depending on the full error expansion. Repeated application to successively refined step sizes generates a hierarchy of higher-order approximations. Romberg integration applies Richardson extrapolation specifically to the trapezoidal rule, whose error expansion contains only even powers of h. Developed by Werner Romberg in 1955, the method constructs a triangular tableau where entries R_{m,n} are computed recursively from trapezoidal approximations T_m at step size h / 2^m: R_{m,n} = \frac{4^n R_{m,n-1} - R_{m-1,n-1}}{4^n - 1}, \quad R_{m,0} = T_m. The diagonal elements R_{m,m} converge to the integral with order O(h^{2m+2}), achieving exponential convergence for sufficiently smooth integrands. This tableau efficiently reuses prior computations, making it computationally economical. The Euler-Maclaurin formula provides the theoretical basis for these extrapolations by expanding the error in the trapezoidal rule as a series involving Bernoulli numbers and derivatives of the integrand. The formula relates a sum \sum_{i=a}^b f(i) to the integral \int_a^b f(t) \, dt plus correction terms: \sum_{i=a}^b f(i) = \int_a^b f(t) \, dt + \frac{f(a) + f(b)}{2} + \sum_{k=1}^m \frac{B_{2k}}{(2k)!} \left( f^{(2k-1)}(b) - f^{(2k-1)}(a) \right) + R_m, where B_{2k} are Bernoulli numbers and R_m is the remainder. In quadrature, this expansion justifies the even-powered error terms in Romberg integration, allowing extrapolation to cancel them successively. The Bulirsch-Stoer method extends extrapolation using rational function approximations instead of polynomials, offering improved stability for certain problems. Originally formulated for ordinary differential equations but adaptable to quadrature, it performs deferred corrections by fitting a rational interpolant to a sequence of approximations at halved step sizes, then extrapolating to the limit. This approach mitigates issues like the Runge phenomenon in polynomial-based methods and achieves high orders with fewer evaluations. These methods offer significant advantages, including attaining high-order accuracy without constructing high-degree interpolants, which can be unstable, and requiring only evaluations of the integrand at uniform points. However, they assume the integrand is sufficiently smooth for the error expansion to hold and require prior knowledge of the error order, limiting applicability to non-smooth or oscillatory functions. As an illustrative example, consider Romberg integration for \int_0^1 e^{-x^2} \, dx \approx 0.746824, using the trapezoidal rule as the base. The tableau below shows successive refinements, with diagonal entries converging rapidly:
Steps | Step size | R_{m,0} | R_{m,1} | R_{m,2} | R_{m,3} | R_{m,4}
1 | 1.0 | 0.68394 | | | |
2 | 0.5 | 0.73137 | 0.74718 | | |
4 | 0.25 | 0.74298 | 0.74686 | 0.74683 | |
8 | 0.125 | 0.74488 | 0.74682 | 0.746824 | 0.746824 |
16 | 0.0625 | 0.74576 | 0.746824 | 0.746824 | 0.746824 | 0.746824
The diagonal entries agree with the exact value to six digits after only 17 evaluations of the integrand, since each trapezoidal refinement reuses all previously computed points.
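The tableau construction translates directly into code; this sketch (assuming NumPy; the helper name romberg is illustrative) builds the triangular array with the recursion above, reusing prior trapezoidal evaluations at each refinement:

```python
import numpy as np

def romberg(f, a, b, levels):
    """Romberg tableau from nested trapezoidal approximations."""
    R = np.zeros((levels, levels))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))
    for m in range(1, levels):
        h *= 0.5
        # Refined trapezoid reuses prior points; only new midpoints are evaluated
        mids = a + h * (2 * np.arange(2 ** (m - 1)) + 1)
        R[m, 0] = 0.5 * R[m - 1, 0] + h * f(mids).sum()
        for n in range(1, m + 1):
            R[m, n] = (4**n * R[m, n - 1] - R[m - 1, n - 1]) / (4**n - 1)
    return R[levels - 1, levels - 1]

# Reproduces the tableau above for exp(-x^2) on [0, 1]
print(romberg(lambda x: np.exp(-x**2), 0.0, 1.0, 5))  # ~0.746824
```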

Error Analysis and Estimation

In numerical integration, errors arise primarily from two sources: truncation errors, which stem from the discretization of the problem, that is, from approximating the integral by a finite sum, and roundoff errors, which result from the finite precision of floating-point arithmetic in computations. Truncation errors are the dominant concern in most analyses of quadrature rules, as they depend on the smoothness of the integrand and the order of the method, whereas roundoff errors become significant only for very fine discretizations or ill-conditioned problems. A priori error estimation provides bounds on the truncation error before computation, typically requiring knowledge of higher derivatives of the integrand. For the composite trapezoidal rule applied to \int_a^b f(x) \, dx with step size h = (b-a)/n, the error satisfies |E| \leq \frac{(b-a) h^2}{12} \max_{a \leq x \leq b} |f''(x)|, assuming f \in C^2[a,b]. This bound arises from Taylor expansion and the mean value theorem, highlighting the second-order accuracy of the method. For Gaussian quadrature with n nodes (exact for polynomials up to degree 2n-1) on [a,b] with unit weight function, the error is bounded by |E| \leq \frac{(b-a)^{2n+1} (n!)^4}{((2n)!)^3 (2n+1)} \max_{a \leq x \leq b} |f^{(2n)}(x)|, derived from the remainder of Hermite interpolation at the roots of the orthogonal polynomials; this reflects the high precision for smooth functions but dependence on the (2n)-th derivative. Such estimates guide the choice of n to achieve a desired accuracy, provided derivative bounds are available or estimable. The Peano kernel theorem offers a general framework for representing quadrature errors without explicit derivative bounds in the leading term. For a rule exact for polynomials of degree at most n, the error for f \in C^{n+1}[a,b] is E(f) = \frac{1}{n!} \int_a^b f^{(n+1)}(t) K_n(t) \, dt, where the Peano kernel K_n(t) = E[(x-t)_+^n] is computed by applying the quadrature error functional E to the truncated power function (x-t)_+^n = (x-t)^n for x > t and 0 otherwise. If K_n(t) does not change sign, the mean value theorem for integrals yields E(f) = \frac{f^{(n+1)}(\xi)}{n!} \int_a^b K_n(t) \, dt for some \xi \in [a,b], enabling sharp bounds via \|K_n\|_\infty. This representation unifies error analysis across rules, for example yielding the trapezoidal bound when n=1. A posteriori estimation computes error indicators after evaluating the quadrature sum, often without requiring derivative information, making it practical for adaptive refinement or reliability checks. Residual-based approaches estimate the error by approximating the difference between the exact integral and the computed sum using auxiliary computations, such as divided-difference tests on interpolatory rules. Deferred correction methods iteratively refine the approximation by adding corrections based on lower-order rules, with the correction term serving as an error proxy; for instance, in Newton-Cotes rules, the difference between successive refinements bounds the error within a prescribed tolerance under smoothness assumptions. These techniques ensure solutions are near-optimal relative to spline interpolants on the same partition. Conservative error bounds focus on worst-case scenarios, often using Chebyshev polynomials to achieve minimax properties. The scaled Chebyshev polynomial T_{n+1}\left( \frac{2x - a - b}{b-a} \right), divided by its leading coefficient 2^n, minimizes the maximum deviation from zero among monic polynomials of degree n+1 on [a,b]. Truncating the Chebyshev series of f after n+1 terms yields a near-minimax approximation p_n, whose error \epsilon_n(x) \approx d_{n+1} T_{n+1}( \cdot ) satisfies \|\epsilon_n\|_\infty \leq (1 + L_n) |d_{n+1}| 2^{-n}, with L_n < 1 + 2^{1-n} bounding the deviation from the true minimax approximation. In quadrature, this informs bounds for interpolatory rules by relating the integration error to the approximation error weighted by the interval length.
For oscillatory integrals, such as \int_a^b f(x) \sin(\omega x) \, dx with large \omega, standard quadrature suffers from poor conditioning: the rapid oscillations lead to cancellation in the sums, amplifying roundoff errors and requiring \mathcal{O}(\omega) evaluations for accuracy, so the problem becomes severely ill-conditioned as \omega \to \infty. Specialized methods like Levin's collocation approach mitigate this but can still produce ill-conditioned moment matrices, necessitating regularization for stability. In composite quadrature rules, which partition [a,b] into subintervals and apply a base rule locally, the total truncation error propagates as the sum of local errors. For the composite trapezoidal rule with uniform h = (b-a)/n, each subinterval contributes -\frac{h^3}{12} f''(\eta_j), yielding a total error of -\frac{(b-a) h^2}{12} f''(\eta) for some \eta \in [a,b], assuming sufficient smoothness; similarly, for composite Simpson's rule, the total is -\frac{(b-a) h^4}{180} f^{(4)}(\eta). This additive structure allows global bounds from local maxima but ignores interactions, providing a conservative estimate.
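The additive propagation of local errors can be verified numerically; the sketch below (assuming NumPy, with \int_0^\pi \sin x \, dx = 2 as an illustrative test case) implements the composite trapezoidal and Simpson rules and shows the errors shrinking by factors of about 4 and 16 per halving of h, confirming the O(h^2) and O(h^4) rates:

```python
import numpy as np

def trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    return h * (0.5 * f(x[0]) + f(x[1:-1]).sum() + 0.5 * f(x[-1]))

def simpson(f, a, b, n):  # n must be even
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    return (h / 3) * (f(x[0]) + 4 * f(x[1:-1:2]).sum()
                      + 2 * f(x[2:-1:2]).sum() + f(x[-1]))

exact = 2.0
for n in (8, 16, 32, 64):
    et = abs(trapezoid(np.sin, 0, np.pi, n) - exact)
    es = abs(simpson(np.sin, 0, np.pi, n) - exact)
    print(n, et, es)  # et drops ~4x and es ~16x per doubling of n
```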

Techniques for Infinite Intervals

Integrating over infinite intervals, such as (-\infty, \infty) or [0, \infty), poses significant challenges in numerical analysis because standard partitioning methods rely on finite subintervals, which are inapplicable here. These techniques necessitate assumptions about the integrand f(x), particularly that it decays sufficiently rapidly as |x| \to \infty to ensure the integral converges; for instance, |f(x)| = O(1/|x|^{1+\epsilon}) for some \epsilon > 0 guarantees absolute integrability. Without such decay, the integral may diverge, making numerical approximation infeasible. One primary strategy to address these challenges involves variable transformations that map the infinite domain to a finite interval, enabling the use of established quadrature rules such as Gaussian or other interpolatory formulas. For the semi-infinite interval [0, \infty), the substitution x = t / (1 - t) with t \in [0, 1) transforms the integral into \int_0^\infty f(x) \, dx = \int_0^1 f\left( \frac{t}{1-t} \right) \frac{dt}{(1-t)^2}, compressing the unbounded range while adjusting the differential. For the full infinite interval (-\infty, \infty), the hyperbolic tangent transformation x = \tanh u maps the infinite u-axis to the finite interval (-1, 1), so an integral over the real line can be traded for one over a bounded domain; a more robust variant is the double-exponential (DE) mapping, such as x = \sinh\left( \frac{\pi}{2} \sinh t \right) with t \in (-\infty, \infty), under which the truncated trapezoidal rule in t converges extremely fast for analytic integrands because the transformed integrand decays double-exponentially. These transformations, particularly the DE formulas introduced by Takahasi and Mori, achieve near-optimal, almost exponential error decay in the number of evaluation points n when f is analytic. A complementary class of methods employs orthogonal polynomials tailored to weighted infinite domains, avoiding explicit transformations by incorporating the decay directly into the quadrature weights. The Gauss-Hermite rule approximates integrals of the form \int_{-\infty}^\infty f(x) e^{-x^2} \, dx \approx \sum_{i=1}^n w_i f(x_i), where the nodes x_i are the roots of the nth Hermite polynomial H_n(x), and the weights w_i = \frac{2^{n-1} n! \sqrt{\pi}}{n^2 [H_{n-1}(x_i)]^2} ensure exactness for any polynomial f of degree at most 2n-1. For general unweighted integrals over (-\infty, \infty), the integrand is rewritten as f(x) = g(x) e^{-x^2} with g(x) = f(x) e^{x^2}, assuming g is smooth. Similarly, Gauss-Laguerre quadrature targets semi-infinite integrals \int_0^\infty f(x) e^{-x} \, dx \approx \sum_{i=1}^n w_i f(x_i), with nodes x_i as roots of the nth Laguerre polynomial L_n(x) and weights w_i = \frac{x_i}{ (n+1)^2 [L_{n+1}(x_i)]^2 }, again exact for polynomials up to degree 2n-1; the generalized form incorporates a weight x^\alpha e^{-x} for better adaptation to the integrand. These rules leverage the orthogonality of the Hermite and Laguerre polynomials with respect to their respective weights e^{-x^2} and e^{-x}. For integrands without inherent exponential decay, or to refine approximations beyond a finite truncation point A, asymptotic expansions estimate the tail contributions \int_A^\infty f(x) \, dx or \int_{-\infty}^{-A} f(x) \, dx by integration by parts. Assuming f(x) admits an asymptotic series f(x) \sim \sum_{k=0}^m a_k / x^{k+1} as x \to \infty, repeated integration by parts yields \int_A^\infty f(x) \, dx = \sum_{k=1}^m (-1)^{k-1} \frac{a_{k-1}}{A^k} + (-1)^m \int_A^\infty \frac{r_m(x)}{x^{m+1}} \, dx, where r_m(x) is the remainder, providing explicit error bounds if |r_m(x)| \leq M.
This technique, detailed in standard quadrature packages, efficiently captures tail behavior for large A without full evaluation. Error analysis for these infinite-interval methods parallels the finite-interval orthogonal-polynomial case but incorporates the weight function's moments. For the n-point Gauss-Hermite rule, the error is E = \frac{f^{(2n)}(\xi)}{(2n)!} \int_{-\infty}^\infty e^{-x^2} p_n(x)^2 \, dx for some \xi \in \mathbb{R}, where p_n(x) is the monic Hermite polynomial of degree n; the integral equals \frac{n! \sqrt{\pi}}{2^n}, yielding the classical error term \frac{n! \sqrt{\pi}}{2^n (2n)!} f^{(2n)}(\xi), which decays factorially fast for analytic f. Analogously, for Gauss-Laguerre, E = \frac{f^{(2n)}(\xi)}{(2n)!} \int_0^\infty e^{-x} p_n(x)^2 \, dx = \frac{(n!)^2}{(2n)!} f^{(2n)}(\xi), with similar factorial decay for smooth functions. Transformations introduce additional error from Jacobian distortion, but DE mappings minimize this compared to algebraic ones like t/(1-t). Comparisons indicate that transformed trapezoidal or Gaussian rules often outperform direct orthogonal quadratures for functions with finite smoothness, as Gauss-Hermite can suffer a suboptimal O(n^{-s}) rate for s-times differentiable f, whereas DE-transformed rules achieve near-exponential rates.
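Both strategies are straightforward to exercise; this sketch (assuming NumPy, with the illustrative test integrals \int_{-\infty}^{\infty} e^{-x^2} \cos x \, dx = \sqrt{\pi} e^{-1/4} and \int_0^\infty e^{-x} \, dx = 1) contrasts Gauss-Hermite quadrature with the algebraic map x = t/(1-t) applied to Gauss-Legendre points:

```python
import numpy as np

# Gauss-Hermite handles the weight exp(-x^2) directly
x, w = np.polynomial.hermite.hermgauss(10)     # nodes and weights
gh = np.dot(w, np.cos(x))
exact = np.sqrt(np.pi) * np.exp(-0.25)

# Algebraic transformation x = t/(1-t) maps [0, 1) onto [0, inf)
t, wt = np.polynomial.legendre.leggauss(20)
t, wt = 0.5 * (t + 1), 0.5 * wt                # rescale to [0, 1]
xm = t / (1 - t)
mapped = np.sum(wt * np.exp(-xm) / (1 - t)**2) # Jacobian dx = dt/(1-t)^2
print(gh, exact, mapped)                        # mapped ~ 1
```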

Multidimensional Methods

Monte Carlo Integration

Monte Carlo integration is a probabilistic technique for approximating the value of multidimensional integrals by relying on random sampling. The core idea involves estimating the integral \int_V f(\mathbf{x}) \, d\mathbf{x} over a domain V with finite volume |V| using the unbiased estimator \hat{I} = \frac{|V|}{N} \sum_{i=1}^N f(\mathbf{x}_i), where the samples \mathbf{x}_i are drawn independently from a uniform distribution over V. This estimator has expected value equal to the true integral and variance \frac{|V|^2 \sigma^2}{N}, where \sigma^2 = \mathrm{Var}(f(\mathbf{X})) is the variance of f under the uniform measure, leading to a standard error that scales as O(1/\sqrt{N}). A key advantage of Monte Carlo integration in multidimensional settings is its avoidance of the curse of dimensionality that plagues many deterministic quadrature methods. While grid-based deterministic approaches require an exponential number of points in the dimension d to maintain accuracy, the Monte Carlo error remains O(1/\sqrt{N}) independent of d, making it particularly suitable for high-dimensional integrals where N can be increased affordably to reduce error. This dimension-independent convergence enables practical computations in spaces with dozens or hundreds of dimensions, such as in financial or statistical-physics simulations. To improve efficiency, variance reduction techniques modify the sampling strategy while preserving unbiasedness. Importance sampling changes the sampling distribution to a proposal density g(\mathbf{x}) that concentrates points where |f(\mathbf{x})| is large, yielding the estimator \hat{I} = \frac{1}{N} \sum_{i=1}^N \frac{f(\mathbf{x}_i)}{g(\mathbf{x}_i)}, provided g > 0 wherever f \neq 0 to ensure bounded variance. The optimal g that minimizes variance is g(\mathbf{x}) = \frac{|f(\mathbf{x})|}{ \int_V |f(\mathbf{u})| \, d\mathbf{u} }, achieving zero variance in the ideal case of single-signed f, though approximations are used in practice since the normalizing integral is exactly the unknown quantity. Other methods include antithetic variates, which pair samples \mathbf{x}_i with negatively correlated counterparts (e.g., 1 - \mathbf{x}_i for uniform [0,1] variables) to reduce variance in the estimator, and control variates, which subtract a term involving a function h(\mathbf{x}) with known expectation E[h] = \mu_h, using \hat{I}_c = \hat{I} - \alpha (\hat{\mu}_h - \mu_h) where \alpha is chosen to minimize variance, often yielding reductions proportional to the squared correlation between f and h. These techniques can decrease variance by orders of magnitude when correlations are strong. Quasi-Monte Carlo (QMC) methods enhance standard Monte Carlo by replacing pseudorandom samples with deterministic low-discrepancy sequences, such as Sobol or Halton sequences, which distribute points more uniformly across the domain while achieving faster convergence. For integrands of bounded variation in the sense of Hardy and Krause, QMC provides an error bound of O( (\log N)^d / N ), improving over the O(1/\sqrt{N}) of pure Monte Carlo, especially in moderate dimensions where the logarithmic factor remains manageable. Randomized QMC (RQMC) further randomizes these sequences (e.g., via random shifts or scrambling) to enable variance estimation and error assessment while retaining near-optimal rates. Recent advances in RQMC, including improved scrambling techniques and combinations with multilevel estimators, have extended applicability to non-smooth integrands and high-dimensional problems, with analyses under relaxed variation conditions achieving rates close to O(N^{-1}) in practice. A classic illustrative example is estimating \pi by approximating the area of a quarter unit circle via Monte Carlo: sample N points uniformly in the unit square [0,1]^2 and compute the proportion p inside the circle x^2 + y^2 \leq 1, yielding \hat{\pi} = 4p with variance decreasing as O(1/N).
In high dimensions, this extends to volume estimation of balls or other regions, where the relative error remains bounded independently of dimension for fixed N, but variance reduction becomes crucial as \sigma^2 grows with d due to concentration effects; for instance, in 100 dimensions, billions of samples may be needed for 1% accuracy without techniques like importance sampling.
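The \pi experiment takes only a few lines; a minimal sketch (assuming NumPy; the sample size and seed are arbitrary) that also reports the binomial standard error of the estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
pts = rng.random((N, 2))                  # uniform samples in the unit square
p = ((pts**2).sum(axis=1) <= 1.0).mean()  # fraction inside the quarter circle
pi_hat = 4 * p
std_err = 4 * np.sqrt(p * (1 - p) / N)    # standard error of the estimate
print(pi_hat, "+/-", std_err)
```

Quadrupling N halves the standard error, in line with the O(1/\sqrt{N}) rate.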

Sparse Grid Methods

Sparse grid methods address the curse of dimensionality in multidimensional numerical integration, where full tensor-product quadrature rules require a number of evaluations scaling as O(h^{-d}) for accuracy h in d dimensions, rendering them infeasible for d > 3. These methods construct sparse approximations by combining univariate quadrature rules in a hierarchical framework, achieving a complexity of O(h^{-1} |\log h|^{d-1}) evaluations while retaining near-optimal accuracy for functions with bounded mixed derivatives. The foundational approach, known as Smolyak's construction, builds sparse grids as a linear combination of full tensor products of one-dimensional rules, restricted to multi-indices i = (i_1, \dots, i_d) satisfying |i|_1 \leq q for a sparsity level q. This sums over anisotropic subspaces to avoid redundant evaluations of lower-dimensional interactions. The quadrature formula is given by A(q,d) = \sum_{|i|_1 \leq q} (-1)^{q - |i|_1} \binom{d-1}{q - |i|_1} (U^{i_1} \otimes \cdots \otimes U^{i_d}), where U^k denotes the one-dimensional quadrature rule at level k, typically built from nested points like those in interpolation-based methods. The alternating signs and binomial coefficients ensure exactness for polynomials up to a certain mixed degree, effectively truncating the full tensor expansion. For smooth integrands, the worst-case error of Smolyak quadrature decays as O(h^2 |\log h|^{d-1}), blending algebraic convergence from the univariate rules with a logarithmic factor arising from the sparsity constraint. This rate holds under assumptions of bounded second mixed derivatives, providing deterministic guarantees superior to Monte Carlo methods in moderate dimensions while degrading far more slowly than full grids in high dimensions. A key application lies in dimension-adaptive refinement, which targets anisotropic functions whose effective dimensionality is reduced by prioritizing directions with stronger variation; sparse grids align with this by emphasizing low-index terms in the expansion. Common variants employ nested univariate rules for efficiency: Clenshaw-Curtis points, derived from Chebyshev extrema, offer spectral-like accuracy with O(2^l) points per level l, while Gaussian rules provide exactness for polynomials up to degree 2m-1 with m points but require non-nested adaptations. Combination techniques extend sparse grids to specific weight functions, such as the Gaussian weight in Bayesian contexts or finance; for instance, the Genz-Keister method constructs nested rules in one dimension and combines them via Smolyak summation to yield efficient multidimensional quadratures with positive weights and improved stability.
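The combination formula translates directly into code; a minimal sketch (assuming NumPy; it uses non-nested Gauss-Legendre rules with l points at level l, an admissible though not point-optimal choice, and an illustrative product integrand on [0,1]^3) sums over the admissible multi-indices with q-d+1 \leq |i|_1 \leq q, outside of which the binomial coefficients vanish:

```python
import numpy as np
from itertools import product
from math import comb

def gl_1d(level):
    """Level-l univariate rule: l Gauss-Legendre points mapped to [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(level)
    return 0.5 * (x + 1), 0.5 * w

def smolyak(f, d, q):
    """Smolyak quadrature A(q, d) via the combination formula."""
    total = 0.0
    for i in product(range(1, q + 1), repeat=d):
        k = sum(i)
        if not (q - d + 1 <= k <= q):
            continue                     # coefficient vanishes outside this band
        coeff = (-1) ** (q - k) * comb(d - 1, q - k)
        rules = [gl_1d(level) for level in i]
        # Tensor product of the selected univariate rules
        for idx in product(*[range(len(r[0])) for r in rules]):
            pt = np.array([rules[j][0][idx[j]] for j in range(d)])
            wt = np.prod([rules[j][1][idx[j]] for j in range(d)])
            total += coeff * wt * f(pt)
    return total

f = lambda x: np.exp(x.sum())            # smooth test integrand on [0, 1]^3
print(smolyak(f, d=3, q=6), (np.e - 1) ** 3)   # close agreement
```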

Bayesian Quadrature

Bayesian quadrature frames the problem of numerical integration as a task of Bayesian inference, where uncertainty in the integrand is explicitly modeled to provide both point estimates and probabilistic error bounds. Introduced by O'Hagan in 1991, the approach posits a prior distribution over possible integrand functions, typically a Gaussian process (GP) with a covariance kernel such as the squared exponential, which encodes assumptions about the function's smoothness and regularity. The likelihood is derived from noisy point evaluations of the true integrand f(x_i) at selected points x_i, treating these as observations y_i = f(x_i) + \epsilon_i with \epsilon_i \sim \mathcal{N}(0, \sigma^2). This setup yields a posterior distribution over functions, enabling the integral \int f(x) \mu(dx) to be inferred as an expectation under the posterior. The posterior mean serves as the quadrature estimate, computed as the integral of the posterior GP mean: \mathbb{E}\left[ \int f(x) \, \mu(dx) \mid \mathbf{y} \right] = \mathbf{k}_*^T (\mathbf{K} + \sigma^2 \mathbf{I})^{-1} \mathbf{y}, where \mathbf{K} is the kernel matrix over the evaluation points, \mathbf{y} the observed values, and \mathbf{k}_* the vector of kernel means (\mathbf{k}_*)_i = \int k(x, x_i) \, \mu(dx). The associated uncertainty is captured by the posterior variance \sigma^2_{\text{post}} = k(\mu, \mu) - \mathbf{k}_*^T (\mathbf{K} + \sigma^2 \mathbf{I})^{-1} \mathbf{k}_*, where k(\mu, \mu) = \iint k(x, x') \, \mu(dx) \, \mu(dx'), providing a principled error estimate that quantifies the remaining epistemic uncertainty after conditioning on the data. This probabilistic output contrasts with deterministic methods by naturally incorporating prior knowledge about the integrand, such as smoothness via the kernel choice, and supports active point selection strategies, for instance, by maximizing the mutual information between the integral value and potential new observations to minimize posterior variance efficiently. Key advantages of Bayesian quadrature include its ability to leverage prior beliefs for improved estimates in data-scarce regimes and to guide adaptive sampling, outperforming standard Monte Carlo methods in low dimensions by exploiting function correlations encoded in the GP. For non-stationary functions where a standard GP prior fails due to varying smoothness, warped Gaussian processes extend the framework by learning a nonlinear transformation of the output space, allowing the prior to adapt to heteroscedasticity or multimodality while maintaining Gaussian marginals. Connections to kernel herding arise in the sequential selection process, where optimally weighted herding minimizes the maximum mean discrepancy to the target measure, equivalent to reducing posterior variance in Bayesian quadrature and yielding superior convergence for quasi-Monte Carlo-like approximations. As an illustrative example, consider integrating a Gaussian mixture f(x) = \sum_{k=1}^K w_k \exp\left( -\frac{(x - m_k)^2}{2 s_k^2} \right) over [0, 1], where modes are centered at disparate locations. Using a GP prior with a squared exponential kernel, Bayesian quadrature selects evaluation points adaptively near suspected modes via mutual information, yielding a posterior mean close to the true integral value (e.g., approximately 0.8 for balanced weights) alongside 95% confidence intervals that widen in regions of high multimodality, effectively capturing the integrand's complexity with fewer evaluations than uniform sampling.
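For the uniform measure on [0, 1] and a squared exponential kernel, the kernel means \int_0^1 k(x, x_i) \, dx have a closed form in terms of the error function, so the posterior mean estimate reduces to a small linear solve; a minimal sketch (assuming NumPy and SciPy; the integrand, length scale, noise level, and evaluation grid are all illustrative):

```python
import numpy as np
from scipy.special import erf

ell, noise = 0.2, 1e-8                     # length scale and jitter (assumed)
k = lambda a, b: np.exp(-(a[:, None] - b[None, :])**2 / (2 * ell**2))

def kernel_mean(xi):
    """z_i = integral over [0,1] of exp(-(x - x_i)^2 / (2 ell^2)) dx."""
    c = np.sqrt(2) * ell
    return ell * np.sqrt(np.pi / 2) * (erf((1 - xi) / c) + erf(xi / c))

f = lambda x: np.sin(3 * x) + 0.5          # toy integrand on [0, 1]
xi = np.linspace(0, 1, 8)                  # evaluation points
K = k(xi, xi) + noise * np.eye(xi.size)
alpha = np.linalg.solve(K, f(xi))
estimate = kernel_mean(xi) @ alpha         # posterior mean of the integral
exact = (1 - np.cos(3)) / 3 + 0.5
print(estimate, exact)
```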
Recent advances address scalability for large datasets or high-dimensional integrals, where inverting the full kernel matrix becomes prohibitive. Methods like SOBER reformulate batch selection as a quadrature problem over inducing points, enabling parallel evaluation and orders-of-magnitude speedups for thousands of points while preserving theoretical guarantees on convergence rates. Similarly, fast Fourier-based approximations exploit kernel structure for \mathcal{O}(n \log n) computations, facilitating Bayesian quadrature on big data applications such as Bayesian model evidence computation in machine learning pipelines.

Advanced Connections

Numerical integration plays a fundamental role in solving ordinary differential equations (ODEs) by approximating the integral form inherent in their solutions. For an initial value problem y'(t) = f(t, y(t)) with y(a) = y_0, the exact solution satisfies y(b) - y(a) = \int_a^b f(t, y(t)) \, dt, where the integral captures the cumulative change over the interval [a, b]. This representation transforms the differential problem into an equivalent integral equation, enabling quadrature techniques to approximate the solution increment. Single-step methods for ODEs directly employ quadrature rules to estimate this integral. The forward Euler method, for instance, approximates the integral using the left rectangle rule: y_{n+1} = y_n + h f(t_n, y_n), which is exact for constant integrands but incurs a local truncation error of order O(h^2). Higher-order single-step methods build on this by using more sophisticated quadrature approximations. Runge-Kutta methods, in particular, compute multiple stage values of f within each step to mimic higher-degree quadrature rules; the classical fourth-order Runge-Kutta (RK4) method reduces to Simpson's rule when f is independent of y, achieving a local error of O(h^5). The structure of Runge-Kutta methods is encapsulated in the Butcher tableau, an array of coefficients that determines the method's order, consistency, and stability properties. The tableau's A-matrix encodes the stage weights analogous to quadrature nodes and weights, while the b-vector provides the final combination for the step; satisfying simplifying assumptions like B(p) (sum conditions on b) and C(q) (node accuracy) ensures the method matches a quadrature rule of order up to p \leq 2s for s stages, with stability regions determined by the stability function derived from the tableau. Multistep methods extend this quadrature perspective by integrating over past solution points. The explicit Adams-Bashforth methods approximate the integral using polynomial extrapolation through previous f-evaluations, such as the second-order variant y_{n+1} = y_n + \frac{h}{2} (3f_n - f_{n-1}), which corresponds to extrapolating a linear interpolant and yields order 2 accuracy. These methods leverage historical data for efficiency but require a starter like Runge-Kutta for initial steps. Error analysis in these solvers ties directly to quadrature inaccuracies. The local truncation error, measuring the discrepancy when the exact solution is plugged into the method, mirrors the quadrature rule's error term; for Euler, it is \frac{h^2}{2} y''(\xi), propagating to a global error of O(h) over fixed intervals as local errors accumulate. In adaptive contexts, step-size control can incorporate quadrature-based error estimates to maintain solution accuracy. Extensions of quadrature-based approaches apply to more complex systems. For stochastic differential equations (SDEs) dY = f(t, Y) dt + g(t, Y) dW, methods like Euler-Maruyama or Milstein approximate the drift integral via deterministic quadrature while sampling the diffusion term, with higher-order variants using iterated stochastic integrals for improved strong convergence. In differential-algebraic equations (DAEs), where algebraic constraints couple with differentials, projection techniques or index-reduced forms allow quadrature-based integration of the differential components, as in backward differentiation formula (BDF) methods adapted for semi-explicit DAEs. For Hamiltonian systems, variational integrators preserve the symplectic structure by discretizing the action integral \int L(q, \dot{q}) \, dt via discrete Lagrangian approximations, yielding methods like the Störmer-Verlet scheme that maintain energy behavior and long-term stability better than standard Runge-Kutta.
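The RK4-Simpson correspondence is easy to check: when f does not depend on y, the two middle RK4 stages coincide and the update collapses to Simpson's rule over the step. A minimal sketch (assuming NumPy; the test integrand is arbitrary):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical RK4 step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

g = lambda t, y: np.exp(-t**2)            # independent of y
h = 1.0
rk4 = rk4_step(g, 0.0, 0.0, h)
simpson = h / 6 * (g(0.0, 0) + 4 * g(0.5, 0) + g(1.0, 0))
print(rk4, simpson)                        # identical values
```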

Software and Implementation Considerations

Several widely used software libraries provide robust implementations of numerical integration in one and multiple dimensions. In Python, the SciPy library's quad function employs adaptive quadrature techniques from the QUADPACK library, including the Gauss-Kronrod rule for efficient error estimation on finite and infinite intervals. The GNU Scientific Library (GSL) in C offers a comprehensive suite of routines that reimplement QUADPACK algorithms, supporting adaptive methods like QAGS for general integrands and QAGI for infinite intervals. MATLAB's quadgk function utilizes high-order adaptive Gauss-Kronrod quadrature, suitable for smooth functions with automatic subdivision for accuracy control. For multidimensional integration, the Cuba library implements Monte Carlo-based algorithms such as Vegas, which uses importance sampling to reduce variance in high-dimensional spaces, alongside deterministic methods like Cuhre for lower dimensions. Implementing numerical integration presents challenges, particularly with ill-conditioned integrands or special domains. Overflow issues can arise in evaluations over infinite intervals due to rapid function growth or improper transformations, necessitating careful scaling or substitution methods as implemented in QUADPACK's infinite-range routines. For oscillatory integrands, Levin-type methods address conditioning problems by transforming the integral into an ordinary differential equation solved via collocation, enabling stable computation even for highly irregular oscillations. These approaches mitigate numerical instability but require precise phase function identification to avoid amplification of errors. Parallelization enhances efficiency in numerical integration, with strategies varying by method. Monte Carlo techniques parallelize readily through independent sampling of random points, allowing distribution across cores or GPUs for variance reduction without synchronization overhead. Deterministic quadrature methods benefit from domain decomposition, where the integration region is partitioned into subdomains evaluated concurrently, followed by summation, as seen in adaptive schemes for multidimensional problems. Selecting an appropriate method depends on problem characteristics. For smooth one-dimensional integrands, Gaussian quadrature excels due to its spectral accuracy and minimal function evaluations. In high dimensions, Monte Carlo methods, including Vegas variants, are preferred for their dimension-independent error scaling despite slower convergence, avoiding the curse of dimensionality inherent in grid-based approaches. When prior information or uncertainty quantification is needed, Bayesian quadrature provides probabilistic estimates by modeling the integrand with Gaussian processes. Validation of numerical results involves cross-checking with multiple independent methods to confirm consistency, such as comparing deterministic outputs against Monte Carlo estimates for the same integral. Tolerance settings play a critical role; relative and absolute error thresholds, typically set between 1e-6 and 1e-12, guide adaptive refinement, with tighter tolerances increasing computational cost but ensuring reliability for sensitive applications. Open-source alternatives expand accessibility, such as quadpy in Python, which aggregates quadrature rules for various domains including spheres and simplices, supporting both deterministic and stochastic schemes without proprietary dependencies. Hardware acceleration, particularly on GPUs, boosts Monte Carlo integration via parallel random sampling, achieving speedups of orders of magnitude for high-dimensional integrals as demonstrated in implementations like cuVegas and TorchQuad.
Best practices emphasize vectorization for performance, where integrand evaluations process arrays of points simultaneously in libraries like NumPy to leverage optimized linear algebra routines. Handling discontinuities requires explicit detection and subdivision, using techniques like singularity-aware partitioning to isolate jumps and apply tailored rules, preventing error propagation across the domain.
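These considerations surface directly in library calls; the sketch below (assuming SciPy and NumPy) uses scipy.integrate.quad on a semi-infinite Gaussian integral, then passes the points argument on a finite interval to flag a known interior kink so that the adaptive subdivision isolates it:

```python
import numpy as np
from scipy.integrate import quad

# Semi-infinite interval: QUADPACK applies a transformation internally
val, err = quad(lambda x: np.exp(-x**2), 0, np.inf)
print(val, np.sqrt(np.pi) / 2, err)   # estimate, exact value, error estimate

# Known kink at x = 0.3: breakpoints guide the subdivision
val2, err2 = quad(lambda x: abs(x - 0.3)**0.5, 0, 1, points=[0.3])
print(val2, err2)
```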

  62. [62]
    [PDF] An Introduction to Numerical Methods for ODEs
    Ordinary differential equations (ODEs) provide a way to describe and predict the nature of chemical reaction systems, n-body interactions in physics, current ...
  63. [63]
    [PDF] Numerical Methods for Ordinary Differential Equations
    What is a differential equation? An ordinary differential equation (ODE) is an equation that contains one or more derivatives of an unknown function x(t).
  64. [64]
    [PDF] numerical methods for solving ordinary differential equations
    We will approximate the integral using various quadrature rules. Applying the left-hand rule we obtain the forward Euler method (12). The right-hand rule ...Missing: rectangle | Show results with:rectangle
  65. [65]
    [PDF] 3 Runge-Kutta Methods
    Simpson's rule yields the fourth-order Runge-Kutta method in case there is no dependence of f on y. 5. Gauss quadrature leads to so-called Gauss-Runge-Kutta or ...
  66. [66]
    [PDF] A new way of deriving implicit Runge-Kutta methods based ... - arXiv
    Dec 12, 2024 · If a Runge-Kutta method satisfies B(p), C(q) and D(r) with p ≤ q +r +1 and p ≤ 2q +2, then its order of convergence is p. There are three groups ...<|separator|>
  67. [67]
    [PDF] 2 Multistep Methods
    Adams-Bashforth methods are optimal in the sense that the corresponding order is as high as possible. The same is true for Adams-Moulton formulas with odd s.
  68. [68]
    Adaptive Density Tracking by Quadrature for Stochastic Differential ...
    May 17, 2021 · In this paper, we extend upon existing tensorized DTQ procedures by utilizing a flexible quadrature rule that allows for unstructured, adaptive ...
  69. [69]
    [PDF] Four Lectures on Differential-Algebraic Equations
    Jun 13, 2003 · The main tool is a pro- cedure to decouple the DAE into it's dynamical and algebraic part. In section three this analysis is carried out for ...
  70. [70]
    [PDF] AN OVERVIEW OF VARIATIONAL INTEGRATORS - Caltech Authors
    The average kinetic energy as a function of the integration run T for a nonlinear spring–mass lattice system in the plane, using a first order variational ...
  71. [71]
    quad — SciPy v1.16.2 Manual
    Compute a definite integral. Integrate func from a to b (possibly infinite interval) using a technique from the Fortran library QUADPACK.Missing: GSL MATLAB
  72. [72]
    Numerical Integration — GSL 2.8 documentation - GNU.org
    The library reimplements the algorithms used in QUADPACK, a numerical integration package written by Piessens, de Doncker-Kapenga, Ueberhuber and Kahaner.Missing: SciPy MATLAB quadgk
  73. [73]
    quadgk - Numerically evaluate integral — Gauss-Kronrod quadrature
    This MATLAB function integrates the function handle fun from a to b using high-order global adaptive quadrature and default error tolerances.Missing: libraries SciPy GSL
  74. [74]
    Cuba - a library for multidimensional numerical integration - arXiv
    Apr 5, 2004 · The Cuba library provides new implementations of four general-purpose multidimensional integration algorithms: Vegas, Suave, Divonne, and Cuhre.
  75. [75]
    [PDF] and Two-Dimensional Integrals of Functions with Rapid Irregular ...
    A collocation procedure for efficient integration of rapidly oscillatory functions is presented. The integration problem is transformed into a certain O.D.E. ...
  76. [76]
    Parallel Numerical Integration — mcs572 0.7.8 documentation
    Finer domain decompositions of D lead to more triangles Δ, more function evaluations, and more accurate approximations. Like Monte Carlo simulation, numerical ...
  77. [77]
    Integration (scipy.integrate) — SciPy v1.16.2 Manual
    The scipy.integrate sub-package provides several integration techniques including an ordinary differential equation integrator.Missing: GSL MATLAB quadgk
  78. [78]
    sigma-py/quadpy: :triangular_ruler: Numerical integration ... - GitHub
    Quadpy provides integration schemes for many different 1D, 2D, even nD domains. To start off easy: If you'd numerically integrate any function over any given 1D ...
  79. [79]
    esa/torchquad: Numerical integration in arbitrary ... - GitHub
    The torchquad module allows utilizing GPUs for efficient numerical integration with PyTorch and other numerical Python3 modules. The software is free to use ...