
Laplace's method

Laplace's method is a fundamental technique in asymptotic analysis for approximating integrals of the form I(\lambda) = \int_a^b f(t) e^{\lambda \phi(t)} \, dt as \lambda \to \infty, where the primary contribution to the integral stems from a small neighborhood around the point t_0 where the phase function \phi(t) attains its maximum value. By expanding \phi(t) via Taylor series around t_0 and approximating the resulting integrand, the method reduces the integral to a Gaussian form, yielding a leading-order asymptotic estimate such as I(\lambda) \sim f(t_0) e^{\lambda \phi(t_0)} \sqrt{\frac{2\pi}{\lambda |\phi''(t_0)|}} when the maximum is interior and non-degenerate. Named after the mathematician and astronomer Pierre-Simon Laplace (1749–1827), the method was developed in the context of probability theory and integral approximations during the late 18th century, with early applications to problems in inverse probability and celestial mechanics. Laplace's original formulation focused on integrals where the exponent involves a large negative parameter, emphasizing the minimum of the phase function, but the technique has been generalized to various endpoint and oscillatory cases. The method's significance lies in its ability to provide accurate approximations for large parameters without exact evaluation, with key extensions including higher-order asymptotic expansions formalized by Arthur Erdélyi in 1956 using Watson's lemma to compute systematic correction terms via derivatives of the amplitude and phase functions. Notable applications include deriving Stirling's approximation for the gamma function, \Gamma(z) \sim \sqrt{2\pi/z} (z/e)^z as z \to \infty, and broader uses in Bayesian statistics, statistical physics, and the analysis of special functions. Further refinements, such as those by Oskar Perron in 1917 and modern recursive formulas, have enhanced its precision for complex integrals.

Fundamentals

Historical background

Pierre-Simon Laplace introduced the method that bears his name in 1774, within a seminal memoir titled "Mémoire sur la probabilité des causes par les événements" addressing the probability of causes given observed events, where he developed an asymptotic approximation for integrals dominated by a maximum in the exponent. This technique emerged as part of his efforts to approximate the posterior distributions arising in inverse probability for binomial distributions, providing a practical means to evaluate otherwise intractable integrals by expanding around the mode of the integrand. Laplace's innovation was particularly suited to problems where a large parameter amplifies the contribution near the maximum, laying the groundwork for broader asymptotic analysis. Laplace promptly applied the method to probabilistic astronomy, using it to assess uncertainties in celestial observations and predict planetary perturbations under probabilistic models. In this domain, the approximations facilitated computations in celestial mechanics, enabling inferences about astronomical causes from observed data, such as satellite positions. Concurrently, Laplace extended these ideas to error theory, where the method approximated the distribution of measurement errors, assuming a Gaussian-like concentration around the most probable value; this work underpinned his contributions to error theory and influenced early statistical practices in astronomy. These developments solidified the method's utility up to the early 19th century, influencing subsequent work in asymptotic analysis before broader generalizations emerged.

Basic concept

Laplace's method, named after the French mathematician who introduced it in 1774, approximates integrals of the form \int_a^b e^{M f(x)} g(x) \, dx as the positive parameter M \to \infty, where the integrand is dominated by the region near the global maximum x_0 of the real-valued function f(x) on the interval [a, b]. The core intuitive idea, known as the Laplace principle, is that as M becomes large, the exponential factor e^{M f(x)} causes the integrand to peak sharply at x_0, with contributions from elsewhere becoming exponentially negligible. This concentration implies that the integral's asymptotic behavior is determined solely by the local properties of f(x) and g(x) in a small neighborhood around x_0, effectively behaving like a Gaussian centered at x_0 in the limit M \to \infty. For the method to apply in its basic form, f(x) is assumed to be twice continuously differentiable, with a strict interior maximum at x_0 satisfying f'(x_0) = 0 and f''(x_0) < 0, ensuring the peak is non-degenerate and the second-order Taylor expansion captures the local quadratic decay away from x_0. A simple example illustrating this concentration is the Gaussian integral \int_{-\infty}^{\infty} e^{M f(x)} \, dx with f(x) = -x^2, where the maximum occurs at x_0 = 0 and f''(0) = -2 < 0; for large M, the integrand forms a narrow bell-shaped curve centered at zero, with the width scaling as 1/\sqrt{M} and nearly all the integral's value arising from within a few standard deviations of this point.
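
As a concrete check of this concentration, the following minimal Python sketch (using NumPy and SciPy; the window width 3/\sqrt{M} is an arbitrary illustrative choice) compares the exact Gaussian integral with the mass contained in a shrinking window around the peak:

    import numpy as np
    from scipy.integrate import quad

    # For f(x) = -x^2 the integral is exactly sqrt(pi/M); as M grows,
    # essentially all of the mass lies within a window of width ~ 1/sqrt(M).
    def integrand(x, M):
        return np.exp(-M * x**2)

    for M in [1, 10, 100, 1000]:
        total, _ = quad(integrand, -np.inf, np.inf, args=(M,))
        window, _ = quad(integrand, -3 / np.sqrt(M), 3 / np.sqrt(M), args=(M,))
        print(f"M={M:5d}  integral={total:.6f}  sqrt(pi/M)={np.sqrt(np.pi / M):.6f}  "
              f"fraction in |x| < 3/sqrt(M): {window / total:.6f}")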

Core Theory

General formulation for real integrals

Laplace's method provides an asymptotic approximation for integrals of the form I(M) = \int_a^b e^{M f(x)} \, dx as M \to \infty, where f(x) is a smooth real-valued function achieving its global maximum at an interior point x_0 \in (a, b) with f''(x_0) < 0. The method exploits the fact that the integrand concentrates sharply around x_0 for large M, allowing the dominant contribution to the integral to be captured by a local approximation near this maximum. To derive the approximation, expand f(x) in a Taylor series around x_0: f(x) = f(x_0) + \frac{1}{2} f''(x_0) (x - x_0)^2 + O((x - x_0)^3). Substituting into the exponent yields M f(x) = M f(x_0) + \frac{M}{2} f''(x_0) (x - x_0)^2 + O(M (x - x_0)^3), so the leading-order approximation for the integrand is e^{M f(x)} \approx e^{M f(x_0)} \exp\left( \frac{M f''(x_0)}{2} (x - x_0)^2 \right). Since f''(x_0) < 0, let \lambda = -f''(x_0) > 0, transforming the approximation to the Gaussian form e^{M f(x)} \approx e^{M f(x_0)} e^{- (M \lambda / 2) (x - x_0)^2}. This quadratic approximation is valid in a neighborhood of x_0 shrinking like O(M^{-1/2}) as M \to \infty, where higher-order terms in the expansion become negligible. The integral then approximates as I(M) \approx e^{M f(x_0)} \int_{-\infty}^{\infty} e^{- (M \lambda / 2) u^2} \, du, where u = x - x_0 and the limits are extended to (-\infty, \infty) because the Gaussian decays rapidly, making boundary contributions from a and b asymptotically insignificant under suitable conditions (e.g., f(x) < f(x_0) near the endpoints). The standard Gaussian integral evaluates to \int_{-\infty}^{\infty} e^{-a u^2} \, du = \sqrt{\frac{\pi}{a}}, with a = M \lambda / 2, giving \int_{-\infty}^{\infty} e^{- (M \lambda / 2) u^2} \, du = \sqrt{\frac{2\pi}{M \lambda}} = \sqrt{\frac{2\pi}{M |f''(x_0)|}}. Thus, the leading asymptotic approximation is I(M) \sim \sqrt{\frac{2\pi}{M |f''(x_0)|}} \, e^{M f(x_0)} \quad \text{as} \quad M \to \infty. This result holds for non-degenerate maxima where f''(x_0) \neq 0, ensuring the quadratic term dominates; the relative error is O(M^{-1}), arising from neglected cubic and higher terms in the Taylor expansion, provided f is sufficiently smooth and the maximum is isolated.
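
The leading-order formula is easy to verify numerically. The Python sketch below uses f(x) = sin x on [0, \pi] as an arbitrary test case (interior maximum at x_0 = \pi/2 with f''(x_0) = -1) and normalizes the integrand by e^{M f(x_0)} to avoid overflow:

    import numpy as np
    from scipy.integrate import quad

    # f(x) = sin(x) on [0, pi]: interior maximum at x0 = pi/2 with f(x0) = 1
    # and f''(x0) = -1, so I(M) ~ sqrt(2*pi/M) * e^M.  We compare the
    # normalized integral of e^{M (f(x) - f(x0))} with sqrt(2*pi/M).
    for M in [10, 100, 1000]:
        exact, _ = quad(lambda x, M=M: np.exp(M * (np.sin(x) - 1.0)), 0, np.pi)
        approx = np.sqrt(2 * np.pi / M)
        print(f"M={M:5d}  exact/approx = {exact / approx:.6f}")  # -> 1 with O(1/M) error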

Formal statement and proof

Under suitable conditions, Laplace's method provides a rigorous asymptotic approximation for integrals of the form \int_a^b e^{M f(x)} \, dx as M \to \infty, where the dominant contribution arises from the neighborhood of the maximum of f. Specifically, suppose f is twice continuously differentiable on the closed interval [a, b], f attains a unique maximum at an interior point x_0 \in (a, b), and f''(x_0) < 0. Then, \lim_{M \to \infty} \frac{\int_a^b e^{M f(x)} \, dx}{\sqrt{\frac{2\pi}{M |f''(x_0)|}} \, e^{M f(x_0)}} = 1. This limit theorem, often referred to as the Laplace approximation, establishes that the leading-order approximation is exact in the asymptotic sense. The proof proceeds by normalizing the integral and applying a change of variables to transform it into a Gaussian integral. Define the normalized integral as I(M) = \frac{\int_a^b e^{M f(x)} \, dx}{e^{M f(x_0)}}. By Taylor expansion, f(x) = f(x_0) + \frac{1}{2} f''(x_0) (x - x_0)^2 + r(x), where r(x) = o((x - x_0)^2) as x \to x_0. Thus, M f(x) = M f(x_0) + \frac{M f''(x_0)}{2} (x - x_0)^2 + M r(x), and the integrand becomes e^{M f(x_0)} \exp\left( -\frac{M |f''(x_0)|}{2} (x - x_0)^2 + M r(x) \right). Substitute u = \sqrt{M |f''(x_0)|} (x - x_0), so dx = du / \sqrt{M |f''(x_0)|} and the limits transform to [-c_1 \sqrt{M}, c_2 \sqrt{M}] with c_1, c_2 > 0, which approach (-\infty, \infty) as M \to \infty. The integral then reads I(M) = \frac{1}{\sqrt{M |f''(x_0)|}} \int_{-c_1 \sqrt{M}}^{c_2 \sqrt{M}} \exp\left( -\frac{u^2}{2} + s(M, u) \right) \, du, where s(M, u) = M r(x_0 + u / \sqrt{M |f''(x_0)|}). Near u = 0, s(M, u) = o(1) uniformly on compact sets, and the tails of the integrand decay exponentially, ensuring their contribution is negligible (o(1/\sqrt{M})). By the dominated convergence theorem, as M \to \infty, \lim_{M \to \infty} \sqrt{M |f''(x_0)|} \, I(M) = \int_{-\infty}^{\infty} e^{-u^2 / 2} \, du = \sqrt{2\pi}. This yields the desired limit. For boundary maxima, the scaling differs due to the restricted integration range. Suppose the unique maximum occurs at the endpoint a, with f'(a) = 0 and f''(a) < 0. The Taylor expansion yields a half-Gaussian contribution, leading to \int_a^b e^{M f(x)} \, dx \sim \sqrt{\frac{\pi}{2 M |f''(a)|}} \, e^{M f(a)} as M \to \infty, or equivalently, the normalized ratio tends to 1. The proof follows analogously, with the substitution restricting the u-integral to [0, \infty) and yielding \int_0^{\infty} e^{-u^2 / 2} \, du = \sqrt{\pi / 2}. If instead f'(a) < 0 (non-vanishing first derivative), the leading behavior simplifies to a linear approximation, giving \int_a^b e^{M f(x)} \, dx \sim \frac{e^{M f(a)}}{M |f'(a)|}, with the proof relying on direct integration of the exponential after linear Taylor expansion, confirming exponential decay away from the boundary.
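
Both endpoint cases can also be checked numerically. In the sketch below (illustrative choices: f(x) = cos x on [0, \pi/2] for the half-Gaussian case and f(x) = -x on [0, 1] for the linear case, with the integrands normalized by e^{M f(a)}), the computed ratios tend to 1 as M grows:

    import numpy as np
    from scipy.integrate import quad

    # Endpoint maximum with f'(a) = 0: f(x) = cos(x) on [0, pi/2], maximum at
    # a = 0 with f(0) = 1, f''(0) = -1.  Prediction: I(M) ~ sqrt(pi/(2M)) e^M.
    for M in [10, 100, 1000]:
        exact, _ = quad(lambda x, M=M: np.exp(M * (np.cos(x) - 1.0)), 0, np.pi / 2)
        print(f"M={M:5d}  half-Gaussian ratio = {exact / np.sqrt(np.pi / (2 * M)):.6f}")

    # Endpoint maximum with f'(a) < 0: f(x) = -x on [0, 1], |f'(0)| = 1.
    # Prediction: I(M) ~ e^{M f(0)} / (M |f'(0)|); here exactly (1 - e^{-M})/M.
    for M in [10, 100, 1000]:
        exact, _ = quad(lambda x, M=M: np.exp(-M * x), 0, 1)
        print(f"M={M:5d}  linear-case ratio   = {exact * M:.6f}")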

Variations for Real-Valued Integrals

Inclusion of prefactor functions

In the basic formulation of Laplace's method, the integrand consists solely of an exponential term, but many applications involve an additional prefactor function h(x) that multiplies the exponential. This prefactor is typically assumed to be continuous and positive near the point of dominance, allowing the method to be extended straightforwardly. For the integral \int_a^b h(x) e^{M f(x)} \, dx as M \to \infty, where f(x) attains its maximum at an interior point x_0 with f''(x_0) < 0, the leading-order approximation becomes \int_a^b h(x) e^{M f(x)} \, dx \approx h(x_0) \sqrt{\frac{2\pi}{M |f''(x_0)|}} \, e^{M f(x_0)}, provided h(x) varies slowly compared to the exponential near x_0, so that h(x) \approx h(x_0). This holds under the standard assumptions that f(x) is twice continuously differentiable near x_0 and the maximum is non-degenerate. The justification for evaluating the prefactor at x_0 stems from the rapid decay of the exponential away from the maximum: the main contribution to the integral arises from a narrow neighborhood around x_0, where variations in h(x) are negligible relative to the scale set by M^{-1/2}. Thus, factoring out h(x_0) reduces the problem to the Gaussian integral derived in the basic case, with relative error O(M^{-1}). If the prefactor h(x) varies more significantly, higher-order corrections can be incorporated by expanding h(x) in a Taylor series around x_0 and integrating term by term against the Gaussian, though the leading term remains dominant for the asymptotic behavior.
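
A short numerical sketch (with the arbitrary illustrative choices h(x) = 1/(1 + x^2) and f(x) = sin x on [0, \pi], again normalized by e^{M f(x_0)} to avoid overflow) confirms that evaluating the prefactor at x_0 suffices at leading order:

    import numpy as np
    from scipy.integrate import quad

    # Prefactor h(x) = 1/(1 + x^2) with f(x) = sin(x) on [0, pi]:
    # maximum at x0 = pi/2, f''(x0) = -1, so the prediction is
    # h(pi/2) * sqrt(2*pi/M) after normalizing by e^{M f(x0)}.
    h = lambda x: 1.0 / (1.0 + x**2)
    x0 = np.pi / 2
    for M in [10, 100, 1000]:
        exact, _ = quad(lambda x, M=M: h(x) * np.exp(M * (np.sin(x) - 1.0)), 0, np.pi)
        approx = h(x0) * np.sqrt(2 * np.pi / M)
        print(f"M={M:5d}  ratio = {exact / approx:.6f}")  # -> 1 with O(1/M) error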

Multivariate extensions

Laplace's method extends naturally to multivariate integrals of the form \int_{\mathbb{R}^d} h(\mathbf{x}) e^{M f(\mathbf{x})} \, d^d\mathbf{x}, where M > 0 is large, f: \mathbb{R}^d \to \mathbb{R} is a smooth function attaining a unique global maximum at an interior point \mathbf{x}_0, and h: \mathbb{R}^d \to \mathbb{R} is a positive prefactor function. The approximation relies on a second-order Taylor expansion of f around \mathbf{x}_0: f(\mathbf{x}) \approx f(\mathbf{x}_0) + \frac{1}{2} (\mathbf{x} - \mathbf{x}_0)^T H(\mathbf{x}_0) (\mathbf{x} - \mathbf{x}_0), where H(\mathbf{x}_0) is the Hessian matrix of f at \mathbf{x}_0, which must be negative definite to ensure the maximum is non-degenerate. This quadratic approximation transforms the integral into a multivariate Gaussian form, as the exponential term e^{M f(\mathbf{x})} concentrates the integrand near \mathbf{x}_0 for large M. Substituting the expansion yields \int_{\mathbb{R}^d} h(\mathbf{x}) e^{M f(\mathbf{x})} \, d^d\mathbf{x} \approx h(\mathbf{x}_0) e^{M f(\mathbf{x}_0)} \left( \frac{2\pi}{M} \right)^{d/2} \left[ \det \left( -H(\mathbf{x}_0) \right) \right]^{-1/2}, with relative error O(M^{-1}) under the stated conditions. The determinant of the negative Hessian, \det(-H(\mathbf{x}_0)), arises from evaluating the Gaussian integral \int_{\mathbb{R}^d} e^{\frac{M}{2} \delta^T H(\mathbf{x}_0) \delta} \, d^d \delta = (2\pi)^{d/2} M^{-d/2} \left[ \det(-H(\mathbf{x}_0)) \right]^{-1/2}, where \delta = \mathbf{x} - \mathbf{x}_0; this factor captures the scaling of the "volume" of the high-probability region, which behaves like the inverse square root of the determinant for the ellipsoidal level sets defined by the quadratic form. The method requires that the maximum be unique and interior, with the Hessian non-degenerate (i.e., \det(H(\mathbf{x}_0)) \neq 0) and negative definite, ensuring the integral's main contribution comes from a neighborhood of \mathbf{x}_0. These conditions guarantee the validity of the local Gaussian approximation over the entire domain. In applications, such as approximating volumes of high-dimensional regions (e.g., via integrals over indicator functions smoothed appropriately), the multivariate extension provides scaling laws where the volume grows or shrinks factorially with dimension, modulated by the determinant of the Hessian at the optimizing point.
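
The sketch below checks the multivariate formula in d = 2 for a hypothetical test function with a non-diagonal Hessian, f(x, y) = 2 - cosh x - cosh y - xy/2, whose unique maximum sits at the origin with \det(-H) = 3/4:

    import numpy as np
    from scipy.integrate import dblquad

    # f(x, y) = 2 - cosh(x) - cosh(y) - 0.5*x*y has its unique maximum at the
    # origin with f(0, 0) = 0 and Hessian H = [[-1, -0.5], [-0.5, -1]], so
    # det(-H) = 0.75 and the prediction (h = 1, d = 2) is (2*pi/M) / sqrt(0.75).
    def f(x, y):
        return 2.0 - np.cosh(x) - np.cosh(y) - 0.5 * x * y

    for M in [5, 20, 80]:
        exact, _ = dblquad(lambda y, x, M=M: np.exp(M * f(x, y)), -5, 5, -5, 5)
        approx = (2 * np.pi / M) / np.sqrt(0.75)
        print(f"M={M:3d}  ratio = {exact / approx:.6f}")  # -> 1 as M grows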

Techniques for Complex Integrals

Steepest descent method

The method of steepest descent extends Laplace's method to asymptotic approximations of integrals over contours in the complex plane, particularly for forms such as I(M) = \int_\Gamma e^{M f(z)} \, dz, where M is a large positive parameter, f(z) is analytic, and \Gamma is a suitable contour. The dominant contribution arises from saddle points z_0 where f'(z_0) = 0, and the contour is deformed to pass through such points along paths of steepest descent. These paths are defined by level curves where \operatorname{Im} f(z) is constant, ensuring that \operatorname{Re} f(z) decreases monotonically away from z_0, thereby concentrating the integrand's magnitude near the saddle. The deformation of the original contour \Gamma to a steepest descent path is justified by Cauchy's integral theorem, provided the functions involved are analytic in a simply connected domain containing both contours and free of singularities between them. If the deformation crosses poles or branch cuts, their contributions must be accounted for separately via residue calculations. Near a simple saddle point z_0 with f''(z_0) \neq 0, a local quadratic approximation is applied: f(z) \approx f(z_0) + \frac{1}{2} f''(z_0) (z - z_0)^2. Substituting a change of variables along the descent direction, parameterized by an angle \theta such that the quadratic term becomes negative real (namely \theta = \frac{1}{2} \arg\left(-\frac{1}{f''(z_0)}\right)), the integral reduces to a Gaussian form. The leading asymptotic contribution is then I(M) \sim e^{M f(z_0)} \sqrt{\frac{2\pi}{M |f''(z_0)|}} \, e^{i \phi}, where \phi accounts for the phase from the direction of descent and the argument of f''(z_0). A classic example is the asymptotic approximation of the Airy function \operatorname{Ai}(x), defined by the contour integral \operatorname{Ai}(x) = \frac{1}{2\pi i} \int_\Gamma \exp\left( \frac{t^3}{3} - x t \right) \, dt, where \Gamma runs from \infty e^{-i\pi/3} to \infty e^{i\pi/3}, passing through the origin. For large positive x, the relevant saddle point is at t_0 = \sqrt{x}, and the steepest descent contour deforms to pass through this point along the direction where \operatorname{Re}(t^3/3 - x t) decreases. The resulting approximation is \operatorname{Ai}(x) \sim \frac{1}{2 \sqrt{\pi} x^{1/4}} \exp\left( -\frac{2}{3} x^{3/2} \right) as x \to +\infty, capturing the exponential decay. For large negative x, contributions from imaginary saddles at t = \pm i \sqrt{|x|} yield oscillatory behavior, with the contour deformation highlighting the pair of saddles.
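
Since SciPy exposes the Airy function directly, the steepest-descent approximation quoted above can be checked numerically; the sketch below assumes only that asymptotic formula:

    import numpy as np
    from scipy.special import airy

    # Compare Ai(x) with its steepest-descent approximation for large x > 0:
    # Ai(x) ~ exp(-(2/3) x^{3/2}) / (2 sqrt(pi) x^{1/4}).
    for x in [2.0, 5.0, 10.0, 20.0]:
        ai, _, _, _ = airy(x)  # airy returns (Ai, Ai', Bi, Bi')
        approx = np.exp(-2.0 / 3.0 * x**1.5) / (2.0 * np.sqrt(np.pi) * x**0.25)
        print(f"x={x:5.1f}  Ai={ai:.3e}  saddle-point={approx:.3e}  ratio={ai / approx:.5f}")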

Saddle-point approximations

Saddle-point approximations extend the asymptotic analysis of integrals of the form \int e^{M f(\mathbf{x})} g(\mathbf{x}) \, d\mathbf{x}, where M \to \infty, to cases where the dominant contribution arises from a stationary point \mathbf{x}_0 satisfying \nabla f(\mathbf{x}_0) = 0, but the Hessian H = \nabla^2 f(\mathbf{x}_0) is not necessarily negative definite. In such scenarios, the real part of f may have directions of ascent and descent, leading to a saddle configuration rather than a strict local maximum. The method identifies principal directions aligned with the eigenvectors of the Hessian, allowing the integral to be approximated by factoring the contribution into independent Gaussian integrals along these directions, accounting for the eigenvalues \lambda_k that govern the curvature. For the leading-order approximation near a simple saddle point, the rotation to the principal axes diagonalizes the Hessian in the quadratic expansion f(\mathbf{x}) \approx f(\mathbf{x}_0) + \frac{1}{2} \sum_k \lambda_k u_k^2, where u_k are coordinates along the eigenvectors. The asymptotic form then becomes \int e^{M f(\mathbf{x})} g(\mathbf{x}) \, d\mathbf{x} \sim e^{M f(\mathbf{x}_0)} g(\mathbf{x}_0) \prod_{k=1}^d \sqrt{\frac{2\pi}{M |\lambda_k|}} \exp\left(i \sum_k \frac{\arg(\lambda_k)}{2}\right), with the product over the d dimensions, and phase factors \arg(\lambda_k) arising in complex or oscillatory settings to capture the direction of steepest descent. This Gaussian factorization holds when the real part of the quadratic form is positive definite in the descent directions, ensuring convergence, and the prefactor g(\mathbf{x}) is evaluated at the saddle point. In degenerate cases with zero eigenvalues, higher-order terms or uniform approximations may be needed, but the standard form assumes non-degenerate saddles. Unlike Laplace's method, which requires a strict interior maximum where the Hessian is negative definite (all \lambda_k < 0) for the exponent's real part, saddle-point approximations apply more broadly to points with mixed eigenvalue signs or flat directions, enabling analysis of integrals without a global maximum on the real line. This generality is particularly useful for oscillatory integrals, where the method aligns with the stationary phase approximation by locating saddles that balance phase stationarity and amplitude decay. A classic example is the asymptotic expansion of the Bessel function J_n(z) for fixed order n and large argument |z| with |\arg z| < \pi/2, derived from the saddle-point evaluation of its integral representation J_n(z) = \frac{1}{2\pi} \int_0^{2\pi} e^{i(z \sin \theta - n \theta)} \, d\theta. The dominant saddles yield J_n(z) \sim \sqrt{\frac{2}{\pi z}} \cos\left(z - \frac{(2n+1)\pi}{4}\right) as |z| \to \infty, capturing the oscillatory behavior through the phase contributions from the saddle points.
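
The corresponding check for the Bessel asymptotics, using scipy.special.jv as the reference implementation, is sketched below:

    import numpy as np
    from scipy.special import jv

    # Compare J_n(z) with the two-saddle asymptotic formula for large real z:
    # J_n(z) ~ sqrt(2/(pi z)) * cos(z - (2n+1) pi / 4).
    n = 2
    for z in [10.0, 50.0, 200.0]:
        exact = jv(n, z)
        approx = np.sqrt(2.0 / (np.pi * z)) * np.cos(z - (2 * n + 1) * np.pi / 4)
        print(f"z={z:6.1f}  J_n={exact:+.6f}  asymptotic={approx:+.6f}")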

Advanced Generalizations

Nonlinear and Riemann-Hilbert generalizations

The nonlinear steepest descent method, introduced by Deift and Zhou in 1993, extends the classical steepest descent technique to analyze the asymptotics of oscillatory Riemann-Hilbert (RH) problems arising in various nonlinear settings. This approach systematically deforms the contours of integration in the complex plane to directions where the oscillatory behavior is damped exponentially, enabling precise asymptotic expansions as a large parameter tends to infinity. Central to the method is the construction of g-functions, which serve as phase functions to rescale and normalize the problem, transforming the original oscillatory RH formulation into a sequence of more tractable model problems. Local parametrices, approximate solutions built near critical points using special functions such as Airy or Bessel functions, are then employed to capture the behavior in neighborhoods where the asymptotics are singular, ensuring uniform validity across the domain. A key innovation lies in the lens decomposition of contours, which divides the complex plane into regions bounded by "lenses" to isolate oscillatory regions and facilitate contour deformation along paths of steepest descent. This decomposition allows for the global rescaling of the RH problem while providing uniform approximations near turning points, where the phase function's real part exhibits stationary behavior. The method proceeds through a series of iterative transformations: first, the introduction of the g-function to exponentiate away oscillations; second, the lens-based contour adjustment; third, the insertion of local parametrices; and finally, the solution of a normalized RH problem with small jump matrices, yielding asymptotic expansions via a Newton-type error analysis. This framework surpasses linear steepest descent by handling nonlinear interactions inherent in RH formulations, making it applicable to problems with multiple scales and turning points. The Deift-Zhou method has found extensive applications in random matrix theory, particularly for determining the asymptotic eigenvalue distributions in ensembles like the Gaussian unitary ensemble, where RH problems characterize orthogonal polynomials and their zeros. In soliton theory, it provides long-time asymptotics for solutions to integrable equations such as the Korteweg-de Vries, modified Korteweg-de Vries, and nonlinear Schrödinger equations, revealing dispersive decay and sectorial behaviors in the complex plane. Additionally, it addresses combinatorial asymptotics, including the evaluation of partition functions through the analysis of generating functions in statistical mechanics models. A representative example is the asymptotics of Hankel determinants, which arise in the study of orthogonal polynomials with varying weights and can be reformulated as RH problems; the method yields explicit leading-order terms involving exponential growth rates tied to the equilibrium measure on the support of the weight. Similarly, for Toeplitz determinants with Fisher-Hartwig singularities, the approach delivers uniform asymptotics, connecting to combinatorial counts in tilings and dimer models.

Median-point approximation

The median-point approximation represents a 2019 generalization of Laplace's method tailored for evaluating certain real integrals that emerge in the context of fermionic partition functions. Introduced by Makogon and Morais Smith, it addresses limitations of the mode-based expansion in standard Laplace approximations by instead aligning the integrand with the median of a reference Gaussian distribution. This method applies particularly to integrals of the form \int e^{-g(x) - (\gamma/2) y^2(x)} y'(x) \, dx, where g(x) is a smooth function, \gamma > 0 scales the quadratic term, and y(x) is a monotonic transformation incorporating a prefactor via its derivative. The approximation proceeds by identifying the median m of the normalized integrand, defined as the point where the cumulative integral from the lower limit reaches half the total value, \int_{-\infty}^{m} e^{-g(x) - (\gamma/2) y^2(x)} y'(x) \, dx = \frac{1}{2} \int_{-\infty}^{\infty} e^{-g(x) - (\gamma/2) y^2(x)} y'(x) \, dx. A local quadratic expansion of the exponent around m then yields the leading asymptotic estimate, analogous to the Gaussian form but centered at the median for improved symmetry matching. Compared to Laplace's method, which expands around the mode (the maximum of the integrand), the median-point approach provides greater accuracy for skewed or heavy-tailed densities, where the mode and median diverge significantly, as often occurs in systems with non-Gaussian fluctuations. This makes it particularly suitable for capturing asymmetries in effective actions without requiring higher-order corrections that can complicate mode-based analyses. In applications to fermionic many-body problems, the median-point approximation facilitates efficient evaluation of fermionic path integrals, such as those arising after Hubbard-Stratonovich decoupling, enabling studies of phase transitions and instabilities with finite effective interactions even near critical points.
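
The defining median condition can be located numerically. The sketch below is only a generic illustration of that step, using a hypothetical skewed integrand w(x) = e^{-(x + e^{-x})} (a Gumbel density) rather than the fermionic setting of the original formulation:

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    # Skewed integrand w(x) = exp(-(x + exp(-x))), a Gumbel density with total
    # mass 1 and mode at x = 0; the median point m solves CDF(m) = 1/2.
    def w(x):
        return np.exp(-(x + np.exp(-x)))

    total, _ = quad(w, -np.inf, np.inf)
    cdf = lambda m: quad(w, -np.inf, m)[0] / total

    m = brentq(lambda m: cdf(m) - 0.5, -5.0, 5.0)  # root of CDF(m) - 1/2
    print(f"total mass = {total:.6f}, mode = 0, median point m = {m:.6f}")
    # analytically m = -log(log(2)) ~ 0.3665, distinctly away from the mode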

Applications

Stirling's approximation

Stirling's approximation provides an asymptotic formula for the factorial n! as n becomes large, derived by applying Laplace's method to the integral representation of the gamma function. The factorial is related to the gamma function by n! = \Gamma(n+1), where \Gamma(n+1) is given by the Euler integral \Gamma(n+1) = \int_0^\infty x^n e^{-x} \, dx for n > -1. To apply Laplace's method, substitute x = n t, so dx = n \, dt, transforming the integral to n! = n^{n+1} \int_0^\infty e^{n (\log t - t)} \, dt. This is of the form \int_0^\infty e^{n f(t)} \, dt with f(t) = \log t - t, which attains its maximum at t_0 = 1 where f(1) = -1 and f''(1) = -1. The leading-order approximation from Laplace's method for an interior maximum yields \int_0^\infty e^{n f(t)} \, dt \approx \sqrt{\frac{2\pi}{n |f''(1)|}} e^{n f(1)} = \sqrt{\frac{2\pi}{n}} e^{-n}. Thus, n! \sim \sqrt{2\pi n} \left(\frac{n}{e}\right)^n. The contribution to the integral from the boundary at t=0 is asymptotically negligible for large n, as the integrand decays rapidly away from the maximum at t=1, allowing the lower limit to be effectively extended to -\infty without altering the approximation. Higher-order terms in the asymptotic expansion arise from including the cubic and higher terms in the Taylor expansion of f(t) around t=1, leading to the Stirling series in logarithmic form: \log n! \approx n \log n - n + \frac{1}{2} \log(2\pi n) + \frac{1}{12n} - \frac{1}{360 n^3} + \cdots. This series provides successively more accurate approximations for \log n!, with the \frac{1}{12n} term originating from the cubic and quartic terms in the Taylor expansion.
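
The quality of the leading term, and of the first correction 1/(12n), is easy to verify numerically, as in the sketch below:

    import numpy as np
    from math import factorial

    # Leading Stirling term sqrt(2*pi*n) (n/e)^n and the first correction 1/(12n).
    for n in [5, 10, 20, 50]:
        exact = float(factorial(n))
        stirling = np.sqrt(2 * np.pi * n) * (n / np.e) ** n
        corrected = stirling * (1 + 1 / (12 * n))
        print(f"n={n:3d}  n!/Stirling = {exact / stirling:.6f}  "
              f"with 1/(12n) term: {exact / corrected:.8f}")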

Modern uses in statistics and physics

In Bayesian statistics, Laplace's method provides a Gaussian approximation of the posterior distribution p(\theta \mid \text{data}) by fitting a Gaussian centered at the mode of the posterior, leveraging a second-order Taylor expansion of the log-posterior around this maximum a posteriori estimate. This approach simplifies inference in complex models where exact computation is intractable, enabling efficient estimation of posterior moments and predictive distributions. It forms the basis for advanced techniques such as variational inference, where the Gaussian serves as a starting point for optimizing variational lower bounds, and the integrated nested Laplace approximation (INLA), which extends the method to latent Gaussian models for rapid approximate inference in spatial and spatiotemporal contexts. INLA, in particular, has been widely adopted for modeling spatial data, such as disease mapping or environmental processes, by combining nested Laplace approximations with stochastic partial differential equation (SPDE) representations of Gaussian fields. In machine learning, Laplace's approximation aids in estimating marginal likelihoods, which are crucial for model selection and hyperparameter tuning in probabilistic models. For instance, extensions like variational Laplace autoencoders integrate Laplace approximations to improve inference in deep generative models such as variational autoencoders by providing a Gaussian fit to the posterior over latent variables. Similarly, Laplace methods can enhance marginal likelihood computations in models involving normalizing flows by offering scalable Gaussian approximations to intractable posteriors. These applications reduce computational overhead compared to sampling-based methods while maintaining accuracy in tasks such as density estimation and anomaly detection. In physics, Laplace's method underpins semiclassical approximations in path integrals, where instantons (classical solutions representing tunneling events) emerge as saddle points in the Euclidean action, with fluctuations around these paths captured by a Gaussian integral akin to the Laplace approximation. This semiclassical technique quantifies tunneling effects, such as decay rates of metastable states, by evaluating the action at the instanton and integrating quadratic fluctuations. In statistical mechanics, the method approximates partition functions Z = \int e^{-\beta H(\phi)} d\phi as Gaussians centered at the energy minimum, yielding insights into thermodynamic properties like free energies in systems ranging from Ising models to disordered materials. Recent developments in the 2020s have expanded Laplace's role in deep learning for uncertainty quantification, where post-hoc approximations around trained weights provide epistemic uncertainty estimates without full retraining, as seen in scalable libraries like laplax that enable efficient computations for neural network posteriors. In climate modeling, INLA has been integrated with spatiotemporal Gaussian processes to interpolate variables like temperature and precipitation, fusing sparse observations with forecast data for improved predictions under uncertainty. These advances highlight Laplace's versatility in handling large-scale, data-intensive problems across disciplines.
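
As a minimal illustration of the statistical use, the sketch below applies a one-dimensional Laplace approximation to the marginal likelihood of a hypothetical toy model (Bernoulli likelihood with a standard normal prior on the log-odds; the curvature at the mode is estimated by finite differences), compared against direct numerical integration:

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    # Hypothetical toy model: k successes in n Bernoulli trials with a standard
    # normal prior on the log-odds theta.  The Laplace approximation fits a
    # Gaussian at the posterior mode and estimates the marginal likelihood Z.
    k, n = 7, 10

    def log_post(theta):  # unnormalized log-posterior
        return k * theta - n * np.log1p(np.exp(theta)) - 0.5 * theta**2

    mode = minimize_scalar(lambda t: -log_post(t)).x
    h = 1e-5  # finite-difference estimate of -d^2/dtheta^2 log_post at the mode
    curv = -(log_post(mode + h) - 2 * log_post(mode) + log_post(mode - h)) / h**2

    laplace_Z = np.exp(log_post(mode)) * np.sqrt(2 * np.pi / curv)
    exact_Z, _ = quad(lambda t: np.exp(log_post(t)), -20, 20)
    print(f"mode = {mode:.4f}, Laplace Z = {laplace_Z:.6e}, numeric Z = {exact_Z:.6e}")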
