
Leading-order term

In mathematics, particularly within asymptotic analysis, the leading-order term refers to the dominant term in an asymptotic expansion or series that provides the primary contribution to the behavior of a function or expression as a parameter approaches a limit, such as zero or infinity, with subsequent terms becoming negligible in comparison. This term is the first non-zero component in an asymptotic series, where higher-order terms vanish more rapidly relative to it, enabling simplified models for complex problems. The concept is fundamental in fields like perturbation theory and fluid dynamics, where exact solutions are often intractable, and approximations are derived by identifying the balance of dominant terms, a procedure known as the method of dominant balance, to capture essential dynamics. For instance, in solving differential equations with a small parameter \epsilon, the leading-order analysis assumes the solution scales such that the leading term satisfies a reduced equation, ignoring smaller contributions until higher accuracy is needed. Asymptotic expansions formalize this through sequences of functions \{\phi_n\} where \phi_{n+1} = o(\phi_n) as the limit is approached, with the leading-order term given by a_0 \phi_0(x). This approach is widely used in physics and engineering, such as in fluid mechanics or aerodynamics, to derive simplified models like the Euler equations from the Navier-Stokes equations by treating viscosity as a higher-order effect.

Fundamentals

Definition

In asymptotic analysis, functions or solutions to equations are approximated by series expansions that capture their behavior as a small parameter, denoted ε, approaches zero, providing methods for handling small perturbations in mathematical models. These approximations are particularly useful when exact solutions are intractable, allowing one to identify dominant behaviors without solving the full problem. The leading-order term refers to the dominant contribution in such an expansion of a function f(ε) as ε → 0, representing the primary term that governs the function's behavior in this limit. It is the first non-zero term in the series, with all preceding terms being zero, ensuring it is the largest in magnitude compared to subsequent corrections. Formally, if f(ε) admits an asymptotic expansion of the form
f(\varepsilon) \sim \sum_{n=0}^\infty a_n \varepsilon^n
as ε → 0, then the leading-order term is a_k ε^k, where k is the smallest non-negative integer such that a_k ≠ 0. This means that f(ε) - a_k ε^k = o(ε^k) as ε → 0, capturing the essential scaling of f(ε).
Common notation in asymptotic analysis includes the symbol ~ for asymptotic equivalence, where f(ε) ~ g(ε) indicates that f(ε)/g(ε) → 1 as ε → 0, often used to denote that g(ε) is the leading-order approximation to f(ε). Additionally, big-O notation O(ε^m) describes an upper bound on the rate of growth or decay, signifying that the magnitude is bounded by a constant times |ε|^m for sufficiently small ε > 0, while little-o notation o(ε^m) implies the term vanishes faster than ε^m. These conventions facilitate precise statements about the accuracy of asymptotic approximations.
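
To make the definition concrete, the leading-order term of a simple function can be extracted symbolically. The following sketch, assuming the SymPy library (the helper name leading_term is an illustrative choice, not a standard API), expands a function about ε = 0 and returns the lowest-order non-vanishing term, matching the a_k ε^k characterization above.

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)

def leading_term(expr, var, max_order=10):
    """Return the lowest-order non-vanishing term of expr's series about var = 0."""
    poly = sp.Poly(sp.series(expr, var, 0, max_order).removeO(), var)
    k = min(m[0] for m in poly.monoms())      # smallest exponent with a_k != 0
    return poly.coeff_monomial(var**k) * var**k

print(leading_term(sp.sin(eps), eps))         # epsilon
print(leading_term(1 - sp.cos(eps), eps))     # epsilon**2/2
# SymPy's built-in expr.as_leading_term(eps) gives the same results directly.
```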

Asymptotic Series Expansion

In asymptotic analysis, an asymptotic power series provides a formal expansion of a function f(\epsilon) in powers of a small parameter \epsilon, typically expressed as f(\epsilon) = \sum_{n=k}^{\infty} a_n \epsilon^n + O(\epsilon^m) \quad \text{as} \quad \epsilon \to 0, where k is the lowest order (possibly negative or zero), the coefficients a_n are constants independent of \epsilon, and m > k specifies the remainder term's order. This representation captures the function's behavior near \epsilon = 0, but unlike Taylor series, it may diverge for any nonzero \epsilon, meaning the infinite sum does not converge to f(\epsilon). In contrast, convergent series approach the exact value as more terms are added, whereas asymptotic series are useful precisely because their partial sums approximate f(\epsilon) increasingly well up to an optimal truncation point before diverging. Truncation at the leading order involves retaining only the first term a_k \epsilon^k, yielding the approximation f(\epsilon) \approx a_k \epsilon^k, which dominates the function's behavior as \epsilon \to 0. The error in this approximation is then bounded by the subsequent term, satisfying |f(\epsilon) - a_k \epsilon^k| = O(\epsilon^{k+1}). This leading-order truncation is particularly effective when higher powers of \epsilon become negligible, providing a simple yet accurate description for small perturbations.

The origins of asymptotic series trace back to Henri Poincaré's work in the late nineteenth century, where he developed these expansions to study solutions of differential equations in celestial mechanics, particularly for analyzing the stability of planetary orbits in the three-body problem. Poincaré's seminal 1886 paper in Acta Mathematica formalized the theory, drawing analogies to divergent series such as Stirling's series, and his multi-volume Les méthodes nouvelles de la mécanique céleste (1892–1899) integrated them into perturbation theory for gravitational systems.

A key property of asymptotic series is their validity strictly in the limit \epsilon \to 0; the approximation holds for sufficiently small \epsilon, but the error does not decrease uniformly across all \epsilon in a fixed interval away from zero. This non-uniformity distinguishes them from convergent expansions and underscores their role in approximation methods rather than exact representations.
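
A minimal numerical sketch of these properties, assuming SciPy is available for the exact reference value, uses the exponential integral E_1(x), whose large-x expansion E_1(x) \sim \frac{e^{-x}}{x} \sum_{n=0}^{\infty} (-1)^n n! / x^n is a classic divergent asymptotic series: the one-term (leading-order) truncation is already a useful approximation, the error shrinks up to an optimal truncation order of roughly n \approx x, and it then grows as the series diverges.

```python
import math
from scipy.special import exp1   # exact exponential integral E_1(x) for comparison

def e1_asymptotic(x, n_terms):
    """Partial sum of E_1(x) ~ (e^{-x}/x) * sum_n (-1)^n n! / x^n (divergent series)."""
    s = sum((-1) ** n * math.factorial(n) / x ** n for n in range(n_terms))
    return math.exp(-x) / x * s

x = 5.0
exact = exp1(x)
for n_terms in (1, 2, 3, 5, 8, 12, 20):
    err = abs(e1_asymptotic(x, n_terms) - exact)
    print(f"{n_terms:2d} terms: absolute error = {err:.2e}")
# The error decreases until roughly n ~ x terms and then increases again.
```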

Core Concepts

Leading-Order Behavior

In asymptotic analysis, the leading-order term exerts behavioral dominance by governing the primary qualitative and quantitative characteristics of a system as the small perturbation parameter \epsilon approaches zero, rendering higher-order contributions negligible. This dominance manifests in the scaling of variables and the determination of dominant balances, where the leading term dictates the essential structure of solutions, such as in singular perturbation problems where qualitative changes occur abruptly. For instance, in reduced-order models, non-leading terms are systematically dropped to yield simplified dynamics, often transforming complex differential equations into algebraic relations or lower-dimensional approximations that capture the core trends and stability properties of the system. The leading-order behavior ensures that equilibria and long-term trends align closely with the unperturbed limit, as the term's influence preserves key dynamical features like stability or decay rates under small perturbations. As \epsilon \to 0, this reveals how systems evolve toward their limiting behavior, prioritizing the scaling laws that emerge from the principal contributions in the asymptotic expansion. Such simplifications are particularly valuable in perturbative regimes, where they provide tractable models for analysis without the full computational burden of higher-fidelity descriptions. Error analysis of the leading-order term confirms its reliability for small \epsilon, with the relative error bounded by O(\epsilon), meaning the approximation deviates from the exact solution by a factor proportional to the perturbation size itself. This O(\epsilon) accuracy underscores the method's effectiveness in regimes where precision beyond the dominant scale is unnecessary. Conceptually, leading-order asymptotics differ from exact solutions by emphasizing physical or mathematical insight over numerical exactitude, offering formal series that may not converge but deliver profound understanding of system behavior in limiting cases, as formalized in foundational works on asymptotic expansions.
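
The O(\epsilon) relative-error scaling can be illustrated with a quick numerical check; the function f(\epsilon) = \sqrt{1 + \epsilon}, whose leading-order term as \epsilon \to 0 is 1, is an arbitrary choice for this sketch. Halving \epsilon roughly halves the relative error, so the ratio of error to \epsilon stays approximately constant (about 1/2 here).

```python
import math

def relative_error(eps):
    """Relative error of the leading-order approximation sqrt(1 + eps) ≈ 1."""
    exact = math.sqrt(1.0 + eps)
    return abs(exact - 1.0) / exact

for eps in (0.1, 0.05, 0.025, 0.0125):
    err = relative_error(eps)
    print(f"eps = {eps:.4f}: relative error = {err:.5f}, error/eps = {err / eps:.3f}")
```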

Next-to-Leading Order

In asymptotic analysis, the next-to-leading order (NLO) term refers to the subdominant contribution immediately following the leading-order term in a perturbation expansion, typically expressed as a_{k+1} \epsilon^{k+1} after the dominant a_k \epsilon^k, where \epsilon is a small parameter and k is the order of the leading term. This term provides the first-order correction, refining the approximation by accounting for effects that become relevant as \epsilon increases slightly from negligible values. For instance, in regular perturbation theory, the solution is expanded as x(\epsilon) = x_0 + \epsilon x_1 + O(\epsilon^2), where x_1 constitutes the NLO correction derived by matching coefficients in the perturbed equation. The NLO term is particularly useful in scenarios where the leading order yields insufficient accuracy, such as when \epsilon is moderately small (e.g., on the order of 0.1) and higher precision is needed without resorting to full numerical solutions. In such cases, the leading-order behavior, which dominates for very small \epsilon, may overlook subtle influences that affect quantitative predictions, prompting the inclusion of NLO to balance computational simplicity with improved fidelity. This approach is common in applied mathematics and physics, where initial approximations must be iteratively enhanced for practical reliability. Computationally, the NLO contribution is incorporated systematically via perturbation series methods: the assumed expansion f(\epsilon) \approx a_k \epsilon^k + a_{k+1} \epsilon^{k+1} is substituted into the original equation, and terms are collected by powers of \epsilon to solve for the coefficients sequentially. This equating of coefficients ensures the approximation satisfies the equation up to the desired order, often involving algebraic manipulations or integral evaluations in more complex cases like Laplace's method. For example, in the Liouville-Green approximation for differential equations, the NLO term arises from iterative corrections to the phase function. However, the NLO approximation retains the asymptotic nature of the series, meaning it is valid only in the limit as \epsilon \to 0 and does not yield an exact solution; errors can accumulate if \epsilon is not sufficiently small, necessitating higher-order terms for enhanced precision in demanding applications. Additionally, the series may exhibit non-uniform validity or require special techniques like resummation to handle divergences at higher orders.
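
The coefficient-matching procedure can be sketched symbolically. The example below is an illustrative choice rather than one drawn from the text above: the root of the perturbed algebraic equation x^2 + \epsilon x - 1 = 0 near x_0 = 1 is expanded as x(\epsilon) = x_0 + \epsilon x_1 + O(\epsilon^2), and the coefficients are solved order by order, giving the leading-order root x_0 = 1 and the NLO correction x_1 = -1/2.

```python
import sympy as sp

eps, x0, x1 = sp.symbols('epsilon x0 x1')

# Ansatz x = x0 + eps*x1 substituted into the perturbed equation x**2 + eps*x - 1 = 0
x = x0 + eps * x1
residual = sp.expand(x**2 + eps * x - 1)

eq_order0 = residual.coeff(eps, 0)     # x0**2 - 1 = 0      (leading order)
eq_order1 = residual.coeff(eps, 1)     # 2*x0*x1 + x0 = 0   (next-to-leading order)

x0_val = max(sp.solve(eq_order0, x0))                 # choose the root near +1
x1_val = sp.solve(eq_order1.subs(x0, x0_val), x1)[0]

print(x0_val, x1_val)                  # 1  -1/2, i.e. x(eps) ≈ 1 - eps/2
```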

Illustrative Examples

Basic Mathematical Example

A straightforward mathematical illustration of the leading-order term involves the Taylor expansion of the function f(\epsilon) = \sin(\epsilon) as \epsilon \to 0. As defined earlier, the leading-order term is the dominant, non-vanishing contribution in the expansion that captures the primary behavior for small \epsilon. The expansion of \sin(\epsilon) centered at \epsilon = 0 is \sin(\epsilon) = \epsilon - \frac{\epsilon^3}{3!} + \frac{\epsilon^5}{5!} - \frac{\epsilon^7}{7!} + \cdots. To derive this step by step, evaluate the function and its derivatives at \epsilon = 0: f(0) = 0, f'(\epsilon) = \cos(\epsilon) so f'(0) = 1, f''(\epsilon) = -\sin(\epsilon) so f''(0) = 0, f'''(\epsilon) = -\cos(\epsilon) so f'''(0) = -1, and the higher even-order derivatives vanish at zero while the odd-order ones alternate in sign. Substituting into the Taylor series formula yields the series above. The lowest power of \epsilon with a non-zero coefficient is \epsilon^1, making \epsilon the leading-order term. Thus, the leading-order approximation is \sin(\epsilon) \approx \epsilon, with the error (remainder) bounded by O(\epsilon^3) as \epsilon \to 0. This approximation aligns closely with the actual function near \epsilon = 0, where the line y = \epsilon overlaps the sine curve almost perfectly for |\epsilon| \ll 1. However, as |\epsilon| grows larger, such as at \epsilon = 1, where \sin(1) \approx 0.841 while the approximation gives 1, the deviation becomes noticeable, with the sine function curving away due to its oscillatory nature while the linear term continues unbounded. This example highlights the pedagogical value of leading-order terms in calculus, demonstrating their straightforward application in contexts like computing limits (e.g., \lim_{\epsilon \to 0} \frac{\sin(\epsilon)}{\epsilon} = 1) or analyzing series behavior, where retaining only the dominant term provides sufficient accuracy without complexity.
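
A short numerical tabulation (plain Python, no additional assumptions) makes the size of the error concrete, confirming that it tracks the next term \epsilon^3/6 for small \epsilon and becomes noticeable by \epsilon = 1.

```python
import math

# Compare sin(eps) with its leading-order approximation eps
for eps in (0.01, 0.1, 0.5, 1.0):
    exact = math.sin(eps)
    error = abs(exact - eps)
    print(f"eps = {eps:4.2f}: sin(eps) = {exact:.6f}, leading order = {eps:.6f}, "
          f"error = {error:.2e}  (eps^3/6 = {eps**3 / 6:.2e})")
```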

Physical System Example

A prominent physical example of the leading-order term arises in the analysis of a simple pendulum, consisting of a mass m suspended from a fixed pivot by a massless rod of length L, undergoing small angular oscillations under gravity. The full nonlinear equation of motion, derived from torque considerations, is \theta'' + \frac{g}{L} \sin \theta = 0, where \theta is the angular displacement from the vertical, g is the gravitational acceleration, and primes denote time derivatives. For small initial angles \theta \ll 1 (typically |\theta| \lesssim 10^\circ), the leading-order approximation employs the Taylor expansion \sin \theta = \theta - \frac{\theta^3}{6} + O(\theta^5), retaining only the \theta term to yield the linearized equation \theta'' + \frac{g}{L} \theta = 0. This describes simple harmonic motion with angular frequency \omega = \sqrt{g/L}, resulting in oscillatory solutions \theta(t) = \theta_0 \cos(\omega t + \phi) and a period T \approx 2\pi \sqrt{L/g}, independent of amplitude and mass to leading order. The exact period requires evaluation of the complete elliptic integral of the first kind: T = 4 \sqrt{L/g} \, K\left(\sin^2(\theta_0/2)\right), where K(m) is the complete elliptic integral of the first kind with parameter m and \theta_0 is the maximum angular displacement. The asymptotic expansion of this period for small \theta_0 is T = 2\pi \sqrt{L/g} \left[1 + \frac{1}{16} \theta_0^2 + O(\theta_0^4)\right], confirming that the leading-order approximation incurs a relative error of order \theta_0^2. This leading-order pendulum approximation is a staple in introductory physics education, demonstrating how asymptotic expansions transform intractable nonlinear dynamics into tractable linear models, enabling analytical solutions and insights into oscillatory behavior in systems like clocks and seismometers.
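
The size of the leading-order error can be checked numerically. The sketch below assumes SciPy's ellipk, which evaluates the complete elliptic integral K(m) in the parameter convention m = \sin^2(\theta_0/2); the choices of g and L are arbitrary. It compares the exact period with the small-angle result 2\pi\sqrt{L/g} and with the first correction factor 1 + \theta_0^2/16.

```python
import math
from scipy.special import ellipk   # complete elliptic integral K(m), parameter convention

g, L = 9.81, 1.0
T0 = 2 * math.pi * math.sqrt(L / g)             # leading-order (small-angle) period

for theta0_deg in (5, 10, 20, 45):
    theta0 = math.radians(theta0_deg)
    T_exact = 4 * math.sqrt(L / g) * ellipk(math.sin(theta0 / 2) ** 2)
    T_nlo = T0 * (1 + theta0 ** 2 / 16)         # includes the first correction term
    print(f"theta0 = {theta0_deg:2d} deg: exact {T_exact:.5f} s, "
          f"leading order {T0:.5f} s, corrected {T_nlo:.5f} s")
```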

Applications

Matched Asymptotic Expansions

Matched asymptotic expansions address singular perturbation problems where standard regular perturbation methods fail due to non-uniform behavior, such as in boundary layer problems. In these scenarios, the solution exhibits rapid changes in thin regions near the domain boundaries, requiring separate asymptotic expansions for different domains: an outer expansion valid away from the boundary and an inner expansion valid within the boundary layer. For instance, consider the singularly perturbed equation \epsilon y'' + y' + y = 0 with boundary conditions y(0) = 0 and y(1) = 1, where \epsilon \ll 1. The outer solution, obtained by setting \epsilon = 0, yields y_{\text{outer}}^{(0)}(x) = e^{1 - x}, which satisfies the boundary condition at x=1 but fails at x=0 since e^{1 - 0} = e \neq 0. To capture the boundary layer near x=0, introduce a stretched inner variable \eta = x / \epsilon, transforming the equation to Y'' + Y' + \epsilon Y = 0, where Y(\eta) = y(\epsilon \eta). At leading order, this simplifies to Y'' + Y' = 0, giving Y_{\text{inner}}^{(0)}(\eta) = a (1 - e^{-\eta}), which satisfies the boundary condition at \eta = 0 for any constant a.

The matching principle ensures consistency between the inner and outer expansions in an overlap region where \epsilon \ll x \ll 1, so both expansions are valid and their leading-order terms must asymptotically agree. For the example, the outer expansion as x \to 0^+ behaves as y_{\text{outer}}^{(0)}(x) \sim e, while the inner expansion as \eta \to \infty is Y_{\text{inner}}^{(0)}(\eta) \sim a. Matching requires a = e, yielding the leading-order inner solution Y_{\text{inner}}^{(0)}(\eta) = e (1 - e^{-\eta}), which agrees with the outer solution e^{1 - x} in the overlap region to leading order. This principle, rooted in the leading-order behavior where dominant terms balance, guarantees a uniform approximation across the domain.

The composite expansion combines the inner and outer solutions to form a uniformly valid approximation: y(x) \approx y_{\text{outer}}(x) + y_{\text{inner}}(\eta) - y_{\text{match}}, where y_{\text{match}} is the common leading-order term from the matching region, often evaluated at an intermediate variable c with \epsilon^\alpha \ll c \ll 1 for 0 < \alpha < 1. In the example, this yields y(x) \approx e^{1 - x} + e (1 - e^{-x/\epsilon}) - e = e^{1 - x} - e \, e^{-x/\epsilon}, which satisfies the boundary condition at x = 0 exactly, satisfies the condition at x = 1 up to an exponentially small error, and approximates the exact solution well for small \epsilon. This additive form avoids double-counting the matched part, providing accuracy up to O(\epsilon) outside the layer.

The technique originated with Ludwig Prandtl's 1904 work on boundary layers in fluid flow, where he introduced the concept of a thin viscous layer near solid surfaces to reconcile inviscid outer flow with no-slip boundary conditions, laying the foundation for matched expansions despite the initial lack of formal asymptotic rigor. Modern formalizations, such as those using multiple scales, extend Prandtl's ideas but were not fully developed until the mid-20th century.
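
Because this constant-coefficient example has a closed-form solution, obtained from the roots of the characteristic equation \epsilon r^2 + r + 1 = 0, the accuracy of the leading-order composite approximation can be checked directly. The following sketch (plain Python; \epsilon = 0.05 is an arbitrary illustrative value) compares the two.

```python
import math

eps = 0.05

# Exact solution of eps*y'' + y' + y = 0 with y(0) = 0, y(1) = 1 via characteristic roots
disc = math.sqrt(1 - 4 * eps)
r1, r2 = (-1 + disc) / (2 * eps), (-1 - disc) / (2 * eps)
A = 1.0 / (math.exp(r1) - math.exp(r2))          # fixes y(1) = 1 given y(0) = 0

def y_exact(x):
    return A * (math.exp(r1 * x) - math.exp(r2 * x))

def y_composite(x):
    # Leading-order composite: outer e^{1-x} plus boundary-layer correction
    return math.exp(1 - x) - math.e * math.exp(-x / eps)

for x in (0.0, 0.05, 0.2, 0.5, 1.0):
    print(f"x = {x:4.2f}: exact {y_exact(x):.4f}, composite {y_composite(x):.4f}")
```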

Simplifying the Navier–Stokes Equations

In high-Reynolds-number flows, the Navier–Stokes equations are simplified using leading-order asymptotic approximations by treating the inverse Reynolds number \epsilon = 1/\mathrm{Re} as a small parameter. As \epsilon \to 0, the viscous term \epsilon \Delta \mathbf{u} becomes negligible outside thin boundary layers near solid surfaces, leading to a balance dominated by inertial and pressure gradient terms. This yields the inviscid Euler equations for the outer flow region: \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla) \mathbf{u} = -\nabla p, \quad \nabla \cdot \mathbf{u} = 0, where \mathbf{u} is the velocity and p is the pressure (non-dimensionalized by density). The derivation involves non-dimensionalizing the incompressible Navier–Stokes equations: \frac{\partial \mathbf{u}^\epsilon}{\partial t} + (\mathbf{u}^\epsilon \cdot \nabla) \mathbf{u}^\epsilon + \nabla p^\epsilon = \epsilon \Delta \mathbf{u}^\epsilon, \quad \nabla \cdot \mathbf{u}^\epsilon = 0, and expanding solutions in powers of \epsilon. At leading order O(1), the viscous diffusion term is dropped, recovering the Euler equations away from boundaries, where solutions converge in appropriate norms under suitable conditions. Within boundary layers, scaled by \delta \sim \epsilon^{1/2}, viscous effects balance convection, leading to the Prandtl boundary layer equations as the inner leading-order approximation. This matched asymptotic structure resolves the no-slip boundary condition while remaining consistent with the inviscid outer flow.

A canonical example is the steady laminar boundary layer over a flat plate at zero incidence, known as the Blasius problem. Assuming similarity in the streamwise direction, the leading-order inner equations reduce to the third-order ordinary differential equation f''' + \frac{1}{2} f f'' = 0, with boundary conditions f(0) = f'(0) = 0 and f'(\infty) = 1, where f is the dimensionless stream function. This Blasius equation describes the velocity profile, with skin friction scaling as \mathrm{Re}^{-1/2}. The Blasius solution is a special case (\beta = 0) of the more general Falkner–Skan equation, f''' + f f'' + \beta (1 - (f')^2) = 0, which accounts for pressure gradients in wedge flows and extends the leading-order approximation to varied external flows.

Despite their utility, these leading-order approximations break down at flow separation points, where adverse pressure gradients cause reverse flow and the boundary layer thickness assumption fails, leading to singularities in the Prandtl equations (e.g., the Goldstein singularity). Recent computational fluid dynamics (CFD) advancements since 2020 integrate these asymptotic frameworks with high-fidelity simulations, such as large-eddy simulations, to handle high-Reynolds-number separation more robustly than classical theory alone.
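
The leading-order Blasius profile is usually obtained numerically by a shooting method: integrate the ODE with a guessed wall curvature f''(0) and adjust it until f'(\eta) \to 1 at large \eta. The sketch below uses SciPy; the truncation value \eta_{\max} = 10 and the bracketing interval for the root search are illustrative choices. For the normalization f''' + \tfrac{1}{2} f f'' = 0 it recovers f''(0) \approx 0.332, the value behind the \mathrm{Re}^{-1/2} skin-friction scaling.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    # y = [f, f', f''];  Blasius equation written as f''' = -0.5 * f * f''
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0, eta_max=10.0):
    """Integrate with guessed f''(0) and return f'(eta_max) - 1, which should vanish."""
    sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, fpp0], rtol=1e-9, atol=1e-12)
    return sol.y[1, -1] - 1.0

fpp0 = brentq(shoot, 0.1, 1.0)    # find f''(0) enforcing the outer condition f'(inf) = 1
print(f"f''(0) ≈ {fpp0:.6f}")     # ≈ 0.332 for this normalization
```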

Simplification of Differential Equations by Machine Learning

Machine learning techniques have emerged as powerful tools for simplifying differential equations by identifying leading-order terms, particularly in scenarios where traditional analytical methods are infeasible due to complexity or lack of prior knowledge of the underlying dynamics. Neural networks, such as physics-informed neural networks (PINNs), are trained on observational data to approximate the dominant balances in partial differential equations (PDEs), effectively isolating the leading-order contributions that govern the system's behavior at specific scales. This data-driven approach leverages the physics constraints embedded in the loss function to enforce the PDE residuals, allowing the network to prioritize terms with the largest magnitude while suppressing higher-order corrections.

A prominent method combines autoencoder-based dimensionality reduction with sparse regression to discover dominant terms in the governing equations. In the framework proposed by Champion et al., a deep autoencoder learns a nonlinear coordinate transformation that maps high-dimensional data into a low-dimensional latent space where the dynamics exhibit sparsity, enabling the identification of parsimonious models that capture the leading-order structure. This is often paired with the sparse identification of nonlinear dynamics (SINDy) algorithm, introduced by Brunton et al., which uses sequentially thresholded least-squares regression on a library of candidate functions to select the sparse set of terms that best fit time-series data, effectively revealing the dominant nonlinear interactions akin to leading-order approximations. Extensions of SINDy, such as the weak-form variant (WSINDy), further enhance robustness by integrating Galerkin projections to handle noisy or incomplete measurements, focusing on integral forms that amplify leading-order effects over noise-induced errors.

An illustrative example is the application to chaotic systems like the Lorenz equations, where SINDy and autoencoder-SINDy hybrids reduce the full three-dimensional dynamics to a low-dimensional manifold by identifying the sparse terms that dominate the dynamics, such as the quadratic interactions driving the chaotic attractor. In this case, the method recovers the leading-order form \dot{x} = \sigma(y - x), \dot{y} = x(\rho - z) - y, \dot{z} = xy - \beta z from simulated trajectories, discarding negligible perturbations and enabling efficient long-term predictions on the manifold. Compared to traditional asymptotic analysis, these approaches excel in handling noisy experimental data, where small fluctuations can obscure leading terms, by incorporating regularization techniques like thresholding to promote sparsity.

More recent developments have introduced hybrid asymptotic-ML methods for multiscale problems, such as physics-informed autoencoders in asymptotic homogenization, which combine neural networks with perturbation expansions to efficiently compute effective coefficients in elliptic PDEs, achieving up to 100-fold speedups over finite element solvers while maintaining accuracy in identifying scale-separated leading behaviors. These hybrids address computational bottlenecks in multiscale simulations by learning closure models that parameterize subgrid interactions, as demonstrated in reparameterization schemes for turbulent flows, where ML surrogates capture the dominant energy transfer terms across scales.
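
A minimal, self-contained sketch of the SINDy-style regression (not the reference implementation; the polynomial library, the threshold of 0.1, and the use of exact derivatives in place of differentiated noisy data are simplifying assumptions) generates Lorenz trajectories, builds a library of candidate terms, and applies sequentially thresholded least squares to recover the sparse dominant terms of the governing equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simulate the Lorenz system to generate training data
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t = np.linspace(0.0, 20.0, 20001)
sol = solve_ivp(lorenz, (t[0], t[-1]), [1.0, 1.0, 1.0], t_eval=t, rtol=1e-10, atol=1e-10)
X = sol.y.T                                    # state trajectory, shape (N, 3)
dX = np.array([lorenz(0.0, s) for s in X])     # exact derivatives (a simplification)

# Candidate library of polynomial terms up to second order
x, y, z = X.T
Theta = np.column_stack([np.ones_like(x), x, y, z, x*x, x*y, x*z, y*y, y*z, z*z])
labels = ['1', 'x', 'y', 'z', 'x^2', 'xy', 'xz', 'y^2', 'yz', 'z^2']

def stlsq(Theta, dX, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares, the core SINDy regression step."""
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(n_iter):
        Xi[np.abs(Xi) < threshold] = 0.0
        for k in range(dX.shape[1]):           # refit each equation on its active terms
            active = np.abs(Xi[:, k]) > 0
            if active.any():
                Xi[active, k] = np.linalg.lstsq(Theta[:, active], dX[:, k], rcond=None)[0]
    return Xi

Xi = stlsq(Theta, dX)
for k, name in enumerate(['dx/dt', 'dy/dt', 'dz/dt']):
    terms = [f"{Xi[j, k]:+.3f}*{labels[j]}" for j in range(len(labels)) if Xi[j, k] != 0]
    print(name, '=', ' '.join(terms))          # recovers the sparse Lorenz structure
```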
