
Direct numerical simulation

Direct numerical simulation (DNS) is a high-fidelity computational approach in fluid dynamics that directly solves the Navier–Stokes equations to simulate turbulence, explicitly resolving all spatial and temporal scales, from the largest eddies to the smallest Kolmogorov scales, without any subgrid-scale modeling or approximations. This approach provides instantaneous velocity fields as functions of position and time, enabling detailed analysis of turbulent structures and dynamics in controlled, idealized configurations.

DNS originated in the early 1970s with the development of spectral methods for solving the Navier–Stokes equations, as pioneered by Orszag and Patterson in their 1972 simulation of isotropic turbulence. A landmark achievement came in 1987 with Kim, Moin, and Moser's direct simulation of fully developed turbulent channel flow at low Reynolds number, which provided comprehensive turbulence statistics and was validated against experimental data. Subsequent advances in computing hardware and numerical schemes, such as finite differences and pseudospectral methods, expanded DNS to compressible flows and more complex geometries by the 1990s.

The method requires extremely fine spatial grids (scaling as Re^{9/4} grid points for Reynolds number Re) and small time steps (resolving the Kolmogorov timescale τ_η = (ν/ε)^{1/2}, where ν is the kinematic viscosity and ε is the dissipation rate) to ensure numerical stability and accuracy. For instance, simulations at Re ≈ 6000 demand around 2 × 10^9 grid points and months of computing time. These demands limit DNS to low-to-moderate Reynolds numbers and simple geometries, distinguishing it from less resolved approaches like Reynolds-averaged Navier–Stokes (RANS) or large-eddy simulation (LES), which rely on modeling for efficiency.

Applications of DNS span fundamental turbulence research, including wall-bounded flows and boundary-layer transitions, where it generates high-quality datasets for model validation and reveals coherent structures like low-speed streaks. In engineering contexts, such as studies on flow control, DNS has informed active suppression of turbulence in attachment-line flows at Re ≈ 245. Despite its limitations, ongoing supercomputing advancements continue to push DNS toward higher Re and practical relevance, solidifying its role as an indispensable tool for understanding turbulence physics.

Introduction

Definition and principles

Direct numerical simulation (DNS) is a high-fidelity computational method that directly solves the time-dependent Navier–Stokes equations, subject to appropriate initial and boundary conditions, to capture the instantaneous velocity and pressure fields in turbulent flows. This approach enables the detailed examination of turbulence physics without relying on empirical closures or approximations for unresolved phenomena. The core principle of DNS lies in its complete resolution of all turbulent scales, spanning from the largest energy-containing eddies, determined by the geometry and boundary conditions, to the smallest Kolmogorov scales, where viscous dissipation converts turbulent kinetic energy into heat. By achieving this full-spectrum resolution in both space and time, DNS avoids the need for subgrid-scale models, providing an unfiltered representation of the flow physics. In contrast to modeling-based methods like Reynolds-averaged Navier–Stokes (RANS) or large-eddy simulation (LES), which approximate smaller scales through statistical or filtered treatments, DNS emphasizes exhaustive direct computation to yield precise, model-independent results.

The typical workflow for a DNS begins with initialization, where initial conditions, often synthetic turbulence or perturbations of a base flow, are specified to trigger or sustain the desired dynamics. This is followed by the time-evolution phase, in which the flow field is advanced numerically over time to simulate the transient and statistically steady behaviors. Post-processing then involves analyzing the resulting data, such as computing turbulence statistics, spectra, or conditional averages, to derive insights into the flow structure. A primary advantage of DNS is its provision of benchmark datasets that serve as gold standards for advancing turbulence research and validating approximate models, free from modeling uncertainties. For instance, early landmark DNS of fully developed turbulent channel flow demonstrated the method's capability to reproduce experimental statistics accurately, establishing foundational references for wall-bounded turbulence studies.
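This three-phase workflow can be made concrete with a miniature example. The sketch below is not a turbulence DNS but a minimal analogue on the one-dimensional viscous Burgers equation, solved pseudospectrally with a classical fourth-order Runge–Kutta integrator; the grid size, viscosity, and time step are illustrative assumptions rather than values from the literature.

```python
import numpy as np

# Minimal "DNS-like" workflow sketch on the 1-D viscous Burgers equation
# (a stand-in for the Navier-Stokes equations): initialize, advance in time, post-process.
n = 256
L = 2 * np.pi
nu = 0.02                                     # viscosity (assumed)
x = np.linspace(0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi    # angular wavenumbers

# 1) Initialization: a smooth field that steepens and generates small scales.
u = np.sin(x) + 0.5 * np.sin(2 * x)

dt = 1.0e-3
def rhs(u):
    u_hat = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * u_hat))        # spectral first derivative
    uxx = np.real(np.fft.ifft(-(k ** 2) * u_hat))    # spectral second derivative
    return -u * ux + nu * uxx                        # -u u_x + nu u_xx

# 2) Time evolution: classical fourth-order Runge-Kutta.
for _ in range(2000):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# 3) Post-processing: energy spectrum of the resolved field.
u_hat = np.fft.fft(u) / n
spectrum = 0.5 * np.abs(u_hat[: n // 2]) ** 2
print("total resolved energy:", spectrum.sum())
print("energy in modes 1-5:", spectrum[1:6])
```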

Historical development

The origins of direct numerical simulation (DNS) trace back to the late 1960s and early 1970s, when growing computer power began enabling the resolution of simple turbulent flows on early computers using finite-difference methods. Steven Orszag and Gregory Patterson conducted the first DNS of three-dimensional homogeneous isotropic turbulence in 1972, achieving wind-tunnel Reynolds numbers and validating spectral methods for periodic domains on limited hardware. These early simulations, constrained by computational power equivalent to modern pocket calculators, focused on decaying turbulence and laid the groundwork for understanding multiscale interactions without subgrid modeling.

The 1980s marked a breakthrough era for DNS, propelled by the advent of supercomputers and vector processors, which allowed the first sustained three-dimensional simulations of forced turbulence. A seminal review by Roger Rogallo and Parviz Moin in 1984 synthesized progress in numerical techniques, highlighting the transition from two-dimensional to fully resolved three-dimensional flows and the role of spectral and finite-difference schemes in capturing isotropic turbulence statistics. Concurrently, applications expanded to wall-bounded flows, exemplified by John Kim, Parviz Moin, and Robert Moser's 1987 DNS of fully developed turbulent channel flow at low Reynolds number, which provided detailed turbulence statistics and near-wall structures previously inaccessible experimentally. These advancements, supported by Cray-1-class machines, elevated DNS from an exploratory tool to a benchmark for turbulence physics.

In the 1990s and 2000s, DNS evolved with refined spectral methods and increased computational resources, enabling simulations of more complex geometries and friction Reynolds numbers approaching 10^3. Robert Moser, John Kim, and Nagi Mansour extended channel flow DNS to Re_τ ≈ 590 in 1999, revealing scaling laws for velocity fluctuations and pressure-strain correlations that informed Reynolds-averaged models. The period also saw broader adoption in shear flows and boundary layers, with Philippe Spalart's 1988 work on turbulent boundary layers up to Re_θ = 1410 influencing subsequent drag-reduction studies.

From the 2010s onward, massively parallel computing and the shift to graphics processing units (GPUs) facilitated DNS at Reynolds numbers approaching 10^5, with projections for exascale systems enabling even larger domains. Notable achievements include Jie and colleagues' 2023 simulation of turbulent channel flow at Re_τ ≈ 5200, resolving over 10^12 grid points to study universal near-wall behaviors. By 2024, GPU-accelerated pseudo-spectral methods on exascale platforms like Frontier achieved record resolutions of 35 trillion grid points for homogeneous isotropic turbulence, accelerating simulations by orders of magnitude compared to CPU-based approaches and opening avenues for multiphysics integrations. These leaps, from vector supercomputers to heterogeneous GPU architectures, have transformed DNS into a cornerstone of high-fidelity turbulence research.

Mathematical Foundations

Governing equations

Direct numerical simulation (DNS) solves the fundamental equations governing fluid motion without subgrid-scale modeling, resolving all relevant scales of turbulent motion. For incompressible flows of Newtonian fluids, these are the Navier–Stokes equations, derived from the principles of mass and momentum conservation. The continuity equation enforces mass conservation: \nabla \cdot \mathbf{u} = 0, where \mathbf{u} is the velocity field. The momentum equation, expressing Newton's second law for a fluid continuum, is: \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla) \mathbf{u} = -\frac{1}{\rho} \nabla p + \nu \nabla^2 \mathbf{u}. Here, \rho is the constant fluid density, p is the pressure, and \nu is the kinematic viscosity. The nonlinear convective term (\mathbf{u} \cdot \nabla) \mathbf{u} captures the transport of momentum by the flow itself, the pressure gradient -\nabla p / \rho enforces incompressibility, and the viscous diffusion term \nu \nabla^2 \mathbf{u} represents momentum transfer due to molecular friction. The viscous term is crucial in DNS, as it directly represents, without any approximation, the dissipation of kinetic energy at the smallest scales, allowing the simulation to accurately capture the energy cascade in turbulence.

For compressible flows, the governing equations extend to include variable density and thermal effects, comprising the compressible Navier–Stokes equations for mass, momentum, and energy conservation, supplemented by an equation of state relating pressure, density, and temperature (e.g., the ideal gas law p = \rho R T, where R is the specific gas constant and T is the temperature). The continuity equation becomes: \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0. The momentum equation generalizes to: \frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u} \mathbf{u} + p \mathbf{I} - \boldsymbol{\tau}) = 0, where \boldsymbol{\tau} is the viscous stress tensor for a Newtonian fluid, \boldsymbol{\tau} = \mu (\nabla \mathbf{u} + (\nabla \mathbf{u})^T - \frac{2}{3} (\nabla \cdot \mathbf{u}) \mathbf{I}), with \mu the dynamic viscosity. The total energy equation is: \frac{\partial (\rho E)}{\partial t} + \nabla \cdot (\rho E \mathbf{u} + p \mathbf{u} - \boldsymbol{\tau} \cdot \mathbf{u} + \mathbf{q}) = 0, where E is the total energy per unit mass (kinetic plus internal), and \mathbf{q} is the heat flux vector, often modeled as \mathbf{q} = -\kappa \nabla T with thermal conductivity \kappa. These equations conserve mass, momentum, and total energy in compressible Newtonian fluids, accounting for density variations due to compressibility effects such as shock waves in high-speed flows. In the inviscid limit, setting \mu = 0 and \kappa = 0 yields the Euler equations, which describe compressible flows without viscous dissipation or heat conduction and serve as a simplified model for the high-Reynolds-number limit in DNS studies.

DNS requires specification of initial conditions, such as an initial velocity field, often seeded with synthetic turbulence or derived from experimental data to initiate realistic flow development, and boundary conditions tailored to the domain, including periodic boundaries in homogeneous directions to mimic infinite extents, no-slip conditions (\mathbf{u} = 0) on solid walls to enforce zero velocity at surfaces, and inflow/outflow conditions for spatially developing flows. These ensure the equations accurately represent the physical scenario while maintaining well-posedness and physical realism.
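To illustrate how the individual terms of the incompressible equations are evaluated in a discrete setting, the following sketch computes the divergence, the convective term, and the viscous term of a solenoidal Taylor–Green-type velocity field on a periodic grid with second-order central differences. The grid size and viscosity are arbitrary assumptions, and an actual DNS would use the higher-order or spectral operators described in the numerical methods section.

```python
import numpy as np

# Periodic 2D grid and a solenoidal Taylor-Green-type velocity field (illustrative values).
n = 128
L = 2.0 * np.pi
dx = L / n
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
nu = 1.0e-2                       # kinematic viscosity (assumed)

u = np.cos(X) * np.sin(Y)         # divergence-free by construction
v = -np.sin(X) * np.cos(Y)

def ddx(f):                       # second-order central difference, periodic in x
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2.0 * dx)

def ddy(f):                       # second-order central difference, periodic in y
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2.0 * dx)

def lap(f):                       # five-point Laplacian
    return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
            np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4.0 * f) / dx**2

# Continuity: div(u) should vanish for an incompressible field.
divergence = ddx(u) + ddy(v)
print("max |div u| =", np.abs(divergence).max())

# Terms of the x-momentum equation: convection (u . grad)u and viscous diffusion nu Lap(u).
convection_x = u * ddx(u) + v * ddy(u)
diffusion_x = nu * lap(u)
print("rms convection:", np.sqrt(np.mean(convection_x**2)))
print("rms diffusion: ", np.sqrt(np.mean(diffusion_x**2)))
```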

Non-dimensional forms and parameters

In the analysis of fluid flows via direct numerical simulation (DNS), the governing dimensional Navier–Stokes equations are typically transformed into non-dimensional forms to identify the dominant physical mechanisms and simplify numerical implementation. This process involves scaling the variables with characteristic quantities: length by a reference length L, velocity by a reference velocity U, time by L/U, and pressure by \rho U^2, where \rho is the fluid density. The resulting non-dimensional incompressible momentum equation becomes \frac{\partial \mathbf{u}^*}{\partial t^*} + (\mathbf{u}^* \cdot \nabla^*) \mathbf{u}^* = -\nabla^* p^* + \frac{1}{\mathrm{Re}} \nabla^{*2} \mathbf{u}^*, with the divergence-free condition \nabla^* \cdot \mathbf{u}^* = 0, where asterisks denote non-dimensional quantities and \mathrm{Re} = UL/\nu is the Reynolds number, with \nu the kinematic viscosity.

The Reynolds number plays a central role in non-dimensional formulations, quantifying the ratio of inertial to viscous forces and thus determining the intensity of turbulence and the range of spatial and temporal scales that must be resolved in DNS. Introduced through experimental studies of flow stability, higher \mathrm{Re} values promote transition to turbulence by amplifying instabilities, necessitating substantially finer computational grids to capture the full spectrum of turbulent motions without subgrid modeling. Other key non-dimensional parameters arise in specific flow regimes relevant to DNS. The Prandtl number \mathrm{Pr} = \nu / \alpha, where \alpha is the thermal diffusivity, governs the relative rates of momentum and heat diffusion, influencing temperature-field structures in simulations involving heat transfer, such as in liquid metals (\mathrm{Pr} \ll 1) or oils (\mathrm{Pr} \gg 1). The Mach number \mathrm{Ma} = U / a, with a the speed of sound, characterizes compressibility effects; low \mathrm{Ma} (< 0.3) permits incompressible approximations, while higher values in DNS of high-speed flows introduce acoustic waves and variable density. For free-surface flows, the Froude number \mathrm{Fr} = U / \sqrt{gL}, where g is the gravitational acceleration, balances inertial and gravitational forces, affecting wave generation and surface deformation in open-channel simulations.

In DNS, these parameters directly impact computational demands; for instance, high-\mathrm{Re} flows require resolving the inertial subrange of turbulence, where energy cascades from large to small scales without significant viscous dissipation. This leads to grid resolutions scaling as \mathrm{Re}^{9/4} in three dimensions to maintain accuracy across the spectrum. The smallest scales in turbulent flows, known as the Kolmogorov scales, provide a theoretical basis for resolution criteria and emerge from dimensional analysis assuming local isotropy and homogeneity at high \mathrm{Re}. The dissipation length scale is \eta = (\nu^3 / \epsilon)^{1/4} and the time scale is \tau_\eta = (\nu / \epsilon)^{1/2}, where \epsilon is the mean kinetic energy dissipation rate per unit mass, estimated as \epsilon \approx U^3 / L. These scales mark the transition to viscosity-dominated dissipation, and a DNS grid must resolve down to \eta to accurately capture the energy cascade.
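The sketch below evaluates these non-dimensional groups and the Kolmogorov-scale estimates for a set of assumed characteristic quantities; the numerical values are illustrative and not taken from any specific flow.

```python
import math

# Illustrative characteristic scales (assumed values, not from the article).
U = 1.0        # reference velocity [m/s]
L = 0.1        # reference length [m]
nu = 1.5e-5    # kinematic viscosity of air [m^2/s]
alpha = 2.1e-5 # thermal diffusivity of air [m^2/s]
a = 343.0      # speed of sound [m/s]
g = 9.81       # gravitational acceleration [m/s^2]

Re = U * L / nu
Pr = nu / alpha
Ma = U / a
Fr = U / math.sqrt(g * L)

# Kolmogorov scales from the dissipation estimate eps ~ U^3 / L.
eps = U**3 / L
eta = (nu**3 / eps) ** 0.25
tau_eta = (nu / eps) ** 0.5

print(f"Re = {Re:.3g}, Pr = {Pr:.3g}, Ma = {Ma:.3g}, Fr = {Fr:.3g}")
print(f"eta = {eta:.3e} m, tau_eta = {tau_eta:.3e} s")
print(f"scale separation L/eta = {L/eta:.3g} (cf. Re^(3/4) = {Re**0.75:.3g})")
```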

Numerical Methods

Spatial discretization

Spatial discretization in direct numerical simulation (DNS) approximates the spatial derivatives of the governing equations on a computational grid, enabling the resolution of all turbulent scales without subgrid modeling. These methods must minimize numerical dissipation and dispersion to accurately capture the wide range of length scales in turbulent flows, with choices influenced by the flow regime, such as incompressible versus compressible conditions. High-order accuracy is essential to reduce truncation errors relative to the small grid spacings required for DNS.

Finite difference methods evaluate derivatives at grid points using local polynomial approximations derived from Taylor expansions. Central schemes, which use symmetric stencils, exhibit low dispersion errors suitable for smooth turbulent fields, while upwind-biased schemes enhance stability for hyperbolic terms in compressible flows. Fourth-order compact finite difference schemes, introduced by Lele, achieve spectral-like resolution by solving implicit tridiagonal systems, allowing higher accuracy with fewer grid points compared to explicit schemes of similar order; the truncation error scales as O(\Delta x^p), where p is the scheme's order and \Delta x is the grid spacing. These schemes have been widely adopted in DNS of wall-bounded turbulence because of their efficiency and reduced numerical artifacts.

Finite volume methods discretize the governing equations by integrating over control volumes, inherently preserving conservation properties for mass, momentum, and energy, which is critical in compressible DNS where shocks or discontinuities may arise. Fluxes at cell interfaces are reconstructed using Riemann solvers or limiters, such as the minmod or van Leer flux limiters, to maintain monotonicity and prevent oscillations near steep gradients. In compressible turbulence simulations, these methods ensure thermodynamic consistency, with low-dissipation variants preserving kinetic energy in the inviscid limit. The local truncation error remains O(\Delta x^p), but global conservation is exact regardless of grid non-uniformity.

Spectral methods represent the solution as a truncated series of global basis functions, such as Fourier modes for periodic domains, enabling exact differentiation in spectral space and exponential convergence rates for smooth solutions. Pioneered in early DNS by Orszag and Patterson for homogeneous isotropic turbulence, these methods excel in idealized geometries but require de-aliasing techniques, such as the 3/2-rule, to handle nonlinear interactions that introduce projection errors. For non-periodic boundaries, Chebyshev expansions on mapped domains provide similar high accuracy, as detailed in the theoretical framework of Gottlieb and Orszag. Aliasing errors, arising from quadrature inaccuracies in nonlinear terms, can be quantified and controlled to maintain fidelity in turbulent spectra.

Various grid types support these discretization approaches depending on geometric complexity. Uniform Cartesian grids simplify implementation for periodic or channel flows, minimizing coordinate transformations. Curvilinear grids conform to smooth boundaries via body-fitted mappings, preserving orthogonality for accuracy in finite difference or volume methods.
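As a concrete illustration of the accuracy hierarchy among the finite-difference and spectral derivative operators discussed above, the following sketch differentiates a smooth periodic test function with second-order, fourth-order, and FFT-based spectral operators; the test function and grid size are arbitrary choices.

```python
import numpy as np

# Periodic test function with a known derivative (illustrative).
n = 64
Lx = 2.0 * np.pi
dx = Lx / n
x = np.arange(n) * dx
f = np.sin(3 * x) + 0.5 * np.cos(5 * x)
df_exact = 3 * np.cos(3 * x) - 2.5 * np.sin(5 * x)

# Second-order central difference: error ~ O(dx^2).
df_2 = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

# Fourth-order central difference: error ~ O(dx^4).
df_4 = (-np.roll(f, -2) + 8 * np.roll(f, -1) - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12 * dx)

# Spectral (FFT-based) derivative: exact to machine precision for resolved modes.
k = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi
df_spec = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

for name, df in [("2nd-order FD", df_2), ("4th-order FD", df_4), ("spectral", df_spec)]:
    print(f"{name:12s} max error = {np.abs(df - df_exact).max():.3e}")
```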
For irregular geometries, immersed boundary methods embed complex surfaces within a background Cartesian grid, applying forcing terms to enforce no-slip conditions without grid regeneration; originally developed by Peskin for fluid-structure interactions in biological flows, these have been extended to high-Reynolds-number DNS of turbulent flows around bluff bodies. Error analysis on these grids focuses on truncation from differencing (O(\Delta x^p)) and geometric distortions, with immersed boundaries introducing additional interpolation errors proportional to the spread of the boundary forcing.

Hybrid approaches integrate the strengths of multiple methods for enhanced efficiency and versatility. Spectral element methods partition the domain into elements where high-order polynomial approximations (often Legendre or Chebyshev bases) provide spectral accuracy locally, combined with finite element assembly for handling complex domains; introduced by Patera for laminar flows and extended to turbulent DNS, they balance geometric flexibility with minimal dissipation. Other hybrids, such as spectral difference schemes embedded in finite volumes, leverage discontinuous high-order reconstructions for shock capturing in compressible regimes while retaining conservation. These methods reduce aliasing through local projections and achieve convergence rates approaching exponential in smooth regions, making them suitable for large-scale DNS.

The lattice Boltzmann method (LBM) offers a mesoscopic alternative for spatial (and temporal) discretization in DNS, representing the fluid as an ensemble of fictitious particle distributions that propagate and collide on a discrete lattice according to simplified collision rules. This approach incorporates viscosity through relaxation parameters and excels in parallel implementation because its computations are local, making it suitable for turbulent flows in complex geometries. LBM has been validated for DNS of homogeneous isotropic turbulence and wall-bounded flows, capturing accurate statistics with reduced numerical dissipation compared to some macroscopic methods.
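The following is a minimal sketch of the lattice Boltzmann stream-and-collide cycle (a D2Q9 lattice with a single-relaxation-time BGK collision), applied to a decaying Taylor–Green vortex on a periodic lattice and compared with the analytical viscous decay rate. The lattice size, relaxation time, and velocity amplitude are assumed values, and production LBM codes add boundary treatments and improved collision operators omitted here.

```python
import numpy as np

# D2Q9 lattice Boltzmann (BGK) sketch: viscous decay of a Taylor-Green vortex.
nx = ny = 64
tau = 0.8                          # BGK relaxation time (assumed)
nu = (tau - 0.5) / 3.0             # lattice kinematic viscosity
u0 = 0.05                          # initial velocity amplitude (small, low Mach)

c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
kx, ky = 2*np.pi/nx, 2*np.pi/ny
ux = -u0*np.cos(kx*X)*np.sin(ky*Y)
uy =  u0*np.sin(kx*X)*np.cos(ky*Y)
rho = np.ones((nx, ny))

def equilibrium(rho, ux, uy):
    feq = np.empty((9, nx, ny))
    usq = ux**2 + uy**2
    for i in range(9):
        cu = c[i,0]*ux + c[i,1]*uy
        feq[i] = w[i]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    return feq

f = equilibrium(rho, ux, uy)       # start from local equilibrium
nsteps = 500
for _ in range(nsteps):
    f += -(f - equilibrium(rho, ux, uy)) / tau            # collision (BGK)
    for i in range(9):                                     # streaming (periodic shifts)
        f[i] = np.roll(np.roll(f[i], c[i,0], axis=0), c[i,1], axis=1)
    rho = f.sum(axis=0)                                    # macroscopic moments
    ux = (f * c[:,0,None,None]).sum(axis=0) / rho
    uy = (f * c[:,1,None,None]).sum(axis=0) / rho

ke = 0.5*np.mean(ux**2 + uy**2)
ke0 = 0.25*u0**2                                           # initial mean kinetic energy
decay_theory = np.exp(-2*nu*(kx**2 + ky**2)*nsteps)
print(f"KE ratio (LBM)   : {ke/ke0:.4f}")
print(f"KE ratio (theory): {decay_theory:.4f}")
```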

Temporal integration and stability

In direct numerical simulations (DNS) of fluid flows, temporal integration advances the solution of the discretized governing equations from one time level to the next, requiring schemes that balance accuracy, stability, and computational efficiency given the wide range of temporal scales in turbulent flows. Explicit methods, such as Runge-Kutta schemes, are widely employed for their simplicity and high-order accuracy; for instance, third- or fourth-order variants like the classical fourth-order Runge-Kutta method are common, but low-storage implementations are preferred to minimize memory usage in large-scale simulations. These low-storage Runge-Kutta schemes, which store only two solution levels per stage, enable efficient computation while maintaining stability regions suitable for hyperbolic-parabolic systems like the Navier-Stokes equations.

Implicit methods address stiffness arising from diffusive terms or incompressibility constraints, and are often applied selectively to enlarge the allowable time step. The Crank-Nicolson scheme, a second-order implicit method, is frequently used for the diffusion terms because of its unconditional stability for linear diffusion equations and its centered averaging, which preserves accuracy. For incompressible flows, the fractional-step (or pressure-projection) method decouples advection and pressure via a multi-stage approach: an intermediate velocity field is advanced explicitly or semi-implicitly, followed by a Poisson solve to enforce divergence-free conditions, as pioneered in early DNS works. Operator splitting further facilitates this coupling by treating advection and diffusion separately within stages, with the Poisson equation for the pressure correction solved at each step to maintain incompressibility.

Stability in temporal integration imposes strict constraints on the time step \Delta t, dictated by the flow's advective and diffusive characteristics. The Courant-Friedrichs-Lewy (CFL) condition for explicit advection requires \frac{|u| \Delta t}{\Delta x} \leq 1, where |u| is the local flow speed and \Delta x is the grid spacing, ensuring information does not propagate across more than one cell per step. For viscous terms in explicit schemes, a diffusive stability limit applies: \Delta t \leq O\left( \frac{\Delta x^2}{\nu} \right), with \nu the kinematic viscosity, which often governs low-Reynolds-number regions near walls in wall-bounded turbulence simulations. Implicit treatments, such as Crank-Nicolson for diffusion, relax the viscous constraint, allowing larger \Delta t limited primarily by the CFL condition. To improve efficiency without a fixed small step, adaptive time-stepping adjusts \Delta t dynamically based on error estimators (e.g., embedded Runge-Kutta pairs) or flow physics such as local CFL numbers, enabling larger steps in quiescent regions while resolving transient events. The global temporal error for a q-th order scheme scales as O(\Delta t^q), ensuring that higher-order methods like third-order Runge-Kutta achieve sufficient accuracy for capturing small-scale dynamics in DNS without excessive dissipation.
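The advective and diffusive restrictions above can be combined into a simple time-step estimate, as in the sketch below; the velocity, viscosity, and grid spacings are assumed values chosen only to show which limit dominates at different resolutions.

```python
# Illustrative explicit time-step limits (assumed flow and grid values).
u_max = 2.0        # maximum velocity magnitude [m/s]
nu = 1.0e-5        # kinematic viscosity [m^2/s]
dx_values = [1.0e-2, 1.0e-3, 1.0e-4]   # grid spacings [m]

cfl = 1.0          # advective Courant limit: |u| dt / dx <= cfl
d_fac = 0.5        # diffusive limit factor: nu dt / dx^2 <= d_fac

for dx in dx_values:
    dt_adv = cfl * dx / u_max
    dt_diff = d_fac * dx**2 / nu
    dt = min(dt_adv, dt_diff)
    limiter = "advective (CFL)" if dt_adv < dt_diff else "diffusive"
    print(f"dx = {dx:.1e} m: dt_adv = {dt_adv:.2e} s, dt_diff = {dt_diff:.2e} s "
          f"-> dt = {dt:.2e} s ({limiter})")

# Treating diffusion implicitly (e.g., Crank-Nicolson) removes the dt_diff constraint,
# leaving only the CFL limit on the advective terms.
```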

Computational Requirements

Resolution and grid criteria

In direct numerical simulation (DNS) of turbulent flows, the grid resolution must be sufficiently fine to capture all relevant scales, particularly the smallest dissipative structures characterized by the Kolmogorov length scale \eta = (\nu^3 / \epsilon)^{1/4}, where \nu is the kinematic viscosity and \epsilon is the dissipation rate. Typically, the grid spacing \Delta x is set to be of the order of \eta, ensuring \Delta x / \eta \approx 1-2, to accurately represent the dynamics of the smallest eddies without artificial dissipation or aliasing. For three-dimensional isotropic turbulence, this requirement leads to a total number of grid points scaling as N \sim Re^{9/4}, where Re is the Reynolds number based on the integral scale, reflecting the need to span from the large integral scale L to \eta, with L / \eta \sim Re^{3/4}.

The time step \Delta t in DNS is constrained to resolve the fastest temporal fluctuations associated with the Kolmogorov time scale \tau_\eta = (\nu / \epsilon)^{1/2}, requiring \Delta t < \tau_\eta or \Delta t / \tau_\eta \sim O(1) to capture eddy turnover at the small scales. Additionally, stability conditions such as the Courant-Friedrichs-Lewy (CFL) criterion, \Delta t < \Delta x / u_{\max}, where u_{\max} is the maximum velocity, and diffusive limits, \Delta t < \Delta x^2 / (2\nu), further restrict the choice, often resulting in \Delta t \sim Re^{-1/2} for high-Re flows.

The computational domain size must encompass multiple integral length scales L, typically at least 2\pi L in each direction for periodic boundary conditions, to minimize finite-size effects and allow uncorrelated large eddies to develop fully. For non-periodic setups, such as channel flows, outflow boundaries are employed to prevent artificial reflections while capturing the integral scale. Accuracy of the resolution is assessed through metrics such as the energy spectrum E(k), which should exhibit the inertial-range decay E(k) \sim k^{-5/3} up to the dissipation range, and conservation of enstrophy \Omega = \langle \omega^2 \rangle / 2, whose production balances viscous dissipation in statistically steady flows. Validation often involves comparing simulated spectra and one-dimensional energy profiles to experimental benchmarks, such as the decaying grid turbulence data of Comte-Bellot and Corrsin, confirming fidelity across scales. The computational cost per unit physical time scales as the number of grid points times the operations per point, yielding a cost per step \sim N \sim Re^{9/4} for isotropic turbulence and highlighting the steep growth with Reynolds number that limits practical applications to moderate Re.
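These criteria translate directly into rough cost estimates. The sketch below applies the \epsilon \approx U^3 / L estimate, ties the grid spacing to \eta, and counts grid points and time steps for a periodic box; the prefactors (domain size, \Delta x / \eta, number of eddy-turnover times) are assumptions, so only the trends with Re are meaningful.

```python
import math

def dns_cost_estimate(Re, L=1.0, U=1.0, domain_factor=2 * math.pi,
                      dx_over_eta=1.5, turnover_times=10.0):
    """Rough DNS resolution/cost estimate for a periodic box of isotropic turbulence.
    Prefactors are illustrative assumptions; only the trends with Re are meaningful."""
    nu = U * L / Re                      # viscosity implied by the chosen Re
    eps = U ** 3 / L                     # dissipation estimate eps ~ U^3 / L
    eta = (nu ** 3 / eps) ** 0.25        # Kolmogorov length scale
    tau_eta = (nu / eps) ** 0.5          # Kolmogorov time scale
    dx = dx_over_eta * eta               # grid spacing tied to eta
    n_per_dir = int(domain_factor * L / dx)
    n_total = float(n_per_dir) ** 3
    dt = min(dx / U, tau_eta)            # crude advective / small-scale time limit
    n_steps = turnover_times * (L / U) / dt
    return n_per_dir, n_total, n_steps

for Re in (1e3, 1e4, 1e5):
    n1, ntot, nsteps = dns_cost_estimate(Re)
    print(f"Re = {Re:8.0f}: ~{n1}^3 grid ({ntot:.2e} points, cf. Re^(9/4) = {Re**2.25:.2e}), "
          f"~{nsteps:.1e} time steps")
```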

Parallel computing and scalability

Direct numerical simulation (DNS) of turbulent flows demands immense computational resources due to the need for fine spatial and temporal resolutions to capture all scales of motion, often requiring billions to trillions of grid points. Parallel computing strategies are essential to distribute this workload across thousands to millions of processor cores or accelerators. Domain decomposition techniques, such as slab and pencil partitioning, enable efficient MPI-based parallelism by dividing the computational domain into subdomains assigned to individual processes. Slab decomposition partitions along one dimension, suitable for solvers like Poisson equation components in DNS, while pencil decomposition extends this to two dimensions, facilitating efficient handling of multidimensional arrays and fast Fourier transforms (FFTs) in pseudo-spectral methods. These approaches minimize communication overhead by aligning data distribution with the transform operations, achieving near-ideal load distribution on uniform grids.

Load balancing becomes critical when employing adaptive meshes to handle variable resolution in DNS, particularly for flows with localized features like shocks or boundary layers. In unstructured adaptive mesh refinement (AMR) frameworks, dynamic repartitioning uses techniques such as diffusion-based algorithms to redistribute cells among processors, ensuring equitable computational load while minimizing data migration costs. For instance, the HAMISH code for compressible reacting flows achieves 85.6% parallel efficiency up to 1,536 cores by transferring cells only between nearest-neighbor processes during refinement. This approach addresses imbalances from octree-based AMR, where fine grids in high-gradient regions could otherwise overload specific cores.

GPU acceleration has transformed DNS scalability, particularly for spectral methods reliant on FFTs for spatial discretization. CUDA and OpenMP offloading implementations optimize transforms and derivative computations on GPU architectures, leveraging high memory bandwidth for batched operations. On systems like Frontier, GPU-enabled pseudo-spectral solvers achieve approximately 10x to 20x speedups over CPU-only versions for grid sizes up to 8192³, with FFT routines gaining up to 13x and communication layers up to 30x through GPU-aware MPI. As of 2024, these gains enable simulations at unprecedented scales, such as 35 trillion grid points, while maintaining numerical accuracy.

Exascale supercomputers like Frontier and Aurora extend DNS to high Reynolds numbers. Frontier, with its AMD MI250X GPUs, supports NekRS-based DNS of wall-bounded turbulence, resolving complex geometries like cantilevered rod bundles with full spectral element discretization. High-fidelity simulations, such as channel flow at Re_τ = 10,000, demonstrate the feasibility of resolving turbulence at such scales. Aurora's Intel GPU integration targets multiphysics DNS, enabling simulations at Reynolds numbers exceeding previous petascale limits by distributing workloads across over 60,000 accelerators. These platforms overcome prior hardware limits, allowing sustained runs for weeks on end.

Scalability in DNS is quantified by strong-scaling efficiency, defined as η = T(1) / (p T(p)), where T(1) is the runtime on one core, T(p) the runtime on p cores, and η > 80% indicates effective parallelism up to thousands of cores.
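This efficiency metric can be computed directly from measured wall-clock times, as in the sketch below; the timings are hypothetical and serve only to show how η degrades as a fixed-size problem is spread over more processors.

```python
def strong_scaling_efficiency(timings):
    """Strong-scaling efficiency eta = T(1) / (p * T(p)) from measured runtimes.
    `timings` maps core (or GPU) count p -> wall-clock time T(p); values are illustrative."""
    t1 = timings[1]
    return {p: t1 / (p * tp) for p, tp in sorted(timings.items())}

# Hypothetical runtimes for a fixed-size DNS problem (seconds per time step).
timings = {1: 1000.0, 64: 17.0, 512: 2.4, 4096: 0.38}
for p, eta in strong_scaling_efficiency(timings).items():
    print(f"p = {p:>5d}: efficiency = {eta:.2f}")
```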
High-order solvers like Nek5000 demonstrate η ≈ 70-75% on up to 128 GPUs for problems with millions of degrees of freedom per accelerator, scaling to over 8,000 CPU cores with minimal overhead from matrix-free operators. At larger scales, efficiency reaches about 60% on a million MPI ranks, limited by communication but sufficient for exascale DNS. Software frameworks such as Nek5000 facilitate high-order DNS with built-in fault tolerance for long-running simulations on petascale systems: Nek5000 supports MPI/OpenMP hybrid parallelism and checkpointing to recover from node failures, sustaining good efficiency on large clusters with 5,000-10,000 degrees of freedom per core before efficiency drops. For GPU ports, NekRS (a Nek5000 derivative) extends this to accelerators, maintaining roughly 70% efficiency at 128 GPUs. Similarly, CaNS provides GPU-accelerated finite-difference DNS with OpenACC directives, enabling runs on GPU clusters for incompressible flows.

Managing petabyte-scale outputs from DNS poses significant challenges, addressed through in-situ analysis that processes data during the simulation without full I/O dumps. Staging middleware with predictive scheduling can offload analysis to idle network periods, reducing write contention by 22-48% in S3D DNS on 33,000 cores. This approach enables visualization and feature extraction, such as turbulence statistics, while keeping data in memory buffers to avoid disk bottlenecks at exascale.

Applications

Turbulent flow simulations

Direct numerical simulations of turbulent flows have provided invaluable benchmarks for understanding fundamental physics, particularly through canonical configurations that isolate key dynamical processes. Homogeneous isotropic turbulence (HIT) serves as a primary benchmark case, in which periodic boundary conditions and external forcing maintain statistical stationarity, enabling detailed examination of the energy cascade from large to small scales without boundary influences. Seminal DNS studies of HIT have quantified the inertial-range spectrum and verified the -5/3 Kolmogorov scaling, offering reference data for model validation. Another canonical case is the Taylor-Green vortex decay, an initial-value problem that evolves from organized vortical motion into fully developed turbulence, highlighting vortex stretching, reconnection, and enstrophy production mechanisms. Early high-resolution DNS of this flow at moderate Reynolds numbers revealed the transition pathways and dissipation rates, establishing it as a test case for numerical schemes and studies of turbulence onset.

In wall-bounded flows, DNS has elucidated the hierarchical structure near solid surfaces, where viscous effects dominate. Channel flow simulations at low to moderate Reynolds numbers have captured the near-wall cycle, a regenerative process involving low- and high-speed streaks that lift away from the wall, burst, and sweep back to replenish momentum. The seminal DNS of plane channel flow at Re_τ ≈ 180 demonstrated the intermittent nature of these events, with ejections and sweeps contributing over 50% of the Reynolds shear stress in the buffer layer. Similarly, pipe flow DNS has revealed analogous buffer-layer structures, though with azimuthal variations due to the geometry; early simulations at Re_τ ≈ 360 confirmed the presence of annular streaks and quantified their role in turbulence production, aligning closely with experimental particle image velocimetry data.

Free-shear flows, such as jets and mixing layers, have been simulated to probe vortex dynamics and entrainment without wall constraints. In planar mixing layers, DNS has visualized the Kelvin-Helmholtz instability leading to paired vortex roll-up and subsequent three-dimensional breakdown into small-scale turbulence, capturing self-similar growth rates and scalar mixing efficiencies. Representative simulations at Re ≈ 10^4 based on the initial momentum thickness showed that streamwise vortices enhance lateral entrainment, with implications for mixing and combustion applications. Axisymmetric jet DNS, meanwhile, has highlighted helical vortex pairing and far-field spreading, with studies demonstrating how initial conditions modulate the potential core length and noise generation through large-scale coherent motions.

Advancing to higher Reynolds numbers, DNS of zero-pressure-gradient boundary layers has reached Re_θ up to approximately 7000, providing datasets for scaling analyses and log-law validation. These simulations, requiring grids exceeding 10^9 points to satisfy resolution criteria based on the Kolmogorov scale, have quantified the outer-layer influence on near-wall turbulence, showing increased large-scale modulation and superstructures spanning multiple boundary-layer thicknesses. Such high-Re data enable more accurate forecasting of laminar-to-turbulent transition thresholds in aerodynamic designs.

Key insights from these DNS include detailed statistical quantities that reveal turbulence organization. Reynolds stresses exhibit peaks in the near-wall buffer region, with the streamwise component dominating due to streak meandering, while spectra display inertial subranges extending over decades in wavenumber.
Coherent structures, such as low-speed streaks (elongated regions of decelerated fluid) and associated sweeps (high-speed incursions toward the wall), emerge as primary contributors to momentum transport, accounting for the majority of turbulence production in wall units. These findings, synthesized from statistical and conditional-sampling analyses, underscore the self-sustaining nature of near-wall turbulence. Post-2020 advances have integrated machine learning to extract subgrid insights in near-DNS regimes, where simulations approach but do not fully resolve the smallest scales. Neural-network-based models trained on high-fidelity DNS data of channel flows have parameterized subgrid stresses, aiding predictions of energy spectra. These data-driven approaches, applied to turbulent channel flow at moderate Re_τ, offer pathways to hybrid simulations at even higher Reynolds numbers.
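The contributions of ejections and sweeps to the Reynolds shear stress, noted above for channel flow, are typically quantified with quadrant analysis of the fluctuating velocities. The sketch below applies this decomposition to synthetic, negatively correlated u' and v' samples standing in for DNS data at a single wall-normal location; the covariance values are assumptions, not DNS results.

```python
import numpy as np

# Quadrant analysis of the Reynolds shear stress <u'v'> from fluctuation samples.
rng = np.random.default_rng(0)
n = 200_000
cov = [[1.0, -0.4], [-0.4, 0.5]]           # assumed covariance (u'v' < 0, as in shear flow)
u, v = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

uv = u * v
total = uv.mean()                          # <u'v'>, negative where turbulence is produced
quadrants = {
    "Q1 (outward interaction, u'>0, v'>0)": (u > 0) & (v > 0),
    "Q2 (ejection,            u'<0, v'>0)": (u < 0) & (v > 0),
    "Q3 (inward interaction,  u'<0, v'<0)": (u < 0) & (v < 0),
    "Q4 (sweep,               u'>0, v'<0)": (u > 0) & (v < 0),
}
print(f"<u'v'> = {total:.4f}")
for name, mask in quadrants.items():
    contrib = uv[mask].sum() / n           # contribution of each quadrant to <u'v'>
    print(f"{name}: {contrib/total:6.1%} of <u'v'>")
```

Ejections (Q2) and sweeps (Q4) carry most of the negative shear stress, while the interaction quadrants partially counteract it, mirroring the decomposition reported in channel flow DNS.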

Multiphysics and interdisciplinary uses

Direct numerical simulation (DNS) extends beyond single-phase incompressible flows to multiphysics problems by incorporating additional governing equations for coupled phenomena, such as chemical reactions, electromagnetic fields, and particulate interactions, while resolving all relevant scales without subgrid modeling. This approach enables detailed analysis of complex interactions in engineering and scientific applications, revealing mechanisms that influence system behavior at fundamental levels.

In reactive flows, DNS couples the Navier-Stokes equations with species transport and reaction kinetics to study combustion processes, including premixed flames where scalar mixing and heat release drive flame propagation and wrinkling. For instance, simulations of auto-igniting mixtures in turbulent environments have quantified ignition delays and flame-kernel development under varying thermodynamic conditions, highlighting the role of scalar dissipation in extinction events. These studies provide benchmarks for validating reduced-order models used in practical design.

Magnetohydrodynamics (MHD) simulations integrate the Navier-Stokes equations with Maxwell's equations to capture electromagnetic effects in conducting fluids, particularly plasma flows where Lorentz forces alter turbulent structures. High-Reynolds-number DNS of MHD turbulence has demonstrated anisotropic cascades and enhanced dissipation due to field alignment, essential for understanding processes in astrophysical plasmas. Such computations reveal how magnetic Prandtl numbers influence flow stability in fusion-relevant regimes.

Aeroacoustics benefits from DNS by directly computing noise generation from turbulent sources without hybrid approximations, resolving both the hydrodynamic and acoustic fields. Landmark simulations of subsonic and supersonic jets have identified dominant noise-generation mechanisms, such as trailing-edge scattering and Mach-wave radiation, with far-field sound levels scaling with jet velocity to the eighth power. Recent high-fidelity DNS of installed jet configurations has quantified shielding effects from airframes, reducing overall radiated noise by up to 5 dB in specific directions.

In biological flows, DNS resolves cellular-scale interactions in microvascular networks, modeling blood as a suspension of deformable red blood cells in plasma to capture margination and aggregation phenomena. Simulations of flow in stenosed arteries have shown strongly elevated wall shear stresses near bifurcations, contributing to endothelial damage and plaque formation. For multiphase microfluidics, DNS of droplet-laden flows in biological channels elucidates particle-fluid coupling, where droplet deformability and interfacial tension influence mixing efficiency in organ-on-chip devices.

Environmental applications employ DNS to model geophysical turbulence, incorporating Coriolis effects in rotating frames to simulate atmospheric boundary layers where geostrophic forcing influences shear-driven mixing. Studies of neutral boundary layers over complex terrain have quantified Coriolis-induced secondary circulations, enhancing momentum transport by 20-30% compared to non-rotating cases. In oceanic flows, DNS reveals how the Coriolis force stabilizes the cyclonic side of turbulent wakes while destabilizing the anticyclonic side, affecting eddy lifetimes and tracer dispersion in surface layers.

As of 2025, DNS has advanced fusion reactor simulations through MHD extensions, modeling plasma-wall interactions in edge regions to predict heat-load mitigation via detached plasma regimes. In climate microscale modeling, high-resolution DNS of double-diffusive convection captures sub-finger-scale instabilities in thermohaline layers, informing parameterizations of salinity-driven mixing rates for global circulation models.

Limitations and Comparisons

Challenges and error sources

One of the primary challenges in direct numerical simulation (DNS) is the immense computational cost, which grows rapidly with the Reynolds number (Re). For three-dimensional turbulent flows, the number of grid points required typically scales as O(Re^{9/4}), while the total computational effort can reach O(Re^3) or higher depending on the time-advancement constraints, severely limiting practical applications to Reynolds numbers up to around 10^6 or higher, depending on the flow configuration and available computational resources, as demonstrated in recent simulations. In wall-bounded flows, the cost further escalates to approximately O(Re_\tau^4), where Re_\tau is the friction Reynolds number, making simulations at higher Re prohibitive even on modern supercomputers; advancements in exascale supercomputing have nonetheless enabled DNS at friction Reynolds numbers up to Re_τ ≈ 5200 in channel flows as of 2023. This scaling arises from the need to resolve all spatial and temporal scales down to the Kolmogorov length and time scales, which decrease rapidly as Re increases.

Numerical errors represent another significant source of inaccuracy in DNS, stemming primarily from discretization schemes that introduce dispersion and dissipation. Dispersion errors cause phase shifts in wave propagation, leading to unphysical oscillations, while dissipation errors artificially damp small-scale structures essential for capturing turbulent cascades. In spectral methods, commonly used for their high accuracy in periodic domains, aliasing errors from nonlinear terms necessitate de-aliasing techniques, such as the 3/2-rule, which expand the transform grid to filter spurious high-wavenumber contributions but increase overhead by up to 3.375 times. These errors can distort the inertial range if not controlled, particularly in high-Re flows where the small scales are sensitive to numerical contamination.

Physical modeling gaps in DNS often arise from simplifications that deviate from real-world conditions, such as idealized periodic or smooth boundaries versus rough, irregular geometries. Such approximations can overlook roughness effects or confinement influences critical in applications, leading to discrepancies in statistics like drag or heat transfer. Additionally, DNS struggles to capture rare events in turbulence, such as extreme dissipation spikes or intermittent bursts, which require exceedingly fine grids to resolve without under-sampling, potentially biasing predictions of the tails of probability density functions.

Verification and validation of DNS results pose substantial hurdles due to the method's sensitivity to initial conditions and the difficulty of experimental comparison. Turbulent flows exhibit chaotic behavior, in which small perturbations of the initial velocity field amplify into large divergences over time, complicating reproducibility and requiring statistically stationary states achieved through long integration periods. Comparing DNS outputs with experiments is bottlenecked by measurement resolution limits in laboratories, which often fail to capture sub-Kolmogorov scales, and by mismatches in boundary conditions, hindering quantitative assessment of quantities like higher-order moments.

Looking ahead, future challenges for DNS include incorporating quantum effects in microscale flows, such as in nanoscale channels or quantum fluids, where classical Navier-Stokes assumptions break down and quantum-classical solvers are needed. Multi-scale coupling, particularly between macroscopic and microscopic phenomena like particle interactions, demands integrated frameworks that bridge disparate length and time scales without excessive computational penalty.
These extensions push beyond traditional DNS paradigms, requiring advancements in algorithms and hardware. To mitigate these issues, techniques for establishing error bounds and adaptive refinement are increasingly employed. A posteriori error estimation provides guaranteed upper and lower bounds on discretization errors in energy norms, allowing targeted improvements without full re-simulation. Adaptive mesh refinement dynamically adjusts grid resolution based on local error indicators, such as residual-based metrics, reducing overall cost by focusing computation on high-gradient regions like shear layers while maintaining accuracy comparable to uniform high-resolution grids.
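Two quantitative points above, the 3.375x overhead of 3/2-rule de-aliasing and the roughly Re^3 growth of total cost, follow from simple arithmetic; the sketch below works them out with illustrative grid sizes and Reynolds numbers.

```python
# De-aliasing with the 3/2-rule pads each spectral direction from N to 3N/2 modes,
# so the padded 3-D grid carries (3/2)^3 = 3.375 times as many points.
for N in (256, 512, 1024):
    padded = (3 * N) // 2
    overhead = (padded / N) ** 3
    print(f"N = {N:>5d}: padded grid {padded}^3, memory/work factor ~ {overhead:.3f}")

# Total cost growth with Reynolds number: grid points ~ Re^(9/4) and time steps ~ Re^(3/4)
# combine to give work ~ Re^3, so doubling Re costs roughly 8x more.
for Re in (1e3, 2e3, 4e3):
    print(f"Re = {Re:.0f}: relative cost ~ {(Re / 1e3) ** 3:.1f}x")
```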

Relation to RANS and LES

Reynolds-averaged Navier-Stokes (RANS) simulations compute the time-averaged flow field by decomposing the velocity into mean and fluctuating components, which introduces the Reynolds stress tensor and requires closure modeling to account for turbulent effects. Common closure models include the k-ε model, which solves transport equations for the turbulent kinetic energy (k) and its dissipation rate (ε) to estimate an eddy viscosity, making it suitable for steady mean flows but less effective for capturing transient phenomena or non-equilibrium turbulence. RANS approaches are computationally efficient, and their grid requirements are essentially independent of Reynolds number (Re), since they do not resolve unsteady fluctuations.

Large eddy simulation (LES) applies spatial filtering to the Navier-Stokes equations, resolving the large, energy-containing scales directly while modeling the subgrid-scale (SGS) effects on the resolved motion using models such as the Smagorinsky eddy viscosity or dynamic procedures that adjust coefficients based on local flow characteristics. This method captures unsteady turbulent structures more accurately than RANS, particularly in flows with significant large-scale unsteadiness, but requires finer grids near walls to resolve viscous effects unless wall modeling is employed.

Direct numerical simulation (DNS) differs from both RANS and LES by resolving all turbulent scales without modeling, providing the highest-fidelity representation of turbulence but at substantially higher computational expense. While RANS and LES approximate turbulence through averaging or filtering, reducing costs but introducing uncertainties in the model closures, DNS serves as a benchmark for validating and calibrating these approximations, for example by tuning SGS models in LES with DNS-derived statistics. The cost hierarchy is evident in grid requirements: DNS demands a number of grid points N scaling as Re^{9/4} to resolve the smallest Kolmogorov scales, LES scales more favorably at approximately Re^{4/3} for wall-modeled configurations that capture separation and large-scale features, and RANS remains largely Re-independent.

Hybrid methods bridge the gap between these approaches, such as implicit LES (ILES), which relies on numerical dissipation from high-order schemes to implicitly model SGS stresses without explicit closures, and wall-modeled LES (WMLES), which uses RANS-like models in the near-wall region to relax resolution requirements while resolving outer-layer eddies. These hybrids extend the applicability of LES toward higher-Re flows, approaching DNS accuracy in the resolved regions at a cost far below that of full DNS. DNS is primarily employed in fundamental research to study turbulence physics and generate benchmark datasets, whereas RANS and LES are favored in industry for their balance of accuracy and affordability in predicting practical flows such as those in aerodynamics or turbomachinery. The choice depends on the required detail: full spectral content for DNS, mean flows for RANS, or unsteady large-scale dynamics for LES.
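The difference in grid-point scaling can be made concrete with a rough comparison; in the sketch below only the exponents reflect the scalings quoted above, while the prefactors and the fixed RANS mesh size are arbitrary assumptions.

```python
# Rough comparison of grid-point scaling with Reynolds number for the three approaches.
def grid_points(Re):
    return {
        "DNS   (~Re^9/4)":  Re ** 2.25,
        "WMLES (~Re^4/3)":  Re ** (4.0 / 3.0),
        "RANS  (Re-indep)": 1.0e6,           # fixed mesh, independent of Re (illustrative)
    }

for Re in (1e5, 1e6, 1e7):
    counts = grid_points(Re)
    print(f"Re = {Re:.0e}: " + ", ".join(f"{k} ~ {v:.2e}" for k, v in counts.items()))
```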
