
Physics-informed neural networks

Physics-informed neural networks (PINNs) are a class of deep learning models designed to solve forward and inverse problems governed by physical laws, particularly nonlinear partial differential equations (PDEs), by embedding these laws directly into the network's training process. Building on earlier neural-network methods for PDE solving, such as those proposed by Lagaris et al., the modern PINN framework was first introduced in 2017 by Maziar Raissi, Paris Perdikaris, and George E. Karniadakis, with formal publication in 2019. PINNs leverage automatic differentiation to compute PDE residuals and incorporate them into a composite loss function alongside data-fitting terms, enabling mesh-free approximations that respect the underlying physics without requiring extensive labeled datasets. This approach bridges traditional numerical methods such as finite element analysis with deep learning, offering a data-efficient framework for modeling complex physical systems.

At their core, PINNs approximate the solution to a PDE as the output of a neural network, typically a fully connected network with hyperbolic tangent or similar activation functions, where the network parameters are optimized to minimize a loss that balances empirical error and the violation of physical constraints evaluated at collocation points in the domain. For forward problems, PINNs predict solutions given known parameters and boundary conditions, while for inverse problems, they infer unknown parameters or even discover governing equations from sparse or noisy observations. Key advantages include the ability to handle high-dimensional problems, to incorporate uncertainty quantification through Bayesian variants, and to generalize beyond the training data by enforcing conservation laws or symmetries, outperforming purely data-driven models in scenarios with limited measurements.

Since their inception, PINNs have evolved through numerous variants that address limitations such as optimization difficulties and the treatment of stiff PDEs, including conservative PINNs (cPINNs) that enforce flux constraints for better stability in conservation laws, extended PINNs (XPINNs) that use domain decomposition for scalability, and fractional PINNs (fPINNs) for non-local operators. These extensions have expanded applications across fields such as biomedical engineering (e.g., simulating blood flow from MRI data), quantum mechanics (solving Schrödinger equations), climate modeling (parameterizing subgrid processes), and materials science (predicting microstructure evolution). Despite these successes, ongoing challenges include balancing loss terms to avoid failure modes such as spectral bias and improving theoretical guarantees for convergence in diverse settings. Overall, PINNs represent a cornerstone of physics-informed machine learning, fostering hybrid models that enhance scientific discovery and design.

Overview and Background

Definition and Principles

Physics-informed neural networks (PINNs) are neural networks trained to solve supervised learning tasks while respecting physical laws, such as those governed by nonlinear partial differential equations (PDEs) or ordinary differential equations (ODEs). They function as universal function approximators, representing solutions to physical systems by embedding the governing equations directly into the loss function during training via automatic differentiation, which enables the computation of derivatives without explicit discretization. The core principles of PINNs revolve around leveraging known physics to regularize the learning process, thereby addressing data scarcity in scientific applications where observations are often limited or expensive to obtain. By incorporating physical constraints as priors, PINNs constrain the solution space to physically plausible outcomes, reducing overfitting and improving generalization. In contrast to conventional numerical methods such as finite element analysis, which rely on mesh generation and can be computationally intensive for complex geometries, PINNs employ a mesh-free formulation, evaluating the governing equations at arbitrarily chosen points within the domain. The fundamental workflow of PINNs involves parameterizing the solution—such as the function u(x,t) for spatiotemporal problems—with a neural network and enforcing physical consistency through minimization of the residual arising from the governing equations. This residual is computed seamlessly using automatic differentiation and added to the training loss, often alongside data-fitting terms from boundary or initial conditions. PINNs provide distinct advantages, including the capacity to tackle high-dimensional problems that challenge traditional solvers due to exponential scaling in computational cost, while integrating noisy or incomplete data effectively through balanced loss components. They support unified treatment of forward modeling, where solutions are predicted given known parameters, and inverse modeling, where parameters are inferred from measurements, all within a single differentiable framework. For example, PINNs approximate solutions to the one-dimensional Burgers' equation, a nonlinear PDE describing shock formation in viscous fluids, by directly embedding the equation's structure to guide learning from sparse data points.
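A minimal sketch of this workflow for the 1D Burgers' equation is shown below, written in PyTorch. The network size, viscosity value, and sampling ranges are illustrative assumptions rather than settings taken from the original papers.

```python
import torch
import torch.nn as nn

# A small fully connected tanh network u(x, t) and the Burgers' residual
# u_t + u*u_x - nu*u_xx, obtained with automatic differentiation.
net = nn.Sequential(nn.Linear(2, 50), nn.Tanh(),
                    nn.Linear(50, 50), nn.Tanh(),
                    nn.Linear(50, 1))

def burgers_residual(net, x, t, nu=0.01 / torch.pi):
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    g = lambda y, z: torch.autograd.grad(y, z, torch.ones_like(y), create_graph=True)[0]
    u_t, u_x = g(u, t), g(u, x)
    u_xx = g(u_x, x)
    return u_t + u * u_x - nu * u_xx      # zero wherever the PDE is satisfied

# Physics loss at random collocation points in [-1, 1] x [0, 1]
x_c = torch.rand(1000, 1) * 2 - 1
t_c = torch.rand(1000, 1)
physics_loss = burgers_residual(net, x_c, t_c).pow(2).mean()
```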

Historical Development

The origins of physics-informed neural networks (PINNs) trace back to 2017, when Maziar Raissi, Paris Perdikaris, and George Em Karniadakis published an arXiv preprint introducing the core concept as a means to solve nonlinear partial differential equations (PDEs) by embedding physical laws directly into neural network training. This foundational work built on earlier ideas of data-driven PDE discovery but marked the explicit formulation of PINNs for both forward PDE solving and inverse parameter estimation. The approach leveraged the success of deep learning in scientific computing, particularly the rise of automatic differentiation in frameworks like PyTorch and TensorFlow, which enabled efficient computation of PDE residuals without traditional discretization. The preprint, updated in 2018 and formally published in 2019, established PINNs as a framework for forward and inverse problems involving nonlinear PDEs, including demonstrations on the Navier-Stokes equations for fluid flow modeling. The seminal 2019 paper emphasized data-driven solution of PDEs with sparse data integrated alongside physics constraints, and it rapidly gained influence, accumulating over 10,000 citations by 2023. Early applications focused on fluid dynamics, such as solving the Navier-Stokes and Burgers' equations, where PINNs outperformed traditional numerical methods in handling noisy or limited data scenarios. Subsequent developments in 2020 extended PINNs to uncertainty quantification, with Bayesian physics-informed neural networks (B-PINNs) incorporating Bayesian inference to assess prediction reliability in PDE solutions, addressing a key limitation of deterministic network outputs. From 2021 to 2023, the field saw rapid growth in variants, including conservative PINNs (cPINNs), introduced in 2020, which enforce flux continuity on discrete subdomains to improve stability for conservation laws. By 2022, PINNs had expanded beyond fluid dynamics to multiphysics problems, such as coupled flow-mechanics systems and subsurface transport, enabling simulations of interacting phenomena like poroelasticity and multiphase flows. These advancements were fueled by the flexibility of deep learning frameworks in handling complex geometries and the growing availability of open-source implementations, solidifying PINNs as a standard tool in computational science. By 2024–2025, PINNs continued to evolve with advances in network architectures and theoretical guarantees, as detailed in subsequent sections.

Mathematical Foundations

Network Architecture

Physics-informed neural networks (PINNs) typically employ fully connected neural networks as their core architecture to approximate solutions to partial differential equations (PDEs). These networks take spatial and temporal coordinates, such as (x, t), as inputs and output the corresponding solution variables, for instance, u(x, t), representing the latent solution of the PDE. The architecture leverages the universal approximation theorem, enabling the network to represent complex functions defined over continuous domains without requiring a mesh. Activation functions play a crucial role in ensuring the smoothness required for accurate derivative computations via automatic differentiation, which is used to enforce the physical constraints. Common choices include the hyperbolic tangent (tanh) function, favored for its bounded and differentiable properties that facilitate smooth approximations, or the swish activation (\text{swish}(z) = z \cdot \sigma(z), where \sigma is the sigmoid function), which has shown improved accuracy in capturing non-linear behaviors in certain PDEs. Typical hyperparameters for PINN architectures involve 5 to 10 hidden layers with 50 to 100 neurons per layer, selected based on the complexity of the PDE to balance expressiveness and computational efficiency. These configurations are often optimized using gradient-based methods like Adam, allowing the trainable parameters \theta to adjust the network to fit both data and physics. For example, in solving the Burgers' equation, a network with 8 hidden layers and 20 neurons each has been used effectively. The input-output mapping in PINNs involves sampling collocation points randomly within the computational domain to enforce the governing PDE, while initial and boundary conditions are incorporated as additional training points. This unsupervised sampling strategy enables mesh-free enforcement of the physics across the domain. Mathematically, the network provides an approximation \tilde{u}(\theta; x, t) to the true solution u(x, t), where \theta denotes the set of network weights and biases. For challenging problems involving stiff PDEs, adaptations such as multi-scale architectures have been developed to better resolve disparate length or time scales. These include hierarchical networks or feature embeddings that enhance the representational capacity for multi-scale phenomena, improving convergence on problems like high-Reynolds-number flows or reaction-diffusion systems.
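The following sketch shows how such a backbone is typically assembled in PyTorch; the defaults reflect the depth and width ranges quoted above, while the class name, arguments, and collocation sampling are illustrative assumptions rather than a fixed standard.

```python
import torch
import torch.nn as nn

# Hedged sketch of a typical PINN backbone: a fully connected network
# approximating u_tilde(theta; x, t) with smooth activations.
class PINN(nn.Module):
    def __init__(self, in_dim=2, out_dim=1, width=50, depth=8, activation="tanh"):
        super().__init__()
        act = nn.Tanh if activation == "tanh" else nn.SiLU   # SiLU == swish
        layers = [nn.Linear(in_dim, width), act()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), act()]
        layers.append(nn.Linear(width, out_dim))
        self.model = nn.Sequential(*layers)

    def forward(self, xt):
        return self.model(xt)

# Mesh-free inputs: random collocation points over the space-time domain,
# e.g. a Burgers'-style setup with 8 hidden layers of 20 neurons.
xt_collocation = torch.rand(2000, 2)
u_hat = PINN(width=20, depth=8)(xt_collocation)
```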

Loss Function Formulation

The loss function in physics-informed neural networks (PINNs) is designed to enforce both data consistency and adherence to physical laws by combining multiple terms into a composite objective. The network approximates the solution to a partial differential equation (PDE) as \tilde{u}(\theta; x, t), where \theta represents the network parameters, and x, t denote spatial and temporal coordinates. This approximation is trained by minimizing a loss that balances empirical data fitting with the residual of the governing physics. The composite loss is generally expressed as
\mathcal{L}(\theta) = \mathcal{L}_\text{data}(\theta) + \lambda \mathcal{L}_\text{physics}(\theta) + \mathcal{L}_\text{boundary}(\theta),
where \mathcal{L}_\text{data} quantifies the discrepancy between predicted and observed values, \mathcal{L}_\text{physics} penalizes violations of the PDE, \mathcal{L}_\text{boundary} enforces initial and boundary conditions (ICs/BCs), and λ serves as a hyperparameter to balance the physics term. In the original PINN formulation, the data loss \mathcal{L}_\text{data} is the mean squared error (MSE) over N_f observed points:
\mathcal{L}_\text{data}(\theta) = \frac{1}{N_f} \sum_{i=1}^{N_f} \left\| \tilde{u}(\theta; x_f^i, t_f^i) - u_f^i \right\|^2,
while boundary terms are incorporated into this MSE to ensure compliance with ICs/BCs at designated points. The physics loss \mathcal{L}_\text{physics} arises from the PDE residual; for a general PDE \mathcal{F}[u] = 0, the residual r is computed as r = \mathcal{F}[\tilde{u}(\theta; x, t)], evaluated at N_c collocation points via automatic differentiation to obtain the required derivatives (e.g., partials with respect to x and t). Thus,
\mathcal{L}_\text{physics}(\theta) = \frac{1}{N_c} \sum_{i=1}^{N_c} \left\| \mathcal{F}[\tilde{u}(\theta; x_c^i, t_c^i)] \right\|^2.
The full unweighted loss in the seminal work simplifies to the sum of data and physics MSEs without an explicit λ, treating them equally. Boundary enforcement is handled softly by inclusion in the data loss, though separate \mathcal{L}_\text{boundary} terms can be added for clarity in more complex setups.
Weighting strategies are crucial for effective training, as mismatched scales between loss terms can lead to suboptimal convergence. The original 2019 PINN approach used fixed equal weights, which often requires manual tuning of λ to prioritize physics over data or vice versa. Subsequent improvements introduced self-adaptive weighting, where λ (or equivalent per-term weights) is learned during training via gradient-based updates, such as gradient ascent on soft attention weights applied to the residuals at each training point. This adaptive approach dynamically balances terms without manual hyperparameter intervention, enhancing robustness across diverse PDEs. Soft constraints, such as PDE residuals added to the loss function, predominate in PINNs because of their differentiability, in contrast to hard constraints enforced directly in the network architecture (e.g., via custom output layers), though soft constraints enable seamless end-to-end training. Training proceeds by minimizing the composite loss using gradient descent optimizers (e.g., Adam), with automatic differentiation facilitating computation of gradients through both the network outputs and the embedded derivatives in the residuals. This end-to-end differentiability allows the optimizer to propagate errors from physics violations back to θ, ensuring the learned \tilde{u} satisfies both the data and the governing equations simultaneously.
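A hedged sketch of this composite objective is given below. It reuses a network and a residual function like those in the earlier sketches; the argument names and the fixed weight lam (playing the role of λ) are illustrative.

```python
import torch

# Composite loss L(theta) = L_data + lam * L_physics + L_boundary, each as MSE.
def composite_loss(net, pde_residual, data_xt, data_u, bc_xt, bc_u, x_c, t_c, lam=1.0):
    loss_data = (net(data_xt) - data_u).pow(2).mean()      # L_data
    loss_bc = (net(bc_xt) - bc_u).pow(2).mean()            # L_boundary (ICs/BCs)
    loss_phys = pde_residual(net, x_c, t_c).pow(2).mean()  # L_physics (PDE residual)
    return loss_data + lam * loss_phys + loss_bc

# One Adam step; automatic differentiation propagates physics violations to theta:
# optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
# loss = composite_loss(net, burgers_residual, X_d, U_d, X_b, U_b, x_c, t_c, lam=1.0)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```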

Core Applications

Forward Problems: Solving PDEs

In the forward problem formulation within physics-informed neural networks (PINNs), the objective is to approximate the solution u(\mathbf{x}, t) to a given partial differential equation (PDE) \mathcal{N}[u] = f(\mathbf{x}, t), subject to specified initial conditions (ICs) and boundary conditions (BCs), without relying on traditional mesh-based discretization. PINNs achieve this by parameterizing the solution with a deep neural network \hat{u}_\theta(\mathbf{x}, t), where \theta denotes the trainable parameters, and enforcing the PDE, ICs, and BCs through a composite loss function during training. This mesh-free approach leverages automatic differentiation to compute derivatives, enabling direct evaluation of the PDE residual \mathcal{N}[\hat{u}_\theta] - f at arbitrary points in the domain. Training proceeds via a collocation-based strategy, in which collocation points are randomly sampled within the spatio-temporal domain, on the boundaries, and at the initial time. The optimizer minimizes the residual of the PDE at these interior collocation points, alongside terms for IC and BC residuals, typically using Adam followed by L-BFGS. This enforces the physics constraints in a data-efficient manner, often requiring fewer samples than purely data-driven methods, and allows for large time steps in time-dependent problems. The loss function components for PDE residuals described above underscore how this residual minimization drives the network toward satisfying the governing equations throughout the domain. Representative examples illustrate the efficacy of PINNs for forward problems. For the 1D nonlinear Burgers' equation, PINNs have achieved relative errors on the order of 10^{-4} with appropriate network architectures, demonstrating convergence as the number of layers and neurons increases. In fluid dynamics, PINNs solve the 2D incompressible Navier-Stokes equations, such as in lid-driven cavity flows, yielding relative errors around 10^{-3} for velocity and pressure fields after training on thousands of collocation points. The seminal work also includes solving the Allen-Cahn equation for phase-field modeling, where PINNs approximated the solution with an error of approximately 7 \times 10^{-3}, as demonstrated in the original framework. Compared to the finite element method (FEM), PINNs offer distinct advantages, particularly for irregular domains and high-dimensional PDEs. By avoiding mesh generation, PINNs handle complex geometries without the computational overhead of meshing, which can be prohibitive in irregular or evolving domains. In high dimensions, such as 10D PDEs arising in finance or quantum mechanics, traditional methods like FEM suffer from the curse of dimensionality due to exponential growth in degrees of freedom, whereas PINNs scale more favorably because the network's input dimensionality aligns naturally with the problem's dimension. Evaluation typically employs relative error norms, \|u - \hat{u}_\theta\|_{L^2} / \|u\|_{L^2}, to quantify accuracy against exact or high-fidelity reference solutions, confirming errors in the range of 10^{-3} to 10^{-4} for well-conditioned problems with sufficient training.
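The two-stage optimization mentioned above (Adam followed by L-BFGS) is sketched below under stated assumptions: loss_fn is assumed to be a zero-argument closure returning the composite loss over collocation, IC, and BC points, and the step counts and learning rate are illustrative.

```python
import torch

def train(net, loss_fn, adam_steps=5000, lbfgs_iters=500):
    # Stage 1: Adam for robust initial progress on the composite loss.
    adam = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(adam_steps):
        adam.zero_grad()
        loss_fn().backward()
        adam.step()

    # Stage 2: L-BFGS for fine convergence; it requires a closure.
    lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=lbfgs_iters,
                              line_search_fn="strong_wolfe")
    def closure():
        lbfgs.zero_grad()
        loss = loss_fn()
        loss.backward()
        return loss
    lbfgs.step(closure)

# Relative L2 error against a reference solution u_ref at test points xt_test:
# rel_err = torch.linalg.norm(net(xt_test) - u_ref) / torch.linalg.norm(u_ref)
```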

Inverse Problems: Parameter Discovery

In inverse problems within the framework of physics-informed neural networks (PINNs), the goal is to infer unknown parameters \lambda of a governing partial differential equation (PDE), or even to discover the form of the PDE itself, from observed data u_{\text{obs}}. This is achieved through a joint optimization process that simultaneously trains the neural network parameters \theta—which approximate the solution u(x, t; \theta)—and the PDE parameters \lambda, ensuring the predicted solution fits both the observations and the underlying physics. Unlike traditional methods that rely heavily on repeated calls to numerical solvers, PINNs embed the PDE constraints directly into the learning process, enabling parameter estimation even from sparse or limited observations. For data-driven discovery of PDEs, PINNs assume a candidate form for the governing equation, such as N[u] + \sum_i \lambda_i g_i(u) = 0, where N[\cdot] represents known operators (e.g., spatial derivatives) and the g_i(u) are basis functions capturing potential nonlinear terms, with \lambda_i as coefficients to be determined. The network outputs u(x, t; \theta) and its derivatives, which are substituted into this form to evaluate residuals, allowing the optimization to identify the active terms and their strengths by minimizing a combined loss that balances data fidelity and physical consistency. This approach leverages automatic differentiation to compute derivatives without explicit discretization, making it particularly suited for complex, high-dimensional systems. A representative example is the estimation of parameters in Darcy flow, where PINNs infer the heterogeneous permeability field \kappa(x) from sparse measurements, achieving accurate reconstructions by enforcing the steady-state PDE \nabla \cdot (\kappa \nabla p) = 0 alongside data constraints, with relative errors below 1% even for noisy inputs. Similarly, in reaction-diffusion systems, PINNs have been used to discover diffusion and reaction coefficients from sparse spatiotemporal measurements, such as in the PDE \partial_t u = D \nabla^2 u + R u (1 - u), where D and R are inferred with high fidelity using sparse regression integrated into the physics-informed framework. The optimization relies on an extended loss function formulated as \mathcal{L}(\theta, \lambda) = \mathcal{L}_{\text{data}}(\theta, \lambda) + \mathcal{L}_{\text{physics}}(\theta, \lambda), where \mathcal{L}_{\text{data}} measures the mismatch between the network prediction and observations (e.g., as a mean squared error), and \mathcal{L}_{\text{physics}} enforces the PDE residual |N[u(\theta)] + \lambda \cdot g(u(\theta))| at collocation points, with both terms minimized via gradient-based methods like Adam. This joint training enables seamless integration of data and physics, as gradients with respect to \theta and \lambda are computed automatically. A seminal demonstration of this capability is the 2019 work by Raissi et al., which discovered the nonlinear terms of the Kuramoto-Sivashinsky equation \partial_t u + \partial_x^2 u + \partial_x^4 u + \lambda_1 (\partial_x u)^2 + \lambda_2 u \partial_x u = 0 from limited time-series data, recovering coefficients with errors under 5%. PINNs exhibit robustness to noisy data in inverse settings due to the physics regularization term, which acts as a prior enforcing structural consistency and mitigating overfitting, allowing reliable parameter discovery even when observations are corrupted by up to 10% noise. This regularization helps propagate information from the PDE across the domain, compensating for data scarcity.
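A hedged sketch of the joint (\theta, \lambda) optimization is shown below for the 1D reaction-diffusion example quoted above. The log-parameterization of D and R (to keep them positive), the initial guesses, and the reuse of an (x, t) network like the earlier sketches are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Unknown PDE coefficients, trained jointly with the network weights.
log_D = nn.Parameter(torch.zeros(()))
log_R = nn.Parameter(torch.zeros(()))

def rd_residual(net, x, t):
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    g = lambda y, z: torch.autograd.grad(y, z, torch.ones_like(y), create_graph=True)[0]
    u_t, u_x = g(u, t), g(u, x)
    u_xx = g(u_x, x)
    D, R = log_D.exp(), log_R.exp()
    return u_t - D * u_xx - R * u * (1 - u)   # residual of u_t = D u_xx + R u(1-u)

# One optimizer updates theta and (D, R) together from the combined loss:
# opt = torch.optim.Adam(list(net.parameters()) + [log_D, log_R], lr=1e-3)
# loss = data_mse + rd_residual(net, x_c, t_c).pow(2).mean()
```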

Specialized Variants

Piece-wise Approximation and Domain Decomposition

Piece-wise approximation and domain decomposition methods address limitations of standard physics-informed neural networks (PINNs) in handling complex geometries, multi-scale features, or discontinuities in solutions, such as shocks in hyperbolic equations, by partitioning the computational domain into smaller subdomains. This decomposition enhances accuracy for non-smooth solutions and enables parallel training, reducing computational overhead for large-scale problems. In piece-wise approximation approaches, such as extended PINNs (XPINNs), the domain is divided into non-overlapping subdomains, each approximated by a separate local network. XPINNs, introduced by Jagtap, Kharazmi, and Karniadakis in 2020, generalize space-time domain decomposition for nonlinear partial differential equations (PDEs), and are particularly suited to multi-scale problems where a single global network struggles with varying solution scales. Interface conditions between subdomains are enforced through additional loss terms to ensure continuity of the solution and its derivatives, promoting a seamless global approximation. This method significantly reduces training time compared to monolithic PINNs by allowing independent optimization of the local networks. The loss function in these methods combines local residual losses within each subdomain with interface penalties. For a PDE with residual operator \mathcal{R}(u) in subdomain \Omega_k, the local loss is \mathcal{L}_k = \frac{1}{N_k} \sum_{i=1}^{N_k} |\mathcal{R}(u_{\theta_k}(\mathbf{x}_i))|^2, where u_{\theta_k} is the local network with parameters \theta_k, evaluated at N_k collocation points in \Omega_k. The total objective includes these terms plus contributions enforcing interface continuity, such as \mathcal{L}_{\Gamma} = \frac{1}{M} \sum_{j=1}^{M} |u_{\theta_k}(\mathbf{y}_j) - u_{\theta_{k+1}}(\mathbf{y}_j)|^2 + |\nabla u_{\theta_k}(\mathbf{y}_j) \cdot \mathbf{n} - \nabla u_{\theta_{k+1}}(\mathbf{y}_j) \cdot \mathbf{n}|^2, where \Gamma denotes the interface between adjacent subdomains and \mathbf{n} the interface normal vector. The total loss \mathcal{L} = \sum_k \mathcal{L}_k + \lambda \mathcal{L}_{\Gamma} is minimized, with \lambda balancing the terms, akin to standard PINN formulations but localized. Domain decomposition variants, including conservative PINNs (cPINNs) and parallel PINNs (pPINNs), extend these ideas for specific challenges like conservation laws. cPINNs, proposed by Jagtap et al. in 2020, incorporate discrete domain decomposition to enforce conservation properties in nonlinear conservation laws, using local networks with interface fluxes derived from the PDE structure. This ensures physical consistency across subdomains, improving stability for problems with discontinuities like shocks. pPINNs build on cPINNs and XPINNs by enabling distributed training via MPI or GPU parallelism, partitioning training across processors while maintaining synchronization. These approaches scale to large domains, with pPINNs demonstrating up to an order-of-magnitude speedup in training for advection-dominated flows. An illustrative example is the application of piece-wise methods to the wave equation in acoustics, where domain decomposition handles heterogeneous media with varying wave speeds. XPINNs partition the acoustic domain into subregions with distinct material properties, training local networks to approximate the wave field while enforcing continuity of the solution and its normal derivative at interfaces, yielding accurate wave propagation predictions with reduced error compared to global PINNs.
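A hedged sketch of the interface penalty described above is given below: two subdomain networks, here called net_a and net_b, are coupled at interface points by penalizing jumps in the solution and in its derivative along a supplied normal vector. The function and argument names are illustrative, not taken from the XPINN reference implementation.

```python
import torch

def interface_loss(net_a, net_b, xt_gamma, n):
    xt = xt_gamma.clone().requires_grad_(True)
    u_a, u_b = net_a(xt), net_b(xt)
    g = lambda u: torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    du_a, du_b = g(u_a), g(u_b)
    jump_u = (u_a - u_b).pow(2).mean()                        # solution continuity
    jump_dn = ((du_a - du_b) * n).sum(dim=1).pow(2).mean()    # normal-derivative continuity
    return jump_u + jump_dn

# Total objective: sum of the local subdomain losses plus the weighted interface term,
# total_loss = loss_a + loss_b + lam * interface_loss(net_a, net_b, pts_gamma, n)
```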

Integration with Other Frameworks

Physics-informed neural networks (PINNs) have been effectively combined with the Theory of Functional Connections (TFC), a mathematical framework that enables exact enforcement of linear constraints through analytical basis functions. In TFC-augmented PINNs, the approximate solution takes the form \tilde{u}(x) = \mathrm{NN}(x) + \sum_{i=1}^{n} c_i \phi_i(x), where \mathrm{NN}(x) denotes the output of the neural network, c_i are coefficients determined analytically from the constraints, and \phi_i(x) are predefined basis functions that inherently satisfy the boundary or initial conditions. This hybrid approach, first proposed around 2020, ensures exact constraint satisfaction without relying solely on the neural network's optimization, leading to faster convergence rates than vanilla PINNs, often reducing training time by orders of magnitude while maintaining high accuracy for parametric differential equations. Another notable integration is the Physics-informed PointNet (PIPN), which merges PINNs with the PointNet architecture to address challenges in irregular geometries. PointNet processes unstructured point-cloud data representing the domains, while physics losses enforce the governing equations across multiple geometries in a single training run. Introduced in 2022, PIPN excels in applications such as steady-state incompressible flow and heat transfer on irregular geometries, enabling simultaneous solutions for flow and thermal fields over diverse shapes, with relative errors below 1% in benchmark tests. PINNs have also been enhanced with Fourier features to improve handling of periodic problems, such as those involving oscillatory solutions in wave equations. By mapping inputs through random sinusoidal transformations before feeding them into the network, these hybrids mitigate spectral bias and better capture high-frequency components, as shown in 2021 studies that demonstrated improved accuracy for multi-scale periodic phenomena without additional computational overhead. More recently, in 2024, Physics-informed Kolmogorov-Arnold Networks (PIKANs) integrate PINNs with Kolmogorov-Arnold representations, replacing fixed activations with learnable univariate functions for greater interpretability and efficiency in solving forward and inverse problems. In 2025, emerging variants such as Transformer-based PINNs (e.g., Transolver) have further integrated attention mechanisms to handle complex geometries with high efficiency. These framework integrations collectively enhance PINNs' generalization capabilities, particularly for complex boundaries and multi-domain scenarios, by leveraging complementary strengths in constraint handling, geometric flexibility, and representational power.
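The random Fourier feature embedding mentioned above can be sketched as a small input-mapping module placed in front of a PINN; the feature count and scale sigma below are illustrative hyperparameters, and the class name is hypothetical.

```python
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    def __init__(self, in_dim=2, n_features=64, sigma=5.0):
        super().__init__()
        # Fixed (non-trainable) random projection B with entries ~ N(0, sigma^2)
        self.register_buffer("B", sigma * torch.randn(in_dim, n_features))

    def forward(self, xt):
        proj = 2 * torch.pi * xt @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

# The embedding doubles the feature count (sin and cos), so the first linear
# layer of the downstream PINN takes 2 * n_features inputs.
embed = FourierFeatures(in_dim=2, n_features=64)
net = nn.Sequential(embed,
                    nn.Linear(128, 50), nn.Tanh(),
                    nn.Linear(50, 50), nn.Tanh(),
                    nn.Linear(50, 1))
u_hat = net(torch.rand(100, 2))
```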

Domain-Specific Applications

Mechanics and Elasticity

Physics-informed neural networks (PINNs) have been applied to solve linear and nonlinear elasticity partial differential equations (PDEs) to determine displacement and stress distributions in materials under various loading conditions. In these formulations, the neural network approximates the displacement field, and the physics constraints are incorporated into the loss function through the residual of the equilibrium equation. The governing PDE for static linear elasticity is the momentum balance \nabla \cdot \boldsymbol{\sigma} + \mathbf{f} = 0, where \boldsymbol{\sigma} is the stress tensor, \mathbf{f} represents body forces, and the constitutive relation \boldsymbol{\sigma} = \mathbf{C} : \boldsymbol{\varepsilon} links the stress to the strain tensor \boldsymbol{\varepsilon} via the material stiffness tensor \mathbf{C}. Automatic differentiation enables efficient computation of the gradients needed for strains and stresses directly from the network outputs. This approach has been demonstrated in two-dimensional plane problems, where PINNs accurately predict deformations in beams and thick-walled cylinders with accuracy comparable to finite element methods but without mesh generation. In inverse problems within solid mechanics, PINNs facilitate the inference of material properties, such as elastic moduli, from observed displacement data by minimizing a combined loss of data fidelity and physics residuals. This data-driven, physics-constrained optimization allows parameter discovery in heterogeneous solids, where traditional methods struggle with sparse measurements. For instance, in structural health monitoring, PINNs have been used to identify damage in plates by solving inverse formulations tied to the elasticity equations. Applications extend to fracture mechanics, where PINNs model crack propagation in brittle materials using phase-field approximations integrated into the network loss, enabling simulation of dynamic fracture without explicit crack tracking. Additionally, PINNs support topology optimization by embedding compliance minimization objectives directly into the physics-informed loss, optimizing the material distribution for minimum strain energy under load. A notable 2024 application involves Kirchhoff-Love plate theory, where PINNs approximated thin-plate deflections and stresses, incorporating the boundary conditions and plate equilibrium into the training process. PINNs offer advantages over traditional finite element methods (FEM), particularly in handling heterogeneous materials, as their meshless nature avoids remeshing complexities at multi-material interfaces. The meshless formulation also excels in domains with cracks or irregularities, allowing seamless representation of discontinuities without element distortion.
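A hedged sketch of how the equilibrium residual can be assembled from a displacement network is shown below for a 2D plane-strain problem. The Lamé constants, zero body force, and network size are illustrative assumptions, not values from the cited studies.

```python
import torch
import torch.nn as nn

# `net` maps (x, y) to displacements (u, v); strains, stresses and the
# equilibrium residual div(sigma) + f = 0 follow from automatic differentiation.
def equilibrium_residual(net, xy, lam=1.0, mu=0.5):
    xy = xy.clone().requires_grad_(True)
    uv = net(xy)                                     # columns: u, v
    d = lambda f: torch.autograd.grad(f.sum(), xy, create_graph=True)[0]
    u_x, u_y = d(uv[:, 0]).unbind(1)
    v_x, v_y = d(uv[:, 1]).unbind(1)
    # Isotropic constitutive law: sigma = lam * tr(eps) * I + 2 * mu * eps
    sxx = lam * (u_x + v_y) + 2 * mu * u_x
    syy = lam * (u_x + v_y) + 2 * mu * v_y
    sxy = mu * (u_y + v_x)
    dsxx, dsyy, dsxy = d(sxx), d(syy), d(sxy)
    rx = dsxx[:, 0] + dsxy[:, 1]                     # x-component of div(sigma)
    ry = dsxy[:, 0] + dsyy[:, 1]                     # y-component of div(sigma)
    return torch.stack([rx, ry], dim=1)              # should vanish when f = 0

disp_net = nn.Sequential(nn.Linear(2, 50), nn.Tanh(),
                         nn.Linear(50, 50), nn.Tanh(),
                         nn.Linear(50, 2))
physics_loss = equilibrium_residual(disp_net, torch.rand(500, 2)).pow(2).mean()
```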

Stochastic and Biological Systems

Physics-informed neural networks (PINNs) have been extended to handle stochastic systems, particularly through formulations involving backward stochastic differential equations (BSDEs), which are useful for high-dimensional problems in finance and stochastic control where approximating conditional expectations is required. In these approaches, PINNs approximate the processes Y_t and Z_t of the BSDE, defined by the dynamics dY_t = -f(t, Y_t, Z_t) \, dt + Z_t \, dW_t, with a given terminal condition Y_T = g(X_T), where W_t is a Brownian motion, f is the driver function, and g is the terminal payoff. The loss function enforces both the terminal condition and the drift consistency derived from Itô's lemma applied to the forward process, enabling efficient solution of high-dimensional BSDEs without curse-of-dimensionality issues. A notable application is the pricing of American-style options, where BSDE-based PINNs compute option prices and hedging strategies in high dimensions by incorporating the early exercise feature through a least-squares approximation within the training. For instance, in a 2021 framework, deep BSDE solvers achieved accurate prices for multi-asset options, outperforming traditional methods in dimensions up to 100 by minimizing a combined loss over continuation and exercise values. In biological systems, biologically-informed neural networks (BINNs), a specialized variant of PINNs, incorporate prior knowledge of biological pathways and reaction-diffusion processes to model population dynamics from sparse data, such as cell migration or cellular interactions. BINNs parameterize the governing equations with neural networks constrained by known biological terms, such as diffusion and proliferation rates in partial differential equations, allowing discovery of nonlinear density-dependent effects in systems like collective cell migration or tumor growth. For example, BINNs have been applied to reaction-diffusion models in cell biology, inferring density-dependent diffusivities and proliferation rates from limited experimental observations, achieving robust predictions even with noisy or incomplete datasets. Recent extensions as of 2024 include multi-scale BINNs for tumor growth modeling, integrating cellular and tissue-level dynamics. Inverse problems in biological modeling, such as parameter estimation in ecological systems, leverage BINNs and PINNs to discover unknown coefficients or functional forms from observational data, often integrating sparse measurements to fit models like predator-prey interactions. A representative example is the data-driven discovery of the Lotka-Volterra equations governing predator-prey dynamics, where PINNs solve the inverse problem by minimizing residuals of the ordinary differential equations alongside data-fitting losses, accurately recovering the interaction rates α, β, γ, δ from simulated time-series data with errors below 5% in noisy settings. This approach draws on broader PINN techniques for parameter identification. Extensions to stochastic biological systems include Bayesian PINNs (B-PINNs), which propagate uncertainty through variational inference on the network weights, providing probabilistic predictions for systems with noisy inputs or aleatoric uncertainty, such as variable reaction rates in epidemiological models. B-PINNs quantify posterior distributions over solutions, enabling reliable uncertainty estimates in forward simulations and inverse discoveries, as demonstrated in handling measurement noise with credible intervals that capture the true variability.
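A hedged sketch of the Lotka-Volterra discovery setup mentioned above is given below: a small network maps time t to the populations (x, y), and the interaction rates are trainable scalars fitted jointly with the network weights. The initial guesses, network size, and collocation range are illustrative assumptions.

```python
import torch
import torch.nn as nn

pop_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                        nn.Linear(32, 32), nn.Tanh(),
                        nn.Linear(32, 2))
rates = nn.ParameterDict({k: nn.Parameter(torch.tensor(0.5))
                          for k in ("alpha", "beta", "gamma", "delta")})

def lv_residual(net, t):
    t = t.clone().requires_grad_(True)
    xy = net(t)                                   # columns: prey x, predator y
    d = lambda f: torch.autograd.grad(f.sum(), t, create_graph=True)[0][:, 0]
    x, y = xy[:, 0], xy[:, 1]
    rx = d(x) - (rates["alpha"] * x - rates["beta"] * x * y)   # dx/dt residual
    ry = d(y) - (rates["delta"] * x * y - rates["gamma"] * y)  # dy/dt residual
    return torch.stack([rx, ry], dim=1)

# Joint optimization of the network weights and the interaction rates:
opt = torch.optim.Adam(list(pop_net.parameters()) + list(rates.values()), lr=1e-3)
t_c = torch.rand(200, 1) * 10.0                   # collocation times in [0, 10]
loss = lv_residual(pop_net, t_c).pow(2).mean()    # + data-fitting MSE in practice
```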

Limitations and Future Directions

Challenges and Limitations

Physics-informed neural networks (PINNs) encounter significant accuracy challenges in advection-dominated systems, where high Péclet numbers lead to sharp gradients that the networks struggle to capture, resulting in unstable training and poor approximation of solutions. For instance, in convection-dominated problems with convection coefficients β > 10, PINNs exhibit relative errors exceeding 80%, failing to enforce the physical constraints effectively due to the soft regularization in their loss formulation. Similarly, chaotic systems or systems with discontinuities, such as shockwaves in traffic flow models, yield high relative L² errors (e.g., 0.309 for the LWR model), as the networks cannot resolve non-smooth features. This issue is exacerbated by spectral bias in multilayer perceptrons (MLPs), which preferentially learn low-frequency components, neglecting high-frequency details essential for accurate PDE solutions. Computational demands pose another major hurdle, as automatic differentiation for higher-order derivatives scales poorly with problem dimensionality and complexity, leading to prolonged training times and resource-intensive optimization. Training instability arises from imbalanced loss terms, where physics residuals often dominate data-fitting components, creating a non-convex optimization landscape that traps gradient descent in suboptimal minima. Studies from 2021 have shown that PINNs underperform traditional finite element methods (FEM) for stiff PDEs, such as reaction-diffusion systems with reaction rates ρ > 5, reaching errors up to 93% compared with FEM's robust handling of stiffness. The non-convex nature of these loss landscapes further complicates optimization, particularly for stiff problems where soft constraints fail to mimic the hard enforcement used in numerical solvers. Theoretically, PINNs lack rigorous convergence guarantees outside of simple, linear cases, with performance highly sensitive to the choice and distribution of collocation points, which can lead to inconsistent approximations if these are not selected carefully. In practice, hyperparameter tuning remains a critical challenge, especially the weighting factors λ that balance the loss components, as manual or adaptive adjustments are often required to mitigate imbalances and achieve stable training, yet no universal strategy exists. Scalability to three-dimensional or higher problems is limited for real-time applications, as the growth in parameters and collocation points hinders efficient training, restricting PINNs to offline or low-dimensional simulations in most scenarios.

Recent Advances and Future Directions

Recent advancements in physics-informed neural networks have focused on integrating Kolmogorov-Arnold Networks (KANs) to enhance interpretability and accuracy, particularly in complex dynamical systems. Physics-Informed Kolmogorov-Arnold Networks (PIKANs), introduced in 2024, replace traditional multilayer perceptrons (MLPs) with KAN architectures that decompose multivariate functions into sums of univariate functions of the inputs, enabling more transparent modeling of physical laws. For instance, in fluid dynamics applications, PIKANs have demonstrated superior performance in simulating high-speed flows by embedding the governing partial differential equations (PDEs) directly into the network's loss function, achieving lower relative errors than standard PINNs while requiring fewer parameters. The underlying Kolmogorov-Arnold representation can be expressed as: \Phi(x_1, \dots, x_n) = \sum_{q=1}^{2n+1} \Phi_q \left( \sum_{p=1}^n \phi_{q,p}(x_p) \right), where \Phi_q and \phi_{q,p} are univariate functions, typically parameterized by splines or basis expansions, and the physics-informed loss enforces PDE residuals alongside boundary conditions.
Operator learning extensions have further propelled PINN capabilities, with physics-informed variants of DeepONets emerging as a key trend in 2024–2025. These models learn nonlinear solution operators for parametric PDEs by incorporating physical constraints into the branch and trunk networks, enabling efficient forward and inverse mappings for problems like convective heat transfer and multiphase flows. For example, multi-resolution physics-informed DeepONets have shown up to 50% reduction in prediction errors for time-dependent PDEs compared to classical operator-learning baselines, facilitating real-time simulations in engineering contexts. Integration of large language models (LLMs) for automated hyperparameter tuning represents a 2025 innovation, leveraging agentic frameworks to optimize PINN architectures and training parameters. Tools like PINNsAgent use LLMs to iteratively suggest and evaluate configurations, such as activation functions and layer depths, resulting in faster convergence for PDE solving without manual tuning. This approach has been applied to diverse tasks, including EEG modeling, where periodic activations in PINNs captured physiological oscillations more accurately than standard tanh functions in FitzHugh-Nagumo-based neural dynamics. Applications of PINNs continue to expand in challenging domains, with notable growth in turbulence modeling, multiphase flows, and geoenergy systems, as highlighted in 2024 reviews. In turbulence modeling, augmented PINNs incorporating Spalart-Allmaras closures have achieved up to 73% reduction in mean absolute errors for backward-facing step flows, outperforming traditional data-driven methods. For multiphase flows in porous media, PINNs have enabled precise history-matching in fractured reservoirs, capturing imbibition dynamics with relative errors below 5%. In geoenergy, particularly subsurface flow simulations, physics-informed approaches have improved parameter estimation for reactive transport, supporting carbon storage and geothermal applications, as detailed in comprehensive data-driven reviews. Chebyshev-based KAN variants, developed in 2024, have shown particular promise for periodic PDEs, outperforming MLPs by leveraging orthogonal polynomials to handle oscillatory solutions. These ChebPIKANs exhibit significantly lower errors on problems like the Kuramoto-Sivashinsky equation, owing to enhanced function representation and training stability. Looking ahead, hybrid integrations with quantum computing are gaining traction; for instance, hybrid quantum-classical PINNs (HQPINNs) in 2025 have modeled quantum control problems with improved optimization efficiency, with the potential to accelerate high-dimensional simulations. Additionally, efforts toward certified accuracy via rigorous error bounds are advancing, with a posteriori certification methods providing provable guarantees on prediction errors for certain classes of equations.
