
Partial differential equation

A partial differential equation (PDE) is an equation that relates a function of multiple independent variables to its partial derivatives, typically arising in the mathematical modeling of multidimensional phenomena such as heat diffusion, wave propagation, and fluid flow. Unlike ordinary differential equations, which involve derivatives with respect to a single variable, PDEs account for interactions across several dimensions, often requiring boundary and initial conditions to specify unique solutions. PDEs are fundamental in physics, engineering, and the applied sciences, underpinning models in continuum mechanics, heat transfer, and financial mathematics (the Black-Scholes equation). Their solutions often demand advanced techniques such as separation of variables, integral transforms, or numerical methods, with well-posed problems ensuring existence, uniqueness, and stability of solutions under specified conditions. Linear PDEs, in which the equation is a linear combination of the function and its derivatives, are more tractable and form the basis for many classical results, while nonlinear PDEs, such as the Navier-Stokes equations for fluid flow, pose significant challenges and remain subjects of active research, including open questions about the existence and smoothness of their solutions. PDEs are classified by order (the highest order of derivative appearing), linearity, and, for second-order equations, type: elliptic (e.g., the Laplace equation for steady-state problems), parabolic (e.g., the heat equation for diffusion processes), and hyperbolic (e.g., the wave equation for propagation phenomena). These categories influence solution methods and physical interpretations, with elliptic PDEs typically modeling equilibrium states, parabolic ones transient diffusion, and hyperbolic ones wave-like propagation. Key examples include the transport equation for advection, illustrating the breadth of applications from acoustics to finance.

Fundamentals

Introduction

Partial differential equations (PDEs) are mathematical equations that involve an unknown function of multiple variables and its partial derivatives with respect to those variables, contrasting with ordinary differential equations (ODEs), which depend on only one variable. Unlike ODEs, which describe phenomena varying along a single dimension such as time, PDEs capture behaviors in systems with spatial extent, enabling the modeling of complex interactions across multiple dimensions. Prominent examples include the heat equation, which governs diffusion processes like temperature distribution in a medium; the wave equation, which describes vibrations and propagations such as sound or light waves; and the Laplace equation, which models steady-state phenomena including electrostatic potentials and incompressible fluid flow. The origins of PDEs trace back to the 18th century, with Leonhard Euler developing early formulations in fluid dynamics around the 1750s, introducing equations that describe mass and momentum conservation. Jean le Rond d'Alembert contributed significantly in 1747 by deriving the wave equation, marking one of the first explicit PDEs for continuous media like vibrating strings. Joseph Fourier advanced the field in his 1822 treatise Théorie analytique de la chaleur, where he formulated the heat equation and pioneered trigonometric series solutions for heat conduction problems. PDEs are fundamental across the sciences and engineering, underpinning models in physics for heat conduction and wave propagation, in structural engineering for stress analysis, in finance via the Black-Scholes equation for option pricing, and in biology for population dynamics and reaction-diffusion systems. These equations are broadly classified into elliptic, parabolic, and hyperbolic types based on their physical characteristics, such as steady-state versus time-evolving behaviors.

Definition

A partial differential equation (PDE) is a mathematical equation that relates an unknown function of multiple independent variables to its partial derivatives with respect to those variables. Typically, the unknown function u depends on n independent variables x_1, x_2, \dots, x_n, and the equation imposes constraints on how u varies across these dimensions. The general form of a PDE is given by F(x_1, \dots, x_n, u, \frac{\partial u}{\partial x_1}, \dots, \frac{\partial u}{\partial x_n}, \frac{\partial^2 u}{\partial x_i \partial x_j}, \dots ) = 0, where F is a given function, and the arguments include u and all relevant partial derivatives up to some order. Partial derivatives, denoted \partial u / \partial x_i, represent the rate of change of u with respect to x_i while holding all other independent variables constant. The order of a PDE is defined as the highest order of any partial derivative appearing in the equation; for instance, a first-order PDE involves only first partial derivatives, while a second-order PDE includes second partial derivatives such as \partial^2 u / \partial x_i \partial x_j. A quasilinear PDE is one in which the highest-order partial derivatives appear linearly, though their coefficients may depend on the independent variables, the function u itself, and lower-order derivatives. For example, in two variables, a first-order quasilinear PDE takes the form f(x, y, u) \frac{\partial u}{\partial x} + g(x, y, u) \frac{\partial u}{\partial y} = h(x, y, u).

Notation

In partial differential equations, the unknown function is commonly denoted by u(\mathbf{x}, t), where \mathbf{x} = (x_1, \dots, x_n) represents the spatial coordinates in \mathbb{R}^n and t is the time variable. The gradient of u is written as \nabla u = \left( \frac{\partial u}{\partial x_1}, \dots, \frac{\partial u}{\partial x_n} \right), a vector capturing the first-order spatial derivatives. The Laplacian operator, central to many PDEs, is defined as \Delta u = \sum_{i=1}^n \frac{\partial^2 u}{\partial x_i^2}, often appearing in elliptic and parabolic equations. Partial derivatives are frequently expressed using subscript notation for conciseness: u_x = \frac{\partial u}{\partial x} for the first partial with respect to x, and u_{xx} = \frac{\partial^2 u}{\partial x^2} for the second partial, extending to mixed derivatives like u_{xy} = \frac{\partial^2 u}{\partial x \partial y}. For higher-order derivatives in multiple variables, multi-index notation is employed, where a multi-index \alpha = (\alpha_1, \dots, \alpha_n) with nonnegative integers \alpha_i and order |\alpha| = \sum_{i=1}^n \alpha_i denotes D^\alpha u = \frac{\partial^{|\alpha|} u}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}. This compact form facilitates summations over derivative orders in PDE analysis. Boundary conditions specify the behavior of solutions on the domain's boundary \partial \Omega. Dirichlet conditions prescribe the function value, u = g on \partial \Omega, where g is a given function. Neumann conditions instead specify the normal derivative, \frac{\partial u}{\partial n} = h on \partial \Omega, with \mathbf{n} the outward unit normal and h given. For time-dependent PDEs, initial conditions, often called Cauchy data, are given by u(\mathbf{x}, 0) = u_0(\mathbf{x}) at t = 0.
For systems of PDEs involving vector-valued functions, boldface \mathbf{u} = (u_1, \dots, u_m) denotes the vector of unknown components, with operations like the divergence \nabla \cdot \mathbf{u} = \sum_{i=1}^n \frac{\partial u_i}{\partial x_i} producing scalar fields, or extended accordingly for tensor forms.

Classification

Linear and Nonlinear PDEs

Partial differential equations (PDEs) are classified as linear or nonlinear based on the structure of the operator acting on the unknown function. A PDE is linear if it can be expressed as a linear combination of the unknown function u and its partial derivatives, with coefficients that may depend on the independent variables, equaling a forcing function f(\mathbf{x}). Formally, for independent variables \mathbf{x} = (x_1, \dots, x_n), a linear PDE takes the form a(\mathbf{x}) u + \sum_{i=1}^n b_i(\mathbf{x}) \frac{\partial u}{\partial x_i} + \text{higher-order terms} = f(\mathbf{x}), where the higher-order terms involve linear combinations of higher partial derivatives of u with coefficient functions. If f(\mathbf{x}) = 0, the equation is homogeneous; otherwise, it is inhomogeneous. In contrast, nonlinear PDEs include terms where the unknown function or its derivatives appear in nonlinear ways, such as products, powers, or other nonlinear functions. A canonical example is Burgers' equation, \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2}, where the nonlinear term u \frac{\partial u}{\partial x} arises from convective transport, modeling phenomena like fluid shocks when \nu > 0 is a viscosity coefficient. Linear PDEs possess advantageous properties that simplify their analysis. The superposition principle states that if u_1 and u_2 are solutions to a homogeneous linear PDE, then any linear combination c_1 u_1 + c_2 u_2 (with constants c_1, c_2) is also a solution. This scalability extends to multiples: if u solves the homogeneous equation, so does c u for any scalar c. Under suitable boundary and initial conditions, such as Dirichlet or Neumann conditions on bounded domains, solutions to linear PDEs are unique, often proven via energy methods or maximum principles. Nonlinear PDEs, however, present significant challenges due to the loss of these linear properties. The superposition principle generally fails, preventing simple combination of known solutions to build new ones.
Solutions may lack uniqueness, as demonstrated in cases like the inviscid Burgers equation, where multiple weak solutions can satisfy the same initial data. Additionally, nonlinear effects can lead to the formation of shocks (discontinuities propagating through smooth initial data, as seen in the inviscid limit \nu \to 0 of Burgers' equation) or finite-time blow-up, where the solution becomes unbounded in finite time, exemplified by certain semilinear heat equations or wave equations with focusing nonlinearities. These phenomena complicate both theoretical analysis and numerical simulation, often requiring specialized techniques like regularization or weak-solution frameworks.
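The contrast between the two classes can be checked symbolically. The following SymPy sketch verifies superposition for the heat equation and exhibits the nonlinear cross term for Burgers' equation; the particular solutions e^{-t} \sin x and e^{-4t} \sin 2x are illustrative choices, not drawn from the text:

```python
import sympy as sp

x, t, nu = sp.symbols('x t nu', positive=True)

def heat_residual(u):
    # Residual of the linear heat equation u_t = u_xx (k = 1)
    return sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2))

def burgers_residual(u):
    # Residual of Burgers' equation u_t + u u_x = nu u_xx
    return sp.simplify(sp.diff(u, t) + u * sp.diff(u, x) - nu * sp.diff(u, x, 2))

# Two exact heat-equation solutions and an arbitrary linear combination
u1 = sp.exp(-t) * sp.sin(x)
u2 = sp.exp(-4 * t) * sp.sin(2 * x)
assert heat_residual(u1) == 0 and heat_residual(u2) == 0
assert heat_residual(3 * u1 - 5 * u2) == 0   # superposition holds

# For Burgers, summing two fields leaves the nonzero cross term
# u1 (u2)_x + u2 (u1)_x, so the sum of solutions is not a solution.
cross = sp.simplify(burgers_residual(u1 + u2)
                    - burgers_residual(u1) - burgers_residual(u2))
assert sp.simplify(cross - (u1 * sp.diff(u2, x) + u2 * sp.diff(u1, x))) == 0
assert cross != 0
```

The linear terms cancel exactly when fields are summed; only the quadratic transport term survives, which is precisely why superposition fails for nonlinear equations.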

Orders and Types of PDEs

Partial differential equations (PDEs) are classified by their order, which is the highest order of partial derivative appearing in the equation. First-order PDEs involve only first partial derivatives, while higher-order PDEs include derivatives of order greater than one. This classification influences the analytical and numerical methods applicable to solving them. For first-order linear PDEs in two variables, the general form is a(x, y) \frac{\partial u}{\partial x} + b(x, y) \frac{\partial u}{\partial y} + c(x, y) u = d(x, y), where a, b, c, d are functions of x and y. These equations are solved using the method of characteristics, which reduces the PDE to ordinary differential equations along certain curves called characteristic curves, defined by the direction field \frac{dy}{dx} = \frac{b(x, y)}{a(x, y)}. Second-order PDEs, the most commonly studied class, take the canonical form A u_{xx} + 2B u_{xy} + C u_{yy} + D u_x + E u_y + F u = G, where the coefficients A, B, C depend on x and y, and lower-order terms are included. Their type is determined by the discriminant B^2 - AC: elliptic if B^2 - AC < 0, parabolic if B^2 - AC = 0, and hyperbolic if B^2 - AC > 0. This classification is invariant under changes of independent variables and guides the behavior of solutions, such as smoothness in elliptic cases versus propagation of singularities along characteristics in hyperbolic ones. Classic examples illustrate these types. The Laplace equation \Delta u = u_{xx} + u_{yy} = 0 is elliptic, with A = C = 1, B = 0, so B^2 - AC = -1 < 0, modeling steady-state phenomena like electrostatic potentials. The heat equation u_t = k u_{xx} (for one spatial dimension) is parabolic, with A = k, B = C = 0 (treating time as one variable), yielding B^2 - AC = 0, describing diffusion processes. The wave equation u_{tt} = c^2 u_{xx} is hyperbolic, with A = -c^2, B = 0, C = 1 (in space-time variables), giving B^2 - AC = c^2 > 0, capturing wave propagation.
For higher-order PDEs in n dimensions, the classification generalizes using the principal symbol, the polynomial of degree m (the order) formed by the highest-order terms in the Fourier-transformed equation. For a PDE \sum_{|\alpha| = m} a_\alpha(x) \partial^\alpha u + \text{lower-order terms} = f, the principal symbol is p(x, \xi) = \sum_{|\alpha| = m} a_\alpha(x) (i \xi)^\alpha, where \xi \in \mathbb{R}^n is the frequency variable. The type (e.g., elliptic if p(x, \xi) \neq 0 for \xi \neq 0) is determined by the properties of this symbol, extending the second-order criterion to analyze well-posedness and solution regularity.
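The second-order criterion is mechanical enough to automate. The small helper below (a hypothetical function, not from any standard library) classifies an equation from its coefficients A, B, C, using the three model equations discussed above as checks:

```python
def classify_second_order(A, B, C):
    """Classify A u_xx + 2B u_xy + C u_yy + ... = G by the discriminant B^2 - AC."""
    disc = B * B - A * C
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

# Laplace equation u_xx + u_yy = 0: A = C = 1, B = 0
assert classify_second_order(1.0, 0.0, 1.0) == "elliptic"
# Heat equation k u_xx - u_t = 0 (t as second variable): A = k, B = C = 0
assert classify_second_order(1.0, 0.0, 0.0) == "parabolic"
# Wave equation with c = 2, written -c^2 u_xx + u_tt = 0 sign-flipped: A = -4, C = 1
assert classify_second_order(-4.0, 0.0, 1.0) == "hyperbolic"
```

Since the discriminant is invariant in sign under smooth changes of variables, the returned type is a genuine property of the equation, not of the coordinates.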

Systems of PDEs

Systems of partial differential equations (PDEs) consist of multiple interdependent equations coupling several unknown functions through their partial derivatives, commonly arising in the mathematical modeling of multi-component physical systems. Unlike scalar PDEs, these systems describe interactions among variables, such as in fluid dynamics or electromagnetism, where solutions to one equation influence others. A representative example is the Euler equations governing inviscid flow, formulated as a system of conservation laws for mass, momentum, and energy densities. First-order systems of PDEs, a key class, take the quasilinear form \sum_i A_i \frac{\partial \mathbf{u}}{\partial x_i} = \mathbf{f}(\mathbf{u}), where \mathbf{u} is a vector-valued unknown function, the A_i are matrices (possibly depending on \mathbf{x} and \mathbf{u}), and \mathbf{f} is a nonlinear source term. This structure encompasses many hyperbolic conservation laws, including the Euler equations in one dimension: \frac{\partial}{\partial t} \begin{pmatrix} \rho \\ \rho v_x \\ E \end{pmatrix} + \frac{\partial}{\partial x} \begin{pmatrix} \rho v_x \\ \rho v_x^2 + p \\ (E + p) v_x \end{pmatrix} = 0, with \rho the mass density, v_x the velocity, E the total energy, and the pressure p related to these via an equation of state; the matrices A_i derive from the flux terms. Characteristic surfaces in these systems are hypersurfaces across which discontinuities in solutions, such as shocks in fluids, can propagate without smoothing. For a hypersurface with normal covector \xi, these are defined by the condition \det\left( \sum_i \xi_i A_i \right) = 0 on the principal symbol of the system, identifying directions where the highest-order terms lose full rank and higher derivatives cannot be uniquely solved for. Hyperbolic systems are characterized by the principal symbol \sum_i \xi_i A_i having real and distinct eigenvalues for every real nonzero \xi, ensuring diagonalizability over the reals and the existence of well-posed initial value problems in Sobolev spaces.
This guarantees continuous dependence of solutions on initial data and finite propagation speeds along distinct characteristics, foundational for stability in evolutionary problems like the nonlinear Euler equations.
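The eigenvalue criterion is easy to check for a concrete case. The sketch below rewrites the 1D wave equation u_{tt} = c^2 u_{xx} as a first-order system w_t + A w_x = 0; the choice of variables w = (u_t, c u_x) and the value c = 2 are illustrative assumptions:

```python
import numpy as np

# Wave equation u_tt = c^2 u_xx as a first-order system: with
# w1 = u_t and w2 = c u_x, one gets w1_t - c w2_x = 0 and
# w2_t - c w1_x = 0, i.e. w_t + A w_x = 0 with the matrix below.
c = 2.0
A = np.array([[0.0, -c],
              [-c, 0.0]])

# Real, distinct eigenvalues +-c mean the characteristic condition
# det(tau I + xi A) = 0 has real roots tau/xi: strict hyperbolicity.
lams = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(lams, [-c, c])
```

The two eigenvalues are the two characteristic speeds of left- and right-moving waves, recovering d'Alembert's picture of the wave equation from the system viewpoint.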

Analytical Solutions

Separation of Variables

The separation of variables method is a classical technique for obtaining analytical solutions to certain linear partial differential equations (PDEs), particularly those with separable boundary conditions on rectangular or Cartesian domains. It posits that a solution can be expressed as a product of functions, each depending on a single independent variable; for a PDE in two variables, such as u(x,t), the ansatz is u(x,t) = X(x) T(t). Substituting this form into the PDE separates it into ordinary differential equations (ODEs) in x and t, provided the PDE and boundary conditions permit such a splitting. This approach yields a family of product solutions, which are then superposed using Fourier series to match initial or boundary data. The method traces its origins to Joseph Fourier's work on heat conduction, where it was employed to derive series expansions for temperature distributions. Consider the one-dimensional heat equation u_t = k u_{xx} on a finite interval 0 < x < L, with homogeneous Dirichlet boundary conditions u(0,t) = u(L,t) = 0 for t > 0 and an initial condition u(x,0) = f(x). Assuming u(x,t) = X(x) T(t) and substituting into the PDE gives X(x) T'(t) = k X''(x) T(t), which rearranges to \frac{T'(t)}{k T(t)} = \frac{X''(x)}{X(x)} = -\lambda, where \lambda is the separation constant. This yields two ODEs: the spatial equation X''(x) + \lambda X(x) = 0 with boundary conditions X(0) = X(L) = 0, forming a Sturm-Liouville eigenvalue problem, and the temporal equation T'(t) + k \lambda T(t) = 0. The eigenvalues are \lambda_n = \left( \frac{n \pi}{L} \right)^2 for n = 1, 2, 3, \dots, with corresponding eigenfunctions X_n(x) = \sin \left( \frac{n \pi x}{L} \right).
The time solutions are T_n(t) = e^{-k \lambda_n t}, producing product solutions u_n(x,t) = \sin \left( \frac{n \pi x}{L} \right) e^{-k \left( \frac{n \pi}{L} \right)^2 t}. The general solution is the superposition u(x,t) = \sum_{n=1}^\infty c_n \sin \left( \frac{n \pi x}{L} \right) e^{-k \left( \frac{n \pi}{L} \right)^2 t}, where the coefficients c_n are determined by the Fourier sine series of the initial data: c_n = \frac{2}{L} \int_0^L f(x) \sin \left( \frac{n \pi x}{L} \right) \, dx. This expansion converges to f(x) under suitable conditions on f, such as piecewise continuity. Similar applications arise for other boundary conditions, like Neumann conditions (insulated ends), yielding cosine eigenfunctions. The method extends to higher dimensions and other canonical PDEs, such as the wave and Laplace equations, by analogous separation in multiple variables. Despite its elegance, separation of variables is limited to problems with homogeneous boundary conditions and domains where coordinates separate, such as rectangles or cylinders; non-rectangular geometries or non-homogeneous boundaries require extensions like superposition of auxiliary solutions or integral transform methods. For non-homogeneous cases, one may solve for the steady-state part separately and apply the method to the transient homogeneous remainder, leveraging the superposition principle detailed elsewhere. The technique assumes the initial data admits a convergent eigenfunction expansion, restricting it to sufficiently regular functions.
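A truncated version of this series is straightforward to evaluate numerically. The sketch below (the mode count N and the midpoint quadrature for the coefficients are implementation choices, not from the text) checks the series against the exact single-mode solution:

```python
import numpy as np

def heat_series(f, L=1.0, k=1.0, N=50, M=2000):
    """Truncated separation-of-variables solution of u_t = k u_xx on (0, L)
    with u(0,t) = u(L,t) = 0, using N sine modes and midpoint quadrature."""
    n = np.arange(1, N + 1)
    xq = (np.arange(M) + 0.5) * L / M               # midpoint quadrature nodes
    S = np.sin(np.outer(n, np.pi * xq / L))         # shape (N, M)
    c = (2.0 / L) * (S * f(xq)).sum(axis=1) * (L / M)

    def u(x, t):
        modes = np.sin(np.outer(n, np.pi * np.atleast_1d(x) / L))
        return (c * np.exp(-k * (n * np.pi / L) ** 2 * t)) @ modes
    return u

# For f(x) = sin(pi x / L) only the n = 1 mode survives, so the series
# reduces to e^{-k pi^2 t / L^2} sin(pi x / L).
u = heat_series(lambda x: np.sin(np.pi * x))
x = np.array([0.25, 0.5, 0.75])
t = 0.1
exact = np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
assert np.allclose(u(x, t), exact, atol=1e-5)
```

The rapid decay factor e^{-k (n\pi/L)^2 t} shows why only a handful of modes matter for moderate t: high frequencies are damped almost immediately, the smoothing property of parabolic equations.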

Method of Characteristics

The method of characteristics provides a powerful approach for solving partial differential equations (PDEs), especially first-order and hyperbolic types, by transforming the PDE into a system of ordinary differential equations (ODEs) along curves called characteristics, along which information propagates in the solution. This technique, originally developed in the context of gas dynamics and wave propagation, exploits the fact that solutions to such PDEs remain constant or evolve predictably along these curves, allowing construction of the solution from initial or boundary data. For PDEs of the form F(x, u, p) = 0, where x represents the independent variables (often in \mathbb{R}^n), u(x) is the unknown function, and p = \nabla u denotes the gradient, the characteristics are parametrized by a variable s satisfying the coupled ODE system: \frac{dx}{ds} = F_p, \quad \frac{du}{ds} = p \cdot F_p, \quad \frac{dp}{ds} = -F_x - p F_u, where F_p, F_x, and F_u are partial derivatives of F with respect to p, x, and u, respectively. Integrating this system from an initial curve or surface yields the characteristic strips that generate the solution surface u(x), provided the characteristics do not intersect prematurely. A canonical illustration is the one-dimensional linear transport equation u_t + c u_x = 0, with constant speed c > 0 and initial condition u(x,0) = u_0(x). Here, the PDE takes the form F(t, x, u, p_t, p_x) = p_t + c p_x = 0, so the characteristic ODEs simplify to straight lines \frac{dt}{ds} = 1, \frac{dx}{ds} = c, \frac{du}{ds} = 0, implying x - c t = \xi (constant) and u constant along each line. The explicit solution is thus u(x,t) = u_0(x - c t), representing pure advection of the initial profile without distortion. This example highlights how characteristics trace the paths of constant solution values, enabling direct solution by tracing each characteristic back to the initial time. The method extends naturally to nonlinear scalar conservation laws of the form u_t + f(u)_x = 0, where f is a smooth flux function and u(x,0) = u_0(x).
The characteristics now satisfy \frac{dx}{ds} = f'(u), with u constant along them until potential intersections occur, so the speed f'(u) depends on the solution value itself. For convex f (e.g., the inviscid Burgers equation with f(u) = \frac{1}{2} u^2), if u_0 is decreasing, characteristics converge and cross after a finite time t^* = -\frac{1}{\min u_0'(x)}, forming a discontinuity or shock that invalidates the classical solution beyond t^*. Shock formation arises because faster characteristics (higher f'(u)) overtake slower ones, compressing the solution profile until a jump develops, necessitating weak solutions for physical consistency. For systems of first-order hyperbolic PDEs, such as \partial_t \mathbf{u} + A(\mathbf{u}) \partial_x \mathbf{u} = 0 in one spatial dimension, the method of characteristics generalizes by decomposing into characteristic fields via the eigenvectors of A. Riemann invariants, scalar functions r_i(\mathbf{u}) constant along the i-th characteristic family (with speeds given by eigenvalues \lambda_i), simplify the system into decoupled transport equations along each family, facilitating analysis of wave interactions and simple waves. This framework, rooted in Riemann's 19th-century work, is essential for solving Riemann problems in gas dynamics and traffic flow.
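Both behaviors can be checked numerically. In this sketch the initial profiles (a Gaussian for transport, -\tanh x for Burgers) are illustrative choices:

```python
import numpy as np

# Linear transport u_t + c u_x = 0: the solution is u0 carried along
# characteristics x - c t = const, i.e. u(x, t) = u0(x - c t).
c = 1.5
u0 = lambda x: np.exp(-x**2)
u = lambda x, t: u0(x - c * t)

x = np.linspace(-5, 5, 11)
# After time t = 2 the profile has moved by c*t = 3, otherwise unchanged.
assert np.allclose(u(x + c * 2.0, 2.0), u0(x))

# Inviscid Burgers u_t + u u_x = 0 with decreasing data: characteristics
# cross at t* = -1 / min u0'(x).  For u0(x) = -tanh(x), u0'(0) = -1, t* = 1.
xs = np.linspace(-5, 5, 100001)
du0 = np.gradient(-np.tanh(xs), xs)     # numerical derivative of u0
t_star = -1.0 / du0.min()
assert abs(t_star - 1.0) < 1e-3
```

The first assertion confirms distortion-free advection; the second recovers the shock-formation time t^* = 1 from the steepest point of the decreasing initial data.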

Integral Transforms

Integral transforms provide a powerful method for solving linear partial differential equations (PDEs) by converting them into ordinary differential equations (ODEs) or algebraic equations in a transformed domain, which are often simpler to solve. These techniques are particularly effective for problems on unbounded domains, where boundary conditions at infinity can be incorporated naturally through the transform properties. Common transforms include the Fourier, Laplace, and Hankel transforms, each suited to specific geometries and PDE types. The Fourier transform is widely used for PDEs on infinite spatial domains, such as the one-dimensional heat equation u_t = k u_{xx} for x \in (-\infty, \infty) and t > 0, with u(x, 0) = \phi(x). The Fourier transform of u(x, t) is defined as \hat{u}(\xi, t) = \int_{-\infty}^{\infty} u(x, t) e^{-i \xi x} \, dx, where \xi is the frequency variable. Applying the transform to the heat equation yields the ODE \frac{\partial \hat{u}}{\partial t}(\xi, t) = -k \xi^2 \hat{u}(\xi, t), with \hat{u}(\xi, 0) = \hat{\phi}(\xi). The solution in the transform domain is \hat{u}(\xi, t) = \hat{\phi}(\xi) e^{-k \xi^2 t}. Inverting the transform recovers the spatial solution u(x, t), which represents a convolution of the initial data with the Gaussian heat kernel. This approach leverages the Fourier transform's ability to diagonalize the Laplacian operator in space. For time-dependent PDEs on semi-infinite domains, such as t \geq 0, the Laplace transform is applied with respect to time, treating spatial variables as parameters. The Laplace transform of u(x, t) is \tilde{u}(x, s) = \int_0^{\infty} u(x, t) e^{-s t} \, dt, where s is a complex parameter with positive real part for convergence. For the heat equation u_t = k u_{xx} with initial condition u(x, 0) = \phi(x), the transform converts the PDE into s \tilde{u}(x, s) - \phi(x) = k \tilde{u}_{xx}(x, s), an ODE in x solvable subject to boundary conditions. The original solution is obtained by inverting the transform, typically using residue calculus for poles in the complex plane or lookup tables for standard forms.
This method excels at incorporating initial conditions directly into the transformed equation. The Hankel transform addresses PDEs with radial symmetry in cylindrical coordinates, such as the axisymmetric diffusion equation u_t = \kappa (u_{rr} + \frac{1}{r} u_r) for r > 0 and t > 0, with initial condition u(r, 0) = f(r). For the zero-order case, the Hankel transform is \tilde{u}(k, t) = \int_0^{\infty} r J_0(k r) u(r, t) \, dr, where J_0 is the Bessel function of the first kind of order zero. Transforming the PDE results in the ODE \frac{d \tilde{u}}{dt}(k, t) + \kappa k^2 \tilde{u}(k, t) = 0, solved as \tilde{u}(k, t) = \tilde{f}(k) e^{-\kappa k^2 t}. The inverse Hankel transform is u(r, t) = \int_0^{\infty} k J_0(k r) \tilde{u}(k, t) \, dk. For a point source at the origin, this yields the fundamental solution u(r, t) = \frac{1}{4 \pi \kappa t} e^{-r^2 / (4 \kappa t)}. The Hankel transform effectively handles the radial Laplacian in cylindrical geometry. Integral transforms offer key advantages for PDEs on infinite or semi-infinite domains, where traditional boundary value methods may fail due to the absence of finite boundaries. They simplify differential operators into multiplications by polynomials in the transform variable, facilitating explicit solutions via convolution and inversion formulas. Inversion theorems ensure uniqueness and recovery of the original solution; for instance, the Fourier inversion formula reconstructs functions under suitable decay conditions, while Laplace inversion via the Bromwich contour guarantees convergence for causal problems. These properties make transforms indispensable for initial-value problems in unbounded spaces.
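On a periodic domain the Fourier approach becomes a practical algorithm via the FFT. The following sketch (the grid size, diffusivity, and single-mode initial data are assumptions for illustration) multiplies each mode by e^{-k \xi^2 t} exactly as in the transform-domain solution above:

```python
import numpy as np

# Spectral solution of u_t = k u_xx on a periodic domain [0, 2*pi):
# each Fourier mode evolves as u_hat(xi, t) = u_hat(xi, 0) e^{-k xi^2 t}.
L, N, k, t = 2 * np.pi, 256, 0.5, 0.3
x = np.arange(N) * L / N
xi = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # angular wavenumbers

u0 = np.sin(3 * x)                            # initial data: a single mode
u_hat = np.fft.fft(u0) * np.exp(-k * xi**2 * t)
u = np.fft.ifft(u_hat).real

# For sin(3x) the exact solution is e^{-9 k t} sin(3x).
assert np.allclose(u, np.exp(-9 * k * t) * np.sin(3 * x), atol=1e-10)
```

Because the FFT diagonalizes the discrete Laplacian, the time evolution reduces to independent scalar multiplications per mode, the discrete analogue of the transform-domain ODE.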

Advanced Analytical Methods

Change of Variables

Change of variables is a fundamental technique in the analysis of partial differential equations (PDEs), allowing the transformation of coordinates to simplify the equation's form, reveal symmetries, or reduce the problem to ordinary differential equations (ODEs). By introducing new independent variables ξ and η as functions of the original variables x and y, the PDE can be rewritten in a more tractable manner, often eliminating cross-derivative terms or exploiting inherent symmetries of the domain or equation. This method is particularly useful for second-order PDEs, where the goal is frequently to achieve a canonical form that aligns with the equation's classification as elliptic, parabolic, or hyperbolic. For second-order linear PDEs of the form A u_{xx} + 2B u_{xy} + C u_{yy} + \text{lower-order terms} = 0, a change of variables ξ = ξ(x,y) and η = η(x,y) alters the coefficients through the chain rule. The new coefficients A', B', and C' in the transformed equation depend on the original coefficients and the partial derivatives of ξ and η; specifically, the transformation is chosen such that the cross-term coefficient B' vanishes, while A' and C' are adjusted to match the canonical forms (e.g., u_{\xi\xi} + u_{\eta\eta} = 0 for elliptic, or u_{\xi\xi} - u_{\eta\eta} = 0 for hyperbolic equations). This process involves solving a characteristic equation derived from the discriminant B² - AC to determine the directions of the transformation, ensuring the PDE type is preserved. In problems exhibiting self-similarity, such as diffusion processes, similarity variables reduce the PDE to ODEs by collapsing the independent variables into a single scaling parameter. For the one-dimensional heat equation u_t = k u_{xx}, the similarity variable η = x / √(kt) (or equivalently η = x / √t for k = 1) is introduced, assuming a form u(x,t) = f(η). Substituting via the chain rule yields an ODE for f(η), namely f'' + (η/2) f' = 0, whose solution provides the self-similar profile, like the Gaussian for initial delta distributions.
This approach highlights scale-invariant behaviors and is widely applied in diffusion theory and nonlinear wave propagation. For elliptic PDEs like \nabla^2 u = 0 in two dimensions, conformal mappings offer a powerful transformation that preserves angles and harmonicity. Representing the plane via the complex variable z = x + iy, a conformal map w = φ(z) = ξ + iη transforms the domain while keeping solutions harmonic, as the Laplacian is invariant under such analytic transformations: if u is harmonic in z, then u ∘ φ^{-1} is harmonic in w. This is exploited to map irregular boundaries (e.g., an airfoil) to simpler shapes like a unit disk, where series solutions or explicit formulas are available, facilitating boundary value problems in electrostatics and fluid flow. The general rule for transforming derivatives under a change of variables relies on the chain rule and the Jacobian matrix. For a function u(x,y) expressed as u(ξ(x,y), η(x,y)), the first partials transform as \frac{\partial u}{\partial x} = \frac{\partial u}{\partial \xi} \frac{\partial \xi}{\partial x} + \frac{\partial u}{\partial \eta} \frac{\partial \eta}{\partial x}, \quad \frac{\partial u}{\partial y} = \frac{\partial u}{\partial \xi} \frac{\partial \xi}{\partial y} + \frac{\partial u}{\partial \eta} \frac{\partial \eta}{\partial y}. Higher-order derivatives follow by repeated application, with the Jacobian determinant J = ∂(ξ,η)/∂(x,y) ensuring the transformation is locally invertible (nonzero J) and preserving the volume element in integrals, though for PDE simplification the focus is on the coefficient changes rather than the Jacobian factor itself.
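The similarity reduction can be verified symbolically. This SymPy sketch checks that the error-function profile f(\eta) = \mathrm{erf}(\eta/2), a standard solution of f'' + (\eta/2) f' = 0, turns the heat equation residual into zero:

```python
import sympy as sp

x, t, k = sp.symbols('x t k', positive=True)

# Similarity variable eta = x / sqrt(k t); the ansatz u = f(eta) with
# f(eta) = erf(eta / 2) is a classical self-similar solution.
eta = x / sp.sqrt(k * t)
u = sp.erf(eta / 2)

# Residual of u_t = k u_xx should vanish identically.
residual = sp.simplify(sp.diff(u, t) - k * sp.diff(u, x, 2))
assert residual == 0
```

Because u depends on x and t only through the single combination η, the two-variable PDE collapses to a one-variable ODE, which is exactly the point of the similarity method.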

Fundamental Solutions

In the context of linear partial differential equations (PDEs), a fundamental solution, also known as a free-space Green's function, is a particular solution defined on an unbounded domain that satisfies the PDE with a Dirac delta function as the inhomogeneous source term. For instance, for the Poisson equation \Delta G = \delta(\mathbf{x}), where \Delta is the Laplacian and \delta is the Dirac delta in \mathbb{R}^n, the fundamental solution provides the response to a unit point source at the origin. These solutions are singular at the source point and play a central role in representing general solutions to inhomogeneous PDEs via convolution. For the Poisson equation in three dimensions (n = 3), the fundamental solution is given by G(\mathbf{x}) = -\frac{1}{4\pi |\mathbf{x}|}, satisfying \Delta G = \delta(\mathbf{x}). This form arises from the Newtonian potential and decays inversely with distance, reflecting the harmonic nature of solutions away from the source. The heat equation, a parabolic PDE of the form \partial_t u - k \Delta u = f(\mathbf{x}, t), has a fundamental solution that captures diffusive propagation from an instantaneous point source. In n dimensions, the fundamental solution G(\mathbf{x}, t; \mathbf{y}, s) satisfies \partial_t G - k \Delta_{\mathbf{x}} G = \delta(\mathbf{x} - \mathbf{y}) \delta(t - s) for t > s, with G = 0 for t < s, and is explicitly G(\mathbf{x}, t; \mathbf{y}, s) = (4\pi k (t - s))^{-n/2} \exp\left( -\frac{|\mathbf{x} - \mathbf{y}|^2}{4k(t - s)} \right). This Gaussian kernel spreads symmetrically and broadens over time, embodying the smoothing effect of diffusion. For the wave equation, a hyperbolic PDE \partial_{tt} u - c^2 \Delta u = f(\mathbf{x}, t), the fundamental solution describes sharp propagation of disturbances. In three dimensions, it is G(\mathbf{x}, t) = \frac{\delta(|\mathbf{x}| - c t)}{4\pi |\mathbf{x}|} for t > 0, satisfying (\partial_{tt} - c^2 \Delta) G = \delta(\mathbf{x}) \delta(t).
This distribution is supported on the expanding sphere |\mathbf{x}| = c t, illustrating Huygens' principle, whereby disturbances propagate exactly at speed c without tails in odd spatial dimensions greater than one. Explicit forms of fundamental solutions are often derived using Fourier transforms, which diagonalize constant-coefficient linear operators, or similarity methods that exploit symmetries of the PDE. For the heat equation, similarity variables reduce the problem to an ODE yielding the Gaussian; for the wave equation in radial coordinates, spherical symmetry and the delta function enforce the surface propagation. These kernels can be convolved with forcing terms and superposed to solve inhomogeneous initial-boundary value problems, as detailed in the superposition principle section.
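The one-dimensional heat kernel can be verified directly; this SymPy sketch (the n = 1 specialization of the Gaussian formula above is chosen for tractability) checks the defining PDE and the unit-mass property:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
t, k = sp.symbols('t k', positive=True)

# 1D heat kernel G = (4 pi k t)^{-1/2} exp(-(x - y)^2 / (4 k t))
G = sp.exp(-(x - y)**2 / (4 * k * t)) / sp.sqrt(4 * sp.pi * k * t)

# G solves G_t = k G_xx away from t = 0 ...
assert sp.simplify(sp.diff(G, t) - k * sp.diff(G, x, 2)) == 0
# ... and carries unit mass for every t > 0.
assert sp.simplify(sp.integrate(G, (x, -sp.oo, sp.oo))) == 1
```

Unit mass for all t > 0, combined with the kernel concentrating at x = y as t → 0⁺, is precisely what makes G the response to a unit point source.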

Superposition Principle

The superposition principle is a cornerstone of the theory of linear partial differential equations (PDEs), enabling the combination of known solutions to form more complex ones. For a linear homogeneous PDE given by Lu = 0, where L is a linear partial differential operator acting on functions in an appropriate space, if u_1 and u_2 are solutions, then the linear combination \alpha u_1 + \beta u_2 is also a solution for any constants \alpha, \beta \in \mathbb{R} (or \mathbb{C}). This property arises directly from the linearity of the operator L, as L(\alpha u_1 + \beta u_2) = \alpha L u_1 + \beta L u_2 = 0. The principle extends to countable or continuous superpositions, such as integrals of solutions weighted by arbitrary functions, provided the resulting expression remains well-defined in the function space. For inhomogeneous linear PDEs of the form Lu = f, where f is a given forcing term, the principle facilitates the construction of the general solution as the sum of a particular solution u_p (satisfying L u_p = f) and the general solution u_h of the associated homogeneous equation L u_h = 0. Thus, u = u_p + u_h solves the inhomogeneous equation, and adding any homogeneous solution to a particular one preserves this property. This leverages linearity to reduce the problem to finding one particular solution and solving the homogeneous case separately. In the context of boundary value problems for linear PDEs, the superposition principle manifests through integral representations involving Green's functions. For a problem on a domain \Omega with boundary data f on \partial \Omega and a volume source g in \Omega, the solution can be expressed as u(\mathbf{x}) = \int_{\partial \Omega} G(\mathbf{x}, \mathbf{y}) f(\mathbf{y}) \, dS_{\mathbf{y}} + \int_{\Omega} G(\mathbf{x}, \mathbf{y}) g(\mathbf{y}) \, d\mathbf{y}, where G(\mathbf{x}, \mathbf{y}) is the Green's function satisfying the homogeneous PDE with a Dirac delta source and appropriate boundary conditions.
This form highlights the linearity, as the solution is a linear superposition (integral) of the boundary and source data weighted by the Green's kernel. The Green's function itself builds on fundamental solutions by incorporating boundary effects. For linear evolution PDEs, such as parabolic or hyperbolic equations governing time-dependent phenomena, the principle applies to the solution operator generated by the equation. Under suitable conditions, this operator forms a strongly continuous semigroup \{S(t)\}_{t \geq 0}, which is linear, so the solution to an initial value problem, u(t) = S(t) u_0, satisfies superposition in the initial data: if u_0 = \alpha u_{0,1} + \beta u_{0,2}, then u(t) = \alpha u_1(t) + \beta u_2(t), where u_i(t) = S(t) u_{0,i}. This linearity ensures that solutions evolve additively from linear combinations of initial conditions. However, the principle fails for nonlinear PDEs, where the operator includes terms like u \partial_x u or higher powers, preventing the sum of solutions from satisfying the equation due to cross terms that do not cancel. For instance, if u_1 and u_2 solve a nonlinear equation, L(u_1 + u_2) generally includes nonlinear contributions from both, yielding a different right-hand side. This non-additivity necessitates entirely different solution strategies for nonlinear problems.
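The linearity of the solution operator can be illustrated numerically; this sketch (assuming a periodic domain and a simple spectral implementation of the heat semigroup, both illustrative choices) confirms that S(t) respects superposition of initial data:

```python
import numpy as np

# Spectral solution operator S(t) for u_t = u_xx on a periodic domain:
# each Fourier mode is damped by exp(-k^2 t), so S(t) is linear.
N, L, t = 256, 2 * np.pi, 0.1
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi

def S(t, u0):
    """Apply the heat semigroup to initial data u0."""
    return np.real(np.fft.ifft(np.exp(-k**2 * t) * np.fft.fft(u0)))

u1, u2 = np.sin(x), np.cos(3 * x)
a, b = 2.0, -0.5

# Superposition: S(t)(a u1 + b u2) == a S(t)u1 + b S(t)u2
lhs = S(t, a * u1 + b * u2)
rhs = a * S(t, u1) + b * S(t, u2)
print(np.max(np.abs(lhs - rhs)))  # agreement to machine precision
```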

Numerical Solutions

Finite Difference Methods

Finite difference methods approximate solutions to partial differential equations (PDEs) by discretizing the continuous domain into a structured grid of points and replacing the partial derivatives with algebraic difference quotients based on function values at these grid points. This approach is particularly suited for regular geometries and leads to systems of algebraic equations that can be solved numerically. The core of the method involves approximations for partial derivatives. For a first derivative in the spatial direction, the forward difference is given by \frac{\partial u}{\partial x} \approx \frac{u_{i+1,j} - u_{i,j}}{h}, where h is the grid spacing, while the backward difference uses \frac{\partial u}{\partial x} \approx \frac{u_{i,j} - u_{i-1,j}}{h}. The central difference, which is second-order accurate, employs \frac{\partial u}{\partial x} \approx \frac{u_{i+1,j} - u_{i-1,j}}{2h}. For second-order derivatives, such as in diffusion terms, the standard approximation is \frac{\partial^2 u}{\partial x^2} \approx \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2}. These approximations are derived from Taylor series expansions and introduce truncation errors of order O(h) for forward/backward differences and O(h^2) for central differences. For parabolic PDEs like the one-dimensional heat equation \frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2}, finite difference methods yield time-stepping schemes. The implicit (backward Euler) scheme discretizes time with step \Delta t and approximates the time derivative as \frac{u_i^{n+1} - u_i^n}{\Delta t}, leading to the equation u_i^{n+1} - u_i^n = \frac{k \Delta t}{h^2} (u_{i-1}^{n+1} - 2 u_i^{n+1} + u_{i+1}^{n+1}). This results in a tridiagonal linear system at each time step, solvable efficiently using the tridiagonal matrix algorithm due to the banded structure. The scheme is unconditionally stable, allowing larger time steps compared to explicit methods.
In hyperbolic PDEs, such as the advection equation \frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = 0, explicit schemes require the Courant-Friedrichs-Lewy (CFL) condition for stability: \Delta t \leq \frac{h}{c}, ensuring that information does not propagate faster than the numerical domain of dependence. Violation of this condition leads to unstable oscillations or divergence. Error analysis in finite difference methods focuses on consistency, stability, and convergence. Consistency requires that the local truncation error approaches zero as the grid sizes h and \Delta t tend to zero, typically verified via Taylor expansions. Stability ensures that errors do not amplify unboundedly over time steps. For linear problems on uniform grids, the Lax equivalence theorem states that a consistent scheme converges if and only if it is stable.
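A minimal implementation of the backward Euler scheme described above, with the tridiagonal system solved by the Thomas algorithm, might look as follows (illustrative grid and time step; the exact separable solution is used only to measure the error):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-diag a, diag b, super-diag c)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    out = np.empty(n)
    out[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        out[i] = dp[i] - cp[i] * out[i + 1]
    return out

# Backward Euler for u_t = k u_xx on [0,1] with u = 0 at both ends.
k, h, dt, steps = 1.0, 1.0 / 50, 0.01, 20
x = np.linspace(0, 1, 51)
u = np.sin(np.pi * x)                  # initial condition
r = k * dt / h**2
n = len(x) - 2                         # interior unknowns
lo = np.full(n, -r); lo[0] = 0.0       # sub-diagonal (first entry unused)
hi = np.full(n, -r); hi[-1] = 0.0      # super-diagonal (last entry unused)
dg = np.full(n, 1 + 2 * r)             # main diagonal
for _ in range(steps):
    u[1:-1] = thomas(lo, dg, hi, u[1:-1])

# Exact separable solution: sin(pi x) exp(-pi^2 k t)
exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * k * dt * steps)
print(np.max(np.abs(u - exact)))       # O(dt) + O(h^2) error
```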

Finite Element Methods

The finite element method (FEM) is a numerical technique for approximating solutions to partial differential equations (PDEs) by employing variational principles to convert the strong form of the PDE into a weak form, which is then discretized using piecewise polynomial basis functions over a mesh of the domain. This approach is particularly effective for elliptic PDEs, such as the Poisson equation -Δu = f in a domain Ω with homogeneous Dirichlet boundary conditions u = 0 on ∂Ω, where the weak form requires finding u in the Sobolev space H^1_0(Ω) such that ∫_Ω ∇u · ∇v dx = ∫_Ω f v dx for all test functions v in H^1_0(Ω). The weak form arises from multiplying the PDE by a test function v and integrating by parts, transferring derivatives from u to v and enabling the use of less regular solutions that may not satisfy the strong form pointwise. In the Galerkin method, the core of standard FEM, the solution is approximated by u_h = ∑_{j=1}^N u_j φ_j, where {φ_j} are basis functions spanning a finite-dimensional subspace V_h ⊂ H^1_0(Ω), and the coefficients u_j are determined by enforcing the weak form on test functions from the same space, yielding the discrete system A \mathbf{u} = \mathbf{b} with stiffness matrix entries A_{ij} = ∫_Ω ∇φ_i · ∇φ_j dx and load vector b_i = ∫_Ω f φ_i dx. This Galerkin projection minimizes the error in a variational sense and ensures stability and convergence under suitable assumptions on the mesh and polynomial degrees, as analyzed in foundational works. The method's flexibility stems from choosing local basis functions, typically polynomials of low degree, which allow efficient assembly of the global system via element-wise computations. Finite elements are defined on a triangulation of the domain, with common choices in two dimensions being linear triangular elements where φ_j are piecewise linear functions continuous across element boundaries and zero outside their supporting elements.
For domains with curved boundaries, isoparametric mappings transform a reference element (e.g., the standard triangle with vertices at (0,0), (1,0), (0,1)) to the physical element using the same basis functions for both geometry and solution approximation, enabling accurate representation without excessive distortion. These mappings preserve the polynomial degree and facilitate numerical quadrature for integrals over irregular shapes. To control approximation errors, adaptive strategies refine the mesh or basis selectively: h-refinement reduces element sizes in regions of high error (e.g., via local subdivision), improving resolution without increasing polynomial complexity, while p-refinement elevates the polynomial degree on a fixed mesh, achieving exponential convergence for smooth solutions. Error estimators, often based on residuals or a posteriori analysis, guide these refinements to balance computational cost and accuracy, as demonstrated in theoretical frameworks for elliptic problems.
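In one dimension the Galerkin construction reduces to a small, transparent computation; this sketch (hat basis functions and nodal quadrature for the load vector, both illustrative choices) assembles and solves the discrete system for -u'' = π² sin(πx) with homogeneous Dirichlet conditions:

```python
import numpy as np

# Galerkin FEM with piecewise-linear "hat" functions for
# -u'' = f on (0,1), u(0) = u(1) = 0, with f(x) = pi^2 sin(pi x),
# whose exact solution is u(x) = sin(pi x).
N = 32                          # number of elements
h = 1.0 / N
nodes = np.linspace(0, 1, N + 1)
n = N - 1                       # interior unknowns

# Stiffness matrix A_ij = int phi_i' phi_j' dx = tridiag(-1, 2, -1)/h
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1)) / h

# Load vector b_i = int f phi_i dx, approximated by the nodal
# quadrature b_i ~ h f(x_i) (adequate for smooth f).
f = lambda x: np.pi**2 * np.sin(np.pi * x)
b = h * f(nodes[1:-1])

u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(A, b)
exact = np.sin(np.pi * nodes)
print(np.max(np.abs(u - exact)))  # O(h^2) error for linear elements
```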

Finite Volume Methods

Finite volume methods provide a robust framework for numerically solving partial differential equations, particularly conservation laws, by enforcing integral conservation properties over discrete control volumes. These methods discretize the spatial domain into a mesh of finite volumes, typically unstructured or structured grids, and approximate the conservation law by integrating the PDE over each volume. This approach ensures that the numerical scheme mimics the physical conservation of quantities like mass, momentum, and energy, making it especially suitable for problems involving discontinuities such as shocks. In the cell-centered formulation, the average value of the solution u is stored at the center of each control volume V_i. For a conservation law of the form \partial_t u + \nabla \cdot \mathbf{f}(u) = 0, integration over the volume yields \int_{V_i} \partial_t u \, dV + \int_{\partial V_i} \mathbf{f}(u) \cdot \mathbf{n} \, dS = 0, where \partial V_i denotes the boundary of the cell and \mathbf{n} is the outward unit normal. Approximating the volume integral as |V_i| \frac{du_i}{dt} and the surface integral as a sum of fluxes across cell interfaces, the semi-discrete form becomes |V_i| \frac{du_i}{dt} + \sum_{faces} \mathbf{f} \cdot \mathbf{n} \, A = 0, with A the face area. Fluxes at interfaces are computed using reconstructed states from neighboring cells, ensuring upwind biasing for stability in hyperbolic systems. A foundational example is the Godunov method, applied to the Euler equations of compressible fluid dynamics, which describe hyperbolic conservation laws for density \rho, momentum \rho \mathbf{v}, and energy E. This first-order scheme solves local Riemann problems at each cell interface to determine time-exact fluxes, capturing wave propagation accurately even across discontinuities. The Riemann solver approximates the solution to the initial-value problem formed by left and right states at the interface, providing upwind fluxes that respect the direction of information propagation.
Developed originally for hydrodynamic equations, this method laid the groundwork for shock-capturing in computational fluid dynamics (CFD). To achieve higher-order accuracy while maintaining monotonicity, the MUSCL (Monotonic Upstream-centered Scheme for Conservation Laws) reconstruction extends Godunov-type methods by piecewise linear interpolation of the solution within cells. Slopes are estimated using differences between cell averages and limited by nonlinear functions, such as the minmod limiter, to prevent spurious oscillations near discontinuities. For smooth solutions, this yields second-order spatial accuracy, with the scheme updating cell averages via flux differences over a time step constrained by the CFL condition. MUSCL has become a cornerstone for high-resolution simulations in CFD, balancing accuracy and robustness. Finite volume methods excel in applications to conservation laws in CFD, such as simulating inviscid flows around airfoils or blast waves, where they inherently satisfy discrete conservation and handle complex geometries via unstructured meshes. Their flux-based discretization provides superior resolution compared to pointwise approximations, making them indispensable for problems involving shocks or supersonic flows.
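The Godunov approach for a scalar law can be sketched compactly; the following example (Burgers' equation with the exact scalar Riemann flux, illustrative grid parameters) propagates a shock at the Rankine-Hugoniot speed 1/2:

```python
import numpy as np

def godunov_flux(ul, ur):
    """Exact-Riemann Godunov flux for Burgers' equation, f(u) = u^2/2."""
    f = lambda u: 0.5 * u * u
    return np.where(ul > ur,
                    # shock: upwind by the shock speed (ul + ur)/2
                    np.where((ul + ur) / 2 > 0, f(ul), f(ur)),
                    # rarefaction: flux at the sonic point if it opens up
                    np.where(ul > 0, f(ul),
                             np.where(ur < 0, f(ur), 0.0)))

# Riemann data: u = 1 for x < 0, u = 0 for x > 0 -> shock at speed 1/2.
N, dx, dt, T = 400, 0.01, 0.004, 1.0     # CFL = max|u| dt/dx = 0.4
x = (np.arange(N) + 0.5) * dx - 2.0      # cell centers on [-2, 2]
u = np.where(x < 0, 1.0, 0.0)
for _ in range(int(T / dt)):
    F = godunov_flux(u[:-1], u[1:])      # interface fluxes
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])

shock_x = x[np.argmax(u < 0.5)]          # approximate shock location
print(shock_x)                           # close to T/2 = 0.5
```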

Modern and Specialized Approaches

Neural Network Methods

Physics-informed neural networks (PINNs) represent a prominent machine learning approach for solving partial differential equations (PDEs) by embedding the governing physics directly into the neural network training process. In this framework, a neural network parameterized by θ, denoted as u_θ(x, t), approximates the solution u(x, t) to a PDE. The network is trained to minimize a composite loss function that enforces both the PDE residual and the initial/boundary conditions. Specifically, the PDE residual loss is computed as the integral over the domain Ω of the squared norm of the PDE operator applied to the network output, ∫_Ω |𝒩[u_θ]|^2 dΩ, where 𝒩 represents the differential operator of the PDE, augmented by mean squared error terms for boundary and initial data. A canonical example is the application of PINNs to the one-dimensional viscous Burgers' equation, ∂u/∂t + u ∂u/∂x = ν ∂²u/∂x², a nonlinear PDE modeling convection-diffusion. Here, the loss function combines the mean squared error of the PDE residual at collocation points with the mean squared error of the predicted solution against observed initial and boundary data, enabling the network to learn smooth solutions even with sparse data. This approach leverages automatic differentiation to compute derivatives, avoiding explicit discretization of the PDE. One key advantage of PINNs is their mesh-free nature, which allows them to handle irregular or complex geometries without the need for grid generation, unlike traditional numerical methods. This flexibility has been particularly useful in forward and inverse problems involving high-dimensional or geometrically intricate domains. Subsequently, advancements such as conservative PINNs (cPINNs) have addressed limitations in preserving physical properties like monotonicity in conservation laws by incorporating discrete conservation constraints into the loss function, improving accuracy for nonlinear PDEs. As of 2025, further developments include kernel packet accelerated PINNs (KP-PINNs) for enhanced computational efficiency.
Despite these benefits, PINNs face significant challenges, including training instability due to unbalanced loss terms between the PDE residual and the data constraints, which can lead to poor convergence on stiff or multi-scale problems. Scaling to high-dimensional PDEs remains limited by the curse of dimensionality and computational demands, often requiring careful hyperparameter tuning and advanced optimization techniques to mitigate spectral biases in neural network approximations.
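The structure of the composite loss can be illustrated without a training loop; in this sketch a fixed trial function stands in for the network u_θ and finite differences stand in for automatic differentiation (both substitutions are for illustration only, as is the choice of trial function):

```python
import numpy as np

nu = 0.1
def u_trial(x, t):
    """Stand-in for a trained network u_theta(x, t) (hypothetical)."""
    return -np.tanh(x / (2 * nu)) * np.exp(-t)  # arbitrary smooth ansatz

def pde_residual(x, t, eps=1e-4):
    """Burgers residual u_t + u u_x - nu u_xx via central differences
    (a real PINN would use automatic differentiation instead)."""
    u = u_trial(x, t)
    u_t = (u_trial(x, t + eps) - u_trial(x, t - eps)) / (2 * eps)
    u_x = (u_trial(x + eps, t) - u_trial(x - eps, t)) / (2 * eps)
    u_xx = (u_trial(x + eps, t) - 2 * u + u_trial(x - eps, t)) / eps**2
    return u_t + u * u_x - nu * u_xx

rng = np.random.default_rng(0)
xc = rng.uniform(-1, 1, 1000)          # interior collocation points
tc = rng.uniform(0, 1, 1000)
x0 = rng.uniform(-1, 1, 100)           # initial-condition points

loss_pde = np.mean(pde_residual(xc, tc) ** 2)   # physics term
loss_ic = np.mean((u_trial(x0, 0.0) - (-np.tanh(x0 / (2 * nu)))) ** 2)
loss = loss_pde + loss_ic              # composite PINN-style loss
print(loss_pde, loss_ic)
```

A training loop would now adjust the network parameters to drive `loss` toward zero; here the trial function matches the initial condition exactly but leaves a nonzero PDE residual.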

Methods for Nonlinear PDEs

Nonlinear partial differential equations (PDEs) often lack closed-form solutions, necessitating specialized analytical and semi-analytical techniques that exploit the structure of the nonlinearity, such as small parameters, symmetries, or asymptotic behaviors. These methods extend beyond linear approximations by systematically incorporating nonlinear effects through expansions, deformations, or reductions, enabling approximate solutions that capture essential dynamics like wave propagation or critical scaling. Unlike numerical schemes, they provide insight into qualitative features and scaling laws, particularly for problems in fluid dynamics, reaction-diffusion systems, and nonlinear waves. Perturbation methods are foundational for addressing nonlinear PDEs where a small parameter, such as ε, modulates the nonlinearity or higher-order terms, allowing solutions to be expanded as a series in powers of ε. In regular perturbation, the solution is assumed to be smooth and expandable as u ≈ u₀ + ε u₁ + ε² u₂ + ..., where u₀ satisfies the reduced equation obtained by setting ε = 0, and higher-order terms correct for the perturbation. For instance, consider the steady nonlinear advection-diffusion equation ε u_{xx} + u u_x = 0, where the zeroth-order approximation u₀ is linear (u₀_x = 0, yielding a constant), and the first-order correction u₁ solves a linearized equation derived by substituting the expansion and collecting terms at O(ε). This approach works well when the small parameter does not cause rapid variations, as verified in asymptotic analyses of viscous flows. Singular perturbation methods address cases where the small parameter leads to boundary layers or internal shocks, requiring rescaling in specific regions to resolve steep gradients. For the same equation ε u_{xx} + u u_x = 0 with boundary conditions u(0) = 1 and u(1) = 0, the outer solution (away from boundaries) is approximately constant, but near x = 0, a boundary layer emerges, rescaled as ξ = x/ε, transforming the PDE into a nonlinear ODE solvable by matching to the outer region.
This technique, pivotal in boundary layer theory for nonlinear convection-diffusion problems, reveals how nonlinearity amplifies thin transition zones, as demonstrated in studies of high-Reynolds-number flows. The homotopy analysis method (HAM) provides a flexible framework for nonlinear PDEs by constructing a continuous deformation (homotopy) from a linear auxiliary problem to the full nonlinear equation, controlled by an embedding parameter q ∈ [0,1]. Starting with an initial guess u₀ and a linear operator L, the zeroth-order deformation equation is (1-q) L(φ - u₀) = q ħ N(φ), where N is the nonlinear operator; Taylor expansion in q yields higher-order terms via recursive linear solves, with convergence optimized by the auxiliary parameter ħ. This method excels for strongly nonlinear problems without small parameters, such as the nonlinear Schrödinger equation, yielding series solutions valid over wide domains. Developed by Liao, HAM has been applied to viscous flows and quantum mechanics, offering convergence control absent in traditional perturbation expansions. The traveling wave ansatz reduces time-dependent nonlinear PDEs to ordinary differential equations (ODEs) by assuming a form u(x,t) = f(ξ), where ξ = x - c t and c is the wave speed to be determined. Substituting into the PDE, such as the Fisher-Kolmogorov equation u_t = u_{xx} + u(1 - u), yields f'' + c f' + f(1 - f) = 0, often integrable via phase-plane analysis or multiplication by f' to find c from boundary conditions (e.g., f(±∞) = 0 or 1). This approach uncovers propagating fronts in reaction-diffusion systems, like invasion waves in population dynamics, where c ≥ 2 ensures monotonic profiles. Widely used since the early twentieth century for solitons and shocks, it simplifies analysis of translationally invariant nonlinearities. The renormalization group (RG) method adapts field-theoretic techniques to nonlinear PDEs, particularly for problems where solutions exhibit scaling near singularities or long times.
By rescaling variables and integrating out short-scale modes iteratively, RG derives flow equations for effective parameters, revealing fixed points that dictate asymptotic behavior, such as power-law decay in nonlinear diffusion equations. For the porous-medium-type equation ∂_t u = ∇ · (u^m ∇u), RG analysis shows self-similarity exponents depending on m, matching Barenblatt solutions for m > 1. Originating from Wilson's work on critical phenomena, this method, refined by Chen, Goldenfeld, and Oono for PDE asymptotics, bypasses divergent series to capture universal scaling behavior. Extensions of these analytical methods to machine learning frameworks, such as physics-informed architectures, have emerged for semi-analytical solving of nonlinear PDEs, though detailed implementations are covered elsewhere.
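The traveling wave reduction described above can be verified directly; this sketch checks the classical Ablowitz-Zeppetella exact front of the Fisher-Kolmogorov equation against the reduced ODE using finite differences (step sizes are illustrative):

```python
import numpy as np

# Traveling-wave ansatz for the Fisher-KPP equation u_t = u_xx + u(1-u):
# u(x,t) = f(x - ct) reduces the PDE to f'' + c f' + f(1-f) = 0.
# The Ablowitz-Zeppetella exact front f = (1 + e^{xi/sqrt(6)})^{-2}
# travels at speed c = 5/sqrt(6); substitute it into the reduced ODE.
c = 5 / np.sqrt(6)
f = lambda xi: (1 + np.exp(xi / np.sqrt(6))) ** (-2)

xi = np.linspace(-10, 10, 2001)
eps = 1e-5
fp = (f(xi + eps) - f(xi - eps)) / (2 * eps)               # f'
fpp = (f(xi + eps) - 2 * f(xi) + f(xi - eps)) / eps**2     # f''
residual = fpp + c * fp + f(xi) * (1 - f(xi))
print(np.max(np.abs(residual)))   # only finite-difference error
```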

Lie Group Methods

Lie group methods provide a systematic framework for identifying symmetries of partial differential equations (PDEs), enabling the construction of exact solutions by exploiting these invariances. Developed from the work of Sophus Lie in the late nineteenth century and extended to PDEs throughout the twentieth century, these techniques reveal underlying group structures that leave the equation unchanged under specific transformations, facilitating reductions in the complexity of the problem. Lie point symmetries are local transformations of the independent variables x and dependent variable u(x) that map solutions of the PDE to other solutions. These are parameterized infinitesimally as x' = x + \varepsilon \xi(x, u) + O(\varepsilon^2), \quad u' = u + \varepsilon \eta(x, u) + O(\varepsilon^2), where \varepsilon is a small parameter, and \xi and \eta are the infinitesimal generators determining the symmetry. The associated vector field is v = \xi \frac{\partial}{\partial x} + \eta \frac{\partial}{\partial u}, and the group action preserves the PDE if the prolonged vector field annihilates the equation on its solution manifold. To find these symmetries, the second prolongation \operatorname{pr}^{(2)} v of v is substituted into the PDE, yielding the determining equations, a system of linear partial differential equations for \xi and \eta. The prolongation formula extends the action to derivatives, with components such as \phi^{xx} = D_x^2 (\eta - \xi u_x) + \xi u_{xxx}, ensuring the symmetry condition \operatorname{pr}^{(2)} v (F) = 0 whenever F(x, u, u_x, u_{xx}, u_t, \dots) = 0, where F defines the PDE. Solving this system classifies all point symmetries, often revealing a finite-dimensional Lie algebra. A canonical example is the Korteweg–de Vries (KdV) equation, u_t + u u_x + u_{xxx} = 0, which models shallow water waves and soliton dynamics.
Its Lie point symmetries include generators such as v_1 = \partial_x (space translation), v_2 = \partial_t (time translation), and the Galilean boost v_3 = t \partial_x + \partial_u, together with a scaling symmetry, forming a Lie algebra spanned by four basis elements with arbitrary constants c_1, c_2, c_3, c_4. The determining equations, derived from the invariance condition on the prolonged action, yield these generators explicitly; the point symmetries suffice for basic reductions, although the KdV equation also admits infinitely many higher generalized symmetries. These symmetries enable the generation of traveling wave solutions by invariant reductions. For instance, the combined translation symmetry c v_1 + v_2 (with speed c) leads to the ansatz u(x, t) = f(x - c t), reducing the KdV equation to the ordinary differential equation (ODE) -c f' + f f' + f''' = 0. Integrating once gives a second-order ODE whose solutions include the soliton profile u(x, t) = 3c \, \operatorname{sech}^2 \left( \frac{\sqrt{c}}{2} (x - c t) + \varepsilon \right), where \varepsilon is a phase shift, illustrating how group orbits yield explicit exact solutions. Invariant solutions under the full symmetry group are found by solving along group orbits, reducing the number of independent variables by the dimension of the orbit. For a one-parameter subgroup, this involves the characteristic equations \frac{dx}{\xi} = \frac{du}{\eta} to find invariants y(x, u) and w(x, u), transforming the original PDE into a lower-dimensional system, such as an ODE, solvable by quadrature or standard methods. This approach has been pivotal in uncovering similarity solutions and exact forms for nonlinear PDEs beyond perturbative techniques.
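The soliton obtained from the traveling wave reduction can be checked numerically; the following sketch substitutes u = 3c sech²((√c/2)(x - ct)) into the KdV equation via finite differences (illustrative step sizes):

```python
import numpy as np

# Verify that u(x,t) = 3c sech^2((sqrt(c)/2)(x - ct)) satisfies
# u_t + u u_x + u_xxx = 0, up to finite-difference error.
c = 1.0
sech = lambda z: 1 / np.cosh(z)
u = lambda x, t: 3 * c * sech(np.sqrt(c) / 2 * (x - c * t)) ** 2

x = np.linspace(-10, 10, 801)
t0, ht, hx = 0.3, 1e-5, 1e-3
u_t = (u(x, t0 + ht) - u(x, t0 - ht)) / (2 * ht)
u_x = (u(x + hx, t0) - u(x - hx, t0)) / (2 * hx)
u_xxx = (u(x + 2 * hx, t0) - 2 * u(x + hx, t0)
         + 2 * u(x - hx, t0) - u(x - 2 * hx, t0)) / (2 * hx**3)
residual = u_t + u(x, t0) * u_x + u_xxx
print(np.max(np.abs(residual)))   # small finite-difference error
```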

Theoretical Foundations

Weak Solutions

Weak solutions extend the concept of classical solutions to partial differential equations (PDEs) where the solution lacks the smoothness required for pointwise satisfaction of the equation. Formalized within the framework of distribution theory developed by Laurent Schwartz, weak solutions are defined in an integral sense against smooth test functions with compact support, allowing for discontinuities or singularities that arise in physical models like shock formation or wave propagation. This approach ensures that the PDE holds in a generalized distributional meaning, capturing essential physical behaviors without demanding excessive regularity. For a general linear PDE of the form Lu = f, where L is a linear differential operator with constant or variable coefficients and f is a given function or distribution, a weak solution u satisfies the identity \int_\Omega u (L^* v) \, dx = \int_\Omega f v \, dx for all test functions v \in C_c^\infty(\Omega), the space of smooth functions with compact support in the domain \Omega. Here, L^* denotes the formal adjoint of L, obtained by integration by parts while ignoring boundary terms due to the compact support of v. This formulation transfers the differentiability requirements from u to the test function v, enabling solutions in broader function spaces. If u is sufficiently smooth, it recovers the classical solution by reversing the integration by parts. In the context of hyperbolic conservation laws, such as the scalar equation u_t + f(u)_x = 0, weak solutions are particularly relevant due to the formation of shocks. A function u is a weak solution if it satisfies \int_0^\infty \int_{-\infty}^\infty \left( u \phi_t + f(u) \phi_x \right) dx \, dt = -\int_{-\infty}^\infty u(x,0) \phi(x,0) \, dx for all test functions \phi \in C_c^\infty(\mathbb{R} \times [0,\infty)). This distributional form admits discontinuous solutions, but multiple weak solutions may exist for the same initial data, necessitating additional criteria for uniqueness. To select physically relevant weak solutions featuring shocks, entropy conditions are imposed.
For scalar hyperbolic conservation laws, Oleinik's entropy condition requires that across any discontinuity, the solution satisfies a one-sided inequality on the chords of the flux, ensuring the shock speed aligns with physical dissipation limits like vanishing viscosity. Specifically, for a convex flux f, if u jumps from u_- to u_+ with u_- > u_+, then \frac{f(u_-) - f(v)}{u_- - v} \geq \frac{f(u_-) - f(u_+)}{u_- - u_+} \geq \frac{f(v) - f(u_+)}{v - u_+} for all v between u_- and u_+, preventing non-physical expansion shocks. This condition guarantees uniqueness and stability for the Cauchy problem. Weak solutions are often sought in Sobolev spaces, which provide a natural setting for functions with integrable weak derivatives. The Sobolev space H^1(\Omega) consists of functions u \in L^2(\Omega) whose weak partial derivatives \nabla u also belong to L^2(\Omega), equipped with the norm \|u\|_{H^1} = \left( \|u\|_{L^2}^2 + \|\nabla u\|_{L^2}^2 \right)^{1/2}. These spaces, originally developed by Sergei Sobolev, enable the analysis of boundary value problems and variational formulations while accommodating limited regularity.
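For Burgers' flux f(u) = u²/2, the chord form of Oleinik's condition can be checked directly; this sketch (jump from u_- = 1 down to u_+ = 0, an illustrative choice) computes the Rankine-Hugoniot speed and verifies the chord inequalities:

```python
import numpy as np

# Burgers' flux and a compressive jump u_- = 1 > u_+ = 0.
f = lambda u: 0.5 * u**2
um, up = 1.0, 0.0

# Rankine-Hugoniot shock speed: s = [f] / [u] = 1/2 here.
s = (f(um) - f(up)) / (um - up)

# Oleinik chord condition for convex flux: chords from u_- to v lie
# above s, chords from v to u_+ lie below s, for all v in between.
vs = np.linspace(up + 1e-6, um - 1e-6, 1000)
left_chords = (f(um) - f(vs)) / (um - vs)    # equals (1+v)/2 >= 1/2
right_chords = (f(vs) - f(up)) / (vs - up)   # equals v/2 <= 1/2
print(s, left_chords.min() >= s >= right_chords.max())
```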

Well-Posedness

In the theory of partial differential equations (PDEs), well-posedness refers to the properties that ensure a meaningful solution to an initial value or boundary value problem. Introduced by Jacques Hadamard in 1902, the framework requires three conditions: existence of a solution, uniqueness of that solution, and continuous dependence of the solution on the initial or boundary data in an appropriate topology. These criteria distinguish well-posed problems from ill-posed ones, where small perturbations in the data can lead to arbitrarily large changes in the solution. For parabolic PDEs, such as the heat equation, energy methods provide a key tool to establish well-posedness, particularly uniqueness and stability. Consider the initial-boundary value problem for a linear parabolic equation like u_t - \Delta u = f in a cylinder \Omega \times (0,T), with initial data u(0) = u_0. Multiplying the equation by u and integrating over \Omega yields an energy estimate of the form \frac{1}{2} \frac{d}{dt} \int_\Omega u^2 \, dx + \int_\Omega |\nabla u|^2 \, dx = \int_\Omega f u \, dx \leq \frac{1}{2} \int_\Omega u^2 \, dx + \frac{1}{2} \int_\Omega f^2 \, dx, which implies \frac{d}{dt} \int_\Omega u^2 \, dx \leq \int_\Omega u^2 \, dx + \int_\Omega f^2 \, dx, where the constant is 1 in this case but generally depends on the domain and coefficients. Applying Grönwall's inequality to this differential inequality shows that the L^2-norm of the solution grows at most exponentially in time, controlled by the data u_0 and f, thus proving continuous dependence. Existence can be obtained via Galerkin approximations or semigroup theory, completing the well-posedness in suitable Sobolev spaces. For elliptic and parabolic PDEs, the maximum principle offers another approach to uniqueness, typically under sign conditions on the coefficients. For a linear elliptic operator Lu = -\sum a_{ij} \partial_i \partial_j u + \sum b_i \partial_i u + c u = f in a bounded domain with Dirichlet conditions, the weak maximum principle states that if f \leq 0 and c \leq 0, then \sup_\Omega u \leq \sup_{\partial \Omega} u. Applying this to u and -u implies u \equiv 0 if homogeneous data are given, yielding uniqueness.
A similar principle holds for parabolic equations, where the maximum is attained on the parabolic boundary (initial time or spatial boundary), ensuring uniqueness for the forward problem. These principles extend to weak solutions in H^1 spaces, as defined in related formulations. An archetypal ill-posed problem is the backward heat equation, u_t + \Delta u = 0 for t < 0, seeking the initial state at t = 0 from terminal data at some negative time. While solutions exist and are unique in spaces of analytic functions, they fail continuous dependence: high-frequency perturbations in the terminal data amplify exponentially as t \to 0^-, violating Hadamard's stability criterion. For instance, adding a small high-frequency perturbation to smooth terminal data produces wildly oscillating solutions near t = 0, illustrating the inherent instability. This example underscores why forward parabolic problems are well-posed, while their time-reversed counterparts are not.
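The exponential amplification behind this ill-posedness is easy to quantify; the following sketch compares the damping factor of the forward problem with the growth factor of the backward problem for a few Fourier modes (the horizon T and perturbation size are illustrative):

```python
import numpy as np

# On a periodic domain, the mode sin(kx) evolves under u_t = u_xx as
# exp(-k^2 t) forward in time but exp(+k^2 T) when solved backward
# over a time T; a terminal perturbation of size delta is amplified
# by exp(k^2 T), violating continuous dependence on the data.
T, delta = 0.1, 1e-8
for k in (1, 10, 50):
    forward_factor = np.exp(-k**2 * T)        # well-posed: damping
    backward_error = delta * np.exp(k**2 * T) # ill-posed: blow-up
    print(k, forward_factor, backward_error)
```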

Regularity Theory

Regularity theory for partial differential equations (PDEs) investigates how the smoothness of solutions can be enhanced from initial weak or distributional assumptions to higher classes of regularity, such as Hölder or Sobolev spaces, depending on the type of PDE and the regularity of coefficients and data. This theory is essential for understanding the qualitative behavior of solutions and is built upon a priori estimates that control norms of derivatives. For linear PDEs with smooth data, solutions often inherit infinite differentiability, but the focus here is on quantitative improvements in specific function spaces. In the elliptic case, Schauder estimates provide key interior regularity results for solutions to uniformly elliptic equations of the form Lu = f, where L is a second-order linear operator with Hölder continuous coefficients. These estimates assert that if the coefficients of L and the right-hand side f belong to C^{\alpha}(\Omega) for some 0 < \alpha < 1, then the solution u satisfies u \in C^{2,\alpha}(\Omega') for any compact subset \Omega' \subset \subset \Omega, with the C^{2,\alpha}-norm bounded by a constant times the C^{\alpha}-norms of the data and a lower-order norm of u. This result, originally established for the Dirichlet problem, extends to general boundary value problems under suitable conditions and forms the foundation for higher regularity via iteration. The bootstrap argument is an iterative technique commonly applied to refine regularity in elliptic and parabolic PDEs, starting from a weak solution in a low-order space like H^1 or L^2 and successively applying embedding theorems and regularity estimates to gain higher smoothness. For instance, in the case of a linear elliptic equation with smooth coefficients, an initial L^2-solution can be bootstrapped to H^1, then to H^2 using elliptic regularity, and further via the Sobolev embedding H^k \hookrightarrow C^{m,\beta} for appropriate k, m, \beta, potentially reaching C^\infty if the data is smooth.
This method relies on the linear structure of the equation and is particularly effective for semilinear problems where the solution appears in the coefficients, allowing repeated application until maximal regularity is achieved. For hyperbolic PDEs, regularity is often established locally along characteristics, yielding propagation of smoothness in the spatial variables for first-order equations, while global higher regularity follows from energy methods that control time derivatives through multiplication by test functions and integration by parts. In the second-order case, such as the wave equation, energy estimates provide H^k-regularity for solutions with initial data in H^k \times H^{k-1}, preserving the Sobolev scale over time intervals where the solution exists. These results hold under compatibility conditions at the boundary for initial-boundary value problems. Singularities in solutions represent points or sets where regularity fails, and regularity theory distinguishes removable singularities, where the solution can be redefined to be smooth, from essential ones that persist. In the Navier-Stokes equations, for weak solutions in three dimensions, isolated singularities are removable under scale-invariant conditions, such as the solution and its gradient satisfying certain L^3-type integrability near the point, implying the solution extends across it; however, whether all potential singularities are removable remains an open Millennium Prize problem.

References

  1. [1]
    [PDF] Introduction to Partial Differential Equations - UCSB Math
    May 21, 2003 · Partial differential equations are often used to construct models of the most basic theories underlying physics and engineering.
  2. [2]
    [PDF] Chapter 1 Introduction - University of Utah Math Dept.
    A partial differential equation is an equation for a function which depends on more than one independent variable which involves the independent variables, the ...
  3. [3]
    [PDF] 1 Introduction
    A partial differential equation is an equation involving a function u of several variables and its partial derivatives. The order of the partial differential ...
  4. [4]
    [PDF] Introduction and some preliminaries 1 Partial differential equations
    A partial differential equation (PDE) is a relationship among partial derivatives of a function (or functions) of more than one variable.
  5. [5]
    [PDF] The analytical theory of heat
    It was the translator's hope to have been able to prefix to this treatise a Memoir of Fourier's life with BOme account of his writings; unforeseen circumstances ...
  6. [6]
    Extensions of the d'Alembert formulae to the half line and the finite ...
    Jean le Rond d'Alembert, in addition to deriving, in 1747, the wave equation, which was the first partial differential equation (PDE) ever written, also solved ...
  7. [7]
    Geometric Fluid Dynamics - UCSD CSE
    In the 1750's, Leonhard Euler (1707-1783) derived the Euler equations for fluid dynamics, which are a set of partial differential equations describing the ...
  8. [8]
    [PDF] The History of Differential Equations, 1670–1950
    Indeed, they grew steadily in importance, especially from the mid 18th century when partial equations were introduced by Jean d'Alembert and Leonhard Euler and ...Missing: credible | Show results with:credible
  9. [9]
    [PDF] The Black-Scholes Model
    Note that the. Black-Scholes PDE would also hold if we had assumed that µ = r. However, if µ = r then investors would not demand a premium for holding the stock ...
  10. [10]
    Applications of Partial Differential Equations in Bioengineering - MDPI
    The use of PDEs allows for the analysis and prediction of behavior in areas ranging from tissue engineering and cellular interactions to drug delivery systems ...Special Issue Editors · Special Issue Information
  11. [11]
    Partial differential equation - Scholarpedia
    Nov 4, 2011 · A partial differential equation (or briefly a PDE) is a mathematical equation that involves two or more independent variables, an unknown functionFirst-Order Partial Differential... · Second-Order Partial... · Higher-Order Partial...
  12. [12]
    [PDF] partial differential equations - Princeton Math
    To start with, partial differential equations, just like ordinary differential or integral equations, are functional equations. That means that the unknown, ...
  13. [13]
    [PDF] Notes on Partial Differential Equations John K. Hunter
    Abstract. These are notes from a two-quarter class on PDEs that are heavily based on the book Partial Differential Equations by L. C. Evans, together with ...
  14. [14]
    Partial Differential Equation -- from Wolfram MathWorld
    Partial Differential Equation; ∂²ψ/∂x² + ∂²ψ/∂y² = ... (1); Au_xx + 2Bu_xy + Cu_yy + Du_x + ... (2); Z = [A B; B C] (3); u_xx + u_yy = f(u_x, u_y, ...)
  15. [15]
    Calculus III - Partial Derivatives - Pauls Online Math Notes
    Nov 16, 2022 · Note that the notation for partial derivatives is different than that for derivatives of functions of a single variable. With functions of a ...
  16. [16]
    Boundary Conditions -- from Wolfram MathWorld
    1. Dirichlet boundary conditions specify the value of the function on a surface T = f(r,t). · 2. Neumann boundary conditions specify the normal derivative of the ...
  17. [17]
    [PDF] Partial Differential Equation: Penn State Math 412 Lecture Notes
    Linearity, Linear Operators & Homogeneous PDE's. Definition 1.60 (Linear Partial Differential Equation). A PDE is linear if it is a linear function of the ...
  18. [18]
    [PDF] Burgers Equation
    When ν = 0, Burgers equation is one of the simplest nonlinear conservation laws, and when ν > 0 it is one of the simplest nonlinear dissipative PDEs, due to ...
  19. [19]
    [PDF] Linear PDEs and the Principle of Superposition - Trinity University
    Warning: The principle of superposition can easily fail for nonlinear PDEs or boundary conditions. Consider the nonlinear PDE ux + u2uy = 0. One ...
  20. [20]
    [PDF] Uniqueness of solutions to the Laplace and Poisson equations
    a linear partial differential equation subject to certain boundary conditions such that the solution is unique.3 Before imposing the boundary condition, the ...
  21. [21]
    Nonuniqueness of weak solutions of the nonlinear Schroedinger ...
    Mar 17, 2005 · Generalized solutions of the Cauchy problem for the one-dimensional periodic nonlinear Schrödinger equation, with certain nonlinearities, are not unique.
  22. [22]
    [PDF] Nonlinear Evolution Equations1 - UC Davis Math
    The main points are: • in order to guarantee uniqueness, f must be Lipschitz; • if f is Lipschitz on bounded sets, the solution may blow up in finite time, and ...
  23. [23]
    [PDF] PARTIAL DIFFERENTIAL EQUATIONS - UCSB Math
    The second order linear PDEs can be classified into three types, which are invariant under changes of variables. The types are determined by the sign of the ...
  24. [24]
    [PDF] 5-Nonlinear Systems: The Euler Equations
    Euler Equations. The Euler equations of compressible gasdynamics are written as a system of conservation laws describing conservation of mass, momentum ...
  25. [25]
    [PDF] 8 Hyperbolic Systems of First-Order Equations
    The conditions under which plane wave solutions exist lead us to the definition of hyperbolicity given above. First, we rewrite the wave equation as a system in ...
  26. [26]
    [PDF] MATH3083/MATH6163 Advanced Partial Differential Equations
    The linear PDE system L(x, ∇)u = 0 is called elliptic at the point x if and only if det L_p(x, ik) ≠ 0 for all k ∈ ℝⁿ, k ≠ 0. (197) To save repetition, in the ...
  27. [27]
    [PDF] Fourier's heat conduction equation: History, influence, and connections
    In formulating heat conduction in terms of a partial differential equation and developing the methods for solving the equation, Fourier initiated many innova-.
  28. [28]
    [PDF] separation of variables - OSU Math
    We call this approach the method of separation of variables. We apply this method to the 1D Heat Equation and 1D Wave. Equation as follows. Henceforth, we ...
  29. [29]
    [PDF] 2 First-Order Equations: Method of Characteristics
    In this section, we describe a general technique for solving first-order equations. We begin with linear equations and work our way through the semilinear, ...
  30. [30]
    [PDF] The method of characteristics applied to quasi-linear PDEs
    Oct 26, 2005 · For any point on the initial curve, we follow the vector (A,B,C) to generate a curve on the solution surface, called a characteristic curve of ...
  31. [31]
    [PDF] First-Order Quasilinear PDEs - MATH 467 Partial Differential ...
    A first-order PDE is called semilinear if it has the form a(x, y)ux + b(x, y)uy = c(x, y, u). We will generalize the method of characteristics in order to solve.
  32. [32]
    [PDF] The Method of Characteristics 1 Homogeneous transport equations
    Find the solution at the endpoint of the characteristic: The solution of the PDE at (x, t) is simply u(x, t) = U(t). Here are a few examples of how ...
  33. [33]
    The Method of Characteristics with Applications to Conservation Laws
    In addition to characteristics crossing and a shock forming, there is another way the method of characteristics can break down and a discontinuity can form ...
  34. [34]
    [PDF] 3 Conservation Laws
    We say a shock is admissible if it satisfies the Oleinik entropy condition. We say a weak, admissible solution u of (3.28) is an admissible entropy solution if ...
  35. [35]
    The method of characteristics and Riemann invariants for ...
    Burnat, M. The method of characteristics and Riemann invariants for multidimensional hyperbolic systems. Sib Math J 11, 210–232 (1970).
  36. [36]
    [PDF] Characteristic invariants and Darboux's method - arXiv
    The method is based on using functions that are constant in the direction of characteristics of the system. These functions generalize well-known Riemann ...
  37. [37]
    [PDF] 10 Integral Transforms - Partial Differential Equations - UNCW
    The idea is that one can transform the problem at hand to a new problem in a different space, hoping that the problem in the new space is easier to solve.
  38. [38]
    [PDF] Solving the heat equation with the Fourier transform
    This is the solution of the heat equation for any initial data φ. We derived the same formula last quarter, but notice that this is a much quicker way to ...
  39. [39]
    [PDF] Hankel Transforms and Their Applications
    This chapter deals with the definition and basic operational properties of the Hankel transform. A large number of axisymmetric problems in cylindrical polar ...
  40. [40]
    [PDF] 4 Classification of Second-Order Equations
    By making an appropriate change of variables, we can write the top-order term ... equations can be written in the canonical form u_{x₁x₁} − ∑_{i=2}^{n} u_{xᵢxᵢ} + ...
  41. [41]
    [PDF] Canonical form of second order PDE with two variables
    ... second order quasilinear PDE with two variables there exists a linear change of variables such that equation (1) can be written in the canonical form.
  42. [42]
    [PDF] B Similarity solutions
    Similarity solutions to PDEs are solutions which depend on certain groupings of the independent variables, rather than on each variable separately. I'll show.
  43. [43]
    [PDF] Chapter 6: Similarity solutions of partial differential equations
    We study the existence and properties of similarity solutions. Not all solutions to PDEs are similarity solutions, and PDEs do not always have similarity ...
  44. [44]
    [PDF] Complex Analysis and Conformal Mapping
    Apr 18, 2024 · Conformal mappings can be effectively used for constructing solutions to the Laplace equation on complicated planar domains that are used in ...
  45. [45]
    [PDF] Chapter 7 Complex Analysis and Conformal Mapping - SMU Physics
    Feb 17, 2013 · Conformal mappings can be effectively used for constructing solutions to the Laplace equation on complicated planar domains that appear in a ...
  46. [46]
    Calculus III - Change of Variables - Pauls Online Math Notes
    Nov 16, 2022 · We will start with double integrals. In order to change variables in a double integral we will need the Jacobian of the transformation. Here is ...
  47. [47]
    Green's functions and fundamental solutions
    The fundamental solutions of partial differential equations are generally formulated for infinite domains. In some cases, it is possible to find solutions to ...
  48. [48]
    [PDF] Method of Green's Functions
    We introduce another powerful method of solving PDEs. First, we need to consider some preliminary definitions and ideas. 1 Preliminary ideas and motivation. 1.1 ...
  49. [49]
    [PDF] 4 The Heat Equation - DAMTP
    The heat kernel is a Gaussian centred on x0. The rms width (standard deviation) of the Gaussian is √(K(t − t0)) while the height of the peak at x = x0 is 1/√(4πK(t − t0)) ...
  50. [50]
    [PDF] The Multi-dimensional Wave Equation (n > 1) Special Solutions
    1. Fundamental Solution (n = 3) and Strong Huygens' Principle. • In this section we consider the global Cauchy problem for the three-dimensional.
  51. [51]
    Green's Functions and Boundary Value Problems | Wiley Online Books
    Jan 24, 2011 · Green's Functions and Boundary Value Problems; Editors: Ivar Stakgold, Michael Holst; First published: 24 January 2011; Print ISBN: ...
  52. [52]
    Semigroups of Linear Operators and Applications to Partial ...
    Semigroups of linear operators and its neighboring areas have developed into a beautiful abstract theory.
  53. [53]
    Finite Difference Methods for Ordinary and Partial Differential ...
    This book introduces finite difference methods for both ordinary differential equations (ODEs) and partial differential equations (PDEs)
  54. [54]
    [PDF] Implicit Scheme for the Heat Equation
    This requires us to solve a linear system at each timestep and so we call the method implicit. Writing the difference equation as a linear system we arrive at ...
  55. [55]
    [PDF] On the Partial Difference Equations of Mathematical Physics
    Problems involving the classical linear partial differential equations of mathematical physics can be reduced to algebraic ones of a very much simpler ...
  56. [56]
    [PDF] Survey of the stability of linear finite difference equations - fsu/coaps
    Applying this to the present case, we see that the set (8) is uniformly bounded, and the approximation is stable. (P. D. Lax and R. D. Richtmyer)
  57. [57]
    The Finite Element Method for Elliptic Problems
    The Finite Element Method for Elliptic Problems is the only book available that analyzes in depth the mathematical foundations of the finite element method.
  58. [58]
    [PDF] Theory of Adaptive Finite Element Methods: An Introduction
    Abstract This is a survey on the theory of adaptive finite element methods (AFEM), which are fundamental in modern computational science and engineering. We.
  59. [59]
    Finite Volume Methods for Hyperbolic Problems
    Similar to the book of R. J. LeVeque titled Numerical Methods for Conservation Laws, this manuscript will certainly become a part of the standard literature ...
  60. [60]
    [PDF] Finite difference method for numerical computation of ... - HAL
    Jul 25, 2019 · The purpose of this paper is to choose a scheme which is in some sense best and which still allows computation across the shock waves. This ...
  61. [61]
    Physics-informed neural networks: A deep learning framework for ...
    Feb 1, 2019 · We introduce physics-informed neural networks – neural networks that are trained to solve supervised learning tasks while respecting any given laws of physics.
  62. [62]
    Conservative physics-informed neural networks on discrete domains ...
    We propose a conservative physics-informed neural network (cPINN) on discrete domains for nonlinear conservation laws.
  63. [63]
    A comprehensive review of advances in physics-informed neural ...
    Oct 2, 2024 · These challenges include a limited understanding of fundamental principles, sparse experimental data, and difficulties in creating accurate ...
  64. [64]
    Homotopy Analysis Method in Nonlinear Differential Equations
    Jun 22, 2012 · Presents the latest developments and applications of the analytic approximation method for highly nonlinear problems, namely the homotopy analysis method (HAM).
  65. [65]
    Traveling wave solutions of nonlinear partial differential equations
    We propose a simple algebraic method for generating classes of traveling wave solutions for a variety of partial differential equations of current interest in ...
  66. [66]
    [PDF] Kenneth G. Wilson - Nobel Lecture
    The renormalization group approach is to integrate out the fluctuations in sequence starting with fluctuations on an atomic scale and then moving to.
  67. [67]
    [PDF] Symmetry and Explicit Solutions of Partial Differential Equations
    A local Lie group of transformations G is called a symmetry group of the system of partial differential equations (1) if f̄ = g · f is a solution whenever f is.
  68. [68]
    [PDF] Introduction To The Theory Of Distributions
    The Theory of Distributions, by Israel Halperin, Associate Professor of Mathematics, Queens University, based on the lectures given by Laurent Schwartz.
  69. [69]
    [PDF] partial-differential-equations-by-evans.pdf - Math24
    I present in this book a wide-ranging survey of many important topics in the theory of partial differential equations (PDE), with particular emphasis.
  70. [70]
    [PDF] Weak Solutions
    Thus we can define “weak solution” using ∫ ∇u·∇v=0 instead of −△u=0. The hope is that the existence of thus defined weak solution would be easy to establish, ...
  71. [71]
    [PDF] Hyperbolic Conservation Laws An Illustrated Tutorial
    Figure 7: At time T when characteristics start to intersect, a shock is produced. The loss of regularity can be seen already in the solution to a scalar ...
  72. [72]
    O. A. Oleinik, “Discontinuous solutions of non-linear differential ...
    Full-text PDF (6366 kB) Citations (47) ... Citation: O. A. Oleinik, “Discontinuous solutions of non-linear differential equations”, Russian Math.
  73. [73]
    [PDF] Functional Analysis, Sobolev Spaces and Partial Differential Equations
    The Sobolev spaces occur in a wide range of questions, in both pure and applied mathematics. They appear in linear and nonlinear PDEs that arise, for example, ...
  74. [74]
    [PDF] Well - Posedness - MIT OpenCourseWare
    Def.: A PDE is called well-posed (in the sense of Hadamard) if (1) a solution exists, (2) the solution is unique, (3) the solution depends continuously on ...
  75. [75]
    [PDF] Partial differential equations
    Definition 3.4 (Hadamard's well-posedness) A given problem for a partial differential equation is said to be well-posed if: (1) a solution exists, (2) the ...
  76. [76]
    [PDF] Chapter 6: Parabolic equations - UC Davis Math
    Moreover, we may establish the existence and regularity of weak solutions of parabolic PDEs by the use of L2-energy estimates. 6.1. The heat equation. Just as ...
  77. [77]
    [PDF] Well-Posed Problems - UNL Math
    Methods like the one used above are for this reason called energy methods. Their use is not restricted to unicity arguments for linear, parabolic problems.
  78. [78]
    [PDF] Maximum Principles for Elliptic and Parabolic Operators
    As for the elliptic operators, doing so for both w and −w yields uniqueness of the solution in the domain, if it exists at all.
  79. [79]
    [PDF] Maximum principle for parabolic operators
    A parabolic operator is in particular a degenerate elliptic operator. So under our assumptions, the weak maximum principle holds. This implies that max_{Ω_T} u = max_{∂Ω_T} ...
  80. [80]
    [PDF] Parabolic Partial Differential Equations Vorlesung: Armin Schikorra ...
    A consequence of the weak maximum principle is uniqueness of solutions and the comparison principle. Corollary 2.2.3 (Uniqueness). Let X ⊂ ℝ^{n+1} and L as ...
  81. [81]
    [PDF] 3. Backward heat equation ? - People
    In a word, the backward heat equation is ill-posed because all solutions are instantly swamped by high-frequency noise.
  82. [82]
    [PDF] Ill-Posedness of Backward Heat Conduction Problem1 - IIT Madras
    A problem which is not well-posed is called an ill-posed problem. Thus, an operator equation. Af = g,. (1.1) where one wants to find f for a given linear ...
  83. [83]
    Regularity of very weak solutions for elliptic equation of divergence ...
    Feb 15, 2012 · In this paper, we study the local regularity of very weak solutions u ∈ L¹_loc(Ω) of the elliptic equation D_j(a_{ij}(x) D_i u) = 0 ...
  84. [84]
    [PDF] Chapter 7: Hyperbolic equations - UC Davis Math
    Hyperbolic PDEs arise in physical applications as models of waves, such as acoustic, elastic, electromagnetic, or gravitational waves. The qualitative ...
  85. [85]
  86. [86]
    Removable singularities of weak solutions to the navier-stokes ...
    Dec 23, 2010 · Consider the Navier-Stokes equations in Ω×(0,T), where Ω is a domain in R³. We show that there is an absolute constant ε0 such that every ...
  87. [87]
    A removable isolated singularity theorem for the stationary Navier ...
    Jan 1, 2006 · We show that an isolated singularity at the origin 0 of a smooth solution ( u , p ) of the stationary Navier–Stokes equations is removable ...