
Euler–Lagrange equation

The Euler–Lagrange equation is a fundamental second-order ordinary differential equation in the calculus of variations, providing the necessary condition for a function to extremize (minimize or maximize) a given functional, typically expressed as a definite integral involving the function and its derivatives. In its standard form for a functional J = \int_a^b L(x, y(x), y'(x)) \, dx, where L is the integrand depending on the independent variable x, the unknown function y(x), and its derivative y'(x), the equation is \frac{\partial L}{\partial y} - \frac{d}{dx} \left( \frac{\partial L}{\partial y'} \right) = 0. This equation arises from setting the first variation of the functional to zero, analogous to setting the derivative to zero in ordinary calculus when seeking extrema.

The equation is named after the mathematicians Leonhard Euler and Joseph-Louis Lagrange, who developed its core ideas in the mid-18th century while addressing problems in optimization and mechanics. Euler had formulated the equation by April 15, 1743, as documented in a letter of that date, and elaborated it geometrically and intuitively in his 1744 treatise Methodus inveniendi lineas curvas maximi minimive proprietate gaudentes sive solutio problematis isoperimetrici latissimo sensu accepti, where he applied it to problems like the brachistochrone and isoperimetric curves using elementary methods without modern variational notation. Lagrange extended and formalized the approach starting in 1755 through correspondence with Euler, introducing a more analytical framework, and presented it fully in his 1788 work Mécanique analytique, where he derived the equation for systems with multiple variables and constraints, emphasizing its role in deriving equations of motion from a principle of least action.

In physics and engineering, the Euler–Lagrange equation is central to Lagrangian mechanics, where the Lagrangian L = T - V (with T as kinetic energy and V as potential energy) yields the equations of motion for conservative systems, generalizing Newton's laws to complex configurations involving constraints or generalized coordinates. It also finds applications in field theories, such as classical electromagnetism and quantum field theory, where it determines the field equations from action principles, and in optimization problems across mathematics, including geodesics on surfaces and minimal surfaces. Extensions to higher dimensions or time-dependent systems lead to more general forms, such as those for multiple functions, multiple independent variables, or systems with non-holonomic constraints, underscoring its versatility in modern theoretical frameworks.

Background

Historical development

The roots of the Euler–Lagrange equation trace back to the late 17th century, when the Bernoulli brothers laid foundational work in variational problems. Jakob Bernoulli explored isoperimetric problems, which involve finding curves that maximize enclosed area for a fixed perimeter, providing early insights into the optimization of functionals. These investigations influenced his brother Johann Bernoulli, who in 1696 posed the brachistochrone problem—a challenge to determine the curve allowing the fastest descent between two points under gravity—in the journal Acta Eruditorum, sparking widespread interest in methods for extremizing integrals.

Leonhard Euler advanced this field significantly in the mid-18th century, beginning with early treatises around 1736 on variational principles and culminating in his comprehensive 1744 publication Methodus inveniendi lineas curvas maximi minimive proprietate gaudentes, where he introduced systematic variations of functions and derived a general differential equation for extremal curves. Euler's approach transformed the ad hoc solutions of earlier problems into a unified framework for the calculus of variations. Joseph-Louis Lagrange reformulated Euler's methods during 1760–1762 in his Essai d'une nouvelle méthode pour déterminer les maxima et minima des formules intégrales définies, published in the Mémoires de l'Académie des Sciences de Turin, by employing the delta (δ) notation for variations and integrating the method into the formalism of analysis for broader applicability. Lagrange expanded these ideas in his seminal Mécanique analytique (1788), emphasizing algebraic manipulation over geometric intuition. The equation, initially termed Euler's differential equation, became known as the Euler–Lagrange equation in later literature to recognize both pioneers' contributions.

Calculus of variations

The calculus of variations is a branch of mathematical analysis dedicated to finding functions that extremize functionals, which are mappings from a set of admissible functions to the real numbers. Unlike ordinary calculus, where extrema are sought for functions of finitely many variables, the calculus of variations addresses infinite-dimensional optimization problems by treating functions themselves as the variables. A fundamental example of a functional is the form J = \int_a^b L\bigl(x, y(x), y'(x)\bigr) \, dx, where y: [a, b] \to \mathbb{R} is a continuously differentiable function, y'(x) = dy/dx, and L is a given smooth function called the integrand or Lagrangian. This setup generalizes problems in physics, engineering, and optimization, where the goal is to determine y(x) that minimizes or maximizes J while respecting boundary conditions, such as fixed endpoints y(a) = y_a and y(b) = y_b. The variational problem thus involves identifying the extremal function y(x) among all possible curves connecting the endpoints that yields the optimal value of the functional.

To solve such problems, one introduces the concept of a variation of the function, defined as a small perturbation \delta y(x) = \epsilon \eta(x), where \epsilon is an infinitesimal scalar and \eta(x) is a smooth test function vanishing at the boundaries, \eta(a) = \eta(b) = 0, to preserve the endpoint conditions. Substituting the varied function \tilde{y}(x) = y(x) + \epsilon \eta(x) into the functional yields J[\tilde{y}] = J[y + \epsilon \eta], which can be expanded in a Taylor series with respect to \epsilon. The first variation \delta J is the linear term in this expansion, corresponding to the Gateaux derivative of J at y in the direction of \eta, and is given by \delta J = \left. \frac{d}{d\epsilon} J[y + \epsilon \eta] \right|_{\epsilon=0}. For y(x) to be a stationary point of the functional—analogous to a critical point in finite dimensions—the first variation must vanish for every admissible \eta(x), so \delta J = 0. This condition of zero first variation leads directly to the Euler–Lagrange equation, which emerges as the necessary condition governing the extremal functions in the calculus of variations.
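
As a concrete numerical illustration of the first variation (a minimal sketch of my own: an arc-length functional, the straight-line extremal, and the perturbation direction η(x) = sin(πx) are all convenience choices), the following Python snippet checks that dJ/dε vanishes at the extremal but not at a nearby non-extremal curve.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)

def trapezoid(f):
    # trapezoidal rule on the fixed grid x
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0

def J(eps):
    # arc length of the perturbed curve y(x) = x + eps*sin(pi*x); its derivative
    # is written out exactly to avoid numerical differentiation error
    yp = 1.0 + eps * np.pi * np.cos(np.pi * x)
    return trapezoid(np.sqrt(1.0 + yp**2))

h = 1e-6
print((J(h) - J(-h)) / (2 * h))            # first variation at the straight line: ~ 0
print((J(0.3 + h) - J(0.3 - h)) / (2 * h)) # same directional derivative away from the extremal: nonzero
```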

Core Formulation

Statement

The Euler–Lagrange equation arises as the necessary condition for a function y(x) to extremize the functional J = \int_a^b L(x, y(x), y'(x)) \, dx in the calculus of variations, where L(x, y, y') denotes the Lagrangian and y' = dy/dx. Assuming L is continuously differentiable with respect to x, y, and y', and considering one independent variable x with dependent variable y(x) involving only the first-order derivative y', the extremizing function y(x) satisfies the second-order ordinary differential equation \frac{\partial L}{\partial y} - \frac{d}{dx} \left( \frac{\partial L}{\partial y'} \right) = 0. If the endpoints x = a and x = b are fixed such that y(a) and y(b) are prescribed, the equation holds with these boundary values; otherwise, if an endpoint is free, y(x) must additionally satisfy the corresponding natural boundary condition \frac{\partial L}{\partial y'} = 0 at that point. This equation expresses a balance between the direct variation term \frac{\partial L}{\partial y}, analogous to a "force," and the total derivative \frac{d}{dx} \left( \frac{\partial L}{\partial y'} \right), representing an "inertial" contribution from the dependence on the slope y'. In the special case where L does not explicitly depend on y (so \frac{\partial L}{\partial y} = 0), the equation simplifies to \frac{d}{dx} \left( \frac{\partial L}{\partial y'} \right) = 0, implying \frac{\partial L}{\partial y'} = constant, which serves as a precursor to the Beltrami identity for conserved quantities in variational problems.
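
The statement can also be checked symbolically. The sketch below (using SymPy; the harmonic-oscillator Lagrangian L = m y'^2/2 - k y^2/2 and the placeholder symbols Y, Yp are choices of mine, not from the source) forms \frac{\partial L}{\partial y} - \frac{d}{dx}(\frac{\partial L}{\partial y'}) directly and recovers m y'' + k y = 0.

```python
import sympy as sp

x, m, k = sp.symbols('x m k', positive=True)
y = sp.Function('y')
Y, Yp = sp.symbols('Y Yp')                 # placeholders standing in for y and y'

L = m * Yp**2 / 2 - k * Y**2 / 2           # Lagrangian written in terms of (Y, Yp)

dL_dy = sp.diff(L, Y)
dL_dyp = sp.diff(L, Yp)

# substitute the actual function and take the total derivative d/dx
subs = {Y: y(x), Yp: sp.Derivative(y(x), x)}
euler_lagrange = sp.Eq(dL_dy.subs(subs) - sp.diff(dL_dyp.subs(subs), x), 0)
print(euler_lagrange)                      # -k*y(x) - m*y''(x) = 0, i.e. m*y'' = -k*y
```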

Derivation

The derivation of the Euler–Lagrange equation proceeds from the principle of stationarity applied to a functional in the calculus of variations. Consider the functional J = \int_a^b L(x, y(x), y'(x)) \, dx, where L is the Lagrangian (or integrand) depending on the independent variable x, the dependent function y(x), and its first derivative y'(x) = dy/dx, with fixed endpoints y(a) and y(b). This functional represents the quantity to be extremized, such as arc length, elapsed time, or action, subject to the boundary conditions.

To determine the function y(x) that makes J stationary, introduce a small perturbation to the path: let y_\epsilon(x) = y(x) + \epsilon \eta(x), where \epsilon is a small scalar and \eta(x) is an arbitrary variation satisfying the conditions \eta(a) = \eta(b) = 0. The perturbed functional is then J[y_\epsilon] = \int_a^b L\big(x, y + \epsilon \eta, y' + \epsilon \eta'\big) \, dx. For stationarity at \epsilon = 0, the first-order change in J, known as the first variation \delta J, must vanish. Expanding to first order in \epsilon and differentiating with respect to \epsilon at \epsilon = 0 yields \delta J = \left. \frac{d}{d\epsilon} J[y_\epsilon] \right|_{\epsilon=0} = \int_a^b \left[ \frac{\partial L}{\partial y} \eta + \frac{\partial L}{\partial y'} \eta' \right] dx = 0, which must hold for all admissible variations \eta(x).

To simplify this condition, integrate the second term by parts: \int_a^b \frac{\partial L}{\partial y'} \eta' \, dx = \left[ \frac{\partial L}{\partial y'} \eta \right]_a^b - \int_a^b \eta \frac{d}{dx} \left( \frac{\partial L}{\partial y'} \right) dx. The boundary term vanishes because \eta(a) = \eta(b) = 0, leaving \delta J = \int_a^b \left[ \frac{\partial L}{\partial y} - \frac{d}{dx} \left( \frac{\partial L}{\partial y'} \right) \right] \eta \, dx = 0. Since this integral is zero for every arbitrary smooth \eta(x) with the given boundary conditions, the fundamental lemma of the calculus of variations implies that the integrand must identically vanish, yielding the Euler–Lagrange equation \frac{\partial L}{\partial y} - \frac{d}{dx} \left( \frac{\partial L}{\partial y'} \right) = 0, which governs the extremal paths. In a modern perspective, the condition \delta J = 0 states that the Gateaux derivative of the functional J at y is zero in every direction \eta, providing a rigorous foundation in functional analysis for the calculus of variations.
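
The integration-by-parts step can be verified numerically. In the sketch below, the trial curve y(x) = x^2, the Lagrangian L = y'^2/2 + y^2/2, and the variation η(x) = sin(πx) are arbitrary choices of mine; the point is that the direct first variation and the form obtained after integration by parts agree because η vanishes at both endpoints.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 4001)
eta = np.sin(np.pi * x)                    # eta(0) = eta(1) = 0
etap = np.pi * np.cos(np.pi * x)

y = x**2                                   # a non-extremal trial curve
yp = 2.0 * x
ypp = np.full_like(x, 2.0)

def trapezoid(f):
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0

direct = trapezoid(y * eta + yp * etap)        # integral of dL/dy*eta + dL/dy'*eta'
after_ibp = trapezoid((y - ypp) * eta)         # integral of (dL/dy - d/dx dL/dy')*eta
print(direct, after_ibp)                       # the two values agree
```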

Examples

Brachistochrone problem

The brachistochrone problem involves finding the curve y = y(x) between two points, typically from (0, 0) to (a, b) with y measured downward, that minimizes the time for a particle to slide from rest under constant gravity without friction. The speed v at depth y follows from conservation of energy, giving v = \sqrt{2 g y}, where g is the gravitational acceleration. The infinitesimal time element is dt = ds / v, with arc length ds = \sqrt{dx^2 + dy^2} = \sqrt{1 + (y')^2} \, dx, so the total time is T = \int_0^a \frac{\sqrt{1 + (y')^2}}{\sqrt{2 g y}} \, dx = \frac{1}{\sqrt{2 g}} \int_0^a \sqrt{\frac{1 + (y')^2}{y}} \, dx. Minimizing T is equivalent to minimizing the functional J = \int_0^a L(y, y') \, dx with Lagrangian L = \sqrt{\frac{1 + (y')^2}{y}}, since the constant factor 1/\sqrt{2 g} does not affect the minimizing path.

Since L has no explicit dependence on x, the Beltrami identity from the Euler–Lagrange framework simplifies the equation to L - y' \frac{\partial L}{\partial y'} = C, where C is a constant. To apply this, compute \frac{\partial L}{\partial y'} = \frac{y'}{\sqrt{y(1 + (y')^2)}}. Then L - y' \frac{\partial L}{\partial y'} = \sqrt{\frac{1 + (y')^2}{y}} - y' \cdot \frac{y'}{\sqrt{y(1 + (y')^2)}} = \frac{1}{\sqrt{y(1 + (y')^2)}} = C. Rearranging yields y(1 + (y')^2) = \frac{1}{C^2}, or (y')^2 = \frac{1/C^2 - y}{y} = \frac{k^2 - y}{y}, where k = 1/C. Thus y' = \sqrt{\frac{k^2 - y}{y}} (taking the positive root for the descending path). Separating variables gives dx = \sqrt{\frac{y}{k^2 - y}} \, dy.

Integrating this leads to the parametric solution of a cycloid generated by a rolling circle of radius r = k^2 / 2: x(\theta) = r (\theta - \sin \theta), \quad y(\theta) = r (1 - \cos \theta), with \theta ranging from 0 to some \theta_0 chosen to match the endpoint (a, b). The shape of this curve is independent of g, since the gravitational factor 1/\sqrt{2g} factors out of the functional and does not influence the geometry of the minimizing path. The cycloid provides a true minimum, as confirmed by comparing travel times with other curves such as the straight line, which takes longer.
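
A quick numerical comparison illustrates the final claim. In the sketch below the endpoint (π, 2), the cycloid parameter r = 1, and g = 9.81 are illustrative choices of mine; the cycloid descent time matches its closed form \theta_0 \sqrt{r/g} and beats the straight line joining the same points.

```python
import numpy as np
from scipy.integrate import quad

g, r, theta0 = 9.81, 1.0, np.pi            # cycloid endpoint: (r*pi, 2*r) = (pi, 2)

def cycloid_integrand(theta):
    # dt = ds / v with ds = 2 r sin(theta/2) dtheta and v = sqrt(2 g y)
    y = r * (1.0 - np.cos(theta))
    ds = 2.0 * r * np.sin(theta / 2.0)
    return ds / np.sqrt(2.0 * g * y)

def line_integrand(x):
    # straight line y = (2/pi) x between the same endpoints
    slope = 2.0 / np.pi
    return np.sqrt(1.0 + slope**2) / np.sqrt(2.0 * g * slope * x)

t_cycloid, _ = quad(cycloid_integrand, 0.0, theta0)
t_line, _ = quad(line_integrand, 0.0, np.pi)

print(f"cycloid: {t_cycloid:.4f} s (closed form {theta0 * np.sqrt(r / g):.4f} s)")
print(f"line:    {t_line:.4f} s")          # noticeably longer than the cycloid
```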

Minimal surface of revolution

The problem of finding the surface of revolution that minimizes the area between two rings is a classic application of the Euler–Lagrange equation to geometric optimization. Consider two rings of radii r_1 and r_2 located at x = a and x = b along the x-axis. The generating curve y = y(x) is rotated about the x-axis to form an axisymmetric surface, and the area functional to minimize is A = 2\pi \int_a^b y \sqrt{1 + (y')^2} \, dx, with boundary conditions y(a) = r_1 and y(b) = r_2. The integrand of this functional serves as the Lagrangian L(y, y') = y \sqrt{1 + (y')^2}, which does not depend explicitly on the variable x. For such Lagrangians, the full Euler–Lagrange equation \frac{\partial L}{\partial y} - \frac{d}{dx} \left( \frac{\partial L}{\partial y'} \right) = 0 simplifies via the Beltrami identity to L - y' \frac{\partial L}{\partial y'} = c, where c is a constant.

Here \frac{\partial L}{\partial y'} = \frac{y y'}{\sqrt{1 + (y')^2}}, so substituting yields y \sqrt{1 + (y')^2} - y' \cdot \frac{y y'}{\sqrt{1 + (y')^2}} = c, which simplifies to \frac{y}{\sqrt{1 + (y')^2}} = c. Solving for the derivative gives y' = \pm \sqrt{\frac{y^2}{c^2} - 1}, and separating variables and integrating produces the explicit solution y(x) = c \cosh \left( \frac{x - x_0}{c} \right), where c and x_0 are determined by the boundary conditions; the resulting surface is the catenoid. Leonhard Euler first derived the catenoid in 1744 as the surface of revolution with minimal area, proving that it satisfies the necessary conditions from the calculus of variations. The catenoid provides a local minimum for the area functional only when the ring separation |b - a| is sufficiently small, roughly below 1.325 \times \max(r_1, r_2); for larger separations, the global minimizer consists of two separate disks spanning the rings (the Goldschmidt solution), as the catenoid becomes unstable.
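
The Beltrami identity can be checked symbolically for the catenary profile. The following SymPy sketch (symbol names and the squared form of the check are my own choices) verifies that y = c \cosh((x - x_0)/c) satisfies y^2 = c^2 (1 + (y')^2), which is equivalent to y / \sqrt{1 + (y')^2} = c.

```python
import sympy as sp

x, c, x0 = sp.symbols('x c x0', positive=True)
y = c * sp.cosh((x - x0) / c)              # catenary generating curve
yp = sp.diff(y, x)                         # y' = sinh((x - x0)/c)

# squared Beltrami identity: y^2 - c^2*(1 + y'^2) should vanish identically
print(sp.simplify(y**2 - c**2 * (1 + yp**2)))   # 0
```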

Generalizations

Higher-order derivatives

In the calculus of variations, the Euler–Lagrange equation can be extended to functionals that depend on higher-order derivatives of the extremizing function. Consider a functional of the form J = \int_a^b L\left(x, y, y', y'', \dots, y^{(n)}\right) \, dx, where y^{(k)} = \frac{d^k y}{dx^k} denotes the k-th derivative of y with respect to x, and L is a smooth integrand depending on derivatives up to order n. The necessary condition for y to extremize J is the generalized Euler–Lagrange equation, \sum_{k=0}^n (-1)^k \frac{d^k}{dx^k} \left( \frac{\partial L}{\partial y^{(k)}} \right) = 0. This reduces to the standard form when n=1, and for n=0 it simplifies to \frac{\partial L}{\partial y} = 0. In general the equation is an ordinary differential equation of order 2n, reflecting the increased complexity introduced by higher derivatives.

The derivation proceeds from the first variation of the functional. For a varied path y + \epsilon \eta, where \eta is an admissible variation vanishing at the endpoints along with its first n-1 derivatives (under fixed boundary conditions), the condition \delta J = 0 for arbitrary \eta yields \int_a^b \left[ \sum_{k=0}^n \frac{\partial L}{\partial y^{(k)}} \eta^{(k)} \right] dx = 0. Applying integration by parts k times to each term involving \eta^{(k)} transfers the derivatives to the coefficients \frac{\partial L}{\partial y^{(k)}}, resulting in boundary terms at x = a and x = b plus the integral \int_a^b \eta \sum_{k=0}^n (-1)^k \frac{d^k}{dx^k} \left( \frac{\partial L}{\partial y^{(k)}} \right) dx = 0. Since \eta is arbitrary, the integrand must vanish, giving the generalized equation, provided the boundary terms vanish.

If some derivatives of y are not specified at the endpoints, natural boundary conditions arise from setting the coefficients of the non-vanishing boundary terms to zero. For a second-order case (n=2), these include conditions such as \frac{\partial L}{\partial y'} - \frac{d}{dx} \frac{\partial L}{\partial y''} = 0 and \frac{\partial L}{\partial y''} = 0 at each endpoint where the corresponding values are not prescribed. Higher-order cases involve analogous higher-derivative expressions for the remaining boundary terms.

A representative application occurs in elasticity theory for the deflection of a slender beam under transverse loading, modeled by the Euler–Bernoulli theory. The total potential energy functional is J = \int_a^b \left[ \frac{1}{2} EI (y'')^2 - q(x) y \right] dx, where EI is the flexural rigidity, q(x) is the distributed load, and the Lagrangian L = \frac{1}{2} EI (y'')^2 - q y depends on y and y'' (n=2). Applying the generalized Euler–Lagrange equation yields \frac{d^2}{dx^2} (EI y'') - q = 0, or equivalently (EI y'')'' = q, the fourth-order beam equation governing static deflection. Natural boundary conditions, such as those for a free end, include EI y'' = 0 (zero moment) and \frac{d}{dx} (EI y'') = 0 (zero shear).
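
The beam example can be reproduced symbolically. The sketch below (SymPy, with constant EI and placeholder symbols Y, Y1, Y2 of my own choosing standing in for y, y', y'') applies the generalized formula term by term and recovers EI y'''' = q.

```python
import sympy as sp

x, EI = sp.symbols('x EI', positive=True)
y, q = sp.Function('y'), sp.Function('q')
Y, Y1, Y2 = sp.symbols('Y Y1 Y2')          # stand-ins for y, y', y''

L = EI * Y2**2 / 2 - q(x) * Y              # Euler-Bernoulli potential energy density

subs = {Y: y(x), Y1: sp.Derivative(y(x), x), Y2: sp.Derivative(y(x), x, 2)}
terms = [sp.diff(L, Y).subs(subs),                          # k = 0 term
         -sp.diff(sp.diff(L, Y1).subs(subs), x),            # k = 1 term (vanishes here)
         sp.diff(sp.diff(L, Y2).subs(subs), x, 2)]          # k = 2 term
print(sp.Eq(sum(terms), 0))                # EI*y''''(x) - q(x) = 0
```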

Multiple functions

In variational problems involving multiple dependent functions y_1(x), \dots, y_m(x) that vary with a single independent variable x, the objective is to extremize a functional of the form J[y_1, \dots, y_m] = \int_a^b L(x, y_1, \dots, y_m, y_1', \dots, y_m') \, dx, where L denotes the Lagrangian, assumed sufficiently smooth, and primes indicate derivatives with respect to x. The stationarity condition yields a system of m second-order Euler–Lagrange equations, one for each i = 1, \dots, m: \frac{\partial L}{\partial y_i} - \frac{d}{dx} \left( \frac{\partial L}{\partial y_i'} \right) = 0. This system generalizes the single-function Euler–Lagrange equation, which corresponds to the scalar case m=1. The equations are typically coupled when L contains cross terms mixing distinct y_i and y_j' (for i \neq j), resulting in interdependent ordinary differential equations that must be solved simultaneously for all functions.

A classical example occurs in finding geodesics on a two-dimensional surface, where the arc-length functional for a curve parametrized by coordinates y_1(x) and y_2(x) leads to coupled Euler–Lagrange equations whose solutions describe the shortest paths between points. Another instance arises in the variational formulation of the double pendulum, where the action integral based on the kinetic and potential energies of the two angles \theta_1(t) and \theta_2(t) (with time t as the independent variable) produces a coupled system of equations governing the motion. Conservation laws for such systems follow from Noether's theorem: if the Lagrangian remains invariant under a transformation of the multiple functions and their derivatives, a corresponding conserved quantity exists, generalizing the single-function result to multi-component settings.
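
A small symbolic example shows how the coupled system arises. The Lagrangian L = (y_1'^2 + y_2'^2)/2 - k (y_2 - y_1)^2/2 below is a simple stand-in of my own (not the double pendulum itself); its cross term makes the two Euler–Lagrange equations interdependent.

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)
y1, y2 = sp.Function('y1'), sp.Function('y2')
Y1, Y2, Y1p, Y2p = sp.symbols('Y1 Y2 Y1p Y2p')   # stand-ins for y1, y2, y1', y2'

L = (Y1p**2 + Y2p**2) / 2 - k * (Y2 - Y1)**2 / 2
subs = {Y1: y1(x), Y2: y2(x),
        Y1p: sp.Derivative(y1(x), x), Y2p: sp.Derivative(y2(x), x)}

for Yi, Yip in [(Y1, Y1p), (Y2, Y2p)]:
    eq = sp.Eq(sp.diff(L, Yi).subs(subs) - sp.diff(sp.diff(L, Yip).subs(subs), x), 0)
    print(eq)
# the two printed equations form the coupled system
#   y1'' = k*(y2 - y1)   and   y2'' = -k*(y2 - y1)
```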

Multiple variables

In the calculus of variations, the Euler–Lagrange equation generalizes to problems involving a single scalar function u(\mathbf{x}) of multiple independent variables \mathbf{x} = (x_1, \dots, x_d) in a domain \Omega \subset \mathbb{R}^d. The objective is to extremize the functional J = \int_\Omega L(x_1, \dots, x_d, u, \frac{\partial u}{\partial x_1}, \dots, \frac{\partial u}{\partial x_d}) \, dV, where L is the Lagrangian density depending on the position variables, the function value, and its first partial derivatives, and dV = dx_1 \cdots dx_d is the volume element. This formulation arises naturally in problems like finding minimal surfaces or steady-state fields, extending the one-dimensional case where d=1. The corresponding Euler–Lagrange equation is \frac{\partial L}{\partial u} - \sum_{i=1}^d \frac{\partial}{\partial x_i} \left( \frac{\partial L}{\partial (\partial u / \partial x_i)} \right) = 0, which must hold at every point in \Omega.

To derive it, consider an admissible variation \delta u such that \delta u = 0 on the boundary \partial \Omega. The first variation of the functional is \delta J = \int_\Omega \left( \frac{\partial L}{\partial u} \delta u + \sum_{i=1}^d \frac{\partial L}{\partial (\partial u / \partial x_i)} \frac{\partial (\delta u)}{\partial x_i} \right) dV, which must vanish for a stationary point. Applying integration by parts (the divergence theorem) to each gradient term yields \int_\Omega \sum_{i=1}^d \frac{\partial L}{\partial (\partial u / \partial x_i)} \frac{\partial (\delta u)}{\partial x_i} \, dV = \int_{\partial \Omega} \sum_{i=1}^d \frac{\partial L}{\partial (\partial u / \partial x_i)} n_i \, \delta u \, dS - \int_\Omega \sum_{i=1}^d \frac{\partial}{\partial x_i} \left( \frac{\partial L}{\partial (\partial u / \partial x_i)} \right) \delta u \, dV, where \mathbf{n} = (n_1, \dots, n_d) is the outward unit normal and dS the surface element. Since \delta u = 0 on \partial \Omega, the boundary integral vanishes, leaving the Euler–Lagrange equation upon invoking the fundamental lemma of the calculus of variations.

A classic example is the Dirichlet energy functional for harmonic functions, J = \int_\Omega \frac{1}{2} |\nabla u|^2 \, dV = \int_\Omega \frac{1}{2} \sum_{i=1}^d \left( \frac{\partial u}{\partial x_i} \right)^2 dV, corresponding to L = \frac{1}{2} \sum_{i=1}^d p_i^2 with p_i = \partial u / \partial x_i. Substituting into the Euler–Lagrange equation gives \partial L / \partial u = 0 and \partial L / \partial p_i = p_i, so - \sum_{i=1}^d \frac{\partial}{\partial x_i} \left( \frac{\partial u}{\partial x_i} \right) = -\Delta u = 0, the Laplace equation \Delta u = 0, whose solutions minimize the functional subject to fixed boundary values. If essential boundary conditions are not specified (i.e., \delta u \neq 0 on \partial \Omega), natural boundary conditions emerge from setting the surface integral to zero: \sum_{i=1}^d \frac{\partial L}{\partial (\partial u / \partial x_i)} n_i = 0 \quad \text{on} \quad \partial \Omega. For the Dirichlet energy this reduces to \partial u / \partial n = 0, the Neumann condition of zero normal derivative.
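
The Laplace-equation example can be reproduced symbolically in two dimensions. In the sketch below the symbols U, P1, P2 are my stand-ins for u and its partial derivatives with respect to x1 and x2.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u = sp.Function('u')
U, P1, P2 = sp.symbols('U P1 P2')          # stand-ins for u, u_x1, u_x2

L = (P1**2 + P2**2) / 2                    # Dirichlet energy density
subs = {U: u(x1, x2),
        P1: sp.Derivative(u(x1, x2), x1),
        P2: sp.Derivative(u(x1, x2), x2)}

eq = (sp.diff(L, U).subs(subs)
      - sp.diff(sp.diff(L, P1).subs(subs), x1)
      - sp.diff(sp.diff(L, P2).subs(subs), x2))
print(sp.Eq(eq, 0))                        # -u_{x1 x1} - u_{x2 x2} = 0, i.e. Laplace's equation
```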

Field theories

In field theories, particularly those describing physical systems in relativistic spacetime, the Euler–Lagrange framework is extended to functionals that depend on fields varying continuously over four-dimensional Minkowski space. The action functional for a scalar field \phi(x^\mu) is given by S[\phi] = \int L(\phi, \partial_\mu \phi, x^\mu) \, d^4 x, where L is the Lagrangian density, \partial_\mu denotes the partial derivative with respect to the spacetime coordinates x^\mu, and the integral is over the spacetime volume. This formulation arises naturally in classical field theory as the continuous analog of the particle action, extremizing which yields the equations of motion. The Euler–Lagrange equations for fields are derived by requiring \delta S = 0, leading to the field equations \frac{\partial L}{\partial \phi} - \partial_\mu \left( \frac{\partial L}{\partial (\partial_\mu \phi)} \right) = 0. This covariant form ensures Lorentz invariance, with the divergence \partial_\mu acting on the momenta conjugate to the field derivatives. For multiple fields or vector/tensor fields, the equation generalizes accordingly, maintaining the structure of balancing the explicit field dependence against the variation of its gradients.

A canonical example is the real scalar field, with Lagrangian density L = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \frac{1}{2} m^2 \phi^2. Applying the Euler–Lagrange equation yields the Klein–Gordon equation, (\partial_\mu \partial^\mu + m^2) \phi = 0, which describes massive spin-0 particles in relativistic quantum field theory. This equation captures relativistic wave propagation with a mass term and is foundational for scalar-field models in particle physics.

For vector fields, the electromagnetic field is described by the Lagrangian density L = -\frac{1}{4} F_{\mu\nu} F^{\mu\nu}, where F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu is the field strength tensor constructed from the four-potential A_\mu. The Euler–Lagrange equations produce the source-free form of the inhomogeneous Maxwell equations, \partial_\mu F^{\mu\nu} = 0, while the Bianchi identity \partial_\lambda F_{\mu\nu} + \partial_\mu F_{\nu\lambda} + \partial_\nu F_{\lambda\mu} = 0 encodes the homogeneous Maxwell equations. Including sources J^\mu modifies the first to \partial_\mu F^{\mu\nu} = J^\nu, fully recovering Maxwell's equations.

The covariant structure of these equations facilitates the derivation of conserved quantities via Noether's theorem. For translation invariance, the stress-energy tensor emerges as T^\mu{}_\nu = \frac{\partial L}{\partial (\partial_\mu \phi)} \partial_\nu \phi - \delta^\mu_\nu L, whose vanishing divergence \partial_\mu T^{\mu\nu} = 0 encodes energy-momentum conservation in flat spacetime. This tensor couples to gravity in general relativity, providing the source term for Einstein's field equations.

In theories like electromagnetism, the action exhibits local gauge invariance under transformations A_\mu \to A_\mu + \partial_\mu \Lambda, which leave F_{\mu\nu} unchanged and preserve the equations of motion. This redundancy implies that the Euler–Lagrange equations are underdetermined, requiring gauge-fixing constraints to select physical configurations, often leading to non-variational subsidiary conditions such as the Lorenz gauge \partial_\mu A^\mu = 0. Such invariances underpin broader frameworks like the Standard Model, where similar structures appear in non-Abelian gauge fields.
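
The Klein–Gordon derivation can be reproduced in a reduced setting. The sketch below works in 1+1 dimensions with signature (+, -), a simplification of my own, and applies the field Euler–Lagrange formula to L = (\phi_t^2 - \phi_x^2)/2 - m^2 \phi^2 / 2.

```python
import sympy as sp

t, x, m = sp.symbols('t x m', real=True)
phi = sp.Function('phi')
F, Ft, Fx = sp.symbols('F Ft Fx')          # stand-ins for phi, d_t phi, d_x phi

L = (Ft**2 - Fx**2) / 2 - m**2 * F**2 / 2  # Klein-Gordon Lagrangian density (1+1 D)
subs = {F: phi(t, x),
        Ft: sp.Derivative(phi(t, x), t),
        Fx: sp.Derivative(phi(t, x), x)}

eq = (sp.diff(L, F).subs(subs)
      - sp.diff(sp.diff(L, Ft).subs(subs), t)
      - sp.diff(sp.diff(L, Fx).subs(subs), x))
print(sp.Eq(sp.expand(-eq), 0))            # m^2*phi + phi_tt - phi_xx = 0 (Klein-Gordon)
```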

Advanced Extensions

Manifold generalization

The manifold generalization of the Euler–Lagrange equation provides a coordinate-invariant framework for variational problems on curved spaces, extending the classical theory to mappings between Riemannian manifolds. Consider a map f: (M, g) \to (N, h), where M and N are Riemannian manifolds equipped with metrics g and h, respectively. The associated functional is defined as J = \int_M L(f, df) \, \mathrm{vol}_g, where L is a Lagrangian density depending on the map f and its differential df, and \mathrm{vol}_g denotes the volume form induced by the metric g on M. To find the critical points of J, one considers variations of f given by sections of the pullback bundle f^* TN \to M. The first variation condition \delta J = 0 yields the intrinsic Euler–Lagrange equation, which in local coordinates (x^\mu) on M and (y^a) on N reads \frac{\partial L}{\partial f^a} - \nabla_\mu \left( \frac{\partial L}{\partial (\partial_\mu f^a)} \right) = 0, where \nabla is the Levi-Civita covariant derivative on M coupled with the connection on N via the pullback, and indices follow the Einstein summation convention. This form replaces ordinary partial derivatives with covariant ones to ensure tensorial invariance under coordinate changes.

The derivation proceeds by expressing the variation as f_t = f + t \xi, with \xi a section of f^* TN vanishing on the boundary if M has one, and computing \frac{d}{dt} \big|_{t=0} J[f_t] = 0. The resulting integral involves the derivative of L along \xi; after integration by parts, the divergence theorem on manifolds, \int_M \operatorname{div}(\cdot) \, \mathrm{vol}_g = \int_{\partial M} (\cdot), transfers the derivative term and yields the Euler–Lagrange equation with no remainder when \xi vanishes on \partial M.

A prominent special case arises when the Lagrangian is the energy density L = \frac{1}{2} g(df, df) = \frac{1}{2} g^{\mu\nu} h_{ab}(f) \partial_\mu f^a \partial_\nu f^b, corresponding to the energy functional for harmonic maps. When the domain M is one-dimensional, so that f describes a curve in N parametrized by \tau, the Euler–Lagrange equation reduces to the geodesic equation \nabla_{df} df = 0, or in coordinates \frac{d^2 f^a}{d\tau^2} + \Gamma^a_{bc}(f) \frac{df^b}{d\tau} \frac{df^c}{d\tau} = 0, where \Gamma^a_{bc} are the Christoffel symbols of h; the solutions are the geodesics of N.

If the domain M has a nonempty boundary and the variations \xi do not vanish there, natural boundary conditions emerge from the boundary term in the integration by parts, requiring \frac{\partial L}{\partial (\partial_\mu f^a)} n^\mu \xi^a = 0 on \partial M, where n is the outward unit normal; this natural (Neumann-type) condition enforces transversality at free boundaries.
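
As a coordinate illustration of the geodesic case, the sketch below (a 2-sphere target with the round metric, an example of my own choosing) computes the Christoffel symbols of h and assembles the geodesic equations for coordinates (theta, phi).

```python
import sympy as sp

tau = sp.symbols('tau')
theta, phi = sp.Function('theta'), sp.Function('phi')
q = [theta(tau), phi(tau)]
th, ph = sp.symbols('th ph')
coords = [th, ph]

h = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])      # round metric on the unit sphere
hinv = h.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) h^{ad} (d_c h_{db} + d_b h_{dc} - d_d h_{bc})
    return sum(hinv[a, d] * (sp.diff(h[d, b], coords[c])
                             + sp.diff(h[d, c], coords[b])
                             - sp.diff(h[b, c], coords[d])) / 2
               for d in range(2))

for a in range(2):
    accel = sp.diff(q[a], tau, 2)
    accel += sum(christoffel(a, b, c).subs(th, theta(tau))
                 * sp.diff(q[b], tau) * sp.diff(q[c], tau)
                 for b in range(2) for c in range(2))
    print(sp.Eq(sp.simplify(accel), 0))
# the printed system is theta'' - sin(theta)*cos(theta)*phi'^2 = 0 and
# phi'' + 2*cot(theta)*theta'*phi' = 0, the great-circle (geodesic) equations
```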

Infinite-dimensional variants

In infinite-dimensional settings, the Euler–Lagrange framework extends to functionals defined on spaces of functions, such as the Sobolev spaces W^{1,p}(\Omega), where minimizers are sought for variational problems involving partial differential equations (PDEs). These spaces provide the appropriate topology for handling weak solutions, ensuring compactness via embeddings like the Rellich–Kondrachov theorem, which facilitates the direct method in the calculus of variations for existence proofs. The Euler–Lagrange equation in this context adopts a weak form, obtained by setting the first variation of the functional J(u) = \int_\Omega L(x, u, \nabla u) \, dx to zero for test functions \delta u with compact support. This yields the integral condition \int_\Omega \left( \frac{\partial L}{\partial u} \delta u + \frac{\partial L}{\partial (\nabla u)} \cdot \nabla (\delta u) \right) dx = 0, which holds for all admissible variations and corresponds, after integration by parts and under sufficient regularity, to the strong-form Euler–Lagrange equation in the distributional sense. Such formulations are essential for problems where classical solutions may not exist, allowing solutions in the sense of distributions.

In optimal control theory, the Pontryagin maximum principle serves as a counterpart to the Euler–Lagrange equation for infinite-dimensional systems with control variables, maximizing the Hamiltonian along optimal trajectories in function spaces of state and adjoint variables. This principle provides necessary conditions for optimality in problems such as distributed-parameter systems, where the control enters the dynamics nonlinearly, analogous to how the Euler–Lagrange equation governs unconstrained variational paths.

A prominent example is the Plateau problem in infinite-dimensional spaces, where minimal surfaces spanning a given boundary are sought using the theory of currents. Here, integral currents generalize the notion of oriented surfaces, and the mass-minimizing current solving the variational problem satisfies an Euler–Lagrange-type equation in the weak sense, with existence ensured via compactness in the space of currents of bounded mass. Regularity theory for these weak solutions employs bootstrapping techniques to upgrade regularity: starting from u \in W^{1,2}(\Omega), higher integrability or Schauder estimates are applied iteratively using the structure of the Euler–Lagrange operator, often assuming ellipticity or growth conditions on L, to obtain classical C^\infty solutions in the interior. This process relies on elliptic regularity results tailored to the variational setting, confirming that weak minimizers are smooth where the data permit.
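
The weak form lends itself to a direct numerical treatment. The sketch below is a minimal Galerkin example of my own (a sine basis on (0, 1) and a constant load are arbitrary choices): imposing the weak Euler–Lagrange condition on a finite-dimensional subspace of W^{1,2}_0(0, 1) approximates the minimizer of \int (u'^2/2 - f u) \, dx, i.e. the weak solution of -u'' = f with zero boundary values.

```python
import numpy as np

N = 5                                      # number of sine modes in the trial space
f = lambda x: np.ones_like(x)              # constant load; exact solution u = x(1 - x)/2

x = np.linspace(0.0, 1.0, 2001)
basis = [np.sin((k + 1) * np.pi * x) for k in range(N)]
dbasis = [(k + 1) * np.pi * np.cos((k + 1) * np.pi * x) for k in range(N)]

def integrate(values):
    # trapezoidal rule on the fixed grid x
    return np.sum((values[1:] + values[:-1]) * np.diff(x)) / 2.0

# weak form: find u in span(basis) with integral(u' v' - f v) = 0 for each basis v
A = np.array([[integrate(dbasis[i] * dbasis[j]) for j in range(N)] for i in range(N)])
b = np.array([integrate(f(x) * basis[i]) for i in range(N)])
coeff = np.linalg.solve(A, b)

u = sum(c * v for c, v in zip(coeff, basis))
print(np.max(np.abs(u - x * (1.0 - x) / 2.0)))   # small error against the exact solution
```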
