
Partial derivative

In multivariable calculus, a partial derivative measures the rate of change of a function of multiple variables with respect to one specific variable, while treating all other variables as constants. For a function f(x, y), the partial derivative with respect to x at a point (a, b), denoted \frac{\partial f}{\partial x}(a, b), represents how f changes as x varies near a, with y fixed at b. This concept generalizes the single-variable derivative and is essential for analyzing functions in higher dimensions, such as those arising in physics, economics, and engineering. The formal definition of the partial derivative \frac{\partial f}{\partial x} at (a, b) is \frac{\partial f}{\partial x}(a, b) = \lim_{h \to 0} \frac{f(a + h, b) - f(a, b)}{h}, provided the limit exists; a similar limit defines the partial derivative with respect to y. Computationally, it involves differentiating f as if the other variables were constants, using standard rules such as the product rule or chain rule. Geometrically, partial derivatives correspond to the slopes of tangent lines to the function's graph along axis-parallel directions, aiding in approximations via tangent planes. Partial derivatives underpin key applications, including linear approximations of multivariable functions, identification of local extrema through critical points where all first partials vanish, and the formation of the gradient vector, which points in the direction of steepest ascent. Higher-order partial derivatives, such as \frac{\partial^2 f}{\partial x \partial y}, describe curvatures and concavities; under continuity assumptions, mixed partials are equal by Clairaut's theorem, enabling the Hessian matrix to be used in second-order optimization tests. In fields such as thermodynamics and fluid dynamics, partials model rates such as heat flow or pressure changes while isolating specific influences. The notation \partial originated in the mid-18th century, with early developments traced to mathematicians like Leonhard Euler and Jean le Rond d'Alembert in the context of solving problems in mechanics and hydrodynamics.

Fundamentals

Definition

In multivariable calculus, the partial derivative measures the rate of change of a function with respect to one of its variables while treating all other variables as constants. This concept extends the familiar derivative from single-variable functions to functions of multiple variables, allowing analysis of how the function varies along specific directions in the domain. Consider a function f: \mathbb{R}^n \to \mathbb{R} defined on an open subset of \mathbb{R}^n. The partial derivative of f with respect to the i-th variable x_i at a point \mathbf{a} = (a_1, \dots, a_n) is given by the limit \frac{\partial f}{\partial x_i}(\mathbf{a}) = \lim_{h \to 0} \frac{f(\mathbf{a} + h \mathbf{e}_i) - f(\mathbf{a})}{h}, provided the limit exists, where \mathbf{e}_i is the i-th standard basis vector in \mathbb{R}^n with 1 in the i-th position and 0 elsewhere. This definition assumes familiarity with the concept of limits and the ordinary derivative from single-variable calculus. This formulation generalizes the single-variable derivative, where for a function g: \mathbb{R} \to \mathbb{R}, the derivative g'(a) = \lim_{h \to 0} \frac{g(a + h) - g(a)}{h} captures the instantaneous rate of change at a. In the multivariable case, the partial derivative isolates the contribution of one input variable by fixing the others, effectively reducing the problem to a one-dimensional derivative along the corresponding coordinate axis. Geometrically, the partial derivative \frac{\partial f}{\partial x_i}(\mathbf{a}) represents the slope of the tangent line to the curve obtained by intersecting the graph of f with the hyperplane where all variables except x_i are fixed at their values in \mathbf{a}. This slope lies within the tangent hyperplane to the graph at (\mathbf{a}, f(\mathbf{a})), providing insight into the function's local behavior along the i-th coordinate direction.
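The limit definition can be checked numerically with a difference quotient; the following sketch (the function f and the evaluation point are illustrative choices, not from the text above) approximates \partial f/\partial x_i by perturbing only the i-th coordinate:

```python
# Numerical illustration of the limit definition of a partial derivative:
# (f(a + h*e_i) - f(a)) / h with a small step h, holding the other
# coordinates fixed. Hypothetical helper and sample function.

def partial_derivative(f, a, i, h=1e-6):
    """Approximate df/dx_i at the point a via a forward difference quotient."""
    shifted = list(a)
    shifted[i] += h          # move only along the i-th coordinate axis
    return (f(*shifted) - f(*a)) / h

# Example: f(x, y) = x^2 * y; analytically df/dx = 2xy, df/dy = x^2.
f = lambda x, y: x**2 * y

print(round(partial_derivative(f, (3.0, 2.0), 0), 4))  # -> 12.0  (= 2*3*2)
print(round(partial_derivative(f, (3.0, 2.0), 1), 4))  # -> 9.0   (= 3^2)
```

Shrinking h brings the quotient closer to the true limit, though floating-point round-off limits how small h can usefully be.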

Notation

The notation for partial derivatives draws from established conventions in calculus, primarily adapting the Leibniz notation for ordinary derivatives. The most widely used form is \frac{\partial f}{\partial x}, where f is a function of multiple variables and the partial derivative is taken with respect to x while treating other variables as constants. This notation emphasizes the fractional aspect reminiscent of total derivatives but uses the distinctive \partial symbol to signify the partial nature of the operation. Alternative notations include the subscript form f_x, commonly employed for functions of several variables to denote the partial derivative with respect to x. Another variant is the operator notation D_x f, which treats the partial derivative as an application of the differential operator D_x to the function f. These forms are particularly useful in contexts requiring brevity, such as in proofs or when composing multiple derivatives. The \partial symbol was first used in 1770 by the Marquis de Condorcet in his "Mémoire sur les équations aux différences partielles" to denote partial differences. Adrien-Marie Legendre introduced the modern notation \frac{\partial u}{\partial x} in 1786 in his "Mémoire sur la manière de distinguer les maxima des minima dans le calcul des variations," though he later abandoned it. The notation was revived and popularized by Carl Gustav Jacob Jacobi in 1841, becoming a standard in mathematical analysis. For functions of multiple variables, indexed notations facilitate clarity, such as \frac{\partial}{\partial x_i} to denote the partial derivative with respect to the i-th variable x_i. In tensor calculus and related fields, the comma notation f_{,i} is conventional for the partial derivative \frac{\partial f}{\partial x^i}, often appearing in index notation for efficiency in expressions involving multiple indices.
A key distinction exists between \partial and the differential d: the latter denotes total derivatives, applicable to functions of a single variable or to situations where all variables are allowed to vary together (as in total differentials), whereas \partial specifically indicates differentiation with respect to one variable while holding the others fixed. Thus, d is reserved for contexts with no other independent variables to isolate, such as single-variable calculus, while \partial is used in multivariable settings to avoid ambiguity.

Computation and Examples

Basic Computation

To compute a partial derivative, treat all variables other than the one of interest as constants and apply the standard rules of differentiation from single-variable calculus. Consider the function f(x,y) = x^2 y + \sin(y). To find \partial f / \partial x, differentiate with respect to x while holding y constant: the term x^2 y yields 2xy by the power rule, and \sin(y) is constant with respect to x, so its derivative is zero. Thus, \partial f / \partial x = 2xy. For \partial f / \partial y, differentiate with respect to y while holding x constant: the term x^2 y yields x^2 by the power rule, and \sin(y) yields \cos(y) by the trigonometric derivative rule. Thus, \partial f / \partial y = x^2 + \cos(y). Now consider a function of three variables, f(x,y,z) = x y z. To compute \partial f / \partial x, treat y and z as constants: this yields y z. Similarly, \partial f / \partial y = x z and \partial f / \partial z = x y. Partial derivatives can be evaluated at specific points by substituting the coordinates into the resulting expression. For the function f(x,y) = x^2 y + \sin(y), at the point (1,0), \partial f / \partial x = 2(1)(0) = 0.
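These hand computations can be sanity-checked with central finite differences; a minimal sketch, with the step size h an arbitrary choice:

```python
import math

def dfdx(f, x, y, h=1e-6):
    """Central-difference approximation of df/dx, holding y fixed."""
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def dfdy(f, x, y, h=1e-6):
    """Central-difference approximation of df/dy, holding x fixed."""
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

f = lambda x, y: x**2 * y + math.sin(y)

# At (1, 0): analytically df/dx = 2xy = 0 and df/dy = x^2 + cos(y) = 2.
print(round(dfdx(f, 1.0, 0.0), 6))  # -> 0.0
print(round(dfdy(f, 1.0, 0.0), 6))  # -> 2.0
```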

Higher-Order Partial Derivatives

Higher-order partial derivatives arise when partial derivatives of a multivariable function are themselves differentiated with respect to one or more variables, extending the process beyond the first order. For a function f of two variables x and y, the second-order partial derivatives include the pure second partials \frac{\partial^2 f}{\partial x^2} and \frac{\partial^2 f}{\partial y^2}, as well as the mixed partial \frac{\partial^2 f}{\partial x \partial y}, which is obtained by differentiating first with respect to one variable and then the other. These derivatives measure rates of change of the first-order partials, providing information about curvature, concavity, and other higher-level behavior of the function. To illustrate computation, consider the function f(x,y) = x^3 y^2. The first partial derivative with respect to x is \frac{\partial f}{\partial x} = 3x^2 y^2. Differentiating this with respect to y yields the mixed second partial \frac{\partial^2 f}{\partial y \partial x} = 6x^2 y. Alternatively, starting with \frac{\partial f}{\partial y} = 2x^3 y and differentiating with respect to x gives \frac{\partial^2 f}{\partial x \partial y} = 6x^2 y, demonstrating that the order of differentiation does not matter when the relevant partial derivatives are continuous. For higher orders, notation generalizes accordingly: the nth-order pure partial with respect to a single variable x_i is denoted \frac{\partial^n f}{\partial x_i^n}, while mixed higher-order partials, such as a third-order one involving two differentiations with respect to x and one with respect to y, can be written as \frac{\partial^3 f}{\partial x^2 \partial y} or using subscript notation f_{xxy}. In the case of second-order partials, these are often arranged into the Hessian matrix, a square matrix whose entries are the second partial derivatives H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}.
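The equality of the two mixed partials for f(x, y) = x^3 y^2 can be confirmed numerically by differencing the analytic first partials in the remaining variable; the sample point below is an arbitrary choice:

```python
# Numerical check of equal mixed partials for f(x, y) = x^3 * y^2.
# The analytic first partials are f_x = 3x^2 y^2 and f_y = 2x^3 y;
# both differentiation orders should give 6 x^2 y.

def central(g, t, h=1e-6):
    """One-variable central difference quotient."""
    return (g(t + h) - g(t - h)) / (2 * h)

f_x = lambda x, y: 3 * x**2 * y**2
f_y = lambda x, y: 2 * x**3 * y

x0, y0 = 1.0, 2.0
f_xy = central(lambda y: f_x(x0, y), y0)   # d/dy of f_x
f_yx = central(lambda x: f_y(x, y0), x0)   # d/dx of f_y

print(round(f_xy, 4), round(f_yx, 4))  # -> 12.0 12.0  (= 6 * 1 * 2)
```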

Total Derivative

In multivariable calculus, the total derivative of a scalar-valued function f: \mathbb{R}^n \to \mathbb{R} at a point a \in \mathbb{R}^n is defined as the best linear approximation to the change in f near a, represented by a linear map Df(a): \mathbb{R}^n \to \mathbb{R}. Specifically, f is differentiable at a if there exists a linear map such that \lim_{\mathbf{h} \to \mathbf{0}} \frac{|f(a + \mathbf{h}) - f(a) - Df(a)(\mathbf{h})|}{\|\mathbf{h}\|} = 0, where Df(a)(\mathbf{h}) captures the first-order variation in all directions. For functions with continuous partial derivatives, this linear map is given by the dot product of the gradient vector \nabla f(a) with the increment vector \mathbf{h}, so Df(a)(\mathbf{h}) = \nabla f(a) \cdot \mathbf{h} = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(a) h_i. The total differential df formalizes this approximation as df = \sum_{i=1}^n \frac{\partial f}{\partial x_i} dx_i, where each dx_i represents an infinitesimal change in the input variables. Unlike a single partial derivative, which holds all other variables constant and measures change along one axis, the total derivative accounts for simultaneous variations in all variables, providing the full linear response of f to a multivariable increment. For vector-valued functions f: \mathbb{R}^n \to \mathbb{R}^m, the total derivative generalizes to the Jacobian matrix Df(a), an m \times n matrix whose entries are the partial derivatives \frac{\partial f_j}{\partial x_i}(a); in the scalar case (m=1), it reduces to the row vector of partials. The total derivative plays a central role in the chain rule for composite functions: if g: \mathbb{R}^m \to \mathbb{R} is differentiable at f(a) and f is differentiable at a, then D(g \circ f)(a) = Dg(f(a)) \circ Df(a), or in matrix form, the Jacobian of the composition is the product of the individual Jacobians. This extends the single-variable chain rule to multivariable settings, enabling computation of derivatives along paths or through function compositions.
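The defining property of the total derivative, that f(a + h) - f(a) matches \nabla f(a) \cdot h up to an error smaller than \|h\|, can be observed numerically; the function, point, and increment below are illustrative choices:

```python
# Sketch of the total derivative as the best linear approximation:
# for small h, f(a + h) - f(a) should agree with grad f(a) . h
# up to a remainder of order ||h||^2.
import math

f = lambda x, y: x**2 * y + math.sin(y)
grad_f = lambda x, y: (2 * x * y, x**2 + math.cos(y))  # analytic partials

a = (1.0, 0.5)
h = (1e-3, -2e-3)                        # small multivariable increment

actual = f(a[0] + h[0], a[1] + h[1]) - f(*a)
linear = sum(g * d for g, d in zip(grad_f(*a), h))

print(abs(actual - linear) < 1e-5)       # -> True: remainder is o(||h||)
```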

Gradient

The gradient of a scalar-valued function f: \mathbb{R}^n \to \mathbb{R}, denoted \nabla f, is defined as the vector whose components are the partial derivatives of f with respect to each variable: \nabla f(\mathbf{x}) = \left( \frac{\partial f}{\partial x_1}(\mathbf{x}), \frac{\partial f}{\partial x_2}(\mathbf{x}), \dots, \frac{\partial f}{\partial x_n}(\mathbf{x}) \right). This vector points in the direction of the greatest rate of increase of f at the point \mathbf{x}, and it is defined wherever the partial derivatives exist. Key properties of the gradient include its magnitude \|\nabla f(\mathbf{x})\|, which equals the rate of steepest ascent of f at \mathbf{x}, and the fact that \nabla f(\mathbf{x}) is orthogonal to the level surface of f passing through \mathbf{x}. These properties arise because the directional derivative of f in any direction \mathbf{u} (a unit vector) is maximized when \mathbf{u} aligns with \nabla f, and the level sets satisfy \nabla f \cdot d\mathbf{r} = 0 for tangent vectors d\mathbf{r}. For example, consider f(x, y) = x^2 + y^2. The gradient is \nabla f(x, y) = (2x, 2y), which at (1, 1) gives (2, 2) with magnitude \sqrt{8} \approx 2.828, indicating the steepest ascent rate there. The gradient connects to the total derivative of f at a point \mathbf{a} via the relation Df(\mathbf{a})(\mathbf{h}) = \nabla f(\mathbf{a}) \cdot \mathbf{h}, where Df(\mathbf{a}) is the total derivative and \mathbf{h} is an increment vector; this expresses the derivative as a dot product with the gradient.
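The worked example can be reproduced numerically by assembling the gradient from central-difference partials; the step size is an arbitrary choice:

```python
import math

f = lambda x, y: x**2 + y**2

def grad(f, x, y, h=1e-6):
    """Numerical gradient assembled from central-difference partials."""
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

g = grad(f, 1.0, 1.0)
print(round(g[0], 4), round(g[1], 4))   # -> 2.0 2.0, i.e. (2x, 2y) at (1, 1)
print(round(math.hypot(*g), 3))         # -> 2.828, the magnitude sqrt(8)
```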

Directional Derivative

The directional derivative of a scalar-valued multivariable function f: \mathbb{R}^n \to \mathbb{R} at a point a \in \mathbb{R}^n in the direction of a unit vector u \in \mathbb{R}^n with \|u\| = 1 is defined as D_u f(a) = \nabla f(a) \cdot u, where \nabla f(a) is the gradient vector of f at a. This measures the instantaneous rate of change of f along the line passing through a in the direction specified by u. When the direction u aligns with one of the standard basis vectors e_i (the i-th unit vector along the coordinate axes), the directional derivative reduces to the corresponding partial derivative: D_{e_i} f(a) = \frac{\partial f}{\partial x_i}(a). Thus, partial derivatives are special cases of directional derivatives restricted to axis-aligned directions, while the general form extends this concept to arbitrary directions in the domain. For example, consider the function f(x, y) = xy evaluated at (1, 1) in the direction of the unit vector u = \left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right). The gradient is \nabla f(x, y) = (y, x), so at (1, 1), \nabla f(1, 1) = (1, 1). The directional derivative is then D_u f(1, 1) = (1, 1) \cdot \left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right) = \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} = \sqrt{2}. For differentiable f, the directional derivative is linear in the direction u, meaning D_{c u + v} f(a) = c D_u f(a) + D_v f(a) for scalars c and vectors u, v (with the definition extended to non-unit vectors). Its maximum value at a is \|\nabla f(a)\|, achieved when u is parallel to the gradient \nabla f(a); conversely, it is zero when u is orthogonal to \nabla f(a).
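The worked example translates directly into a dot product between the gradient and the unit direction; a short sketch:

```python
import math

f = lambda x, y: x * y
grad = lambda x, y: (y, x)          # analytic gradient of f(x, y) = xy

def directional(gradient, u):
    """D_u f = grad f . u for a unit vector u."""
    return sum(g * c for g, c in zip(gradient, u))

u = (1 / math.sqrt(2), 1 / math.sqrt(2))
d = directional(grad(1.0, 1.0), u)
print(abs(d - math.sqrt(2)) < 1e-9)  # -> True: D_u f(1, 1) = sqrt(2)
```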

Properties

Symmetry of Mixed Partials

In multivariable calculus, Clairaut's theorem asserts that if a function f(x, y) of two variables has continuous partial derivatives f_x, f_y, f_{xy}, and f_{yx} in a neighborhood of a point (a, b), then the mixed second partial derivatives are equal at that point: f_{xy}(a, b) = f_{yx}(a, b).
This result, named after the French mathematician Alexis Clairaut, who first stated and sketched a proof of it in 1740, establishes a key property for sufficiently smooth functions.
A standard proof begins with the increment definition. Consider the difference f(a + h, b + k) - f(a + h, b) - f(a, b + k) + f(a, b). By the mean value theorem applied to the function g(t) = f(t, b + k) - f(t, b) on [a, a + h], there exists \xi between a and a + h such that g(a + h) - g(a) = h g'(\xi) = h f_x(\xi, b + k) - h f_x(\xi, b). Applying the mean value theorem again, to t \mapsto f_x(\xi, t) on [b, b + k], there exists \eta between b and b + k such that this equals h k f_{xy}(\xi, \eta). Repeating the process with the order of differentiation switched yields h k f_{yx}(\xi', \eta') for some \xi', \eta'. Dividing by h k and taking limits as h, k \to 0, continuity of the mixed partials ensures both limits equal the same value, so f_{xy}(a, b) = f_{yx}(a, b). Without the continuity assumption, the mixed partials may differ, as shown by the counterexample f(x, y) = \frac{xy(x^2 - y^2)}{x^2 + y^2} for (x, y) \neq (0, 0) and f(0, 0) = 0. The first partials f_x(0, 0) = 0 and f_y(0, 0) = 0 exist, but f_{xy}(0, 0) = -1 while f_{yx}(0, 0) = 1.
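The asymmetry of the counterexample can be observed numerically; this sketch uses two different step sizes (an inner step much smaller than the outer one, an ad hoc choice to keep the sample points off the diagonal where f vanishes):

```python
# Numerical check that the mixed partials of the classic counterexample
# disagree at the origin when continuity of the mixed partials fails.

def f(x, y):
    if x == 0.0 and y == 0.0:
        return 0.0
    return x * y * (x**2 - y**2) / (x**2 + y**2)

INNER, OUTER = 1e-8, 1e-4       # inner step << outer step

def f_x(x, y):                  # forward difference in x
    return (f(x + INNER, y) - f(x, y)) / INNER

def f_y(x, y):                  # forward difference in y
    return (f(x, y + INNER) - f(x, y)) / INNER

f_xy = (f_x(0.0, OUTER) - f_x(0.0, 0.0)) / OUTER   # differentiate f_x in y
f_yx = (f_y(OUTER, 0.0) - f_y(0.0, 0.0)) / OUTER   # differentiate f_y in x

print(round(f_xy, 2), round(f_yx, 2))  # -> -1.0 1.0
```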

Existence Conditions

The partial derivative of a multivariable function f: \mathbb{R}^n \to \mathbb{R} with respect to one variable, say the i-th coordinate, exists at a point \mathbf{a} if the corresponding one-variable limit exists, treating all other variables as fixed. Specifically, for f(x_1, \dots, x_n), the partial derivative \frac{\partial f}{\partial x_i}(\mathbf{a}) is defined as \lim_{h \to 0} \frac{f(a_1, \dots, a_i + h, \dots, a_n) - f(\mathbf{a})}{h}, provided this limit exists. This condition requires only that the function behaves differentiably along the coordinate axis parallel to x_i, without regard to behavior in other directions. However, the mere existence of all partial derivatives at \mathbf{a} does not guarantee that f is totally differentiable at \mathbf{a}, meaning the function may fail to have a linear approximation that works uniformly in all directions. A key distinction arises between the existence of partial derivatives and their continuity. Partial derivatives can exist at a point without being continuous there, illustrating that existence alone is a relatively weak condition. For instance, consider the function f(x,y) = \begin{cases} \frac{xy}{x^2 + y^2} & \text{if } (x,y) \neq (0,0), \\ 0 & \text{if } (x,y) = (0,0). \end{cases} The partial derivatives at the origin are \frac{\partial f}{\partial x}(0,0) = 0 and \frac{\partial f}{\partial y}(0,0) = 0, as the limits along the axes yield zero. Away from the origin, \frac{\partial f}{\partial x}(x,y) = \frac{y(y^2 - x^2)}{(x^2 + y^2)^2} and \frac{\partial f}{\partial y}(x,y) = \frac{x(x^2 - y^2)}{(x^2 + y^2)^2}, but these are discontinuous at (0,0) since, for example, approaching along the x-axis (y = 0) gives \frac{\partial f}{\partial x}(x,0) = 0, while along the y-axis (x = 0) it is \frac{\partial f}{\partial x}(0,y) = \frac{1}{y} for y \neq 0, which does not approach 0 as y \to 0.
This example shows that partial derivatives may exist everywhere yet fail to be continuous, and in this case, f is not even continuous at the origin. A stronger condition ensures total differentiability: if all partial derivatives exist in a neighborhood of \mathbf{a} and are continuous at \mathbf{a}, then f is totally differentiable at \mathbf{a}. This result, often called the differentiability theorem for multivariable functions, guarantees that the gradient (the 1 \times n Jacobian matrix) provides the best linear approximation near \mathbf{a}. The proof typically involves applying the mean value theorem to increments along each variable and using continuity of the partials to bound the remainder term, showing that the error in the linear approximation vanishes faster than the distance to \mathbf{a}. Functions satisfying this condition are denoted C^1 in a neighborhood, meaning they are continuously differentiable. In real analysis, the existence of partial derivatives represents a weaker prerequisite compared to full differentiability, allowing analysis of directional rates of change even when the function lacks a global linear approximation. This distinction is crucial for understanding phenomena like directional derivatives or gradients, where partials provide building blocks but require additional checks for broader properties like continuity or integrability. While partial existence suffices for many computations, such as tracing curves or surfaces, total differentiability is essential for theorems involving tangent planes or optimization.
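The gap between existence of partials and continuity can be demonstrated directly on the example above: both partials at the origin are zero, yet the function holds the value 1/2 along the whole line y = x, arbitrarily close to the origin.

```python
# f(x, y) = xy / (x^2 + y^2) with f(0, 0) = 0: partials exist at the
# origin, but f is not continuous there.

def f(x, y):
    if x == 0.0 and y == 0.0:
        return 0.0
    return x * y / (x**2 + y**2)

h = 1e-8
# Both partials at the origin exist and equal 0 (f vanishes on both axes):
print((f(h, 0.0) - f(0.0, 0.0)) / h)   # -> 0.0
print((f(0.0, h) - f(0.0, 0.0)) / h)   # -> 0.0

# Yet along the line y = x the value stays at 1/2 arbitrarily close to (0, 0):
print(f(1e-9, 1e-9))                   # -> 0.5
```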

Applications

Geometry and Vector Calculus

In multivariable calculus, partial derivatives provide a geometric interpretation for functions defining surfaces in three-dimensional space. For a surface given by z = f(x, y), the partial derivative \frac{\partial f}{\partial x} represents the slope of the tangent line to the curve obtained by fixing y and varying x, forming one component of the tangent vector to the surface along the x-direction. Similarly, \frac{\partial f}{\partial y} gives the slope in the y-direction. These partial derivatives thus serve as the components of the tangent vectors to the surface, enabling the construction of the tangent plane at any point on the surface. The gradient vector, formed from the partial derivatives \nabla f = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right), points in the direction of the steepest ascent on the surface and is perpendicular to the level curves (or level sets) of f, where the function value remains constant. This arises because the directional derivative in a direction tangent to a level set is zero, as the function value does not change along that path. In three dimensions, for a level surface F(x, y, z) = c, the gradient \nabla F is normal to the surface, reflecting that the function changes fastest in the direction perpendicular to the surface. In vector calculus, partial derivatives underpin key identities involving the del operator \nabla. The curl of the gradient of a scalar field f with continuous second partial derivatives is identically zero: \nabla \times \nabla f = \mathbf{0}, indicating that gradient fields are irrotational and can be derived from a potential. Additionally, the divergence of the product of a scalar field \phi and the gradient \nabla f follows a product rule: \nabla \cdot (\phi \nabla f) = \phi \Delta f + \nabla \phi \cdot \nabla f, where \Delta f is the Laplacian, combining second partial derivatives. These identities rely on the equality of mixed partials and facilitate theorems like the divergence theorem. For parametrized surfaces, partial derivatives play a crucial role in computing surface integrals, particularly flux integrals.
A surface S parametrized by \mathbf{r}(u, v) = (x(u,v), y(u,v), z(u,v)) has tangent vectors given by the partial derivatives \mathbf{r}_u = \frac{\partial \mathbf{r}}{\partial u} and \mathbf{r}_v = \frac{\partial \mathbf{r}}{\partial v}; their cross product \mathbf{r}_u \times \mathbf{r}_v yields a normal vector whose magnitude accounts for the surface area element in flux computations \iint_S \mathbf{F} \cdot d\mathbf{S} = \iint_D \mathbf{F}(\mathbf{r}(u,v)) \cdot (\mathbf{r}_u \times \mathbf{r}_v) \, du \, dv. This setup is essential for evaluating flux through oriented surfaces. A simple example illustrates the normal to a surface z = f(x,y). The surface can be viewed as the level set F(x,y,z) = f(x,y) - z = 0, so the gradient \nabla F = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, -1 \right) provides a normal vector at any point (x_0, y_0, f(x_0, y_0)), perpendicular to the tangent plane spanned by the partial derivative directions. This is used in applications like tangent plane equations or flux calculations.
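The two normal-vector constructions agree up to sign, which a short sketch makes concrete for the graph parametrization \mathbf{r}(u, v) = (u, v, f(u, v)); the sample surface and point are illustrative choices:

```python
# Normal to the surface z = f(x, y) via the cross product of the
# parametric tangent vectors r_u = (1, 0, f_x) and r_v = (0, 1, f_y).

f = lambda x, y: x**2 + y**2          # sample surface z = f(x, y)
fx = lambda x, y: 2 * x               # analytic partials
fy = lambda x, y: 2 * y

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

u0, v0 = 1.0, 2.0
r_u = (1.0, 0.0, fx(u0, v0))          # tangent vector along u
r_v = (0.0, 1.0, fy(u0, v0))          # tangent vector along v
n = cross(r_u, r_v)                   # normal: (-f_x, -f_y, 1)

print(n)  # -> (-2.0, -4.0, 1.0), the opposite of grad F = (f_x, f_y, -1)
```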

Optimization

In optimization of multivariable functions, partial derivatives play a central role in identifying and classifying critical points, where local maxima, minima, or saddle points may occur. A critical point of a function f(x, y) occurs at a point (x_0, y_0) in the interior of the domain where both partial derivatives vanish, i.e., \frac{\partial f}{\partial x}(x_0, y_0) = 0 and \frac{\partial f}{\partial y}(x_0, y_0) = 0. This condition is equivalent to setting the gradient \nabla f = \mathbf{0}, which leads to a system of (generally nonlinear) equations that must be solved to locate potential extrema. Assuming the partial derivatives exist and are continuous in a neighborhood of the point, every local extremum in the interior of the domain is a critical point. To classify these critical points, the second derivative test employs second-order partial derivatives through the Hessian matrix. For a function f(x, y), the Hessian determinant D at a critical point (x_0, y_0) is given by D = \frac{\partial^2 f}{\partial x^2} \frac{\partial^2 f}{\partial y^2} - \left( \frac{\partial^2 f}{\partial x \partial y} \right)^2, evaluated at (x_0, y_0), assuming the second partials are continuous. The test states: if D > 0 and \frac{\partial^2 f}{\partial x^2}(x_0, y_0) > 0, then (x_0, y_0) is a local minimum; if D > 0 and \frac{\partial^2 f}{\partial x^2}(x_0, y_0) < 0, it is a local maximum; if D < 0, it is a saddle point; and if D = 0, the test is inconclusive, requiring higher-order analysis. This method generalizes the one-variable second derivative test and relies on the definiteness of the quadratic form associated with the Hessian. Consider the function f(x, y) = x^2 + y^2. The partial derivatives are \frac{\partial f}{\partial x} = 2x and \frac{\partial f}{\partial y} = 2y, which both equal zero at the critical point (0, 0).
The second partials are \frac{\partial^2 f}{\partial x^2} = 2, \frac{\partial^2 f}{\partial y^2} = 2, and \frac{\partial^2 f}{\partial x \partial y} = 0, yielding D = (2)(2) - 0^2 = 4 > 0 and \frac{\partial^2 f}{\partial x^2}(0, 0) = 2 > 0, confirming a local (and global) minimum at (0, 0). For constrained optimization, where extrema of f(x, y) are sought subject to a constraint g(x, y) = c, the method of Lagrange multipliers uses partial derivatives to form the system \nabla f = \lambda \nabla g along with the constraint equation. Specifically, this yields \frac{\partial f}{\partial x} = \lambda \frac{\partial g}{\partial x}, \frac{\partial f}{\partial y} = \lambda \frac{\partial g}{\partial y}, and g(x, y) = c, solved simultaneously for x, y, and \lambda. This approach assumes \nabla g \neq \mathbf{0} at the extremum and identifies candidate points on the constraint surface.
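The second derivative test is mechanical enough to sketch directly; the classifier below takes the three second partials at a critical point (the second example, g(x, y) = x^2 - y^2, is an extra illustrative case):

```python
# Two-variable second derivative test at a critical point, given the
# second partials fxx, fyy, fxy evaluated there.

def classify(fxx, fyy, fxy):
    D = fxx * fyy - fxy**2        # Hessian determinant
    if D > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    if D < 0:
        return "saddle point"
    return "inconclusive"

# f = x^2 + y^2 at (0, 0): fxx = 2, fyy = 2, fxy = 0 -> D = 4 > 0
print(classify(2.0, 2.0, 0.0))    # -> local minimum
# g = x^2 - y^2 at (0, 0): fxx = 2, fyy = -2, fxy = 0 -> D = -4 < 0
print(classify(2.0, -2.0, 0.0))   # -> saddle point
```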

Physics and Engineering

In physics and engineering, partial derivatives play a crucial role in modeling systems that evolve over time and space, enabling the description of how quantities like temperature, pressure, and velocity change with respect to independent variables such as spatial coordinates and time. These derivatives form the basis of partial differential equations (PDEs) that govern fundamental physical laws, allowing for the analysis of complex phenomena from microscopic quantum processes to macroscopic fluid flows and heat conduction. In thermodynamics, the internal energy U is expressed as a function of entropy S and volume V, U = U(S, V). The first law, combined with the second law for reversible processes, yields the fundamental thermodynamic relation dU = T \, dS - P \, dV, where T = \left( \frac{\partial U}{\partial S} \right)_V is the temperature and P = -\left( \frac{\partial U}{\partial V} \right)_S is the pressure. These partial derivatives encapsulate how energy responds to changes in thermodynamic state variables while holding the other constant, providing a cornerstone for deriving other potentials like enthalpy and Gibbs free energy. In quantum mechanics, partial derivatives describe the time-dependent evolution of the wave function \psi(\mathbf{r}, t), which encodes the state of a system. The time-dependent Schrödinger equation is given by i \hbar \frac{\partial \psi}{\partial t} = \hat{H} \psi, where \hbar is the reduced Planck constant and \hat{H} is the Hamiltonian operator, typically including terms with spatial partial derivatives like -\frac{\hbar^2}{2m} \nabla^2. This partial derivative with respect to time governs the unitary evolution of the system, distinguishing it from the spatial derivatives that appear in the time-independent case for stationary states. Fluid dynamics relies on partial derivatives to capture the motion of viscous fluids through the Navier-Stokes equations.
For an incompressible fluid with velocity field \mathbf{v}(x, y, z, t), density \rho, pressure p, and kinematic viscosity \nu, the momentum equation is \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla) \mathbf{v} = -\frac{1}{\rho} \nabla p + \nu \nabla^2 \mathbf{v}, supplemented by the incompressibility condition \nabla \cdot \mathbf{v} = 0. Here, the material derivative combines the local time partial \frac{\partial \mathbf{v}}{\partial t} with convective terms, while the Laplacian \nabla^2 \mathbf{v} involves second-order spatial partial derivatives that model viscous diffusion. These equations, derived from Newton's second law applied to fluid elements, are nonlinear PDEs central to simulating flows in aerodynamics, weather prediction, and cardiovascular modeling. In engineering contexts, partial derivatives underpin the heat equation, which models diffusive heat transfer in solids and fluids. The equation is \frac{\partial u}{\partial t} = \alpha \nabla^2 u, where u(x, y, z, t) is the temperature field and \alpha is the thermal diffusivity. The time partial \frac{\partial u}{\partial t} represents the rate of temperature change, balanced by the spatial Laplacian \nabla^2 u, formed from second partial derivatives \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}, which arises from Fourier's law of heat conduction stating that heat flux is proportional to the negative temperature gradient. This PDE is solved in applications ranging from designing heat exchangers to predicting thermal stresses in materials.
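The interplay of the time partial and the spatial second partial can be illustrated with a minimal explicit finite-difference sketch of the 1D heat equation \partial u/\partial t = \alpha \, \partial^2 u/\partial x^2; the grid, diffusivity, and initial hot spot are arbitrary illustrative choices, with the time step respecting the explicit-scheme stability bound dt \le dx^2/(2\alpha):

```python
# Explicit finite-difference sketch of the 1D heat equation on a rod
# with fixed zero-temperature ends; second spatial partial approximated
# by the central stencil (u[i+1] - 2u[i] + u[i-1]) / dx^2.

alpha, dx, dt, steps = 1.0, 0.1, 0.004, 100   # dt/dx^2 * alpha = 0.4 <= 0.5
n = 11
u = [0.0] * n
u[n // 2] = 1.0                               # initial hot spot in the middle

for _ in range(steps):
    new = u[:]                                # ends stay fixed at 0
    for i in range(1, n - 1):
        new[i] = u[i] + alpha * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u = new

# Heat spreads out: the peak decays, and values stay within [0, 1].
print(max(u) < 1.0 and min(u) >= 0.0)         # -> True
```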

Economics and Other Fields

In economics, partial derivatives play a central role in analyzing consumer behavior through utility functions, which represent preferences over bundles of goods. For a utility function U(x, y) depending on quantities of two goods x and y, the partial derivative \frac{\partial U}{\partial x} measures the marginal utility of good x, or the additional utility gained from consuming one more unit of x while holding y constant. Similarly, \frac{\partial U}{\partial y} gives the marginal utility of good y. A prominent example is the Cobb-Douglas utility function U(x, y) = x^a y^b, where a > 0 and b > 0 are parameters reflecting the relative importance of each good. The partial derivative with respect to x is \frac{\partial U}{\partial x} = a x^{a-1} y^b, which diminishes as x increases (for a < 1), illustrating decreasing marginal utility. This form is widely used due to its tractability in deriving demand functions and elasticities. In production theory, partial derivatives quantify the marginal product of inputs in a firm's production function Q(L, K), where L is labor and K is capital. The marginal product of labor, \frac{\partial Q}{\partial L}, indicates the additional output from employing one more unit of labor while keeping capital fixed, aiding decisions on input allocation and cost minimization. For instance, in Cobb-Douglas production functions Q(L, K) = A L^\alpha K^\beta, the marginal product \frac{\partial Q}{\partial L} = \alpha A L^{\alpha-1} K^\beta typically exhibits diminishing returns as labor increases. Beyond economics, partial derivatives appear in image processing for resizing algorithms, particularly bilinear interpolation, which estimates pixel values at non-integer coordinates to scale images smoothly. This method approximates the image intensity surface using rates of change along the x and y directions from neighboring pixels, effectively performing linear interpolations sequentially to avoid blocky artifacts and preserve smoothness. The computation relies on finite differences to estimate these partials, ensuring the interpolated value lies within the range of surrounding pixels.
In machine learning, partial derivatives form the basis for gradient descent, an optimization algorithm used to train neural networks by iteratively adjusting weights to minimize a loss function. The gradient, comprising the partial derivatives of the loss with respect to each weight, points in the direction of steepest ascent, so parameters are updated in the opposite direction; these partials are computed efficiently by the backpropagation method.
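The update rule can be sketched on a toy two-parameter quadratic "loss"; the loss, learning rate, and iteration count are illustrative choices, not from any framework:

```python
# Minimal gradient descent sketch: step each parameter against its
# partial derivative of the loss. The minimum of this toy loss is
# at (w1, w2) = (3, -1).

def loss(w1, w2):
    return (w1 - 3.0)**2 + (w2 + 1.0)**2

def grad(w1, w2):
    """Partial derivatives of the loss with respect to each parameter."""
    return (2 * (w1 - 3.0), 2 * (w2 + 1.0))

w = [0.0, 0.0]
lr = 0.1                                      # learning rate
for _ in range(200):
    g = grad(*w)
    w = [w[0] - lr * g[0], w[1] - lr * g[1]]  # step against the gradient

print(round(w[0], 3), round(w[1], 3))  # -> 3.0 -1.0
```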

References

  1. [1]
    2. Partial Derivatives | Multivariable Calculus - MIT OpenCourseWare
    2. Partial Derivatives · They measure rates of change. · They are used in approximation formulas. · They help identify local maxima and minima.Multivariable Calculus · Second Derivative Test · 3. Double Integrals and Line...
  2. [2]
    Calculus III - Partial Derivatives - Pauls Online Math Notes
    Nov 16, 2022 · In this section we will the idea of partial derivatives. We will give the formal definition of the partial derivative as well as the ...
  3. [3]
    Partial Derivatives
    Definitions and Examples. Partial derivatives help us track the change of multivariable functions by dealing with one variable at a time. If we think of z=f(x, ...
  4. [4]
    [PDF] Lecture 9: Partial derivatives - Harvard Mathematics Department
    f(x, y) is defined as the derivative of the function g(x) = f(x, y), where y is considered a constant. It is called partial derivative of f with respect to x. ...
  5. [5]
    Partial derivative by limit definition - Math Insight
    The partial derivative of a function f(x,y) at the origin is illustrated by the red line that is tangent to the graph of f in the x direction.
  6. [6]
    2.3 Higher Order Derivatives
    The history of this important theorem is pretty convoluted. See “A note on the history of mixed partial derivatives” by Thomas James Higgins which was published ...
  7. [7]
    [PDF] Differentiation of Multivariable Functions - People
    By the definition of the ordinary derivative, the partial deriv- ative f0xi (r0) exists if and only if the derivative F0(ai) exists because. (19.1) f0xi (r0) ...
  8. [8]
    history of calculus of several variables - MathOverflow
    Jan 29, 2014 · it was Alexis Fontaine des Bertins (1705-71), Euler, Clairaut, and d'Alembert who created the theory of partial derivatives. Greenberg, John L.
  9. [9]
    2.1 First-Order Partial Derivatives
    The derivative plays a central role in first semester calculus because it provides important information about a function.<|control11|><|separator|>
  10. [10]
  11. [11]
    Partial Derivatives - Mathonline - Wikidot
    Just to list a few examples, $f_x (x, y) = \frac{\partial}{\partial x} f(x,y) = \frac{\partial z}{\partial x} = D_x f$. One special type of notation we will use ...
  12. [12]
    Earliest Uses of Symbols of Calculus - MacTutor
    However, the "curly d" was first used in the form ∂u/∂x by Adrien Marie Legendre in 1786 in his "Memoire sur la mani ...
  13. [13]
    [PDF] Partial Derivatives - MIT OpenCourseWare
    Written as ∂w/∂x, the partial derivative gives the rate of change of w with respect to x alone, at the point (x0,y0): it tells how fast w is increasing as x ...
  14. [14]
    14.3: Partial Derivatives - Mathematics LibreTexts
    Feb 5, 2025 · Definition: Partial Derivatives · d in the original notation is replaced with the symbol ∂. (This rounded "d" is usually called "partial," so ...
  15. [15]
    Partial derivative examples - Math Insight
    These examples show that calculating partial derivatives is usually just like calculating an ordinary derivative of one-variable calculus.
  16. [16]
    4.3 Partial Derivatives - Calculus Volume 3 | OpenStax
    Mar 30, 2016 · We can calculate a partial derivative of a function of three variables using the same idea we used for a function of two variables. For example, ...
  17. [17]
    Section - 10.2 First-Order Partial Derivatives - Active Calculus
    Thus, computing partial derivatives is straightforward: we use the standard rules of single variable calculus, but do so while holding one (or more) of the ...
  18. [18]
  19. [19]
    Partial Derivatives in Engineering Mathematics - GeeksforGeeks
    Sep 15, 2025 · ∂²f/∂x∂y or ∂²f/∂y∂x: The mixed partial derivatives. Example ... For f(x,y) = x³y² + 2xy, find ∂²f/∂x², ∂²f/∂y², and ∂²f/∂x ...
  20. [20]
    Hessian Matrix -- from Wolfram MathWorld
  21. [21]
    [PDF] Total derivatives Math 131 Multivariate Calculus
    We'll say f is differentiable if all its component func- tions are differentiable, and in that case, we'll take the derivative of f, denoted Df, ...
  22. [22]
  23. [23]
    [PDF] Contents 1. The Total Derivative 1 2. The Chain Rule 4 3. Multi ...
    The way to interpret this definition is that f′(a) is the "best linear approximation to f(x) at a" (or more precisely, the best linear approximation to f(x + a) ...
  24. [24]
    [PDF] linear maps, the total derivative and the chain rule
    Our definition of the total derivative should seem like a useful one, and it generalizes the cases we already had, but some questions remain. Chief among.
  25. [25]
    [PDF] Gradients Math 131 Multivariate Calculus
    Then, when a function is differentiable, we'll take the gradient, ∇f, which is the vector of partial derivatives, to be the derivative. Most common functions ...
  26. [26]
    Gradients - Department of Mathematics at UTSA
    Jan 20, 2022 · The gradient of f is defined as the unique vector field whose dot product with any vector v at each point x is the directional derivative of f along v.
  27. [27]
    The gradient vector - Math Insight
    The gradient vector, denoted as ∇f, is the derivative matrix of a scalar-valued function viewed as a vector, defined only for scalar-valued functions.
  28. [28]
    Calculus III - Gradient Vector, Tangent Planes and Normal Lines
    Nov 16, 2022 · This says that the gradient vector is always orthogonal, or normal, to the surface at a point. Also recall that the gradient vector is,. ∇f= ...
  29. [29]
    2.7 Directional Derivatives and the Gradient
    the unit vector giving the direction of maximum rate of increase is the unit vector in the direction of the gradient vector ⟨1, 2⟩, which is ...
  30. [30]
    [PDF] 3.3 Gradient Vector and Jacobian Matrix
    The gradient vector is the vector of derivatives for scalar functions, and the Jacobian matrix is the matrix of derivatives for vector-valued functions.
  31. [31]
    [PDF] Directional derivative and gradient vector (Sec. 14.6)
    Definition of directional derivative. • Directional derivative and partial derivatives. • Gradient vector. • Geometrical meaning of the gradient. Slide 2.
  32. [32]
    Calculus III - Directional Derivatives - Pauls Online Math Notes
    Nov 16, 2022 · In other words, we can write the directional derivative as a dot product and notice that the second vector is nothing more than the unit vector ...
  33. [33]
    Directional derivatives
    Definition: The Directional Derivative of f(x,y) at (a,b) in the direction u is defined by (Dᵤf)(a,b) = lim_{t→0} [f(a+th, b+tk) − f(a,b)]/t.
  34. [34]
    2.5 Directional Derivatives and the Gradient
    The partial derivatives of a function f tell us the rate of change of f in the direction of the coordinate axes. How can we measure the rate of change of f in ...
  35. [35]
    An introduction to the directional derivative and the gradient
    The result is called the directional derivative. The first step in taking a directional derivative, is to specify the direction. One way to specify a direction ...
  36. [36]
    [PDF] On the equality of mixed partial derivatives - Brooklyn College
    (A. C. Clairaut) Let f be a function of two variables, let (a, b) be a point, and let U be a disk with center (a, b). Assume that f is defined on U and its ...
  37. [37]
    [PDF] An Example With Unequal Mixed Partial Derivatives
    Here are the first derivatives: (1) For (x, y) ≠ (0, 0), we can use the quotient rule and simplify to obtain fₓ(x, y) = (−x⁴y − 4x²y³ + y⁵)/(x² + y²)². (2) For ( ...
  38. [38]
  39. [39]
    [PDF] Partial derivatives - UTK Math
    In multivariable calculus, if there is a notion of derivative that is to tell you the rate of change of the output f(x) as you change the input x, this ...
  40. [40]
    Discontinuous function for which partial derivatives exist
    If a function f(x,y) has continuous partial derivatives everywhere in the plane, then it is also continuous everywhere in the plane.
  41. [41]
    [PDF] Partial derivatives and differentiability (Sect. 14.3).
    the partial derivatives of a function f : R2 → R. ▻ There exist functions f : R2 → R such that fx(x0,y0) and fy(x0,y0) exist but f is not continuous at.
  42. [42]
    [PDF] Differentiation - CMU Math
    Note, however, it is possible for a function to be differentiable, and for the partial derivatives to exist and be discontinuous. Example 2.7. Let f : R2 → R be ...
  43. [43]
    [PDF] Partial Derivatives and Differentiability
    A partial derivative is the rate of change of a function as one moves away from a point in a direction. For two variables, differentiability requires both ...
  44. [44]
    Calculus III - Interpretations of Partial Derivatives
    Nov 16, 2022 · As with functions of single variables partial derivatives represent the rates of change of the functions as the variables change. As we saw in ...
  45. [45]
    Geometric Interpretation of Partial Derivatives
    Partial derivatives are the slope of tangent lines on a surface, showing how fast z changes with respect to x or y, and the slope of the tangent plane in those ...
  46. [46]
    The gradient - Ximera - Xronos
    Gradient vectors are always orthogonal to level sets. The fact that the gradient is always orthogonal to level surfaces is very powerful. In fact it gives ...
  47. [47]
    Calculus III - Curl and Divergence - Pauls Online Math Notes
    Nov 16, 2022 · If f(x,y,z) has continuous second order partial derivatives then curl(∇f) = 0. This is easy enough to check ...
  48. [48]
    4.1 Gradient, Divergence and Curl
    “Gradient, divergence and curl”, commonly called “grad, div and curl”, refer to a very widely used family of differential operators and related notations.
  49. [49]
    [PDF] Parametrized Surface Integrals
    Sep 8, 2025 · To figure out a normal on S, we first find two tangent vectors, which can be gotten by partial derivatives: ∂r/∂s = (∂rₓ/∂s) i + ...
  50. [50]
    16.6: Surface Integrals - Mathematics LibreTexts
    Jan 17, 2025 · Similarly, if S is a surface given by equation x = g(y,z) or equation y = h(x,z), then a parameterization of S is r(y,z) ...
  51. [51]
    Second Derivative Test -- from Wolfram MathWorld
    1. If D > 0 and f_xx(x₀, y₀) > 0, the point is a local minimum. · 2. If D > 0 and f_xx(x₀, y₀) < 0, the point is a local maximum. · 3. If D < 0, the point is a ...
  52. [52]
    Proofs - Math 22a Harvard College Fall 2018
    In dimension 2, it implies that we can find coordinates near a critical point so that the function is f(x,y) = x2+y2 or f(x,y) = -x2-y2 or then f(x,y) = -x2+y2.
  53. [53]
    Navier-Stokes Equations
    The equations were derived independently by G.G. Stokes, in England, and M. Navier, in France, in the early 1800's. The equations are extensions of the Euler ...
  54. [54]
    The Heat Equation - Pauls Online Math Notes
    Sep 5, 2025 · In this section we will do a partial derivation of the heat equation ... With Fourier's law we can easily remove the heat flux from this equation.
  55. [55]
    Economic interpretation of calculus operations - multivariate
    The partial derivative of utility with respect to consumption of good x can be interpreted as the marginal utility of x, or the amount of utility gained when a ...
  56. [56]
    [PDF] Chapter 4: Topics in Consumer Theory - Nolan H. Miller
    EXAMPLE: Cobb-Douglas Utility: A famous example of a homothetic utility function is the Cobb-Douglas utility function (here in two dimensions): u(x1,x2) ...
  57. [57]
    Use of partial derivatives in economics; some examples
    Partial derivatives are used to calculate marginal utility (MUx, MUy), marginal product (MPK, MPL), and slopes of indifference curves and isoquants.