
Total derivative

In multivariable calculus, the total derivative of a function \mathbf{f}: \mathbb{R}^m \to \mathbb{R}^n at a point \mathbf{a} is the best linear approximation to the change in \mathbf{f} near \mathbf{a}, represented as an n \times m matrix known as the Jacobian matrix. This matrix consists of all first-order partial derivatives of the component functions of \mathbf{f}, with the entry in row i and column j given by \frac{\partial f_i}{\partial x_j} evaluated at \mathbf{a}. Formally, \mathbf{f} is differentiable at \mathbf{a} if there exists a linear map D\mathbf{f}(\mathbf{a}) such that \lim_{\mathbf{h} \to \mathbf{0}} \frac{\| \mathbf{f}(\mathbf{a} + \mathbf{h}) - \mathbf{f}(\mathbf{a}) - D\mathbf{f}(\mathbf{a}) \mathbf{h} \|}{\| \mathbf{h} \|} = 0, where the limit condition ensures the approximation error vanishes faster than the perturbation size. Unlike partial derivatives, which measure change with respect to a single variable while holding others fixed, the total derivative accounts for simultaneous variations in all input variables, providing a complete local linear approximation of the function. For scalar-valued functions (n = 1), the total derivative reduces to the gradient vector \nabla f = \left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_m} \right), which points in the direction of steepest ascent and determines the tangent hyperplane to the graph. In vector-valued cases, it generalizes this to linear transformations between vector spaces, essential for analyzing systems like those in physics or engineering where multiple inputs and outputs interact. The total derivative underpins key theorems in multivariable calculus, including the chain rule, which composes derivatives as matrix products: if \mathbf{g}: \mathbb{R}^p \to \mathbb{R}^m and \mathbf{f}: \mathbb{R}^m \to \mathbb{R}^n, then D(\mathbf{f} \circ \mathbf{g})(\mathbf{b}) = D\mathbf{f}(\mathbf{g}(\mathbf{b})) \cdot D\mathbf{g}(\mathbf{b}). It also facilitates computations in optimization, where the Jacobian aids gradient-based methods, and in differential geometry, where it describes tangent spaces to manifolds. Properties such as linearity, e.g., D(\mathbf{f} + \mathbf{g}) = D\mathbf{f} + D\mathbf{g} and D(c\mathbf{f}) = c D\mathbf{f} for scalar c, make it a foundational tool for higher-dimensional analysis.
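As a minimal illustrative sketch (not part of the formal exposition; the sample function f(x, y) = x^2 + 3y^2 and the names below are chosen arbitrarily), the following Python snippet checks numerically that, among all unit directions, the finite-difference rate of change is largest along the normalized gradient:

```python
# Sketch: the gradient direction maximizes the directional derivative.
# Sample function (arbitrary choice): f(x, y) = x^2 + 3*y^2.
import numpy as np

def f(p):
    x, y = p
    return x**2 + 3*y**2

def grad_f(p):
    x, y = p
    return np.array([2*x, 6*y])  # analytic gradient of the sample f

a = np.array([1.0, 1.0])
g = grad_f(a)

# Approximate directional derivatives along many unit vectors by
# (f(a + eps*u) - f(a)) / eps for a small step eps.
eps = 1e-6
angles = np.linspace(0, 2*np.pi, 3600, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
rates = np.array([(f(a + eps*u) - f(a)) / eps for u in dirs])

print(dirs[np.argmax(rates)])   # close to the normalized gradient
print(g / np.linalg.norm(g))    # [0.316..., 0.948...]
```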

Definition and Basics

As a Linear Map

The total derivative of a multivariable function f: \mathbb{R}^n \to \mathbb{R}^m at a point a \in \mathbb{R}^n is defined as the unique linear map Df(a): \mathbb{R}^n \to \mathbb{R}^m that provides the best linear approximation to f near a. Specifically, f is differentiable at a if there exists such a linear map satisfying the condition \lim_{h \to 0} \frac{\|f(a + h) - f(a) - Df(a)(h)\|}{\|h\|} = 0, where \|\cdot\| denotes a norm on the respective spaces, ensuring the approximation error is negligible compared to the perturbation size. This definition assumes familiarity with linear maps between finite-dimensional spaces and the associated norms. This construction generalizes the single-variable derivative, where for f: \mathbb{R} \to \mathbb{R}, Df(a) is simply multiplication by the scalar f'(a), to higher dimensions by replacing the scalar with a linear operator that captures directional changes in all input variables. In matrix form, the total derivative is represented by the Jacobian matrix J_f(a), an m \times n matrix whose entries are the partial derivatives of the component functions of f, such that Df(a)(h) = J_f(a) h for any h \in \mathbb{R}^n. The partial derivatives thus serve as the components assembling this matrix. Geometrically, the total derivative provides the best linear approximation to the change in f near a, i.e., Df(a)(h) \approx f(a + h) - f(a), generalizing the single-variable case, where it gives the slope of the tangent line. For scalar-valued functions (m=1), this corresponds to the tangent hyperplane to the graph of f in \mathbb{R}^{n+1}.
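A short numerical sketch of the defining limit may clarify it; the vector-valued map below is an arbitrary example introduced only for this check, and the printed ratio should shrink roughly in proportion to \|h\|:

```python
# Sketch: verify that ||f(a+h) - f(a) - J h|| / ||h|| -> 0 as h -> 0
# for a sample map f: R^2 -> R^2 (chosen arbitrarily for illustration).
import numpy as np

def f(p):
    x, y = p
    return np.array([x**2 * y, np.sin(x) + y])

def jacobian(p):
    x, y = p
    return np.array([[2*x*y, x**2],
                     [np.cos(x), 1.0]])

a = np.array([1.0, 2.0])
J = jacobian(a)
u = np.array([3.0, -4.0]) / 5.0      # a fixed unit direction

for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    h = t * u
    ratio = np.linalg.norm(f(a + h) - f(a) - J @ h) / np.linalg.norm(h)
    print(f"||h|| = {t:.0e}, error ratio = {ratio:.2e}")  # decreases ~ O(||h||)
```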

Relation to Partial Derivatives

The total derivative of a function f: \mathbb{R}^n \to \mathbb{R}^m at a point a \in \mathbb{R}^n is represented by the Jacobian matrix J_f(a), whose entries are the partial derivatives of the component functions of f; specifically, the (i,j)-th entry is (J_f(a))_{ij} = \frac{\partial f_i}{\partial x_j}(a). In terms of linear maps, the total derivative Df(a) can be expressed as Df(a) = \sum_{j=1}^n \frac{\partial f}{\partial x_j}(a) \otimes e_j^*, where e_j^* are the dual basis vectors, or equivalently in coordinates, Df(a)(v) = J_f(a) v for v \in \mathbb{R}^n. For a scalar-valued function f: \mathbb{R}^n \to \mathbb{R}, the total derivative at a acts as the linear functional df(a): \mathbb{R}^n \to \mathbb{R} given by df(a)(v) = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(a) v_i, which in differential notation appears as df(a) = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(a) \, dx_i but represents solely the action of this linear functional on input vectors. Differentiability of f at a is defined by the existence of a linear map Df(a) satisfying \lim_{h \to 0} \frac{\|f(a+h) - f(a) - Df(a)h\|}{\|h\|} = 0, which implies that all partial derivatives exist at a. A sufficient condition for differentiability at a is that all partial derivatives exist in a neighborhood of a and are continuous at a; however, differentiability does not conversely imply continuity of the partials. As a concrete illustration, consider f(x,y) = x^2 + y at the point (1,1). The partial derivatives are \frac{\partial f}{\partial x} = 2x and \frac{\partial f}{\partial y} = 1, so Df(1,1) = \begin{pmatrix} 2 & 1 \end{pmatrix}. Applying this to a vector (h,k) yields Df(1,1)(h,k) = 2h + k. Unlike the directional derivative, which measures the rate of change along a specific direction via \frac{\partial f}{\partial u}(a) = \nabla f(a) \cdot u for a unit vector u and exists if the partials exist along that line, the total derivative demands the existence of all partial derivatives and the global linear approximation limit, ensuring it captures changes in all directions simultaneously.
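The concrete illustration above can be reproduced symbolically; the following SymPy sketch (an added check, with variable names chosen here) computes the Jacobian of f(x, y) = x^2 + y at (1, 1) and applies it to (h, k):

```python
# Sketch: Jacobian of f(x, y) = x**2 + y at (1, 1) and its action on (h, k).
import sympy as sp

x, y, h, k = sp.symbols('x y h k')
f = sp.Matrix([x**2 + y])
J = f.jacobian([x, y])                 # Matrix([[2*x, 1]])
J_at = J.subs({x: 1, y: 1})            # Matrix([[2, 1]])
print(J_at)
print(sp.expand((J_at * sp.Matrix([h, k]))[0]))  # 2*h + k
```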

Interpretations and Properties

As a Differential Form

In differential geometry, the total derivative of a smooth function f: M \to \mathbb{R} defined on a smooth manifold M is interpreted as the differential df, which is a 1-form on M. This 1-form is expressed locally in coordinates (x_1, \dots, x_n) as df = \sum_{i=1}^n \frac{\partial f}{\partial x_i} \, dx_i, where the dx_i are the coordinate basis 1-forms. As a section of the cotangent bundle T^*M, df assigns to each point p \in M a linear functional df_p: T_p M \to \mathbb{R} that represents the directional derivative of f at p along tangent vectors in the tangent space T_p M. The differential df possesses key properties arising from the exterior derivative operator d. For a smooth f, df is exact by construction, meaning df = d(f) where f is viewed as a 0-form, and it is closed since the exterior derivative satisfies d^2 = 0, implying d(df) = 0. This closedness is automatic for differentials of functions, but the exactness of df is fundamental, distinguishing it from general closed 1-forms, which need not be exact on non-contractible manifolds. For a smooth map f: M \to N between manifolds, the pullback operation provides a geometric counterpart of the total derivative through the induced map on forms: if \omega is a 1-form on N, then f^* \omega is the 1-form on M defined by (f^* \omega)_p(v) = \omega_{f(p)}(df_p(v)) for p \in M and v \in T_p M. In particular, the pullback commutes with the exterior derivative, so d(f^* \omega) = f^* (d \omega), preserving exactness and closedness properties. This framework extends the total derivative to compositions and transformations between manifolds. The utility of df as a 1-form is evident in integration theory. For a piecewise smooth path \gamma: [a, b] \to M from point A to B, the line integral \int_\gamma df = f(B) - f(A), generalizing the fundamental theorem of calculus to manifolds via Stokes' theorem in the special case where the 1-form is exact. Locally, if \gamma(t) = (x_1(t), \dots, x_n(t)), this becomes \int_a^b \sum_i \frac{\partial f}{\partial x_i} \frac{dx_i}{dt} \, dt. Under a change of coordinates, say from (x_i) to (y_j), the basis 1-forms transform via the pullback: dy_j = \sum_k \frac{\partial y_j}{\partial x_k} dx_k, reflecting the covariant transformation law of 1-forms with respect to the Jacobian matrix of the coordinate map. This ensures that df remains well-defined invariantly across charts.
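To make the path-independence of \int_\gamma df concrete, the following numerical sketch (the function f and the path \gamma are invented for this illustration) approximates the line integral with a trapezoid rule and compares it to f(B) - f(A):

```python
# Sketch: the line integral of df along a path equals f(B) - f(A).
import numpy as np

def f(p):
    x, y = p
    return x**2 * y + np.sin(y)

def grad_f(p):
    x, y = p
    return np.array([2*x*y, x**2 + np.cos(y)])

def gamma(t):        # a wiggly path from A = gamma(0) to B = gamma(1)
    return np.array([t + 0.3*np.sin(5*t), t**2])

def gamma_dot(t):    # its analytic velocity
    return np.array([1 + 1.5*np.cos(5*t), 2*t])

t = np.linspace(0.0, 1.0, 2001)
vals = np.array([grad_f(gamma(ti)) @ gamma_dot(ti) for ti in t])
integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t))  # trapezoid rule

print(integral)                          # ~ f(B) - f(A)
print(f(gamma(1.0)) - f(gamma(0.0)))     # matches up to discretization error
```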

Higher-Order Total Derivatives

The second total derivative of a scalar-valued function f: \mathbb{R}^n \to \mathbb{R} at a point a \in \mathbb{R}^n, denoted D^2 f(a), is defined as the derivative of the first total derivative Df, resulting in a symmetric bilinear map from \mathbb{R}^n \times \mathbb{R}^n to \mathbb{R}. For such functions, D^2 f(a) is represented by the Hessian matrix H_f(a), an n \times n symmetric matrix whose (i,j)-entry is the second partial derivative \frac{\partial^2 f}{\partial x_j \partial x_i}(a). This matrix form allows the second derivative to be expressed as D^2 f(a)(h, k) = k^T H_f(a) h for vectors h, k \in \mathbb{R}^n. In general, the k-th order total derivative D^k f(a) of a sufficiently differentiable function f at a is a symmetric multilinear map from (\mathbb{R}^n)^k to \mathbb{R}, given by D^k f(a)(h_1, \dots, h_k) = \sum_{j_1=1}^n \cdots \sum_{j_k=1}^n h_{1,j_1} \cdots h_{k,j_k} \frac{\partial^k f}{\partial x_{j_1} \cdots \partial x_{j_k}}(a), where each h_\ell = (h_{\ell,1}, \dots, h_{\ell,n}). This expression arises from the identification of higher derivatives with multilinear maps on the underlying vector space, capturing all mixed partial derivatives up to order k. The symmetry of D^k f(a) follows from Schwarz's theorem (also known as Clairaut's theorem), which states that if the second partial derivatives of f are continuous in a neighborhood of a, then the mixed partials commute, i.e., \frac{\partial^2 f}{\partial x_i \partial x_j}(a) = \frac{\partial^2 f}{\partial x_j \partial x_i}(a); this extends to higher orders under suitable continuity assumptions on the partials, ensuring D^k f(a) is invariant under permutations of its arguments. Higher-order total derivatives underpin the multivariable Taylor theorem, which provides a polynomial approximation of f near a: for a C^p function f, f(a + h) = \sum_{j=0}^p \frac{1}{j!} D^j f(a)(h, \dots, h) + R_{p,a}(h), where the remainder satisfies \|R_{p,a}(h)\| = o(\|h\|^p) as h \to 0, with the j-th term involving the j-th total derivative applied to j copies of h. For the second-order case, this yields the quadratic approximation f(a + h) \approx f(a) + Df(a) \cdot h + \frac{1}{2} h^T H_f(a) h. Consider the example f(x,y) = xy on \mathbb{R}^2. The first partials are \frac{\partial f}{\partial x} = y and \frac{\partial f}{\partial y} = x, so the second total derivative at (0,0) is represented by the Hessian H_f(0,0) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, with D^2 f(0,0)(h_1, h_2) = h_{1,y} h_{2,x} + h_{1,x} h_{2,y} for h_i = (h_{i,x}, h_{i,y}), symmetric by Schwarz's theorem since the mixed partials \frac{\partial^2 f}{\partial x \partial y} = 1 = \frac{\partial^2 f}{\partial y \partial x} are continuous everywhere.
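The Hessian example can be confirmed symbolically; in the following SymPy sketch (added for illustration), the bilinear form is expanded and is visibly symmetric under swapping h_1 and h_2:

```python
# Sketch: Hessian of f(x, y) = x*y and the bilinear form D^2 f(0,0)(h1, h2).
import sympy as sp

x, y = sp.symbols('x y')
H = sp.hessian(x*y, (x, y))      # Matrix([[0, 1], [1, 0]])
print(H)

h1 = sp.Matrix(sp.symbols('h1x h1y'))
h2 = sp.Matrix(sp.symbols('h2x h2y'))
form = sp.expand((h2.T * H * h1)[0])
print(form)                      # h1x*h2y + h1y*h2x, symmetric in h1 <-> h2
```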

Chain Rule and Computation

General Chain Rule Statement

The general chain rule for total derivatives in multivariable calculus states that if f: \mathbb{R}^n \to \mathbb{R}^m is differentiable at a point a \in \mathbb{R}^n and g: \mathbb{R}^m \to \mathbb{R}^p is differentiable at f(a), then the composition h = g \circ f: \mathbb{R}^n \to \mathbb{R}^p is differentiable at a, and the total derivative satisfies Dh(a) = Dg(f(a)) \circ Df(a). In matrix representation, this corresponds to the Jacobian matrix equation J_h(a) = J_g(f(a)) \, J_f(a), where J_f(a) is the m \times n Jacobian of f at a and J_g(f(a)) is the p \times m Jacobian of g at f(a). This formulation treats the total derivative as a linear map between vector spaces, composing via matrix multiplication. For scalar-valued functions, where p = 1, the chain rule specializes to the form \nabla h(a) = \nabla g(f(a))^\top J_f(a), obtained by multiplying the row vector \nabla g(f(a))^\top by the Jacobian of the inner function, generalizing the single-variable rule (u \circ v)'(x) = u'(v(x)) v'(x). This contrasts with chain rules expressed solely in partial derivatives, as the total derivative encapsulates the full linear map, ensuring consistency across vector-valued compositions. A proof relies on the definition of differentiability: a function is differentiable at a point if it admits a linear approximation with an error term vanishing faster than the input perturbation. Specifically, write f(a + v) = f(a) + Df(a) v + \epsilon_f(v), where \lim_{v \to 0} \|\epsilon_f(v)\| / \|v\| = 0, and similarly g(b + k) = g(b) + Dg(b) k + \epsilon_g(k) with \lim_{k \to 0} \|\epsilon_g(k)\| / \|k\| = 0. Substituting b = f(a) and k = Df(a) v + \epsilon_f(v) into the expansion of g(f(a + v)) yields h(a + v) = h(a) + [Dg(f(a)) Df(a)] v + higher-order terms, where the remainder satisfies the required limit condition for differentiability of h at a.
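The matrix form of the chain rule can be verified symbolically; the maps f and g in this SymPy sketch are arbitrary choices used only to exercise the identity J_{g \circ f} = J_g(f) \, J_f:

```python
# Sketch: check J_{g o f} = J_g(f(a)) * J_f symbolically for sample maps.
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
f = sp.Matrix([x*y, x + y**2])           # f: R^2 -> R^2
g = sp.Matrix([sp.sin(u) + v, u*v])      # g: R^2 -> R^2

Jf = f.jacobian([x, y])
Jg = g.jacobian([u, v])

comp = g.subs({u: f[0], v: f[1]})        # g o f, expressed in x and y
lhs = comp.jacobian([x, y])              # direct Jacobian of the composition
rhs = Jg.subs({u: f[0], v: f[1]}) * Jf   # chain-rule product

print(sp.simplify(lhs - rhs))            # zero matrix
```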

Direct Dependency Example

In cases of direct dependency, the total derivative captures the rate of change of a composite function whose intermediate variables explicitly depend on independent parameters. Consider z = f(x, y), with x = x(t) and y = y(t), where t is the independent parameter. The chain rule for the total derivative states that \frac{dz}{dt} = \frac{\partial f}{\partial x} \frac{dx}{dt} + \frac{\partial f}{\partial y} \frac{dy}{dt}. This formula arises from applying the total derivative of f along the parametric curve in the xy-plane, summing the contributions from each direction of change. To demonstrate the computation, take the specific functions f(x, y) = x^2 y, x(t) = t, and y(t) = \sin t. The goal is to find \frac{dz}{dt} at t = 0. First, compute the partial derivatives: \frac{\partial f}{\partial x} = 2xy, \quad \frac{\partial f}{\partial y} = x^2. Next, find the derivatives of the parameterizations: \frac{dx}{dt} = 1, \quad \frac{dy}{dt} = \cos t. Substitute into the chain rule: \frac{dz}{dt} = (2xy)(1) + (x^2)(\cos t) = 2xy + x^2 \cos t. Evaluate at t = 0, where x(0) = 0 and y(0) = \sin 0 = 0, with \cos 0 = 1: \frac{dz}{dt} \bigg|_{t=0} = 2(0)(0) + (0)^2 (1) = 0. This step-by-step process highlights the dot-product structure, in which the gradient of f at (x(t), y(t)) is dotted with the velocity vector \left( \frac{dx}{dt}, \frac{dy}{dt} \right) along the path. The result \frac{dz}{dt} = 0 at t = 0 indicates that, at this instant, the function z(t) is instantaneously stationary along the parametric path, even though z may change elsewhere; it reflects the combined effects of the direct dependencies on t. This total rate of change provides the instantaneous variation of z as t evolves, essential for analyzing motion or optimization in parameterized systems.
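The computation above can be double-checked with SymPy; this sketch compares the chain-rule expression against direct differentiation of z(t) = f(x(t), y(t)):

```python
# Sketch: dz/dt for f(x, y) = x**2 * y with x = t, y = sin(t).
import sympy as sp

t, x, y = sp.symbols('t x y')
f = x**2 * y
xt, yt = t, sp.sin(t)

chain = (sp.diff(f, x)*sp.diff(xt, t)
         + sp.diff(f, y)*sp.diff(yt, t)).subs({x: xt, y: yt})
direct = sp.diff(f.subs({x: xt, y: yt}), t)

print(sp.simplify(chain - direct))   # 0: the two computations agree
print(chain.subs(t, 0))              # 0, matching the evaluation at t = 0
```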

Indirect Dependency Example

In cases where the total derivative involves indirect dependencies through intermediate variables, the multivariable chain rule accounts for multiple paths of influence. Consider a function z = f(u, v), where u = g(x, y) depends on both independent variables x and y, and v = h(x) depends only on x. The total partial derivative of z with respect to x is then given by \frac{\partial z}{\partial x} = \frac{\partial f}{\partial u} \frac{\partial u}{\partial x} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial x}, while the partial derivative with respect to y simplifies to \frac{\partial z}{\partial y} = \frac{\partial f}{\partial u} \frac{\partial u}{\partial y}, since v does not depend on y. These expressions arise from summing the contributions along each dependency path in the non-parametric multivariable chain rule. To illustrate, take the specific functions z = f(u, v) = u^2 + v, u = g(x, y) = x y, and v = h(x) = x. First, compute the necessary partial derivatives: \frac{\partial f}{\partial u} = 2u, \frac{\partial f}{\partial v} = 1, \frac{\partial u}{\partial x} = y, \frac{\partial u}{\partial y} = x, and \frac{\partial v}{\partial x} = 1 (with \frac{\partial v}{\partial y} = 0). Substituting into the chain rule formulas yields \frac{\partial z}{\partial x} = (2u)(y) + (1)(1) = 2u y + 1 and \frac{\partial z}{\partial y} = (2u)(x) + (1)(0) = 2u x. Evaluating at the point (x, y) = (1, 1), where u = 1 \cdot 1 = 1 and v = 1, gives \frac{\partial z}{\partial x} \big|_{(1,1)} = 2(1)(1) + 1 = 3 and \frac{\partial z}{\partial y} \big|_{(1,1)} = 2(1)(1) = 2. This computation traces the indirect effects: the path through u affects both derivatives, while the path through v contributes only to \frac{\partial z}{\partial x}. The dependency structure can be visualized using a tree diagram, which highlights the indirect paths:
  • Root: z
    • Branches to: u (labeled \frac{\partial z}{\partial u}) and v (labeled \frac{\partial z}{\partial v})
  • From u: Branches to x (labeled \frac{\partial u}{\partial x}) and y (labeled \frac{\partial u}{\partial y})
  • From v: Branch to x (labeled \frac{\partial v}{\partial x})
This diagram underscores how the total derivative aggregates products along each branch leading to the independent variables, providing a clear map for non-parametric computations in multivariable settings.
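The branch products in the diagram can be checked mechanically; the following SymPy sketch recomputes both partials of the example and confirms the values 3 and 2 at (1, 1):

```python
# Sketch: indirect dependencies for z = u**2 + v, u = x*y, v = x.
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
f = u**2 + v
ux, vx = x*y, x                                # u(x, y) and v(x)

z = f.subs({u: ux, v: vx})                     # z written directly in x, y
zx_chain = (sp.diff(f, u)*sp.diff(ux, x)
            + sp.diff(f, v)*sp.diff(vx, x)).subs({u: ux})
zy_chain = (sp.diff(f, u)*sp.diff(ux, y)).subs({u: ux})

print(sp.simplify(zx_chain - sp.diff(z, x)))   # 0
print(sp.simplify(zy_chain - sp.diff(z, y)))   # 0
print(zx_chain.subs({x: 1, y: 1}), zy_chain.subs({x: 1, y: 1}))  # 3 2
```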

Applications

Total Differential Equations

A total differential equation is a first-order ordinary differential equation of the form P(x,y) \, dx + Q(x,y) \, dy = 0, where the equation represents the total differential df = 0 of some function f(x,y). Such an equation is solvable explicitly if it is exact, meaning there exists a function f(x,y) such that \frac{\partial f}{\partial x} = P and \frac{\partial f}{\partial y} = Q. The necessary and sufficient condition for exactness, assuming the partial derivatives are continuous, is \frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}. This condition ensures that the total differential df is well-defined and path-independent. To solve an exact equation, integrate P with respect to x to obtain f(x,y) = \int P \, dx + g(y), then differentiate with respect to y and set equal to Q to solve for g(y); the implicit solution is f(x,y) = c, where c is a constant. In relation to the total derivative, the equation df = 0 implies that along solution curves, the directional derivative of f in the direction of the curve is zero, meaning the gradient \nabla f = \left( P, Q \right) is orthogonal to the tangent vector of the path. Geometrically, the solution curves are the level curves of the potential function f, on which f remains constant. If the equation is not exact, an integrating factor \mu(x) that depends only on x may render it exact if \frac{ \frac{\partial P}{\partial y} - \frac{\partial Q}{\partial x} }{Q} = h(x), a function of x alone; then \mu(x) = \exp \left( \int h(x) \, dx \right). Similarly, an integrating factor \mu(y) depending only on y exists if \frac{ \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} }{P} = k(y), a function of y alone, with \mu(y) = \exp \left( \int k(y) \, dy \right). Multiplying the original equation by such a \mu yields an exact equation that can then be solved as above. For example, consider the equation (2x + y) \, dx + (x + 2y) \, dy = 0. Here, P = 2x + y and Q = x + 2y, and \frac{\partial P}{\partial y} = 1 = \frac{\partial Q}{\partial x}, so it is exact. Integrating P with respect to x gives f(x,y) = x^2 + xy + g(y); then \frac{\partial f}{\partial y} = x + g'(y) = x + 2y, so g'(y) = 2y and g(y) = y^2. Thus, f(x,y) = x^2 + xy + y^2 = c, and the solution curves are the level sets of this potential function.
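The worked example translates directly into a few lines of SymPy; this sketch verifies the exactness condition and reconstructs the potential function:

```python
# Sketch: solve the exact equation (2x + y) dx + (x + 2y) dy = 0.
import sympy as sp

x, y = sp.symbols('x y')
P = 2*x + y
Q = x + 2*y

print(sp.diff(P, y) == sp.diff(Q, x))   # True: the equation is exact

f = sp.integrate(P, x)                  # x**2 + x*y, up to a function g(y)
gprime = Q - sp.diff(f, y)              # g'(y) = 2*y
f = f + sp.integrate(gprime, y)         # add g(y) = y**2
print(f)                                # x**2 + x*y + y**2; solutions: f = c
```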

Systems of Equations and Implicit Functions

In the context of systems of equations, the total derivative plays a crucial role in analyzing implicit relationships defined by functions F: \mathbb{R}^{n+m} \to \mathbb{R}^m that satisfy F(x, u) = 0, where x \in \mathbb{R}^n are independent variables and u \in \mathbb{R}^m are dependent ones. The implicit function theorem provides conditions under which such a system locally defines u as a function of x. Specifically, if F is continuously differentiable and the matrix \frac{\partial F}{\partial u}(x_0, u_0) is invertible at a point (x_0, u_0) where F(x_0, u_0) = 0, then there exist neighborhoods around x_0 and u_0 such that u = g(x) for a unique continuously differentiable function g, with g(x_0) = u_0. This invertibility ensures the full rank of the Jacobian block \frac{\partial F}{\partial u}, guaranteeing local solvability and uniqueness of the implicit function. To find the total derivative of the implicit solution u = g(x), differentiate the equation F(x, g(x)) = 0 with respect to x. The total derivative yields dF = \frac{\partial F}{\partial x} \, dx + \frac{\partial F}{\partial u} \, du = 0, leading to \frac{du}{dx} = -\left( \frac{\partial F}{\partial u} \right)^{-1} \frac{\partial F}{\partial x}. This formula expresses the sensitivity of the dependent variables to changes in the independent ones, relying on the invertibility of the Jacobian \frac{\partial F}{\partial u} for well-definedness. For systems with multiple equations, the condition extends to the Jacobian matrix having full rank m, which is necessary and sufficient for local existence and uniqueness of the implicit function near the point of interest. A simple example illustrates this in two dimensions: consider the system defined by F(x, y) = x^2 + y^2 - 1 = 0, representing the unit circle. Assuming y > 0, the theorem applies since \frac{\partial F}{\partial y} = 2y \neq 0 at points on the upper semicircle. The total derivative is then \frac{dy}{dx} = -\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}} = -\frac{x}{y}, which matches the explicit derivative of y = \sqrt{1 - x^2}. This approach avoids solving for y explicitly and highlights how total derivatives capture the local slope of the constraint curve. In applications, such as economics, total derivatives from implicit systems derive response functions from equilibrium conditions. For instance, in a general equilibrium model where equations F(p, q) = 0 implicitly define quantities q as functions of prices p, the total derivative \frac{dq}{dp} = -\left( \frac{\partial F}{\partial q} \right)^{-1} \frac{\partial F}{\partial p} quantifies price elasticities, assuming the Jacobian \frac{\partial F}{\partial q} is invertible to ensure stable equilibria. Similarly, in physics, constraints like those in Lagrangian mechanics use these derivatives to express dependent coordinates in terms of independent ones, facilitating the analysis of motion under restrictions.
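For the circle example, implicit differentiation is easy to verify symbolically; this SymPy sketch compares -F_x / F_y with the explicit derivative on the upper semicircle:

```python
# Sketch: dy/dx = -F_x / F_y for F(x, y) = x**2 + y**2 - 1.
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.symbols('y', positive=True)       # upper semicircle: y > 0
F = x**2 + y**2 - 1

implicit = -sp.diff(F, x) / sp.diff(F, y)            # -x/y
explicit = sp.diff(sp.sqrt(1 - x**2), x)             # derivative of y(x)

print(implicit)                                      # -x/y
print(sp.simplify(implicit.subs(y, sp.sqrt(1 - x**2)) - explicit))  # 0
```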
