
Multivariable calculus

Multivariable calculus is the extension of single-variable calculus to functions of several variables, encompassing differentiation, integration, and related techniques applied to higher-dimensional spaces. It provides mathematical tools essential for modeling and analyzing phenomena in fields such as physics, engineering, and economics, where quantities depend on multiple independent variables. At its core, multivariable calculus introduces partial derivatives, which measure how a function changes with respect to one variable while holding others constant, along with concepts like the chain rule, directional derivatives, and gradients for optimization and tangent planes to surfaces. These tools enable the study of functions from \mathbb{R}^n to \mathbb{R}, including level sets, extrema via Lagrange multipliers, and the geometry of curves and surfaces in three-dimensional space.

The integral calculus component generalizes to multiple integrals, such as double and triple integrals over regions in the plane or space, often computed using iterated integrals or coordinate transformations like polar, cylindrical, or spherical systems. These integrals quantify volumes, masses, and other accumulated quantities, with theorems like Fubini's allowing evaluation by successive single integrations. A significant aspect is vector calculus, which deals with vector fields, line integrals along curves, surface integrals over parametrized surfaces, and fundamental theorems including Green's theorem for planar regions, Stokes' theorem relating line and surface integrals, and the divergence theorem connecting flux through closed surfaces to volume integrals. These results unify differential and integral forms, facilitating applications in fluid dynamics, electromagnetism, and conservative fields. Overall, multivariable calculus forms a foundational framework for advanced mathematics and scientific computation, emphasizing both theoretical rigor and practical problem-solving.

Overview

Definition and Scope

Multivariable calculus is a branch of mathematics that extends the principles of single-variable calculus to functions involving two or more independent variables, generalizing concepts such as limits, derivatives, and integrals to higher-dimensional spaces. In this framework, functions map points in \mathbb{R}^n (where n \geq 2) to values in \mathbb{R} or \mathbb{R}^m, enabling the analysis of phenomena that depend on multiple inputs simultaneously. This generalization addresses the behavior of such functions over regions in multiple dimensions, building on foundational tools like vectors to represent points and directions in \mathbb{R}^n.

The scope of multivariable calculus includes key topics such as partial derivatives for measuring rates of change with respect to individual variables, multiple integrals for computing volumes and masses in higher dimensions, and the study of vector fields through line integrals, surface integrals, and theorems like Stokes' theorem and the divergence theorem. It contrasts with single-variable calculus by introducing complexities like path non-uniqueness in limits—where the approach to a point can vary along different directions—and the incorporation of higher-dimensional geometric objects, such as curves, surfaces, and manifolds, which require careful consideration of orientation and parametrization. These elements provide a rigorous toolkit for handling multivariable systems without the linear constraints of one-dimensional analysis.

Multivariable calculus holds profound importance in applied sciences, serving as a cornerstone for modeling real-world systems with interdependent variables, such as electromagnetic fields in physics, optimization of production functions in economics, and stress analysis in engineering structures. By enabling the quantification of gradients, fluxes, and extrema in multiple dimensions, it facilitates precise predictions and designs in these fields, revealing insights unattainable through single-variable methods alone. This field emerged in the 19th century through pivotal contributions from mathematicians including Carl Friedrich Gauss, who advanced surface theory and curvature, and Bernhard Riemann, who introduced concepts of n-dimensional manifolds, setting the stage for its formal development (see Historical Development for further details).

Historical Development

The foundations of multivariable calculus were established in the late 17th century through the independent development of single-variable calculus by Isaac Newton and Gottfried Wilhelm Leibniz, which provided the analytical tools necessary for extending differentiation and integration to functions of multiple variables. These early contributions focused primarily on one-dimensional problems in physics and geometry, but they set the stage for handling higher-dimensional phenomena by introducing concepts like limits, derivatives, and integrals that could be generalized. In the 18th century, Leonhard Euler advanced the field by incorporating multivariable ideas into his studies of fluid dynamics, formulating equations that described the motion of inviscid fluids using partial differential equations around the 1750s. Toward the late 1700s, Joseph-Louis Lagrange further developed partial derivatives as a key tool in analytical mechanics, applying them to optimize functions subject to constraints and laying groundwork for variational problems involving multiple variables.

The 19th century marked significant milestones, beginning with Carl Friedrich Gauss's 1827 paper on the theory of curved surfaces, which introduced intrinsic measures of curvature independent of embedding in higher-dimensional space. Augustin-Louis Cauchy contributed to the study of double integrals in the 1810s, examining issues with changing the order of integration in his 1814 memoir on definite integrals. Key theorems emerged soon after: George Green published his theorem in 1828, relating line integrals to area integrals for potential functions; George Gabriel Stokes stated his generalization in 1850, connecting surface integrals to line integrals on boundaries; and Gauss formulated the divergence theorem around 1813, publishing it in 1833, linking volume integrals to surface fluxes. Bernhard Riemann advanced integration and geometric theories in the 1850s through his work on complex functions and manifolds, influencing multivariable calculus. By the 1880s, Josiah Willard Gibbs and Oliver Heaviside independently developed vector calculus, systematizing operations like the gradient, divergence, and curl to unify these theorems in a vector framework.

In the 20th century, multivariable calculus was refined through abstract formulations in differential geometry and topology, with Bernhard Riemann's 1854 habilitation lecture influencing later generalizations to manifolds, and subsequent work by figures like Gregorio Ricci-Curbastro and Tullio Levi-Civita integrating tensor analysis for curved spaces in the early 1900s. These advancements, building on 19th-century foundations, enabled applications in general relativity and modern analysis by abstracting multivariable concepts to arbitrary dimensions without reliance on Euclidean coordinate assumptions.

Mathematical Foundations

Vectors and Vector Operations

In multivariable calculus, vectors in Euclidean space \mathbb{R}^n are defined as ordered n-tuples of real numbers, such as \mathbf{v} = (v_1, v_2, \dots, v_n) where each v_i \in \mathbb{R}. Geometrically, these vectors can be interpreted as points in n-dimensional space or as directed arrows originating from the origin, providing a foundation for representing positions and directions in higher dimensions.

Basic vector operations in \mathbb{R}^n include addition and scalar multiplication. For two vectors \mathbf{u} = (u_1, \dots, u_n) and \mathbf{v} = (v_1, \dots, v_n), their sum is \mathbf{u} + \mathbf{v} = (u_1 + v_1, \dots, u_n + v_n), which geometrically corresponds to the parallelogram law of vector addition. Scalar multiplication by a scalar c yields c\mathbf{u} = (c u_1, \dots, c u_n), scaling the vector's magnitude and possibly reversing its direction if c < 0. The dot product, also known as the inner product, is a fundamental operation that produces a scalar from two vectors \mathbf{a} = (a_1, \dots, a_n) and \mathbf{b} = (b_1, \dots, b_n), defined algebraically as \mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^n a_i b_i. The Euclidean norm, or length, of a vector \mathbf{a} is then given by \|\mathbf{a}\| = \sqrt{\mathbf{a} \cdot \mathbf{a}} = \sqrt{\sum_{i=1}^n a_i^2}, measuring the vector's magnitude in the Euclidean metric.

Linear combinations of vectors \mathbf{v}_1, \dots, \mathbf{v}_k in \mathbb{R}^n are formed as \sum_{i=1}^k c_i \mathbf{v}_i where c_i \in \mathbb{R}, and the span of these vectors is the set of all such combinations, forming a subspace of \mathbb{R}^n. A basis for \mathbb{R}^n is a linearly independent set of n vectors that spans the entire space; the standard basis consists of the unit vectors \mathbf{e}_i, where \mathbf{e}_1 = (1, 0, \dots, 0), \mathbf{e}_2 = (0, 1, \dots, 0), up to \mathbf{e}_n = (0, \dots, 0, 1). The Euclidean distance between two vectors \mathbf{x} and \mathbf{y} in \mathbb{R}^n is d(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\|, quantifying the straight-line separation in the space. The angle \theta between two nonzero vectors \mathbf{a} and \mathbf{b} satisfies \cos \theta = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\| \|\mathbf{b}\|}, with \theta ranging from 0 to \pi radians, allowing for the determination of vector orientations relative to one another.
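
These operations translate directly into code. The following NumPy sketch (an illustrative library choice, not part of the text above) computes the dot product, norms, angle, and distance for two sample vectors in \mathbb{R}^3:

```python
import numpy as np

# Two vectors in R^3 (any n works the same way).
a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

dot = np.dot(a, b)                    # sum of componentwise products = 11
norm_a = np.linalg.norm(a)            # Euclidean norm ||a|| = 3
norm_b = np.linalg.norm(b)            # ||b|| = 5
cos_theta = dot / (norm_a * norm_b)   # cos of the angle between a and b
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))  # angle in radians

dist = np.linalg.norm(a - b)          # Euclidean distance d(a, b)

print(dot, norm_a, norm_b, theta, dist)
```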

Multivariable Functions

In multivariable calculus, a function f: \mathbb{R}^n \to \mathbb{R}^m assigns to each point in its domain, a subset of the vector space \mathbb{R}^n, an output vector in the codomain \mathbb{R}^m. Such functions generalize single-variable functions by accepting multiple inputs, represented as vectors, and producing vector-valued outputs. For instance, a scalar-valued function maps to \mathbb{R} (so m=1), such as f(x,y) = x^2 + y^2, which computes the squared Euclidean distance from the origin for points (x,y) in \mathbb{R}^2. A vector-valued example is f(x,y) = (x+y, xy), which transforms a point in the plane into a pair of real numbers representing the sum and product of its coordinates.

The domain of a multivariable function is typically an open set in \mathbb{R}^n to facilitate analysis, while the range is the image of the domain under f, a subset of \mathbb{R}^m. Level sets provide a way to understand the function's behavior: for a scalar-valued function, the level set at value c is the set \{ \mathbf{x} \in \mathbb{R}^n \mid f(\mathbf{x}) = c \}, forming hypersurfaces such as curves in \mathbb{R}^2 or surfaces in \mathbb{R}^3. The graph of f, defined as \{ (\mathbf{x}, f(\mathbf{x})) \mid \mathbf{x} \in \mathrm{domain} \}, embeds the function as a hypersurface in \mathbb{R}^{n+m}. Continuity is a key property of such functions, verified through limits, ensuring small changes in inputs yield small changes in outputs.

Visualizations aid in interpreting multivariable functions. For scalar-valued functions from \mathbb{R}^2 to \mathbb{R}, contour plots display level curves in the domain plane, where each curve connects points of equal function value, similar to topographic maps. Vector-valued functions, particularly those mapping to \mathbb{R}^3, can be graphed as parametric surfaces, where the image traces a surface defined by parameters corresponding to the inputs.

Basic properties include composition and restrictions. If f: \mathbb{R}^n \to \mathbb{R}^k and g: \mathbb{R}^k \to \mathbb{R}^m, their composition g \circ f: \mathbb{R}^n \to \mathbb{R}^m is well-defined and follows the standard rule (g \circ f)(\mathbf{x}) = g(f(\mathbf{x})), preserving the vector structure. Restrictions of f to subspaces or lower-dimensional subsets of the domain yield functions with reduced input dimensions, maintaining the original codomain.
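
As a small illustration of level sets, the following Python sketch (the grid size and tolerance are arbitrary choices for this example) samples f(x,y) = x^2 + y^2 on a grid and collects the points lying approximately on the level set f = 1, which is the unit circle:

```python
import numpy as np

def f(x, y):
    """Scalar-valued function f: R^2 -> R from the text."""
    return x**2 + y**2

# Sample the domain on a grid and keep points near the level set f = 1.
xs = np.linspace(-1.5, 1.5, 301)
X, Y = np.meshgrid(xs, xs)
Z = f(X, Y)

tol = 0.02
on_level_set = np.abs(Z - 1.0) < tol   # approximate level curve x^2 + y^2 = 1
pts = np.column_stack([X[on_level_set], Y[on_level_set]])

# Every collected point lies (approximately) on the unit circle.
print(pts.shape, np.allclose(pts[:, 0]**2 + pts[:, 1]**2, 1.0, atol=tol))
```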

Limits and Continuity

Limits in Multiple Variables

In multivariable calculus, the limit of a function f: \mathbb{R}^n \to \mathbb{R}^m as the input \mathbf{x} approaches a point \mathbf{a} \in \mathbb{R}^n is defined using the \epsilon-\delta criterion adapted to vector norms. Specifically, \lim_{\mathbf{x} \to \mathbf{a}} f(\mathbf{x}) = \mathbf{L} if for every \epsilon > 0, there exists \delta > 0 such that 0 < \|\mathbf{x} - \mathbf{a}\| < \delta implies \|f(\mathbf{x}) - \mathbf{L}\| < \epsilon, where \|\cdot\| denotes a norm on \mathbb{R}^n or \mathbb{R}^m, such as the Euclidean norm. This definition ensures the function values approach \mathbf{L} uniformly in all directions near \mathbf{a}, regardless of the specific path taken, provided the input stays within the \delta-neighborhood excluding \mathbf{a} itself.

A key challenge in multivariable limits is path dependence, where the limiting value may differ depending on the approach to \mathbf{a}, indicating the limit does not exist. For instance, consider the scalar function f(x,y) = \frac{xy}{x^2 + y^2} as (x,y) \to (0,0). Along the x-axis (y=0), f(x,0) = 0, so the limit is 0; similarly along the y-axis (x=0), the limit is 0. However, along the line y=x, f(x,x) = \frac{x^2}{2x^2} = \frac{1}{2}, yielding a limit of \frac{1}{2}. Since these path limits disagree, \lim_{(x,y) \to (0,0)} f(x,y) does not exist. This example illustrates how restricting to linear paths (like axes) can misleadingly suggest existence, while curved or other paths reveal inconsistencies.

The sequential characterization provides an equivalent way to verify limits using sequences in \mathbb{R}^n. The limit \lim_{\mathbf{x} \to \mathbf{a}} f(\mathbf{x}) = \mathbf{L} holds if and only if, for every sequence \{\mathbf{x}_k\}_{k=1}^\infty in the domain of f with \mathbf{x}_k \neq \mathbf{a} for all k and \lim_{k \to \infty} \mathbf{x}_k = \mathbf{a}, it follows that \lim_{k \to \infty} f(\mathbf{x}_k) = \mathbf{L}. This criterion is particularly useful in metric spaces like \mathbb{R}^n, as it reduces the problem to checking sequence limits, which align with the single-variable case but account for the infinite possible directions in higher dimensions. To show non-existence, it suffices to find two sequences converging to \mathbf{a} along which f(\mathbf{x}_k) approaches different values, mirroring path dependence but in discrete terms.

The squeeze theorem extends to multivariable functions to establish existence when direct computation is difficult. Suppose g, f, h: \mathbb{R}^n \to \mathbb{R}^m are defined on an open set containing \mathbf{a}, except possibly at \mathbf{a}, with g(\mathbf{x}) \leq f(\mathbf{x}) \leq h(\mathbf{x}) (in a componentwise sense for vectors) for all \mathbf{x} near \mathbf{a}, and \lim_{\mathbf{x} \to \mathbf{a}} g(\mathbf{x}) = \lim_{\mathbf{x} \to \mathbf{a}} h(\mathbf{x}) = \mathbf{L}. Then \lim_{\mathbf{x} \to \mathbf{a}} f(\mathbf{x}) = \mathbf{L}. This adaptation relies on the same bounding principle as in one variable but applies it over neighborhoods in \mathbb{R}^n, often using inequalities involving norms to "squeeze" f between simpler functions whose limits are known. For scalar cases, such as bounding |f(\mathbf{x})| between 0 and a term approaching 0, it confirms limits like 0 along all paths.
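
The path-dependence phenomenon can be observed numerically. This Python sketch (the sample paths and step sizes are illustrative choices) evaluates f(x,y) = xy/(x^2 + y^2) along three different straight-line approaches to the origin:

```python
import numpy as np

def f(x, y):
    """f(x, y) = xy / (x^2 + y^2), undefined at the origin."""
    return x * y / (x**2 + y**2)

# Approach (0, 0) along different paths and watch the sampled values.
t = np.array([0.1, 0.01, 0.001, 0.0001])

along_x_axis = f(t, 0 * t)    # path (t, 0):  values -> 0
along_diag = f(t, t)          # path (t, t):  values -> 1/2
along_y_eq_2x = f(t, 2 * t)   # path (t, 2t): values -> 2/5

print(along_x_axis)    # [0. 0. 0. 0.]
print(along_diag)      # [0.5 0.5 0.5 0.5]
print(along_y_eq_2x)   # [0.4 0.4 0.4 0.4]
# Different limiting values along different paths => the limit does not exist.
```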

Continuity and Uniform Continuity

In multivariable calculus, a function f: D \subseteq \mathbb{R}^n \to \mathbb{R}^m, where D is a subset of \mathbb{R}^n, is said to be continuous at a point \mathbf{a} \in D if \lim_{\mathbf{x} \to \mathbf{a}} f(\mathbf{x}) = f(\mathbf{a}). This definition extends the single-variable notion by requiring that the function values approach f(\mathbf{a}) as the input \mathbf{x} approaches \mathbf{a} from any direction in the domain. The precise \epsilon-\delta formulation states that f is continuous at \mathbf{a} if for every \epsilon > 0, there exists \delta > 0 such that if \mathbf{x} \in D and \|\mathbf{x} - \mathbf{a}\| < \delta, then \|f(\mathbf{x}) - f(\mathbf{a})\| < \epsilon. A function is continuous on a set S \subseteq D if it is continuous at every point in S.

Basic algebraic operations and compositions preserve continuity for multivariable functions. Specifically, if f and g are continuous at \mathbf{a}, then so are the sum f + g, difference f - g, scalar multiple c f (for constant c), product f \cdot g, and quotient f / g (provided g is scalar-valued with g(\mathbf{a}) \neq 0). Moreover, if f is continuous at \mathbf{a} and g is continuous at f(\mathbf{a}), then the composition g \circ f is continuous at \mathbf{a}. These properties hold because limits respect these operations, mirroring the single-variable case but using vector norms.

Examples illustrate these concepts clearly. Multivariable polynomials, such as f(x, y) = x^2 + 3xy + y^2, are continuous everywhere in \mathbb{R}^2 since they are finite sums and products of the continuous coordinate functions and constants. Rational functions, like f(x, y) = \frac{x^2 + y^2}{x + y} for x + y \neq 0, are continuous on their domains where the denominator is nonzero, but discontinuous along the line x + y = 0 if extended without redefinition.

Uniform continuity strengthens the notion of continuity by requiring the \delta in the \epsilon-\delta definition to depend only on \epsilon, not on the location within the domain. Formally, f: D \to \mathbb{R}^m is uniformly continuous on D if for every \epsilon > 0, there exists \delta > 0 such that for all \mathbf{x}, \mathbf{y} \in D with \|\mathbf{x} - \mathbf{y}\| < \delta, \|f(\mathbf{x}) - f(\mathbf{y})\| < \epsilon. Every uniformly continuous function is continuous, but the converse does not hold on non-compact domains. A key result is that if f is continuous on a compact subset K \subseteq \mathbb{R}^n, then f is uniformly continuous on K; in \mathbb{R}^n, compact sets are precisely the closed and bounded ones by the Heine-Borel theorem. Counterexamples on open sets abound, such as f(x, y) = \frac{1}{\sqrt{x^2 + y^2}} on the punctured open unit disk \{(x, y) : 0 < x^2 + y^2 < 1\}, which is continuous but not uniformly continuous because it grows without bound, becoming arbitrarily steep near the origin.

Key Theorems on Limits and Continuity

In multivariable calculus, several fundamental theorems characterize the behavior of limits and continuity for functions from \mathbb{R}^n to \mathbb{R}^m. These results extend single-variable concepts to higher dimensions, relying on topological properties like compactness and connectedness to ensure well-behaved limits and continuous mappings. They provide essential tools for analyzing the existence of limits along different paths and the preservation of structural properties under continuous functions.

The Heine-Borel theorem establishes a criterion for compactness in Euclidean space, stating that a subset K \subset \mathbb{R}^n is compact if and only if it is closed and bounded. This equivalence holds because closed sets contain all their limit points, and bounded sets can be enclosed in a finite ball, ensuring every open cover has a finite subcover. In the context of limits, compactness implies sequential compactness, meaning every sequence in K has a convergent subsequence with limit in K. This theorem is pivotal for multivariable limits, as it allows uniform control over function behavior on such sets, preventing pathological discontinuities.

Building on compactness, the Bolzano-Weierstrass theorem asserts that every bounded sequence \{ \mathbf{x}_k \} in \mathbb{R}^n possesses a convergent subsequence. Unlike in infinite-dimensional spaces where boundedness alone may not suffice, in \mathbb{R}^n this follows from the finite-dimensional structure, where sequences can be extracted componentwise using the one-dimensional case. For limits of multivariable functions, this theorem guarantees that if a function f: \mathbb{R}^n \to \mathbb{R}^m approaches a value along a bounded path, there are accumulation points where the limit can be evaluated, aiding in the detection of path-dependent behaviors.

The extreme value theorem extends the single-variable result to higher dimensions: if f: K \to \mathbb{R} is continuous and K \subset \mathbb{R}^n is compact, then f attains its global maximum and minimum on K. Compactness ensures the image f(K) is also compact, hence closed and bounded in \mathbb{R}, so extrema exist without requiring differentiability. This theorem underpins optimization in multiple variables, confirming that continuous functions on closed and bounded domains, such as balls or rectangles, achieve their bounds, which is crucial for limit existence when restricting to compact subsets.

An analogue of the intermediate value theorem in multiple variables leverages connectedness: the continuous image of a connected set is connected. In \mathbb{R}^n, connected open sets are path-connected, so for a continuous path \gamma: [0,1] \to \mathbb{R}^n from \mathbf{a} to \mathbf{b}, the composition f \circ \gamma: [0,1] \to \mathbb{R} is continuous, and its image is an interval containing all values between f(\mathbf{a}) and f(\mathbf{b}) by the one-dimensional intermediate value theorem. More generally, for f: D \to \mathbb{R} continuous on a connected domain D \subset \mathbb{R}^n, f(D) is a connected subset of \mathbb{R}, i.e., an interval, ensuring no "jumps" in the range despite the higher-dimensional domain. This property supports the analysis of level sets and connectivity in limits.

Differentiation

Partial Derivatives

In multivariable calculus, partial derivatives measure the rate of change of a function with respect to one variable while holding all other variables constant. This concept extends the single-variable derivative to functions f: \mathbb{R}^n \to \mathbb{R}, allowing analysis of how the function varies along each coordinate direction independently. The partial derivative of f with respect to the i-th variable x_i at a point \mathbf{a} = (a_1, \dots, a_n) is formally defined as \frac{\partial f}{\partial x_i}(\mathbf{a}) = \lim_{h \to 0} \frac{f(a_1, \dots, a_{i-1}, a_i + h, a_{i+1}, \dots, a_n) - f(\mathbf{a})}{h}, provided the limit exists. Here, the function is evaluated along the line parallel to the x_i-axis passing through \mathbf{a}, mimicking the directional change in single-variable calculus but restricted to coordinate axes. Common notations for the partial derivative include f_{x_i} or D_i f. Higher-order partial derivatives, such as second-order ones, are obtained by successive differentiation; for instance, the mixed partial derivative is denoted f_{x_j x_i} or \frac{\partial^2 f}{\partial x_i \partial x_j}.

Geometrically, the partial derivative \frac{\partial f}{\partial x_i}(\mathbf{a}) represents the slope of the tangent line to the curve traced by the graph of f in the hyperplane where all variables except x_i are fixed at their values from \mathbf{a}. This slope indicates the instantaneous rate of change along that coordinate direction and contributes to approximating the graph of f near \mathbf{a} by a tangent hyperplane.

To compute partial derivatives, treat the function as depending on a single variable while regarding others as constants, then apply standard differentiation rules. For example, for f(x,y) = x^2 y, the partial derivative with respect to x is \frac{\partial f}{\partial x} = 2xy, found by differentiating x^2 with respect to x and holding y constant. Likewise, \frac{\partial f}{\partial y} = x^2. Higher-order partials include \frac{\partial^2 f}{\partial x^2} = 2y, \frac{\partial^2 f}{\partial y \partial x} = 2x, and \frac{\partial^2 f}{\partial y^2} = 0. If the relevant second partial derivatives are continuous at a point, Clairaut's theorem guarantees that the mixed partial derivatives are equal, so \frac{\partial^2 f}{\partial y \partial x} = \frac{\partial^2 f}{\partial x \partial y}. In the example above, this holds as both mixed partials equal 2x. This equality simplifies computations and holds under the continuity assumption for most practical functions in applications. These coordinate-axis limits relate briefly to path-dependent limits in multiple variables and form the components of the total derivative for broader linear approximations.
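
These computations can be checked symbolically. The following sketch uses SymPy (an illustrative tool choice) to reproduce the partial derivatives of f(x,y) = x^2 y and confirm that the mixed partials agree, as Clairaut's theorem predicts:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y

fx = sp.diff(f, x)       # df/dx = 2xy
fy = sp.diff(f, y)       # df/dy = x^2
fxx = sp.diff(f, x, 2)   # 2y
fyy = sp.diff(f, y, 2)   # 0
fxy = sp.diff(f, x, y)   # mixed partial (x first, then y): 2x
fyx = sp.diff(f, y, x)   # other order: also 2x

print(fx, fy, fxx, fyy)
print(sp.simplify(fxy - fyx) == 0)   # mixed partials agree (Clairaut)
```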

Total Derivative and Jacobian Matrix

In multivariable calculus, the total derivative of a function \mathbf{f}: \mathbb{R}^n \to \mathbb{R}^m at a point \mathbf{a} \in \mathbb{R}^n is defined as the unique linear map D\mathbf{f}(\mathbf{a}): \mathbb{R}^n \to \mathbb{R}^m that best approximates the change in \mathbf{f} near \mathbf{a}, satisfying \lim_{\mathbf{h} \to \mathbf{0}} \frac{\| \mathbf{f}(\mathbf{a} + \mathbf{h}) - \mathbf{f}(\mathbf{a}) - D\mathbf{f}(\mathbf{a}) \mathbf{h} \|}{\| \mathbf{h} \|} = 0. This linear map is represented by the Jacobian matrix J_{\mathbf{f}}(\mathbf{a}), an m \times n matrix whose (j,i)-th entry is the partial derivative \frac{\partial f_j}{\partial x_i}(\mathbf{a}), where f_j is the j-th component of \mathbf{f}. The partial derivatives thus form the entries of this matrix, integrating the directional sensitivities into a complete linear transformation.

The approximation provided by the total derivative is given by \mathbf{f}(\mathbf{a} + \mathbf{h}) \approx \mathbf{f}(\mathbf{a}) + J_{\mathbf{f}}(\mathbf{a}) \mathbf{h}, which captures the first-order behavior of \mathbf{f} for small \mathbf{h}. A function \mathbf{f} is differentiable at \mathbf{a} (meaning the total derivative exists) if all first-order partial derivatives exist in a neighborhood of \mathbf{a} and are continuous at \mathbf{a}; this sufficient condition ensures the limit defining differentiability holds.

For a scalar-valued function f: \mathbb{R}^n \to \mathbb{R}, the Jacobian matrix reduces to a 1 \times n row vector known as the gradient \nabla f(\mathbf{a}) = \left[ \frac{\partial f}{\partial x_1}(\mathbf{a}), \dots, \frac{\partial f}{\partial x_n}(\mathbf{a}) \right], which represents the direction of steepest ascent and the rate of change in \mathbb{R}^n. The invertibility of the Jacobian matrix at a point provides insight into the local behavior of \mathbf{f}; for square matrices (m = n), if \det J_{\mathbf{f}}(\mathbf{a}) \neq 0, the function is locally invertible, mapping a neighborhood of \mathbf{a} bijectively onto a neighborhood of \mathbf{f}(\mathbf{a}).

As an illustrative example, consider \mathbf{f}(x,y) = (x^2 + y, xy) from \mathbb{R}^2 \to \mathbb{R}^2. The Jacobian matrix is J_{\mathbf{f}}(x,y) = \begin{pmatrix} 2x & 1 \\ y & x \end{pmatrix}, obtained by computing the partial derivatives of each component.
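
The example Jacobian can be reproduced symbolically; the following SymPy sketch (the library choice and the evaluation point (1,1) are illustrative) also evaluates the determinant to probe local invertibility:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Matrix([x**2 + y, x * y])   # f(x, y) = (x^2 + y, xy)

J = F.jacobian([x, y])             # [[2x, 1], [y, x]]
detJ = J.det()                     # 2x^2 - y

print(J)
print(detJ)
# Local invertibility where det J != 0, e.g. at (1, 1): det = 1.
print(detJ.subs({x: 1, y: 1}))
```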

Chain Rule for Multivariable Functions

The chain rule for multivariable functions extends the single-variable chain rule to compositions of functions between Euclidean spaces, enabling the computation of derivatives of composite functions through matrix multiplication of their individual derivatives. Suppose g: \mathbb{R}^k \to \mathbb{R}^n is differentiable at a point \mathbf{a} \in \mathbb{R}^k and f: \mathbb{R}^n \to \mathbb{R}^m is differentiable at \mathbf{g}(\mathbf{a}). Then the composition \mathbf{h} = f \circ g: \mathbb{R}^k \to \mathbb{R}^m is differentiable at \mathbf{a}, and its total derivative is given by the matrix product D\mathbf{h}(\mathbf{a}) = Df(\mathbf{g}(\mathbf{a})) \cdot Dg(\mathbf{a}), where Df(\mathbf{g}(\mathbf{a})) is the m \times n Jacobian matrix of f evaluated at \mathbf{g}(\mathbf{a}), and Dg(\mathbf{a}) is the n \times k Jacobian matrix of g at \mathbf{a}. This formulation captures how infinitesimal changes in the input to g propagate through f, aligning with the linear approximation property of differentiability. A common special case arises when f is scalar-valued (m = 1), such as a function f: \mathbb{R}^n \to \mathbb{R} composed with a vector-valued path \mathbf{x}: \mathbb{R} \to \mathbb{R}^n, yielding F(t) = f(\mathbf{x}(t)). In this scenario, the chain rule simplifies to \frac{dF}{dt} = \nabla f(\mathbf{x}(t)) \cdot \frac{d\mathbf{x}}{dt}, where \nabla f is the gradient vector of f and \frac{d\mathbf{x}}{dt} is the velocity vector. This dot product form is particularly useful in applications like particle motion, where it relates the rate of change of a scalar quantity (e.g., distance from the origin) to the direction of motion. To establish the general result, consider the definition of differentiability: g is differentiable at \mathbf{a} if \mathbf{g}(\mathbf{a} + \mathbf{h}) = \mathbf{g}(\mathbf{a}) + Dg(\mathbf{a}) \mathbf{h} + \mathbf{e}_g(\mathbf{h}), where \|\mathbf{e}_g(\mathbf{h})\| / \|\mathbf{h}\| \to 0 as \mathbf{h} \to \mathbf{0}; similarly for f at \mathbf{g}(\mathbf{a}) with error \mathbf{e}_f. Substituting yields (f \circ g)(\mathbf{a} + \mathbf{h}) = f(\mathbf{g}(\mathbf{a}) + Dg(\mathbf{a}) \mathbf{h} + \mathbf{e}_g(\mathbf{h})) = f(\mathbf{g}(\mathbf{a})) + Df(\mathbf{g}(\mathbf{a})) (Dg(\mathbf{a}) \mathbf{h} + \mathbf{e}_g(\mathbf{h})) + \mathbf{e}_f(Dg(\mathbf{a}) \mathbf{h} + \mathbf{e}_g(\mathbf{h})). The linear term is Df(\mathbf{g}(\mathbf{a})) Dg(\mathbf{a}) \mathbf{h}, and the error term's norm is bounded by o(\|\mathbf{h}\|) using the properties of the individual errors and matrix norms, confirming differentiability of the composition. A practical example illustrates the rule in coordinate transformations, such as converting from Cartesian to polar coordinates, where x = r \cos \theta, y = r \sin \theta, and f: \mathbb{R}^2 \to \mathbb{R} is a function of x and y. The partial derivatives in polar variables are \frac{\partial f}{\partial r} = \frac{\partial f}{\partial x} \cos \theta + \frac{\partial f}{\partial y} \sin \theta, \quad \frac{\partial f}{\partial \theta} = -\frac{\partial f}{\partial x} (r \sin \theta) + \frac{\partial f}{\partial y} (r \cos \theta). These follow directly from applying the chain rule to the composition f(r \cos \theta, r \sin \theta). For instance, if f(x,y) = x^2 + y^2, then f(r,\theta) = r^2, and the formulas yield \frac{\partial f}{\partial r} = 2r and \frac{\partial f}{\partial \theta} = 0, consistent with direct computation. 
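
The polar-coordinate formulas can be verified symbolically. In this SymPy sketch (an illustrative setup, not part of the original text), the chain-rule expressions for \partial f/\partial r and \partial f/\partial \theta are compared against direct differentiation of the composition for f(x,y) = x^2 + y^2:

```python
import sympy as sp

x, y, r, theta = sp.symbols('x y r theta')
f = x**2 + y**2                     # function of Cartesian coordinates

# Composition with the polar map x = r cos(theta), y = r sin(theta).
sub = {x: r * sp.cos(theta), y: r * sp.sin(theta)}

# Chain rule: f_r = f_x cos(theta) + f_y sin(theta), and similarly for theta.
df_dr = (sp.diff(f, x) * sp.cos(theta)
         + sp.diff(f, y) * sp.sin(theta)).subs(sub)
df_dtheta = (sp.diff(f, x) * (-r * sp.sin(theta))
             + sp.diff(f, y) * (r * sp.cos(theta))).subs(sub)

# Direct computation on f(r cos(theta), r sin(theta)) = r^2 for comparison.
g = f.subs(sub)
print(sp.simplify(df_dr - sp.diff(g, r)))          # 0
print(sp.simplify(df_dtheta - sp.diff(g, theta)))  # 0
```
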
For more complex dependencies involving multiple intermediate variables, tree diagrams provide a visual aid to apply the chain rule systematically. In such a diagram, the dependent function (e.g., z = f(x,y)) is placed at the root, branching to its direct variables x and y, which then branch to independent variables (e.g., s and t, where x = g(s,t), y = h(s,t)). Each branch is labeled with the corresponding partial derivative: \frac{\partial z}{\partial x} along the path from z to x, \frac{\partial x}{\partial s} from x to s, and so on. To find \frac{\partial z}{\partial s}, sum the products of labels along all paths from s to z: \frac{\partial z}{\partial s} = \frac{\partial z}{\partial x} \frac{\partial x}{\partial s} + \frac{\partial z}{\partial y} \frac{\partial y}{\partial s}. This method ensures all dependency paths are accounted for without omission. For example, if z = e^{2r} \sin(3\theta) with r = st - t^2 and \theta = s^2 t, the tree diagram organizes the computation of \frac{\partial z}{\partial s} by tracing paths through r and \theta.

Advanced Differentiation Concepts

Directional Derivatives

In multivariable calculus, the directional derivative of a function f: \mathbb{R}^n \to \mathbb{R} at a point a \in \mathbb{R}^n in the direction of a unit vector u \in \mathbb{R}^n with \|u\| = 1 is defined as D_u f(a) = \lim_{h \to 0} \frac{f(a + h u) - f(a)}{h}, provided the limit exists. This measures the instantaneous rate of change of f at a along the line in the direction of u, generalizing the concept of the derivative from single-variable calculus to arbitrary directions in multiple dimensions.

The directional derivative relates directly to partial derivatives: if u = e_i is the i-th standard basis vector (with 1 in the i-th component and 0 elsewhere), then D_{e_i} f(a) = \frac{\partial f}{\partial x_i}(a). More generally, for a unit vector u = (u_1, \dots, u_n), the directional derivative can be expressed as the linear combination D_u f(a) = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(a) u_i, assuming f is differentiable at a. This shows how directional derivatives extend partial derivatives, which are special cases aligned with the coordinate axes, to any direction.

Consider the function f(x,y) = x^2 + y^2 at the origin (0,0). The partial derivatives are \frac{\partial f}{\partial x} = 2x and \frac{\partial f}{\partial y} = 2y, both zero at the origin. For the unit vector u = \left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right), the directional derivative is D_u f(0,0) = \frac{\partial f}{\partial x}(0,0) \cdot \frac{1}{\sqrt{2}} + \frac{\partial f}{\partial y}(0,0) \cdot \frac{1}{\sqrt{2}} = 0 \cdot \frac{1}{\sqrt{2}} + 0 \cdot \frac{1}{\sqrt{2}} = 0, reflecting the fact that the origin is a minimum point: the function increases in every direction away from it, yet the instantaneous rate of change at the point itself is zero. In general, if the total derivative exists at a, then all directional derivatives exist there as well.

The Gateaux derivative generalizes the directional derivative to functions between Banach spaces, requiring that the limit \lim_{t \to 0} \frac{f(x + t v) - f(x)}{t} exists for all directions v and defines a bounded linear operator on the direction space. This notion is particularly useful in functional analysis, where it captures directional sensitivity without assuming full differentiability.
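
A finite-difference check makes the connection concrete. In the Python sketch below, directional_derivative is a hypothetical helper (the name and step size h are arbitrary choices) that estimates D_u f(a) numerically and compares it with the gradient formula \nabla f(a) \cdot u:

```python
import numpy as np

def f(p):
    """f(x, y) = x^2 + y^2."""
    return p[0]**2 + p[1]**2

def directional_derivative(f, a, u, h=1e-6):
    """Central-difference estimate of D_u f(a) for a unit vector u."""
    a = np.asarray(a, dtype=float)
    u = np.asarray(u, dtype=float)
    return (f(a + h * u) - f(a - h * u)) / (2 * h)

a = np.array([1.0, 2.0])
u = np.array([1.0, 1.0]) / np.sqrt(2)   # unit direction

num = directional_derivative(f, a, u)
grad = np.array([2 * a[0], 2 * a[1]])   # gradient of f at a, analytically
print(num, grad @ u)                    # both ~ 6/sqrt(2) = 4.2426...
```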

Gradient and Higher-Order Derivatives

The gradient of a differentiable scalar function f: \mathbb{R}^n \to \mathbb{R} at a point is the vector whose components are the partial derivatives of f with respect to each variable, denoted as \nabla f(\mathbf{x}) = \left( \frac{\partial f}{\partial x_1}(\mathbf{x}), \dots, \frac{\partial f}{\partial x_n}(\mathbf{x}) \right). This vector provides a compact way to encode the first-order partial derivatives, facilitating computations in multivariable settings. For instance, the directional derivative of f at \mathbf{x} in the direction of a unit vector \mathbf{u} is given by the dot product D_{\mathbf{u}} f(\mathbf{x}) = \nabla f(\mathbf{x}) \cdot \mathbf{u}. A key property of the gradient is that it points in the direction of the steepest ascent of f, and its magnitude \|\nabla f(\mathbf{x})\| equals the maximum value of the directional derivative at that point. As an example, consider f(x, y) = x^2 + y^2; then \nabla f(x, y) = (2x, 2y), which at the origin is the zero vector, indicating a local minimum.

Higher-order derivatives in multivariable calculus extend partial differentiation beyond the first order, yielding tensors that capture curvature and interaction effects among variables. The second-order partial derivatives, or second partials, are defined as f_{x_i x_j} = \frac{\partial^2 f}{\partial x_j \partial x_i} for i, j = 1, \dots, n, representing the rate of change of the partial derivative f_{x_i} with respect to x_j. When i \neq j, these are mixed partials, and their order of differentiation often matters in computation but not in value under suitable conditions. Clairaut's theorem, also known as the symmetry of mixed partials, states that if f is defined on an open set and the second partial derivatives f_{x_i x_j} and f_{x_j x_i} are both continuous at a point, then f_{x_i x_j} = f_{x_j x_i} at that point. This equality simplifies calculations and holds for functions where the mixed partials exist and satisfy the continuity assumption, as proven using limits and the mean value theorem.

A special case of second partials is the Laplacian operator \Delta f, which sums the pure second partials along each variable: \Delta f = \sum_{i=1}^n \frac{\partial^2 f}{\partial x_i^2}. This scalar operator measures the average second derivative and appears in applications like diffusion equations, though here it serves as an illustrative higher-order construct. For the example f(x, y) = x^2 + y^2, the second partials are f_{xx} = 2, f_{yy} = 2, and f_{xy} = f_{yx} = 0 by Clairaut's theorem, yielding \Delta f = 4.
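
Both constructs are easy to compute symbolically; this SymPy sketch (an illustrative tool choice) builds the gradient and Laplacian of f(x,y) = x^2 + y^2 from the partial derivatives discussed above:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2

grad = [sp.diff(f, v) for v in (x, y)]              # (2x, 2y)
laplacian = sum(sp.diff(f, v, 2) for v in (x, y))   # f_xx + f_yy = 4

print(grad, laplacian)
print([g.subs({x: 0, y: 0}) for g in grad])         # zero vector at the origin
```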

Hessian Matrix and Taylor Expansions

In multivariable calculus, the Hessian matrix of a twice continuously differentiable scalar-valued function f: \mathbb{R}^n \to \mathbb{R} at a point a \in \mathbb{R}^n is the n \times n symmetric matrix H_f(a) whose (i,j)-th entry is the second partial derivative \frac{\partial^2 f}{\partial x_i \partial x_j}(a). This matrix captures the second-order behavior of the function, generalizing the second derivative in the single-variable case.

The Hessian plays a central role in the multivariable Taylor theorem, which provides a polynomial approximation of f near a. Specifically, if f is twice continuously differentiable at a, then for a small vector h \in \mathbb{R}^n, f(a + h) = f(a) + \nabla f(a) \cdot h + \frac{1}{2} h^T H_f(a) h + o(\|h\|^2) as \|h\| \to 0, where \nabla f(a) is the gradient and the quadratic form h^T H_f(a) h encodes the second-order term. The remainder is of higher order, ensuring the approximation's accuracy for small perturbations.

The eigenvalues of the Hessian determine its definiteness, which relates to the function's local curvature. At a critical point a (where \nabla f(a) = \mathbf{0}), if H_f(a) is positive definite (all eigenvalues positive), the quadratic form \frac{1}{2} h^T H_f(a) h > 0 for all h \neq 0, implying that a is a strict local minimum of f. Conversely, if negative definite (all eigenvalues negative), a is a strict local maximum.

Consider the function f(x,y) = x^2 + y^2. At the origin a = (0,0), the gradient vanishes, and the Hessian is H_f(0,0) = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}, which is positive definite since its eigenvalues are both 2. Thus, the second-order expansion f(0 + h) = 0 + 0 \cdot h + \frac{1}{2} h^T H_f(0,0) h + o(\|h\|^2) = \|h\|^2 + o(\|h\|^2) confirms that (0,0) is a local minimum, matching the global minimum of f.
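
The eigenvalue test can be carried out numerically. The sketch below (SymPy and NumPy are illustrative choices) forms the Hessian of f(x,y) = x^2 + y^2, evaluates it at the origin, and checks the sign of its eigenvalues:

```python
import numpy as np
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2

H = sp.hessian(f, (x, y))   # [[2, 0], [0, 2]]
H_at_origin = np.array(H.subs({x: 0, y: 0}).tolist(), dtype=float)

eigenvalues = np.linalg.eigvalsh(H_at_origin)   # symmetric -> eigvalsh
print(eigenvalues)                              # [2. 2.], all positive

if np.all(eigenvalues > 0):
    print("positive definite Hessian: strict local minimum")
```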

Multiple Integrals

Double and Triple Integrals

Double integrals extend the concept of single-variable integration to functions of two variables, allowing the computation of signed volumes under surfaces in three-dimensional space. For a function f(x, y) defined on a bounded region D in \mathbb{R}^2, the double integral \iint_D f(x, y) \, dA is defined as the limit of Riemann sums over partitions of D. Specifically, partition D into subregions with areas \Delta A_i, select sample points (x_i^*, y_i^*) in each subregion, form the sum \sum f(x_i^*, y_i^*) \Delta A_i, and take the limit as the maximum \Delta A_i approaches zero. This definition applies initially to rectangular regions but extends to more general bounded domains where f is continuous, ensuring the integral exists.

Key properties of double integrals mirror those of single integrals and hold for continuous functions over bounded regions. Linearity states that \iint_D [cf(x, y)] \, dA = c \iint_D f(x, y) \, dA for any constant c, and \iint_D [f(x, y) + g(x, y)] \, dA = \iint_D f(x, y) \, dA + \iint_D g(x, y) \, dA. Additivity over disjoint domains allows \iint_{D_1 \cup D_2} f(x, y) \, dA = \iint_{D_1} f(x, y) \, dA + \iint_{D_2} f(x, y) \, dA when D_1 and D_2 have no overlap. Monotonicity implies that if f(x, y) \geq g(x, y) on D, then \iint_D f(x, y) \, dA \geq \iint_D g(x, y) \, dA, with equality if f = g almost everywhere.

Representative applications include calculating volumes and average values. For instance, the volume under the graph of z = f(x, y) over a rectangular region D is given by \iint_D f(x, y) \, dA when f \geq 0. The average value of f over D is \frac{1}{|D|} \iint_D f(x, y) \, dA, where |D| denotes the area of D.

Triple integrals generalize this to functions of three variables over regions in \mathbb{R}^3, representing signed volumes or other accumulated quantities in space. For a function f(x, y, z) on a bounded solid region E, the triple integral \iiint_E f(x, y, z) \, dV is the limit of Riemann sums \sum f(x_i^*, y_i^*, z_i^*) \Delta V_i, where \Delta V_i are volumes of subregions partitioning E, and the limit is taken as the maximum \Delta V_i approaches zero. As with double integrals, continuity of f on a closed and bounded region E guarantees integrability. The properties—linearity, additivity over disjoint solids, and monotonicity for nonnegative functions—extend analogously from the double case.

Examples for triple integrals parallel those for doubles, such as the volume of a solid E given by \iiint_E 1 \, dV. The average value of f over E is \frac{1}{|E|} \iiint_E f(x, y, z) \, dV, where |E| is the volume of E. The multiple integral provides a theoretical foundation independent of coordinate order, while iterated integrals offer a practical computational method by successive single integrations, often facilitated by Fubini's theorem for continuous functions.
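
The Riemann-sum definition suggests a direct numerical approximation. This Python sketch (the grid resolution n is an arbitrary choice) approximates \iint_D (x+y) \, dA over the unit square with a midpoint sum, recovering the exact value 1:

```python
import numpy as np

def f(x, y):
    return x + y

# Midpoint Riemann sum for the double integral of f over [0,1] x [0,1].
n = 200
xs = (np.arange(n) + 0.5) / n   # midpoints of n subintervals
X, Y = np.meshgrid(xs, xs)
dA = (1.0 / n) ** 2             # area of each subrectangle

riemann_sum = np.sum(f(X, Y)) * dA
print(riemann_sum)              # -> 1.0, the exact value

# Average value of f over the square: integral / area of the domain.
print(riemann_sum / 1.0)
```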

Iterated Integrals and Fubini's Theorem

In multivariable calculus, iterated integrals provide a practical method for evaluating multiple integrals by reducing them to successive single-variable integrals. For a double integral over a region D in the xy-plane, the iterated integral treats the inner integral as a function of the outer variable. Specifically, for a Type I region where D = \{(x,y) \mid a \leq x \leq b, g(x) \leq y \leq h(x)\} with g and h continuous, the double integral is expressed as \iint_D f(x,y) \, dA = \int_a^b \int_{g(x)}^{h(x)} f(x,y) \, dy \, dx. This approach integrates first with respect to y, treating x as constant, yielding an antiderivative that is then integrated with respect to x. Similarly, for a Type II region D = \{(x,y) \mid c \leq y \leq d, p(y) \leq x \leq q(y)\}, the order reverses: \iint_D f(x,y) \, dA = \int_c^d \int_{p(y)}^{q(y)} f(x,y) \, dx \, dy. These forms are particularly useful for regions bounded by simple curves, allowing computation via fundamental theorem of calculus techniques.

Fubini's theorem justifies equating the double integral to the iterated integral and permits switching the order of integration under suitable conditions. Originally stated for continuous functions, the theorem asserts that if f(x,y) is continuous on a closed rectangle R = [a,b] \times [c,d], then \iint_R f(x,y) \, dA = \int_a^b \left( \int_c^d f(x,y) \, dy \right) dx = \int_c^d \left( \int_a^b f(x,y) \, dx \right) dy, where both iterated integrals exist and are equal. This result, proven by Guido Fubini in 1907, relies on the continuity of f on the compact set R, ensuring the Riemann sums converge appropriately.

The theorem extends beyond rectangles to more general regions. For a function f continuous on a Jordan-measurable set D (a bounded set whose boundary has measure zero), the double integral equals the iterated integral over the Type I or Type II description of D, provided the integrals converge absolutely. This generalization, building on Fubini's work, applies to regions like triangles or those bounded by smooth curves, facilitating computations in non-rectangular domains. Without absolute integrability, the order of iteration may affect the result for discontinuous functions, even if individually convergent, highlighting the theorem's conditional nature.

A simple example illustrates the theorem: consider \iint_D (x + y) \, dA over the unit square D = [0,1] \times [0,1]. Iterating first in y, \int_0^1 \int_0^1 (x + y) \, dy \, dx = \int_0^1 \left[ xy + \frac{y^2}{2} \right]_{y=0}^1 dx = \int_0^1 \left( x + \frac{1}{2} \right) dx = \left[ \frac{x^2}{2} + \frac{x}{2} \right]_0^1 = 1. Switching order yields the same value: \int_0^1 \int_0^1 (x + y) \, dx \, dy = \int_0^1 \left[ \frac{x^2}{2} + xy \right]_{x=0}^1 dy = \int_0^1 \left( \frac{1}{2} + y \right) dy = \left[ \frac{y}{2} + \frac{y^2}{2} \right]_0^1 = 1, confirming equality for this continuous integrand. For circular regions, such as the unit disk, Cartesian iterated integrals become cumbersome due to split limits, often motivating a change to polar coordinates in subsequent methods.
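
The worked example can be reproduced with a numerical integrator. The following sketch uses scipy.integrate.dblquad (an illustrative tool choice) to evaluate both orders of integration for (x+y) over the unit square, with matching results as Fubini's theorem guarantees:

```python
from scipy.integrate import dblquad

# dblquad integrates its first argument innermost: f(y, x) with y inner.
f = lambda y, x: x + y
I1, _ = dblquad(f, 0, 1, lambda x: 0, lambda x: 1)   # dy first, then dx

# Swap the roles of the variables to reverse the order of integration.
g = lambda x, y: x + y
I2, _ = dblquad(g, 0, 1, lambda y: 0, lambda y: 1)   # dx first, then dy

print(I1, I2)   # both 1.0
```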

Change of Variables and Jacobian Determinant

In multivariable calculus, the change of variables theorem provides a method to evaluate multiple integrals by transforming the coordinates of the integration domain, which simplifies the computation when the region or integrand is more naturally expressed in a new coordinate system. This theorem generalizes the substitution rule from single-variable calculus to higher dimensions and relies on the Jacobian determinant to account for the distortion of volumes or areas under the transformation.

Consider an invertible, continuously differentiable transformation g: U \to D, where U and D are open subsets of \mathbb{R}^n, mapping the parameter domain U onto the integration domain D. For an integrable function f: D \to \mathbb{R}, the formula states that \int_D f(\mathbf{x}) \, d\mathbf{x} = \int_U f(g(\mathbf{u})) \left| \det J_g(\mathbf{u}) \right| \, d\mathbf{u}, where \mathbf{x} = g(\mathbf{u}) and d\mathbf{x} denotes the area or volume element appropriate to dimension n. This holds under suitable regularity conditions, such as g being continuously differentiable and injective with a continuously differentiable inverse. The Jacobian determinant, \det J_g(\mathbf{u}), is the determinant of the Jacobian matrix J_g(\mathbf{u}), which consists of the partial derivatives of the components of g. Explicitly, \det J_g(\mathbf{u}) = \frac{\partial(x_1, \dots, x_n)}{\partial(u_1, \dots, u_n)} = \det \begin{pmatrix} \frac{\partial x_1}{\partial u_1} & \cdots & \frac{\partial x_1}{\partial u_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial x_n}{\partial u_1} & \cdots & \frac{\partial x_n}{\partial u_n} \end{pmatrix}, and the absolute value ensures the area or volume scaling factor remains positive, reflecting the oriented scaling factor of the transformation. The Jacobian matrix arises from the total derivative in multivariable differentiation, capturing how infinitesimal changes in \mathbf{u} map to changes in \mathbf{x}.

A classic example in two dimensions is the transformation to polar coordinates, where x = r \cos \theta and y = r \sin \theta, with r \geq 0 and \theta \in [0, 2\pi). The Jacobian matrix is J = \begin{pmatrix} \cos \theta & -r \sin \theta \\ \sin \theta & r \cos \theta \end{pmatrix}, so \det J = r (\cos^2 \theta + \sin^2 \theta) = r, and |\det J| = r since r \geq 0. Thus, a double integral over a region D in the xy-plane becomes \iint_D f(x, y) \, dA = \int_{\theta} \int_r f(r \cos \theta, r \sin \theta) \, r \, dr \, d\theta, where the limits for r and \theta are adjusted to cover D. This is particularly useful for regions with circular symmetry, such as disks or annuli.

In three dimensions, spherical coordinates provide another standard transformation: x = \rho \sin \phi \cos \theta, y = \rho \sin \phi \sin \theta, z = \rho \cos \phi, with \rho \geq 0, \phi \in [0, \pi], and \theta \in [0, 2\pi). The Jacobian determinant computation yields |\det J| = \rho^2 \sin \phi, so a triple integral over a region W simplifies to \iiint_W f(x, y, z) \, dV = \int_{\theta} \int_{\phi} \int_{\rho} f(\rho \sin \phi \cos \theta, \rho \sin \phi \sin \theta, \rho \cos \phi) \, \rho^2 \sin \phi \, d\rho \, d\phi \, d\theta. This change is essential for integrating over spheres, cones, or other rotationally symmetric volumes in physics and applications.
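
Both Jacobian determinants can be derived symbolically. This SymPy sketch (an illustrative choice; the positive-symbol assumptions stand in for r, \rho \geq 0 and 0 \leq \phi \leq \pi) recovers the factors r and \rho^2 \sin \phi used above:

```python
import sympy as sp

r, theta, rho, phi = sp.symbols('r theta rho phi', positive=True)

# Polar coordinates: (x, y) = (r cos(theta), r sin(theta)).
polar = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])
J_polar = polar.jacobian([r, theta])
print(sp.simplify(J_polar.det()))   # r

# Spherical coordinates: x = rho sin(phi) cos(theta), etc.
spherical = sp.Matrix([
    rho * sp.sin(phi) * sp.cos(theta),
    rho * sp.sin(phi) * sp.sin(theta),
    rho * sp.cos(phi),
])
J_sph = spherical.jacobian([rho, phi, theta])
print(sp.simplify(J_sph.det()))     # rho**2 * sin(phi)
```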

Vector Calculus

Vector Fields and Line Integrals

In multivariable calculus, a vector field is a mapping that assigns a vector to every point in a domain within Euclidean space \mathbb{R}^n. For instance, in \mathbb{R}^2, the vector field \mathbf{F}: \mathbb{R}^2 \to \mathbb{R}^2 defined by \mathbf{F}(x, y) = (-y, x) describes a rotational flow around the origin, where the vector at each point is perpendicular to the position vector and of equal magnitude. Vector fields model phenomena such as fluid velocity or force distributions in physics.

A line integral of a scalar function f along a parametrized curve C given by \mathbf{r}(t) for a \leq t \leq b measures the accumulation of f weighted by arc length and is defined as \int_C f \, ds = \int_a^b f(\mathbf{r}(t)) \|\mathbf{r}'(t)\| \, dt. This integral generalizes the single-variable integral to paths in higher dimensions, often representing quantities like the total mass of a wire with density f.

The line integral of a vector field \mathbf{F} along the same curve C is given by \int_C \mathbf{F} \cdot d\mathbf{r} = \int_a^b \mathbf{F}(\mathbf{r}(t)) \cdot \mathbf{r}'(t) \, dt, which computes the projection of \mathbf{F} onto the tangent direction of C. In physical contexts, this represents the work done by a force field \mathbf{F} in moving a particle along C; for example, if \mathbf{F}(x, y) = (y, x) acts as a force, the work along a straight path from (0,0) to (1,1) is \int_0^1 (t, t) \cdot (1,1) \, dt = 1. A vector field is conservative if its line integral depends only on the endpoints of C, not the path taken, which occurs precisely when \mathbf{F} is the gradient of a scalar potential function. Gradient fields are inherently conservative.
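
The work computation above can be approximated numerically. In this Python sketch (the midpoint discretization and n are arbitrary choices), the line integral of \mathbf{F}(x,y) = (y, x) along the straight path \mathbf{r}(t) = (t,t) is accumulated from samples:

```python
import numpy as np

# Work done by F(x, y) = (y, x) along r(t) = (t, t), 0 <= t <= 1.
n = 10_000
t = (np.arange(n) + 0.5) / n       # midpoints of [0, 1]
dt = 1.0 / n

x, y = t, t                        # r(t) = (t, t)
Fx, Fy = y, x                      # F(r(t)) = (y, x)
dx_dt, dy_dt = 1.0, 1.0            # r'(t) = (1, 1)

work = np.sum((Fx * dx_dt + Fy * dy_dt) * dt)
print(work)                        # ~ 1.0, matching the text
```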

Surface Integrals and Flux

Surface integrals extend the concept of integration to curved surfaces in three-dimensional space, allowing the computation of quantities such as surface area or total mass over non-flat regions. For a scalar function f(x, y, z) defined on a surface S, the surface integral \iint_S f \, dS represents the accumulation of f weighted by the surface area element dS. This is particularly useful in applications like calculating the mass of a thin shell where f denotes density. To evaluate such integrals, surfaces are typically parametrized using a vector-valued function \mathbf{r}(u, v) = \langle x(u,v), y(u,v), z(u,v) \rangle over a domain D in the uv-plane. The surface area element dS is given by \|\mathbf{r}_u \times \mathbf{r}_v\| \, du \, dv, where \mathbf{r}_u and \mathbf{r}_v are partial derivatives. Thus, the integral becomes \iint_S f \, dS = \iint_D f(\mathbf{r}(u,v)) \|\mathbf{r}_u \times \mathbf{r}_v\| \, du \, dv. For surfaces expressed as graphs, such as z = g(x,y) over a region R in the xy-plane, the parametrization simplifies to \mathbf{r}(x,y) = \langle x, y, g(x,y) \rangle. Here, the magnitude of the cross product \mathbf{r}_x \times \mathbf{r}_y yields dS = \sqrt{1 + \left( \frac{\partial g}{\partial x} \right)^2 + \left( \frac{\partial g}{\partial y} \right)^2} \, dx \, dy, so \iint_S f \, dS = \iint_R f(x, y, g(x,y)) \sqrt{1 + g_x^2 + g_y^2} \, dx \, dy. A practical example is computing the mass of a surface with variable density. Consider the hemisphere S: z = \sqrt{1 - x^2 - y^2} for x^2 + y^2 \leq 1, with density f(x,y,z) = z. The mass is \iint_S z \, dS = \iint_R (1 - x^2 - y^2)^{1/2} \sqrt{1 + \frac{x^2}{1 - x^2 - y^2} + \frac{y^2}{1 - x^2 - y^2}} \, dx \, dy, which simplifies in polar coordinates to \pi. This illustrates how surface integrals quantify physical properties over curved domains.

Flux integrals, in contrast, apply to vector fields \mathbf{F}(x,y,z) and measure the net flow through a surface S, denoted \iint_S \mathbf{F} \cdot d\mathbf{S}. The vector area element d\mathbf{S} incorporates orientation, typically via the unit normal \mathbf{n}, so d\mathbf{S} = \mathbf{n} \, dS. For a parametrized surface, \mathbf{r}_u \times \mathbf{r}_v provides a normal vector consistent with the right-hand rule for orientation, yielding \iint_S \mathbf{F} \cdot d\mathbf{S} = \iint_D \mathbf{F}(\mathbf{r}(u,v)) \cdot (\mathbf{r}_u \times \mathbf{r}_v) \, du \, dv. Positive flux indicates net flow in the direction of the normal. For graph surfaces like z = g(x,y), the upward orientation uses d\mathbf{S} = \langle -g_x, -g_y, 1 \rangle \, dx \, dy, aligning the normal to point away from the xy-plane. Flux integrals are essential in physics for modeling phenomena like fluid flow or electromagnetic fields through surfaces. An illustrative example is the flux of a radial vector field \mathbf{F} = \langle x, y, z \rangle through the unit sphere S: x^2 + y^2 + z^2 = 1, oriented outward. Parametrizing with spherical coordinates \mathbf{r}(\theta, \phi) = \langle \sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta \rangle for 0 \leq \theta \leq \pi, 0 \leq \phi \leq 2\pi, the normal is \mathbf{r}_\theta \times \mathbf{r}_\phi = \sin\theta \, \mathbf{r}, so \mathbf{F} \cdot (\mathbf{r}_\theta \times \mathbf{r}_\phi) = \sin\theta \, \|\mathbf{r}\|^2 = \sin\theta on the sphere, giving \iint_S \mathbf{F} \cdot d\mathbf{S} = \int_0^{2\pi} \int_0^\pi \sin\theta \, d\theta \, d\phi = 4\pi. This result highlights how flux captures the divergence of the field through closed surfaces.
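
The sphere-flux example admits a simple numerical check. This Python sketch (the grid resolution is an arbitrary choice) integrates the scalar integrand \mathbf{F} \cdot (\mathbf{r}_\theta \times \mathbf{r}_\phi) = \sin\theta over the parameter rectangle with a midpoint rule:

```python
import numpy as np

# Flux of F(x, y, z) = (x, y, z) through the unit sphere, oriented outward.
# On the sphere, F . (r_theta x r_phi) = sin(theta), as derived in the text.
n = 400
theta = (np.arange(n) + 0.5) * np.pi / n       # midpoints in [0, pi]
phi = (np.arange(n) + 0.5) * 2 * np.pi / n     # midpoints in [0, 2*pi]
dtheta = np.pi / n
dphi = 2 * np.pi / n

T, P = np.meshgrid(theta, phi)
integrand = np.sin(T)                          # F . (r_theta x r_phi)

flux = np.sum(integrand) * dtheta * dphi
print(flux, 4 * np.pi)                         # both ~ 12.566
```
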
Fundamental Theorems of Vector Calculus

The fundamental theorems of vector calculus establish profound connections between line integrals over curves, surface integrals over boundaries, and volume integrals over regions, generalizing the one-dimensional fundamental theorem of calculus to higher dimensions. These theorems—Green's theorem in the plane, Stokes' theorem on surfaces, and the divergence theorem in space—allow the evaluation of boundary integrals by converting them to integrals over the enclosed domains, often simplifying computations in physics and engineering. They rely on the concepts of divergence and curl of a vector field, which quantify local expansion and rotation, respectively.

The divergence of a vector field \mathbf{F} = (F_1, F_2, F_3) in three dimensions is the scalar \nabla \cdot \mathbf{F} = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z}, measuring the net rate at which the field emanates from or converges to a point, analogous to the net flux through an infinitesimal volume. The curl of \mathbf{F} is the vector \nabla \times \mathbf{F} = \left( \frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}, \frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x}, \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right), which captures the field's local circulation or vorticity around a point, with magnitude equal to the limiting circulation per unit area in the plane perpendicular to the vector. These operators appear centrally in the theorems, linking boundary behavior to interior properties.

Green's theorem applies in two dimensions, stating that if C is a positively oriented, piecewise smooth, simple closed curve bounding a region D in the xy-plane, and P(x,y) and Q(x,y) are functions with continuous first partial derivatives on an open region containing D, then \oint_C P \, dx + Q \, dy = \iint_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) \, dA. In vector form, for a vector field \mathbf{F} = (P, Q), this becomes \oint_C \mathbf{F} \cdot d\mathbf{r} = \iint_D (\nabla \times \mathbf{F}) \cdot \mathbf{k} \, dA, where \mathbf{k} is the unit vector in the z-direction, equating the circulation around C to the flux of the curl through D. The theorem holds for simply connected regions D where the field is smooth. It was first stated and proved by George Green in his 1828 essay on electricity and magnetism, though independently discovered earlier by others. A classic application verifies the area of D: choosing P = -y and Q = x yields \oint_C -y \, dx + x \, dy = 2 \iint_D 1 \, dA, so the area A = \frac{1}{2} \oint_C -y \, dx + x \, dy. For the unit disk bounded by the circle x = \cos \theta, y = \sin \theta (0 \leq \theta \leq 2\pi), the line integral evaluates to 2\pi, matching 2 \iint_D 1 \, dA = 2\pi.
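
The area application can be verified numerically. The Python sketch below (the discretization is an illustrative choice) approximates \oint_C -y \, dx + x \, dy around the unit circle and divides by two to recover the area \pi:

```python
import numpy as np

# Green's theorem area formula on the unit disk:
# A = (1/2) * closed line integral of (-y dx + x dy) over x = cos(t), y = sin(t).
n = 100_000
t = (np.arange(n) + 0.5) * 2 * np.pi / n
dt = 2 * np.pi / n

x, y = np.cos(t), np.sin(t)
dx_dt, dy_dt = -np.sin(t), np.cos(t)

line_integral = np.sum((-y * dx_dt + x * dy_dt) * dt)   # = 2 * area
print(line_integral / 2, np.pi)                         # both ~ 3.14159
```
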
Stokes' theorem extends Green's theorem to three dimensions, asserting that if S is an oriented piecewise smooth surface with boundary curve C (oriented consistently via the right-hand rule), and \mathbf{F} is a vector field with continuous first partial derivatives on an open region containing S, then \int_C \mathbf{F} \cdot d\mathbf{r} = \iint_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S}. This equates the line integral (circulation) along the boundary C to the surface integral of the curl over S, allowing computation of path-dependent integrals via surface properties. The theorem applies to orientable surfaces, often simplifying evaluations for non-closed paths by choosing convenient S. George Gabriel Stokes posed the result as an exam question at Cambridge in 1854, with the first published proof appearing in Hermann Hankel's 1861 monograph; it generalizes earlier work by Green and others. For example, consider \mathbf{F} = (-y, x, z) over the upper hemisphere S: x^2 + y^2 + z^2 = 1, z \geq 0, bounded by the unit circle C in the xy-plane. The curl is \nabla \times \mathbf{F} = (0, 0, 2), so \iint_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S} = 2 \iint_S \mathbf{k} \cdot d\mathbf{S} = 2 \times (\text{projected area } \pi) = 2\pi, matching the circulation \int_C \mathbf{F} \cdot d\mathbf{r} = 2\pi.

The divergence theorem, also known as Gauss's theorem, relates volume integrals to surface fluxes: if V is a bounded region in space with piecewise smooth boundary surface S (oriented outward), and \mathbf{F} has continuous first partial derivatives on an open region containing V, then \iint_S \mathbf{F} \cdot d\mathbf{S} = \iiint_V \nabla \cdot \mathbf{F} \, dV. This states that the total flux of \mathbf{F} out of S equals the integral of the divergence over V, capturing the net source or sink strength inside. It applies to regions where \mathbf{F} is sufficiently smooth, enabling volume computations via boundary fluxes. The result was first noted by Joseph-Louis Lagrange in 1762 without proof, rigorously proved by Mikhail Ostrogradsky in 1828, and independently published by Carl Friedrich Gauss in 1833. An illustrative case is \mathbf{F} = (x, y, z) over the unit ball V: x^2 + y^2 + z^2 \leq 1. The divergence is \nabla \cdot \mathbf{F} = 3, so \iiint_V 3 \, dV = 3 \cdot \frac{4}{3}\pi = 4\pi; on S, \mathbf{F} \cdot \mathbf{n} = 1, and \iint_S 1 \, dS = 4\pi, confirming the equality.
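
The unit-ball example can be checked with a quick Monte Carlo estimate of the volume side. In this Python sketch (the sample count and seed are arbitrary choices), points are drawn in the bounding cube and the constant divergence 3 is integrated over the ball:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of the volume integral of div F = 3 over the unit ball.
N = 1_000_000
pts = rng.uniform(-1, 1, size=(N, 3))          # sample the bounding cube
inside = (pts**2).sum(axis=1) <= 1.0           # keep points in the ball
cube_volume = 2.0**3

volume_integral = 3.0 * cube_volume * inside.mean()   # ~ 3 * (4/3) pi = 4 pi
print(volume_integral, 4 * np.pi)                     # both ~ 12.566

# Surface side: F . n = 1 on the sphere, so the flux equals the area 4 pi.
```
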
## Applications

### Optimization and Critical Points

In multivariable calculus, optimization involves identifying the local and global maxima and minima of a function $f: \mathbb{R}^n \to \mathbb{R}$. These extrema occur at critical points where the function's behavior changes, often analyzed using first- and second-order conditions derived from partial derivatives.[](https://ocw.mit.edu/courses/18-02sc-multivariable-calculus-fall-2010/2df7f490fc5f1bd28d6e8fd391e9a964_MIT18_02SC_notes_16.pdf)

For unconstrained optimization, a point $\mathbf{x}_0 \in \mathbb{R}^n$ is a critical point of $f$ if the gradient vanishes, i.e., $\nabla f(\mathbf{x}_0) = \mathbf{0}$, or if the gradient is undefined there.[](https://tutorial.math.lamar.edu/classes/calciii/relativeextrema.aspx) This condition generalizes Fermat's theorem from single-variable calculus, indicating potential local extrema or saddle points.[](https://tutorial.math.lamar.edu/classes/calciii/relativeextrema.aspx)

To classify these critical points, the second derivative test employs the Hessian matrix $H_f(\mathbf{x}_0)$, the symmetric matrix of second partial derivatives:
$$H_f(\mathbf{x}_0) = \begin{pmatrix} \frac{\partial^2 f}{\partial x_1^2}(\mathbf{x}_0) & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n}(\mathbf{x}_0) \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1}(\mathbf{x}_0) & \cdots & \frac{\partial^2 f}{\partial x_n^2}(\mathbf{x}_0) \end{pmatrix}.$$
The nature of the critical point depends on the eigenvalues of $H_f(\mathbf{x}_0)$: if all eigenvalues are positive, $\mathbf{x}_0$ is a local minimum; if all are negative, a local maximum; if they have mixed signs, a saddle point; and if any are zero, the test is inconclusive.[](https://abel.math.harvard.edu/archive/21b_fall_02/supplements/hessian.pdf) This classification relies on the quadratic approximation from the second-order Taylor expansion near $\mathbf{x}_0$.[](https://www.mit.edu/~ashrstnv/hessian-matrix.html)

Consider the function $f(x,y) = x^2 - y^2$. The partial derivatives are $f_x = 2x$ and $f_y = -2y$, so the only critical point is at $(0,0)$. The Hessian is
$$H_f(0,0) = \begin{pmatrix} 2 & 0 \\ 0 & -2 \end{pmatrix},$$
with eigenvalues $2 > 0$ and $-2 < 0$, confirming a saddle point. Along the $x$-axis, $f(x,0) = x^2$ has a minimum, while along the $y$-axis, $f(0,y) = -y^2$ has a maximum.[](https://tutorial.math.lamar.edu/classes/calciii/relativeextrema.aspx)
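The second derivative test is straightforward to mechanize. The sketch below, assuming SymPy, locates the critical point of $f(x,y) = x^2 - y^2$ and classifies it by the signs of the Hessian eigenvalues.

```python
import sympy as sp

# A minimal sketch of the second derivative test for f(x, y) = x**2 - y**2.
x, y = sp.symbols('x y')
f = x**2 - y**2

# Solve grad f = 0 for the critical points.
grad = [sp.diff(f, v) for v in (x, y)]
critical_points = sp.solve(grad, (x, y), dict=True)  # [{x: 0, y: 0}]

# Evaluate the Hessian at each critical point and inspect its eigenvalues.
H = sp.hessian(f, (x, y))
for pt in critical_points:
    eigs = list(H.subs(pt).eigenvals())  # [2, -2] -> mixed signs
    if all(e > 0 for e in eigs):
        label = 'local minimum'
    elif all(e < 0 for e in eigs):
        label = 'local maximum'
    elif any(e > 0 for e in eigs) and any(e < 0 for e in eigs):
        label = 'saddle point'
    else:
        label = 'inconclusive'  # some eigenvalue is zero
    print(pt, eigs, label)  # {x: 0, y: 0} [2, -2] saddle point
```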
For constrained optimization, where extrema are sought subject to $g(\mathbf{x}) = 0$, the method of Lagrange multipliers introduces a scalar $\lambda$ such that $\nabla f(\mathbf{x}_0) = \lambda \nabla g(\mathbf{x}_0)$ at the extremum, along with the constraint itself.[](https://ocw.mit.edu/courses/18-02-multivariable-calculus-fall-2007/resources/lecture-13-lagrange-multipliers/) This equates the gradients, ensuring the level sets of $f$ and $g$ are tangent. The points $\mathbf{x}_0$ satisfying these equations, solved alongside $g(\mathbf{x}_0) = 0$, are candidate extrema, which can then be classified using the bordered Hessian or evaluated directly.[](https://tutorial.math.lamar.edu/classes/calciii/lagrangemultipliers.aspx)

An example is maximizing $f(x,y) = 8x^2 - 2y$ subject to the unit [circle](/page/Circle) $g(x,y) = x^2 + y^2 - 1 = 0$. Setting $\nabla f = (16x, -2)$ and $\nabla g = (2x, 2y)$, the system $16x = 2\lambda x$, $-2 = 2\lambda y$, and $x^2 + y^2 = 1$ yields candidates including $(0,1)$, where $f = -2$ (the minimum), and the points $\left( \pm\frac{3\sqrt{7}}{8}, -\frac{1}{8} \right)$, where $f = \frac{65}{8} = 8.125$ (the maximum).[](https://tutorial.math.lamar.edu/classes/calciii/lagrangemultipliers.aspx)
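The same candidates can be recovered by solving the Lagrange system symbolically. The sketch below, assuming SymPy, sets up the stationarity equations and the constraint for the example above.

```python
import sympy as sp

# A minimal sketch of the Lagrange-multiplier system for
# f = 8x**2 - 2y subject to x**2 + y**2 = 1.
x, y, lam = sp.symbols('x y lam', real=True)
f = 8 * x**2 - 2 * y
g = x**2 + y**2 - 1

# Stationarity (grad f = lam * grad g) together with the constraint g = 0.
equations = [sp.diff(f, x) - lam * sp.diff(g, x),
             sp.diff(f, y) - lam * sp.diff(g, y),
             g]
solutions = sp.solve(equations, (x, y, lam), dict=True)

# Rank the candidate points by the objective value.
candidates = sorted(((f.subs(s), s[x], s[y]) for s in solutions),
                    key=lambda c: float(c[0]))
print(candidates[0])   # (-2, 0, 1): the constrained minimum
print(candidates[-1])  # (65/8, +/-3*sqrt(7)/8, -1/8): the constrained maximum
```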
To find global extrema on a [domain](/page/Domain) $D \subseteq \mathbb{R}^n$, the [Extreme Value Theorem](/page/Extreme_value_theorem) guarantees that if $f$ is continuous and $D$ is compact (closed and bounded), then $f$ attains its [maximum and minimum](/page/Maximum_and_minimum) on $D$. These occur either at critical points in the interior or on the [boundary](/page/Boundary), which may require parametrization or Lagrange multipliers for curved boundaries.[](https://tutorial.math.lamar.edu/classes/calciii/absoluteextrema.aspx)

### Physical Applications in Physics and Engineering

In [fluid dynamics](/page/Fluid_dynamics), multivariable calculus provides essential tools for modeling the behavior of fluids through vector fields that describe [velocity](/page/Velocity). The velocity field $\mathbf{V}(x, y, z)$ represents the flow at each point in space, and its [divergence](/page/Divergence) $\nabla \cdot \mathbf{V}$ quantifies the net source or sink strength, indicating expansion or contraction of the [fluid](/page/Fluid) at that point.[](https://www.whitman.edu/mathematics/calculus_online/section16.05.html) Similarly, the [curl](/page/Curl) $\nabla \times \mathbf{V}$ measures the rotation or [vorticity](/page/Vorticity) of the fluid, capturing swirling motions such as those in eddies or vortices.[](https://people.math.harvard.edu/archive/21a_summer_06/handouts/curldiv.pdf) A key application is the [continuity equation](/page/Continuity_equation), which expresses [conservation of mass](/page/Conservation_of_mass): $\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{V}) = 0$, where $\rho$ is the fluid density; for steady flow this reduces to $\nabla \cdot (\rho \mathbf{V}) = 0$, and for an [incompressible flow](/page/Incompressible_flow) to $\nabla \cdot \mathbf{V} = 0$.[](https://people.math.harvard.edu/archive/21a_summer_06/handouts/curldiv.pdf)

In [electromagnetism](/page/Electromagnetism), multivariable calculus underpins [Maxwell's equations](/page/Maxwell's_equations), which govern electric and [magnetic fields](/page/Magnetic_field) as vector fields. The equation $\nabla \cdot \mathbf{B} = 0$ states that [magnetic flux](/page/Magnetic_flux) has no sources or sinks, implying that [magnetic field](/page/Magnetic_field) lines form closed loops; it is obtained by applying the [divergence](/page/Divergence) operator to the magnetic field $\mathbf{B}$.[](https://ocw.mit.edu/courses/18-02-multivariable-calculus-fall-2007/09ae8ccfce72c4883f1cedf16213f381_relation_to_phy.pdf) [Faraday's law of induction](/page/Faraday's_law_of_induction) is expressed as $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$, where the [curl](/page/Curl) of the [electric field](/page/Electric_field) $\mathbf{E}$ relates to the time-varying magnetic field, explaining phenomena like [electromagnetic induction](/page/Electromagnetic_induction) in generators.[](https://people.math.harvard.edu/archive/21a_summer_06/handouts/curldiv.pdf) Flux integrals over surfaces compute the total electric or [magnetic flux](/page/Magnetic_flux) through a [boundary](/page/Boundary), essential for calculating forces on charged particles or energy flow in circuits.

Engineering applications leverage multivariable calculus for optimizing designs and solving diffusion problems. In [design optimization](/page/Design_optimization), techniques from multivariable calculus minimize functionals such as surface area subject to volume constraints, as in the [calculus of variations](/page/Calculus_of_variations) for minimal surfaces, modeling efficient structures such as soap films or architectural shells.[](https://www-users.cse.umn.edu/~olver/ln_/cvc.pdf) The [heat equation](/page/Heat_equation), $\frac{\partial u}{\partial t} = \kappa \nabla^2 u$, describes the temperature distribution $u(x, y, z, t)$ in conducting materials, where the Laplacian $\nabla^2 u$ arises from multivariable [differentiation](/page/Differentiation) and represents net heat flow; it is applied in [thermal analysis](/page/Thermal_analysis) of engines and [electronics](/page/Electronics) cooling.[](https://tutorial.math.lamar.edu/classes/de/solvingheatequation.aspx)

### Applications in Economics

In economics, multivariable calculus analyzes [consumer](/page/Consumer) behavior through [utility](/page/Utility) functions $u(x, y)$, which measure satisfaction from [goods](/page/Goods) $x$ and $y$. The marginal utilities are the partial derivatives $\frac{\partial u}{\partial x}$ and $\frac{\partial u}{\partial y}$, representing the additional satisfaction from consuming one more unit of each good.[](https://faculty.fiu.edu/~boydj/mathecon/math14.pdf) The [marginal rate of substitution](/page/Marginal_rate_of_substitution), defined as $\frac{\partial u / \partial x}{\partial u / \partial y}$, quantifies the rate at which a [consumer](/page/Consumer) is willing to [trade](/page/Trade) one good for another while maintaining constant [utility](/page/Utility); it is central to [indifference curve](/page/Indifference_curve) analysis and demand theory.[](https://belkcollegeofbusiness.charlotte.edu/azillant/wp-content/uploads/sites/846/2014/12/ECON6202_msmicroI_ch1notes.pdf)
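As an illustration, the sketch below computes marginal utilities and the marginal rate of substitution for a hypothetical Cobb-Douglas utility $u(x,y) = \sqrt{xy}$ (an assumed example, not drawn from the sources above), again assuming SymPy.

```python
import sympy as sp

# A minimal sketch computing marginal utilities and the marginal rate of
# substitution for a hypothetical Cobb-Douglas utility u = sqrt(x) * sqrt(y).
x, y = sp.symbols('x y', positive=True)
u = sp.sqrt(x) * sp.sqrt(y)

mu_x = sp.diff(u, x)  # marginal utility of good x
mu_y = sp.diff(u, y)  # marginal utility of good y
mrs = sp.simplify(mu_x / mu_y)

print(mrs)  # y/x: the consumer trades the goods at the ratio of their quantities
```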

References

  1. [1]
    Multivariable Calculus | Mathematics - MIT OpenCourseWare
    This course covers differential, integral and vector calculus for functions of more than one variable. These mathematical tools and methods are used ...
  2. [2]
    Calculus III - Pauls Online Math Notes - Lamar University
    Sep 21, 2020 · Tangent, Normal and Binormal Vectors – In this section we will define the tangent, normal and binormal vectors.
  3. [3]
    Math 32BH: Calculus of Several Variables, Honors - Richard Wong
    Multivariable calculus is the mathematical language that allows us to describe the geometry of the physical world around us, such as the areas, volumes, or mass ...
  4. [4]
    [PDF] MATH 230-1: Multivariable Differential Calculus
    This is a course in multivariable differential calculus. Our basic goal is extend the concepts you saw before in single-variable calculus, ...
  5. [5]
    Functions of Several Variables - Department of Mathematics at UTSA
    Nov 2, 2021 · In the following chapters, we will be discussing limits, differentiation, and integration of multivariable functions, using single-variable calculus as our ...
  6. [6]
    [PDF] Math 150: Multivariable Calculus: Steven J Miller, Spring 2020
    This course extends calculus to several variables: vectors, partial derivatives, multiple integrals. There is also a unit on infinite series, sometimes with ...
  8. [8]
    [PDF] Multivariable Calculus - Stat@Duke
    3. Linear Mappings and Their Matrices · 3.1 Linear Mappings
  9. [9]
    [PDF] A HISTORICAL OVERVIEW OF CONNECTIONS IN GEOMETRY
    The generalization to higher dimensions of what Gauss did for surfaces is due to Bernhard Riemann and is what makes up the content of his lecture Über die ...
  10. [10]
    Calculus history - MacTutor - University of St Andrews
    The main ideas which underpin the calculus developed over a very long period of time indeed. The first steps were taken by Greek mathematicians.
  11. [11]
    [PDF] FROM NEWTON'S MECHANICS TO EULER'S EQUATIONS
    The Euler equations of hydrodynamics, which appeared in their present form in the 1750s, did not emerge in the middle of a desert.
  12. [12]
    [PDF] Joseph Louis Lagrange's Algebraic Vision of the Calculus
    The paper discusses Lagrange's conception of algebraic analysis and critically examines his demonstration of Taylor's theorem, the foundation of his algebraic ...
  13. [13]
    [PDF] General Investigations of Curved Surfaces - Project Gutenberg
    In 1827 Gauss presented to the Royal Society of Göttingen his important paper on the theory of surfaces, which seventy-three years afterward the eminent ...
  14. [14]
    Mathematical Treasure: Cauchy on Definite Integrals
    Augustin-Louis Cauchy's Mémorie sur les Intégrales Définies, concerning integration in the field of the complex numbers, was first published in 1825.
  15. [15]
    [PDF] A History of the Divergence, Green's, and Stokes' Theorems
    In 1813, Gauss formulated Green's Theorem, but could not provide a proof [14]. Although Gauss did excellent work, he would not publish his results until 1833.
  16. [16]
    [PDF] ON SOME HISTORICAL ASPECTS OF THE THEORY OF RIEMANN ...
    Our historical sight regards multiplicative number theory because the involvement of the Riemann zeta function is mainly motivated by prime number theory and ...
  17. [17]
    [PDF] A History of Vector Analysis
    In doing this, Gibbs introduces the terms and concepts of “dyad” and “dyadic.” Moreover, during the 1880s Gibbs frequently teaches a course on vector analysis, ...
  18. [18]
    [PDF] The Concept of Manifold, 1850-1950
    Therefore, so Riemann concluded, the multivaluedness of integrals of holomorphic 1-forms (abelian integrals of the first kind) depends only (and still to a ...
  19. [19]
    [PDF] Classical Differential Geometry Peter Petersen - UCLA Mathematics
    This is an evolving set of lecture notes on the classical theory of curves and surfaces. More pictures will be added eventually. I recommend people download.
  20. [20]
    Dot Product -- from Wolfram MathWorld
    The dot product can be defined for two vectors X and Y by X·Y = |X||Y| cos θ, where θ is the angle between the vectors and |X| is the norm.
  21. [21]
    Norm -- from Wolfram MathWorld
    Summary of the Euclidean norm definition from Wolfram MathWorld.
  22. [22]
    Euclidean Metric -- from Wolfram MathWorld
    The Euclidean metric is the function d:R^n×R^n->R that assigns to any two vectors in Euclidean n-space x=(x_1,...,x_n) and y=(y_1,...,y_n) the number d(x ...
  23. [23]
    [PDF] Notes on Multivariable Differentiation Functions
    The basic object of study is a function of several variables f : Rn → Rm: such a function would take n inputs and give m outputs. Mainly, we'll be interested in ...
  24. [24]
    [PDF] OBJECTS f : R n → Rm IN MULTIVARIABLE CALCULUS Math21a
    SCALAR FUNCTION (2D). (n = 2,m = 1). A function f(x, y) defined in the plane is also called a scalar field. The graph of f is a curve in space (see figure).
  25. [25]
    [PDF] 2.1 The Geometry of Real-Valued Functions In this section, we will ...
    n = 2: the level sets for f : R2 → R1 are curves, which we call level curves or level contours. Example 2: f(x, y) = x^2 + y^2. Describe the level sets of f.
  26. [26]
    [PDF] Multivariable Vector-Valued Functions - Bard Faculty
    Functions F : Rn → Rm with n = m = 1 become simply F : R → R, which are single-variable real-valued functions. Hence, multivariable vector-valued functions include all the previous three ...
  27. [27]
    [PDF] SIMPLE MULTIVARIATE CALCULUS 1. Real-valued Functions of ...
    The set of y-values taken on by f is the range of the function. The symbol y is the dependent variable of f, and f is said to be a function of the n independent ...
  28. [28]
    Calculus III - Functions of Several Variables - Pauls Online Math Notes
    Nov 16, 2022 · In particular we will discuss finding the domain of a function of several variables as well as level curves, level surfaces and traces.
  29. [29]
    [PDF] continuity of multivariable functions. examples
    Note: a function f : Rn → Rm is clearly given by a row vector f = (f1,...,fm) where fi's are the components of f. In fact, fi = pi ◦ f (see below). Then limx→x0 ...
  30. [30]
    [PDF] Section 12.3: Contour Diagrams - Arizona Math
    A contour diagram is simply a graph on the xy-plane that shows curves of equal height for a two-variable function z = f(x, y). Question: What are some examples ...
  31. [31]
    Calculus III - Parametric Surfaces - Pauls Online Math Notes
    Mar 25, 2024 · In this section we will take a look at the basics of representing a surface with parametric equations.
  32. [32]
    [PDF] Differentiation of Multivariable Functions - People
    Functions of Several Variables. The concept of a function of several variables can be qualitatively understood from simple examples in everyday life.
  33. [33]
    4.2 Limits and Continuity - Calculus Volume 3 | OpenStax
    The definition of a limit of a function of two variables requires the δ disk to be contained inside the domain of the function. However, if we wish to find the ...
  35. [35]
    [PDF] Functions of Several Variables (Continuity, Differentiability ...
    Limit and Continuity : (i) We say that L is the limit of a function f : R3 → R at X0 ∈ R3 (and we write limX→X0 f(X) = L) if f(Xn) → L whenever a sequence (Xn) ...
  36. [36]
    13.2 Limits and Continuity of Multivariable Functions
    We cover the key concepts here; some terms from Definitions 13.2.1 and 13.2.3 are not redefined but their analogous meanings should be clear to the reader.
  37. [37]
    [PDF] 3.2 Limits and Continuity of Functions of Two or More Variables.
    Theorem 3.2.27 The following results are true for multivariable functions: 1. The sum, difference and product of continuous functions is a continuous function.
  38. [38]
    Uniform continuity 2 variable function - Mathematics Stack Exchange
    Jan 4, 2015 · We have to check the uniform continuity of f on Bd2(0;3). My attempt: I start by observing the fact that Bd2(0;3) is a compact set (Heine-Borel ...
  39. [39]
    Compactness and applications.
    The first part of the proof of the Extreme Value Theorem can be easily modified to show that if K is a compact subset of Rn and f:K→Rk is continuous, then {f(x) ...
  40. [40]
    Heine-Borel Theorem - Department of Mathematics at UTSA
    Oct 27, 2021 · Central to the theory was the concept of uniform continuity and the theorem stating that every continuous function on a closed interval is ...
  41. [41]
    Lecture 9: Limsup, Liminf, and the Bolzano-Weierstrass Theorem
    We introduce limit inferiors and limit superiors to prove this is the case (known as the Bolzano-Weierstrass theorem). Speaker: Casey Rodriguez.
  42. [42]
    The Extreme Value Theorem - Advanced Analysis
    Jan 17, 2024 · If f : E → R is continuous and E is compact, then f attains its maximum and minimum values, i.e., there exist points a , b ∈ E such that f ( a ) ...
  43. [43]
    The Intermediate Value Theorem
    The familiar Intermediate Value Theorem (abbreviated IVT) in 1d applies to a continuous function f whose domain is an interval. To state an analogue of the IVT ...
  44. [44]
    4.3 Partial Derivatives - Calculus Volume 3 | OpenStax
    Mar 30, 2016 · Finding derivatives of functions of two variables is the key concept in this chapter, with as many applications in mathematics, science, and ...
  45. [45]
    2. Partial Derivatives | Multivariable Calculus - MIT OpenCourseWare
    2. Partial Derivatives · They measure rates of change. · They are used in approximation formulas. · They help identify local maxima and minima.
  46. [46]
    Calculus III - Partial Derivatives - Pauls Online Math Notes
    Nov 16, 2022 · In this section we will introduce the idea of partial derivatives. We will give the formal definition of the partial derivative as well as the ...
  47. [47]
    Calculus III - Interpretations of Partial Derivatives
    Nov 16, 2022 · The partial derivative fx(a,b) is the slope of the trace of f(x,y) for the plane y = b at the point (a,b) ...
  48. [48]
    The derivative matrix - Math Insight
    Summary of the total derivative and the Jacobian matrix.
  49. [49]
    The multidimensional differentiability theorem - Math Insight
    Theorem on differentiability with continuous partial derivatives.
  50. [50]
    2.3 The Chain Rule
    Summary of the multivariable chain rule.
  51. [51]
    [PDF] The Multivariable Chain Rule - UC Berkeley math
    Feb 11, 2015 · The chain rule is a simple consequence of the fact that differentiation produces the linear approximation to a function at a point, ...
  52. [52]
    [PDF] We will prove the Chain Rule, including the proof that ... - LSU Math
    We will begin by proving that the composite function w(u(x, y),v(x, y)) is differentiable. From this the formulas for the partial derivatives will follow.
  53. [53]
    [PDF] 2.5 Chain Rule for Multiple Variables - UCSD Math
    The notation f ◦ g is read as “f composed with g” or “the composition of f with g.” A mountain has altitude z = f(x,y) above point (x,y). Plot a hiking trail ( ...
  54. [54]
    Chain Rule - Calculus III - Pauls Online Math Notes
    Nov 16, 2022 · We will start with a function in the form F(x,y) = 0 (if it's not in this form simply move everything to one side of the equal ...
  56. [56]
    Gâteaux Derivative -- from Wolfram MathWorld
    A function f is Gâteaux differentiable if an operator T_x exists, called the Gâteaux derivative of f at x, and it is unique if it exists.
  57. [57]
    2.7: Directional Derivatives and the Gradient - Mathematics LibreTexts
    Feb 22, 2022 · The natural analog of this interpretation for multivariable functions is the directional derivative, which we now introduce through a question.
  58. [58]
    14.6: Directional Derivatives and the Gradient - Math LibreTexts
    Feb 5, 2025 · A formal definition of the directional derivative is given that can be used in many cases to calculate a directional ...
  59. [59]
    14.5: Directional Derivatives - Mathematics LibreTexts
    Apr 15, 2025 · The directional derivative of a multivariate differentiable function along a given vector v at a given point x intuitively represents the ...
  60. [60]
    14.3: Partial Derivatives - Mathematics LibreTexts
    Feb 5, 2025 · Finding derivatives of functions of two variables is the key concept in this chapter, with as many applications in mathematics, science, ...
  61. [61]
    Clairaut's theorem - PlanetMath.org
    Mar 22, 2013 · This theorem is commonly referred to as the equality of mixed partials. It is usually first presented in a vector calculus course.
  62. [62]
    Laplacian -- from Wolfram MathWorld
    The Laplacian for a scalar function phi is a scalar differential operator defined by (1) where the h_i are the scale factors of the coordinate system ...
  63. [63]
    [PDF] Second Derivatives, Bilinear Maps, and Hessian Matrices
    The Hessian matrix expresses the second derivative of a scalar-valued multivariate function, and is always square and symmetric. A Jacobian matrix, in general, ...
  64. [64]
    Labware - MA35 Multivariable Calculus - Three Variable Calculus
    The Hessian matrix H of a function f(x,y,z) is defined as the 3 * 3 matrix with rows [fxx, fxy, fxz], [fyx, fyy, fyz], and [fzx, fzy, fzz]. For twice ...
  65. [65]
    Introduction to Taylor's theorem for multivariable functions
    When f is a function of multiple variables, the second derivative term in the Taylor series will use the Hessian Hf(a). For the single-variable case, we ...
  66. [66]
    [PDF] Unit 17: Taylor approximation
    If we stop the Taylor series after two steps, we get the function Q(x + v) = f(x) + df(x) · v + v · d2f(x) · v/2. The matrix H(x) = d2f(x) is called the Hessian.
  67. [67]
    2.7: Critical Points
    If H(a) is positive definite, then a is a local minimum point; If H(a) is negative definite, then a is a local maximum point;
  68. [68]
    13.11 Hessians and the General Second Derivative Test - WeBWorK
    The proof requires the use of Taylor's theorem for a function of several variables, which we will not prove, and a bit of terminology from linear algebra. Our ...
  69. [69]
    The Hessian Matrix - Ximera - The Ohio State University
    We're now in position to define the second-order Taylor polynomial of a function, using the Hessian matrix to find the degree two terms.
  70. [70]
    11.1 Double Riemann Sums and Double Integrals over Rectangles
    11 Multiple Integrals ... We will extend this process in this section to its three-dimensional analogs, double Riemann sums and double integrals over rectangles.
  72. [72]
    Calculus III - Double Integrals - Pauls Online Math Notes
    Nov 16, 2022 · A double integral integrates a function of two variables over a 2D region, like a rectangle, and is defined as ∬Rf(x,y)dA.
  73. [73]
    11.7 Triple Integrals - Active Calculus
    Definition 11.7.3 · The triple integral V(S) = ∭_S 1 dV · The average value of the function f = f(x, y, z) over a solid domain S is given by f_AVG(S) ...
  74. [74]
    11.2 Iterated Integrals - Active Calculus
    An iterated integral is a nested integral, where a double integral is computed by integrating first with respect to one variable, then the result with respect ...
  75. [75]
    Calculus III - Iterated Integrals - Pauls Online Math Notes
    Nov 16, 2022 · These integrals are called iterated integrals. Note that there are in fact two ways of computing a double integral over a rectangle.
  76. [76]
    Fubini's Theorem
    Fubini's Theorem: If f(x,y) is a continuous function on a rectangle R=[a,b]×[c,d], then the double integral ∬Rf(x,y)dA is equal to the iterated integral ...
  77. [77]
    [PDF] Multivariable integration These notes cover integrals of continuous ...
    Apr 23, 2024 · These notes cover integrals of continuous functions of several real variables. They use iterated integration and differentiation to reduce ...
  78. [78]
    [PDF] 18.022: Multivariable calculus — The change of variables theorem
    This determinant is called the Jacobian of F at x. The change-of-variables theorem for double integrals is the following statement. Theorem. Let F: U → V ...
  79. [79]
    Calculus III - Change of Variables - Pauls Online Math Notes
    Nov 16, 2022 · The Jacobian is defined as a determinant of a 2x2 matrix, if you are unfamiliar with this that is okay. Here is how to compute the determinant.
  80. [80]
    Calculus III - Vector Fields - Pauls Online Math Notes
    Nov 16, 2022 · A vector field on two (or three) dimensional space is a function F that assigns to each point (x,y) (or (x,y,z)) a ...
  81. [81]
    [PDF] Unit 19: Vector fields
    Definition: A planar vector field is a vector-valued map ~F which assigns to a point (x, y) ∈ R2 a vector ~F(x, y)=[P(x, y),Q(x, y)].
  82. [82]
    Lecture 19: Vector Fields | Multivariable Calculus | Mathematics
    Lecture 19: Vector Fields. Topics covered: Vector fields and line integrals in the plane. Instructor: Prof. Denis Auroux.
  83. [83]
    Calculus III - Line Integrals - Pauls Online Math Notes
    Nov 16, 2022 · Line integrals are a new type of integral, including those with respect to arc length, x, y, z, and of vector fields.
  84. [84]
    Introduction to a line integral of a scalar-valued function - Math Insight
    A line integral is a generalization of a one-variable integral over a curve, like calculating the mass of a wire with varying density. It's denoted as ∫cfds.
  85. [85]
    Calculus III - Line Integrals of Vector Fields - Pauls Online Math Notes
    Nov 16, 2022 · In this section we will define the third type of line integrals we'll be looking at : line integrals of vector fields.
  86. [86]
    [PDF] Math 2400: Calculus III Line Integrals over Vector Fields
    We will now learn about line integrals over a vector field. A classic application is to find the work done by a force field in moving an object along a curve. 1 ...
  87. [87]
    Calculus III - Conservative Vector Fields - Pauls Online Math Notes
    Nov 16, 2022 · In this section we will take a more detailed look at conservative vector fields than we've done in previous sections.
  88. [88]
    An introduction to conservative vector fields - Math Insight
    If a vector field is conservative, one can find a potential function analogous to the potential energy associated with conservative physical forces. Once the ...
  89. [89]
    Calculus III - Surface Integrals - Pauls Online Math Notes
    Nov 28, 2022 · In this section we introduce the idea of a surface integral. With surface integrals we will be integrating over the surface of a solid.
  90. [90]
    Calculus III - Surface Integrals of Vector Fields
    Nov 16, 2022 · In order to work with surface integrals of vector fields we will need to be able to write down a formula for the unit normal vector corresponding to the ...
  91. [91]
    Stokes' Theorem -- from Wolfram MathWorld
    Stokes' theorem connects to the "standard" gradient, curl, and divergence theorems by the following relations.
  92. [92]
    Divergence -- from Wolfram MathWorld
    The physical significance of the divergence of a vector field is the rate at which density exits a given region of space.
  93. [93]
    Curl -- from Wolfram MathWorld
    The curl of a vector field, denoted curl(F) or del xF (the notation used in this work), is defined as the vector field having magnitude equal to the maximum ...
  94. [94]
    Green's Theorem -- from Wolfram MathWorld
    Green's theorem is a vector identity which is equivalent to the curl theorem in the plane. Over a region D in the plane with boundary partialD, Green's ...
  95. [95]
    [PDF] The Green of Green Functions - University of Nottingham
    George Green's essay introducing Green's theorem and Green functions was published at the author's expense in 1828.
  96. [96]
    [PDF] The History of Stokes' Theorem - Harvard Mathematics Department
    However, the theorems as we know them today did not appear explicitly until the 19th century. The first of these theorems to be stated and proved in ...
  97. [97]
    Divergence Theorem -- from Wolfram MathWorld
    The divergence theorem is a mathematical statement of the physical fact that, in the absence of the creation or destruction of matter, the density within a ...
  98. [98]
    [PDF] 18.02SC Notes: Critical Points - MIT OpenCourseWare
    Critical points: A standard question in calculus, with applications to many fields, is to find the points where a function reaches its relative maxima and ...
  99. [99]
    Calculus III - Relative Minimums and Maximums
    Nov 16, 2022 · In this section we will define critical points for functions of two variables and discuss a method for determining if they are relative ...
  100. [100]
    [PDF] Supplement on Critical Points and the 2nd Derivative Test
    2nd Derivative Test (second form): A critical point for a function f (x) will give: (1) a relative minimum if all eigenvalues of the Hessian matrix Hf (x0) are ...
  101. [101]
    Hessian matrix (second derivative test) - MIT
    The Hessian matrix of a scalar function of several variables f : R^n → R describes the local curvature of that function.
  102. [102]
    Lecture 13: Lagrange Multipliers | Multivariable Calculus
    Lecture 13: Lagrange Multipliers. Topics covered: Lagrange multipliers. Instructor: Prof. Denis Auroux.
  103. [103]
    Calculus III - Lagrange Multipliers - Pauls Online Math Notes
    Mar 31, 2025 · In this section we'll discuss how to use the method of Lagrange Multipliers to find the absolute minimums and maximums of functions of ...
  104. [104]
    Calculus III - Absolute Minimums and Maximums
    Nov 16, 2022 · Extreme Value Theorem: If f(x,y) is continuous in some closed, bounded set D in R2, then there are points in D, (x1,y1) ...
  105. [105]
    16.5 Divergence and Curl - Vector Calculus
    Divergence measures the tendency of the fluid to collect or disperse at a point, and curl measures the tendency of the fluid to swirl around the point.
  106. [106]
    [PDF] CURL AND DIV Maths21a, O. Knill
    FLUID DYNAMICS. v velocity, ρ density of fluid. Continuity equation. ˙ρ + div(ρv) = 0 no fluid get lost. Incompressibility div(v) = 0 incompressible fluids ...
  107. [107]
    [PDF] 18.02 Multivariable Calculus - MIT OpenCourseWare
    Application to Maxwell's equations. Each of Maxwell's equations in electromagnetic theory can be written in two equivalent forms: a differential form which ...
  108. [108]
    [PDF] The Calculus of Variations - College of Science and Engineering
    Jan 7, 2022 · Introduction. Minimization and maximization principles form one of the most wide-ranging means of formulating mathematical models governing ...
  109. [109]
    Solving the Heat Equation - Pauls Online Math Notes
    Nov 16, 2022 · In this section we go through the complete separation of variables process, including solving the two ordinary differential equations the ...
  110. [110]
    [PDF] 14. Calculus of Several Variables
    In consumer theory, partial derivatives can be used to compute marginal utilities and marginal rates of substitution. ▷ Example 14.11.1: Utility Functions.
  111. [111]
    [PDF] Consumer Theory - UNC Charlotte Pages
    The partial derivative of utility with respect to good i is called the marginal utility of good i. We can now define the marginal rate of substitution (MRS) between two goods as the ratio of the marginal utilities ...