
Differentiable function

In calculus, a differentiable function is a function whose derivative exists at every point in its domain, meaning it can be locally approximated by a linear function at those points. For a function f: (a, b) \to \mathbb{R} of one real variable, differentiability at a point x_0 \in (a, b) requires that the limit \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h} exists and is finite, yielding the derivative f'(x_0), which represents the slope of the tangent line to the graph of f at x_0. This concept forms the foundation of differential calculus, enabling the analysis of rates of change and the instantaneous behavior of functions. A key property of differentiable functions is that they are necessarily continuous at every point in their domain, as the existence of the derivative implies that the function values approach the point value without jumps. However, the converse does not hold: continuous functions are not always differentiable, as illustrated by the function f(x) = |x|, which is continuous everywhere but not differentiable at x = 0 due to the sharp corner in its graph. In higher dimensions, for a function f: \mathbb{R}^n \to \mathbb{R}^m, differentiability at a point a means there exists a linear transformation T: \mathbb{R}^n \to \mathbb{R}^m (the total derivative) such that \lim_{x \to a} \frac{\|f(x) - f(a) - T(x - a)\|}{\|x - a\|} = 0, providing an affine approximation via the matrix of partial derivatives. Notable theorems for differentiable functions include the chain rule, which gives the derivative of a composition of differentiable functions, and the mean value theorem, which guarantees the existence of points where the average rate of change equals the instantaneous rate. These properties underpin applications in optimization, physics, and engineering, where smooth approximations model real-world phenomena.

Real single-variable functions

Definition and basic properties

In the context of real analysis, a function f: \mathbb{R} \to \mathbb{R} is said to be differentiable at an interior point a of its domain if the following limit exists: f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}. This limit, when it exists, is called the derivative of f at a. Geometrically, f'(a) represents the slope of the tangent line to the graph of y = f(x) at the point (a, f(a)), providing a linear approximation to the function near that point. If the derivative exists at every point in an open interval, then f is differentiable on that interval, and f' is itself a function defined on the same interval. Differentiability at a point implies continuity at that point. Examples of functions that are differentiable everywhere include the polynomials; for instance, the derivative of f(x) = x^2 is f'(x) = 2x, obtained by direct computation of the limit, and higher-degree polynomials similarly yield derivatives that are polynomials of lower degree. In contrast, the function f(x) = |x| is not differentiable at x = 0, as the limit \lim_{h \to 0} \frac{|h|}{h} does not exist: the left-hand limit is -1 while the right-hand limit is 1. Piecewise linear functions, such as f(x) = |x - 1|, are differentiable everywhere except at points of non-smoothness like x = 1, where a "corner" prevents the derivative from existing. To handle boundary points or potential asymmetries, one-sided derivatives are defined. The right-hand derivative at a is \lim_{h \to 0^+} \frac{f(a + h) - f(a)}{h}, and the left-hand derivative is \lim_{h \to 0^-} \frac{f(a + h) - f(a)}{h}. The function is differentiable at a if and only if both one-sided derivatives exist and are equal. For example, for f(x) = |x| at x = 0, the right-hand derivative is 1 and the left-hand derivative is -1, confirming non-differentiability.
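The disagreement of the two one-sided derivatives of f(x) = |x| at 0 is easy to see numerically. A minimal sketch (the helper names `right_quotient` and `left_quotient` are illustrative, not library functions):

```python
# One-sided difference quotients for f(x) = |x| at x = 0. The right-hand
# quotient is +1 and the left-hand quotient is -1 for every h > 0, so the
# one-sided derivatives disagree and f'(0) does not exist.

def right_quotient(f, a, h):
    return (f(a + h) - f(a)) / h

def left_quotient(f, a, h):
    return (f(a - h) - f(a)) / (-h)

for h in (1e-1, 1e-4, 1e-8):
    print(right_quotient(abs, 0.0, h), left_quotient(abs, 0.0, h))
```

No matter how small h becomes, the two quotients stay at +1 and -1; for a differentiable function both sequences would converge to the same value.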

Relation to continuity

A fundamental result in calculus states that if a function f: \mathbb{R} \to \mathbb{R} is differentiable at a point a \in \mathbb{R}, then f is continuous at a. To prove this theorem using the \epsilon-\delta definition, suppose f'(a) = L exists. For any \epsilon > 0, first select \delta_1 > 0 such that if 0 < |x - a| < \delta_1, then \left| \frac{f(x) - f(a)}{x - a} - L \right| < 1. This implies \left| \frac{f(x) - f(a)}{x - a} \right| < |L| + 1. Thus, |f(x) - f(a)| = |x - a| \left| \frac{f(x) - f(a)}{x - a} \right| < |x - a| (|L| + 1). Now choose \delta = \min(\delta_1, \epsilon / (|L| + 1)). If |x - a| < \delta, then |f(x) - f(a)| < \epsilon (trivially so when x = a), establishing continuity at a. The converse does not hold: continuity at a point does not imply differentiability there. A striking counterexample is the Weierstrass function, defined as f(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x), where 0 < a < 1 and ab > 1 + \frac{3\pi}{2}. This function is continuous on \mathbb{R} but differentiable at no point, serving as the first published instance of such "pathological" behavior. Karl Weierstrass introduced it in a lecture to the Prussian Academy of Sciences in 1872, highlighting that differentiability imposes a stricter condition than mere continuity, or even uniform continuity on bounded intervals. This one-way implication was implicitly understood by early pioneers of calculus. Isaac Newton and Gottfried Wilhelm Leibniz, who independently developed the foundations of the subject in the late 17th century, recognized that the existence of tangent lines—central to their methods—presupposes the smoothness of the curves they studied.
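The failure of differentiability can be glimpsed numerically (a sketch, not a proof): for a partial sum of the Weierstrass series with a = 1/2 and b = 13 (so ab = 6.5 > 1 + 3\pi/2), the difference quotients at 0 along h = b^{-m} grow roughly like (ab)^m instead of converging. The helper below avoids evaluating the cosine at huge floating-point arguments by using \cos(\pi k) = -1 for odd integers k:

```python
import math

a, b = 0.5, 13  # b odd, ab = 6.5 > 1 + 3*pi/2, matching Weierstrass's conditions

def diff_quotient_at_zero(m, extra=14):
    """(f(h) - f(0)) / h for h = b**-m, where f is the partial sum of
    sum_n a**n * cos(b**n * pi * x) with m + extra terms. For n >= m the
    cosine argument is pi times an odd integer, so the cosine is exactly -1;
    this sidesteps float inaccuracy of cos at huge arguments."""
    h = b ** -m
    total = 0.0
    for n in range(m + extra):
        if n >= m:
            c = -1.0  # cos(pi * b**(n-m)) with b**(n-m) an odd integer
        else:
            c = math.cos(math.pi * b ** (n - m))  # argument below pi here
        total += a ** n * (c - 1.0)  # term of f(h) - f(0), since cos(0) = 1
    return total / h

for m in (1, 2, 3):
    print(m, diff_quotient_at_zero(m))  # magnitudes grow roughly like (a*b)**m
```

Each refinement of h multiplies the quotient's magnitude by about ab = 6.5, so no limit exists at 0; a full proof handles all points and all sequences h \to 0.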

Differentiability classes

Functions are classified into differentiability classes based on the order of their continuous derivatives, denoted as C^k for k = 0, 1, 2, \dots, \infty, where k indicates the number of times the function can be differentiated while keeping all derivatives continuous on the domain. The class C^0 consists of the continuous functions; no differentiability is required beyond continuity itself. A function f: \mathbb{R} \to \mathbb{R} belongs to C^0(\mathbb{R}) if it is continuous at every point in its domain. Functions in the class C^1 are continuously differentiable, meaning the function itself and its first derivative f' are both continuous on the domain. For higher orders, a function f is in C^k if it has continuous derivatives up to order k, that is, f, f', f'', \dots, f^{(k)} are all continuous. Polynomials, for instance, are in C^\infty because they possess derivatives of all orders that remain polynomials, hence continuous everywhere. The class C^\infty, whose members are known as smooth functions, comprises functions that are infinitely differentiable with all derivatives continuous. While all analytic functions are smooth, the converse does not hold; non-analytic smooth functions exist, such as the function defined by \psi(x) = e^{-1/x^2} for x > 0 and \psi(x) = 0 for x \leq 0, which is C^\infty on \mathbb{R} but not analytic at x=0. Higher-order derivatives can be expressed using the forward difference operator \Delta, where \Delta f(a; h) = f(a+h) - f(a) and \Delta^n f(a; h) = \Delta(\Delta^{n-1} f(a; h); h), yielding f^{(n)}(a) = \lim_{h \to 0} \frac{\Delta^n f(a; h)}{h^n}, provided the limit exists. Whitney's extension theorem in one variable provides conditions under which a function defined on a closed subset of \mathbb{R} with prescribed derivatives up to order k can be extended to a C^k function on all of \mathbb{R}, ensuring compatibility of the jet data through remainder estimates.
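The forward-difference formula for f^{(n)}(a) is directly checkable. A minimal sketch (helper names are illustrative) implementing the recursive definition of \Delta^n:

```python
import math

def forward_diff(f, a, h, n):
    """n-th forward difference Delta^n f(a; h), via the recursive definition
    Delta^n f(a; h) = Delta^(n-1) f(a+h; h) - Delta^(n-1) f(a; h)."""
    if n == 0:
        return f(a)
    return forward_diff(f, a + h, h, n - 1) - forward_diff(f, a, h, n - 1)

def nth_derivative(f, a, n, h=1e-3):
    """Approximate f^(n)(a) as Delta^n f(a; h) / h**n."""
    return forward_diff(f, a, h, n) / h ** n

# For f(x) = x^3 the second difference quotient equals 6(a + h) exactly,
# so at a = 1 with h = 1e-3 it gives 6.006, converging to f''(1) = 6.
print(nth_derivative(lambda x: x ** 3, 1.0, 2))
print(nth_derivative(math.exp, 0.0, 3, h=1e-2))  # approaches exp(0) = 1
```

In floating point, h cannot be taken arbitrarily small: the cancellation in \Delta^n grows as h^n shrinks, which is why moderate step sizes are used above.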

Multivariable real functions

Partial derivatives and total differentiability

In multivariable calculus, for a function f: \mathbb{R}^n \to \mathbb{R}^m, the partial derivative with respect to the i-th variable at a point x = (x_1, \dots, x_n) is defined as \frac{\partial f}{\partial x_i}(x) = \lim_{h \to 0} \frac{f(x + h e_i) - f(x)}{h}, where e_i is the i-th standard basis vector in \mathbb{R}^n, provided the limit exists. This measures the rate of change of f along the direction of the i-th coordinate axis, treating other variables as constant. For example, consider f(x,y) = x^2 + y^2: the partial derivative with respect to x is \frac{\partial f}{\partial x} = 2x, obtained by differentiating x^2 + y^2 (with y fixed) as if it were a single-variable function in x, and similarly \frac{\partial f}{\partial y} = 2y. A stronger condition than the mere existence of partial derivatives is total differentiability, also known as Fréchet differentiability. A function f: \mathbb{R}^n \to \mathbb{R}^m is (Fréchet) differentiable at a point a if there exists a continuous linear map Df(a): \mathbb{R}^n \to \mathbb{R}^m such that \lim_{h \to 0} \frac{\|f(a + h) - f(a) - Df(a) h \|}{\|h\|} = 0, where \|\cdot\| denotes the Euclidean norm (or any equivalent norm). This means Df(a) provides the best linear approximation to f near a, with the error term o(\|h\|) vanishing faster than linearly as h \to 0. The single-variable derivative is a special case of this definition when n = m = 1. For differentiable functions f: \mathbb{R}^n \to \mathbb{R}^m, the total derivative Df(a) is represented by the Jacobian matrix J_f(a), an m \times n matrix whose i-th row consists of the partial derivatives of the i-th component function of f with respect to each variable, evaluated at a. That is, the (i,j)-entry of J_f(a) is \frac{\partial f_i}{\partial x_j}(a), and Df(a) h = J_f(a) h for h \in \mathbb{R}^n. This matrix generalizes the single-variable derivative to higher dimensions, capturing the full linear behavior in all directions.
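The Jacobian can be approximated column by column with central differences, which doubles as a sanity check on the definition. A minimal sketch (the helper `numerical_jacobian` is illustrative, not a library function):

```python
def numerical_jacobian(f, a, h=1e-6):
    """Approximate the m x n Jacobian of f: R^n -> R^m at a (a list of
    floats) by central differences in each coordinate direction."""
    n = len(a)
    m = len(f(a))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        ap, am = list(a), list(a)
        ap[j] += h
        am[j] -= h
        fp, fm = f(ap), f(am)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

# f(x, y) = (x^2 + y^2, x*y) has exact Jacobian [[2x, 2y], [y, x]].
f = lambda v: [v[0] ** 2 + v[1] ** 2, v[0] * v[1]]
J = numerical_jacobian(f, [1.0, 2.0])  # close to [[2, 4], [2, 1]]
print(J)
```

Each column approximates one partial derivative \partial f/\partial x_j, matching the row/column convention stated above.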
The existence of all partial derivatives at a point does not guarantee total differentiability; continuity of the partials provides a sufficient condition. Specifically, if all partial derivatives of f: \mathbb{R}^n \to \mathbb{R}^m exist in a neighborhood of a and are continuous at a, then f is differentiable at a with Df(a) given by the Jacobian matrix. However, a classic counterexample shows that partial derivatives can exist everywhere (including at the origin) without implying differentiability: consider f(x,y) = \begin{cases} \frac{xy^2}{x^2 + y^4} & (x,y) \neq (0,0), \\ 0 & (x,y) = (0,0). \end{cases} Here, \frac{\partial f}{\partial x}(0,0) = 0 and \frac{\partial f}{\partial y}(0,0) = 0, but along the path y = x^{1/2} (equivalently x = y^2) the function takes the constant value 1/2, so f is not even continuous at (0,0) and the limit in the differentiability definition fails to be zero; hence f is not differentiable at (0,0). In this case, the partial derivatives exist but are not continuous at the origin. An intermediate notion between partial derivatives and total differentiability is Gâteaux differentiability, which requires that the directional derivative exists in every direction: for all h \in \mathbb{R}^n, \lim_{t \to 0} \frac{f(a + t h) - f(a)}{t} exists (and is linear in h). This is weaker than Fréchet differentiability because it only ensures linear approximation along rays, not uniformly over all small perturbations; Fréchet differentiability implies Gâteaux differentiability, but the converse fails in general.
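This counterexample is easy to probe numerically: both partials at the origin vanish (f is identically 0 on both axes), yet f is constantly 1/2 along the parabola x = y^2. A short sketch:

```python
def f(x, y):
    """The counterexample f(x, y) = x*y^2 / (x^2 + y^4), with f(0, 0) = 0."""
    return x * y ** 2 / (x ** 2 + y ** 4) if (x, y) != (0.0, 0.0) else 0.0

# Both partial derivatives at the origin are 0, since f vanishes on the axes.
h = 1e-6
fx0 = (f(h, 0.0) - f(0.0, 0.0)) / h   # exactly 0.0
fy0 = (f(0.0, h) - f(0.0, 0.0)) / h   # exactly 0.0

# Yet along x = t^2, y = t the function is constantly 1/2, so f is not
# even continuous (hence not differentiable) at the origin.
for t in (1e-1, 1e-3, 1e-6):
    print(f(t ** 2, t))  # 0.5 every time
```

Approaching the origin along straight lines gives limit 0, which is why the existence of every directional derivative still fails to detect the discontinuity along the parabola.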

Higher-order differentiability

In multivariable calculus, higher partial derivatives of a function f: \mathbb{R}^n \to \mathbb{R} are obtained by iterating partial differentiation. The second-order partial derivatives include the pure forms \frac{\partial^2 f}{\partial x_i^2} and the mixed forms \frac{\partial^2 f}{\partial x_i \partial x_j} for i \neq j, defined as \frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial}{\partial x_j} \left( \frac{\partial f}{\partial x_i} \right). Higher-order partials of order k \geq 3 are defined inductively in a similar manner, with the order of differentiation generally mattering only if continuity assumptions fail. A key result concerning mixed partial derivatives is Clairaut's theorem, also known as Schwarz's theorem, which asserts that if f_{xy} and f_{yx} both exist in a neighborhood of a point and are continuous at that point, then f_{xy} = f_{yx} at the point. This symmetry holds for higher-order mixed partials under analogous conditions, allowing the order of differentiation to be rearranged freely. The theorem, originally explored by Alexis Clairaut in the 18th century and rigorously proved by Hermann Schwarz in 1873, underpins much of multivariable analysis by ensuring consistency in derivative computations. Functions of class C^k in multiple variables extend the single-variable notion: a function f: \Omega \subseteq \mathbb{R}^n \to \mathbb{R}^m is in C^k(\Omega, \mathbb{R}^m) if all partial derivatives up to order k exist and are continuous on \Omega. The space C^k(\Omega) for scalar-valued functions forms a vector space, and C^\infty(\Omega) = \bigcap_{k=0}^\infty C^k(\Omega) denotes the smooth functions. These classes are essential for studying regularity in solutions to partial differential equations and for local approximations via Taylor expansions.
The higher-order differentials d^k f(a) at a point a \in \mathbb{R}^n for a C^k function f are interpreted as continuous k-linear maps from (\mathbb{R}^n)^k to \mathbb{R}^m, symmetric under permutation of arguments due to Clairaut's theorem. This multilinear structure captures the intrinsic nature of higher derivatives, generalizing the first differential df(a) (the Jacobian) to tensor-like objects that act on tuples of vectors h_1, \dots, h_k via d^k f(a)(h_1, \dots, h_k). Such representations facilitate coordinate-free treatments in differential geometry and analysis. Taylor's theorem in several variables provides a local polynomial approximation using these higher differentials. For a C^{k+1} function f: \mathbb{R}^n \to \mathbb{R} at a \in \mathbb{R}^n, f(x) = \sum_{j=0}^k \frac{1}{j!} d^j f(a) \bigl( (x - a)^{\otimes j} \bigr) + R_k(x, a), where the sum is over multi-indices or symmetric multilinear evaluations, and the remainder R_k satisfies R_k(x, a) = o(\|x - a\|^k) as x \to a. This quantifies the accuracy of local polynomial approximation and is pivotal for numerical analysis and optimization in multiple dimensions. Without continuity of the second partials, mixed derivatives may exist but differ, violating the symmetry. A standard counterexample is the function f(x, y) = \begin{cases} \frac{xy(x^2 - y^2)}{x^2 + y^2} & (x, y) \neq (0,0), \\ 0 & (x, y) = (0,0), \end{cases} where the first partials f_x and f_y exist everywhere and are continuous at (0,0), but f_{xy}(0,0) = -1 while f_{yx}(0,0) = 1. This illustrates the necessity of continuity in Clairaut's theorem, as the second mixed partials are discontinuous at the origin. Similar constructions exist for higher orders, highlighting the role of regularity conditions.
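The asymmetric mixed partials of this counterexample at the origin can be reproduced with nested central differences, taking the inner step much smaller than the outer one so each first partial is well resolved. A sketch:

```python
def f(x, y):
    """Standard counterexample to symmetry of mixed partials without
    continuity: f = x*y*(x^2 - y^2)/(x^2 + y^2), f(0, 0) = 0."""
    return x * y * (x ** 2 - y ** 2) / (x ** 2 + y ** 2) if (x, y) != (0.0, 0.0) else 0.0

def fx(x, y, h=1e-7):
    """First partial in x by central difference."""
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y, h=1e-7):
    """First partial in y by central difference."""
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

k = 1e-3  # outer step, much larger than the inner step h = 1e-7
fxy = (fx(0.0, k) - fx(0.0, -k)) / (2 * k)   # d/dy of f_x at the origin
fyx = (fy(k, 0.0) - fy(-k, 0.0)) / (2 * k)   # d/dx of f_y at the origin
print(fxy, fyx)  # close to -1 and +1
```

This matches the analytic values f_x(0, y) = -y and f_y(x, 0) = x, whose derivatives at the origin are -1 and +1 respectively.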

Complex differentiable functions

Holomorphic functions

In complex analysis, a function f: \Omega \to \mathbb{C}, where \Omega \subset \mathbb{C} is an open set, is said to be holomorphic at a point z_0 \in \Omega if the complex derivative f'(z_0) = \lim_{h \to 0} \frac{f(z_0 + h) - f(z_0)}{h} exists, with the limit taken over h \in \mathbb{C}, h \neq 0. This limit must be independent of the path or direction by which h approaches 0 in the complex plane, distinguishing holomorphicity from weaker notions of differentiability in real analysis. To express this condition in terms of real variables, write z = x + iy and f(z) = u(x, y) + iv(x, y), where u and v are real-valued functions. The function f is holomorphic at z_0 = x_0 + iy_0 if and only if u and v are real-differentiable at (x_0, y_0) and satisfy the Cauchy-Riemann equations \frac{\partial u}{\partial x}(x_0, y_0) = \frac{\partial v}{\partial y}(x_0, y_0), \quad \frac{\partial u}{\partial y}(x_0, y_0) = -\frac{\partial v}{\partial x}(x_0, y_0). A sufficient condition for holomorphicity on an open set is that the partial derivatives exist and are continuous there and the Cauchy-Riemann equations hold. This equivalence highlights how holomorphicity imposes a rigid relationship between the real and imaginary parts, ensuring the derivative is well-defined in the complex sense. Classic examples of holomorphic functions include the exponential function \exp(z) = e^x (\cos y + i \sin y), which satisfies the Cauchy-Riemann equations everywhere and is thus entire (holomorphic on all of \mathbb{C}). Similarly, the sine function \sin(z), defined by its power series \sum_{n=0}^\infty (-1)^n z^{2n+1}/(2n+1)!, and the cosine function \cos(z) are entire. In contrast, the complex conjugate \overline{z} = x - iy fails the Cauchy-Riemann equations at every point, as \partial u/\partial x = 1 but \partial v/\partial y = -1, rendering it nowhere holomorphic.
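The direction-dependence that kills holomorphicity for \overline{z} shows up immediately in difference quotients. A minimal sketch (the helper `quotient` is illustrative), using a power-of-two real step so the conjugate quotients come out exactly:

```python
import cmath

# The complex difference quotient (f(z0 + h) - f(z0)) / h must have the
# same limit for every direction of approach. For f(z) = conj(z) it does
# not: a real step h gives 1, an imaginary step gives -1.

def quotient(f, z0, h):
    return (f(z0 + h) - f(z0)) / h

z0 = 1 + 2j
h = 2.0 ** -20  # power of two, so z0 + h is exactly representable
real_dir = quotient(lambda z: z.conjugate(), z0, h)        # -> 1.0
imag_dir = quotient(lambda z: z.conjugate(), z0, h * 1j)   # -> -1.0
print(real_dir, imag_dir)

# For the entire function exp, the quotient is direction-independent and
# approximates the derivative exp(z0) either way.
q1 = quotient(cmath.exp, z0, 1e-6)
q2 = quotient(cmath.exp, z0, 1e-6j)
print(q1, q2, cmath.exp(z0))
```

The conjugate's quotient equals \overline{h}/h, which is +1 on the real axis and -1 on the imaginary axis for every step size, so no limit exists.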
A key property of holomorphic functions is their local analyticity: if f is holomorphic at z_0, then there exists a disk around z_0 in which f can be represented by a convergent power series f(z) = \sum_{n=0}^\infty a_n (z - z_0)^n, with radius of convergence at least as large as the distance from z_0 to the nearest singularity. This representation underscores that holomorphic functions are infinitely differentiable and analytic throughout their domain of holomorphicity.

Relation to real differentiability

Viewing the complex plane \mathbb{C} as topologically equivalent to \mathbb{R}^2 via the identification z = x + iy \leftrightarrow (x, y), a holomorphic function f: \mathbb{C} \to \mathbb{C} induces a map \mathbb{R}^2 \to \mathbb{R}^2 that is infinitely differentiable in the real sense, i.e., C^\infty. This follows from the fact that holomorphicity implies the existence of a power series expansion locally, which is smooth as a real function. The converse does not hold: there are functions that are C^\infty when viewed as real maps but fail to be holomorphic. A standard counterexample is the complex conjugation f(z) = \bar{z}, which is real analytic (hence C^\infty) everywhere but not complex differentiable at any point, as the limit defining the derivative depends on the direction of approach. The Wirtinger derivatives \frac{\partial}{\partial z} = \frac{1}{2} \left( \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right) and \frac{\partial}{\partial \bar{z}} = \frac{1}{2} \left( \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right) formalize this distinction by treating z and \bar{z} as independent variables. A real-differentiable function f is holomorphic on an open set if and only if \frac{\partial f}{\partial \bar{z}} = 0 there, meaning it depends only on z and not on \bar{z}. A key consequence of holomorphicity is conformality: at points where f'(z) \neq 0, the mapping preserves oriented angles between curves, scaling them by |f'(z)| while rotating by \arg f'(z). This angle-preserving property arises directly from the complex derivative acting as a similarity transformation in the real plane. Historically, Bernhard Riemann advanced the understanding of analyticity's rigidity beyond real smoothness through his 1851 doctoral dissertation, where he introduced Riemann surfaces to resolve branch points of multi-valued analytic functions, emphasizing that holomorphicity enforces global constraints absent in merely smooth real functions.
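The criterion \partial f/\partial \bar{z} = 0 can be checked numerically by combining central differences in the x and y directions. A small sketch (the helper `dbar` is illustrative, not a library function):

```python
# Numerical Wirtinger derivative d/dz-bar = (1/2)(d/dx + i d/dy), applied
# to f viewed as a map of z = x + iy. It is ~0 for holomorphic functions
# and 1 for complex conjugation.

def dbar(f, z, h=1e-6):
    dfdx = (f(z + h) - f(z - h)) / (2 * h)          # step along the x-axis
    dfdy = (f(z + h * 1j) - f(z - h * 1j)) / (2 * h)  # step along the y-axis
    return 0.5 * (dfdx + 1j * dfdy)

z0 = 0.3 + 0.7j
print(dbar(lambda z: z * z, z0))             # ~ 0: z^2 is holomorphic
print(dbar(lambda z: z.conjugate(), z0))     # ~ 1: conj depends on z-bar only
```

For f(z) = z^2 the two directional contributions cancel exactly in the limit (2z and i·2iz = -2z), while for \bar{z} they reinforce to 1, matching the hand computation with the Wirtinger operator.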

Differentiable functions on manifolds

Tangent spaces and derivations

A smooth manifold is defined as a topological space (typically required to be Hausdorff and second countable) that is locally homeomorphic to \mathbb{R}^n for some fixed n, equipped with an atlas of charts where the transition maps between overlapping charts are smooth (i.e., infinitely differentiable) functions. This structure allows the manifold to inherit the local properties of Euclidean space while enabling global analysis on potentially curved spaces. The smoothness condition ensures that differentiable functions can be defined consistently across the manifold. The tangent space T_p M at a point p \in M on a smooth manifold M is the vector space of all derivations at p. A derivation is a linear map D: C^\infty(M) \to \mathbb{R} from the space of smooth real-valued functions on M to the reals that satisfies the Leibniz rule: D(fg) = f(p) \, Dg + g(p) \, Df for all f, g \in C^\infty(M). This definition abstracts the notion of directional derivatives, capturing tangent vectors as operators that differentiate functions while respecting the product rule. The dimension of T_p M equals the dimension of M. For a smooth map f: M \to N between smooth manifolds, the differential (or pushforward) at p \in M is the linear map df_p: T_p M \to T_{f(p)} N defined by df_p(D)(h) = D(h \circ f) for any D \in T_p M and h \in C^\infty(N). This construction extends the chain rule to manifolds, measuring how f transports tangent vectors from M to N. A smooth map f is an immersion if each df_p is injective and a submersion if each df_p is surjective; it is a diffeomorphism if it is bijective with a smooth inverse. On the manifold \mathbb{R}^n, the tangent space T_p \mathbb{R}^n identifies naturally with \mathbb{R}^n itself, where derivations correspond to directional derivatives along vectors in \mathbb{R}^n. The standard basis consists of the partial derivative operators \frac{\partial}{\partial x^i} \big|_p, which act on smooth functions f: \mathbb{R}^n \to \mathbb{R} by \frac{\partial f}{\partial x^i}(p). This recovers the familiar Jacobian matrix representation for the differential in the Euclidean case. Local coordinate charts on a manifold provide bases for tangent spaces.
Given a chart (U, \phi) around p \in U with \phi(p) = (x^1(p), \dots, x^n(p)) and \phi: U \to \mathbb{R}^n, the coordinate vector fields \frac{\partial}{\partial x^i} \big|_p form a basis for T_p M, defined by \frac{\partial}{\partial x^i} \big|_p (f) = \frac{\partial (f \circ \phi^{-1})}{\partial x^i} (\phi(p)) for f \in C^\infty(M). These basis elements allow tangent vectors to be expressed in local coordinates, facilitating computations of differentials via the chain rule.
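On \mathbb{R}^n the derivation picture is concrete: a tangent vector v acts on functions as the directional derivative, and the Leibniz rule above can be verified numerically. A minimal sketch (function names and the chosen test functions are illustrative):

```python
import math

# A tangent vector v at p on R^n acts on smooth functions as the
# directional derivative D_v(f) = lim_{t->0} (f(p + t*v) - f(p)) / t.
# We check the Leibniz rule D(f*g) = f(p)*D(g) + g(p)*D(f) numerically.

def directional(f, p, v, t=1e-6):
    shifted = [pi + t * vi for pi, vi in zip(p, v)]
    return (f(shifted) - f(p)) / t

p = [1.0, 2.0]
v = [3.0, -1.0]
f = lambda q: q[0] ** 2 * q[1]            # f(x, y) = x^2 y
g = lambda q: math.sin(q[0]) + q[1]       # g(x, y) = sin(x) + y

lhs = directional(lambda q: f(q) * g(q), p, v)
rhs = f(p) * directional(g, p, v) + g(p) * directional(f, p, v)
print(lhs, rhs)  # agree up to the finite-difference error
```

The agreement (up to O(t) discretization error) reflects that D_v is a derivation at p, which is exactly the property the abstract definition of T_p M axiomatizes.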

Smooth structures

A smooth structure on an n-dimensional topological manifold M is specified by a smooth atlas, consisting of a family of charts \{(U_\alpha, \phi_\alpha)\}_{\alpha \in A} that covers M, where each U_\alpha \subset M is open, \phi_\alpha: U_\alpha \to \mathbb{R}^n is a homeomorphism onto an open subset of \mathbb{R}^n, and the transition maps \phi_\alpha \circ \phi_\beta^{-1}: \phi_\beta(U_\alpha \cap U_\beta) \to \phi_\alpha(U_\alpha \cap U_\beta) are infinitely differentiable (C^\infty) on their domains. These transition maps ensure compatibility between charts, allowing the manifold to inherit a global notion of smoothness from the local Euclidean coordinates. An atlas is maximal if it contains all charts compatible with its transition maps, and two atlases define the same smooth structure if their union is also a smooth atlas. More generally, a C^k structure for 0 \leq k < \infty is defined analogously, but with transition maps that are k-times continuously differentiable. A manifold is called C^k if it admits a C^k atlas, and smooth (or C^\infty) if it admits a C^\infty atlas. This hierarchy extends the differentiability classes from Euclidean spaces to manifolds, where the differentiability class of functions and maps is determined locally via the atlas. On a smooth manifold, the tangent spaces provide local linear approximations, enabling the definition of derivatives in a coordinate-independent way. A map f: M \to N between smooth manifolds M and N (of dimensions m and n) is smooth if, for every pair of charts (U, \phi) on M and (V, \psi) on N with f(U) \subset V, the coordinate representation \psi \circ f \circ \phi^{-1}: \phi(U) \to \psi(V) is a smooth map between open subsets of \mathbb{R}^m and \mathbb{R}^n. Similarly, f is C^k if these representations are C^k. This local criterion ensures that differentiability is well-defined globally, independent of chart choices, as long as the transition maps are smooth. A concrete example is the 2-sphere S^2, which admits a smooth atlas via stereographic projection charts.
Let U_N = S^2 \setminus \{\text{north pole}\} with \phi_N: U_N \to \mathbb{R}^2 the stereographic projection from the north pole, mapping a point p to the intersection of the line from the north pole through p with the equatorial plane. Similarly, define U_S = S^2 \setminus \{\text{south pole}\} with \phi_S the projection from the south pole. The transition map on \phi_S(U_N \cap U_S) = \mathbb{R}^2 \setminus \{0\} is \phi_N \circ \phi_S^{-1}(x,y) = \left( \frac{x}{x^2 + y^2}, \frac{y}{x^2 + y^2} \right), which is smooth. These two charts form an atlas covering S^2, endowing it with a smooth structure compatible with its standard topology. To construct global smooth objects from local data, partitions of unity play a crucial role. For any open cover \{U_\alpha\} of a smooth manifold M, there exists a smooth partition of unity \{\rho_\alpha\} subordinate to the cover, meaning each \rho_\alpha: M \to [0,1] is smooth, \operatorname{supp}(\rho_\alpha) \subset U_\alpha, and \sum \rho_\alpha = 1. This tool allows gluing locally defined smooth functions or maps—such as those agreeing on overlaps—into a single global smooth map on M, ensuring the structure is coherent. Although smooth structures are unique up to diffeomorphism for spheres in dimensions 1, 2, 3, 5, and 6, the existence of exotic smooth structures on the 4-sphere S^4 remains an open problem. Higher dimensions exhibit exotic smooth structures: distinct smooth structures on the same topological manifold that are not diffeomorphic. John Milnor's 1956 discovery of exotic smooth structures on the 7-sphere, homeomorphic but not diffeomorphic to the standard one, marked the beginning of this phenomenon. In dimension 4, there are uncountably many exotic smooth structures on \mathbb{R}^4, first rigorously established by Clifford Taubes in 1987 using gauge theory to distinguish them from the standard structure. These examples highlight that smooth structures are not always unique, impacting the study of differentiable functions on such spaces.
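The transition-map formula can be verified by composing explicit stereographic projections. A sketch, assuming one common convention (projection onto the equatorial plane, north pole at (0, 0, 1)):

```python
# Stereographic charts on the unit sphere S^2 in R^3, under the convention
# that phi_N projects from the north pole (0, 0, 1) and phi_S from the
# south pole (0, 0, -1), both onto the equatorial plane z = 0.

def phi_N(p):
    x, y, z = p
    return (x / (1 - z), y / (1 - z))

def phi_S_inv(u, v):
    """Inverse of the south-pole projection: a point of S^2 minus the pole."""
    r2 = u * u + v * v
    return (2 * u / (1 + r2), 2 * v / (1 + r2), (1 - r2) / (1 + r2))

def transition(u, v):
    """phi_N o phi_S^{-1}, claimed to equal (u, v) / (u^2 + v^2)."""
    return phi_N(phi_S_inv(u, v))

u, v = 1.0, 2.0
print(transition(u, v))  # matches (u/(u^2+v^2), v/(u^2+v^2)) = (0.2, 0.4)
```

Since the composed map is a rational function with nonvanishing denominator on \mathbb{R}^2 \setminus \{0\}, it is smooth there, which is exactly the chart-compatibility condition.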