In mathematics, a differentiable function is a function whose derivative exists at every point in its domain, meaning it can be locally approximated by a linear function at those points.[1] For a function f: (a, b) \to \mathbb{R} of one real variable, differentiability at a point x_0 \in (a, b) requires that the limit \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h} exists and is finite, yielding the derivative f'(x_0), which represents the slope of the tangent line to the graph of f at x_0.[2] This concept forms the foundation of calculus, enabling the analysis of rates of change and instantaneous behavior of functions.[3]

A key property of differentiable functions is that they are necessarily continuous at every point in their domain, as the existence of the derivative implies the function values approach the point value without jumps.[1] However, the converse does not hold: continuous functions are not always differentiable, as illustrated by the absolute value function f(x) = |x|, which is continuous everywhere but not differentiable at x = 0 due to the sharp corner in its graph.[4] In higher dimensions, for a function f: \mathbb{R}^n \to \mathbb{R}^m, differentiability at a point a means there exists a linear transformation T: \mathbb{R}^n \to \mathbb{R}^m (the differential) such that \lim_{x \to a} \frac{\|f(x) - f(a) - T(x - a)\|}{\|x - a\|} = 0, providing an affine approximation represented by the Jacobian matrix of partial derivatives.[5] Notable theorems for differentiable functions include the chain rule, which expresses the derivative of a composition of functions in terms of the derivatives of the composed functions,[6] and the mean value theorem, which guarantees the existence of points where the average rate of change equals the instantaneous rate.[3] These properties underpin applications in optimization, physics, and engineering, where smooth approximations model real-world phenomena.[3]
Real single-variable functions
Definition and basic properties
In the context of real analysis, a function f: \mathbb{R} \to \mathbb{R} is said to be differentiable at an interior point a of its domain if the following limit exists:

f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}.

This limit, when it exists, is called the derivative of f at a. Geometrically, f'(a) represents the slope of the tangent line to the graph of y = f(x) at the point (a, f(a)), providing a linear approximation to the function near that point. If the limit exists at every point in an open interval, then f is differentiable on that interval, and the derivative f' is itself a function defined on the same domain. Differentiability at a implies continuity at a. Polynomials are differentiable everywhere; for instance, the derivative of f(x) = x^2 is f'(x) = 2x, obtained by direct computation of the limit, and differentiating a polynomial of higher degree yields a polynomial of one lower degree.[7]

In contrast, the absolute value function f(x) = |x| is not differentiable at x = 0, as the limit \lim_{h \to 0} \frac{|h|}{h} does not exist: the left-hand limit is -1 while the right-hand limit is 1. Piecewise linear functions, such as f(x) = |x - 1|, are differentiable everywhere except at points of non-smoothness like x = 1, where a "kink" prevents the limit from existing.[7]

To handle boundary points or potential asymmetries, one-sided derivatives are defined. The right-hand derivative at a is \lim_{h \to 0^+} \frac{f(a + h) - f(a)}{h}, and the left-hand derivative is \lim_{h \to 0^-} \frac{f(a + h) - f(a)}{h}. The function is differentiable at a if and only if both one-sided derivatives exist and are equal. For example, for f(x) = |x| at x = 0, the right-hand derivative is 1 and the left-hand derivative is -1, confirming non-differentiability.
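These one-sided limits can also be probed numerically. The following is a minimal Python sketch (not from the cited sources; the helper names are mine) that estimates both one-sided derivatives of f(x) = |x| at x = 0 with difference quotients.

```python
# Minimal sketch: one-sided difference quotients approximating the
# one-sided derivatives of f(x) = |x| at x = 0.

def right_derivative(f, a, h=1e-8):
    """Forward difference quotient for the right-hand derivative."""
    return (f(a + h) - f(a)) / h

def left_derivative(f, a, h=1e-8):
    """Backward difference quotient for the left-hand derivative."""
    return (f(a - h) - f(a)) / (-h)

f = abs
print(right_derivative(f, 0.0))  # ~ +1.0
print(left_derivative(f, 0.0))   # ~ -1.0
# The two one-sided limits disagree, so |x| is not differentiable at 0.
```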
Relation to continuity
A fundamental result in calculus states that if a function f: \mathbb{R} \to \mathbb{R} is differentiable at a point a \in \mathbb{R}, then f is continuous at a.[1]

To prove this theorem using the \epsilon-\delta definition, suppose f'(a) = L exists. For any \epsilon > 0, first select \delta_1 > 0 such that if 0 < |x - a| < \delta_1, then \left| \frac{f(x) - f(a)}{x - a} - L \right| < 1. By the triangle inequality, this implies \left| \frac{f(x) - f(a)}{x - a} \right| < |L| + 1. Thus, |f(x) - f(a)| = |x - a| \left| \frac{f(x) - f(a)}{x - a} \right| < |x - a| (|L| + 1). Now choose \delta = \min(\delta_1, \epsilon / (|L| + 1)). If |x - a| < \delta, then |f(x) - f(a)| < \epsilon, establishing continuity at a.[8]

The converse does not hold: continuity at a point does not imply differentiability there. A striking counterexample is the Weierstrass function, defined as

f(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x),

where 0 < a < 1, b is a positive odd integer, and ab > 1 + \frac{3\pi}{2}. This function is continuous on \mathbb{R} but differentiable at no point, serving as the first published instance of such "pathological" behavior.[9] Karl Weierstrass introduced it in a lecture in 1872, highlighting that differentiability imposes a stricter condition than mere continuity or even uniform continuity on bounded intervals.[10]

This one-way implication was implicitly understood by early pioneers of calculus. Isaac Newton and Gottfried Wilhelm Leibniz, who independently developed the foundations of the subject in the late 17th century, recognized that the existence of tangent lines—central to their methods—presupposes the continuity of the curves they studied.
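No finite computation can establish nowhere-differentiability, but partial sums of the series make the failure visible. The hedged Python sketch below uses the illustrative parameters a = 0.5 and b = 13 (so ab = 6.5 > 1 + 3\pi/2 \approx 5.71) and truncates the series at 50 terms; the difference quotients at a fixed point grow in magnitude rather than converging as h shrinks.

```python
import math

# Qualitative sketch: partial sums of the Weierstrass series with the
# illustrative parameters a = 0.5, b = 13 (0 < a < 1, b odd, ab > 1 + 3*pi/2).
# Truncation and floating-point limits make this a demonstration, not a proof.

def weierstrass(x, a=0.5, b=13, terms=50):
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

x0 = 0.3
for h in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
    q = (weierstrass(x0 + h) - weierstrass(x0)) / h
    print(f"h = {h:.0e}: difference quotient = {q:.1f}")
# The quotients grow in magnitude instead of settling toward a limit.
```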
Differentiability classes
Functions are classified into differentiability classes based on the order of their continuous derivatives, denoted C^k for k = 0, 1, 2, \dots, \infty, where k indicates the number of times the function can be differentiated with all derivatives remaining continuous on the domain.[11]

The class C^0 consists of the continuous functions: a function f: \mathbb{R} \to \mathbb{R} belongs to C^0(\mathbb{R}) if it is continuous at every point of its domain, with no differentiability required.[11]

Functions in the class C^1 are continuously differentiable, meaning the function itself and its first derivative f' are both continuous on the domain. For higher orders, a function f is in C^k if it has continuous derivatives up to order k, that is, f, f', f'', \dots, f^{(k)} are all continuous.[11] Polynomials, for instance, are in C^\infty because they possess derivatives of all orders, each of which is again a polynomial and hence continuous everywhere.

The class C^\infty, known as the smooth functions, comprises functions that are infinitely differentiable with all derivatives continuous.[11] While all analytic functions are smooth, the converse does not hold; non-analytic smooth functions exist, such as the function defined by \psi(x) = e^{-1/x^2} for x > 0 and \psi(x) = 0 for x \leq 0, which is C^\infty on \mathbb{R} but not analytic at x = 0.

Higher-order derivatives can be expressed using the forward difference operator \Delta, where \Delta f(a; h) = f(a+h) - f(a) and \Delta^n f(a; h) = \Delta(\Delta^{n-1} f(a; h); h), yielding

f^{(n)}(a) = \lim_{h \to 0} \frac{\Delta^n f(a; h)}{h^n},

provided the limit exists.

Whitney's extension theorem in one variable provides conditions under which a function defined on a closed subset of \mathbb{R} with prescribed derivatives up to order k can be extended to a C^k function on all of \mathbb{R}, ensuring compatibility of the jet data through remainder estimates.
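The forward-difference formula translates directly into code. This is a minimal sketch (helper names are mine) that builds \Delta^n f(a; h) from the standard binomial expansion \Delta^n f(a; h) = \sum_{k=0}^n (-1)^{n-k} \binom{n}{k} f(a + kh) and uses it to estimate a third derivative.

```python
from math import comb, sin, cos

# Minimal sketch: the n-th forward difference via its binomial expansion,
# then f^(n)(a) is approximated by the quotient Δ^n f(a; h) / h^n.

def nth_forward_difference(f, a, h, n):
    return sum((-1) ** (n - k) * comb(n, k) * f(a + k * h)
               for k in range(n + 1))

def nth_derivative_estimate(f, a, n, h=1e-3):
    return nth_forward_difference(f, a, h, n) / h ** n

a = 0.7
print(nth_derivative_estimate(sin, a, 3))  # third derivative of sin is -cos
print(-cos(a))                             # exact value ≈ -0.76484 for comparison
```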
Multivariable real functions
Partial derivatives and total differentiability
In multivariable calculus, for a function f: \mathbb{R}^n \to \mathbb{R}^m, the partial derivative with respect to the i-th variable at a point x = (x_1, \dots, x_n) is defined as

\frac{\partial f}{\partial x_i}(x) = \lim_{h \to 0} \frac{f(x + h e_i) - f(x)}{h},

where e_i is the i-th standard basis vector in \mathbb{R}^n, provided the limit exists.[12] This measures the rate of change of f along the direction of the i-th coordinate axis, treating the other variables as constant. For example, consider f(x,y) = x^2 + y^2: the partial derivative with respect to x is \frac{\partial f}{\partial x} = 2x, obtained by differentiating x^2 + y^2 (with y fixed) as if it were a single-variable function of x, and similarly \frac{\partial f}{\partial y} = 2y.[12]

A stronger condition than the mere existence of partial derivatives is total differentiability, also known as Fréchet differentiability. A function f: \mathbb{R}^n \to \mathbb{R}^m is (Fréchet) differentiable at a point a if there exists a continuous linear map Df(a): \mathbb{R}^n \to \mathbb{R}^m such that

\lim_{h \to 0} \frac{\|f(a + h) - f(a) - Df(a) h \|}{\|h\|} = 0,

where \|\cdot\| denotes the Euclidean norm (or any equivalent norm).[13] This means Df(a) provides the best linear approximation to f near a, with the error term o(\|h\|) vanishing faster than linearly as h \to 0. The single-variable derivative is the special case of this definition with n = m = 1.[13]

For differentiable functions f: \mathbb{R}^n \to \mathbb{R}^m, the linear map Df(a) is represented by the Jacobian matrix J_f(a), an m \times n matrix whose i-th row consists of the partial derivatives of the i-th component function of f with respect to each variable, evaluated at a.[14] That is, the (i,j)-entry of J_f(a) is \frac{\partial f_i}{\partial x_j}(a), and Df(a) h = J_f(a) h for h \in \mathbb{R}^n. This matrix generalizes the single-variable derivative to higher dimensions, capturing the full linear behavior in all directions.[14]

The existence of all partial derivatives at a point does not guarantee total differentiability; continuity of the partials supplies a sufficient additional condition. Specifically, if all partial derivatives of f: \mathbb{R}^n \to \mathbb{R}^m exist in a neighborhood of a and are continuous at a, then f is differentiable at a with Df(a) given by the Jacobian matrix.[15] However, a counterexample shows that partial derivatives can exist everywhere (including at the origin) without implying differentiability: consider

f(x,y) =
\begin{cases}
\frac{xy^2}{x^2 + y^4} & (x,y) \neq (0,0), \\
0 & (x,y) = (0,0).
\end{cases}

Here, \frac{\partial f}{\partial x}(0,0) = 0 and \frac{\partial f}{\partial y}(0,0) = 0, but along the path y = x^{1/2} (equivalently x = y^2) the function is identically \tfrac{1}{2}, so the limit in the differentiability definition fails to be zero and f is not even continuous, hence not differentiable, at (0,0).[16] In this case, the partial derivatives exist but are not continuous at the origin.

An intermediate notion between partial derivatives and total differentiability is Gâteaux differentiability, which requires that the directional derivative exist in every direction: for all h \in \mathbb{R}^n,

\lim_{t \to 0} \frac{f(a + t h) - f(a)}{t}

exists (and is linear in h).[17] This is weaker than Fréchet differentiability because it only ensures linear approximation along rays, not uniformly over all small perturbations; Fréchet differentiability implies Gâteaux differentiability, but the converse fails in general.[18]
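The counterexample can be probed numerically. In the hedged sketch below (step sizes and sample points are illustrative choices), both partial difference quotients at the origin vanish, while along the curve x = y^2 the error quotient of the differentiability definition blows up.

```python
import math

# Numerical probe of the counterexample f(x, y) = x*y^2 / (x^2 + y^4):
# both partials at the origin are 0, so the only candidate differential is
# the zero map, yet the differentiability quotient along x = y^2 diverges.

def f(x, y):
    return x * y**2 / (x**2 + y**4) if (x, y) != (0.0, 0.0) else 0.0

h = 1e-6
print((f(h, 0.0) - f(0.0, 0.0)) / h)  # ∂f/∂x(0,0) ≈ 0
print((f(0.0, h) - f(0.0, 0.0)) / h)  # ∂f/∂y(0,0) ≈ 0

# Along (x, y) = (t^2, t) the function equals 1/2 identically, so the
# quotient |f - 0 - 0| / ||(x, y)|| ≈ (1/2)/|t| → ∞ as t → 0.
for t in [1e-1, 1e-2, 1e-3]:
    x, y = t**2, t
    print(f(x, y) / math.hypot(x, y))
```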
Higher-order differentiability
In multivariable calculus, higher partial derivatives of a function f: \mathbb{R}^n \to \mathbb{R} are obtained by iterating partial differentiation. The second-order partial derivatives include the pure forms \frac{\partial^2 f}{\partial x_i^2} and the mixed forms \frac{\partial^2 f}{\partial x_i \partial x_j} for i \neq j, defined as \frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial}{\partial x_j} \left( \frac{\partial f}{\partial x_i} \right).[19] Higher-order partials of order k \geq 3 are defined inductively in the same manner; the order of differentiation matters only when continuity assumptions fail.[19]

A key result concerning mixed partial derivatives is Clairaut's theorem, also known as Schwarz's theorem, which asserts that if f_{xy} and f_{yx} both exist in a neighborhood of a point and are continuous at that point, then f_{xy} = f_{yx} there.[19] This symmetry holds for higher-order mixed partials under analogous continuity conditions, allowing the order of differentiation to be rearranged freely.[20] The theorem, originally explored by Alexis Clairaut in the 18th century and rigorously proved by Hermann Schwarz in the 19th, underpins much of multivariable analysis by ensuring consistency in derivative computations.[19]

Functions of class C^k in multiple variables extend the single-variable notion: a function f: \Omega \subseteq \mathbb{R}^n \to \mathbb{R}^m is in C^k(\Omega, \mathbb{R}^m) if all partial derivatives up to order k exist and are continuous on \Omega.[21] The space C^k(\Omega) for scalar-valued functions forms a vector space, and C^\infty(\Omega) = \bigcap_{k=0}^\infty C^k(\Omega) denotes the smooth functions. These classes are essential for studying regularity in solutions to partial differential equations and for local approximations via Taylor expansions.[21]

The higher-order differentials d^k f(a) at a point a \in \mathbb{R}^n of a C^k function f are interpreted as continuous k-linear maps from (\mathbb{R}^n)^k to \mathbb{R}^m, symmetric under permutation of arguments by Clairaut's theorem.[20] This multilinear structure captures the intrinsic geometry of higher derivatives, generalizing the first differential df(a) (the Jacobian) to tensor-like objects that act on tuples of vectors h_1, \dots, h_k via d^k f(a)(h_1, \dots, h_k). Such representations facilitate coordinate-free treatments in differential geometry and analysis.[20]

Taylor's theorem in several variables provides a local polynomial approximation using these higher differentials. For a C^{k+1} function f: \mathbb{R}^n \to \mathbb{R} at a \in \mathbb{R}^n,

f(x) = \sum_{j=0}^k \frac{1}{j!} d^j f(a) \bigl( (x - a)^{\otimes j} \bigr) + R_k(x, a),

where the sum is over symmetric multilinear evaluations (equivalently, over multi-indices), and the remainder satisfies R_k(x, a) = o(\|x - a\|^k) as x \to a.[20] This expansion quantifies the approximation error and is pivotal for asymptotic analysis and optimization in multiple dimensions.[20]

Without continuity of the second partials, mixed derivatives may exist but differ, violating symmetry. A standard counterexample is the function

f(x, y) = \begin{cases}
\frac{xy(x^2 - y^2)}{x^2 + y^2} & (x, y) \neq (0,0), \\
0 & (x, y) = (0,0),
\end{cases}

where the first partials f_x and f_y exist everywhere and are continuous at (0,0), but f_{xy}(0,0) = -1 while f_{yx}(0,0) = 1.[22] This illustrates the necessity of the continuity hypothesis in Clairaut's theorem, as the second mixed partials are discontinuous at the origin. Similar constructions exist for higher orders, highlighting the role of regularity conditions.[22]
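A numerical check of this counterexample is straightforward with nested central differences. In the sketch below, the step sizes h \ll k are my illustrative choices (the small inner step keeps the first-derivative estimates accurate); the two estimates reproduce -1 and +1.

```python
# Numerical check of the Clairaut counterexample: the two mixed partials
# at the origin, estimated by nested central differences, disagree.

def f(x, y):
    return x * y * (x**2 - y**2) / (x**2 + y**2) if (x, y) != (0.0, 0.0) else 0.0

h = 1e-7   # inner step for the first derivatives
k = 1e-3   # outer step for the second derivatives; h << k

def fx(x, y):  # central-difference estimate of ∂f/∂x
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y):  # central-difference estimate of ∂f/∂y
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# f_xy(0,0) = ∂/∂y of f_x at the origin; f_yx(0,0) = ∂/∂x of f_y.
print((fx(0.0, k) - fx(0.0, -k)) / (2 * k))  # ≈ -1
print((fy(k, 0.0) - fy(-k, 0.0)) / (2 * k))  # ≈ +1
```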
Complex differentiable functions
Holomorphic functions
In complex analysis, a function f: \Omega \to \mathbb{C}, where \Omega \subset \mathbb{C} is an open set, is said to be holomorphic at a point z_0 \in \Omega if the complex derivative

f'(z_0) = \lim_{h \to 0} \frac{f(z_0 + h) - f(z_0)}{h}

exists, with the limit taken over h \in \mathbb{C}, h \neq 0. This limit must be independent of the path or direction by which h approaches 0 in the complex plane, distinguishing holomorphicity from weaker notions of differentiability in real multivariable calculus.[23][24]

To express this condition in terms of real variables, write z = x + iy and f(z) = u(x, y) + iv(x, y), where u and v are real-valued functions. The function f is holomorphic at z_0 = x_0 + iy_0 if and only if u and v are real-differentiable at (x_0, y_0) and satisfy the Cauchy-Riemann equations

\frac{\partial u}{\partial x}(x_0, y_0) = \frac{\partial v}{\partial y}(x_0, y_0), \quad \frac{\partial u}{\partial y}(x_0, y_0) = -\frac{\partial v}{\partial x}(x_0, y_0).

A sufficient condition for holomorphicity on an open set is that the partial derivatives exist and are continuous there and the Cauchy-Riemann equations hold. This equivalence highlights how holomorphicity imposes a rigid relationship between the real and imaginary parts, ensuring the derivative is well-defined in the complex sense.[24][25][26]

Classic examples of holomorphic functions include the exponential function \exp(z) = e^x (\cos y + i \sin y), which satisfies the Cauchy-Riemann equations everywhere and is thus entire (holomorphic on all of \mathbb{C}). Similarly, the sine function \sin(z), defined by its power series \sum_{n=0}^\infty (-1)^n z^{2n+1}/(2n+1)!, and the cosine function \cos(z) are entire. In contrast, the complex conjugate \overline{z} = x - iy fails the Cauchy-Riemann equations at every point, since \partial u/\partial x = 1 but \partial v/\partial y = -1, rendering it nowhere holomorphic.[27][28][29]

A key property of holomorphic functions is their local analyticity: if f is holomorphic on an open set containing z_0, then there exists a disk around z_0 in which f can be represented by a convergent power series

f(z) = \sum_{n=0}^\infty a_n (z - z_0)^n,

with radius of convergence at least as large as the distance to the nearest singularity. This representation underscores that holomorphic functions are infinitely differentiable and analytic throughout their domain of holomorphicity.[26][30]
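The path-independence requirement can be illustrated numerically. The hedged sketch below (the direction samples are illustrative) computes complex difference quotients along several approach directions: for the entire function \exp they agree, while for \overline{z} they depend on the direction.

```python
import cmath

# Complex difference quotients along different approach directions.
# For exp they are direction-independent (≈ exp(z0)); for conj(z) the
# quotient equals conj(d)/d and changes with the direction d.

def quotient(f, z0, direction, t=1e-6):
    h = t * direction  # approach 0 along the given direction
    return (f(z0 + h) - f(z0)) / h

z0 = 0.4 + 0.3j
for d in [1, 1j, cmath.exp(1j * cmath.pi / 4)]:
    print("exp :", quotient(cmath.exp, z0, d))
    print("conj:", quotient(lambda z: z.conjugate(), z0, d))
```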
Relation to real differentiability
Viewing the complex plane \mathbb{C} as \mathbb{R}^2 via the identification z = x + iy \leftrightarrow (x, y), a holomorphic function f: \mathbb{C} \to \mathbb{C} induces a map \mathbb{R}^2 \to \mathbb{R}^2 that is infinitely differentiable in the real sense, i.e., C^\infty.[31] This follows from the fact that holomorphicity implies a local power series expansion, which is smooth as a real function.[32]

The converse does not hold: there are functions that are C^\infty when viewed as real maps but fail to be holomorphic. A standard counterexample is complex conjugation f(z) = \bar{z}, which is real analytic (hence C^\infty) everywhere but not complex differentiable at any point, as the limit defining the derivative depends on the direction of approach.[33]

The Wirtinger derivatives \frac{\partial}{\partial z} = \frac{1}{2} \left( \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right) and \frac{\partial}{\partial \bar{z}} = \frac{1}{2} \left( \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right) formalize this distinction by treating z and \bar{z} as independent variables. A continuously differentiable function f is holomorphic on an open set if and only if \frac{\partial f}{\partial \bar{z}} = 0 there, meaning it depends only on z and not on \bar{z}.

A key consequence of holomorphicity is conformality: at points where f'(z) \neq 0, the mapping preserves oriented angles between curves, while infinitesimally scaling lengths by |f'(z)| and rotating by \arg f'(z).[34] This angle-preserving property arises directly from the complex derivative acting as a similarity transformation on the real plane.[35]

Historically, Bernhard Riemann advanced the understanding of analyticity's rigidity beyond real smoothness in his 1851 doctoral dissertation, where he introduced Riemann surfaces to resolve branch points of multi-valued analytic functions, emphasizing that holomorphicity enforces global constraints absent in merely smooth real functions.[36]
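The criterion \partial f / \partial \bar{z} = 0 is easy to test numerically. Here is a minimal sketch, assuming central-difference approximations of \partial/\partial x and \partial/\partial y, applied to the holomorphic z^2 and the non-holomorphic \overline{z}.

```python
# Wirtinger derivative ∂f/∂z̄ = (∂f/∂x + i ∂f/∂y)/2 via central differences:
# it vanishes for the holomorphic z^2 but equals 1 for conj(z).

def wirtinger_dbar(f, z, h=1e-6):
    dfdx = (f(z + h) - f(z - h)) / (2 * h)
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return 0.5 * (dfdx + 1j * dfdy)

z0 = 0.7 - 0.2j
print(wirtinger_dbar(lambda z: z * z, z0))          # ≈ 0
print(wirtinger_dbar(lambda z: z.conjugate(), z0))  # ≈ 1
```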
Differentiable functions on manifolds
Tangent spaces and derivations
A smooth manifold is defined as a topological space (usually required to be Hausdorff and second countable) that is locally homeomorphic to Euclidean space \mathbb{R}^n for some fixed n, equipped with an atlas of charts whose transition maps between overlapping charts are smooth (i.e., infinitely differentiable).[37] This structure allows the manifold to inherit the properties of Euclidean space locally while enabling global analysis on potentially curved spaces.[37] The smoothness condition ensures that differentiable functions can be defined consistently across the manifold.[37]

The tangent space T_p M at a point p \in M of a smooth manifold M is the vector space of all derivations at p. A derivation is a linear map D: C^\infty(M) \to \mathbb{R} from the space of smooth real-valued functions on M to the reals that satisfies the Leibniz rule: D(fg) = f(p) \, Dg + g(p) \, Df for all f, g \in C^\infty(M).[37] This definition abstracts the notion of directional derivatives, capturing tangent vectors as operators that differentiate functions while respecting the product rule. The dimension of T_p M equals the dimension of M.[37]

For a smooth map f: M \to N between smooth manifolds, the differential (or pushforward) at p \in M is the linear map df_p: T_p M \to T_{f(p)} N defined by df_p(D)(h) = D(h \circ f) for any derivation D \in T_p M and any h \in C^\infty(N).[37] This construction extends the chain rule to manifolds, measuring how f transports tangent vectors from M to N. A map f is an immersion if each df_p is injective, a submersion if each is surjective, and a diffeomorphism if it is bijective with smooth inverse.[37]

On the manifold \mathbb{R}^n, the tangent space T_p \mathbb{R}^n identifies naturally with \mathbb{R}^n itself, where derivations correspond to directional derivatives along vectors in \mathbb{R}^n.[37] The standard basis consists of the partial derivative operators \frac{\partial}{\partial x^i} \big|_p, which act on smooth functions f: \mathbb{R}^n \to \mathbb{R} by f \mapsto \frac{\partial f}{\partial x^i}(p). This recovers the familiar Jacobian matrix representation of the differential in the Euclidean case.[37]

Local coordinate charts on a manifold provide bases for tangent spaces. Given a chart (U, \phi) around p \in U with \phi: U \to \mathbb{R}^n and \phi(p) = (x^1(p), \dots, x^n(p)), the coordinate vector fields \frac{\partial}{\partial x^i} \big|_p form a basis for T_p M, defined by \frac{\partial}{\partial x^i} \big|_p (f) = \frac{\partial (f \circ \phi^{-1})}{\partial x^i} (\phi(p)) for f \in C^\infty(M).[37] These basis elements allow tangent vectors to be expressed in local coordinates, facilitating computations of differentials via the chain rule.[37]
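The Leibniz rule for derivations can be verified concretely in the Euclidean case. In the hedged sketch below, the derivation is the directional derivative along an illustrative vector v at a point p of \mathbb{R}^2 (all names are mine), and the identity D(fg) = f(p)\,Dg + g(p)\,Df is checked numerically.

```python
# A directional derivative at a fixed point p of R^2 is a derivation:
# linear in f and satisfying the Leibniz rule. Checked by central differences.

p = (0.5, -0.3)
v = (1.0, 2.0)  # tangent vector defining the derivation D = d/dt f(p + t v)|_0

def D(f, h=1e-6):
    """Directional derivative of f at p along v (central difference)."""
    fp = f(p[0] + h * v[0], p[1] + h * v[1])
    fm = f(p[0] - h * v[0], p[1] - h * v[1])
    return (fp - fm) / (2 * h)

f = lambda x, y: x * y
g = lambda x, y: x + y**2
fg = lambda x, y: f(x, y) * g(x, y)

# Leibniz rule: D(fg) = f(p) D(g) + g(p) D(f); both prints give ≈ 0.443.
print(D(fg))
print(f(*p) * D(g) + g(*p) * D(f))
```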
Smooth structures
A smooth structure on an n-dimensional topological manifold M is specified by a smooth atlas, a family of charts \{(U_\alpha, \phi_\alpha)\}_{\alpha \in A} covering M, where each U_\alpha \subset M is open, \phi_\alpha: U_\alpha \to \mathbb{R}^n is a homeomorphism onto an open subset of \mathbb{R}^n, and the transition maps \phi_\alpha \circ \phi_\beta^{-1}: \phi_\beta(U_\alpha \cap U_\beta) \to \phi_\alpha(U_\alpha \cap U_\beta) are infinitely differentiable (C^\infty) on their domains.[37] These transition maps ensure compatibility between charts, allowing the manifold to inherit a global notion of smoothness from the local Euclidean coordinates. An atlas is maximal if it contains every chart compatible with it, and two atlases define the same smooth structure if their union is again a smooth atlas.[37]

More generally, a C^k structure for 0 \leq k < \infty is defined analogously, but with transition maps that are k-times continuously differentiable. A manifold is called C^k if it admits a C^k atlas, and smooth (or C^\infty) if it admits a C^\infty atlas.[37] This hierarchy extends the differentiability classes from Euclidean spaces to abstract manifolds, where the smoothness of functions and maps is determined locally via the atlas. On a smooth manifold, the tangent spaces provide local linear approximations, enabling the definition of derivatives in a coordinate-independent way.[37]

A map f: M \to N between smooth manifolds M and N (of dimensions m and n) is smooth if, for every pair of charts (U, \phi) on M and (V, \psi) on N with f(U) \subset V, the coordinate representation \psi \circ f \circ \phi^{-1}: \phi(U) \to \psi(V) is a smooth map between open subsets of \mathbb{R}^m and \mathbb{R}^n.[37] Similarly, f is C^k if these representations are C^k. This local criterion ensures that differentiability is well-defined globally, independent of chart choices, as long as the transition maps are smooth.

A concrete example is the 2-sphere S^2, which admits a smooth structure via stereographic projection charts. Let U_N = S^2 \setminus \{N\}, where N is the north pole, with \phi_N: U_N \to \mathbb{R}^2 the projection from N, mapping a point p to the intersection of the line from N through p with the equatorial plane. Similarly, define U_S = S^2 \setminus \{S\} for the south pole S, with \phi_S the projection from S. The transition map on \phi_S(U_N \cap U_S) = \mathbb{R}^2 \setminus \{0\} is \phi_N \circ \phi_S^{-1}(x,y) = \left( \frac{x}{x^2 + y^2}, \frac{y}{x^2 + y^2} \right), which is smooth there.[37] These two charts form an atlas covering S^2, endowing it with a smooth structure compatible with its standard topology.
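The transition-map formula can be checked numerically. The sketch below assumes the standard chart conventions (projection from each pole onto the equatorial plane z = 0) and verifies that \phi_N \circ \phi_S^{-1} is the inversion (x, y) \mapsto (x, y)/(x^2 + y^2).

```python
# Stereographic charts on S^2 with the standard conventions:
# phi_N(x, y, z) = (x, y)/(1 - z) projects from the north pole,
# phi_S_inv recovers the sphere point whose south-pole projection is (u, v).

def phi_N(p):
    x, y, z = p
    return (x / (1 - z), y / (1 - z))

def phi_S_inv(u, v):
    d = u * u + v * v + 1
    return (2 * u / d, 2 * v / d, (1 - u * u - v * v) / d)

u, v = 0.8, -1.3
print(phi_N(phi_S_inv(u, v)))  # transition map applied to (u, v)
s = u * u + v * v
print((u / s, v / s))          # agrees with the inversion formula
```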
For any open cover \{U_\alpha\} of a smooth manifold M, there exists a partition of unity \{\rho_\alpha\} subordinate to the cover, meaning each \rho_\alpha: M \to [0,1] is smooth, \operatorname{supp}(\rho_\alpha) \subset U_\alpha, the family of supports is locally finite, and \sum_\alpha \rho_\alpha = 1.[37] This tool allows gluing locally defined smooth functions or maps—such as those agreeing on overlaps—into a single global smooth object on M; a concrete sketch on \mathbb{R} appears at the end of this section.

Smooth structures are unique up to diffeomorphism on spheres of dimensions 1, 2, 3, 5, and 6, but the existence of exotic smooth structures on the 4-sphere S^4 remains an open problem.[38] Higher dimensions exhibit exotic smooth structures: distinct atlases on the same topological manifold that are not diffeomorphic. John Milnor's 1956 discovery of exotic smooth structures on the 7-sphere, homeomorphic but not diffeomorphic to the standard one, marked the beginning of this phenomenon. In dimension 4, there are uncountably many exotic smooth structures on \mathbb{R}^4, a fact first rigorously established by Clifford Taubes in 1987 using gauge theory to distinguish them from the standard structure.[39] These examples highlight that smooth structures are not always unique, impacting the study of differentiable functions on such spaces.
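As promised above, here is a minimal sketch of a smooth partition of unity on the manifold \mathbb{R}, subordinate to the open cover U_n = (n - 1, n + 1) for n \in \mathbb{Z}. It is built from the non-analytic smooth function \psi from the section on differentiability classes; the construction and names are illustrative.

```python
import math

# A smooth partition of unity on R subordinate to U_n = (n - 1, n + 1):
# normalized smooth bumps rho(., n), each supported in U_n, summing to 1.

def psi(t):
    """Smooth; equals 0 for t <= 0 and is positive for t > 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def bump(x, n):
    """Smooth bump supported exactly in U_n = (n - 1, n + 1)."""
    return psi(1.0 - (x - n) ** 2)

def rho(x, n):
    """rho(., n) is smooth, supported in U_n; over all n the values sum to 1."""
    m0 = math.floor(x)
    total = sum(bump(x, m) for m in range(m0 - 2, m0 + 3))  # nearby bumps only
    return bump(x, n) / total  # total > 0: some integer is within distance 1 of x

x = 0.37
print(sum(rho(x, n) for n in (-1, 0, 1)))  # ≈ 1.0: only nearby terms are nonzero
```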