Differentiation rules are a set of fundamental theorems in calculus that provide efficient methods for computing the derivatives of functions without repeatedly applying the limit definition of the derivative.[1] These rules simplify the process of finding rates of change for a wide range of functions, including polynomials, rational expressions, and compositions, forming the backbone of differential calculus.[2]

The basic differentiation rules encompass the constant rule, which asserts that the derivative of any constant function is zero; the power rule, stating that the derivative of x^n is n x^{n-1} for any real number n (except where undefined); the sum and difference rules, which allow derivatives of sums or differences to be found by differentiating each term separately; and the constant multiple rule, permitting a constant factor to be pulled out of the derivative.[2][3] These foundational rules are particularly useful for differentiating polynomials and power functions, enabling quick computation even for higher-order derivatives.[4]

Building on these, more advanced rules include the product rule, which computes the derivative of a product of two functions as the first function times the derivative of the second plus the second times the derivative of the first; the quotient rule, for ratios of functions, given by the derivative of the numerator times the denominator minus the numerator times the derivative of the denominator, all over the square of the denominator; and the chain rule, essential for composite functions, stating that the derivative is the derivative of the outer function evaluated at the inner function, multiplied by the derivative of the inner function.[3][5][6] Together, these rules extend to derivatives of exponential, logarithmic, and trigonometric functions, supporting applications in optimization, physics, and engineering.[4]
Fundamental Rules
Constant Rule
The constant rule states that the derivative of any constant function f(x) = c, where c is a real number, is zero everywhere, so f'(x) = 0.[7][8]

This result follows directly from the definition of the derivative as a limit. For f(x) = c,

f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} = \lim_{h \to 0} \frac{c - c}{h} = \lim_{h \to 0} \frac{0}{h} = 0.[7][9]

Geometrically, a constant function graphs as a horizontal line in the xy-plane, which has a slope of zero at every point, aligning with the derivative representing the instantaneous rate of change.[10][11]

For example, the derivative of the constant function f(x) = 5 is f'(x) = 0, and similarly, \frac{d}{dx}(\pi) = 0.[7][8]
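For readers who want to verify this symbolically, the following short SymPy sketch (an illustrative addition, not drawn from the cited sources) evaluates the difference-quotient limit for a sample constant:

```python
import sympy as sp

x, h = sp.symbols('x h')
c = sp.Integer(5)  # any sample constant; the computation is identical for all

# The difference quotient (c - c)/h is identically 0, so its limit is 0.
print(sp.limit((c - c) / h, h, 0))  # 0
print(sp.diff(c, x))                # 0, via SymPy's built-in derivative
```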
Linearity Rule
The linearity rule, also known as the sum and difference rules combined with the constant multiple rule, states that differentiation is a linear operation. Specifically, for differentiable functions f(x) and g(x), and any constant c, the following hold:

\frac{d}{dx} \left[ f(x) + g(x) \right] = f'(x) + g'(x), \quad \frac{d}{dx} \left[ f(x) - g(x) \right] = f'(x) - g'(x), \quad \frac{d}{dx} \left[ c f(x) \right] = c f'(x).

These properties allow the derivative of a linear combination of functions to be computed by applying the derivative operator term by term.[7][12]

To prove the sum rule using the limit definition of the derivative, consider s(x) = f(x) + g(x). Then,

s'(x) = \lim_{h \to 0} \frac{f(x + h) + g(x + h) - f(x) - g(x)}{h} = \lim_{h \to 0} \left( \frac{f(x + h) - f(x)}{h} + \frac{g(x + h) - g(x)}{h} \right) = f'(x) + g'(x),

assuming the individual limits exist. The difference rule follows analogously by replacing the plus with a minus in the numerator. For the constant multiple rule,

\frac{d}{dx} \left[ c f(x) \right] = \lim_{h \to 0} \frac{c f(x + h) - c f(x)}{h} = c \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} = c f'(x).

These proofs rely on the linearity of limits and the definition of differentiability.[7][12]

Examples illustrate the application of these rules. For instance, the derivative of x^2 + 3x is 2x + 3, obtained by differentiating each term separately. Similarly, the derivative of 4 \sin x is 4 \cos x, where the constant 4 factors out. The constant rule, which states that the derivative of a constant is zero, can be viewed as a special case of the linearity rule when one function is constant.[13]

The linearity rule extends naturally to finite sums of functions. For functions f_1(x), f_2(x), \dots, f_n(x) and constants c_1, c_2, \dots, c_n, the derivative of \sum_{i=1}^n c_i f_i(x) is \sum_{i=1}^n c_i f_i'(x), proven by induction using the basic sum and multiple rules. This extension is fundamental for differentiating polynomials and other linear combinations in calculus.[12]
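As an illustration of term-by-term differentiation, the examples above can be checked directly with a short SymPy sketch (an editorial addition, not part of the cited material):

```python
import sympy as sp

x = sp.symbols('x')

# Sum rule: differentiate each term of x**2 + 3*x separately.
print(sp.diff(x**2 + 3*x, x))     # 2*x + 3

# Constant multiple rule: the factor 4 passes through the derivative.
print(sp.diff(4 * sp.sin(x), x))  # 4*cos(x)
```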
Rules for Algebraic Expressions
Power Rule
The power rule is a fundamental differentiation formula that applies to power functions of the form f(x) = x^n, where n is a real number. For positive integer values of n, the derivative is given by

\frac{d}{dx} [x^n] = n x^{n-1}.[14]

This result was initially established for integers and later generalized to rational exponents through the use of roots and substitutions, and subsequently to all real exponents via limits and continuity arguments in real analysis.[7]

To prove the power rule for positive integers n, begin with the definition of the derivative:

f'(x) = \lim_{h \to 0} \frac{(x + h)^n - x^n}{h}.

Expand (x + h)^n using the binomial theorem:

(x + h)^n = \sum_{k=0}^n \binom{n}{k} x^{n-k} h^k.

Substitute into the numerator:

\sum_{k=0}^n \binom{n}{k} x^{n-k} h^k - x^n = \sum_{k=1}^n \binom{n}{k} x^{n-k} h^k,

since the k=0 term cancels with -x^n. Factor out h:

h \sum_{k=1}^n \binom{n}{k} x^{n-k} h^{k-1}.

The limit as h \to 0 then reduces to the k=1 term, as higher powers of h vanish:

\lim_{h \to 0} \sum_{k=1}^n \binom{n}{k} x^{n-k} h^{k-1} = \binom{n}{1} x^{n-1} = n x^{n-1}.

This proof relies on the binomial theorem and the linearity of the limit.[7] For n=1, the result is immediate from the definition of the derivative, and higher integers follow by repeated application of the product rule combined with linearity, though the binomial approach provides a direct verification.[14]

The power rule extends naturally to polynomials, which are finite sums of power terms, by applying the linearity of differentiation term by term. For a polynomial p(x) = a_m x^m + a_{m-1} x^{m-1} + \cdots + a_1 x + a_0, where the a_i are constants, the derivative is

p'(x) = m a_m x^{m-1} + (m-1) a_{m-1} x^{m-2} + \cdots + a_1.

Constants differentiate to zero. For example, the derivative of x^3 is 3x^2, and the derivative of 2x^4 - x + 7 is 8x^3 - 1.[14] This term-by-term process leverages the power rule alongside the constant multiple and sum rules.

The power rule, along with related basic differentiation techniques, originated in the mid-17th century and is attributed to Isaac Newton and Gottfried Wilhelm Leibniz, who independently developed early forms of calculus during this period. Leibniz explicitly presented a version in his 1684 paper Nova Methodus pro Maximis et Minimis.[15]
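The binomial-theorem proof can be sanity-checked for a particular exponent with a brief SymPy computation (an illustrative sketch; the choice n = 7 is arbitrary):

```python
import sympy as sp

x, h = sp.symbols('x h')
n = 7  # sample positive integer exponent

# The difference-quotient limit reproduces n*x**(n-1).
quotient = ((x + h)**n - x**n) / h
print(sp.limit(quotient, h, 0))  # 7*x**6
```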
Product Rule
The product rule is a fundamental differentiation technique used to find the derivative of the product of two differentiable functions, f(x) and g(x). It states that the derivative of the product f(x)g(x) is the first function times the derivative of the second plus the second function times the derivative of the first. This rule extends the linearity property for sums by addressing multiplicative combinations, allowing differentiation of expressions like polynomials multiplied by trigonometric or exponential functions without expanding everything first.[7]

The formal statement of the product rule is:

\frac{d}{dx} \left[ f(x) g(x) \right] = f'(x) g(x) + f(x) g'(x),

provided that f and g are differentiable at x. This formula can be derived from the limit definition of the derivative. Consider the difference quotient:

\frac{f(x+h)g(x+h) - f(x)g(x)}{h} = \frac{f(x+h)[g(x+h) - g(x)] + g(x)[f(x+h) - f(x)]}{h} = f(x+h) \cdot \frac{g(x+h) - g(x)}{h} + g(x) \cdot \frac{f(x+h) - f(x)}{h}.

Taking the limit as h \to 0, and assuming the limits exist, yields f(x) g'(x) + g(x) f'(x), since f(x+h) \to f(x). This proof relies on the additivity of limits and the definitions of f' and g'.[7][16]

A common mnemonic for remembering the product rule is "first times derivative of second plus second times derivative of first," which captures the symmetric structure of the formula.[17]

To illustrate, consider differentiating x^2 \sin(x). Applying the product rule with f(x) = x^2 and g(x) = \sin(x), we get f'(x) = 2x and g'(x) = \cos(x), so the derivative is 2x \sin(x) + x^2 \cos(x). Another example is x e^x, where f(x) = x, g(x) = e^x, f'(x) = 1, and g'(x) = e^x, yielding e^x + x e^x = e^x (x + 1). These computations demonstrate how the rule simplifies otherwise tedious expansions.[18][19]

For products of more than two functions, such as f(x) g(x) h(x), the product rule applies iteratively: first differentiate the product f(x) [g(x) h(x)] to get f'(x) g(x) h(x) + f(x) [g'(x) h(x) + g(x) h'(x)], resulting in f' g h + f g' h + f g h'. This repeated application handles higher-order products efficiently, as seen in differentiating cubic polynomials or products involving multiple transcendental functions.[7][20]
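As an illustrative check (a SymPy sketch added editorially), the product-rule identity can be confirmed for the worked example x^2 \sin(x):

```python
import sympy as sp

x = sp.symbols('x')
f, g = x**2, sp.sin(x)

lhs = sp.diff(f * g, x)                      # derivative of the product
rhs = sp.diff(f, x) * g + f * sp.diff(g, x)  # f'g + fg'
print(sp.simplify(lhs - rhs))                # 0, so the two sides agree
print(sp.expand(lhs))                        # 2*x*sin(x) + x**2*cos(x)
```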
Quotient Rule
The quotient rule provides a method for differentiating the quotient of two differentiable functions f(x) and g(x), where g(x) \neq 0. This rule is essential for handling rational functions in calculus.[21]

The formula for the quotient rule is:

\frac{d}{dx} \left[ \frac{f(x)}{g(x)} \right] = \frac{f'(x) g(x) - f(x) g'(x)}{[g(x)]^2}, \quad g(x) \neq 0.

This expression assumes that both f and g are differentiable at the point of interest.[22]

The quotient rule can be proved using the product rule by rewriting the quotient as f(x) \cdot [g(x)]^{-1}. Differentiating this product yields:

\frac{d}{dx} \left[ f(x) \cdot [g(x)]^{-1} \right] = f'(x) \cdot [g(x)]^{-1} + f(x) \cdot \frac{d}{dx} \left[ [g(x)]^{-1} \right].

The derivative of the reciprocal [g(x)]^{-1} is -g'(x) [g(x)]^{-2}, so substituting gives:

f'(x) \cdot [g(x)]^{-1} + f(x) \cdot \left( -g'(x) [g(x)]^{-2} \right) = \frac{f'(x)}{g(x)} - \frac{f(x) g'(x)}{[g(x)]^2} = \frac{f'(x) g(x) - f(x) g'(x)}{[g(x)]^2}.

This derivation relies on the product rule and the chain rule for the power.[16]

A common mnemonic for recalling the quotient rule is "low d-high minus high d-low over low squared," where "low" denotes the denominator g(x) and "high" the numerator f(x). This helps remember the numerator as g(x) f'(x) - f(x) g'(x).[23]

For instance, consider \frac{d}{dx} \left[ \frac{x}{x+1} \right]. Here, f(x) = x so f'(x) = 1, and g(x) = x+1 so g'(x) = 1. Applying the rule gives:

\frac{1 \cdot (x+1) - x \cdot 1}{(x+1)^2} = \frac{x+1 - x}{(x+1)^2} = \frac{1}{(x+1)^2}.

Another example is \frac{d}{dx} \left[ \frac{\sin x}{x} \right], with f(x) = \sin x so f'(x) = \cos x, and g(x) = x so g'(x) = 1, yielding:

\frac{\cos x \cdot x - \sin x \cdot 1}{x^2} = \frac{x \cos x - \sin x}{x^2}.

These examples illustrate the rule's application to algebraic and transcendental quotients.[21]

The quotient rule includes the reciprocal rule as a special case when the numerator f(x) = 1, resulting in \frac{d}{dx} \left[ \frac{1}{g(x)} \right] = -\frac{g'(x)}{[g(x)]^2}.[19]
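A corresponding SymPy sketch (editorial, mirroring the \sin x / x example above) verifies the quotient-rule formula:

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.sin(x), x

lhs = sp.diff(f / g, x)                               # direct derivative
rhs = (sp.diff(f, x) * g - f * sp.diff(g, x)) / g**2  # (f'g - fg')/g**2
print(sp.simplify(lhs - rhs))                         # 0
```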
Chain Rule
The chain rule provides a method for differentiating composite functions, where the output of one function serves as the input to another. For differentiable functions f and g, the derivative of the composition f(g(x)) is given by

\frac{d}{dx} [f(g(x))] = f'(g(x)) \cdot g'(x).

This formula, often expressed in Leibniz notation as \frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx} where y = f(u) and u = g(x), captures the essential structure of differentiation for nested functions.[24]

Intuitively, the chain rule multiplies the rate of change of the outer function f (evaluated at the inner function g(x)) by the rate of change of the inner function g itself, reflecting how small changes in x propagate through the composition.[25] This perspective aligns with the geometric interpretation of derivatives as slopes, where the overall slope is the product of segmental slopes in the function chain.[22]

The proof relies on the limit definition of the derivative. Consider

\frac{d}{dx} [f(g(x))] = \lim_{h \to 0} \frac{f(g(x+h)) - f(g(x))}{h}.

Let u = g(x), so g(x+h) = u + k where k = g(x+h) - g(x). The expression becomes

\lim_{h \to 0} \frac{f(u + k) - f(u)}{k} \cdot \frac{k}{h}.

As h \to 0, k \to 0, yielding

\lim_{k \to 0} \frac{f(u + k) - f(u)}{k} \cdot \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} = f'(u) \cdot g'(x) = f'(g(x)) \cdot g'(x).

This holds provided the limits exist and g(x+h) \neq g(x) for small h \neq 0, with the trivial case g(x+h) = g(x) yielding zero derivatives on both sides.[24]

For example, to differentiate \sin(x^2), let f(u) = \sin u and g(x) = x^2, so f'(u) = \cos u and g'(x) = 2x, giving \frac{d}{dx} [\sin(x^2)] = \cos(x^2) \cdot 2x. Similarly, for (x^3 + 1)^5, let f(u) = u^5 and g(x) = x^3 + 1, so f'(u) = 5u^4 and g'(x) = 3x^2, yielding \frac{d}{dx} [(x^3 + 1)^5] = 5(x^3 + 1)^4 \cdot 3x^2. These illustrate application to trigonometric and polynomial compositions, respectively.[25]

The chain rule extends to multiple compositions, such as f(g(h(x))), by repeated application: \frac{d}{dx} [f(g(h(x)))] = f'(g(h(x))) \cdot g'(h(x)) \cdot h'(x). For deeper nestings, this product form generalizes, and a tree diagram can clarify the structure by branching from the outermost derivative inward, multiplying along each path from x to the output.[26]
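The decomposition into outer and inner derivatives can be made explicit in SymPy (an illustrative sketch for the \sin(x^2) example; u is a helper symbol introduced here):

```python
import sympy as sp

x, u = sp.symbols('x u')
inner = x**2       # g(x)
outer = sp.sin(u)  # f(u)

# f'(g(x)) * g'(x), assembled by hand
manual = sp.diff(outer, u).subs(u, inner) * sp.diff(inner, x)
print(sp.simplify(sp.diff(sp.sin(x**2), x) - manual))  # 0: matches the chain rule
```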
Reciprocal Rule
The reciprocal rule provides the derivative of the reciprocal of a differentiable function g(x), where g(x) \neq 0. The formula is

\frac{d}{dx} \left[ \frac{1}{g(x)} \right] = -\frac{g'(x)}{[g(x)]^2}.[27]

This rule can be derived using the chain rule by substituting u = g(x), so \frac{1}{g(x)} = u^{-1}. The power rule applied to the exponent -1 gives \frac{d}{du} (u^{-1}) = -u^{-2}, and the chain rule then yields \frac{d}{dx} (u^{-1}) = -u^{-2} \cdot u' = -\frac{g'(x)}{[g(x)]^2}.[27]

The reciprocal rule connects directly to the power rule when the exponent is -1, extending its application to negative powers via the chain rule for composite functions of the form x^{-1}.[27]

For example, the derivative of \frac{1}{x} is \frac{d}{dx} \left( \frac{1}{x} \right) = -\frac{1}{x^2}, which follows by setting g(x) = x and g'(x) = 1.[27] Another example is the derivative of \frac{1}{\sin x}, or \csc x, given by \frac{d}{dx} \left( \frac{1}{\sin x} \right) = -\frac{\cos x}{\sin^2 x}, using g(x) = \sin x and g'(x) = \cos x.[28]

This rule is a special case of the quotient rule when the numerator is the constant 1.[27]
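A short symbolic check of the reciprocal rule for the \csc x example (an editorial SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
g = sp.sin(x)

lhs = sp.diff(1 / g, x)        # derivative of 1/sin(x)
rhs = -sp.diff(g, x) / g**2    # -g'/g**2
print(sp.simplify(lhs - rhs))  # 0
```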
Inverse Function Rule
The inverse function rule describes how to compute the derivative of the inverse of a differentiable function f, provided the inverse function f^{-1} exists and f' is nonzero in the relevant domain. This rule is essential for handling functions defined implicitly through inversion, ensuring that the differentiability of f implies that of f^{-1} under suitable conditions, such as f being strictly monotonic and continuously differentiable.[29]

Let y = f^{-1}(x), so x = f(y). The derivative is given by

\frac{dy}{dx} = \frac{1}{f'(y)} = \frac{1}{f'(f^{-1}(x))},

assuming f'(f^{-1}(x)) \neq 0. This formula expresses the slope of the inverse as the reciprocal of the original function's slope, evaluated at the corresponding point on the inverse.[29]

To derive this, apply implicit differentiation to x = f(y). Differentiating both sides with respect to x yields

1 = f'(y) \cdot \frac{dy}{dx}.

Solving for \frac{dy}{dx} gives

\frac{dy}{dx} = \frac{1}{f'(y)},

and substituting y = f^{-1}(x) completes the proof. This approach leverages the chain rule implicitly while treating y as a function of x.[29]

A representative example is f(x) = x^3, which is strictly increasing and thus invertible, with inverse f^{-1}(x) = x^{1/3}. Here, f'(x) = 3x^2, so

\frac{d}{dx} [f^{-1}(x)] = \frac{1}{3 (f^{-1}(x))^2} = \frac{1}{3 x^{2/3}} = \frac{1}{3} x^{-2/3}.

This result aligns with direct differentiation of x^{1/3} using the power rule, confirming the rule's consistency.[30]

The inverse function rule forms the basis for deriving derivatives of inverse trigonometric functions, such as \frac{d}{dx} \arcsin(x) = \frac{1}{\sqrt{1 - x^2}}, by applying it to the sine function after restricting its domain for invertibility. Full details on these applications appear in the section on trigonometric functions.[31]
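The cube-root example lends itself to a quick symbolic confirmation (a SymPy sketch added for illustration; the positivity assumption on x avoids branch issues):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f_inv = x**sp.Rational(1, 3)  # inverse of f(y) = y**3

# 1 / f'(f^{-1}(x)) with f'(y) = 3*y**2
via_rule = 1 / (3 * f_inv**2)
print(sp.simplify(sp.diff(f_inv, x) - via_rule))  # 0
```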
Derivatives of Transcendental Functions
Exponential and Logarithmic Functions
The exponential function e^x is unique among common functions in that its derivative equals the function itself:

\frac{d}{dx} \left[ e^x \right] = e^x.

This property arises from the definition of the exponential function as the solution to the differential equation f'(x) = f(x) with f(0) = 1.[32]

To prove this using the limit definition of the derivative, consider

\frac{d}{dx} \left[ e^x \right] = \lim_{h \to 0} \frac{e^{x+h} - e^x}{h} = e^x \lim_{h \to 0} \frac{e^h - 1}{h}.

The limit \lim_{h \to 0} \frac{e^h - 1}{h} = 1 holds by the definition of the derivative of e^x at x = 0, confirming the result.[32]

For a general exponential function with base a > 0 and a \neq 1, the derivative is

\frac{d}{dx} \left[ a^x \right] = a^x \ln a.

This follows from rewriting a^x = e^{x \ln a} and applying the chain rule to the composition with the known derivative of e^u.[32]

The natural logarithm function \ln x, defined for x > 0, has the derivative

\frac{d}{dx} \left[ \ln x \right] = \frac{1}{x}.

A proof uses implicit differentiation: let y = \ln x, so e^y = x. Differentiating both sides with respect to x gives e^y \frac{dy}{dx} = 1, hence \frac{dy}{dx} = \frac{1}{e^y} = \frac{1}{x}.[32]

For the logarithm with base a > 0 and a \neq 1, defined as \log_a x = \frac{\ln x}{\ln a} for x > 0,

\frac{d}{dx} \left[ \log_a x \right] = \frac{1}{x \ln a}.

This is obtained by differentiating the change-of-base formula using the known derivative of \ln x.[32]

Logarithmic differentiation provides a technique for finding derivatives of complicated products, quotients, or variable powers. For a function y = f(x)^{g(x)} where f(x) > 0, take the natural logarithm of both sides: \ln y = g(x) \ln f(x). Differentiate implicitly with respect to x:

\frac{1}{y} \frac{dy}{dx} = g'(x) \ln f(x) + g(x) \frac{f'(x)}{f(x)},

then solve for

\frac{dy}{dx} = y \left[ g'(x) \ln f(x) + g(x) \frac{f'(x)}{f(x)} \right] = f(x)^{g(x)} \left[ g'(x) \ln f(x) + g(x) \frac{f'(x)}{f(x)} \right].

This method simplifies differentiation by converting multiplication to addition via logarithms.[32]

For example, the derivative of e^{2x} is found using the chain rule on the composition e^{u} with u = 2x:

\frac{d}{dx} \left[ e^{2x} \right] = 2 e^{2x}.

Similarly, for \ln(x^2) where x > 0, rewrite as 2 \ln x and differentiate:

\frac{d}{dx} \left[ \ln(x^2) \right] = \frac{2}{x}.

These illustrate how the basic rules extend to composites via the chain rule.[32]
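Logarithmic differentiation can be illustrated on the classic variable-power function x^x (a hedged SymPy sketch; x^x itself is not among the worked examples above):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = x**x

# ln y = x*ln x, so y' = y*(ln x + 1) by logarithmic differentiation.
manual = y * (sp.log(x) + 1)
print(sp.simplify(sp.diff(y, x) - manual))  # 0
```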
Trigonometric Functions
The differentiation rules for trigonometric functions form a fundamental part of calculus, enabling the computation of rates of change for periodic functions defined via the unit circle. These rules are derived primarily from the limit definition of the derivative and trigonometric identities, assuming angles are measured in radians for the limits to hold without additional constants.[33] The core derivatives stem from the sine and cosine functions, with others obtained via the quotient rule applied to their ratios.[33]

The standard derivatives are as follows:

\frac{d}{dx} [\sin x] = \cos x, \quad \frac{d}{dx} [\cos x] = -\sin x, \quad \frac{d}{dx} [\tan x] = \sec^2 x.

These formulas apply for all real x where the functions are defined, with \tan x undefined at odd multiples of \pi/2.[33] Extensions to the remaining trigonometric functions yield:

\frac{d}{dx} [\csc x] = -\csc x \cot x, \quad \frac{d}{dx} [\sec x] = \sec x \tan x, \quad \frac{d}{dx} [\cot x] = -\csc^2 x.

Each of these is derived by expressing the function as a quotient of sine and cosine and applying the quotient rule; for instance, \tan x = \sin x / \cos x, leading to \sec^2 x after simplification using the Pythagorean identity \sin^2 x + \cos^2 x = 1.[33][34]

The foundational proofs for the sine and cosine derivatives rely on key limits established via the squeeze theorem and geometric arguments on the unit circle. Specifically,

\lim_{\theta \to 0} \frac{\sin \theta}{\theta} = 1 \quad \text{and} \quad \lim_{\theta \to 0} \frac{1 - \cos \theta}{\theta} = 0.

For \sin x, the derivative is computed using the angle addition formula \sin(x + h) = \sin x \cos h + \cos x \sin h:

\frac{d}{dx} [\sin x] = \lim_{h \to 0} \frac{\sin(x + h) - \sin x}{h} = \lim_{h \to 0} \left( \sin x \cdot \frac{\cos h - 1}{h} + \cos x \cdot \frac{\sin h}{h} \right) = \cos x,

where the two foundational limits evaluate the respective terms. Similarly, for \cos x,

\frac{d}{dx} [\cos x] = \lim_{h \to 0} \frac{\cos(x + h) - \cos x}{h} = -\sin x,

employing the same limits after algebraic rearrangement. These limits are proven geometrically by comparing areas or lengths in a unit circle sector, bounding \sin \theta between tangent and chord approximations as \theta approaches zero.[35][36]

To illustrate application, consider \frac{d}{dx} [\sin(3x)], which by the chain rule equals 3 \cos(3x), combining the core sine derivative with the inner function's derivative. Another example is \frac{d}{dx} [\tan^2 x]: rewriting as (\tan x)^2, the chain rule gives 2 \tan x \cdot \sec^2 x, or alternatively using the product rule on \tan x \cdot \tan x yields the same result. These demonstrate how trigonometric derivatives integrate with earlier rules like the chain and product rules.[33]

The derivatives of inverse trigonometric functions are also essential, derived via the inverse function rule, which states that if y = f^{-1}(x), then \frac{dy}{dx} = \frac{1}{f'(y)}. For instance,

\frac{d}{dx} [\arcsin x] = \frac{1}{\sqrt{1 - x^2}}, \quad |x| < 1,

obtained by setting x = \sin y, differentiating implicitly to get 1 = \cos y \cdot \frac{dy}{dx}, and substituting \cos y = \sqrt{1 - x^2}. Similarly,

\frac{d}{dx} [\arccos x] = -\frac{1}{\sqrt{1 - x^2}}, \quad \frac{d}{dx} [\arctan x] = \frac{1}{1 + x^2}.

The remaining inverse derivatives are

\frac{d}{dx} [\arccsc x] = -\frac{1}{|x| \sqrt{x^2 - 1}}, \quad |x| > 1, \quad \frac{d}{dx} [\arcsec x] = \frac{1}{|x| \sqrt{x^2 - 1}}, \quad |x| > 1, \quad \frac{d}{dx} [\arccot x] = -\frac{1}{1 + x^2}.

These follow by analogous implicit differentiation, with absolute values ensuring positivity in the domains.
In each case, the inverse function rule introduced earlier facilitates the derivation by inverting the known trigonometric derivatives.[31][37]
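The two foundational limits and the resulting sine derivative can be reproduced symbolically (an editorial SymPy sketch; SymPy works in radians, matching the assumption above):

```python
import sympy as sp

x, h = sp.symbols('x h')

print(sp.limit(sp.sin(h) / h, h, 0))        # 1
print(sp.limit((1 - sp.cos(h)) / h, h, 0))  # 0

# The difference-quotient limit for sin recovers cos.
print(sp.limit((sp.sin(x + h) - sp.sin(x)) / h, h, 0))  # cos(x)
```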
Hyperbolic Functions
Hyperbolic functions are defined in terms of exponential functions and arise naturally in the study of hyperbolas, providing analogs to trigonometric functions but without periodicity.[38] The principal hyperbolic functions are the hyperbolic sine and hyperbolic cosine, given by

\sinh x = \frac{e^x - e^{-x}}{2}, \quad \cosh x = \frac{e^x + e^{-x}}{2}.

These definitions stem from the exponential function e^x, whose derivative is itself. Additional hyperbolic functions include the hyperbolic tangent \tanh x = \frac{\sinh x}{\cosh x} and the hyperbolic secant \sech x = \frac{1}{\cosh x}.[39]

The derivatives of these functions follow directly from the known derivative of the exponential function. For the hyperbolic sine,

\frac{d}{dx} [\sinh x] = \frac{d}{dx} \left[ \frac{e^x - e^{-x}}{2} \right] = \frac{e^x + e^{-x}}{2} = \cosh x.

Similarly, for the hyperbolic cosine,

\frac{d}{dx} [\cosh x] = \frac{d}{dx} \left[ \frac{e^x + e^{-x}}{2} \right] = \frac{e^x - e^{-x}}{2} = \sinh x.

These results highlight the interchange of sine and cosine roles compared to their trigonometric counterparts, derived solely from exponential differentiation.[38]

For the hyperbolic tangent, the derivative is obtained using the quotient rule:

\frac{d}{dx} [\tanh x] = \frac{d}{dx} \left[ \frac{\sinh x}{\cosh x} \right] = \frac{\cosh x \cdot \cosh x - \sinh x \cdot \sinh x}{\cosh^2 x} = \frac{\cosh^2 x - \sinh^2 x}{\cosh^2 x} = \sech^2 x,

where the numerator simplifies via a fundamental identity.[39]

A key identity relating hyperbolic sine and cosine is

\cosh^2 x - \sinh^2 x = 1,

proved by direct substitution of the exponential definitions:

\cosh^2 x - \sinh^2 x = \left( \frac{e^x + e^{-x}}{2} \right)^2 - \left( \frac{e^x - e^{-x}}{2} \right)^2 = \frac{(e^{2x} + 2 + e^{-2x}) - (e^{2x} - 2 + e^{-2x})}{4} = \frac{4}{4} = 1.

This identity parallels the Pythagorean theorem but applies to hyperbolas.[38]

To illustrate application, consider the derivative of a composite function like \cosh(2x). By the chain rule,

\frac{d}{dx} [\cosh(2x)] = \sinh(2x) \cdot 2 = 2 \sinh(2x).

Likewise, the derivative of \tanh x is \sech^2 x, as shown earlier, which is useful in optimization and differential equations involving hyperbolic forms.[39]
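Both the derivative interchange and the hyperbolic identity can be confirmed from the exponential definitions alone (an illustrative SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
sinh = (sp.exp(x) - sp.exp(-x)) / 2
cosh = (sp.exp(x) + sp.exp(-x)) / 2

print(sp.simplify(sp.diff(sinh, x) - cosh))  # 0: (sinh x)' = cosh x
print(sp.simplify(sp.diff(cosh, x) - sinh))  # 0: (cosh x)' = sinh x
print(sp.simplify(cosh**2 - sinh**2))        # 1: the fundamental identity
```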
Higher-Order and Specialized Differentiation
Derivatives of Integrals
The derivatives of integrals form a cornerstone of calculus, linking differentiation and integration through the Fundamental Theorem of Calculus (FTC). This theorem provides tools for computing the derivative of a function defined as an integral, either with fixed or variable limits of integration, and extends to cases where the integrand depends on the differentiation variable. These rules enable efficient evaluation without explicit antiderivative computation in many scenarios.[40]

The first part of the FTC addresses the case of a definite integral with a fixed lower limit and variable upper limit. Specifically, if f is continuous on an interval containing a and x, and F(x) = \int_a^x f(t) \, dt, then F'(x) = f(x).[41] This asserts that the integral function F is an antiderivative of f, reversing the integration process directly.[42]

To prove this, consider the definition of the derivative:

F'(x) = \lim_{h \to 0} \frac{F(x+h) - F(x)}{h} = \lim_{h \to 0} \frac{1}{h} \int_x^{x+h} f(t) \, dt.

By the mean value theorem for integrals, since f is continuous, there exists c_h between x and x+h such that \int_x^{x+h} f(t) \, dt = f(c_h) \cdot h. Thus,

F'(x) = \lim_{h \to 0} f(c_h).

As h \to 0, c_h \to x, and by continuity of f, f(c_h) \to f(x), so F'(x) = f(x).[42]

For integrals with fixed limits whose integrand depends on a parameter, a companion result permits differentiation under the integral sign. If f(x, t) is continuous in both variables on a suitable domain, and I(x) = \int_a^b f(x, t) \, dt, then under appropriate conditions (such as continuity of the partial derivative in x),

\frac{d}{dx} I(x) = \int_a^b \frac{\partial}{\partial x} f(x, t) \, dt.

This justifies interchanging differentiation and integration when the partial derivative exists and is integrable.[43]

A more general form, known as the Leibniz integral rule, handles both variable limits and parameter dependence in the integrand. For functions u(x) and v(x) with u(x) < v(x), and f(x, t) continuous with continuous partial derivative \partial f / \partial x,

\frac{d}{dx} \int_{u(x)}^{v(x)} f(x, t) \, dt = f(x, v(x)) v'(x) - f(x, u(x)) u'(x) + \int_{u(x)}^{v(x)} \frac{\partial}{\partial x} f(x, t) \, dt.

This rule combines boundary contributions from the chain rule applied to the limits with the interior differentiation under the sign.[44][45]

For example, applying the first part of the FTC directly:

\frac{d}{dx} \int_0^x t^2 \, dt = x^2,

since the integrand does not depend on x beyond the limit, reducing to the upper boundary term.[40] In a variable-limit case without parameter dependence,

\frac{d}{dx} \int_0^{\sin x} e^t \, dt = e^{\sin x} \cos x,

using only the upper limit contribution, as the lower limit is constant and the integrand is independent of x.[46] These examples illustrate how the rules simplify computations for integral-defined functions.[47]
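The variable-limit example \int_0^{\sin x} e^t \, dt admits a direct symbolic check (an editorial SymPy sketch; here the inner integral happens to have a closed form, so no Leibniz-rule machinery is required):

```python
import sympy as sp

x, t = sp.symbols('x t')

# Integrate first, then differentiate; only the upper-boundary term survives.
F = sp.integrate(sp.exp(t), (t, 0, sp.sin(x)))  # exp(sin(x)) - 1
print(sp.simplify(sp.diff(F, x)))               # exp(sin(x))*cos(x)
```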
nth-Order Derivatives
Higher-order derivatives extend the concept of differentiation beyond the first order, allowing analysis of how the rate of change itself changes. The second derivative, denoted as f''(x) or \frac{d^2 f}{dx^2}, measures the instantaneous rate of change of the first derivative, often interpreted as concavity or acceleration in applied contexts. For the general nth-order derivative, the notation f^{(n)}(x) or \frac{d^n f}{dx^n} is used, where n is a positive integer greater than or equal to 2.[48][49]

A concrete example illustrates the computation of successive derivatives. Consider the function f(x) = x^3. The first derivative is f'(x) = 3x^2, the second is f''(x) = 6x, the third is f'''(x) = 6, and the fourth is f^{(4)}(x) = 0; all derivatives beyond the third order remain zero for this polynomial of degree 3.[48] This pattern holds for polynomials generally, where the nth derivative is zero for n exceeding the degree of the polynomial.

Key properties of differentiation extend naturally to higher orders. Linearity persists, such that for constants a and b and functions f and g, the nth derivative satisfies (af + bg)^{(n)}(x) = a f^{(n)}(x) + b g^{(n)}(x).[50] For products, the second derivative follows the rule (fg)''(x) = f''(x)g(x) + 2f'(x)g'(x) + f(x)g''(x), a special case of the general Leibniz rule for higher orders.[48] Similarly, the chain rule applies to second derivatives of compositions, though higher-order cases require more advanced formulations.

In applications, higher-order derivatives play crucial roles. In physics, the second derivative of position with respect to time represents acceleration, fundamental to Newton's laws of motion.[51] They also underpin the Taylor series expansion, where a function f(x) near a point a is approximated as f(x) \approx \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n, with higher derivatives providing the curvature and finer approximations.[52]
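The successive derivatives of x^3 listed above can be generated in a loop (an illustrative SymPy sketch using the order argument of diff):

```python
import sympy as sp

x = sp.symbols('x')
f = x**3

# nth derivatives for n = 1..4: 3*x**2, 6*x, 6, 0
for n in range(1, 5):
    print(n, sp.diff(f, x, n))
```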
Derivatives of Special Functions
The gamma function, denoted \Gamma(z), is defined for \Re(z) > 0 by the integral representation

\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \, dt.

Its derivative follows by differentiating under the integral sign, yielding

\Gamma'(z) = \int_0^\infty t^{z-1} e^{-t} \ln t \, dt,

valid for the same domain.[53] The digamma function, \psi(z), is the logarithmic derivative of the gamma function:

\psi(z) = \frac{\Gamma'(z)}{\Gamma(z)}.

A series expansion for the digamma function is

\psi(z) = -\gamma + \sum_{k=0}^\infty \left( \frac{1}{k+1} - \frac{1}{k+z} \right),

where \gamma is the Euler-Mascheroni constant, for z \neq 0, -1, -2, \dots.[54] For example, evaluating at z=1 gives \psi(1) = -\gamma \approx -0.5772156649.[54] The digamma function appears in number theory for evaluating sums related to harmonic numbers and in physics for computations in statistical mechanics and quantum field theory.[55] Higher-order derivatives of \ln \Gamma(z) are given by the polygamma functions \psi^{(n)}(z) for n \geq 1, which extend the digamma as the (n+1)-th derivative of the logarithm of the gamma function, though these receive less emphasis in introductory texts on differentiation.[55]

The Riemann zeta function, \zeta(s), is defined for \Re(s) > 1 by the Dirichlet series

\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}.[56]

Differentiating term by term within this region of absolute convergence produces

\zeta'(s) = -\sum_{n=1}^\infty \frac{\ln n}{n^s}.[57]

This representation facilitates analytic continuation and evaluations in the complex plane, excluding the pole at s=1. The zeta function and its derivative play central roles in number theory, such as in the study of prime distributions via the Euler product, and in physics, including Casimir effect calculations.[58]
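The numerical value \psi(1) = -\gamma quoted above can be reproduced with SymPy's built-in digamma function (an editorial sketch):

```python
import sympy as sp

# psi(1) evaluates exactly to -EulerGamma, the negated Euler-Mascheroni constant.
print(sp.digamma(sp.Integer(1)))            # -EulerGamma
print(sp.N(sp.digamma(sp.Integer(1)), 10))  # -0.5772156649
```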