Differentiation rules

Differentiation rules are a set of theorems in calculus that provide efficient methods for computing the derivatives of functions without repeatedly applying the limit definition of the derivative. These rules simplify the process of finding rates of change for a wide range of functions, including polynomials, rational expressions, and compositions, forming the backbone of differential calculus. The basic differentiation rules encompass the constant rule, which asserts that the derivative of any constant function is zero; the power rule, stating that the derivative of x^n is n x^{n-1} for any real n (except where undefined); the sum and difference rules, which allow derivatives of sums or differences to be found by differentiating each term separately; and the constant multiple rule, permitting a constant factor to be pulled out of the derivative. These foundational rules are particularly useful for differentiating polynomials and power functions, making computation quick even for higher-order derivatives. Building on these, more advanced rules include the product rule, which computes the derivative of a product of two functions as the first times the derivative of the second plus the second times the derivative of the first; the quotient rule, for ratios of functions, given by the derivative of the numerator times the denominator minus the numerator times the derivative of the denominator, all over the square of the denominator; and the chain rule, essential for composite functions, stating that the derivative is the derivative of the outer function evaluated at the inner function, multiplied by the derivative of the inner function. Together, these rules extend to derivatives of exponential, logarithmic, and trigonometric functions, supporting applications in optimization, physics, and engineering.

Fundamental Rules

Constant Rule

The constant rule states that the derivative of any constant function f(x) = c, where c is a constant, is zero everywhere, so f'(x) = 0. This result follows directly from the definition of the derivative as a limit. For f(x) = c, f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} = \lim_{h \to 0} \frac{c - c}{h} = \lim_{h \to 0} \frac{0}{h} = 0. Geometrically, a constant function graphs as a horizontal line in the xy-plane, which has a slope of zero at every point, aligning with the derivative representing the instantaneous rate of change. For example, the derivative of the constant function f(x) = 5 is f'(x) = 0, and similarly, \frac{d}{dx}(\pi) = 0.
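
As a quick numerical illustration, the following minimal Python sketch shows that the difference quotient of a constant function is exactly zero for every step size; the function name f, the evaluation point, and the step sizes h are arbitrary illustrative choices.

```python
# Minimal sketch: the difference quotient of a constant function is 0
# for any step h, matching f'(x) = 0. Names and values are illustrative.
def f(x):
    return 5.0  # the constant function f(x) = 5

def diff_quotient(g, x, h):
    """Forward difference quotient (g(x+h) - g(x)) / h."""
    return (g(x + h) - g(x)) / h

for h in (1e-1, 1e-3, 1e-6):
    print(h, diff_quotient(f, 2.0, h))  # prints 0.0 in every case
```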

Linearity Rule

The linearity rule, also known as the sum and difference rules combined with the constant multiple rule, states that differentiation is a linear operation. Specifically, for differentiable functions f(x) and g(x), and any constant c, the following hold: \frac{d}{dx} \left[ f(x) + g(x) \right] = f'(x) + g'(x), \quad \frac{d}{dx} \left[ f(x) - g(x) \right] = f'(x) - g'(x), \quad \frac{d}{dx} \left[ c f(x) \right] = c f'(x). These properties allow the derivative of a linear combination of functions to be computed by applying the derivative term by term. To prove the sum rule using the definition of the derivative, consider s(x) = f(x) + g(x). Then, s'(x) = \lim_{h \to 0} \frac{f(x + h) + g(x + h) - f(x) - g(x)}{h} = \lim_{h \to 0} \left( \frac{f(x + h) - f(x)}{h} + \frac{g(x + h) - g(x)}{h} \right) = f'(x) + g'(x), assuming the individual limits exist. The difference rule follows analogously by replacing the plus with a minus in the numerator. For the constant multiple rule, \frac{d}{dx} \left[ c f(x) \right] = \lim_{h \to 0} \frac{c f(x + h) - c f(x)}{h} = c \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} = c f'(x). These proofs rely on the linearity of limits and the definition of differentiability. Examples illustrate the application of these rules. For instance, the derivative of x^2 + 3x is 2x + 3, obtained by differentiating each term separately. Similarly, the derivative of 4 \sin x is 4 \cos x, where the constant 4 factors out. The constant rule, which states that the derivative of a constant is zero, can be viewed as a special case of the linearity rule when one function is constant. The linearity rule extends naturally to finite sums of functions. For functions f_1(x), f_2(x), \dots, f_n(x) and constants c_1, c_2, \dots, c_n, the derivative of \sum_{i=1}^n c_i f_i(x) is \sum_{i=1}^n c_i f_i'(x), proven by induction using the basic sum and constant multiple rules. This extension is fundamental for differentiating polynomials and other linear combinations in calculus.
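
The linearity rule can also be checked symbolically. The sketch below, assuming the sympy library, verifies both worked examples above and the general statement for arbitrary constant symbols a and b:

```python
# Hedged sketch verifying linearity of differentiation with sympy.
import sympy as sp

x, a, b = sp.symbols('x a b')

# The two examples from the text, differentiated term by term.
assert sp.diff(x**2 + 3*x, x) == 2*x + 3
assert sp.diff(4*sp.sin(x), x) == 4*sp.cos(x)

# General check: (a f + b g)' == a f' + b g' for symbolic f, g.
f, g = sp.Function('f')(x), sp.Function('g')(x)
lhs = sp.diff(a*f + b*g, x)
rhs = a*sp.diff(f, x) + b*sp.diff(g, x)
assert sp.simplify(lhs - rhs) == 0
```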

Rules for Algebraic Expressions

Power Rule

The power rule is a fundamental differentiation formula that applies to power functions of the form f(x) = x^n, where n is a real number. For positive integer values of n, the derivative is given by \frac{d}{dx} [x^n] = n x^{n-1}. This result was initially established for positive integer exponents and later generalized to rational exponents through the use of implicit differentiation and substitutions, and subsequently to all real exponents via limits and continuity arguments in real analysis. To prove the power rule for positive integers n, begin with the definition of the derivative: f'(x) = \lim_{h \to 0} \frac{(x + h)^n - x^n}{h}. Expand (x + h)^n using the binomial theorem: (x + h)^n = \sum_{k=0}^n \binom{n}{k} x^{n-k} h^k. Substitute into the numerator: \sum_{k=0}^n \binom{n}{k} x^{n-k} h^k - x^n = \sum_{k=1}^n \binom{n}{k} x^{n-k} h^k, since the k=0 term cancels with -x^n. Factor out h: h \sum_{k=1}^n \binom{n}{k} x^{n-k} h^{k-1}. The limit as h \to 0 then simplifies to the k=1 term, as higher powers of h vanish: \lim_{h \to 0} \sum_{k=1}^n \binom{n}{k} x^{n-k} h^{k-1} = \binom{n}{1} x^{n-1} = n x^{n-1}. This proof relies on the binomial theorem and the linearity of the limit. For n=1, the result is immediate from the definition of the derivative, and higher integers follow by repeated application of the product rule combined with linearity, though the binomial approach provides a direct verification. The power rule extends naturally to polynomials, which are finite sums of power terms, by applying the linearity of differentiation term by term. For a polynomial p(x) = a_m x^m + a_{m-1} x^{m-1} + \cdots + a_1 x + a_0, where the a_i are constants, the derivative is p'(x) = m a_m x^{m-1} + (m-1) a_{m-1} x^{m-2} + \cdots + a_1. Constants differentiate to zero. For example, the derivative of x^3 is 3x^2, and the derivative of 2x^4 - x + 7 is 8x^3 - 1. This term-by-term process leverages the power rule alongside the constant multiple and sum rules. The power rule, along with related basic differentiation techniques, originated in the mid-17th century and is attributed to Isaac Newton and Gottfried Wilhelm Leibniz, who independently developed early forms of calculus during this period. Leibniz explicitly presented a version in his 1684 paper Nova Methodus pro Maximis et Minimis.
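
A symbolic spot-check of the power rule across integer, negative, rational, and irrational exponents is straightforward; the sketch below assumes sympy, and restricting x to positive values sidesteps branch-cut issues for non-integer n:

```python
# Hedged sketch: verify d/dx x^n = n x^(n-1) for several exponent types.
import sympy as sp

x = sp.symbols('x', positive=True)  # positivity avoids branch cuts

for n in (sp.Integer(3), sp.Integer(-2), sp.Rational(1, 2), sp.pi):
    assert sp.simplify(sp.diff(x**n, x) - n*x**(n - 1)) == 0

# Term-by-term polynomial differentiation from the text.
assert sp.diff(2*x**4 - x + 7, x) == 8*x**3 - 1
```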

Product Rule

The product rule is a fundamental technique used to find the derivative of the product of two differentiable functions, f(x) and g(x). It states that the derivative of the product f(x)g(x) is the first function times the derivative of the second plus the second function times the derivative of the first. This extends the linearity property for sums by addressing multiplicative combinations, allowing differentiation of expressions like polynomials multiplied by trigonometric or exponential functions without expanding everything first. The formal statement of the product rule is: \frac{d}{dx} \left[ f(x) g(x) \right] = f'(x) g(x) + f(x) g'(x), provided that f and g are differentiable at x. This formula can be derived from the limit definition of the derivative. Consider the difference quotient: \frac{f(x+h)g(x+h) - f(x)g(x)}{h} = \frac{f(x+h)[g(x+h) - g(x)] + g(x)[f(x+h) - f(x)]}{h} = f(x+h) \cdot \frac{g(x+h) - g(x)}{h} + g(x) \cdot \frac{f(x+h) - f(x)}{h}. Taking the limit as h \to 0, and assuming the limits exist, yields f(x) g'(x) + g(x) f'(x), since f(x+h) \to f(x) by the continuity of the differentiable function f. This proof relies on the additivity of limits and the definitions of f' and g'. A common mnemonic for remembering the product rule is "first times derivative of second plus second times derivative of first," which captures the symmetric structure of the formula. To illustrate, consider differentiating x^2 \sin(x). Applying the rule with f(x) = x^2 and g(x) = \sin(x), we get f'(x) = 2x and g'(x) = \cos(x), so the derivative is 2x \sin(x) + x^2 \cos(x). Another example is x e^x, where f(x) = x, g(x) = e^x, f'(x) = 1, and g'(x) = e^x, yielding e^x + x e^x = e^x (x + 1). These computations demonstrate how the rule simplifies otherwise tedious expansions. For products of more than two functions, such as f(x) g(x) h(x), the product rule applies iteratively: first differentiate the product f(x) [g(x) h(x)] to get f'(x) g(x) h(x) + f(x) [g'(x) h(x) + g(x) h'(x)], resulting in f' g h + f g' h + f g h'. This repeated application handles higher-order products efficiently, as seen in differentiating cubic polynomials or products involving multiple transcendental functions.
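
The worked example x^2 \sin x can be confirmed symbolically; the sketch below (assuming sympy) assembles f'g + fg' by hand and compares it with direct differentiation:

```python
# Hedged sketch: the product rule assembled by hand vs. sympy's diff.
import sympy as sp

x = sp.symbols('x')
f, g = x**2, sp.sin(x)

by_rule = sp.diff(f, x)*g + f*sp.diff(g, x)   # f'g + fg'
assert sp.simplify(sp.diff(f*g, x) - by_rule) == 0
print(by_rule)  # 2*x*sin(x) + x**2*cos(x)
```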

Quotient Rule

The quotient rule provides a formula for differentiating the quotient of two differentiable functions f(x) and g(x), where g(x) \neq 0. This rule is essential for handling rational functions in calculus. The formula for the quotient rule is: \frac{d}{dx} \left[ \frac{f(x)}{g(x)} \right] = \frac{f'(x) g(x) - f(x) g'(x)}{[g(x)]^2}, \quad g(x) \neq 0. This expression assumes that both f and g are differentiable at the point of interest. The quotient rule can be proved using the product rule by rewriting the quotient as f(x) \cdot [g(x)]^{-1}. Differentiating this product yields: \frac{d}{dx} \left[ f(x) \cdot [g(x)]^{-1} \right] = f'(x) \cdot [g(x)]^{-1} + f(x) \cdot \frac{d}{dx} \left[ [g(x)]^{-1} \right]. The derivative of [g(x)]^{-1} is -g'(x) [g(x)]^{-2} by the chain rule, so substituting gives: f'(x) \cdot [g(x)]^{-1} + f(x) \cdot \left( -g'(x) [g(x)]^{-2} \right) = \frac{f'(x)}{g(x)} - \frac{f(x) g'(x)}{[g(x)]^2} = \frac{f'(x) g(x) - f(x) g'(x)}{[g(x)]^2}. This derivation relies on the product rule and the chain rule applied to the negative power. A common mnemonic for recalling the quotient rule is "low d-high minus high d-low over low squared," where "low" denotes the denominator g(x) and "high" the numerator f(x). This helps remember the numerator as g(x) f'(x) - f(x) g'(x). For instance, consider \frac{d}{dx} \left[ \frac{x}{x+1} \right]. Here, f(x) = x so f'(x) = 1, and g(x) = x+1 so g'(x) = 1. Applying the rule gives: \frac{1 \cdot (x+1) - x \cdot 1}{(x+1)^2} = \frac{x+1 - x}{(x+1)^2} = \frac{1}{(x+1)^2}. Another example is \frac{d}{dx} \left[ \frac{\sin x}{x} \right], with f(x) = \sin x so f'(x) = \cos x, and g(x) = x so g'(x) = 1, yielding: \frac{\cos x \cdot x - \sin x \cdot 1}{x^2} = \frac{x \cos x - \sin x}{x^2}. These examples illustrate the rule's application to algebraic and transcendental quotients. The quotient rule includes the reciprocal rule as a special case when the numerator f(x) = 1, resulting in \frac{d}{dx} \left[ \frac{1}{g(x)} \right] = -\frac{g'(x)}{[g(x)]^2}.
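
The first example above admits the same kind of symbolic cross-check; this sketch (assuming sympy) builds the quotient-rule numerator and denominator explicitly:

```python
# Hedged sketch: quotient rule (f'g - f g') / g^2 vs. direct diff.
import sympy as sp

x = sp.symbols('x')
f, g = x, x + 1

by_rule = (sp.diff(f, x)*g - f*sp.diff(g, x)) / g**2
assert sp.simplify(sp.diff(f/g, x) - by_rule) == 0
print(sp.simplify(by_rule))  # (x + 1)**(-2), i.e. 1/(x+1)^2
```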

Rules for Composite and Inverse Functions

Chain Rule

The chain rule provides a method for differentiating composite functions, where the output of one function serves as the input to another. For differentiable functions f and g, the derivative of the composition f(g(x)) is given by \frac{d}{dx} [f(g(x))] = f'(g(x)) \cdot g'(x). This formula, often expressed in Leibniz notation as \frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx} where y = f(u) and u = g(x), captures the essential structure of differentiation for nested functions. Intuitively, the chain rule multiplies the rate of change of the outer function f (evaluated at the inner function g(x)) by the rate of change of the inner function g itself, reflecting how small changes in x propagate through the composition. This perspective aligns with the geometric interpretation of derivatives as slopes, where the overall slope is the product of segmental slopes in the function chain. The proof relies on the limit definition of the derivative. Consider \frac{d}{dx} [f(g(x))] = \lim_{h \to 0} \frac{f(g(x+h)) - f(g(x))}{h}. Let u = g(x), so g(x+h) = u + k where k = g(x+h) - g(x). The expression becomes \lim_{h \to 0} \frac{f(u + k) - f(u)}{k} \cdot \frac{k}{h}. As h \to 0, k \to 0, yielding \lim_{k \to 0} \frac{f(u + k) - f(u)}{k} \cdot \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} = f'(u) \cdot g'(x) = f'(g(x)) \cdot g'(x). This holds provided the limits exist and g(x+h) \neq g(x) for small h \neq 0; the degenerate case where g(x+h) = g(x) arbitrarily close to h = 0 requires a separate argument but yields the same result, with both sides equal to zero. For example, to differentiate \sin(x^2), let f(u) = \sin u and g(x) = x^2, so f'(u) = \cos u and g'(x) = 2x, giving \frac{d}{dx} [\sin(x^2)] = \cos(x^2) \cdot 2x. Similarly, for (x^3 + 1)^5, let f(u) = u^5 and g(x) = x^3 + 1, so f'(u) = 5u^4 and g'(x) = 3x^2, yielding \frac{d}{dx} [(x^3 + 1)^5] = 5(x^3 + 1)^4 \cdot 3x^2. These illustrate application to trigonometric and polynomial compositions, respectively. The chain rule extends to multiple compositions, such as f(g(h(x))), by repeated application: \frac{d}{dx} [f(g(h(x)))] = f'(g(h(x))) \cdot g'(h(x)) \cdot h'(x). For deeper nestings, this product form generalizes, and a tree diagram can clarify the structure by branching from the outermost function inward, multiplying along each path from x to the output.
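
Both chain-rule examples can be confirmed symbolically with a short sketch (assuming sympy):

```python
# Hedged sketch: chain rule f'(g(x)) * g'(x) on the two examples above.
import sympy as sp

x = sp.symbols('x')

# d/dx sin(x^2) = cos(x^2) * 2x
assert sp.simplify(sp.diff(sp.sin(x**2), x) - sp.cos(x**2)*2*x) == 0

# d/dx (x^3 + 1)^5 = 5 (x^3 + 1)^4 * 3x^2
assert sp.simplify(sp.diff((x**3 + 1)**5, x)
                   - 5*(x**3 + 1)**4 * 3*x**2) == 0
```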

Reciprocal Rule

The reciprocal rule provides the derivative of the reciprocal of a differentiable function g(x), where g(x) \neq 0. The formula is \frac{d}{dx} \left[ \frac{1}{g(x)} \right] = -\frac{g'(x)}{[g(x)]^2}. This rule can be derived using the chain rule by substituting u = g(x), so \frac{1}{g(x)} = u^{-1}. The power rule applied to the exponent -1 gives \frac{d}{du} (u^{-1}) = -u^{-2}, and the chain rule then yields \frac{d}{dx} (u^{-1}) = -u^{-2} \cdot u' = -\frac{g'(x)}{[g(x)]^2}. The reciprocal rule connects directly to the power rule when the exponent is -1, extending its application to negative powers via the chain rule for composite functions built on x^{-1}. For example, the derivative of \frac{1}{x} is \frac{d}{dx} \left( \frac{1}{x} \right) = -\frac{1}{x^2}, which follows by setting g(x) = x and g'(x) = 1. Another example is the derivative of \frac{1}{\sin x}, or \csc x, given by \frac{d}{dx} \left( \frac{1}{\sin x} \right) = -\frac{\cos x}{\sin^2 x}, using g(x) = \sin x and g'(x) = \cos x. This rule is a special case of the quotient rule when the numerator is the constant 1.
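
A quick symbolic check of the \csc x example (a sketch assuming sympy):

```python
# Hedged sketch: reciprocal rule -g'/g^2 applied to g(x) = sin(x).
import sympy as sp

x = sp.symbols('x')
g = sp.sin(x)

by_rule = -sp.diff(g, x) / g**2
assert sp.simplify(sp.diff(1/g, x) - by_rule) == 0
print(by_rule)  # -cos(x)/sin(x)**2
```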

Inverse Function Rule

The inverse function rule describes how to compute the derivative of the inverse of a function f, provided the inverse f^{-1} exists and f' is nonzero in the relevant domain. This is essential for handling functions defined implicitly through inversion, ensuring that the differentiability of f implies that of f^{-1} under suitable conditions, such as f being strictly monotonic and continuously differentiable. Let y = f^{-1}(x), so x = f(y). The derivative is given by \frac{dy}{dx} = \frac{1}{f'(y)} = \frac{1}{f'(f^{-1}(x))}, assuming f'(f^{-1}(x)) \neq 0. This expresses the derivative of the inverse function as the reciprocal of the original function's derivative, evaluated at the corresponding point on the original curve. To derive this, apply implicit differentiation to x = f(y). Differentiating both sides with respect to x yields 1 = f'(y) \cdot \frac{dy}{dx}. Solving for \frac{dy}{dx} gives \frac{dy}{dx} = \frac{1}{f'(y)}, and substituting y = f^{-1}(x) completes the proof. This approach leverages the chain rule implicitly while treating y as a function of x. A representative example is f(x) = x^3, which is strictly increasing and thus invertible, with inverse f^{-1}(x) = x^{1/3}. Here, f'(x) = 3x^2, so \frac{d}{dx} [f^{-1}(x)] = \frac{1}{3 (f^{-1}(x))^2} = \frac{1}{3 x^{2/3}} = \frac{1}{3} x^{-2/3}. This result aligns with direct differentiation of x^{1/3} using the power rule, confirming the rule's consistency. The inverse function rule forms the basis for deriving derivatives of inverse trigonometric functions, such as \frac{d}{dx} \arcsin(x) = \frac{1}{\sqrt{1 - x^2}}, by applying it to the sine function after restricting its domain for invertibility. Full details on these applications appear in the section on trigonometric functions.
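
The cube-root example lends itself to a symbolic cross-check; the sketch below (assuming sympy, with x restricted to positive values for a single real branch) compares the rule's output with direct differentiation of x^{1/3}:

```python
# Hedged sketch: inverse function rule 1 / f'(f^{-1}(x)) for f(x) = x^3.
import sympy as sp

x = sp.symbols('x', positive=True)
finv = x**sp.Rational(1, 3)          # f^{-1}(x) = x^(1/3)

rule_value = 1 / (3*finv**2)         # 1 / f'(f^{-1}(x)), with f'(t) = 3t^2
assert sp.simplify(sp.diff(finv, x) - rule_value) == 0
```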

Derivatives of Transcendental Functions

Exponential and Logarithmic Functions

The exponential function e^x is unique among common functions in that its derivative equals the function itself: \frac{d}{dx} \left[ e^x \right] = e^x. This property arises from the definition of the exponential function as the solution to the differential equation f'(x) = f(x) with f(0) = 1. To prove this using the limit definition of the derivative, consider \frac{d}{dx} \left[ e^x \right] = \lim_{h \to 0} \frac{e^{x+h} - e^x}{h} = e^x \lim_{h \to 0} \frac{e^h - 1}{h}. The limit \lim_{h \to 0} \frac{e^h - 1}{h} = 1 holds by the definition of the derivative of e^x at x = 0, confirming the result. For a general exponential function with base a > 0 and a \neq 1, the derivative is \frac{d}{dx} \left[ a^x \right] = a^x \ln a. This follows from rewriting a^x = e^{x \ln a} and applying the chain rule to the composition with the known derivative of e^u. The natural logarithm function \ln x, defined for x > 0, has the derivative \frac{d}{dx} \left[ \ln x \right] = \frac{1}{x}. A proof uses implicit differentiation: let y = \ln x, so e^y = x. Differentiating both sides with respect to x gives e^y \frac{dy}{dx} = 1, hence \frac{dy}{dx} = \frac{1}{e^y} = \frac{1}{x}. For the logarithm with base a > 0 and a \neq 1, defined as \log_a x = \frac{\ln x}{\ln a} for x > 0, \frac{d}{dx} \left[ \log_a x \right] = \frac{1}{x \ln a}. This is obtained by differentiating the change-of-base formula using the known derivative of \ln x. Logarithmic differentiation provides a technique for finding derivatives of complicated products, quotients, or variable powers. For a function y = f(x)^{g(x)} where f(x) > 0, take the natural logarithm of both sides: \ln y = g(x) \ln f(x). Differentiate implicitly with respect to x: \frac{1}{y} \frac{dy}{dx} = g'(x) \ln f(x) + g(x) \frac{f'(x)}{f(x)}, then solve for \frac{dy}{dx} = y \left[ g'(x) \ln f(x) + g(x) \frac{f'(x)}{f(x)} \right] = f(x)^{g(x)} \left[ g'(x) \ln f(x) + g(x) \frac{f'(x)}{f(x)} \right]. This method simplifies differentiation by converting multiplication to addition via logarithms. For example, the derivative of e^{2x} is found using the chain rule on the composition e^{u} with u = 2x: \frac{d}{dx} \left[ e^{2x} \right] = 2 e^{2x}. Similarly, for \ln(x^2) where x > 0, rewrite as 2 \ln x and differentiate: \frac{d}{dx} \left[ \ln(x^2) \right] = \frac{2}{x}. These illustrate how the basic rules extend to composites via the chain rule.
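
Logarithmic differentiation can be exercised on a concrete variable-power function; the sketch below (assuming sympy, with y = x^{\sin x} as an illustrative choice not taken from the text) compares the formula against direct differentiation:

```python
# Hedged sketch: logarithmic differentiation of y = f(x)^g(x).
import sympy as sp

x = sp.symbols('x', positive=True)
f, g = x, sp.sin(x)                  # illustrative choice: y = x^(sin x)
y = f**g

# y' = y * (g' ln f + g f'/f), per the derivation above.
by_log_diff = y*(sp.diff(g, x)*sp.log(f) + g*sp.diff(f, x)/f)
assert sp.simplify(sp.diff(y, x) - by_log_diff) == 0
```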

Trigonometric Functions

The differentiation rules for trigonometric functions form a fundamental part of calculus, enabling the computation of rates of change for periodic functions defined via the unit circle. These rules are derived primarily from the limit definition of the derivative and trigonometric identities, assuming angles are measured in radians for the limits to hold without additional constants. The core derivatives stem from the sine and cosine functions, with others obtained via the quotient rule applied to their ratios. The standard derivatives are as follows: \frac{d}{dx} [\sin x] = \cos x, \quad \frac{d}{dx} [\cos x] = -\sin x, \quad \frac{d}{dx} [\tan x] = \sec^2 x. These formulas apply for all real x where the functions are defined, with \tan x undefined at odd multiples of \pi/2. Extensions to the remaining trigonometric functions yield: \frac{d}{dx} [\csc x] = -\csc x \cot x, \quad \frac{d}{dx} [\sec x] = \sec x \tan x, \quad \frac{d}{dx} [\cot x] = -\csc^2 x. Each of these is derived by expressing the function as a quotient of sine and cosine and applying the quotient rule; for instance, \tan x = \sin x / \cos x, leading to \sec^2 x after simplification using the Pythagorean identity \sin^2 x + \cos^2 x = 1. The foundational proofs for the sine and cosine derivatives rely on key limits established via the squeeze theorem and geometric arguments on the unit circle. Specifically, \lim_{\theta \to 0} \frac{\sin \theta}{\theta} = 1 and \lim_{\theta \to 0} \frac{1 - \cos \theta}{\theta} = 0. For \sin x, the derivative is computed as \frac{d}{dx} [\sin x] = \lim_{h \to 0} \frac{\sin(x + h) - \sin x}{h} = \lim_{h \to 0} \left( \sin x \cdot \frac{\cos h - 1}{h} + \cos x \cdot \frac{\sin h}{h} \right), which simplifies to \cos x using the angle addition formula and the two limits above. Similarly, for \cos x, \frac{d}{dx} [\cos x] = \lim_{h \to 0} \frac{\cos(x + h) - \cos x}{h} = -\sin x, employing the second limit after algebraic rearrangement. These limits are proven geometrically by comparing areas or lengths in a unit circle sector, bounding \sin \theta between tangent and chord approximations as \theta approaches zero. To illustrate application, consider \frac{d}{dx} [\sin(3x)], which by the chain rule equals 3 \cos(3x), combining the core sine derivative with the inner function's derivative. Another example is \frac{d}{dx} [\tan^2 x]: rewriting as (\tan x)^2, the chain rule gives 2 \tan x \cdot \sec^2 x, or alternatively using the product rule on \tan x \cdot \tan x yields the same result. These demonstrate how trigonometric derivatives integrate with earlier rules like the chain and product rules. The derivatives of inverse trigonometric functions are also essential, derived via the inverse function rule, which states that if y = f^{-1}(x), then \frac{dy}{dx} = \frac{1}{f'(y)}. For instance, \frac{d}{dx} [\arcsin x] = \frac{1}{\sqrt{1 - x^2}}, \quad |x| < 1, obtained by setting x = \sin y, differentiating implicitly to get 1 = \cos y \cdot \frac{dy}{dx}, and substituting \cos y = \sqrt{1 - x^2}. Similarly, \frac{d}{dx} [\arccos x] = -\frac{1}{\sqrt{1 - x^2}}, \quad \frac{d}{dx} [\arctan x] = \frac{1}{1 + x^2}. The remaining inverse derivatives are \frac{d}{dx} [\arccsc x] = -\frac{1}{|x| \sqrt{x^2 - 1}}, \quad |x| > 1, \quad \frac{d}{dx} [\arcsec x] = \frac{1}{|x| \sqrt{x^2 - 1}}, \quad |x| > 1, \quad \frac{d}{dx} [\arccot x] = -\frac{1}{1 + x^2}. These follow from analogous implicit differentiation, with absolute values ensuring positivity in the domains. As a brief reference, the inverse function rule facilitates these derivations by inverting the known trigonometric derivatives.
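
The two foundational limits can be observed numerically; the sketch below (plain Python, with arbitrary sample step sizes) shows \sin\theta/\theta approaching 1 and (1 - \cos\theta)/\theta approaching 0:

```python
# Hedged sketch: numerical behavior of the two limits used in the proofs.
import math

for t in (0.1, 0.01, 0.001):
    print(t, math.sin(t)/t, (1 - math.cos(t))/t)
# sin(t)/t -> 1 and (1 - cos(t))/t -> 0 as t -> 0 (radians assumed)
```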

Hyperbolic Functions

Hyperbolic functions are defined in terms of exponential functions and arise naturally in the study of hyperbolas, providing analogs to trigonometric functions but without periodicity. The principal hyperbolic functions are the hyperbolic sine and hyperbolic cosine, given by \sinh x = \frac{e^x - e^{-x}}{2}, \quad \cosh x = \frac{e^x + e^{-x}}{2}. These definitions stem from the exponential function e^x, whose derivative is itself. Additional hyperbolic functions include the hyperbolic tangent \tanh x = \frac{\sinh x}{\cosh x} and the hyperbolic secant \sech x = \frac{1}{\cosh x}. The derivatives of these functions follow directly from the known derivative of the exponential function. For the hyperbolic sine, \frac{d}{dx} [\sinh x] = \frac{d}{dx} \left[ \frac{e^x - e^{-x}}{2} \right] = \frac{e^x + e^{-x}}{2} = \cosh x. Similarly, for the hyperbolic cosine, \frac{d}{dx} [\cosh x] = \frac{d}{dx} \left[ \frac{e^x + e^{-x}}{2} \right] = \frac{e^x - e^{-x}}{2} = \sinh x. These results highlight the interchange of roles compared to their trigonometric counterparts, derived solely from differentiation of the exponential definitions. For the hyperbolic tangent, the derivative is obtained using the quotient rule: \frac{d}{dx} [\tanh x] = \frac{d}{dx} \left[ \frac{\sinh x}{\cosh x} \right] = \frac{\cosh x \cdot \cosh x - \sinh x \cdot \sinh x}{\cosh^2 x} = \frac{\cosh^2 x - \sinh^2 x}{\cosh^2 x} = \sech^2 x, where the numerator simplifies via a fundamental identity. A key identity relating hyperbolic sine and cosine is \cosh^2 x - \sinh^2 x = 1, proved by direct substitution of the definitions: \cosh^2 x - \sinh^2 x = \left( \frac{e^x + e^{-x}}{2} \right)^2 - \left( \frac{e^x - e^{-x}}{2} \right)^2 = \frac{(e^x + e^{-x})^2 - (e^x - e^{-x})^2}{4} = \frac{4}{4} = 1. This identity parallels the Pythagorean identity \sin^2 x + \cos^2 x = 1 but applies to hyperbolas. To illustrate application, consider the derivative of a composite like \cosh(2x). By the chain rule, \frac{d}{dx} [\cosh(2x)] = \sinh(2x) \cdot 2 = 2 \sinh(2x). Likewise, the derivative of \tanh x is \sech^2 x, as shown earlier, which is useful in optimization and differential equations involving hyperbolic forms.
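
These identities and derivatives are easy to confirm symbolically; the sketch below assumes sympy, and the rewrite to exponentials mirrors the definitions above:

```python
# Hedged sketch: hyperbolic identity and derivatives via the exp definitions.
import sympy as sp

x = sp.symbols('x')

assert sp.simplify(sp.cosh(x)**2 - sp.sinh(x)**2 - 1) == 0  # the identity
assert sp.diff(sp.sinh(x), x) == sp.cosh(x)
assert sp.diff(sp.cosh(x), x) == sp.sinh(x)
assert sp.simplify((sp.diff(sp.tanh(x), x)
                    - sp.sech(x)**2).rewrite(sp.exp)) == 0
assert sp.diff(sp.cosh(2*x), x) == 2*sp.sinh(2*x)  # chain rule example
```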

Higher-Order and Specialized Differentiation

Derivatives of Integrals

The derivatives of integrals form a cornerstone of calculus, linking differentiation and integration through the Fundamental Theorem of Calculus (FTC). This theorem provides tools for computing the derivative of a function defined as an integral, either with fixed or variable limits of integration, and extends to cases where the integrand depends on the differentiation variable. These rules enable efficient evaluation without explicit antiderivative computation in many scenarios. The first part of the FTC addresses the case of a definite integral with a fixed lower limit and a variable upper limit. Specifically, if f is continuous on an interval containing a and x, and F(x) = \int_a^x f(t) \, dt, then F'(x) = f(x). This asserts that the function F is an antiderivative of f, reversing the differentiation process directly. To prove this, consider the definition of the derivative: F'(x) = \lim_{h \to 0} \frac{F(x+h) - F(x)}{h} = \lim_{h \to 0} \frac{1}{h} \int_x^{x+h} f(t) \, dt. By the mean value theorem for integrals, since f is continuous, there exists c_h between x and x+h such that \int_x^{x+h} f(t) \, dt = f(c_h) \cdot h. Thus, F'(x) = \lim_{h \to 0} f(c_h). As h \to 0, c_h \to x, and by continuity of f, f(c_h) \to f(x), so F'(x) = f(x). The second part of the FTC, often extended to parameter-dependent integrands, allows differentiation under the integral sign for fixed limits. If f(x, t) is continuous in both variables on a suitable domain, and I(x) = \int_a^b f(x, t) \, dt, then under appropriate conditions (such as continuity of the partial derivative in x), \frac{d}{dx} I(x) = \int_a^b \frac{\partial}{\partial x} f(x, t) \, dt. This justifies interchanging differentiation and integration when the partial derivative exists and is integrable. A more general form, known as the Leibniz integral rule, handles both variable limits and parameter dependence in the integrand. For functions u(x) and v(x) with u(x) < v(x), and f(x, t) continuous with continuous partial derivative \partial f / \partial x, \frac{d}{dx} \int_{u(x)}^{v(x)} f(x, t) \, dt = f(x, v(x)) v'(x) - f(x, u(x)) u'(x) + \int_{u(x)}^{v(x)} \frac{\partial}{\partial x} f(x, t) \, dt. This rule combines boundary contributions from the chain rule applied to the limits with the interior differentiation under the integral sign. For example, applying the first FTC directly: \frac{d}{dx} \int_0^x t^2 \, dt = x^2, since the integrand does not depend on x beyond the limit, reducing to the upper boundary term. In a variable-limit case without parameter dependence, \frac{d}{dx} \int_0^{\sin x} e^t \, dt = e^{\sin x} \cos x, using only the upper limit contribution, as the lower limit is constant and the integrand is independent of x. These examples illustrate how the rules simplify computations for integral-defined functions.
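
The variable-limit example admits a direct numerical sanity check; this sketch (assuming scipy for quadrature, with an arbitrary evaluation point and step size) compares a central difference of F(x) = \int_0^{\sin x} e^t \, dt with the closed form e^{\sin x} \cos x:

```python
# Hedged sketch: numeric check of d/dx of the integral from 0 to sin(x)
# of e^t dt, which should equal e^(sin x) * cos x.
import math
from scipy.integrate import quad

def F(x):
    val, _err = quad(math.exp, 0.0, math.sin(x))
    return val

x0, h = 0.7, 1e-5                       # illustrative point and step
numeric = (F(x0 + h) - F(x0 - h)) / (2*h)
exact = math.exp(math.sin(x0)) * math.cos(x0)
print(numeric, exact)                   # agree to several decimal places
```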

nth-Order Derivatives

Higher-order derivatives extend the concept of differentiation beyond the first order, allowing analysis of how the rate of change itself changes. The second derivative, denoted as f''(x) or \frac{d^2 f}{dx^2}, measures the instantaneous rate of change of the first derivative, often interpreted as concavity or acceleration in applied contexts. For the general nth-order derivative, the notation f^{(n)}(x) or \frac{d^n f}{dx^n} is used, where n is a positive integer greater than or equal to 2. A concrete example illustrates the computation of successive derivatives. Consider the function f(x) = x^3. The first derivative is f'(x) = 3x^2, the second is f''(x) = 6x, the third is f'''(x) = 6, and the fourth is f^{(4)}(x) = 0; higher derivatives beyond the third order remain zero for this polynomial of degree 3. This pattern holds for polynomials generally, where the nth derivative is zero for n exceeding the degree of the polynomial. Key properties of differentiation extend naturally to higher orders. Linearity persists, such that for constants a and b and functions f and g, the nth derivative satisfies (af + bg)^{(n)}(x) = a f^{(n)}(x) + b g^{(n)}(x). For products, the second derivative follows the rule (fg)''(x) = f''(x)g(x) + 2f'(x)g'(x) + f(x)g''(x), a special case of the general Leibniz rule for higher orders. Similarly, the chain rule applies to second derivatives of compositions, though higher-order cases require more advanced formulations. In applications, higher-order derivatives play crucial roles. In physics, the second derivative of position with respect to time represents acceleration, fundamental to classical mechanics. They also underpin the Taylor series expansion, where a function f(x) near a point a is approximated as f(x) \approx \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n, with higher derivatives providing the curvature and finer approximations.
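
The x^3 example is reproducible in a single loop (a sketch assuming sympy):

```python
# Hedged sketch: successive derivatives of x^3 vanish past the degree.
import sympy as sp

x = sp.symbols('x')
f = x**3
for n in range(1, 6):
    print(n, sp.diff(f, x, n))
# 1 -> 3*x**2, 2 -> 6*x, 3 -> 6, 4 -> 0, 5 -> 0
```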

Derivatives of Special Functions

The gamma function, denoted \Gamma(z), is defined for \Re(z) > 0 by the integral representation \Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \, dt. Its derivative follows by differentiating under the integral sign, yielding \Gamma'(z) = \int_0^\infty t^{z-1} e^{-t} \ln t \, dt, valid for the same domain. The digamma function, \psi(z), is the logarithmic derivative of the gamma function: \psi(z) = \frac{\Gamma'(z)}{\Gamma(z)}. A series expansion for the digamma function is \psi(z) = -\gamma + \sum_{k=0}^\infty \left( \frac{1}{k+1} - \frac{1}{k+z} \right), where \gamma is the Euler-Mascheroni constant, for z \neq 0, -1, -2, \dots. For example, evaluating at z=1 gives \psi(1) = -\gamma \approx -0.5772156649. The digamma function appears in analysis for evaluating sums related to harmonic numbers and in physics for computations in quantum field theory. Higher-order derivatives of \ln \Gamma(z) are given by the polygamma functions \psi^{(n)}(z) for n \geq 1, which extend the digamma function as the (n+1)-th derivative of the logarithm of the gamma function, though these receive less emphasis in introductory texts on differentiation. The Riemann zeta function, \zeta(s), is defined for \Re(s) > 1 by the Dirichlet series \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}. Differentiating term by term within this region of convergence produces \zeta'(s) = -\sum_{n=1}^\infty \frac{\ln n}{n^s}. This representation facilitates analytic continuations and evaluations in the complex plane, excluding the pole at s=1. The zeta function and its derivative play central roles in number theory, such as in the study of prime distributions via the Euler product, and in physics, including zeta-function regularization calculations.
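
Both special-function facts can be spot-checked numerically; the sketch below assumes the mpmath library (whose zeta function accepts a derivative keyword) and verifies \psi(1) = -\gamma, then compares \zeta'(2) with the term-by-term series:

```python
# Hedged sketch: digamma at 1 and zeta'(s) via the term-by-term series.
import mpmath as mp

print(mp.digamma(1))   # -0.5772156649...
print(-mp.euler)       # the same value, -gamma

# zeta'(2) from mpmath vs. -sum ln(n)/n^2 summed by nsum.
exact = mp.zeta(2, derivative=1)
series = -mp.nsum(lambda n: mp.log(n)/n**2, [1, mp.inf])
print(exact, series)   # both approximately -0.9375482543
```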