
Taylor's theorem

Taylor's theorem is a fundamental result in calculus that provides a precise way to approximate a smooth function near a specific point using a polynomial whose coefficients are determined by the function's derivatives at that point, along with a remainder term that bounds the approximation error. Formally, if a function f is n+1 times differentiable on an open interval containing points a and x, then there exists some c between a and x such that f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (x - a)^k + \frac{f^{(n+1)}(c)}{(n+1)!} (x - a)^{n+1}. This Lagrange form of the remainder quantifies how closely the nth-degree Taylor polynomial P_n(x) approximates f(x), with the error |R_n(x)| satisfying |R_n(x)| \leq \frac{M}{(n+1)!} |x - a|^{n+1} if |f^{(n+1)}(t)| \leq M on the interval between a and x.

Named after the English mathematician Brook Taylor, the theorem was first stated in a 1712 letter to John Machin and formally published in Taylor's 1715 work Methodus incrementorum directa et inversa. Although Taylor developed his result independently, precursors to the theorem appeared in the works of James Gregory in 1671 and of Isaac Newton, Gottfried Wilhelm Leibniz, Johann Bernoulli, and Abraham de Moivre in the late 17th and early 18th centuries. The remainder term in the modern Lagrange form was introduced by Joseph-Louis Lagrange in 1797, enhancing the theorem's utility for error analysis.

The theorem underpins the construction of Taylor series, which are infinite series expansions that converge to the function itself under suitable conditions, enabling approximations of transcendental functions like e^x, \sin x, and \cos x. When the expansion point is a = 0, the series is called a Maclaurin series, named after Colin Maclaurin, who popularized this special case in 1742. Beyond one variable, the theorem generalizes to multivariable and vector-valued functions, forming the basis for linear approximations, optimization, and numerical methods in higher dimensions.
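The error bound above can be checked numerically. A minimal Python sketch for f = \sin with M = 1 (the degree and evaluation point are chosen purely for illustration):

```python
import math

def taylor_sin(x, n):
    """Taylor polynomial of sin about a = 0, keeping terms of degree <= n."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n // 2 + 1) if 2*k + 1 <= n)

x, n = 0.5, 5
M = 1.0  # every derivative of sin is bounded in absolute value by 1
bound = M * abs(x)**(n + 1) / math.factorial(n + 1)
err = abs(math.sin(x) - taylor_sin(x, n))
print(err, bound)  # err is about 1.5e-6, comfortably below the bound of about 2.2e-5
```

The actual error sits well inside the Lagrange bound, as the theorem guarantees.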

Motivation

Historical Context

The development of Taylor's theorem traces its origins to the late 17th century, amid the burgeoning field of infinitesimal calculus. In 1671, Scottish mathematician James Gregory anticipated key aspects of the theorem through his work on infinite series expansions for functions such as arctangent and tangent, using successive derivatives, in a letter to John Collins dated February 15. These expansions, detailed in unpublished notes from the same year, represented an early method for approximating functions via power series, though Gregory did not generalize it formally. Isaac Newton further advanced these ideas in his development of fluxions, the precursor to modern differential calculus, with significant work completed by 1671 in the manuscript De Methodis Serierum et Fluxionum. In letters intended for Gottfried Wilhelm Leibniz in 1676, Newton hinted at his methods involving infinite series and fluxions for expanding functions, building on Gregory's insights and influencing subsequent mathematicians. Similar ideas were independently developed by Leibniz, Johann Bernoulli, and Abraham de Moivre in the late 17th and early 18th centuries.

The theorem received its first formal statement from English mathematician Brook Taylor, who described it in a 1712 letter to John Machin and published it in his 1715 book Methodus Incrementorum Directa et Inversa. Taylor's version generalized earlier series methods, presenting expansions for solving differential equations and finite difference problems, though without explicit error estimates.

In the late 18th and 19th centuries, the theorem evolved from purely infinite series to finite polynomial approximations with rigorous bounds, driven by refinements to the remainder term. Joseph-Louis Lagrange introduced a key form of the remainder in 1797 in Théorie des Fonctions Analytiques, expressing the error in terms of the next derivative at an intermediate point. Augustin-Louis Cauchy further developed this in 1826 in Exercices de Mathématiques, providing an alternative expression using a weighted mean value, which enhanced the theorem's applicability in analysis. These advancements solidified Taylor's theorem as a cornerstone of mathematical analysis by the mid-19th century.

Intuitive Explanation

Taylor polynomials provide a way to approximate a function locally near a specific point by constructing a polynomial that matches the function's value and its successive derivatives at that point. For instance, the first-degree Taylor polynomial is essentially the tangent line to the function's graph at the expansion point, offering a linear approximation that captures the function's immediate slope. Higher-degree polynomials, such as the second-degree version resembling a parabola, incorporate curvature through the second derivative, yielding a closer fit to the function's behavior in a small neighborhood around the point.

This approximation technique finds a natural analogy in physics, particularly in analyzing small oscillations around an equilibrium position, where nonlinear forces can be simplified using linear terms from the Taylor expansion. For small displacements, higher-order terms become negligible, transforming complex motion—such as that of a pendulum—into simple harmonic motion governed by a linear restoring force, much like Hooke's law for springs. Including additional derivative terms enhances accuracy closer to the expansion point by accounting for subtle nonlinear effects that the linear approximation overlooks, making the method indispensable for modeling physical systems under small perturbations.

In numerical methods, Taylor polynomials enable the estimation of function values and behaviors without direct computation, facilitating efficient algorithms for root-finding, optimization, and numerical integration. They also play a key role in series expansions for solving differential equations, where assuming a power series solution allows recursive determination of coefficients from the equation and initial conditions, providing an intuitive pathway to approximate solutions for otherwise intractable problems. The remainder term represents the difference between the function and its Taylor approximation, quantifying how well the polynomial captures the function beyond the matched derivatives. Understanding and estimating this remainder is essential for practical applications, as it determines the reliability of the approximation over a desired interval and guides the choice of polynomial degree to balance accuracy and computational simplicity.
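The small-oscillation idea can be made concrete with the pendulum's \sin \theta \approx \theta approximation; a short Python check (the displacement value is illustrative):

```python
import math

theta = 0.1                      # a small pendulum displacement, in radians
linear = theta                   # small-angle approximation: sin(theta) ~ theta
cubic = theta - theta**3 / 6     # adds the next term of the Taylor expansion
print(abs(math.sin(theta) - linear))  # roughly 1.7e-4
print(abs(math.sin(theta) - cubic))   # roughly 8.3e-8
```

Adding the cubic correction shrinks the error by three orders of magnitude at this displacement, exactly the behavior the paragraph above describes.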

One-Variable Taylor's Theorem

Statement

Taylor's theorem provides a means to approximate a function near a point using a polynomial based on its derivatives at that point. Specifically, for a function f: \mathbb{R} \to \mathbb{R} that has continuous derivatives up to order n+1 on an open interval containing both a and x, the theorem asserts that f(x) = \sum_{k=0}^n \frac{f^{(k)}(a)}{k!} (x - a)^k + R_n(x, a), where R_n(x, a) denotes the remainder term. One common expression for the remainder is the Peano form, which states that R_n(x, a) = o((x - a)^n) as x \to a. This form holds under the weaker condition that f is n-times differentiable at a, emphasizing the theorem's utility in capturing the asymptotic behavior near the expansion point. The theorem plays a fundamental role in understanding the local behavior of functions, allowing approximations that become increasingly accurate as n increases and x approaches a.
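The Peano statement R_n(x, a) = o((x - a)^n) can be observed numerically: the ratio |f(x) - P_n(x)| / |x - a|^n should shrink to zero as x \to a. A minimal sketch for f = \cos at a = 0 with n = 2 (the sample points are arbitrary):

```python
import math

# Second-order Taylor polynomial of cos about a = 0: P2(x) = 1 - x^2/2.
p2 = lambda x: 1 - x**2 / 2
# Peano form: |cos(x) - P2(x)| / x^2 must tend to 0 as x -> 0 (here it is ~ x^2/24).
ratios = [abs(math.cos(x) - p2(x)) / x**2 for x in (0.1, 0.01, 0.001)]
print(ratios)  # decreasing toward 0
```

Each tenfold step toward the expansion point shrinks the ratio by roughly a factor of 100, consistent with the next term of the series being of order x^4.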

Remainder Formulas

In Taylor's theorem for a function f that is n+1 times differentiable on an interval containing a and x, the remainder term R_n(x, a) after the nth-order expansion around a admits several explicit forms. The Lagrange form of the remainder is given by R_n(x, a) = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x - a)^{n+1}, where \xi lies between a and x. This expression, introduced by Joseph-Louis Lagrange in 1797, expresses the error in terms of the (n+1)th derivative at an intermediate point. The Cauchy form of the remainder is R_n(x, a) = \frac{f^{(n+1)}(\xi)}{n!} (x - \xi)^n (x - a), with \xi between a and x. Named after Augustin-Louis Cauchy, this variant from 1823 factors the error to highlight the distances from \xi to x and from a to x. The integral form of the remainder is R_n(x, a) = \int_a^x \frac{f^{(n+1)}(t)}{n!} (x - t)^n \, dt. This representation, also attributable to Cauchy, integrates the (n+1)th derivative weighted by a power of the distance from t to x. Among these, the Lagrange form is particularly suited for deriving bounds on the remainder when bounds on |f^{(n+1)}| are available, whereas the integral form is advantageous for direct computation or further manipulation, since it avoids introducing an auxiliary point \xi.
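The integral form is exact, so evaluating it by quadrature should reproduce the true error of the Taylor polynomial. A minimal sketch for f = e^x, a = 0, x = 1, n = 3, using a hand-rolled composite Simpson rule (the quadrature helper and parameters are illustrative choices, not part of the theorem):

```python
import math

def simpson(g, a, b, m=1000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, m))
    return s * h / 3

n, x = 3, 1.0
taylor = sum(x**k / math.factorial(k) for k in range(n + 1))   # P3(1) = 8/3
actual = math.exp(x) - taylor                                  # true remainder
# Integral form: R_n = \int_a^x f^{(n+1)}(t) (x-t)^n / n! dt, with f^{(4)} = exp.
integral = simpson(lambda t: math.exp(t) * (x - t)**n / math.factorial(n), 0.0, x)
print(actual, integral)  # both roughly 0.05162
```

The two values agree to quadrature precision, confirming that the integral form carries no hidden slack, unlike a bound.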

Remainder Estimates

In the one-variable case, the Lagrange form of the remainder provides a practical means to estimate the error in Taylor polynomial approximations. Specifically, if f is (n+1)-times differentiable on an interval containing a and x, and if |f^{(n+1)}(\xi)| \leq M for some M > 0 and all \xi between a and x, then the absolute value of the remainder satisfies |R_n(x, a)| \leq \frac{M}{(n+1)!} |x - a|^{n+1}. This bound arises from the expression R_n(x, a) = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x - a)^{n+1} for some \xi between a and x, allowing control over the approximation error by selecting M as the maximum of |f^{(n+1)}| on the relevant interval.

As the degree n of the Taylor polynomial increases, the remainder term exhibits favorable asymptotic behavior provided the higher-order derivatives of f remain bounded on the interval of interest. The factorial denominator (n+1)! eventually dominates the geometric factor |x - a|^{n+1}, causing |R_n(x, a)| to shrink rapidly as n grows for fixed x, which enhances the accuracy of the approximation for x close to a. This shrinking of the remainder underscores the utility of higher-degree polynomials in achieving precise local approximations. The remainder vanishes entirely in the limit as n \to \infty if f is analytic at a, meaning the Taylor series converges to f(x) exactly within the radius of convergence. In such cases, the function admits an infinite series representation without residual error, a property that distinguishes analytic functions from merely smooth ones.

These remainder estimates find essential applications in numerical analysis, where they quantify and control errors in computational approximations of functions, such as in polynomial interpolation or series-based evaluations of transcendental functions. By invoking the bound, practitioners can determine the minimal degree needed to achieve a desired accuracy level, ensuring reliable numerical results.
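Choosing the minimal degree for a target accuracy follows directly from the bound. A minimal Python sketch for approximating e at x = 1, taking M = e as a bound for every derivative of e^t on [0, 1] (the tolerance is an illustrative choice):

```python
import math

x, tol = 1.0, 1e-10
M = math.e  # |d^(n+1)/dt^(n+1) e^t| <= e on [0, 1]
n = 0
while M * x**(n + 1) / math.factorial(n + 1) > tol:
    n += 1
print(n)  # 13, since e/14! is about 3.1e-11 <= tol while e/13! is not
approx = sum(x**k / math.factorial(k) for k in range(n + 1))
print(abs(math.exp(x) - approx))  # actual error, below the guaranteed tolerance
```

The actual error (about 1.2e-11) is even smaller than the worst-case bound, as expected from a guarantee derived from the maximum of the derivative.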

Basic Example

A classic basic example of Taylor's theorem in one variable is the expansion of the exponential function f(x) = e^x around the point a = 0. This function is particularly suitable because all its derivatives are identical to itself, f^{(k)}(x) = e^x for every order k, yielding f^{(k)}(0) = 1 and ensuring uniform simplicity in computations. By Taylor's theorem, the expansion is given by e^x = \sum_{k=0}^n \frac{x^k}{k!} + R_n(x), where the Lagrange form of the remainder is R_n(x) = \frac{e^\xi}{(n+1)!} x^{n+1} for some \xi between 0 and x. To compute explicitly for n = 2, first note f(0) = 1, f'(x) = e^x so f'(0) = 1, and f''(x) = e^x so f''(0) = 1. The second-order Taylor polynomial is thus p_2(x) = 1 + x + \frac{1}{2} x^2, with remainder R_2(x) = \frac{e^\xi}{3!} x^3 = \frac{e^\xi}{6} x^3. For a numerical illustration at x = 1, consider n = 5 (extending the pattern): the polynomial approximates e^1 as 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \frac{1}{120} = 2.71666\ldots, while the true value is e \approx 2.71828. The actual remainder is approximately 0.00162, which is less than 0.002, demonstrating the approximation's accuracy improving with higher n.
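The n = 5 computation above can be reproduced in a few lines of Python:

```python
import math

# Degree-5 Taylor polynomial of e^x at a = 0, evaluated at x = 1.
p5 = sum(1 / math.factorial(k) for k in range(6))  # 1 + 1 + 1/2 + 1/6 + 1/24 + 1/120
remainder = math.e - p5
print(p5)         # 2.716666...
print(remainder)  # about 0.0016152, below the 0.002 mentioned above
```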

Analyticity and Taylor Series

Real Analytic Functions

A real analytic function is an infinitely differentiable function that can be locally represented by its Taylor series. Specifically, a function f: I \to \mathbb{R}, where I is an open interval, is real analytic at a point a \in I if there exists some radius r > 0 such that for all x in the interval (a - r, a + r), f(x) = \sum_{k=0}^\infty \frac{f^{(k)}(a)}{k!} (x - a)^k, and the series converges to f(x) on that interval. Taylor's theorem provides the foundation for this representation: for a function f that is n+1 times differentiable on an interval containing a and x, f(x) equals the nth-order Taylor polynomial plus a remainder term R_n(x). If the remainder R_n(x) \to 0 as n \to \infty for x in some neighborhood of a, then f equals its infinite Taylor series locally around a, establishing that f is real analytic at a.

Classic examples of real analytic functions include polynomials, which are analytic everywhere since their Taylor series terminate after a finite number of terms; the exponential function e^x, whose series \sum_{k=0}^\infty \frac{x^k}{k!} converges to e^x for all real x; and the trigonometric functions \sin x and \cos x, with series \sum_{k=0}^\infty \frac{(-1)^k x^{2k+1}}{(2k+1)!} and \sum_{k=0}^\infty \frac{(-1)^k x^{2k}}{(2k)!}, respectively, both converging everywhere. In contrast, the function f(x) = |x| is not real analytic at x = 0: it fails to be differentiable there, so it admits no Taylor expansion about 0 at all.

The radius of convergence R of the Taylor series at a is determined by the growth rate of the derivatives f^{(k)}(a), via the Cauchy-Hadamard formula: \frac{1}{R} = \limsup_{k \to \infty} \left| \frac{f^{(k)}(a)}{k!} \right|^{1/k}. If the derivatives grow slowly enough that these coefficient roots tend to zero, R is infinite, as in the cases of e^x, \sin x, and \cos x; rapid derivative growth, however, yields a finite (or even zero) radius, limiting the interval where the series represents the function.
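The Cauchy-Hadamard formula can be probed numerically by evaluating |c_k|^{1/k} at a single large k as a crude proxy for the limsup (the index k = 100 and the two test series are illustrative):

```python
import math

def radius_estimate(coeff, k):
    """Single-k proxy for Cauchy-Hadamard: 1 / |c_k|^(1/k)."""
    return 1 / abs(coeff(k)) ** (1 / k)

# Geometric series 1/(1-x) at a = 0: c_k = 1, so R = 1 exactly.
r_geom = radius_estimate(lambda k: 1.0, 100)
# Exponential: c_k = 1/k!, so the estimates grow without bound (R = infinity).
r_exp = radius_estimate(lambda k: 1 / math.factorial(k), 100)
print(r_geom, r_exp)  # 1.0 and a large value that keeps growing with k
```

For 1/(1-x) the estimate is exactly 1, matching the singularity at x = 1; for e^x the estimate is already large at k = 100 and diverges as k increases, reflecting an infinite radius.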

Series Convergence

The radius of convergence R of the Taylor series \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n for a function f infinitely differentiable at a is given by R = \frac{1}{\limsup_{n \to \infty} \left| \frac{f^{(n)}(a)}{n!} \right|^{1/n}}, or equivalently via the root test applied to the coefficients c_n = f^{(n)}(a)/n!. This formula determines the open interval (a - R, a + R) around a where the series converges pointwise, with absolute convergence inside the interval and possible conditional convergence or divergence at the endpoints. However, convergence of the series within this radius does not guarantee that it equals f(x); equality holds if and only if the remainder term in Taylor's theorem tends to zero as n \to \infty for each x in the interval.

A classic counterexample illustrating the limitations of Taylor series convergence to the function itself is provided by smooth but non-analytic functions, such as f(x) = \exp(-1/x^2) for x > 0 and f(x) = 0 for x \leq 0, which is infinitely differentiable at x = 0 with all derivatives vanishing there, yielding the zero Taylor series; this series converges everywhere but agrees with f(x) only for x \leq 0. Such functions highlight that infinite differentiability alone does not ensure the Taylor series represents the function, as the radius of convergence may be positive (or infinite) yet the series may fail to reproduce f away from the expansion point. In contrast, real analytic functions are precisely those for which the Taylor series converges to f in some neighborhood of every point in the domain. Even when the Taylor series diverges, it can serve as an asymptotic series, providing useful approximations via partial sums that improve as the expansion point is approached, despite the full series not converging.
A prominent example is Stirling's series for the gamma function, \Gamma(z+1) \sim \sqrt{2\pi z} \left( \frac{z}{e} \right)^z \sum_{k=0}^{\infty} \frac{a_k}{z^k} as |z| \to \infty in |\arg z| < \pi, where the divergent asymptotic expansion yields successively better approximations for the factorial but has zero radius of convergence as a power series in 1/z. This utility underscores the theorem's role in approximation theory beyond strict convergence. To establish convergence of Taylor series in analytic cases, the majorant method constructs a comparison series whose coefficients dominate those of the original Taylor series, such that convergence of the majorant implies convergence (and often analyticity) of the original within the same disk. For instance, if a majorant power series with radius R > 0 converges, then the Taylor series converges absolutely to an analytic function inside the disk of radius R. This technique is particularly valuable for proving local analyticity from formal power series solutions to differential equations.
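The practical value of a divergent asymptotic series shows up numerically: truncating Stirling's series after a couple of terms already approximates the factorial very well. A minimal Python sketch using the standard leading coefficients 1, 1/(12z), 1/(288z^2) (the evaluation point z = 10 is illustrative):

```python
import math

def stirling(z):
    """First three terms of Stirling's asymptotic series for Gamma(z+1) = z!."""
    series = 1 + 1 / (12 * z) + 1 / (288 * z**2)
    return math.sqrt(2 * math.pi * z) * (z / math.e)**z * series

z = 10
exact = math.gamma(z + 1)            # 10! = 3628800
approx = stirling(z)
rel_err = abs(approx - exact) / exact
print(rel_err)  # small (on the order of 1e-6), despite the full series diverging
```

Adding further terms improves the approximation only up to an optimal truncation point, after which the divergence takes over; this is the characteristic behavior of asymptotic series.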

Complex Analysis Extension

In complex analysis, Taylor's theorem extends to holomorphic functions, which are complex-differentiable in a domain. If f is holomorphic on a disk |z - a| < R centered at a \in \mathbb{C}, then for any n \geq 0 and z in a smaller disk |z - a| < r < R, f(z) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (z - a)^k + R_n(z, a), where the remainder satisfies |R_n(z, a)| \leq M \frac{|z - a|^{n+1}}{(n+1)!}, with M an upper bound for |f^{(n+1)}(\zeta)| on the closed disk |\zeta - a| \leq r. This formulation mirrors the real-variable case but leverages the uniform convergence properties inherent to holomorphic functions within their domain of definition. A fundamental result is that a function f is holomorphic on an open set \Omega \subset \mathbb{C} if and only if, for every point a \in \Omega, there exists a disk around a contained in \Omega such that f equals its Taylor series \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!} (z - a)^k, which converges uniformly to f(z) on compact subsets of that disk. This power series representation underscores the rigid analytic structure of holomorphic functions, distinguishing them from merely smooth real functions. The coefficients in the Taylor series are explicitly given by Cauchy's integral formula: for a positively oriented simple closed contour \gamma enclosing a and contained in the domain of holomorphy, \frac{f^{(k)}(a)}{k!} = \frac{1}{2\pi i} \int_{\gamma} \frac{f(\zeta)}{(\zeta - a)^{k+1}} \, d\zeta. This connection allows the series to be derived directly from contour integration without relying on repeated differentiation. The Taylor series facilitates analytic continuation, enabling the extension of a holomorphic function from an initial domain to a larger region where the series converges, provided no singularities obstruct the process. 
For instance, if the series around a converges in |z - a| < \rho, it defines a holomorphic extension to that disk, and overlapping disks allow stepwise continuation along paths avoiding branch points or poles. This property parallels the behavior of real analytic functions but exploits the global nature of complex holomorphy for broader extensions.
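Cauchy's integral formula for the coefficients can be evaluated numerically by sampling the contour integral over a circle with the trapezoid rule, which is spectrally accurate for periodic integrands. A minimal sketch recovering the coefficients of e^z around a = 0 (the radius and sample count are illustrative):

```python
import cmath
import math

def taylor_coeff(f, a, k, r=1.0, m=4096):
    """c_k = f^(k)(a)/k! via (1/2pi) * integral of f(a + r e^{i t}) e^{-i k t} / r^k dt,
    discretized with the trapezoid rule on m equispaced contour points."""
    total = 0j
    for j in range(m):
        theta = 2 * math.pi * j / m
        z = a + r * cmath.exp(1j * theta)
        total += f(z) * cmath.exp(-1j * k * theta)
    return total / (m * r**k)

# For f = exp around a = 0, the Taylor coefficients are 1/k!.
c3 = taylor_coeff(cmath.exp, 0.0, 3)
print(abs(c3 - 1/6))  # essentially zero
```

No differentiation is performed anywhere: the third derivative's contribution is extracted purely from contour values, exactly as the formula promises.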

Generalizations

Multivariable Case

Taylor's theorem extends naturally to functions of several variables, generalizing the one-variable case by incorporating partial derivatives and multi-index notation. Consider a function f: \mathbb{R}^m \to \mathbb{R} that is (n+1)-times continuously differentiable, denoted f \in C^{n+1}, in a neighborhood of a point a \in \mathbb{R}^m. Then, for x sufficiently close to a, the theorem states that f(x) = \sum_{|\alpha| \leq n} \frac{D^\alpha f(a)}{\alpha!} (x - a)^\alpha + R_n(x, a), where the sum is over all multi-indices \alpha = (\alpha_1, \dots, \alpha_m) with non-negative integer components satisfying |\alpha| = \alpha_1 + \dots + \alpha_m \leq n, D^\alpha f(a) denotes the corresponding partial derivative of order |\alpha| evaluated at a, \alpha! = \alpha_1! \cdots \alpha_m!, and (x - a)^\alpha = (x_1 - a_1)^{\alpha_1} \cdots (x_m - a_m)^{\alpha_m}. The remainder term satisfies R_n(x, a) = O(\|x - a\|^{n+1}) as x \to a, where \|\cdot\| is the Euclidean norm. The polynomial part of the expansion, \sum_{|\alpha| \leq n} \frac{D^\alpha f(a)}{\alpha!} (x - a)^\alpha, arises as the multivariable analog of the Taylor polynomial and can be derived using the multinomial theorem applied to the one-variable expansion along line segments in \mathbb{R}^m. This formulation assumes the existence and continuity of all partial derivatives up to order n+1 in an open neighborhood containing the line segment from a to x, ensuring the remainder vanishes at order n+1.
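A concrete instance of the multi-index sum: for f(x, y) = e^x \sin y at a = (0, 0), the partials through order two are f = 0, f_x = 0, f_y = 1, f_{xx} = 0, f_{xy} = 1, f_{yy} = 0, so the second-order Taylor polynomial is y + xy, and the error should be O(\|h\|^3). A minimal Python check (the step size is illustrative):

```python
import math

def f(x, y):
    return math.exp(x) * math.sin(y)

def p2(x, y):
    # Multi-index sum over |alpha| <= 2 with the partials listed above.
    return y + x * y

h = 0.01
err = abs(f(h, h) - p2(h, h))
print(err)  # a few times 1e-7, consistent with third-order decay
```

At \|h\| \approx 0.014, a cubic remainder of size roughly \|h\|^3 \approx 3\times 10^{-6} (times modest derivative constants) matches the observed error.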

Weaker Differentiability Conditions

The Peano form of Taylor's theorem provides a generalization under weaker differentiability assumptions than the standard version, which typically requires the function to be n+1 times continuously differentiable. Specifically, if a function f is n times differentiable at a point a, then f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (x - a)^k + o((x - a)^{n}) as x \to a. Here, the existence of the nth derivative at a is assumed, but neither its continuity nor the existence of an (n+1)th derivative is required; lower-order derivatives up to n-1 must exist in a neighborhood of a. The remainder term in this form is expressed using little-o notation, o((x - a)^{n}), which indicates that the error divided by (x - a)^{n} approaches zero as x \to a, without providing an explicit bound or relying on higher derivatives. This contrasts with stronger remainder forms like Lagrange or integral, which demand additional smoothness for quantitative error estimates. For non-smooth functions where higher derivatives may not exist or be continuous, the Peano form still guarantees a local polynomial approximation of order n at a, though the error control is qualitative rather than precise, limiting applications needing explicit bounds. This makes it particularly useful in contexts like studying asymptotic behavior or proving existence of approximations under minimal regularity. The Peano form was introduced by Giuseppe Peano in 1889 as a new expression for the remainder in Taylor's formula, building on earlier work while relaxing smoothness requirements. Such extensions in the late 19th and early 20th centuries highlighted the theorem's robustness beyond classical analytic settings.
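The minimal-regularity point can be illustrated with f(x) = x^4 \sin(1/x) (with f(0) = 0): f''(0) = 0 exists, but f'' oscillates without limit near 0, so the Lagrange form's hypotheses fail there while the Peano conclusion f(x) = o(x^2) still holds. A short Python check:

```python
import math

def f(x):
    """Twice differentiable at 0 with f(0)=f'(0)=f''(0)=0, but f'' is discontinuous at 0."""
    return x**4 * math.sin(1 / x) if x != 0 else 0.0

# Peano form with n = 2: every Taylor coefficient at 0 vanishes, so f(x) = o(x^2).
ratios = [abs(f(x)) / x**2 for x in (0.1, 0.01, 0.001)]
print(ratios)  # each ratio is at most x^2, tending to 0
```

The ratio |f(x)|/x^2 is bounded by x^2, so the qualitative o(x^2) guarantee is visible even though no quantitative Lagrange-type bound is available at 0.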

Multidimensional Example

In the multivariable generalization of Taylor's theorem, the expansion around a point uses partial derivatives evaluated at that point, with the remainder expressed in terms of higher-order derivatives at an intermediate point. Consider the function f(x, y) = e^{x + y} expanded around the point (0, 0) to first order (n=1). The value at the expansion point is f(0, 0) = e^{0} = 1. The first partial derivatives are \frac{\partial f}{\partial x} = e^{x + y} and \frac{\partial f}{\partial y} = e^{x + y}, both evaluating to 1 at (0, 0). Thus, the first-order Taylor approximation is f(x, y) \approx 1 + x + y. The Lagrange form of the remainder after the first-order terms is R_1(x, y) = \frac{1}{2} e^{\xi + \eta} (x^2 + 2xy + y^2) for some point (\xi, \eta) on the line segment from (0, 0) to (x, y). This follows from the second partial derivatives: \frac{\partial^2 f}{\partial x^2} = e^{x + y}, \frac{\partial^2 f}{\partial y^2} = e^{x + y}, and \frac{\partial^2 f}{\partial x \partial y} = e^{x + y}, all equal to e^{\xi + \eta} at the intermediate point (\xi, \eta). Therefore, the full expansion is f(x, y) = 1 + x + y + R_1(x, y). To verify numerically, evaluate at (x, y) = (0.1, 0.1): the exact value is e^{0.2} \approx 1.2214, while the linear approximation gives 1 + 0.1 + 0.1 = 1.2, yielding an error of approximately 0.0214. This error aligns with the remainder term, as \frac{1}{2} e^{\xi + \eta} (0.04) = 0.02 e^{\xi + \eta}, which for small \xi, \eta is on the order of 0.02, matching the observed error. In contrast to the one-variable case, consider slices of this expansion: fixing y=0 yields f(x, 0) = e^x \approx 1 + x, the standard first-order Taylor polynomial for the exponential function along the x-axis; similarly, fixing x=0 gives e^y \approx 1 + y along the y-axis.
This highlights how the multivariable expansion reduces to the univariate version on coordinate axes, but captures cross terms like xy in the remainder for off-axis directions.
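The arithmetic in this example can be reproduced directly:

```python
import math

x = y = 0.1
exact = math.exp(x + y)      # e^0.2, about 1.221402...
linear = 1 + x + y           # first-order Taylor polynomial at (0, 0)
err = exact - linear
print(err)  # about 0.021403, consistent with R1 = 0.02 * e^(xi+eta) for a small xi+eta
```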

Proofs

One-Variable Proof via Integration

The proof of Taylor's theorem in one variable using integration begins with the fundamental theorem of calculus, which states that for a continuously differentiable function f on an interval containing a and x, f(x) - f(a) = \int_a^x f'(t) \, dt. This provides the zeroth-order approximation f(x) \approx f(a) with the first-order remainder given by the integral. To obtain higher-order terms, apply integration by parts iteratively to the remainder integral. For the first iteration, set u = f'(t) and dv = dt, so du = f''(t) \, dt and v = t - x. Then, \int_a^x f'(t) \, dt = \left[ f'(t) (t - x) \right]_a^x + \int_a^x f''(t) (x - t) \, dt = -f'(a)(a - x) + \int_a^x f''(t) (x - t) \, dt. The boundary term at t = x vanishes, and -(a - x) = x - a, yielding f(x) = f(a) + f'(a)(x - a) + \int_a^x f''(t) (x - t) \, dt. This assumes f'' exists and is integrable on the interval. Repeating the process n times generalizes the pattern. Assume the formula holds up to order n-1: f(x) = \sum_{k=0}^{n-1} \frac{f^{(k)}(a)}{k!} (x - a)^k + \int_a^x \frac{f^{(n)}(t)}{(n-1)!} (x - t)^{n-1} \, dt. Now integrate by parts on the remainder, setting u = f^{(n)}(t) and dv = \frac{(x - t)^{n-1}}{(n-1)!} dt, so du = f^{(n+1)}(t) \, dt and v = -\frac{(x - t)^n}{n!}. The boundary evaluation gives the nth Taylor term \frac{f^{(n)}(a)}{n!} (x - a)^n, and the new remainder is R_n(x) = \frac{1}{n!} \int_a^x f^{(n+1)}(t) (x - t)^n \, dt. Thus, the full expansion is f(x) = \sum_{k=0}^n \frac{f^{(k)}(a)}{k!} (x - a)^k + R_n(x). This derivation requires that f^{(n+1)} exists and is integrable on the interval containing a and x, ensuring the integrals converge.
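The first integration-by-parts step is an exact identity, f(x) = f(a) + f'(a)(x - a) + \int_a^x f''(t)(x - t)\,dt, and can be verified numerically. A minimal sketch for f(t) = \ln(1 + t) on [0, 0.5], with a hand-rolled Simpson rule standing in for the integral (the function and interval are illustrative):

```python
import math

def simpson(g, a, b, m=2000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, m))
    return s * h / 3

f = lambda t: math.log(1 + t)
df = lambda t: 1 / (1 + t)
d2f = lambda t: -1 / (1 + t)**2

a, x = 0.0, 0.5
rhs = f(a) + df(a) * (x - a) + simpson(lambda t: d2f(t) * (x - t), a, x)
print(abs(f(x) - rhs))  # essentially zero
```

The identity holds to quadrature precision, confirming that each step of the derivation introduces no approximation.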

Mean Value Theorem Approach

The mean value theorem offers an elementary path to establishing Taylor's theorem through repeated applications, deriving the Lagrange form of the remainder without invoking integration. This method leverages the theorem's ability to relate function values to derivatives at intermediate points, constructing the expansion using an auxiliary function. The base case follows directly from the mean value theorem: if f is continuous on [a, x] and differentiable on (a, x), then there exists \xi_1 \in (a, x) such that f(x) - f(a) = f'(\xi_1) (x - a). This expresses f(x) as its zeroth-order Taylor polynomial plus a first-order remainder term. For the general case, let P_n(t) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (t - a)^k denote the n-th degree Taylor polynomial centered at a. Define the remainder R_n(x) = f(x) - P_n(x), and introduce the auxiliary function g(t) = f(t) - P_n(t) - K (t - a)^{n+1}, where K = \frac{R_n(x)}{(x - a)^{n+1}} is chosen so that g(x) = 0. By construction, g(a) = g'(a) = \cdots = g^{(n)}(a) = 0. Since g is (n+1)-times differentiable and vanishes at a along with its first n derivatives, repeated applications of Rolle's theorem—the equal-endpoint special case of the mean value theorem—imply the existence of a point \xi \in (a, x) such that g^{(n+1)}(\xi) = 0. Differentiating g yields g^{(n+1)}(t) = f^{(n+1)}(t) - (n+1)! K, so f^{(n+1)}(\xi) - (n+1)! K = 0 \implies K = \frac{f^{(n+1)}(\xi)}{(n+1)!}. Substituting back gives R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x - a)^{n+1}. Thus, the full statement is f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (x - a)^k + \frac{f^{(n+1)}(\xi)}{(n+1)!} (x - a)^{n+1}, where \xi lies between a and x, assuming f is (n+1)-times differentiable on the interval. This approach is particularly advantageous for its simplicity and reliance solely on the mean value theorem and basic properties of derivatives, avoiding the need for integral calculus or more sophisticated tools like L'Hôpital's rule.
It underscores the theorem's role in quantifying approximation errors through higher derivatives, providing a discrete buildup of the expansion.
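The intermediate point \xi whose existence the argument guarantees can actually be located numerically in simple cases. For f = e^t, a = 0, x = 1, n = 2, the Lagrange form reads R_2 = e^\xi / 3!, so \xi = \ln(6 R_2); a minimal bisection sketch:

```python
import math

# f = exp, a = 0, x = 1, n = 2: R2 = e - (1 + 1 + 1/2), and R2 = e^xi / 3!.
r2 = math.e - 2.5
target = 6 * r2                  # e^xi must equal 6 * R2, about 1.30969
lo, hi = 0.0, 1.0                # the theorem guarantees xi lies in (0, 1)
for _ in range(60):              # bisection on the increasing function e^xi - target
    mid = (lo + hi) / 2
    if math.exp(mid) < target:
        lo = mid
    else:
        hi = mid
xi = (lo + hi) / 2
print(xi)  # about 0.2698, comfortably inside (0, 1)
```

Substituting \xi back reproduces the remainder exactly, illustrating that the Lagrange form is an identity at the right intermediate point rather than merely a bound.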

Integral Remainder Derivation

The integral remainder form of Taylor's theorem provides an exact expression for the error in the Taylor polynomial approximation without invoking mean value theorems or bounds on derivatives. Assuming the function f is n times continuously differentiable on an interval containing a and x, with f^{(n)} continuous, the Taylor expansion up to order n-1 is given by f(x) = \sum_{k=0}^{n-1} \frac{f^{(k)}(a)}{k!}(x - a)^k + R_n(x), where the remainder R_n(x) takes the integral form R_n(x) = \int_a^x \frac{f^{(n)}(t)}{(n-1)!} (x - t)^{n-1} \, dt. \tag{1}\label{eq:integral_remainder} This form is exact under the condition that f \in C^n, ensuring the nth derivative is continuous and thus integrable, which guarantees the validity of the integration steps without additional assumptions. To derive \eqref{eq:integral_remainder}, begin with the fundamental theorem of calculus applied to f: f(x) = f(a) + \int_a^x f'(t) \, dt. \tag{2}\label{eq:ftc} Integrate the right-hand side by parts, setting u = f'(t) and dv = dt, so du = f''(t) \, dt and v = t - x. This yields \int_a^x f'(t) \, dt = f'(x)(x - x) - f'(a)(a - x) - \int_a^x f''(t)(t - x) \, dt = f'(a)(x - a) + \int_a^x f''(t)(x - t) \, dt, substituting back into \eqref{eq:ftc} to obtain f(x) = f(a) + f'(a)(x - a) + \int_a^x f''(t)(x - t) \, dt. \tag{3}\label{eq:second_order} Repeating this integration by parts on the remaining integral, now with u = f''(t) and dv = (x - t) \, dt (so v = -\frac{1}{2}(x - t)^2), produces the next term \frac{f''(a)}{2!}(x - a)^2 plus an integral involving f'''(t)(x - t)^2. Continuing this process inductively for n-1 steps generalizes the pattern: each integration by parts extracts a term \frac{f^{(k)}(a)}{k!}(x - a)^k for k = 1, \dots, n-1, leaving the remainder as \eqref{eq:integral_remainder}. The boundary terms at t = x vanish in each step, ensuring no additional contributions, and the factorial denominators arise from the repeated integration of the powers of (x - t).
This repeated integration by parts establishes the uniqueness of the integral remainder form under the C^n smoothness condition, as the process is reversible: starting from \eqref{eq:integral_remainder} and integrating by parts in the reverse direction recovers the full expansion without loss of information, provided the derivatives exist and are continuous to allow the necessary substitutions.

Multivariable Remainder Sketch

In the multivariable setting, a common approach to deriving Taylor's theorem involves reducing the problem to the one-variable case by considering the function along straight lines connecting the expansion point a to x. Specifically, define the auxiliary function g(t) = f(a + t(x - a)) for t \in [0, 1], where f: \mathbb{R}^m \to \mathbb{R} is sufficiently differentiable. Applying the one-variable Taylor theorem to g(t) around t = 0 yields g(1) = \sum_{k=0}^n \frac{g^{(k)}(0)}{k!} + R_n(g), where the derivatives g^{(k)}(0) express the higher-order directional derivatives of f at a in the direction x - a. Substituting back gives the multivariable expansion up to order n, with the remainder R_n capturing the error term. To express the full polynomial part rigorously, the multivariable Taylor expansion employs multi-index notation, where a multi-index \alpha = (\alpha_1, \dots, \alpha_m) \in \mathbb{N}_0^m has length |\alpha| = \sum_{i=1}^m \alpha_i. Assuming f is of class C^{n+1}, the expansion becomes f(x) = \sum_{|\alpha| \leq n} \frac{D^\alpha f(a)}{\alpha!} (x - a)^\alpha + R_n(x, a), where D^\alpha f = \partial_1^{\alpha_1} \cdots \partial_m^{\alpha_m} f denotes the iterated partial derivative, and (x - a)^\alpha = \prod_{i=1}^m (x_i - a_i)^{\alpha_i}. This form arises from repeated applications of the chain rule in the line-integral approach or by induction on the order, leveraging the symmetry of mixed partial derivatives under continuity (Clairaut's theorem). The iterated directional derivatives ( (x - a) \cdot \nabla )^k f(a) expand into the multi-index sum via the multinomial theorem. The remainder term R_n(x, a) admits an integral form, such as R_n(x, a) = \sum_{|\alpha| = n+1} \frac{(x - a)^\alpha}{\alpha!} \int_0^1 (1 - t)^n D^\alpha f(a + t(x - a)) \, dt, which follows from integration by parts in the one-variable expansion of g(t).
For bounding purposes, under the assumption that all (n+1)-th partial derivatives are bounded by some M > 0 on the line segment joining a and x, the remainder satisfies |R_n(x, a)| \leq C \|x - a\|^{n+1}, where C depends on M, the dimension m, and n, typically of the form C = \frac{M \cdot m^{n+1}}{(n+1)!}. This bound ensures the Taylor polynomial approximates f locally with error decaying at order n+1. Although mixed partial derivatives commute when continuous, constructing the expansion without assuming C^{n+1} regularity requires careful handling of non-symmetric forms, such as using symmetrized multilinear maps or specific ordering of differentiations to avoid inconsistencies in the polynomial terms. For greater generality, Taylor's theorem extends to functions between Banach spaces X and Y, where the expansion uses Fréchet derivatives f^{(k)}(a): X^k \to Y, and the remainder takes a similar integral form R_n(a, h) = \int_0^1 \frac{(1 - t)^n}{n!} f^{(n+1)}(a + t h) (h, \dots, h) \, dt, with bounds following from the uniform boundedness of the higher Fréchet derivatives.