
Differential calculus

Differential calculus is a fundamental branch of mathematics concerned with the study of rates of change and the slopes of curves, centered on the concept of the derivative, which quantifies the instantaneous rate at which a function's value changes with respect to its input variable. It forms one half of the broader field of calculus, alongside integral calculus, and provides essential tools for analyzing continuous change in quantities such as position, velocity, and acceleration. The origins of differential calculus trace back to ancient mathematicians, with early precursors in the work of the Indian scholar Aryabhata in 499 CE, who employed notions of infinitesimals for astronomical calculations, and the Persian mathematician Sharaf al-Din al-Tusi in the 12th century, who discovered derivatives of cubic polynomials. However, the systematic development occurred in the late 17th century through the independent contributions of Isaac Newton in England and Gottfried Wilhelm Leibniz in Germany, who formalized the derivative as a limit of secant slopes approaching a tangent line. Leibniz introduced the term "differential calculus" in 1684, emphasizing infinitesimally small differences between finite quantities. At its core, differential calculus relies on the limit concept to define the derivative f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}, enabling the computation of tangents, optimization of functions via critical points where f'(x) = 0, and approximation of function values using linearizations like f(x + \Delta x) \approx f(x) + f'(x) \Delta x. Key rules, such as the power rule \frac{d}{dx} x^n = n x^{n-1}, the product rule, the quotient rule, and the chain rule, facilitate differentiation of complex expressions. Differential calculus underpins numerous applications across sciences and engineering, including modeling motion in physics through Newton's laws, where velocity is the derivative of position and acceleration the derivative of velocity; optimizing economic models in business; and simulating biological growth rates. Its integration with integral calculus via the fundamental theorem of calculus links differentiation and integration as inverse operations, forming the bedrock of advanced mathematics and real-world problem-solving.

Foundations

Limits as Prerequisites

The concept of a limit provides an intuitive foundation for understanding how a function behaves as its input approaches a specific value, without necessarily requiring the function to be defined or evaluated at that point. For a function f(x), the limit \lim_{x \to a} f(x) = L means that as x gets arbitrarily close to a, the output f(x) gets arbitrarily close to L, capturing the idea of approaching values from nearby points. This notion extends to one-sided limits, where the approach is restricted to values greater than a (right-hand limit, \lim_{x \to a^+} f(x)) or less than a (left-hand limit, \lim_{x \to a^-} f(x)); the two-sided limit exists only if both one-sided limits exist and are equal. Infinite limits describe cases where f(x) grows without bound as x approaches a, denoted as \lim_{x \to a} f(x) = \infty or -\infty, indicating vertical asymptotes or unbounded behavior near a. A rigorous formalization of the limit for functions from the real numbers to the real numbers uses the epsilon-delta definition, which quantifies the intuitive notion with precise control over closeness. Specifically, \lim_{x \to a} f(x) = L if and only if for every \epsilon > 0, there exists a \delta > 0 such that whenever 0 < |x - a| < \delta, it follows that |f(x) - L| < \epsilon. This definition ensures that no matter how small the tolerance \epsilon around L, a corresponding interval around a (of width \delta) can be found where f(x) stays within that tolerance, excluding the point x = a itself to focus on approaching behavior. The epsilon-delta approach applies similarly to one-sided and infinite limits with appropriate modifications, such as replacing the two-sided distance with one-sided inequalities or replacing the tolerance \epsilon with an arbitrarily large bound M. Limits obey several algebraic properties that facilitate computation and analysis, assuming the individual limits exist. 
The sum rule states that \lim_{x \to a} [f(x) + g(x)] = \lim_{x \to a} f(x) + \lim_{x \to a} g(x), and similarly for differences. The product rule gives \lim_{x \to a} [f(x) \cdot g(x)] = \left( \lim_{x \to a} f(x) \right) \cdot \left( \lim_{x \to a} g(x) \right), while the constant multiple rule is \lim_{x \to a} [c \cdot f(x)] = c \cdot \lim_{x \to a} f(x) for any constant c. For quotients, \lim_{x \to a} \frac{f(x)}{g(x)} = \frac{\lim_{x \to a} f(x)}{\lim_{x \to a} g(x)} provided the limit of the denominator is not zero. Additionally, the limit of a composition satisfies \lim_{x \to a} g(f(x)) = g\left( \lim_{x \to a} f(x) \right) if g is continuous at that limiting value. These properties hold for one-sided and infinite limits under compatible conditions. A function f is continuous at a point a if \lim_{x \to a} f(x) = f(a), meaning the function value matches the limit of approaching values, allowing the graph to be drawn without interruption at that point. Continuity on an interval requires this property at every point in the interval. Discontinuities occur when this fails, classified into types based on limit behavior: a removable discontinuity arises if the limit exists but differs from f(a) or if f(a) is undefined, allowing redefinition to restore continuity; a jump discontinuity happens when the one-sided limits exist but differ, creating a sudden "jump" in the graph; and an infinite discontinuity occurs if at least one one-sided limit is infinite, often near vertical asymptotes. These classifications help identify and analyze breaks in function behavior. Examples illustrate these concepts clearly. For rational functions, consider \lim_{x \to 2} \frac{x^2 - 4}{x - 2}; direct substitution yields \frac{0}{0}, an indeterminate form, but factoring the numerator as (x - 2)(x + 2) simplifies to \lim_{x \to 2} (x + 2) = 4, demonstrating cancellation to resolve the apparent issue. 
Another rational example is \lim_{x \to 0} \frac{1}{x^2} = \infty, an infinite limit indicating a vertical asymptote at x = 0. For trigonometric functions, \lim_{x \to \pi/2} \sin x = 1 follows directly from the unit circle definition, as \sin(\pi/2) = 1 and the sine function is continuous there; similarly, \lim_{x \to \pi/2} \cos x = 0, showcasing smooth behavior near key points. These limits underpin the rigorous definition of derivatives in subsequent developments.
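The cancellation in \lim_{x \to 2} \frac{x^2 - 4}{x - 2} = 4 can be checked numerically by evaluating the function at inputs approaching 2 from both sides. The following sketch (the helper name limit_estimate is illustrative, not from the source) averages the left- and right-hand values for shrinking offsets h:

```python
def limit_estimate(f, a, steps=8):
    """Estimate lim_{x -> a} f(x) by averaging f(a + h) and f(a - h)
    for a shrinking offset h; f need not be defined at a itself."""
    est = None
    for k in range(1, steps + 1):
        h = 10.0 ** (-k)
        est = 0.5 * (f(a + h) + f(a - h))  # two-sided average near a
    return est

# f(x) = (x^2 - 4)/(x - 2) is undefined at x = 2, yet the limit is 4.
f = lambda x: (x**2 - 4) / (x - 2)
print(round(limit_estimate(f, 2.0), 6))  # prints 4.0
```

The two-sided average converges to the limit precisely because both one-sided limits exist and agree, mirroring the definition in the text.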

Definition and Notation of Derivatives

The derivative of a function f at a point a in its domain is defined as the limit f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}, provided this limit exists. This expression represents the instantaneous rate of change of f at a, and the function f is said to be differentiable at a if the limit is finite. For the limit to exist, the left-hand limit as h \to 0^- and the right-hand limit as h \to 0^+ must both exist and be equal; these are known as the left-hand derivative f'_-(a) and right-hand derivative f'_+(a), respectively. If either does not exist or they differ, the derivative is undefined at a. Geometrically, the derivative f'(a) gives the slope of the tangent line to the curve y = f(x) at the point (a, f(a)). This tangent line approximates the function near a, touching the graph at that point and having the same instantaneous direction as the curve. In cases where the tangent line is vertical, the slope is infinite, and the derivative does not exist, as the limit diverges to \pm \infty. Physically, the derivative is interpreted as the instantaneous rate of change of one quantity with respect to another; for instance, if s(t) denotes position as a function of time t, then s'(t) represents velocity at time t. This captures the rate at an exact instant, contrasting with average rates over intervals. Several notations are used for derivatives. The Leibniz notation, \frac{dy}{dx}, treats the derivative as a ratio of differentials and is common for functions y = f(x). Lagrange's notation, f'(x) or f'(a), emphasizes the function and is widely used for explicit derivatives. Newton's notation, \dot{x} or \ddot{x} for first and second derivatives, respectively, appears in physics for time-dependent variables. Higher-order derivatives follow similarly, such as f''(x) for the second derivative or \frac{d^2 y}{dx^2} in Leibniz form. Differentiability at a point implies continuity there, since if f'(a) exists, then \lim_{x \to a} f(x) = f(a). 
The converse does not hold: a function can be continuous at a but not differentiable. For example, the absolute value function f(x) = |x| is continuous at x = 0, but its derivative does not exist there because the left-hand derivative is -1 and the right-hand derivative is 1.
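The one-sided difference quotients above can be computed directly; a minimal sketch (the helper names forward_dq and backward_dq are illustrative) shows them agreeing for a smooth function and disagreeing for |x| at 0:

```python
def forward_dq(f, a, h=1e-6):
    # right-hand difference quotient (f(a + h) - f(a)) / h
    return (f(a + h) - f(a)) / h

def backward_dq(f, a, h=1e-6):
    # left-hand difference quotient (f(a) - f(a - h)) / h
    return (f(a) - f(a - h)) / h

# Smooth case: both one-sided quotients approach f'(3) = 6 for f(x) = x^2.
f = lambda x: x**2
print(forward_dq(f, 3.0), backward_dq(f, 3.0))  # both near 6

# |x| at 0: right-hand derivative +1, left-hand derivative -1,
# so the two-sided limit, and hence the derivative, does not exist.
print(forward_dq(abs, 0.0), backward_dq(abs, 0.0))  # 1.0 and -1.0
```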

Historical Development

Pre-17th Century Contributions

The earliest precursors to differential calculus emerged in ancient Greek mathematics, particularly through the method of exhaustion developed by Eudoxus and refined by Archimedes around 250 BCE. This technique approximated areas and volumes by inscribing and circumscribing polygons, effectively approaching limits without formal notation, as seen in Archimedes' Quadrature of the Parabola, where he demonstrated that the area of a parabolic segment is four-thirds that of the inscribed triangle using successive inscribed triangles and properties of similar triangles to relate segments and tangents. Archimedes also employed geometric constructions involving tangents to parabolas, leveraging similar triangles to determine slopes and curvatures intuitively, though without algebraic symbols for rates of change. In ancient India, Aryabhata (c. 476–550 CE) used notions of infinitesimals in his astronomical calculations around 499 CE, expressing problems in the form of differential equations to model planetary motion and rates of change. In medieval India, the Kerala School of mathematics, founded by Madhava of Sangamagrama (c. 1340–1425), advanced infinite series expansions for trigonometric functions such as sine and cosine, which implicitly captured notions of instantaneous rates of change through term-wise differentiation of these series. These expansions, derived via geometric and recursive methods, represented a significant step toward understanding derivatives as limits of ratios, predating European developments by centuries. During the Islamic Golden Age, scholars like Ibn al-Haytham (965–1040), in his Book of Optics, utilized geometric techniques to solve reflection problems involving tangents to circular mirrors, approximating slopes through intersecting lines and conic sections to model light rays' paths. Similarly, Sharaf al-Din al-Tusi (d. 1213) in his treatise on cubic equations pioneered algebraic methods to locate maxima and minima, treating functions as positive when their "roots" exceeded certain values, which constituted an early form of optimization via geometric and algebraic inequalities. In medieval Europe, the Oxford Calculators of Merton College in the 14th century, including William Heytesbury (c. 1313–1372), explored instantaneous velocity in uniformly accelerated motion through the mean speed theorem, which equated the distance traveled to that under constant velocity equal to the average of initial and final speeds, using suppositional reasoning to conceptualize rates at instants without formal limits. These intuitive geometric and kinematic approximations to slopes and velocities, devoid of derivative notation, provided foundational insights that later influenced the formalization of calculus by Newton and Leibniz.

Newton, Leibniz, and the Birth of Calculus

In the mid-1660s, Isaac Newton developed the method of fluxions during his annus mirabilis while isolated at Woolsthorpe due to the Great Plague, conceptualizing variables as "fluents" that change over time and their instantaneous rates of change as "fluxions." He represented fluxions using a dot notation placed above the variable, such as \dot{x} to denote the fluxion of x, treating these as limits of infinitesimal increments to compute tangents, areas, and rates without fully publishing the work at the time. Newton's ideas built on earlier geometric methods but formalized a dynamic approach to motion and curves, with a key manuscript, De Analysi per Aequationes Numero Terminorum Infinitas (1669), outlining infinite series and fluxions, though it circulated privately among British mathematicians like John Collins. Independently, Gottfried Wilhelm Leibniz began formulating his differential calculus in the 1670s while in Paris, influenced by Christiaan Huygens and studies of infinitesimals and infinite series, introducing the notation dx and dy for infinitesimal differences to represent small changes in variables. By 1675, Leibniz had sketched rules like d(x^n) = n x^{n-1} dx in private manuscripts, emphasizing a symbolic, algebraic framework for finding maxima, minima, and tangents to curves. His first public exposition appeared in 1684 with Nova Methodus pro Maximis et Minimis, itemque Tangentibus in Acta Eruditorum, where he detailed the differential \frac{dy}{dx} as the ratio of infinitesimals, along with rules for differentiating powers, products, quotients, and applications to geometry via infinite series expansions. Newton implicitly employed fluxions in his Philosophiæ Naturalis Principia Mathematica (1687) to derive laws of motion and planetary orbits, analyzing centripetal forces and curved paths through geometric limits equivalent to differentiation, though presented in synthetic style to avoid controversy over infinitesimals. 
Leibniz's framework, meanwhile, facilitated geometric problems like rectifying curves and finding tangents, with early adopters including the Bernoulli brothers—Jacob and Johann—who corresponded extensively with him from 1690 onward, refining and propagating his methods across Europe. Johann Bernoulli, in particular, solved brachistochrone problems using differentials by 1696, crediting Leibniz's notation for its clarity in handling rates. The independent inventions sparked a bitter priority dispute, ignited in 1699 when Nicolas Fatio de Duillier accused Leibniz of borrowing from Newton, and escalating publicly in 1711 when Leibniz wrote to the Royal Society questioning Newton's precedence. Newton, as Society president, appointed a biased committee including allies like John Machin, which in 1712 issued Commercium Epistolicum citing Newton's 1669 manuscript as evidence of earlier discovery, while accusing Leibniz of plagiarism despite his independent Paris work. The controversy divided mathematicians, with the Continent favoring Leibniz's published, accessible notation, while Britain clung to Newton's fluxions until the mid-18th century, stalling cross-channel collaboration. Newton's De Analysi was finally printed in 1711, and his full Method of Fluxions and Infinite Series appeared posthumously in 1736, vindicating his early work but not resolving the acrimony.

Core Techniques

Basic Differentiation Rules

The basic differentiation rules provide efficient methods for computing derivatives of elementary functions without repeatedly applying the limit definition of the derivative. These rules, developed in the foundational work of Newton and Leibniz, allow for the differentiation of constants, powers, sums, products, quotients, and standard transcendental functions like exponentials and basic trigonometric functions. They form the building blocks for more complex techniques and are applicable to polynomials and simple algebraic expressions. The constant rule states that the derivative of a constant function f(x) = c, where c is a real number, is zero: \frac{d}{dx} [c] = 0. This follows directly from the limit definition, as \lim_{h \to 0} \frac{c - c}{h} = \lim_{h \to 0} \frac{0}{h} = 0. The power rule gives the derivative of a power function f(x) = x^n, where n is a positive integer, as \frac{d}{dx} [x^n] = n x^{n-1}. For real exponents n, the rule extends similarly, though the proof for non-integer cases relies on logarithmic differentiation or series expansions. A proof for positive integers uses the binomial theorem: starting from the limit definition, f'(x) = \lim_{h \to 0} \frac{(x + h)^n - x^n}{h}, expanding (x + h)^n = \sum_{k=0}^n \binom{n}{k} x^{n-k} h^k yields terms that cancel except for the k=1 term, \binom{n}{1} x^{n-1} = n x^{n-1}, confirming the result as h \to 0. The sum and difference rules, also known as the linearity property, state that if f and g are differentiable functions, then \frac{d}{dx} [f(x) \pm g(x)] = f'(x) \pm g'(x). This is proven by applying the limit definition to the sum or difference and factoring the numerator, leading to the separate limits for f and g. Consequently, derivatives of polynomials are computed by applying the power rule to each term and using linearity; for example, the derivative of x^3 + 2x is 3x^2 + 2. The product rule for differentiable functions f and g is \frac{d}{dx} [f(x) g(x)] = f'(x) g(x) + f(x) g'(x). 
Its proof involves the limit definition, rewriting the numerator as f(x+h)g(x+h) - f(x)g(x) = f(x+h)[g(x+h) - g(x)] + g(x)[f(x+h) - f(x)], and taking the limit to separate the terms. The quotient rule for differentiable f and g with g(x) \neq 0 is \frac{d}{dx} \left[ \frac{f(x)}{g(x)} \right] = \frac{f'(x) g(x) - f(x) g'(x)}{[g(x)]^2}. The proof similarly uses the limit definition on the difference quotient, adding and subtracting f(x)g(x) in the numerator to separate the terms. For standard functions, the derivative of the exponential e^x is itself: \frac{d}{dx} [e^x] = e^x, established by the limit definition where \lim_{h \to 0} \frac{e^{x+h} - e^x}{h} = e^x \lim_{h \to 0} \frac{e^h - 1}{h} = e^x \cdot 1, since the inner limit defines the base e. The derivatives of the basic trigonometric functions are \frac{d}{dx} [\sin x] = \cos x and \frac{d}{dx} [\cos x] = -\sin x, proven using the limit definition and angle addition formulas: for sine, \frac{d}{dx} [\sin x] = \lim_{h \to 0} \frac{\sin(x+h) - \sin x}{h} = \lim_{h \to 0} \left( \frac{\sin x (\cos h - 1)}{h} + \frac{\cos x \sin h}{h} \right) = \sin x \cdot 0 + \cos x \cdot 1 = \cos x, relying on the known limits \lim_{h \to 0} \frac{\cos h - 1}{h} = 0 and \lim_{h \to 0} \frac{\sin h}{h} = 1.
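The power, product, and trigonometric rules above can be sanity-checked against a symmetric difference quotient, which approximates f'(x) to order h^2. A minimal sketch (central_diff is an illustrative helper, not from the source):

```python
import math

def central_diff(f, x, h=1e-5):
    # symmetric difference quotient (f(x+h) - f(x-h)) / (2h), O(h^2) accurate
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.3
# power rule: d/dx x^5 = 5 x^4
assert abs(central_diff(lambda t: t**5, x) - 5 * x**4) < 1e-6
# product rule: d/dx [sin x * e^x] = cos x * e^x + sin x * e^x
lhs = central_diff(lambda t: math.sin(t) * math.exp(t), x)
rhs = math.cos(x) * math.exp(x) + math.sin(x) * math.exp(x)
assert abs(lhs - rhs) < 1e-6
# exponential rule: d/dx e^x = e^x
assert abs(central_diff(math.exp, x) - math.exp(x)) < 1e-6
print("basic rules verified numerically")
```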

Advanced Rules Including Chain Rule

The chain rule provides a method for differentiating composite functions, where one function is applied to the result of another. If y = f(g(x)), then the derivative is given by \frac{dy}{dx} = f'(g(x)) \cdot g'(x), which multiplies the derivative of the outer function evaluated at the inner function by the derivative of the inner function. This rule is essential for handling nested expressions in calculus. For example, to differentiate y = \sin(x^2), let u = x^2, so y = \sin u; then \frac{dy}{dx} = \cos(u) \cdot 2x = 2x \cos(x^2). Derivatives of inverse functions can be found using the relationship between a function and its inverse. If y = f^{-1}(x), then \frac{dy}{dx} = \frac{1}{f'(f^{-1}(x))}, provided the derivative in the denominator is nonzero. This formula arises from differentiating x = f(y) implicitly and solving for \frac{dy}{dx}. For instance, the derivative of y = \arctan x is \frac{1}{1 + x^2}, since the derivative of \tan y = x yields \sec^2 y \cdot \frac{dy}{dx} = 1, and \sec^2 y = 1 + x^2. Logarithmic differentiation simplifies the process of finding derivatives for functions involving products, quotients, or powers by taking the natural logarithm of both sides and using properties of logarithms to convert multiplications into additions. For a function y = \frac{u(x) v(x)}{w(x)}, compute \ln y = \ln u + \ln v - \ln w, differentiate implicitly to get \frac{1}{y} \frac{dy}{dx} = \frac{u'}{u} + \frac{v'}{v} - \frac{w'}{w}, and solve for \frac{dy}{dx} = y \left( \frac{u'}{u} + \frac{v'}{v} - \frac{w'}{w} \right). This technique is particularly useful when direct application of product or quotient rules becomes cumbersome. The derivative of the exponential function a^x (for a > 0, a \neq 1) is a^x \ln a, derived by rewriting a^x = e^{x \ln a} and using the chain rule with the known derivative of e^u. Similarly, the derivative of \ln x (for x > 0) is \frac{1}{x}. 
To prove this using the limit definition, consider: \frac{d}{dx} \ln x = \lim_{h \to 0} \frac{\ln(x + h) - \ln x}{h} = \lim_{h \to 0} \frac{1}{h} \ln \left(1 + \frac{h}{x}\right) = \frac{1}{x} \lim_{h \to 0} \frac{\ln(1 + h/x)}{h/x}, and the inner limit equals 1 as it is the derivative of \ln u at u = 1. For e^x, the limit \lim_{h \to 0} \frac{e^h - 1}{h} = 1 confirms its derivative is itself. Higher-order derivatives are obtained by repeatedly differentiating a function; the second derivative f''(x) is the derivative of f'(x), the third is the derivative of the second, and so on, with the n-th derivative denoted f^{(n)}(x). In physics, the second derivative of position with respect to time represents acceleration. For trigonometric functions, the higher derivatives exhibit cyclic patterns: the n-th derivative of \sin x is \sin(x + n \pi / 2), and for \cos x, it is \cos(x + n \pi / 2). The general power rule for differentiating [f(x)]^n, where n is any real number, follows from the chain rule: let u = f(x), so y = u^n; then \frac{dy}{dx} = n u^{n-1} \frac{du}{dx} = n [f(x)]^{n-1} f'(x). This extends the basic power rule to variable bases.
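Both the chain rule and logarithmic differentiation can be checked numerically against a symmetric difference quotient. A minimal sketch, with an assumed example y = \frac{(x^2+1)e^x}{x+3} (the helper central_diff and the choice of u, v, w are illustrative):

```python
import math

def central_diff(f, x, h=1e-5):
    # symmetric difference quotient, O(h^2) accurate
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
# chain rule: d/dx sin(x^2) = 2x cos(x^2)
assert abs(central_diff(lambda t: math.sin(t**2), x) - 2*x*math.cos(x**2)) < 1e-6

# logarithmic differentiation for y = u v / w with u = x^2 + 1, v = e^x, w = x + 3:
# y' = y (u'/u + v'/v - w'/w)
u, up = x**2 + 1, 2*x
v, vp = math.exp(x), math.exp(x)
w, wp = x + 3, 1.0
y = u * v / w
y_prime = y * (up/u + vp/v - wp/w)
numeric = central_diff(lambda t: (t**2 + 1) * math.exp(t) / (t + 3), x)
assert abs(numeric - y_prime) < 1e-6
print("chain rule and logarithmic differentiation verified")
```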

Fundamental Theorems

Rolle's and Mean Value Theorems

Rolle's theorem, first published by the French mathematician Michel Rolle in 1691 as part of his work Démonstration d'une méthode pour résoudre les égalités de tous les degrés, states that if a real-valued function f is continuous on the closed interval [a, b] and differentiable on the open interval (a, b), with f(a) = f(b), then there exists at least one point c \in (a, b) such that f'(c) = 0. Geometrically, Rolle's theorem implies that between two points on the graph of f where the function values are equal (the same "height"), there must be at least one point where the tangent line is horizontal, corresponding to a critical point where the instantaneous rate of change is zero. For example, consider f(x) = x^2 - 1 on the interval [-1, 1]. Here, f(-1) = 0 = f(1), f is continuous on [-1, 1], and differentiable on (-1, 1) with f'(x) = 2x. Setting f'(c) = 0 yields c = 0 \in (-1, 1), satisfying the theorem. The mean value theorem extends Rolle's theorem and was formulated by Joseph-Louis Lagrange in 1797 in his seminal work Théorie des fonctions analytiques, where it appears as a consequence of Taylor expansions for analytic functions. It states that if f is continuous on [a, b] and differentiable on (a, b), then there exists at least one c \in (a, b) such that f'(c) = \frac{f(b) - f(a)}{b - a}. This equates the instantaneous rate of change at c to the average rate of change over [a, b], meaning the tangent at c is parallel to the secant line connecting (a, f(a)) and (b, f(b)). To prove the mean value theorem, define an auxiliary function g(x) = f(x) - f(a) - \frac{f(b) - f(a)}{b - a}(x - a). Then g(a) = 0 = g(b), and g is continuous on [a, b] and differentiable on (a, b). By Rolle's theorem, there exists c \in (a, b) with g'(c) = 0, so f'(c) - \frac{f(b) - f(a)}{b - a} = 0, yielding the result. A key consequence of the mean value theorem is that if f'(x) = 0 for all x in an interval (a, b), then f is constant on [a, b]; to see this, for any x_1, x_2 \in [a, b] with x_1 < x_2, apply the theorem on [x_1, x_2] to get f(x_2) - f(x_1) = f'(c)(x_2 - x_1) = 0. 
Similarly, if f'(x) \geq 0 for all x \in (a, b), then f is non-decreasing (increasing if f'(x) > 0); the proof follows by showing f(x_2) - f(x_1) \geq 0 for x_1 < x_2. If f'(x) \leq 0, then f is non-increasing. For an illustration of the mean value theorem, take f(x) = x^2 on [0, 1]. The average rate of change is \frac{f(1) - f(0)}{1 - 0} = 1, and f'(x) = 2x, so at c = 0.5, f'(0.5) = 1, matching the secant slope. The hypotheses of Rolle's theorem are necessary; without continuity on [a, b], the conclusion may fail. For a counterexample, consider f(x) = x for 0 \leq x < 1 and f(1) = 0 on [0, 1]. Then f(0) = 0 = f(1), and f is differentiable on (0, 1) with f'(x) = 1 \neq 0, but f is discontinuous at x = 1 since \lim_{x \to 1^-} f(x) = 1 \neq 0 = f(1). Without differentiability on (a, b), consider f(x) = |x| on [-1, 1], where f(-1) = 1 = f(1), f is continuous, but not differentiable at x = 0, and f'(x) = -1 for x < 0, f'(x) = 1 for x > 0, so no c with f'(c) = 0.
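A point c guaranteed by the mean value theorem can be located numerically when f' is known: solve f'(c) = \frac{f(b) - f(a)}{b - a} by bisection. A minimal sketch (mvt_point is an illustrative helper; it assumes f'(x) minus the secant slope changes sign on the interval, which holds for monotone f'):

```python
def mvt_point(f, df, a, b, tol=1e-10):
    """Find c in (a, b) with df(c) equal to the secant slope of f over [a, b],
    by bisection on g(x) = df(x) - slope (assumes g changes sign on [a, b])."""
    slope = (f(b) - f(a)) / (b - a)
    g = lambda x: df(x) - slope
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# f(x) = x^2 on [0, 1]: the secant slope is 1 and f'(c) = 2c = 1 at c = 0.5.
c = mvt_point(lambda x: x**2, lambda x: 2 * x, 0.0, 1.0)
print(round(c, 6))  # prints 0.5
```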

Taylor's Theorem and Series

Taylor's theorem provides a method for approximating a function f(x) near a point a using its derivatives at that point, expressing the function as a polynomial plus a remainder term. Specifically, if f and its first n+1 derivatives are continuous on an open interval containing a, then for any x in that interval, f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n + R_n(x), where the Lagrange form of the remainder is R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - a)^{n+1} for some \xi between a and x. This theorem, originally introduced by Brook Taylor in his 1715 work Methodus Incrementorum Directa et Inversa, expressed functions as infinite series expansions using finite differences, laying the foundation for polynomial approximations of arbitrary degree. The inclusion of the remainder term to quantify approximation error was later formalized by Joseph-Louis Lagrange in 1797. Augustin-Louis Cauchy provided an alternative form of the remainder in the early 19th century, enhancing the theorem's rigor for convergence analysis. A special case of Taylor's theorem occurs when the expansion point is a = 0, known as the Maclaurin series. For example, the exponential function has the Maclaurin series e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}, which converges for all real x. Similarly, the sine function is given by \sin x = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k+1}}{(2k+1)!}, converging everywhere. These series represent infinite Taylor expansions for analytic functions, where the function equals its Taylor series within the radius of convergence. The radius of convergence R for a power series \sum c_k (x - a)^k determines the interval (a - R, a + R) where the series converges absolutely; for the series of entire functions like e^x or \sin x, R = \infty. Outside this radius, the series may diverge, as seen in the geometric series for \frac{1}{1-x} with R = 1. To illustrate practical use, consider approximating e^{0.1} with the second-degree polynomial at a = 0: e^{0.1} \approx 1 + 0.1 + \frac{(0.1)^2}{2} = 1.105. 
The actual value is approximately 1.1051709, so the error is about 0.0001709. Using the Lagrange remainder with n=2, since |f'''(\xi)| = e^{\xi} \leq e^{0.1} \approx 1.105 for \xi \in (0, 0.1), |R_2(0.1)| \leq \frac{1.105}{6} (0.1)^3 \approx 0.000184, bounding the error effectively.
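The e^{0.1} approximation and its Lagrange remainder bound can be reproduced directly. A minimal sketch (maclaurin_exp is an illustrative helper) computes the degree-n partial sum and checks that the true error sits below the bound \frac{e^{0.1}}{(n+1)!}(0.1)^{n+1}:

```python
import math

def maclaurin_exp(x, n):
    # partial sum of e^x = sum_{k=0}^{n} x^k / k!
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x, n = 0.1, 2
approx = maclaurin_exp(x, n)     # 1 + 0.1 + 0.005 = 1.105
actual = math.exp(x)
error = abs(actual - approx)
# Lagrange bound: |R_n| <= e^xi / (n+1)! * x^(n+1) with e^xi <= e^x for xi in (0, x)
bound = math.exp(x) / math.factorial(n + 1) * x**(n + 1)
print(approx, error, bound)
assert error <= bound  # the bound really does contain the error
```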

Inverse and Implicit Function Theorems

The inverse function theorem provides conditions under which a differentiable function has a local inverse that is also differentiable. Specifically, if f is a function from an open interval to the real numbers that is continuously differentiable near a point a with f'(a) \neq 0, then there exists a neighborhood around f(a) such that f is bijective onto that neighborhood, and the inverse f^{-1} is differentiable there with (f^{-1})'(b) = \frac{1}{f'(a)}, where b = f(a). This theorem ensures local invertibility when the derivative is nonzero, reflecting the function's strict monotonicity in that region. A proof sketch for the differentiability of the inverse relies on the limit definition and the mean value theorem. Let g = f^{-1} and b = f(a). Then g'(b) = \lim_{y \to b} \frac{g(y) - g(b)}{y - b} = \lim_{x \to a} \frac{x - a}{f(x) - f(a)}, where x = g(y). By the mean value theorem, f(x) - f(a) = f'(c)(x - a) for some c between a and x. Thus, the limit simplifies to \frac{1}{f'(c)}, and as x \to a, c \to a, yielding \frac{1}{f'(a)}. Continuity of f' ensures the limit exists. The implicit function theorem complements this by addressing relations defined implicitly, such as F(x, y) = 0, where y may not be explicitly solved for in terms of x. If F is continuously differentiable near (x_0, y_0) with F(x_0, y_0) = 0 and \frac{\partial F}{\partial y}(x_0, y_0) \neq 0, then there exists a neighborhood of x_0 in which y can be expressed as a unique, continuously differentiable function y = g(x) satisfying the equation, with g'(x_0) = -\frac{\frac{\partial F}{\partial x}(x_0, y_0)}{\frac{\partial F}{\partial y}(x_0, y_0)}. This guarantees the local solvability of the relation for y as a function of x. The derivative formula arises from a proof sketch using the chain rule: differentiate both sides of F(x, g(x)) = 0 with respect to x, yielding \frac{\partial F}{\partial x} + \frac{\partial F}{\partial y} g'(x) = 0, so g'(x) = -\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}} provided the denominator is nonzero. The existence of g follows from the inverse function theorem applied to the map (x, y) \mapsto (x, F(x, y)). 
These theorems originated in the foundational work of calculus, with roots traceable to Gottfried Wilhelm Leibniz's early manipulations of differentials in the late 17th century, where he considered relations like implicit equations in solving problems of tangents. The modern formulation of the implicit function theorem in multiple variables was advanced by Ulisse Dini in his 1879 work Lezioni di analisi infinitesimale, generalizing earlier real-variable ideas. A classic application of the implicit function theorem is to the unit circle x^2 + y^2 = 1, or F(x, y) = x^2 + y^2 - 1 = 0. At points where y \neq 0, \frac{\partial F}{\partial y} = 2y \neq 0, so y is locally a function of x with \frac{dy}{dx} = -\frac{2x}{2y} = -\frac{x}{y}. This describes the slope of the tangent line to the circle without solving explicitly for y = \pm \sqrt{1 - x^2}. The theorems also enable differentiation of inverse trigonometric functions, which are defined implicitly. For y = \arcsin x, we have \sin y = x with -\frac{\pi}{2} \leq y \leq \frac{\pi}{2}. Differentiating implicitly gives \cos y \cdot y' = 1, so y' = \frac{1}{\cos y} = \frac{1}{\sqrt{1 - \sin^2 y}} = \frac{1}{\sqrt{1 - x^2}} for |x| < 1, where \cos y > 0. Similar derivations apply to \arccos x, \arctan x, and others, relying on the nonzero derivative condition for invertibility. Both theorems require continuity of the relevant derivatives and the non-vanishing condition to ensure local uniqueness and differentiability, preventing singularities or multiple branches that could violate invertibility.
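The unit-circle formula \frac{dy}{dx} = -\frac{x}{y} can be confirmed numerically against the explicit branch y = \sqrt{1 - x^2}. A minimal sketch (implicit_slope is an illustrative helper; the point (0.6, 0.8) is an assumed example on the circle):

```python
import math

def implicit_slope(x, y):
    # dy/dx = -F_x / F_y for F(x, y) = x^2 + y^2 - 1, valid where y != 0
    return -x / y

x0, y0 = 0.6, 0.8          # a point on the unit circle (0.36 + 0.64 = 1)
h = 1e-6
explicit = lambda x: math.sqrt(1 - x**2)   # upper branch, y > 0
numeric = (explicit(x0 + h) - explicit(x0 - h)) / (2 * h)
print(implicit_slope(x0, y0), numeric)  # both near -0.75
assert abs(implicit_slope(x0, y0) - numeric) < 1e-6
```

The implicit formula gives the same slope as differentiating the explicit branch, but without ever solving for y.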

Applications

Optimization and Extrema

Optimization in differential calculus involves using derivatives to identify the maximum and minimum values of functions, known as extrema. These techniques are essential for solving problems where the goal is to extremize a quantity, such as finding the highest or lowest point on a graph. By analyzing the first and second derivatives, one can locate points where the function's slope is zero or undefined, and determine the nature of those points. Critical points of a function f(x) occur where the first derivative f'(x) = 0 or where f'(x) is undefined, provided f(c) exists at that point c. These points are candidates for local maxima or minima because, by Fermat's theorem, if a differentiable function has a local extremum in an open interval, its derivative must be zero at that interior point. The first derivative test classifies critical points by examining the sign changes of f'(x) around them. If f'(x) changes from positive to negative at a critical point c, then f(c) is a local maximum. Conversely, if f'(x) changes from negative to positive at c, then f(c) is a local minimum. If there is no sign change, the test is inconclusive. The second derivative test provides another method to determine the nature of critical points where f'(c) = 0. If f''(c) > 0, then f(c) is a local minimum; if f''(c) < 0, then f(c) is a local maximum. If f''(c) = 0, the test is inconclusive, and further analysis, such as the first derivative test, is required. For absolute extrema on a closed interval [a, b], the extreme value theorem states that if f is continuous on [a, b], it attains both an absolute maximum and minimum. To find them, evaluate f at the endpoints a and b, and at all critical points within (a, b), then compare the values. A representative example is maximizing or minimizing the quadratic function f(x) = x^2 - 4x + 3. The vertex occurs at x = -\frac{b}{2a} = 2, where f(2) = -1, which is the global minimum since the parabola opens upward. For constrained optimization in single-variable settings, substitution reduces the problem to an unconstrained one. 
Consider maximizing f(x, y) = xy subject to x + y = 10. Substitute y = 10 - x to get g(x) = x(10 - x) = 10x - x^2. Then g'(x) = 10 - 2x = 0 implies x = 5, so y = 5, and f(5, 5) = 25, the maximum. Concavity describes the curvature of the graph: the function is concave up where f''(x) > 0 and concave down where f''(x) < 0. Inflection points occur where f''(x) changes sign, indicating a shift in concavity, provided f''(c) = 0 or is undefined at c. Consider f(x) = x^3 - 3x. The critical points are found from f'(x) = 3x^2 - 3 = 0, so x = \pm 1. The second derivative f''(x) = 6x gives f''(-1) = -6 < 0, a local maximum at x = -1, and f''(1) = 6 > 0, a local minimum at x = 1. Additionally, f''(x) = 0 at x = 0, and since it changes from negative to positive, x = 0 is an inflection point.
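The second derivative test for f(x) = x^3 - 3x can be packaged as a small routine. A minimal sketch (classify_critical is an illustrative helper name):

```python
def classify_critical(fpp_at_c):
    """Second derivative test at a critical point c where f'(c) = 0."""
    if fpp_at_c > 0:
        return "local min"
    if fpp_at_c < 0:
        return "local max"
    return "inconclusive"   # f''(c) = 0: fall back to the first derivative test

# f(x) = x^3 - 3x: f'(x) = 3x^2 - 3 vanishes at x = -1 and x = 1; f''(x) = 6x.
fpp = lambda x: 6 * x
print(classify_critical(fpp(-1)))  # prints "local max"
print(classify_critical(fpp(1)))   # prints "local min"
print(classify_critical(fpp(0)))   # prints "inconclusive" (x = 0 is no critical point;
                                   # it is the inflection point, where f'' changes sign)
```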

Motion, Rates, and Physics

In differential calculus, the study of motion in physics relies on derivatives to quantify rates of change, particularly in kinematics, where position as a function of time is differentiated to yield velocity and acceleration. The position function s(t) describes an object's location at time t, with velocity defined as the first derivative v(t) = s'(t), representing the instantaneous rate of change of position, and acceleration as the second derivative a(t) = v'(t) = s''(t), capturing the rate of change of velocity.

A classic example is free fall under constant gravitational acceleration, where the position function is s(t) = -\frac{1}{2} g t^2 + v_0 t + s_0, with g the acceleration due to gravity (approximately 9.8 m/s² or 32 ft/s²), v_0 the initial velocity, and s_0 the initial position. Differentiating gives v(t) = -g t + v_0 and a(t) = -g, constant and directed downward, illustrating how derivatives model uniformly accelerated motion in one dimension.

Related rates problems extend this to scenarios where multiple quantities change with time, requiring implicit differentiation with respect to time to relate their rates. For instance, in the sliding-ladder problem, a ladder of fixed length L leans against a wall, with base distance x(t) from the wall and top height y(t) on the wall satisfying x^2 + y^2 = L^2; differentiating yields 2x \frac{dx}{dt} + 2y \frac{dy}{dt} = 0, allowing computation of the rate \frac{dy}{dt} from a known \frac{dx}{dt}. Similarly, for a spherical balloon with volume V = \frac{4}{3} \pi r^3, differentiating gives \frac{dV}{dt} = 4 \pi r^2 \frac{dr}{dt}, relating the inflation rate to the radial expansion rate.

Newton's second law, F = m a, connects these kinematic concepts to forces, where the net force F equals the mass m times the acceleration a(t) = s''(t), enabling the modeling of dynamic systems. In simple harmonic motion, such as a mass on a spring, the position is s(t) = A \sin(\omega t), with amplitude A and angular frequency \omega; the velocity is v(t) = A \omega \cos(\omega t), and the acceleration a(t) = -A \omega^2 \sin(\omega t) = -\omega^2 s(t), a restoring acceleration proportional to displacement, consistent with Hooke's law.
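The free-fall and ladder examples above can be sketched numerically. The specific numbers (a 100 m drop from rest, a 5 m ladder with its base 3 m from the wall sliding out at 1 m/s) are illustrative assumptions, not from the text.

```python
import math

# Free fall s(t) = -g t^2/2 + v0 t + s0 and its derivative v(t) = -g t + v0,
# then the sliding-ladder related-rates computation. All numbers are assumed.

g, v0, s0 = 9.8, 0.0, 100.0        # drop from 100 m at rest (assumed values)

def s(t): return -0.5 * g * t**2 + v0 * t + s0
def v(t): return -g * t + v0       # s'(t)

t = 2.0
# the analytic velocity matches a central-difference estimate of ds/dt
assert abs((s(t + 1e-6) - s(t - 1e-6)) / 2e-6 - v(t)) < 1e-3

# Ladder: L = 5, base at x = 3 (so y = 4), base slides out at dx/dt = 1 m/s.
# From 2x dx/dt + 2y dy/dt = 0:  dy/dt = -(x/y) dx/dt
L, x, dxdt = 5.0, 3.0, 1.0
y = math.sqrt(L**2 - x**2)
dydt = -(x / y) * dxdt
print(dydt)                        # -0.75 m/s: the top slides down
```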
Projectile motion combines constant horizontal velocity with constant vertical acceleration, with position components x(t) = v_{0x} t and y(t) = v_{0y} t - \frac{1}{2} g t^2, where the initial velocity components are v_{0x} = v_0 \cos \theta and v_{0y} = v_0 \sin \theta; differentiating gives v_x(t) = v_{0x} (constant) and v_y(t) = v_{0y} - g t, with a_x = 0 and a_y = -g, neglecting air resistance. Derivatives inherently carry units of rates, ensuring dimensional consistency; for example, if position s is in meters and time t in seconds, then v = \frac{ds}{dt} has units of m/s and a = \frac{dv}{dt} units of m/s², aligning with physical measurements in Newton's laws and kinematic equations.
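A short numeric sketch of the projectile components follows; the launch speed (20 m/s) and angle (45°) are assumed values chosen for illustration.

```python
import math

# Projectile components y(t) = v0y t - g t^2/2 and vy(t) = v0y - g t,
# neglecting air resistance. Launch speed and angle are illustrative.

g = 9.8
v0, theta = 20.0, math.radians(45)
v0x, v0y = v0 * math.cos(theta), v0 * math.sin(theta)

def y(t):  return v0y * t - 0.5 * g * t**2
def vy(t): return v0y - g * t      # dy/dt

t_apex = v0y / g                   # vy = 0 at the top of the arc
assert abs(vy(t_apex)) < 1e-9

t_land = 2 * t_apex                # symmetric flight time on level ground
x_range = v0x * t_land             # equals v0^2 sin(2*theta) / g
print(round(x_range, 2))           # roughly 40.82 m for these values
```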

Solving Differential Equations

Differential calculus provides the foundational tools for solving ordinary differential equations (ODEs) by relating derivatives to rates of change and employing integration as the inverse operation. Basic first-order ODEs, which involve the first derivative of an unknown function, often model phenomena where the rate of change depends on the function itself or on external factors. Solving these equations typically transforms the differential form into an integrable expression, yielding explicit or implicit solutions that describe the function's behavior.

Separable Equations

A first-order ODE is separable if it can be written as \frac{dy}{dx} = f(x) g(y), where the right-hand side factors into a product of functions of x and y separately. To solve, rearrange to \frac{dy}{g(y)} = f(x) \, dx and integrate both sides: \int \frac{dy}{g(y)} = \int f(x) \, dx + C, assuming g(y) \neq 0. This yields an implicit solution, which may be solved explicitly for y if possible. For instance, the equation \frac{dy}{dx} = y separates to \frac{dy}{y} = dx, integrating to \ln |y| = x + C, so y = A e^x where A = \pm e^C. This solution arises frequently in growth models.
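The separable example dy/dx = y can be cross-checked numerically. This sketch uses Euler's method (an assumed numerical scheme, not part of the text's derivation) and compares against the closed form y = e^x with A = 1; the step count is an illustrative choice.

```python
import math

# Euler's method for dy/dx = y, y(0) = 1, compared against the
# separable-equation solution y = e^x at x = 1.

def euler(f, x0, y0, x1, n):
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)           # follow the slope field one small step
        x += h
    return y

approx = euler(lambda x, y: y, 0.0, 1.0, 1.0, 100_000)
exact = math.e                     # y(1) = e from y = A e^x with A = 1
assert abs(approx - exact) < 1e-3
print(approx)                      # close to 2.71828...
```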

Linear First-Order Equations

Linear first-order ODEs take the form \frac{dy}{dx} + P(x) y = Q(x), where P(x) and Q(x) are functions of x. The method of integrating factors resolves this by multiplying through by \mu(x) = e^{\int P(x) \, dx}, transforming the left side into the derivative of a product: \frac{d}{dx} [y \mu(x)] = Q(x) \mu(x). Integrating gives y \mu(x) = \int Q(x) \mu(x) \, dx + C, and solving for y produces the general solution y = \frac{1}{\mu(x)} \left( \int Q(x) \mu(x) \, dx + C \right). This technique enables solutions for non-homogeneous linear equations.
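As a worked instance (not from the text): for dy/dx + y = x we have P(x) = 1, so \mu(x) = e^x, and integrating by parts gives the closed form y = x - 1 + C e^{-x}. The sketch below verifies that this family satisfies the equation; the choice C = 2 (matching an assumed initial condition y(0) = 1) and the sample points are illustrative.

```python
import math

# Verify the integrating-factor solution of dy/dx + y = x, namely
# y = x - 1 + C e^{-x}, by checking y' + y = x at sample points.

def y(x, C=2.0):                   # C = 2 matches the assumed IC y(0) = 1
    return x - 1 + C * math.exp(-x)

def dydx(x, h=1e-6):
    # central-difference approximation of y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(dydx(x) + y(x) - x) < 1e-8   # residual of y' + y - x
print("y' + y = x holds at the sample points")
```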

Initial Value Problems

Initial value problems (IVPs) specify a solution satisfying y(x_0) = y_0 alongside the ODE. Under assumptions of continuity of f(x, y) and Lipschitz continuity in y (which holds, for instance, when \frac{\partial f}{\partial y} is bounded), the Picard–Lindelöf theorem guarantees a unique local solution to the IVP \frac{dy}{dx} = f(x, y), y(x_0) = y_0. This uniqueness ensures that applying initial conditions to the general solution yields a single particular solution, critical for predictive modeling.
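The proof of the Picard–Lindelöf theorem rests on the iteration y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t)) \, dt, which can be imitated numerically. This sketch runs the iteration for y' = y, y(0) = 1 on a grid, using a cumulative trapezoid rule for the integral; the grid size and iteration count are illustrative assumptions.

```python
import math

# Picard iteration for the IVP y' = y, y(0) = 1 on [0, 1]; the iterates
# converge toward the unique solution e^x guaranteed by Picard-Lindelof.

N = 1000
xs = [i / N for i in range(N + 1)]        # grid on [0, 1]
y = [1.0] * (N + 1)                       # starting iterate y_0(x) = 1

for _ in range(20):                       # 20 Picard iterations
    z = [1.0]                             # z(0) = y0 = 1
    for i in range(N):                    # cumulative trapezoid of f = y
        h = xs[i + 1] - xs[i]
        z.append(z[-1] + 0.5 * h * (y[i] + y[i + 1]))
    y = z

assert abs(y[-1] - math.e) < 1e-4         # y(1) is close to e
print(y[-1])
```

Each iteration effectively adds one more term of the Taylor series of e^x, mirroring the contraction argument behind the theorem.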

Examples

Exponential population growth exemplifies separable equations: \frac{dP}{dt} = k P, where P(t) is the population and k > 0 the growth rate, separates to \int \frac{dP}{P} = \int k \, dt, yielding P(t) = P_0 e^{kt} with initial population P_0. This model, assuming unlimited resources, predicts unbounded increase.

Mixing problems, often linear, describe solute concentration in a tank. Consider a 100-liter tank initially holding 50 kg of salt, with pure water entering at 5 L/min and the well-stirred mixture exiting at the same rate; let A(t) be the amount of salt. The rate of change is \frac{dA}{dt} = 0 - \frac{5}{100} A = -\frac{A}{20}, a separable equation solving to A(t) = 50 e^{-t/20}, approaching zero as t \to \infty. Such setups model dilution processes.

In physics, derivatives represent rates like v = \frac{dx}{dt}, forming the equation \frac{dx}{dt} = v(t), solvable by integration: x(t) = \int v(t) \, dt + C, the antiderivative inverting the derivative to recover position from velocity. This links to acceleration, where \frac{dv}{dt} yields higher-order equations. Higher-order ODEs, involving second or higher derivatives, can sometimes be reduced to first order via substitution, such as letting w = \frac{dy}{dx} for equations missing y, yielding \frac{dw}{dx} = f(x, w). This simplifies solving without addressing full methods for constant-coefficient cases.
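The mixing-problem solution above can be cross-checked numerically. This is a minimal sketch comparing the closed form A(t) = 50 e^{-t/20} against a forward-Euler integration of dA/dt = -A/20; the step size and the 20-minute horizon are illustrative choices.

```python
import math

# Cross-check the mixing-problem solution A(t) = 50 e^{-t/20}
# against a forward-Euler integration of dA/dt = -A/20.

def A_exact(t):
    return 50 * math.exp(-t / 20)

A, t, h = 50.0, 0.0, 0.001
while t < 20.0 - 1e-9:                # integrate out to t = 20 minutes
    A += h * (-A / 20)                # dA/dt = -(5/100) A
    t += h

assert abs(A - A_exact(20.0)) < 0.01  # both give about 18.39 kg of salt
print(A)
```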
