Partial fraction decomposition
Partial fraction decomposition is a fundamental technique in algebra and calculus that expresses a rational function—a ratio of two polynomials—as a sum of simpler rational functions whose denominators are the factors of the original denominator.[1] This decomposition assumes the degree of the numerator is less than the degree of the denominator; if not, polynomial long division is first applied to reduce it.[1] The primary purpose of partial fraction decomposition is to simplify operations on rational functions, particularly integration in calculus, where the decomposed form allows the use of basic antiderivatives like logarithms for linear factors and arctangents for irreducible quadratic factors.[2] It also aids in solving certain differential equations and systems involving rational expressions by converting complex terms into manageable components.[3] The method applies over the real or complex numbers, with adjustments for repeated or irreducible factors to ensure completeness.[2]
The technique traces its origins to the early 18th century, when it was independently discovered in 1702 by the mathematicians Johann Bernoulli and Gottfried Wilhelm Leibniz during their work on infinite series and integration problems.[4] Bernoulli applied it to expressions like \frac{a^2}{a^2 - x^2}, while Leibniz explored its use in resolving rational functions more generally, building on earlier algebraic ideas from the Renaissance but formalizing it within the emerging framework of calculus.[4]
In practice, the decomposition begins by factoring the denominator into distinct linear factors (ax + b), repeated linear factors, or irreducible quadratics (ax^2 + bx + c).[1] The partial fraction form is then written as a sum, such as \frac{A}{ax + b} + \frac{Bx + C}{ax^2 + bx + c} for non-repeated cases, with constants solved by clearing denominators and equating coefficients or substituting values.[2]
Fundamentals
Definition and Motivation
Partial fraction decomposition is an algebraic technique used to express a rational function, defined as the quotient \frac{P(x)}{Q(x)} of two polynomials where the degree of the numerator P(x) is less than the degree of the denominator Q(x), as a sum of simpler fractions whose denominators consist of powers of the irreducible factors of Q(x).[2] This process assumes the rational function is proper; if the degree of P(x) exceeds or equals that of Q(x), polynomial long division must first be performed to separate out the polynomial quotient, leaving a proper remainder fraction for decomposition.[2] The primary motivation for partial fraction decomposition lies in its ability to simplify complex rational expressions for further analysis. In calculus, it facilitates the integration of rational functions by reducing them to sums of terms with elementary antiderivatives, such as logarithms or arctangents, which are otherwise difficult to compute directly.[2] Additionally, in discrete mathematics, the method aids in solving linear recurrence relations through generating functions, where decomposing the rational generating function yields closed-form solutions for sequence terms via partial fractions.[5] It also streamlines the evaluation of rational expressions at specific points by isolating contributions from individual factors, avoiding indeterminate forms in limits or series expansions.[6] Historically, partial fraction decomposition originated in the early 18th century, with independent discoveries by Johann Bernoulli and Gottfried Wilhelm Leibniz around 1702, who applied it to resolve fractions in differential equations and integration problems.[4] Leonhard Euler advanced the technique significantly in the mid-18th century, introducing it systematically in his 1748 treatise Introductio in analysin infinitorum and further developing methods for coefficient determination using differentials in his 1755 Institutiones calculi differentialis, primarily to support 
integration and the resolution of algebraic fractions.[7]
Improper vs. Proper Fractions
A rational function \frac{P(x)}{Q(x)}, where P(x) and Q(x) are polynomials, is classified as proper if the degree of the numerator is less than the degree of the denominator, i.e., \deg(P) < \deg(Q). Conversely, it is improper if \deg(P) \geq \deg(Q).[8][9] For improper rational functions, partial fraction decomposition cannot be applied directly, as the method assumes a proper fraction so that every term in the expansion has a numerator of lower degree than its denominator. The standard procedure involves first performing polynomial long division to separate the rational function into a polynomial quotient and a proper remainder fraction. Specifically, by the division algorithm for polynomials over the reals (or any field), there exist unique polynomials S(x) and R(x) such that P(x) = S(x) Q(x) + R(x) with \deg(R) < \deg(Q), yielding \frac{P(x)}{Q(x)} = S(x) + \frac{R(x)}{Q(x)}.[8][9][10] Partial fraction decomposition is then applied solely to the proper rational function \frac{R(x)}{Q(x)}, while the polynomial S(x) remains as is. This preprocessing step is crucial because it guarantees that the remainder term \frac{R(x)}{Q(x)} satisfies the degree condition necessary for the partial fraction expansion, facilitating simplification, integration, or other manipulations. For instance, when the denominator Q(x) is linear, say Q(x) = x - a, the division yields S(x) as the quotient and a constant remainder, explicitly given by \frac{P(x)}{x - a} = S(x) + \frac{P(a)}{x - a}, where the remainder P(a) follows from evaluating P at x = a.[8][9][10]
Denominator Factorization
Partial fraction decomposition requires first factoring the denominator polynomial into its irreducible factors over the field of interest, typically the real or complex numbers, as this determines the form of the partial fractions. Over the complex numbers, the Fundamental Theorem of Algebra guarantees that every non-constant polynomial factors completely into linear factors, since every such polynomial has at least one complex root, and the process can be repeated on the resulting quotient. Over the real numbers, the situation is more nuanced: every polynomial factors into a product of linear factors (corresponding to real roots) and irreducible quadratic factors (corresponding to pairs of complex conjugate roots), as quadratics with negative discriminants cannot be factored further into real linears.[11][12] To factor polynomials with integer coefficients, one common method is the Rational Root Theorem, which identifies possible rational roots as fractions \frac{p}{q}, where p divides the constant term and q divides the leading coefficient; testing these candidates allows identification of linear factors via synthetic division or direct evaluation. For more general factorization, techniques such as grouping terms, completing the square, or substitution can be applied, especially for polynomials of higher degree or non-integer coefficients, though these may require trial and error or numerical approximation for real roots.[13] Irreducible factors over the reals are thus linear polynomials of the form x - a (where a is real) or quadratic polynomials x^2 + b x + c with discriminant b^2 - 4c < 0, ensuring no real roots and thus no further real factorization. For instance, consider the polynomial x^3 - 1; applying the difference of cubes formula or testing rational roots reveals it factors as (x - 1)(x^2 + x + 1), where x - 1 is linear and x^2 + x + 1 is an irreducible quadratic over the reals (discriminant 1 - 4 = -3 < 0).[14][15]
General Form
The partial fraction decomposition expresses a rational function \frac{P(x)}{Q(x)}, where the degree of P(x) is less than the degree of Q(x), as a sum of simpler fractions whose denominators are the factors of Q(x). This decomposition relies on the complete factorization of the denominator Q(x) into linear and quadratic factors over the real numbers.[16][17] When Q(x) factors into distinct linear terms, such as Q(x) = (x - r_1)(x - r_2) \cdots (x - r_n) with all r_i distinct, the decomposition takes the form \frac{P(x)}{Q(x)} = \sum_{i=1}^n \frac{A_i}{x - r_i}, where each A_i is a constant coefficient.[16][18] For repeated linear factors, suppose a factor (x - r_i)^k appears with multiplicity k. The corresponding terms in the decomposition are \sum_{j=1}^k \frac{A_{ij}}{(x - r_i)^j}, yielding a full expansion that includes such sums for each repeated root.[17][16] If Q(x) includes irreducible quadratic factors over the reals, such as (x^2 + p_j x + q_j)^m with discriminant p_j^2 - 4q_j < 0 and multiplicity m, the terms are \sum_{\ell=1}^m \frac{B_{j\ell} x + C_{j\ell}}{(x^2 + p_j x + q_j)^\ell}, where each numerator is a linear polynomial in x.[18][16] The complete general form combines all these partial fractions, so that \frac{P(x)}{Q(x)} = \sum_i \frac{A_i}{x - r_i} + \sum_i \sum_{j=1}^{k_i} \frac{A_{ij}}{(x - r_i)^j} + \sum_j \sum_{\ell=1}^{m_j} \frac{B_{j\ell} x + C_{j\ell}}{(x^2 + p_j x + q_j)^\ell}, equaling the original rational function identically.[17][16]
Theoretical Foundation
Statement over Algebraically Closed Fields
In an algebraically closed field F, such as the complex numbers \mathbb{C}, every proper rational function—that is, a quotient P(z)/Q(z) where P, Q \in F[z], Q \neq 0, and \deg P < \deg Q—admits a partial fraction decomposition into a sum of terms with linear denominators raised to powers corresponding to the multiplicities of the roots of Q.[19] Specifically, since F is algebraically closed, Q(z) factors completely as Q(z) = c \prod_r (z - r)^{m_r}, where c \in F^\times, the r are the distinct roots in F, and each m_r \geq 1 is the multiplicity. The decomposition takes the form \frac{P(z)}{Q(z)} = \sum_r \sum_{k=1}^{m_r} \frac{A_{r,k}}{(z - r)^k}, where the coefficients A_{r,k} \in F exist and are uniquely determined.[20] The existence of this decomposition follows from the structure of the rational function field F(z) and the unique factorization of polynomials in F[z]. For improper rational functions (where \deg P \geq \deg Q), polynomial long division first yields a polynomial quotient plus a proper remainder, reducing to the proper case. For the proper case, the pairwise coprimality of the factors (z - r)^{m_r} for distinct roots r allows application of the Chinese Remainder Theorem: the quotient ring F[z]/(Q(z)) decomposes as a product \prod_r F[z]/((z - r)^{m_r}), enabling the rational function to be expressed componentwise in each local factor, which corresponds to the partial fractions with powers up to m_r.[20][19] This theorem holds specifically over algebraically closed fields, where all irreducible polynomials are linear, ensuring denominators are powers of distinct linear terms.
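As a small numerical illustration (the example function and helper names are mine, not from the text), over \mathbb{C} the proper rational function 1/(z^2 + 1) splits into terms with linear denominators at the conjugate poles \pm i, with coefficients given by the simple-pole formula A_r = P(r)/Q'(r):

```python
# Check that 1/(z^2 + 1) = A/(z - i) + B/(z + i) over C, where
# A = P(i)/Q'(i) = 1/(2i) and B = 1/(-2i), using Python's complex type.

def original(z: complex) -> complex:
    return 1 / (z * z + 1)

def decomposed(z: complex) -> complex:
    i = 1j
    A = 1 / (2 * i)    # coefficient at the pole z = i
    B = 1 / (-2 * i)   # coefficient at the pole z = -i
    return A / (z - i) + B / (z + i)

# The two expressions agree at arbitrary sample points away from the poles.
for z in (0.5, -2.0, 1 + 1j, 3 - 0.5j):
    assert abs(decomposed(z) - original(z)) < 1e-12
```

Because the field is algebraically closed, no quadratic denominators are needed; the two linear terms recombine to the original function identically.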
In contrast, over non-algebraically closed fields, such as the real numbers \mathbb{R}, the denominator may include irreducible factors of degree greater than one, leading to partial fraction terms with numerators of matching degree over those factors.[19] The result underscores the role of algebraic closure in simplifying the structure of rational functions, facilitating applications in integration and residue theory over \mathbb{C}.[21]
Uniqueness Theorem
The uniqueness theorem asserts that, over an algebraically closed field K, the partial fraction decomposition of a proper rational function \frac{P(x)}{Q(x)}, where \deg P < \deg Q and Q(x) = \prod_{i=1}^n (x - a_i)^{m_i} with distinct a_i \in K and positive integers m_i, is unique in the form \frac{P(x)}{Q(x)} = \sum_{i=1}^n \sum_{j=1}^{m_i} \frac{A_{ij}}{(x - a_i)^j}, where the coefficients A_{ij} \in K are uniquely determined.[22]
To prove this, suppose there are two such decompositions with coefficients A_{ij} and B_{ij}. Their difference yields \sum_{i=1}^n \sum_{j=1}^{m_i} \frac{C_{ij}}{(x - a_i)^j} = 0, where C_{ij} = A_{ij} - B_{ij}. Multiplying through by Q(x) produces the polynomial equation \sum_{i=1}^n \sum_{j=1}^{m_i} C_{ij} \cdot \frac{Q(x)}{(x - a_i)^j} = 0. Each term \frac{Q(x)}{(x - a_i)^j} = (x - a_i)^{m_i - j} \prod_{k \neq i} (x - a_k)^{m_k} is a polynomial of degree less than \deg Q = \sum m_i, and the set of all such polynomials (over i, j) forms a basis for the vector space of polynomials over K of degree less than \deg Q, which has dimension \deg Q. Since this set is linearly independent and the linear combination equals the zero polynomial, all C_{ij} = 0, implying A_{ij} = B_{ij} for all i, j.[23]
This uniqueness implies that any valid method for computing the decomposition—such as solving the resulting system of linear equations from coefficient equating—will produce the same coefficients, independent of the approach chosen.[22] In the context of linear algebra over K, the partial fraction decomposition represents an expansion with respect to the basis \left\{ \frac{1}{(x - a_i)^j} \mid 1 \leq i \leq n, \, 1 \leq j \leq m_i \right\} in the vector space of proper rational functions whose denominators divide Q(x), ensuring the representation is canonical.[23]
Partial Fractions over the Reals
Over the real numbers \mathbb{R}, partial fraction decomposition applies to rational functions with real coefficients, where the denominator polynomial factors into linear and irreducible quadratic factors. By the fundamental theorem of algebra specialized to the reals, every non-constant polynomial q(x) \in \mathbb{R}[x] factors uniquely (up to ordering and constant multiples) as a product of linear factors (x - r_i)^{m_i} (with real roots r_i) and irreducible quadratic factors x^2 + c_j x + d_j (with c_j^2 - 4d_j < 0, no real roots).[24] This factorization ensures that any proper rational function p(x)/q(x) (with \deg p < \deg q) can be expressed as a sum of partial fractions aligned with these irreducible factors over \mathbb{R}.[25] The general form of the decomposition incorporates terms for both types of factors, including multiplicities. For each linear factor (x - r)^{m} raised to power m, the contribution is \sum_{k=1}^{m} \frac{A_k}{(x - r)^k}. For each irreducible quadratic factor (x^2 + c x + d)^{n} raised to power n, the contribution is \sum_{l=1}^{n} \frac{B_l x + C_l}{(x^2 + c x + d)^l}, where the numerators are linear polynomials to match the degree of the denominator factors.[2] Thus, a full decomposition of p(x)/q(x) is \frac{p(x)}{q(x)} = \sum_i \sum_{k=1}^{m_i} \frac{A_{i k}}{(x - r_i)^k} + \sum_j \sum_{l=1}^{n_j} \frac{B_{j l} x + C_{j l}}{(x^2 + c_j x + d_j)^l}, with all coefficients A_{i k}, B_{j l}, C_{j l} \in \mathbb{R}.[25] Existence and uniqueness of this decomposition hold for proper rational functions over \mathbb{R}, analogous to the general uniqueness theorem for algebraically closed fields but adapted to the real polynomial ring.[24] The real coefficients ensure all terms remain real-valued, preserving computational advantages in applications like integration.
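A concrete real decomposition with one linear and one irreducible quadratic factor (the particular function is my choice for illustration) is 1/(x(x^2 + 1)) = 1/x - x/(x^2 + 1), which can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

# Exact check of a real partial fraction decomposition:
#   1 / (x (x^2 + 1))  =  1/x  -  x / (x^2 + 1)
# The quadratic x^2 + 1 is irreducible over R, so its term keeps a
# linear numerator (here B x + C with B = -1, C = 0).

def original(x: Fraction) -> Fraction:
    return 1 / (x * (x * x + 1))

def decomposed(x: Fraction) -> Fraction:
    return 1 / x - x / (x * x + 1)

for x in (Fraction(1), Fraction(-3), Fraction(2, 7), Fraction(10)):
    assert original(x) == decomposed(x)
```

Using `Fraction` makes the identity an exact equality at every sample point rather than a floating-point approximation.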
Compared to decomposition over \mathbb{C}, the real form consolidates pairs of complex conjugate linear factors into single quadratic terms, resulting in fewer but higher-degree partial fractions while avoiding explicit complex arithmetic.[2]
Decomposition Methods
Standard Algebraic Method
The standard algebraic method for partial fraction decomposition systematically determines the coefficients in the general form by clearing the denominator and solving a resulting system of linear equations. Given a rational function \frac{P(x)}{Q(x)} where the degree of P(x) is less than the degree of Q(x), and Q(x) is factored into linear and irreducible quadratic factors, the method begins by expressing the decomposition as \frac{P(x)}{Q(x)} = \sum \frac{A_i x + B_i}{d_i(x)}, where the d_i(x) are the distinct factors. Multiplying both sides by Q(x) yields P(x) = \sum (A_i x + B_i) \cdot \frac{Q(x)}{d_i(x)}, transforming the equation into a polynomial identity. Expanding the right-hand side and equating coefficients of corresponding powers of x produces a system of linear equations in the unknowns A_i and B_i, which can be solved using standard techniques such as Gaussian elimination.[2] For cases involving distinct linear factors, substitution of the roots provides an efficient way to isolate individual coefficients within this framework. Suppose Q(x) = \prod (x - r_j)^{m_j} with simple roots (m_j = 1); substituting x = r_i into the cleared equation eliminates all terms except the one corresponding to the factor (x - r_i), yielding A_i = \frac{P(r_i)}{Q'(r_i)} (assuming Q(x) is monic; otherwise, adjust by the leading coefficient). This formula arises directly from the polynomial identity after substitution, as \frac{Q(x)}{x - r_i} \big|_{x = r_i} = Q'(r_i).[1] When repeated linear factors are present, such as (x - r)^m with m > 1, the decomposition includes terms \sum_{k=1}^m \frac{A_k}{(x - r)^k}. After multiplying by Q(x), substitute x = r to solve for the highest-order coefficient A_m = \frac{P(r)}{\frac{Q(x)}{(x - r)^m} \big|_{x = r}}. To find lower-order coefficients, differentiate the cleared equation repeatedly with respect to x, then evaluate at x = r; the k-th derivative isolates A_k through a formula involving higher derivatives of P(x) and Q(x). 
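The successive-differentiation rule can be sketched on a small example of my own choosing, x^2/(x - 2)^3, where the cofactor Q(x)/(x - r)^3 is simply 1, so the derivatives act on P(x) alone (in general the cofactor enters the differentiation as well):

```python
from fractions import Fraction

# Decompose x^2 / (x - 2)^3 = A/(x-2) + B/(x-2)^2 + C/(x-2)^3 via the
# cleared identity P(x) = A(x-2)^2 + B(x-2) + C:
#   C = P(2),  B = P'(2),  A = P''(2)/2!
def poly_eval(coeffs, x):              # coeffs[i] multiplies x**i
    return sum(c * x**i for i, c in enumerate(coeffs))

def poly_diff(coeffs):                 # formal derivative of the coefficient list
    return [i * c for i, c in enumerate(coeffs)][1:]

P = [Fraction(0), Fraction(0), Fraction(1)]   # P(x) = x^2
r = Fraction(2)

C = poly_eval(P, r)                           # highest-power coefficient first
B = poly_eval(poly_diff(P), r)
A = poly_eval(poly_diff(poly_diff(P)), r) / 2

assert (A, B, C) == (1, 4, 4)   # x^2/(x-2)^3 = 1/(x-2) + 4/(x-2)^2 + 4/(x-2)^3
```

Each evaluation at x = r isolates one coefficient, starting from the highest power and working down, exactly as the text describes.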
This successive differentiation approach systematically resolves the system without full expansion in complex cases.[26] As an illustrative setup, consider decomposing \frac{5x + 7}{(x + 2)(x - 1)}. The general form is \frac{A}{x + 2} + \frac{B}{x - 1}, and multiplying through by the denominator gives 5x + 7 = A(x - 1) + B(x + 2). Expanding the right side yields (A + B)x + (-A + 2B), and equating coefficients provides the system A + B = 5, -A + 2B = 7, solvable for A and B. Alternatively, substituting x = -2 isolates A = \frac{5(-2) + 7}{-2 - 1} = \frac{-3}{-3} = 1, and x = 1 gives B = \frac{5(1) + 7}{1 + 2} = \frac{12}{3} = 4, consistent with the formula for simple roots where Q'(x) = 2x + 1.[18]
Heaviside Cover-Up Technique
The Heaviside cover-up technique provides a streamlined heuristic for computing the coefficients in the partial fraction decomposition of a proper rational function \frac{P(x)}{Q(x)}, where the denominator Q(x) factors into distinct linear terms over the reals, such as Q(x) = (x - r_1)(x - r_2) \cdots (x - r_n) with all r_i distinct.[27] The decomposition then takes the form \frac{P(x)}{Q(x)} = \sum_{i=1}^n \frac{A_i}{x - r_i}, and the method determines each A_i by temporarily removing the factor (x - r_i) from the denominator and evaluating the resulting expression—namely, the numerator P(x) divided by the product of the remaining factors—at x = r_i.[27] Formally, the coefficient is given by A_i = \frac{P(r_i)}{\prod_{j \neq i} (r_i - r_j)} = \left. \frac{P(x)}{Q(x)/(x - r_i)} \right|_{x = r_i}.[27] This "cover-up" step exploits the fact that substituting x = r_i nullifies all other denominator terms, isolating the desired coefficient without solving a full system of equations. For instance, consider \frac{3x + 2}{(x-1)(x+2)}; covering x-1 and setting x=1 yields A_1 = \frac{3(1) + 2}{1+2} = \frac{5}{3}, while covering x+2 and setting x=-2 gives A_2 = \frac{3(-2) + 2}{-2-1} = \frac{-4}{-3} = \frac{4}{3}.[27] The technique is limited to cases with distinct linear factors and does not directly apply to repeated linear factors or irreducible quadratic factors, where additional steps—such as differentiation for repeats or completing the square for quadratics—are required to find all coefficients.[27] Named after Oliver Heaviside, the British electrical engineer and mathematician who developed it as part of his operational calculus in the late 19th century, the method was originally employed to simplify expansions in solving linear differential equations via Laplace transforms.
Residue-Based Approach
The residue-based approach to partial fraction decomposition leverages concepts from complex analysis, particularly the residue theorem, to determine the coefficients in the decomposition of a rational function f(z) = \frac{p(z)}{q(z)}, where p(z) and q(z) are polynomials with \deg p < \deg q, and q(z) factors into linear terms over the complex numbers. In this framework, the partial fraction expansion takes the form f(z) = \sum_k \frac{A_k}{z - r_k} for distinct simple poles r_k, where each coefficient A_k is precisely the residue of f(z) at the pole r_k. This connection arises because the residue represents the coefficient of the \frac{1}{z - r_k} term in the Laurent series expansion of f(z) around r_k, which aligns directly with the principal part of the partial fraction decomposition.[28] For simple poles, the residue (and thus the coefficient) A_k = \operatorname{Res}(f, r_k) is computed as A_k = \lim_{z \to r_k} (z - r_k) f(z) = \frac{p(r_k)}{q'(r_k)}, assuming q(r_k) = 0 and q'(r_k) \neq 0. This formula provides an efficient way to isolate each coefficient without solving a system of equations, as it evaluates the function's behavior directly at the pole. The approach requires only the roots of q(z) and their multiplicities, making it particularly useful when poles are known or easily found.[29][28] When poles have higher multiplicity, say a pole of order m at r, the partial fraction terms are \sum_{j=1}^m \frac{A_{j}}{(z - r)^j}, where A_1 is the residue (coefficient of \frac{1}{z - r}). The coefficients are derived from the Laurent series principal part. Let g(z) = (z - r)^m f(z), which is analytic at r. Then, A_k = \frac{1}{(m - k)!} \left. \frac{d^{m - k}}{dz^{m - k}} g(z) \right|_{z = r}, for k = 1, \dots, m. In particular, the residue is A_1 = \frac{1}{(m-1)!} \left. \frac{d^{m-1}}{dz^{m-1}} g(z) \right|_{z = r}, and the highest-order coefficient is A_m = g(r) = \lim_{z \to r} (z - r)^m f(z). 
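These formulas can be checked on a worked instance of my own choosing, f(z) = 1/((z-1)^2(z+1)), which has a double pole at z = 1 with g(z) = (z-1)^2 f(z) = 1/(z+1), so A_2 = g(1) = 1/2 and A_1 = g'(1) = -1/4, plus a simple pole at z = -1 with residue 1/4:

```python
from fractions import Fraction

# Coefficients from the derivative formulas for the double pole at z = 1
# (g(z) = 1/(z+1), g'(z) = -1/(z+1)^2) and the simple pole at z = -1.
A2 = Fraction(1, 2)    # A_2 = g(1)
A1 = Fraction(-1, 4)   # A_1 = g'(1) / 1!
R  = Fraction(1, 4)    # residue at z = -1: 1/(z-1)^2 evaluated there

def f(z: Fraction) -> Fraction:
    return 1 / ((z - 1) ** 2 * (z + 1))

def expansion(z: Fraction) -> Fraction:
    return A1 / (z - 1) + A2 / (z - 1) ** 2 + R / (z + 1)

# The principal parts at both poles reassemble f exactly.
for z in (Fraction(0), Fraction(3), Fraction(-1, 2), Fraction(7, 3)):
    assert f(z) == expansion(z)
```

Note that the coefficient of 1/(z-1) is the residue A_1, while the top coefficient A_2 comes from evaluating g rather than differentiating it.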
This method systematically handles repeated factors by differentiating the adjusted function, avoiding the need for successive undetermined coefficient substitutions in algebraic methods.[29][28] The primary advantage of the residue-based approach lies in its uniformity and computational efficiency for higher-order poles, as the derivative formulas provide a direct, algorithmic path to all coefficients without intermediate solving steps. It presupposes familiarity with basic residue computation but does not require evaluating contour integrals, focusing instead on local series expansions at each pole. This technique is especially valuable in applications like inverse Laplace transforms, where residues simplify the recovery of time-domain functions from s-domain rationals.[28]
Applications
Symbolic Integration
Partial fraction decomposition plays a central role in the symbolic integration of rational functions by transforming complex integrands into sums of simpler fractions whose antiderivatives are elementary.[2] For a proper rational function \frac{P(x)}{Q(x)}, where the degree of P(x) is less than the degree of Q(x), the decomposition yields terms of the form \frac{A}{x - a} for linear factors and \frac{Ax + B}{x^2 + bx + c} for irreducible quadratics, enabling term-by-term integration.[30] The integration process proceeds by applying standard antiderivative formulas to each partial fraction. For a linear term, \int \frac{A}{x - a} \, dx = A \ln |x - a| + C.[2] For quadratic terms, the integral \int \frac{Ax + B}{x^2 + bx + c} \, dx is evaluated by completing the square in the denominator, resulting in a combination of logarithmic and arctangent functions, such as \frac{A}{2} \ln |x^2 + bx + c| + D \arctan\left( \frac{2x + b}{\sqrt{4c - b^2}} \right) + C when the discriminant is negative.[31] This approach ensures that the indefinite integral of any rational function can be expressed using elementary functions, provided the denominator factors appropriately over the reals.[2] A primary benefit of this method is its reduction of non-elementary integrals to familiar forms, facilitating both manual computation and symbolic software implementation in computer algebra systems.[32] However, it applies only to proper rational functions; for improper cases where the numerator degree exceeds or equals the denominator degree, polynomial long division must precede decomposition to isolate the rational part.[30] Consider the integral \int \frac{x + 1}{x^2 - 1} \, dx. 
Factoring x^2 - 1 = (x - 1)(x + 1) shows the common factor x + 1 cancels, so the integrand simplifies to \frac{1}{x - 1} (for x \neq -1), yielding \ln |x - 1| + C.[2] For an example requiring a genuine decomposition, \int \frac{1}{(x - 3)(x + 2)} \, dx decomposes to \frac{1}{5} \left( \frac{1}{x - 3} - \frac{1}{x + 2} \right), integrating to \frac{1}{5} \ln \left| \frac{x - 3}{x + 2} \right| + C.[31]
Partial Fractions for Integer Evaluation
Partial fraction decomposition provides a powerful tool for evaluating sums over integers by transforming rational functions into telescoping forms, where most terms cancel in the partial sums. A classic application is the decomposition of the rational function \frac{1}{n(n+1)} into \frac{1}{n} - \frac{1}{n+1}, which allows the finite sum \sum_{n=1}^N \frac{1}{n(n+1)} to telescope to 1 - \frac{1}{N+1}.[33] This technique extends to more general forms like \frac{1}{n(n+k)} = \frac{1}{k} \left( \frac{1}{n} - \frac{1}{n+k} \right) for positive integer k, yielding sums such as \sum_{n=1}^N \frac{1}{n(n+k)} = \frac{1}{k} \left( H_N + H_k - H_{N+k} \right), where H_m denotes the m-th harmonic number. In the context of harmonic series partial sums, partial fractions enable the expression of differences of generalized harmonic numbers. For instance, the sum \sum_{j=1}^n \frac{(-1)^{j+1}}{j} = \int_0^1 \frac{1 - (-t)^n}{1 + t} \, dt connects to beta function relations for integer parameters, while discretely, decompositions like \frac{1}{j \binom{n}{j}} = \int_0^1 t^{j-1} (1-t)^{n-j} \, dt = B(j, n-j+1) facilitate summation identities involving harmonics.[34] These relations highlight how partial fractions bridge discrete sums to beta function evaluations at integers, where B(m,n) = \frac{(m-1)!(n-1)!}{(m+n-1)!}, aiding in closed-form expressions for combinatorial sums.
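The telescoping identity from the text can be verified exactly with rational arithmetic:

```python
from fractions import Fraction

# Verify sum_{n=1}^{N} 1/(n(n+1)) = 1 - 1/(N+1) using the decomposition
# 1/(n(n+1)) = 1/n - 1/(n+1): interior terms cancel pairwise.
def telescoped_sum(N: int) -> Fraction:
    return sum(Fraction(1, n) - Fraction(1, n + 1) for n in range(1, N + 1))

for N in (1, 5, 100):
    # telescoped form equals both the closed form and the direct sum
    assert telescoped_sum(N) == 1 - Fraction(1, N + 1)
    assert telescoped_sum(N) == sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))
```

The same pattern verifies the generalized identity for 1/(n(n+k)) by replacing the two terms with (1/k)(1/n - 1/(n+k)).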
More generally, partial fraction decomposition is essential for extracting coefficients from rational generating functions in combinatorics, where the ordinary generating function G(x) = \frac{P(x)}{Q(x)} with \deg P < \deg Q decomposes into \sum \frac{A_i}{(1 - \alpha_i x)^{m_i}}, and the coefficients follow from binomial expansions, counting objects like lattice paths or partitions.[35] This method, rooted in the uniqueness of decompositions over the rationals, simplifies the analysis of linear recurrences underlying combinatorial sequences.[36] Historically, Leonhard Euler employed partial fraction techniques in the 18th century, deriving from infinite products expansions such as \pi \cot(\pi z) = \frac{1}{z} + \sum_{k=1}^\infty \left( \frac{1}{z-k} + \frac{1}{z+k} \right), which facilitated evaluations of series like the Basel problem sum \sum \frac{1}{n^2} = \frac{\pi^2}{6} by connecting products to partial fractions of trigonometric functions.[37] Euler's approach, detailed in his 1748 work on infinite products, influenced subsequent developments in analytic number theory and generating function methods.[7]
Role in Differential Equations
Partial fraction decomposition aids in solving linear ordinary differential equations (ODEs) with rational coefficients by simplifying the treatment of nonhomogeneous forcing terms in standard solution methods. In the variation of parameters technique, the forcing term is incorporated into integrals that define the particular solution; decomposing a rational forcing function into partial fractions breaks it into simpler components, enabling the integrals to be evaluated more readily as a sum of individual terms. Similarly, in the method of undetermined coefficients, a rational forcing term can be expressed as a sum of partial fractions, each corresponding to a basic form (such as constants over linear factors) for which a trial particular solution can be assumed and coefficients determined systematically.[38] A common context arises in first-order linear ODEs solved via the integrating factor method, where the solution requires integrating the product of the integrating factor and the forcing term. When this product is a rational function, partial fraction decomposition resolves it into elementary fractions whose antiderivatives are known, thus yielding an explicit solution. For instance, equations like y' + P(x)y = Q(x) with rational Q(x) benefit from this approach to handle the resulting integral efficiently.[39] The technique extends naturally to the Laplace transform method for higher-order linear ODEs, where the transform of the solution is a rational function in the s-domain. Partial fraction decomposition of this transform allows inversion term by term using standard tables, often employing the Heaviside cover-up method to quickly find the coefficients. 
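As a minimal sketch (the particular equation is my choice), the initial value problem y' + y = 1, y(0) = 0 transforms to Y(s) = 1/(s(s+1)); the cover-up rule gives Y(s) = 1/s - 1/(s+1), which inverts term by term to y(t) = 1 - e^{-t}:

```python
import math

# Inverse-Laplace recovery via partial fractions:
#   Y(s) = 1/(s(s+1)) = 1/s - 1/(s+1)  =>  y(t) = 1 - exp(-t).
def y(t: float) -> float:
    return 1.0 - math.exp(-t)

def y_prime(t: float) -> float:
    return math.exp(-t)          # derivative of 1 - exp(-t)

# The recovered y satisfies the original ODE y' + y = 1 with y(0) = 0.
assert y(0.0) == 0.0
for t in (0.1, 1.0, 5.0):
    assert abs(y_prime(t) + y(t) - 1.0) < 1e-12
```

Each partial fraction corresponds to a standard transform-table entry (1/s inverts to 1, and 1/(s+1) to e^{-t}), which is what makes the decomposition step decisive.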
This is particularly effective for constant-coefficient ODEs with rational forcing inputs.[40] Despite its utility, partial fraction decomposition is limited to scenarios where the relevant expressions are rational functions, so it does not apply directly when the forcing terms or coefficients take non-rational forms.[41]
Examples and Illustrations
Basic Linear Factors
When the denominator of a proper rational function factors into distinct linear terms over the reals, the partial fraction decomposition expresses it as a sum of simpler fractions, each with one of those linear factors in the denominator and an unknown constant in the numerator. This approach is fundamental for simplifying rational expressions and is applicable when the roots of the denominator are real and distinct.[2] Consider the example of decomposing \frac{3x + 2}{(x-1)(x+2)}. The form is \frac{3x + 2}{(x-1)(x+2)} = \frac{A}{x-1} + \frac{B}{x+2}, where A and B are constants to be determined.[2] The Heaviside cover-up technique provides an efficient solution: to find A, substitute x = 1 into the numerator while "covering up" the (x-1) factor, yielding A = \frac{3(1) + 2}{1+2} = \frac{5}{3}; similarly, for B, substitute x = -2, giving B = \frac{3(-2) + 2}{-2-1} = \frac{-4}{-3} = \frac{4}{3}. Thus, the decomposition is \frac{3x + 2}{(x-1)(x+2)} = \frac{5/3}{x-1} + \frac{4/3}{x+2}.[2][42] To verify, recombine the right-hand side: \frac{5/3}{x-1} + \frac{4/3}{x+2} = \frac{\frac{5}{3}(x+2) + \frac{4}{3}(x-1)}{(x-1)(x+2)} = \frac{\frac{1}{3}(5x + 10 + 4x - 4)}{(x-1)(x+2)} = \frac{9x + 6}{3(x-1)(x+2)} = \frac{3x + 2}{(x-1)(x+2)}, which matches the original expression.[2] This decomposition is unique, meaning that for a given rational function with distinct linear factors, there is exactly one set of constants A and B that satisfies the equation, as guaranteed by the theory of rational function factorization.[2]
Repeated Linear Factors
When the denominator of a rational function contains a repeated linear factor of the form (ax + b)^k where k > 1, the partial fraction decomposition requires a sum of terms with numerators that are constants, one for each power of the factor from 1 to k.[2] This setup allows the original fraction to be expressed as a sum of simpler fractions, facilitating operations like integration.[2] Consider the example of decomposing \frac{x^2 + 5x + 2}{(x+1)^3}. The form is \frac{x^2 + 5x + 2}{(x+1)^3} = \frac{A}{x+1} + \frac{B}{(x+1)^2} + \frac{C}{(x+1)^3}.[3] To solve using the standard algebraic method, multiply both sides by (x+1)^3 to obtain: x^2 + 5x + 2 = A(x+1)^2 + B(x+1) + C. Expanding the right side gives A(x^2 + 2x + 1) + Bx + B + C = Ax^2 + (2A + B)x + (A + B + C). Equating coefficients with the left side yields the system: A = 1, 2A + B = 5, and A + B + C = 2. Solving provides A = 1, B = 3, and C = -2.[2] An alternative approach for higher powers involves successive differentiation of the cleared equation. Starting from x^2 + 5x + 2 = A(x+1)^2 + B(x+1) + C, evaluate at x = -1 to find C = (-1)^2 + 5(-1) + 2 = -2. Differentiate both sides: 2x + 5 = 2A(x+1) + B, and evaluate at x = -1 to get B = 2(-1) + 5 = 3. Differentiate again: 2 = 2A, so A = 1. This method systematically isolates each coefficient beginning with the highest power.[3] Verification confirms the decomposition: \frac{1}{x+1} + \frac{3}{(x+1)^2} - \frac{2}{(x+1)^3} recombines to \frac{x^2 + 5x + 2}{(x+1)^3} by clearing the common denominator and simplifying.[2]
Irreducible Quadratic Factors
Irreducible Quadratic Factors
When the denominator of a rational function includes one or more irreducible quadratic factors over the reals, the partial fraction decomposition assigns a linear numerator to each such factor. An irreducible quadratic factor is a quadratic polynomial ax^2 + bx + c with real coefficients whose discriminant b^2 - 4ac < 0, meaning it has no real roots and cannot be factored further into linear factors over the reals.[43] For a non-repeated irreducible quadratic factor, the corresponding term in the decomposition is of the form \frac{Ax + B}{ax^2 + bx + c}. If the quadratic factor is repeated k times, the decomposition includes terms \frac{A_1 x + B_1}{ax^2 + bx + c} + \frac{A_2 x + B_2}{(ax^2 + bx + c)^2} + \cdots + \frac{A_k x + B_k}{(ax^2 + bx + c)^k}. This approach keeps the decomposition over the real numbers, avoiding complex factors.[2] To find the coefficients, multiply both sides of the decomposition by the full denominator to clear fractions, then equate coefficients of corresponding powers of x or substitute specific values of x to form a system of linear equations.[2] Consider the simple case of a rational function with a single irreducible quadratic denominator, such as \frac{2x + 1}{x^2 + 2x + 2}. The denominator x^2 + 2x + 2 has discriminant 4 - 8 = -4 < 0, confirming it is irreducible over the reals. The partial fraction form is \frac{Ax + B}{x^2 + 2x + 2}. Setting Ax + B = 2x + 1 directly gives A = 2 and B = 1, so the decomposition is \frac{2x + 1}{x^2 + 2x + 2}, which is already in the required form.[44] For a mixed case involving both a linear factor and an irreducible quadratic factor, consider the rational function \frac{5x^2 - 3x + 10}{(x-1)(x^2 + 1)}. The denominator factors as (x-1)(x^2 + 1), where x-1 is linear and x^2 + 1 is irreducible (discriminant 0 - 4 = -4 < 0). The partial fraction decomposition is \frac{5x^2 - 3x + 10}{(x-1)(x^2 + 1)} = \frac{A}{x-1} + \frac{Bx + C}{x^2 + 1}.
Multiplying through by the denominator yields 5x^2 - 3x + 10 = A(x^2 + 1) + (Bx + C)(x - 1). Expanding the right side gives A x^2 + A + B x^2 - B x + C x - C = (A + B) x^2 + (-B + C) x + (A - C). Equating coefficients with the left side produces the system of equations: \begin{align*} A + B &= 5, \\ -B + C &= -3, \\ A - C &= 10. \end{align*} Solving this system, substitute B = 5 - A from the first equation into the second: -(5 - A) + C = -3, so A - 5 + C = -3, or C = -3 - A + 5 = 2 - A. From the third equation, A - (2 - A) = 10, so A - 2 + A = 10, hence 2A = 12 and A = 6. Then B = 5 - 6 = -1 and C = 2 - 6 = -4. Thus, the decomposition is \frac{5x^2 - 3x + 10}{(x-1)(x^2 + 1)} = \frac{6}{x-1} + \frac{-x - 4}{x^2 + 1}. Verification by recombining the right side confirms it equals the original rational function.[2]
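The coefficient system above can be generated and solved mechanically; a short sketch using SymPy (assumed available):

```python
import sympy as sp

x, A, B, C = sp.symbols('x A B C')
lhs = 5*x**2 - 3*x + 10
rhs = sp.expand(A*(x**2 + 1) + (B*x + C)*(x - 1))

# Equate coefficients of x^2, x^1, x^0 on both sides of the cleared equation.
eqs = [sp.Eq(lc, rc) for lc, rc in zip(sp.Poly(lhs, x).all_coeffs(),
                                       sp.Poly(rhs, x).all_coeffs())]
sol = sp.solve(eqs, [A, B, C])
print(sol[A], sol[B], sol[C])  # 6 -1 -4
```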
Integration via Decomposition
Partial fraction decomposition facilitates the integration of rational functions by breaking them down into sums of fractions with linear denominators, whose antiderivatives involve natural logarithms. The technique applies directly once the integrand is proper, that is, when the degree of the numerator is less than that of the denominator, allowing term-by-term integration after decomposition. Consider the indefinite integral \int \frac{x + 4}{x^2 - x - 2} \, dx. First, factor the denominator as (x - 2)(x + 1). Express the integrand as partial fractions: \frac{x + 4}{(x - 2)(x + 1)} = \frac{A}{x - 2} + \frac{B}{x + 1}. Multiplying through by the denominator yields x + 4 = A(x + 1) + B(x - 2). Equating coefficients gives the system A + B = 1 and A - 2B = 4, which solves to A = 2 and B = -1. Thus, \frac{x + 4}{(x - 2)(x + 1)} = \frac{2}{x - 2} - \frac{1}{x + 1}. Integrating term by term produces \int \left( \frac{2}{x - 2} - \frac{1}{x + 1} \right) dx = 2 \ln |x - 2| - \ln |x + 1| + C. This antiderivative can be rewritten as \ln \left| \frac{(x - 2)^2}{x + 1} \right| + C.[2] For a definite integral, evaluate the antiderivative at the limits while respecting the domain where the original function is defined (here, x \neq 2 and x \neq -1). For instance, \int_0^1 \frac{x + 4}{x^2 - x - 2} \, dx = \left[ 2 \ln |x - 2| - \ln |x + 1| \right]_0^1 = (2 \ln 1 - \ln 2) - (2 \ln 2 - \ln 1) = -\ln 2 - 2 \ln 2 = -3 \ln 2.
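Both the decomposition and the definite integral above can be confirmed symbolically; a sketch assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = (x + 4) / (x**2 - x - 2)

# Decompose first, then integrate term by term.
decomposed = sp.apart(f)                 # equals 2/(x - 2) - 1/(x + 1)
antiderivative = sp.integrate(decomposed, x)
definite = sp.integrate(f, (x, 0, 1))    # equals -3*log(2)
```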
Advanced Example Using Residues
To illustrate the residue-based approach for partial fraction decomposition of a rational function with both real and complex poles, consider the function P(z) = \frac{1}{(z^2 + 1)(z - 1)} = \frac{1}{(z - i)(z + i)(z - 1)}, where the poles are simple at z = 1, z = i, and z = -i.[29] The partial fraction decomposition is given by P(z) = \frac{\operatorname{Res}(P, 1)}{z - 1} + \frac{\operatorname{Res}(P, i)}{z - i} + \frac{\operatorname{Res}(P, -i)}{z + i}, as per the residue method for simple poles.[29] The residue at the real pole z = 1 is computed as \operatorname{Res}(P, 1) = \lim_{z \to 1} (z - 1) P(z) = \frac{1}{1^2 + 1} = \frac{1}{2}. The residue at the complex pole z = i is \operatorname{Res}(P, i) = \lim_{z \to i} (z - i) P(z) = \frac{1}{(i + i)(i - 1)} = \frac{1}{2i (i - 1)} = \frac{-1 + i}{4}, obtained by simplifying \frac{1}{2i(-1 + i)} = \frac{1}{-2 - 2i} = -\frac{1}{2(1 + i)} = -\frac{1 - i}{4} = \frac{-1 + i}{4}.[29] Similarly, the residue at z = -i is \operatorname{Res}(P, -i) = \lim_{z \to -i} (z + i) P(z) = \frac{1}{(-i - i)(-i - 1)} = \frac{1}{-2i (-1 - i)} = \frac{-1 - i}{4}, following the analogous simplification \frac{1}{-2i(-1 - i)} = \frac{1}{2i(1 + i)} = \frac{1}{-2 + 2i} = -\frac{1}{2(1 - i)} = -\frac{1 + i}{4} = \frac{-1 - i}{4}.[29] Thus, the complex partial fraction decomposition is P(z) = \frac{1/2}{z - 1} + \frac{(-1 + i)/4}{z - i} + \frac{(-1 - i)/4}{z + i}. For verification, this can be converted to the real coefficient form P(z) = \frac{A}{z - 1} + \frac{Bz + C}{z^2 + 1}. Solving yields A = \frac{1}{2}, B = -\frac{1}{2}, C = -\frac{1}{2}, so P(z) = \frac{1/2}{z - 1} - \frac{1}{2} \cdot \frac{z + 1}{z^2 + 1}, which matches upon combining the complex terms and confirming identity with the original function.[29]
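The three residue computations can be replicated with SymPy's residue function; a minimal sketch, assuming SymPy is available:

```python
import sympy as sp

z = sp.symbols('z')
P = 1 / ((z**2 + 1) * (z - 1))

# Residues at the three simple poles give the partial fraction numerators.
r1 = sp.residue(P, z, 1)        # 1/2
ri = sp.residue(P, z, sp.I)     # (-1 + I)/4
rmi = sp.residue(P, z, -sp.I)   # (-1 - I)/4

# Recombining the complex partial fractions recovers P exactly.
recombined = r1/(z - 1) + ri/(z - sp.I) + rmi/(z + sp.I)
print(sp.simplify(recombined - P))  # 0
```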
Advanced Topics
Connection to Taylor Polynomials
In complex analysis, the partial fraction decomposition of a rational function establishes a direct connection to the Laurent series expansion: the partial fractions correspond precisely to the principal parts of the Laurent series at each pole of the function.[45] For a rational function f(z) = \frac{P(z)}{Q(z)} with simple poles at distinct points a_k, the decomposition f(z) = \sum_k \frac{c_k}{z - a_k} + p(z), where p(z) is the polynomial part (zero when f is proper), isolates the singular behavior at each a_k, and each term \frac{c_k}{z - a_k} is the principal part of the Laurent series centered at a_k.[45] This link extends to higher-order poles. For a pole of order m at z = a, the principal part of the Laurent series is \sum_{k=1}^m \frac{b_k}{(z - a)^k}, where the coefficients b_k are determined by the Taylor expansion of the analytic function g(z) = (z - a)^m f(z) at z = a. Specifically, b_k = \frac{g^{(m - k)}(a)}{(m - k)!}, mirroring the structure of Taylor coefficients but applied to the negative powers after the substitution w = \frac{1}{z - a}, which transforms the principal part into a polynomial in w.[46] A proof sketch for rational functions follows from the fact that any proper rational function (degree of numerator less than denominator) vanishes at infinity and equals the sum of its principal parts across all poles. Subtracting the polynomial part (obtained by long division if the function is improper) yields a function analytic at infinity, and by the residue theorem or direct expansion, it decomposes into the sum of Laurent principal parts at each finite pole, with no essential-singularity terms because the function is rational. This unification highlights how partial fraction decomposition provides a computational tool within the broader framework of complex analysis, facilitating residue calculations, contour integrations, and singularity analysis for rational functions.[46]
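The coefficient formula b_k = \frac{g^{(m-k)}(a)}{(m-k)!} can be exercised directly; the following sketch, assuming SymPy is available, uses an illustrative function with a double pole (not taken from the text):

```python
import sympy as sp

z = sp.symbols('z')
# Illustrative function: pole of order m = 2 at a = 1, simple pole at z = -1
f = 1 / ((z - 1)**2 * (z + 1))
a, m = 1, 2

# g(z) = (z - a)^m f(z) is analytic at a; b_k = g^(m-k)(a) / (m-k)!
g = sp.cancel((z - a)**m * f)   # 1/(z + 1)
b = {k: sp.diff(g, z, m - k).subs(z, a) / sp.factorial(m - k)
     for k in range(1, m + 1)}
print(b)  # {1: -1/4, 2: 1/2}

# The principal part at z = 1, plus the simple-pole term at z = -1, gives f.
principal = sum(b[k] / (z - a)**k for k in b)
print(sp.simplify(f - principal - sp.Rational(1, 4)/(z + 1)))  # 0
```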
Generalizations and Extensions
Partial fraction decomposition extends beyond the rational numbers to other fields, such as the p-adic numbers, where algorithms have been developed to compute univariate decompositions efficiently for symbolic computation purposes. This generalization leverages the structure of p-adic fields to handle Laurent polynomials and supports applications in integration and algebraic manipulation.[47] Similarly, the technique applies to function fields, such as the field of rational functions over a base field, where the polynomial ring behaves as a Euclidean domain, allowing unique decompositions analogous to the classical case.[48] In the multivariable setting, partial fraction decomposition generalizes to rational functions in several variables, often using polynomial reductions via Gröbner bases to avoid spurious factors and ensure uniqueness. This approach relies on algebraic tools like Hilbert's Nullstellensatz to separate denominators with common zeros and is particularly relevant in algebraic geometry for studying rational functions on varieties and computing residues or integrals over higher-dimensional spaces.[49] For non-polynomial cases, such as rational functions involving exponentials or trigonometric functions, substitution methods transform the expressions into algebraic rational functions amenable to partial fraction decomposition. The Weierstrass substitution, for instance, replaces trigonometric functions with rational expressions in a single variable t = \tan(x/2), enabling the application of standard techniques followed by back-substitution.[50] In modern computer algebra systems, partial fraction decomposition is implemented with efficient algorithms supporting these generalizations.
SymPy's apart() function, for example, performs univariate decompositions using either the undetermined coefficients method or Bronstein's full partial fraction algorithm, handling rational functions over various domains and facilitating symbolic integration.[51] The uniqueness of such decompositions holds over any field, since the univariate polynomial ring over a field is a principal ideal domain.[52]
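As an illustration of the apart() behavior described here, applied to the repeated-factor example treated earlier in the article (assuming SymPy is installed):

```python
import sympy as sp

x = sp.symbols('x')

# Default apart(): undetermined coefficients over the rationals.
expr = (x**2 + 5*x + 2) / (x + 1)**3
result = sp.apart(expr, x)
# result equals 1/(x + 1) + 3/(x + 1)**2 - 2/(x + 1)**3
```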