
Variation of parameters

Variation of parameters, also known as variation of constants, is a method for finding particular solutions to inhomogeneous linear ordinary differential equations (ODEs) by treating the constants in the general solution of the corresponding homogeneous equation as functions of the independent variable. The approach assumes a particular solution of the form y_p = u_1(x) y_1(x) + u_2(x) y_2(x) for a second-order equation, where y_1 and y_2 are linearly independent solutions to the homogeneous equation, and u_1(x) and u_2(x) are functions to be determined by substituting into the original ODE and solving the resulting system. Introduced by Leonhard Euler in the mid-18th century and refined by Joseph-Louis Lagrange, the method originated in the context of perturbation theory for celestial mechanics but has become a standard tool in differential equations. It applies to linear ODEs of any order with variable coefficients and nonhomogeneous forcing terms g(x) that may be arbitrary functions, making it more versatile than the method of undetermined coefficients, which requires g(x) to be a polynomial, an exponential, a sine, a cosine, or a product of these. The technique extends naturally to systems of linear ODEs and underpins concepts like Duhamel's principle for convolution representations of solutions and the construction of Green's functions for boundary value problems. In practice, the method involves setting up and solving a linear system for the derivatives of the varying parameters—such as u_1' y_1 + u_2' y_2 = 0 and u_1' y_1' + u_2' y_2' = g(x) in the second-order case—then integrating to find the parameters, often yielding integrals that can be evaluated explicitly or left in integral form for theoretical purposes. This systematic procedure ensures the particular solution satisfies the nonhomogeneous equation without altering the homogeneous part, giving the full general solution as the sum of the homogeneous and particular solutions.

Background Concepts

Linear Differential Equations

Linear ordinary differential equations (ODEs) form a fundamental class in the study of differential equations, characterized by their linearity with respect to the unknown function and its derivatives. An nth-order linear ODE has the general form a_n(t) y^{(n)}(t) + a_{n-1}(t) y^{(n-1)}(t) + \cdots + a_1(t) y'(t) + a_0(t) y(t) = g(t), where the coefficients a_i(t) (for i = 0, 1, \dots, n) and the forcing function g(t) are given functions, and a_n(t) \neq 0. This equation is linear because the dependent variable y(t) and its derivatives appear only to the first power, with no products or nonlinear functions involving them. Often, the equation is normalized to standard form by dividing through by a_n(t), yielding y^{(n)}(t) + p_{n-1}(t) y^{(n-1)}(t) + \cdots + p_1(t) y'(t) + p_0(t) y(t) = f(t), where p_i(t) = a_i(t)/a_n(t) and f(t) = g(t)/a_n(t). The equation is classified as homogeneous if g(t) \equiv 0 (or f(t) \equiv 0 in standard form), meaning no external forcing term is present, and nonhomogeneous otherwise. For homogeneous linear ODEs, the zero function y(t) = 0 is always a solution, and the superposition principle holds: if y_1(t), y_2(t), \dots, y_n(t) are n linearly independent solutions, then any linear combination y(t) = c_1 y_1(t) + \cdots + c_n y_n(t), with arbitrary constants c_i, is also a solution. Linear independence of solutions can be verified using the Wronskian determinant, which is nonzero over an interval if and only if the solutions are independent on that interval. This principle extends to nonhomogeneous equations in a modified form: the general solution is the sum of the general homogeneous solution and a particular solution to the nonhomogeneous equation. Under suitable conditions on the coefficients—specifically, if the p_i(t) and f(t) are continuous on an interval containing t_0—linear ODEs possess a unique solution satisfying given initial conditions y(t_0) = b_0, y'(t_0) = b_1, \dots, y^{(n-1)}(t_0) = b_{n-1}. This existence and uniqueness theorem ensures that solutions are well-defined and dependable for analysis and applications, such as in modeling physical systems like damped oscillators or electrical circuits. For constant-coefficient cases, where the a_i are constants, solutions often involve exponential functions and can be found using characteristic equations, providing a bridge to methods like variation of parameters for nonhomogeneous problems.
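
The superposition principle and the initial-value statement above can be checked symbolically. The following is a minimal sketch in SymPy, using the illustrative constant-coefficient equation y'' + y = 0 (the example equation and variable names are choices of this sketch, not taken from the text):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
c1, c2 = sp.symbols('c1 c2')

# Illustrative homogeneous linear ODE: y'' + y = 0, with solutions cos(x), sin(x).
y1, y2 = sp.cos(x), sp.sin(x)
combo = c1*y1 + c2*y2

# Superposition: any linear combination is again a solution.
assert sp.simplify(combo.diff(x, 2) + combo) == 0

# Existence/uniqueness: initial conditions single out one solution.
ode = sp.Eq(y(x).diff(x, 2) + y(x), 0)
sol = sp.dsolve(ode, y(x), ics={y(0): 1, y(x).diff(x).subs(x, 0): 0})
print(sol)  # Eq(y(x), cos(x))
```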

Homogeneous Solutions and Wronskian

For a linear homogeneous ordinary differential equation (ODE) of the form y'' + p(x)y' + q(x)y = 0, where p(x) and q(x) are continuous on an interval I, the solutions form a vector space of dimension 2. A fundamental set of solutions consists of two linearly independent solutions y_1(x) and y_2(x), and the general solution is given by their linear combination y(x) = c_1 y_1(x) + c_2 y_2(x), where c_1 and c_2 are arbitrary constants determined by initial conditions. Linear independence of y_1(x) and y_2(x) means that the only constants c_1 and c_2 satisfying c_1 y_1(x) + c_2 y_2(x) = 0 for all x \in I are c_1 = c_2 = 0. Solutions are linearly dependent if one is a scalar multiple of the other, in which case they fail to span the full solution space. The principle of superposition ensures that any linear combination of homogeneous solutions remains a solution, allowing the construction of the general solution from a fundamental set. The Wronskian provides a practical test for linear independence. For two solutions y_1(x) and y_2(x), it is defined as W(y_1, y_2)(x) = \begin{vmatrix} y_1(x) & y_2(x) \\ y_1'(x) & y_2'(x) \end{vmatrix} = y_1(x) y_2'(x) - y_2(x) y_1'(x). If W(y_1, y_2)(x_0) \neq 0 at some point x_0 \in I, then y_1 and y_2 are linearly independent on I; conversely, linear dependence implies W(y_1, y_2)(x) = 0 for all x \in I. For linearly independent solutions, the Wronskian is never zero on I. Abel's theorem further characterizes the Wronskian for solutions of the homogeneous equation: W(y_1, y_2)(x) = W(y_1, y_2)(x_0) \exp\left( -\int_{x_0}^x p(t) \, dt \right), showing that it either vanishes identically or is nowhere zero, depending on the initial value. This non-vanishing property ensures that the fundamental matrix formed by y_1 and y_2 is invertible, which is essential for methods like variation of parameters that seek particular solutions to nonhomogeneous equations by assuming forms based on the homogeneous solutions. For higher-order equations, such as the n-th order linear homogeneous equation y^{(n)} + p_{n-1}(x) y^{(n-1)} + \cdots + p_0(x) y = 0, a fundamental set comprises n linearly independent solutions, and the Wronskian generalizes to the determinant of the matrix with rows consisting of the solutions and their first n-1 derivatives. Linear independence holds if and only if this determinant is non-zero at some point in the interval.
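
Both the Wronskian test and Abel's theorem lend themselves to symbolic verification. Below is a minimal SymPy sketch, assuming an illustrative Euler-type equation y'' - (2/x) y' + (2/x^2) y = 0 with known solutions x and x^2 (the example is a choice of this sketch):

```python
import sympy as sp

x, x0, t = sp.symbols('x x0 t', positive=True)

# Solutions of y'' - (2/x) y' + (2/x**2) y = 0 in standard form, so p(x) = -2/x.
y1, y2 = x, x**2
p = -2/x

# Wronskian via the 2x2 determinant definition.
W = sp.simplify(y1*y2.diff(x) - y2*y1.diff(x))
print(W)  # x**2: nonzero for x > 0, so y1 and y2 are independent there

# Abel's theorem: W(x) = W(x0) * exp(-Integral(p(t), (t, x0, x))).
abel = W.subs(x, x0) * sp.exp(-sp.integrate(p.subs(x, t), (t, x0, x)))
assert sp.simplify(abel - W) == 0
```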

Historical Development

Origins with Euler

Leonhard Euler (1707–1783) played a pivotal role in the early development of methods for solving linear differential equations, including the foundational ideas behind variation of parameters. While earlier mathematicians such as Johann Bernoulli had employed similar techniques in specific cases, such as the solution of Bernoulli equations in 1697, Euler generalized and formalized the approach for linear nonhomogeneous equations. His work emerged in the context of perturbation methods, where small variations in parameters allowed for approximate solutions to complex systems, particularly in astronomy and physics. Euler's contributions built on his extensive studies of differential equations, which he began exploring in the 1730s through problems in the Acta Eruditorum and his own publications. The method's origins with Euler are most explicitly documented in his comprehensive treatise Institutionum Calculi Integralis, published between 1768 and 1770. In Volume II, Section I, Chapter IV, Euler systematically addressed the integration of linear differential equations of higher orders. Specifically, Problem 104 (§851) outlines the variation of parameters technique for second-order linear complete differential equations, assuming a particular solution of the form y_p = u(x) y_1(x) + v(x) y_2(x), where y_1 and y_2 are solutions to the homogeneous equation, and u and v are functions to be determined. He derived the conditions for u' and v' by substituting this form into the nonhomogeneous equation and using the fact that the homogeneous solutions satisfy the left-hand side, leading to a system solvable via elimination or determinants. This presentation marked a significant advancement, shifting from ad hoc perturbations to a structured algebraic procedure. Euler further illustrated the method's utility in Problem 105 (§856), applying it to second-order equations with constant coefficients, such as y'' + a y' + b y = f(x). Here, he demonstrated how the variable parameters simplify the search for particular integrals when the right-hand side f(x) is arbitrary, emphasizing its versatility over other methods like undetermined coefficients, which are limited to specific forcing functions. Euler's exposition in this work not only provided explicit formulas but also connected the technique to broader integral calculus, influencing subsequent mathematicians. For instance, he noted the method's roots in earlier variational ideas but adapted it rigorously for differential equations, establishing it as a cornerstone of exact solution techniques.

Contributions of Lagrange

Joseph-Louis Lagrange (1736–1813) played a pivotal role in developing the method of variation of parameters, extending and formalizing its application beyond Euler's preliminary formulations to address complex problems in celestial mechanics and general differential equations. Initially inspired by Euler's perturbation work, Lagrange adapted the technique to model small deviations in planetary orbits caused by gravitational interactions, treating orbital elements as slowly varying functions rather than fixed constants. In 1766, Lagrange introduced the method in a memoir on celestial mechanics, where he applied it to analyze perturbations in the orbits of planets such as Jupiter and Saturn, improving upon Euler's earlier approaches by systematically varying the constants of integration to account for disturbing forces. This work marked the first explicit use of variation of parameters for solving nonhomogeneous linear equations arising in orbital mechanics, demonstrating its utility in reducing perturbed motion to a solvable system of equations. By assuming the form of the particular solution as a linear combination of homogeneous solutions with variable coefficients, Lagrange derived equations that captured the evolution of these coefficients under external influences, laying the groundwork for its broader adoption in mechanics. Lagrange's contributions deepened in the late 1770s and early 1780s through a series of memoirs published in academy proceedings. In these, he expanded the method to study secular variations in planetary motions, including long-term changes in orbital elements due to mutual gravitational attractions. One key series addressed variations in the movements of the planets, where Lagrange derived explicit formulas for the rates of change of elements like the semi-major axis and eccentricity, expressing them in terms of partial derivatives of the perturbing function. A parallel series applied the technique to the motion of the Moon, treating tidal and gravitational effects as nonhomogeneous terms in the governing differential equations. These developments emphasized the method's power for higher-order systems, transforming the nonlinear perturbation equations into a linear system amenable to analysis. By 1808–1810, Lagrange refined the method to its mature form in a third series of papers presented to the Paris Academy of Sciences, generalizing it beyond celestial mechanics to arbitrary problems in dynamics. In the 1808 "Mémoire sur la théorie des variations des éléments des planètes," he outlined differential equations for varying orbital parameters under general forces. The 1809 "Mémoire sur la théorie générale de la variation des constantes arbitraires dans tous les problèmes de mécanique" extended this to any mechanical system, introducing Lagrange's parentheses—bilinear forms representing the symplectic structure of phase space—to simplify the equations of motion. Finally, the 1810 "Second mémoire sur la théorie de la variation des constantes arbitraires" incorporated insights from Siméon Denis Poisson, using Poisson brackets to streamline computations and highlight the method's invariance properties. These late works solidified variation of parameters as a foundational tool for solving linear nonhomogeneous ordinary differential equations, influencing subsequent advances in both mathematics and physics.

Method Overview

Intuitive Explanation

The method of variation of parameters provides an intuitive approach to solving nonhomogeneous linear differential equations by building directly on the known solutions to the associated homogeneous equation. For a homogeneous equation, the general solution takes the form of a linear combination of fundamental solutions with constant coefficients, representing all possible behaviors in the absence of external forcing. When a nonhomogeneous term—such as an applied force or input signal—is introduced, the solution must include an additional particular component that accounts for this influence while preserving the homogeneous structure. The core intuition lies in treating the constant coefficients of the homogeneous solution as functions that "vary" in response to the nonhomogeneous term, thereby generating a particular solution that adapts to the forcing function. Suppose the homogeneous solution is y_h(x) = c_1 y_1(x) + c_2 y_2(x) for a second-order equation; variation of parameters assumes a particular solution of the form y_p(x) = u_1(x) y_1(x) + u_2(x) y_2(x), where u_1(x) and u_2(x) are functions to be determined. This leverages the linear independence of y_1 and y_2, allowing the varying parameters to introduce the necessary flexibility to satisfy the full nonhomogeneous equation without altering the fundamental solution space. This method works because differentiating the assumed form produces terms that, when substituted into the equation, yield a system solvable for the derivatives of the varying parameters, effectively isolating the nonhomogeneous term through the Wronskian. Intuitively, it can be viewed as integrating contributions from the homogeneous solutions, weighted by the forcing function, which mirrors physical processes like accumulating the response to a time-varying load in mechanical systems. Unlike undetermined coefficients, which relies on guessing forms for specific forcing types, variation of parameters applies universally to any continuous nonhomogeneous term, emphasizing its robustness and generality.

General Procedure

The method of variation of parameters provides a systematic approach to finding a particular solution to a nonhomogeneous linear ODE, assuming the corresponding homogeneous equation has been solved. For a second-order equation of the form y'' + p(t)y' + q(t)y = g(t), where the leading coefficient is normalized to 1, the procedure begins by identifying a fundamental set of solutions \{y_1(t), y_2(t)\} to the homogeneous equation y'' + p(t)y' + q(t)y = 0, ensuring their Wronskian W(y_1, y_2) = y_1 y_2' - y_2 y_1' \neq 0. The particular solution is assumed to be y_p(t) = u_1(t) y_1(t) + u_2(t) y_2(t), where u_1(t) and u_2(t) are functions to be determined. To simplify the computation, impose the condition u_1'(t) y_1(t) + u_2'(t) y_2(t) = 0. Substituting y_p into the original equation yields a linear system: \begin{cases} u_1' y_1 + u_2' y_2 = 0, \\ u_1' y_1' + u_2' y_2' = g(t). \end{cases} Solving this gives u_1'(t) = -\frac{y_2(t) g(t)}{W(y_1, y_2)} and u_2'(t) = \frac{y_1(t) g(t)}{W(y_1, y_2)}. Integrating these expressions provides u_1(t) and u_2(t) (constants of integration can be set to zero for the particular solution), from which y_p(t) is formed. The general solution is then y(t) = c_1 y_1(t) + c_2 y_2(t) + y_p(t). This procedure extends naturally to nth-order linear equations L[y] = a_n(t) y^{(n)} + \cdots + a_1(t) y' + a_0(t) y = g(t), with a_n(t) \neq 0. First, divide by a_n(t) to normalize the leading coefficient to 1, yielding y^{(n)} + p_{n-1}(t) y^{(n-1)} + \cdots + p_0(t) y = f(t), where f(t) = g(t)/a_n(t). Obtain a fundamental set \{y_1(t), \dots, y_n(t)\} for the homogeneous equation, with Wronskian W(t) = \det \begin{pmatrix} y_1 & \cdots & y_n \\ y_1' & \cdots & y_n' \\ \vdots & \ddots & \vdots \\ y_1^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix} \neq 0. Assume y_p(t) = \sum_{i=1}^n u_i(t) y_i(t). Impose n-1 simplifying conditions: \sum_{i=1}^n u_i'(t) y_i^{(k)}(t) = 0 for k = 0, 1, \dots, n-2, where y_i^{(0)} = y_i. The nth condition from substitution is \sum_{i=1}^n u_i'(t) y_i^{(n-1)}(t) = f(t). This forms a linear system for the u_i'(t), solvable via Cramer's rule: u_k'(t) = \frac{\det M_k(t)}{W(t)}, where M_k(t) is the Wronskian matrix with the kth column replaced by (0, 0, \dots, 0, f(t))^T. Integrate to find the u_i(t), compute y_p(t), and add to the homogeneous solution for the general solution y(t) = \sum_{i=1}^n c_i y_i(t) + y_p(t). This method works for continuous coefficients and forcing functions, though the integrals for the u_i(t) may require special evaluation techniques.
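
The second-order procedure above maps directly onto a few lines of computer algebra. The following SymPy sketch wraps the formulas for u_1' and u_2' in a helper; the function name and the test equation y'' - y = e^t are illustrative assumptions of this sketch:

```python
import sympy as sp

t = sp.symbols('t')

def variation_of_parameters_2nd(y1, y2, g):
    """Particular solution of y'' + p*y' + q*y = g, given a fundamental
    pair y1, y2 of the homogeneous equation (SymPy expressions in t)."""
    W = sp.simplify(y1*sp.diff(y2, t) - y2*sp.diff(y1, t))  # Wronskian
    u1 = sp.integrate(-y2*g/W, t)  # constants of integration set to zero
    u2 = sp.integrate(y1*g/W, t)
    return sp.simplify(u1*y1 + u2*y2)

# Example: y'' - y = e^t with fundamental set {e^t, e^{-t}}.
yp = variation_of_parameters_2nd(sp.exp(t), sp.exp(-t), sp.exp(t))
print(yp)  # (2*t - 1)*exp(t)/4, i.e. t*e^t/2 minus a homogeneous term e^t/4

# Verify by substitution into the left-hand side.
assert sp.simplify(sp.diff(yp, t, 2) - yp - sp.exp(t)) == 0
```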

Derivation by Order

First-Order Equations

The variation of parameters method applied to first-order linear nonhomogeneous equations of the form y' + P(x)y = Q(x) involves assuming a particular solution that varies the arbitrary constant in the homogeneous solution. The associated homogeneous equation y' + P(x)y = 0 has the general solution y_h(x) = C \exp\left( -\int P(x) \, dx \right), where C is an arbitrary constant. A fundamental solution (with C = 1) is thus y_h(x) = \exp\left( -\int P(x) \, dx \right). To obtain a particular solution y_p(x), posit y_p(x) = v(x) y_h(x), where v(x) is a function to be determined. Differentiating yields y_p'(x) = v'(x) y_h(x) + v(x) y_h'(x). Substituting into the original equation gives v'(x) y_h(x) + v(x) y_h'(x) + P(x) v(x) y_h(x) = Q(x). Since y_h'(x) + P(x) y_h(x) = 0, the equation simplifies to v'(x) y_h(x) = Q(x), so v'(x) = \frac{Q(x)}{y_h(x)}. Integrating both sides produces v(x) = \int \frac{Q(x)}{y_h(x)} \, dx, and thus the particular solution is y_p(x) = y_h(x) \int \frac{Q(x)}{y_h(x)} \, dx. The general solution to the nonhomogeneous equation is then y(x) = y_p(x) + C y_h(x) = y_h(x) \left( \int \frac{Q(x)}{y_h(x)} \, dx + C \right), where the integral is an antiderivative (indefinite) and C absorbs any constant of integration. Substituting the explicit form of y_h(x) yields y(x) = \exp\left( -\int P(x) \, dx \right) \left( C + \int Q(x) \exp\left( \int P(x) \, dx \right) \, dx \right), which matches the result from the integrating factor method, confirming the equivalence for first-order cases. As an example, consider y' + y = e^{-x}. Here, P(x) = 1 and Q(x) = e^{-x}, so y_h(x) = e^{-x}. Then v'(x) = e^{-x} / e^{-x} = 1, and v(x) = x. The particular solution is y_p(x) = x e^{-x}, and the general solution is y(x) = (x + C) e^{-x}. For another illustration, solve y' - \frac{2}{x} y = x^2 for x > 0. With P(x) = -2/x and Q(x) = x^2, the homogeneous solution is y_h(x) = x^2. Then v'(x) = x^2 / x^2 = 1, so v(x) = x. The particular solution is y_p(x) = x^3, and the general solution is y(x) = x^3 + C x^2. This approach, though computationally similar to the integrating factor technique for first-order equations, introduces the core idea of varying parameters, which extends naturally to higher-order linear equations.
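
The second worked example can be reproduced step by step; a minimal SymPy sketch under the same setup:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# y' - (2/x) y = x**2 with P = -2/x and Q = x**2.
P, Q = -2/x, x**2
yh = sp.exp(-sp.integrate(P, x))   # homogeneous solution x**2 (taking C = 1)
v = sp.integrate(Q/yh, x)          # v' = Q/y_h = 1, so v = x
yp = sp.simplify(v*yh)             # particular solution x**3
print(yh, yp)                      # x**2 x**3

# Confirm that y_p satisfies the full nonhomogeneous equation.
assert sp.simplify(sp.diff(yp, x) + P*yp - Q) == 0
```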

Second-Order Equations

The method of variation of parameters for second-order linear differential equations provides a systematic approach to finding a particular solution to the nonhomogeneous equation y'' + p(x) y' + q(x) y = g(x), where p(x), q(x), and g(x) are continuous functions on an interval, assuming the associated homogeneous equation y'' + p(x) y' + q(x) y = 0 has two linearly independent solutions y_1(x) and y_2(x). To derive the particular solution, assume it takes the form y_p(x) = u_1(x) y_1(x) + u_2(x) y_2(x), where u_1(x) and u_2(x) are functions to be determined, motivated by varying the constants in the homogeneous solution c_1 y_1(x) + c_2 y_2(x). Differentiate y_p once:
y_p'(x) = u_1'(x) y_1(x) + u_1(x) y_1'(x) + u_2'(x) y_2(x) + u_2(x) y_2'(x).
To simplify the process and reduce the number of unknowns, impose the condition u_1'(x) y_1(x) + u_2'(x) y_2(x) = 0, which yields
y_p'(x) = u_1(x) y_1'(x) + u_2(x) y_2'(x).
Differentiate again to obtain the second derivative:
y_p''(x) = u_1'(x) y_1'(x) + u_1(x) y_1''(x) + u_2'(x) y_2'(x) + u_2(x) y_2''(x).
Substitute y_p, y_p', and y_p'' into the original nonhomogeneous equation. The terms involving u_1(x) and u_2(x) satisfy the homogeneous equation, so they cancel out, leaving
u_1'(x) y_1'(x) + u_2'(x) y_2'(x) = g(x).
This results in the following system of equations for u_1'(x) and u_2'(x):
\begin{align}
u_1'(x) y_1(x) + u_2'(x) y_2(x) &= 0, \\
u_1'(x) y_1'(x) + u_2'(x) y_2'(x) &= g(x).
\end{align}
The determinant of the coefficient matrix is the Wronskian W(y_1, y_2)(x) = y_1(x) y_2'(x) - y_2(x) y_1'(x), which is nonzero on the interval due to linear independence. Solving the system via Cramer's rule gives
u_1'(x) = -\frac{y_2(x) g(x)}{W(y_1, y_2)(x)}, \quad u_2'(x) = \frac{y_1(x) g(x)}{W(y_1, y_2)(x)}.
Integrate these expressions to find u_1(x) and u_2(x), typically omitting constants of integration since they contribute only to the homogeneous solution:
u_1(x) = -\int \frac{y_2(x) g(x)}{W(y_1, y_2)(x)} \, dx, \quad u_2(x) = \int \frac{y_1(x) g(x)}{W(y_1, y_2)(x)} \, dx.
Thus, the particular solution is
y_p(x) = y_1(x) \int -\frac{y_2(x) g(x)}{W(y_1, y_2)(x)} \, dx + y_2(x) \int \frac{y_1(x) g(x)}{W(y_1, y_2)(x)} \, dx,
and the general solution is y(x) = c_1 y_1(x) + c_2 y_2(x) + y_p(x).
This formulation assumes the equation is in standard form with leading coefficient 1; for the general form a(x) y'' + b(x) y' + c(x) y = f(x), divide by a(x) first, replacing g(x) with f(x)/a(x). The method works for any continuous g(x), unlike undetermined coefficients, which requires specific forms.

General nth-Order Formulation

Detailed Derivation

Consider the general nth-order linear nonhomogeneous equation L[y] = y^{(n)} + p_{n-1}(t) y^{(n-1)} + \cdots + p_1(t) y' + p_0(t) y = g(t), where the p_k(t) and the forcing function g(t) are continuous on an interval I, and the leading coefficient is normalized to 1. The associated homogeneous equation L[y] = 0 admits a fundamental set of n linearly independent solutions \{y_1(t), y_2(t), \dots, y_n(t)\}, whose Wronskian determinant W(t) = \det \begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix} \neq 0 for all t \in I. This nonvanishing Wronskian ensures the linear independence of the solutions. To find a particular solution y_p(t) to the nonhomogeneous equation, assume the form y_p(t) = u_1(t) y_1(t) + u_2(t) y_2(t) + \cdots + u_n(t) y_n(t), where the u_i(t) are unknown functions to be determined. This ansatz varies the constants in the homogeneous solution by treating them as functions of t. Differentiating y_p(t) yields y_p'(t) = u_1' y_1 + u_1 y_1' + u_2' y_2 + u_2 y_2' + \cdots + u_n' y_n + u_n y_n'. To simplify the process and avoid solving an overly complex system, impose the n-1 auxiliary conditions \sum_{i=1}^n u_i'(t) y_i^{(k)}(t) = 0, \quad k = 0, 1, \dots, n-2, where y_i^{(0)} = y_i. The first condition (k=0) simplifies the first derivative to y_p'(t) = \sum_{i=1}^n u_i(t) y_i'(t). Continuing this process for higher derivatives, the second derivative becomes y_p''(t) = \sum_{i=1}^n u_i'(t) y_i'(t) + \sum_{i=1}^n u_i(t) y_i''(t), and the auxiliary condition for k=1 sets the first sum to zero. Iterating up to the (n-1)th derivative gives y_p^{(n-1)}(t) = \sum_{i=1}^n u_i(t) y_i^{(n-1)}(t), since the term involving u_i' is zero by the condition for k=n-2. The nth derivative is then y_p^{(n)}(t) = \sum_{i=1}^n u_i'(t) y_i^{(n-1)}(t) + \sum_{i=1}^n u_i(t) y_i^{(n)}(t). Substituting y_p, y_p', \dots, y_p^{(n)} into L[y_p] and using the fact that L[y_i] = 0 for each of the homogeneous solutions, the terms involving the u_i(t) (without derivatives) cancel out. The remaining term is \sum_{i=1}^n u_i'(t) y_i^{(n-1)}(t) = g(t). This, combined with the n-1 auxiliary conditions, forms the full system: \begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix} \begin{pmatrix} u_1' \\ u_2' \\ \vdots \\ u_n' \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ g(t) \end{pmatrix}. The coefficient matrix is precisely the Wronskian matrix, so the system is W \mathbf{u}' = \mathbf{b}, where \mathbf{b} = (0, \dots, 0, g(t))^T. Since \det W(t) \neq 0, the unique solution for the vector \mathbf{u}' = (u_1', \dots, u_n')^T is obtained via Cramer's rule: u_k'(t) = \frac{W_k(t)}{W(t)}, \quad k = 1, \dots, n, where W_k(t) is the determinant of the matrix obtained by replacing the kth column of the Wronskian matrix with \mathbf{b}. Thus, W_k(t) = \det \begin{pmatrix} y_1 & \cdots & 0 & \cdots & y_n \\ y_1' & \cdots & 0 & \cdots & y_n' \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ y_1^{(n-2)} & \cdots & 0 & \cdots & y_n^{(n-2)} \\ y_1^{(n-1)} & \cdots & g(t) & \cdots & y_n^{(n-1)} \end{pmatrix}, with the g(t) entry in the kth column of the last row.
The functions u_k(t) are then found by integrating: u_k(t) = \int^t u_k'(s) \, ds = \int^t \frac{W_k(s)}{W(s)} \, ds, where the lower limit of integration can be chosen arbitrarily within I (constants of integration are absorbed into the homogeneous solution). The particular solution is finally y_p(t) = \sum_{k=1}^n u_k(t) y_k(t). This derivation generalizes the method from lower orders and relies fundamentally on the linear independence encoded by the Wronskian.

Particular Solution Formula

The method of variation of parameters yields a particular solution to the nonhomogeneous linear equation of order n, L[y] = y^{(n)} + p_{n-1}(x) y^{(n-1)} + \cdots + p_1(x) y' + p_0(x) y = g(x), where L is the linear differential operator, by assuming the form y_p(x) = u_1(x) y_1(x) + u_2(x) y_2(x) + \cdots + u_n(x) y_n(x). Here, y_1(x), \dots, y_n(x) form a fundamental set of solutions to the associated homogeneous equation L[y] = 0, and the functions u_i(x) are to be determined such that y_p(x) satisfies the nonhomogeneous equation. To find the u_i(x), the derivatives u_i'(x) are obtained by solving the following system of n linear equations: \begin{align*} \sum_{i=1}^n u_i'(x) y_i(x) &= 0, \\ \sum_{i=1}^n u_i'(x) y_i'(x) &= 0, \\ &\vdots \\ \sum_{i=1}^n u_i'(x) y_i^{(n-2)}(x) &= 0, \\ \sum_{i=1}^n u_i'(x) y_i^{(n-1)}(x) &= g(x). \end{align*} This system ensures that the higher-order derivatives of y_p(x) simplify appropriately, avoiding the introduction of extraneous terms beyond the nonhomogeneous forcing function g(x). The solution for each u_k'(x) is given by Cramer's rule as u_k'(x) = \frac{W_k(x)}{W(x)}, where W(x) is the Wronskian determinant of the fundamental solutions, W(x) = \det \begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix}, and W_k(x) is the determinant obtained by replacing the k-th column of W(x) with the vector (0, 0, \dots, 0, g(x))^T. Since W(x) \neq 0 for a fundamental set, the u_k'(x) are well-defined. The functions u_k(x) are then found by integrating, u_k(x) = \int u_k'(x) \, dx, with the constants of integration typically set to zero to obtain a particular solution without homogeneous components. Substituting back yields the explicit formula for y_p(x). This formula generalizes the approach for lower-order cases and is particularly useful when g(x) is not of a form amenable to the method of undetermined coefficients, as it works for arbitrary continuous g(x) provided the fundamental solutions are known. The integrals involved may require advanced techniques or numerical methods in practice, but the formula provides a systematic, exact expression for y_p(x).
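
The formula translates almost verbatim into a symbolic routine. The sketch below builds the Wronskian matrix, applies Cramer's rule by column replacement, and integrates; the helper name and the toy example y''' = 6 with fundamental set \{1, t, t^2\} are illustrative assumptions of this sketch:

```python
import sympy as sp

t = sp.symbols('t')

def variation_of_parameters(ys, g):
    """Particular solution of y^(n) + p_{n-1} y^(n-1) + ... + p_0 y = g(t),
    given a fundamental set ys = [y1, ..., yn] (SymPy expressions in t)."""
    n = len(ys)
    # Wronskian matrix: row k holds the kth derivatives of y1, ..., yn.
    Wmat = sp.Matrix([[sp.diff(yi, t, k) for yi in ys] for k in range(n)])
    W = Wmat.det()
    b = sp.Matrix([0]*(n - 1) + [g])
    yp = sp.Integer(0)
    for k in range(n):
        Wk = Wmat.copy()
        Wk[:, k] = b  # Cramer's rule: replace kth column with (0, ..., 0, g)^T
        yp += sp.integrate(sp.simplify(Wk.det()/W), t) * ys[k]
    return sp.simplify(yp)

# Toy example: y''' = 6 with fundamental set {1, t, t**2}; expect y_p = t**3.
print(variation_of_parameters([sp.Integer(1), t, t**2], sp.Integer(6)))  # t**3
```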

Illustrative Examples

First-Order Case

The variation of parameters method applied to first-order linear differential equations addresses nonhomogeneous equations of the standard form y' + p(x)y = q(x), where p(x) and q(x) are continuous functions. This approach assumes a particular solution of the form y_p(x) = u(x) y_h(x), where y_h(x) is a solution to the associated homogeneous equation y' + p(x)y = 0, and u(x) is a function to be determined. The method yields an explicit formula for the particular solution, which is equivalent to the one obtained via the integrating factor technique. To derive the solution, first solve the homogeneous equation by separation of variables: \frac{dy}{dx} = -p(x)y \implies \frac{dy}{y} = -p(x)\, dx \implies \ln |y| = -\int p(x)\, dx + C_1 \implies y_h(x) = C e^{-\int p(x)\, dx}, where C is an arbitrary constant (often normalized to y_h(x) = e^{-\int p(x)\, dx} for simplicity). Assume the particular solution y_p(x) = u(x) e^{-\int p(x)\, dx}. Differentiate using the product rule: y_p'(x) = u'(x) e^{-\int p(x)\, dx} + u(x) \left( -p(x) e^{-\int p(x)\, dx} \right). Substitute into the original equation: y_p' + p(x) y_p = u'(x) e^{-\int p(x)\, dx} - p(x) u(x) e^{-\int p(x)\, dx} + p(x) u(x) e^{-\int p(x)\, dx} = u'(x) e^{-\int p(x)\, dx} = q(x). Thus, u'(x) = q(x) e^{\int p(x)\, dx} \implies u(x) = \int q(x) e^{\int p(x)\, dx}\, dx. The particular solution is then y_p(x) = e^{-\int p(x)\, dx} \int q(x) e^{\int p(x)\, dx}\, dx, and the general solution is y(x) = y_h(x) + y_p(x) = C e^{-\int p(x)\, dx} + y_p(x). This formula highlights the integrating factor \mu(x) = e^{\int p(x)\, dx}, as multiplying the original equation by \mu(x) yields \frac{d}{dx} \left( y e^{\int p(x)\, dx} \right) = q(x) e^{\int p(x)\, dx}, which integrates directly to the same result. The method's strength lies in its generality, applicable even when q(x) is not easily integrable otherwise, though for first-order cases, the integrating factor is often more direct. For illustration, consider the equation y' + 2y = 4x. The homogeneous solution is y_h(x) = C e^{-2x}. Assuming y_p(x) = u(x) e^{-2x} gives u'(x) e^{-2x} = 4x, so u'(x) = 4x e^{2x}. Integrating yields u(x) = (2x - 1) e^{2x} (ignoring the constant for the particular solution), hence y_p(x) = 2x - 1. The general solution is y(x) = C e^{-2x} + 2x - 1.
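
The closing example can be checked in a few lines; a minimal SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')

# y' + 2y = 4x: homogeneous solution e^{-2x}, so u'(x) = 4x*e^{2x}.
u = sp.integrate(4*x*sp.exp(2*x), x)   # (2*x - 1)*exp(2*x), possibly unfactored
yp = sp.simplify(u*sp.exp(-2*x))       # reduces to 2*x - 1
print(yp)

# Substituting back confirms the particular solution.
assert sp.simplify(sp.diff(yp, x) + 2*yp - 4*x) == 0
```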

Second-Order Specific Case

For second-order linear nonhomogeneous equations of the form y'' + p(x) y' + q(x) y = g(x), the method of variation of parameters assumes a particular solution y_p = u_1(x) y_1(x) + u_2(x) y_2(x), where y_1 and y_2 are linearly independent solutions to the associated homogeneous equation y'' + p(x) y' + q(x) y = 0. The functions u_1 and u_2 are determined by solving the system derived from substituting y_p into the original equation and imposing the condition u_1' y_1 + u_2' y_2 = 0 to simplify the process. This yields u_1' y_1' + u_2' y_2' = g(x), and the solutions are u_1' = -\frac{y_2 g}{W} and u_2' = \frac{y_1 g}{W}, where W = y_1 y_2' - y_2 y_1' is the Wronskian of y_1 and y_2. Integrating these gives u_1 and u_2, leading to y_p. A classic illustrative example is solving y'' + y = \sec x for 0 < x < \frac{\pi}{2}. The homogeneous equation y'' + y = 0 has solutions y_1 = \cos x and y_2 = \sin x, so the complementary solution is y_c = c_1 \cos x + c_2 \sin x. The Wronskian is W = \cos x \cdot \cos x - \sin x \cdot (-\sin x) = 1. Thus, u_1' = -\frac{y_2 g}{W} = -\sin x \cdot \sec x = -\tan x, \quad u_2' = \frac{y_1 g}{W} = \cos x \cdot \sec x = 1. Integrating, u_1 = \int -\tan x \, dx = \ln |\cos x| (ignoring the constant, as it contributes to the homogeneous solution) and u_2 = \int 1 \, dx = x. The particular solution is y_p = u_1 y_1 + u_2 y_2 = (\ln |\cos x|) \cos x + x \sin x. The general solution is y = y_c + y_p = c_1 \cos x + c_2 \sin x + (\ln |\cos x|) \cos x + x \sin x. This example demonstrates how variation of parameters handles non-constant forcing terms like \sec x, which are unsuitable for the method of undetermined coefficients. To verify, substitute y_p back into the original equation. First, compute derivatives: y_p' = (\ln |\cos x|)(-\sin x) + \cos x \cdot \frac{-\sin x}{\cos x} + x \cos x + \sin x = -\sin x \ln |\cos x| - \sin x + x \cos x + \sin x = -\sin x \ln |\cos x| + x \cos x, and y_p'' = -\cos x \ln |\cos x| - \sin x \cdot \frac{-\sin x}{\cos x} + \cos x - x \sin x = -\cos x \ln |\cos x| + \frac{\sin^2 x}{\cos x} + \cos x - x \sin x. Adding y_p gives y_p'' + y_p = \frac{\sin^2 x}{\cos x} + \cos x = \frac{\sin^2 x + \cos^2 x}{\cos x} = \sec x, confirming the solution. This step-by-step process highlights the method's reliance on the Wronskian and integration, making it versatile for variable coefficients or irregular right-hand sides.
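
The same verification can be delegated to a computer algebra system; a minimal SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')

# Particular solution found above, valid on (0, pi/2) where cos(x) > 0.
yp = sp.log(sp.cos(x))*sp.cos(x) + x*sp.sin(x)

# y_p'' + y_p should reduce to sec(x).
print(sp.simplify(sp.diff(yp, x, 2) + yp))  # 1/cos(x)
```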

Higher-Order Illustration

To illustrate the application of variation of parameters to higher-order linear differential equations, consider a third-order nonhomogeneous equation with constant coefficients. The general approach involves identifying a fundamental set of solutions to the associated homogeneous equation and then solving for the parameter functions u_i(t) via a system derived from the Wronskian matrix. This method systematically constructs a particular solution without assuming the form of the nonhomogeneous term g(t), making it versatile for arbitrary forcing functions. For concreteness, solve the equation y''' - 3y'' + 2y' = t using variation of parameters. First, solve the homogeneous equation y''' - 3y'' + 2y' = 0. The characteristic equation is r^3 - 3r^2 + 2r = 0, or r(r-1)(r-2) = 0, yielding r = 0, 1, 2. Thus, a fundamental set of solutions is y_1(t) = 1, y_2(t) = e^t, y_3(t) = e^{2t}, and the general homogeneous solution is y_h(t) = c_1 + c_2 e^t + c_3 e^{2t}. The Wronskian of these solutions is W(t) = \begin{vmatrix} 1 & e^t & e^{2t} \\ 0 & e^t & 2e^{2t} \\ 0 & e^t & 4e^{2t} \end{vmatrix} = 1 \cdot (e^t \cdot 4e^{2t} - 2e^{2t} \cdot e^t) = 2e^{3t}. Assume a particular solution of the form y_p(t) = u_1(t) \cdot 1 + u_2(t) \cdot e^t + u_3(t) \cdot e^{2t}. The functions u_i'(t) satisfy the system \begin{cases} u_1' + u_2' e^t + u_3' e^{2t} = 0, \\ u_2' e^t + 2 u_3' e^{2t} = 0, \\ u_2' e^t + 4 u_3' e^{2t} = t. \end{cases} Using Cramer's rule (or equivalently, the replaced-column determinants), the derivatives are u_1'(t) = \frac{1}{W(t)} \begin{vmatrix} 0 & e^t & e^{2t} \\ 0 & e^t & 2e^{2t} \\ t & e^t & 4e^{2t} \end{vmatrix} = \frac{t}{2}, u_2'(t) = \frac{1}{W(t)} \begin{vmatrix} 1 & 0 & e^{2t} \\ 0 & 0 & 2e^{2t} \\ 0 & t & 4e^{2t} \end{vmatrix} = -t e^{-t}, u_3'(t) = \frac{1}{W(t)} \begin{vmatrix} 1 & e^t & 0 \\ 0 & e^t & 0 \\ 0 & e^t & t \end{vmatrix} = \frac{t}{2 e^{2t}}. Integrating these (setting constants of integration to zero for the particular solution), u_1(t) = \int \frac{t}{2} \, dt = \frac{t^2}{4}, u_2(t) = \int -t e^{-t} \, dt = e^{-t}(t + 1), u_3(t) = \int \frac{t}{2} e^{-2t} \, dt = -\frac{1}{8} e^{-2t} (2t + 1). Thus, y_p(t) = \frac{t^2}{4} \cdot 1 + e^{-t}(t + 1) \cdot e^t - \frac{1}{8} e^{-2t} (2t + 1) \cdot e^{2t} = \frac{t^2}{4} + (t + 1) - \frac{2t + 1}{8}. Simplifying the constant and linear terms gives y_p(t) = \frac{1}{4} t^2 + \frac{3}{4} t + \frac{7}{8}. The general solution is y(t) = y_h(t) + y_p(t) = c_1 + c_2 e^t + c_3 e^{2t} + \frac{1}{4} t^2 + \frac{3}{4} t + \frac{7}{8}. This particular solution aligns with the form expected for a polynomial right-hand side, demonstrating the method's effectiveness even when undetermined coefficients could also apply. For higher n, the process scales analogously, with the system size increasing to n \times n, but the Wronskian framework remains the core tool.
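
A quick symbolic check of the computed particular solution (a minimal SymPy sketch):

```python
import sympy as sp

t = sp.symbols('t')
yp = t**2/4 + sp.Rational(3, 4)*t + sp.Rational(7, 8)

# Substitute into y''' - 3*y'' + 2*y'; the result should be exactly t.
lhs = sp.diff(yp, t, 3) - 3*sp.diff(yp, t, 2) + 2*sp.diff(yp, t)
print(sp.simplify(lhs))  # t
```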

Comparison to Undetermined Coefficients

Key Similarities

Both the method of variation of parameters and the method of undetermined coefficients serve as techniques for finding particular solutions to nonhomogeneous linear ordinary differential equations, enabling the construction of the general solution as the sum of the complementary (homogeneous) solution and the particular solution. This shared objective applies particularly to second-order equations of the form ay'' + by' + cy = g(x), where the method of undetermined coefficients typically assumes constant coefficients, while variation of parameters applies more generally. A fundamental similarity lies in the prerequisite step of solving the associated homogeneous equation to obtain the basis functions for the complementary solution, which forms the foundation for constructing the particular solution in either approach. Furthermore, both methods follow a comparable three-step process: first, determine the homogeneous solution; second, derive a particular solution tailored to the nonhomogeneous term g(x); and third, combine the solutions and apply initial or boundary conditions to find the unique solution. This structured workflow ensures consistency in application, regardless of the specific technique used for the particular solution. In cases where the nonhomogeneous term g(x) takes forms amenable to undetermined coefficients—such as polynomials, exponentials, or trigonometric functions—both methods yield particular solutions that agree up to terms from the homogeneous solution, highlighting their interchangeable results under suitable conditions. This equivalence underscores the underlying linear algebra principles common to both, where the particular solution satisfies the nonhomogeneous equation without altering the homogeneous structure.

Differences and Selection Criteria

The method of undetermined coefficients and variation of parameters both seek particular solutions to nonhomogeneous linear differential equations, but they differ fundamentally in scope, procedure, and computational demands. The undetermined coefficients approach assumes a specific form for the particular solution based on the nonhomogeneous term g(t), such as polynomials, exponentials, sines, cosines, or their products, and determines unknown constants through substitution into the equation, reducing the problem to algebraic manipulation. In contrast, variation of parameters starts with the general solution to the associated homogeneous equation and replaces the arbitrary constants with functions to be determined, leading to a system of equations solved via integrals involving the Wronskian of the homogeneous solutions; this yields an explicit formula for the particular solution applicable to any continuous g(t). While undetermined coefficients is restricted to equations with constant coefficients and recognizable g(t), variation of parameters extends to variable coefficients and arbitrary forcing functions, though it requires prior knowledge of two linearly independent homogeneous solutions. A key similarity lies in their reliance on the homogeneous solution, but differences emerge in efficiency and limitations. Undetermined coefficients often produces cleaner results without extraneous terms when successful, as the guessed form directly matches the particular solution up to constants. Variation of parameters, however, may introduce additional homogeneous components in the particular solution, which do not affect the overall general solution but can complicate verification; for instance, solving y'' - y = e^x via undetermined coefficients gives y_p = \frac{1}{2} x e^x, whereas variation of parameters yields y_p = \frac{1}{2} x e^x - \frac{1}{4} e^x, where the subtracted term is part of the homogeneous solution. Computationally, undetermined coefficients avoids integration, making it faster for applicable cases, while variation of parameters demands evaluating potentially difficult integrals, such as those arising from g(t) = \sec x or g(t) = \ln |x|, where undetermined coefficients fails entirely. Selection between the methods hinges on the form of g(t) and the equation's coefficients, prioritizing efficiency without sacrificing solvability. Opt for undetermined coefficients when the equation has constant coefficients and g(t) is a finite sum of terms like p(t) e^{\alpha t} \cos(\beta t) or p(t) e^{\alpha t} \sin(\beta t), where p(t) is a polynomial, as this leverages standard trial forms for rapid algebraic resolution. Resort to variation of parameters for broader applicability, including variable coefficients or non-standard g(t) such as e^{x^2}, \tan x, or arbitrary continuous functions, serving as a reliable fallback once the homogeneous solution is obtained—though the integrals must be feasible. In practice, if undetermined coefficients applies but leads to messy algebra (e.g., high-degree polynomials), variation of parameters may still be preferable for its systematic integral-based formula, ensuring completeness at the cost of added effort.
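
The y'' - y = e^x comparison above is straightforward to confirm symbolically; a minimal SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')

# Particular solutions from the two methods for y'' - y = e^x.
yp_uc = x*sp.exp(x)/2                    # undetermined coefficients
yp_vp = x*sp.exp(x)/2 - sp.exp(x)/4      # variation of parameters

# Both satisfy the nonhomogeneous equation ...
for yp in (yp_uc, yp_vp):
    assert sp.simplify(sp.diff(yp, x, 2) - yp - sp.exp(x)) == 0

# ... and their difference, e^x/4, solves the homogeneous equation y'' - y = 0.
d = yp_uc - yp_vp
assert sp.simplify(sp.diff(d, x, 2) - d) == 0
```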

Applications and Extensions

In Physics and Engineering

In physics and engineering, the method of variation of parameters is extensively applied to solve nonhomogeneous linear ordinary differential equations (ODEs) that model dynamic systems under external forcing. These systems often arise in mechanical vibrations, electrical circuits, and control processes, where the homogeneous solution captures free motion and the particular solution accounts for applied inputs like forces or voltages. The technique assumes a particular solution as a combination of the fundamental homogeneous solutions with time-varying parameters, determined via the Wronskian or integrating factors, making it versatile for arbitrary forcing functions when methods like undetermined coefficients are inapplicable. A primary application occurs in mechanical engineering for analyzing forced vibrations in mass-spring-damper systems. The governing equation is typically m \ddot{x} + b \dot{x} + k x = F(t), where m is the mass, b is the damping coefficient, k is the spring constant, and F(t) is the external force. For undamped cases (b = 0), the homogeneous solution is x_h(t) = c_1 \cos(\omega_0 t) + c_2 \sin(\omega_0 t) with natural frequency \omega_0 = \sqrt{k/m}. Variation of parameters yields the particular solution by setting x_p(t) = u_1(t) \cos(\omega_0 t) + u_2(t) \sin(\omega_0 t), solving for u_1' and u_2' using the system u_1' \cos(\omega_0 t) + u_2' \sin(\omega_0 t) = 0 and -u_1' \omega_0 \sin(\omega_0 t) + u_2' \omega_0 \cos(\omega_0 t) = F(t)/m. For harmonic forcing F(t) = F_0 \cos(\omega t) at resonance (\omega = \omega_0), this produces x_p(t) = \frac{F_0 t}{2 m \omega_0} \sin(\omega_0 t), illustrating amplitude growth over time. This approach is crucial for designing structures like bridges or vehicles to mitigate resonance effects. In electrical engineering, variation of parameters solves the ODE for series RLC circuits under an impressed voltage: L q''(t) + R q'(t) + \frac{1}{C} q(t) = E(t), where L is the inductance, R is the resistance, C is the capacitance, q(t) is the charge, and E(t) is the voltage. The homogeneous solution q_h(t) depends on the roots of the characteristic equation, while the particular solution q_p(t) is found by varying parameters in the fundamental set, leading to integrals involving E(t) and the Wronskian. For example, with E(t) = E_0 \cos(\omega t), the steady-state q_p(t) reveals phase shifts and impedance, essential for circuit analysis and power systems. The method extends to transient analysis, where initial conditions determine constants in the full solution. Beyond these, the technique supports control analysis for systems like joint controllers in robotic arms, modeled by J \ddot{\theta} + b \dot{\theta} + k \theta = \tau(t), where variation of parameters handles non-constant torques \tau(t). It also aids in multidegree-of-freedom vibrations, such as beam dynamics via the Euler-Bernoulli equation \rho A \frac{\partial^2 u}{\partial t^2} + EI \frac{\partial^4 u}{\partial x^4} = f(x,t), by reducing the problem to ODEs along vibration modes. These applications underscore its role in predicting system responses to ensure stability and performance in physical prototypes.
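
The resonant particular solution quoted above can be verified by direct substitution; a minimal SymPy sketch (symbol names are illustrative):

```python
import sympy as sp

t = sp.symbols('t')
m, w0, F0 = sp.symbols('m omega0 F_0', positive=True)

# Resonant forcing of an undamped oscillator: m x'' + k x = F0*cos(w0*t),
# with k = m*w0**2. Claimed particular solution from variation of parameters:
xp = F0*t/(2*m*w0)*sp.sin(w0*t)

# Substitute into m x'' + m*w0**2 x and compare with the forcing term.
residual = sp.simplify(m*sp.diff(xp, t, 2) + m*w0**2*xp - F0*sp.cos(w0*t))
print(residual)  # 0, confirming the secular (linearly growing) response
```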

To Partial Differential Equations

The extension of the variation of parameters method to partial differential equations (PDEs) primarily addresses linear inhomogeneous evolution equations, where the forcing term depends on both space and time variables. In this context, the technique is commonly referred to as Duhamel's principle, which constructs a particular solution by integrating the effects of the inhomogeneous term over time, using the solution operator of the corresponding homogeneous problem. This approach leverages the semigroup theory for abstract evolution equations of the form u_t = A u + f(t) in a Banach space, with initial condition u(0) = u_0, yielding the solution u(t) = e^{tA} u_0 + \int_0^t e^{(t-s)A} f(s) \, ds, where e^{tA} is the semigroup generated by the linear operator A. Duhamel's principle originates from the work of Jean-Marie Duhamel in the 19th century and generalizes the ODE case by treating the source term f(s) as an impulsive forcing at each time s, then superposing the homogeneous solutions starting from those impulses. For PDEs, this is particularly effective for parabolic and hyperbolic equations, such as the heat and wave equations, where A corresponds to spatial differential operators like the Laplacian. The principle ensures well-posedness under suitable conditions on A (e.g., sectorial for parabolic problems) and f (e.g., continuous in time). In applications to the heat equation u_t = \Delta u + g(x,t) on a bounded domain with homogeneous boundary conditions and u(x,0) = f(x), separation of variables leads to an eigenfunction expansion u(x,t) = \sum_n a_n(t) \phi_n(x), where \{\phi_n\} are eigenfunctions of -\Delta with eigenvalues \lambda_n. Substituting yields a system of ODEs a_n'(t) + \lambda_n a_n(t) = q_n(t), solved via variation of parameters as a_n(t) = a_n(0) e^{-\lambda_n t} + \int_0^t e^{-\lambda_n (t-s)} q_n(s) \, ds, with q_n(t) the projection of g onto \phi_n. This recovers Duhamel's integral form, with the eigenfunction series playing the role of the solution operator. Similar decompositions apply after handling nonhomogeneous boundaries by subtracting a reference function. For the wave equation u_{tt} = c^2 \Delta u + f(x,t) with initial conditions u(x,0) = g(x), u_t(x,0) = h(x), Duhamel's principle gives u(x,t) = u_h(x,t) + \int_0^t \int_\Omega K(x,y;t-s) f(y,s) \, dy \, ds, where u_h solves the homogeneous problem and K is the fundamental solution (given explicitly by d'Alembert's formula in one dimension). This method highlights the propagation of the source term at speed c, essential for modeling wave phenomena with time-varying forces. Extensions to higher dimensions and nonlinear cases often reformulate semilinear problems as fixed-point integrals via this principle.
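
For a single Fourier mode, the variation-of-parameters formula for a_n(t) can be checked symbolically. The sketch below uses an illustrative forcing q(t) = sin(t) and treats \lambda_n and a_n(0) as a positive symbol and a free symbol, respectively (these choices are assumptions of the sketch):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
lam = sp.symbols('lambda_n', positive=True)
a0 = sp.symbols('a_n0')

# One Fourier mode of the forced heat equation: a'(t) + lam*a(t) = q(t).
# Duhamel/variation-of-parameters formula with illustrative q(t) = sin(t):
q = sp.sin(s)
a = a0*sp.exp(-lam*t) + sp.integrate(sp.exp(-lam*(t - s))*q, (s, 0, t))

# Check the ODE and the initial condition symbolically.
assert sp.simplify(sp.diff(a, t) + lam*a - sp.sin(t)) == 0
assert sp.simplify(a.subs(t, 0) - a0) == 0
```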
