
Linear differential equation

A linear differential equation is a differential equation in which the unknown function and its derivatives appear linearly, meaning to the first power with no products between them or nonlinear functions involving them, typically expressed for an nth-order ordinary differential equation (ODE) as a_n(x) y^{(n)}(x) + a_{n-1}(x) y^{(n-1)}(x) + \cdots + a_1(x) y'(x) + a_0(x) y(x) = g(x), where the coefficients a_i(x) and the forcing function g(x) are given functions of the independent variable x. These equations are fundamental in mathematics and the applied sciences because they satisfy the superposition principle, allowing solutions to be added and scaled to form new solutions, which simplifies analysis of systems where responses are proportional to inputs. Linear differential equations encompass both homogeneous cases (where g(x) = 0) and nonhomogeneous cases (where g(x) \neq 0), and they can involve partial derivatives in partial differential equations (PDEs), though the focus is often on ODEs for initial value problems. For first-order linear ODEs of the form y'(x) + p(x) y(x) = q(x), solutions are obtained using an integrating factor \mu(x) = e^{\int p(x) \, dx}, which transforms the equation into an exact derivative that can be integrated directly. Higher-order equations with constant coefficients are solved via the characteristic equation, whose roots determine the form of the general solution, often involving exponentials, sines, and cosines for complex roots. Methods like undetermined coefficients and variation of parameters extend to nonhomogeneous cases, making these equations solvable analytically in many scenarios. Linear differential equations model a wide array of phenomena, including population growth, electrical circuits, and mechanical vibrations, due to their ability to represent systems governed by linear approximations of physical laws. Their study forms a core part of differential equations theory, with existence and uniqueness theorems guaranteeing solutions under conditions such as continuity of the coefficients.

Basic Concepts

Definition

A linear ordinary differential equation (ODE) of order n takes the form a_n(x) \frac{d^n y}{dx^n} + a_{n-1}(x) \frac{d^{n-1} y}{dx^{n-1}} + \cdots + a_1(x) \frac{dy}{dx} + a_0(x) y = g(x), where the a_i(x) (for i = 0, 1, \dots, n) are given coefficient functions, y = y(x) is the unknown dependent variable, and g(x) is a known forcing function (also called the nonhomogeneous term). The order n refers to the highest derivative present in the equation. The equation is linear because the unknown function y and all its derivatives appear to the first (linear) power only, with no products among them or nonlinear functions (such as squares, exponentials, or compositions) involving y or its derivatives; any violation of this condition renders the equation nonlinear. If g(x) = 0, the equation is termed homogeneous (detailed further in the Terminology section). This entry concerns ordinary differential equations, which depend on a single independent variable; partial differential equations, involving partial derivatives with respect to multiple independent variables, lie beyond the present scope. Linear differential equations originated in the 18th century amid efforts by Leonhard Euler and Joseph-Louis Lagrange to model mechanical systems, such as celestial motion and variational problems in physics. Euler advanced methods for integrating linear ODEs as early as 1736, applying them to tractional equations. Lagrange built upon this foundation in the 1760s and 1770s, developing systematic approaches to solving and applying linear ODEs in mechanics and astronomy.

Terminology

The order of a linear differential equation is defined as the highest order of derivative appearing in the equation. For instance, an equation involving only the first derivative is first-order, while one involving the second derivative is second-order. A linear differential equation is homogeneous if the right-hand side is zero, meaning all terms involve the unknown function or its derivatives; otherwise, it is non-homogeneous. In standard form for an nth-order equation, this appears as a_n(x) y^{(n)} + \cdots + a_1(x) y' + a_0(x) y = g(x), where g(x) = 0 for the homogeneous case. Standard notation in linear differential equations typically denotes the dependent variable as y(x), with the independent variable x, and derivatives using primes: y' = \frac{dy}{dx}, y'' = \frac{d^2 y}{dx^2}, and so on. Alternatively, the differential operator D = \frac{d}{dx} is used in some contexts, so Dy = y' and D^n y = y^{(n)}. An initial value problem (IVP) consists of a differential equation supplemented by initial conditions specifying the value of the solution and its first n-1 derivatives at a single point x_0, for an nth-order equation. A boundary value problem (BVP), in contrast, specifies conditions at multiple points, often the endpoints of an interval. The general solution of an nth-order linear homogeneous differential equation is a linear combination of n linearly independent solutions, containing n arbitrary constants. For a non-homogeneous equation, it is the sum of the general solution to the associated homogeneous equation and a particular solution to the non-homogeneous equation. A particular solution is any specific function that satisfies the non-homogeneous equation, without arbitrary constants. Solutions to a linear differential equation are linearly independent if no nontrivial linear combination of them equals the zero function on an interval. The Wronskian provides a test for linear independence: for n functions y_1, \dots, y_n, it is the determinant W(y_1, \dots, y_n) = \det \begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix}, and if W \neq 0 at some point in the interval, the functions are linearly independent there. This terminology assumes familiarity with basic calculus concepts, such as derivatives and integrals, but no prior knowledge of differential equations.
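
As a concrete check, the Wronskian can be computed symbolically. The following is a minimal sketch assuming SymPy is available; the functions e^x and e^{2x} (solutions of y'' - 3y' + 2y = 0, discussed later) are an illustrative choice.

    import sympy as sp

    x = sp.symbols('x')
    y1, y2 = sp.exp(x), sp.exp(2 * x)

    # Wronskian: determinant of the matrix built from the functions and their derivatives.
    W = sp.Matrix([[y1, y2],
                   [sp.diff(y1, x), sp.diff(y2, x)]]).det()

    print(sp.simplify(W))  # exp(3*x): nonzero for all x, so y1 and y2 are linearly independent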

Linear Differential Operators

A linear differential operator is a mapping that formalizes the action of differentiation in a linear manner on functions. Formally, it is expressed as L = \sum_{k=0}^n a_k(x) D^k, where D = \frac{d}{dx} denotes the differentiation operator, a_k(x) are coefficient functions, and n is the order of the operator. This representation captures linear ordinary differential equations of order n in operator notation. The defining property of such an operator is linearity: for any functions y and z in its domain and scalar c, L(c y + z) = c L(y) + L(z). This additivity and homogeneity under scalar multiplication ensure that the operator preserves linear combinations of functions. Composition of operators is also possible; for instance, the product L_1 L_2 applied to a function yields another linear differential operator of order equal to the sum of the individual orders, provided the coefficients allow. In the abstract setting, a linear differential equation takes the form Ly = g(x), where g(x) is a given function; the kernel of L, denoted \ker L = \{ y \mid Ly = 0 \}, consists of all functions annihilated by the operator and spans the solution space for the associated homogeneous equation Ly = 0. The adjoint L^* provides a dual perspective, defined through integration by parts to satisfy relations like \int u \, Lv \, dx = \int (L^* u) v \, dx + boundary terms, which is useful in variational formulations and boundary value problems. For a second-order example, consider L = a(x) D^2 + b(x) D + c(x), which corresponds to the equation a(x) y'' + b(x) y' + c(x) y = g(x); here, the linearity ensures that superpositions of solutions to the homogeneous case remain solutions.
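
The linearity property can be verified symbolically for any particular choice of coefficients. The sketch below assumes SymPy; the coefficients a(x) = sin x, b(x) = e^x, q(x) = x^2 and the test functions are arbitrary illustrative choices.

    import sympy as sp

    x, c = sp.symbols('x c')
    a, b, q = sp.sin(x), sp.exp(x), x**2   # illustrative coefficient functions

    def L(y):
        """Apply the second-order linear operator a(x) y'' + b(x) y' + q(x) y."""
        return a * sp.diff(y, x, 2) + b * sp.diff(y, x) + q * y

    y = sp.cos(x)        # arbitrary test functions
    z = sp.log(x)

    lhs = L(c * y + z)
    rhs = c * L(y) + L(z)
    print(sp.simplify(lhs - rhs))  # 0, confirming L(c*y + z) = c*L(y) + L(z) in this case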

Theory of Solutions

Types of Solutions

The general solution to a linear ordinary differential equation (ODE) of order n is given by y(x) = y_h(x) + y_p(x), where y_h(x) is the general solution to the associated homogeneous equation and y_p(x) is any particular solution to the nonhomogeneous equation. This superposition principle holds because the set of solutions to the homogeneous equation forms a vector space of dimension n, and adding a particular solution shifts the solution space affinely to cover all solutions to the nonhomogeneous case. Classical solutions to linear ODEs are functions that are sufficiently differentiable—typically n times continuously differentiable for an nth-order equation—and satisfy the equation pointwise at every point in their domain of definition. These solutions assume continuous coefficients and forcing terms, ensuring the derivatives exist in the classical sense without singularities or discontinuities disrupting the pointwise equality. Generalized solutions extend the notion of classical solutions for linear ODEs with discontinuous or nonsmooth coefficients or right-hand sides, often defined in the sense of distributions or through integral formulations that allow weaker regularity conditions. For instance, in cases where the right-hand side is non-differentiable, generalized derivatives can be employed to construct solutions that satisfy the equation in a distributional sense, preserving the linear structure while accommodating irregularities. Series solutions represent expansions that satisfy linear ODEs term by term, such as ordinary power series around ordinary points or Frobenius series around regular singular points, with convergence guaranteed within a radius determined by the distance to the nearest singularity in the coefficients. The Frobenius method, in particular, yields solutions of the form y(x) = x^r \sum_{k=0}^{\infty} a_k x^k for equations with regular singular points, providing analytic approximations valid in annular regions around the singularity. The Picard-Lindelöf theorem establishes local existence and uniqueness of solutions to initial value problems y' = f(x, y) with y(x_0) = y_0, provided f is continuous and Lipschitz continuous in y on a rectangular region around (x_0, y_0); this extends to higher-order linear equations via reduction to first-order system form. The theorem guarantees a unique solution on some interval |x - x_0| < h, where h depends on the Lipschitz constant and bounds of f, ensuring that classical solutions are unique under these conditions.
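
The existence part of the Picard-Lindelöf theorem is proved constructively via the Picard iteration y_{k+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_k(t)) \, dt. The following minimal sketch assumes SymPy is available; the IVP y' = 2y, y(0) = 1 is an illustrative choice whose unique solution is e^{2x}, and the iterates are the partial sums of its Taylor series.

    import sympy as sp

    x, t = sp.symbols('x t')

    def f(s, y):
        # Right-hand side f(x, y) = 2y of the illustrative linear IVP.
        return 2 * y

    y = sp.Integer(1)               # y_0(x): the constant initial value 1

    for _ in range(5):
        # Picard step: y_{k+1}(x) = y(0) + integral_0^x f(t, y_k(t)) dt
        y = 1 + sp.integrate(f(t, y.subs(x, t)), (t, 0, x))

    print(sp.expand(y))             # partial sum of the Taylor series of exp(2x)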

Homogeneous Equations

A homogeneous linear differential equation of order n is expressed in the form Ly = 0, where L is a linear differential operator acting on sufficiently differentiable functions y, and the equation lacks a forcing term. The set of all solutions to this equation forms a vector space over the field of scalars (typically the real or complex numbers), with addition and scalar multiplication defined pointwise on the functions. This vector space has dimension exactly n, matching the order of the equation, provided the coefficients of L are continuous on an appropriate interval. The superposition principle is a key property of this solution space: if y_1, \dots, y_k are solutions to Ly = 0, then any linear combination \sum_{i=1}^k c_i y_i, where c_i are constants, is also a solution. This follows directly from the linearity of the operator L, as L\left[\sum c_i y_i\right] = \sum c_i L[y_i] = 0. The principle underscores the vector space structure and enables the construction of general solutions from basis elements. A fundamental set of solutions consists of n linearly independent solutions \{y_1, \dots, y_n\} that span the entire solution space. Linear independence means that the only scalars c_1, \dots, c_n satisfying \sum c_i y_i = 0 (as functions) are all c_i = 0. Such a set forms a basis for the vector space, and the general solution is then given by y_h(x) = c_1 y_1(x) + \cdots + c_n y_n(x), where the c_i are arbitrary constants determined by initial or boundary conditions. Linear independence of a set \{y_1, \dots, y_n\} can be verified using the Wronskian, defined as the determinant W(y_1, \dots, y_n)(x) = \det \begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix}. If W(y_1, \dots, y_n)(x_0) \neq 0 at some point x_0 in the interval of interest, then the solutions are linearly independent on that interval. Conversely, if they are linearly dependent, the Wronskian vanishes identically. The evolution of the Wronskian is governed by Abel's identity, which describes its derivative in terms of the coefficients of the differential operator. For a second-order equation y'' + p(x) y' + q(x) y = 0, the Wronskian satisfies \frac{d}{dx} W(y_1, y_2)(x) = -p(x) W(y_1, y_2)(x), leading to the explicit form W(y_1, y_2)(x) = W(y_1, y_2)(x_0) \exp\left( -\int_{x_0}^x p(t) \, dt \right). This identity generalizes to higher orders, confirming that the Wronskian is either identically zero or never zero on the interval, reinforcing its role in establishing linear independence.
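
As a worked check of Abel's identity, the sketch below assumes SymPy is available; the equation y'' + (1/x) y' - (1/x^2) y = 0, with solutions y_1 = x and y_2 = 1/x, is an illustrative choice. The directly computed Wronskian agrees with the exponential formula.

    import sympy as sp

    x = sp.symbols('x', positive=True)
    t = sp.symbols('t', positive=True)
    p = 1 / x                       # coefficient of y' in standard form
    y1, y2 = x, 1 / x               # two known solutions

    # Direct Wronskian: y1*y2' - y2*y1' = -2/x
    W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))

    # Abel's identity: W(x) = W(1) * exp(-integral_1^x p(t) dt)
    abel = W.subs(x, 1) * sp.exp(-sp.integrate(p.subs(x, t), (t, 1, x)))

    print(sp.simplify(W - abel))    # 0: the two expressions agree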

Non-homogeneous Equations

A non-homogeneous linear differential equation of order n is given by a_n(x) y^{(n)}(x) + a_{n-1}(x) y^{(n-1)}(x) + \cdots + a_1(x) y'(x) + a_0(x) y(x) = g(x), where the coefficients a_i(x) are continuous functions, a_n(x) \neq 0, and the forcing function g(x) is not identically zero. Unlike the homogeneous case, solutions must account for the non-zero right-hand side, which represents an external influence on the system. The general solution to this equation is the sum of the general solution to the associated homogeneous equation, denoted y_h(x), and any particular solution y_p(x) to the non-homogeneous equation: y(x) = y_h(x) + y_p(x). Here, y_h(x) spans the kernel of the linear differential operator and satisfies the homogeneous equation a_n(x) y^{(n)}(x) + \cdots + a_0(x) y(x) = 0. The particular solution y_p(x) is not unique, as adding any homogeneous solution yields another particular solution, but the overall general solution remains unchanged. This decomposition holds due to the linearity of the operator, ensuring that the operator applied to the sum separates into the homogeneous and non-homogeneous parts. The principle of superposition applies to the homogeneous component in the standard way: if y_{h1} and y_{h2} are solutions to the homogeneous equation, then so is any linear combination c_1 y_{h1} + c_2 y_{h2}. For the non-homogeneous part, superposition extends to the forcing function: if g(x) = \sum_{i=1}^m g_i(x) and y_{pi}(x) is a particular solution satisfying the equation with right-hand side g_i(x), then y_p(x) = \sum_{i=1}^m y_{pi}(x) is a particular solution for the original equation. This follows from the linearity of the differential operator, allowing the equation to be solved piecewise for each term in g(x). However, superposition does not directly combine arbitrary non-homogeneous solutions, as their difference would satisfy the homogeneous equation. One general method to find a particular solution y_p(x) is variation of parameters, which assumes the form y_p(x) = u_1(x) y_1(x) + u_2(x) y_2(x) + \cdots + u_n(x) y_n(x), where \{y_1(x), \dots, y_n(x)\} is a fundamental set of linearly independent solutions to the homogeneous equation. The functions u_i(x) are determined by solving the system of equations obtained by substituting into the original differential equation and imposing n-1 auxiliary conditions to simplify differentiation: \begin{align*} \sum_{i=1}^n u_i'(x) y_i(x) &= 0, \\ \sum_{i=1}^n u_i'(x) y_i'(x) &= 0, \\ &\vdots \\ \sum_{i=1}^n u_i'(x) y_i^{(n-2)}(x) &= 0, \\ \sum_{i=1}^n u_i'(x) y_i^{(n-1)}(x) &= \frac{g(x)}{a_n(x)}. \end{align*} This system can be written in matrix form as W \mathbf{u}' = \mathbf{b}, where W is the Wronskian matrix with entries from the fundamental solutions and their derivatives up to order n-1, \mathbf{u}' = (u_1'(x), \dots, u_n'(x))^T, and \mathbf{b} = (0, \dots, 0, g(x)/a_n(x))^T. Since the solutions are linearly independent, the Wronskian determinant is non-zero, allowing \mathbf{u}' to be solved via matrix inversion or Cramer's rule, followed by integration to obtain the u_i(x). This method works for arbitrary continuous g(x) and variable coefficients, provided the homogeneous solutions are known. Under suitable conditions, the existence of solutions is guaranteed.
Specifically, if the coefficients a_i(x) and g(x) are continuous on an open interval I containing t_0, then for any initial conditions y(t_0) = y_0, y'(t_0) = y_1, \dots, y^{(n-1)}(t_0) = y_{n-1}, there exists a unique solution on I satisfying the non-homogeneous equation and these conditions. The homogeneous equation on the same interval admits n linearly independent solutions, forming a basis for the solution space, which ensures the general solution can be constructed as described. This theorem extends the Picard-Lindelöf theorem to higher-order linear equations via reduction to a first-order system.
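
To make the procedure concrete, the sketch below assumes SymPy is available; the equation y'' + y = \sec x with fundamental set \{\cos x, \sin x\} is an illustrative choice. It solves the system W \mathbf{u}' = \mathbf{b} described above, integrates, and checks the resulting particular solution.

    import sympy as sp

    x = sp.symbols('x')
    y1, y2, g = sp.cos(x), sp.sin(x), sp.sec(x)

    # Wronskian matrix and right-hand side (here a_2(x) = 1).
    W = sp.Matrix([[y1, y2],
                   [sp.diff(y1, x), sp.diff(y2, x)]])
    b = sp.Matrix([0, g])

    u1p, u2p = W.solve(b)                    # u1'(x) and u2'(x)
    u1 = sp.integrate(sp.simplify(u1p), x)   # log(cos(x))
    u2 = sp.integrate(sp.simplify(u2p), x)   # x

    yp = sp.simplify(u1 * y1 + u2 * y2)
    print(yp)                                # x*sin(x) + cos(x)*log(cos(x))
    print(sp.simplify(sp.diff(yp, x, 2) + yp - g))   # 0: yp satisfies y'' + y = sec(x)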

Constant Coefficient Equations

Homogeneous Case

Homogeneous linear differential equations with constant coefficients take the form a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0, where the coefficients a_k are constants and a_n \neq 0. This assumption of constant coefficients allows for an exact solution method using exponential trial functions. To solve such equations, assume a solution of the form y = e^{rx}, where r is a constant to be determined. Substituting this into the differential equation yields the characteristic equation a_n r^n + a_{n-1} r^{n-1} + \cdots + a_1 r + a_0 = 0, a polynomial equation of degree n whose roots r_k determine the form of the general solution. The roots can be real or complex, distinct or repeated, and the general solution is a linear combination of linearly independent functions derived from these roots. For distinct real roots r_1, r_2, \dots, r_n, the general solution is y(x) = c_1 e^{r_1 x} + c_2 e^{r_2 x} + \cdots + c_n e^{r_n x}, where the c_k are arbitrary constants. If a real root r has multiplicity m, the corresponding terms include powers of x: e^{r x}, \, x e^{r x}, \, \dots, \, x^{m-1} e^{r x}. For complex roots appearing in conjugate pairs \alpha \pm i \beta (with \beta \neq 0) of multiplicity m, the real-valued solutions are e^{\alpha x} \cos(\beta x), \, e^{\alpha x} \sin(\beta x), \, x e^{\alpha x} \cos(\beta x), \, x e^{\alpha x} \sin(\beta x), \, \dots, \, x^{m-1} e^{\alpha x} \cos(\beta x), \, x^{m-1} e^{\alpha x} \sin(\beta x). The full general solution combines all such terms from the roots of the characteristic equation, yielding n linearly independent functions. Consider the second-order example y'' - 3y' + 2y = 0. The characteristic equation is r^2 - 3r + 2 = 0, with distinct real roots r = 1 and r = 2. Thus, the general solution is y(x) = c_1 e^{x} + c_2 e^{2x}.
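
The recipe can be checked symbolically for the example above. The following is a minimal sketch assuming SymPy is available.

    import sympy as sp

    x, r, c1, c2 = sp.symbols('x r c1 c2')

    # Characteristic equation of y'' - 3y' + 2y = 0.
    roots = sp.solve(sp.Eq(r**2 - 3*r + 2, 0), r)
    print(roots)                                   # [1, 2]

    # General solution built from the roots; the residual vanishes for all c1, c2.
    y = c1 * sp.exp(roots[0] * x) + c2 * sp.exp(roots[1] * x)
    residual = sp.diff(y, x, 2) - 3 * sp.diff(y, x) + 2 * y
    print(sp.simplify(residual))                   # 0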

Non-homogeneous Case

A non-homogeneous linear differential equation with constant coefficients takes the form Ly = g(x), where L is a linear differential operator with constant coefficients, such as Ly = a_n y^{(n)} + \cdots + a_1 y' + a_0 y, and g(x) is the non-homogeneous forcing function. The general solution is the sum of the homogeneous solution y_h (obtained from the characteristic equation) and a particular solution y_p. The method of undetermined coefficients is a standard technique for finding y_p when g(x) is a polynomial, exponential, sine, cosine, or a finite product of these forms. Assume y_p has a form similar to g(x), with undetermined coefficients to be solved for by substitution into the equation. For g(x) = p(x) e^{\alpha x} \cos(\beta x) or g(x) = p(x) e^{\alpha x} \sin(\beta x), where p(x) is a polynomial of degree m, assume y_p(x) = e^{\alpha x} \left[ (A_m x^m + \cdots + A_0) \cos(\beta x) + (B_m x^m + \cdots + B_0) \sin(\beta x) \right]. If this form overlaps with solutions to the homogeneous equation (i.e., resonance, where \alpha + i\beta is a root of multiplicity s of the characteristic equation), multiply the assumed form by x^s to obtain an independent particular solution. For simpler cases, such as g(x) = p(x) (a polynomial of degree m), assume y_p(x) = A_m x^m + \cdots + A_0; if the constant term is in y_h, multiply by x^s where s is the multiplicity of the zero root. As an example, consider the equation y'' + y = x. The assumed particular solution is y_p = ax + b, since g(x) = x is a polynomial of degree 1. Differentiating gives y_p' = a and y_p'' = 0, so substituting yields 0 + (ax + b) = x, or ax + b = x. Equating coefficients provides a = 1 and b = 0, hence y_p = x. An alternative method is variation of parameters, which applies more generally but is particularly useful for constant-coefficient cases when undetermined coefficients is impractical. For a second-order equation y'' + p y' + q y = g(x) (with constants p and q), let y_1 and y_2 be a fundamental set of homogeneous solutions, and assume y_p = u_1(x) y_1 + u_2(x) y_2. The functions u_1 and u_2 satisfy u_1' y_1 + u_2' y_2 = 0 and u_1' y_1' + u_2' y_2' = g(x), solved using the Wronskian W(y_1, y_2) = y_1 y_2' - y_2 y_1' to give u_1' = -\frac{y_2 g(x)}{W}, \quad u_2' = \frac{y_1 g(x)}{W}. Integrate to find u_1 and u_2, then form y_p. For higher-order equations, extend to n functions using the appropriate determinant form of the Wronskian.
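
The result y_p = x can be cross-checked against a general-purpose symbolic solver. A minimal sketch assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # Solve y'' + y = x; the particular solution x appears alongside the homogeneous part.
    sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) + y(x), x), y(x))
    print(sol)   # y(x) = C1*sin(x) + C2*cos(x) + x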

Variable Coefficient Equations

First-Order Case

A first-order linear differential equation with variable coefficients is expressed in the standard form \frac{dy}{dx} + p(x) y = g(x), where p(x) and g(x) are continuous functions defined on an interval containing the independent variable x. To solve this equation exactly, the method of integrating factors is employed. An integrating factor \mu(x) is defined as \mu(x) = \exp\left( \int p(x) \, dx \right). Multiplying both sides of the original equation by \mu(x) yields \mu(x) \frac{dy}{dx} + \mu(x) p(x) y = \mu(x) g(x), which simplifies to the exact derivative form \frac{d}{dx} \left( \mu(x) y \right) = \mu(x) g(x). This transformation, first systematically developed by Leonhard Euler in 1763, allows integration of both sides with respect to x. Integrating gives \mu(x) y = \int \mu(x) g(x) \, dx + C, where C is the constant of integration. Solving for y produces the general solution y(x) = \frac{1}{\mu(x)} \left( \int \mu(x) g(x) \, dx + C \right). For the homogeneous case, where g(x) = 0, the solution simplifies to y_h(x) = \frac{C}{\mu(x)}, representing the general solution to \frac{dy}{dx} + p(x) y = 0. Consider the example \frac{dy}{dx} + \frac{1}{x} y = x for x > 0. Here, p(x) = \frac{1}{x} and g(x) = x, so the integrating factor is \mu(x) = \exp\left( \int \frac{1}{x} \, dx \right) = x. Multiplying through by x gives x \frac{dy}{dx} + y = x^2, or \frac{d}{dx} (x y) = x^2. Integrating yields x y = \int x^2 \, dx = \frac{x^3}{3} + C, so y = \frac{x^2}{3} + \frac{C}{x}. The term \frac{x^2}{3} is a particular solution, while \frac{C}{x} is the homogeneous solution.
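
The same computation can be reproduced symbolically. A minimal sketch assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    C = sp.symbols('C')
    p, g = 1 / x, x                                 # the example y' + (1/x) y = x

    mu = sp.exp(sp.integrate(p, x))                 # integrating factor mu(x) = x
    y = (sp.integrate(mu * g, x) + C) / mu          # y = (x^3/3 + C) / x
    print(sp.expand(y))                             # x**2/3 + C/x

    # Verify that the result satisfies the original equation.
    print(sp.simplify(sp.diff(y, x) + p * y - g))   # 0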

Higher-Order Case

Higher-order linear differential equations with variable coefficients generally lack closed-form solutions expressible in terms of elementary functions, unlike their constant-coefficient counterparts. The absence of a general analytical method necessitates specialized techniques that exploit the structure of the equation or approximate the solution locally. These approaches are particularly useful for second- and higher-order equations where the coefficients p_i(x) vary, making exact solution infeasible. One foundational technique for homogeneous equations is reduction of order, which lowers the order when at least one nontrivial solution y_1(x) is known. For a second-order equation y'' + p(x) y' + q(x) y = 0, assume a second solution of the form y_2(x) = v(x) y_1(x); substituting yields a first-order equation in v', solvable via an integrating factor akin to the first-order case. This method extends to higher orders by iteratively reducing the equation, though it requires successive known solutions and can become cumbersome for n \geq 3. It is especially valuable for equations with variable coefficients where symmetry or prior knowledge provides an initial solution. For equations with analytic coefficients, power series solutions provide a systematic way to obtain local approximations around an ordinary point x_0, where the coefficients p_i(x) are analytic. Assume a solution y(x) = \sum_{k=0}^{\infty} a_k (x - x_0)^k, with derivatives y^{(m)}(x) = \sum_{k=m}^{\infty} \frac{k!}{(k-m)!} a_k (x - x_0)^{k-m}. Substituting into the equation and equating coefficients of like powers of (x - x_0) to zero produces a recurrence relation for the a_k, typically involving two arbitrary constants (e.g., a_0 and a_1) for second-order equations, yielding two linearly independent series solutions valid in a neighborhood of x_0. The resulting series converge within a radius determined by the distance from x_0 to the nearest singularity of the coefficients. When x_0 is a regular singular point—where, with the equation written in standard form with leading coefficient 1, (x - x_0)^{j} times the coefficient of y^{(n-j)} is analytic for j = 1, \dots, n—the Frobenius method modifies the power series approach to handle potential singularities. Seek solutions of the form y(x) = (x - x_0)^r \sum_{k=0}^{\infty} a_k (x - x_0)^k, with a_0 \neq 0. Substituting into the equation and collecting the lowest-order terms (in powers of (x - x_0)) gives the indicial equation, a polynomial equation in r whose roots determine the exponents; for distinct roots differing by a non-integer, two independent Frobenius series solutions result. If roots differ by an integer or are repeated, a second solution may involve a logarithmic term, but the method still yields formal series solutions. Numerical methods complement these analytical techniques, especially for non-analytic coefficients or global behavior. The Euler method provides a simple forward approximation but is only first-order accurate, while higher-order Runge-Kutta schemes, such as the fourth-order variant, offer improved precision by evaluating the right-hand side multiple times per step. For higher-order equations, reduce to a first-order system (e.g., via \mathbf{y} = [y, y', \dots, y^{(n-1)}]^T) and apply these integrators; modern software like MATLAB's ode45 implements adaptive Runge-Kutta methods, automatically adjusting step sizes for efficiency and error control. A classic example is the Airy equation y'' - x y = 0, which arises in wave propagation and related problems in physics, with an ordinary point at x = 0. Assuming y(x) = \sum_{k=0}^{\infty} a_k x^k yields the recurrence a_{k+3} = \frac{a_k}{(k+3)(k+2)} for k \geq 0, with arbitrary a_0 and a_1, and a_2 = 0.
This produces two linearly independent solutions: one of the form \sum_{k=0}^{\infty} c_k x^{3k} (starting with the constant term from a_0), and the other \sum_{k=0}^{\infty} d_k x^{3k+1} (starting with the x term from a_1); the standard Airy functions \mathrm{Ai}(x) and \mathrm{Bi}(x) are specific linear combinations of these two series. These series converge for all x, illustrating the method's power for unbounded domains.
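
As a numerical counterpart, the Airy equation can be reduced to a first-order system, integrated with an adaptive Runge-Kutta method, and compared against the library value of \mathrm{Ai}(x). The following is a sketch assuming NumPy and SciPy are available.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.special import airy

    def rhs(x, Y):
        # State Y = [y, y']; the Airy equation gives y'' = x * y.
        return [Y[1], x * Y[0]]

    ai0, aip0, _, _ = airy(0.0)                  # initial values Ai(0) and Ai'(0)
    xs = np.linspace(0.0, 2.0, 50)
    sol = solve_ivp(rhs, (0.0, 2.0), [ai0, aip0], t_eval=xs, rtol=1e-10, atol=1e-12)

    exact = airy(xs)[0]                          # Ai(x) from the special-function library
    print(np.max(np.abs(sol.y[0] - exact)))      # small maximum error at these tolerances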

Cauchy-Euler Equations

Cauchy-Euler equations, also known as Euler-Cauchy equations, form an important subclass of linear differential equations with variable coefficients, where the coefficients are powers of the independent variable x. The general form of an nth-order Cauchy-Euler equation is given by a_n x^n y^{(n)} + a_{n-1} x^{n-1} y^{(n-1)} + \cdots + a_1 x y' + a_0 y = g(x), where the a_i are constants and g(x) is the non-homogeneous term (with g(x) = 0 for the homogeneous case). These equations arise in applications involving radial symmetry or power-law dependencies, such as in certain problems in physics and engineering. For the homogeneous case (g(x) = 0), a standard solution method involves assuming a trial solution of the form y = x^r, where r is a constant to be determined. Substituting this into the equation yields the characteristic (or indicial) equation a_n r(r-1)\cdots(r-n+1) + a_{n-1} r(r-1)\cdots(r-n+2) + \cdots + a_1 r + a_0 = 0, a polynomial equation in r of degree n. The roots of this equation determine the general solution: for distinct real roots r_1, \dots, r_k, the solution is y(x) = c_1 x^{r_1} + \cdots + c_k x^{r_k}; for a repeated root r of multiplicity m, the corresponding terms include x^r (\ln x)^{j} for j = 0, 1, \dots, m-1; and for complex roots \alpha \pm i\beta, the solution involves x^\alpha \cos(\beta \ln x) and x^\alpha \sin(\beta \ln x). This approach leverages the self-similar structure of the equation under scaling transformations. An alternative method for solving the homogeneous Cauchy-Euler equation uses the substitution x = e^t (with t = \ln x for x > 0) and y(x) = u(t), which transforms the equation into a linear differential equation with constant coefficients in the variable t. Specifically, the derivatives transform as \frac{dy}{dx} = \frac{1}{x} \frac{du}{dt} and higher derivatives follow via the chain rule, resulting in an equation of the form b_n \frac{d^n u}{dt^n} + \cdots + b_0 u = 0, solvable using standard constant-coefficient techniques. The solution u(t) is then expressed back in terms of x via t = \ln x. This highlights the connection between Cauchy-Euler equations and constant-coefficient equations, as the power-law coefficients become constants after the substitution. For the non-homogeneous case (g(x) \neq 0), the general solution is the sum of the homogeneous solution and a particular solution. After applying the substitution x = e^t, y(x) = u(t) to transform the equation to constant coefficients in t, standard methods such as variation of parameters or undetermined coefficients can be used to find the particular solution in t, which is then converted back to x. Variation of parameters is particularly versatile for arbitrary g(x), involving the Wronskian of the homogeneous solutions. As an illustrative example of the homogeneous case with a repeated root, consider the equation 9x^2 y'' + 15 x y' + y = 0. The characteristic equation is 9r(r-1) + 15r + 1 = 9r^2 + 6r + 1 = (3r + 1)^2 = 0, yielding a double root r = -1/3. The general solution is y(x) = (c_1 + c_2 \ln x) x^{-1/3}.
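
The repeated-root solution can be verified symbolically. A minimal sketch assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    c1, c2 = sp.symbols('c1 c2')

    # Proposed general solution y = (c1 + c2*ln x) * x^(-1/3).
    y = (c1 + c2 * sp.log(x)) * x**sp.Rational(-1, 3)

    residual = 9 * x**2 * sp.diff(y, x, 2) + 15 * x * sp.diff(y, x) + y
    print(sp.simplify(residual))   # 0 for all c1, c2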

Systems and Advanced Topics

Systems of Linear Equations

A system of linear ordinary differential equations (ODEs) can be expressed in vector-matrix form as \mathbf{Y}'(x) = \mathbf{A}(x) \mathbf{Y}(x) + \mathbf{G}(x), where \mathbf{Y}(x) is an n-dimensional vector of unknown functions, \mathbf{A}(x) is the n \times n matrix of coefficients (which may depend on the independent variable x), and \mathbf{G}(x) is an n-dimensional vector representing the non-homogeneous term. This formulation generalizes the scalar case to multiple interdependent equations, common in applications like coupled physical systems or multi-compartment models. For the constant coefficient homogeneous case, where \mathbf{A}(x) = \mathbf{A} is constant and \mathbf{G}(x) = \mathbf{0}, the system simplifies to \mathbf{Y}' = \mathbf{A} \mathbf{Y}. The general solution is \mathbf{Y}(t) = e^{\mathbf{A} t} \mathbf{Y}_0, where e^{\mathbf{A} t} is the matrix exponential, computed via the eigenvalues and eigenvectors of \mathbf{A}, analogous to the characteristic equation in scalar ODEs. If \mathbf{A} has eigenvalues \lambda_i with corresponding eigenvectors \mathbf{v}_i, the solution involves terms like e^{\lambda_i t} \mathbf{v}_i, yielding exponential growth, decay, or oscillations depending on the real parts of \lambda_i. In the general homogeneous case (\mathbf{G}(x) = \mathbf{0}), a fundamental matrix \Phi(t) whose columns are linearly independent solutions satisfies \Phi' = \mathbf{A}(t) \Phi, and the general solution is \mathbf{Y}_h(t) = \Phi(t) \mathbf{C}, where \mathbf{C} is a constant vector determined by initial conditions. For non-homogeneous systems, the particular solution can be found using variation of parameters: assume \mathbf{Y}_p(t) = \Phi(t) \mathbf{u}(t), where \mathbf{u}'(t) = \Phi^{-1}(t) \mathbf{G}(t), leading to \mathbf{u}(t) = \int \Phi^{-1}(t) \mathbf{G}(t) \, dt and thus \mathbf{Y}_p(t) = \Phi(t) \int \Phi^{-1}(t) \mathbf{G}(t) \, dt. The full solution is then \mathbf{Y}(t) = \mathbf{Y}_h(t) + \mathbf{Y}_p(t). A representative example is a two-mass coupled oscillator, modeled by the pair of second-order equations \begin{align*} y_1'' &= -k y_1 + c (y_2 - y_1), \\ y_2'' &= -k y_2 + c (y_1 - y_2), \end{align*} which in first-order matrix form becomes \mathbf{Y}' = \mathbf{A} \mathbf{Y} for the state vector \mathbf{Y} = (y_1, y_1', y_2, y_2')^T, with \mathbf{A} incorporating the masses and spring constants k, c. The eigenvalues of \mathbf{A} are \pm i \omega_1 for the symmetric mode (in-phase at \omega_1 = \sqrt{k}) and \pm i \omega_2 for the antisymmetric mode (out-of-phase at the higher frequency \omega_2 = \sqrt{k + 2c}), assuming unit masses as implied by the second-order equations, revealing decoupled vibrational behaviors. Decoupling is possible when \mathbf{A} is diagonalizable, via a change of variables \mathbf{Y} = \mathbf{P} \mathbf{Z}, where \mathbf{P} is the matrix of eigenvectors; this transforms the system to \mathbf{Z}' = \mathbf{D} \mathbf{Z} with diagonal \mathbf{D} containing the eigenvalues, allowing independent scalar solutions for each component of \mathbf{Z}.
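
For the constant-coefficient case, the matrix-exponential solution can be computed directly and checked against the eigen-decomposition form. The following is a minimal sketch assuming NumPy and SciPy are available; the matrix \mathbf{A} is an illustrative choice (the companion matrix of y'' + 3y' + 2y = 0).

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])          # eigenvalues -1 and -2
    Y0 = np.array([1.0, 0.0])
    t = 1.5

    Y = expm(A * t) @ Y0                  # Y(t) = exp(A t) Y0

    # Equivalent form via the eigen-decomposition A = V diag(lam) V^{-1}.
    lam, V = np.linalg.eig(A)
    Y_eig = (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V) @ Y0).real
    print(np.allclose(Y, Y_eig))          # True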

Holonomic Functions

A holonomic function f(x) is defined as a function that satisfies a linear homogeneous differential equation with polynomial coefficients in x. Formally, there exist polynomials P_0(x), \dots, P_n(x) with P_n(x) \neq 0 such that P_n(x) \frac{d^n f}{dx^n} + P_{n-1}(x) \frac{d^{n-1} f}{dx^{n-1}} + \dots + P_1(x) \frac{df}{dx} + P_0(x) f(x) = 0. This concept extends to multivariate cases, where f satisfies a system of such equations. The notion originates from the algebraic framework of D-modules, developed in the 1970s in the work of Mikio Sato, Joseph Bernstein, and Masaki Kashiwara, where holonomic modules over the Weyl algebra (the ring of differential operators with polynomial coefficients) have minimal dimension, enabling finite representations of solutions. Holonomic functions exhibit strong closure properties: they are closed under addition, multiplication by other holonomic functions, differentiation, and integration. If f and g are holonomic, then so are f + g, f \cdot g, f', and \int f \, dx, with the resulting differential equations computable algorithmically via operations on their annihilating ideals in the D-module sense. This facilitates symbolic manipulation and proof of identities among holonomic functions. Prominent examples of holonomic functions include elementary functions like exponentials e^x (satisfying f' - f = 0) and polynomials, as well as special functions such as the Bessel function J_\nu(x) (annihilated by x^2 y'' + x y' + (x^2 - \nu^2) y = 0) and hypergeometric functions {}_2F_1(a,b;c;x) (satisfying a second-order equation with quadratic coefficients). These functions arise as exact solutions to linear differential equations in variable coefficient cases. Key algorithms for working with holonomic functions include the Gosper-Zeilberger method, which automates the discovery and proof of hypergeometric identities by finding certificates via recurrences dual to the differential equations. Creative telescoping extends this to indefinite and definite summation and integration, generating telescoping relations from the differential structure to evaluate sums or integrals exactly; modern implementations, such as the HolonomicFunctions package in Mathematica, support multivariate cases and integration with tools like Ore polynomials for efficient computation. Applications of holonomic functions span combinatorics, where they prove identities for generating functions (e.g., coefficients via recurrences), and physics, particularly in evaluating Feynman integrals in quantum field theory through creative telescoping to obtain exact parametric forms. These methods provide closed-form solutions where numerical approximation falls short, leveraging the holonomic framework for rigorous results.
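
Closure under addition can be illustrated in a simple constant-coefficient case: e^x is annihilated by D - 1 and \sin x by D^2 + 1, so their sum is annihilated by the product operator (D - 1)(D^2 + 1), that is, by y''' - y'' + y' - y = 0 (in general, an annihilator of a sum is obtained from a least common left multiple of the individual operators). A minimal sketch assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x')
    f = sp.exp(x) + sp.sin(x)    # sum of two holonomic functions

    # Apply the combined annihilator (D - 1)(D^2 + 1) = D^3 - D^2 + D - 1.
    residual = sp.diff(f, x, 3) - sp.diff(f, x, 2) + sp.diff(f, x) - f
    print(sp.simplify(residual))  # 0: the sum satisfies the higher-order linear ODE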