Separation of variables

Separation of variables is a fundamental analytical method in mathematics for solving certain ordinary and partial differential equations (ODEs and PDEs) by assuming that the solution can be expressed as a product of functions, each depending on only one independent variable. For first-order ODEs of the form \frac{dy}{dx} = f(x)g(y), the technique involves rearranging the equation to separate the variables, yielding \frac{dy}{g(y)} = f(x)\, dx, followed by integrating both sides to obtain an implicit solution \int \frac{dy}{g(y)} = \int f(x)\, dx + C. This approach is particularly effective for separable equations, such as those modeling exponential growth \frac{dy}{dt} = ky or logistic population dynamics \frac{dR}{dt} = kR(1 - \frac{R}{b}), where explicit solutions can be derived after integration and application of initial conditions. In the context of PDEs, separation of variables applies to linear homogeneous equations like the heat equation \frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2} or the wave equation \frac{\partial^2 u}{\partial t^2} = a^2 \frac{\partial^2 u}{\partial x^2}, assuming a product u(x,t) = X(x)T(t). Substituting this form into the PDE separates the variables, resulting in two ordinary differential equations, typically involving a separation constant \lambda, such as X'' + \lambda X = 0 and T' + \lambda k T = 0 for the heat equation. Boundary conditions (e.g., Dirichlet or Neumann) determine the eigenvalues \lambda_n and eigenfunctions (often sines or cosines), while initial conditions allow superposition via Fourier series to construct the general solution, enabling modeling of physical phenomena like heat conduction in a rod or vibrations of a string. The method's success relies on the linearity and homogeneity of the equations, as well as the separability of variables, though it may require numerical methods or other techniques for non-separable cases. It forms a cornerstone of mathematical physics, underpinning solutions in physics, engineering, and beyond, and often serves as a precursor to more advanced methods like Fourier transforms.

Fundamentals

Definition and Basic Principle

The method of separation of variables is an analytical technique for solving certain classes of differential equations, particularly linear homogeneous ones, by assuming that the solution can be expressed as a product of functions, each depending on only one independent variable. This approach exploits the separability of the equation, where the partial derivatives or terms involving distinct variables can be rearranged and isolated on separate sides, enabling the equation to be decoupled into simpler components. The technique is applicable when the equation's structure allows such isolation, transforming a coupled multivariable problem into independent ordinary differential equations (ODEs). For ODEs, the method applies to separable equations of the general form \frac{dy}{dx} = f(x) g(y), where f depends solely on the independent variable x and g on the dependent variable y. Rearranging yields \frac{dy}{g(y)} = f(x) \, dx, which integrates directly to \int \frac{1}{g(y)} \, dy = \int f(x) \, dx + C, producing an implicit solution that can often be solved explicitly for y(x). This separation leverages the chain rule in reverse, reducing the first-order ODE to algebraic integration over single variables. In the case of partial differential equations (PDEs), such as those governing physical phenomena like heat conduction or wave propagation, the method posits a product solution u(x,t) = X(x) T(t). Substituting into a linear PDE, for instance the one-dimensional heat equation \frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2}, gives \frac{T'(t)}{k T(t)} = \frac{X''(x)}{X(x)} = -\lambda, where -\lambda is the separation constant introduced to decouple the spatial and temporal dependencies. The boundary and initial conditions of the problem then determine the eigenvalues \lambda and corresponding eigenfunctions, often through Sturm-Liouville theory, ensuring the separated ODEs X'' + \lambda X = 0 and T' + k \lambda T = 0 yield valid solutions that satisfy the original PDE. This process reduces the multivariable PDE to a pair of single-variable ODEs, solvable via standard techniques like characteristic equations or direct integration.
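
As a concrete illustration of the ODE case, the following minimal SymPy sketch (assuming a standard SymPy installation) solves the separable equation \frac{dy}{dx} = x y with the built-in 'separable' hint, which performs the rearrangement and integration described above; the equation is chosen purely for illustration.
python
# Minimal sketch: solve the separable ODE dy/dx = x*y using SymPy's 'separable' hint.
from sympy import Function, Eq, Derivative, dsolve, symbols

x = symbols('x')
y = Function('y')

ode = Eq(Derivative(y(x), x), x * y(x))      # dy/dx = f(x) g(y) with f(x) = x, g(y) = y
sol = dsolve(ode, y(x), hint='separable')    # internally integrates dy/y = x dx
print(sol)                                   # e.g. Eq(y(x), C1*exp(x**2/2))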

Historical Development

The method of separation of variables emerged in the context of solving ordinary differential equations (ODEs) in the late 17th century. Gottfried Wilhelm Leibniz used it implicitly in 1691 to solve certain inverse tangent problems, while Jacob Bernoulli applied it in 1690 to the isochrone problem. Johann Bernoulli coined the phrase “separation of variables” in a 1694 letter to Leibniz and used it systematically in his lectures on calculus from 1691–92. Its application to partial differential equations (PDEs) began in the mid-18th century, pioneered by Jean le Rond d'Alembert in 1747 for the wave equation and extended by Leonhard Euler in 1748, who applied separation of variables to derive solutions for wave propagation problems. In the late 18th century, Joseph-Louis Lagrange contributed to the analysis of PDEs in analytical mechanics, employing variable separation in variational problems and equations to model gravitational potentials. Similarly, Pierre-Simon Laplace advanced the technique around 1782 for solving Laplace's equation in potential theory, using separation in spherical coordinates to study spherical harmonics and early heat conduction models, including potentials for gravitational fields. The method gained formal prominence in the early 19th century through Fourier's 1822 treatise Théorie Analytique de la Chaleur, where he systematically applied separation of variables to the heat equation, combining it with infinite series expansions to solve problems in heat conduction. This work expanded the technique beyond simple product solutions, integrating it with trigonometric series for arbitrary initial conditions, and marked a shift toward broader applicability in physical modeling. In the 20th century, the method found pivotal use in quantum mechanics, notably in Erwin Schrödinger's 1926 formulation of the wave equation, where separation of variables in spherical coordinates yielded exact solutions for the hydrogen atom, enabling energy quantization. These developments have profoundly influenced modern engineering and physics, serving as a cornerstone for analytical solutions in heat transfer, fluid dynamics, electromagnetics, and quantum mechanics, where it reduces complex PDEs to solvable ODEs under linear, homogeneous conditions.

Ordinary Differential Equations

First-Order Separable Equations

A first-order ordinary differential equation (ODE) is separable if it can be expressed in the form \frac{dy}{dx} = f(x) g(y), where f(x) depends only on the independent variable x and g(y) depends only on the dependent variable y. This criterion allows the equation to be rewritten as \frac{dy}{g(y)} = f(x) \, dx, isolating terms involving each variable on opposite sides. The solution process begins by integrating both sides of the separated equation: \int \frac{1}{g(y)} \, dy = \int f(x) \, dx + C, where C is the constant of integration. The resulting implicit solution is then solved explicitly for y(x) if possible, or left in implicit form. Initial conditions, such as y(x_0) = y_0, are applied to determine the specific value of C. Constant solutions where g(y) = 0 must be checked separately, as they may not appear in the general solution obtained by integration. In the general solution form, y(x) is obtained by inverting the antiderivative G(y) = \int \frac{dy}{g(y)}, equated to the antiderivative of f(x) plus C: y = G^{-1}\left( \int f(x) \, dx + C \right). Solutions involving logarithms or other functions may require handling absolute values; for example, an expression \ln|y| = k(x) + C exponentiates to |y| = e^C e^{k(x)}, and writing y = \pm e^C e^{k(x)} allows the sign to be absorbed into the arbitrary constant. Domains must be restricted to intervals where the solution is defined, excluding points that cause division by zero, negative arguments in logarithms, or other singularities. Separable equations represent a special case of exact first-order ODEs: after separation, the equation takes the form M(x) \, dx + N(y) \, dy = 0 with M = -f(x) and N = 1/g(y), which trivially satisfies \frac{\partial M}{\partial y} = \frac{\partial N}{\partial x} = 0. When separability fails but the equation is nearly exact, integrating factors—functions \mu(x) or \mu(y) that multiply the equation to make it exact—provide an alternative method, often derived via separation of variables on an auxiliary equation.
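
The steps above—separating, integrating, applying an initial condition, and checking constant solutions where g(y) = 0—can be carried out symbolically. The following SymPy sketch uses the illustrative equation \frac{dy}{dx} = y^2 \cos x, chosen only as an example; the printed forms are indicative and may differ cosmetically between SymPy versions.
python
# Sketch: solve dy/dx = y**2 * cos(x), then apply the initial condition y(0) = 1.
from sympy import Function, Eq, Derivative, dsolve, cos, symbols

x = symbols('x')
y = Function('y')

ode = Eq(Derivative(y(x), x), y(x)**2 * cos(x))   # f(x) = cos(x), g(y) = y**2
general = dsolve(ode, y(x), hint='separable')     # e.g. Eq(y(x), -1/(C1 + sin(x)))
particular = dsolve(ode, y(x), ics={y(0): 1})     # e.g. Eq(y(x), -1/(sin(x) - 1))
print(general)
print(particular)
# Note: g(y) = y**2 vanishes at y = 0, so y(x) = 0 is an additional constant solution
# that is not produced by the general formula for any finite value of C1.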

Example: First-Order Population Model

A classic application of separation of variables arises in modeling population growth using the Malthusian model, which assumes that the rate of change of the population y(t) is proportional to its current size, leading to the first-order ODE \frac{dy}{dt} = k y, where k > 0 is the intrinsic growth rate constant. To solve this separable equation, divide both sides by y (assuming y \neq 0) and multiply by dt, yielding \frac{dy}{y} = k \, dt. Integrating both sides gives \int \frac{1}{y} \, dy = \int k \, dt, which integrates to \ln |y| = k t + C, where C is the constant of integration. Exponentiating both sides produces the general solution y(t) = A e^{k t}, with A = \pm e^C. For an initial value problem with y(0) = y_0 > 0, the constant is determined as A = y_0, resulting in the explicit solution y(t) = y_0 e^{k t}. This solution describes exponential growth, where the population doubles every t_d = \frac{\ln 2}{k} time units, reflecting rapid increases observed in unconstrained environments like bacterial cultures. However, the model predicts unbounded growth as t \to \infty, which is unrealistic for real populations limited by resources such as food or space. A more realistic variation is the logistic growth model, which incorporates a carrying capacity K to account for environmental limits, given by the equation \frac{dy}{dt} = k y \left(1 - \frac{y}{K}\right). This is also separable: rewrite as \frac{dy}{y \left(1 - \frac{y}{K}\right)} = k \, dt. Using partial fractions, \frac{1}{y (1 - y/K)} = \frac{1}{y} + \frac{1/K}{1 - y/K}, the left side integrates to \ln \left| \frac{y}{K - y} \right| = k t + C. Solving yields the general solution y(t) = \frac{K}{1 + B e^{-k t}}, where B = \frac{K - y_0}{y_0} is determined from the initial condition y(0) = y_0. The solution curve forms an S-shape, starting slowly, accelerating to a maximum growth rate at y = K/2, and asymptotically approaching the carrying capacity K as t \to \infty, better capturing bounded population dynamics in ecosystems.
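
The closed-form solutions derived above can be evaluated directly. The following NumPy sketch, with parameter values chosen purely for illustration, contrasts the unbounded Malthusian curve with the bounded logistic curve and prints the doubling time \ln 2 / k.
python
# Sketch: exponential vs. logistic growth from the closed-form solutions above.
import numpy as np

k, K, y0 = 0.5, 100.0, 5.0                      # illustrative growth rate, carrying capacity, y(0)
t = np.linspace(0.0, 20.0, 201)

y_exp = y0 * np.exp(k * t)                      # Malthusian: y(t) = y0 * e^(k t)
B = (K - y0) / y0
y_log = K / (1.0 + B * np.exp(-k * t))          # logistic: y(t) = K / (1 + B e^(-k t))

print("doubling time (exponential):", np.log(2) / k)
print("logistic population at t = 20:", y_log[-1])   # approaches the carrying capacity K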

Higher-Order Separable Equations

The separation of variables method extends to higher-order ordinary differential equations (ODEs) primarily through reduction of order techniques, which reduce the equation to a series of lower-order separable equations. For second-order ODEs of the form y'' = f(x) g(y, y'), separability occurs if the terms involving x can be isolated from those involving y and y', though the method is most straightforward for autonomous cases where the independent variable x is absent, such as y'' = f(y). In these autonomous equations, the absence of explicit x-dependence allows the equation to be treated as a first-order equation in y' regarded as a function of y. To solve y'' = f(y), introduce the substitution v = y', so y'' = \frac{dv}{dx} = \frac{dv}{dy} \frac{dy}{dx} = v \frac{dv}{dy} by the chain rule. This yields the separable equation v \frac{dv}{dy} = f(y), or v \, dv = f(y) \, dy. Integrating both sides gives \frac{1}{2} v^2 = \int f(y) \, dy + C_1, where C_1 is the constant of integration. Solving for v, we obtain v = \pm \sqrt{2 \int f(y) \, dy + 2C_1}, and since v = \frac{dy}{dx}, this results in the separable equation \frac{dy}{dx} = \pm \sqrt{2 \int f(y) \, dy + 2C_1}, or dx = \frac{dy}{\pm \sqrt{2 \int f(y) \, dy + 2C_1}}. Integrating again yields x = \int \frac{dy}{\pm \sqrt{2 \int f(y) \, dy + 2C_1}} + C_2, providing the implicit solution. An equivalent approach is to multiply the original equation by 2y', leading to 2y' y'' = 2 f(y) y', where the left side is \frac{d}{dx} (y')^2 and the right side integrates to 2 \int f(y) \, dy, confirming the same first integral. For general nth-order autonomous ODEs where the highest derivative can be isolated as y^{(n)} = f(y, y', \dots, y^{(n-1)}), successive reductions apply if the structure permits separation after substitutions. Each step treats the equation as first-order in the highest derivative with respect to the previous one, reducing the order by one until reaching a separable equation, followed by successive integrations to recover the solution. This process introduces multiple constants of integration, corresponding to the order of the original equation. In linear higher-order cases with boundary conditions, such reductions can lead to eigenvalue problems where separation constants arise from assuming exponential forms or other trial functions, determining the eigenvalues that satisfy the boundaries. However, not all higher-order ODEs are separable without additional substitutions; the method requires the equation to lack explicit dependence on the independent variable or to allow clear isolation of variables after substitution, limiting its applicability to specific autonomous or quasi-autonomous forms.
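
The reduction described above can be traced symbolically. In the following SymPy sketch the illustrative choice f(y) = -y is made, and the printed output is indicative only; the point is that the substitution v = y' turns the autonomous second-order equation into a separable first-order equation in v(y).
python
# Sketch of the order reduction for an autonomous equation y'' = f(y), here with f(y) = -y.
from sympy import Function, Eq, Derivative, dsolve, symbols

y = symbols('y')
v = Function('v')
f = -y                                            # illustrative choice of f(y)

# With v = y', the chain rule gives y'' = v dv/dy, so the equation becomes the
# separable first-order ODE  v dv/dy = f(y).
first_integral = dsolve(Eq(v(y) * Derivative(v(y), y), f), v(y))
print(first_integral)   # e.g. [Eq(v(y), -sqrt(C1 - y**2)), Eq(v(y), sqrt(C1 - y**2))]
# Since v = dy/dx, each branch is again separable, dy/dx = ±sqrt(C1 - y**2), and the
# quadrature x = ∫ dy / sqrt(C1 - y**2) + C2 recovers the implicit solution.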

Example: Second-Order Harmonic Oscillator

The second-order equation governing simple harmonic motion is given by \frac{d^2 x}{dt^2} + \omega^2 x = 0, where x(t) represents the displacement from equilibrium at time t, and \omega > 0 is a constant related to the system's parameters, such as \omega = \sqrt{k/m} for a mass m attached to a spring of stiffness k. This equation arises in classical mechanics for systems like a mass-spring setup, where the restoring force is proportional to displacement, leading to oscillatory behavior without damping. To solve this using a separation-of-variables approach, multiply both sides of the equation by the velocity \frac{dx}{dt}: \frac{dx}{dt} \cdot \frac{d^2 x}{dt^2} + \omega^2 x \frac{dx}{dt} = 0. The first term simplifies to \frac{1}{2} \frac{d}{dt} \left( \left( \frac{dx}{dt} \right)^2 \right), and the second to \frac{\omega^2}{2} \frac{d}{dt} (x^2), yielding \frac{d}{dt} \left[ \frac{1}{2} \left( \frac{dx}{dt} \right)^2 + \frac{\omega^2}{2} x^2 \right] = 0. Integrating with respect to time gives the conservation of energy: \frac{1}{2} \left( \frac{dx}{dt} \right)^2 + \frac{\omega^2}{2} x^2 = E, where E is a constant determined by initial conditions. Rearranging yields the separable first-order equation \frac{dx}{dt} = \pm \sqrt{2E - \omega^2 x^2}, which can be solved by separation: \int \frac{dx}{\sqrt{2E - \omega^2 x^2}} = \pm \int dt. Writing 2E = \omega^2 A^2 for amplitude A, the left integral evaluates to \frac{1}{\omega} \arcsin\left( \frac{x}{A} \right), leading to the general solution x(t) = A \cos(\omega t + \phi), where \phi is the phase angle. An equivalent form is the linear combination x(t) = C \cos(\omega t) + D \sin(\omega t), obtained via trigonometric identities. The constants C and D (or A and \phi) are determined by initial conditions, typically the initial displacement x(0) = x_0 and initial velocity \frac{dx}{dt}(0) = v_0. For instance, substituting into the cosine-sine form gives C = x_0 and D = v_0 / \omega, while the amplitude is A = \sqrt{x_0^2 + (v_0 / \omega)^2} and the phase satisfies \tan \phi = -v_0 / (\omega x_0). Physically, this solution describes periodic motion with angular frequency \omega, period T = 2\pi / \omega, and frequency f = \omega / (2\pi), where the system returns to the same state after each cycle. The motion is bounded between -A and A, reflecting energy conservation. Extensions to damped harmonic oscillators, such as \frac{d^2 x}{dt^2} + 2\gamma \frac{dx}{dt} + \omega^2 x = 0 with damping coefficient \gamma > 0, introduce exponential decay but require additional techniques like the characteristic equation, as multiplication by the velocity no longer yields a simple conserved quantity.
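
The derived solution can be checked numerically. The following NumPy sketch, with arbitrary illustrative values of \omega, x_0, and v_0, evaluates x(t) = A\cos(\omega t + \phi) over one period and confirms that the energy integral stays constant and that the stated formulas for A and \phi reproduce the initial conditions.
python
# Sketch: verify that x(t) = A*cos(w*t + phi) conserves E = (1/2) x'^2 + (w^2/2) x^2.
import numpy as np

omega, x0, v0 = 2.0, 1.0, 0.5                      # illustrative parameters and initial data
A = np.sqrt(x0**2 + (v0 / omega)**2)               # amplitude from the initial conditions
phi = np.arctan2(-v0 / omega, x0)                  # phase consistent with x = A*cos(w*t + phi)

t = np.linspace(0.0, 2 * np.pi / omega, 200)       # one full period T = 2*pi/omega
x = A * np.cos(omega * t + phi)
v = -A * omega * np.sin(omega * t + phi)

E = 0.5 * v**2 + 0.5 * omega**2 * x**2
print(np.allclose(E, E[0]))                        # True: energy is conserved along the motion
print(np.isclose(x[0], x0), np.isclose(v[0], v0))  # True True: initial conditions are reproduced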

Partial Differential Equations

Separation Technique for Linear PDEs

The separation of variables technique is a standard analytical method for solving linear homogeneous partial differential equations (PDEs), particularly those arising in physics such as the heat and wave equations. It relies on the assumption that the solution can be expressed as a product of functions, each depending on a single independent variable. For a PDE involving spatial variables x_1, \dots, x_n and possibly time t, the assumed form is u(x_1, \dots, x_n, t) = X_1(x_1) \cdots X_n(x_n) T(t). To apply the technique, the product form is substituted directly into the PDE. Due to the homogeneity and linearity of the equation, division by the product X_1(x_1) \cdots X_n(x_n) T(t) (assuming it is nonzero) separates the variables, resulting in an equation where each term depends only on one variable and equals a constant. This introduces separation constants, such as \lambda, leading to a system of ordinary differential equations (ODEs), one for each function X_i and T. The method extends the separation approach used in ODEs by adapting it to the multivariable context of PDEs. In the case of the heat or wave equation, the spatial ODEs typically form Sturm-Liouville eigenvalue problems, where the separation constant \lambda serves as the eigenvalue, and the corresponding eigenfunctions provide the basis for the solution. These problems ensure the existence of a complete set of orthogonal solutions under appropriate boundary conditions. The linearity of the PDE allows the general solution to be constructed via superposition, expressing u as an infinite sum (or series) of the separated product solutions, with coefficients determined by initial or boundary conditions. This step leverages the orthogonality of the eigenfunctions to represent arbitrary functions in the expansion. For PDEs with multiple spatial variables, separation proceeds successively, isolating one variable at a time by introducing multiple separation constants, yielding a chain of ODEs that can be solved independently. This iterative process is essential for higher-dimensional problems, maintaining the method's applicability while preserving the product structure.
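
SymPy's pde_separate helper automates the substitution and division steps for simple cases. The sketch below reproduces its documented behavior on the one-dimensional wave equation u_{xx} = u_{tt} with the product ansatz u = X(x)T(t); the heat equation is handled analogously, and the printed form may vary slightly between versions.
python
# Sketch: separate u_xx = u_tt with the multiplicative ansatz u(x,t) = X(x)*T(t).
from sympy import Eq, Function, Derivative as D, pde_separate, symbols

x, t = symbols('x t')
u, X, T = map(Function, 'uXT')

wave = Eq(D(u(x, t), x, 2), D(u(x, t), t, 2))
parts = pde_separate(wave, u(x, t), [X(x), T(t)], strategy='mul')
print(parts)   # e.g. [Derivative(X(x), (x, 2))/X(x), Derivative(T(t), (t, 2))/T(t)]
# Equating both ratios to a common constant -lambda yields the separated ODEs
# X'' + lambda*X = 0 and T'' + lambda*T = 0.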

Homogeneous Boundary Value Problems

Homogeneous boundary value problems arise when solving linear partial differential equations (PDEs) subject to boundary conditions that are identically zero on the boundaries, enabling the separation of variables to reduce the problem to an eigenvalue problem for the spatial factor. This setup is particularly common in heat conduction and vibration phenomena where the boundaries are maintained at a reference state, such as zero temperature. The resulting eigenfunctions form an orthogonal basis, facilitating the expansion of the solution in terms of these modes. A canonical example is the one-dimensional heat equation, which models heat diffusion in a thin rod: \frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}, \quad 0 < x < L, \quad t > 0, where u(x,t) represents the temperature at position x and time t, and \alpha > 0 is the thermal diffusivity. The homogeneous Dirichlet boundary conditions are u(0,t) = 0 and u(L,t) = 0 for t > 0, with an initial condition u(x,0) = f(x) for 0 < x < L. Applying separation of variables, assume a product solution u(x,t) = X(x) T(t). Substituting into the PDE yields \frac{T'(t)}{\alpha T(t)} = \frac{X''(x)}{X(x)} = -\lambda, where \lambda is the separation constant. This separates into two ordinary differential equations (ODEs): the spatial eigenvalue problem X''(x) + \lambda X(x) = 0 and the temporal equation T'(t) + \alpha \lambda T(t) = 0. The boundary conditions u(0,t) = 0 and u(L,t) = 0 imply X(0) = 0 and X(L) = 0. Solving the spatial Sturm-Liouville problem gives eigenvalues \lambda_n = \left( \frac{n\pi}{L} \right)^2 for n = 1, 2, 3, \dots, with corresponding eigenfunctions X_n(x) = \sin\left( \frac{n\pi x}{L} \right). For each n, the temporal ODE has solution T_n(t) = e^{-\alpha \lambda_n t}, up to a constant. The general solution is then a superposition: u(x,t) = \sum_{n=1}^\infty b_n \sin\left( \frac{n\pi x}{L} \right) e^{-\alpha (n\pi/L)^2 t}. The coefficients b_n are determined from the initial condition via the Fourier sine series: b_n = \frac{2}{L} \int_0^L f(x) \sin\left( \frac{n\pi x}{L} \right) \, dx. The eigenfunctions \{\sin(n\pi x / L)\}_{n=1}^\infty are orthogonal on [0, L] with respect to the inner product \langle g, h \rangle = \int_0^L g(x) h(x) \, dx, as \langle X_m, X_n \rangle = 0 for m \neq n and \langle X_n, X_n \rangle = L/2. This justifies the expansion and ensures the coefficients are uniquely determined. The series solution converges to the initial condition f(x) in the L^2[0,L] sense for square-integrable f, and pointwise under additional regularity assumptions such as piecewise smoothness.
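
The series solution is straightforward to evaluate once the coefficients b_n are computed. The following sketch, using NumPy and SciPy with the illustrative initial condition f(x) = x(L - x) and an arbitrary truncation at N = 50 modes, approximates the Fourier sine coefficients by quadrature and sums the separated modes.
python
# Sketch: truncated series solution of u_t = alpha*u_xx on [0, L] with u(0,t) = u(L,t) = 0.
import numpy as np
from scipy.integrate import quad

L, alpha, N = 1.0, 0.01, 50

def f(x):
    return x * (L - x)                       # illustrative initial condition

def b(n):
    # b_n = (2/L) * integral_0^L f(x) sin(n*pi*x/L) dx, via numerical quadrature
    val, _ = quad(lambda x: f(x) * np.sin(n * np.pi * x / L), 0.0, L)
    return 2.0 / L * val

def u(x, t):
    # superposition of the separated modes sin(n*pi*x/L) * exp(-alpha*(n*pi/L)^2 * t)
    return sum(b(n) * np.sin(n * np.pi * x / L) * np.exp(-alpha * (n * np.pi / L)**2 * t)
               for n in range(1, N + 1))

xs = np.linspace(0.0, L, 11)
print(np.max(np.abs(u(xs, 0.0) - f(xs))))    # small: the series reproduces the initial condition
print(u(0.5, 2.0))                           # midpoint temperature after the modes have decayed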

Nonhomogeneous Boundary Value Problems

In nonhomogeneous boundary value problems (BVPs) for partial differential equations (PDEs), the presence of forcing terms or inhomogeneous boundary conditions prevents direct application of separation of variables, as the method requires linearity and homogeneity in the boundary conditions. To address this, the solution is decomposed as u(x,t) = v(x,t) + w(x), where w(x) is a steady-state particular solution satisfying the nonhomogeneous boundary conditions and the time-independent part of the PDE, while v(x,t) solves a homogeneous BVP with adjusted initial conditions using separation of variables. This decomposition transforms the original problem into one amenable to eigenfunction expansions derived from the associated homogeneous problem. For PDEs with a time-dependent forcing term, such as the nonhomogeneous wave equation \frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2} + f(x,t) on 0 < x < L with homogeneous Dirichlet boundary conditions u(0,t) = u(L,t) = 0, separation of variables is first applied to the homogeneous version to obtain eigenfunctions \phi_n(x) = \sin\left(\frac{n\pi x}{L}\right) and eigenvalues \lambda_n = \left(\frac{n\pi}{L}\right)^2, n = 1, 2, \dots. The solution is then expressed as u(x,t) = \sum_{n=1}^\infty a_n(t) \phi_n(x), where the coefficients a_n(t) satisfy the second-order ODE \frac{d^2 a_n}{dt^2} + c^2 \lambda_n a_n = q_n(t), with q_n(t) = \frac{2}{L} \int_0^L f(x,t) \phi_n(x) \, dx. Duhamel's principle provides the solution a_n(t) = a_n(0) \cos(\omega_n t) + \frac{a_n'(0)}{\omega_n} \sin(\omega_n t) + \frac{1}{\omega_n} \int_0^t q_n(s) \sin(\omega_n (t-s)) \, ds, where \omega_n = c \sqrt{\lambda_n}, and a_n(0), a_n'(0) are determined from the initial conditions via their Fourier sine coefficients. This yields the full nonhomogeneous solution as the superposition. When boundary conditions are nonhomogeneous, such as u(0,t) = g_1(t) and u(L,t) = g_2(t), the steady-state function w(x) is chosen to satisfy these conditions, often by solving an auxiliary problem like \frac{d^2 w}{dx^2} = 0 for the steady state, giving w(x) = g_1 + \frac{g_2 - g_1}{L} x when g_1 and g_2 are constants. For time-dependent cases, w(x,t) may require expansion in the eigenfunctions of the homogeneous spatial problem: w(x,t) = \sum_{n=1}^\infty c_n(t) \phi_n(x), where the coefficients c_n(t) are determined by projecting the boundary data onto the eigenbasis, ensuring v(x,t) inherits homogeneous boundaries. The complete solution takes the form of a particular solution w(x,t) plus the homogeneous series \sum_{n=1}^\infty \left[ A_n \cos(\omega_n t) + B_n \sin(\omega_n t) \right] \phi_n(x), with \omega_n = c \sqrt{\lambda_n} and coefficients fitted to initial conditions. This approach leverages the eigenfunctions from the homogeneous BVP to handle inhomogeneities efficiently, maintaining the utility of separation of variables for linear PDEs while extending it to practical scenarios like forced vibrations or heat conduction with external sources.
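
For a single mode, Duhamel's formula above reduces to a one-dimensional integral that can be evaluated numerically. The following sketch uses an assumed modal forcing q_n(t) = \sin t and zero initial modal data purely for illustration.
python
# Sketch: Duhamel's formula for one modal coefficient a_n(t) of the forced wave equation.
import numpy as np
from scipy.integrate import quad

c, L, n = 1.0, 1.0, 1
omega_n = c * n * np.pi / L                  # omega_n = c*sqrt(lambda_n), lambda_n = (n*pi/L)^2

def q_n(t):
    return np.sin(t)                         # assumed modal forcing, for illustration only

def a_n(t, a0=0.0, a0dot=0.0):
    # a_n(t) = a0*cos(w t) + (a0'/w)*sin(w t) + (1/w) * int_0^t q_n(s) sin(w (t - s)) ds
    conv, _ = quad(lambda s: q_n(s) * np.sin(omega_n * (t - s)), 0.0, t)
    return a0 * np.cos(omega_n * t) + (a0dot / omega_n) * np.sin(omega_n * t) + conv / omega_n

print(a_n(2.0))                              # contribution of mode n at t = 2 for this forcing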

Equations with Mixed Derivatives

Equations involving derivatives with respect to multiple variables, such as those appearing in advection or transport equations, present challenges for direct application of separation of variables due to the coupling of derivatives across multiple spatial directions. Consider the two-dimensional linear advection equation \frac{\partial u}{\partial t} + a \frac{\partial u}{\partial x} + b \frac{\partial u}{\partial y} = 0, where a and b are constant velocities, defined on an infinite or periodic domain with initial condition u(x,y,0) = g(x,y). This hyperbolic PDE models the passive transport of a quantity u by a uniform flow in the (a, b) direction. Assuming a separable product form u(x,y,t) = X(x) Y(y) T(t) and substituting into the equation yields \frac{T'(t)}{T(t)} + a \frac{X'(x)}{X(x)} + b \frac{Y'(y)}{Y(y)} = 0. For separation to hold, each ratio must be constant: \frac{X'(x)}{X(x)} = i k, \frac{Y'(y)}{Y(y)} = i l, and \frac{T'(t)}{T(t)} = -i (a k + b l), where k and l are separation constants. This results in plane wave solutions of the form u(x,y,t) = e^{i (k x + l y - \omega t)}, with dispersion relation \omega = a k + b l. These solutions satisfy the PDE and represent plane waves propagating without distortion. However, direct separation often requires prior transformation to characteristic coordinates to decouple the mixed terms. The characteristics are lines parameterized by x - a t = \xi, y - b t = \eta, along which u remains constant. In these coordinates, the PDE reduces to \frac{\partial u}{\partial t} = 0 along each characteristic, yielding the general solution u(x,y,t) = f(\xi, \eta) = f(x - a t, y - b t), where f is determined by the initial data g. This transformation effectively renders the problem separable, as the solution depends independently on the shifted spatial variables. For boundary conditions in periodic or infinite domains, the general solution is obtained via superposition of plane waves using a two-dimensional Fourier transform: u(x,y,t) = \iint \hat{g}(k,l) e^{i (k (x - a t) + l (y - b t))} \, dk \, dl, which aligns with the characteristic solution and confirms the link between separation and the method of characteristics. Such conditions ensure well-posedness without reflections, allowing the Fourier expansions to converge.
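
The link between the characteristic form and the original PDE can be verified symbolically. The following SymPy sketch checks that any sufficiently smooth function g(x - at, y - bt) satisfies the advection equation identically.
python
# Sketch: check that u(x, y, t) = g(x - a*t, y - b*t) solves u_t + a*u_x + b*u_y = 0.
from sympy import Function, symbols, diff, simplify

x, y, t, a, b = symbols('x y t a b')
g = Function('g')

u = g(x - a * t, y - b * t)                     # solution transported along the characteristics
residual = diff(u, t) + a * diff(u, x) + b * diff(u, y)
print(simplify(residual))                       # 0: the PDE is satisfied identically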

Application to Curvilinear Coordinates

In curvilinear coordinate systems, such as polar and spherical coordinates, the separation of variables method is particularly effective for solving partial differential equations (PDEs) like Laplace's equation, where the geometry of the domain aligns with the coordinate system's natural boundaries. These systems transform the PDE into a form that separates into ordinary differential equations (ODEs) along each coordinate direction, simplifying the analysis of problems with rotational or spherical symmetry. Consider Laplace's equation in two-dimensional polar coordinates (r, \theta), given by \nabla^2 u = \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial u}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 u}{\partial \theta^2} = 0. Assuming a product solution u(r, \theta) = R(r) \Theta(\theta), substitution yields the separated equations \Theta'' + m^2 \Theta = 0 for the angular part and r^2 R'' + r R' - m^2 R = 0 for the radial part, where m is the separation constant. The angular equation admits periodic solutions \Theta(\theta) = A \cos(m\theta) + B \sin(m\theta) for integer m \geq 0 to ensure single-valuedness. The radial equation is an Euler-Cauchy equation with solutions R(r) = C r^m + D r^{-m} for m \neq 0, while for m = 0, the solutions are R(r) = E + F \ln r. The general solution in polar coordinates is thus a Fourier series in the angular variable combined with the corresponding radial functions: u(r, \theta) = a_0 + b_0 \ln r + \sum_{m=1}^\infty \left( a_m r^m + b_m r^{-m} \right) \cos(m\theta) + \sum_{m=1}^\infty \left( c_m r^m + d_m r^{-m} \right) \sin(m\theta). Boundary conditions tailored to the geometry determine the coefficients; for instance, in a disk of radius a with prescribed boundary values u(a, \theta) = h(\theta), the interior solution sets b_m = d_m = 0 to ensure boundedness at r=0, yielding u(r, \theta) = \sum_{m=0}^\infty a_m \left( \frac{r}{a} \right)^m \cos(m\theta) + \sum_{m=1}^\infty c_m \left( \frac{r}{a} \right)^m \sin(m\theta), where the a_m and c_m are Fourier coefficients of h(\theta). This approach extends naturally to three-dimensional spherical coordinates (r, \theta, \phi), where \nabla^2 u = 0 separates into radial, polar angular, and azimuthal parts. The azimuthal equation is \Phi'' + m^2 \Phi = 0 with solutions e^{\pm i m \phi}, while the polar equation becomes the associated Legendre equation \frac{d}{d\mu} \left( (1 - \mu^2) \frac{d \Theta}{d\mu} \right) + \left( l(l+1) - \frac{m^2}{1 - \mu^2} \right) \Theta = 0, where \mu = \cos \theta and l \geq |m| is an integer separation constant. The polar solutions are associated Legendre functions P_l^m(\cos \theta), which reduce to Legendre polynomials P_l(\cos \theta) for m=0. The radial solutions are R(r) = A r^l + B r^{-l-1}. For spherical boundary value problems, such as finding the potential inside a sphere of radius a with boundary values u(a, \theta, \phi) = f(\theta, \phi), the solution expands in spherical harmonics Y_l^m(\theta, \phi) = P_l^m(\cos \theta) e^{i m \phi} multiplied by radial terms, with coefficients chosen to match the boundary data via orthogonality of the harmonics. Boundedness at the origin typically selects the r^l terms for interior problems, analogous to the disk case in polar coordinates.
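
The disk problem can be assembled numerically from the formulas above. The sketch below uses the illustrative boundary data h(\theta) = \cos^3\theta, whose only nonzero Fourier coefficients are a_1 = 3/4 and a_3 = 1/4, so the truncated series can be checked against the exact two-term expansion.
python
# Sketch: interior solution of Laplace's equation on a disk of radius a with u(a, theta) = h(theta).
import numpy as np
from scipy.integrate import quad

a, M = 1.0, 10                                   # disk radius and number of Fourier modes kept

def h(theta):
    return np.cos(theta)**3                      # illustrative boundary data

def fourier(m):
    # Fourier coefficients a_m, c_m of h on [0, 2*pi] (mean value for m = 0)
    am, _ = quad(lambda th: h(th) * np.cos(m * th), 0.0, 2 * np.pi)
    cm, _ = quad(lambda th: h(th) * np.sin(m * th), 0.0, 2 * np.pi)
    norm = 2 * np.pi if m == 0 else np.pi
    return am / norm, cm / norm

def u(r, theta):
    # bounded interior solution: only the r^m terms are kept (b_m = d_m = 0)
    total, _ = fourier(0)
    for m in range(1, M + 1):
        am, cm = fourier(m)
        total += (r / a)**m * (am * np.cos(m * theta) + cm * np.sin(m * theta))
    return total

print(u(0.0, 0.0))   # mean of the boundary data (0 here), as required at the center
print(u(0.5, 0.0))   # ~0.40625 = (3/4)*0.5 + (1/4)*0.5**3, matching the exact expansion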

Extensions and Applications

Formulation in Matrix Form

In the context of linear systems of ordinary differential equations (ODEs), the separation of variables technique manifests through the eigenvalue decomposition of the coefficient matrix. Consider the system \frac{d\mathbf{x}}{dt} = A \mathbf{x}, where A is an n \times n constant matrix and \mathbf{x}(t) \in \mathbb{R}^n. Assuming A has n linearly independent eigenvectors \mathbf{v}_i corresponding to eigenvalues \lambda_i, satisfying A \mathbf{v}_i = \lambda_i \mathbf{v}_i for i = 1, \dots, n, the general solution separates into a superposition of independent modal solutions: \mathbf{x}(t) = \sum_{i=1}^n c_i \mathbf{v}_i e^{\lambda_i t}, with constants c_i determined by initial conditions. This form decouples the system into scalar equations along each eigenvector direction, mirroring the variable separation in single ODEs. If A is diagonalizable, it admits a decomposition A = P D P^{-1}, where P collects the eigenvectors as columns and D = \operatorname{diag}(\lambda_1, \dots, \lambda_n). The solution then takes the matrix exponential form \mathbf{x}(t) = e^{A t} \mathbf{x}(0) = P e^{D t} P^{-1} \mathbf{x}(0), where e^{D t} = \operatorname{diag}(e^{\lambda_1 t}, \dots, e^{\lambda_n t}). This product structure separates the temporal evolution (via the diagonal exponential) from the spatial transformation (via P and P^{-1}), enabling efficient computation and analysis of the decoupled dynamics. For partial differential equations (PDEs), a matrix formulation arises upon discretization, such as via finite differences, transforming the continuous separation into eigenvalue problems. The separated spatial problem, often a Sturm-Liouville problem, discretizes to a matrix eigenvalue equation whose spectrum and eigenmodes approximate the continuous separation constants and functions; for instance, in the heat equation, the discretized Laplacian yields eigenvalues that determine the decay rates of the separated modes, paralleling the analytical process. This analogy preserves the separability, with the eigenvalues governing the temporal behavior in the numerical solution. Nonhomogeneous extensions involve matrix equations like the Sylvester equation A X + X B = C, which appears in steady-state analysis or control design. A unique solution exists if A and -B share no eigenvalues. If A and B are each diagonalizable, A = P D_1 P^{-1} and B = Q D_2 Q^{-1} with diagonal D_1, D_2, the change of variables Y = P^{-1} X Q (with \tilde{C} = P^{-1} C Q) reduces the equation to entrywise scalar separations \lambda_i^{(1)} y_{ij} + y_{ij} \lambda_j^{(2)} = \tilde{c}_{ij}, solvable independently as y_{ij} = \tilde{c}_{ij} / (\lambda_i^{(1)} + \lambda_j^{(2)}) when the denominator is nonzero. The Lyapunov equation, a special case with B = A^T, similarly separates under these conditions for quadratic stability analysis. In control theory, this matrix separability facilitates stability analysis for linear time-invariant systems \dot{\mathbf{x}} = A \mathbf{x} + B \mathbf{u}, where diagonalizability of A decouples the modes, allowing eigenvalue-based checks for asymptotic stability (all \operatorname{Re}(\lambda_i) < 0) and independent controller design per mode. Such formulations underpin modal control and observer design, ensuring robust performance through separated dynamics.
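
The modal decoupling can be demonstrated numerically by comparing the eigenvector construction with the matrix exponential. The sketch below uses an arbitrary diagonalizable 2x2 matrix chosen purely for illustration.
python
# Sketch: solve x' = A x by decoupling into modes via the eigendecomposition A = P D P^{-1}.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                  # illustrative matrix with eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])
t = 1.5

evals, P = np.linalg.eig(A)                   # columns of P are eigenvectors v_i
modal = np.linalg.solve(P, x0)                # c = P^{-1} x(0): coordinates of the decoupled modes
x_modal = P @ (np.exp(evals * t) * modal)     # each mode evolves independently as c_i e^{lambda_i t}

x_expm = expm(A * t) @ x0                     # direct matrix exponential for comparison
print(np.allclose(x_modal, x_expm))           # True: the separated modal form equals e^{A t} x(0)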

Implementation in Software

Symbolic mathematical software packages implement separation of variables to obtain analytical solutions for separable ordinary differential equations (ODEs) and certain partial differential equations (PDEs). In Mathematica, the DSolve function automatically applies separation of variables for first-order separable ODEs, such as those of the form \frac{dy}{dx} = f(x)g(y), by isolating terms and integrating both sides to yield the implicit or explicit solution. For linear PDEs, DSolve internally employs separation alongside symmetry reductions to derive solutions. Similarly, MATLAB's Symbolic Math Toolbox uses the dsolve function to handle separable ODEs by recognizing the form and performing the separation and integration steps. For instance, solving \frac{dy}{dt} = t y with initial condition y(0) = 1 yields y(t) = e^{t^2/2}. For PDEs like the one-dimensional heat equation \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}, dsolve supports separation of variables by assuming u(t,x) = T(t) X(x), reducing it to ODEs, and assembling the general solution from the separated factors. In Python, the SymPy library's dsolve function includes a dedicated 'separable' hint for first-order ODEs, rewriting the equation as P(x) Q(y) dy = R(x) S(y) dx and integrating accordingly. For PDEs, SymPy provides pde_separate to explicitly separate variables using multiplicative or additive strategies, as in the wave equation \frac{\partial^2 u}{\partial x^2} = \frac{\partial^2 u}{\partial t^2}, yielding \frac{X''(x)}{X(x)} = \frac{T''(t)}{T(t)} = -\lambda. Once variables are separated, the resulting ODEs or eigenvalue problems are often solved numerically in software tailored for PDEs. MATLAB's Partial Differential Equation Toolbox computes eigenmodes for problems like the Helmholtz equation -\nabla^2 u = \lambda u, which arise from spatial separation in time-dependent PDEs, using finite element methods to approximate eigenvalues and modes without requiring manual separation. In Python's SciPy library, numerical solutions to the separated ODEs can be obtained via scipy.integrate.solve_ivp for time-dependent terms, such as the exponential decays in heat-equation solutions. Practical implementations often combine symbolic separation with numerical evaluation of coefficients. For the one-dimensional heat equation u_t = k u_{xx} on [0, L] with homogeneous Dirichlet boundaries, separation yields u(x,t) = \sum_{n=1}^\infty b_n \sin\left(\frac{n\pi x}{L}\right) e^{-k (n\pi/L)^2 t}, where the Fourier coefficients b_n are computed numerically. Here is a SymPy example that assembles the separated series solution for the heat equation symbolically:
python
from sympy import Function, Derivative, symbols, Eq, sin, pi, exp, Sum, oo, IndexedBase
x, t, L, k = symbols('x t L k')
n = symbols('n', integer=True, positive=True)
b = IndexedBase('b')          # series coefficients b_n
u = Function('u')
pde = Eq(Derivative(u(x, t), t), k * Derivative(u(x, t), x, 2))   # heat equation u_t = k u_xx
# Assume u(x,t) = X(x) * T(t); separation leads to ODEs
# Spatial: X'' + λ X = 0, with X(0)=X(L)=0 → X_n = sin(n π x / L), λ_n = (n π / L)^2
# Temporal: T' + k λ T = 0 → T_n = exp(-k λ_n t)
# General solution as a superposition of the separated modes
sol = Sum(b[n] * sin(n * pi * x / L) * exp(-k * (n * pi / L)**2 * t), (n, 1, oo))
print(sol)
This code assembles the series solution after manual separation, with coefficients b_n = \frac{2}{L} \int_0^L u_0(x) \sin\left(\frac{n\pi x}{L}\right) dx evaluated numerically using scipy.integrate.quad for initial conditions u_0(x). In MATLAB, a similar script for Fourier coefficient computation follows separation:
matlab
syms x t L k n bn
u(x,t) = symsum(bn * sin(n*pi*x/L) * exp(-k*(n*pi/L)^2 * t), n, 1, Inf);
% Compute bn for initial condition u(x,0) = f(x)
% bn = (2/L) * int(f(x)*sin(n*pi*x/L), x, 0, L)
f = x*(L-x);  % Example initial condition
bn_expr = (2/L) * int(f * sin(n*pi*x/L), x, 0, L);
bn_vals = subs(bn_expr, n, 1:5);  % Evaluate the first few coefficients for integer n
Numerical plotting uses ezplot or array evaluation for finite terms. Software implementations assume the underlying equations are linear and homogeneous to enable clean separation, often requiring users to preprocess non-separable forms or verify applicability manually; nonlinear or coupled terms may necessitate alternative numerical methods like finite differences without separation.

Applicability and Limitations

Conditions for Successful Separation

Separation of variables is applicable to ordinary differential equations (ODEs) when the equation can be expressed in the form \frac{dy}{dx} = f(x) g(y), allowing the variables to be isolated on opposite sides of the equation after rearrangement. This condition excludes equations with implicit functions that prevent clean separation, such as those involving products of derivatives or non-factorable terms. Autonomous ODEs, where \frac{dy}{dx} = g(y), satisfy this criterion with f(x) = 1, enabling integration after separation. For partial differential equations (PDEs), successful separation requires the equation to be linear and homogeneous, typically with constant coefficients, ensuring the product solution assumption yields ordinary differential equations without residual variable coupling. Additionally, boundary conditions must be linear, homogeneous, and separable, often in product form such as u(0,t) = 0 and u(L,t) = 0, which translate to conditions on the spatial factor alone. Variable coefficients or non-separable boundaries generally prevent isolation of variables. The separation constant plays a crucial role by equating expressions dependent on different variables, transforming the PDE into a pair of ODEs, one of which forms a Sturm-Liouville eigenvalue problem whose eigenfunctions provide an orthogonal basis for the solution expansion. This structure guarantees completeness of the eigenfunctions under appropriate conditions, allowing superposition to satisfy the initial or boundary data. To test separability, assume a product solution u(\mathbf{x},t) = X(\mathbf{x}) T(t) and substitute into the PDE; if rearrangement yields \frac{\mathcal{L}[X]}{X} = \frac{\mathcal{M}[T]}{T} = -\lambda, where \mathcal{L} and \mathcal{M} involve only \mathbf{x} and t respectively, the method succeeds; otherwise, the variables do not isolate, indicating failure. Although separation of variables is primarily for linear equations, extensions to certain nonlinear cases exist through substitutions that linearize the PDE, such as the Hopf-Cole transformation for the viscous Burgers' equation u_t + u u_x = \nu u_{xx}, which maps it to the linear heat equation v_t = \nu v_{xx} via u = -2\nu \frac{v_x}{v}, enabling subsequent separation. Such transformations are rare and problem-specific.
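
The Hopf-Cole substitution mentioned above can be verified symbolically. The following SymPy sketch eliminates the time derivatives of v using the heat equation and checks that the residual of Burgers' equation then vanishes identically.
python
# Sketch: verify that u = -2*nu*v_x/v maps solutions of v_t = nu*v_xx to solutions of
# the viscous Burgers' equation u_t + u*u_x = nu*u_xx.
from sympy import Function, symbols, diff, simplify

x, t, nu = symbols('x t nu')
v = Function('v')(x, t)

u = -2 * nu * diff(v, x) / v
residual = diff(u, t) + u * diff(u, x) - nu * diff(u, x, 2)

# Use the heat equation to eliminate time derivatives: v_t = nu*v_xx, hence v_xt = nu*v_xxx.
heat_subs = {diff(v, t): nu * diff(v, x, 2), diff(v, x, t): nu * diff(v, x, 3)}
print(simplify(residual.subs(heat_subs)))   # 0: Burgers' equation is satisfied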

Cases Where Separation Fails

Separation of variables fails for nonlinear partial differential equations (PDEs) because the method relies on assuming a product solution form that reduces the PDE to ordinary differential equations (ODEs), but nonlinear terms—such as the convective term (\mathbf{u} \cdot \nabla)\mathbf{u} in the Navier-Stokes equations—introduce couplings that prevent the product form from satisfying the original equation without additional cross terms that cannot be separated. For instance, in the incompressible Navier-Stokes equations describing fluid motion, this nonlinearity arises from the advective transport of momentum, rendering the product ansatz invalid and precluding exact analytical solutions via separation. PDEs with variable coefficients, such as those modeling time-dependent heat conduction where coefficients like thermal conductivity vary with position or time (e.g., \sin(xt) dependence), also resist separation because the coefficients do not factor neatly into functions of individual variables, blocking the reduction to independent ODEs. In such cases, the assumption that the solution separates into products leads to intractable equations that mix variables inseparably. Non-separable boundary conditions or irregular domains further limit the method, as separation requires boundaries aligned with constant coordinate surfaces (e.g., rectangular or circular geometries); for irregular shapes like arbitrary obstacles, the eigenfunctions do not match the domain, making the approach inapplicable. When separation fails, alternatives include perturbation methods for mildly variable coefficients, where solutions are expanded in series around a constant-coefficient base case; numerical methods like finite elements for irregular domains; or transform techniques such as Laplace or Fourier transforms to handle non-separable aspects. Green's functions provide another analytical route for irregular boundaries by constructing solutions via integral representations. For the resulting ODEs in semi-separable cases, numerical integration via Runge-Kutta methods may be employed. Historically, instances where separation failed for infinite or non-periodic domains in heat conduction problems spurred the development of the Fourier transform as an extension of Fourier series, enabling solutions over unbounded regions.
