
Change of variables

In mathematics, particularly in multivariable calculus, a change of variables is a substitution technique that replaces the original variables in an integral or differential expression with new variables, often to simplify the computation by transforming the domain of integration or the form of the integrand. This method extends the basic substitution rule from single-variable calculus to multiple integrals, where it relies on the Jacobian determinant to account for the distortion of volumes or areas under the transformation. The core principle is encapsulated in the change of variables theorem, which states that for a differentiable and invertible transformation \mathbf{T}: \mathbb{R}^n \to \mathbb{R}^n with non-zero Jacobian determinant, the integral of a function f over a region D in the original variables equals the integral of f \circ \mathbf{T} times the absolute value of the Jacobian determinant over the transformed region \mathbf{T}^{-1}(D). For double integrals, this takes the form \iint_D f(x,y) \, dx \, dy = \iint_{D^*} f(g(u,v), h(u,v)) \left| \frac{\partial(x,y)}{\partial(u,v)} \right| \, du \, dv, where x = g(u,v) and y = h(u,v) define the transformation, and the Jacobian determinant is \frac{\partial(x,y)}{\partial(u,v)} = \det \begin{pmatrix} g_u & g_v \\ h_u & h_v \end{pmatrix}. A similar formula applies to triple and higher-dimensional integrals, making the technique essential for evaluating integrals in non-Cartesian coordinates, such as polar, cylindrical, or spherical systems. This approach is particularly valuable for handling complicated regions, like ellipses or paraboloids, by mapping them to simpler shapes, such as circles or rectangles, thereby reducing computational complexity. The transformation must be a diffeomorphism (smooth, bijective, and with a continuously differentiable inverse) to ensure the theorem's validity and preserve integrability. Applications extend beyond pure mathematics to physics and engineering, where change of variables facilitates solving problems in differential equations, mechanics, and optimization by aligning coordinates with symmetries in the system.

Basics and Motivations

Simple Example

The change of variables technique, also known as u-substitution, simplifies the evaluation of integrals by introducing a new variable that transforms a complicated integrand into a more manageable form. Consider an integral of the form \int f(x) \, dx; by setting u = g(x), where g is an invertible function with a differentiable inverse, we have du = g'(x) \, dx, allowing the integral to be rewritten as \int f(g^{-1}(u)) \frac{du}{g'(g^{-1}(u))}. In practice, when the integrand matches the composition f(g(x)) g'(x), this simplifies directly to \int f(u) \, du, preserving the integral's value while facilitating computation. A straightforward example illustrates this process: evaluate the indefinite integral \int (2x + 1)^5 \, dx. Begin by choosing the substitution u = 2x + 1, which identifies the inner linear expression raised to a power. Differentiating gives du = 2 \, dx, or equivalently, dx = \frac{du}{2}. Substituting into the original integral yields \int u^5 \cdot \frac{du}{2} = \frac{1}{2} \int u^5 \, du. The antiderivative of u^5 is \frac{u^6}{6}, so \frac{1}{2} \cdot \frac{u^6}{6} + C = \frac{u^6}{12} + C. Back-substituting u = 2x + 1 produces the final result \frac{(2x + 1)^6}{12} + C. This step-by-step approach (selecting u, computing du, substituting, integrating with respect to u, and reversing the substitution) demonstrates how the method reverses the chain rule to simplify integration. Geometrically, the substitution reparameterizes the integration along the real line, where the factor \frac{1}{|g'(x)|} in the rewritten integrand accounts for how the transformation stretches or compresses intervals on the x-axis when mapped to the u-axis, ensuring the area under the curve remains unchanged. This not only simplifies computation but also extends to higher dimensions via the Jacobian determinant, as explored in later sections.
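The worked example above can be checked numerically on a definite version of the integral. The sketch below (Python standard library only; `midpoint` is a helper defined here, not part of the text) compares the integral of (2x + 1)^5 over [0, 1] computed directly, computed after the substitution u = 2x + 1, and evaluated from the antiderivative (2x + 1)^6 / 12:

```python
import math

def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Direct evaluation of the definite integral on [0, 1].
direct = midpoint(lambda x: (2 * x + 1) ** 5, 0.0, 1.0)

# After u = 2x + 1 (so dx = du/2), the limits map to u in [1, 3].
substituted = midpoint(lambda u: u ** 5 / 2, 1.0, 3.0)

# Antiderivative u^6 / 12 evaluated at the transformed limits.
exact = (3 ** 6 - 1 ** 6) / 12

print(direct, substituted, exact)
```

All three values agree to well within the midpoint rule's discretization error.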

Historical Context and Motivations

The concept of change of variables emerged in the late 17th century as part of the foundational work in calculus by Gottfried Wilhelm Leibniz. During his development of integral calculus between 1672 and 1676, Leibniz employed substitution techniques to simplify the evaluation of definite integrals, such as transforming the integral for the area of a unit circle's quadrant using the relation x = \frac{z^2}{1 + z^2} to yield the known value \pi/4. This algebraic substitution allowed him to convert complex expressions into more manageable forms, building on earlier geometric methods from Pascal and Cavalieri. Additionally, Leibniz applied similar principles in arc length calculations, where he used differentials like ds = \sqrt{dx^2 + dy^2} to approximate lengths, integrating substituted forms to handle specific parametric curves without modern notation. In the 18th century, Leonhard Euler extended these ideas to multivariable settings, motivated by the need to evaluate double integrals over regions with irregular boundaries. In his 1769 paper "De formulis integralibus duplicatis," Euler introduced the change of variables formula for double integrals, demonstrating how transformations could preserve the integral's value while simplifying the domain, such as mapping rectangular regions to curved ones. This work was further generalized by Joseph-Louis Lagrange to triple integrals in the 1770s, emphasizing the method's utility in higher dimensions. Euler's contributions formalized the technique's role in analysis, shifting from ad hoc substitutions to systematic transformations. The 19th century brought rigorous formalization through the efforts of Augustin-Louis Cauchy and Bernhard Riemann, who addressed foundational issues in integration theory that underpinned change of variables. Cauchy's 1821 Cours d'analyse defined the definite integral via limits of sums under continuity assumptions, enabling precise substitution rules for single-variable cases by ensuring the transformation's differentiability.
Riemann's 1854 habilitation thesis refined this by introducing the modern definition of the integral as a limit of sums over partitions, which supported change of variables theorems under weaker conditions, including for absolutely continuous functions. These developments resolved earlier ambiguities in Leibnizian and Eulerian approaches, ensuring the method's validity in analysis. The primary motivations for developing change of variables stemmed from the challenges of handling composite functions in differentiation and integration, as well as adapting to natural coordinate systems in geometry and physics. In differentiation, it facilitated the chain rule for rates of change in nested variables; in integration, it transformed non-standard forms into recognizable ones, such as using polar coordinates x = r \cos \theta, y = r \sin \theta to integrate over circular domains, avoiding cumbersome Cartesian setups. This approach reduced computational complexity by aligning variables with problem symmetries, preserved measures like areas or volumes under suitable transformations, and enabled exploitation of geometric properties in applications like curve rectification or surface area computations.

Formal Foundations

Single-Variable Case

In the single-variable case, a change of variables is defined using a differentiable map \phi: U \to V between open intervals U and V in \mathbb{R}, where the transformation x = \phi(u) maps a function f defined on V to the composite f \circ \phi defined on U. This simplifies the analysis or computation of derivatives and integrals by reparameterizing the problem in terms of the new variable u. For differentiation, the chain rule provides the key result under appropriate conditions. Suppose f: V \to \mathbb{R} is differentiable at \phi(u) and \phi: U \to V is differentiable at u \in U. Then the composite h(u) = f(\phi(u)) is differentiable at u, and h'(u) = f'(\phi(u)) \cdot \phi'(u). The proof relies on the limit definition of the derivative. Consider h'(u) = \lim_{\Delta u \to 0} \frac{f(\phi(u + \Delta u)) - f(\phi(u))}{\Delta u}. Assuming \phi is differentiable, \Delta x = \phi(u + \Delta u) - \phi(u) = \phi'(u) \Delta u + o(\Delta u) as \Delta u \to 0. Substituting yields h'(u) = \lim_{\Delta u \to 0} \left[ \frac{f(\phi(u) + \Delta x) - f(\phi(u))}{\Delta x} \cdot \frac{\Delta x}{\Delta u} \right] = f'(\phi(u)) \cdot \phi'(u), where the first limit is f'(\phi(u)) by differentiability of f and the second is \phi'(u) by differentiability of \phi; the case \Delta x = 0 is handled by defining the difference quotient there to equal f'(\phi(u)). For integration, the substitution theorem applies to Riemann integrals. Assume \phi: [c, d] \to [a, b] is continuous and strictly increasing with \phi(c) = a, \phi(d) = b, and \phi' exists and is integrable on [c, d]. Let f: [a, b] \to \mathbb{R} be continuous. Then \int_a^b f(x) \, dx = \int_c^d f(\phi(u)) \phi'(u) \, du. The proof uses the fundamental theorem of calculus (FTC). Define F(t) = \int_a^t f(x) \, dx, so F'(t) = f(t) by FTC Part 2 since f is continuous. By the chain rule, (F \circ \phi)'(u) = f(\phi(u)) \phi'(u). Integrating both sides from c to d and applying FTC Part 1 gives \int_c^d (F \circ \phi)'(u) \, du = F(\phi(d)) - F(\phi(c)) = \int_a^b f(x) \, dx, which matches the left side.
The strict monotonicity ensures \phi is a bijection and preserves the orientation of the interval. If \phi is not monotonic, the general formula incorporates an absolute value to account for sign changes in \phi': \int_a^b f(x) \, dx = \int_c^d f(\phi(u)) |\phi'(u)| \, du, provided f \circ \phi \cdot |\phi'| is Riemann integrable and each point of [a, b] is traversed exactly once; when \phi covers parts of [a, b] more than once, the right-hand side counts those contributions with multiplicity. This requires splitting the domain of integration at critical points where \phi' = 0 or changes sign, applying the monotonic case to each subinterval, and summing the results. The absolute value arises because the Riemann integral measures signed length, while substitution should preserve the total measure regardless of direction. Edge cases include constant substitutions and improper integrals. If \phi(u) = k (constant), then \phi'(u) = 0, so \int_c^d f(\phi(u)) \phi'(u) \, du = 0 for bounded continuous f, matching the original integral over a single point. For improper integrals, such as \int_a^\infty f(x) \, dx where the limit exists, a substitution \phi: [c, \infty) \to [a, \infty) with \phi strictly increasing, continuous, and differentiable with \phi' > 0 yields \int_c^\infty f(\phi(u)) \phi'(u) \, du, evaluated as \lim_{d \to \infty} \int_c^d f(\phi(u)) \phi'(u) \, du, preserving convergence under the theorem's conditions.
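As a concrete check of the substitution theorem, the following sketch (a minimal numeric experiment; the map \phi(u) = u^2 and the integrand \cos x are chosen here purely for illustration) verifies that \int_0^4 \cos x \, dx equals \int_0^2 \cos(u^2) \cdot 2u \, du, both of which should equal \sin 4:

```python
import math

def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Left side: integral of cos(x) over [0, 4].
lhs = midpoint(math.cos, 0.0, 4.0)

# Right side after x = phi(u) = u^2, strictly increasing on [0, 2],
# with phi'(u) = 2u supplying the substitution factor.
rhs = midpoint(lambda u: math.cos(u * u) * 2 * u, 0.0, 2.0)

print(lhs, rhs, math.sin(4.0))
```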

Multivariable Case

In the multivariable case, a change of variables is formalized through a differentiable map \Phi: U \subset \mathbb{R}^n \to V \subset \mathbb{R}^n, where U and V are open sets, and the transformation expresses points x \in V as x = \Phi(u) for u \in U. Such a \Phi is required to be a diffeomorphism, meaning it is continuously differentiable (C^1), bijective onto its image, and invertible with a continuously differentiable inverse, ensuring the transformation preserves the structure of the space. The Jacobian matrix J_\Phi(u) of the map \Phi at a point u \in U is the n \times n matrix whose entries are the partial derivatives \frac{\partial x_i}{\partial u_j} for i, j = 1, \dots, n, representing the best linear approximation to \Phi near u. The Jacobian determinant \det(J_\Phi(u)) quantifies the local scaling effect of the transformation on volumes: specifically, |\det(J_\Phi(u))| gives the absolute scaling factor for the volume of small parallelepipeds under the linear approximation J_\Phi(u). For the transformation to be orientation-preserving, \det(J_\Phi(u)) > 0 at all points in U; validity of the change of variables requires only that the determinant be non-zero, with the absolute value handling orientation-reversing maps. A proof sketch for the volume-scaling property relies on the linear approximation: near u, \Phi behaves like the affine map x \approx \Phi(u) + J_\Phi(u)(v - u), where the image of the unit cube in the v-coordinates has volume |\det(J_\Phi(u))| times the original, as the determinant computes the signed volume of the parallelepiped spanned by the columns of J_\Phi(u). This local scaling extends to the global transformation under the diffeomorphism condition. The Jacobian matrix itself underlies the multivariable chain rule for differentiation, providing the derivative of composite functions. The inverse function theorem connects directly to these concepts: if \det(J_\Phi(u)) \neq 0, then \Phi is locally invertible near u, with the inverse's Jacobian determinant given by \det(J_{\Phi^{-1}}(\Phi(u))) = 1 / \det(J_\Phi(u)), guaranteeing the existence of a smooth local inverse and thus the diffeomorphism property locally.
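The local scaling interpretation can be made tangible numerically. The sketch below (the polar map is used here as the example \Phi; the helpers are defined locally) estimates the Jacobian of \Phi(r, \theta) = (r \cos \theta, r \sin \theta) by central differences and checks that its determinant equals r:

```python
import math

def num_jacobian(phi, u, h=1e-6):
    """Central-finite-difference Jacobian of phi: R^2 -> R^2 at point u."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        up, um = list(u), list(u)
        up[j] += h
        um[j] -= h
        fp, fm = phi(up), phi(um)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def det2(J):
    """Determinant of a 2x2 matrix."""
    return J[0][0] * J[1][1] - J[0][1] * J[1][0]

polar = lambda u: (u[0] * math.cos(u[1]), u[0] * math.sin(u[1]))

r, theta = 2.5, 0.8
J = num_jacobian(polar, [r, theta])
# The analytic determinant is r, independent of theta.
print(det2(J))
```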

Differentiation Applications

Chain Rule in Single Variables

The chain rule in single variables provides a fundamental method for differentiating composite functions, where the output of one function serves as the input to another. Consider a composite function f(\phi(u)), where f and \phi are differentiable. The derivative with respect to u is given by \frac{d}{du} [f(\phi(u))] = f'(\phi(u)) \cdot \phi'(u). This rule arises from the need to account for the rate of change of the inner function \phi when computing the overall rate of change. To derive the chain rule from first principles, start with the definition of the derivative. The derivative of the composite at u is \lim_{h \to 0} \frac{f(\phi(u + h)) - f(\phi(u))}{h}. Let \Delta \phi = \phi(u + h) - \phi(u), so the expression becomes \lim_{h \to 0} \frac{f(\phi(u) + \Delta \phi) - f(\phi(u))}{h} = \lim_{h \to 0} \left( \frac{f(\phi(u) + \Delta \phi) - f(\phi(u))}{\Delta \phi} \cdot \frac{\Delta \phi}{h} \right). Assuming \phi is differentiable, \lim_{h \to 0} \frac{\Delta \phi}{h} = \phi'(u). Similarly, since f is differentiable at \phi(u), \lim_{\Delta \phi \to 0} \frac{f(\phi(u) + \Delta \phi) - f(\phi(u))}{\Delta \phi} = f'(\phi(u)), and \Delta \phi \to 0 as h \to 0. Thus, the limit simplifies to f'(\phi(u)) \cdot \phi'(u). This derivation relies on the continuity and differentiability assumptions on f and \phi (and, strictly, on treating the case \Delta \phi = 0 separately). A classic example is differentiating y = \sin(x^2). Here, let \phi(x) = x^2 and f(u) = \sin u, so \frac{dy}{dx} = \cos(x^2) \cdot 2x. This illustrates how the chain rule multiplies the derivative of the outer function by that of the inner one. Another application appears in parametric curves, such as position x(t) and velocity v = \frac{dx}{dt}. If x = \phi(u) and u = u(t), then v = \frac{dx}{dt} = \phi'(u) \cdot \frac{du}{dt}, enabling computation of rates in terms of intermediate variables. Implicit differentiation employs the chain rule as a change-of-variables technique when an equation defines y implicitly as a function of x, without solving explicitly.
Differentiate both sides with respect to x, treating y as a function of x, so terms involving y require the chain rule: \frac{dy}{dx} multiplies the derivative with respect to y. For instance, in x^2 + y^2 = 1, differentiating yields 2x + 2y \frac{dy}{dx} = 0, so \frac{dy}{dx} = -\frac{x}{y}. This method is essential for relations not expressible as y = f(x). For higher-order derivatives of composite functions, the chain rule extends via Faà di Bruno's formula, which generalizes it to the nth derivative. For the second derivative, \frac{d^2}{du^2} [f(\phi(u))] = f''(\phi(u)) [\phi'(u)]^2 + f'(\phi(u)) \phi''(u). This formula, first published by Francesco Faà di Bruno in 1855, accounts for all ways the inner function's derivatives contribute to the outer one's higher derivatives through partitions, which can be encoded with Bell polynomials. It is particularly useful when analyzing the higher-order behavior of composite functions. Computational tips for applying the chain rule include identifying the outermost function first and working inward, as in the \sin(x^2) example. For products or quotients raised to powers, such as y = \frac{x^2 (x+1)^3}{e^x}, logarithmic differentiation simplifies the process: take \ln y = \ln(x^2) + 3\ln(x+1) - x, differentiate to get \frac{1}{y} \frac{dy}{dx} = \frac{2}{x} + \frac{3}{x+1} - 1, then multiply by y. This leverages the chain rule on the logarithm to avoid repeated product and quotient rules.
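The second-derivative formula can be sanity-checked numerically. For f(u) = \sin u and \phi(x) = x^2 it gives \frac{d^2}{dx^2}\sin(x^2) = -\sin(x^2)(2x)^2 + 2\cos(x^2); the sketch below (helper functions defined here) compares this against a central second difference:

```python
import math

# Faà di Bruno for the second derivative of sin(x^2):
#   (f∘phi)'' = f''(phi) * (phi')^2 + f'(phi) * phi''
def second_formula(x):
    return -math.sin(x * x) * (2 * x) ** 2 + math.cos(x * x) * 2

def second_numeric(x, h=1e-4):
    """Central second-difference approximation of (sin(x^2))''."""
    g = lambda t: math.sin(t * t)
    return (g(x + h) - 2 * g(x) + g(x - h)) / (h * h)

x0 = 0.7
print(second_formula(x0), second_numeric(x0))
```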

Jacobian and Multivariable Differentiation

In multivariable calculus, the chain rule extends to compositions of vector-valued functions, where the Jacobian matrix plays a central role. Consider functions F: \mathbb{R}^m \to \mathbb{R}^n and G: \mathbb{R}^k \to \mathbb{R}^m, both differentiable at the appropriate points. The Jacobian matrix of the composition F \circ G at a point \mathbf{t} \in \mathbb{R}^k is given by D(F \circ G)(\mathbf{t}) = DF(G(\mathbf{t})) \cdot DG(\mathbf{t}), where DF and DG denote the Jacobian matrices of F and G, respectively. This matrix equation generalizes the single-variable rule (f \circ g)' = f'(g) g' to higher dimensions, representing the total derivative of the composite map. The differential under a change of variables \mathbf{x} = \Phi(\mathbf{u}), where \Phi: \mathbb{R}^k \to \mathbb{R}^m is differentiable, is captured by d\mathbf{x} = J(\Phi)(\mathbf{u}) \, d\mathbf{u}, with J(\Phi) the Jacobian matrix of \Phi. This relation provides the first-order approximation of the transformation near a point, essential for understanding how changes in the new variables \mathbf{u} map to changes in the original variables \mathbf{x}. For instance, in computing partial derivatives after a variable change, the chain rule yields \frac{\partial f}{\partial u_i} = \sum_j \frac{\partial f}{\partial x_j} \frac{\partial x_j}{\partial u_i}, or in vector form, \nabla_{\mathbf{u}} (f \circ \Phi) = J(\Phi)^T \nabla_{\mathbf{x}} f. Equivalently, the gradient in the original coordinates transforms as \nabla_{\mathbf{x}} f = J(\Phi)^{-T} \nabla_{\mathbf{u}} (f \circ \Phi), assuming J(\Phi) is invertible. This framework is particularly useful for computing gradients in curvilinear coordinate systems, where the Jacobian facilitates the inclusion of scale factors. In orthogonal curvilinear coordinates, the gradient operator takes the form \nabla f = \sum_\alpha \frac{1}{h_\alpha} \frac{\partial f}{\partial y_\alpha} \hat{e}_\alpha, with scale factors h_\alpha = \left\| \frac{\partial \mathbf{r}}{\partial y_\alpha} \right\| derived from the Jacobian entries of the position vector \mathbf{r}(y_1, \dots, y_k).
These scale factors account for the stretching or compression in each coordinate direction, enabling efficient evaluation of directional derivatives without reverting to Cartesian components. For higher-order derivatives, the Hessian matrix under a change of variables is more complex, involving both the Jacobian of the transformation and second derivatives of the coordinate map. Specifically, for a scalar function f, the Hessian of f \circ \Phi at \mathbf{u} is H_{\mathbf{u}}(f \circ \Phi) = J(\Phi)^T H_{\mathbf{x}} f \, J(\Phi) + \sum_k (\nabla_{\mathbf{x}} f)_k \, D^2 \Phi_k, where D^2 \Phi_k is the Hessian of the k-th component of \Phi. This formula arises from applying the multivariable chain rule twice, highlighting the quadratic approximation's sensitivity to the curvature of the transformation.
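A quick numeric check of the gradient transformation \nabla_{\mathbf{u}} (f \circ \Phi) = J(\Phi)^T \nabla_{\mathbf{x}} f, using the polar map and the illustrative function f(x, y) = x^2 y (both chosen here as examples, not taken from the text):

```python
import math

# f(x, y) = x^2 * y;  Phi(r, t) = (r cos t, r sin t).
r, t = 1.3, 0.4
x, y = r * math.cos(t), r * math.sin(t)

grad_x = (2 * x * y, x * x)              # analytic Cartesian gradient
J = [[math.cos(t), -r * math.sin(t)],    # row 1: dx/dr, dx/dt
     [math.sin(t),  r * math.cos(t)]]    # row 2: dy/dr, dy/dt

# J^T grad_x gives the gradient with respect to (r, t).
lhs = (J[0][0] * grad_x[0] + J[1][0] * grad_x[1],
       J[0][1] * grad_x[0] + J[1][1] * grad_x[1])

# Compare against finite-difference partials of f∘Phi in r and t.
g = lambda r_, t_: (r_ * math.cos(t_)) ** 2 * (r_ * math.sin(t_))
h = 1e-6
rhs = ((g(r + h, t) - g(r - h, t)) / (2 * h),
       (g(r, t + h) - g(r, t - h)) / (2 * h))
print(lhs, rhs)
```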

Integration Applications

Substitution in Single Integrals

Substitution in single integrals, also known as u-substitution, applies the change of variables theorem to one-dimensional cases, transforming ∫ f(x) dx into ∫ f(g^{-1}(u)) du / g'(g^{-1}(u)), where u = g(x) and g is differentiable with g' ≠ 0 on the interval of integration. This method simplifies integrands by reversing the chain rule, particularly for composites, since ∫ f(g(x)) g'(x) dx = ∫ f(u) du. For definite integrals, the substitution requires adjusting the limits of integration according to the new variable u, ensuring the mapping preserves the orientation or accounting for reversals via the absolute value of the derivative. A classic example is the trigonometric substitution for integrals involving √(1 - x²). Consider the indefinite integral ∫ dx / √(1 - x²). Let x = sin θ, so dx = cos θ dθ and √(1 - x²) = cos θ (assuming θ in [-π/2, π/2], where cos θ ≥ 0). The integral becomes ∫ (cos θ dθ) / cos θ = ∫ dθ = θ + C = arcsin x + C. For the definite integral from 0 to 1, the limits change from x = 0 (θ = 0) to x = 1 (θ = π/2), yielding ∫_0^{π/2} dθ = π/2 = arcsin 1 - arcsin 0. Another example illustrates bound adjustments and sign handling in definite integrals. Evaluate ∫_0^1 x √(1 - x) dx. Let u = 1 - x, so du = -dx and x = 1 - u. The limits shift from x = 0 (u = 1) to x = 1 (u = 0), transforming the integral to -∫_1^0 (1 - u) √u du = ∫_0^1 (1 - u) u^{1/2} du. Expanding gives ∫_0^1 (u^{1/2} - u^{3/2}) du = [ (2/3) u^{3/2} - (2/5) u^{5/2} ]_0^1 = 2/3 - 2/5 = 4/15. The negative sign from du flips the bounds, equivalent to taking |du/dx| = 1 to preserve the positive measure. Common techniques extend substitution to specific forms. Euler substitutions simplify integrals with radicals of quadratic arguments, such as ∫ R(x, √(ax² + bx + c)) dx where R is rational.
The first Euler substitution sets √(ax² + bx + c) = ±√a x + t (for a > 0), rationalizing the radical into a rational function of t; the second uses √(ax² + bx + c) = x t ± √c (for c > 0); the third applies √(ax² + bx + c) = (x - α) t, where α is a real root of ax² + bx + c, available when b² - 4ac > 0. These yield rational functions integrable via partial fractions. Hyperbolic substitutions handle forms like √(x² - 1) or √(x² + 1), analogous to trigonometric ones but using identities like cosh² u - sinh² u = 1. For ∫ dx / √(x² - 1), let x = cosh u (u ≥ 0), dx = sinh u du, simplifying to ∫ du = arcosh x + C, useful for exponential-related integrands. Substitution can fail if the mapping g(x) = u is not bijective over the interval, as the change of variables assumes a one-to-one correspondence to avoid over- or under-counting contributions. For non-monotonic g, such as g(x) = x² on [-1, 1], the integral must be split into piecewise monotonic subintervals (e.g., [-1, 0] and [0, 1]) where the substitution applies separately, then summed. Failure to do so distorts the result, as the inverse is multi-valued. Numerically, the factor |du/dx| in the substitution accounts for the "speed" of the mapping, scaling the infinitesimal dx to du by the local stretching or compression rate: a large |du/dx| stretches intervals in x into larger ones in u, while a small value compresses them, and the reciprocal factor in the transformed integrand compensates so the total is unchanged. This ensures the transformed integral preserves the original's value, including for improper integrals.
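The two worked definite integrals above lend themselves to a direct numeric check (standard library only; `midpoint` is a helper defined here). The x = sin θ substitution is especially convenient numerically because it removes the singularity of 1/√(1 - x²) at x = 1:

```python
import math

def midpoint(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# x = sin(theta) turns the singular integral of 1/sqrt(1 - x^2) on [0, 1]
# into the trivial integral of 1 over [0, pi/2].
via_sub = midpoint(lambda t: 1.0, 0.0, math.pi / 2)

# u = 1 - x: both forms of the second example should give 4/15.
direct = midpoint(lambda x: x * math.sqrt(1 - x), 0.0, 1.0)
transformed = midpoint(lambda u: (1 - u) * math.sqrt(u), 0.0, 1.0)
print(via_sub, direct, transformed)
```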

Change of Variables in Multiple Integrals

The change of variables theorem for multiple integrals provides a method to transform the variables of integration in Riemann integrals over regions in \mathbb{R}^n, facilitating the evaluation of integrals that are difficult in the original coordinates. This theorem generalizes the substitution rule from single-variable calculus to higher dimensions, incorporating the Jacobian determinant to account for the scaling of volume elements under the transformation. For a continuously differentiable bijection \Phi: U \to V between open sets in \mathbb{R}^n, where D \subset V is a bounded region and f: D \to \mathbb{R} is a bounded continuous function, the theorem states that \int_D f(\mathbf{x}) \, d\mathbf{x} = \int_{\Phi^{-1}(D)} f(\Phi(\mathbf{u})) \left| \det J_\Phi(\mathbf{u}) \right| \, d\mathbf{u}, where J_\Phi(\mathbf{u}) is the Jacobian matrix of \Phi at \mathbf{u}, and \det J_\Phi(\mathbf{u}) is its determinant. This formula applies to Riemann integrals and assumes \Phi is a diffeomorphism, ensuring the transformation is invertible with a continuously differentiable inverse. The proof of the theorem relies on approximating the integral via Riemann sums over partitions of the domain, where the transformation distorts infinitesimal volume elements. Specifically, for a small parallelepiped in the \mathbf{u}-space with volume |\Delta \mathbf{u}|, its image under \Phi approximates a parallelepiped in the \mathbf{x}-space with volume |\det J_\Phi(\mathbf{u})| \cdot |\Delta \mathbf{u}|, as the Jacobian matrix linearly maps the edges and scales volumes by its determinant's absolute value. Summing these contributions and taking the limit as the partition refines yields the integral transformation, with the absolute value ensuring the volume measure remains positive regardless of orientation. This approach highlights the Jacobian's role in preserving the integral's value through local linear approximations of the mapping.
A classic application arises when evaluating \iint_D x \, dA over the disk D = \{(x,y) \mid x^2 + y^2 \leq R^2\} using polar coordinates, where the transformation is x = r \cos \theta, y = r \sin \theta, with 0 \leq r \leq R and 0 \leq \theta \leq 2\pi. The Jacobian matrix is J = \begin{pmatrix} \cos \theta & -r \sin \theta \\ \sin \theta & r \cos \theta \end{pmatrix}, and \det J = r, so |\det J| = r since r \geq 0. Substituting yields \iint_D x \, dA = \int_0^{2\pi} \int_0^R (r \cos \theta) \cdot r \, dr \, d\theta = \int_0^{2\pi} \cos \theta \, d\theta \int_0^R r^2 \, dr = 0, as the angular integral vanishes by symmetry, demonstrating the theorem's utility in simplifying symmetric regions. The absolute value in the formula addresses orientation: if \det J_\Phi > 0, the transformation preserves orientation (right-handed to right-handed), while \det J_\Phi < 0 reverses it, but the absolute value ensures the unsigned Riemann integral remains positive and correctly scaled for volume. In contexts involving differential forms or oriented integrals, the signed determinant is used instead to reflect the orientation change. The theorem is compatible with Fubini's theorem, allowing the transformed multiple integral to be evaluated as an iterated integral over rectangular or simple regions in the new variables, provided the integrand satisfies the necessary continuity conditions for Fubini to apply post-transformation. For a generalization to Lebesgue integrals, the theorem extends under milder measurability assumptions, replacing Riemann sums with measure-theoretic limits.
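To see the Jacobian factor r at work on a non-vanishing integrand, the sketch below uses the illustrative function f(x, y) = x² + y² (chosen here; its exact integral over the unit disk is π/2) and computes the double integral both by a Cartesian midpoint sum restricted to the disk and by a polar sum that includes the factor r:

```python
import math

R = 1.0
f = lambda x, y: x * x + y * y   # exact integral over the unit disk: pi/2

# Cartesian midpoint sum over a grid, keeping cells whose center lies
# inside the disk (the jagged boundary introduces a small error).
n = 400
h = 2 * R / n
cart = 0.0
for i in range(n):
    for j in range(n):
        x = -R + (i + 0.5) * h
        y = -R + (j + 0.5) * h
        if x * x + y * y <= R * R:
            cart += f(x, y) * h * h

# Polar sum: the integrand is r^2 and the area element contributes r,
# so the double integral collapses to 2*pi times the integral of r^3.
m = 200_000
dr = R / m
polar = 2 * math.pi * sum(((i + 0.5) * dr) ** 3 * dr for i in range(m))

print(cart, polar, math.pi / 2)
```

The polar sum converges much faster because the transformed domain is a rectangle in (r, θ), with no boundary to approximate.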

Coordinate Transformations

Polar and Cylindrical Coordinates

In polar coordinates, the change of variables from Cartesian coordinates (x, y) to (r, \theta) is given by x = r \cos \theta and y = r \sin \theta, where r \geq 0 and \theta \in [0, 2\pi). The Jacobian determinant for this transformation is r, so the area element transforms as dx\, dy = r\, dr\, d\theta. This factor arises from the partial derivatives of the transformation and is essential for integrals over regions with circular symmetry. A classic application is evaluating the Gaussian integral \iint_{-\infty}^{\infty} e^{-(x^2 + y^2)} \, dx\, dy. Switching to polar coordinates yields \int_0^{2\pi} \int_0^{\infty} r e^{-r^2} \, dr\, d\theta = \pi, simplifying the computation by exploiting radial symmetry. For differentiation, the gradient of a scalar function f(r, \theta) in polar coordinates is \nabla f = \frac{\partial f}{\partial r} \hat{r} + \frac{1}{r} \frac{\partial f}{\partial \theta} \hat{\theta}, reflecting the scaled angular component due to the coordinate geometry. The Laplacian operator transforms to \Delta f = \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial f}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 f}{\partial \theta^2}, which is derived via the chain rule and facilitates solving partial differential equations in polar settings. Cylindrical coordinates extend polar coordinates to three dimensions by keeping z unchanged, so x = r \cos \theta, y = r \sin \theta, z = z, with r \geq 0, \theta \in [0, 2\pi), and z \in \mathbb{R}. The Jacobian determinant remains r, transforming the volume element to dV = r\, dr\, d\theta\, dz, suitable for regions with axial symmetry along the z-axis. For example, the area of an annulus between radii a and b (with 0 < a < b) is computed as \int_0^{2\pi} \int_a^b r\, dr\, d\theta = \pi (b^2 - a^2), directly using the Jacobian for the radial integration.
Line integrals around circular paths, such as \int_C \mathbf{F} \cdot d\mathbf{r} for a vector field in the plane, simplify in polar form by parameterizing \mathbf{r}( \theta ) = (r \cos \theta, r \sin \theta) with d\mathbf{r} = r (-\sin \theta, \cos \theta) d\theta, yielding \int_0^{2\pi} \mathbf{F}(r \cos \theta, r \sin \theta) \cdot (-r \sin \theta, r \cos \theta) \, d\theta. These transformations have limitations: the Jacobian vanishes at r = 0, introducing a singularity that requires careful handling in integrals or derivatives near the origin, and \theta's periodicity demands consistent branch choices to avoid discontinuities.
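Both the Gaussian integral and the annulus area above reduce to one-dimensional radial integrals once the factor r is included. A minimal numeric confirmation (standard library only; `midpoint` is a helper defined here, and the Gaussian's infinite upper limit is truncated at r = 10, which changes the value by less than e^{-100}):

```python
import math

def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Gaussian: the double integral equals 2*pi * integral of r e^{-r^2} = pi.
gauss = 2 * math.pi * midpoint(lambda r: r * math.exp(-r * r), 0.0, 10.0)

# Annulus area between radii a and b: 2*pi * integral of r dr = pi(b^2 - a^2).
a, b = 1.0, 2.0
annulus = 2 * math.pi * midpoint(lambda r: r, a, b)
print(gauss, annulus)
```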

Spherical and Other Curvilinear Systems

Spherical coordinates provide a natural change of variables for problems exhibiting spherical symmetry in three-dimensional space, transforming from Cartesian coordinates (x, y, z) to radial distance \rho, polar angle \phi, and azimuthal angle \theta. The transformation is given by \begin{align*} x &= \rho \sin \phi \cos \theta, \\ y &= \rho \sin \phi \sin \theta, \\ z &= \rho \cos \phi, \end{align*} where \rho \geq 0, 0 \leq \phi \leq \pi, and 0 \leq \theta < 2\pi. The Jacobian determinant for this transformation, essential for changing variables in integrals, is \det J = \rho^2 \sin \phi. This factor accounts for the distortion in volume elements under the coordinate change. In integration, the volume element in spherical coordinates becomes dV = \rho^2 \sin \phi \, d\rho \, d\phi \, d\theta. For example, the volume of the unit ball \rho \leq 1 is computed as \iiint 1 \, dV = \int_0^\pi \int_0^{2\pi} \int_0^1 \rho^2 \sin \phi \, d\rho \, d\theta \, d\phi = \frac{4\pi}{3}, demonstrating the utility of the Jacobian in simplifying spherical integrals. Such transformations are particularly effective for integrating over spherically symmetric regions, like spheres or balls. For differentiation, the gradient of a scalar function f in spherical coordinates is \nabla f = \frac{\partial f}{\partial \rho} \hat{\rho} + \frac{1}{\rho} \frac{\partial f}{\partial \phi} \hat{\phi} + \frac{1}{\rho \sin \phi} \frac{\partial f}{\partial \theta} \hat{\theta}.
The divergence of a vector field \mathbf{F} = F_\rho \hat{\rho} + F_\phi \hat{\phi} + F_\theta \hat{\theta} is \nabla \cdot \mathbf{F} = \frac{1}{\rho^2} \frac{\partial}{\partial \rho} (\rho^2 F_\rho) + \frac{1}{\rho \sin \phi} \frac{\partial}{\partial \phi} (\sin \phi F_\phi) + \frac{1}{\rho \sin \phi} \frac{\partial F_\theta}{\partial \theta}, and the curl is \nabla \times \mathbf{F} = \frac{1}{\rho \sin \phi} \left[ \frac{\partial}{\partial \phi} (\sin \phi F_\theta) - \frac{\partial F_\phi}{\partial \theta} \right] \hat{\rho} + \frac{1}{\rho} \left[ \frac{1}{\sin \phi} \frac{\partial F_\rho}{\partial \theta} - \frac{\partial}{\partial \rho} (\rho F_\theta) \right] \hat{\phi} + \frac{1}{\rho} \left[ \frac{\partial}{\partial \rho} (\rho F_\phi) - \frac{\partial F_\rho}{\partial \phi} \right] \hat{\theta}. These expressions arise from the general formulas for orthogonal curvilinear coordinates, adapted to the scale factors in spherical systems: h_\rho = 1, h_\phi = \rho, h_\theta = \rho \sin \phi. Other curvilinear systems include toroidal coordinates, suitable for ring-like or toroidal regions such as those in plasma physics or vortex flows. The transformation from Cartesian to toroidal coordinates (\xi, \eta, \phi) is \begin{align*} x &= \frac{a \sinh \eta \cos \phi}{\cosh \eta - \cos \xi}, \\ y &= \frac{a \sinh \eta \sin \phi}{\cosh \eta - \cos \xi}, \\ z &= \frac{a \sin \xi}{\cosh \eta - \cos \xi}, \end{align*} with 0 \leq \xi < 2\pi, \eta \geq 0, 0 \leq \phi < 2\pi, and a > 0 a scale parameter. The scale factors are h_\xi = h_\eta = a / (\cosh \eta - \cos \xi) and h_\phi = a \sinh \eta / (\cosh \eta - \cos \xi), yielding a Jacobian determinant of |\det J| = a^3 \sinh \eta / (\cosh \eta - \cos \xi)^3. This system facilitates integration over toroidal volumes by aligning coordinates with the geometry.
Applications of these transformations abound in physics, particularly for computing gravitational potentials around spherically symmetric masses, where the change to spherical coordinates simplifies the Poisson equation \nabla^2 \Phi = 4\pi G \rho due to radial symmetry. For instance, the potential outside a uniform sphere integrates straightforwardly using the spherical volume element, yielding \Phi(r) = -GM/r for r greater than the sphere's radius.
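The unit-ball computation above factorizes: \rho^2 \sin \phi is a product of functions of \rho, \phi, and \theta separately, so the triple integral is a product of three one-dimensional integrals. That makes a numeric check cheap (standard library only; `midpoint` is a helper defined here):

```python
import math

def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# dV = rho^2 sin(phi) d(rho) d(phi) d(theta) separates over the unit
# ball, so the volume is a product of one-dimensional integrals.
vol = (midpoint(lambda rho: rho * rho, 0.0, 1.0)   # -> 1/3
       * midpoint(math.sin, 0.0, math.pi)          # -> 2
       * 2 * math.pi)                              # theta integral

print(vol, 4 * math.pi / 3)
```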

Advanced and Specialized Uses

In Differential Equations

Change of variables is a fundamental technique in the solution of differential equations, allowing the transformation of complex equations into simpler forms that are more amenable to standard solution methods. In ordinary differential equations (ODEs), substitutions exploit the structure of the equation to reduce its order or nonlinearity, while in partial differential equations (PDEs), they often align the equation with characteristic curves or symmetry properties to yield explicit solutions. This approach preserves the essential dynamics while simplifying computations, and it underpins many analytical methods in applied mathematics. For ODEs, change of variables is particularly useful for homogeneous equations of the form \frac{dy}{dx} = f\left(\frac{y}{x}\right), where the right-hand side depends only on the ratio y/x. The substitution v = y/x, or equivalently y = v x, transforms the equation by differentiating to obtain \frac{dy}{dx} = v + x \frac{dv}{dx}, yielding x \frac{dv}{dx} = f(v) - v after rearrangement. This separates variables, giving \int \frac{dv}{f(v) - v} = \int \frac{dx}{x}. The result is a separable equation solvable by direct integration, demonstrating how the substitution exploits the scaling invariance of the homogeneous form. Another key application in ODEs is the Bernoulli equation, \frac{dy}{dx} + P(x) y = Q(x) y^n with n \neq 0, 1. The substitution v = y^{1-n} linearizes the nonlinearity: differentiating gives \frac{dv}{dx} = (1-n) y^{-n} \frac{dy}{dx}, so multiplying the original equation by (1-n) y^{-n} yields the linear form \frac{dv}{dx} + (1-n) P(x) v = (1-n) Q(x). This linear ODE in v can then be solved using an integrating factor, after which back-substitution recovers y. For exact equations, substitutions may also facilitate finding an integrating factor when the equation is not immediately exact, though this is case-specific. In PDEs, the method of characteristics employs change of variables to solve equations like a(x,t,u) u_x + b(x,t,u) u_t = c(x,t,u).
The characteristics are curves parameterized by \frac{dx}{ds} = a, \frac{dt}{ds} = b, \frac{du}{ds} = c, and new variables such as \xi = x - c t (for the transport equation u_t + c u_x = 0 with constant c) align the PDE with these curves. Substituting yields u_\xi = 0 along characteristics, implying u is constant on them, so the general solution is u(x,t) = f(x - c t) for arbitrary f. This reduces the PDE to an ODE system along the characteristics.

Similarity solutions arise in PDEs with scaling symmetries, such as the heat equation u_t = u_{xx}. The change of variables \eta = x / \sqrt{t} and u(x,t) = t^{-1/2} f(\eta) (or a similar scaling) transforms the PDE into an ODE in \eta: f'' + \frac{\eta}{2} f' + \frac{1}{2} f = 0, whose solutions capture self-similar profiles invariant under simultaneous rescaling of time and space. This method reveals fundamental behaviors like diffusion fronts without solving the full initial-value problem.

More advanced changes of variables stem from Lie symmetries, where infinitesimal transformations generated by a Lie group leave the equation invariant. For an ODE or PDE, symmetries yield invariants that define new coordinates, reducing the equation's order or dimensionality; for instance, scaling symmetries in the heat equation lead directly to the similarity variable \eta. This framework, developed by Sophus Lie, systematically identifies substitutions based on the equation's symmetry group, enabling solutions via canonical forms.
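The similarity reduction can be verified numerically: f(\eta) = e^{-\eta^2/4} satisfies the reduced ODE, so u(x,t) = t^{-1/2} e^{-x^2/(4t)} should solve the heat equation. The sketch below checks the residual u_t - u_{xx} with central finite differences at an arbitrary sample point:

```python
import math

def u(x, t):
    # Self-similar solution u = t^{-1/2} f(eta), eta = x / sqrt(t), with
    # f(eta) = exp(-eta^2 / 4) satisfying f'' + (eta/2) f' + f/2 = 0.
    return t**-0.5 * math.exp(-x*x / (4.0*t))

def residual(x, t, h=1e-4):
    """Central-difference estimate of u_t - u_xx; should vanish for a heat-equation solution."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2*h)
    u_xx = (u(x + h, t) - 2*u(x, t) + u(x - h, t)) / (h*h)
    return u_t - u_xx

print(abs(residual(0.7, 1.3)) < 1e-5)  # True: u solves u_t = u_xx
```

Up to finite-difference error, the residual vanishes at every (x, t) with t > 0, consistent with u being (a multiple of) the heat kernel.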

In Physics and Mechanics

In physics and mechanics, the change of variables is essential for reformulating the equations of motion in terms of generalized coordinates, which simplify the description of complex systems by exploiting symmetries and constraints. In Lagrangian mechanics, the Lagrangian L(\mathbf{q}, \dot{\mathbf{q}}), where \mathbf{q} = (q_1, \dots, q_n) are the generalized coordinates and \dot{\mathbf{q}} their time derivatives, encodes the system's kinetic energy minus its potential energy. These coordinates can be any set of independent parameters that uniquely specify the system's configuration, such as angles or lengths, rather than Cartesian positions. The equations of motion arise from the Euler-Lagrange equations: \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_i} \right) - \frac{\partial L}{\partial q_i} = 0 for each i.

Changes of variables to new generalized coordinates Q_j = Q_j(\mathbf{q}, t) via time-dependent point transformations preserve the form of these equations, allowing the Lagrangian to be expressed in the new variables as L'(\mathbf{Q}, \dot{\mathbf{Q}}) = L(\mathbf{q}(\mathbf{Q}, t), \dot{\mathbf{q}}(\mathbf{Q}, \dot{\mathbf{Q}}, t)), up to a total time derivative that does not affect the equations of motion. This invariance facilitates the analysis of systems with rotational or other symmetries.

A key application is in central force problems, where polar coordinates (r, \theta) serve as generalized coordinates for a particle under a radial potential V(r). The Lagrangian becomes L = \frac{1}{2} m (\dot{r}^2 + r^2 \dot{\theta}^2) - V(r), and the Euler-Lagrange equation for \theta implies conservation of angular momentum: p_\theta = m r^2 \dot{\theta} = \text{constant}. This conserved quantity arises from the cyclic nature of \theta in the Lagrangian, highlighting how coordinate choices reveal physical invariants.
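A short simulation illustrates this conservation law. The sketch below integrates the polar Euler-Lagrange equations for the illustrative Kepler potential V(r) = -k/r with m = k = 1 (so \ddot{r} = r\dot{\theta}^2 - 1/r^2 and \ddot{\theta} = -2\dot{r}\dot{\theta}/r) and checks that p_\theta = m r^2 \dot{\theta} stays constant along the trajectory:

```python
# Central-force motion in polar coordinates (illustrative: V(r) = -k/r, m = k = 1).
# Euler-Lagrange equations: r'' = r th'^2 - k/r^2,  th'' = -2 r' th' / r.
def deriv(s):
    r, rdot, th, thdot = s
    return [rdot, r*thdot**2 - 1.0/r**2, thdot, -2.0*rdot*thdot/r]

def rk4_step(s, h):
    def nudge(state, k, c):
        return [x + c*y for x, y in zip(state, k)]
    k1 = deriv(s)
    k2 = deriv(nudge(s, k1, h/2))
    k3 = deriv(nudge(s, k2, h/2))
    k4 = deriv(nudge(s, k3, h))
    return [s[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(4)]

s = [1.0, 0.0, 0.0, 1.2]   # r, r', theta, theta' at t = 0 (a bound orbit)
p0 = s[0]**2 * s[3]        # p_theta = m r^2 theta'
for _ in range(5000):      # integrate to t = 10 with step h = 0.002
    s = rk4_step(s, 0.002)
p1 = s[0]**2 * s[3]
print(abs(p1 - p0) < 1e-6)  # True: angular momentum is conserved
```

Although the generic Runge-Kutta integrator does not enforce any conservation law, p_\theta remains constant to within the integration error, because \theta is cyclic in the Lagrangian.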
The canonical momentum p_i = \frac{\partial L}{\partial \dot{q}_i} generally differs from the linear momentum m \dot{\mathbf{x}} in non-Cartesian coordinates; for instance, in the polar example, p_\theta = m r^2 \dot{\theta} represents angular rather than linear momentum, while p_r = m \dot{r} aligns with the radial component. This distinction is crucial in curvilinear frames, where the generalized velocities \dot{\mathbf{q}} do not directly correspond to physical velocities.

In fluid mechanics, change of variables via scaling introduces dimensionless forms of the Navier-Stokes equations, such as rescaling position x' = x/L, time t' = t U/L, and velocity \mathbf{u}' = \mathbf{u}/U, where L is a characteristic length and U a characteristic velocity. This yields the Reynolds number \mathrm{Re} = U L / \nu as a coefficient governing the balance between inertial and viscous terms, enabling analysis of flow regimes without dimensional constants.

In the Hamiltonian formulation, changes of variables in phase space from (\mathbf{q}, \mathbf{p}) to new coordinates (\mathbf{Q}, \mathbf{P}) must be canonical transformations to preserve the symplectic structure, ensuring Hamilton's equations \dot{q}_i = \frac{\partial H}{\partial p_i}, \dot{p}_i = -\frac{\partial H}{\partial q_i} retain their form for the transformed Hamiltonian H'(\mathbf{Q}, \mathbf{P}). Such transformations, generated by functions like F(\mathbf{q}, \mathbf{Q}), maintain the Poisson brackets \{q_i, p_j\} = \delta_{ij} and thus the underlying geometry of phase space.
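Whether a given change of variables is canonical can be tested by computing the Poisson bracket of the new coordinates. The sketch below uses the illustrative one-degree-of-freedom transformation Q = q^2, P = p/(2q), for which \{Q, P\} = \frac{\partial Q}{\partial q}\frac{\partial P}{\partial p} - \frac{\partial Q}{\partial p}\frac{\partial P}{\partial q} = 2q \cdot \frac{1}{2q} = 1, and confirms this by finite differences:

```python
def Q(q, p):
    return q*q

def P(q, p):
    return p / (2.0*q)

def poisson_bracket(F, G, q, p, h=1e-6):
    """{F, G} = dF/dq * dG/dp - dF/dp * dG/dq, by central differences."""
    Fq = (F(q + h, p) - F(q - h, p)) / (2*h)
    Fp = (F(q, p + h) - F(q, p - h)) / (2*h)
    Gq = (G(q + h, p) - G(q - h, p)) / (2*h)
    Gp = (G(q, p + h) - G(q, p - h)) / (2*h)
    return Fq*Gp - Fp*Gq

print(poisson_bracket(Q, P, 1.7, -0.4))  # ≈ 1.0, so the map is canonical
```

A bracket of 1 at every phase-space point is exactly the condition \{Q, P\} = 1 that characterizes canonical maps in one degree of freedom.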

    Sep 27, 2023 · Canonical transformations, defined here as those that preserve the Poisson brackets or equivalently the symplectic 2-form, also preserve ...Missing: structure | Show results with:structure