
Covariant transformation

In mathematics and physics, a covariant transformation describes the rule by which the components of a covariant tensor, such as a covector or higher-rank tensor with lower indices, change under a coordinate transformation in a manifold, preserving the tensor's geometric and physical meaning across different bases. For a rank-1 covariant vector V_i, the components in the new coordinate system V'_{i'} are given by V'_{i'} = \sum_j \frac{\partial x^j}{\partial x^{i'}} V_j, where x^j are the old coordinates and x^{i'} the new ones. This law extends to higher-rank covariant tensors by including a factor of \frac{\partial x^k}{\partial x^{l'}} for each lower index, ensuring the tensor's multilinearity and coordinate independence. Covariant transformations are distinguished from contravariant ones, which use the partial derivatives \frac{\partial x^{i'}}{\partial x^j} for upper indices, reflecting the dual nature of basis vectors and covectors in vector spaces. These properties satisfy the transitive nature of tensor transformations: applying successive coordinate changes yields the same result as a direct transformation between the initial and final systems. In non-orthogonal bases, covariant components are defined with respect to the reciprocal basis, where the inner product between basis and reciprocal basis vectors yields the Kronecker delta, facilitating consistent projections. In physics, particularly general relativity, covariant transformations underpin the principle of general covariance, ensuring that fundamental equations such as Einstein's field equations retain their form under arbitrary diffeomorphisms of coordinates. This invariance allows physical laws to be expressed in a tensorial manner, independent of the observer's coordinate choice, and is essential for describing phenomena like gravity as spacetime curvature without privileging any frame. Applications extend to electromagnetism, where the electromagnetic field tensor transforms covariantly to maintain the form of Maxwell's equations across inertial and non-inertial frames.

Fundamental Concepts

Covariant Transformation

In tensor analysis, a quantity is said to transform covariantly if, under a change of basis or coordinates, its components in the new system are obtained by multiplying the original components by the inverse of the Jacobian matrix of the transformation. For a rank-1 covariant tensor T_i, this transformation law is expressed as T'_i = \frac{\partial x^j}{\partial x'^i} T_j, where the Einstein summation convention is used over the repeated index j, and x^j and x'^i denote the old and new coordinates, respectively. This rule ensures that the tensor's geometric meaning remains consistent across different frames. Covariant objects, such as covectors or the components of a gradient, adhere to this scaling with the inverse Jacobian matrix \left( \frac{\partial x^j}{\partial x'^i} \right), which preserves the tensorial nature of the quantity. The transformation arises from the requirement that contractions, like the scalar product between a covariant tensor and a contravariant vector, yield invariants under basis changes. This property distinguishes covariant transformation from contravariant transformation, which applies to upper-index components and uses the direct Jacobian matrix. The covariant transformation law serves as the foundational rule for defining lower-index tensors in tensor calculus, enabling the formulation of physical laws that are independent of coordinate choices. For higher-rank covariant tensors, the law generalizes by applying the inverse Jacobian factor to each covariant index separately.
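
As a concrete numerical sketch (an illustrative example, not drawn from the cited sources), the following Python snippet applies the rank-1 covariant law to a covector's Cartesian components under the change to polar coordinates, using the Jacobian of the old coordinates with respect to the new ones:

```python
import numpy as np

# Minimal sketch: transform the covariant components of a covector from
# Cartesian coordinates (x, y) to polar coordinates (r, theta) at one point.
# The covariant law is T'_i = (dx^j / dx'^i) T_j, i.e. a contraction with the
# Jacobian of the old coordinates with respect to the new ones.

def jacobian_old_wrt_new(r, theta):
    """d(x, y)/d(r, theta) for x = r cos(theta), y = r sin(theta)."""
    return np.array([
        [np.cos(theta), -r * np.sin(theta)],   # dx/dr, dx/dtheta
        [np.sin(theta),  r * np.cos(theta)],   # dy/dr, dy/dtheta
    ])

# Covariant components in Cartesian coordinates (arbitrary example values).
T_cartesian = np.array([2.0, -1.0])

r, theta = 2.0, np.pi / 6
J = jacobian_old_wrt_new(r, theta)      # rows: old coords j, columns: new coords i

# T'_i = sum_j (dx^j / dx'^i) T_j  ->  in matrix form: T' = J^T @ T
T_polar = J.T @ T_cartesian
print("covariant components in (r, theta):", T_polar)
```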

Contravariant Transformation

A contravariant transformation refers to the rule by which the components of a tensor with upper indices, such as a vector, change under a coordinate or basis change. Specifically, a quantity is said to transform contravariantly if its components in the new system are obtained by multiplying the original components by the Jacobian matrix of the transformation, which contains the partial derivatives of the new coordinates with respect to the old ones. This ensures that the tensor's representation adapts correctly to preserve the underlying geometric or physical object across different frames. For a rank-1 contravariant tensor \mathbf{V}, the transformation law is given by V'^i = \frac{\partial x'^i}{\partial x^j} V^j, where the Einstein summation convention is used over the repeated index j, and x'^i denote the new coordinates. This law applies generally to higher-rank contravariant tensors, where each upper index follows the same multiplication by the corresponding Jacobian factor. In contrast to covariant transformations, which involve the inverse Jacobian for lower-index components, the contravariant rule aligns with how displacement vectors or coordinate differentials shift under passive coordinate changes. The contravariant transformation plays a crucial role in maintaining the invariance of scalar quantities formed by contractions, such as the dot product between a contravariant vector and a covariant vector via the natural pairing. Under a coordinate change, the opposing transformation behaviors of contravariant and covariant components cancel out in the contraction, yielding a scalar that remains unchanged, thus ensuring the physical or geometric meaning is coordinate-independent. The origins of contravariant transformations trace back to the foundational work in tensor calculus by Gregorio Ricci-Curbastro and Tullio Levi-Civita, particularly in their 1900 paper on the absolute differential calculus, where they systematized these rules to handle coordinate-independent differential operations on manifolds. This framework emphasized passive transformations (changes in the coordinate description rather than active deformations of the space itself) to develop invariant formulations of geometry and physics.
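
The cancellation between the two transformation rules can be checked directly. The sketch below (assumed example values, Python with NumPy) transforms a contravariant vector with a linear Jacobian A and a covariant vector with its inverse transpose, and verifies that the contraction is unchanged:

```python
import numpy as np

# Sketch: under a linear coordinate change x' = A x, the Jacobian dx'^i/dx^j
# is simply A, and dx^j/dx'^i is A^{-1}. Contravariant components transform
# with A, covariant components with the inverse transpose, and the scalar
# contraction V^j W_j is left invariant.

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])          # invertible transformation matrix

V = rng.normal(size=2)              # contravariant components V^j
W = rng.normal(size=2)              # covariant components W_j

V_new = A @ V                       # V'^i = (dx'^i/dx^j) V^j
W_new = np.linalg.inv(A).T @ W      # W'_i = (dx^j/dx'^i) W_j

print(V @ W, V_new @ W_new)         # same scalar in both coordinate systems
assert np.isclose(V @ W, V_new @ W_new)
```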

Transformation Examples

Derivatives and Functions

Consider a scalar function f defined on a manifold with coordinates x^j. The partial derivatives \partial_i f = \frac{\partial f}{\partial x^i} form the components of the gradient, which transform as a covector under a change of coordinates. This behavior arises because the differential df is a linear map from vectors to scalars, preserving the value of the function's change along any direction. To derive the transformation law, introduce new coordinates x'^i. Since f is a scalar, its value remains unchanged, so f(x') = f(x(x')). Applying the chain rule for multivariable differentiation yields \frac{\partial f}{\partial x'^i} = \frac{\partial f}{\partial x^j} \frac{\partial x^j}{\partial x'^i}. Here, the indices follow the Einstein summation convention, with j summed over all dimensions. This equation demonstrates the covariant nature of the partial derivatives, as the new components are obtained by contracting the old components with the inverse Jacobian \frac{\partial x^j}{\partial x'^i}. Geometrically, the gradient indicates the direction of steepest ascent of f, where the differential df(\mathbf{v}) = \partial_i f \, v^i gives the rate of change along a vector \mathbf{v}. Under coordinate transformations, the basis vectors may stretch or contract, requiring the gradient components to scale inversely to maintain this invariant rate. For instance, if the new basis is elongated, the gradient components diminish to preserve the physical rate of change.

In one dimension, a linear coordinate shift x' = x + b (with constant b) illustrates this numerically. The transformation is \frac{\partial x}{\partial x'} = 1. Take f(x) = x^2, so \frac{df}{dx} = 2x. At x = 1, \frac{df}{dx} = 2. Expressing in new coordinates, f(x') = (x' - b)^2, so \frac{df}{dx'} = 2(x' - b). At the corresponding point x' = 1 + b, \frac{df}{dx'} = 2((1 + b) - b) = 2, consistent with the law since the Jacobian factor is unity.
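
A short symbolic check of the chain-rule derivation, here for the assumed example f(x, y) = x^2 + 3y under the Cartesian-to-polar change, confirms that differentiating in the new coordinates agrees with contracting the old gradient with \frac{\partial x^j}{\partial x'^i}:

```python
import sympy as sp

# Illustrative symbolic check: df/dx'^i = (dx^j/dx'^i) df/dx^j,
# here for (x, y) -> (r, theta) with x = r cos(theta), y = r sin(theta).

x, y, r, th = sp.symbols('x y r theta', positive=True)
f = x**2 + 3*y                      # any scalar function works here

subs = {x: r*sp.cos(th), y: r*sp.sin(th)}   # old coordinates in terms of new

# Left-hand side: differentiate f(x(r,theta), y(r,theta)) directly.
lhs_r  = sp.diff(f.subs(subs), r)
lhs_th = sp.diff(f.subs(subs), th)

# Right-hand side: contract the old-coordinate gradient with dx^j/dx'^i.
rhs_r  = (sp.diff(f, x)*sp.diff(r*sp.cos(th), r)
          + sp.diff(f, y)*sp.diff(r*sp.sin(th), r)).subs(subs)
rhs_th = (sp.diff(f, x)*sp.diff(r*sp.cos(th), th)
          + sp.diff(f, y)*sp.diff(r*sp.sin(th), th)).subs(subs)

print(sp.simplify(lhs_r - rhs_r), sp.simplify(lhs_th - rhs_th))  # both 0
```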

Basis Vectors and Covectors

In tensor analysis, the basis vectors of a coordinate system, often denoted as \mathbf{e}_i, transform under a change of coordinates from x^j to x'^i according to the rule \mathbf{e}'_i = \frac{\partial x^j}{\partial x'^i} \mathbf{e}_j. This transformation reflects the covariant nature of the basis vectors themselves, as they adjust inversely to the coordinate differentials via the inverse Jacobian matrix. The partial derivatives \frac{\partial x^j}{\partial x'^i} ensure that the new basis vectors \mathbf{e}'_i are linear combinations of the old ones, preserving the linear structure of the tangent space. The dual basis, consisting of covectors or one-forms \mathbf{e}^i, exhibits the opposite transformation behavior to maintain consistency with the vector basis. Under the same coordinate change, these covectors transform contravariantly as \mathbf{e}'^i = \frac{\partial x'^i}{\partial x^j} \mathbf{e}^j. This uses the direct Jacobian matrix \frac{\partial x'^i}{\partial x^j}, allowing the dual basis to scale in tandem with the coordinate expansion or contraction. The paired transformations of basis vectors and covectors thus highlight their complementary roles in representing multilinear maps within the space. A defining property of this duality is the invariance of the pairing between covectors and basis vectors, given by \mathbf{e}^i (\mathbf{e}_j) = \delta^i_j, where \delta^i_j is the Kronecker delta. This condition, which equals 1 if i = j and 0 otherwise, remains unchanged under coordinate transformations because the covariant adjustment of \mathbf{e}_i precisely counters the contravariant shift in \mathbf{e}^i. The preservation ensures that the dual pairing acts as a coordinate-independent selector, fundamental to tensor definitions. To illustrate in two dimensions, consider a transformation that stretches the coordinate axes; the basis vectors \mathbf{e}_1 and \mathbf{e}_2 (represented as arrows) shorten to compensate, while the corresponding covectors \mathbf{e}^1 and \mathbf{e}^2 lengthen inversely, maintaining the area-preserving duality. This inverse scaling visually underscores how the paired transformations keep the inner product structure intact across frames.
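
The duality condition can be verified numerically. In the sketch below (an arbitrary example basis, not from the sources), the new basis vectors are stored as the columns of a matrix and the dual covectors as the rows of its inverse, so the pairing reproduces the Kronecker delta:

```python
import numpy as np

# Illustrative sketch: take new basis vectors e'_i as linear combinations of
# the old standard basis (columns of E), build the dual basis as the rows of
# E^{-1}, and confirm the pairing e'^i(e'_j) = delta^i_j.

E = np.array([[2.0, 1.0],           # columns are the new basis vectors e'_1, e'_2
              [0.0, 3.0]])          # expressed in the old (standard) basis

dual = np.linalg.inv(E)             # rows are the new dual covectors e'^1, e'^2

pairing = dual @ E                  # entry (i, j) is e'^i applied to e'_j
print(pairing)                      # identity matrix: the Kronecker delta
assert np.allclose(pairing, np.eye(2))
```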

Differential Forms

Differential forms provide a natural framework for understanding covariant and contravariant transformations on manifolds, particularly through the behavior of basis 1-forms and their role in integration. The basis 1-forms dx^i, which span the cotangent space at each point in a coordinate chart, transform contravariantly under a change of coordinates from x^j to x'^i. This means dx'^i = \frac{\partial x'^i}{\partial x^j} dx^j, ensuring that the dual relationship with the coordinate basis vectors \frac{\partial}{\partial x^i} is preserved across coordinate systems. A familiar example arises on the plane \mathbb{R}^2, in the transition from Cartesian coordinates (x, y) to polar coordinates (r, \theta), where x = r \cos \theta and y = r \sin \theta. Here, the basis 1-forms transform as dr = \cos \theta \, dx + \sin \theta \, dy and d\theta = -\frac{\sin \theta}{r} dx + \frac{\cos \theta}{r} dy, reflecting the partial derivatives of the new coordinates with respect to the old ones and adjusting the infinitesimal line elements under reparametrization to maintain geometric consistency. In contrast to this contravariant behavior of the basis, differential forms as a whole exhibit covariant transformation properties when considered under maps between manifolds, achieved through the pullback operation. For a smooth map \phi: M \to N and a k-form \omega on N, the pullback \phi^* \omega on M satisfies \int_M \phi^* \omega = \int_N \omega when \phi is an orientation-preserving diffeomorphism, so the integral is coordinate-independent, as the transformation compensates for changes in the basis forms. This is particularly evident in the volume form on an n-dimensional manifold, where the standard form dV = dx^1 \wedge \cdots \wedge dx^n transforms under a coordinate change such that dV = \left| \det \frac{\partial x}{\partial x'} \right| dV', where dV' = dx'^1 \wedge \cdots \wedge dx'^n, incorporating the absolute value of the Jacobian determinant to preserve a positive measure for integration regardless of the orientation of the new coordinates.
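
For the polar-coordinate example, the factor relating the two area forms is the Jacobian determinant, which the following short symbolic computation (an illustration of the formulas above) reproduces:

```python
import sympy as sp

# Symbolic sketch: for x = r cos(theta), y = r sin(theta), the Jacobian
# determinant det d(x,y)/d(r,theta) equals r, so the area form transforms
# as dx ^ dy = r dr ^ dtheta.

r, th = sp.symbols('r theta', positive=True)
x = r * sp.cos(th)
y = r * sp.sin(th)

J = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
               [sp.diff(y, r), sp.diff(y, th)]])

print(sp.simplify(J.det()))   # -> r, the factor appearing in dV = r dr dtheta
```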

Tensor Components

Without Coordinates

In abstract vector spaces, tensors are defined as multilinear maps that generalize the notions of scalars, vectors, and linear functionals without reference to any specific basis or coordinate system. A tensor of type (p, q) over a vector space V is a multilinear function T: (V^*)^p \times V^q \to \mathbb{R}, where V^* denotes the dual space of V consisting of all linear functionals on V, p represents the number of contravariant slots (which accept elements from V^*), and q represents the number of covariant slots (which accept elements from V). This definition captures the intrinsic, coordinate-free nature of tensors as higher-order generalizations of vectors (p=1, q=0) and covectors (p=0, q=1), emphasizing their multilinearity: linearity in each argument while holding the others fixed. The space of all such (p, q)-tensors, denoted T^{p,q}(V), forms a vector space itself and can be constructed algebraically as the tensor product T^{p,q}(V) = V^{\otimes p} \otimes (V^*)^{\otimes q}, where \otimes denotes the tensor product of vector spaces. In this component-free perspective, the transformation properties of tensors are implicit in the structure of the tensor product: changing bases in V induces corresponding changes in V^* via the dual map, ensuring that the overall multilinear action remains consistent across different choices of bases. Elements of T^{p,q}(V) are finite sums of pure tensors of the form v_1 \otimes \cdots \otimes v_p \otimes \alpha_1 \otimes \cdots \otimes \alpha_q, where v_i \in V and \alpha_j \in V^*, and the multilinearity follows from the universal property of the tensor product, which guarantees a unique linear map on T^{p,q}(V) corresponding to each multilinear map (v_1, \dots, v_p, \alpha_1, \dots, \alpha_q) \mapsto T(v_1, \dots, v_p, \alpha_1, \dots, \alpha_q). This algebraic construction underscores the basis-independent essence of tensors, as the tensor product is defined intrinsically without coordinates.

A representative example is the metric tensor, which serves as a (0,2) covariant tensor in T^{0,2}(V) = (V^*)^{\otimes 2}. In the coordinate-free view, the metric is a symmetric bilinear map g: V \times V \to \mathbb{R} that pairs two vectors to produce an inner product, such as g(u, v) for u, v \in V, without invoking components or bases. This pairing induces a norm and angle structure on V, and its bilinearity ensures it is linear in each vector argument separately. The invariance of tensor operations, particularly scalar contractions, highlights their basis-independent character. A full contraction of a tensor in T^{p,q}(V) with p = q pairs all contravariant and covariant slots to yield a scalar in \mathbb{R}, such as tracing over matching indices in a (1,1)-tensor to obtain a number; this result is unchanged under any change of basis because the contraction is defined via the natural duality between V and V^*, which is intrinsic to the structure. Partial contractions similarly reduce the tensor type while preserving multilinearity and independence from coordinates.
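
A minimal concrete illustration of a basis-independent contraction, using assumed component matrices: the full contraction of a (1,1)-tensor is its trace, which is unchanged when the components are rewritten in another basis:

```python
import numpy as np

# Coordinate-free idea made concrete (illustrative values): the full
# contraction of a (1,1)-tensor is its trace, and it does not change when the
# tensor's matrix is rewritten in another basis via T -> P T P^{-1}.

T = np.array([[1.0, 4.0],
              [2.0, 3.0]])                 # a (1,1)-tensor in some basis
P = np.array([[1.0, 1.0],
              [0.0, 2.0]])                 # invertible change-of-basis matrix

T_new = P @ T @ np.linalg.inv(P)           # components in the new basis
print(np.trace(T), np.trace(T_new))        # same scalar in both bases
assert np.isclose(np.trace(T), np.trace(T_new))
```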

With Coordinates

In a chosen coordinate system, the components of a tensor are expressed using indices that distinguish between contravariant (upper) and covariant (lower) positions, enabling explicit calculations of how the tensor behaves under coordinate transformations. For instance, a contravariant vector V^i has components that scale with the direct partial derivatives of the new coordinates with respect to the old ones, while a covariant vector V_i scales inversely. The transformation law for a general tensor follows a pattern where each contravariant index acquires a factor of the Jacobian matrix \frac{\partial x'^i}{\partial x^j}, and each covariant index acquires a factor of its inverse \frac{\partial x^k}{\partial x'^l}. For a (1,1) tensor T^i_j, which has one contravariant and one covariant index, the components in the new coordinate system x' transform as T'^i_j = \frac{\partial x'^i}{\partial x^k} \frac{\partial x^l}{\partial x'^j} T^k_l, where summation over repeated indices k and l is implied. This ensures the tensor's multilinearity is preserved across coordinate changes, with the direct Jacobian acting on the upper index and the inverse on the lower.

A practical example arises in physics with the stress tensor \sigma^i_j, a (1,1) tensor representing force per unit area in mixed components. Under a coordinate rotation defined by an orthogonal rotation matrix R, the transformed stress tensor becomes \sigma' = R \sigma R^T, or in index notation \sigma'^i_j = R^i_k (R^{-1})^l_j \sigma^k_l, since rotations preserve the metric and the inverse equals the transpose for orthogonal transformations. For a case with initial components \sigma = \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix} (in MPa) and a 50° counterclockwise rotation, the new components are approximately \sigma' = \begin{pmatrix} 0.20 & -1.33 \\ -1.33 & 3.80 \end{pmatrix}, illustrating how shear and normal stresses redistribute while the rotational invariants (trace and determinant) are unchanged; a short numerical check follows below.

To interconvert between contravariant and covariant components within the same coordinate system, the metric tensor g_{ij} is employed, which defines the inner product and allows index raising or lowering. Specifically, a contravariant vector V^j is lowered to its covariant form via V_i = g_{ij} V^j, with the inverse metric g^{ij} used for raising: V^i = g^{ij} V_j. In Cartesian coordinates with the identity metric g_{ij} = \delta_{ij}, the two sets of components coincide, but in curved spaces such as those of general relativity, this operation accounts for the geometry encoded in the metric.
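
The rotated stress components quoted above can be reproduced with a few lines of NumPy (same convention, \sigma' = R \sigma R^T with R the 50° counterclockwise rotation); the check also confirms that the trace and determinant are invariant:

```python
import numpy as np

# Sketch of the stress example above (values in MPa).
theta = np.deg2rad(50.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

sigma = np.array([[1.0, 2.0],
                  [2.0, 3.0]])

sigma_new = R @ sigma @ R.T
print(np.round(sigma_new, 2))     # approx [[0.20, -1.33], [-1.33, 3.80]]

# Rotations leave the invariants unchanged: trace and determinant match.
assert np.isclose(np.trace(sigma), np.trace(sigma_new))
assert np.isclose(np.linalg.det(sigma), np.linalg.det(sigma_new))
```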

Dual Properties

Duality in Vector Spaces

In finite-dimensional vector spaces over a field F, the dual space V^* of a vector space V is defined as the set of all linear functionals from V to F, equipped with the structure of a vector space under pointwise addition and scalar multiplication. This construction establishes a natural duality, where elements of V^*, often called covectors, act on vectors in V to produce scalars. The duality is manifested through the natural pairing \langle \omega, v \rangle = \omega(v) for \omega \in V^* and v \in V, which is bilinear and invariant under linear transformations. Given a basis \{e_j\} for V, the dual basis \{e^i\} for V^* satisfies \langle e^i, e_j \rangle = \delta^i_j, where \delta^i_j is the Kronecker delta (equal to 1 if i = j and 0 otherwise), ensuring the bases are uniquely determined and the dimension of V^* equals that of V. Under a change of coordinates, vectors in V transform contravariantly, with components v'^i = \frac{\partial x'^i}{\partial x^j} v^j, while covectors in V^* transform covariantly, with components \omega'_i = \frac{\partial x^j}{\partial x'^i} \omega_j, preserving the invariance of the pairing \langle \omega', v' \rangle = \langle \omega, v \rangle. This complementary transformation behavior underscores the duality between contravariant and covariant objects. A fundamental result is the natural isomorphism between V and its double dual V^{**} = (V^*)^*, given by the evaluation map \mathrm{ev}: V \to V^{**} defined by \mathrm{ev}(v)(\omega) = \langle \omega, v \rangle for v \in V and \omega \in V^*. This map is linear and bijective for finite-dimensional spaces: injectivity follows from the fact that if \langle \omega, v \rangle = 0 for all \omega \in V^* then v = 0, and the matching dimensions ensure surjectivity.
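
The evaluation map can be made concrete in coordinates. The sketch below (an illustrative construction, with covectors represented by component arrays) shows that \mathrm{ev}(v) applied to a covector returns the same scalar as the covector applied to v:

```python
import numpy as np

# Sketch of the evaluation map ev: V -> V** in coordinates: represent a
# covector by its component array acting via the usual dot product, then
# ev(v) is "evaluate any covector on v".

def covector(components):
    """A linear functional on R^n given by its covariant components."""
    comps = np.asarray(components, dtype=float)
    return lambda v: float(comps @ v)

def ev(v):
    """The double-dual image of v: it eats covectors and returns scalars."""
    return lambda omega: omega(v)

v = np.array([1.0, -2.0, 0.5])
omega = covector([3.0, 1.0, 4.0])

# ev(v)(omega) agrees with omega(v), the defining property of the embedding.
print(omega(v), ev(v)(omega))
assert np.isclose(omega(v), ev(v)(omega))
```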

Covariant-Contravariant Pairs

In tensor analysis, practical examples of covariant-contravariant pairs illustrate how these dual objects interact in physical contexts. The position vector, represented by components \mathbf{r}^i, transforms as a contravariant vector under coordinate changes, its components compensating for changes in the basis vectors so that the vector itself is unchanged. In contrast, the gradient of a scalar field, with components \nabla_i \phi, behaves as a covariant vector (covector), transforming inversely to ensure the directional derivative remains consistent across bases. Similarly, in classical mechanics with generalized coordinates, the momentum conjugate to position, often denoted p_i, carries covariant indices, pairing naturally with the contravariant displacement dq^i to form an invariant scalar. The force, interpreted as the rate of change of this momentum, can thus be viewed in its covariant form, emphasizing its role as a covector in dual pairings.

A key application of these pairs is in constructing invariants through tensor contraction, where the summation over paired indices yields quantities unchanged under coordinate transformations. For a contravariant vector v^i and a covariant vector \omega_i, the contraction v^i \omega_i produces a scalar invariant, as the opposing transformation rules cancel out, preserving the inner product. This operation leverages the duality between vector spaces, ensuring that physical laws expressed in such forms remain coordinate-independent. For higher-rank tensors of type (p, q), where p contravariant and q covariant indices are present, contractions can pair specific indices to reduce the rank and form invariants. By contracting one contravariant index with one covariant index, the resulting object retains its tensorial transformation properties at lower rank; repeated contractions can fully reduce a tensor to a scalar for complete invariance. This is essential in formulating invariant expressions in multidimensional systems, such as stress-energy tensors, where multiple index pairs contribute to conserved quantities.

In special relativity, a prominent example involves the 4-velocity u^\mu, a contravariant 4-vector tangent to the worldline, paired with the covariant 4-gradient \partial_\mu. Their contraction u^\mu \partial_\mu yields the derivative with respect to proper time along the trajectory, a scalar operator invariant under Lorentz boosts and rotations, crucial for describing relativistic dynamics. This pairing underscores how covariant-contravariant structures preserve fundamental physical scalars in spacetime transformations.
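
As an assumed numerical illustration of the relativistic pairing, the snippet below boosts a contravariant 4-vector with a Lorentz matrix \Lambda and a covariant 4-vector with (\Lambda^{-1})^T, and checks that their contraction is frame-independent (flat spacetime, units with c = 1):

```python
import numpy as np

# Sketch: under a boost Lambda, a contravariant 4-vector transforms as
# u' = Lambda u, a covariant one as w' = (Lambda^{-1})^T w, and the
# contraction u^mu w_mu is unchanged.

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -gamma*beta, 0.0, 0.0],
                [-gamma*beta, gamma, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])      # boost along x

v = 0.3                                     # particle speed in the original frame
gamma_u = 1.0 / np.sqrt(1.0 - v**2)
u = gamma_u * np.array([1.0, v, 0.0, 0.0])  # contravariant 4-velocity u^mu
w = np.array([1.0, 0.2, -0.5, 0.7])         # covariant components w_mu (example values)

u_new = Lam @ u
w_new = np.linalg.inv(Lam).T @ w

print(u @ w, u_new @ w_new)                 # the contraction u^mu w_mu is invariant
assert np.isclose(u @ w, u_new @ w_new)
```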
