Cartesian tensor

A Cartesian tensor is a mathematical entity in Euclidean space that generalizes scalars, vectors, and higher-order arrays, represented by components in an orthonormal basis and transforming under rotations according to specific linear rules to ensure invariance of the underlying physical or geometric quantity. These tensors are defined in three-dimensional Euclidean space, where a tensor of rank N possesses 3^N components, forming a multi-dimensional array that captures multi-directional relationships. Cartesian tensors are classified by their rank or order, which determines the number of indices required to specify a component. A rank-0 tensor is a scalar, invariant under coordinate transformations, such as mass or temperature. Rank-1 tensors are vectors, like position or velocity, with three components that transform as \mathbf{v}'_i = C_{ij} v_j, where C_{ij} is the direction cosine between the i-th new axis and the j-th old axis. Higher-rank tensors, such as the rank-2 stress tensor in continuum mechanics, form 3×3 matrices and transform as T'_{ij} = C_{ik} C_{jl} T_{kl}, preserving the tensor's operational meaning across coordinate systems.

The properties of Cartesian tensors rely on the orthonormality of the basis, distinguishing them from general tensors in curved spaces by eliminating the need to differentiate between covariant and contravariant components. Key operations include addition of same-rank tensors, outer multiplication to increase rank, and contraction (summation over repeated indices) to reduce rank by two, enabling efficient index notation for derivations. The quotient rule further verifies tensorial character: if the contraction of an array with an arbitrary tensor yields a known tensor, the array itself is a tensor.

In applications, Cartesian tensors are fundamental to continuum mechanics, where they describe phenomena like stress, strain, and deformation in solids and fluids. For instance, the stress tensor quantifies force distribution per unit area, transforming to maintain equilibrium equations independently of the observer's frame. They also appear in electromagnetism (e.g., the permittivity tensor) and quantum mechanics (e.g., angular momentum operators), providing a coordinate-independent framework for physical laws.
This index notation, popularized in works like Harold Jeffreys' 1931 monograph Cartesian Tensors, simplifies algebraic manipulations in engineering and physics.

Fundamentals of Cartesian Tensors

Definition in Cartesian coordinates

A Cartesian tensor generalizes the concepts of scalars, vectors, and matrices as multi-linear objects that transform under orthogonal coordinate changes in Euclidean space, preserving their structural form under rotations and reflections within orthonormal bases. These tensors are defined in the context of flat Euclidean geometry, where physical quantities like stress or inertia are represented without the complications of curvature. Cartesian coordinates form a right-handed, orthonormal system characterized by mutually perpendicular axes and unit basis vectors \mathbf{e}_i (for i = 1, 2, 3) that satisfy the condition \mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}, where \delta_{ij} is the Kronecker delta symbol, equal to 1 if i = j and 0 otherwise. This setup ensures that distances and angles are preserved, aligning with the Euclidean metric, where the inner product is simply the dot product. In contrast to general tensors defined in arbitrary coordinate systems—where indices may be raised or lowered using a non-trivial metric tensor—Cartesian tensors employ physical components directly in orthonormal bases, implicitly assuming the metric \delta_{ij} without needing index adjustments. This simplification facilitates computations in engineering and physics applications, focusing on invariance under proper and improper orthogonal transformations. Examples illustrate the hierarchy: a scalar represents a zeroth-order tensor with a single value; a vector is a first-order tensor with three components along the basis directions; and a matrix, or second-order tensor, forms a 3×3 array capturing linear relations between vectors. The general component representation for an nth-order Cartesian tensor is T_{i_1 i_2 \dots i_n}, where each index i_k ranges from 1 to 3, yielding 3^n components in total.
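As a minimal illustration of the component count 3^n, the sketch below (helper name `zero_tensor` is mine, not from the text) stores an nth-order Cartesian tensor as a dictionary keyed by index tuples:

```python
from itertools import product

def zero_tensor(order, dim=3):
    """Allocate all components T_{i1...in}; each index runs over 0..dim-1."""
    return {idx: 0.0 for idx in product(range(dim), repeat=order)}

scalar = zero_tensor(0)   # rank 0: a single component
vector = zero_tensor(1)   # rank 1: 3 components
matrix = zero_tensor(2)   # rank 2: 9 components

assert len(zero_tensor(3)) == 27  # 3^3 components for a rank-3 tensor
```

The dictionary-of-tuples layout is only for clarity; any nested-array storage with 3^n entries is equivalent.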

Vectors as first-order tensors

In Cartesian tensor analysis, vectors are treated as contravariant tensors of rank one, representing physical quantities with both magnitude and direction that transform linearly under coordinate rotations. A vector \mathbf{v} in an n-dimensional Euclidean space is expressed through its components v_i relative to an orthonormal basis \{\mathbf{e}_i\}, where i = 1, 2, \dots, n, via the decomposition \mathbf{v} = \sum_{i=1}^n v_i \mathbf{e}_i. This representation leverages the abstract nature of tensors, ensuring the vector remains invariant as an entity despite changes in its component values under orthogonal transformations. The components v_i of a vector behave as scalars in the chosen basis, enabling straightforward algebraic operations. Vector addition follows component-wise rules: if \mathbf{u} = u_i \mathbf{e}_i and \mathbf{v} = v_i \mathbf{e}_i, then \mathbf{u} + \mathbf{v} = (u_i + v_i) \mathbf{e}_i. Similarly, scalar multiplication by a constant \alpha yields \alpha \mathbf{v} = (\alpha v_i) \mathbf{e}_i, preserving the directional structure while scaling the magnitude. These operations align with the linearity axioms of vector spaces and underscore the coordinate-independent essence of first-order tensors. The basis vectors \mathbf{e}_i are unit vectors satisfying orthonormality conditions, where the inner product \mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}. The Kronecker delta \delta_{ij} is defined as 1 if i = j and 0 otherwise, serving as a mathematical tool to encode this orthogonality and unit length (|\mathbf{e}_i| = 1) in Cartesian systems. The magnitude of a vector \mathbf{v}, assuming the Euclidean norm, is given by |\mathbf{v}| = \sqrt{\mathbf{v} \cdot \mathbf{v}} = \sqrt{\sum_{i=1}^n v_i^2}, providing a measure of its length independent of direction. Higher-rank tensors, such as second-order ones, can be constructed as outer products of these first-order vectors, extending the framework to more complex multilinear mappings. In two dimensions, a vector might be \mathbf{v} = v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2, suitable for planar problems like fluid flow.
For three-dimensional applications, such as mechanics, the position vector \mathbf{r} from the origin is commonly written \mathbf{r} = x \mathbf{e}_x + y \mathbf{e}_y + z \mathbf{e}_z, where x, y, z are the Cartesian coordinates, illustrating how first-order tensors model displacement in physical space.
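The component-wise operations above can be sketched directly; the helper names (`add`, `scale`, `magnitude`) are illustrative, not from the text:

```python
import math

def add(u, v):
    """(u + v)_i = u_i + v_i"""
    return [ui + vi for ui, vi in zip(u, v)]

def scale(alpha, v):
    """(alpha v)_i = alpha * v_i"""
    return [alpha * vi for vi in v]

def magnitude(v):
    """|v| = sqrt(sum v_i^2), the Euclidean norm."""
    return math.sqrt(sum(vi * vi for vi in v))

r = [3.0, 4.0, 0.0]                      # position vector components (x, y, z)
assert add(r, scale(-1.0, r)) == [0.0, 0.0, 0.0]
assert magnitude(r) == 5.0
```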

Second-order tensors in three dimensions

In three-dimensional Euclidean space with a Cartesian coordinate system, a second-order Cartesian tensor, often simply called a second-order tensor, is a linear transformation that maps vectors to vectors while preserving the linear structure of the space. It is represented by a 3×3 matrix \mathbf{T} = [T_{ij}], where the components are defined as T_{ij} = \mathbf{e}_i \cdot \mathbf{T} \cdot \mathbf{e}_j and \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\} denotes the orthonormal basis. This representation facilitates computations, as the action of the tensor on a vector \mathbf{v} with components v_j yields (\mathbf{T} \cdot \mathbf{v})_i = \sum_{j=1}^3 T_{ij} v_j, producing another vector. Many physical second-order tensors are symmetric, satisfying T_{ij} = T_{ji}, which implies that the matrix is equal to its transpose. A prominent example is the inertia tensor, which describes the mass distribution of a rigid body relative to a point and governs rotational dynamics. Such symmetry ensures that the tensor has real eigenvalues and orthogonal eigenvectors, corresponding to principal axes where the matrix is diagonal. The trace, defined as \operatorname{tr}(\mathbf{T}) = \sum_{i=1}^3 T_{ii}, is an invariant scalar that represents the sum of these eigenvalues. Similarly, the determinant \det(\mathbf{T}) is invariant and relates to the volume scaling factor under the transformation. Second-order tensors can be constructed via the outer product of two vectors, \mathbf{T} = \mathbf{a} \otimes \mathbf{b}, with components T_{ij} = a_i b_j. This product generates a rank-2 tensor that linearly maps any vector \mathbf{v} to (\mathbf{a} \otimes \mathbf{b}) \cdot \mathbf{v} = (\mathbf{b} \cdot \mathbf{v}) \mathbf{a}, projecting along \mathbf{b} and scaling by \mathbf{a}. In continuum mechanics, the stress tensor \sigma_{ij} exemplifies a second-order tensor, quantifying the force per unit area across an internal surface with normal in the i-direction acting in the j-direction. The strain tensor \varepsilon_{ij}, symmetric by definition, measures infinitesimal deformations.
Both can be decomposed into a spherical (hydrostatic) part, \frac{1}{3} \operatorname{tr}(\boldsymbol{\sigma}) \mathbf{I} or \frac{1}{3} \operatorname{tr}(\boldsymbol{\varepsilon}) \mathbf{I}, representing uniform expansion or compression, and a deviatoric part, \boldsymbol{\sigma}' = \boldsymbol{\sigma} - \frac{1}{3} \operatorname{tr}(\boldsymbol{\sigma}) \mathbf{I} or \boldsymbol{\varepsilon}' = \boldsymbol{\varepsilon} - \frac{1}{3} \operatorname{tr}(\boldsymbol{\varepsilon}) \mathbf{I}, which is traceless and captures distortions. While the focus here is on three dimensions, the concepts extend analogously to higher-dimensional spaces.
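A short sketch of these operations on 3×3 nested lists (all helper names are mine): the action (T·v)_i = T_ij v_j, the outer-product identity (a ⊗ b)·v = (b·v) a, and the tracelessness of the deviatoric part.

```python
def apply(T, v):
    """(T·v)_i = sum_j T_ij v_j"""
    return [sum(T[i][j] * v[j] for j in range(3)) for i in range(3)]

def outer(a, b):
    """Dyadic product components T_ij = a_i b_j."""
    return [[a[i] * b[j] for j in range(3)] for i in range(3)]

def trace(T):
    return sum(T[i][i] for i in range(3))

def deviatoric(T):
    """T' = T - (1/3) tr(T) I, the traceless part."""
    m = trace(T) / 3.0
    return [[T[i][j] - (m if i == j else 0.0) for j in range(3)] for i in range(3)]

a, b, v = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [1.0, 0.0, 0.0]
T = outer(a, b)
assert apply(T, v) == [4.0 * ai for ai in a]   # (a ⊗ b)·v = (b·v) a, here b·v = 4
assert abs(trace(deviatoric(T))) < 1e-12       # deviatoric part is traceless
```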

Transformations in Cartesian Systems

Invariance under orthogonal transformations

Orthogonal transformations in Cartesian coordinate systems are represented by matrices R satisfying R^T R = I, where I is the identity matrix, ensuring that lengths and angles are preserved. These transformations include proper rotations with \det(R) = +1 and improper ones, such as reflections, with \det(R) = -1. In three-dimensional Euclidean space, such transformations correspond to changes of orthonormal basis vectors, maintaining the flat metric \delta_{ij}. The invariance of a Cartesian tensor under orthogonal transformations means that the tensor, viewed as a multi-linear map T: V \times \cdots \times V \to \mathbb{R} (for a covariant tensor of rank n), remains unchanged in its action on vectors, regardless of the chosen basis. Specifically, physical quantities and laws expressed in terms of the tensor yield identical predictions before and after the transformation, as the components adjust to compensate for the basis change. This property ensures that tensor descriptions are independent of the observer's orientation, a cornerstone of classical physics. For a first-rank tensor, or vector \mathbf{v}, invariance under a rotation R is exemplified by the preservation of its magnitude: |\mathbf{v}'| = |R \mathbf{v}| = |\mathbf{v}|, since R preserves the Euclidean norm \mathbf{v} \cdot \mathbf{v}. The components transform as v'_i = R_{ij} v_j, but the underlying vector as a directional entity remains the same. This extends to higher ranks, where the multi-linear structure ensures consistent physical interpretations, such as in stress or inertia tensors. In Cartesian coordinates, the distinction between covariant and contravariant tensors is unified because the metric tensor is the Kronecker delta \delta_{ij}, which itself transforms invariantly under orthogonal changes: \delta'_{ij} = R_{ik} R_{jl} \delta_{kl} = \delta_{ij}. Thus, raising or lowering indices via \delta_{ij} does not alter the transformation rules, allowing a single set of laws for all tensor types of a given rank.
The general condition for invariance is that the transformed tensor T' satisfies T'( \mathbf{u}_1, \dots, \mathbf{u}_n ) = T( R^{-1} \mathbf{u}_1, \dots, R^{-1} \mathbf{u}_n ) for input vectors \mathbf{u}_i in the new basis, ensuring the map's scalar output is unchanged by the change of basis. Since R^{-1} = R^T for orthogonal R, this relation directly ties component transformations to the preservation of the tensor's intrinsic properties.
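A numerical sketch of these properties (helper names are illustrative): a proper rotation about the z-axis, a check of the orthogonality condition R^T R = I component by component, and the norm preservation |Rv| = |v|.

```python
import math

def rot_z(theta):
    """Proper rotation by theta about the z-axis; det = +1."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

R = rot_z(0.7)
# Orthogonality: sum_i R_ip R_iq = delta_pq
for p in range(3):
    for q in range(3):
        val = sum(R[i][p] * R[i][q] for i in range(3))
        assert abs(val - (1.0 if p == q else 0.0)) < 1e-12

v = [1.0, 2.0, 3.0]
assert abs(norm(matvec(R, v)) - norm(v)) < 1e-12  # |Rv| = |v|
```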

Jacobian and derivative transformations for vectors

In Cartesian coordinate systems, a general coordinate transformation is expressed as \mathbf{x}' = \mathbf{x}'(\mathbf{x}), where \mathbf{x} and \mathbf{x}' denote the position vectors in the original and new systems, respectively. The Jacobian matrix J of this transformation is defined by its components J_{ij} = \frac{\partial x'_i}{\partial x_j}, which capture the local linear approximation of the mapping and ensure the transformation is invertible if \det J \neq 0. For a vector field \mathbf{v} with components v_j in the original coordinates, the components in the new system transform contravariantly as v'_k = \sum_m \frac{\partial x'_k}{\partial x_m} v_m, preserving the vector's geometric interpretation across the coordinate change. In matrix notation, this is compactly written as \mathbf{v}' = J \mathbf{v}, where \mathbf{v} and \mathbf{v}' are column vectors of components. This law holds in Cartesian systems because the basis vectors are orthonormal and constant, distinguishing it from curvilinear coordinates, where metric adjustments are required. Derivatives of scalar fields transform as covectors in tensor analysis. The partial derivative \frac{\partial f}{\partial x_i} in the original coordinates becomes \frac{\partial f}{\partial x'_k} = \sum_m \frac{\partial x_m}{\partial x'_k} \frac{\partial f}{\partial x_m} in the new system, reflecting the inverse Jacobian's role. Thus, the gradient \nabla f has components that adjust inversely to the transformation, ensuring the directional derivative along any direction remains invariant. In Cartesian coordinates, where the metric is the identity, these components numerically coincide with those of the contravariant gradient vector. The chain rule extends this to time derivatives in moving frames or along curves.
For a vector \mathbf{v}(t) parameterized by time, differentiating the transformation law v'_k = \sum_m \frac{\partial x'_k}{\partial x_m} v_m gives \frac{d v'_k}{dt} = \sum_m \frac{\partial x'_k}{\partial x_m} \frac{d v_m}{dt} + \sum_{m,p} \frac{\partial^2 x'_k}{\partial x_m \partial x_p} \frac{d x_p}{dt} v_m, though the second term vanishes for linear transformations typical of rigid Cartesian changes of frame. This application underscores how Jacobian-based transformations maintain consistency in dynamical contexts, such as rigid-body or fluid motion.
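The invariance of the directional derivative can be checked numerically. In the sketch below (variable names are mine), x' = J x is a planar rotation, so J^{-1} = J^T and the gradient components follow numerically the same rule as vector components; the scalar grad f · u is then frame-independent.

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

c, s = math.cos(0.3), math.sin(0.3)
J = [[c, -s], [s, c]]        # Jacobian of a planar rotation, J^{-1} = J^T

grad_f = [2.0, -1.0]         # components of grad f in the old coordinates
u = [0.5, 1.5]               # a direction, old components

grad_f_new = matvec(J, grad_f)   # covector rule; coincides with J for orthogonal J
u_new = matvec(J, u)             # contravariant vector rule

# Directional derivative grad f · u is invariant under the change of frame.
assert abs(dot(grad_f_new, u_new) - dot(grad_f, u)) < 1e-12
```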

Projections and component changes

In Cartesian coordinates, the components of a vector \mathbf{v} are obtained by projecting it onto the basis vectors \mathbf{e}_i (where i = 1, 2, 3) using the dot product, defined as the scalar product \mathbf{v} \cdot \mathbf{e}_i. This yields the i-th component v_i = \mathbf{v} \cdot \mathbf{e}_i, representing the signed length of the projection along that axis. For unit basis vectors, this yields the representation \mathbf{v} = \sum_i v_i \mathbf{e}_i, ensuring the vector's magnitude and direction are preserved. When changing from an old basis \{\mathbf{e}_i\} to a new basis \{\mathbf{e}'_j\}, the components transform to maintain the vector's invariance. The new component v'_j is given by v'_j = \sum_i v_i (\mathbf{e}'_j \cdot \mathbf{e}_i), where the dot product again captures the projection of the old basis onto the new one. This summation accounts for the geometric reorientation, ensuring \mathbf{v} = \sum_j v'_j \mathbf{e}'_j. The coefficients a_{ji} = \mathbf{e}'_j \cdot \mathbf{e}_i are the direction cosines, which form the elements of an orthogonal matrix A = [a_{ji}] satisfying A A^T = I. These cosines encode the angles between the new and old basis vectors, with a_{ji} = \cos \theta_{ji}. In matrix notation, the transformation is compactly expressed as \mathbf{v}' = A \mathbf{v}, where \mathbf{v} and \mathbf{v}' are column vectors of components. For a specific example, consider a planar rotation of the vector counterclockwise through an angle \theta (equivalently, a clockwise rotation of the axes). The rotation matrix is A = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}, yielding transformed components v'_x = v_x \cos \theta - v_y \sin \theta and v'_y = v_x \sin \theta + v_y \cos \theta. This illustrates how the projections adjust under rotation while preserving the vector's length \sqrt{(v'_x)^2 + (v'_y)^2} = \sqrt{v_x^2 + v_y^2}.
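The planar example above can be verified directly (the helper name `rotate_components` is illustrative): the transformed components have the same Euclidean length as the originals.

```python
import math

def rotate_components(vx, vy, theta):
    """v'_x = vx cos(t) - vy sin(t),  v'_y = vx sin(t) + vy cos(t)."""
    c, s = math.cos(theta), math.sin(theta)
    return (vx * c - vy * s, vx * s + vy * c)

vx, vy = 3.0, 4.0
vxp, vyp = rotate_components(vx, vy, math.pi / 6)
# Length is preserved: sqrt(vx'^2 + vy'^2) = sqrt(vx^2 + vy^2) = 5
assert abs(math.hypot(vxp, vyp) - math.hypot(vx, vy)) < 1e-12
assert abs(math.hypot(vx, vy) - 5.0) < 1e-12
```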

Dyadic Products and Special Symbols

Dyadic product

The dyadic product, also known as the outer product or tensor product, of two vectors \mathbf{a} and \mathbf{b} in three-dimensional Euclidean space is a second-order tensor \mathbf{T} = \mathbf{a} \otimes \mathbf{b} (or simply \mathbf{ab} in dyadic notation). Its components are given by T_{ij} = a_i b_j, where i and j range from 1 to 3, resulting in a 3×3 matrix that captures the bilinear pairing of the two vectors' components. This operation increases the tensor rank by combining two rank-1 tensors (vectors) into a rank-2 tensor, which transforms under rotations as T'_{ij} = C_{ik} C_{jl} T_{kl}, preserving the geometric relationships. Dyadic products form the basis for expressing general second-order tensors as sums of such products, \mathbf{T} = \sum T_{ij} \mathbf{e}_i \otimes \mathbf{e}_j, where \mathbf{e}_i are the orthonormal basis vectors.
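The rank-2 transformation rule for a dyad can be checked numerically (helper names are mine): forming the dyad of the rotated vectors gives the same components as applying T'_{ij} = C_{ik} C_{jl} T_{kl} to the original dyad.

```python
import math

def outer(a, b):
    """T_ij = a_i b_j"""
    return [[ai * bj for bj in b] for ai in a]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(C, v):
    return [sum(C[i][j] * v[j] for j in range(3)) for i in range(3)]

def transform2(C, T):
    """T'_ij = C_ik C_jl T_kl"""
    return [[sum(C[i][k] * C[j][l] * T[k][l] for k in range(3) for l in range(3))
             for j in range(3)] for i in range(3)]

a, b, C = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0], rot_z(0.4)
T1 = outer(matvec(C, a), matvec(C, b))   # dyad of rotated vectors
T2 = transform2(C, outer(a, b))          # rotated dyad
assert all(abs(T1[i][j] - T2[i][j]) < 1e-12 for i in range(3) for j in range(3))
```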

Dot product with Kronecker delta

The dot product of two vectors \mathbf{v} and \mathbf{w} in three-dimensional Euclidean space is a scalar quantity given by \mathbf{v} \cdot \mathbf{w} = \sum_{i=1}^3 v_i w_i, where the sum follows the Einstein summation convention for repeated indices. Geometrically, this equals |\mathbf{v}| \, |\mathbf{w}| \cos \theta, with \theta the angle between the vectors, emphasizing its role in measuring lengths and angles. This operation is fundamental in Cartesian tensor algebra, as it contracts the indices of first-order tensors (vectors) to yield an invariant scalar. The dot product remains invariant under orthogonal transformations, preserving its value across rotated coordinate systems. Consider an orthogonal matrix R such that R^T R = I. The transformed vectors are \mathbf{v}' = R \mathbf{v} and \mathbf{w}' = R \mathbf{w}, with components v'_p = R_{pi} v_i and w'_q = R_{qj} w_j. The transformed dot product is then \mathbf{v}' \cdot \mathbf{w}' = v'_p w'_p = (R_{pi} v_i) (R_{pj} w_j) = v_i (R^T_{ip} R_{pj}) w_j = v_i \delta_{ij} w_j = v_i w_i = \mathbf{v} \cdot \mathbf{w}, where the orthogonality of R ensures the contraction yields the original scalar. Central to this operation is the Kronecker delta \delta_{ij}, a second-order Cartesian tensor defined as \delta_{ij} = 1 if i = j and 0 otherwise, representing the components of the identity tensor in an orthonormal basis. It acts as a selection operator, satisfying \sum_k \delta_{ik} a_k = a_i for any components a_k, effectively leaving the vector unchanged upon contraction. The dot product can thus be expressed tensorially as \mathbf{v} \cdot \mathbf{w} = v_i \delta_{ij} w_j, highlighting the delta's role in index substitution while maintaining invariance, as \delta_{ij} itself transforms as \delta'_{pq} = R_{pi} \delta_{ij} R_{qj} = (R R^T)_{pq} = \delta_{pq}. For higher-order tensors, the dot product extends to contractions, such as the double dot product (or full contraction) of two second-order tensors T and S, defined as T : S = \sum_{i,j=1}^3 T_{ij} S_{ij}. This scalar reduces the rank by two, analogous to the vector case, and is invariant under orthogonal transformations due to the orthogonality of the transformation components.
In Cartesian coordinates, the metric tensor g_{ij} = \delta_{ij} further simplifies these inner products by providing the orthonormal structure directly, eliminating the need for explicit metric raising or lowering of indices. This enables efficient algebraic manipulations in tensor analysis, such as computing traces or norms.
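The delta's index-substitution property and the tensorial form v_i δ_{ij} w_j can be demonstrated in a few lines (helper names are mine):

```python
def delta(i, j):
    """Kronecker delta: 1 if i == j, else 0."""
    return 1.0 if i == j else 0.0

def dot(v, w):
    return sum(v[i] * w[i] for i in range(3))

v, w = [1.0, 2.0, 3.0], [-1.0, 0.5, 2.0]

# Index substitution: sum_k delta_ik v_k = v_i
assert all(sum(delta(i, k) * v[k] for k in range(3)) == v[i] for i in range(3))

# v · w = v_i delta_ij w_j
contracted = sum(v[i] * delta(i, j) * w[j] for i in range(3) for j in range(3))
assert contracted == dot(v, w)
```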

Cross product with Levi-Civita symbol

The cross product of two vectors \mathbf{v} and \mathbf{w} in three-dimensional Euclidean space is a pseudovector \mathbf{v} \times \mathbf{w} whose magnitude equals |\mathbf{v}| |\mathbf{w}| \sin \theta, where \theta is the angle between \mathbf{v} and \mathbf{w}, and whose direction follows the right-hand rule, pointing perpendicular to the plane spanned by \mathbf{v} and \mathbf{w}. The Levi-Civita symbol \varepsilon_{ijk} is a totally antisymmetric mathematical object defined for indices i, j, k = 1, 2, 3, with \varepsilon_{123} = 1, \varepsilon_{ijk} = 1 for even permutations of (1, 2, 3), \varepsilon_{ijk} = -1 for odd permutations, and \varepsilon_{ijk} = 0 if any two indices are repeated. In component form, the cross product is expressed using the Levi-Civita symbol as (\mathbf{v} \times \mathbf{w})_i = \sum_{j,k} \varepsilon_{ijk} v_j w_k, where the summation convention applies over repeated indices j and k. The Levi-Civita symbol is a pseudotensor of rank three, meaning it transforms under an orthogonal matrix R with components \varepsilon'_{ijk} = (\det R) \varepsilon_{lmn} R_{il} R_{jm} R_{kn}, acquiring an extra factor of \det R = \pm 1 compared to a true tensor; under improper rotations where \det R = -1, it changes sign. This property reflects the axial (pseudovector) nature of the cross-product result. A key identity involving the Levi-Civita symbol is the expression for the determinant of a 3×3 matrix A with components A_{mi}: \det A = \sum_{i,j,k} \varepsilon_{ijk} A_{1i} A_{2j} A_{3k}. This formula arises because the antisymmetric contraction sums the signed products over all permutations of the column indices.
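A direct sketch of both formulas (helper names are mine, using 0-based indices): the Levi-Civita symbol as a lookup over permutations, the component cross product (v × w)_i = ε_ijk v_j w_k, and det A = ε_ijk A_1i A_2j A_3k.

```python
def eps(i, j, k):
    """Levi-Civita symbol: +1 for even permutations of (0,1,2),
    -1 for odd permutations, 0 if any index repeats."""
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

def cross(v, w):
    """(v x w)_i = sum_jk eps_ijk v_j w_k"""
    return [sum(eps(i, j, k) * v[j] * w[k] for j in range(3) for k in range(3))
            for i in range(3)]

def det3(A):
    """det A = sum_ijk eps_ijk A_0i A_1j A_2k"""
    return sum(eps(i, j, k) * A[0][i] * A[1][j] * A[2][k]
               for i in range(3) for j in range(3) for k in range(3))

assert cross([1, 0, 0], [0, 1, 0]) == [0, 0, 1]            # e1 x e2 = e3
assert det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1
```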

Pseudovectors from antisymmetric tensors

In three-dimensional Cartesian coordinates, an antisymmetric second-order tensor A_{ij} satisfies A_{ij} = -A_{ji} for all indices i, j = 1, 2, 3. Such a tensor has only three independent components, as the diagonal elements must vanish and the off-diagonal elements are related by negation, reducing the total from nine to three degrees of freedom. This antisymmetric tensor can be mapped to a pseudovector \omega_k through duality using the Levi-Civita symbol \varepsilon_{kij}, defined previously as the totally antisymmetric symbol with \varepsilon_{123} = 1. The components of the pseudovector are given by \omega_k = \frac{1}{2} \sum_{i,j=1}^3 \varepsilon_{kij} A_{ij}, and the inverse mapping recovers the tensor as A_{ij} = \sum_{k=1}^3 \varepsilon_{ijk} \omega_k. These relations highlight the equivalence between the antisymmetric tensor and the pseudovector representation in three dimensions. Pseudovectors, or axial vectors, derived this way transform like ordinary vectors under proper rotations but acquire an additional sign flip under improper transformations such as reflections, where the determinant of the transformation matrix is -1. A key example is angular momentum \mathbf{L}: the angular velocity pseudovector \boldsymbol{\omega} is the dual of the antisymmetric rotation-rate tensor, and contraction with the inertia tensor yields \mathbf{L} = \mathbf{I} \cdot \boldsymbol{\omega}, with the pseudovector nature ensuring consistency under reflections. This duality also manifests in the cross product, where the k-th component of \mathbf{v} \times \mathbf{w} is expressed as the contraction (v \times w)_k = \sum_{i,j=1}^3 \varepsilon_{kij} v_i w_j, producing a pseudovector from two polar vectors.
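The duality maps ω_k = ½ ε_kij A_ij and A_ij = ε_ijk ω_k are mutually inverse, which the sketch below verifies (helper names are mine, 0-based indices):

```python
def eps(i, j, k):
    """Levi-Civita symbol over 0-based indices."""
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

def to_pseudovector(A):
    """omega_k = (1/2) eps_kij A_ij"""
    return [0.5 * sum(eps(k, i, j) * A[i][j] for i in range(3) for j in range(3))
            for k in range(3)]

def to_antisymmetric(w):
    """A_ij = eps_ijk omega_k"""
    return [[sum(eps(i, j, k) * w[k] for k in range(3)) for j in range(3)]
            for i in range(3)]

w = [1.0, -2.0, 3.0]
A = to_antisymmetric(w)
assert all(A[i][j] == -A[j][i] for i in range(3) for j in range(3))  # antisymmetry
assert to_pseudovector(A) == w                                        # round trip
```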

Higher-Order Tensors and Generalizations

Transformation laws for second-order tensors

In Cartesian coordinate systems, a second-order tensor \mathbf{T} undergoes a change of basis under an orthogonal transformation specified by a rotation matrix \mathbf{R} (satisfying \mathbf{R}^T \mathbf{R} = \mathbf{I}), where the components in the new basis are given by T'_{mn} = \sum_{i,j=1}^3 R_{m i} R_{n j} T_{i j}. This relation ensures that the tensor's directional properties are preserved across orthonormal frames. In matrix form, the transformation simplifies to \mathbf{T}' = \mathbf{R} \mathbf{T} \mathbf{R}^T, reflecting the bilinear mapping nature of second-order tensors. Within Euclidean spaces using orthonormal bases, Cartesian tensors treat upper and lower indices equivalently, eliminating the distinction between covariant and contravariant forms prevalent in general tensor analysis; thus, the transformation law applies uniformly without metric adjustments. The trace of the tensor, \operatorname{tr}(\mathbf{T}) = \sum_{i=1}^3 T_{i i}, remains invariant under this orthogonal transformation, as \operatorname{tr}(\mathbf{T}') = \operatorname{tr}(\mathbf{R} \mathbf{T} \mathbf{R}^T) = \operatorname{tr}(\mathbf{T} \mathbf{R}^T \mathbf{R}) = \operatorname{tr}(\mathbf{T}), owing to the cyclic property of the trace and the orthogonality condition. Consequently, the eigenvalues of \mathbf{T}, determined by its characteristic equation \det(\mathbf{T} - \lambda \mathbf{I}) = 0, are also unchanged, as the transformation is a similarity operation that preserves the spectrum. A practical illustration arises in continuum mechanics with the stress tensor \boldsymbol{\sigma}, which transforms as \boldsymbol{\sigma}' = \mathbf{R} \boldsymbol{\sigma} \mathbf{R}^T when rotating the coordinate axes; this allows evaluation of normal and shear stresses in arbitrary orientations, such as determining maximum principal stresses for material failure analysis.
The double contraction between two second-order tensors, defined as the scalar \mathbf{T} : \mathbf{S} = \sum_{i,j=1}^3 T_{i j} S_{i j} = \operatorname{tr}(\mathbf{T}^T \mathbf{S}), is invariant under the transformation: \mathbf{T}' : \mathbf{S}' = (\mathbf{R} \mathbf{T} \mathbf{R}^T) : (\mathbf{R} \mathbf{S} \mathbf{R}^T) = \operatorname{tr}(\mathbf{R} \mathbf{T}^T \mathbf{R}^T \mathbf{R} \mathbf{S} \mathbf{R}^T) = \operatorname{tr}(\mathbf{T}^T \mathbf{S}) = \mathbf{T} : \mathbf{S}, ensuring that invariant quantities like the strain energy density \frac{1}{2} \boldsymbol{\sigma} : \boldsymbol{\epsilon} (where \boldsymbol{\epsilon} is the strain tensor) retain their physical meaning regardless of the basis.
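The invariance of the trace and of the double contraction under T' = R T R^T can be verified numerically (helper names are mine):

```python
import math

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(3))

def ddot(A, B):
    """Double contraction A : B = sum_ij A_ij B_ij."""
    return sum(A[i][j] * B[i][j] for i in range(3) for j in range(3))

R = rot_z(0.9)
T = [[2.0, 1.0, 0.0], [1.0, 3.0, -1.0], [0.0, -1.0, 4.0]]
S = [[1.0, 0.0, 2.0], [0.0, -1.0, 0.0], [2.0, 0.0, 5.0]]
Tp = matmul(matmul(R, T), transpose(R))   # T' = R T R^T
Sp = matmul(matmul(R, S), transpose(R))
assert abs(trace(Tp) - trace(T)) < 1e-12   # trace is invariant
assert abs(ddot(Tp, Sp) - ddot(T, S)) < 1e-12   # double contraction is invariant
```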

Rules for arbitrary-order tensors

Cartesian tensors of arbitrary order k in an n-dimensional Euclidean space with orthonormal bases obey a specific transformation law under orthogonal coordinate transformations. These transformations are represented by an orthogonal matrix A with elements a_{ji}, satisfying A^T A = I. The components of the tensor T in the new basis are given by T'_{j_1 j_2 \cdots j_k} = \sum_{i_1=1}^n \sum_{i_2=1}^n \cdots \sum_{i_k=1}^n a_{j_1 i_1} a_{j_2 i_2} \cdots a_{j_k i_k} \, T_{i_1 i_2 \cdots i_k}, where the sums are over the n dimensions, ensuring the tensor's geometric meaning remains unchanged across bases. This law extends the transformation for lower ranks, such as the rank-2 case, by applying the orthogonal matrix once per index. From a functional perspective, a Cartesian tensor of contravariant order k (type (k,0)) can be interpreted as a multilinear map T: V \times \cdots \times V \to \mathbb{R} (with k factors of the vector space V), linear in each argument. Under an orthogonal transformation defined by A, the transformed tensor T' satisfies T'(v'_1, \dots, v'_k) = T(v_1, \dots, v_k), where each v_i = A^{-1} v'_i. Since A is orthogonal, A^{-1} = A^T, preserving the multilinearity and the tensor's intrinsic properties in the Euclidean setting. For antisymmetric Cartesian tensors in n dimensions, the generalization of the Levi-Civita symbol \epsilon_{i_1 \dots i_n} serves as a basis for totally antisymmetric structures. This symbol is defined as the sign of the permutation of the indices (i_1, \dots, i_n) relative to (1, \dots, n) if all indices are distinct (i.e., +1 for even permutations, -1 for odd), and zero otherwise. It enables the expression of higher-dimensional antisymmetric tensors, such as the volume form or generalizations of cross products, by contracting with this symbol to enforce antisymmetry across all indices. A representative example is the fourth-order elasticity tensor C_{ijkl} in n dimensions, which relates the second-order stress tensor \sigma_{ij} to the strain tensor \varepsilon_{kl} via \sigma_{ij} = C_{ijkl} \varepsilon_{kl} in linear elasticity.
Its components transform according to the general law: C'_{pqrs} = \sum_{i,j,k,l=1}^n a_{p i} a_{q j} a_{r k} a_{s l} \, C_{ijkl}, capturing the material's anisotropic response under orthogonal rotations in any dimension, with symmetries often reducing the independent components. Scalar contractions of Cartesian tensors, such as the full contraction of a rank-2m tensor, c = \sum_{i_1, \dots, i_m} T_{i_1 i_1 \dots i_m i_m}, remain invariant under orthogonal transformations. To see this, substitute the transformation law: each index introduces a factor of a, and orthogonality collapses each contracted pair into a Kronecker delta. For a simple rank-2 trace, \operatorname{tr}(T') = \sum_p T'_{pp} = \sum_{p,q,r} a_{p q} a_{p r} T_{q r} = \sum_{q, r} T_{q r} \delta_{q r} = \operatorname{tr}(T), using the orthogonality \sum_p a_{p q} a_{p r} = \delta_{q r}. This extends to higher even ranks by successive applications of the Kronecker delta from orthogonality, ensuring the scalar result is basis-independent.
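The general rank-k law can be implemented once and reused for any order. The sketch below (helper names are mine) applies it to a rank-3 tensor built as a triad u ⊗ v ⊗ w and checks it against transforming the three vectors first:

```python
import math
from itertools import product

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def transform(a, T, k, dim=3):
    """T'_{j1..jk} = a_{j1 i1} ... a_{jk ik} T_{i1..ik}; T is a dict idx -> value."""
    Tp = {}
    for j in product(range(dim), repeat=k):
        Tp[j] = sum(
            T[i] * math.prod(a[j[r]][i[r]] for r in range(k))
            for i in product(range(dim), repeat=k)
        )
    return Tp

def triad(u, v, w):
    """Rank-3 tensor with components T_ijk = u_i v_j w_k."""
    return {(i, j, k): u[i] * v[j] * w[k]
            for i, j, k in product(range(3), repeat=3)}

def matvec(a, v):
    return [sum(a[i][j] * v[j] for j in range(3)) for i in range(3)]

a = rot_z(0.5)
u, v, w = [1.0, 0.0, 2.0], [0.0, 3.0, 1.0], [2.0, -1.0, 0.0]
lhs = transform(a, triad(u, v, w), 3)                 # rotate the tensor
rhs = triad(matvec(a, u), matvec(a, v), matvec(a, w)) # rotate the factors
assert all(abs(lhs[idx] - rhs[idx]) < 1e-10 for idx in lhs)
```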

Antisymmetric and symmetric properties

In the context of Cartesian tensors, a second-rank symmetric tensor satisfies T_{ij} = T_{ji} for all indices i, j, where the indices range over the dimensions of the space. In an n-dimensional space, such a tensor has n(n+1)/2 independent components, as the symmetry reduces the total n^2 components by equating off-diagonal pairs. This property holds invariantly under orthogonal transformations in Cartesian coordinates. Conversely, a second-rank antisymmetric tensor obeys T_{ij} = -T_{ji}, implying that all diagonal components vanish (T_{ii} = 0) and off-diagonal elements come in opposite-signed pairs. It possesses n(n-1)/2 independent components in n dimensions, and in three dimensions, this corresponds to three components that can be mapped to a pseudovector or, in multivector algebra, a bivector. Like the symmetric case, antisymmetry is preserved under rotations. Any second-rank Cartesian tensor T_{ij} can be uniquely decomposed into its symmetric part S_{ij} and antisymmetric part A_{ij}, such that T_{ij} = S_{ij} + A_{ij}, where S_{ij} = (T_{ij} + T_{ji})/2 and A_{ij} = (T_{ij} - T_{ji})/2. This separates strain-like (symmetric) and rotation-like (antisymmetric) contributions in physical interpretations. For higher-rank tensors, generalizations include fully symmetric tensors, which remain invariant under any permutation of their indices, and fully antisymmetric tensors, which change sign under odd permutations (remaining invariant up to sign). The symmetrizer S, which projects a tensor onto its fully symmetric part, is defined for a pair of indices as S T = (T + P T)/2, where P denotes the permutation exchanging the relevant indices; for multiple indices, it extends to an average over all permutations. In representation theory, the irreducible representations (irreps) of the symmetric group acting on tensor index spaces are classified using Young tableaux, where rows represent symmetrization and columns antisymmetrization, providing a basis for decomposing general tensors into symmetry-adapted components.
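The unique decomposition T = S + A can be sketched in a few lines (helper names are mine), together with the component count n(n+1)/2 + n(n-1)/2 = n²:

```python
def sym_part(T):
    """S_ij = (T_ij + T_ji) / 2"""
    n = len(T)
    return [[(T[i][j] + T[j][i]) / 2.0 for j in range(n)] for i in range(n)]

def antisym_part(T):
    """A_ij = (T_ij - T_ji) / 2"""
    n = len(T)
    return [[(T[i][j] - T[j][i]) / 2.0 for j in range(n)] for i in range(n)]

T = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
S, A = sym_part(T), antisym_part(T)
n = 3
assert all(S[i][j] == S[j][i] for i in range(n) for j in range(n))    # symmetric
assert all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))   # antisymmetric
assert all(S[i][j] + A[i][j] == T[i][j] for i in range(n) for j in range(n))
# Independent components: n(n+1)/2 symmetric plus n(n-1)/2 antisymmetric = n^2
assert n * (n + 1) // 2 + n * (n - 1) // 2 == n * n
```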

Distinctions from General Tensor Theory

Cartesian vs. curvilinear coordinates

Cartesian tensors are defined in the context of Cartesian coordinate systems, where the basis vectors are constant and orthonormal, leading to significant simplifications compared to the more general treatment in curvilinear coordinates. In curvilinear systems, such as spherical or cylindrical coordinates, the basis vectors vary with position, necessitating the use of Christoffel symbols \Gamma^j_{ik} to account for this variation in the covariant derivative. The covariant derivative of a contravariant vector component v^j in curvilinear coordinates is given by \nabla_i v^j = \partial_i v^j + \Gamma^j_{ik} v^k, where \partial_i denotes the partial derivative with respect to the i-th coordinate. In contrast, Cartesian coordinates have constant basis vectors, so the Christoffel symbols vanish (\Gamma^j_{ik} = 0), reducing the covariant derivative to the simple partial derivative: \nabla_i v^j = \partial_i v^j. The metric tensor further highlights these differences. In curvilinear coordinates, the metric g_{ij} is generally non-diagonal and position-dependent, requiring explicit index raising and lowering operations (e.g., v_i = g_{ij} v^j) to distinguish between covariant and contravariant components. This complexity arises because the inner product between basis vectors varies, complicating tensor manipulations. In Cartesian coordinates, however, the metric is the Kronecker delta \delta_{ij}, which is diagonal and constant, eliminating the need for such operations and making covariant and contravariant components numerically identical. Under coordinate transformations, general tensors in curvilinear systems mix covariant and contravariant indices, with transformation laws involving the Jacobian matrix and its inverse to preserve the tensor's invariance. Cartesian tensors, often expressed in terms of physical components (projections onto unit vectors), simplify this by ignoring the distinction between index types due to the identity metric, allowing transformations to focus solely on direction cosines without metric adjustments.
This approach is particularly advantageous in solving partial differential equations (PDEs), such as the Laplace equation \nabla^2 \phi = 0. In curvilinear coordinates, the Laplacian involves the metric determinant and inverse metric, \nabla^2 \phi = \frac{1}{\sqrt{|g|}} \partial_i \left( \sqrt{|g|} g^{ij} \partial_j \phi \right), introducing additional terms. In Cartesian coordinates, it reduces to the straightforward form \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} + \frac{\partial^2 \phi}{\partial z^2} = 0, avoiding connection terms such as Christoffel symbols and facilitating analytical and numerical solutions.
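The Cartesian Laplacian is simple enough to check by finite differences. The sketch below (helper names are mine) verifies numerically that the harmonic function φ = x² − y² satisfies φ_xx + φ_yy + φ_zz = 0:

```python
def phi(x, y, z):
    """A harmonic function: x^2 - y^2."""
    return x * x - y * y

def laplacian(f, x, y, z, h=1e-3):
    """Central second differences along each Cartesian axis."""
    d2 = 0.0
    for dx, dy, dz in [(h, 0, 0), (0, h, 0), (0, 0, h)]:
        d2 += (f(x + dx, y + dy, z + dz) - 2 * f(x, y, z)
               + f(x - dx, y - dy, z - dz)) / h**2
    return d2

assert abs(laplacian(phi, 0.3, -0.7, 1.2)) < 1e-6   # Laplace equation holds
```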

Index-free vs. abstract index notation

In index-free notation, also referred to as dyadic or direct notation, Cartesian tensors are expressed without explicit indices, leveraging symbols like boldface letters or arrows to denote vectors and tensors, with operations indicated by juxtaposition, dots, or colons. For a second-order tensor \mathbf{T} acting on a vector \mathbf{v}, the result is \mathbf{T} \cdot \mathbf{v}; the inner product of two second-order tensors \mathbf{T} and \mathbf{S} is \mathbf{T} : \mathbf{S}. This approach treats tensors as linear operators on vectors, extending familiar vector algebra in a compact manner, and is well-suited to orthonormal Cartesian bases where no distinction between covariant and contravariant components is needed. Abstract index notation, by contrast, employs indices as abstract labels to specify tensor type rather than numerical components in a fixed basis, such as T^i_{\ j} for a mixed second-order tensor with one contravariant and one covariant index. Repeated indices imply summation via the Einstein convention, and the notation underscores the tensor's role as a multilinear map independent of coordinates. This formalism facilitates precise tracking of tensor ranks and symmetries while maintaining basis independence. Index-free notation provides advantages in engineering contexts through its brevity and intuitive operations, reducing the risk of summation convention errors and promoting conceptual focus on physical transformations over component manipulation. Its limitations arise in non-Cartesian systems, where basis non-orthonormality complicates dot products without additional metric factors. Abstract index notation excels in generality, particularly for relativistic applications requiring coordinate invariance, though it demands careful index management. To illustrate equivalence, consider the composition of three second-order tensors A, B, and C. In index-free notation it is A \cdot B \cdot C; its components are (A \cdot B \cdot C)_{il} = A_{ij} B_{jk} C_{kl}, or, in abstract index notation, A^i_{\ j} B^j_{\ k} C^k_{\ l}.
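Both notations describe the same computation, which the sketch below (helper name `matmul` is mine) confirms: two successive matrix products (index-free) equal the explicit component sum A_ij B_jk C_kl.

```python
def matmul(X, Y):
    """(X·Y)_ij = X_ik Y_kj"""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [4.0, 0.0, 1.0]]
B = [[2.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 2.0, 2.0]]
C = [[1.0, 1.0, 0.0], [0.0, 2.0, 1.0], [3.0, 0.0, 1.0]]

index_free = matmul(matmul(A, B), C)          # A · B · C
component = [[sum(A[i][j] * B[j][k] * C[k][l]  # (A·B·C)_il = A_ij B_jk C_kl
                  for j in range(3) for k in range(3))
              for l in range(3)] for i in range(3)]
assert index_free == component
```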

Simplifications in orthonormal bases

In orthonormal bases, Cartesian tensors benefit from several simplifications arising from the Euclidean metric tensor being the identity, which streamlines algebraic and analytical manipulations compared to general coordinate systems.

A primary advantage is the elimination of the distinction between upper and lower indices for tensor components. Since the metric tensor g_{ij} = \delta_{ij} is diagonal with unit entries, raising or lowering an index leaves the component unchanged: v^i = \delta^{ij} v_j = v_i. This equivalence, under which contravariant and covariant representations coincide, avoids the need for metric contractions in index manipulations and simplifies transformation laws for tensors under rotations.

Partial derivatives in these bases commute, \partial_i \partial_j = \partial_j \partial_i, because the Christoffel symbols vanish in the flat geometry, ensuring that mixed second derivatives of smooth functions are equal. This commutativity directly simplifies Taylor expansions of tensor fields. For a scalar field f near a point, the expansion includes the second-order term \frac{1}{2} (\partial_i \partial_j f) \, dx^i \, dx^j; the symmetry \partial_i \partial_j f = \partial_j \partial_i f implies that the associated Hessian tensor is inherently symmetric, reducing the number of independent components and allowing computations without symmetrization corrections. Similar simplifications apply to higher-order expansions of vector and tensor fields.

The orthonormality of the basis vectors also allows series expansions to be applied directly to tensor components. In rectangular domains, each component T_{i_1 \dots i_k} of a tensor field can be decomposed independently in a Fourier series as T_{i_1 \dots i_k}(x) = \sum_{n} \hat{T}_{i_1 \dots i_k, n} e^{i n \cdot x}, where the coefficients \hat{T} are computed via inner products that vanish across modes due to orthogonality.
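The index-raising identity v^i = \delta^{ij} v_j = v_i can be verified directly. A short NumPy sketch (an illustration added here, not from the original text), contrasting the identity metric with a hypothetical non-orthonormal metric where raising an index genuinely changes the components:

```python
import numpy as np

v = np.array([1.0, -2.0, 0.5])   # covariant components v_j
g = np.eye(3)                     # Euclidean metric g_{ij} = delta_{ij}

# Raising the index with the identity (inverse) metric leaves v unchanged:
v_up = np.einsum('ij,j->i', g, v)   # v^i = delta^{ij} v_j
assert np.allclose(v_up, v)

# Contrast: a non-orthonormal basis with a skewed metric (hypothetical example).
g_skew = np.array([[1.0, 0.5, 0.0],
                   [0.5, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
g_skew_inv = np.linalg.inv(g_skew)

# Here raising the index mixes components, so the result differs from v:
assert not np.allclose(np.einsum('ij,j->i', g_skew_inv, v), v)
```

This is precisely the bookkeeping that the orthonormal-basis setting lets one skip.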
The component-wise separability of such Fourier expansions accelerates spectral methods for solving tensor equations in physics and engineering.

From a computational perspective, the identity metric also accelerates numerical algorithms such as the finite element method (FEM). In Cartesian orthonormal bases it eliminates off-diagonal metric terms in integral formulations, yielding simpler Jacobian matrices and stiffness assemblies; on structured grids, for instance, this produces diagonal or banded systems that reduce solve times for elasticity or fluid-dynamics problems involving second-order tensors.

Rotation invariance of tensor norms further highlights these simplifications, requiring no metric adjustments. For a second-order tensor T, the squared Frobenius norm is \|T\|^2 = T_{ij} T_{ij}, invariant under orthogonal transformations R (with R^T R = I) because the transformed tensor T' = R T R^T satisfies \|T'\|^2 = \operatorname{Tr}(T'^T T') = \operatorname{Tr}(T^T T) = \|T\|^2. This property preserves scalar measures of tensor magnitude across rotations without additional contractions.
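The rotation invariance of the Frobenius norm can be confirmed with a few lines of NumPy (a sketch added for illustration; the orthogonal matrix is generated via a QR decomposition, one common way to obtain a random orthogonal transformation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random orthogonal matrix Q (Q^T Q = I) from a QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.allclose(Q.T @ Q, np.eye(3))

T = rng.standard_normal((3, 3))

# Transform the rank-2 tensor: T'_{ij} = Q_{ik} Q_{jl} T_{kl}.
T_rot = Q @ T @ Q.T
T_ein = np.einsum('ik,jl,kl->ij', Q, Q, T)   # same transformation, index form
assert np.allclose(T_rot, T_ein)

# Frobenius norm (numpy's default matrix norm) is unchanged by the rotation.
assert np.allclose(np.linalg.norm(T_rot), np.linalg.norm(T))
```

Note that the check holds for any orthogonal Q, including improper ones with determinant −1, matching the trace argument above.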

Historical context

Early developments in vector analysis

In the early 1840s, the Irish mathematician William Rowan Hamilton developed quaternions as a four-dimensional extension of the complex numbers to address geometric transformations in three-dimensional space. Introduced in 1843, quaternions provided an algebraic tool for representing rotations and anticipated the vector cross product through their vector-like components, influencing later developments in vector analysis.

Independently, in 1844, the German scholar Hermann Grassmann published Die lineale Ausdehnungslehre, a foundational work that introduced extensive magnitudes, or multivectors, as algebraic entities combining scalars, vectors, and higher-dimensional objects. Grassmann's theory emphasized the combinatorial structure of geometric extensions, laying groundwork for vector spaces and the exterior product, which prefigured tensorial operations in linear algebra.

By the 1870s, these algebraic innovations found practical application in physics through James Clerk Maxwell's A Treatise on Electricity and Magnetism (1873), where he employed vector methods to formulate electromagnetic phenomena. Maxwell's vectors described field intensities and forces in a unified manner, bridging Hamilton's and Grassmann's abstract ideas with physical modeling in electromagnetism.

The shift toward explicit tensor concepts emerged in the 1890s with Woldemar Voigt's studies in crystal physics, culminating in his 1898 introduction of the term "tensor" for second-order quantities like stress and strain, represented as matrices. Voigt's notation facilitated the analysis of anisotropic materials in orthonormal bases, marking the transition from vectors to higher-rank objects. This Cartesian framework, rooted in 19th-century Euclidean geometry, solidified tensors as tools for invariant descriptions under orthogonal transformations, essential for elasticity theory and continuum physics.

Formulation by Gibbs and Heaviside

In the late 19th century, Josiah Willard Gibbs formulated a comprehensive system of vector analysis tailored to three-dimensional Cartesian coordinates during his lectures at Yale University in the 1880s. His privately printed pamphlet Elements of Vector Analysis: Arranged for the Use of Students in Physics, issued in parts between 1881 and 1884, defined operations including the scalar product (denoted by a dot) between two vectors, yielding a scalar, and the vector product (denoted by a cross), producing a vector perpendicular to both operands. Gibbs also introduced differential operators: the gradient of a scalar field, represented as \nabla \phi, which yields a vector pointing in the direction of steepest ascent; the divergence of a vector field, \nabla \cdot \mathbf{A}, measuring the net flux out of a point; and the curl of a vector field, \nabla \times \mathbf{A}, capturing rotational tendencies. These constructs formed the basis of scalar-vector calculus, emphasizing orthonormal bases and eschewing the quaternion methods of William Rowan Hamilton and his followers.

Independently, Oliver Heaviside developed parallel innovations in vector analysis during the 1880s, driven by applications to electromagnetism, as detailed in his collected Electrical Papers published in 1892. Heaviside reformulated Maxwell's equations using vector notation, introducing dyadics to handle second-rank quantities such as stress tensors in electromagnetic theory. Dyadic notation, expressed as the juxtaposition \mathbf{a}\mathbf{b} for the dyadic product of \mathbf{a} and \mathbf{b}, enabled an index-free representation of rank-2 Cartesian tensors, facilitating operations like contraction and composition without explicit components. This approach extended vector methods to higher-order entities while maintaining focus on 3D space.

Gibbs incorporated dyadics in the second part of his 1884 pamphlet, treating them as linear vector functions in Cartesian coordinates with unit vectors \mathbf{i}, \mathbf{j}, \mathbf{k}, which laid groundwork for dyadic algebra and inversion. Both Gibbs's and Heaviside's formulations prioritized practical utility in physics, establishing a unified framework for vector and tensor calculus that diverged from abstract or non-Cartesian alternatives.
Their independent yet convergent efforts, disseminated through Gibbs's Yale notes and Heaviside's serial publications in The Electrician, solidified the vector approach central to modern Cartesian tensor theory.

Modern extensions and unification

In the early 20th century, Albert Einstein introduced index notation with implied summation, now known as the Einstein summation convention, which provided a compact framework for handling multicomponent quantities in Cartesian coordinate systems. The convention implies summation over repeated indices, simplifying manipulations of Cartesian tensors such as rotations and derivatives in flat space. Adapted for special relativity, it uses extended Cartesian coordinates (t, x, y, z) to describe phenomena in inertial frames, where the metric is diagonal and simplifies tensor transformations without curvature effects. This adaptation proved essential for approximations in relativistic electrodynamics, such as expressing the electromagnetic field tensor in Cartesian components.

Building on these foundations, Clifford Truesdell advanced the application of Cartesian tensors in continuum mechanics during the 1950s and 1960s through his development of rational continuum mechanics. In this framework, constitutive tensors model material responses; the Piola-Kirchhoff stress tensor, for example, depends on deformation gradients represented in Cartesian components to ensure frame-indifference under orthogonal transformations. Truesdell's approach integrated thermodynamic principles with tensor equations for nonlinear elasticity and fluids, using Cartesian tensors to derive equilibrium conditions and elasticities without reliance on curvilinear coordinates. His seminal works emphasized axiomatic rigor, unifying mechanics and thermodynamics via tensorial response functions.

From the late 20th century onward, computer algebra systems facilitated computational handling of higher-dimensional Cartesian tensors, enabling symbolic manipulations beyond manual limits. Packages like MathTensor and CARTAN in Mathematica support tensor analysis in orthonormal bases, including routines for contractions and symmetrizations. Similarly, MATLAB toolboxes incorporate Cartesian tensor operations for vector and matrix representations, often extending to quaternions and curvilinear systems. The CTenC package, developed for Mathematica, specializes in Cartesian tensor calculus using index-free and indexed methods, aiding simulations in n-dimensional spaces.
These tools have democratized complex tensor computations in engineering and physics.

Cartesian tensors achieve unification with broader tensor analysis by embedding them within flat Euclidean or Minkowski spaces, where the metric is constant and orthonormal bases prevail. This perspective views general tensors on manifolds as local extensions of Cartesian ones, with covariant derivatives reducing to partial derivatives in flat space. Such embedding allows relativistic approximations to leverage Cartesian machinery while incorporating curvature via local inertial frames. In relativistic contexts, this bridges classical tensor analysis to pseudo-Riemannian geometry, facilitating computations in weakly curved spacetimes.

In recent decades, particularly post-2000, Cartesian tensors have found applications in quantum mechanics, where operators are represented as rank-one or higher tensors in Hilbert space. The components of spin operators, such as S_x, S_y, S_z, transform as Cartesian vectors under rotations, enabling tensorial decompositions of observables. Numerical methods, including low-rank tensor factorizations on Cartesian grids, have advanced many-body quantum simulations, reducing the computational scaling for electron correlations and energies. These techniques, applied in quantum chemistry, achieve high accuracy for systems treated in Hartree-Fock approximations, with grid-based tensors handling spin-orbit interactions efficiently.

    Cartesian Tensor Operators. From the definition given earlier, under rotation the elements of a rank two Cartesian tensor transform as: Tij→Tij′=∑∑Rii′ ...