Euclidean vector

A Euclidean vector, also known as a geometric or spatial vector, is a fundamental mathematical object in Euclidean space that possesses both magnitude (length) and direction, typically represented as a directed line segment from an initial point (tail) to a terminal point (head) or as an ordered tuple of real numbers corresponding to coordinates in \mathbb{R}^n. These vectors form the basis of vector spaces over the real numbers, where the Euclidean structure is defined by an inner product, such as the dot product, which induces a norm measuring length and orthogonality between vectors. Euclidean vectors support key operations including vector addition, which combines two vectors tail-to-head to form a resultant vector, and scalar multiplication, which scales the magnitude and potentially reverses direction based on the scalar's sign; these operations satisfy properties like commutativity, associativity, and distributivity, making the set a vector space. The magnitude, or norm, of a vector \mathbf{v} = (v_1, v_2, \dots, v_n) is given by \|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2}, derived from the dot product \mathbf{v} \cdot \mathbf{v}, enabling concepts like unit vectors (norm 1) and projections. Additionally, the dot product \mathbf{u} \cdot \mathbf{v} = \sum u_i v_i quantifies the angle between vectors, with orthogonality when it equals zero, and supports decomposition into orthogonal components. In applications, Euclidean vectors are essential in physics for representing forces, velocities, and displacements, in computer graphics for transformations and rendering, and in data science for multidimensional analysis, underpinning fields from classical mechanics to machine learning algorithms. Their finite-dimensional nature distinguishes them from more abstract vectors in infinite-dimensional spaces, while generalizations extend to non-Euclidean geometries.

Overview

Definition and basic characteristics

A Euclidean vector is a geometric object that represents a directed line segment in n-dimensional Euclidean space, characterized by both magnitude (or length) and direction. This geometric interpretation allows vectors to model displacements or quantities where orientation matters, such as velocity or force in physical contexts, though the core concept remains algebraic and independent of specific applications. Unlike scalars, which are quantities defined solely by magnitude (e.g., temperature or mass), vectors require both a numerical size and a directional specification to fully describe them. In Euclidean space, vectors exhibit key characteristics that distinguish their behavior. They can be classified as free vectors, whose position is irrelevant and which can be translated without altering their properties, or bound vectors, which are fixed to a specific point of application. Free vectors emphasize the equivalence of directed segments with equal magnitude and direction, enabling their use in abstract algebraic manipulations. Visually, vectors are often depicted as arrows, where the arrow's length corresponds to magnitude and its orientation to direction, providing an intuitive aid grounded in basic geometry. Common notation for vectors includes boldface letters, such as \mathbf{v}, or an arrow overhead, like \vec{v}, to differentiate them from scalar variables. In component form, a vector in n-dimensional space is expressed as \mathbf{v} = (v_1, v_2, \dots, v_n), where each v_i is a real number representing the projection along the i-th coordinate axis in a chosen basis. This representation assumes familiarity with the standard metric on \mathbb{R}^n, facilitating both geometric intuition and computational handling.
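
The free-vector idea can be illustrated with a short Python sketch (the helper function and values below are hypothetical, chosen only for illustration): two directed segments anchored at different points represent the same vector exactly when their component-wise differences, head minus tail, coincide.

```python
# Minimal sketch (illustrative helper): a Euclidean vector in R^n stored as a
# tuple of components, showing that a free vector is determined only by the
# difference head - tail, independent of where it is anchored.

def vector_between(tail, head):
    """Free vector from point `tail` to point `head`, as a component tuple."""
    return tuple(h - t for t, h in zip(tail, head))

# Two directed segments anchored at different points...
v1 = vector_between((0.0, 0.0, 0.0), (1.0, 2.0, 3.0))
v2 = vector_between((5.0, -1.0, 2.0), (6.0, 1.0, 5.0))

# ...represent the same free vector, since their components coincide.
assert v1 == v2 == (1.0, 2.0, 3.0)
print(v1)
```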

Examples in one dimension

In one dimension, a Euclidean vector is fundamentally a signed scalar quantity along a straight line, where the positive sign denotes one chosen direction (e.g., to the right or east) and the negative sign denotes the opposite (e.g., to the left or west). This captures both magnitude and direction on a single axis, forming the simplest case of a Euclidean vector space, \mathbb{R}^1. A classic example is displacement on a number line. Suppose an object moves from position 0 to +5 units along the x-axis; this displacement is the one-dimensional vector \mathbf{d} = +5, indicating a shift eastward. Conversely, a movement from 0 to -3 units represents \mathbf{d} = -3, signifying a westward shift. These signed values encode the net change in position, distinguishing them from mere distances, which are always non-negative scalars. Another illustrative case is velocity as a one-dimensional vector. For an object traveling at 10 m/s to the east, the velocity vector is \mathbf{v} = +10 m/s, while 4 m/s to the west is \mathbf{v} = -4 m/s. This signed scalar conveys both speed and directional sense along the line, essential for describing linear motion. Visually, one-dimensional vectors are depicted as directed arrows on a number line: a positive vector points rightward with length proportional to its value, while a negative one points leftward. The magnitude, or length, of such a vector \mathbf{v} = v is the absolute value |\mathbf{v}| = |v|, which ignores direction and yields a non-negative scalar. This foundational one-dimensional framework extends seamlessly to higher dimensions, where vectors gain additional components perpendicular to the original axis, building toward the full structure of Euclidean space.
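
Because one-dimensional vectors are just signed scalars, their arithmetic reduces to ordinary signed arithmetic, as this brief Python sketch of the displacement examples above shows.

```python
# Worked 1-D example (illustrative values): displacements and velocities along
# a single axis are signed scalars; magnitude is the absolute value.

d_east = +5.0   # displacement of 5 units in the positive (east) direction
d_west = -3.0   # displacement of 3 units in the negative (west) direction

net_displacement = d_east + d_west   # vector addition reduces to signed arithmetic
distance_east = abs(d_east)          # magnitude ignores direction

print(net_displacement)  # 2.0  (net shift of 2 units east)
print(distance_east)     # 5.0
```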

Role in physics and engineering

In physics, Euclidean vectors are essential for representing quantities that possess both magnitude and direction, distinguishing them from scalars, which have only magnitude. For instance, displacement, velocity, acceleration, and force are modeled as vectors because their effects depend on how they point in space, whereas properties like mass, temperature, or speed are scalars unaffected by direction. This vectorial nature allows physicists to describe the motion of objects accurately; for example, an object's velocity vector captures not just its speed but also its direction of travel, enabling predictions of future positions under various influences. Similarly, forces on a body, such as gravitational or frictional forces, are treated as vectors to account for their directional push or pull, which is crucial for analyzing interactions in systems like planetary orbits or collisions.

In engineering, Euclidean vectors bridge theoretical mechanics to practical design, particularly in statics and dynamics. In statics, vectors model forces and moments in structures at rest, such as the equilibrium of beams or trusses under loads, where resolving multiple forces into components ensures stability without net motion. Dynamics extends this to moving systems, using vectors for velocity and acceleration to simulate vehicle trajectories or robotic arms, allowing engineers to optimize performance and safety. A key utility is the resolution of complex motions or forces into orthogonal components, simplifying analysis of multifaceted problems like flight paths influenced by wind and gravity.

Beyond mechanics, vectors appear in electrical engineering through phasors, which represent sinusoidal voltages and currents as rotating vectors in the complex plane to analyze alternating current (AC) circuits. Phasors facilitate the summation of waveforms with different phases, akin to vector addition, enabling efficient computation of circuit impedance and power without time-domain integration. This approach is foundational in designing power systems and signal processors, where directional phase relationships determine overall behavior.
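
As an illustration of the phasor analogy, the following Python sketch (with made-up amplitudes and phases, not taken from any particular circuit) adds two same-frequency sinusoids by adding their complex-plane representatives, exactly as one would add two plane vectors.

```python
# Minimal sketch of phasor addition: each sinusoid A*cos(wt + phi) is
# represented by the complex number A*exp(j*phi), and summing waveforms of the
# same frequency reduces to adding these "vectors" in the complex plane.
import cmath
import math

def phasor(amplitude, phase_deg):
    return cmath.rect(amplitude, math.radians(phase_deg))

v1 = phasor(10.0, 0.0)    # 10 cos(wt)
v2 = phasor(5.0, 90.0)    # 5 cos(wt + 90 deg)

total = v1 + v2           # phasor (vector) addition
amplitude, phase = abs(total), math.degrees(cmath.phase(total))
print(f"resultant: {amplitude:.3f} cos(wt + {phase:.2f} deg)")
# resultant: 11.180 cos(wt + 26.57 deg)
```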

History

Early geometric intuitions

The earliest intuitions of vector-like concepts emerged in ancient Greek geometry, where directed quantities were represented through visual and geometric means rather than algebraic notation. In Euclid's Elements (c. 300 BCE), the parallel postulate and related propositions implicitly addressed direction by preserving the orientation of lines under translation, allowing for the conceptual treatment of displacements as having both magnitude and sense, though without a unified framework. Similarly, in optics, Euclid's Optics employed ray diagrams to depict light propagation as straight lines emanating from the eye, illustrating directional paths that anticipated modern vector representations of ray propagation, as seen in propositions describing rays falling on surfaces and their reflections. These geometric constructions, including the parallelogram rule for composing velocities attributed to various ancient authors, among them the Pseudo-Aristotelian Mechanical Problems, treated motions as directed segments that could be added head-to-tail, forming a foundational intuitive grasp of vector addition without formal symbolism.

In the 17th century, these geometric ideas gained prominence in physics through the work of Isaac Newton, who conceptualized forces as directed quantities in his Principia Mathematica (1687). Newton explicitly invoked the parallelogram rule to compose forces, stating in Corollary I to the Laws of Motion that if two forces act simultaneously on a body, the resultant is determined by completing a parallelogram with the forces as adjacent sides, yielding the diagonal as the effective directed impetus. This approach built on earlier intuitions but integrated them into dynamical analysis, where forces were visualized as arrows indicating both intensity and direction, essential for explaining orbital motion under central attractions. Gottfried Wilhelm Leibniz contributed parallel developments in his writings on motion and the calculus, introducing the "characteristic triangle" to geometrically represent infinitesimal changes in position and direction, aligning with the parallelogram rule for composing directed speeds and forces as segments in a plane.

Key advancements in force composition occurred in the 1670s among British and continental natural philosophers. Christopher Wren, Robert Hooke, and others independently explored the synthesis of directed forces in planetary motion, with Wren and Hooke hypothesizing an inverse-square law for gravitational attraction by combining Huygens's 1673 expression for centrifugal force with Kepler's harmonic law, using geometric diagrams to resolve composite effects. These efforts, often depicted as arrows or line segments denoting direction and magnitude, exemplified vectors as "geometric arrows" devoid of algebraic machinery, relying instead on constructions to model physical interactions like orbital deviations. In contrast, the later 19th-century formalizations by J. Willard Gibbs and Oliver Heaviside introduced rigorous algebraic vector systems, marking a shift from these intuitive geometric precedents.

Formal mathematical development

The formal mathematical development of Euclidean vectors began in the mid-19th century, building on earlier geometric intuitions to establish an algebraic framework for vector operations and structures. In 1843, William Rowan Hamilton introduced quaternions as a four-dimensional extension of complex numbers, which provided a multiplicative structure that influenced the emergence of vector analysis by separating scalar and vector components in three dimensions. This innovation laid groundwork for treating vectors as algebraic entities rather than purely geometric objects. Concurrently, in 1844, Hermann Grassmann published his Ausdehnungslehre (Theory of Extension), which developed the concept of multivectors—higher-grade extensions combining vectors and their products—offering a comprehensive algebraic system for linear combinations and outer products in multidimensional spaces.

By the 1880s, the work of Josiah Willard Gibbs and Oliver Heaviside standardized vector analysis through treatises that emphasized practical notation for vectors in physics, including the dot and cross products, while discarding the quaternion's scalar components to focus on three-dimensional vectors. Their independent developments, disseminated via Gibbs's Yale lectures and Heaviside's electromagnetic papers, established a concise symbolic language that became the foundation for modern vector analysis. This period marked a pivotal shift from synthetic geometry—relying on axiomatic constructions without coordinates—to analytic methods using algebraic manipulations and coordinate representations, enabling rigorous proofs and computations in higher dimensions.

In the early 20th century, Hermann Minkowski extended Euclidean vectors to four-dimensional space-time in his 1908 formulation, integrating time as a fourth coordinate to unify space and time under Lorentz transformations in special relativity, thus broadening vector applications beyond three-dimensional Euclidean geometry. Parallel to this, the axiomatization of vector spaces in linear algebra progressed with Giuseppe Peano's 1888 definition of abstract vector spaces over fields, providing a general framework independent of dimension, which was further refined in David Hilbert's work on infinite-dimensional spaces. Hilbert spaces, introduced around 1906–1910, axiomatized the notion of Euclidean vectors as elements of complete inner product spaces, encompassing both finite- and infinite-dimensional cases and underpinning functional analysis. A key educational milestone was Charles Jasper Joly's 1901 edition of Hamilton's Elements of Quaternions, which served as an accessible primer linking quaternion-based vectors to contemporary algebraic developments.

Representations

Cartesian coordinates

In Euclidean space, vectors are commonly represented using Cartesian coordinates, which assign numerical values to their positions relative to a fixed origin and orthogonal axes. This system allows a vector to be expressed as an ordered tuple of components, corresponding to displacements along each coordinate direction. For instance, in three-dimensional space, a vector \mathbf{v} is written as \mathbf{v} = (x, y, z), where x, y, and z are the scalar components. Equivalently, this representation can be formulated as a linear combination of basis vectors: \mathbf{v} = x\mathbf{i} + y\mathbf{j} + z\mathbf{k}, where \mathbf{i}, \mathbf{j}, and \mathbf{k} are the unit basis vectors along the positive x-, y-, and z-axes, respectively. In n-dimensional Euclidean space, the general form is \mathbf{v} = \sum_{i=1}^n v_i \mathbf{e}_i, with \mathbf{e}_i denoting the i-th standard basis vector. These basis vectors form an orthonormal set, meaning each has unit magnitude (\|\mathbf{e}_i\| = 1) and they are pairwise orthogonal (\mathbf{e}_i \cdot \mathbf{e}_j = 0 for i \neq j). The orthonormality condition is compactly expressed as \mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}, where \delta_{ij} is the Kronecker delta (equal to 1 if i = j and 0 otherwise). The components of a vector in this basis are extracted through projection onto the corresponding basis vector. Specifically, the i-th component is given by v_i = \mathbf{v} \cdot \mathbf{e}_i, leveraging the orthonormality to isolate the scalar projection without interference from other directions. This Cartesian representation is foundational because it aligns directly with the geometric structure of Euclidean space, enabling straightforward algebraic manipulations and visualizations. Its orthogonality simplifies computations involving distances, angles, and transformations, making it the preferred system for most analytical work in physics and engineering.
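
A short Python sketch (with illustrative helper functions, not drawn from any cited source) can verify the orthonormality relation \mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij} and the component-extraction formula v_i = \mathbf{v} \cdot \mathbf{e}_i numerically.

```python
# Minimal sketch of the Cartesian representation: v = sum_i v_i e_i with an
# orthonormal standard basis, and component extraction via v_i = v . e_i.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def standard_basis(n):
    return [tuple(1.0 if j == i else 0.0 for j in range(n)) for i in range(n)]

v = (2.0, -1.0, 4.0)
e = standard_basis(3)

# Orthonormality: e_i . e_j = delta_ij
assert all(dot(e[i], e[j]) == (1.0 if i == j else 0.0)
           for i in range(3) for j in range(3))

# Projection onto each basis vector recovers the components
components = [dot(v, e[i]) for i in range(3)]
print(components)  # [2.0, -1.0, 4.0]
```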

Decomposition into components

In Euclidean space, a vector \mathbf{v} is decomposed into its components relative to an orthonormal basis, expressing it as a sum of scalar multiples of the basis vectors. In three dimensions, using the standard Cartesian basis \mathbf{i}, \mathbf{j}, and \mathbf{k} (unit vectors along the coordinate axes), this resolution takes the form \mathbf{v} = v_x \mathbf{i} + v_y \mathbf{j} + v_z \mathbf{k}, where v_x, v_y, and v_z are the scalar components representing the directed distances along each axis. This decomposition is unique in an orthonormal basis and facilitates coordinate-based calculations. The scalar components are obtained via the dot product with the basis vectors: v_x = \mathbf{v} \cdot \mathbf{i}, v_y = \mathbf{v} \cdot \mathbf{j}, and v_z = \mathbf{v} \cdot \mathbf{k}, where the dot product \mathbf{a} \cdot \mathbf{b} = \sum_k a_k b_k extracts the projection in the basis direction. To derive this, start with the decomposition equation and compute the dot product with \mathbf{i}: \mathbf{v} \cdot \mathbf{i} = (v_x \mathbf{i} + v_y \mathbf{j} + v_z \mathbf{k}) \cdot \mathbf{i} = v_x (\mathbf{i} \cdot \mathbf{i}) + v_y (\mathbf{j} \cdot \mathbf{i}) + v_z (\mathbf{k} \cdot \mathbf{i}). Orthonormality implies \mathbf{i} \cdot \mathbf{i} = 1 and \mathbf{j} \cdot \mathbf{i} = \mathbf{k} \cdot \mathbf{i} = 0, so the equation reduces to v_x = \mathbf{v} \cdot \mathbf{i}. The same holds for the other components by symmetry. Geometrically, each component v_x is the signed scalar projection of \mathbf{v} onto the x-axis, equal to |\mathbf{v}| \cos \theta_x, where \theta_x is the angle between \mathbf{v} and \mathbf{i}. In two dimensions, the decomposition aligns with the parallelogram rule: the component vectors v_x \mathbf{i} and v_y \mathbf{j} form adjacent sides of a parallelogram (here a rectangle), with \mathbf{v} as the resultant diagonal. This projection-based interpretation underscores the dot product's role in quantifying directional alignment. This component decomposition simplifies analysis in Euclidean space by converting vector equations into independent scalar ones per direction, essential for solving problems in physics and engineering. While non-orthogonal bases require more involved projections, the orthogonal Euclidean case yields straightforward, computationally efficient resolutions.
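
The projection interpretation can be checked numerically; in the brief Python sketch below (illustrative values only), the x-component of a plane vector computed as \mathbf{v} \cdot \mathbf{i} agrees with |\mathbf{v}| \cos \theta_x.

```python
# Small numerical check of the projection interpretation:
# v_x = v . i = |v| cos(theta_x), where theta_x is the angle between v and the x-axis.
import math

v = (3.0, 4.0)
norm_v = math.hypot(*v)                 # |v| = 5.0

v_x = v[0]                              # component via dot product with i = (1, 0)
theta_x = math.atan2(v[1], v[0])        # angle between v and the x-axis

print(v_x, norm_v * math.cos(theta_x))  # 3.0 and ~3.0: the two expressions agree
```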

Properties

Equality and parallelism

In Euclidean space \mathbb{R}^n, two vectors \mathbf{u} and \mathbf{v} are equal if and only if they have the same dimension and their corresponding components are equal, that is, u_i = v_i for every i = 1, 2, \dots, n. This component-wise equality ensures that the vectors represent the same directed quantity in the space. Geometrically, equal vectors are represented by arrows that have identical length (magnitude) and direction, though they may originate from different points in space. This interpretation aligns with the free vector concept, where position is irrelevant and only the displacement matters. Two vectors \mathbf{u} and \mathbf{v} are parallel if one is a scalar multiple of the other, i.e., \mathbf{u} = k \mathbf{v} for some nonzero scalar k \in \mathbb{R}. If k > 0, the vectors point in the same direction; if k < 0, they are antiparallel, pointing in opposite directions. Geometrically, parallel vectors point in the same or opposite directions and can be depicted as arrows that are parallel, regardless of their positions in space, though they may not share the same magnitude. In three-dimensional space, two nonzero vectors are parallel if and only if their cross product is the zero vector, \mathbf{u} \times \mathbf{v} = \mathbf{0}. The cross product \mathbf{u} \times \mathbf{v} produces a vector perpendicular to the plane spanned by \mathbf{u} and \mathbf{v}, with magnitude \|\mathbf{u}\| \|\mathbf{v}\| \sin \theta, where \theta is the angle between them; this magnitude is zero precisely when \theta = 0^\circ or 180^\circ, confirming parallelism or antiparallelism.
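
A minimal Python sketch (illustrative vectors only) makes the equality and parallelism tests concrete, using the component form of the cross product given later in the operations section.

```python
# Minimal sketch: equality is component-wise; in R^3 the cross product is the
# zero vector exactly when the two vectors are parallel.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

u = (1.0, 2.0, 3.0)
v = (2.0, 4.0, 6.0)      # v = 2u, so u and v are parallel
w = (0.0, 1.0, 0.0)

print(u == v)            # False: equality requires identical components
print(cross(u, v))       # (0.0, 0.0, 0.0): parallel vectors
print(cross(u, w))       # (-3.0, 0.0, 1.0): not parallel
```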

Magnitude and unit vectors

The magnitude, or Euclidean norm, of a vector \mathbf{v} = (v_1, v_2, \dots, v_n) in \mathbb{R}^n is defined as \|\mathbf{v}\| = \sqrt{\sum_{i=1}^n v_i^2}. This definition arises from the Pythagorean theorem applied iteratively to the components of the vector. In two dimensions, for \mathbf{v} = (v_1, v_2), the magnitude is the length of the hypotenuse of a right triangle with legs |v_1| and |v_2|, given by \sqrt{v_1^2 + v_2^2}. Extending to higher dimensions, the norm is computed by first finding the magnitude in the subspace of the first n-1 components, say \sqrt{\sum_{i=1}^{n-1} v_i^2}, and then applying the Pythagorean theorem again with the nth component: \|\mathbf{v}\| = \sqrt{\left( \sqrt{\sum_{i=1}^{n-1} v_i^2} \right)^2 + v_n^2} = \sqrt{\sum_{i=1}^n v_i^2}. Equivalently, the magnitude can be expressed using the dot product as \|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}}, where the dot product provides the basis for this norm in Euclidean space. A unit vector, or normalized vector, is obtained by dividing a nonzero vector by its magnitude: \hat{\mathbf{v}} = \frac{\mathbf{v}}{\|\mathbf{v}\|}, which has magnitude 1 and preserves the direction of \mathbf{v}. This normalization process isolates the directional aspect of the vector, making unit vectors useful for indicating direction without scaling information. The zero vector \mathbf{0} = (0, 0, \dots, 0) has \|\mathbf{0}\| = 0 and no defined direction, as it represents a point with no extent. It serves as the additive identity in the vector space, satisfying \mathbf{0} + \mathbf{v} = \mathbf{v} and \mathbf{v} + \mathbf{0} = \mathbf{v} for any vector \mathbf{v}, and is unique in this role. Unlike nonzero vectors, the zero vector cannot be normalized, as division by its magnitude is undefined.
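
The norm and normalization are straightforward to compute; the following Python sketch (illustrative helper functions) also shows why the zero vector must be excluded.

```python
# Minimal sketch of the Euclidean norm and normalization; the zero vector is
# rejected because it has no direction to preserve.
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    length = norm(v)
    if length == 0.0:
        raise ValueError("the zero vector cannot be normalized")
    return tuple(x / length for x in v)

v = (3.0, 4.0)
print(norm(v))             # 5.0
print(normalize(v))        # (0.6, 0.8), a unit vector in the direction of v
print(norm(normalize(v)))  # 1.0 (up to rounding)
```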

Operations

Addition, subtraction, and scalar multiplication

In Euclidean vector spaces, addition of two vectors \mathbf{u} and \mathbf{v} is defined component-wise in Cartesian coordinates: if \mathbf{u} = (u_1, u_2, \dots, u_n) and \mathbf{v} = (v_1, v_2, \dots, v_n), then \mathbf{u} + \mathbf{v} = (u_1 + v_1, u_2 + v_2, \dots, u_n + v_n). This operation satisfies commutativity, \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}, and associativity, (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}), for any vectors \mathbf{u}, \mathbf{v}, \mathbf{w}. Geometrically, vector addition follows the parallelogram rule: the sum \mathbf{u} + \mathbf{v} is the diagonal of the parallelogram formed by \mathbf{u} and \mathbf{v} as adjacent sides. Equivalently, the triangle (head-to-tail) rule constructs the sum by attaching the tail of \mathbf{v} to the head of \mathbf{u}, with \mathbf{u} + \mathbf{v} extending from the tail of \mathbf{u} to the head of \mathbf{v}. Vector subtraction is derived from addition via the additive inverse: \mathbf{u} - \mathbf{v} = \mathbf{u} + (-\mathbf{v}), where the opposite vector -\mathbf{v} points in the reverse direction with the same magnitude as \mathbf{v}. Component-wise, -\mathbf{v} = (-v_1, -v_2, \dots, -v_n), obtained by multiplying \mathbf{v} by the scalar -1. This definition preserves the geometric interpretation, allowing subtraction to be visualized as addition of the reversed vector using the parallelogram or triangle rules. Scalar multiplication scales a vector \mathbf{v} by a real number k: k\mathbf{v} = (k v_1, k v_2, \dots, k v_n), which stretches or compresses the vector while preserving its direction (reversing it if k < 0). Key properties include distributivity over vector addition, k(\mathbf{u} + \mathbf{v}) = k\mathbf{u} + k\mathbf{v}, and over scalar addition, (k + m)\mathbf{v} = k\mathbf{v} + m\mathbf{v}, along with associativity, k(m\mathbf{v}) = (km)\mathbf{v}. These operations collectively form the foundation of the vector space axioms for Euclidean spaces, enabling linear combinations such as a\mathbf{u} + b\mathbf{v}.
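
These component-wise definitions and their algebraic properties can be spot-checked with a short Python sketch (illustrative helpers and values).

```python
# Minimal sketch of the component-wise operations described above.
def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(k, v):
    return tuple(k * a for a in v)

def subtract(u, v):
    return add(u, scale(-1.0, v))   # u - v = u + (-v)

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 0.5)

print(add(u, v) == add(v, u))                    # True: commutativity
print(add(add(u, v), w) == add(u, add(v, w)))    # True: associativity
print(scale(2.0, add(u, v)) == add(scale(2.0, u), scale(2.0, v)))  # True: distributivity
print(subtract(u, v))                            # (-2.0, 3.0)
```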

Dot product and orthogonality

The dot product, also known as the scalar product or inner product, of two Euclidean vectors \mathbf{u} = (u_1, u_2, \dots, u_n) and \mathbf{v} = (v_1, v_2, \dots, v_n) in \mathbb{R}^n is defined algebraically as the sum of the products of their corresponding components: \mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i. This operation yields a scalar value and is fundamental to the geometry of Euclidean space. Geometrically, the dot product is expressed in terms of the magnitudes of the vectors and the angle \theta between them (where 0 \leq \theta \leq \pi): \mathbf{u} \cdot \mathbf{v} = \|\mathbf{u}\| \|\mathbf{v}\| \cos \theta, with the magnitude \|\mathbf{u}\| defined as the square root of \mathbf{u} \cdot \mathbf{u}. This formulation arises from considering the projection of one vector onto the other, scaled by the magnitude of the second vector.

To derive the equivalence between the algebraic and geometric definitions, start with the law of cosines in the triangle formed by \mathbf{u}, \mathbf{v}, and \mathbf{u} - \mathbf{v}. The squared magnitude of \mathbf{u} - \mathbf{v} is \|\mathbf{u} - \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2 - 2 \|\mathbf{u}\| \|\mathbf{v}\| \cos \theta. Expanding algebraically gives: \|\mathbf{u} - \mathbf{v}\|^2 = (\mathbf{u} - \mathbf{v}) \cdot (\mathbf{u} - \mathbf{v}) = \mathbf{u} \cdot \mathbf{u} - 2 \mathbf{u} \cdot \mathbf{v} + \mathbf{v} \cdot \mathbf{v} = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2 - 2 \mathbf{u} \cdot \mathbf{v}. Equating the two expressions yields \mathbf{u} \cdot \mathbf{v} = \|\mathbf{u}\| \|\mathbf{v}\| \cos \theta, confirming the definitions are consistent for Euclidean vectors.

The dot product satisfies several key properties that underpin its utility in vector analysis. It is commutative, meaning \mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u}, and bilinear, linear in each argument separately: (\mathbf{u} + \mathbf{w}) \cdot \mathbf{v} = \mathbf{u} \cdot \mathbf{v} + \mathbf{w} \cdot \mathbf{v} and (c \mathbf{u}) \cdot \mathbf{v} = c (\mathbf{u} \cdot \mathbf{v}) for any scalar c. Additionally, it is positive definite: \mathbf{u} \cdot \mathbf{u} \geq 0, with equality if and only if \mathbf{u} = \mathbf{0}. These properties make the dot product an inner product on the Euclidean space \mathbb{R}^n.

Two nonzero vectors \mathbf{u} and \mathbf{v} are orthogonal (perpendicular) if and only if their dot product is zero: \mathbf{u} \cdot \mathbf{v} = 0. This condition follows directly from the geometric definition, as \cos \theta = 0 when \theta = \pi/2. Orthogonality is a cornerstone of Euclidean geometry, enabling decompositions into perpendicular components and forming the basis for orthogonal bases in vector spaces.

In applications, the dot product determines the angle between two vectors via \theta = \arccos \left( \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\| \|\mathbf{v}\|} \right), providing a measure of their directional similarity independent of magnitude. It also computes the scalar projection of \mathbf{u} onto \mathbf{v}, given by \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{v}\|}, and the vector projection: \operatorname{proj}_{\mathbf{v}} \mathbf{u} = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{v}\|^2} \mathbf{v}. This projection represents the component of \mathbf{u} in the direction of \mathbf{v}, essential for resolving vectors into orthogonal parts.
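
The angle formula and the projection formulas can be exercised directly in a brief Python sketch (illustrative helpers and values).

```python
# Minimal sketch of the dot product, the angle formula, and the vector
# projection of u onto v, using the definitions given above.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def angle(u, v):
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

def project(u, v):
    """Vector projection of u onto v: (u.v / |v|^2) v."""
    k = dot(u, v) / dot(v, v)
    return tuple(k * b for b in v)

u, v = (1.0, 1.0), (1.0, 0.0)
print(dot(u, v))                    # 1.0
print(math.degrees(angle(u, v)))    # ~45.0 degrees
print(project(u, v))                # (1.0, 0.0), the component of u along v
print(dot((1.0, 0.0), (0.0, 2.0)))  # 0.0: orthogonal vectors
```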

Cross product and triple products

In three-dimensional Euclidean space, the cross product of two vectors \mathbf{u} and \mathbf{v} is defined as a vector \mathbf{u} \times \mathbf{v} that is perpendicular to both \mathbf{u} and \mathbf{v}, with a magnitude equal to the area of the parallelogram they span. The direction of \mathbf{u} \times \mathbf{v} follows the right-hand rule: pointing the fingers of the right hand from \mathbf{u} toward \mathbf{v} aligns the thumb with the direction of the resulting vector. The magnitude satisfies \|\mathbf{u} \times \mathbf{v}\| = \|\mathbf{u}\| \|\mathbf{v}\| \sin \theta, where \theta is the angle between \mathbf{u} and \mathbf{v}. In Cartesian coordinates, with \mathbf{u} = (u_1, u_2, u_3) and \mathbf{v} = (v_1, v_2, v_3), the cross product is given by the determinant formula: \mathbf{u} \times \mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} = (u_2 v_3 - u_3 v_2) \mathbf{i} - (u_1 v_3 - u_3 v_1) \mathbf{j} + (u_1 v_2 - u_2 v_1) \mathbf{k}. This yields the component form \mathbf{u} \times \mathbf{v} = (u_2 v_3 - u_3 v_2, u_3 v_1 - u_1 v_3, u_1 v_2 - u_2 v_1).

Key properties of the cross product include anticommutativity, \mathbf{u} \times \mathbf{v} = -\mathbf{v} \times \mathbf{u}, which follows from the determinant representation and implies that the operation reverses direction when the operands are swapped. The result is always orthogonal to both input vectors, as their dot products with \mathbf{u} \times \mathbf{v} vanish: \mathbf{u} \cdot (\mathbf{u} \times \mathbf{v}) = 0 and \mathbf{v} \cdot (\mathbf{u} \times \mathbf{v}) = 0. Additionally, \mathbf{u} \times \mathbf{v} = \mathbf{0} if and only if \mathbf{u} and \mathbf{v} are parallel (or at least one is the zero vector), since \sin \theta = 0 in that case.

The scalar triple product extends the cross product to three vectors \mathbf{u}, \mathbf{v}, and \mathbf{w}, defined as \mathbf{u} \cdot (\mathbf{v} \times \mathbf{w}). This equals the determinant of the 3×3 matrix whose rows (or columns) are the components of \mathbf{u}, \mathbf{v}, and \mathbf{w}: \mathbf{u} \cdot (\mathbf{v} \times \mathbf{w}) = \det \begin{pmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{pmatrix}. Geometrically, \mathbf{u} \cdot (\mathbf{v} \times \mathbf{w}) represents the signed volume of the parallelepiped spanned by the three vectors, with the sign indicating orientation relative to the standard right-handed basis and the absolute value |\mathbf{u} \cdot (\mathbf{v} \times \mathbf{w})| giving the volume itself. The scalar triple product is invariant under cyclic permutations of the vectors but changes sign under odd permutations.
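
A short Python sketch (using the standard basis as illustrative input) demonstrates the component formula, anticommutativity, orthogonality of the result, and the sign behavior of the scalar triple product.

```python
# Minimal sketch of the cross product and the scalar triple product; the triple
# product of the standard basis vectors gives the (signed) unit volume.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triple(u, v, w):
    """Scalar triple product u . (v x w) = signed volume of the parallelepiped."""
    return dot(u, cross(v, w))

i, j, k = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

print(cross(i, j))                       # (0.0, 0.0, 1.0) = k (right-hand rule)
print(cross(j, i))                       # (0.0, 0.0, -1.0): anticommutativity
print(dot(i, cross(i, j)))               # 0.0: result is orthogonal to the inputs
print(triple(i, j, k), triple(j, i, k))  # 1.0 -1.0: sign flips under an odd permutation
```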

Applications in Physics

Kinematics: position, velocity, acceleration

In kinematics, the position of a particle in space is described by the position vector \mathbf{r}(t), which extends from a chosen origin to the particle's location at time t. This vector encapsulates the full spatial coordinates of the particle relative to the origin, allowing for a complete geometric representation of its placement in space. The velocity vector \mathbf{v}(t) quantifies the particle's rate of change of position. The average velocity over a time interval \Delta t is defined as the change in the position vector divided by the time elapsed, \mathbf{v}_\text{avg} = \frac{\Delta \mathbf{r}}{\Delta t}, which represents the straight-line displacement per unit time. In the limit as \Delta t approaches zero, this yields the instantaneous velocity, \mathbf{v}(t) = \frac{d\mathbf{r}}{dt}, the first time derivative of the position vector, capturing the particle's motion at a precise instant. For constant velocity, the displacement simplifies to \Delta \mathbf{r} = \mathbf{v} \Delta t.

The acceleration vector \mathbf{a}(t) measures the rate of change of velocity, defined as the time derivative \mathbf{a}(t) = \frac{d\mathbf{v}}{dt} = \frac{d^2 \mathbf{r}}{dt^2}. This second derivative of the position vector describes how the particle's motion evolves, with its magnitude and direction indicating changes in speed or direction. In curvilinear motion, acceleration decomposes into tangential and normal components: the tangential component a_T = \frac{dv}{dt} aligns with the velocity and affects speed, while the normal component a_N = \frac{v^2}{\rho} (where \rho is the radius of curvature) points toward the center of curvature and governs directional change. Vector differentiation in kinematics follows standard rules, such as the product rule for a scalar f(t) times a vector \mathbf{u}(t): \frac{d}{dt} [f(t) \mathbf{u}(t)] = f'(t) \mathbf{u}(t) + f(t) \frac{d\mathbf{u}}{dt}, enabling computation of velocities and accelerations for composite expressions like \mathbf{r}(t) = f(t) \hat{\mathbf{i}} + g(t) \hat{\mathbf{j}} + h(t) \hat{\mathbf{k}}.
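
As a numerical illustration (the trajectory below is made up, not drawn from a cited source), velocity and acceleration can be approximated from a position function by finite differences, mirroring the derivative definitions above.

```python
# Small numerical sketch: velocity and acceleration approximated as first and
# second time derivatives of the position vector r(t) via central differences.
def r(t):
    return (3.0 * t, -4.0 * t**2, 2.0)   # position vector r(t)

def derivative(f, t, h=1e-5):
    return tuple((a - b) / (2 * h) for a, b in zip(f(t + h), f(t - h)))

t = 2.0
v = derivative(r, t)                           # velocity ~ dr/dt = (3, -8t, 0)
a = derivative(lambda s: derivative(r, s), t)  # acceleration ~ d^2r/dt^2 = (0, -8, 0)

print(v)  # approximately (3.0, -16.0, 0.0)
print(a)  # approximately (0.0, -8.0, 0.0)
```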

Dynamics: force, momentum, energy

In Newtonian mechanics, the force \mathbf{F} acting on a body of mass m is defined as the vector quantity that causes the body's acceleration \mathbf{a}, according to Newton's second law: \mathbf{F} = m \mathbf{a}. This equation refers to the net force, which is the vector sum \mathbf{F}_{\text{net}} = \sum \mathbf{F}_i of all individual forces \mathbf{F}_i applied to the body. In vector form, Newton's laws describe the motion of point particles or rigid bodies under these forces, with the first law stating that if \mathbf{F}_{\text{net}} = \mathbf{0}, then \mathbf{a} = \mathbf{0} and the body moves with constant velocity (inertial motion); the second as above; and the third asserting that forces between interacting bodies are equal in magnitude and opposite in direction, \mathbf{F}_{12} = -\mathbf{F}_{21}. These formulations enable the analysis of motion in three dimensions by resolving forces into components along orthogonal axes.

Linear momentum \mathbf{p} of a body is defined as the product of mass and velocity, \mathbf{p} = m \mathbf{v}, where \mathbf{v} is the velocity vector. The time derivative of momentum gives the net force, \mathbf{F}_{\text{net}} = \frac{d\mathbf{p}}{dt}, which generalizes Newton's second law to variable-mass systems, though in standard cases with constant mass it reduces to m \mathbf{a}. In isolated systems with no external forces (\mathbf{F}_{\text{net}} = \mathbf{0}), the total momentum \sum \mathbf{p}_i is conserved, a direct consequence of Newton's third law ensuring internal forces cancel in pairs. This applies to collisions and interactions, allowing prediction of post-event velocities from initial conditions. The impulse \mathbf{J} imparted by a variable force over the time interval [t_1, t_2] is the integral \mathbf{J} = \int_{t_1}^{t_2} \mathbf{F}(t) \, dt, which equals the change in momentum \Delta \mathbf{p} = \mathbf{p}_f - \mathbf{p}_i, linking instantaneous forces to finite changes in motion.

Work W done by a constant force \mathbf{F} on a body displaced by \Delta \mathbf{r} is the scalar product W = \mathbf{F} \cdot \Delta \mathbf{r}, representing energy transfer along the displacement direction, with the dot product capturing the force's effective component along the motion. For variable forces, work generalizes to the line integral W = \int \mathbf{F} \cdot d\mathbf{r}. The work-energy theorem states that the net work done by all forces equals the change in kinetic energy \Delta K, where the kinetic energy is K = \frac{1}{2} m \|\mathbf{v}\|^2 = \frac{1}{2} m v^2. This connects dynamics to energy methods: when non-conservative forces do work, the mechanical energy of the system changes accordingly, but the work-energy theorem itself continues to hold for translational motion.
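
A small worked Python example (with made-up mass, force, and initial velocity) illustrates these vector relations and confirms the work-energy theorem numerically for a constant force.

```python
# Worked example: net force via Newton's second law, work as a dot product,
# and the work-energy theorem W = delta K for a constant force.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

m = 2.0                         # mass (kg)
F = (4.0, 0.0, -2.0)            # constant net force (N)
a = tuple(f / m for f in F)     # acceleration from F = m a

v0 = (1.0, 0.0, 0.0)            # initial velocity (m/s)
t = 3.0                         # duration (s)

v1 = tuple(v + ai * t for v, ai in zip(v0, a))               # v = v0 + a t
dr = tuple(v * t + 0.5 * ai * t**2 for v, ai in zip(v0, a))  # displacement

work = dot(F, dr)
delta_K = 0.5 * m * dot(v1, v1) - 0.5 * m * dot(v0, v0)
print(work, delta_K)            # both 57.0: net work equals the change in kinetic energy
```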

Advanced Topics

Affine versus Euclidean vectors

Euclidean vectors, also known as free vectors, are elements of a vector space and are translation-invariant, meaning their properties remain unchanged under translation. These vectors support standard operations such as addition and scalar multiplication, forming a complete algebraic structure without reference to a specific origin. In contrast, affine vectors, often exemplified by position vectors, are associated with specific points in an affine space, where the absence of a distinguished origin prevents direct addition of two such vectors, as such an operation would lack geometric meaning.

The key distinction lies in the underlying structures: Euclidean vectors operate within a vector space, allowing full linear combinations, while affine vectors inhabit an affine space, which is essentially a vector space that has "forgotten" its origin, emphasizing parallelism and ratios but not absolute positions. Affine combinations are the appropriate operation here, defined as a linear combination of points \sum_{i=1}^n \alpha_i \mathbf{p}_i where \sum_{i=1}^n \alpha_i = 1; this preserves the barycenter (weighted average position) of the points involved. Such combinations enable expressions like midpoints or weighted averages without shifting the overall location.

In physics applications, this differentiation is crucial: displacement vectors, representing changes in position, behave as Euclidean vectors and can be freely added or scaled, reflecting translation invariance. Position vectors, however, are affine, tied to coordinate origins, and their differences yield displacements; adding two positions directly is invalid, as it would imply an arbitrary, origin-dependent result. This framework ensures consistent modeling of motion and forces in classical mechanics.
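
The distinction can be made concrete in a short Python sketch (hypothetical helper functions): point differences produce free displacement vectors, while combinations of points are only meaningful when the weights sum to one.

```python
# Minimal sketch: differences of points give (Euclidean) displacement vectors,
# while points themselves only admit affine combinations whose weights sum to 1.
def displacement(p, q):
    """Displacement vector from point p to point q."""
    return tuple(b - a for a, b in zip(p, q))

def affine_combination(points, weights):
    assert abs(sum(weights) - 1.0) < 1e-12, "affine weights must sum to 1"
    n = len(points[0])
    return tuple(sum(w * p[i] for w, p in zip(weights, points)) for i in range(n))

p, q = (1.0, 2.0), (5.0, 6.0)
print(displacement(p, q))                      # (4.0, 4.0): a free vector
print(affine_combination([p, q], [0.5, 0.5]))  # (3.0, 4.0): the midpoint
# Adding p + q directly would depend on the chosen origin and has no affine meaning.
```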

Pseudovectors and transformations

In three-dimensional Euclidean space, pseudovectors, also known as axial vectors, are quantities that behave like ordinary (polar) vectors under proper rotations but acquire an additional sign change under improper orthogonal transformations, such as reflections or spatial inversions. This distinction arises because pseudovectors are typically constructed via the cross product of two polar (true) vectors, which introduces a handedness dependent on the orientation of the coordinate system. Under a general orthogonal transformation represented by a matrix O with \det(O) = \pm 1, a true vector \mathbf{v} transforms as \mathbf{v}' = O \mathbf{v}, while a pseudovector \mathbf{a} transforms as \mathbf{a}' = \det(O) \, O \mathbf{a}. For proper rotations, where \det(O) = 1, both types transform identically; for improper transformations, where \det(O) = -1, pseudovectors effectively flip sign relative to what a true vector would do.

A canonical example of a pseudovector is angular momentum \mathbf{L}, defined as the cross product \mathbf{L} = \mathbf{r} \times \mathbf{p}, where \mathbf{r} is the position vector and \mathbf{p} is the linear momentum, both polar vectors. Under spatial inversion (the parity transformation, \mathbf{r} \to -\mathbf{r}, \mathbf{p} \to -\mathbf{p}), \mathbf{L} remains unchanged because (-\mathbf{r}) \times (-\mathbf{p}) = \mathbf{r} \times \mathbf{p}, illustrating its even parity compared to the odd parity of polar vectors. Similarly, torque \boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}, with \mathbf{F} the force (a polar vector), qualifies as a pseudovector and shares this property, reflecting its dependence on the right-hand rule for orientation.

The magnetic field \mathbf{B} provides another physical instance of a pseudovector, evident in the Lorentz force law \mathbf{F} = q (\mathbf{E} + \mathbf{v} \times \mathbf{B}), where \mathbf{E} (the electric field) and \mathbf{v} (the velocity) are polar vectors. The cross-product structure ensures \mathbf{B} transforms with even parity under inversion, remaining invariant while polar vectors reverse, which is crucial for maintaining the form-invariance of the force law under orthogonal transformations. This behavior underscores the role of the determinant in distinguishing pseudovectors, as the sign of \det(O) encodes the character (orientation-preserving or reversing) of the transformation, ensuring consistent physical interpretations in oriented spaces.
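
The parity behavior can be verified numerically; in the Python sketch below (illustrative vectors), applying the inversion O = -I to both factors of a cross product leaves the result unchanged, while a polar vector reverses.

```python
# Minimal sketch of the parity behavior described above: under spatial
# inversion O = -I (det O = -1), polar vectors flip sign while the cross
# product of two polar vectors (a pseudovector such as angular momentum) is unchanged.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def invert(v):
    """Apply the parity transformation O = -I to a (polar) vector."""
    return tuple(-x for x in v)

r = (1.0, 0.0, 0.0)   # position (polar vector)
p = (0.0, 2.0, 0.0)   # momentum (polar vector)

L = cross(r, p)                         # angular momentum L = r x p
L_inverted = cross(invert(r), invert(p))

print(L, L_inverted)  # (0.0, 0.0, 2.0) (0.0, 0.0, 2.0): L is invariant under inversion
print(invert(r))      # (-1.0, -0.0, -0.0): a polar vector reverses
```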

Generalizations to higher dimensions

Euclidean vectors generalize naturally to n dimensions, where a vector \mathbf{v} belongs to the Euclidean space \mathbb{R}^n, consisting of ordered n-tuples of real numbers representing components along n orthogonal axes. This extension preserves the core structure of addition, scalar multiplication, and the norm, defined as \|\mathbf{v}\| = \sqrt{\sum_{i=1}^n v_i^2}, which induces a positive-definite metric on the space. The dot product, or inner product, also generalizes directly as \mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i, enabling concepts like orthogonality and angles in higher dimensions via the relation \cos \theta = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\| \|\mathbf{v}\|}. In n-dimensional Euclidean space, the vector space \mathbb{R}^n equipped with this inner product forms a real inner product space, where properties such as the Cauchy-Schwarz inequality |\mathbf{u} \cdot \mathbf{v}| \leq \|\mathbf{u}\| \|\mathbf{v}\| hold, ensuring the geometry remains consistent with lower-dimensional cases.

Unlike in three dimensions, there is no natural product that yields a vector perpendicular to two given vectors in dimensions greater than three, as the Hodge dual of the wedge product of two vectors, which yields the cross product in \mathbb{R}^3, is an (n-2)-vector rather than a vector when n > 3. Instead, generalizations use the exterior algebra, where the wedge product \mathbf{u} \wedge \mathbf{v} forms a bivector in the second exterior power \bigwedge^2 \mathbb{R}^n, capturing oriented area without reducing to a single vector.

These higher-dimensional Euclidean vectors find applications in data science, where feature vectors in \mathbb{R}^n represent data points with n attributes, facilitating distance-based analyses like clustering via Euclidean distance. In physics, particularly special relativity, 4-vectors extend the concept to four dimensions but adopt a semi-Euclidean (Minkowski) metric with signature (+,-,-,-), differing from the positive-definite Euclidean metric while still using vector notation for position, momentum, and other quantities.
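
A brief Python sketch (illustrative five-dimensional vectors) shows that the norm, the angle formula, and the Cauchy-Schwarz inequality carry over unchanged to \mathbb{R}^n.

```python
# Minimal sketch of the n-dimensional generalizations: the Euclidean norm, the
# angle via the inner product, and a check of the Cauchy-Schwarz inequality in R^5.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

u = (1.0, 0.0, 2.0, -1.0, 3.0)
v = (2.0, 1.0, 0.0, 4.0, -1.0)

cos_theta = dot(u, v) / (norm(u) * norm(v))
print(norm(u))                              # 3.872983... (= sqrt(15))
print(abs(dot(u, v)) <= norm(u) * norm(v))  # True: Cauchy-Schwarz holds
print(math.degrees(math.acos(cos_theta)))   # angle between u and v, ~106 degrees
```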
