Vector algebra
Vector algebra is a branch of mathematics that studies vectors—mathematical entities characterized by both magnitude and direction—and the operations that can be performed on them, including vector addition, scalar multiplication, the dot product (which yields a scalar), and the cross product (which produces another vector).[1][2] These operations enable the representation and analysis of physical quantities like force, velocity, and displacement in Euclidean space, typically in two or three dimensions.[3]

The foundations of vector algebra trace back to the mid-19th century, when Irish mathematician William Rowan Hamilton developed quaternions in 1843 as a four-dimensional extension of complex numbers to handle three-dimensional rotations, laying the groundwork for vector concepts.[4] Independently, in the 1880s, American physicist J. Willard Gibbs and British engineer Oliver Heaviside formulated a more practical, three-dimensional vector system by separating the scalar and vector parts of quaternions, introducing notations such as the dot and cross products that are standard today.[5][6] This "vector analysis" gained prominence through its application in James Clerk Maxwell's electromagnetic theory, where Heaviside reformulated Maxwell's equations using vector notation.[7]

Key properties of vector algebra include the distributivity of scalar multiplication over vector addition, the commutativity of addition, and associativity, which mirror the properties of scalar algebra while accounting for directionality.[8] The dot product, defined as \mathbf{a} \cdot \mathbf{b} = |\mathbf{a}| |\mathbf{b}| \cos \theta, measures the projection of one vector onto another and is crucial for orthogonality checks.[9] The cross product, \mathbf{a} \times \mathbf{b} = |\mathbf{a}| |\mathbf{b}| \sin \theta \, \mathbf{n} (where \mathbf{n} is the unit normal), generates a vector perpendicular to both inputs, essential for computing areas and torques.[10]

In applications, vector algebra underpins classical mechanics for resolving forces and motions, electromagnetism for field descriptions, and engineering fields such as fluid dynamics and structural analysis.[11] It also extends to computer graphics for transformations and simulations, and to more advanced areas such as linear algebra, where vectors form the basis for matrices and eigenvalues.[12]

Foundations
Definition of vectors
In the context of vector algebra, vectors are mathematical objects that represent quantities possessing both magnitude and direction, typically visualized as directed line segments or arrows within two-dimensional (ℝ²) or three-dimensional (ℝ³) Euclidean space. These entities arise from the foundational principles of real numbers and Euclidean geometry, providing a framework for describing spatial relationships without delving into coordinate systems.[13]

Vectors can be classified as free vectors or bound vectors. Free vectors are defined solely by their magnitude and direction, independent of any specific starting point, allowing them to be translated throughout space while preserving their properties.[14] In contrast, bound vectors, often called position vectors, are tied to a particular point of application, such as the origin, making their location in space integral to their definition.[15] Two vectors are considered equal if they share the same magnitude and direction, regardless of their positions in space; this equivalence underscores the translation invariance of free vectors.[13]

Historically, the concept of vectors emerged from geometric intuitions, with significant influence from William Rowan Hamilton's 1843 introduction of quaternions as a means to extend complex numbers to three dimensions.[16] In the late 19th century, Oliver Heaviside and Josiah Willard Gibbs formalized vector algebra by simplifying quaternion-based methods into a more practical system focused on three-dimensional Euclidean vectors, facilitating applications in physics such as electromagnetism.[17] In abstract mathematics, vectors inhabit vector spaces, though the present discussion centers on their concrete realization in Euclidean settings.

Basic properties
In vector algebra, vectors in Euclidean space form an abelian group under the operation of addition. This structure ensures that addition is associative, meaning that for any vectors \mathbf{u}, \mathbf{v}, and \mathbf{w}, (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}); commutative, so \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}; and possesses an identity element, the zero vector \mathbf{0}, satisfying \mathbf{u} + \mathbf{0} = \mathbf{u} for any \mathbf{u}. Additionally, every vector has a unique additive inverse -\mathbf{u}, such that \mathbf{u} + (-\mathbf{u}) = \mathbf{0}. These properties establish a foundational algebraic framework for vectors over the real numbers, analogous to the group axioms in abstract algebra but tailored to the geometric context of directed quantities.[18][19]

Scalar multiplication interacts with vector addition through distributivity: for any scalar a and vectors \mathbf{u}, \mathbf{v}, a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v}. This, combined with the other vector space axioms (distributivity over scalar addition, (a + b)\mathbf{u} = a\mathbf{u} + b\mathbf{u}; compatibility of scalar multiplication, a(b\mathbf{u}) = (ab)\mathbf{u} for scalars a, b; and the requirement that the multiplicative identity scalar 1 satisfies 1 \cdot \mathbf{u} = \mathbf{u}), defines the linearity essential to Euclidean vector spaces. These axioms ensure that the set of vectors behaves consistently under linear combinations, providing the rigorous basis for operations in physics and engineering applications. The zero vector is uniquely the additive identity, and negative vectors are uniquely determined as the inverses, preventing ambiguities in algebraic manipulations.[20][21][22]

A key distinction arises between geometric and algebraic interpretations of vectors, highlighting a form of non-uniqueness in the former. Geometrically, vectors are directed line segments (arrows), and two arrows represent the same vector if they are congruent via translation (parallel transport), regardless of starting point. Algebraically, vectors are equivalence classes of such arrows or, in coordinates, ordered tuples in \mathbb{R}^n, where position is fixed relative to an origin. This equivalence resolves potential ambiguities, ensuring that properties like the group structure hold invariantly across representations.[23][24]
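These axioms can be spot-checked numerically on coordinate vectors. The following is a minimal sketch using NumPy (which the article returns to under coordinate representations); the particular vectors and scalars are arbitrary illustrative values, not part of the formal treatment.

```python
import numpy as np

u, v, w = np.array([1.0, 2.0, 3.0]), np.array([-4.0, 0.5, 2.0]), np.array([0.0, 7.0, -1.0])
a, b = 2.5, -3.0
zero = np.zeros(3)

# Abelian group axioms for addition
assert np.allclose((u + v) + w, u + (v + w))    # associativity
assert np.allclose(u + v, v + u)                # commutativity
assert np.allclose(u + zero, u)                 # additive identity
assert np.allclose(u + (-u), zero)              # additive inverse

# Scalar multiplication axioms
assert np.allclose(a * (u + v), a * u + a * v)  # distributivity over vector addition
assert np.allclose((a + b) * u, a * u + b * u)  # distributivity over scalar addition
assert np.allclose(a * (b * u), (a * b) * u)    # compatibility of scalar multiplication
assert np.allclose(1 * u, u)                    # multiplicative identity
```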
Arithmetic Operations

Addition and subtraction
Vector addition is a fundamental operation in vector algebra, geometrically interpreted through the parallelogram law. To add two vectors \vec{a} and \vec{b}, draw them from a common tail as two adjacent sides of a parallelogram; the resultant vector \vec{c} = \vec{a} + \vec{b} is the diagonal extending from that common tail. Equivalently, by the head-to-tail (triangle) rule, placing the tail of \vec{b} at the head of \vec{a} yields the same resultant, running from the tail of \vec{a} to the head of \vec{b}. This construction preserves both magnitude and direction, allowing visualization of how vectors combine to produce a net displacement or force.[24]

The parallelogram law directly implies the commutativity of vector addition, \vec{a} + \vec{b} = \vec{b} + \vec{a}, since swapping the vectors merely relabels the sides of the parallelogram without altering the diagonal. Associativity holds as well, stated as (\vec{a} + \vec{b}) + \vec{c} = \vec{a} + (\vec{b} + \vec{c}), enabling the grouping of multiple vectors in any order for sequential addition, which is essential for summing chains of displacements or forces. These properties establish vector addition as an abelian group operation under the axioms of vector spaces.[25]

Vector subtraction is defined as the addition of the negative vector, so \vec{a} - \vec{b} = \vec{a} + (-\vec{b}), where -\vec{b} has the same magnitude as \vec{b} but opposite direction. Geometrically, when \vec{a} and \vec{b} are drawn from a common tail, the difference \vec{a} - \vec{b} points from the head of \vec{b} to the head of \vec{a}. This operation is crucial for finding differences in position or resolving components in opposing directions.[26]

A key inequality arising from vector addition is the triangle inequality, which states that the magnitude (or norm) of the sum satisfies \|\vec{a} + \vec{b}\| \leq \|\vec{a}\| + \|\vec{b}\|, where the norm \|\vec{v}\| denotes the length of vector \vec{v}. This reflects the geometric fact that one side of a triangle is never longer than the sum of the other two sides, with equality when \vec{a} and \vec{b} are parallel and point in the same direction. The norm is introduced here only as a measure of vector length, with full details deferred to later sections.[27]

In practical applications, such as navigation, vector addition models displacement: a ship sailing 10 km east (\vec{d_1}) followed by 15 km northeast (\vec{d_2}) has a net displacement \vec{d} = \vec{d_1} + \vec{d_2}, found by the parallelogram law; by the triangle inequality, the magnitude of \vec{d} cannot exceed the 25 km actually travelled, and the straight line back along -\vec{d} is the shortest return path.[28]
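As an illustration of these rules, the sketch below recomputes the ship example numerically with NumPy; the decomposition of the northeast leg into equal east and north components (cos 45°, sin 45°) is an assumption made only for this example.

```python
import numpy as np

# Ship displacement example: 10 km east, then 15 km northeast (45 degrees from east).
d1 = np.array([10.0, 0.0])                                    # east
d2 = 15.0 * np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])  # northeast
d = d1 + d2                                                   # net displacement

# Triangle inequality: ||d1 + d2|| <= ||d1|| + ||d2||
assert np.linalg.norm(d) <= np.linalg.norm(d1) + np.linalg.norm(d2)

# Subtraction as addition of the negative: a - b == a + (-b)
assert np.allclose(d2 - d1, d2 + (-d1))

print(np.linalg.norm(d))  # length of the direct return path, about 23.2 km
```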
Scalar multiplication

Scalar multiplication is an operation in vector algebra that combines a scalar (a real number k \in \mathbb{R}) with a vector \vec{a} to produce a new vector k\vec{a}, which is parallel to \vec{a}.[9] This operation scales the vector while preserving its direction if k > 0, reversing the direction if k < 0, and resulting in the zero vector if k = 0.[9] Key properties of scalar multiplication include distributivity over vector addition, where k(\vec{a} + \vec{b}) = k\vec{a} + k\vec{b} for any vectors \vec{a}, \vec{b} and scalar k, and associativity with respect to scalar multiplication, where (km)\vec{a} = k(m\vec{a}) for scalars k, m and vector \vec{a}.[12] Additionally, multiplying the zero vector by any scalar yields the zero vector: k\vec{0} = \vec{0}.[12] The magnitude of the resulting vector satisfies \|k\vec{a}\| = |k| \|\vec{a}\|, meaning the length scales by the absolute value of the scalar, independent of its sign.[9] For the zero scalar specifically, 0 \cdot \vec{a} = \vec{0} for any vector \vec{a}, emphasizing that scaling by zero eliminates the vector entirely.[12] In physics, scalar multiplication often models scaling of quantities like velocity; for instance, if \vec{v} represents an object's velocity, then 2\vec{v} doubles the speed while maintaining the direction, as seen in analyses of motion where velocity components are proportionally adjusted.[29]
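A short numerical check of the magnitude and direction rules, again with NumPy and arbitrary illustrative values:

```python
import numpy as np

a = np.array([3.0, -4.0, 12.0])
k = -2.0

# Length scales by |k|: ||k a|| == |k| * ||a||
assert np.isclose(np.linalg.norm(k * a), abs(k) * np.linalg.norm(a))

# A negative scalar reverses direction: the corresponding unit vectors are opposite
assert np.allclose((k * a) / np.linalg.norm(k * a), -a / np.linalg.norm(a))
```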
Vector Products

Dot product
The dot product, also known as the scalar product or inner product, is a binary operation on two vectors that yields a scalar value, originally developed from William Rowan Hamilton's quaternion scalar part and refined by Josiah Willard Gibbs in his vector analysis framework.[5] Geometrically, for vectors \vec{a} and \vec{b}, it is defined as \vec{a} \cdot \vec{b} = \|\vec{a}\| \|\vec{b}\| \cos \theta, where \theta is the angle between the vectors and \|\cdot\| denotes the magnitude.[30] This formulation captures the projection of one vector onto the other, scaled by the magnitudes, and is positive when the angle is acute, zero when right, and negative when obtuse.[30]

In coordinate form, assuming Cartesian coordinates in three dimensions, the dot product is computed algebraically as \vec{a} \cdot \vec{b} = a_x b_x + a_y b_y + a_z b_z, where \vec{a} = (a_x, a_y, a_z) and \vec{b} = (b_x, b_y, b_z).[30] This component-wise summation provides a practical method for calculation and extends naturally to higher dimensions.[30]

The dot product exhibits several key algebraic properties that underpin its utility in vector algebra. It is commutative, satisfying \vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a}, and distributive over vector addition, so \vec{a} \cdot (\vec{b} + \vec{c}) = \vec{a} \cdot \vec{b} + \vec{a} \cdot \vec{c}.[30] It is also bilinear, meaning it is linear in each argument: (\alpha \vec{a} + \beta \vec{b}) \cdot \vec{c} = \alpha (\vec{a} \cdot \vec{c}) + \beta (\vec{b} \cdot \vec{c}), and similarly for the second argument, where \alpha, \beta are scalars.[30] Notably, the self-dot product equals the square of the magnitude: \vec{a} \cdot \vec{a} = \|\vec{a}\|^2.[30]

A dot product of zero indicates orthogonality: for non-zero vectors, \vec{a} \cdot \vec{b} = 0 if and only if the vectors are perpendicular, providing a criterion for checking perpendicularity in geometric configurations.[30] In physics, the dot product quantifies work as the product of force and displacement vectors, W = \vec{F} \cdot \vec{d}, representing only the component of force parallel to the displacement.[31] The dot product satisfies the Cauchy-Schwarz inequality, which states that |\vec{a} \cdot \vec{b}| \leq \|\vec{a}\| \|\vec{b}\|, with equality when the vectors are parallel; this bounds the projection and follows from the geometric definition since |\cos \theta| \leq 1.[32]
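The geometric and component definitions can be compared directly in code. The following minimal sketch uses NumPy's dot and norm routines with arbitrary example vectors.

```python
import numpy as np

a = np.array([2.0, -1.0, 3.0])
b = np.array([4.0, 0.0, -2.0])

dot = np.dot(a, b)                 # component form: a_x*b_x + a_y*b_y + a_z*b_z
assert np.isclose(dot, a @ b)      # the @ operator is equivalent for 1-D arrays

# Recover the angle from a . b = ||a|| ||b|| cos(theta)
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Cauchy-Schwarz: |a . b| <= ||a|| ||b||
assert abs(dot) <= np.linalg.norm(a) * np.linalg.norm(b)

# Orthogonality test: perpendicular vectors have zero dot product
assert np.isclose(np.dot([1.0, 0.0, 0.0], [0.0, 5.0, 0.0]), 0.0)
```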
Cross product

The cross product, also known as the vector product, is a binary operation on two vectors in three-dimensional Euclidean space that yields a third vector perpendicular to both input vectors. For vectors \vec{a} and \vec{b}, the cross product \vec{a} \times \vec{b} has magnitude \|\vec{a} \times \vec{b}\| = \|\vec{a}\| \, \|\vec{b}\| \sin \theta, where \theta is the angle between \vec{a} and \vec{b} (with 0 \leq \theta \leq \pi), and direction determined by the right-hand rule: pointing in the direction of the thumb when the fingers curl from \vec{a} to \vec{b}.[33] This magnitude equals the area of the parallelogram formed by \vec{a} and \vec{b} as adjacent sides.[33] The operation is undefined in two dimensions and requires extensions, such as bivectors or higher-dimensional analogs, for spaces beyond three dimensions.[33]

In Cartesian coordinates, with \vec{a} = (a_x, a_y, a_z) and \vec{b} = (b_x, b_y, b_z), the cross product is computed as: \vec{a} \times \vec{b} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ a_x & a_y & a_z \\ b_x & b_y & b_z \end{vmatrix} = (a_y b_z - a_z b_y) \hat{i} - (a_x b_z - a_z b_x) \hat{j} + (a_x b_y - a_y b_x) \hat{k}. This determinant mnemonic facilitates calculation and highlights the antisymmetric nature of the components.[33] The resulting vector is orthogonal to both \vec{a} and \vec{b}, satisfying \vec{a} \cdot (\vec{a} \times \vec{b}) = 0 and \vec{b} \cdot (\vec{a} \times \vec{b}) = 0.[33]

Key algebraic properties include anti-commutativity, \vec{a} \times \vec{b} = -(\vec{b} \times \vec{a}), which follows from the sign change in the determinant when rows are swapped; distributivity over vector addition, \vec{a} \times (\vec{b} + \vec{c}) = \vec{a} \times \vec{b} + \vec{a} \times \vec{c}; and scalar multiplication compatibility, k(\vec{a} \times \vec{b}) = (k\vec{a}) \times \vec{b} = \vec{a} \times (k\vec{b}) for a scalar k.[34] These ensure the cross product behaves consistently in vector equations. In physics, it appears in the magnetic Lorentz force \vec{F} = q (\vec{v} \times \vec{B}) on a charge q moving with velocity \vec{v} in field \vec{B}, and in angular momentum \vec{L} = \vec{r} \times \vec{p}, where \vec{r} is position and \vec{p} is linear momentum.[35][36]
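The component formula can be implemented directly and checked against a library routine. The sketch below uses NumPy; the helper name cross is illustrative rather than standard.

```python
import numpy as np

def cross(a, b):
    """Cross product from the component (determinant-expansion) formula."""
    ax, ay, az = a
    bx, by, bz = b
    return np.array([ay * bz - az * by,
                     az * bx - ax * bz,
                     ax * by - ay * bx])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])
c = cross(a, b)

assert np.allclose(c, np.cross(a, b))   # matches NumPy's built-in
assert np.isclose(np.dot(a, c), 0.0)    # orthogonal to a
assert np.isclose(np.dot(b, c), 0.0)    # orthogonal to b
assert np.allclose(cross(b, a), -c)     # anti-commutativity

area = np.linalg.norm(c)                # area of the parallelogram spanned by a and b
```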
Geometric and Algebraic Interpretations

Magnitude, direction, and unit vectors
The magnitude of a vector \vec{a}, often denoted \|\vec{a}\| and referred to as its norm or length, quantifies the vector's size as the straight-line distance from its initial point to its terminal point. Geometrically, this arises from the Pythagorean theorem applied to the vector's components, yielding \|\vec{a}\| = \sqrt{\vec{a} \cdot \vec{a}}. In three-dimensional Euclidean space, with components a_x, a_y, and a_z, the formula expands to \|\vec{a}\| = \sqrt{a_x^2 + a_y^2 + a_z^2}.

The direction of a non-zero vector \vec{a} is captured by its corresponding unit vector \hat{a}, defined as a vector of magnitude 1 pointing in the same direction as \vec{a}. This unit vector is obtained by scaling \vec{a} by the reciprocal of its magnitude: \hat{a} = \frac{\vec{a}}{\|\vec{a}\|}. Any non-zero vector admits a unique decomposition into its magnitude and unit vector, \vec{a} = \|\vec{a}\| \hat{a}, separating the concepts of length and orientation.

The angle \theta between two non-zero vectors \vec{a} and \vec{b} (where 0 \leq \theta \leq \pi) derives from the dot product relation \vec{a} \cdot \vec{b} = \|\vec{a}\| \|\vec{b}\| \cos \theta, leading to \cos \theta = \frac{\vec{a} \cdot \vec{b}}{\|\vec{a}\| \|\vec{b}\|}. This formula follows from applying the law of cosines to the triangle formed by \vec{a}, \vec{b}, and \vec{a} - \vec{b}, confirming that the dot product encodes both magnitudes and the cosine of their angular separation.

The zero vector \vec{0}, with all components equal to zero, has magnitude \|\vec{0}\| = 0 but no defined direction, as it cannot be normalized without division by zero. This undefined direction aligns with its geometric interpretation as a point rather than a displacement. The magnitude \|\vec{a}\| of a vector also represents the Euclidean distance from the origin to the point defined by \vec{a}, providing a foundational measure for distances between points via \|\vec{a} - \vec{b}\|.
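A brief sketch of these definitions in NumPy follows; the helper unit is an illustrative name, and the example vectors are arbitrary.

```python
import numpy as np

def unit(v):
    """Return the unit vector v / ||v||; the zero vector has no defined direction."""
    n = np.linalg.norm(v)
    if n == 0:
        raise ValueError("the zero vector cannot be normalized")
    return v / n

a = np.array([3.0, 0.0, 4.0])
b = np.array([1.0, 1.0, 0.0])

assert np.isclose(np.linalg.norm(a), 5.0)           # sqrt(3^2 + 0^2 + 4^2)
assert np.allclose(a, np.linalg.norm(a) * unit(a))  # a = ||a|| * a_hat

# Angle from cos(theta) = (a . b) / (||a|| ||b||)
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))

distance = np.linalg.norm(a - b)                    # Euclidean distance between the two points
```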
Coordinate representations

In Cartesian coordinate systems, vectors are represented by their components along orthogonal axes, facilitating algebraic computations and bridging abstract vector properties to explicit numerical evaluations. This representation is fundamental in Euclidean spaces, where the position of a point or the direction and magnitude of a displacement are quantified using scalar coordinates.[37]

A position vector \vec{r} describes the location of a point relative to the origin in a Cartesian system. In two dimensions, it is expressed as \vec{r} = x \hat{i} + y \hat{j}, where x and y are the coordinates along the respective axes. In three dimensions, this extends to \vec{r} = x \hat{i} + y \hat{j} + z \hat{k}, incorporating the z-coordinate.[13] The basis vectors \hat{i}, \hat{j}, and \hat{k} in three-dimensional space form an orthonormal set, meaning they are mutually perpendicular unit vectors with magnitude 1, aligned with the x-, y-, and z-axes. In two dimensions, the basis reduces to \{\hat{i}, \hat{j}\}. These basis vectors provide a standardized framework for decomposing any vector into scalar multiples of these directions.[38]

Any vector \vec{a} can thus be written in terms of its components as \vec{a} = a_x \hat{i} + a_y \hat{j} + a_z \hat{k} in 3D, where a_x, a_y, and a_z are the scalar components representing projections onto the axes. In 2D, the z-component is omitted. This decomposition allows for straightforward manipulation in computational contexts.[39]

Arithmetic operations on vectors are performed component-wise in this representation, aligning with the abstract definitions of addition and scalar multiplication. For two vectors \vec{a} = (a_x, a_y, a_z) and \vec{b} = (b_x, b_y, b_z), their sum is \vec{a} + \vec{b} = (a_x + b_x, a_y + b_y, a_z + b_z). Scalar multiplication by k yields k\vec{a} = (k a_x, k a_y, k a_z). In 2D, operations follow the same pattern with two components.[37] The dot product, a key algebraic operation, computes efficiently via components as \vec{a} \cdot \vec{b} = a_x b_x + a_y b_y + a_z b_z in 3D (or omitting the z-term in 2D). This scalar result equals the product of the magnitudes and the cosine of the angle between the vectors, enabling quick numerical assessment.[38]

To switch between coordinate systems, such as rotating the basis, a change of basis is applied using transformation matrices. For a rotation of the basis by angle \theta in 2D, the new components (x', y') relate to the original (x, y) via the matrix equation: \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. In 3D, analogous 3×3 rotation matrices transform the full component vector. These matrices, whose rows are the new basis vectors expressed in the original coordinates, ensure that the same vector is represented consistently in either frame.[40]

While 2D representations suffice for planar problems with two components and basis vectors, 3D extends this to spatial analyses requiring the third dimension. The framework generalizes to n-dimensional Euclidean space, where vectors are tuples (a_1, a_2, \dots, a_n) with respect to an orthonormal basis, supporting operations like addition and dot products component-wise.[37]
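The component-wise operations and the 2D change of basis translate directly into code. The following NumPy sketch assumes the row convention used in the matrix equation above; the vectors, scalar, and rotation angle are arbitrary example values.

```python
import numpy as np

theta = np.pi / 6  # rotate the basis by 30 degrees

# Change-of-basis matrix for a basis rotated by theta (rows are the new basis
# vectors expressed in the original coordinates), matching the equation above.
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

v = np.array([2.0, 1.0])  # components in the original frame
v_new = R @ v             # components of the same vector in the rotated frame

# The vector itself is unchanged, so its length is the same in both frames.
assert np.isclose(np.linalg.norm(v_new), np.linalg.norm(v))

# Component-wise arithmetic in coordinates
a, b, k = np.array([1.0, -2.0, 3.0]), np.array([0.5, 4.0, -1.0]), 3.0
assert np.allclose(a + b, [1.5, 2.0, 2.0])
assert np.allclose(k * a, [3.0, -6.0, 9.0])
assert np.isclose(np.dot(a, b), 1.0 * 0.5 + (-2.0) * 4.0 + 3.0 * (-1.0))
```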
Applications

In physics and engineering
In classical mechanics, vector algebra is fundamental to resolving forces acting on a body, where the net force \vec{F} is the vector sum of all individual forces, leading to Newton's second law in vector form: \vec{F} = m \vec{a}, with m as the mass and \vec{a} as the acceleration vector. This equation governs the motion of particles and rigid bodies by relating the resultant force to the rate of change of momentum, enabling predictions of trajectories under multiple influences such as gravity or friction. For instance, in projectile motion, the gravitational force \vec{F}_g = m \vec{g} combines with initial velocity components to determine parabolic paths.[41]

Velocity \vec{v} and acceleration \vec{a} are treated as vectors in kinematics, where \vec{v} = \frac{d\vec{r}}{dt} and \vec{a} = \frac{d\vec{v}}{dt} = \frac{d^2\vec{r}}{dt^2}, with \vec{r} as the position vector. Displacement over time integrates these vectors, as \vec{r}(t) = \vec{r}_0 + \int_0^t \vec{v}(t') \, dt', allowing analysis of curvilinear motion like circular orbits, where the centripetal acceleration \vec{a}_c = -\frac{v^2}{r} \hat{r} points toward the center. Kinetic energy derives from the magnitude of velocity, expressed as \frac{1}{2} m \|\vec{v}\|^2, quantifying the work required to accelerate a body from rest.[42]

Torque, or moment of force, arises in rotational dynamics as \vec{\tau} = \vec{r} \times \vec{F}, where \vec{r} is the position vector from the pivot to the force application point, capturing the rotational effect perpendicular to both vectors. This cross product has magnitude \tau = r F \sin \theta and determines angular acceleration via \vec{\tau} = I \vec{\alpha}, with I as the moment of inertia. Work in mechanics is the scalar \vec{F} \cdot \vec{d} for a constant force \vec{F} acting over displacement \vec{d}, yielding the energy transfer W = F d \cos \theta; for varying forces, the work is the corresponding integral along the path.[43]

In electromagnetism, the Lorentz force law describes the vector force on a charged particle: \vec{F} = q (\vec{E} + \vec{v} \times \vec{B}), combining the electric field contribution q \vec{E} with the magnetic contribution q \vec{v} \times \vec{B}, which deflects moving charges orthogonally to \vec{v} and \vec{B}. This governs phenomena like cyclotron motion in uniform fields. In engineering, vector equilibrium ensures structural stability: for a body in static equilibrium, \sum \vec{F} = \vec{0} and \sum \vec{\tau} = \vec{0}, conditions applied in truss analysis to resolve member forces via the method of joints or sections.[44][45]
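For illustration, the sketch below evaluates these formulas numerically with NumPy; the field strengths, charge, and geometry are arbitrary example values in SI units, not drawn from any particular problem.

```python
import numpy as np

# Work done by a constant force over a straight displacement: W = F . d
F = np.array([10.0, 0.0, 5.0])   # newtons (illustrative values)
d = np.array([2.0, 3.0, 0.0])    # metres
W = np.dot(F, d)                 # joules

# Torque about a pivot: tau = r x F
r = np.array([0.5, 0.0, 0.0])    # metres from pivot to point of application
tau = np.cross(r, F)             # newton-metres, perpendicular to both r and F

# Lorentz force on a moving charge: F = q (E + v x B)
q = 1.6e-19                      # coulombs
E = np.array([0.0, 1.0e3, 0.0])  # volts per metre
v = np.array([2.0e5, 0.0, 0.0])  # metres per second
B = np.array([0.0, 0.0, 0.1])    # teslas
F_lorentz = q * (E + np.cross(v, B))
```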
In geometry and computer science

In geometry, vector algebra provides elegant proofs for classical theorems. The triangle law of vector addition, which states that the resultant of two vectors forms the third side of a triangle when they are placed head-to-tail, can be proven using vector addition and its equivalence with the parallelogram law. Specifically, for vectors \vec{u} and \vec{v} placed head-to-tail, the closing side of the triangle is \vec{w} = -(\vec{u} + \vec{v}), so the three sides satisfy \vec{u} + \vec{v} + \vec{w} = \vec{0}; this follows directly from the definition of vector addition as successive displacement.[46][47]

Another key application is the computation of volumes of three-dimensional figures. The volume V of a parallelepiped spanned by vectors \vec{a}, \vec{b}, and \vec{c} is given by the absolute value of the scalar triple product: V = |\vec{a} \cdot (\vec{b} \times \vec{c})|. This formula arises because \vec{b} \times \vec{c} yields a vector whose magnitude is the area of the base parallelogram, and the dot product with \vec{a} projects this area along the height direction.[48][49]

In computer science, particularly computer graphics, vector algebra underpins rendering techniques. Normal vectors, essential for surface shading, are computed as the cross product of two edge vectors \vec{a} and \vec{b} on a polygonal face, yielding \vec{n} = \vec{a} \times \vec{b}, which is perpendicular to the surface and determines light interaction via models like Lambertian shading.[50][51] In ray tracing, vector operations detect intersections between rays (parameterized as \vec{p}(t) = \vec{o} + t \vec{d}) and geometric primitives, such as solving quadratic equations for spheres or using cross products in triangle tests, to find entry and exit points.[52][53]

Algorithms in graphics and simulation frequently employ vector normalization and products. Normalization scales a vector \vec{v} to unit length by dividing by its magnitude \|\vec{v}\| = \sqrt{\vec{v} \cdot \vec{v}}, ensuring consistent lighting computations, where the cosine of the angle between light and normal directions must use unit vectors for accuracy in diffuse reflection models.[54][55] Cross products also enable collision detection in 3D environments, such as games, by generating plane normals from object edges to test separations between convex hulls via the separating axis theorem.[56]

Vectors are implemented as arrays in programming libraries for efficient computation. In Python's NumPy, vectors are represented as one-dimensional ndarray objects, supporting element-wise operations and linear algebra functions like dot and cross products through methods such as numpy.dot and numpy.cross.[57]
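The scalar triple product and surface-normal computations translate directly to NumPy. In the sketch below, the helper names parallelepiped_volume and triangle_normal are illustrative, and the triangle and light direction are arbitrary example data.

```python
import numpy as np

def parallelepiped_volume(a, b, c):
    """Volume as the absolute value of the scalar triple product |a . (b x c)|."""
    return abs(np.dot(a, np.cross(b, c)))

def triangle_normal(p0, p1, p2):
    """Unit normal of a triangle from the cross product of two edge vectors."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

# The unit cube spanned by the standard basis vectors has volume 1.
assert np.isclose(parallelepiped_volume(np.array([1.0, 0.0, 0.0]),
                                        np.array([0.0, 1.0, 0.0]),
                                        np.array([0.0, 0.0, 1.0])), 1.0)

# Lambertian diffuse factor: cosine of the angle between unit normal and unit light direction.
p0, p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
n = triangle_normal(p0, p1, p2)
light_dir = np.array([0.0, 0.0, 1.0])
diffuse = max(0.0, np.dot(n, light_dir))
```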
Interpolation techniques, such as Bézier curves used in vector graphics and animation, rely on linear combinations of control points. A cubic Bézier curve is defined as \vec{B}(t) = (1-t)^3 \vec{P_0} + 3(1-t)^2 t \vec{P_1} + 3(1-t) t^2 \vec{P_2} + t^3 \vec{P_3} for t \in [0,1], blending the vectors \vec{P_i} to produce smooth paths that scale without aliasing.[58][59] The dot product also aids in graphics by computing angles between vectors for orientation, such as viewer-to-surface alignments in shading.[50]
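A direct evaluation of the cubic Bézier formula is shown below as a minimal sketch; the control points are arbitrary example values.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at t in [0, 1] as a linear combination of control points."""
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)

# Control points of an illustrative 2D curve
P0, P1, P2, P3 = (np.array([0.0, 0.0]), np.array([1.0, 2.0]),
                  np.array([3.0, 3.0]), np.array([4.0, 0.0]))

assert np.allclose(cubic_bezier(P0, P1, P2, P3, 0.0), P0)  # curve starts at P0
assert np.allclose(cubic_bezier(P0, P1, P2, P3, 1.0), P3)  # curve ends at P3
midpoint = cubic_bezier(P0, P1, P2, P3, 0.5)
```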