
Vector algebra

Vector algebra is a branch of mathematics that studies vectors—mathematical entities characterized by both magnitude and direction—and the operations that can be performed on them, including vector addition, scalar multiplication, the dot product (which yields a scalar), and the cross product (which produces another vector). These operations enable the representation and analysis of physical quantities like force, velocity, and displacement in Euclidean space, typically in two or three dimensions. The foundations of vector algebra trace back to the mid-19th century, when Irish mathematician William Rowan Hamilton developed quaternions in 1843 as a four-dimensional extension of complex numbers to handle three-dimensional rotations, laying the groundwork for vector concepts. Independently, in the 1880s, American physicist J. Willard Gibbs and British engineer Oliver Heaviside formulated a more practical, three-dimensional vector system by separating the scalar and vector parts of quaternions, introducing the dot and cross product notations that are standard today. This "vector analysis" gained prominence through its application in James Clerk Maxwell's electromagnetic theory, which Heaviside reformulated using vector notation. Key properties of vector algebra include distributivity (scalar multiplication distributes over vector addition), commutativity of addition and of the dot product, and associativity of addition, which mirror those of scalar algebra but account for directionality. The dot product, defined as \mathbf{a} \cdot \mathbf{b} = |\mathbf{a}| |\mathbf{b}| \cos \theta, measures the projection of one vector onto another and is crucial for orthogonality checks. The cross product, \mathbf{a} \times \mathbf{b} = |\mathbf{a}| |\mathbf{b}| \sin \theta \, \mathbf{n} (where \mathbf{n} is the unit normal), generates a vector perpendicular to both inputs, essential for computing areas and torques. In applications, vector algebra underpins classical mechanics for resolving forces and motions, electromagnetism for field descriptions, and engineering fields like statics and dynamics. It also extends to computer graphics for transformations and simulations, and to more advanced areas like linear algebra, where vectors form the basis for matrices and eigenvalues.

Foundations

Definition of vectors

In the context of vector algebra, vectors are mathematical objects that represent quantities possessing both magnitude and direction, typically visualized as directed line segments or arrows within two-dimensional (ℝ²) or three-dimensional (ℝ³) Euclidean space. These entities arise from the foundational principles of real numbers and geometry, providing a framework for describing spatial relationships without delving into coordinate systems. Vectors can be classified as free vectors or bound vectors. Free vectors are defined solely by their magnitude and direction, independent of any specific starting point, allowing them to be translated throughout space while preserving their properties. In contrast, bound vectors, often called position vectors, are tied to a particular point of application, such as the origin, making their location in space integral to their definition. Two vectors are considered equal if they share the same magnitude and direction, regardless of their positions in space; this equivalence underscores the translation invariance of free vectors. Historically, the concept of vectors emerged from geometric intuitions, with significant influence from William Rowan Hamilton's introduction of quaternions as a means to extend complex numbers to three dimensions. In the late 19th century, Josiah Willard Gibbs and Oliver Heaviside formalized vector algebra by simplifying quaternion-based methods into a more practical system focused on three-dimensional vectors, facilitating applications in physics such as electromagnetism. In abstract mathematics, vectors inhabit vector spaces, though the present discussion centers on their concrete realization in Euclidean settings.

Basic properties

In vector algebra, vectors in Euclidean space form an abelian group under the operation of addition. This structure ensures that addition is associative, meaning that for any vectors \mathbf{u}, \mathbf{v}, and \mathbf{w}, (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}); commutative, so \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}; and possesses an identity element, the zero vector \mathbf{0}, satisfying \mathbf{u} + \mathbf{0} = \mathbf{u} for any \mathbf{u}. Additionally, every vector has a unique additive inverse -\mathbf{u}, such that \mathbf{u} + (-\mathbf{u}) = \mathbf{0}. These properties establish a foundational algebraic framework for vectors over the real numbers, analogous to the group axioms in abstract algebra but tailored to the geometric context of directed quantities. Scalar multiplication interacts with vector addition through distributivity: for any scalar a and vectors \mathbf{u}, \mathbf{v}, a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v}. This, combined with the remaining vector space axioms—distributivity over scalar addition, where (a + b)\mathbf{u} = a\mathbf{u} + b\mathbf{u}, compatibility of scalar multiplication, where a(b\mathbf{u}) = (ab)\mathbf{u} for scalars a, b, and the requirement that the multiplicative identity scalar 1 satisfies 1 \cdot \mathbf{u} = \mathbf{u}—defines the linearity essential to Euclidean vector spaces. These axioms ensure that the set of vectors behaves consistently under linear combinations, providing the rigorous basis for operations in physics and engineering applications. The zero vector is uniquely the additive identity, and negative vectors are uniquely determined as the inverses, preventing ambiguities in algebraic manipulations. A key distinction arises between geometric and algebraic interpretations of vectors, highlighting a form of non-uniqueness in the former.
Geometrically, vectors are directed line segments (arrows) whose algebraic equivalence depends on magnitude and direction, allowing parallel transport: two arrows represent the same vector if they are congruent via translation, regardless of starting point. Algebraically, vectors are equivalence classes of such arrows or, in coordinates, ordered tuples in \mathbb{R}^n, where position is fixed relative to an origin. This equivalence resolves potential ambiguities, ensuring that properties like group structure hold invariantly across representations.
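As a quick sanity check, the axioms above can be verified numerically for concrete vectors. The following sketch (an illustration, not part of the source, with arbitrarily chosen values) uses NumPy arrays as vectors in \mathbb{R}^3 and exercises each axiom in turn:

```python
# Illustrative check of the vector-space axioms for specific vectors in R^3.
# The particular numbers are arbitrary example values.
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([4.0, 0.0, -1.0])
w = np.array([-2.0, 5.0, 2.0])
zero = np.zeros(3)

# Associativity and commutativity of addition
assert np.allclose((u + v) + w, u + (v + w))
assert np.allclose(u + v, v + u)

# Additive identity and additive inverse
assert np.allclose(u + zero, u)
assert np.allclose(u + (-u), zero)

# Distributivity and compatibility of scalar multiplication
a, b = 2.5, -3.0
assert np.allclose(a * (u + v), a * u + a * v)   # a(u + v) = au + av
assert np.allclose((a + b) * u, a * u + b * u)   # (a + b)u = au + bu
assert np.allclose(a * (b * u), (a * b) * u)     # a(bu) = (ab)u
assert np.allclose(1.0 * u, u)                   # 1 * u = u
```

Floating-point checks like these do not prove the axioms, of course, but they make the algebraic rules concrete for readers working with component representations.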

Arithmetic Operations

Addition and subtraction

Vector addition is a fundamental operation in vector algebra, geometrically interpreted through the parallelogram law. To add two vectors \vec{a} and \vec{b}, place their tails at a common point so that they form two adjacent sides of a parallelogram; the resultant vector \vec{c} = \vec{a} + \vec{b} is the diagonal extending from the common tail. Equivalently, in the head-to-tail (triangle) construction, placing the tail of \vec{b} at the head of \vec{a} yields the same resultant, drawn from the tail of \vec{a} to the head of \vec{b}. Either construction allows visualization of how vectors combine to produce a net displacement or force. The parallelogram law directly implies the commutativity of vector addition, where \vec{a} + \vec{b} = \vec{b} + \vec{a}, as swapping the vectors merely reflects the parallelogram without altering the diagonal. Associativity holds as well, stated as (\vec{a} + \vec{b}) + \vec{c} = \vec{a} + (\vec{b} + \vec{c}), enabling the grouping of multiple vectors in any order for sequential addition, which is essential for summing chains of displacements or forces. These properties establish vector addition as an abelian group operation under the axioms of vector spaces. Vector subtraction is defined as the addition of the negative vector, so \vec{a} - \vec{b} = \vec{a} + (-\vec{b}), where -\vec{b} has the same magnitude as \vec{b} but opposite direction. Geometrically, when \vec{a} and \vec{b} are drawn from a common tail, the difference \vec{a} - \vec{b} points from the head of \vec{b} to the head of \vec{a}. This operation is crucial for finding differences in position or resolving components in opposing directions. A key inequality arising from vector addition is the triangle inequality, which states that the magnitude (or norm) of the sum satisfies \|\vec{a} + \vec{b}\| \leq \|\vec{a}\| + \|\vec{b}\|, where \|\vec{v}\| denotes the magnitude of \vec{v}. This reflects the geometric fact that the straight-line path (the resultant) is no longer than the path via intermediate points (the sum of the separate magnitudes), with equality when \vec{a} and \vec{b} are parallel and point in the same direction.
The norm is introduced here only as a measure of length, with full details deferred to later sections. In practical applications, such as navigation, vector addition models displacement: a ship sailing 10 km east (\vec{d_1}) followed by 15 km northeast (\vec{d_2}) has net displacement \vec{d} = \vec{d_1} + \vec{d_2}, computed via the parallelogram law; the magnitude of \vec{d} gives the shortest return path, which by the triangle inequality is never longer than the 25 km actually traveled.
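The ship example can be computed directly. The sketch below (illustrative, not from the source) sums the two legs component-wise, taking northeast as 45° from east, and confirms the triangle inequality:

```python
# Net displacement of the ship: 10 km east, then 15 km northeast.
import numpy as np

d1 = np.array([10.0, 0.0])                        # 10 km due east
d2 = 15.0 * np.array([np.cos(np.pi / 4),          # 15 km northeast
                      np.sin(np.pi / 4)])          # (45 degrees from east)
d = d1 + d2                                        # net displacement

# Triangle inequality: the direct return path is no longer than the two legs.
assert np.linalg.norm(d) <= np.linalg.norm(d1) + np.linalg.norm(d2)

print(np.round(d, 2))                 # net displacement components in km
print(round(np.linalg.norm(d), 2))    # length of the shortest return path
```

The net displacement works out to roughly (20.61, 10.61) km, about 23.2 km from the starting point, comfortably below the 25 km traveled along the two legs.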

Scalar multiplication

Scalar multiplication is an operation in vector algebra that combines a scalar (a real number k \in \mathbb{R}) with a vector \vec{a} to produce a new vector k\vec{a}, which is parallel to \vec{a}. This operation scales the vector while preserving its direction if k > 0, reversing the direction if k < 0, and resulting in the zero vector if k = 0. Key properties of scalar multiplication include distributivity over vector addition, where k(\vec{a} + \vec{b}) = k\vec{a} + k\vec{b} for any vectors \vec{a}, \vec{b} and scalar k, and associativity with respect to scalar multiplication, where (km)\vec{a} = k(m\vec{a}) for scalars k, m and vector \vec{a}. Additionally, multiplying the zero vector by any scalar yields the zero vector: k\vec{0} = \vec{0}. The magnitude of the resulting vector satisfies \|k\vec{a}\| = |k| \|\vec{a}\|, meaning the length scales by the absolute value of the scalar, independent of its sign. For the zero scalar specifically, 0 \cdot \vec{a} = \vec{0} for any vector \vec{a}, emphasizing that scaling by zero eliminates the vector entirely. In physics, scalar multiplication often models the scaling of quantities like velocity; for instance, if \vec{v} represents an object's velocity, then 2\vec{v} doubles the speed while maintaining the direction, as seen in analyses of motion where components are proportionally adjusted.
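These properties can be checked numerically. A minimal sketch (illustrative values, not from the source) using a vector of length 5:

```python
# Illustrative check of scalar multiplication properties.
import numpy as np

a = np.array([3.0, -4.0, 0.0])   # a 3-4-5 vector, so ||a|| = 5
k, m = -2.0, 0.5

# ||k a|| = |k| ||a||: length scales by the absolute value of the scalar
assert np.isclose(np.linalg.norm(k * a), abs(k) * np.linalg.norm(a))

# (km) a = k (m a): associativity with respect to scalars
assert np.allclose((k * m) * a, k * (m * a))

# Scaling by zero, and scaling the zero vector, both give the zero vector
assert np.allclose(0.0 * a, np.zeros(3))
assert np.allclose(k * np.zeros(3), np.zeros(3))
```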

Vector Products

Dot product

The dot product, also known as the scalar product or inner product, is a binary operation on two vectors that yields a scalar value, originally developed from the scalar part of William Rowan Hamilton's quaternion product and refined by Josiah Willard Gibbs in his vector analysis framework. Geometrically, for vectors \vec{a} and \vec{b}, it is defined as \vec{a} \cdot \vec{b} = \|\vec{a}\| \|\vec{b}\| \cos \theta, where \theta is the angle between the vectors and \|\cdot\| denotes the magnitude. This formulation captures the projection of one vector onto the other, scaled by the magnitudes, and is positive when the angle is acute, zero when right, and negative when obtuse. In coordinate form, assuming Cartesian coordinates in three dimensions, the dot product is computed algebraically as \vec{a} \cdot \vec{b} = a_x b_x + a_y b_y + a_z b_z, where \vec{a} = (a_x, a_y, a_z) and \vec{b} = (b_x, b_y, b_z). This component-wise summation provides a practical method for calculation and extends naturally to higher dimensions. The dot product exhibits several key algebraic properties that underpin its utility in vector algebra. It is commutative, satisfying \vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a}, and distributive over vector addition, so \vec{a} \cdot (\vec{b} + \vec{c}) = \vec{a} \cdot \vec{b} + \vec{a} \cdot \vec{c}. It is also bilinear, meaning it is linear in each argument: (\alpha \vec{a} + \beta \vec{b}) \cdot \vec{c} = \alpha (\vec{a} \cdot \vec{c}) + \beta (\vec{b} \cdot \vec{c}) and similarly for the second argument, where \alpha, \beta are scalars. Notably, the self-dot product equals the square of the magnitude: \vec{a} \cdot \vec{a} = \|\vec{a}\|^2. A dot product of zero indicates orthogonality: \vec{a} \cdot \vec{b} = 0 exactly when the vectors are perpendicular (or at least one of them is the zero vector), providing a simple test for perpendicularity in geometric configurations. In physics, the dot product quantifies work as the product of the force and displacement vectors, W = \vec{F} \cdot \vec{d}, representing only the component of force parallel to the displacement.
The dot product satisfies the Cauchy-Schwarz inequality, which states that |\vec{a} \cdot \vec{b}| \leq \|\vec{a}\| \|\vec{b}\|, with equality when the vectors are parallel; this bounds the dot product and follows from the geometric definition since |\cos \theta| \leq 1.
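The component formula, the angle relation, and the Cauchy-Schwarz bound can all be exercised on a small example. The sketch below (arbitrary example vectors, not from the source) uses two vectors of magnitude 3:

```python
# Dot product: component formula, angle, self-dot, and Cauchy-Schwarz.
import numpy as np

a = np.array([1.0, 2.0, 2.0])    # ||a|| = sqrt(1 + 4 + 4) = 3
b = np.array([2.0, -1.0, 2.0])   # ||b|| = sqrt(4 + 1 + 4) = 3

dot = np.dot(a, b)               # 1*2 + 2*(-1) + 2*2 = 4
assert np.isclose(dot, 4.0)

# Angle from cos(theta) = (a . b) / (||a|| ||b||); positive dot => acute
theta = np.arccos(dot / (np.linalg.norm(a) * np.linalg.norm(b)))
assert 0.0 < theta < np.pi / 2

# Self-dot equals the squared magnitude; Cauchy-Schwarz bounds the dot
assert np.isclose(np.dot(a, a), np.linalg.norm(a) ** 2)
assert abs(dot) <= np.linalg.norm(a) * np.linalg.norm(b)
```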

Cross product

The cross product, also known as the vector product, is a binary operation on two vectors in three-dimensional Euclidean space that yields a third vector perpendicular to both input vectors. For vectors \vec{a} and \vec{b}, the cross product \vec{a} \times \vec{b} has magnitude \|\vec{a} \times \vec{b}\| = \|\vec{a}\| \, \|\vec{b}\| \sin \theta, where \theta is the angle between \vec{a} and \vec{b} (with 0 \leq \theta \leq \pi), and direction determined by the right-hand rule: pointing in the direction of the thumb when the fingers curl from \vec{a} to \vec{b}. This magnitude equals the area of the parallelogram formed by \vec{a} and \vec{b} as adjacent sides. The operation is undefined in two dimensions and requires extensions, such as bivectors or higher-dimensional analogs, for spaces beyond three dimensions. In Cartesian coordinates, with \vec{a} = (a_x, a_y, a_z) and \vec{b} = (b_x, b_y, b_z), the cross product is computed as: \vec{a} \times \vec{b} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ a_x & a_y & a_z \\ b_x & b_y & b_z \end{vmatrix} = (a_y b_z - a_z b_y) \hat{i} - (a_x b_z - a_z b_x) \hat{j} + (a_x b_y - a_y b_x) \hat{k}. This determinant mnemonic facilitates calculation and highlights the antisymmetric nature of the components. The resulting vector is orthogonal to both \vec{a} and \vec{b}, satisfying \vec{a} \cdot (\vec{a} \times \vec{b}) = 0 and \vec{b} \cdot (\vec{a} \times \vec{b}) = 0. Key algebraic properties include anti-commutativity, \vec{a} \times \vec{b} = -(\vec{b} \times \vec{a}), which follows from the sign change in the determinant when rows are swapped; distributivity over vector addition, \vec{a} \times (\vec{b} + \vec{c}) = \vec{a} \times \vec{b} + \vec{a} \times \vec{c}; and scalar multiplication compatibility, k(\vec{a} \times \vec{b}) = (k\vec{a}) \times \vec{b} = \vec{a} \times (k\vec{b}) for scalar k. These ensure the cross product behaves consistently in vector equations.
In physics, it appears in the magnetic Lorentz force \vec{F} = q (\vec{v} \times \vec{B}) on a charge q moving with velocity \vec{v} in field \vec{B}, and in angular momentum \vec{L} = \vec{r} \times \vec{p}, where \vec{r} is position and \vec{p} is linear momentum.
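The determinant formula and the orthogonality and anti-commutativity properties can be verified component by component. A brief sketch with arbitrary example vectors (not from the source):

```python
# Cross product: component formula, orthogonality, anti-commutativity.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.cross(a, b)

# Component formula: (a_y b_z - a_z b_y, a_z b_x - a_x b_z, a_x b_y - a_y b_x)
expected = [2*6 - 3*5, 3*4 - 1*6, 1*5 - 2*4]   # (-3, 6, -3)
assert np.allclose(c, expected)

# The result is orthogonal to both inputs
assert np.isclose(np.dot(a, c), 0.0)
assert np.isclose(np.dot(b, c), 0.0)

# Anti-commutativity: b x a = -(a x b)
assert np.allclose(np.cross(b, a), -c)

# ||a x b|| is the area of the parallelogram spanned by a and b
theta = np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
assert np.isclose(np.linalg.norm(c),
                  np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta))
```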

Geometric and Algebraic Interpretations

Magnitude, direction, and unit vectors

The magnitude of a vector \vec{a}, often denoted \|\vec{a}\| and referred to as its length or norm, quantifies the vector's size as the straight-line distance from its initial point to its terminal point. Geometrically, this arises from the Pythagorean theorem applied to the vector's components, yielding \|\vec{a}\| = \sqrt{\vec{a} \cdot \vec{a}}. In three-dimensional Euclidean space, with components a_x, a_y, and a_z, the formula expands to \|\vec{a}\| = \sqrt{a_x^2 + a_y^2 + a_z^2}. The direction of a non-zero vector \vec{a} is captured by its corresponding unit vector \hat{a}, defined as a vector of magnitude 1 pointing in the same direction as \vec{a}. This unit vector is obtained by scaling \vec{a} by the reciprocal of its magnitude: \hat{a} = \frac{\vec{a}}{\|\vec{a}\|}. Any non-zero vector admits a unique decomposition into its magnitude and unit vector: \vec{a} = \|\vec{a}\| \hat{a}, separating the concepts of length and orientation. The angle \theta between two non-zero vectors \vec{a} and \vec{b} (where 0 \leq \theta \leq \pi) derives from the dot product relation \vec{a} \cdot \vec{b} = \|\vec{a}\| \|\vec{b}\| \cos \theta, leading to \cos \theta = \frac{\vec{a} \cdot \vec{b}}{\|\vec{a}\| \|\vec{b}\|}. This formula follows from applying the law of cosines to the triangle formed by \vec{a}, \vec{b}, and \vec{a} - \vec{b}, confirming that the dot product encodes both magnitudes and the cosine of their angular separation. The zero vector \vec{0}, with all components equal to zero, has magnitude \|\vec{0}\| = 0 but no defined direction, as it cannot be normalized without dividing by zero. This undefined direction aligns with its geometric interpretation as a point rather than a directed segment. The magnitude \|\vec{a}\| of a position vector also represents the Euclidean distance from the origin to the point defined by \vec{a}, providing a foundational measure for distances between points via \|\vec{a} - \vec{b}\|.
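The magnitude-and-unit-vector decomposition is straightforward to compute. A minimal sketch (the 2-3-6 vector, whose length is exactly 7, is an arbitrary example):

```python
# Magnitude, normalization, and the decomposition a = ||a|| a_hat.
import numpy as np

a = np.array([2.0, -3.0, 6.0])      # ||a|| = sqrt(4 + 9 + 36) = 7
mag = np.linalg.norm(a)
a_hat = a / mag                      # unit vector in the direction of a

assert np.isclose(mag, 7.0)
assert np.isclose(np.linalg.norm(a_hat), 1.0)   # unit length
assert np.allclose(mag * a_hat, a)              # a = ||a|| * a_hat

# Distance between two points given by position vectors a and b
b = np.array([2.0, 1.0, 6.0])
assert np.isclose(np.linalg.norm(a - b), 4.0)   # differs only in y by 4
```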

Coordinate representations

In Cartesian coordinate systems, vectors are represented by their components along orthogonal axes, facilitating algebraic computations and bridging abstract vector properties to explicit numerical evaluations. This representation is fundamental in Euclidean spaces, where the position of a point or the direction and magnitude of a vector are quantified using scalar coordinates. A position vector \vec{r} describes the location of a point relative to the origin in a Cartesian system. In two dimensions, it is expressed as \vec{r} = x \hat{i} + y \hat{j}, where x and y are the coordinates along the respective axes. In three dimensions, this extends to \vec{r} = x \hat{i} + y \hat{j} + z \hat{k}, incorporating the z-coordinate. The basis vectors \hat{i}, \hat{j}, and \hat{k} in three-dimensional space form an orthonormal set, meaning they are mutually perpendicular unit vectors with magnitude 1, aligned with the x-, y-, and z-axes. In two dimensions, the basis reduces to \{\hat{i}, \hat{j}\}. These basis vectors provide a standardized framework for decomposing any vector into scalar multiples of these directions. Any vector \vec{a} can thus be written in terms of its components as \vec{a} = a_x \hat{i} + a_y \hat{j} + a_z \hat{k} in 3D, where a_x, a_y, and a_z are the scalar components representing projections onto the axes. In 2D, the z-component is omitted. This decomposition allows for straightforward manipulation in computational contexts. Arithmetic operations on vectors are performed component-wise in this representation, aligning with the abstract definitions of vector addition and scalar multiplication. For two vectors \vec{a} = (a_x, a_y, a_z) and \vec{b} = (b_x, b_y, b_z), their sum is \vec{a} + \vec{b} = (a_x + b_x, a_y + b_y, a_z + b_z). Scalar multiplication by k yields k\vec{a} = (k a_x, k a_y, k a_z). In 2D, operations follow the same pattern with two components. The dot product, a key algebraic operation, computes efficiently via components as \vec{a} \cdot \vec{b} = a_x b_x + a_y b_y + a_z b_z in 3D (or omitting the z-term in 2D).
This scalar result equals the product of the magnitudes and the cosine of the angle between the vectors, enabling quick numerical assessment. To switch between coordinate systems, such as rotating the basis, a change of basis is applied using transformation matrices. For a rotation of the basis by angle \theta in 2D, the new components (x', y') relate to the original (x, y) via the matrix equation: \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. In 3D, analogous 3x3 rotation matrices transform the full component vector. These matrices, whose rows are the new basis vectors expressed in the original coordinates, ensure consistent vector representations across rotated frames. While 2D representations suffice for planar problems with two components and basis vectors, 3D extends this to spatial analyses requiring the third coordinate. The framework generalizes to n-dimensional Euclidean space, where vectors are tuples (a_1, a_2, \dots, a_n) with respect to an orthonormal basis, supporting operations like addition and dot products component-wise.
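The 2D change of basis above can be sketched directly; the 30° angle and the test vector below are arbitrary choices for illustration:

```python
# Change of basis under a 2D rotation of the axes by theta.
import numpy as np

theta = np.pi / 6                         # rotate the basis by 30 degrees
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

v = np.array([1.0, 2.0])                  # components in the original frame
v_new = R @ v                             # components in the rotated frame

# Rotation matrices are orthogonal: R R^T = I, so the inverse is R^T
assert np.allclose(R @ R.T, np.eye(2))
assert np.allclose(R.T @ v_new, v)        # transforming back recovers v

# A change of basis by rotation preserves lengths and dot products
assert np.isclose(np.linalg.norm(v_new), np.linalg.norm(v))
```

The orthogonality check R R^T = I is the component-level reason rotations preserve magnitudes and angles: the dot product computed in either frame gives the same scalar.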

Applications

In mechanics and electromagnetism

In classical mechanics, vector algebra is fundamental to resolving forces acting on a body, where the net force \vec{F} is the vector sum of all individual forces, leading to Newton's second law in vector form: \vec{F} = m \vec{a}, with m as the mass and \vec{a} as the acceleration vector. This equation governs the motion of particles and rigid bodies by relating the resultant force to the rate of change of momentum, enabling predictions of trajectories under multiple influences such as gravity or friction. For instance, in projectile motion, the gravitational force \vec{F}_g = m \vec{g} combines with initial velocity components to determine parabolic paths. Velocity \vec{v} and acceleration \vec{a} are treated as vectors in kinematics, where \vec{v} = \frac{d\vec{r}}{dt} and \vec{a} = \frac{d\vec{v}}{dt} = \frac{d^2\vec{r}}{dt^2}, with \vec{r} as the position vector. Displacement over time integrates these vectors, as \vec{r}(t) = \vec{r}_0 + \int_0^t \vec{v}(t') \, dt', allowing analysis of motions like circular orbits where the centripetal acceleration \vec{a}_c = -\frac{v^2}{r} \hat{r} points toward the center. Kinetic energy derives from the magnitude of the velocity, expressed as \frac{1}{2} m \|\vec{v}\|^2, quantifying the work required to accelerate a body from rest. Torque, or moment of force, arises in rotational dynamics as \vec{\tau} = \vec{r} \times \vec{F}, where \vec{r} is the position vector from the pivot to the force application point, capturing the rotational effect perpendicular to both vectors. This cross product has magnitude \tau = r F \sin \theta and determines angular acceleration via \vec{\tau} = I \vec{\alpha}, with I as the moment of inertia. Work in mechanics is the scalar product \vec{F} \cdot \vec{d}, integrating force along displacement \vec{d} to yield energy transfer, such as in constant-force scenarios where W = F d \cos \theta.
In electromagnetism, the Lorentz force law describes the vector force on a moving charge q: \vec{F} = q (\vec{E} + \vec{v} \times \vec{B}), combining the electric contribution q \vec{E} with the magnetic contribution q \vec{v} \times \vec{B}, which deflects moving charges orthogonally to \vec{v} and \vec{B}. This governs phenomena like cyclotron motion in uniform fields. In engineering, vector equilibrium analysis ensures structural stability: for a body in static equilibrium, \sum \vec{F} = \vec{0} and \sum \vec{\tau} = \vec{0}, conditions applied in truss analysis to resolve member forces via the method of joints or sections.
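The torque and work formulas above reduce to a cross product and a dot product of component vectors. A small sketch with hypothetical numbers (a 0.5 m lever arm and a 10 N force at right angles, chosen only for illustration):

```python
# Torque tau = r x F and work W = F . d for simple perpendicular vectors.
import numpy as np

r = np.array([0.5, 0.0, 0.0])    # lever arm from pivot to force point, m
F = np.array([0.0, 10.0, 0.0])   # applied force, N, perpendicular to r

tau = np.cross(r, F)             # torque vector, N*m
# r and F lie in the xy-plane at 90 degrees, so tau points along z
# with magnitude r F sin(90 deg) = 0.5 * 10 = 5
assert np.allclose(tau, [0.0, 0.0, 5.0])

d = np.array([0.0, 0.2, 0.0])    # displacement along the force, m
W = np.dot(F, d)                 # work done, J
assert np.isclose(W, 2.0)        # 10 N * 0.2 m * cos(0) = 2 J
```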

In geometry and computer science

In geometry, vector algebra provides elegant proofs for classical theorems. The triangle law of vector addition, which states that the sum of two vectors forms the third side of a triangle when they are placed head-to-tail, can be proven using vector addition and its equivalence with the parallelogram law. Specifically, for vectors \vec{u} and \vec{v} placed head-to-tail, the closing side of the triangle is \vec{w} = -(\vec{u} + \vec{v}), so the three sides sum to the zero vector; this follows directly from the definition of a vector as a displacement. Another key application is the computation of volumes in three-dimensional figures. The volume V of a parallelepiped spanned by vectors \vec{a}, \vec{b}, and \vec{c} is given by the absolute value of the scalar triple product: V = |\vec{a} \cdot (\vec{b} \times \vec{c})|. This formula arises because \vec{b} \times \vec{c} yields a vector whose magnitude is the area of the base parallelogram, and the dot product with \vec{a} projects this area along the height direction. In computer science, particularly computer graphics, vector algebra underpins rendering techniques. Normal vectors, essential for surface shading, are computed as the cross product of two edge vectors \vec{a} and \vec{b} on a polygonal face, yielding \vec{n} = \vec{a} \times \vec{b}, which is perpendicular to the surface and determines light interaction via models like Lambertian shading. In ray tracing, vector operations detect intersections between rays (parameterized as \vec{p}(t) = \vec{o} + t \vec{d}) and geometric primitives, such as solving quadratic equations for spheres or using cross products in triangle intersection tests to find entry and exit points. Algorithms in graphics and simulation frequently employ vector normalization and products. Normalization scales a vector \vec{v} to unit length by dividing by its magnitude \|\vec{v}\| = \sqrt{\vec{v} \cdot \vec{v}}, ensuring consistent computations where the cosine of the angle between light and surface directions must use unit vectors for accuracy in shading models. Cross products also enable collision detection in 3D environments, such as games, by generating plane normals from object edges to test separations between convex hulls via the separating axis theorem.
Vectors are implemented as arrays in programming libraries for efficient computation. In Python's NumPy, vectors are represented as one-dimensional ndarray objects, supporting element-wise operations and linear algebra functions such as numpy.dot and numpy.cross for dot and cross products. Interpolation techniques, such as the Bézier curves used in vector graphics and animation, rely on linear combinations of control points. A cubic Bézier curve is defined as \vec{B}(t) = (1-t)^3 \vec{P_0} + 3(1-t)^2 t \vec{P_1} + 3(1-t) t^2 \vec{P_2} + t^3 \vec{P_3} for t \in [0,1], blending the control points \vec{P_i} to produce smooth paths scalable without aliasing. The dot product also aids graphics by computing angles between vectors for orientation, such as viewer-to-surface alignment in shading.
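As an illustrative sketch (control points and box dimensions below are hypothetical example values, not from the source), the cubic Bézier formula and the scalar triple product volume can both be evaluated with NumPy:

```python
# Cubic Bezier evaluation and parallelepiped volume via the triple product.
import numpy as np

def bezier(t, p0, p1, p2, p3):
    """Evaluate the cubic Bezier curve B(t) for t in [0, 1]."""
    s = 1.0 - t
    return s**3 * p0 + 3 * s**2 * t * p1 + 3 * s * t**2 * p2 + t**3 * p3

# Hypothetical control points forming an arch over the unit interval
p0, p1 = np.array([0.0, 0.0]), np.array([0.0, 1.0])
p2, p3 = np.array([1.0, 1.0]), np.array([1.0, 0.0])

assert np.allclose(bezier(0.0, p0, p1, p2, p3), p0)        # starts at P0
assert np.allclose(bezier(1.0, p0, p1, p2, p3), p3)        # ends at P3
assert np.allclose(bezier(0.5, p0, p1, p2, p3), [0.5, 0.75])

# Volume of a 1 x 1 x 2 box via V = |a . (b x c)|
a, b, c = np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 2.0])
assert np.isclose(abs(np.dot(a, np.cross(b, c))), 2.0)
```

The midpoint value (0.5, 0.75) follows from the blending weights at t = 0.5, which are (1/8, 3/8, 3/8, 1/8) and sum to 1, so the curve stays inside the convex hull of its control points.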

  47. [47]
    [PDF] 1 Vectors in 2D and 3D - Stanford Mechanics and Computation
    3 Cross product. Definition: Given u and v, we define their cross product as u × v = (u2v3 − v2u3,u3v1 − v3u1,u1v2 − v1u2). A simple mnemonic to remember ...
  48. [48]
    The scalar triple product - Math Insight
    The volume of the parallelepiped is the area of the base times the height. From the geometric definition of the cross product, we know that its magnitude, ∥a× ...
  49. [49]
    [PDF] Lecture 1t Volume of a Parallelepiped (page 56)
    We can calculate the volume of this parallelepiped by computing the scalar triple product of w, v, and u: w ·(u ×v). As we always take volume to be a positive ...
  50. [50]
    Introduction to Shading (Normals, Vertex Normals and Facing Ratio)
    As previously mentioned, the normal of a triangle can be found by computing the cross product of vectors, for example, v 0 v 1 and v 0 v 2 , where v 0 , v 1 , ...
  51. [51]
    [PDF] Shading - Department of Computer Science
    • Normal given by cross product x=x(u,v)=cos u sin v y=y(u,v)=cos u cos v z ... Is it in world coordinates or camera coordinates? 93. E. Angel and D. Shreiner: ...
  52. [52]
    [PDF] Ray Tracing: intersection and shading - Cornell: Computer Science
    Ray intersection involves ray-sphere, ray-box, and ray-triangle intersections. A ray is represented by a point and direction. Ray-sphere intersection is  ...
  53. [53]
    Ray Tracing in One Weekend
    First, let's get ourselves a surface normal so we can shade. This is a vector that is perpendicular to the surface at the point of intersection. We have a key ...Overview · The ray Class · Ray-Sphere Intersection · Simplifying the Ray-Sphere...
  54. [54]
    Basic Lighting - LearnOpenGL
    So when doing lighting calculations, make sure you always normalize the relevant vectors to ensure they're actual unit vectors. Forgetting to normalize a vector ...
  55. [55]
    Introduction to Computer Graphics, Section 7.2 -- Lighting and Material
    An interpolated normal vector is in general only an approximation for the geometrically correct normal, but it's usually good enough to give good results.
  56. [56]
    Collision detection in games in 3D - edge to edge cross product
    Aug 7, 2013 · The cross product of two vectors is a vector perpendicular to both, and normal to the plane that they span. If the two vectors are two ...How to detect 2D line on line collision?Simple 3D collision detection for general polyhedraMore results from gamedev.stackexchange.com
  57. [57]
    numpy.array — NumPy v2.3 Manual
    Create an array. ... Specify the memory layout of the array. If object is not an array, the newly created array will be in C order (row major) unless 'F' is ...
  58. [58]
    [PDF] Bezier Curves and Splines - MIT OpenCourseWare
    More precisely: What's a basis? • A set of “atomic” vectors. – Called basis vectors. – Linear combinations of basis vectors span the space. • i.e. any cubic ...
  59. [59]
    [PDF] Bézier Curves - CUNY Academic Works
    Apr 17, 2024 · LINEAR BÉZIER CURVES. We start with two vectors P1 and P2 in R. 2 . A Linear. Bézier curve (1) is a linear combination of the form. Pt = (1 − t) ...