
Vector multiplication

Vector multiplication refers to a set of operations in linear algebra that combine vectors with scalars or other vectors to yield either a scalar or a new vector, fundamental to fields such as physics, engineering, and computer graphics. The primary forms include scalar multiplication, which scales a vector by a scalar; the dot product (or scalar product), which produces a scalar from two vectors; and the cross product (or vector product), which yields a vector perpendicular to the inputs, applicable mainly in three-dimensional space.

Scalar multiplication of a vector \vec{v} by a scalar k results in a vector k\vec{v} that points in the same direction as \vec{v} if k > 0, the opposite direction if k < 0, or the zero vector if k = 0, with magnitude |k| times that of \vec{v}. This operation is distributive over vector addition and compatible with scalar addition, forming the basis for vector spaces. For example, if \vec{v} = \langle 1, 2, 3 \rangle, then 2\vec{v} = \langle 2, 4, 6 \rangle.

The dot product of two vectors \vec{a} = \langle a_1, a_2, a_3 \rangle and \vec{b} = \langle b_1, b_2, b_3 \rangle is defined as \vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + a_3 b_3, equivalent to \|\vec{a}\| \|\vec{b}\| \cos \theta, where \theta is the angle between them. It is bilinear, commutative, and zero for orthogonal vectors, enabling computations of projections, angles, and work in physics. For instance, \langle 1, 2, 3 \rangle \cdot \langle 4, 5, 6 \rangle = 32.

The cross product \vec{a} \times \vec{b} produces a vector orthogonal to both \vec{a} and \vec{b}, with magnitude \|\vec{a}\| \|\vec{b}\| \sin \theta representing the area of the parallelogram they span, and direction given by the right-hand rule. Computed via the determinant formula \vec{a} \times \vec{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}, it is anti-commutative (\vec{a} \times \vec{b} = -\vec{b} \times \vec{a}) and zero for parallel vectors. This operation is essential for torque, angular momentum, and surface normals in three dimensions.

Multiplication by scalars

Definition and notation

Scalar multiplication is a fundamental operation in vector algebra that involves multiplying a vector by a real number, known as a scalar, which resizes the vector while potentially reversing its direction based on the scalar's sign. Specifically, for a scalar k \in \mathbb{R} and a vector \mathbf{v}, the result is a new vector that lies in the same direction as \mathbf{v} if k > 0, or in the opposite direction if k < 0, with its magnitude equal to |k| times the magnitude of \mathbf{v}. The operation is typically denoted by juxtaposition as k\mathbf{v}, or sometimes explicitly as k \cdot \mathbf{v} to emphasize the multiplication, though the latter notation is used cautiously to avoid confusion with the dot product of two vectors. For a vector \mathbf{v} = (v_1, v_2, \dots, v_n) in \mathbb{R}^n, the scalar multiple is computed component-wise: k\mathbf{v} = (k v_1, k v_2, \dots, k v_n). This component-wise application ensures the result remains in the same vector space. A simple example illustrates the process: scaling the 2D vector (3, 4) by the scalar 2 yields 2(3, 4) = (6, 8), doubling the magnitude while preserving the direction. Similarly, multiplying by the scalar 0 produces the zero vector (0, 0, \dots, 0), regardless of the original vector.
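The component-wise rule can be sketched in a few lines of Python; the helper name `scale` is illustrative, not from any standard library.

```python
def scale(k, v):
    """Return the scalar multiple k*v of a vector v, computed component-wise."""
    return [k * vi for vi in v]

print(scale(2, [3, 4]))     # [6, 8]: doubles the magnitude, keeps the direction
print(scale(0, [1, 2, 3]))  # [0, 0, 0]: multiplying by 0 gives the zero vector
```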

Properties and examples

Scalar multiplication in a vector space satisfies several fundamental algebraic properties that ensure consistency with vector addition and scalar arithmetic. These properties include distributivity over vector addition, where for any scalar k and vectors \mathbf{u}, \mathbf{v}, k(\mathbf{u} + \mathbf{v}) = k\mathbf{u} + k\mathbf{v}; distributivity over scalar addition, given by (k + m)\mathbf{v} = k\mathbf{v} + m\mathbf{v} for scalars k, m and vector \mathbf{v}; and associativity with scalar multiplication, (k m) \mathbf{v} = k (m \mathbf{v}) for scalars k, m and vector \mathbf{v}. Additionally, the multiplicative identity holds as 1 \cdot \mathbf{v} = \mathbf{v} for any vector \mathbf{v}, and multiplication by zero yields the zero vector, 0 \cdot \mathbf{v} = \mathbf{0}. These properties enable the formation of linear combinations, which are expressions of the form k_1 \mathbf{v}_1 + k_2 \mathbf{v}_2 + \cdots + k_n \mathbf{v}_n for scalars k_i and vectors \mathbf{v}_i. For instance, consider vectors \mathbf{u} = (1, 2) and \mathbf{v} = (3, 4) in \mathbb{R}^2; the linear combination 2\mathbf{u} - 3\mathbf{v} computes as 2(1, 2) - 3(3, 4) = (2, 4) - (9, 12) = (-7, -8), illustrating how scalar multiplication scales vectors and addition combines them. Scaling the members of a linearly independent set by non-zero factors preserves its linear independence, since any dependence relation among the scaled vectors would imply one among the originals. The compatibility of scalar multiplication with other scalars follows from associativity, such as k(\lambda \mathbf{v}) = (k \lambda) \mathbf{v} for scalars k, \lambda and vector \mathbf{v}, which underpins the homogeneity of transformations in vector spaces. These rules collectively ensure that scalar multiplication behaves predictably, facilitating the algebraic structure essential for linear algebra applications.
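The worked linear combination 2\mathbf{u} - 3\mathbf{v} can be reproduced directly; the helper names `scale` and `add` are illustrative.

```python
def scale(k, v):
    """Component-wise scalar multiple k*v."""
    return [k * vi for vi in v]

def add(u, v):
    """Component-wise vector sum u + v."""
    return [ui + vi for ui, vi in zip(u, v)]

u, v = [1, 2], [3, 4]
combo = add(scale(2, u), scale(-3, v))  # the linear combination 2u - 3v
print(combo)  # [-7, -8], matching the worked example
```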

The dot product

Definition and computation

The dot product of two vectors \mathbf{u} = (u_x, u_y, u_z) and \mathbf{v} = (v_x, v_y, v_z) in \mathbb{R}^3 is defined as the scalar \mathbf{u} \cdot \mathbf{v} = u_x v_x + u_y v_y + u_z v_z. This algebraic formula extends to any dimension n, where \mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i. The operation is also expressed geometrically as \mathbf{u} \cdot \mathbf{v} = \|\mathbf{u}\| \|\mathbf{v}\| \cos \theta, where \theta is the angle between the vectors and \|\cdot\| denotes the Euclidean norm. It is commutative, so \mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u}, and equals zero if the vectors are orthogonal. For example, the dot product of \langle 1, 2, 3 \rangle and \langle 4, 5, 6 \rangle is 1 \cdot 4 + 2 \cdot 5 + 3 \cdot 6 = 32. The dot product is defined in any inner product space and serves as the standard inner product on \mathbb{R}^n.
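The sum-of-products definition translates directly to code; the helper name `dot` is illustrative.

```python
def dot(u, v):
    """Sum of component-wise products; works in any dimension n."""
    return sum(ui * vi for ui, vi in zip(u, v))

print(dot([1, 2, 3], [4, 5, 6]))  # 32: 1*4 + 2*5 + 3*6
print(dot([1, 0], [0, 1]))        # 0: orthogonal vectors
```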

Geometric interpretation

The dot product \mathbf{u} \cdot \mathbf{v} measures how much the direction of \mathbf{u} aligns with \mathbf{v}. Geometrically, it equals the magnitude of \mathbf{u} times the scalar projection of \mathbf{v} onto \mathbf{u}, that is, \|\mathbf{u}\| \, \mathrm{comp}_{\mathbf{u}} \mathbf{v}. The sign of the result indicates alignment: positive if \theta < 90^\circ, zero if \theta = 90^\circ (orthogonal), and negative if \theta > 90^\circ. The \cos \theta factor in \|\mathbf{u}\| \|\mathbf{v}\| \cos \theta captures the cosine of the angle between the vectors, allowing computation of \theta = \cos^{-1} \left( \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\| \|\mathbf{v}\|} \right). This is useful for determining angles in geometric configurations. The magnitude |\mathbf{u} \cdot \mathbf{v}| relates to the component of one vector along the other, and is maximized when the vectors are parallel. In two dimensions, the dot product similarly computes directional similarity: for vectors \mathbf{u} = (u_x, u_y) and \mathbf{v} = (v_x, v_y), it yields u_x v_x + u_y v_y. Unlike the cross product, which has no direct analog outside three dimensions, the dot product generalizes seamlessly to higher dimensions. The dot product also connects to volumes via the scalar triple product [\mathbf{u}, \mathbf{v}, \mathbf{w}] = \mathbf{u} \cdot (\mathbf{v} \times \mathbf{w}), representing the signed volume of the parallelepiped spanned by the vectors. This equals the determinant of the matrix with columns \mathbf{u}, \mathbf{v}, \mathbf{w}, with the sign indicating orientation.
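The angle formula \theta = \cos^{-1}(\mathbf{u} \cdot \mathbf{v} / (\|\mathbf{u}\| \|\mathbf{v}\|)) can be sketched as follows; the helper names are illustrative.

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    """Euclidean norm |v| = sqrt(v . v)."""
    return math.sqrt(dot(v, v))

def angle(u, v):
    """Angle between u and v in radians, via arccos of the normalized dot product."""
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

print(angle([1, 0], [0, 1]))  # pi/2: orthogonal vectors
print(angle([1, 1], [1, 0]))  # pi/4: a 45-degree angle
```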

Algebraic properties

The dot product is commutative: \mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u} for any vectors \mathbf{u}, \mathbf{v}. It is bilinear, linear in each argument: (\alpha \mathbf{u} + \beta \mathbf{w}) \cdot \mathbf{v} = \alpha (\mathbf{u} \cdot \mathbf{v}) + \beta (\mathbf{w} \cdot \mathbf{v}) and \mathbf{u} \cdot (\alpha \mathbf{v} + \beta \mathbf{w}) = \alpha (\mathbf{u} \cdot \mathbf{v}) + \beta (\mathbf{u} \cdot \mathbf{w}), where \alpha, \beta are scalars. This implies distributivity over vector addition and homogeneity under scalar multiplication. It is positive definite: \mathbf{u} \cdot \mathbf{u} = \|\mathbf{u}\|^2 \geq 0, with equality if and only if \mathbf{u} = \mathbf{0}. The dot product is compatible with scalar multiplication but cannot be iterated with itself, as it yields a scalar rather than a vector. The Cauchy-Schwarz inequality holds: |\mathbf{u} \cdot \mathbf{v}| \leq \|\mathbf{u}\| \|\mathbf{v}\|, with equality exactly when the vectors are linearly dependent. A related result is the Lagrange identity: \|\mathbf{u} \times \mathbf{v}\|^2 = \|\mathbf{u}\|^2 \|\mathbf{v}\|^2 - (\mathbf{u} \cdot \mathbf{v})^2, linking the dot product to the cross product's magnitude in \mathbb{R}^3. Orthogonality is detected by \mathbf{u} \cdot \mathbf{v} = 0. These properties make the dot product the basis for norms, angles, and projections in vector spaces.
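These properties can be checked numerically on sample vectors; a minimal sketch, with random vectors standing in for arbitrary ones.

```python
import math
import random

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

random.seed(0)
u = [random.uniform(-1, 1) for _ in range(3)]
v = [random.uniform(-1, 1) for _ in range(3)]
w = [random.uniform(-1, 1) for _ in range(3)]
a, b = 2.0, -3.0

# Commutativity: u.v = v.u
assert math.isclose(dot(u, v), dot(v, u))

# Linearity in the first argument: (a*u + b*w).v = a(u.v) + b(w.v)
lhs = dot([a * ui + b * wi for ui, wi in zip(u, w)], v)
assert math.isclose(lhs, a * dot(u, v) + b * dot(w, v))

# Positive definiteness and Cauchy-Schwarz
assert dot(u, u) >= 0
assert abs(dot(u, v)) <= norm(u) * norm(v) + 1e-12

print("commutativity, bilinearity, and Cauchy-Schwarz hold on this sample")
```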

The cross product

Definition and computation

The cross product of two vectors \mathbf{u} = (u_x, u_y, u_z) and \mathbf{v} = (v_x, v_y, v_z) in \mathbb{R}^3 is defined as the vector \mathbf{u} \times \mathbf{v} that is orthogonal to both \mathbf{u} and \mathbf{v}, with magnitude \|\mathbf{u}\| \|\mathbf{v}\| \sin \theta, where \theta is the angle between the two vectors. The cross product is computed using the following coordinate formula: \mathbf{u} \times \mathbf{v} = (u_y v_z - u_z v_y) \mathbf{i} - (u_x v_z - u_z v_x) \mathbf{j} + (u_x v_y - u_y v_x) \mathbf{k}, which is equivalent to the determinant of the matrix formed by the standard basis vectors and the components of \mathbf{u} and \mathbf{v}: \mathbf{u} \times \mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ u_x & u_y & u_z \\ v_x & v_y & v_z \end{vmatrix}. This operation is antisymmetric, so \mathbf{u} \times \mathbf{v} = -(\mathbf{v} \times \mathbf{u}), and \mathbf{u} \times \mathbf{u} = \mathbf{0} for any vector \mathbf{u}. If \mathbf{u} and \mathbf{v} are parallel, then \mathbf{u} \times \mathbf{v} = \mathbf{0}. For example, the cross product of the unit vectors (1,0,0) and (0,1,0) yields (0,0,1). The cross product is defined primarily for three-dimensional Euclidean space and has no direct analog in two dimensions or higher dimensions without extensions such as the use of the Levi-Civita symbol in generalized forms.
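The coordinate formula above can be written out directly; the helper name `cross` is illustrative.

```python
def cross(u, v):
    """Cross product in R^3 via the coordinate (cofactor-expansion) formula."""
    ux, uy, uz = u
    vx, vy, vz = v
    return [uy * vz - uz * vy,   # i component
            uz * vx - ux * vz,   # j component (note the sign flip in the formula)
            ux * vy - uy * vx]   # k component

print(cross([1, 0, 0], [0, 1, 0]))  # [0, 0, 1]: i x j = k
print(cross([2, 4, 6], [1, 2, 3]))  # [0, 0, 0]: parallel vectors give the zero vector
```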

Geometric interpretation

The cross product \mathbf{u} \times \mathbf{v} of two vectors in three-dimensional space yields a vector that is perpendicular to both \mathbf{u} and \mathbf{v}, normal to the plane spanned by the original vectors. The direction of \mathbf{u} \times \mathbf{v} is determined by the right-hand rule: point the fingers of the right hand in the direction of \mathbf{u}, curl them toward \mathbf{v} through the smaller angle between the vectors, and the thumb points in the direction of the resulting vector; reversing the order of \mathbf{u} and \mathbf{v} reverses the direction. This orientation convention ensures a consistent handedness in vector operations, distinguishing \mathbf{u} \times \mathbf{v} from \mathbf{v} \times \mathbf{u}. The magnitude of the cross product, \|\mathbf{u} \times \mathbf{v}\| = \|\mathbf{u}\| \|\mathbf{v}\| \sin \theta, where \theta is the angle between \mathbf{u} and \mathbf{v} (with 0 \leq \theta \leq \pi), equals the area of the parallelogram formed by \mathbf{u} and \mathbf{v} as adjacent sides. This geometric significance highlights the cross product's role in measuring oriented areas: the \sin \theta factor captures the perpendicular component of the vectors, vanishing when they are parallel (\theta = 0 or \pi) and maximizing when they are orthogonal (\theta = \pi/2). In visualization, placing the tails of \mathbf{u} and \mathbf{v} at the origin forms the parallelogram, with the cross product vector's length scaling this area directly. A two-dimensional analog of the cross product, often defined as the scalar u_x v_y - u_y v_x, represents the signed area of the parallelogram spanned by the vectors, where the sign indicates orientation (positive for counterclockwise, negative otherwise). This extends naturally to three dimensions, where the full vector encodes both magnitude (area) and direction (normal to the plane).
The cross product facilitates computations involving volumes through the scalar triple product, defined as [\mathbf{u}, \mathbf{v}, \mathbf{w}] = (\mathbf{u} \times \mathbf{v}) \cdot \mathbf{w}, which equals the signed volume of the parallelepiped with edges \mathbf{u}, \mathbf{v}, and \mathbf{w}. Geometrically, this is the area of the base (spanned by \mathbf{u} and \mathbf{v}) multiplied by the height, given by the scalar projection of \mathbf{w} onto the normal \mathbf{u} \times \mathbf{v}; the absolute value |[\mathbf{u}, \mathbf{v}, \mathbf{w}]| yields the unsigned volume. The sign reflects the orientation of the triple, positive if \mathbf{w} aligns with the orientation of \mathbf{u} and \mathbf{v}. In physical applications, such as mechanics, the cross product describes torque as \boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}, where \mathbf{r} is the position vector from the pivot to the force application point and \mathbf{F} is the force vector; the magnitude \|\boldsymbol{\tau}\| = r F \sin \phi (with \phi the angle between \mathbf{r} and \mathbf{F}) represents the rotational effect's strength, while the direction indicates the axis of rotation via the right-hand rule.
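Both the triple product and the torque example can be checked with a few lines; the helper names are illustrative, and the lever values are made-up sample numbers.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    ux, uy, uz = u
    vx, vy, vz = v
    return [uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx]

def triple(u, v, w):
    """Scalar triple product [u, v, w] = (u x v) . w, the signed volume."""
    return dot(cross(u, v), w)

# Unit cube: volume 1; swapping two edges flips the orientation sign.
print(triple([1, 0, 0], [0, 1, 0], [0, 0, 1]))  # 1
print(triple([0, 1, 0], [1, 0, 0], [0, 0, 1]))  # -1

# Torque: a 10 N force applied perpendicular to a 2 m lever arm.
print(cross([2, 0, 0], [0, 10, 0]))  # [0, 0, 20]: 20 N*m about the z-axis
```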

Algebraic properties

The cross product exhibits antisymmetry, satisfying \mathbf{u} \times \mathbf{v} = -(\mathbf{v} \times \mathbf{u}) for any vectors \mathbf{u}, \mathbf{v} in \mathbb{R}^3. This property follows directly from the determinant-based definition and the alternating nature of the determinant used in its computation. The operation is bilinear, meaning it is linear in each argument separately: (\alpha \mathbf{u} + \beta \mathbf{v}) \times \mathbf{w} = \alpha (\mathbf{u} \times \mathbf{w}) + \beta (\mathbf{v} \times \mathbf{w}) and \mathbf{u} \times (\alpha \mathbf{v} + \beta \mathbf{w}) = \alpha (\mathbf{u} \times \mathbf{v}) + \beta (\mathbf{u} \times \mathbf{w}) for scalars \alpha, \beta and vectors \mathbf{u}, \mathbf{v}, \mathbf{w}. This bilinearity implies distributivity over vector addition and homogeneity with respect to scalar multiplication. Unlike ordinary multiplication of numbers, the cross product is non-associative in general, as (\mathbf{u} \times \mathbf{v}) \times \mathbf{w} \neq \mathbf{u} \times (\mathbf{v} \times \mathbf{w}) for arbitrary \mathbf{u}, \mathbf{v}, \mathbf{w}. However, a specific identity holds: (\mathbf{u} \times \mathbf{v}) \times \mathbf{w} = (\mathbf{u} \cdot \mathbf{w}) \mathbf{v} - (\mathbf{v} \cdot \mathbf{w}) \mathbf{u}, which relates the cross product to the dot product and provides a way to expand such expressions. This identity, often called the BAC-CAB rule in its alternative form \mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b} (\mathbf{a} \cdot \mathbf{c}) - \mathbf{c} (\mathbf{a} \cdot \mathbf{b}), underscores the non-associative structure while enabling algebraic manipulations. The result of the cross product is orthogonal to both input vectors: (\mathbf{u} \times \mathbf{v}) \cdot \mathbf{u} = 0 and (\mathbf{u} \times \mathbf{v}) \cdot \mathbf{v} = 0. This orthogonality arises from the antisymmetric and bilinear nature of the operation, as can be verified by expanding the dot products using the coordinate definition. A key relation connecting the cross and dot products is the Lagrange identity: \|\mathbf{u} \times \mathbf{v}\|^2 = \|\mathbf{u}\|^2 \|\mathbf{v}\|^2 - (\mathbf{u} \cdot \mathbf{v})^2.
This equation quantifies the magnitude of the cross product in terms of the input vectors' lengths and their dot product, highlighting the operation's role in measuring perpendicular components.
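The triple-product expansion and the Lagrange identity can both be verified numerically; a minimal sketch with random sample vectors, helper names illustrative.

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def cross(u, v):
    ux, uy, uz = u
    vx, vy, vz = v
    return [uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx]

random.seed(1)
u = [random.uniform(-2, 2) for _ in range(3)]
v = [random.uniform(-2, 2) for _ in range(3)]
w = [random.uniform(-2, 2) for _ in range(3)]

# Triple-product expansion: (u x v) x w = (u.w)v - (v.w)u
lhs = cross(cross(u, v), w)
rhs = [dot(u, w) * vi - dot(v, w) * ui for ui, vi in zip(u, v)]
assert all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(lhs, rhs))

# Lagrange identity: |u x v|^2 = |u|^2 |v|^2 - (u.v)^2
assert math.isclose(norm(cross(u, v)) ** 2,
                    norm(u) ** 2 * norm(v) ** 2 - dot(u, v) ** 2,
                    abs_tol=1e-9)

print("triple-product expansion and Lagrange identity verified")
```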

Applications

In physics

In physics, vector multiplication, particularly the dot and cross products, plays a fundamental role in describing mechanical and electromagnetic phenomena. The dot product quantifies scalar quantities like work and power, while the cross product yields vector quantities such as torque and magnetic forces. These operations were formalized in the late 19th century through Josiah Willard Gibbs's and Oliver Heaviside's development of vector analysis, which provided a concise notation for physical laws and was widely adopted in textbooks by the early 20th century.

The dot product is essential for calculating the work done by a force on an object. The work W is given by W = \mathbf{F} \cdot \mathbf{d}, where \mathbf{F} is the force and \mathbf{d} is the displacement; this accounts for only the component of the force parallel to the displacement. For instance, when pushing a box at an angle, only the force component aligned with the motion contributes to work, as the perpendicular component does no work. Similarly, instantaneous power P is the rate of doing work, expressed as P = \mathbf{F} \cdot \mathbf{v}, where \mathbf{v} is the velocity, representing the force's alignment with the object's motion. Kinetic energy also ties directly to the dot product, with the translational kinetic energy of a particle given by KE = \frac{1}{2} m \mathbf{v} \cdot \mathbf{v} = \frac{1}{2} m v^2, where m is mass and v is speed; this form emerges from integrating work over velocity changes in Newton's laws.

The cross product is crucial for vector quantities involving rotation and perpendicular effects. Torque \boldsymbol{\tau}, which measures rotational force, is \boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}, where \mathbf{r} is the position vector from the pivot to the force application point; its magnitude is r F \sin \theta, with direction following the right-hand rule. In a lever system, applying a force perpendicular to the arm maximizes torque, enabling efficient mechanical advantage. Angular momentum \mathbf{L} for a particle is \mathbf{L} = \mathbf{r} \times \mathbf{p}, where \mathbf{p} = m \mathbf{v} is linear momentum; its conservation in isolated systems explains much of rotational dynamics.
In electromagnetism, the magnetic component of the Lorentz force on a moving charge is \mathbf{F} = q \mathbf{v} \times \mathbf{B}, where q is the charge and \mathbf{B} is the magnetic field vector; this force is perpendicular to the velocity and so causes circular or helical motion without changing speed, as in cyclotrons.
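The work and magnetic-force formulas above can be illustrated with made-up sample values; the helper names `dot` and `cross` are illustrative.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    ux, uy, uz = u
    vx, vy, vz = v
    return [uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx]

# Work W = F . d: only the force component along the displacement contributes.
F = [3.0, 4.0, 0.0]  # newtons (sample values)
d = [2.0, 0.0, 0.0]  # metres
print(dot(F, d))     # 6.0 J: the 4 N y-component does no work

# Magnetic force on a moving charge: F = q (v x B)
q = 1.0                # coulombs (sample values)
vel = [1.0, 0.0, 0.0]  # m/s
B = [0.0, 1.0, 0.0]    # teslas
Fm = [q * c for c in cross(vel, B)]
print(Fm)              # [0.0, 0.0, 1.0]: perpendicular to the velocity
print(dot(Fm, vel))    # 0.0: the magnetic force does no work, so speed is unchanged
```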

In engineering and computer graphics

In engineering and computer graphics, the dot product plays a crucial role in lighting models, where it determines the intensity of reflected light based on the cosine of the angle between a surface normal and the incident light direction. For instance, in the Lambertian model, the diffuse component of illumination is computed as I_d = k_d (\mathbf{N} \cdot \mathbf{L}), where k_d is the diffuse coefficient, \mathbf{N} is the normalized surface normal, and \mathbf{L} is the normalized vector from the surface point to the light source; this ensures that surfaces appear brighter when facing the light and darker when grazing it. This model, foundational in rendering pipelines, simulates matte surfaces by assuming incident light scatters equally in all directions. Similarly, the specular highlight in the Phong model uses dot products to model shiny reflections: the reflection vector is \mathbf{R} = 2 (\mathbf{N} \cdot \mathbf{L}) \mathbf{N} - \mathbf{L}, and the specular term is I_s = k_s (\mathbf{V} \cdot \mathbf{R})^n, where \mathbf{V} is the view direction, k_s is the specular coefficient, and n controls shininess; this captures the concentration of reflected light around the perfect mirror direction.

The cross product is essential for computing surface normals in 3D modeling, particularly for polygonal meshes, where the normal to a triangle defined by vertices A, B, and C is given by \mathbf{n} = (\mathbf{B} - \mathbf{A}) \times (\mathbf{C} - \mathbf{A}), normalized to unit length for consistent shading and orientation. This vector defines the facing direction of the surface, enabling back-face culling and proper light interaction in rendering engines. In orientation tasks, such as determining the up-vector in camera systems or aligning objects, the cross product helps establish coordinate axes, for example, by crossing a forward vector with a world up vector to derive a right vector in view matrices.
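The triangle-normal and Lambertian-diffuse computations can be sketched together; the helper names and sample geometry are illustrative.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    ux, uy, uz = u
    vx, vy, vz = v
    return [uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [c / n for c in v]

# Normal of a triangle in the z = 0 plane (counterclockwise winding).
A, B, C = [0, 0, 0], [1, 0, 0], [0, 1, 0]
N = normalize(cross(sub(B, A), sub(C, A)))
print(N)  # [0.0, 0.0, 1.0]: the +z direction

# Lambertian diffuse term I_d = k_d * max(0, N . L), clamped for back-facing light.
k_d = 0.8  # sample diffuse coefficient
L = normalize([0, 0, 1])  # light directly above the surface
print(k_d * max(0.0, dot(N, L)))  # 0.8: full brightness at normal incidence
```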
Practical examples include ray tracing for scene intersection, where the dot product enables efficient ray-plane tests: for a ray \mathbf{P}(t) = \mathbf{O} + t \mathbf{D} and a plane \mathbf{N} \cdot \mathbf{X} = d, the parameter t = -(\mathbf{N} \cdot (\mathbf{O} - \mathbf{Q})) / (\mathbf{N} \cdot \mathbf{D}) (with \mathbf{Q} a point on the plane) identifies the hit point if t > 0, allowing projections to check whether the intersection lies within bounded primitives like triangles. In quaternion-based rotations, common for smooth interpolations in 3D graphics, the cross product aids in deriving the rotation axis; for instance, to rotate one vector into another, an axis is found via \mathbf{axis} = \mathbf{u} \times \mathbf{v} (normalized), which is then used to construct the quaternion q = \cos(\theta/2) + \sin(\theta/2) \, \mathbf{axis}, avoiding gimbal lock in interpolations. Additionally, dot products support collision detection through orientation checks, such as verifying whether two objects' velocity vectors form an acute angle (\mathbf{v_1} \cdot \mathbf{v_2} > 0) to distinguish glancing from head-on impacts in simulations. In modern graphics APIs and game engines, these operations are optimized for parallel computation, leveraging GPU advances since the early 2000s, including the programmable shaders introduced with NVIDIA's GeForce 3 in 2001, which enabled vector math directly on graphics hardware for per-fragment lighting calculations. Later developments, such as NVIDIA's CUDA platform introduced in 2006, further accelerated vector multiplications, allowing millions of dot and cross products per frame in complex real-time scenes.
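The ray-plane test above can be sketched as follows; the function name `ray_plane` and the epsilon threshold are illustrative choices, not from any particular engine.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def ray_plane(O, D, Q, N, eps=1e-9):
    """Return t such that O + t*D lies on the plane through Q with normal N,
    or None if the ray is parallel to the plane or the hit is behind the origin."""
    denom = dot(N, D)
    if abs(denom) < eps:
        return None  # ray parallel to the plane
    t = -dot(N, sub(O, Q)) / denom
    return t if t > 0 else None

# Ray starting 5 units above the z = 0 plane, pointing straight down: hits at t = 5.
print(ray_plane([0, 0, 5], [0, 0, -1], [0, 0, 0], [0, 0, 1]))  # 5.0
# Ray pointing away from the plane never hits it.
print(ray_plane([0, 0, 5], [0, 0, 1], [0, 0, 0], [0, 0, 1]))   # None
```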
