Vector space
A vector space, also known as a linear space, is a fundamental algebraic structure consisting of a set V of elements called vectors, together with two operations: vector addition and scalar multiplication by elements from a field F (such as the real numbers \mathbb{R} or complex numbers \mathbb{C}).[1] These operations must satisfy ten axioms, including closure under addition and scalar multiplication, associativity and commutativity of addition, the existence of an additive identity (zero vector) and of additive inverses, and distributivity of scalar multiplication over vector addition and field addition.[2] This framework generalizes the properties of arrows in Euclidean space, allowing vectors to represent not just geometric directions and magnitudes but also abstract quantities such as functions, polynomials, or matrices.[3]

The concept of a vector space forms the cornerstone of linear algebra, enabling the study of linear transformations, systems of linear equations, and properties such as basis, dimension, and linear independence.[4] For instance, the dimension of a vector space is the number of vectors in a basis—a linearly independent spanning set—providing a measure of its "size" that is independent of the choice of basis.[5] Subspaces, which are subsets that are themselves vector spaces under the induced operations, play a crucial role in decomposing complex spaces into simpler components.[6]

Vector spaces have broad applications across mathematics, physics, engineering, and computer science. They underpin models in quantum mechanics, where state spaces are Hilbert spaces (complete inner product spaces), and in data analysis, where high-dimensional datasets are treated as points in vector spaces for techniques like principal component analysis.[7] In electrical engineering, signals and images are represented as vectors, with linear operators (matrices) modeling filters and transformations.[8] Their formalization extends intuitive geometric ideas into rigorous theory, facilitating solutions to differential equations and optimization problems in fields like machine learning and computer graphics.[9]

Formal definition
Axioms of vector spaces
A vector space V over a field F is a nonempty set whose elements are called vectors, equipped with two binary operations: vector addition, which combines two vectors to produce another vector in V, and scalar multiplication, which combines a scalar from F with a vector to produce another vector in V.[10][11] Common choices for the field F are the real numbers \mathbb{R} and the complex numbers \mathbb{C}.[10] For vectors \mathbf{u}, \mathbf{v}, \mathbf{w} \in V and scalars \alpha, \beta \in F, addition is denoted \mathbf{u} + \mathbf{v} and scalar multiplication \alpha \mathbf{v}.[10][11] The operations must satisfy ten axioms, which ensure consistent algebraic behavior. These axioms are:
- Closure under addition: \mathbf{u} + \mathbf{v} \in V for all \mathbf{u}, \mathbf{v} \in V.[10][11]
- Associativity of addition: (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}) for all \mathbf{u}, \mathbf{v}, \mathbf{w} \in V.[10][11]
- Commutativity of addition: \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} for all \mathbf{u}, \mathbf{v} \in V.[10][11]
- Existence of zero vector: There exists a vector \mathbf{0} \in V such that \mathbf{u} + \mathbf{0} = \mathbf{u} for all \mathbf{u} \in V.[10][11]
- Existence of additive inverses: For each \mathbf{u} \in V, there exists -\mathbf{u} \in V such that \mathbf{u} + (-\mathbf{u}) = \mathbf{0}.[10][11]
- Closure under scalar multiplication: \alpha \mathbf{v} \in V for all \alpha \in F and \mathbf{v} \in V.[10][11]
- Distributivity of scalar multiplication over vector addition: \alpha (\mathbf{u} + \mathbf{v}) = \alpha \mathbf{u} + \alpha \mathbf{v} for all \alpha \in F and \mathbf{u}, \mathbf{v} \in V.[10][11]
- Distributivity of scalar multiplication over field addition: (\alpha + \beta) \mathbf{v} = \alpha \mathbf{v} + \beta \mathbf{v} for all \alpha, \beta \in F and \mathbf{v} \in V.[10][11]
- Compatibility with field multiplication: \alpha (\beta \mathbf{v}) = (\alpha \beta) \mathbf{v} for all \alpha, \beta \in F and \mathbf{v} \in V.[10][11]
- Multiplicative identity: 1 \cdot \mathbf{v} = \mathbf{v} for all \mathbf{v} \in V, where 1 is the multiplicative identity in F.[10][11]
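As a concrete illustration, the eight equational axioms above can be spot-checked numerically for the familiar example V = \mathbb{R}^3 over F = \mathbb{R}. The sketch below (an illustrative check, not part of any standard library) draws random vectors and scalars with NumPy and compares both sides of each axiom up to floating-point tolerance; the two closure axioms hold automatically here, since adding or scaling NumPy arrays of length 3 always yields another array of length 3.

```python
# Numerical spot-check of the vector space axioms for R^3 over R.
# Illustrative sketch: random sample vectors u, v, w and scalars a, b are
# drawn, and the two sides of each equational axiom are compared with
# np.allclose (exact equality can fail under floating-point rounding).
import numpy as np

rng = np.random.default_rng(0)
u, v, w = (rng.standard_normal(3) for _ in range(3))
a, b = rng.standard_normal(2)
zero = np.zeros(3)  # the zero vector in R^3

checks = {
    "associativity of addition": ((u + v) + w, u + (v + w)),
    "commutativity of addition": (u + v, v + u),
    "additive identity": (u + zero, u),
    "additive inverses": (u + (-u), zero),
    "distributivity over vector addition": (a * (u + v), a * u + a * v),
    "distributivity over field addition": ((a + b) * v, a * v + b * v),
    "compatibility with field multiplication": (a * (b * v), (a * b) * v),
    "multiplicative identity": (1.0 * v, v),
}

for name, (lhs, rhs) in checks.items():
    assert np.allclose(lhs, rhs), f"axiom failed: {name}"
print("all equational axioms hold for these samples")
```

Such a check on random samples can only falsify, never prove, the axioms; establishing that a given set and operations form a vector space requires a symbolic argument for arbitrary vectors and scalars.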