
Vector space

A vector space, also known as a linear space, is a fundamental algebraic structure consisting of a set V of elements called vectors, together with two operations: vector addition and scalar multiplication by elements from a field F (such as the real numbers \mathbb{R} or complex numbers \mathbb{C}). These operations must satisfy ten axioms, including closure under addition and scalar multiplication, associativity and commutativity of addition, the existence of an additive identity (the zero vector) and additive inverses, and distributivity of scalar multiplication over vector addition and field addition. This framework generalizes the properties of arrows in Euclidean space, allowing vectors to represent not just geometric directions and magnitudes but also abstract quantities like functions, polynomials, or matrices.

The concept of a vector space forms the cornerstone of linear algebra, enabling the study of linear transformations, systems of linear equations, and properties such as basis, dimension, and subspaces. For instance, the dimension of a vector space is the number of vectors in a basis (a linearly independent spanning set), providing a measure of its "size" independent of the choice of basis. Subspaces, which are subsets that are themselves vector spaces under the induced operations, play a crucial role in decomposing complex spaces into simpler components.

Vector spaces have broad applications across mathematics, physics, engineering, and computer science, underpinning models in quantum mechanics, where state spaces are Hilbert spaces (complete inner product spaces), and in machine learning, where high-dimensional datasets are treated as points in vector spaces for techniques like principal component analysis. In signal processing, signals and images are represented as vectors, with linear operators (matrices) modeling filters and transformations. Their formalization extends intuitive geometric ideas into rigorous theory, facilitating solutions to differential equations and optimization problems in fields like engineering and economics.

Formal definition

Axioms of vector spaces

A vector space V over a field F is a nonempty set whose elements are called vectors, equipped with two binary operations: vector addition, which combines two vectors to produce another vector in V, and scalar multiplication, which combines an element (scalar) of F with a vector to produce another vector in V. Common choices for the field F include the real numbers \mathbb{R} or the complex numbers \mathbb{C}. The operations must satisfy ten axioms, ensuring consistent algebraic behavior. For vectors \mathbf{u}, \mathbf{v}, \mathbf{w} \in V and scalars \alpha, \beta \in F, the addition operation is denoted \mathbf{u} + \mathbf{v} and the scalar multiplication by \alpha \mathbf{v}. These axioms are:
  1. Closure under addition: \mathbf{u} + \mathbf{v} \in V for all \mathbf{u}, \mathbf{v} \in V.
  2. Associativity of addition: (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}) for all \mathbf{u}, \mathbf{v}, \mathbf{w} \in V.
  3. Commutativity of addition: \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} for all \mathbf{u}, \mathbf{v} \in V.
  4. Existence of zero vector: There exists a vector \mathbf{0} \in V such that \mathbf{u} + \mathbf{0} = \mathbf{u} for all \mathbf{u} \in V.
  5. Existence of additive inverses: For each \mathbf{u} \in V, there exists -\mathbf{u} \in V such that \mathbf{u} + (-\mathbf{u}) = \mathbf{0}.
  6. Closure under scalar multiplication: \alpha \mathbf{v} \in V for all \alpha \in F and \mathbf{v} \in V.
  7. Distributivity of scalar multiplication over vector addition: \alpha (\mathbf{u} + \mathbf{v}) = \alpha \mathbf{u} + \alpha \mathbf{v} for all \alpha \in F and \mathbf{u}, \mathbf{v} \in V.
  8. Distributivity of scalar multiplication over field addition: (\alpha + \beta) \mathbf{v} = \alpha \mathbf{v} + \beta \mathbf{v} for all \alpha, \beta \in F and \mathbf{v} \in V.
  9. Compatibility with field multiplication: \alpha (\beta \mathbf{v}) = (\alpha \beta) \mathbf{v} for all \alpha, \beta \in F and \mathbf{v} \in V.
  10. Multiplicative identity: 1 \cdot \mathbf{v} = \mathbf{v} for all \mathbf{v} \in V, where 1 is the multiplicative identity in F.
The first five axioms establish that (V, +) forms an abelian group under addition.
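
These axioms can be spot-checked numerically for a concrete space such as \mathbb{R}^3. Below is a minimal NumPy sketch; the random sampling and tolerance-based comparisons are illustrative choices, so this is a sanity check on sampled vectors rather than a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))   # three random vectors in R^3
a, b = rng.standard_normal(2)           # two random scalars

# Axioms 2-3: associativity and commutativity of addition
assert np.allclose((u + v) + w, u + (v + w))
assert np.allclose(u + v, v + u)

# Axioms 4-5: zero vector and additive inverses
zero = np.zeros(3)
assert np.allclose(u + zero, u)
assert np.allclose(u + (-u), zero)

# Axioms 7-9: distributivity and compatibility of scalar multiplication
assert np.allclose(a * (u + v), a * u + a * v)
assert np.allclose((a + b) * v, a * v + b * v)
assert np.allclose(a * (b * v), (a * b) * v)

# Axiom 10: multiplicative identity
assert np.allclose(1 * v, v)
```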

Abelian group structure under addition

In a vector space V over a field F, the set V equipped with the binary operation of vector addition + forms an abelian group (V, +). This group structure arises directly from the axioms governing addition in the definition of a vector space, which ensure closure under addition (i.e., u + v \in V for all u, v \in V), associativity of addition ((u + v) + w = u + (v + w) for all u, v, w \in V), commutativity of addition (u + v = v + u for all u, v \in V), the existence of an additive identity (the zero vector 0 \in V such that v + 0 = v for all v \in V), and the existence of additive inverses (for each v \in V, there exists -v \in V such that v + (-v) = 0). The identity 0 + v = v for all v \in V follows from the axioms: 0 + v = (v + (-v)) + v = v + ((-v) + v) = v + 0 = v, using associativity and commutativity together with the inverse property.

The zero vector and additive inverses are unique. Suppose 0' is another element satisfying v + 0' = v for all v \in V. Then 0 + 0' = 0 (taking v = 0), while 0 + 0' = 0' + 0 = 0' by commutativity and the identity property of 0; hence 0' = 0. Similarly, if w satisfies v + w = 0, then w = -v: adding -v to both sides yields (w + v) + (-v) = 0 + (-v), so w + (v + (-v)) = -v, hence w + 0 = -v and w = -v. The additive group (V, +) relates to the field's additive group (F, +) in that both are abelian groups, with V's structure extending F's through the action of scalars, preserving commutativity and other properties inherited from F.

Basic properties and operations

Scalar multiplication properties

Scalar multiplication in a vector space V over a field F associates each scalar \alpha \in F and vector \mathbf{v} \in V with a vector \alpha \mathbf{v} \in V, satisfying specific axioms that ensure compatibility with the underlying addition structure. These properties include distributivity over vector addition, given by \alpha (\mathbf{u} + \mathbf{w}) = \alpha \mathbf{u} + \alpha \mathbf{w} for all \alpha \in F and \mathbf{u}, \mathbf{w} \in V, which aligns the scaling operation with the abelian group structure under addition. Similarly, distributivity over scalar addition holds: (\alpha + \beta) \mathbf{v} = \alpha \mathbf{v} + \beta \mathbf{v} for all \alpha, \beta \in F and \mathbf{v} \in V. Homogeneity, or compatibility with field multiplication, ensures that scalar multiplications compose appropriately: (\alpha \beta) \mathbf{v} = \alpha (\beta \mathbf{v}) for all \alpha, \beta \in F and \mathbf{v} \in V. The multiplicative identity in the field acts as the identity for scalar multiplication: 1 \cdot \mathbf{v} = \mathbf{v} for all \mathbf{v} \in V. Additionally, multiplication by the zero scalar yields the zero vector: 0 \cdot \mathbf{v} = \mathbf{0} for all \mathbf{v} \in V. To see this, note that \mathbf{v} = 1 \cdot \mathbf{v} = (1 + 0) \mathbf{v} = 1 \cdot \mathbf{v} + 0 \cdot \mathbf{v} = \mathbf{v} + 0 \cdot \mathbf{v}, so by the cancellation property of addition, 0 \cdot \mathbf{v} = \mathbf{0}. These axioms lead to further corollaries, such as the behavior of multiplication by negative scalars. Specifically, (-1) \mathbf{v} = -\mathbf{v} for all \mathbf{v} \in V, where -\mathbf{v} is the additive inverse of \mathbf{v}. This follows from \mathbf{v} + (-1) \mathbf{v} = (1 + (-1)) \mathbf{v} = 0 \cdot \mathbf{v} = \mathbf{0}, confirming that (-1) \mathbf{v} serves as the additive inverse of \mathbf{v}. Another consequence is that \alpha \cdot \mathbf{0} = \mathbf{0} for all \alpha \in F, derived from \alpha \cdot \mathbf{0} = \alpha (\mathbf{0} + \mathbf{0}) = \alpha \cdot \mathbf{0} + \alpha \cdot \mathbf{0}, implying \alpha \cdot \mathbf{0} = \mathbf{0} by cancellation. For a fixed scalar \alpha \in F, the map T: V \to V defined by T(\mathbf{v}) = \alpha \mathbf{v} preserves both addition and scalar multiplication, making it a linear transformation: T(\mathbf{u} + \mathbf{w}) = \alpha (\mathbf{u} + \mathbf{w}) = \alpha \mathbf{u} + \alpha \mathbf{w} = T(\mathbf{u}) + T(\mathbf{w}) and T(\beta \mathbf{v}) = \alpha (\beta \mathbf{v}) = (\alpha \beta) \mathbf{v} = \beta (\alpha \mathbf{v}) = \beta T(\mathbf{v}) for all \beta \in F and \mathbf{u}, \mathbf{w}, \mathbf{v} \in V.
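
The derived corollaries lend themselves to a quick numerical illustration. The following sketch, assuming NumPy and using \mathbb{R}^4 as a stand-in for a general vector space, checks 0 \cdot v = 0, (-1)v = -v, \alpha \cdot 0 = 0, and the linearity of the scaling map T.

```python
import numpy as np

rng = np.random.default_rng(1)
v, u, w = rng.standard_normal((3, 4))   # three vectors in R^4
alpha, beta = 2.5, -0.75

# Derived corollaries of the scalar multiplication axioms
assert np.allclose(0 * v, np.zeros(4))                 # 0 . v = 0
assert np.allclose(-1 * v, -v)                         # (-1) . v = -v
assert np.allclose(alpha * np.zeros(4), np.zeros(4))   # alpha . 0 = 0

# T(v) = alpha * v is a linear transformation
def T(x):
    return alpha * x

assert np.allclose(T(u + w), T(u) + T(w))      # additivity
assert np.allclose(T(beta * v), beta * T(v))   # homogeneity
```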

Vector addition identities

Vector addition in a vector space V satisfies the axioms of an abelian group, ensuring that the operation is both commutative and associative. Commutativity states that for all vectors \mathbf{u}, \mathbf{v} \in V, \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}. This axiom guarantees that the order of addition does not affect the result, mirroring the behavior observed in familiar examples like displacement vectors in the plane. Associativity further ensures that the grouping of vectors in a sum is irrelevant: for all \mathbf{u}, \mathbf{v}, \mathbf{w} \in V, (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}). This property allows for unambiguous extension of addition to any finite number of vectors, as the result remains consistent regardless of parenthesization. Like commutativity, associativity is a defining property of the additive structure in vector spaces. From these axioms, along with the existence of the zero vector and additive inverses, several derived identities follow, including the cancellation law. This law asserts that if \mathbf{x} + \mathbf{y} = \mathbf{x}' + \mathbf{y} for \mathbf{x}, \mathbf{x}', \mathbf{y} \in V, then \mathbf{x} = \mathbf{x}'. To prove this, add the additive inverse -\mathbf{y} to both sides of the equation: (\mathbf{x} + \mathbf{y}) + (-\mathbf{y}) = (\mathbf{x}' + \mathbf{y}) + (-\mathbf{y}). By associativity, this simplifies to \mathbf{x} + (\mathbf{y} + (-\mathbf{y})) = \mathbf{x}' + (\mathbf{y} + (-\mathbf{y})), and since \mathbf{y} + (-\mathbf{y}) = \mathbf{0}, where \mathbf{0} is the zero vector, it further reduces to \mathbf{x} + \mathbf{0} = \mathbf{x}' + \mathbf{0}. Finally, as \mathbf{x} + \mathbf{0} = \mathbf{x} and \mathbf{x}' + \mathbf{0} = \mathbf{x}' by the zero vector axiom, \mathbf{x} = \mathbf{x}'. This derivation relies solely on the group axioms for addition and underscores the uniqueness properties implied by the abelian group structure.

Examples

Coordinate spaces over fields

A coordinate space over a field F, denoted F^n, is the set of all ordered n-tuples (a_1, \dots, a_n) where each a_i \in F, equipped with componentwise vector addition defined by (a_1, \dots, a_n) + (b_1, \dots, b_n) = (a_1 + b_1, \dots, a_n + b_n) and scalar multiplication defined by k(a_1, \dots, a_n) = (ka_1, \dots, ka_n) for k \in F. This structure satisfies the vector space axioms over F, providing a fundamental example of a finite-dimensional vector space. Common instances include \mathbb{R}^n over the real numbers and \mathbb{C}^n over the complex numbers, where the operations inherit the field properties of \mathbb{R} and \mathbb{C}. To illustrate, consider \mathbb{R}^2 as a prototypical case. The axioms hold via componentwise operations: addition is commutative since (x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2) = (x_2 + x_1, y_2 + y_1); associative by the field's associativity; the zero vector is (0, 0); the additive inverse of (x, y) is (-x, -y); scalar multiplication distributes over addition in scalars and vectors, associates properly, and satisfies the identity 1 \cdot (x, y) = (x, y). Similarly, for \mathbb{R}^3, the verification is analogous, with operations on triples (x, y, z) proceeding componentwise to confirm all ten axioms, leveraging the field properties of \mathbb{R}. Geometrically, in \mathbb{R}^2, elements can be visualized as directed line segments (arrows) in the plane, originating from the origin, with addition represented by placing the tail of one arrow at the head of the other, so that the sum runs from the first tail to the last head (equivalently, the diagonal of the parallelogram spanned by the two arrows). This interpretation highlights the intuitive role of coordinate spaces in modeling physical quantities like forces and velocities, though it emphasizes the algebraic structure without delving into metrics or inner products. The dimension of F^n is n, directly reflecting the number of independent coordinates required to specify each vector.

Function spaces

Function spaces provide examples of infinite-dimensional vector spaces where the elements are functions, equipped with pointwise addition and scalar multiplication. These spaces illustrate how abstract algebraic structures can apply to continuous or polynomial mappings, extending the concept of vectors beyond finite coordinates. A prominent example is the space C[0,1], consisting of all continuous real-valued functions on the closed interval [0,1], with the field of scalars being the real numbers \mathbb{R}. Addition and scalar multiplication are defined pointwise: for functions f, g \in C[0,1] and scalar \alpha \in \mathbb{R}, the sum (f + g)(x) = f(x) + g(x) and the scaled function (\alpha f)(x) = \alpha f(x) for all x \in [0,1]. This structure ensures closure under these operations, as the sum and scalar multiple of continuous functions remain continuous. Another key example is the space \mathbb{P} of all polynomials with real coefficients, viewed as functions from \mathbb{R} to \mathbb{R}. This space is closed under pointwise addition and scalar multiplication, since the sum of two polynomials is a polynomial and scalar multiplication distributes over coefficients. For instance, if p(x) = a_0 + a_1 x + \cdots + a_n x^n and q(x) = b_0 + b_1 x + \cdots + b_m x^m, then (p + q)(x) = (a_0 + b_0) + (a_1 + b_1) x + \cdots, which is again a polynomial. To confirm these are vector spaces, the operations must satisfy the axioms outlined in the formal definition, such as associativity of addition, existence of a zero vector (the constant function 0), and distributivity of scalar multiplication over vector addition. For C[0,1], the zero vector is the zero function, and additive inverses exist as (-f)(x) = -f(x), which is continuous; similar verifications hold for commutativity and the scalar properties, leveraging the field structure of \mathbb{R}. In \mathbb{P}, the zero polynomial serves as the additive identity, and inverses are obtained by negating coefficients, with all axioms following from polynomial arithmetic. Both spaces are thus infinite-dimensional, as they contain linearly independent sets of arbitrary finite size, such as the monomials. A concrete illustration in \mathbb{P} involves linear combinations of basis-like functions, such as the monomials 1, x, x^2. Any polynomial, like 3 + 2x - x^2, can be expressed as 3 \cdot 1 + 2 \cdot x + (-1) \cdot x^2, demonstrating how scalar multiples and sums generate elements within the space. This extends indefinitely to higher degrees, underscoring the infinite dimensionality.
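
A small Python sketch can make the pointwise operations concrete. The helper names add and scale below are illustrative, and closures (lambdas) stand in for elements of C[0,1]; continuity of the results is inherited from the continuity of the ingredients, as noted above.

```python
import numpy as np

def add(f, g):
    """Pointwise sum (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(alpha, f):
    """Pointwise scalar multiple (alpha f)(x) = alpha * f(x)."""
    return lambda x: alpha * f(x)

f = np.sin                    # continuous on [0, 1]
g = lambda x: x ** 2          # continuous on [0, 1]
h = add(scale(3.0, f), g)     # 3*sin(x) + x^2, again continuous

xs = np.linspace(0.0, 1.0, 5)
print([h(x) for x in xs])     # evaluate the combined "vector" pointwise
```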

Polynomial rings as vector spaces

The set of all polynomials with real coefficients, denoted \mathbb{R}[x], forms a vector space over the field \mathbb{R}. Each element of \mathbb{R}[x] is a polynomial of the form p(x) = \sum_{k=0}^n a_k x^k, where a_k \in \mathbb{R} for each k and n is a non-negative integer (degrees may be arbitrarily large, but each individual polynomial has finite degree). Vector addition is defined componentwise on the coefficients: for polynomials p(x) = \sum_{k=0}^n a_k x^k and q(x) = \sum_{k=0}^m b_k x^k (padding with zeros if necessary), (p + q)(x) = \sum_{k=0}^{\max(n,m)} (a_k + b_k) x^k. Scalar multiplication by \lambda \in \mathbb{R} scales the coefficients: (\lambda p)(x) = \sum_{k=0}^n (\lambda a_k) x^k. These operations satisfy the vector space axioms, with the zero polynomial as the additive identity and negation given by multiplying by -1. A basis for \mathbb{R}[x] is the set of monomials \{1, x, x^2, \dots \}, which is countably infinite. Every p(x) = \sum_{k=0}^n a_k x^k can be uniquely expressed as a finite linear combination p(x) = a_0 \cdot 1 + a_1 \cdot x + \dots + a_n \cdot x^n of these basis elements, confirming that they span \mathbb{R}[x] and are linearly independent (no finite nontrivial linear combination equals the zero polynomial). The infinitude of this basis implies that \mathbb{R}[x] is infinite-dimensional as a vector space over \mathbb{R}, meaning it has no finite basis. For each non-negative integer n, the subset \mathbb{R}[x]_n consisting of all polynomials of degree at most n is a finite-dimensional subspace of \mathbb{R}[x]. This subspace has basis \{1, x, x^2, \dots, x^n\} and dimension n+1, as any such polynomial is a unique linear combination of these n+1 monomials. The collection of all such subspaces \mathbb{R}[x]_n for n = 0, 1, 2, \dots forms an increasing chain of subspaces whose union is \mathbb{R}[x], underscoring the infinite-dimensional nature of the full space. Although \mathbb{R}[x] is a ring under the usual polynomial multiplication, the vector space structure considered here focuses solely on addition and scaling of coefficients, independent of the multiplicative operation. This perspective aligns \mathbb{R}[x] with other infinite-dimensional examples like certain function spaces, but emphasizes its algebraic simplicity via explicit bases.
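
The coefficient-wise operations can be sketched directly on coefficient lists. The helper names poly_add and poly_scale below are hypothetical, introduced only for illustration; a list [a_0, a_1, \dots] represents a_0 + a_1 x + \cdots.

```python
def poly_add(p, q):
    """Add polynomials given as coefficient lists [a_0, a_1, ...],
    padding the shorter list with zeros."""
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(lam, p):
    """Scale every coefficient by lam."""
    return [lam * a for a in p]

p = [3.0, 2.0, -1.0]         # 3 + 2x - x^2
q = [0.0, 1.0, 0.0, 4.0]     # x + 4x^3

print(poly_add(p, q))        # [3.0, 3.0, -1.0, 4.0]
print(poly_scale(-2.0, p))   # [-6.0, -4.0, 2.0]
```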

Basis, dimension, and coordinates

Linear independence and spanning sets

A linear combination of vectors v_1, v_2, \dots, v_k in a vector space V over a field F is any vector of the form \sum_{i=1}^k \alpha_i v_i, where \alpha_i \in F are scalars. The set of all linear combinations of a nonempty subset S = \{ v_1, v_2, \dots, v_k \} \subseteq V is called the span of S, denoted \operatorname{span}(S). The subset S is a spanning set for V if \operatorname{span}(S) = V, meaning every vector in V can be expressed as a linear combination of elements from S. A subset \{ v_1, v_2, \dots, v_k \} \subseteq V is linearly independent if the only solution to the equation \sum_{i=1}^k \alpha_i v_i = 0 is the trivial solution where all scalars \alpha_i = 0. Equivalently, the set is linearly dependent if there exists a nontrivial linear dependence \sum_{i=1}^k \alpha_i v_i = 0 with at least one \alpha_i \neq 0. In the coordinate space \mathbb{R}^n over the field \mathbb{R}, the standard basis S = \{ e_1, e_2, \dots, e_n \}, where e_i has a 1 in the i-th position and 0 elsewhere, is a spanning set because every vector (x_1, x_2, \dots, x_n) equals \sum_{i=1}^n x_i e_i. This set is also linearly independent, as the equation \sum_{i=1}^n \alpha_i e_i = 0 implies all \alpha_i = 0. However, a proper subset of S, such as \{ e_1, e_2, \dots, e_{n-1} \}, is linearly independent but does not span \mathbb{R}^n, since vectors with nonzero n-th coordinate cannot be expressed as their linear combinations.
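
Linear independence and spanning in F^n reduce to a rank computation when the vectors are stacked into a matrix. The following NumPy sketch (function names illustrative) tests both conditions for the standard basis of \mathbb{R}^3 and for a dependent set.

```python
import numpy as np

def is_independent(vectors):
    """Vectors (rows) are linearly independent iff the matrix
    they form has rank equal to the number of vectors."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == A.shape[0]

def spans_Rn(vectors, n):
    """Vectors span R^n iff their matrix has rank n."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == n

e = np.eye(3)
print(is_independent(e))        # True: standard basis of R^3
print(spans_Rn(e, 3))           # True
print(spans_Rn(e[:2], 3))       # False: e1, e2 alone miss one direction
print(is_independent([e[0], e[1], e[0] + e[1]]))  # False: dependent set
```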

Hamel bases and dimension

A basis for a vector space V over a field K is a subset B \subseteq V that is linearly independent and spans V, meaning every element of V can be expressed as a finite linear combination of elements from B. In finite-dimensional spaces, bases consist of finitely many elements, but in infinite-dimensional spaces, bases may be infinite and are specifically termed Hamel bases to distinguish them from other notions of bases used in functional analysis, such as Schauder bases. The concept of a Hamel basis originates from the work of Georg Hamel, who demonstrated its existence for \mathbb{R} as a vector space over \mathbb{Q} in 1905. The existence of a Hamel basis for any vector space V (including the zero space, where the empty set serves as the basis) is established using Zorn's lemma, a consequence of the axiom of choice in set theory. Consider the collection of all linearly independent subsets of V, partially ordered by inclusion; any chain in this poset has an upper bound given by its union, which remains linearly independent. By Zorn's lemma, there exists a maximal linearly independent subset B, and maximality implies that B spans V, as adjoining any additional vector would violate linear independence. The dimension of a vector space V, denoted \dim(V), is defined as the cardinality of any Hamel basis of V; if this cardinality is finite, V is finite-dimensional, whereas infinite cardinality indicates an infinite-dimensional space. This definition is unambiguous because any two Hamel bases of V have the same cardinality: in the finite case this follows from the exchange lemma (a linearly independent set can never be larger than a spanning set), and in the infinite case from comparing the finite supports of basis expansions, since every element of one basis is a finite linear combination of elements of the other. A key consequence is that any linearly independent subset S \subseteq V can be extended to a Hamel basis of V. To see this, apply Zorn's lemma to the poset of linearly independent subsets containing S, ordered by inclusion; chains have unions as upper bounds, so a maximal element B containing S must span V. This extension property underscores the foundational role of bases in vector space theory.

Coordinate representations

In a finite-dimensional vector space V over a field F with basis B = \{e_1, \dots, e_n\}, every vector v \in V can be expressed uniquely as a linear combination v = \sum_{i=1}^n x_i e_i, where the scalars x_i \in F. The ordered tuple (x_1, \dots, x_n) is called the coordinate representation of v with respect to B, denoted [v]_B \in F^n. The coordinate map \phi_B: F^n \to V defined by \phi_B(x_1, \dots, x_n) = \sum_{i=1}^n x_i e_i is an isomorphism of vector spaces, ensuring a one-to-one correspondence between vectors in V and their coordinate tuples. The uniqueness of coordinates follows directly from the definition of a basis: the set B spans V, so every v has at least one representation as a linear combination, while linear independence ensures no two distinct combinations yield the same vector. In infinite-dimensional spaces, coordinate representations require additional care (for a Hamel basis, each vector has finitely many nonzero coordinates), but in finite dimensions, uniqueness holds for any basis. To relate coordinates across different bases, suppose B' is another basis for V. Let P be the n \times n matrix over F whose columns are the coordinate vectors [e'_1]_B, \dots, [e'_n]_B, where \{e'_1, \dots, e'_n\} = B'. Then the coordinates transform via [v]_{B'} = P^{-1} [v]_B, where P is invertible because its columns are the coordinates of a basis and hence linearly independent. This formula introduces the change-of-basis matrix without altering the intrinsic properties of v. A concrete example occurs in the real vector space \mathbb{R}^n with the standard basis E = \{e_1, \dots, e_n\}, where e_i has 1 in the i-th position and 0 elsewhere. For any v = (v_1, \dots, v_n) \in \mathbb{R}^n, the coordinates [v]_E = (v_1, \dots, v_n) coincide with the usual component representation, simplifying computations in Euclidean space.
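
Computing coordinates with respect to a non-standard basis amounts to solving a linear system. A brief NumPy sketch, with an arbitrarily chosen basis of \mathbb{R}^2 for illustration:

```python
import numpy as np

# Basis B' of R^2: columns of P are [e'_1]_E and [e'_2]_E
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

v = np.array([3.0, 2.0])     # coordinates of v in the standard basis E

# Coordinates of v with respect to B': solve P x = v, i.e. x = P^{-1} v
x = np.linalg.solve(P, v)
print(x)                     # [1. 2.]  so v = 1*e'_1 + 2*e'_2

# Reconstruct v from its B'-coordinates
assert np.allclose(P @ x, v)
```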

Subspaces and quotient spaces

Defining subspaces

A subspace of a vector space V over a field F is a subset W \subseteq V that contains the zero vector and is closed under vector addition and scalar multiplication by elements of F. This means that for all \mathbf{u}, \mathbf{v} \in W and all c \in F, both \mathbf{u} + \mathbf{v} \in W and c \mathbf{u} \in W. Equivalently, W is a subspace if it is a vector space in its own right under the addition and scalar multiplication operations induced from V. Common examples of subspaces include the trivial subspace \{\mathbf{0}\}, which consists solely of the zero vector and satisfies the conditions trivially, and the entire space V itself. Another important example is the span of a subset S \subseteq V, denoted \operatorname{span}(S), which is the set of all finite linear combinations of elements from S and forms the smallest subspace containing S. Additionally, the solution set to a system of homogeneous linear equations A\mathbf{x} = \mathbf{0}, where A is a matrix over F, is a subspace of the coordinate space, as it is closed under addition and scalar multiplication. A key property is that the intersection of any collection of subspaces of V is itself a subspace of V. To see this, note that the zero vector belongs to every subspace, and if \mathbf{u}, \mathbf{v} are in the intersection, then \mathbf{u} + \mathbf{v} and c\mathbf{u} remain in each original subspace, hence in their intersection. In contrast, the union of two subspaces is generally not a subspace unless one is contained in the other; for instance, the union of the x-axis and y-axis in \mathbb{R}^2 fails closure under addition, as (1,0) + (0,1) = (1,1) lies outside both.
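
The homogeneous-solution example can be made concrete: the null space of a matrix A is a subspace, and closure can be spot-checked numerically. The null_space helper below is an illustrative implementation via the SVD (SciPy provides a similar routine, but plain NumPy suffices here).

```python
import numpy as np

def null_space(A, tol=1e-12):
    """Orthonormal basis for {x : A x = 0}, computed via the SVD."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T          # columns span the null space

A = np.array([[1.0, 2.0, 3.0]])     # one homogeneous equation in R^3
N = null_space(A)                    # 2-dimensional subspace of R^3

# Closure: sums and scalar multiples of solutions are again solutions
u, w = N[:, 0], N[:, 1]
assert np.allclose(A @ (u + w), 0)
assert np.allclose(A @ (2.5 * u), 0)
```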

Quotient space construction

Given a subspace W of a vector space V over a field F, the cosets of W in V are the sets of the form v + W = \{v + w \mid w \in W\} for v \in V. These cosets partition V into equivalence classes, where two vectors v_1, v_2 \in V are equivalent modulo W if v_1 - v_2 \in W. The quotient space V/W is the set of all such cosets \{v + W \mid v \in V\}, equipped with vector space operations defined by (v + W) + (u + W) = (v + u) + W for addition and \alpha (v + W) = \alpha v + W for scalar multiplication by \alpha \in F. These operations are well-defined, independent of the choice of representatives, because if v' + W = v + W and u' + W = u + W, then v' = v + w_1 and u' = u + w_2 for some w_1, w_2 \in W, so v' + u' + W = v + u + (w_1 + w_2) + W = v + u + W. To verify that V/W is a vector space over F, the operations satisfy the vector space axioms: addition is associative and commutative, with zero element 0 + W = W and additive inverse -v + W; scalar multiplication distributes over vector addition and scalar addition, and satisfies \alpha (\beta (v + W)) = (\alpha \beta)(v + W) and 1 \cdot (v + W) = v + W. Closure follows from the definitions, and all properties inherit from those of V. If V is finite-dimensional, the dimension theorem states that \dim(V) = \dim(W) + \dim(V/W). To see this, extend a basis of W to a basis of V, and the images of the additional basis vectors under the quotient map form a basis for V/W. The natural projection \pi: V \to V/W given by \pi(v) = v + W is a surjective linear map with kernel W.
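
A minimal illustration of the quotient construction: take V = \mathbb{R}^2 and W the x-axis, so each coset is determined by its y-coordinate. The coset_rep function below is an illustrative encoding of cosets by canonical representatives, and the checks confirm that coset addition is independent of representatives.

```python
import numpy as np

# Quotient of V = R^2 by the subspace W = span{(1, 0)}.
# Two vectors are equivalent mod W iff they differ only in the
# x-coordinate, so the y-coordinate picks a canonical representative.

def coset_rep(v):
    """Canonical representative (0, y) of the coset v + W."""
    return np.array([0.0, v[1]])

v, u = np.array([3.0, 1.0]), np.array([-2.0, 4.0])

# Well-defined addition on cosets: (v + W) + (u + W) = (v + u) + W
assert np.allclose(coset_rep(v + u), coset_rep(v) + coset_rep(u))

# Changing representative within a coset does not change the sum
w = np.array([100.0, 0.0])               # w lies in W
assert np.allclose(coset_rep((v + w) + u), coset_rep(v + u))

# dim(V) = dim(W) + dim(V/W): here 2 = 1 + 1
print(coset_rep(v))    # [0. 1.] -- the coset of v corresponds to y = 1
```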

Linear transformations

Definition and properties

A linear transformation, also known as a linear map, from a vector space V to a vector space W over the same field is a function T: V \to W that preserves vector addition and scalar multiplication. Specifically, for all u, v \in V and scalars \alpha in the field, T(u + v) = T(u) + T(v) and T(\alpha v) = \alpha T(v). The kernel of T, denoted \ker(T), is the set of all vectors in V that map to the zero vector in W, i.e., \ker(T) = \{ v \in V \mid T(v) = 0 \}, which forms a subspace of V. The image of T, denoted \operatorname{im}(T), is the set of all vectors in W that are outputs of T, i.e., \operatorname{im}(T) = \{ T(v) \mid v \in V \}, which forms a subspace of W. A key property is that T is linear if and only if it preserves arbitrary finite linear combinations: for any finite collection of vectors v_1, \dots, v_n \in V and scalars \alpha_1, \dots, \alpha_n, T\left( \sum_{i=1}^n \alpha_i v_i \right) = \sum_{i=1}^n \alpha_i T(v_i). Additionally, T is injective (one-to-one) if and only if its kernel is the trivial subspace \{0\}. An isomorphism between vector spaces V and W is a bijective linear map T: V \to W whose inverse T^{-1}: W \to V is also linear. Such maps establish that V and W have the same structure as vector spaces.

Kernel, image, and rank-nullity theorem

For a linear transformation T: V \to W between vector spaces over a field, the kernel of T, denoted \ker(T), is the set \{v \in V \mid T(v) = 0\}. This set forms a subspace of the domain V, as it is closed under addition and scalar multiplication, and contains the zero vector. Similarly, the image of T, denoted \operatorname{im}(T), is the set \{T(v) \mid v \in V\}, which is a subspace of the codomain W because the image of a linear combination is the linear combination of the images. The nullity of T, denoted n(T), is defined as the dimension of \ker(T). The rank of T, denoted r(T), is the dimension of \operatorname{im}(T). These quantities measure the "degeneracy" and "reach" of the transformation, respectively: a higher nullity indicates more vectors are mapped to zero, while a higher rank reflects a larger subspace spanned by the outputs. The rank-nullity theorem states that if V is finite-dimensional, then \dim(V) = r(T) + n(T). This fundamental result connects the dimensions of the domain, kernel, and image, providing insight into the structure of linear maps. To prove the rank-nullity theorem, suppose \dim(V) = n < \infty and let k = n(T) = \dim(\ker(T)). Choose a basis \{u_1, \dots, u_k\} for \ker(T). Extend this to a basis \{u_1, \dots, u_k, v_1, \dots, v_m\} for V, where m = n - k. Since T(u_i) = 0 for i = 1, \dots, k, the images T(v_1), \dots, T(v_m) lie in \operatorname{im}(T). These images form a spanning set for \operatorname{im}(T), as any T(w) for w \in V can be expressed using the basis coefficients. Moreover, \{T(v_1), \dots, T(v_m)\} is linearly independent: if \sum c_j T(v_j) = 0, then T(\sum c_j v_j) = 0, so \sum c_j v_j \in \ker(T); expressing this element in terms of the u_i and invoking the linear independence of the full basis forces all c_j = 0. Thus, \dim(\operatorname{im}(T)) = m, so r(T) = m = n - k, and \dim(V) = r(T) + n(T). As a consequence, if \dim(V) = n < \infty, then r(T) \leq n, since n(T) \geq 0. Additionally, r(T) \leq \dim(W) because \operatorname{im}(T) is a subspace of W. Therefore, r(T) \leq \min(n, \dim(W)).
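
The theorem is easy to verify numerically for a matrix transformation T(x) = Ax, computing the rank with NumPy and the nullity from the singular values; the matrix below is constructed with deliberately dependent columns for illustration.

```python
import numpy as np

# T : R^5 -> R^3 given by a matrix whose columns satisfy
# col3 = col1 + col2, col4 = 2*col1, col5 = 0
A = np.array([[1.0, 0.0, 1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 2.0, 2.0, 0.0]])

n = A.shape[1]                        # dimension of the domain
rank = np.linalg.matrix_rank(A)       # dimension of the image

_, s, _ = np.linalg.svd(A)
nullity = n - int(np.sum(s > 1e-12))  # dimension of the kernel

print(rank, nullity, n)               # 2 3 5
assert rank + nullity == n            # rank-nullity theorem
```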

Matrix representations

Linear maps as matrices

Given finite-dimensional vector spaces V and W over the same field, with \dim V = n and \dim W = m, and ordered bases \mathcal{B} = \{e_1, \dots, e_n\} for V and \mathcal{C} = \{f_1, \dots, f_m\} for W, any linear map T: V \to W can be represented by an m \times n matrix A relative to these bases. The columns of A are the coordinate vectors [T(e_i)]_{\mathcal{C}} with respect to \mathcal{C}, for i = 1, \dots, n. This matrix A encodes the action of T on coordinate representations: for any vector v \in V, the coordinate vector [T(v)]_{\mathcal{C}} equals A [v]_{\mathcal{B}}, where [v]_{\mathcal{B}} is the coordinate vector of v with respect to \mathcal{B}. This correspondence arises because T(v) = T\left( \sum_{i=1}^n x_i e_i \right) = \sum_{i=1}^n x_i T(e_i), and expressing the images T(e_i) in coordinates yields the matrix-vector product. The matrix A is unique for the given bases \mathcal{B} and \mathcal{C}, as the coordinate representations [T(e_i)]_{\mathcal{C}} are uniquely determined by the spanning and linear independence properties of the bases. For a concrete example, consider the rotation linear map R_\theta: \mathbb{R}^2 \to \mathbb{R}^2 by an angle \theta counterclockwise, with respect to the standard basis \mathcal{E} = \{e_1 = (1,0), e_2 = (0,1)\}. The matrix of R_\theta is \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}, since R_\theta(e_1) = (\cos \theta, \sin \theta) and R_\theta(e_2) = (-\sin \theta, \cos \theta). Applying this matrix to a vector (x,y) yields the rotated coordinates (\cos \theta \cdot x - \sin \theta \cdot y, \sin \theta \cdot x + \cos \theta \cdot y).
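
The rotation example translates directly into code. A short NumPy sketch building the matrix from the images of the basis vectors, as described above:

```python
import numpy as np

def rotation_matrix(theta):
    """Matrix of the counterclockwise rotation R_theta in the standard
    basis of R^2: columns are R_theta(e1) and R_theta(e2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

theta = np.pi / 2
R = rotation_matrix(theta)
print(R @ np.array([1.0, 0.0]))   # ~ [0, 1]: e1 rotates onto e2
print(R @ np.array([0.0, 1.0]))   # ~ [-1, 0]: e2 rotates onto -e1
```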

Change of basis matrices

In a finite-dimensional vector space V over a field \mathbb{F}, consider two bases B = \{e_1, \dots, e_n\} and B' = \{e'_1, \dots, e'_n\}. The change-of-basis matrix P from B to B' is the invertible n \times n matrix whose columns are the coordinate vectors [e'_i]_B of the vectors in the new basis B' expressed with respect to the old basis B. This matrix P satisfies the relation that for any vector v \in V, the coordinates transform as [v]_{B'} = P^{-1} [v]_B, ensuring that the vector representation remains consistent across bases since v = \sum_i ([v]_B)_i e_i = \sum_i ([v]_{B'})_i e'_i. For a linear transformation T: V \to W between finite-dimensional vector spaces, with bases B and B' in V, and bases C and C' in W, let P be the change-of-basis matrix from B to B' in V, and let Q be the change-of-basis matrix from C to C' in W (defined analogously, with columns [f'_j]_C). If A is the matrix of T with respect to the bases B and C, then the matrix A' of T with respect to B' and C' is given by the transformation A' = Q^{-1} A P. This formula arises from substituting the coordinate relations: [T v]_C = A [v]_B, so [T v]_{C'} = Q^{-1} [T v]_C = Q^{-1} A [v]_B = Q^{-1} A P [v]_{B'}, yielding A' directly. When T is an endomorphism on V (so W = V and the same basis change applies, with Q = P), the formula simplifies to the similarity transformation A' = P^{-1} A P. This preserves key invariants like the trace, determinant, and characteristic polynomial of the matrix, as similarity reflects the intrinsic properties of the linear operator independent of basis choice. The proof follows from ensuring consistency with the linear action: the transformed matrix must satisfy T(e'_i) = \sum_j A'_{ji} e'_j for all i, which holds by expressing T(e'_i) in the new basis using the original matrix A and the coordinate conversions via P.
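
The similarity transformation and its invariants can be spot-checked numerically; the sketch below uses random matrices for illustration (a random square matrix is invertible with probability 1, an assumption rather than a guarantee).

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))     # matrix of T in the old basis B
P = rng.standard_normal((3, 3))     # change-of-basis matrix, almost
                                    # surely invertible for random entries

A_prime = np.linalg.inv(P) @ A @ P  # matrix of T in the new basis B'

# Similarity preserves trace and determinant (and, more generally,
# the whole characteristic polynomial and hence the eigenvalues)
assert np.isclose(np.trace(A), np.trace(A_prime))
assert np.isclose(np.linalg.det(A), np.linalg.det(A_prime))
```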

Algebraic constructions

Direct sums and products

The external direct sum of two vector spaces V and W over the same field F, denoted V \oplus W, consists of all ordered pairs (v, w) with v \in V and w \in W, equipped with componentwise addition (v_1, w_1) + (v_2, w_2) = (v_1 + v_2, w_1 + w_2) and scalar multiplication c(v, w) = (cv, cw) for c \in F. This construction forms a vector space whose elements can be thought of as formal combinations of vectors from V and W without overlap. In contrast, the internal direct sum arises within a single vector space V that decomposes into the sum of two subspaces U and W, written V = U \oplus W, if every vector in V can be uniquely expressed as v = u + w with u \in U and w \in W, which holds precisely when U + W = V and U \cap W = \{0\}. This condition ensures that the decomposition is unique, distinguishing it from a general sum of subspaces. The internal direct sum corresponds to the external direct sum via the canonical isomorphism that identifies V with U \oplus W when the conditions are met. For a finite collection of vector spaces, the direct product coincides with the direct sum up to isomorphism; specifically, the direct product \prod_{i=1}^n V_i is the set of all n-tuples (v_1, \dots, v_n) with v_i \in V_i and componentwise operations, which is isomorphic to the direct sum \bigoplus_{i=1}^n V_i in the finite case. This equivalence simplifies constructions in finite dimensions, where the notions are often used interchangeably. If V = U \oplus W is an internal direct sum and \{u_i\} is a basis for U while \{w_j\} is a basis for W, then the union \{u_i\} \cup \{w_j\} forms a basis for V. Consequently, the dimension satisfies \dim(V \oplus W) = \dim V + \dim W for the external direct sum, and similarly for internal decompositions.
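
A concrete encoding of the external direct sum stores the pair (v, w) as a concatenated tuple, which also exhibits the dimension formula. A minimal NumPy sketch, with the pair helper as an illustrative name:

```python
import numpy as np

# External direct sum R^2 (+) R^3, realized as pairs (v, w) stored
# by concatenation in R^5; the operations act componentwise.
def pair(v, w):
    return np.concatenate([v, w])

v1, v2 = np.array([1.0, 2.0]), np.array([0.0, -1.0])
w1, w2 = np.array([3.0, 0.0, 1.0]), np.array([1.0, 1.0, 1.0])

assert np.allclose(pair(v1, w1) + pair(v2, w2), pair(v1 + v2, w1 + w2))
assert np.allclose(2.0 * pair(v1, w1), pair(2.0 * v1, 2.0 * w1))

# dim(V (+) W) = dim V + dim W
assert pair(v1, w1).size == v1.size + w1.size   # 5 == 2 + 3
```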

Tensor products of vector spaces

The tensor product of two vector spaces V and W over a field K, denoted V \otimes_K W, is a vector space equipped with a bilinear map \otimes: V \times W \to V \otimes_K W that satisfies the relations (v_1 + v_2) \otimes w = v_1 \otimes w + v_2 \otimes w, v \otimes (w_1 + w_2) = v \otimes w_1 + v \otimes w_2, and (\alpha v) \otimes w = v \otimes (\alpha w) = \alpha (v \otimes w) for all v, v_1, v_2 \in V, w, w_1, w_2 \in W, and \alpha \in K. This construction generates V \otimes_K W as the span of elements of the form v \otimes w, subject to these bilinearity conditions, ensuring that the map \otimes is K-bilinear. The tensor product satisfies a universal property: for any vector space U and any K-bilinear map \phi: V \times W \to U, there exists a unique K-linear map \tilde{\phi}: V \otimes_K W \to U such that \tilde{\phi}(v \otimes w) = \phi(v, w) for all v \in V, w \in W. This property characterizes the tensor product up to unique isomorphism and allows bilinear maps to factor uniquely through the linear map on the tensor product. Suppose \{e_i\}_{i=1}^n is a basis for a finite-dimensional vector space V over K and \{f_j\}_{j=1}^m is a basis for W. Then \{e_i \otimes f_j\}_{i=1,\dots,n; j=1,\dots,m} forms a basis for V \otimes_K W, consisting of nm elements. Consequently, the dimension of the tensor product is the product of the dimensions: \dim_K(V \otimes_K W) = (\dim_K V) \cdot (\dim_K W). For example, over the field \mathbb{R}, the tensor product \mathbb{R}^m \otimes_\mathbb{R} \mathbb{R}^n is isomorphic as a vector space to \mathbb{R}^{mn}, where the isomorphism arises from mapping the standard basis elements e_i \otimes f_j to the standard basis of \mathbb{R}^{mn}.
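
For coordinate spaces, the element v \otimes w can be realized concretely as the Kronecker product of coordinate vectors, which makes the basis and dimension statements checkable. A NumPy sketch under that identification:

```python
import numpy as np

m, n = 2, 3
e = np.eye(m)       # standard basis of R^2 (rows)
f = np.eye(n)       # standard basis of R^3 (rows)

# e_i (x) f_j realized as the Kronecker product, a vector in R^(mn)
basis = [np.kron(e[i], f[j]) for i in range(m) for j in range(n)]
B = np.array(basis)

assert np.linalg.matrix_rank(B) == m * n    # the mn products form a basis

# Bilinearity on samples: (2u) (x) w == 2 (u (x) w)
u, w = np.array([1.0, -1.0]), np.array([0.0, 2.0, 1.0])
assert np.allclose(np.kron(2 * u, w), 2 * np.kron(u, w))
```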

Vector spaces with metric structure

Normed vector spaces

A normed vector space is a vector space V over the real or complex numbers equipped with a norm \|\cdot\|: V \to [0, \infty), which measures the "length" of vectors and satisfies three fundamental axioms. These axioms ensure the norm behaves consistently with the vector space operations of addition and scalar multiplication. The norm satisfies positivity: \|v\| \geq 0 for all v \in V, with equality if and only if v = 0; absolute homogeneity: \|\alpha v\| = |\alpha| \|v\| for every scalar \alpha and vector v \in V; and the triangle inequality: \|u + v\| \leq \|u\| + \|v\| for all u, v \in V. These properties make the norm a natural extension of intuitive notions of distance and size in familiar spaces like \mathbb{R}^n. The norm induces a metric on V defined by d(u, v) = \|u - v\| for u, v \in V, which satisfies the axioms of a metric space: non-negativity, symmetry, the identity of indiscernibles, and the triangle inequality. This metric structure allows the application of topological concepts to vector spaces, such as convergence and continuity, while preserving linearity. Common examples include the Euclidean norm on \mathbb{R}^n, given by \|x\|_2 = \sqrt{\sum_{i=1}^n x_i^2} for x = (x_1, \dots, x_n), which generalizes the standard distance in Euclidean space. Another is the supremum norm on the space C[0,1] of continuous real-valued functions on [0,1], defined as \|f\|_\infty = \sup_{x \in [0,1]} |f(x)|, which measures the maximum deviation of the function. Between two normed vector spaces V and W, a linear map T: V \to W is bounded if there exists a constant M \geq 0 such that \|T v\|_W \leq M \|v\|_V for all v \in V. The smallest such M is the operator norm \|T\| = \sup_{\|v\|_V \leq 1} \|T v\|_W, which itself defines a norm on the space of bounded linear maps. Boundedness ensures that the map does not distort lengths excessively, linking algebraic linearity to metric continuity in these spaces.
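
The norm axioms and the operator norm can be sampled numerically. In the NumPy sketch below, the operator norm of a matrix is its largest singular value (np.linalg.norm(T, 2)), which realizes the supremum definition for Euclidean norms on the domain and codomain.

```python
import numpy as np

rng = np.random.default_rng(4)
u, v = rng.standard_normal((2, 5))

# Triangle inequality and absolute homogeneity for the Euclidean norm
assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)
assert np.isclose(np.linalg.norm(-2.0 * u), 2.0 * np.linalg.norm(u))

# Operator norm of a linear map T : R^5 -> R^3 given by a matrix
T = rng.standard_normal((3, 5))
op_norm = np.linalg.norm(T, 2)          # largest singular value

x = rng.standard_normal(5)
assert np.linalg.norm(T @ x) <= op_norm * np.linalg.norm(x) + 1e-9
```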

Inner product spaces

An inner product space over a field \mathbb{F} (either \mathbb{R} or \mathbb{C}) is a vector space V equipped with an inner product \langle \cdot, \cdot \rangle: V \times V \to \mathbb{F}, a scalar-valued function satisfying three key properties. First, it is sesquilinear: linear in the first argument, \langle av_1 + bw_1, w \rangle = a \langle v_1, w \rangle + b \langle w_1, w \rangle for all scalars a, b \in \mathbb{F} and vectors v_1, w_1, w \in V, and conjugate-linear in the second argument, \langle v, aw_1 + bw_2 \rangle = \bar{a} \langle v, w_1 \rangle + \bar{b} \langle v, w_2 \rangle where \bar{\cdot} denotes complex conjugation (reducing to bilinearity when \mathbb{F} = \mathbb{R}). Second, it is conjugate symmetric: \langle v, w \rangle = \overline{\langle w, v \rangle} for all v, w \in V (symmetric when \mathbb{F} = \mathbb{R}). Third, it is positive definite: \langle v, v \rangle > 0 for all nonzero v \in V, and \langle 0, 0 \rangle = 0. The inner product induces a norm on V, defined by \|v\| = \sqrt{\langle v, v \rangle} for all v \in V, which satisfies the norm axioms, including absolute homogeneity and the triangle inequality (the latter derived from the inner product properties via the Cauchy-Schwarz inequality). Two vectors v, w \in V are orthogonal if \langle v, w \rangle = 0; an orthogonal set is one where every pair is orthogonal, and it is orthonormal if additionally every element has norm 1. An orthonormal basis for V is a basis \{e_i\} satisfying \langle e_i, e_j \rangle = \delta_{ij}, where \delta_{ij} is the Kronecker delta (1 if i = j, 0 otherwise); every finite-dimensional inner product space admits an orthonormal basis. A fundamental inequality in inner product spaces is the Cauchy-Schwarz inequality: for all u, v \in V, |\langle u, v \rangle| \leq \|u\| \|v\|, with equality if and only if u and v are linearly dependent (one is a scalar multiple of the other). This follows from the positive definiteness of the inner product applied to u - \frac{\langle u, v \rangle}{\|v\|^2} v when v \neq 0, yielding \|u\|^2 - \frac{|\langle u, v \rangle|^2}{\|v\|^2} \geq 0. The Gram-Schmidt process provides an algorithm to orthogonalize a linearly independent set \{v_1, \dots, v_n\} in an inner product space, producing an orthogonal set \{u_1, \dots, u_n\} via recursive projections: set u_1 = v_1, and for k = 2, \dots, n, u_k = v_k - \sum_{i=1}^{k-1} \frac{\langle v_k, u_i \rangle}{\langle u_i, u_i \rangle} u_i. Normalizing each u_k by dividing by \|u_k\| yields an orthonormal basis; the process preserves linear independence and spans the same subspace.
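
The Gram-Schmidt recursion above translates almost verbatim into code. A minimal NumPy implementation for the real case, combining the orthogonalization and the final normalization into one loop (a presentational choice):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors using
    the Gram-Schmidt recursion: subtract projections, then normalize."""
    ortho = []
    for v in vectors:
        u = v.astype(float)
        for q in ortho:
            u = u - np.dot(v, q) * q    # subtract projection onto q
        ortho.append(u / np.linalg.norm(u))
    return np.array(ortho)

V = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
Q = gram_schmidt(V)

# Rows of Q are orthonormal: Q Q^T = I; Cauchy-Schwarz holds on samples
assert np.allclose(Q @ Q.T, np.eye(3))
u, w = V[0], V[1]
assert abs(np.dot(u, w)) <= np.linalg.norm(u) * np.linalg.norm(w)
```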

Topological vector spaces

General topology on vector spaces

A topological vector space (TVS) is a vector space over a topological field, typically the real or complex numbers with their standard topology, equipped with a topology such that the operations of vector addition and scalar multiplication are continuous maps. This structure ensures that the algebraic operations respect the topological properties, allowing the study of convergence, continuity, and limits within the vector space framework. The continuity of addition means that for any neighborhood U of the origin, there exist neighborhoods V and W such that V + W \subseteq U, while scalar multiplication's continuity implies that for any neighborhood U of the origin and scalar \lambda, there are neighborhoods V of the origin and S of \lambda such that S \cdot V \subseteq U. Classic examples of TVS include the standard topology on \mathbb{R}^n, where the Euclidean norm induces a topology making addition and scalar multiplication continuous, as the operations are polynomial and thus continuous in the usual topology. Another example is the space of continuous functions on a compact set, such as C([0,1]), equipped with the topology of pointwise convergence inherited from the product space \mathbb{R}^{[0,1]}, where pointwise addition and scalar multiplication are continuous due to the properties of the product topology. In finite-dimensional spaces over \mathbb{R} or \mathbb{C}, any Hausdorff topology compatible with the vector structure coincides with the standard Euclidean topology. The topology on a TVS is uniquely determined by its neighborhood basis at the origin, as translations of these neighborhoods form a basis for the entire topology, and the continuity axioms ensure that the structure around zero propagates algebraically. Specifically, a collection of sets forms a neighborhood basis at zero provided its members can be taken balanced and absorbing and, for each U in the collection, some V in the collection satisfies V + V \subseteq U, conditions that encode the continuity of addition and scalar multiplication. A linear map between TVS is continuous if and only if it is continuous at the origin, which simplifies verification in practice. Locally convex TVS are those where every point has a local basis of convex neighborhoods, and such topologies can be generated by a family of seminorms on the vector space. A seminorm p defines open sets \{ x : p(x) < \epsilon \} for \epsilon > 0, and the topology induced by a directed family of seminorms ensures local convexity while maintaining the continuity of vector operations. This construction is fundamental, as every locally convex TVS admits such a representation, linking algebraic and topological features through subadditive, positively homogeneous functionals.

Banach and Hilbert spaces

A Banach space is a normed vector space that is complete as a metric space with respect to the metric induced by its norm, meaning every Cauchy sequence converges to an element within the space. This completeness ensures that limits of convergent sequences remain in the space, distinguishing Banach spaces from incomplete normed spaces. Prominent examples include the sequence spaces \ell^p for 1 \leq p \leq \infty, consisting of sequences (x_n) such that \sum |x_n|^p < \infty (or bounded for p=\infty) with the norm \|x\|_p = (\sum |x_n|^p)^{1/p}, and the function spaces L^p[0,1] for 1 \leq p \leq \infty, comprising equivalence classes of measurable functions f on [0,1] with \int_0^1 |f|^p \, dx < \infty (or essentially bounded for p=\infty) under the norm \|f\|_p = (\int_0^1 |f|^p \, dx)^{1/p}. A Hilbert space is a complete inner product space, where the inner product induces a norm under which the space is complete. Key examples are the sequence space \ell^2 of square-summable sequences with inner product \langle x, y \rangle = \sum x_n \overline{y_n} and the space L^2[0,1] of square-integrable functions with \langle f, g \rangle = \int_0^1 f \overline{g} \, dx. In Hilbert spaces, the Riesz representation theorem states that every continuous linear functional T on the space H can be expressed uniquely as T(x) = \langle x, g \rangle for some g \in H, establishing an isometric isomorphism between H and its dual H^*. Orthogonal projections play a central role in Hilbert spaces: for any closed subspace M \subseteq H, there exists a unique orthogonal projection P_M: H \to M such that for every x \in H, x = P_M x + (x - P_M x) with P_M x \in M and \langle x - P_M x, y \rangle = 0 for all y \in M, and P_M is a bounded linear operator satisfying P_M^2 = P_M and self-adjointness. This decomposition guarantees that P_M x is the element of M closest to x in norm. The Hahn-Banach theorem provides a fundamental extension result for normed spaces: if X is a normed space, M \subseteq X a subspace, and \ell: M \to \mathbb{R} (or \mathbb{C}) a continuous linear functional, then there exists a continuous linear functional L: X \to \mathbb{R} (or \mathbb{C}) extending \ell with the same norm \|L\| = \|\ell\|. Intuitively, this allows functionals defined on subspaces to be extended to the entire space without increasing their boundedness, enabling separation of points from closed convex sets and underpinning duality theory in Banach spaces.
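
In the finite-dimensional stand-in \mathbb{R}^5, the orthogonal projection onto a subspace M spanned by the columns of a matrix A has the closed form P = A (A^{\top} A)^{-1} A^{\top}, assuming the columns are independent. The sketch below checks idempotence, self-adjointness, orthogonality of the residual, and the closest-point property on samples.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 2))     # columns span a 2-dim subspace M of R^5

# Orthogonal projection onto M: P = A (A^T A)^{-1} A^T
P = A @ np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(P @ P, P)              # idempotent: P^2 = P
assert np.allclose(P, P.T)                # self-adjoint

x = rng.standard_normal(5)
assert np.allclose(A.T @ (x - P @ x), 0)  # residual is orthogonal to M

# P x is the closest point of M to x (compare with another point of M)
other = A @ rng.standard_normal(2)
assert np.linalg.norm(x - P @ x) <= np.linalg.norm(x - other)
```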

Modules over rings

A module over a ring generalizes the concept of a vector space by replacing the scalar field with an arbitrary ring R, allowing for richer algebraic structures where division may not be possible. Specifically, an R-module M is an abelian group under addition equipped with a scalar multiplication operation R \times M \to M, denoted (r, m) \mapsto r \cdot m, satisfying the following axioms: distributivity over addition in both components, associativity (r s) \cdot m = r \cdot (s \cdot m), and the unit property 1 \cdot m = m if R is unital. These axioms adapt those of vector spaces, but the lack of inverses in R introduces phenomena absent in the field case, such as non-invertible scalars leading to dependencies among elements. Submodules and homomorphisms extend the corresponding notions from vector spaces in a straightforward manner. A submodule N of M is a subset that is itself an R-module under the induced operations, meaning it is closed under addition and scalar multiplication by elements of R. An R-module homomorphism \phi: M \to N is a group homomorphism that also respects scalar multiplication, i.e., \phi(r \cdot m) = r \cdot \phi(m) for all r \in R and m \in M. These structures enable the study of module categories, where kernels and images of homomorphisms form submodules, analogous to subspaces but without guarantees of complements. Free modules represent the "simplest" case, mirroring free vector spaces. An R-module M is free if it is isomorphic to a direct sum of copies of R, denoted R^{(I)} = \bigoplus_{i \in I} R for some index set I, and possesses a basis consisting of elements that are linearly independent over R and generate M. For instance, R^n is a free module of rank n, with the standard basis vectors serving as generators. However, unlike vector spaces, not every module over a general ring is free; the absence of division in R can prevent the existence of bases in finitely generated modules. A key difference from vector spaces arises from the potential for torsion elements, which have no counterpart over fields. Over an integral domain, for example, an element m \in M, m \neq 0, is a torsion element if there exists a non-zero r \in R such that r \cdot m = 0; the set of all such elements forms the torsion submodule. For example, over the ring \mathbb{Z} of integers, \mathbb{Z}-modules are precisely abelian groups, and the quotient \mathbb{Z}/n\mathbb{Z} is a torsion module where every non-zero element has finite order, as n \cdot \overline{m} = \overline{0} for every class \overline{m}. In contrast, vector spaces over a field K are torsion-free, since c \cdot m = 0 with c \neq 0 forces m = c^{-1}(c \cdot m) = 0. When R is a field, every R-module reduces to a vector space, recovering the original structure.
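
The \mathbb{Z}/n\mathbb{Z} example can be illustrated with a few lines of plain Python, treating the scalar action of r \in \mathbb{Z} on a class m as multiplication modulo n:

```python
# Z-module structure on Z/6Z: the action of an integer r on a class m
# is repeated addition, i.e. (r * m) mod 6
n = 6
elements = range(n)

# Every element is torsion: n . m = 0 in Z/nZ
assert all((n * m) % n == 0 for m in elements)

# Smaller annihilators exist for some elements, e.g. 2 . 3 = 0 mod 6:
# print each element with its smallest positive annihilator
print([(m, next(r for r in range(1, n + 1) if (r * m) % n == 0))
       for m in elements])
```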

Vector bundles

A vector bundle is a geometric structure that generalizes the notion of a vector space varying continuously over a base space. Formally, a vector bundle of rank k over a base space B consists of a total space E, a continuous surjective projection \pi: E \to B, and a vector space structure on each fiber E_b = \pi^{-1}(b) isomorphic to \mathbb{R}^k (or \mathbb{C}^k for complex bundles), such that the bundle is locally trivial: for every point b \in B, there exists a neighborhood U \subset B and a homeomorphism \phi: \pi^{-1}(U) \to U \times \mathbb{R}^k that is linear on each fiber. This local product structure ensures that the vector space operations (addition and scalar multiplication) are continuous with respect to the topology on E. Sections of a vector bundle provide a way to assign elements from the fibers consistently over the base. A section s: B \to E is a continuous map satisfying \pi \circ s = \mathrm{id}_B, meaning s(b) \in E_b for each b \in B. The space of all continuous sections, often denoted \Gamma(E), forms a module over the ring of continuous functions on B and plays a central role in applications like differential geometry and physics, where sections of the tangent bundle correspond to vector fields. The rank of the bundle is the dimension k of the typical fiber, which remains constant across B when B is connected. A prominent example is the tangent bundle TM of a smooth manifold M, where the base is M itself, the total space TM consists of pairs (p, v) with p \in M and v \in T_p M (the tangent space at p), and \pi(p, v) = p. This bundle has rank equal to the dimension of M and is locally trivial via charts on M. Vector bundles are classified as trivial or non-trivial based on global topology: a trivial bundle is globally isomorphic to the product B \times \mathbb{R}^k, admitting k everywhere linearly independent global sections. In contrast, non-trivial bundles, such as the Möbius band viewed as a rank-1 real vector bundle over the circle S^1, cannot be expressed as a global product due to twisting; here, the total space is obtained by identifying (0, v) \sim (1, -v) on [0,1] \times \mathbb{R}, and every continuous section must vanish somewhere, so the nowhere-vanishing section available in the trivial case is impossible.

Historical development

Classical origins

In the 17th and 18th centuries, the concept of vectors emerged in physics primarily through the representation of forces and velocities as directed quantities, often depicted as arrows that could be composed using geometric methods. Isaac Newton, in his Philosophiæ Naturalis Principia Mathematica (1687), employed the parallelogram law to determine the resultant of two forces acting on a body, illustrating how forces combine vectorially without explicitly formalizing vectors as abstract entities. This approach treated forces as lines with magnitude and direction, laying groundwork for vector ideas in mechanics, though Newton relied on geometric constructions rather than algebraic notation. A significant advancement came in 1843 when William Rowan Hamilton discovered quaternions, a four-dimensional extension of complex numbers designed to handle rotations in three-dimensional space. Hamilton's quaternions, detailed in his 1844 publication, separated quaternions into scalar and vector parts, providing an algebraic tool for representing oriented magnitudes and rotations, which influenced later vector theories. In 1844, Hermann Grassmann published Die lineale Ausdehnungslehre, introducing a theory of extension that prefigured modern linear algebra through the concept of multivectors: decomposable entities representing oriented volumes in higher dimensions. Grassmann's work emphasized linear combinations and outer products, offering a geometric framework for vector-like operations beyond simple arrows, though it remained largely overlooked at the time. During the 1880s, Josiah Willard Gibbs and Oliver Heaviside independently developed vector analysis tailored to three-dimensional space, introducing the dot product for scalar quantities and the cross product for vector results. Gibbs outlined this system in his 1881-1884 Yale lectures, later compiled as Elements of Vector Analysis (1884), while Heaviside applied similar ideas in his electromagnetic papers, standardizing operations essential for physical applications. Giuseppe Peano provided early axiomatic insights in his 1888 treatise Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann, where he defined "linear systems" with properties akin to vector addition and scalar multiplication, hinting at the modern axiomatic definition of vector spaces without full generalization.

Abstract axiomatization

The abstract axiomatization of vector spaces emerged in the early 20th century as mathematicians sought to formalize linear algebra independently of specific geometric or coordinate representations, treating vector spaces as algebraic structures over fields equipped with addition and scalar multiplication. In 1910, Ernst Steinitz provided the first rigorous abstract definition in his seminal work Algebraische Theorie der Körper, where he characterized vector spaces (in effect, free modules over fields) as abelian groups with a basis, emphasizing the uniqueness of basis cardinality and the exchange properties that underpin the notion of dimension. This framework decoupled vector spaces from geometry, allowing application to arbitrary fields and laying the groundwork for modern module theory. Building on this algebraic foundation, the 1920s saw extensions into analytic settings through functional analysis. Stefan Banach, in parallel with Hans Hahn, introduced normed linear spaces around 1920-1922, defining them as vector spaces equipped with a norm that induces a metric, enabling the study of completeness and convergence in infinite dimensions. Concurrently, Emmy Noether advanced the theory of modules over rings in the early 1920s, developing ideal theory and proving structure results for modules over domains, including the well-definedness and invariance of rank (analogous to dimension), generalizing Steinitz's results beyond fields. By the 1940s, including during and after World War II, abstract vector spaces had become a cornerstone of linear algebra pedagogy. Paul Halmos's 1942 monograph Finite-Dimensional Vector Spaces standardized the axiomatic approach in textbooks, presenting vector spaces through their universal properties and bases without reliance on matrices or coordinates, influencing generations of mathematicians. The push toward infinite-dimensional spaces was significantly influenced by applications in physics, particularly quantum mechanics, where John von Neumann formalized Hilbert spaces as complete inner product spaces in 1932 to model wave functions, and by problems in applied analysis, which necessitated functional analytic tools for solving partial differential equations.

References

  1. [1]
    [PDF] What is a Vector Space?
    Definition: A vector space consists of a set V (elements of V are called vec- tors), a field F (elements of F are called scalars), and two operations. •
  2. [2]
    [PDF] Math 2331 – Linear Algebra - 4.1 Vector Spaces & Subspaces
    A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers) ...
  3. [3]
    Vector Spaces - A First Course in Linear Algebra
    A vector space is composed of three objects, a set and two operations. Some would explicitly state in the definition that V V must be ...
  4. [4]
    [PDF] Chapter 5 - Vector Spaces and Subspaces - MIT Mathematics
    Section 5.5 will present the “Fundamental Theorem of Linear Algebra.” We begin with the most important vector spaces. They are denoted by R1, R2, R3,. R4 ...
  5. [5]
    [PDF] Vector Spaces - UC Davis Math
    Feb 1, 2007 · As we have seen in the introduction, a vector space is a set V with two operations: addition of vectors and scalar multiplication.
  6. [6]
    Definition of a vector space - Ximera - The Ohio State University
    A vector space is a set equipped with two operations, vector addition and scalar multiplication, satisfying certain properties.
  7. [7]
    [PDF] Vector Spaces and Linear Functions
    One of the most important modern applications of linear algebra is to solve many problems involving differential equations, etc., by regarding functions as ...
  8. [8]
    [PDF] Lecture 3: Review of Linear Algebra 1 Linear Vector Space
    Very often in this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts ...
  9. [9]
    [PDF] The Growing Importance of Linear Algebra in Undergraduate ...
    Linear algebra takes students' background in Euclidean space and formalizes it with vector space theory that builds on algebra and the geometric intuition.
  10. [10]
    Vector Space -- from Wolfram MathWorld
    A consequence of the axiom of choice is that every vector space has a vector basis. A module is abstractly similar to a vector space, but it uses a ring to ...
  11. [11]
    [PDF] 18.022: Multivariable calculus — Points and vectors
    The operations of vector sum and scalar multiplication are required to satify the following axioms: (V1) For all a, b, c ∈ V , (a + b) + c = a + (b + c). (V2) ...
  12. [12]
    [PDF] Mathematics Course 111: Algebra I Part IV: Vector Spaces
    ... axioms that involve only the operation of addition) can be sum- marized in the statement that a vector space is an Abelian group (i.e., a commutative group) ...
  13. [13]
    [PDF] Vector Spaces - Penn Math
    Jul 17, 2013 · 9. Distributive property of scalar multiplication over vector addition: For all vectors u and v and scalars r, we have r(u + v) = ru + rv.
  14. [14]
    [PDF] Vector Spaces and Linear Maps - Stanford AI Lab
    Aug 14, 2018 · We have 0v = 0 for every v ∈ V and α0 = 0 for every α ∈ F. Proof. If v ∈ V , then v = 1v = (1 + 0)v = 1v + 0v = v + 0v which implies 0v = 0.
  15. [15]
    [PDF] Contents 1 Vector Spaces - Evan Dummit
    ◦ Proof: By the definition of subtraction and [V3], 0 − v = 0 + (−v) = −v. 10. Negation distributes over addition: −(v + w)=(−v)+(−w) = −v − w.
  16. [16]
    [PDF] 6. Vector Spaces - Emory Mathematics
    Hence (−1)v + v = −v + v (because both are equal to 0), so. (−1)v = −v by cancellation. 5. The proof is left as Exercise 6.1.12.4. The properties in Theorem 6.1 ...
  17. [17]
    [PDF] Vector spaces - MATH 304, Spring 2017 [3mm] Linear Algebra
    Additional properties of vector spaces. • (cancellation law) x + y = x′ + y implies x = x′ for any x, x′, y ∈ V. If x + y = x′ + y then (x + y) + (−y) = (x′ + y ...
  18. [18]
    [PDF] Vector Spaces
    A nonempty set V is said to be a vector space over a field F if: (i) there exists ... f^(n)(x) + a_(n−1) f^(n−1)(x) + a_(n−2) f^(n−2)(x) + ⋯ + a_1 f^(1)(x) + a_0 f(x) = 0, where ...
  19. [19]
    [PDF] Vector Spaces Over Fields - UCLA Math Circle
    A vector in the Euclidean plane is a directed segment or arrow. ... Definition 7 for the case of d = 2 and F = R. Since we know that R2 models the Euclidean 2- ...
  20. [20]
    [PDF] Chapter 2 Function Spaces
    Consider F[a, b], the set of all real (or complex) valued functions f(x) on the interval [a, b]. This is a vector space over the field of the real (or complex) numbers.
  21. [21]
    [PDF] Vector Spaces §4.2 Vector Spaces - Satya Mandal
    Here is a proof that C(0,1) is a vector space. ▷ So, the vectors are continuous functions f(x) : (0,1) → R. ▷ For vectors f, g ∈ C(0,1) addition is defined as ...
  22. [22]
    [PDF] 1 Norms on Vector Spaces
    Let C([0, 1]) denote the space of continuous functions on [0, 1]. a. Show that C([0, 1]) is a vector space. b. Define for all f ∈ C([0, 1]) that ∥f ...
  23. [23]
    [PDF] Math 4377/6308 Advanced Linear Algebra - 1.2 Vector Spaces
    (VS 5-8) (Properties of scalar multiplication) (Axioms 7-10) For all ... F, is a vector space over F, with addition and scalar multiplication ... Vector Spaces.
  24. [24]
    [PDF] Lecture 3 Vector Spaces of Functions - UTK Math
    Recall that the space of all functions f : I −→ R is a vector space. We will now list some important subspaces: Example (1). Let C(I) be the space of all ...
  25. [25]
    [PDF] LADR4e.pdf - Linear Algebra Done Right - Sheldon Axler
    Sheldon Axler received his undergraduate degree from Princeton University, followed by a PhD in mathematics from the University of California at Berkeley.
  26. [26]
    [PDF] Zorn's lemma and some applications - Keith Conrad
    Our more algebraically-oriented concept of a basis, always using finite linear combinations, is called a Hamel basis. Theorem 4.1 (Hausdorff). Every nonzero ...
  27. [27]
    [PDF] 18.102 S2021 Lecture 5. Zorn's Lemma and the Hahn-Banach ...
    As a warmup, today we'll use this axiom to prove a fact about vector spaces. Recall that a Hamel basis of a vector space V is a linearly independent set H, ...
  28. [28]
    [PDF] Lecture 3: Bases - UMD MATH
    Every vector space V has a basis (in fact, many bases). Second Main Theorem (Text, Theorem 7.2). Any two bases of V have the same cardinality.
  29. [29]
    [PDF] MATH 304 Linear Algebra Lecture 16: Basis and dimension.
    Theorem Any vector space V has a basis. All bases for V are of the same cardinality. Definition. The dimension of a vector space V, denoted dimV, is the ...
  30. [30]
    [PDF] Lecture 2: September 30, 2015 1 Linear Independence and Bases
    Sep 30, 2015 · To prove the existence of a basis for every vector space, we will need Zorn's lemma (which is equivalent to the axiom of choice). We first ...
  31. [31]
    [PDF] Linear Algebra Done Wrong Sergei Treil - Brown Math Department
    This book appeared as lecture notes for the course “Honors Linear Algebra”. It ... Coordinate representation of a tensor. Let F be an r-covariant s ...
  32. [32]
    [PDF] Bases, Coordinates and Representations
    Mar 27, 2019 · precisely the standard coordinate representation of T. We can go ... what linear algebra was designed to capture about Euclidean geometry.
  33. [33]
    Subspaces
    A subspace is a subset that happens to satisfy the three additional defining properties. In order to verify that a subset of Rn is ...
  34. [34]
    [PDF] Subspaces
    A subspace W of a vector space V is itself a vector space, using the vector addition and scalar multiplication operations from V.
  35. [35]
    Vector Spaces » Subspaces » - A First Course in Linear Algebra
    A subspace is a vector space that is contained within another vector space. So every subspace is a vector space in its own right, but it is also defined ...
  36. [36]
    Linear Algebra, Part 3: Intersections and Spans (Mathematica)
    The intersection V ∩ W of two subspaces is always a subspace of their embedding space U. So any basis for V ∩ W can be extended to a basis for V; it can be ...
  37. [37]
    [PDF] MATH 304 Linear Algebra Lecture 12: Subspaces of vector spaces.
    Theorem The solution set of a system of linear equations in n variables is a subspace of Rn if and only if all equations are homogeneous. Proof: “only if ...
  38. [38]
    [PDF] Math 4377/6308 Advanced Linear Algebra - 1.3 Subspaces
    1.3 Subspaces. Subspaces. Intersections of Subspaces. Theorem (1.4). Any intersection of subspaces of a vector space V is a subspace of V. Jiwen He, University ...
  39. [39]
    Subspaces
    For intersections, the situation is different: The intersection of any number of subspaces is a subspace.
  40. [40]
    [PDF] Math 396. Quotient spaces 1. Definition Let F be a field, V a vector ...
    Definition. Let F be a field, V a vector space over F and W ⊆ V a subspace of V . For v1,v2 ∈ V , we say that v1 ≡ v2 mod W if and only if v1 − v2 ∈ W. One ...
  41. [41]
    [PDF] Quotient Spaces
    Below we'll provide a construction which starts with a vector space V over a field F and a subspace S of V, and which furnishes us with an entirely new vector ...
  42. [42]
    [PDF] NOTES ON QUOTIENT SPACES Let V be a vector ... - Academic Web
    Let V be a vector space over a field F, and let W be a subspace of V. There is a sense in which we can “divide” V by W to get a new vector space.
  43. [43]
    Linear Transformation -- from Wolfram MathWorld
    A linear transformation between two vector spaces V and W is a map T:V->W such that the following hold: 1. T(v_1+v_2)=T(v_1)+T(v_2) for any vectors v_1 and v_2 ...
  44. [44]
    linear transformation - PlanetMath.org
    Mar 22, 2013 · If G : W → U is a linear transformation then G ∘ T : V → U is also a linear transformation. The kernel (http://planetmath.org/ ...
  45. [45]
    Null Space -- from Wolfram MathWorld
    If T is a linear transformation of R^n, then the null space Null(T), also called the kernel Ker(T), is the set of all vectors X such that T(X)=0, i.e., ...
  46. [46]
    image of a linear transformation - PlanetMath.org
    Mar 22, 2013 · image of a linear transformation. Definition: Let T : V → W be a linear transformation. Then the image of T is the set ...
  47. [47]
    linear isomorphism - PlanetMath.org
    Mar 22, 2013 · Suppose V and W are vector spaces and L : V → W is a linear map. Then L is a linear isomorphism if L is bijective.
  48. [48]
    1.3 Rank and Nullity
    We have defined the kernel of T, ker(T) = Null(T) (also called the nullspace), and noted that it is a subspace of the domain V. The image of T, ...
  49. [49]
    [PDF] Chapter 7. Linear Transformations §7-2. Kernel and Image
    Apr 19, 2021 · Theorem (Dimension Theorem (Rank-Nullity Theorem)). Let V and W be vector spaces and T : V → W a linear transformation. If ker(T) and im(T) ...
  50. [50]
    [PDF] Kernel, image, nullity, and rank Math 130 Linear Algebra
    Let T : V → W be a linear transformation between vector spaces. The kernel of T, also called the null space of T, is the inverse image of the zero vector, 0, ...
  51. [51]
    LTR-0050: Image and Kernel of a Linear Transformation - Ximera
    We define the image and kernel of a linear transformation and prove the Rank-Nullity Theorem for linear transformations.
  52. [52]
    6.6: The matrix of a linear map - Mathematics LibreTexts
    Mar 5, 2021 · Since we have a one-to-one correspondence between linear maps and matrices, we can also make the set of matrices F^(m×n) into a vector space.
  53. [53]
    Matrix of a linear map - StatLect
    A linear map (or linear transformation) between two finite-dimensional vector spaces can always be represented by a matrix, called the matrix of the linear map.
  54. [54]
    [PDF] Linear Algebra Done Wrong Sergei Treil - Brown Math
    This book is for advanced students, using rigorous proofs, defining basis first, and using row reduction proofs instead of coordinate-free proofs.
  55. [55]
    5.4: Special Linear Transformations in R² - Mathematics LibreTexts
    Sep 16, 2022 · In this section, we will examine some special examples of linear transformations in R2, including rotations and reflections.
  56. [56]
    [PDF] Some linear transformations on R2 Math 130 Linear Algebra
    The matrix [cos θ, −sin θ; sin θ, cos θ] describes a rotation of the plane by an angle of θ. For example, the matrix that describes a rotation of the plane around the origin by 10°.
  57. [57]
    Change of basis | Formula, examples, proofs - StatLect
    Change of basis is a technique to express vector coordinates with respect to a new basis, different from the original basis.
  58. [58]
    [PDF] Math 217: Summary of Change of Basis and All That...
    A change of basis matrix S_(B→A) converts a vector's B-coordinates to A-coordinates. Its columns are the basis elements of B expressed in A.
  59. [59]
    [PDF] Chapter 2. Dramatis Personae - UC Berkeley math
    Given two vector spaces V and W, their direct sum V⊕W is defined as the set of all ordered pairs (v,w), where v ∈ V, w ∈ W, equipped with the component-wise ...
  60. [60]
    [PDF] Free associative algebras - MIT Mathematics
    Feb 16, 2015 · Vector spaces can be thought of as a very nice place to study addition. The notion of direct sum of vector spaces provides a place to add things.
  61. [61]
    [PDF] Linear Algebra 2: Direct sums of vector spaces - People
    Nov 3, 2005 · Definition: Let U, W be subspaces of V. Then V is said to be the direct sum of U and W, and we write V = U ⊕ W, if ...
  62. [62]
    [PDF] 1 Some Basics - Berkeley Math
    c(v, w) := (cv, cw). Definition 35. Given V and two subspaces W1, W2 ⊂ V, the sum W1 + W2 is called a “direct sum” if W1 ∩ W2 = 0. Remark 36. Axler uses the ...
  63. [63]
    Math 113: Lecture summaries - Stanford Mathematics
    Sep 22, 2008 · Friday, September 27: More on direct sums. A vector space V is the interior direct sum of U1 and U2 exactly when U1 + U2 = V and the intersection of U1 ...
  64. [64]
    [PDF] Lecture 1.3: Direct products and sums
    In the finite-dimensional cases, there is no difference (up to isomorphism) of direct sums vs. direct products. Not the case when dim X = ∞. Consider the vector ...
  65. [65]
    Linear Algebra » Part 3: Vector Spaces » Direct Sums
    It turns out that a direct product of two vector spaces can be considered as a sum. In this case, it is called the direct sum and denoted as A⊕B.
  66. [66]
    Module Direct Sum -- from Wolfram MathWorld
    The direct sum of modules A and B is the module A ⊕ B = {a ⊕ b | a ∈ A, b ∈ B}, where all algebraic operations are defined componentwise.
  67. [67]
    [PDF] Introduction to the Tensor Product - UCSB Math
    Let V and W be finite dimensional vector spaces, and let β_V = {e_i}_{i=1}^n and ... universal property for tensor product. With this definition we have that ...
  68. [68]
    [PDF] Math 55a: Honors Abstract Algebra Tensor products Slogan. Tensor ...
    Slogan. Tensor products of vector spaces are to Cartesian products of sets as direct sums of vector spaces are to disjoint unions of sets.
  69. [69]
    [PDF] tensor products ii - keith conrad
    Tensor products combine two linear maps M → M′ and N → N′ into a linear map M ⊗_R N → M′ ⊗_R N′, where m ⊗ n ↦ ϕ(m) ⊗ ψ(n).
  70. [70]
  71. [71]
    [PDF] Introduction to Normed Vector Spaces - UCSD Math
    Mar 29, 2009 · A normed vector space is a vector space with a norm function mapping to non-negative real numbers, satisfying three axioms.
  72. [72]
    [PDF] 2 Inner Product Spaces, part 1
    Lemma 2.6. An inner product is a positive-definite, conjugate-symmetric, sesquilinear form: it is conjugate-linear (anti-linear) in the second slot, ⟨z, Ax ...
  73. [73]
    Conjugate-symmetric sesquilinear pairings on ℂn, and ... - Ximera
    A Hermitian inner product on ℂn is a conjugate-symmetric sesquilinear pairing that is also positive definite: ... inner product; in other words, is positive definite ...
  74. [74]
    [PDF] Chapter 6: Hilbert Spaces - UC Davis Math
    An inner product space is a normed space with respect to the norm defined in (6.1). The converse question of when a norm is derived from an inner product in ...
  75. [75]
    [PDF] Lecture 22: INNER PRODUCTS
    The first condition is that the inner product is “positive definite”, the second is that it is (conjugate) symmetric, and the third is that it is linear in the ...
  76. [76]
    [PDF] 6.4 The Gram-Schmidt Procedure - UC Berkeley math
    Every finite-dimensional inner-product space has an orthonormal basis. Proof ... Apply the Gram-Schmidt procedure to it, producing an orthonormal list.
  77. [77]
    [PDF] 6.7 Cauchy-Schwarz Inequality - UC Berkeley math
    Thus the Cauchy-Schwarz inequality is an equality if and only if u is a scalar multiple of v or v is a scalar multiple of u (or both; the phrasing has been ...
  78. [78]
    Gram-Schmidt Method – Calculus Tutorials
    The Gram-Schmidt algorithm constructs an orthogonal basis from an arbitrary basis in an inner product space. It starts with v1 = u1.
  79. [79]
    [PDF] Gram--Schmidt Orthogonalization: 100 Years and More - CIS UPenn
    Jun 8, 2010 · The Gram-Schmidt process forms an orthogonal sequence from a linearly independent sequence in an inner-product space, computing an orthonormal ...
  80. [80]
    Topological Vector Spaces: Chapters 1–5 | SpringerLink
    This is a softcover reprint of the English translation of 1987 of the second edition of Bourbaki's Espaces Vectoriels Topologiques (1981).
  81. [81]
    [PDF] Topological Vector Spaces I: Basic Theory - KSU Math
    All vector spaces mentioned here are over K. Definitions. Let X be a vector space. A linear topology on X is a topology T such that the maps X × ...
  82. [82]
    [PDF] Finite-dimensional topological vector spaces - Keith Conrad
    Definition 2.1. A topological vector space over R is an R-vector space V with a Hausdorff topology making vector addition V × V → V and ...
  83. [83]
    [PDF] Topological Vector Spaces
    A vector space X over K is called a topological vector space (t.v.s.) if X is provided with a topology τ which is compatible with the vector space structure of ...
  84. [84]
    [PDF] Locally Convex Vector Spaces III: The Metric Point of View - KSU Math
    Every locally convex topology on a vector space X can be constructed as T(P), for a suitably chosen collection P of seminorms on X. More precisely, if S is a ...
  85. [85]
    [PDF] Seminorms and locally convex spaces
    Apr 23, 2014 · A seminorm is a real-valued function on a vector space that is non-negative, homogeneous, and satisfies the triangle inequality. Locally convex ...
  86. [86]
    [PDF] Banach Spaces - UC Davis Math
    Definition 5.1 A Banach space is a normed linear space that is a complete metric space with respect to the metric derived from its norm.
  87. [87]
    [PDF] Lp spaces - UC Davis Math
    The Lp-spaces are perhaps the most useful and important examples of Banach spaces. For definiteness, we consider real-valued functions. Analogous results apply ...
  88. [88]
    [PDF] Hilbert spaces
    Definition 15. A Hilbert space H is a pre-Hilbert space which is complete with respect to the norm induced by the inner product. As examples ...
  89. [89]
    [PDF] The Riesz Representation Theorem
    The following is called the Riesz Representation Theorem: Theorem 1 If T is a bounded linear functional on a Hilbert space H then there exists some g ∈ H ...
  90. [90]
    [PDF] 1 Orthogonal Projections - LSU Math
    We shall study orthogonal projections onto closed subspaces of H. In summary, we show: • If X is any closed subspace of H then there is a bounded linear ...
  91. [91]
    [PDF] Chapter 12 The Hahn-Banach Theorem - LSU Math
    In this chapter V is a real or complex vector space. The scalars will be taken to be real until the very last result, the complex version of the Hahn-Banach theorem.
  92. [92]
    [PDF] 1. Modules Definition 1.1. Let R be a commutative ring ... - UCSD Math
    Modules. Definition 1.1. Let R be a commutative ring. A module over R is a set M together with a binary operation, denoted +, which makes M ...
  93. [93]
    [PDF] Chapter 1 Modules Every ring can be viewed as a ring of operators if ...
    Any ring can be viewed as operators on an abelian group in many ways; the abelian group on which the ring acts is called a module over that ring. 1.1 ...
  94. [94]
    [PDF] Foundations of Module Theory - Clemson University
    Definition (Let M be a module over a ring R.) An R-submodule of M is a subgroup K ⊆ M that absorbs scalar multiplication: r ∈ R and x ∈ K ...
  95. [95]
    [PDF] Notes on Vector Bundles
    Notes on Vector Bundles. Aleksey Zinger. March 16, 2010. 1 Definitions and Examples. A (smooth) real vector bundle V of rank k over a smooth manifold M is a ...
  96. [96]
  97. [97]
    [PDF] CLASS NOTES MATH 751 (FALL 2018) 1. Vector bundles 3 1.1 ...
    Definition and examples. Definition 1.1. A (real) vector bundle of rank n over B is a space E and a map p : E → B equipped ...
  98. [98]
    Why do forces add vectorially? A forgotten controversy in the ...
    Apr 1, 2011 · The controversy concerned the reason why forces compose vectorially. If the parallelogram law is explained statically, then the parallelogram ...
  99. [99]
    The Curious History of Vectors and Tensors - SIAM.org
    Sep 3, 2024 · The idea of a vector as a mathematical object in its own right first appeared as part of William Rowan Hamilton's theory of quaternions.
  100. [100]
    [PDF] Geometric Algebra: An Introduction with Applications in Euclidean ...
    Grassmann published the first edition of his geometric calculus, Lineale Ausdehnungslehre, in 1844, the same year Hamilton announced his discovery of the ...
  101. [101]
    [PDF] A History of Vector Analysis
    involves two forms of multiplication, the scalar (dot) and vector (cross) products. ... 1880s Gibbs frequently teaches a course on vector analysis, and does so ...
  102. [102]
    [PDF] The Hahn-Banach Theorem: The Life and Times - UCI Mathematics
    It happened first in algebra. There, Peano [1888] defined vector space and linear map axiomatically. No more were vectors n-tuples or sequences; now you could ...
  103. [103]
    Algebraische Theorie der Körper. - EuDML
    Steinitz, Ernst. "Algebraische Theorie der Körper.." Journal für die reine und angewandte Mathematik 137 (1910): 167-309. <http://eudml.org/doc/149323>.Missing: vector space