Vector projection

In linear algebra and vector geometry, the vector projection of a vector \vec{u} onto a nonzero vector \vec{v} is the vector \operatorname{proj}_{\vec{v}} \vec{u} that lies along the line spanned by \vec{v} and represents the "shadow", or parallel component, of \vec{u} in the direction of \vec{v}, formally defined as \operatorname{proj}_{\vec{v}} \vec{u} = \left( \frac{\vec{u} \cdot \vec{v}}{\|\vec{v}\|^2} \right) \vec{v}, where \cdot denotes the dot product and \|\cdot\| the Euclidean norm. The scalar projection, or component, is the signed length of this vector, given by \operatorname{comp}_{\vec{v}} \vec{u} = \frac{\vec{u} \cdot \vec{v}}{\|\vec{v}\|}, which measures how far \vec{u} extends in the direction of the unit vector \hat{v} = \vec{v}/\|\vec{v}\|. This concept underpins the geometric interpretation of the dot product, \vec{u} \cdot \vec{v} = \|\vec{u}\| \|\vec{v}\| \cos \theta, where \theta is the angle between \vec{u} and \vec{v}, and is essential for orthogonal decomposition: any vector \vec{u} can be uniquely decomposed as \vec{u} = \operatorname{proj}_{\vec{v}} \vec{u} + (\vec{u} - \operatorname{proj}_{\vec{v}} \vec{u}), with the remainder perpendicular to \vec{v}. In higher dimensions, projections extend to subspaces, where the projection onto a subspace V is the point in V closest to \vec{u} in the Euclidean metric, minimizing the distance \|\vec{u} - \mathbf{p}\| for \mathbf{p} \in V. Vector projections have broad applications across disciplines; in physics, they compute the effective component of a force along a displacement, as in the work W = \mathbf{F} \cdot \Delta \mathbf{r} = \|\Delta \mathbf{r}\| \cdot \operatorname{comp}_{\Delta \mathbf{r}} \mathbf{F}, which quantifies energy transfer in mechanics. In engineering and computer graphics, projections model light direction for shading algorithms, simulate shadows by projecting object geometries onto surfaces, and facilitate perspective transformations in rendering pipelines. Additionally, in statistics and machine learning, orthogonal projections onto principal components reduce dimensionality while preserving variance, forming the basis of techniques like principal component analysis (PCA).

Fundamentals

Notation

In the study of vector projections within Euclidean space \mathbb{R}^n, vectors are conventionally represented using boldface lowercase letters, such as \mathbf{a} and \mathbf{b}, or with arrows overhead, such as \vec{a} and \vec{b}, to distinguish them from scalars. The dot product of two vectors \mathbf{a} and \mathbf{b} is denoted either as \mathbf{a} \cdot \mathbf{b} or using the inner product notation \langle \mathbf{a}, \mathbf{b} \rangle, which yields a scalar value representing their directional alignment. The Euclidean norm (or length) of a vector \mathbf{a} is expressed as \|\mathbf{a}\| = \sqrt{\mathbf{a} \cdot \mathbf{a}}, giving its length in the space. Specific to projections, the vector projection of \mathbf{a} onto a nonzero vector \mathbf{b} is symbolized as \operatorname{proj}_{\mathbf{b}} \mathbf{a}, while the scalar projection, the signed length of this projection, is given by \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{b}\|}; the absolute value of the latter denotes the unsigned magnitude. Additionally, the unit vector in the direction of \mathbf{b} is denoted \hat{\mathbf{b}} = \frac{\mathbf{b}}{\|\mathbf{b}\|}, normalizing \mathbf{b} to unit length for computations.

Scalar projection

The scalar projection of a vector \mathbf{a} onto another vector \mathbf{b} measures the signed extent to which \mathbf{a} extends in the direction of \mathbf{b}, indicating how much of \mathbf{a} aligns with \mathbf{b}'s direction. This value is positive if the angle between \mathbf{a} and \mathbf{b} is acute (less than 90 degrees) and negative if obtuse (greater than 90 degrees), reflecting directional alignment or opposition. The signed scalar projection, often denoted \operatorname{scal}_{\mathbf{b}} \mathbf{a} or \operatorname{comp}_{\mathbf{b}} \mathbf{a}, is given by the formula \operatorname{scal}_{\mathbf{b}} \mathbf{a} = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{b}\|}, where \mathbf{a} \cdot \mathbf{b} is the dot product of \mathbf{a} and \mathbf{b}, and \|\mathbf{b}\| is the magnitude (norm) of \mathbf{b}. This expression yields a scalar representing the length of the component of \mathbf{a} parallel to \mathbf{b}, signed according to the angle between them. Geometrically, the scalar projection corresponds to the signed length of the shadow cast by \mathbf{a} onto the line through the origin in the direction of \mathbf{b}, akin to measuring displacement along that line with directionality preserved. If \mathbf{a} points in the same general direction as \mathbf{b}, the projection is positive and equals the length of the vector projection; otherwise, it is negative. For example, consider the vectors \mathbf{a} = (3, 4) and \mathbf{b} = (1, 0) in two dimensions. The dot product \mathbf{a} \cdot \mathbf{b} = 3 \cdot 1 + 4 \cdot 0 = 3, and \|\mathbf{b}\| = 1, so the scalar projection is 3 / 1 = 3. This positive value indicates that \mathbf{a} extends 3 units along the direction of \mathbf{b} (the positive x-axis). The scalar projection is equivalently expressed as \operatorname{scal}_{\mathbf{b}} \mathbf{a} = \|\mathbf{a}\| \cos \theta, where \theta is the angle between \mathbf{a} and \mathbf{b}, highlighting its dependence on the magnitude of \mathbf{a} and the cosine of their angular separation.
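
As an illustration, the formula can be evaluated numerically; the following minimal sketch assumes NumPy, and the helper name `scalar_projection` is chosen here only for this example. It reproduces the worked numbers above and shows the sign behavior.

```python
import numpy as np

def scalar_projection(a, b):
    """Signed length of a along the direction of b: (a . b) / ||b||."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.dot(a, b) / np.linalg.norm(b)

a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])
print(scalar_projection(a, b))    # 3.0, matching the worked example
print(scalar_projection(a, -b))   # -3.0: the sign flips when b reverses direction
```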

Vector projection

In vector geometry, the vector projection of a vector \mathbf{a} onto a nonzero vector \mathbf{b}, denoted \operatorname{proj}_{\mathbf{b}} \mathbf{a}, is the unique vector lying along the line spanned by \mathbf{b} that minimizes the Euclidean distance to \mathbf{a}. This projection represents the "shadow" of \mathbf{a} cast onto the direction of \mathbf{b}, capturing the component of \mathbf{a} that is parallel to \mathbf{b}. As an orthogonal projection onto the one-dimensional subspace spanned by \mathbf{b}, it is the closest point in that subspace to \mathbf{a}. The vector projection is given by the formula \operatorname{proj}_{\mathbf{b}} \mathbf{a} = \left( \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{b}\|^2} \right) \mathbf{b}, which scales \mathbf{b} by the ratio of the dot product to the squared magnitude of \mathbf{b}. Equivalently, it can be expressed as \operatorname{proj}_{\mathbf{b}} \mathbf{a} = (\operatorname{scal}_{\mathbf{b}} \mathbf{a}) \, \frac{\mathbf{b}}{\|\mathbf{b}\|}, where the scalar projection \operatorname{scal}_{\mathbf{b}} \mathbf{a} provides the signed length of the projection along the unit vector in the direction of \mathbf{b}. The scalar projection thus acts as the multiplier that aligns the projection with the direction of \mathbf{b}. A fundamental property of the vector projection is that the difference \mathbf{a} - \operatorname{proj}_{\mathbf{b}} \mathbf{a} is orthogonal to \mathbf{b}, satisfying (\mathbf{a} - \operatorname{proj}_{\mathbf{b}} \mathbf{a}) \cdot \mathbf{b} = 0. This orthogonality confirms that \operatorname{proj}_{\mathbf{b}} \mathbf{a} fully accounts for the component of \mathbf{a} parallel to \mathbf{b}. For example, in \mathbb{R}^3, consider \mathbf{a} = (1, 2, 3) and \mathbf{b} = (0, 1, 0). The dot product \mathbf{a} \cdot \mathbf{b} = 2 and \|\mathbf{b}\|^2 = 1, so \operatorname{proj}_{\mathbf{b}} \mathbf{a} = 2 \cdot (0, 1, 0) = (0, 2, 0).
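
A short numerical sketch of this definition (assuming NumPy; the helper name `vector_projection` is illustrative) evaluates the formula and checks the orthogonality of the residual for the example above.

```python
import numpy as np

def vector_projection(a, b):
    """Orthogonal projection of a onto the line spanned by b."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return (np.dot(a, b) / np.dot(b, b)) * b

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.0])
p = vector_projection(a, b)
print(p)                  # [0. 2. 0.], as in the example
print(np.dot(a - p, b))   # 0.0: the difference a - proj is orthogonal to b
```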

Vector rejection

The rejection of a vector \mathbf{a} onto a nonzero vector \mathbf{b}, denoted \operatorname{rej}_{\mathbf{b}} \mathbf{a}, is the component of \mathbf{a} orthogonal to \mathbf{b}, representing the portion of \mathbf{a} that lies in the directions perpendicular to \mathbf{b}. This vector is computed using the formula \operatorname{rej}_{\mathbf{b}} \mathbf{a} = \mathbf{a} - \operatorname{proj}_{\mathbf{b}} \mathbf{a}, where \operatorname{proj}_{\mathbf{b}} \mathbf{a} is the vector projection of \mathbf{a} onto \mathbf{b}. Geometrically, \operatorname{rej}_{\mathbf{b}} \mathbf{a} connects the tip of \operatorname{proj}_{\mathbf{b}} \mathbf{a} to the tip of \mathbf{a} and is perpendicular to \mathbf{b}, completing the orthogonal decomposition \mathbf{a} = \operatorname{proj}_{\mathbf{b}} \mathbf{a} + \operatorname{rej}_{\mathbf{b}} \mathbf{a}. For instance, with \mathbf{a} = (3, 4) and \mathbf{b} = (1, 0), the projection is (3, 0), so the rejection is (0, 4). The magnitude of the rejection satisfies \|\operatorname{rej}_{\mathbf{b}} \mathbf{a}\| = \|\mathbf{a}\| \sin \theta, where \theta is the angle between \mathbf{a} and \mathbf{b}.
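
A minimal sketch of the rejection and the resulting decomposition, again assuming NumPy with an illustrative helper name, verifies the orthogonality and the Pythagorean split of the squared lengths.

```python
import numpy as np

def vector_rejection(a, b):
    """Component of a orthogonal to b: a minus its projection onto b."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return a - (np.dot(a, b) / np.dot(b, b)) * b

a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])
r = vector_rejection(a, b)
print(r)             # [0. 4.]
print(np.dot(r, b))  # 0.0: the rejection is orthogonal to b
# Decomposition check: ||proj||^2 + ||rej||^2 = ||a||^2.
print(np.linalg.norm(a - r)**2 + np.linalg.norm(r)**2)   # 25.0 = ||a||^2
```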

Definitions

Geometric definition

The geometric interpretation of vector projection relies on the angle between two vectors in Euclidean space, providing an intuitive foundation for understanding how one vector "casts a shadow" onto the direction of another. Consider two nonzero vectors \mathbf{a} and \mathbf{b} in \mathbb{R}^n. The angle \alpha between them satisfies 0 \leq \alpha \leq \pi and is defined such that \cos \alpha = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\| \|\mathbf{b}\|}, where the dot product captures the cosine of this angle based on their directional alignment. This trigonometric relation roots the projection in classical trigonometry, analogous to the shadow cast by an object under directional light, where the length of the shadow along a line represents the aligned component. The scalar projection of \mathbf{a} onto \mathbf{b} measures the signed magnitude of this aligned component, given by \|\mathbf{a}\| \cos \alpha. This value is positive if \alpha < \pi/2 (acute angle, indicating alignment), negative if \alpha > \pi/2 (obtuse angle, indicating opposition), and zero if \alpha = \pi/2 (perpendicular). The vector projection extends this to a full vector in the direction of \mathbf{b}, expressed as \operatorname{proj}_{\mathbf{b}} \mathbf{a} = (\|\mathbf{a}\| \cos \alpha) \hat{\mathbf{b}}, where \hat{\mathbf{b}} = \mathbf{b} / \|\mathbf{b}\| is the unit vector along \mathbf{b}. This projection vector lies along the line spanned by \mathbf{b} and is the closest point on that line to \mathbf{a}. Complementing the projection, the vector rejection (or perpendicular component) of \mathbf{a} with respect to \mathbf{b} captures the remainder, with magnitude \|\mathbf{a}\| \sin \alpha, orthogonal to \mathbf{b}. The vector rejection is \operatorname{rej}_{\mathbf{b}} \mathbf{a} = \mathbf{a} - \operatorname{proj}_{\mathbf{b}} \mathbf{a}. These concepts trace their formal development to late 19th-century advancements in vector methods by J. Willard Gibbs and Oliver Heaviside, building on earlier geometric ideas such as shadows in perspective drawing. This geometric framework assumes a Euclidean space equipped with an inner product, which defines angles and lengths; without it, as in general normed spaces lacking an angle metric, projections defined via angles are not well-defined. The algebraic equivalents using the dot product provide computational tools while preserving this underlying trigonometric intuition.
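
As a check that the angle-based quantities agree with the dot-product forms, the following sketch (assuming NumPy, with illustrative values) computes \alpha explicitly and compares both formulations.

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 2.0])

# Angle-based (geometric) quantities...
cos_alpha = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
alpha = np.arccos(cos_alpha)
scal_geometric = np.linalg.norm(a) * np.cos(alpha)
rej_magnitude_geometric = np.linalg.norm(a) * np.sin(alpha)

# ...agree with the dot-product (algebraic) forms.
scal_algebraic = np.dot(a, b) / np.linalg.norm(b)
rej_algebraic = a - (np.dot(a, b) / np.dot(b, b)) * b

print(np.isclose(scal_geometric, scal_algebraic))                          # True
print(np.isclose(rej_magnitude_geometric, np.linalg.norm(rej_algebraic)))  # True
```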

Algebraic definition

In a vector space equipped with an inner product $\langle \cdot, \cdot \rangle$, such as the dot product in Euclidean space $\mathbb{R}^n$, the algebraic definition of vector projection formalizes the concept without relying on coordinate systems or explicit geometric angles. The scalar projection of a vector $\mathbf{a}$ onto a nonzero vector $\mathbf{b}$ is the scalar quantity $\frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{b}\|}$, which algebraically captures the signed length of the component of $\mathbf{a}$ along the direction of $\mathbf{b}$; in the geometric interpretation it equals $\|\mathbf{a}\|$ times the cosine of the angle between them. The vector projection, denoted $\operatorname{proj}_{\mathbf{b}} \mathbf{a}$, is the vector lying along $\mathbf{b}$ that best approximates $\mathbf{a}$ in the least-squares sense:

$$\operatorname{proj}_{\mathbf{b}} \mathbf{a} = \left( \frac{\mathbf{a} \cdot \mathbf{b}}{\mathbf{b} \cdot \mathbf{b}} \right) \mathbf{b}.$$

This formula emerges as the unique solution to the optimization problem of minimizing the squared distance $\|\mathbf{a} - t \mathbf{b}\|^2$ over all scalars $t \in \mathbb{R}$: expanding the norm yields $\|\mathbf{a}\|^2 - 2t (\mathbf{a} \cdot \mathbf{b}) + t^2 \|\mathbf{b}\|^2$, and setting the derivative with respect to $t$ to zero gives $t = \frac{\mathbf{a} \cdot \mathbf{b}}{\mathbf{b} \cdot \mathbf{b}}$.[](https://users.math.msu.edu/users/gnagy/teaching/11-fall/mth234/l03-234.pdf) Equivalently, the derivation follows from the [orthogonality principle](/page/Orthogonality_principle): the error vector (or rejection) $\mathbf{a} - \operatorname{proj}_{\mathbf{b}} \mathbf{a}$ must be [perpendicular](/page/Perpendicular) to $\mathbf{b}$, so $(\mathbf{a} - t \mathbf{b}) \cdot \mathbf{b} = 0$, which rearranges to $\mathbf{a} \cdot \mathbf{b} = t (\mathbf{b} \cdot \mathbf{b})$ and thus $t = \frac{\mathbf{a} \cdot \mathbf{b}}{\mathbf{b} \cdot \mathbf{b}}$.[](https://users.math.msu.edu/users/gnagy/teaching/11-fall/mth234/l03-234.pdf)[](http://www.myweb.ttu.edu/jengwer/courses/MATH1452/slides/CalcII-Slides9.3.pdf) The vector rejection is then explicitly $\mathbf{a} - \left( \frac{\mathbf{a} \cdot \mathbf{b}}{\mathbf{b} \cdot \mathbf{b}} \right) \mathbf{b}$, representing the component of $\mathbf{a}$ orthogonal to $\mathbf{b}$. This framework is coordinate-free, relying solely on the [inner product](/page/Inner_product_space), and extends seamlessly to any finite-dimensional [inner product space](/page/Inner_product_space), enabling computations in arbitrary dimensions without reference to angles or visual geometry.[](https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010/resources/lecture-15-projections-onto-subspaces/)
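
As a sanity check on this least-squares derivation, the sketch below (assuming NumPy; the brute-force grid search is purely illustrative) compares the closed-form coefficient $t = \frac{\mathbf{a}\cdot\mathbf{b}}{\mathbf{b}\cdot\mathbf{b}}$ with the scalar that numerically minimizes $\|\mathbf{a} - t\mathbf{b}\|$.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 0.0, 1.0])

# Closed-form minimizer of ||a - t b||^2.
t_closed = np.dot(a, b) / np.dot(b, b)

# Brute-force check: evaluate the distance on a fine grid of t values.
ts = np.linspace(-5, 5, 100001)
dists = np.linalg.norm(a[None, :] - ts[:, None] * b[None, :], axis=1)
t_grid = ts[np.argmin(dists)]

print(t_closed)                      # 1.0
print(t_grid)                        # ~1.0, agreeing with the closed form
print(np.dot(a - t_closed * b, b))   # 0.0: orthogonality of the residual
```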

Properties

Scalar projection properties

The scalar projection of a vector $\mathbf{a}$ onto a nonzero vector $\mathbf{b}$, denoted $\operatorname{scal}_{\mathbf{b}} \mathbf{a}$, exhibits linearity in $\mathbf{a}$. Specifically, for scalars $c$ and $d$, and vectors $\mathbf{a}$ and $\mathbf{e}$,

$$\operatorname{scal}_{\mathbf{b}} (c \mathbf{a} + d \mathbf{e}) = c \, \operatorname{scal}_{\mathbf{b}} \mathbf{a} + d \, \operatorname{scal}_{\mathbf{b}} \mathbf{e},$$

which follows from the bilinearity of the dot product underlying the definition $\operatorname{scal}_{\mathbf{b}} \mathbf{a} = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{b}\|}$.[](https://users.math.msu.edu/users/gnagy/teaching/11-fall/mth234/l03-234.pdf) This property ensures that scalar projections respect vector addition and scaling in the projected vector. The sign of the scalar projection reflects the relative orientation of $\mathbf{a}$ and $\mathbf{b}$: it is positive if the angle $\theta$ between them satisfies $0^\circ \leq \theta < 90^\circ$, zero if $\theta = 90^\circ$ (indicating perpendicularity), and negative if $90^\circ < \theta \leq 180^\circ$. This behavior arises because $\operatorname{scal}_{\mathbf{b}} \mathbf{a} = \|\mathbf{a}\| \cos \theta$, where $\cos \theta$ determines the sign.[](http://ramanujan.math.trinity.edu/rdaileda/teach/f20/m2321/lectures/lecture3_slides.pdf) By the Cauchy-Schwarz inequality, $|\mathbf{a} \cdot \mathbf{b}| \leq \|\mathbf{a}\| \|\mathbf{b}\|$, the magnitude of the scalar projection satisfies

$$|\operatorname{scal}_{\mathbf{b}} \mathbf{a}| \leq \|\mathbf{a}\|,$$

with equality if and only if $\mathbf{a}$ is a scalar multiple of $\mathbf{b}$ (i.e., parallel).[](https://math.libretexts.org/Bookshelves/Linear_Algebra/A_First_Course_in_Linear_Algebra_(Kuttler)/04:_R/4.07:_The_Dot_Product) Scaling the direction vector $\mathbf{b}$ affects the scalar projection in a direction-dependent manner: for $k > 0$, $\operatorname{scal}_{k \mathbf{b}} \mathbf{a} = \operatorname{scal}_{\mathbf{b}} \mathbf{a}$, preserving the value since the unit vector in the direction of $k \mathbf{b}$ matches that of $\mathbf{b}$; for $k < 0$, $\operatorname{scal}_{k \mathbf{b}} \mathbf{a} = -\operatorname{scal}_{\mathbf{b}} \mathbf{a}$, reversing the sign due to the opposite direction.[](http://sites.science.oregonstate.edu/math/home/programs/undergrad/CalculusQuestStudyGuides/vcalc/dotprod/dotprod.html) In physics, the scalar projection $\operatorname{scal}_{\mathbf{F}} \mathbf{d}$ of the displacement $\mathbf{d}$ onto [the force](/page/The_Force) $\mathbf{F}$ represents the effective [distance](/page/Distance) traveled in the force's direction, such that the work done is $\|\mathbf{F}\| \cdot \operatorname{scal}_{\mathbf{F}} \mathbf{d}$.[](https://pressbooks.online.ucf.edu/osuniversityphysics/chapter/2-4-products-of-vectors/)
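
These properties can be spot-checked numerically; the sketch below assumes NumPy and uses random vectors purely for illustration.

```python
import numpy as np

def scal(b, a):
    """Scalar projection of a onto b: (a . b) / ||b||."""
    return np.dot(a, b) / np.linalg.norm(b)

rng = np.random.default_rng(0)
a, e, b = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
c, d = 2.5, -1.5

# Linearity in the projected vector.
print(np.isclose(scal(b, c * a + d * e), c * scal(b, a) + d * scal(b, e)))  # True
# Cauchy-Schwarz bound: |scal_b a| <= ||a||.
print(abs(scal(b, a)) <= np.linalg.norm(a))                                 # True
# Reversing b flips the sign; positive rescaling leaves the value unchanged.
print(np.isclose(scal(-b, a), -scal(b, a)), np.isclose(scal(3 * b, a), scal(b, a)))  # True True
```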

Vector projection properties

The [vector](/page/Vector) projection $\operatorname{proj}_{\mathbf{b}} \mathbf{a}$ of a vector $\mathbf{a}$ onto a nonzero vector $\mathbf{b}$ in an [inner product space](/page/Inner_product_space) exhibits key properties as an orthogonal projection operator onto the one-dimensional [subspace](/page/Subspace) spanned by $\mathbf{b}$. These properties arise from the geometric and algebraic definitions of the [projection](/page/Projection) and are fundamental in linear algebra.[](https://textbooks.math.gatech.edu/ila/1553/projections.html) One central property is **[orthogonality](/page/Orthogonality)**: the projected vector $\operatorname{proj}_{\mathbf{b}} \mathbf{a}$ is orthogonal to the rejection vector $\mathbf{a} - \operatorname{proj}_{\mathbf{b}} \mathbf{a}$, satisfying

$$\operatorname{proj}_{\mathbf{b}} \mathbf{a} \cdot (\mathbf{a} - \operatorname{proj}_{\mathbf{b}} \mathbf{a}) = 0.$$

This ensures that the rejection lies in the [orthogonal complement](/page/Orthogonal_complement) of the span of $\mathbf{b}$.[](https://textbooks.math.gatech.edu/ila/1553/projections.html)[](https://web.ma.utexas.edu/users/m408m/Display12-3-4.shtml) The projection operator is **idempotent**, meaning

$$\operatorname{proj}_{\mathbf{b}} (\operatorname{proj}_{\mathbf{b}} \mathbf{a}) = \operatorname{proj}_{\mathbf{b}} \mathbf{a},$$

since $\operatorname{proj}_{\mathbf{b}} \mathbf{a}$ already belongs to the subspace spanned by $\mathbf{b}$, and further projection onto the same subspace leaves it unchanged. This idempotence characterizes orthogonal projections in general.[](https://ocw.mit.edu/courses/18-06sc-linear-algebra-fall-2011/00e9c8f0eafedeab21a3d079a17ed3d8_MIT18_06SCF11_Ses2.2sum.pdf)[](https://www.biostat.jhsph.edu/~iruczins/teaching/140.751/notes/ch2.pdf) **Linearity** holds for the projection operator: for scalars $c$ and $d$, and vectors $\mathbf{a}$ and $\mathbf{c}$,

$$\operatorname{proj}_{\mathbf{b}} (c \mathbf{a} + d \mathbf{c}) = c \operatorname{proj}_{\mathbf{b}} \mathbf{a} + d \operatorname{proj}_{\mathbf{b}} \mathbf{c}.$$

This follows from the linearity of the inner product and [scalar multiplication](/page/Scalar_multiplication) in the projection formula.[15] The operator is also **symmetric** (self-adjoint) with respect to the inner product, so

$$\operatorname{proj}_{\mathbf{b}} \mathbf{a} \cdot \mathbf{c} = \mathbf{a} \cdot \operatorname{proj}_{\mathbf{b}} \mathbf{c}$$

for any vectors $\mathbf{a}$ and $\mathbf{c}$. In matrix terms, the corresponding [projection matrix](/page/Projection_matrix) is symmetric.[](https://ocw.mit.edu/courses/18-06sc-linear-algebra-fall-2011/00e9c8f0eafedeab21a3d079a17ed3d8_MIT18_06SCF11_Ses2.2sum.pdf)[](https://www.biostat.jhsph.edu/~iruczins/teaching/140.751/notes/ch2.pdf) A key **decomposition** property states that any vector $\mathbf{a}$ can be uniquely expressed as

$$\mathbf{a} = \operatorname{proj}_{\mathbf{b}} \mathbf{a} + \operatorname{rej}_{\mathbf{b}} \mathbf{a},$$

where $\operatorname{rej}_{\mathbf{b}} \mathbf{a} = \mathbf{a} - \operatorname{proj}_{\mathbf{b}} \mathbf{a}$ is the rejection, and the two components are orthogonal. This orthogonal [direct sum](/page/Direct_sum) decomposition underlies many applications of projections.[](https://web.ma.utexas.edu/users/m408m/Display12-3-4.shtml)[](https://textbooks.math.gatech.edu/ila/1553/projections.html) If $\mathbf{b} = \mathbf{0}$, the vector projection is undefined, as the formula involves division by $\|\mathbf{b}\|^2 = 0$; in such cases, the [subspace](/page/Subspace) is trivial, and projections are conventionally not defined or taken as the zero vector in limiting contexts.[](https://www.whitman.edu/mathematics/calculus_online/section12.03.html)
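
The operator properties above can be confirmed numerically; the following sketch assumes NumPy, with random vectors chosen only for illustration.

```python
import numpy as np

def proj(b, a):
    """Orthogonal projection of a onto the line spanned by b."""
    return (np.dot(a, b) / np.dot(b, b)) * b

rng = np.random.default_rng(1)
a, c, b = rng.normal(size=4), rng.normal(size=4), rng.normal(size=4)

# Idempotence: projecting twice changes nothing.
print(np.allclose(proj(b, proj(b, a)), proj(b, a)))                           # True
# Linearity in the projected vector.
print(np.allclose(proj(b, 2 * a - 3 * c), 2 * proj(b, a) - 3 * proj(b, c)))   # True
# Self-adjointness: proj_b(a) . c equals a . proj_b(c).
print(np.isclose(np.dot(proj(b, a), c), np.dot(a, proj(b, c))))               # True
```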

Vector rejection properties

The vector rejection of $\mathbf{a}$ onto $\mathbf{b}$, denoted $\operatorname{rej}_{\mathbf{b}} \mathbf{a}$, is the component of $\mathbf{a}$ orthogonal to $\mathbf{b}$, obtained as $\mathbf{a} - \operatorname{proj}_{\mathbf{b}} \mathbf{a}$. This vector plays a crucial role in the orthogonal decomposition $\mathbf{a} = \operatorname{proj}_{\mathbf{b}} \mathbf{a} + \operatorname{rej}_{\mathbf{b}} \mathbf{a}$, where the two components are mutually [perpendicular](/page/Perpendicular).[](https://link.springer.com/content/pdf/10.1007/978-3-031-33953-0_1.pdf) By construction, the rejection vector is [orthogonal](/page/Orthogonality) to $\mathbf{b}$, satisfying $\operatorname{rej}_{\mathbf{b}} \mathbf{a} \cdot \mathbf{b} = 0$. This [orthogonality](/page/Orthogonality) ensures that the rejection lies in the [hyperplane](/page/Hyperplane) perpendicular to $\mathbf{b}$.[](https://link.springer.com/content/pdf/10.1007/978-3-031-33953-0_1.pdf) The rejection [operator](/page/Operator) is linear, meaning $\operatorname{rej}_{\mathbf{b}} (c \mathbf{a} + d \mathbf{c}) = c \operatorname{rej}_{\mathbf{b}} \mathbf{a} + d \operatorname{rej}_{\mathbf{b}} \mathbf{c}$ for scalars $c, d$ and vectors $\mathbf{a}, \mathbf{c}$. This follows from the [linearity](/page/Linearity) of the [projection operator](/page/Projection), as rejection is its complement.[](https://raw.org/book/linear-algebra/dot-product/) The magnitude of the rejection vector relates to that of $\mathbf{a}$ via the Pythagorean theorem applied to the orthogonal decomposition:

$$\|\operatorname{rej}_{\mathbf{b}} \mathbf{a}\|^2 = \|\mathbf{a}\|^2 - (\operatorname{scal}_{\mathbf{b}} \mathbf{a})^2,$$

where $\operatorname{scal}_{\mathbf{b}} \mathbf{a}$ is the scalar projection. This identity holds because $\|\mathbf{a}\|^2 = \|\operatorname{proj}_{\mathbf{b}} \mathbf{a}\|^2 + \|\operatorname{rej}_{\mathbf{b}} \mathbf{a}\|^2$.[](https://link.springer.com/content/pdf/10.1007/978-3-031-33953-0_1.pdf) Like the projection [operator](/page/Operator), the rejection operator is idempotent, meaning $\operatorname{rej}_{\mathbf{b}} (\operatorname{rej}_{\mathbf{b}} \mathbf{a}) = \operatorname{rej}_{\mathbf{b}} \mathbf{a}$, since the rejection already lies in the [orthogonal complement](/page/Orthogonal_complement) of the [span](/page/Span) of $\mathbf{b}$, and further rejection leaves it unchanged.[](https://raw.org/book/linear-algebra/dot-product/) The rejection direction is invariant under rotations in the plane spanned by $\mathbf{a}$ and $\mathbf{b}$, as it always points [perpendicular](/page/Perpendicular) to $\mathbf{b}$, rotating with the [orientation](/page/Orientation) of $\mathbf{a}$ relative to $\mathbf{b}$. This geometric property underscores its role in decomposing vectors in 2D subspaces.[](https://raw.org/book/linear-algebra/dot-product/) Finally, the complementarity property holds: $\operatorname{proj}_{\mathbf{b}} (\operatorname{rej}_{\mathbf{b}} \mathbf{a}) = \mathbf{0}$, confirming that the rejection has no component parallel to $\mathbf{b}$; the projection of the rejection is zero.[](https://link.springer.com/content/pdf/10.1007/978-3-031-33953-0_1.pdf)

Representations

Matrix representation

The orthogonal projection of a vector onto the line spanned by a nonzero vector $\mathbf{b}$ can be represented using the projection matrix $P_{\mathbf{b}} = \frac{\mathbf{b} \mathbf{b}^T}{\mathbf{b}^T \mathbf{b}}$, where $^T$ denotes the transpose operation.[](https://www.math.utah.edu/~zwick/Classes/Fall2012_2270/Lectures/Lecture22.pdf) This matrix acts on any vector $\mathbf{a}$ to yield the projected vector via $\operatorname{proj}_{\mathbf{b}} \mathbf{a} = P_{\mathbf{b}} \mathbf{a}$.[](https://www.math.utah.edu/~zwick/Classes/Fall2012_2270/Lectures/Lecture22.pdf) In matrix terms, $P_{\mathbf{b}}$ satisfies the properties $P_{\mathbf{b}}^2 = P_{\mathbf{b}}$ (idempotence, ensuring repeated projections remain unchanged) and $P_{\mathbf{b}}^T = P_{\mathbf{b}}$ (symmetry).[](https://www.math.utah.edu/~zwick/Classes/Fall2012_2270/Lectures/Lecture22.pdf) When $\mathbf{b}$ is a unit vector (satisfying $\mathbf{b}^T \mathbf{b} = 1$), the formula simplifies to $P_{\mathbf{b}} = \mathbf{b} \mathbf{b}^T$.[](https://www.math.utah.edu/~zwick/Classes/Fall2012_2270/Lectures/Lecture22.pdf) For instance, in $\mathbb{R}^2$ with $\mathbf{b} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ (a unit vector along the x-axis), the projection matrix is

$$P_{\mathbf{b}} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},$$

which projects any vector onto the x-axis by retaining its first component and setting the second to zero.[](https://www.math.utah.edu/~zwick/Classes/Fall2012_2270/Lectures/Lecture22.pdf) This matrix form is computationally advantageous for projecting multiple vectors onto the same line, as it enables efficient repeated applications through [standard](/page/Standard) matrix-vector [multiplication](/page/Multiplication) rather than scalar computations for each vector individually.[](https://www.khanacademy.org/math/linear-algebra/v/expressing-a-projection-on-to-a-line-as-a-matrix-vector-prod)
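
A minimal sketch of the projection matrix, assuming NumPy (the helper name `projection_matrix` is illustrative), reproduces the x-axis example and checks idempotence and symmetry.

```python
import numpy as np

def projection_matrix(b):
    """P_b = (b b^T) / (b^T b), the matrix of orthogonal projection onto span{b}."""
    b = np.asarray(b, dtype=float).reshape(-1, 1)   # column vector
    return (b @ b.T) / (b.T @ b)

P = projection_matrix([1.0, 0.0])
print(P)                               # [[1. 0.] [0. 0.]]
print(np.allclose(P @ P, P))           # True: idempotent
print(np.allclose(P.T, P))             # True: symmetric
print(P @ np.array([3.0, 4.0]))        # [3. 0.]: projection onto the x-axis
```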

Coordinate representations

In Cartesian coordinates, the [vector](/page/Vector) projection of $\mathbf{a} = (a_1, \dots, a_n)$ onto a nonzero [vector](/page/Vector) $\mathbf{b} = (b_1, \dots, b_n)$ in $\mathbb{R}^n$ is expressed as

$$\operatorname{proj}_{\mathbf{b}} \mathbf{a} = \left( \frac{\sum_{i=1}^n a_i b_i}{\sum_{i=1}^n b_i^2} \right) \mathbf{b}.$$

This formula computes the projection using the [dot product](/page/Dot_product) $\mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^n a_i b_i$ in the numerator and the squared [magnitude](/page/Magnitude) $\|\mathbf{b}\|^2 = \sum_{i=1}^n b_i^2$ in the denominator, yielding the scalar multiple of $\mathbf{b}$.[](https://web.ma.utexas.edu/users/m408m/Display12-3-4.shtml)[](https://users.math.msu.edu/users/gnagy/teaching/10-fall/mth234/w2-234-h.pdf) In two dimensions, consider $\mathbf{a} = (x_1, y_1)$ and $\mathbf{b} = (x_2, y_2)$ with $\mathbf{b} \neq \mathbf{0}$. The projection is

$$\operatorname{proj}_{\mathbf{b}} \mathbf{a} = \left( \frac{x_1 x_2 + y_1 y_2}{x_2^2 + y_2^2} \right) (x_2, y_2).$$

For a numerical example, let $\mathbf{a} = (3, 4)$ and $\mathbf{b} = (1, 2)$. The dot product is $3 \cdot 1 + 4 \cdot 2 = 11$, and $\|\mathbf{b}\|^2 = 1^2 + 2^2 = 5$, so the scalar is $11/5 = 2.2$. Thus, $\operatorname{proj}_{\mathbf{b}} \mathbf{a} = 2.2 \cdot (1, 2) = (2.2, 4.4)$.[](https://web.ma.utexas.edu/users/m408m/Display12-3-4.shtml) In three dimensions, for $\mathbf{a} = (a_x, a_y, a_z)$ and $\mathbf{b} = (b_x, b_y, b_z)$ with $\mathbf{b} \neq \mathbf{0}$, the formula extends component-wise:

$$\operatorname{proj}_{\mathbf{b}} \mathbf{a} = \left( \frac{a_x b_x + a_y b_y + a_z b_z}{b_x^2 + b_y^2 + b_z^2} \right) (b_x, b_y, b_z).$$

A simple case is projection onto the x-axis, where $\mathbf{b} = (1, 0, 0)$. Here, $\mathbf{a} \cdot \mathbf{b} = a_x$ and $\|\mathbf{b}\|^2 = 1$, so $\operatorname{proj}_{\mathbf{b}} \mathbf{a} = (a_x, 0, 0)$. For example, with $\mathbf{a} = (2, -1, 3)$, the projection is $(2, 0, 0)$.[](https://web.ma.utexas.edu/users/m408m/Display12-3-4.shtml)[](https://users.math.msu.edu/users/gnagy/teaching/10-fall/mth234/w2-234-h.pdf) The vector rejection, or the component of $\mathbf{a}$ orthogonal to $\mathbf{b}$, has coordinates $\mathbf{a} - \operatorname{proj}_{\mathbf{b}} \mathbf{a}$, computed by subtracting the [projection](/page/Projection)'s components from $\mathbf{a}$'s: $(a_i - c b_i)$ for each $i$, where $c = (\sum a_i b_i)/(\sum b_i^2)$. Using the earlier 2D example, the rejection is $(3, 4) - (2.2, 4.4) = (0.8, -0.4)$.[](https://web.ma.utexas.edu/users/m408m/Display12-3-4.shtml) This coordinate representation facilitates numerical computation, as it depends solely on finite sums of component products rather than angles or [inverse trigonometric functions](/page/Inverse_trigonometric_functions), making it efficient for direct evaluation in programming or calculators.[](https://users.math.msu.edu/users/gnagy/teaching/10-fall/mth234/w2-234-h.pdf) If $\mathbf{b} = \mathbf{0}$, the denominator $\sum b_i^2 = 0$, rendering the [projection](/page/Projection) undefined, as no direction exists for projection.[](https://web.ma.utexas.edu/users/m408m/Display12-3-4.shtml)

Applications

Physics and engineering

In physics, vector projection is fundamental for resolving forces into components along specific directions, enabling the analysis of their effects in constrained systems such as friction or structural supports. The parallel component of a force $\mathbf{F}$ along a unit direction vector $\mathbf{d}$ is given by $\mathbf{F}_\parallel = (\mathbf{F} \cdot \mathbf{d}) \mathbf{d}$, which isolates the portion contributing to motion or stress in that direction.[](https://farside.ph.utexas.edu/teaching/336k/Newton.pdf) This resolution technique, rooted in the parallelogram law, allows physicists to decompose arbitrary forces into orthogonal components for equilibrium calculations, as seen in problems involving inclined planes or tension in cables.[](https://farside.ph.utexas.edu/teaching/336k/Newton.pdf) Vector projections also play a key role in calculating work done by a [force](/page/Force), where the scalar projection quantifies the effective component along the [displacement](/page/Displacement) path. The work $W$ performed by a [constant](/page/Constant) [force](/page/Force) $\mathbf{F}$ over a [displacement](/page/Displacement) $\Delta \mathbf{r}$ is $W = \mathbf{F} \cdot \Delta \mathbf{r} = \|\Delta \mathbf{r}\| \cdot \operatorname{scal}_{\Delta \mathbf{r}} \mathbf{F}$, with the scalar projection $\operatorname{scal}_{\Delta \mathbf{r}} \mathbf{F} = \|\mathbf{F}\| \cos \theta$, where $\theta$ is the angle between the vectors.[](http://sites.science.oregonstate.edu/math/home/programs/undergrad/CalculusQuestStudyGuides/vcalc/dotprod/dotprod.html) For instance, in moving a particle from the origin to (1,1,1) under a known [force](/page/Force), the work equals the scalar projection of $\mathbf{F}$ onto $\langle 1,1,1 \rangle$ times the [displacement](/page/Displacement) magnitude, highlighting how only the aligned [force](/page/Force) component contributes to energy transfer.[](http://sites.science.oregonstate.edu/math/home/programs/undergrad/CalculusQuestStudyGuides/vcalc/dotprod/dotprod.html) In collision dynamics, projections of [velocity](/page/Velocity) vectors onto the line of impact determine relative speeds and [momentum](/page/Momentum) transfer, facilitating the application of [conservation](/page/Conservation) laws. For two colliding objects with velocities $\mathbf{u}$ and $\mathbf{v}$, projections of the velocities onto the line of impact (the contact [normal](/page/Normal) [direction](/page/Direction)) yield the components along the [interaction](/page/Interaction) axis, used to compute post-collision velocities in [elastic](/page/Elastic) or inelastic scenarios, while the orthogonal component remains unchanged.[](https://labs.la.utexas.edu/gilden/files/2016/04/UnderstandingCollisionDynamics.pdf) This [decomposition](/page/Decomposition), often via dot products, simplifies two-dimensional analyses by isolating the [interaction](/page/Interaction) [direction](/page/Direction), as in [billiard ball](/page/Billiard_ball) collisions where mass ratios are inferred from projected speeds.[](https://labs.la.utexas.edu/gilden/files/2016/04/UnderstandingCollisionDynamics.pdf) Engineering applications, such as [beam](/page/Beam) [stress](/page/Stress) [analysis](/page/Analysis), rely on [projecting](/page/Projection) loads onto principal axes to assess [bending](/page/Bending) moments and [shear](/page/Shear) forces. In structural design, an applied load vector is decomposed into axial and transverse components relative to the beam's orientation, with the parallel [projection](/page/Projection) determining compressive or tensile stresses along the length.[](https://eaglepubs.erau.edu/introductiontoaerospaceflightvehicles/chapter/review-of-important-mathematics/) For example, in [cantilever](/page/Cantilever) beams under oblique forces, this projection isolates contributions to deflection and failure risk, guiding [material selection](/page/Material_selection) and safety factors in civil and [mechanical engineering](/page/Mechanical_engineering).[](https://eaglepubs.erau.edu/introductiontoaerospaceflightvehicles/chapter/review-of-important-mathematics/) A practical illustration appears in [projectile motion](/page/Projectile_motion), where the initial velocity $\mathbf{v}_0$ is projected onto the [horizontal](/page/Horizontal) plane to find the [range](/page/Range), while the vertical rejection determines maximum [height](/page/Height). The [horizontal](/page/Horizontal) component is $v_{0x} = v_0 \cos \theta$, where $\theta$ is the launch angle, yielding constant motion unaffected by [gravity](/page/Gravity), whereas the vertical component $v_{0y} = v_0 \sin \theta$ governs the [parabolic trajectory](/page/Parabolic_trajectory) under acceleration $g$. These applications trace back to Newtonian [mechanics](/page/Mechanics) in the 17th and 18th centuries, where force resolution via geometric projections underpinned the second law and [equilibrium](/page/Equilibrium) principles, though formal [vector notation](/page/Vector_notation) emerged later.[](https://farside.ph.utexas.edu/teaching/336k/Newton.pdf) Isaac [Newton](/page/Newton)'s use of the parallelogram of forces in *[Philosophiæ Naturalis Principia Mathematica](/page/Philosophiæ_Naturalis_Principia_Mathematica)* (1687) effectively employed projections to analyze composite motions, laying the groundwork for modern [dynamics](/page/Dynamics) without explicit [vector algebra](/page/Vector_algebra).[](http://prubin.physics.gmu.edu/courses/170/readings/historyofphysics_2_mechanics.pdf)
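
As a small numerical sketch of the work computation (assuming NumPy; the force values are purely illustrative), the dot-product form and the projection form agree.

```python
import numpy as np

# Work done by a constant force over a straight-line displacement, computed two
# equivalent ways: the dot product, and the scalar projection of F onto the
# displacement times the path length.
F = np.array([4.0, 1.0, 2.0])     # force (illustrative values)
dr = np.array([1.0, 1.0, 1.0])    # displacement from the origin to (1, 1, 1)

work_dot = np.dot(F, dr)
scal_F_on_dr = np.dot(F, dr) / np.linalg.norm(dr)
work_proj = np.linalg.norm(dr) * scal_F_on_dr

print(work_dot, work_proj)        # 7.0 7.0 (up to rounding): only the aligned component does work
```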

Geometry and computer graphics

In [computer graphics](/page/Computer_graphics), vector projection provides a geometric foundation for rendering realistic scenes by mapping three-dimensional positions onto two-dimensional surfaces or planes, enabling effects such as shadows and [lighting](/page/Lighting) that mimic real-world [optics](/page/Optics).[](https://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/projection-matrix-introduction.html) This process involves decomposing a [vector](/page/Vector), such as the direction from a [light](/page/Light) source to an object, into components parallel and perpendicular to a target surface [normal](/page/Normal), which determines [visibility](/page/Visibility) and illumination [intensity](/page/Intensity).[](https://math.hws.edu/graphicsbook/c7/s2.html) By leveraging orthogonal projections, [graphics](/page/Graphics) algorithms efficiently compute how [light](/page/Light) interacts with [geometry](/page/Geometry) without simulating every [photon](/page/Photon) path.[](https://www.cs.cmu.edu/~ph/texfund/texfund.pdf) Shadow casting relies on vector projection to simulate how objects block [light](/page/Light) onto receiving planes, such as floors or walls, by projecting the occluder's vertices onto the plane using the light's direction vector.[](https://cseweb.ucsd.edu/~viscomp/classes/cse168/sp25/readings/shadow_mapping.pdf) This technique, introduced in early [shadow mapping](/page/Shadow_mapping) methods, involves rejecting the component of the object-to-light vector along the plane's [normal](/page/Normal) to flatten the shadow [silhouette](/page/Silhouette) accurately.[](https://diglib.eg.org/bitstream/handle/10.2312/cgems04-11-1361/presentation.pdf?sequence=1) For planar shadows, the projection ensures that dynamic objects cast correct distortions based on light position, avoiding artifacts in [real-time](/page/Real-time) rendering. In lighting models, vector projection computes [diffuse reflection](/page/Diffuse_reflection) by finding the scalar projection of the incident light vector onto the surface [normal](/page/Normal), which quantifies how directly light strikes the material.[](https://learnopengl.com/Lighting/Basic-Lighting) The Lambertian diffuse term, a [cornerstone](/page/Cornerstone) of [shading](/page/Shading) since its formulation in the [18th century](/page/18th_century) and its later adaptation to [graphics](/page/Graphics), uses this [dot product](/page/Dot_product) to scale illumination intensity, with the cosine of the angle between light and [normal](/page/Normal) determining brightness.[](https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-shading/diffuse-lambertian-shading.html) This projection-based approach integrates seamlessly into [vertex](/page/Vertex) and fragment shaders, contributing to photorealistic surface appearance without full [ray](/page/Ray) simulation.[](https://math.hws.edu/graphicsbook/c7/s2.html) Ray tracing employs vector projection to detect intersections by projecting ray directions onto surface parameters, particularly for planes or quadrics, where the parallel component yields the hit point along the ray.[](https://raytracing.github.io/books/RayTracingInOneWeekend.html) In this recursive rendering paradigm, projecting secondary rays (such as reflections or refractions) onto tangent spaces at intersection points allows for accurate computation of light bounces and global illumination effects.[](https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-overview/ray-tracing-rendering-technique-overview.html) This method, formalized in Whitted's 1980 algorithm, ensures precise geometric queries in complex scenes.[](https://my.eng.utah.edu/~cs5600/slides/RayTracingShirley.pdf) Vector projection facilitates [texture mapping](/page/Texture_mapping) by projecting 2D texture coordinates onto 3D model surfaces via perspective-correct [interpolation](/page/Interpolation), aligning image details with geometric distortions in camera views.[](https://www.cs.cmu.edu/~ph/texfund/texfund.pdf) In projective [texture mapping](/page/Texture_mapping), the light or camera's view matrix projects environmental textures onto arbitrary surfaces, enhancing realism in applications like environment mapping.[](https://cseweb.ucsd.edu/~ravir/6160/papers/shadow_mapping.pdf) For camera projections, it transforms world coordinates into screen space, using the view [frustum](/page/Frustum) to clip and perspective-divide vectors for correct depth cueing.[](https://learnopengl.com/Getting-started/Coordinate-Systems) A simple 2D example illustrates shadow casting from a point light: consider a light at position $\mathbf{L}$ casting the shadow of an occluder point $\mathbf{P}$ onto a wall line defined by normal $\mathbf{N}$ and point $\mathbf{Q}$; the shadow position $\mathbf{S}$ is found by computing the intersection of the ray from $\mathbf{L}$ through $\mathbf{P}$ with the wall line, producing a linear elongation based on distance.[](http://www.it.hiof.no/~borres/j3d/explain/shadow/p-shadow.html) For real-time performance on GPUs, vector projections are encoded as matrix multiplications, where a 4x4 [projection matrix](/page/Projection_matrix) represents the linear transformation of vertex vectors in parallel across shader cores, enabling high-frame-rate rendering in modern pipelines.[](https://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/projection-matrix-introduction.html) This [matrix](/page/Matrix) form, optimized in hardware like NVIDIA's [CUDA](/page/CUDA), processes thousands of projections per frame with minimal latency, as demonstrated in shadow volume and mapping implementations.[](https://developer.nvidia.com/gpugems/gpugems/part-ii-lighting-and-shadows/chapter-9-efficient-shadow-volume-rendering) [Plane](/page/Plane) projections serve as a special case here, simplifying computations for flat receivers.[](https://www.codinglabs.net/article_world_view_projection_matrix.aspx)
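
A minimal sketch of the Lambertian diffuse term described above, assuming NumPy and an illustrative helper name, implements it as a clamped dot product of unit vectors.

```python
import numpy as np

def lambert_diffuse(light_dir, surface_normal):
    """Lambertian diffuse factor: the clamped scalar projection of the unit
    light direction onto the unit surface normal (cosine of the angle)."""
    l = light_dir / np.linalg.norm(light_dir)
    n = surface_normal / np.linalg.norm(surface_normal)
    return max(np.dot(l, n), 0.0)   # back-facing light contributes nothing

# Light shining straight down onto an upward-facing surface: full brightness.
print(lambert_diffuse(np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0])))  # 1.0
# Light 45 degrees from the normal: cos(45 deg), roughly 0.707.
print(round(lambert_diffuse(np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0])), 3))
```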

Statistics and machine learning

In statistics, orthogonal vector projection provides a geometric interpretation of [linear regression](/page/Linear_regression), where the goal is to find the best [linear approximation](/page/Linear_approximation) to the data by projecting the [vector](/page/Vector) of observed responses onto the [subspace](/page/Subspace) spanned by the predictor variables, thereby minimizing the sum of squared errors through the [least squares](/page/Least_squares) criterion.[](https://bookdown.org/ts_robinson1994/10EconometricTheorems/linear-projection.html) This projection ensures that the residuals are orthogonal to the fitted [subspace](/page/Subspace), a property that underpins the unbiasedness and efficiency of ordinary [least squares](/page/Least_squares) estimators under standard assumptions.[](https://understandinglinearalgebra.org/sec-least-squares.html) A key tool in this framework is the hat matrix, defined as $H = X (X^T X)^{-1} X^T$, where $X$ is the [design matrix](/page/Design_matrix); this matrix performs the orthogonal [projection](/page/Projection) of the response [vector](/page/Vector) $y$ onto the column [space](/page/Space) of $X$, yielding the vector of fitted values $\hat{y} = H y$.[](https://www.alexkaizer.com/bios_6618/files/bios6618/W15/linear_regression_with_matrices.pdf) The diagonal elements of $H$, known as leverage scores, quantify the influence of each observation on its own fitted value, with values between 0 and 1 indicating how much the prediction relies on that point.[](https://www.tandfonline.com/doi/abs/10.1080/00031305.1978.10479237) Principal component analysis (PCA) extends this concept to unsupervised [dimensionality reduction](/page/Dimensionality_reduction) by identifying orthogonal directions, or principal components, that maximize the variance of the projected data points.[](https://www.seas.upenn.edu/~cis520/papers/Bishop_12.1.pdf) Formally, [PCA](/page/PCA) computes the orthogonal [projection](/page/Projection) of the centered data onto a lower-dimensional [subspace](/page/Subspace) spanned by the leading eigenvectors of the [covariance matrix](/page/Covariance_matrix), prioritizing components that capture the most variance to enable data compression while retaining essential structure.[](https://www.stat.cmu.edu/~cshalizi/uADA/12/lectures/ch18.pdf) For example, consider a bivariate dataset in 2D space representing two correlated variables, such as height and weight measurements; projecting the points orthogonally onto the first principal axis (the line of maximum variance) reduces the data to 1D while quantifying the linear correlation through the proportion of total variance explained by that projection, often exceeding 80% for strongly correlated features.[](https://towardsdatascience.com/principal-component-analysis-part-1-the-different-formulations-6508f63a5553) In [machine learning](/page/Machine_learning), vector projections facilitate [feature engineering](/page/Feature_engineering) by linearly transforming input data into more suitable representations, such as projecting high-dimensional features into lower-dimensional embeddings that preserve semantic similarities for tasks like [classification](/page/Classification) or clustering.[](https://ww3.math.ucla.edu/camreport/cam16-41.pdf) This is common in neural networks, where fully connected layers apply linear projections followed by nonlinear activations to evolve feature spaces across layers, enhancing model expressiveness without excessive parameter growth.[](https://hsf-training.github.io/hsf-training-ml-webpage/03-nn/index.html) In the context of 2020s [AI](/page/Ai) advancements, transformer architectures rely on linear projections to map input embeddings into query, key, and value vectors, enabling the scaled dot-product attention mechanism to compute weighted combinations of sequence elements based on their projected similarities.
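
The hat-matrix view of least squares can be illustrated with a small sketch, assuming NumPy and synthetic data generated only for this example.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(6), rng.normal(size=6)])   # design matrix: intercept plus one predictor
y = rng.normal(size=6)                                   # synthetic responses

H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix: projects y onto the column space of X
y_hat = H @ y                            # fitted values
residuals = y - y_hat

print(np.allclose(H @ H, H), np.allclose(H.T, H))   # True True: idempotent and symmetric
print(np.allclose(X.T @ residuals, 0))              # True: residuals orthogonal to the predictors
print(H.diagonal().sum())                            # ~2.0: trace equals the number of columns of X
```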

Generalizations

Projection onto planes

The orthogonal projection of a vector $\vec{v}$ onto a plane in three-dimensional space is obtained by subtracting the projection of $\vec{v}$ onto the plane's normal vector $\vec{n}$ from $\vec{v}$ itself, yielding $\operatorname{proj}_{\text{plane}} \vec{v} = \vec{v} - \operatorname{proj}_{\vec{n}} \vec{v}$.[](https://esp.mit.edu/download/13a3138f19fdbee1f473fa0112bc3ad2/M2779_Linear_Algebra_Notes.pdf) Here, the vector projection onto the normal is $\operatorname{proj}_{\vec{n}} \vec{v} = \frac{\vec{v} \cdot \vec{n}}{\|\vec{n}\|^2} \vec{n}$, which isolates the component perpendicular to the plane.[](https://esp.mit.edu/download/13a3138f19fdbee1f473fa0112bc3ad2/M2779_Linear_Algebra_Notes.pdf) For a unit normal vector $\hat{n}$ (where $\|\hat{n}\| = 1$), this simplifies to $\operatorname{proj}_{\text{plane}} \vec{v} = \vec{v} - (\vec{v} \cdot \hat{n}) \hat{n}$, or in matrix form, $\operatorname{proj}_{\text{plane}} \vec{v} = (I - \hat{n} \hat{n}^T) \vec{v}$, where $I$ is the identity matrix and $\hat{n} \hat{n}^T$ is the outer product.[](https://esp.mit.edu/download/13a3138f19fdbee1f473fa0112bc3ad2/M2779_Linear_Algebra_Notes.pdf) Geometrically, this projection represents the "shadow" of $\vec{v}$ cast onto the [plane](/page/Plane) by light rays perpendicular to the plane, ensuring the result lies within the plane and the vector from the projected point to the original is orthogonal to every [vector](/page/Vector) in the plane.[](https://mathworld.wolfram.com/ProjectionTheorem.html) For instance, consider projecting the [vector](/page/Vector) $\vec{v} = (x, y, z)$ onto the $xy$-[plane](/page/Plane), which has [normal](/page/Normal) $\vec{n} = (0, 0, 1)$. The result is $\operatorname{proj}_{\text{plane}} \vec{v} = (x, y, 0)$, discarding the $z$-component.[](https://esp.mit.edu/download/13a3138f19fdbee1f473fa0112bc3ad2/M2779_Linear_Algebra_Notes.pdf) Key properties include idempotence: applying the projection twice returns the same vector, as $(I - \hat{n} \hat{n}^T)^2 = I - \hat{n} \hat{n}^T$.[](https://esp.mit.edu/download/13a3138f19fdbee1f473fa0112bc3ad2/M2779_Linear_Algebra_Notes.pdf) Additionally, the projected vector is orthogonal to the normal, satisfying $\hat{n} \cdot \operatorname{proj}_{\text{plane}} \vec{v} = 0$, while the rejection $\vec{v} - \operatorname{proj}_{\text{plane}} \vec{v}$ is parallel to $\hat{n}$.[](https://esp.mit.edu/download/13a3138f19fdbee1f473fa0112bc3ad2/M2779_Linear_Algebra_Notes.pdf) In [geometry](/page/Geometry), this projection determines the foot of the [perpendicular](/page/Perpendicular) from a point to the [plane](/page/Plane), enabling calculations of altitudes, such as the [height](/page/Height) from a [vertex](/page/Vertex) to the base [plane](/page/Plane) in a 3D triangular [pyramid](/page/Pyramid). In [terrain](/page/Terrain) mapping, orthogonal projections of [elevation](/page/Elevation) vectors onto horizontal planes facilitate the creation of 2D [contour](/page/Contour) maps from [digital](/page/Digital) [elevation](/page/Elevation) models, preserving spatial relationships for [visualization](/page/Visualization) and [analysis](/page/Analysis).
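
A short sketch of the plane projection, assuming NumPy with an illustrative helper name, reproduces the $xy$-plane case.

```python
import numpy as np

def project_onto_plane(v, n):
    """Project v onto the plane through the origin with normal n: v - (v . n_hat) n_hat."""
    v, n = np.asarray(v, dtype=float), np.asarray(n, dtype=float)
    n_hat = n / np.linalg.norm(n)
    return v - np.dot(v, n_hat) * n_hat

v = np.array([2.0, -1.0, 5.0])
n = np.array([0.0, 0.0, 1.0])   # normal of the xy-plane
p = project_onto_plane(v, n)
print(p)                         # [ 2. -1.  0.]: the z-component is discarded
print(np.dot(p, n))              # 0.0: the result lies in the plane
```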

Projection onto subspaces

The orthogonal projection of a [vector](/page/Vector) $\mathbf{a}$ onto a [subspace](/page/Subspace) $S = \operatorname{span}\{\mathbf{u}_1, \dots, \mathbf{u}_k\}$ of an [inner product space](/page/Inner_product_space) is the unique [vector](/page/Vector) $\mathbf{p} \in S$ that minimizes the [Euclidean distance](/page/Euclidean_distance) $\|\mathbf{a} - \mathbf{p}\|$, with the [error](/page/Error) $\mathbf{a} - \mathbf{p}$ orthogonal to every [vector](/page/Vector) in $S$.[](https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010/resources/lecture-15-projections-onto-subspaces/) This $\mathbf{p}$ satisfies $\langle \mathbf{a} - \mathbf{p}, \mathbf{v} \rangle = 0$ for all $\mathbf{v} \in S$.[](https://textbooks.math.gatech.edu/ila/1553/projections.html) If $\{\mathbf{q}_1, \dots, \mathbf{q}_k\}$ forms an [orthonormal basis](/page/Orthonormal_basis) for $S$, the projection is given by

$$\mathbf{p} = \operatorname{proj}_S \mathbf{a} = \sum_{i=1}^k \langle \mathbf{a}, \mathbf{q}_i \rangle \mathbf{q}_i = Q Q^T \mathbf{a},$$

where $Q$ is [the matrix](/page/The_Matrix) with columns $\mathbf{q}_1, \dots, \mathbf{q}_k$.[](https://textbooks.math.gatech.edu/ila/1553/projections.html) To compute this from a general basis $\{\mathbf{u}_1, \dots, \mathbf{u}_k\}$, apply the Gram-Schmidt process to orthogonalize and normalize the basis vectors, yielding the orthonormal set $\{\mathbf{q}_i\}$.[](https://textbooks.math.gatech.edu/ila/06:Orthogonality/6.03:Orthogonal_Projection.html) For a non-orthonormal basis, let $A$ be the matrix with columns $\mathbf{u}_1, \dots, \mathbf{u}_k$; then the projection is

$$\mathbf{p} = A (A^T A)^{-1} A^T \mathbf{a},$$

assuming $A$ has full column rank so $A^T A$ is invertible.[](https://ocw.mit.edu/courses/18-06sc-linear-algebra-fall-2011/00e9c8f0eafedeab21a3d079a17ed3d8_MIT18_06SCF11_Ses2.2sum.pdf) This formula directly solves the normal equations $A^T (A \hat{\mathbf{x}} - \mathbf{a}) = \mathbf{0}$ for the coefficients $\hat{\mathbf{x}}$ such that $\mathbf{p} = A \hat{\mathbf{x}}$.[](https://ocw.mit.edu/courses/18-06sc-linear-algebra-fall-2011/00e9c8f0eafedeab21a3d079a17ed3d8_MIT18_06SCF11_Ses2.2sum.pdf) As an example in $\mathbb{R}^3$, consider the subspace $S$ spanned by $\mathbf{u}_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$ and $\mathbf{u}_2 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$, and project $\mathbf{a} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$. Form $A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}$; then $A^T A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, with inverse $\frac{1}{3} \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$. The projection is $\mathbf{p} = A (A^T A)^{-1} A^T \mathbf{a} = \begin{pmatrix} 7/3 \\ 2/3 \\ 5/3 \end{pmatrix}$. The projection operator $P$ defined by $\operatorname{proj}_S \mathbf{a} = P \mathbf{a}$ is idempotent, satisfying $P^2 = P$, since projecting onto $S$ twice yields the same result in $S$.[](https://textbooks.math.gatech.edu/ila/1553/projections.html) It is also self-adjoint, with $P^T = P$, ensuring the error is orthogonal to $S$.[](https://textbooks.math.gatech.edu/ila/1553/projections.html) The dimension of the image of $P$ equals $\dim S = k$.[](https://textbooks.math.gatech.edu/ila/1553/projections.html) This generalizes the line projection, which is the case $k=1$.[](https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010/resources/lecture-15-projections-onto-subspaces/)
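
The worked example can be reproduced directly with the $A (A^T A)^{-1} A^T$ formula; the following sketch assumes NumPy.

```python
import numpy as np

# Reproduce the worked example: project a = (1, 2, 3) onto span{(1,1,0), (1,0,1)}.
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
a = np.array([1.0, 2.0, 3.0])

P = A @ np.linalg.inv(A.T @ A) @ A.T   # projection matrix onto the column space of A
p = P @ a
print(p)                                # [2.3333 0.6667 1.6667] = (7/3, 2/3, 5/3)
print(np.allclose(A.T @ (a - p), 0))    # True: the error is orthogonal to the subspace
print(np.allclose(P @ P, P))            # True: idempotent
```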

Projections in inner product spaces

In [inner product space](/page/Inner_product_space)s, the concept of orthogonal projection extends naturally beyond finite-dimensional [Euclidean](/page/Euclidean) spaces to abstract settings, including infinite-dimensional [Hilbert space](/page/Hilbert_space)s. A [Hilbert space](/page/Hilbert_space) $H$ is a complete [inner product space](/page/Inner_product_space) equipped with an inner product $\langle \cdot, \cdot \rangle$. For a nonzero [vector](/page/Vector) $u \in H$ and any $f \in H$, the orthogonal projection of $f$ onto the [span](/page/Span) of $u$ is defined as

$$\operatorname{proj}_u f = \left( \frac{\langle f, u \rangle}{\langle u, u \rangle} \right) u.$$

This formula minimizes the distance $\|f - \operatorname{proj}_u f\|$ in the norm induced by the inner product, ensuring orthogonality: $\langle f - \operatorname{proj}_u f, u \rangle = 0$. The projection decomposes $H$ into the direct sum of the one-dimensional subspace spanned by $u$ and its orthogonal complement $u^\perp = \{ v \in H \mid \langle v, u \rangle = 0 \}$, so $H = \operatorname{span}\{u\} \oplus u^\perp$. Every element $f \in H$ can thus be uniquely written as $f = \operatorname{proj}_u f + (f - \operatorname{proj}_u f)$, where the second component lies in $u^\perp$. This decomposition holds due to the completeness of Hilbert spaces, which guarantees the existence and uniqueness of such projections.[](https://math.mit.edu/~rbm/18-102-Sp16/Chapter3.pdf) A concrete example arises in the [Hilbert space](/page/Hilbert_space) $L^2[a, b]$ of square-integrable functions on an [interval](/page/Interval) $[a, b]$, with inner product $\langle f, g \rangle = \int_a^b f(x) g(x) \, dx$. The orthogonal [projection](/page/Projection) onto the [subspace](/page/Subspace) of [constant](/page/Constant) functions (spanned by the [constant](/page/Constant) function 1) is the mean value of the function: for $f \in L^2[a, b]$,

$$\operatorname{proj}_1 f(x) = \frac{1}{b - a} \int_a^b f(t) \, dt,$$

which is [constant](/page/Constant) and represents the best approximation in the $L^2$ norm. This [projection](/page/Projection) isolates the average behavior of $f$, with the error orthogonal to all [constants](/page/Constant).[](https://www.arpm.co/lab/geometry-of-random-variables.html) Key properties of these projections mirror those in [Euclidean](/page/Euclidean) spaces but are established via the inner product. The projection [operator](/page/Operator) $P_u: H \to \operatorname{span}\{u\}$ defined by $P_u f = \operatorname{proj}_u f$ is linear, bounded (with $\|P_u\| = 1$ if $\|u\| = 1$), [self-adjoint](/page/Self-adjoint) ($\langle P_u f, g \rangle = \langle f, P_u g \rangle$), and [idempotent](/page/Idempotence) ($P_u^2 = P_u$), confirming its status as an orthogonal projection. [Idempotence](/page/Idempotence) follows directly from the formula, as applying $P_u$ twice yields the same scalar multiple of $u$.[](https://math.mit.edu/~rbm/18-102-Sp16/Chapter3.pdf) For broader applicability, projections extend to closed subspaces $M \subset H$ using the Riesz representation theorem, which identifies bounded linear functionals on $H$ with inner products. For any closed subspace $M$, there exists a unique orthogonal [projection](/page/Projection) $P_M: H \to M$ such that $H = M \oplus M^\perp$ and $\|f - P_M f\|$ is minimized for each $f \in H$. This operator is again linear, bounded, [self-adjoint](/page/Self-adjoint), and idempotent, enabling projections in infinite-dimensional settings like function spaces.[](https://www.impan.pl/~tkoch/FA_lecturenotes/lecture8.pdf) In infinite-dimensional Hilbert spaces, these projections find critical applications in [quantum computing](/page/Quantum_computing), where state spaces are modeled as such structures and projections facilitate measurement and error correction operations. Recent analyses highlight their role in describing quantum system evolutions and algorithmic efficiencies.[](https://www.researchgate.net/publication/391878771_Understanding_Quantum_Mechanics_through_Hilbert_Spaces_Applications_in_Quantum_Computing)
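
The $L^2$ mean-value projection can be approximated numerically by discretizing the inner product; the sketch below assumes NumPy and uses a simple Riemann-sum quadrature, chosen only for illustration.

```python
import numpy as np

# Approximate the L^2[0, 1] projection of f(x) = x^2 onto the constant functions
# by discretizing the inner product <f, g> = integral of f*g as a Riemann sum.
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
f = x**2
one = np.ones_like(x)

coeff = (np.sum(f * one) * dx) / (np.sum(one * one) * dx)   # <f, 1> / <1, 1>
print(coeff)                              # ~0.3333, the mean value of x^2 on [0, 1]
print(np.sum((f - coeff) * one) * dx)     # ~0.0: the error is orthogonal to the constants
```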

References

  1. [1]
    [PDF] 4.4 The Dot Product of Vectors, Projections
    Definition 4.4.3: The Projection of One Vector on Another. For two vectors u and v, the vector projvu is given by projvu =u · v v · v v. Note that since both ...
  2. [2]
    Math 21a, Fall 2013/2014, Multivariable Calculus, Harvard College
    The vector projection is a vector parallel to w. The scalar projection is a scalar. If the angle between v and w is smaller than 90 degrees, then the scalar ...
  3. [3]
    Projections and components
    The geometric definition of dot product helps us express the projection of one vector onto another as well as the component of one vector in the direction of ...
  4. [4]
    [PDF] Math 2270 - Lecture 22 : Projections
    The projec- tion of b onto V is the vector in V closest to b. This projection vector, p, will be by definition a linear combination of the basis vectors of V :.
  5. [5]
    [PDF] Vectors in Two Dimensions
    The dot product has many uses, such as finding the angle between two vectors, the projection of one vector onto another, the amount of work a force does on ...
  6. [6]
    [PDF] Vector Geometry for Computer Graphics
    We want to project vector B onto vector A. The following picture illustrates the derivation. We want to find vector p, which is the projection of B onto A.
  7. [7]
    [PDF] 10.6 Computer Graphics
    The projection onto the flat has three steps. Translate v0 to the origin by T− . Project along the n direction, and translate back along the row vector v0:.
  8. [8]
    [PDF] Linear Algebra and Multivariable Calculus - Evan Chen
    Sep 23, 2025 · ... formula. comp𝐰(𝐯) = 𝐯 ⋅ 𝐰. |𝐰| . 2. To compute the vector projection, use the formula. proj𝐰(𝐯) = comp𝐰(𝐯). 𝐰. |𝐰| . Type signature.
  9. [9]
    Dot Product -- from Wolfram MathWorld
    The dot product can be defined for two vectors X and Y by X·Y=|X||Y|costheta, where theta is the angle between the vectors and |X| is the norm.
  10. [10]
    [PDF] Linear Algebra
    Jun 12, 2017 · ... Vector Projection and Components. Consider two vectors x and y with the same initial point, represented by x and y ...
  11. [11]
    Scalar and Vector Projections | CK-12 Foundation
    The formula for the projection vector is given by proj_u v = ((u · v)/|u|) (u/|u|). A vector is multiplied by a scalar s. Its components are given by s v ...
  12. [12]
    [PDF] The Dot Product - MATH 241-02, Multivariable Calculus, Spring 2019
    Our final topic is how to project one vector onto another. This is particularly important in physics and engineering applications as well as many other subjects ...
  13. [13]
    Dot Products and Projections - Oregon State University
    The vector projection of b onto a is the vector with this length that begins at the point A and points in the same direction (or opposite direction if the scalar ...
  14. [14]
    2.9: The Dot Product and Projection - Mathematics LibreTexts
    Jun 15, 2021 · The scalar \(\vec{v} \cdot \hat{w}\) is a measure of how much of the vector \(\vec{v}\) is in the direction of the vector \(\vec{w}\) and is ...
  15. [15]
    Orthogonal Projection
    In the special case where we are projecting a vector x in R n onto a line L = Span { u } , our formula for the projection can be derived very directly and ...
  16. [16]
    2.6: The Vector Projection of One Vector onto Another
    Oct 29, 2023 · The notation commonly used to represent the projection of u onto v is proj_v u.
  17. [17]
    [PDF] The Dot Product - Trinity University
    c′ is orthogonal to b. Let's compute c ... The vector projection of a on b is proj_b(a) = ((a · b)/|b|²) b ...
  18. [18]
    [PDF] Moving Through Space with Geometric Algebras - UT Math
    Apr 19, 2009 · Figure: A vector is its projection plus its rejection. Page 10. GEOMETRIC ALGEBRA.
  19. [19]
    Orthogonal Vector Projection / Rejection - STEM and Music
    Finding the Projection and Rejection of a Vector on/from another Vector is the same as finding the Projection and Rejection of a Vector on/from a Basis Vector ...
  20. [20]
    The dot product - Math Insight
    The dot product of a with unit vector u, denoted a⋅u, is defined to be the projection of a in the direction of u, or the amount that a is pointing in the same ...
  21. [21]
    [PDF] A History of Vector Analysis
    1885 Heaviside in one of his electrical papers gives his first unified presentation of his system of vector analysis, which is essentially identical to that of ...
  22. [22]
    Vector Space Projection -- from Wolfram MathWorld
    It is possible to project vectors from V to W. The most familiar projection is when W is the x-axis in the plane. In this case, P(x,y)=(x,0) is the projection.
  23. [23]
    Dot Product - Calculus II - Pauls Online Math Notes
    Nov 16, 2022 · The dot product is called the scalar product. The dot product is also an example of an inner product and so on occasion you may hear it called an inner product.
  24. [24]
    [PDF] Dot product and vector projections (Sect. 12.3)
    The dot product of two vectors is a scalar, calculated as v · w = |v||w| cos(θ), where θ is the angle between them.
  25. [25]
    [PDF] Vectors: Dot Products & Projections - Calculus II - myweb
    Apr 28, 2014 · Dot Product (Geometric Formula Derivation). Given vectors v = ⟨v1, v2⟩ ... Projection (Formula Derivation). Determine a formula for proj_w v ...
  26. [26]
    Lecture 15: Projections onto subspaces | Linear Algebra | Mathematics
    The algebra of finding these best fit solutions begins with the projection of a vector onto a subspace. These video lectures of Professor Gilbert Strang ...
  27. [27]
  28. [28]
    2.4 Products of Vectors – University Physics Volume 1
    Scalar products are used to define work and energy relations. For example, the work that a force (a vector) performs on an object while causing its displacement ...
  29. [29]
    [PDF] Lecture 15: Projections onto subspaces - MIT OpenCourseWare
    In general, projection matrices have the properties: Pᵀ = P and P² = P. Why project? As we know, the ...
  30. [30]
    [PDF] 2 Review of Linear Algebra and Matrices
    Properties of a projection matrix P: 2.52 Theorem: If P is an n × n matrix and rank(P) = r, then P has r eigenvalues equal to 1 and n − r eigenvalues equal to ...
  31. [31]
    12.3 The Dot Product
    ... scalar projection. Of course, you can also compute the length of the ...
  32. [32]
  33. [33]
    Introduction to Dot Product - RAW
    Jul 7, 2021 · Vector Projection. The dot product can be used to determine the projection of one vector onto another, like for shadows. Given two vectors ...
  34. [34]
    Confusion on terminology: "vector rejection", "vector projection", and ...
    Oct 21, 2017 · The vector component or vector resolute of a perpendicular to b, sometimes also called the vector rejection of a from b, is the orthogonal ...
  35. [35]
    Expressing a projection on to a line as a matrix vector prod (video)
    May 5, 2011 · We know that our projection onto a line L in Rn is a linear transformation. That tells us that we can represent it as a matrix transformation. We know that ...
  36. [36]
    [PDF] Dot product and vector projections (Sect. 12.3)
    The dot product of two vectors is a scalar, calculated as v · w = |v||w| cos(θ), where θ is the angle between them.
  37. [37]
    [PDF] Newtonian Dynamics - Richard Fitzpatrick
    ... Equation (A.46) then it follows that the equation of motion of a vector which precesses about the origin with some angular velocity ω is da/dt = ω × a. (A.49).
  38. [38]
    [PDF] Understanding Collision Dynamics - UT Psychology Labs
    In the context of collisions, we are interested in the magnitude of the velocity projections on the axis specified by the trajectories of the incoming balls and ...
  39. [39]
    Mathematics for Engineering – Introduction to Aerospace Flight ...
    ... force, or acceleration) by adding components component-by-component, nose-to ... beam, or the velocity or pressure field in a fluid flow. Here, the ...
  40. [40]
    [PDF] HISTORY - George Mason University
    manner and thereby deduced the resolution of a force into components, i.e., he discovered the principle of the parallelogram of forces. The remainder of ...
  41. [41]
    The Perspective and Orthographic Projection Matrix - Scratchapixel
    Projection matrices are 4x4 matrices that transform 3D points to 2D coordinates on a canvas, projecting 3D objects onto the screen.
  42. [42]
    Introduction to Computer Graphics, Section 7.2 -- Lighting and Material
    The angle can be computed from the direction to the light source and the normal vector to the surface. Computation of specular reflection also uses the ...
  43. [43]
    [PDF] Fundamentals of Texture Mapping and Image Warping
    Jun 17, 1989 · The applications of texture mapping in computer graphics and image distortion (warping) in image processing share a core of fundamental ...
  44. [44]
    [PDF] Projective Texture Mapping - UCSD CSE
    Shadow mapping is an image-based shadowing technique developed by Lance Williams [8] in 1978. It is particularly amenable to hardware implementation because it ...
  45. [45]
    [PDF] Shadows in Computer Graphics
    By moving all vertices which don't face the light, the degenerated rectangles become “real” rectangles forming the side surfaces of the shadow volume.  Used ...
  46. [46]
    Basic Lighting - LearnOpenGL
    Diffuse lighting : simulates the directional impact a light object has on an object. This is the most visually significant component of the lighting model. The ...
  47. [47]
    Lights - Diffuse and Lambertian Shading - Introduction to Shading
    Figure 3: Diffuse surfaces reflect light equally in all directions contained within a hemisphere of directions oriented about the surface normal.
  48. [48]
    Ray Tracing in One Weekend
    Ray-Sphere Intersection. You can also think of this as saying that if a given point (x,y,z) is on the surface of the sphere, then x² + y² + z² = r².
  49. [49]
    Overview of the Ray-Tracing Rendering Technique - Scratchapixel
    Ray-tracing is a technique for computing the visibility between points. Light transport algorithms are designed to simulate the way light propagates through ...
  50. [50]
    [PDF] Ray Tracing
    Given a ray p(t) = e + td and an implicit surface f(p)=0, we'd like to know where they intersect. The intersection points occur when points on the ray satisfy.
  51. [51]
    [PDF] Projective Texture Mapping - UCSD CSE
    The basic steps for rendering with shadow maps are quite simple: Figure 2 shows an example scene with shadows, the same scene shown from the light's point of ...
  52. [52]
    Coordinate Systems - LearnOpenGL
    The five coordinate systems in OpenGL are: local, world, view, clip, and screen space. These are used to transform vertices before they become visible.
  53. [53]
    Shadows
    ... point in the plane r = (r1,r2,r3). To find the point on line L that lies in the plane we have the relation: (3) n1(p1 + t·a1) + n2(p2 + t·a2) + n3(p3 + t·a3) = ...
  54. [54]
    Chapter 9. Efficient Shadow Volume Rendering - NVIDIA Developer
    The shadow volume technique creates sharp, per-pixel accurate shadows from point, spot, and directional lights. A single object can be lit by multiple lights, ...
  55. [55]
  56. [56]
    Chapter 3 Linear Projection | 10 Fundamental Theorems for ...
    Hence the predicted values from a linear regression simply are an orthogonal projection of y onto the space defined by X. 3.3.1 Geometric interpretation.
  57. [57]
    6.5 Orthogonal least squares - Understanding Linear Algebra
    Remember that b̂, the orthogonal projection of b onto Col A, is the closest vector in Col A to b. Therefore, when we solve the equation A x = b̂, we are finding ...
  58. [58]
    [PDF] Matrix Approach to Linear Regression - Alex Kaizer
    It is the orthogonal (perpendicular) projection of Y onto the column space of X. The hat matrix can be calculated from the design matrix: ... Simple Linear ...
  59. [59]
    The Hat Matrix in Regression and ANOVA - Taylor & Francis Online
    Mar 12, 2012 · The hat matrix is a projection matrix that shows how a data point influences fitted values, and helps identify exceptional data points.
  60. [60]
    [PDF] 12.1. Principal Component Analysis
    PCA can be defined as the orthogonal projection of the data onto a lower dimensional linear space, known as the principal subspace, such that the variance of.
  61. [61]
    [PDF] Principal Components Analysis - Statistics & Data Science
    The second principal component is the direction which maximizes variance among all directions orthogonal to the first.
  62. [62]
  63. [63]
    [PDF] Linear Feature Transform and Enhancement of Classification on ...
    To overcome the complexity of the fully-connected layer and maintain the classification accuracy, we construct a linear transform to project the DNN feature ...
  64. [64]
    Neural Networks – Introduction to Machine Learning - GitHub Pages
    We first perform a linear transformation, then apply activation function g1, then perform another linear transformation, then apply activation function g2 ...
  65. [65]
    [PDF] Linear Algebra - MIT ESP
    Projection onto plane through (0,0,0) perpendicular to unit vector n: P = I − nnᵀ. Projection onto plane passing through Q, perpendicular to unit ...
  66. [66]
    Projection Theorem -- from Wolfram MathWorld
    Let H be a Hilbert space and M a closed subspace of H. Corresponding to any vector x in H, there is a unique vector m_0 in M such that |x-m_0|<=|x-m| for ...
  67. [67]
    [PDF] Orthographic Terrain Views Using Data Derived from Digital ...
    ABSTRACT: A fast algorithm for producing three-dimensional orthographic terrain views uses digital elevation data and co-registered imagery.
  68. [68]
    [PDF] Hilbert spaces
    Thus, projection onto W is an operator of norm 1 (unless W = {0}) equal to its own square. Such an operator is called a projection or sometimes an idempotent ( ...
  69. [69]
    56.2 L2 spaces of random variables
    The set I is the vector space of the information set of a constant, I ≡ {a} (35.94); then the orthogonal projection (56.89)-(56.90) is the mean (35.96).
  70. [70]
    [PDF] Hilbert spaces and the projection theorem - Functional analysis
    A Hilbert space is a complete inner product space, where the norm is generated by an inner product, and the inner product is a sesquilinear form.
  71. [71]
    (PDF) Understanding Quantum Mechanics through Hilbert Spaces
    May 20, 2025 · Hilbert spaces provide the fundamental mathematical framework for describing quantum mechanical systems. Their structure, characterized by ...