Projection matrix
In linear algebra, a projection matrix is a square matrix P that represents a linear transformation projecting vectors from a vector space onto a subspace, satisfying the idempotence property P^2 = P.[1][2][3] Such matrices arise in the context of decomposing a vector space V as a direct sum V = X \oplus Y, where P maps any vector v = x + y (with x \in X and y \in Y) to its component x \in X, so that the image of P is X and the kernel is Y.[2] For orthogonal projections, which are the most common type, P is symmetric and projects onto a subspace W (e.g., the column space of a matrix A) such that the error vector is perpendicular to W; the explicit formula is P = A (A^T A)^{-1} A^T, assuming A has full column rank so that its columns form a basis for W.[1][3] Projection matrices are fundamental in applications such as least squares regression, where Pb is the vector in the subspace spanned by A's columns closest to b, minimizing the Euclidean distance \| b - v \| over all v in that subspace.[1][3] They also appear in geometry for coordinate projections, in signal processing for dimensionality reduction, and in numerical methods for solving overdetermined systems.[2] Key properties include linearity, the fact that I - P is also a projection (onto the orthogonal complement in the orthogonal case), and invariance under certain linear operators when the subspaces are invariant.[2]
Fundamentals
Definition
A projection matrix P is defined as a square matrix that satisfies the idempotence condition P^2 = P. This property characterizes projections onto a subspace, where applying the operator twice yields the same result as applying it once.[4] For orthogonal projections, which are the most common in applications involving Euclidean spaces, the matrix is additionally symmetric, satisfying P^T = P. This symmetry ensures that the projection is perpendicular to the complementary subspace. Such matrices map vectors in \mathbb{R}^n onto a (typically lower-dimensional) subspace while acting as the identity on that subspace, so angles and lengths within it are preserved.[5][4]
In statistical linear models, the projection matrix takes the specific form of the hat matrix H = X(X^T X)^{-1} X^T, where X is the n \times p design matrix with full column rank. This matrix projects the observed response vector y onto the column space of X, producing the vector of fitted values \hat{y} = H y. The hat matrix inherits the idempotence and symmetry properties, making it an orthogonal projection operator in this context.[6][7] The diagonal elements h_{ii} of the hat matrix, known as leverages, quantify the influence of the i-th response value y_i on the corresponding fitted value \hat{y}_i. These leverages satisfy 0 \leq h_{ii} \leq 1 for each i, and their sum equals the number of parameters p in the model. This formulation assumes familiarity with basic concepts of vectors, matrices, and linear subspaces.[6]
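These definitions are straightforward to check numerically. The following sketch (using NumPy with a small, randomly generated design matrix; the variable names and data are illustrative assumptions rather than drawn from the cited sources) forms the hat matrix and verifies idempotence, symmetry, the leverage bounds, and the equality of the leverage sum with the parameter count:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
# Design matrix with an intercept column and p - 1 random predictors
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

# Hat matrix H = X (X^T X)^{-1} X^T
H = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(H @ H, H))   # idempotence: H^2 = H
print(np.allclose(H, H.T))     # symmetry: H^T = H

leverages = np.diag(H)         # h_ii, one per observation
print(np.all((leverages >= 0) & (leverages <= 1)))  # 0 <= h_ii <= 1
print(np.isclose(leverages.sum(), p))               # sum of leverages equals p
```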
Geometric Interpretation
An orthogonal projection matrix P maps a vector \mathbf{v} in a vector space onto a subspace spanned by the columns of a matrix A, such that the projected vector P\mathbf{v} lies within the subspace and the residual vector \mathbf{v} - P\mathbf{v} is perpendicular to every vector in that subspace.[8] This orthogonality ensures that the projection minimizes the Euclidean distance between \mathbf{v} and the subspace, representing the "closest point" approximation in the geometric sense.[8]
Consider an example in \mathbb{R}^2, where the subspace is the x-axis, spanned by the standard basis vector \mathbf{e}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. The corresponding orthogonal projection matrix is P = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, which maps any vector \mathbf{v} = \begin{pmatrix} x \\ y \end{pmatrix} to P\mathbf{v} = \begin{pmatrix} x \\ 0 \end{pmatrix}, dropping the y-component while preserving the x-coordinate.[9] Geometrically, this can be visualized as the vector \mathbf{v} being "dropped" perpendicularly from its tip to the x-axis, forming a right angle between the residual \begin{pmatrix} 0 \\ y \end{pmatrix} and the projected point on the axis; such diagrams often illustrate the decomposition \mathbf{v} = P\mathbf{v} + (\mathbf{v} - P\mathbf{v}) with the residual orthogonal to the subspace.[8]
This focus on orthogonality distinguishes orthogonal projections from oblique projections, where the residuals are not perpendicular to the subspace, leading to a slanted "drop" rather than a right-angled one; in either case, applying P to a vector already in the subspace leaves it fixed.[10] In higher dimensions, such as projecting onto a plane in \mathbb{R}^3, the visualization extends to the residual being perpendicular to the entire plane, akin to a shadow cast by perpendicular light rays onto a flat surface.[8]
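The decomposition \mathbf{v} = P\mathbf{v} + (\mathbf{v} - P\mathbf{v}) can be reproduced numerically; the sketch below (NumPy, purely illustrative) works through the x-axis example in \mathbb{R}^2 and a plane in \mathbb{R}^3, checking that the residual is orthogonal to the subspace in each case:

```python
import numpy as np

# Projection onto the x-axis in R^2
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
v = np.array([3.0, 4.0])
proj = P @ v                    # (3, 0): the x-component survives
resid = v - proj                # (0, 4): the dropped y-component
print(proj, resid, np.dot(proj, resid))   # dot product 0: residual is orthogonal

# Projection onto a plane in R^3 spanned by the columns of A
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
P3 = A @ np.linalg.inv(A.T @ A) @ A.T
w = np.array([1.0, 2.0, 7.0])
r = w - P3 @ w
print(np.allclose(A.T @ r, 0))  # residual is perpendicular to every vector in the plane
```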
Properties
Algebraic Properties
A projection matrix P in linear algebra is characterized by its idempotence, meaning P^2 = P. This property implies that P acts as a projection onto an invariant subspace: applying P multiple times yields the same result as applying it once, leaving vectors in the range of P unchanged while mapping all other vectors into that subspace. Idempotence also constrains the spectrum of P: if Pv = \lambda v for a nonzero vector v, then P^2 v = \lambda^2 v = Pv = \lambda v, so \lambda^2 = \lambda and every eigenvalue of P is either 0 or 1.[11]
For orthogonal projections, P is also symmetric, satisfying P^T = P. This symmetry ensures that the projection is self-adjoint, preserving inner products in the sense that \langle Pv, w \rangle = \langle v, Pw \rangle for all vectors v, w. Consequently, P coincides with its own Moore-Penrose pseudoinverse in the context of orthogonal projections onto the column space, as P^+ = P.[11][4]
The rank of P equals the dimension of its range, which is the subspace onto which it projects, and this rank is also equal to the trace of P. Since the nonzero eigenvalues of P are all 1 and their count matches the rank, the trace, as the sum of eigenvalues, directly gives \operatorname{trace}(P) = \operatorname{rank}(P).[11] Regarding null spaces, the kernel of I - P is precisely the range of P, while the kernel of P is the range of I - P. This decomposition highlights how P and I - P partition the space into the projected subspace and its complement, which is the orthogonal complement for orthogonal projections.[11]
A standard formula for the orthogonal projection matrix onto the column space of a full column rank matrix A \in \mathbb{R}^{m \times n} (with m \geq n) is P = A A^+, where A^+ is the Moore-Penrose pseudoinverse of A. For full column rank, A^+ = (A^T A)^{-1} A^T, so P = A (A^T A)^{-1} A^T. To derive this, note that P must satisfy P^2 = P and project onto \operatorname{range}(A). Substituting yields P^2 = A (A^T A)^{-1} A^T A (A^T A)^{-1} A^T = A (A^T A)^{-1} A^T = P, confirming idempotence, and the columns of A lie in the range of P since PA = A. Symmetry follows by taking the transpose: P^T = A \left[(A^T A)^{-1}\right]^T A^T = A (A^T A)^{-1} A^T = P, because (A^T A)^{-1} is itself symmetric. This form generalizes via the pseudoinverse to rank-deficient cases, where A A^+ remains the orthogonal projection onto \operatorname{range}(A).[11][12]
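As a numerical check of these identities, the following sketch (NumPy; the appended dependent column is an assumption made purely to force rank deficiency) builds P = A A^+ via the Moore-Penrose pseudoinverse and verifies idempotence, symmetry, the trace-rank equality, and the kernel relationship:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))
A = np.column_stack([A, A[:, 0] + A[:, 1]])   # append a dependent column: A is now rank-deficient

P = A @ np.linalg.pinv(A)    # orthogonal projection onto range(A) via the pseudoinverse

print(np.allclose(P @ P, P))                               # idempotent
print(np.allclose(P, P.T))                                 # symmetric
print(np.isclose(np.trace(P), np.linalg.matrix_rank(A)))   # trace(P) = rank(P) = rank(A)
print(np.allclose(P @ A, A))                               # columns of A are fixed by P

# kernel of P equals the range of I - P: P annihilates any vector of the form (I - P) z
z = rng.normal(size=6)
print(np.allclose(P @ (np.eye(6) - P) @ z, 0))
```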
Trace and Rank Interpretations
In the context of linear regression, the trace of a projection matrix P equals its rank, which corresponds to the number of columns in the design matrix X, or equivalently the number of parameters in the model, assuming X has full column rank. This equality holds because P is idempotent and symmetric, projecting onto the column space of X. As established in the algebraic properties of projection matrices, \operatorname{trace}(P) = \operatorname{rank}(P).[13]
The trace of P provides a measure of model complexity in regression analysis, representing the degrees of freedom associated with the fitted values. Specifically, \operatorname{trace}(P) = p, where p is the number of parameters, while the trace of the residual projection matrix I - P equals n - p, indicating the degrees of freedom for the residuals, with n denoting the number of observations. This interpretation links the projection's dimensionality directly to statistical inference, such as in estimating variance or testing hypotheses.[14][15]
The diagonal elements of P, known as leverage values, quantify the influence of each observation on the fitted values; their sum equals \operatorname{trace}(P) = p, yielding an average leverage of p/n. This sum underscores the overall "pull" of the model on the data, with high-leverage points potentially affecting fit stability.[16] For instance, in simple linear regression with an intercept and slope, the projection matrix has trace 2, reflecting the two parameters and the rank of the two-column design matrix.[17]
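A short sketch (NumPy, with synthetic data whose parameter values are assumptions for illustration) connects these traces to the degrees of freedom and to the standard unbiased variance estimate \hat{\sigma}^2 = \|(I - P)y\|^2 / (n - p):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma = 200, 2, 0.5
x = rng.uniform(0, 10, size=n)
X = np.column_stack([np.ones(n), x])              # intercept + slope: two-column design
y = 1.0 + 0.3 * x + rng.normal(scale=sigma, size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T
print(np.isclose(np.trace(H), p))                 # model degrees of freedom = 2
print(np.isclose(np.trace(np.eye(n) - H), n - p)) # residual degrees of freedom = n - 2

resid = (np.eye(n) - H) @ y
sigma2_hat = resid @ resid / (n - p)              # unbiased estimate of the error variance
print(sigma2_hat, sigma**2)                       # estimate should be near the true value 0.25

print(np.diag(H).mean(), p / n)                   # average leverage is p/n
```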
Applications in Statistics
Ordinary Least Squares
In the ordinary least squares (OLS) framework, the linear regression model is expressed in matrix form as \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}, where \mathbf{y} is an n \times 1 vector of observations, \mathbf{X} is an n \times p design matrix with full column rank, \boldsymbol{\beta} is a p \times 1 vector of unknown parameters, and \boldsymbol{\epsilon} is an n \times 1 vector of errors.[18] The OLS estimator minimizes the residual sum of squares \|\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}\|^2 and is given by \hat{\boldsymbol{\beta}} = (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{y}.[18] This estimator is unbiased and has minimum variance among linear unbiased estimators under the model's assumptions.[19]
The fitted values are \hat{\mathbf{y}} = \mathbf{X} \hat{\boldsymbol{\beta}} = \mathbf{X} (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{y}, which can be written compactly as \hat{\mathbf{y}} = \mathbf{P} \mathbf{y}, where \mathbf{P} = \mathbf{X} (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top is the projection matrix, also known as the hat matrix.[18] Geometrically, \mathbf{P} orthogonally projects the response vector \mathbf{y} onto the column space of \mathbf{X}, ensuring that \hat{\mathbf{y}} lies in this subspace and minimizes the Euclidean distance to \mathbf{y}.[18]
The residuals are defined as \mathbf{e} = \mathbf{y} - \hat{\mathbf{y}} = (\mathbf{I} - \mathbf{P}) \mathbf{y}, where \mathbf{I} is the n \times n identity matrix.[19] These residuals are orthogonal to the columns of \mathbf{X}, satisfying \mathbf{X}^\top \mathbf{e} = \mathbf{0}, which implies that the unexplained variation is perpendicular to the fitted hyperplane spanned by the predictors.[18] This orthogonality property decomposes \mathbf{y} into fitted and residual components with no correlation between them.[19]
The projection matrix \mathbf{P} in OLS arises under the assumptions of a linear model, errors with zero conditional mean E[\boldsymbol{\epsilon} \mid \mathbf{X}] = \mathbf{0}, homoscedasticity \text{Var}(\boldsymbol{\epsilon} \mid \mathbf{X}) = \sigma^2 \mathbf{I}, and uncorrelated errors.[18] These conditions ensure that \mathbf{P} represents an orthogonal projection onto \text{Col}(\mathbf{X}), with \mathbf{P} being symmetric and idempotent.[19]
For a univariate example with an intercept, consider the simple linear model y_i = \beta_0 + \beta_1 x_i + \epsilon_i for i = 1, \dots, n, where the design matrix \mathbf{X} has a first column of ones and second column \mathbf{x} = (x_1, \dots, x_n)^\top. The projection matrix \mathbf{P} then projects \mathbf{y} onto the span of \{\mathbf{1}, \mathbf{x}\}, yielding fitted values that represent the best linear approximation in the least squares sense; for instance, with centered predictors, the slope estimate \hat{\beta}_1 = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sum (x_i - \bar{x})^2} agrees with this projection.[18]
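The following sketch (NumPy, with simulated data; variable names and parameter values are assumptions for illustration) fits a simple OLS model, forms the hat matrix, and checks the orthogonality of the residuals to the columns of \mathbf{X} as well as the agreement of the centered slope formula with the projection-based fit:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 2.0 - 1.5 * x + rng.normal(scale=0.3, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimate (X^T X)^{-1} X^T y
P = X @ np.linalg.inv(X.T @ X) @ X.T               # hat matrix
y_hat = P @ y
e = y - y_hat

print(np.allclose(y_hat, X @ beta_hat))            # fitted values agree with X beta_hat
print(np.allclose(X.T @ e, 0))                     # residuals orthogonal to the columns of X

# slope from the centered formula matches the projection-based estimate
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
print(np.isclose(slope, beta_hat[1]))
```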
Generalized Least Squares
In linear regression models where the errors exhibit heteroscedasticity or correlation, the generalized least squares (GLS) method employs a projection matrix adapted to the error covariance structure. Consider the linear model y = X \beta + \epsilon, where \mathbb{E}(\epsilon) = 0 and \mathrm{Var}(\epsilon) = \Sigma, with \Sigma a positive definite matrix. The GLS estimator of the parameter vector \beta is given by \hat{\beta} = (X^T \Sigma^{-1} X)^{-1} X^T \Sigma^{-1} y. The fitted values are then \hat{y} = H y, where the projection matrix H = X (X^T \Sigma^{-1} X)^{-1} X^T \Sigma^{-1} projects the response vector onto the column space of X with respect to the inner product induced by \Sigma^{-1}.[20] This formulation generalizes the ordinary least squares case, which arises as a special instance when \Sigma = \sigma^2 I.[20]
A key special case of GLS is weighted least squares (WLS), which applies when the errors are uncorrelated but have unequal variances, so \Sigma is diagonal with entries \sigma_i^2. In this scenario, the weights are w_i = 1 / \sigma_i^2, and the projection matrix becomes H = X (X^T W X)^{-1} X^T W, where W = \Sigma^{-1} is the diagonal weight matrix. This weighting ensures that observations with smaller error variances contribute more to the estimation, yielding more efficient parameter estimates under the specified error structure.[20]
The projection matrix H in GLS retains the idempotence property, satisfying H^2 = H, which confirms its role as a projection operator. However, unlike the orthogonal projection in ordinary least squares, H is generally not symmetric (H^T \neq H) unless \Sigma is a scalar multiple of the identity matrix, reflecting the oblique nature of the projection in the presence of correlated or heteroscedastic errors.[20] For the residuals e = y - \hat{y} = (I - H) y, the covariance matrix is \mathrm{Var}(e) = (I - H) \Sigma (I - H)^T, showing how the projection carries the underlying error structure through to the residual variance.[20]
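A minimal sketch (NumPy, assuming a diagonal \Sigma so that the WLS special case applies; variable names are illustrative) constructs the GLS projection matrix, confirms that it is idempotent but not symmetric, and checks that the residuals are orthogonal to the columns of X in the \Sigma^{-1}-weighted inner product:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
x = rng.uniform(1, 5, size=n)
X = np.column_stack([np.ones(n), x])

# Heteroscedastic, uncorrelated errors: Var(eps_i) grows with x
sigma2 = 0.1 * x**2
Sigma = np.diag(sigma2)
W = np.diag(1.0 / sigma2)                              # weight matrix W = Sigma^{-1}
y = 1.0 + 2.0 * x + rng.normal(scale=np.sqrt(sigma2), size=n)

beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # GLS / WLS estimate
H = X @ np.linalg.inv(X.T @ W @ X) @ X.T @ W           # GLS projection matrix

print(np.allclose(H @ H, H))          # idempotent: H^2 = H
print(np.allclose(H, H.T))            # generally False: not symmetric unless Sigma is proportional to I

e = (np.eye(n) - H) @ y
print(np.allclose(X.T @ W @ e, 0))    # residuals orthogonal to col(X) under the Sigma^{-1} inner product
cov_e = (np.eye(n) - H) @ Sigma @ (np.eye(n) - H).T    # Var(e) = (I - H) Sigma (I - H)^T
print(np.diag(cov_e)[:3])             # heteroscedastic residual variances
```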
Advanced Formulations
Oblique Projections
Oblique projections extend the concept of projections beyond the orthogonal case by allowing the direction of projection to be non-perpendicular to the target subspace. A matrix P represents an oblique projection if it is idempotent, satisfying P^2 = P, but not symmetric, so P^T \neq P; consequently, the projected vector Px and the residual vector (I - P)x are not orthogonal under the standard inner product. This distinguishes oblique projections from orthogonal ones, where symmetry holds and orthogonality is preserved.[21]
A common construction projects onto the column space of a matrix A with respect to the metric defined by a positive definite matrix B: P = A (A^T B^{-1} A)^{-1} A^T B^{-1}, assuming A has full column rank so that the inner matrix is invertible. This projection is orthogonal with respect to the inner product induced by B^{-1} but oblique with respect to the standard Euclidean inner product, so the residual (I - P)x is B^{-1}-orthogonal, rather than perpendicular, to the column space of A.[22]
Representative examples of oblique projections appear in coordinate transformations and in engineering or graphics applications requiring angled views, such as multiview drawings. For instance, in homogeneous coordinates, the matrix P = \begin{bmatrix} 1 & 0 & -\cot \theta & 0 \\ 0 & 1 & -\cot \phi & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} implements an oblique projection of three-dimensional points onto the z = 0 plane along directions specified by the angles \theta and \phi, preserving certain lengths while shearing others. This matrix satisfies P^2 = P but lacks symmetry unless \theta = \phi = 90^\circ.[10]
Oblique projections arise naturally when working in weighted spaces, such as those with a non-identity covariance structure \Sigma \neq I, where the projection is oblique in the Euclidean metric but orthogonal under the weighted inner product \langle x, y \rangle = x^T \Sigma^{-1} y. In this context, setting B = \Sigma in the formula above yields the projection matrix used in generalized least squares.[22]
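The metric-based construction can be checked numerically; the sketch below (NumPy, with a randomly generated positive definite B assumed as the metric) verifies idempotence, the lack of symmetry, and the B^{-1}-orthogonality of the residual to the column space of A:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(5, 2))                   # target subspace: col(A)
M = rng.normal(size=(5, 5))
B = M @ M.T + 5 * np.eye(5)                   # a positive definite "metric" matrix

Binv = np.linalg.inv(B)
P = A @ np.linalg.inv(A.T @ Binv @ A) @ A.T @ Binv   # P = A (A^T B^{-1} A)^{-1} A^T B^{-1}

print(np.allclose(P @ P, P))                  # idempotent, so P is a projection
print(np.allclose(P, P.T))                    # generally False: oblique in the Euclidean metric

x = rng.normal(size=5)
r = x - P @ x
print(np.allclose(A.T @ Binv @ r, 0))         # residual is B^{-1}-orthogonal to col(A)
print(np.allclose(A.T @ r, 0))                # but generally not orthogonal in the standard sense
```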
Blockwise Formulas
Blockwise formulas for projection matrices arise in the context of partitioned design matrices, particularly when the column space is decomposed into nested or augmented subspaces. Consider a design matrix X partitioned as X = [A \, B], where the columns of A span a subspace \mathcal{A} and the columns of X together span the larger subspace \mathcal{X}. The orthogonal projection matrix onto \mathcal{X}, denoted P_X, can be expressed in terms of the projection P_A onto \mathcal{A} and the contribution from B orthogonal to \mathcal{A}: P_X = P_A + (I - P_A) B \left[ B^\top (I - P_A) B \right]^{-1} B^\top (I - P_A), assuming B^\top (I - P_A) B is invertible, which holds when the columns of B contribute a full-rank component within the orthogonal complement of \mathcal{A}.[23] This decomposition highlights the incremental nature of the projection, where the first term projects onto the initial subspace and the second term adds the projection onto the component of B orthogonal to \mathcal{A}.
The second term, (I - P_A) B \left[ B^\top (I - P_A) B \right]^{-1} B^\top (I - P_A), represents the incremental projection contributed by the added variables in B. This structure is particularly useful in sequential model building, as it allows updating the projection matrix without recomputing the full inverse of X^\top X. In the context of the hat matrix in regression, this incremental term captures how additional predictors modify the fitted values and associated diagnostics. For example, in linear regression, adding a single covariate corresponding to a column vector b (so B = b) updates the leverages, which are the diagonal elements of the hat matrix. The new leverage for observation i becomes h_{ii}^{(\mathrm{new})} = h_{ii}^{(\mathrm{old})} + \frac{[(I - P_A) b]_i^2}{[(I - P_A) b]^\top (I - P_A) b}, reflecting the increased influence of that observation if it has high residual leverage in the orthogonalized direction. This update formula facilitates efficient computation in stepwise regression procedures.[23]
Regarding ranks, the decomposition preserves rank additivity in the projected subspaces: \operatorname{rank}(P_X) = \operatorname{rank}(P_A) + \operatorname{rank}\left( (I - P_A) B \left[ B^\top (I - P_A) B \right]^{-1} B^\top (I - P_A) \right), where the second rank term measures the dimension added by the orthogonal component of B. This follows from the direct sum decomposition \mathcal{X} = \mathcal{A} \oplus (\mathcal{X} \ominus \mathcal{A}), ensuring the total dimension is the sum of the dimensions of the nested and complementary subspaces.[23] The idempotence of P_X is preserved through this block structure, consistent with the general algebraic properties of projections.
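The blockwise identity and the rank additivity can be verified directly; in the following sketch (NumPy; the helper proj() is a hypothetical convenience, not a library routine), the incremental formula is compared against the projection computed from the full partitioned matrix:

```python
import numpy as np

def proj(M):
    """Orthogonal projection onto the column space of M (via the pseudoinverse)."""
    return M @ np.linalg.pinv(M)

rng = np.random.default_rng(6)
n = 30
A = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # initial design
B = rng.normal(size=(n, 2))                                  # added predictors

PA = proj(A)
I = np.eye(n)
R = (I - PA) @ B                                             # component of B orthogonal to col(A)
increment = R @ np.linalg.inv(B.T @ (I - PA) @ B) @ R.T      # incremental projection term

PX_block = PA + increment
PX_direct = proj(np.column_stack([A, B]))
print(np.allclose(PX_block, PX_direct))                      # blockwise formula matches direct computation

# rank additivity: rank(P_X) = rank(P_A) + rank(increment)
ranks = [np.linalg.matrix_rank(M) for M in (PX_direct, PA, increment)]
print(ranks[0] == ranks[1] + ranks[2])
```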
Computation and Extensions
Numerical Aspects
Computing the projection matrix P = X (X^T X)^{-1} X^T directly via the normal equations is numerically unstable, particularly when the design matrix X exhibits multicollinearity or near-collinearity, as the condition number of X^T X is the square of that of X, amplifying rounding errors. A more stable approach uses the QR decomposition X = Q R, where Q has orthonormal columns, yielding P = Q Q^T; this avoids explicit inversion and preserves numerical accuracy even for ill-conditioned X. High multicollinearity, indicated by a large condition number \kappa(X) > 30, inflates the variance of regression coefficients and can lead to extreme leverage values on the diagonal of P, where the leverages h_{ii} = x_i^T (X^T X)^{-1} x_i measure an observation's potential influence and may exceed typical bounds (e.g., h_{ii} > 2p/n for p predictors and n observations).[24]
For rank-deficient cases where \operatorname{rank}(X) = r < \min(m,n), the singular value decomposition (SVD) X = U \Sigma V^T provides a robust alternative, with the projection onto the column space given by P = U_r U_r^T, where U_r comprises the first r left singular vectors corresponding to nonzero singular values; this handles numerical rank deficiency effectively. Orthogonal projections are typically computed via QR factorization algorithms, such as Householder reflections, which introduce zeros column by column through orthogonal transformations and ensure backward stability with O(m n^2) flops for an m \times n matrix, or the modified Gram-Schmidt process, which orthogonalizes columns sequentially while mitigating the loss-of-orthogonality issues present in the classical version.[25] In practice, blockwise update formulas, such as those described above, can further improve efficiency for large-scale computations, though stability remains paramount.
Implementations in statistical software often compute projections implicitly for efficiency and stability; for instance, the R function lm() employs QR decomposition internally to solve least squares problems, with leverages accessible via hatvalues() on the fitted model object. Similarly, Python's scipy.linalg.lstsq delegates to LAPACK least squares routines based on QR or SVD factorizations to handle such problems in linear models.
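The stability contrast between these approaches can be illustrated as follows (NumPy; the polynomial design matrix is an assumed example chosen because it is poorly conditioned). The sketch forms the same projection matrix three ways: via the normal equations, a reduced QR factorization, and a truncated SVD:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
t = np.linspace(0, 1, n)
X = np.column_stack([t**k for k in range(6)])   # monomial columns: poorly conditioned design
print(np.linalg.cond(X))                        # large condition number; cond(X^T X) is its square

# Normal equations (unstable when X is ill-conditioned)
P_normal = X @ np.linalg.inv(X.T @ X) @ X.T

# QR factorization: P = Q Q^T, no explicit inverse
Q, R = np.linalg.qr(X, mode='reduced')
P_qr = Q @ Q.T

# SVD: P = U_r U_r^T, robust to (numerical) rank deficiency
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = np.sum(s > s[0] * np.finfo(float).eps * max(X.shape))   # numerical rank
P_svd = U[:, :r] @ U[:, :r].T

print(np.abs(P_qr - P_svd).max())        # QR and SVD agree closely
print(np.abs(P_normal - P_svd).max())    # the normal-equation version typically drifts further
```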