
Householder transformation

In linear algebra, the Householder transformation, also known as a Householder reflection or elementary reflector, is a linear transformation that performs a reflection of vectors across a hyperplane in Euclidean space. It is mathematically defined as H = I - \frac{2 u u^T}{u^T u}, where I is the identity matrix and u is a non-zero column vector, or equivalently H = I - 2 v v^T when v is the unit vector in the direction of u. This construction ensures that H is symmetric (H^T = H), orthogonal (H^T H = I), and involutory (H^2 = I), meaning it preserves lengths and angles while reflecting any vector x to Hx = x - 2 \frac{u^T x}{u^T u} u. Named after mathematician Alston S. Householder, who introduced the concept in his 1958 paper on unitary triangularization of nonsymmetric matrices, the transformation provides a numerically stable method for introducing zeros in specific positions of a matrix without disrupting previously computed zeros. Key applications include the QR decomposition of a matrix A into an orthogonal matrix Q and an upper triangular matrix R, achieved by applying a sequence of Householder transformations to zero out subdiagonal elements column by column. It is also essential for reducing symmetric matrices to tridiagonal form in eigenvalue computations and for solving least-squares problems, offering advantages in computational efficiency and stability over alternatives like the Gram-Schmidt process. In practice, Householder transformations are implemented in libraries such as LAPACK, where the vector u (or a scaled version) is stored compactly to minimize storage and enable blocked algorithms for large-scale computations. Their reflection property makes them particularly suited for preserving symmetry in certain decompositions, and extensions like generalized Householder transformations have been developed for sparse matrices and other specialized numerical tasks.

Definition

Mathematical Formulation

The Householder transformation, also known as a Householder reflection, is a linear operator defined on an inner product space that performs a reflection of vectors across a hyperplane passing through the origin, with the hyperplane orthogonal to a given non-zero vector \mathbf{v}. In the context of \mathbb{R}^n, equipped with the standard inner product \langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^T \mathbf{y}, the transformation is given by H(\mathbf{v}) = I - \frac{2 \mathbf{v} \mathbf{v}^T}{\| \mathbf{v} \|^2}, where I is the n \times n identity matrix and \| \mathbf{v} \|^2 = \mathbf{v}^T \mathbf{v}. This operator is orthogonal, satisfying H(\mathbf{v})^T H(\mathbf{v}) = I, and thus preserves lengths and angles in the space. An inner product space is a vector space over the real or complex numbers endowed with an inner product, which induces a norm \| \mathbf{x} \| = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle} and enables the definition of orthogonality via \langle \mathbf{x}, \mathbf{y} \rangle = 0. The orthogonal projection of a vector \mathbf{x} onto the one-dimensional subspace spanned by \mathbf{v} is \text{proj}_{\mathbf{v}}(\mathbf{x}) = \frac{\langle \mathbf{x}, \mathbf{v} \rangle}{\| \mathbf{v} \|^2} \mathbf{v} = \frac{\mathbf{v}^T \mathbf{x}}{\| \mathbf{v} \|^2} \mathbf{v}. Applying the Householder transformation to \mathbf{x} yields the reflected vector H(\mathbf{v}) \mathbf{x} = \mathbf{x} - 2 \text{proj}_{\mathbf{v}}(\mathbf{x}), which geometrically mirrors \mathbf{x} across the hyperplane perpendicular to \mathbf{v}, leaving vectors in that hyperplane unchanged. The Householder transformation is named after Alston S. Householder, an American mathematician who formalized its use in the 1950s for numerical linear algebra, particularly in matrix decompositions. His seminal 1958 paper introduced the method in the context of unitary triangularization, establishing its foundational role in stable computational algorithms.

Householder Reflector Matrix

The Householder reflector matrix, denoted as Q, represents the linear transformation corresponding to a reflection across a hyperplane in Euclidean space and takes the explicit form Q = I - 2 \mathbf{u} \mathbf{u}^T, where I is the identity matrix and \mathbf{u} is a unit vector (\| \mathbf{u} \|_2 = 1) serving as the normal to the reflection hyperplane. This matrix form derives directly from the geometric definition of a reflection, which subtracts twice the orthogonal projection of a vector onto the normal from the original vector. Here, \mathbf{u} = \mathbf{v} / \| \mathbf{v} \|_2 for some nonzero vector \mathbf{v} defining the hyperplane's orientation. The matrix Q is symmetric, as Q^T = (I - 2 \mathbf{u} \mathbf{u}^T)^T = I - 2 (\mathbf{u} \mathbf{u}^T)^T = I - 2 \mathbf{u} \mathbf{u}^T = Q, since the outer product \mathbf{u} \mathbf{u}^T is symmetric. It is also orthogonal, satisfying Q^2 = I (and thus Q Q^T = I): \begin{align*} Q^2 &= (I - 2 \mathbf{u} \mathbf{u}^T)(I - 2 \mathbf{u} \mathbf{u}^T) \\ &= I - 4 \mathbf{u} \mathbf{u}^T + 4 \mathbf{u} \mathbf{u}^T \mathbf{u} \mathbf{u}^T \\ &= I - 4 \mathbf{u} \mathbf{u}^T + 4 (\mathbf{u}^T \mathbf{u}) \mathbf{u} \mathbf{u}^T \\ &= I - 4 \mathbf{u} \mathbf{u}^T + 4 \mathbf{u} \mathbf{u}^T = I, \end{align*} where the simplification relies on \mathbf{u}^T \mathbf{u} = 1. These properties ensure that Q preserves vector norms and inner products, making it suitable for numerically stable matrix computations. For a concrete illustration in two dimensions, consider \mathbf{v} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, so \| \mathbf{v} \|_2 = \sqrt{2} and \mathbf{u} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}. Then, \mathbf{u} \mathbf{u}^T = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, and Q = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} - 2 \cdot \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}. 
Applying Q to the vector \mathbf{e}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} yields Q \mathbf{e}_1 = \begin{pmatrix} 0 \\ -1 \end{pmatrix}, demonstrating the reflection: the vector \mathbf{e}_1 is mirrored across the line perpendicular to \mathbf{v} (the line y = -x). In contrast to arbitrary reflection matrices, the Householder reflector employs a specific choice of \mathbf{v} (and thus \mathbf{u}) to map a target vector \mathbf{x} onto a scalar multiple of a standard basis vector, such as \| \mathbf{x} \|_2 \mathbf{e}_1, which zeros out all but the first component. This targeted selection distinguishes it from general reflections and enables efficient zero introductions in matrix factorizations.
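The 2D example above can be checked numerically (a minimal NumPy sketch; variable names are illustrative):

```python
import numpy as np

# Build the Householder reflector Q = I - 2 u u^T for v = (1, 1),
# reproducing the two-dimensional example above.
v = np.array([1.0, 1.0])
u = v / np.linalg.norm(v)            # unit normal to the reflection line
Q = np.eye(2) - 2.0 * np.outer(u, u)

e1 = np.array([1.0, 0.0])
print(Q)        # [[ 0. -1.]
                #  [-1.  0.]]
print(Q @ e1)   # [ 0. -1.]: e1 mirrored across the line y = -x
```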

Properties

Algebraic Properties

The Householder transformation matrix Q = I - 2 \frac{u u^T}{u^T u}, where u is a nonzero vector, is orthogonal, satisfying Q^T Q = I. This orthogonality follows directly from the form of Q, as Q^T Q = (I - 2 \frac{u u^T}{u^T u})^T (I - 2 \frac{u u^T}{u^T u}) = I - 4 \frac{u u^T}{u^T u} + 4 \frac{u u^T u u^T}{(u^T u)^2} = I, since u^T u is a scalar and u u^T u = (u^T u) u. Additionally, Q is real symmetric, with Q = Q^T, because the outer product u u^T is symmetric. The determinant of Q is \det(Q) = -1, distinguishing it as a reflection rather than a rotation. This arises because Q has eigenvalues consisting of +1 with multiplicity n-1 (for eigenvectors orthogonal to u) and -1 with multiplicity 1 (for the eigenvector u). Consequently, the trace of Q in n dimensions is \operatorname{tr}(Q) = n - 2, reflecting the sum of these eigenvalues. The matrix Q is involutory, satisfying Q^2 = I, which implies that Q is its own inverse. This property holds because Q^2 = (I - 2 \frac{u u^T}{u^T u})^2 = I - 4 \frac{u u^T}{u^T u} + 4 \frac{u u^T u u^T}{(u^T u)^2} = I, as the cross terms cancel. The rank-one update structure of Q, analyzed via the Sherman-Morrison formula for low-rank modifications to the identity, underscores its efficiency in computations. Products of Householder matrices yield orthogonal matrices, enabling the construction of arbitrary orthogonal transformations through sequential reflections.
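These algebraic properties are easy to confirm numerically (an illustrative NumPy sketch for a random u in R^5):

```python
import numpy as np

# Verify symmetry, orthogonality, involution, determinant, and trace of
# Q = I - 2 u u^T / (u^T u) for an arbitrary nonzero vector u.
rng = np.random.default_rng(0)
n = 5
u = rng.standard_normal(n)
Q = np.eye(n) - 2.0 * np.outer(u, u) / (u @ u)

assert np.allclose(Q, Q.T)                 # real symmetric
assert np.allclose(Q.T @ Q, np.eye(n))     # orthogonal
assert np.allclose(Q @ Q, np.eye(n))       # involutory: Q is its own inverse
assert np.isclose(np.linalg.det(Q), -1.0)  # reflection, not rotation
assert np.isclose(np.trace(Q), n - 2)      # eigenvalues +1 (n-1 times) and -1 (once)
```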

Geometric Interpretation

The transformation represents a reflection of vectors across a hyperplane in Euclidean space, where the hyperplane is defined as perpendicular to a chosen nonzero vector \mathbf{u}. This geometric operation leaves all points on the hyperplane unchanged while flipping points on either side to their mirror image, effectively negating their components parallel to \mathbf{u}. Such reflections preserve Euclidean distances and angles between vectors, ensuring that the transformation is an isometry of the space. Vectors lying within the hyperplane—those orthogonal to \mathbf{u}—remain fixed under the reflection, as their projection onto \mathbf{u} is zero. For any vector, the transformation subtracts twice its projection onto \mathbf{u}, which geometrically reverses the component parallel to \mathbf{u} across the hyperplane along that direction. This selective reversal along the normal direction \mathbf{u}, while fixing the orthogonal complement, highlights the transformation's role in reorienting specific components without distorting the overall geometry. In two dimensions, the Householder transformation acts as a reflection over a line perpendicular to \mathbf{u}, such as mirroring a point across the y-axis if \mathbf{u} aligns with the x-axis, resulting in a simple flip without any rotational component. Extending to three dimensions, it reflects over a plane with normal \mathbf{u}, for instance, transforming a vector pointing away from the xy-plane to its symmetric counterpart on the other side of the plane. Unlike rotations, which preserve orientation and involve turning around an axis, the Householder transformation performs a pure flip across the hyperplane, reversing orientation in the affected direction. In contrast to projections, which map vectors onto a subspace and are neither bijective nor invertible, the reflection is a bijection of the whole space onto itself, allowing reversal by applying the same transformation again. A distinctive geometric feature is its application in aligning a vector with a coordinate direction, such as reflecting a given vector onto the first coordinate axis \mathbf{e}_1 to zero out its subdiagonal components in a column, all while maintaining the vector's original norm due to the inherent orthogonality of the transformation.
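A small NumPy check of these geometric facts in R^3 (illustrative; the vectors are arbitrary choices):

```python
import numpy as np

# The reflection H = I - 2 u u^T fixes vectors orthogonal to u, negates u
# itself, preserves norms, and undoes itself when applied twice.
u = np.array([0.0, 0.0, 1.0])           # normal to the xy-plane
H = np.eye(3) - 2.0 * np.outer(u, u)

w = np.array([3.0, -2.0, 0.0])          # lies in the xy-plane (orthogonal to u)
assert np.allclose(H @ w, w)            # fixed by the reflection
assert np.allclose(H @ u, -u)           # the normal direction is negated

x = np.array([1.0, 2.0, 5.0])
assert np.isclose(np.linalg.norm(H @ x), np.linalg.norm(x))  # isometry
assert np.allclose(H @ (H @ x), x)      # applying twice restores x
```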

Construction and Computation

Deriving the Reflector Vector

The reflector vector \mathbf{u} in a Householder transformation is derived to reflect a given nonzero vector \mathbf{x} \in \mathbb{R}^n onto a scalar multiple of the first standard basis vector \mathbf{e}_1 = [1, 0, \dots, 0]^T, thereby zeroing all components of \mathbf{x} except the first. This construction ensures that the transformation satisfies H\mathbf{x} = \beta \mathbf{e}_1, where \beta = \pm \|\mathbf{x}\|_2 and \|\cdot\|_2 denotes the Euclidean vector norm, while preserving the norm since H is orthogonal. The basic derivation begins by noting that the reflection hyperplane is perpendicular to \mathbf{u}, so \mathbf{u} must be parallel to the difference \mathbf{x} - \beta \mathbf{e}_1. A simple choice is \beta = \|\mathbf{x}\|_2, yielding the unnormalized reflector \tilde{\mathbf{u}} = \mathbf{x} - \|\mathbf{x}\|_2 \mathbf{e}_1. The normalized reflector is then \mathbf{u} = \tilde{\mathbf{u}} / \|\tilde{\mathbf{u}}\|_2, ensuring \mathbf{u}^T \mathbf{u} = 1. With this \mathbf{u}, the transformation satisfies H\mathbf{x} = \|\mathbf{x}\|_2 \mathbf{e}_1. For improved numerical properties, the sign of \beta is adjusted based on the first component of \mathbf{x}: set \beta = -\operatorname{sign}(x_1) \|\mathbf{x}\|_2, where \operatorname{sign}(x_1) = 1 if x_1 \geq 0 and -1 otherwise (with \operatorname{sign}(0) = 1 by convention). The unnormalized reflector becomes \tilde{\mathbf{u}} = \beta \mathbf{e}_1 - \mathbf{x}, and the normalized form is \mathbf{u} = \frac{ -\operatorname{sign}(x_1) \|\mathbf{x}\|_2 \mathbf{e}_1 - \mathbf{x} }{ \left\| \mathbf{x} + \operatorname{sign}(x_1) \|\mathbf{x}\|_2 \mathbf{e}_1 \right\|_2 }. This choice keeps the first component of \tilde{\mathbf{u}} safely away from zero even when \mathbf{x} is nearly parallel to \mathbf{e}_1, and the resulting reflection yields H\mathbf{x} = -\operatorname{sign}(x_1) \|\mathbf{x}\|_2 \mathbf{e}_1. 
The outline of the derivation process is as follows: first, compute the norm \sigma = \|\mathbf{x}\|_2; second, determine the sign s = -\operatorname{sign}(x_1); third, form the unnormalized reflector \tilde{\mathbf{u}} = s \sigma \mathbf{e}_1 - \mathbf{x}; and finally, normalize to obtain \mathbf{u} if a unit vector is required (though in applications, the unnormalized form is often retained with a scaling factor \beta = 2 / \tilde{\mathbf{u}}^T \tilde{\mathbf{u}} for efficient computation of H). This process assumes familiarity with vector norms and the standard basis vector \mathbf{e}_1. As a concrete example, consider \mathbf{x} = [3, 4, 0]^T, for which \|\mathbf{x}\|_2 = 5 and x_1 = 3 > 0, so s = -1. The unnormalized reflector is \tilde{\mathbf{u}} = -1 \cdot 5 \cdot \mathbf{e}_1 - \mathbf{x} = [-5, 0, 0]^T - [3, 4, 0]^T = [-8, -4, 0]^T, with \|\tilde{\mathbf{u}}\|_2 = \sqrt{64 + 16 + 0} = \sqrt{80} = 4\sqrt{5}. The normalized reflector is \mathbf{u} = [-8, -4, 0]^T / (4\sqrt{5}) = [-2/\sqrt{5}, -1/\sqrt{5}, 0]^T. The corresponding Householder transformation is H = I - 2 \mathbf{u} \mathbf{u}^T, and applying it gives H \mathbf{x} = [-5, 0, 0]^T, as the first (and only nonzero) component matches s \|\mathbf{x}\|_2 = -5.
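The four steps above, applied to the worked example x = [3, 4, 0], can be sketched in NumPy as follows:

```python
import numpy as np

# Derive the Householder reflector for x = [3, 4, 0] and confirm H x = [-5, 0, 0].
x = np.array([3.0, 4.0, 0.0])
sigma = np.linalg.norm(x)                 # step 1: sigma = 5
s = -1.0 if x[0] >= 0 else 1.0            # step 2: s = -sign(x_1), sign(0) = 1
u_tilde = s * sigma * np.eye(3)[0] - x    # step 3: [-8, -4, 0]
u = u_tilde / np.linalg.norm(u_tilde)     # step 4: normalize
H = np.eye(3) - 2.0 * np.outer(u, u)

print(H @ x)   # [-5.  0.  0.]
```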

Numerical Stability and Algorithms

The numerical stability of Householder transformations hinges on careful selection of the reflector \mathbf{u}, particularly the sign adjustment in its first component to prevent subtractive cancellation during norm computation. Specifically, choosing the sign of the scaling factor \sigma = -\operatorname{sign}(x_1) \|\mathbf{x}\|, where x_1 is the first element of the target vector \mathbf{x}, ensures that the difference x_1 - \sigma is computed without loss of precision, as it avoids near-cancellation that would occur if x_1 and \sigma shared the same sign. This choice maximizes the magnitude of the leading component of the reflector, keeping the relative error growth at O(\epsilon), where \epsilon is machine precision. Without this adjustment, severe cancellation can occur, leading to a loss of significant digits that amplifies errors in subsequent computations, as standard error analyses of the cancellation-prone variant demonstrate. Furthermore, Householder transformations exhibit backward stability when applied in sequence, meaning the computed factors \widehat{\mathbf{Q}} and \widehat{\mathbf{R}} satisfy \mathbf{A} + \Delta \mathbf{A} = \widehat{\mathbf{Q}} \widehat{\mathbf{R}} with \|\Delta \mathbf{A}\| = O(\epsilon) \|\mathbf{A}\|, independent of the condition number of the matrix. This property arises from the orthogonal nature of the transformations and the use of rank-one updates, which bound rounding errors effectively in floating-point arithmetic. Numerical experiments confirm that this holds even for ill-conditioned matrices, with residual norms remaining close to machine precision. In practice, the algorithm for applying a Householder transformation \mathbf{H} = \mathbf{I} - \beta \mathbf{u} \mathbf{u}^T (with \beta = 2 / (\mathbf{u}^T \mathbf{u})) to a matrix \mathbf{A} from the left proceeds via a rank-one update, avoiding explicit formation of \mathbf{H}, whose construction and multiplication would cost O(n^3). 
The update step computes the row vector \mathbf{w}^T = \mathbf{u}^T \mathbf{A}, then performs \mathbf{A} \leftarrow \mathbf{A} - \beta \mathbf{u} \mathbf{w}^T, efficiently zeroing subdiagonal elements in a column during triangularization. This approach leverages Level-2 BLAS operations for the update, ensuring efficient use of memory bandwidth. Pseudocode for applying the transformation to an m \times n matrix \mathbf{A} (assuming \mathbf{u} is an m-vector, often stored with first element 1 for compactness):
function apply_householder_left(A, u)
    beta = 2 / (u' * u)
    w = u' * A              % row vector: (1 x m) * (m x n) = 1 x n
    A = A - beta * (u * w)  % rank-one (outer product) update
    return A
end
This implementation requires approximately 4mn floating-point operations for an m \times n matrix (2mn for the inner products and 2mn for the rank-one update), yielding O(n^2) cost per transformation for square n \times n dense matrices. Householder transformations are preferred over Givens rotations for dense matrices due to their lower operational count—roughly half the flops for a full column reduction—and improved efficiency from accessing contiguous memory blocks in the rank-one updates, which touch fewer cache lines than the scattered accesses of sequential Givens applications.
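A direct NumPy translation of the pseudocode above (a sketch; the check against the explicit matrix product is for illustration only):

```python
import numpy as np

def apply_householder_left(A, u):
    """Apply H = I - beta * u u^T (beta = 2 / u^T u) to A from the left
    via a rank-one update, without forming H explicitly."""
    A = np.asarray(A, dtype=float).copy()
    u = np.asarray(u, dtype=float)
    beta = 2.0 / (u @ u)
    w = u @ A                       # row vector u^T A, shape (n,)
    A -= beta * np.outer(u, w)      # rank-one (outer product) update
    return A

# Sanity check against explicitly forming H and multiplying.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
u = rng.standard_normal(4)
H = np.eye(4) - 2.0 * np.outer(u, u) / (u @ u)
assert np.allclose(apply_householder_left(A, u), H @ A)
```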

Applications in Linear Algebra

QR Factorization

The QR factorization, also known as QR decomposition, represents a matrix A \in \mathbb{R}^{m \times n} (with m \geq n) as the product A = QR, where Q is an orthogonal matrix and R is an upper triangular matrix. Householder transformations provide an efficient and stable method to compute this decomposition by applying a sequence of reflections from the left to A—n-1 reflections for a square matrix, n when m > n—successively eliminating the subdiagonal elements in each column to produce the upper triangular R. This approach ensures that the transformations preserve the Euclidean norms of the columns of A, as each Householder matrix is orthogonal. The process begins with the first column of A: a Householder reflector H_1 is constructed to map the subcolumn A(1:m,1) onto a multiple of the first standard basis vector e_1, zeroing all entries below the (1,1) position. The updated matrix becomes A^{(1)} = H_1 A, with the first column now having zeros below the diagonal. For the second column, a reflector H_2 is applied to the trailing (m-1) \times (n-1) submatrix A^{(1)}(2:m, 2:n), zeroing entries below the (2,2) position without disturbing the prior zeros, yielding A^{(2)} = H_2 A^{(1)}. This continues for k = 1 to n-1, with each H_k acting on the submatrix starting from row k and column k, resulting in R = H_{n-1} \cdots H_2 H_1 A. The full orthogonal matrix is then Q = H_1 H_2 \cdots H_{n-1}, satisfying Q^T A = R or equivalently A = QR, where R is upper triangular; in particular, |r_{11}| equals the norm of the first column of A, and each subsequent diagonal entry equals (up to sign) the norm of the corresponding transformed subcolumn. In practice, the algorithm avoids explicitly forming Q to save storage and arithmetic. It copies A into R and applies each H_k (from k=1 to n-1) to the trailing submatrix of R, accumulating the effect on Q = I_m only when the orthogonal factor is explicitly required. 
To store the reflectors compactly, the lower triangular part of R (below the diagonal) is overwritten with the Householder vectors v_k (normalized so the first entry is 1), and auxiliary scalars \tau_k = 2 / (v_k^T v_k) are stored separately. This compact representation allows reconstruction of Q, or application of Q or Q^T to a matrix, in O(mn^2) operations without forming the full orthogonal factor. The overall flop count for the factorization is approximately 2mn^2 - \frac{2}{3}n^3. Consider the example of decomposing the 3×3 matrix A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{pmatrix}. For the first column x = (1,4,7)^T, compute \|x\|_2 = \sqrt{66} \approx 8.124, and choose the reflector vector v_1 = x + \operatorname{sign}(x_1) \|x\|_2 e_1 = (1+8.124, 4, 7)^T \approx (9.124, 4, 7)^T, normalized to u_1 = v_1 / \|v_1\|_2 with \tau_1 = 2 / (u_1^T u_1). Applying H_1 = I - \tau_1 u_1 u_1^T to A zeros the (2,1) and (3,1) entries, yielding A^{(1)} \approx \begin{pmatrix} -8.124 & -9.599 & -11.941 \\ 0 & -0.087 & -0.548 \\ 0 & -0.903 & -1.458 \end{pmatrix}, where the sign choice ensures numerical stability by avoiding cancellation. For the second column of the submatrix y \approx (-0.087, -0.903)^T, \|y\|_2 \approx 0.907, \operatorname{sign}(y_1) = -1, v_2 = y + \operatorname{sign}(y_1) \|y\|_2 e_1 \approx (-0.994, -0.903)^T, normalized to u_2, and H_2 applied to the 2×2 trailing submatrix zeros the (3,2) entry, resulting in the upper triangular R \approx \begin{pmatrix} -8.124 & -9.599 & -11.941 \\ 0 & 0.907 & 1.505 \\ 0 & 0 & 0.402 \end{pmatrix}. The product Q = H_1 H_2 can then be reconstructed from the stored vectors to verify A = QR. This method offers key benefits in numerical computation, particularly for solving least-squares problems \min_x \|Ax - b\|_2, where the solution is x = R^{-1} (Q^T b), avoiding the ill-conditioned normal equations A^T A x = A^T b whose condition number is \kappa(A)^2; instead, the Householder QR preserves \kappa(R) = \kappa(A), ensuring backward stability with small residual errors bounded by machine precision times \kappa(A). 
It is also more efficient than alternatives like Givens rotations, requiring about half the operations for dense matrices.
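The column-by-column procedure above can be sketched in NumPy and checked against the 3×3 example (an illustrative implementation, not a production routine; it forms Q explicitly rather than using the compact storage scheme):

```python
import numpy as np

def householder_qr(A):
    """Householder QR: returns (Q, R) with A = Q R and R upper triangular."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    R = A.copy()
    Q = np.eye(m)
    for k in range(min(m - 1, n)):
        x = R[k:, k]
        sign = 1.0 if x[0] >= 0 else -1.0
        v = x.copy()
        v[0] += sign * np.linalg.norm(x)    # v = x + sign(x_1) ||x|| e_1
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue                        # column already zeroed
        u = v / nv
        R[k:, :] -= 2.0 * np.outer(u, u @ R[k:, :])   # R <- H_k R
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ u, u)   # Q <- Q H_k
    return Q, R

A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]])
Q, R = householder_qr(A)
assert np.allclose(Q @ R, A)                 # A = QR
assert np.allclose(Q.T @ Q, np.eye(3))       # Q orthogonal
assert np.allclose(np.tril(R, -1), 0.0)      # R upper triangular
print(np.round(R, 3))                        # R[0,0] = -sqrt(66) ≈ -8.124
```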

Hessenberg and Tridiagonal Reduction

Householder transformations are employed to reduce a general matrix to upper Hessenberg form through an orthogonal similarity transformation, where the resulting matrix H has zeros below the subdiagonal. This is achieved by applying a sequence of reflectors from both the left and right, yielding H = Q^T A Q, with Q orthogonal. The process begins by zeroing the elements below the subdiagonal in the first column using a reflector on rows 2 through n, followed by applying the same reflector from the right to maintain similarity; subsequent columns are treated similarly, and because each reflector acts only on rows and columns k+1 through n, the right-side application does not disturb previously introduced zeros. For symmetric matrices, the Hessenberg reduction simplifies to tridiagonal form, as the symmetry ensures that the superdiagonal mirrors the subdiagonal. Here, n-2 Householder transformations suffice: for the k-th step, a reflector is constructed to zero entries below the subdiagonal in column k of the trailing submatrix, applied from both sides to produce T = Q^T A Q, where T is symmetric tridiagonal and Q orthogonal. The subdiagonal elements are preserved throughout, capturing the essential off-diagonal structure. This two-sided application is numerically stable, with the backward error bounded by O(\epsilon_{\text{mach}} \|A\|), where \epsilon_{\text{mach}} is machine precision. The similarity transformation A' = Q^T A Q preserves the eigenvalues of A, as orthogonal similarities maintain the spectrum. In the basic algorithm, each step computes the reflector u for the subcolumn x by setting u = x + \operatorname{sign}(x_1) \|x\| e_1, normalized, then applies H_k = I - 2 u u^T to the appropriate submatrix. In the implicitly shifted QR iteration that follows this reduction, bulge chasing with additional small reflectors eliminates the nonzeros introduced by each shift step, restoring the condensed form without fill-in beyond the band. The total cost is approximately \frac{10}{3} n^3 flops for Hessenberg form and \frac{4}{3} n^3 flops for symmetric tridiagonal form of an n \times n matrix. 
A representative example illustrates the tridiagonalization of a 4×4 symmetric matrix: A = \begin{pmatrix} 4 & 1 & -2 & 2 \\ 1 & 2 & 0 & 1 \\ -2 & 0 & 3 & -2 \\ 2 & 1 & -2 & -1 \end{pmatrix} In the first step, a Householder reflector P_1, acting on rows and columns 2 through 4, zeros the entries below the subdiagonal in the first column, i.e., positions (3,1) and (4,1). Applying P_1 from both sides gives: A_1 = P_1 A P_1 = \begin{pmatrix} 4 & -3 & 0 & 0 \\ -3 & \frac{10}{3} & 1 & \frac{4}{3} \\ 0 & 1 & \frac{5}{3} & -\frac{4}{3} \\ 0 & \frac{4}{3} & -\frac{4}{3} & -1 \end{pmatrix} The second step uses P_2, acting on rows and columns 3 and 4, to zero the (4,2) entry, resulting in the tridiagonal form: T = P_2 A_1 P_2 = \begin{pmatrix} 4 & -3 & 0 & 0 \\ -3 & \frac{10}{3} & -\frac{5}{3} & 0 \\ 0 & -\frac{5}{3} & -\frac{33}{25} & \frac{68}{75} \\ 0 & 0 & \frac{68}{75} & \frac{149}{75} \end{pmatrix} The subdiagonal elements (e.g., -3 and -5/3) are preserved, facilitating subsequent eigenvalue computations. This reduction to Hessenberg or tridiagonal form serves as an essential precursor to the QR algorithm for computing eigenvalues, as the condensed structure accelerates the iterations while preserving spectral properties.
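The two-sided reduction can be sketched in a few lines of NumPy and checked against the 4×4 example (an illustrative implementation using the u = x + sign(x_1)||x|| e_1 convention from the text):

```python
import numpy as np

def tridiagonalize(A):
    """Householder tridiagonalization of a symmetric matrix via two-sided
    reflections: returns T = Q^T A Q, symmetric tridiagonal."""
    A = np.asarray(A, dtype=float).copy()
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]
        sign = 1.0 if x[0] >= 0 else -1.0
        v = x.copy()
        v[0] += sign * np.linalg.norm(x)
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue
        u = v / nv
        A[k + 1:, :] -= 2.0 * np.outer(u, u @ A[k + 1:, :])   # left application
        A[:, k + 1:] -= 2.0 * np.outer(A[:, k + 1:] @ u, u)   # right application
    return A

A = np.array([[4.0, 1.0, -2.0, 2.0],
              [1.0, 2.0, 0.0, 1.0],
              [-2.0, 0.0, 3.0, -2.0],
              [2.0, 1.0, -2.0, -1.0]])
T = tridiagonalize(A)
assert np.isclose(T[1, 0], -3.0) and np.isclose(T[1, 1], 10.0 / 3.0)
assert np.isclose(T[2, 1], -5.0 / 3.0) and np.isclose(T[2, 2], -33.0 / 25.0)
# Orthogonal similarity preserves the spectrum.
assert np.allclose(np.sort(np.linalg.eigvalsh(T)), np.sort(np.linalg.eigvalsh(A)))
```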

Broader Applications

Geometric Optics

In geometric optics, the Householder transformation provides a precise model for specular reflection, where light rays bounce off a surface following the law that the angle of incidence equals the angle of reflection. The vector \mathbf{u} in the transformation corresponds to the unit normal to the reflecting surface, effectively acting as the "mirror" that reverses the component of the incident ray parallel to this normal while leaving the tangential component unchanged. This analogy underscores the transformation's role as a reflection operator, mirroring physical phenomena in a linear algebraic framework. A key application arises in ray tracing simulations for computer graphics and optical design, where the Householder transformation computes the direction of reflected rays to model light propagation realistically. For an incident direction \mathbf{i} (a unit vector pointing toward the surface) and surface normal \mathbf{n} (also unit length), the reflected direction \mathbf{r} is given by \mathbf{r} = \mathbf{i} - 2 (\mathbf{i} \cdot \mathbf{n}) \mathbf{n}, which is precisely the action of the Householder matrix H = I - 2 \mathbf{n} \mathbf{n}^T applied to \mathbf{i}. This formulation ensures the reflected ray lies in the plane spanned by \mathbf{i} and \mathbf{n}, maintaining the geometric constraints of specular reflection without requiring explicit angle calculations. As a specific example, consider simulating reflection in a plane mirror: an incident ray from a source at position \mathbf{p}_s striking the mirror at point \mathbf{p} with normal \mathbf{n} has direction \mathbf{i} = \frac{\mathbf{p} - \mathbf{p}_s}{|\mathbf{p} - \mathbf{p}_s|}. The reflected direction \mathbf{r} is computed as above, allowing determination of the image location \mathbf{p}_i = \mathbf{p} + (\mathbf{p} - \mathbf{p}_s) - 2 ((\mathbf{p} - \mathbf{p}_s) \cdot \mathbf{n}) \mathbf{n}, which ensures the apparent distance from an observer to the image equals the actual folded path via the mirror. This enables accurate rendering of image positions and path lengths in optical systems. 
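The reflection formula translates directly into code (a minimal sketch; the ray and normal are arbitrary illustrative choices):

```python
import numpy as np

def reflect(i, n):
    """Reflect incident direction i off a surface with unit normal n:
    r = i - 2 (i . n) n, i.e. the Householder matrix I - 2 n n^T applied to i."""
    return i - 2.0 * np.dot(i, n) * n

# A ray travelling diagonally downward hits a horizontal mirror (normal = +z).
i = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
n = np.array([0.0, 0.0, 1.0])
r = reflect(i, n)

assert np.allclose(r, [1.0 / np.sqrt(2.0), 0.0, 1.0 / np.sqrt(2.0)])  # bounces upward
assert np.isclose(np.linalg.norm(r), 1.0)        # unit length preserved
assert np.isclose(-np.dot(i, n), np.dot(r, n))   # angle of incidence = reflection
```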
The principles underlying such reflections have long been integral to geometric optics, building on earlier formulations such as Fresnel's work on the behavior of light at optical interfaces, though the vectorial description is what facilitated modern simulations. The Householder transformation, introduced in 1958, provided a numerically stable matrix form that extended these ideas to computational contexts, including approximations for non-planar surfaces obtained by applying local planar reflections at tangent planes along curved geometries. Additionally, the transformation's unitarity—ensuring H is orthogonal and self-inverse—preserves the reversibility of light paths, a core tenet of geometric optics whereby inverting directions reconstructs the incident configuration without loss of energy or directionality.

Quantum Computing

In quantum computing, the Householder transformation serves as a unitary operator that performs reflections within the Hilbert space of states, playing a crucial role in algorithms such as Grover's search. This quantum analog mirrors the classical operation by reflecting quantum states across the hyperplane orthogonal to a chosen normalized state |\phi\rangle, defined by the operator H = I - 2 |\phi\rangle \langle \phi |. As a Hermitian and unitary operator, it preserves state norms while inverting the projection onto |\phi\rangle, enabling geometric manipulations essential for quantum state engineering. Quantum circuits implementing reflections can be constructed using sequences of elementary gates, often leveraging controlled-phase operations or oracles for efficiency. One method decomposes the reflection into state-preparation circuits and controlled phase gates, achieving arbitrary reflections in systems of N states with O(N) interaction steps in physical realizations, though general digital circuits for arbitrary |\phi\rangle may scale exponentially in qubit number. For the specific case in Grover's algorithm, the reflection over the uniform superposition employs Hadamard gates combined with multi-controlled phase flips, realizable with controlled-phase and Pauli operators. A primary application lies in amplitude amplification, where reflections boost the probability of measuring desired states by iteratively reflecting over carefully chosen subspaces. In this framework, the diffusion step reflects the current state |\psi\rangle across the subspace spanned by the uniform superposition |s\rangle, yielding |\psi'\rangle = D |\psi\rangle with D = 2 |s\rangle \langle s | - I, where |s\rangle = \frac{1}{\sqrt{N}} \sum_x |x\rangle and N = 2^n. This is equivalent to the Householder reflection I - 2 |s\rangle \langle s | up to a global phase of -1. This operation, alternated with an oracle-defined reflection across the subspace orthogonal to the marked states, rotates the state toward the target in the two-dimensional plane spanned by the initial uniform state and the marked states, achieving a quadratic speedup over classical search. 
For a concrete example in a 2-qubit system (N=4), the Householder reflection implementing the diffusion operator—used when searching for a marked state such as |11\rangle—is decomposed as follows: apply Hadamard gates H^{\otimes 2}, followed by Pauli-X gates on both qubits, a controlled-Z gate (CZ) between the two qubits, then reverse the X and Hadamard gates. This circuit inverts amplitudes about the average using four Hadamard gates, four X gates, and one CZ gate, forming the diffusion step of one amplification iteration. The integration of Householder transformations into quantum information processing emerged in the late 1990s with the foundational quantum search algorithms, offering an efficient means to enact specific reflections, such as those over uniform superpositions, with O(n) gates.
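The claimed circuit identity can be verified by direct matrix multiplication (an illustrative NumPy check; the circuit equals -D, i.e. the Householder reflection I - 2|s⟩⟨s|, up to the global phase noted above):

```python
import numpy as np

# Verify that H⊗H · X⊗X · CZ · X⊗X · H⊗H equals I - 2|s><s| = -D for n = 2 qubits.
H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H2 = np.kron(H1, H1)
X2 = np.kron(X, X)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])   # phase flip on |11>

circuit = H2 @ X2 @ CZ @ X2 @ H2

s = np.full(4, 0.5)                     # uniform superposition |s>, N = 4
D = 2.0 * np.outer(s, s) - np.eye(4)    # diffusion operator
assert np.allclose(circuit, -D)         # circuit = I - 2|s><s| = -D
```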

Relations to Other Transformations

Comparisons with Givens Rotations

The Givens rotation is an orthogonal transformation consisting of a 2D plane rotation designed to zero out a single selected element in a vector or matrix, which can be generalized to higher dimensions by embedding the 2D rotation block within an identity matrix that affects only two specific rows or columns. In contrast, a Householder transformation zeros an entire subcolumn below the pivot in a single step by reflecting across a hyperplane orthogonal to a carefully chosen vector, at a cost of O(n) operations per affected column. A Givens rotation zeros only one entry per application—computing the rotation itself is O(1), though applying it updates two full rows—and necessitates O(n) sequential rotations to achieve the same subcolumn zeroing, resulting in roughly twice the computational cost for dense matrices. Consequently, Householder transformations are more efficient for dense linear algebra problems, while Givens rotations excel in sparse settings where targeted updates minimize fill-in and preserve matrix structure.
Aspect             | Householder QR (m × n, m ≥ n) | Givens QR (m × n, m ≥ n)
Leading flop count | 2mn² − (2/3)n³                | 4mn²
Suitability        | Dense matrices                | Sparse matrices
Stability          | Backward stable, O(ε) error   | Backward stable, O(ε) error
Both methods employ orthogonal transformations, ensuring norm preservation, though sequences of Householder transformations can mimic the iterative zeroing effects of Jacobi methods—which rely on successive Givens rotations for symmetric eigenvalue problems—by applying global reflections per step rather than pairwise local rotations. Each Householder transformation introduces exactly one eigenvalue of -1 (with the rest being +1), reflecting its nature as an improper orthogonal matrix with determinant -1, whereas Givens rotations have determinant +1 and complex-conjugate eigenvalues e^{\pm i\theta} on the unit circle; this spectral difference can influence intermediate quantities in iterative processes, though the overall algorithms remain numerically stable. For example, in computing the QR factorization of an n × n dense matrix, to zero the subdiagonal elements of the first column (length n-1), a single Householder transformation requires approximately 4(n-1) flops to construct the reflector and an additional 4n(n-1) flops to apply it to the remaining columns, totaling about 4n² for this step. In comparison, using Givens rotations demands n-1 rotations to zero the subcolumn, each costing roughly 6 flops to compute and 6n flops to apply to the two affected rows, yielding roughly 6n(n-1) ≈ 6n² flops—about 50% more expensive for large n. Over the full QR process, this disparity accumulates, underscoring Householder's efficiency for dense cases.
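The contrast can be demonstrated directly by zeroing the first subcolumn of a dense matrix both ways (a minimal NumPy sketch with arbitrary random data):

```python
import numpy as np

# Zero the subdiagonal of the first column two ways: one Householder
# reflection versus a sequence of n-1 Givens rotations.
rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))

# Householder: a single reflection maps column 1 to ±||a|| e1.
x = A[:, 0]
v = x.copy()
v[0] += np.sign(x[0]) * np.linalg.norm(x)
u = v / np.linalg.norm(v)
Ah = A - 2.0 * np.outer(u, u @ A)

# Givens: n-1 plane rotations, each zeroing one entry from the bottom up.
Ag = A.copy()
for i in range(n - 1, 0, -1):
    a, b = Ag[i - 1, 0], Ag[i, 0]
    r = np.hypot(a, b)
    c, s = a / r, b / r
    rows = Ag[[i - 1, i], :]
    Ag[[i - 1, i], :] = np.array([[c, s], [-s, c]]) @ rows

assert np.allclose(Ah[1:, 0], 0.0)                   # Householder zeroed the subcolumn
assert np.allclose(Ag[1:, 0], 0.0)                   # so did the Givens sequence
assert np.isclose(abs(Ah[0, 0]), np.linalg.norm(x))  # both preserve the column norm
assert np.isclose(abs(Ag[0, 0]), np.linalg.norm(x))
```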

Connections to Unitary and Orthogonal Matrices

The Householder transformation, defined as H = I - 2 \frac{\mathbf{u} \mathbf{u}^T}{\mathbf{u}^T \mathbf{u}} for a nonzero vector \mathbf{u}, constitutes a special subclass of real orthogonal matrices, satisfying H^T H = I and possessing determinant -1, which classifies it as an improper orthogonal transformation, or reflection, lying in the orthogonal group O(n) outside the special orthogonal group SO(n). In the complex domain, the transformation generalizes to H = I - \beta \mathbf{v} \mathbf{v}^H (with \beta = 2 / (\mathbf{v}^H \mathbf{v})), forming a unitary matrix that preserves the Hermitian inner product and satisfies H^H H = I, enabling analogous reflections in complex Euclidean spaces. Any real orthogonal matrix can be decomposed as a product of at most n Householder reflectors, providing an efficient representation that leverages their rank-one structure for dense matrices, often requiring fewer factors than decompositions based on plane rotations. This factorization property stems from the generative capacity of reflections within the orthogonal group O(n), where even products yield proper rotations (determinant +1) and odd products yield reflections (determinant -1). In the complex case, products of such unitary Householder matrices similarly span the unitary group U(n), with applications extending to unitarily similar reflections that maintain spectral properties under similarity transformations. The theoretical foundations trace to Alston S. Householder's 1958 work on unitary triangularization of nonsymmetric matrices, where sequences of such transformations reduce a matrix to upper triangular form via unitary operations. Extensions to indefinite inner products generalize Householder reflections to pseudo-orthogonal groups, adapting the construction to bilinear forms like those in Krein spaces, where orthogonality is defined relative to a signature matrix rather than the standard Euclidean metric. 
In the context of singular value decomposition (SVD), Householder transformations facilitate bidiagonalization by applying left and right reflections to reduce a general matrix to upper bidiagonal form, from which singular values are extracted via iterative methods. Unlike general unitary matrices, such as those generated by matrix exponentials of skew-Hermitian operators, Householder transformations offer an explicit rank-one update structure, enabling direct computation and storage efficiency without approximating continuous flows. This discrete, low-rank nature distinguishes them as practical building blocks for orthogonal and unitary operations in numerical algorithms.
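The bidiagonalization described above (often attributed to Golub and Kahan) can be sketched as alternating left and right Householder reflections, assuming NumPy, real arithmetic, and m ≥ n; the helper names are illustrative:

```python
import numpy as np

def house(x):
    """Return u defining H = I - 2uu^T/(u^T u) with Hx proportional to e1."""
    u = x.astype(float).copy()
    s = np.sign(u[0]) if u[0] != 0 else 1.0
    u[0] += s * np.linalg.norm(x)           # sign chosen to avoid cancellation
    return u

def bidiagonalize(A):
    """Reduce an m x n matrix (m >= n) to upper bidiagonal form."""
    B = A.astype(float).copy()
    m, n = B.shape
    for k in range(n):
        # Left reflector zeros B[k+1:, k] (below the diagonal in column k).
        u = house(B[k:, k])
        if (nu := u @ u) > 0:
            B[k:, k:] -= 2.0 * np.outer(u, u @ B[k:, k:]) / nu
        # Right reflector zeros B[k, k+2:] (right of the superdiagonal in row k).
        if k < n - 2:
            v = house(B[k, k + 1:])
            if (nv := v @ v) > 0:
                B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ v, v) / nv
    return B

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
B = bidiagonalize(A)

# B is upper bidiagonal, and orthogonal transformations preserve singular values.
keep = np.zeros(B.shape, dtype=bool)
for i in range(B.shape[1]):
    keep[i, i] = True
    if i + 1 < B.shape[1]:
        keep[i, i + 1] = True
assert np.allclose(B[~keep], 0)
assert np.allclose(np.linalg.svd(B, compute_uv=False),
                   np.linalg.svd(A, compute_uv=False))
```

The right reflector at step k acts only on columns k+1 onward, so it leaves column k (just zeroed by the left reflector) untouched; this interleaving is what lets the two sequences of reflections coexist without destroying each other's zeros.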
