Transformation matrix

In linear algebra, a transformation matrix is a matrix that defines a linear transformation from a vector space \mathbb{R}^n to another vector space \mathbb{R}^m, where the transformation T maps a vector x \in \mathbb{R}^n to T(x) = Ax with A being an m \times n matrix. The columns of A correspond to the images of the standard basis vectors under T, making the matrix a compact representation of the transformation's action on any vector. Key properties include the domain \mathbb{R}^n, the codomain \mathbb{R}^m, and the range as the column space of A, with the identity matrix inducing the identity transformation. Transformation matrices are fundamental in geometry for operations such as rotations, reflections, scalings, shears, and projections, which preserve linearity and can be composed via matrix multiplication. In computer graphics, they enable efficient manipulation of 2D and 3D objects using homogeneous coordinates, where points are augmented with a 1 (e.g., (x, y, 1)) to represent affine transformations like translations via 3x3 or 4x4 matrices. For instance, a 2D rotation by \theta in homogeneous coordinates is given by the matrix \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}, allowing combined transformations through a single multiplication. These matrices extend to higher dimensions and applications in robotics, computer vision, and physics simulations, where they model coordinate changes and rigid motions.

Basic Concepts

Definition

A transformation matrix is an m \times n matrix that represents a linear transformation between vector spaces of dimensions n and m. A linear transformation T: \mathbb{R}^n \to \mathbb{R}^m is a function that preserves vector addition and scalar multiplication, meaning T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v}) and T(c \mathbf{u}) = c T(\mathbf{u}) for all vectors \mathbf{u}, \mathbf{v} \in \mathbb{R}^n and scalars c. In this context, the transformation matrix A is an m \times n matrix such that T(\mathbf{v}) = A \mathbf{v} for any column vector \mathbf{v} \in \mathbb{R}^n, where the multiplication follows standard matrix-vector rules. The general form of this representation is given by the equation T(\mathbf{x}) = A \mathbf{x}, where \mathbf{x} is a column vector and A encodes the action of T on the standard basis vectors of \mathbb{R}^n. This matrix formulation assumes familiarity with basic vector and matrix operations but relies fundamentally on the linearity property to ensure consistency across all inputs. The concept of the transformation matrix builds on 19th-century matrix theory developed by Arthur Cayley and James Joseph Sylvester, who introduced matrices as tools for representing linear transformations and their compositions.
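
These defining properties are easy to check numerically. The following NumPy sketch uses an arbitrary 2×3 matrix and arbitrary test vectors chosen for illustration; linearity holds automatically for any matrix-vector product.

```python
import numpy as np

# An arbitrary 2x3 matrix A sends vectors in R^3 to R^2 via T(v) = A v.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

u = np.array([1.0, -1.0, 2.0])
v = np.array([0.5, 4.0, 1.0])
c = 3.0

# Linearity: T(u + v) = T(u) + T(v) and T(c u) = c T(u).
assert np.allclose(A @ (u + v), A @ u + A @ v)
assert np.allclose(A @ (c * u), c * (A @ u))
print(A @ u)  # the image of u under T, a vector in R^2
```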

Relation to linear transformations

In linear algebra, a transformation matrix serves as the concrete representation of an abstract linear transformation between finite-dimensional vector spaces. Specifically, for a linear transformation T: V \to W, where V and W are vector spaces of dimensions n and m respectively, there exists a unique m \times n matrix A such that T(\mathbf{v}) = A \mathbf{v} for all \mathbf{v} \in V, when vectors are expressed in coordinates relative to ordered bases \mathcal{B} = \{\mathbf{e}_1, \dots, \mathbf{e}_n\} for V and \mathcal{B}' = \{\mathbf{e}'_1, \dots, \mathbf{e}'_m\} for W. This establishes a one-to-one correspondence between the set of linear transformations and the set of matrices of appropriate dimensions with respect to fixed bases. The entries of the matrix A = (a_{ij}) are determined by the action of T on the basis vectors of \mathcal{B}: T(\mathbf{e}_j) = \sum_{i=1}^m a_{ij} \mathbf{e}'_i, \quad j = 1, \dots, n. Thus, the j-th column of A consists of the coordinates of T(\mathbf{e}_j) in the basis \mathcal{B}'. This representation is unique for the chosen bases, ensuring that the matrix fully encodes the transformation T. If the bases are changed to new ordered bases \mathcal{B}'' for V and \mathcal{B}''' for W, the matrix representation transforms accordingly. Let P be the change-of-basis matrix from \mathcal{B} to \mathcal{B}'' (whose columns are the coordinates of the new basis vectors in \mathcal{B}), and Q the change-of-basis matrix from \mathcal{B}' to \mathcal{B}'''. The new matrix B is then given by B = Q^{-1} A P. This change of basis preserves the intrinsic properties of T, such as its rank (and, in the operator case V = W with P = Q, its eigenvalues), while adapting the coordinates to the new bases. A key property linking the abstract transformation to its matrix is invertibility: T is invertible if and only if its matrix A is invertible, in which case the matrix of the inverse transformation T^{-1} is A^{-1}. This equivalence holds with respect to the chosen bases and underscores the computational utility of matrices over abstract linear maps. Unlike abstract linear transformations, which emphasize conceptual mappings between vector spaces, transformation matrices enable explicit computations, such as applying T via matrix-vector multiplication or composing transformations through matrix multiplication. This representational framework is fundamental in fields like computer graphics and engineering, where numerical implementation is essential.
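
A short NumPy sketch of the change-of-basis relation B = Q^{-1} A P; the matrices A, P, and Q here are arbitrary invertible examples chosen for illustration, not taken from the text above.

```python
import numpy as np

# A: matrix of T in the original bases (arbitrary example).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Columns of P and Q are the new basis vectors expressed in the old bases.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # new basis for the domain
Q = np.array([[1.0, 0.0],
              [1.0, 1.0]])   # new basis for the codomain

# Matrix of the same transformation with respect to the new bases.
B = np.linalg.inv(Q) @ A @ P
print(B)

# In the operator case (same basis change on both sides), eigenvalues survive.
B_op = np.linalg.inv(P) @ A @ P
assert np.allclose(np.sort(np.linalg.eigvals(B_op)),
                   np.sort(np.linalg.eigvals(A)))
```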

Constructing Transformation Matrices

Using standard bases

To determine the matrix representation of a linear transformation T: \mathbb{R}^n \to \mathbb{R}^m with respect to the standard bases, apply T to each standard basis vector e_i of \mathbb{R}^n, where e_i has a 1 in the i-th position and 0s elsewhere, and express the result T(e_i) as a coordinate vector in the standard basis of \mathbb{R}^m. The columns of the resulting m \times n matrix A are precisely these coordinate vectors [T(e_1) \mid T(e_2) \mid \cdots \mid T(e_n)], so that T(\mathbf{x}) = A\mathbf{x} for any \mathbf{x} \in \mathbb{R}^n. This method yields the standard matrix A of T, which encodes the transformation entirely in terms of matrix-vector multiplication using the canonical orthonormal bases \{e_1, \dots, e_n\} for the domain and \{f_1, \dots, f_m\} for the codomain. The approach generalizes to any finite-dimensional vector spaces equipped with chosen bases, though it is simplest and most direct when using the standard bases of \mathbb{R}^n and \mathbb{R}^m, as the coordinates of T(e_i) are immediately the components of the image vector. For example, consider the linear transformation T: \mathbb{R}^2 \to \mathbb{R}^2 defined by T(x, y) = (x + y, y). Applying T to e_1 = (1, 0) gives T(e_1) = (1, 0), whose standard coordinates are \begin{pmatrix} 1 \\ 0 \end{pmatrix}. Similarly, T(e_2) = T(0, 1) = (1, 1), whose coordinates are \begin{pmatrix} 1 \\ 1 \end{pmatrix}. Thus, the matrix is A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, and T\begin{pmatrix} x \\ y \end{pmatrix} = A \begin{pmatrix} x \\ y \end{pmatrix}.
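
The column-by-column construction can be carried out mechanically. A minimal NumPy sketch of the worked example above, building the standard matrix from the images of the basis vectors:

```python
import numpy as np

# The transformation from the example: T(x, y) = (x + y, y).
def T(v):
    x, y = v
    return np.array([x + y, y])

# Column j of the standard matrix is T(e_j); rows of np.eye(n) are e_1, ..., e_n.
n = 2
A = np.column_stack([T(e) for e in np.eye(n)])
print(A)  # [[1. 1.]
          #  [0. 1.]]

v = np.array([3.0, 2.0])
assert np.allclose(A @ v, T(v))   # T(x) = A x for every x
```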

Eigenbasis and diagonalization

In linear algebra, a transformation matrix A representing a linear transformation T: \mathbb{R}^n \to \mathbb{R}^n can often be simplified by changing to an appropriate basis. If T possesses n linearly independent eigenvectors, an eigenbasis B = \{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n\} can be formed, where each \mathbf{v}_i satisfies T(\mathbf{v}_i) = \lambda_i \mathbf{v}_i for corresponding eigenvalues \lambda_i. In this eigenbasis, the matrix of T becomes the diagonal matrix D = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n), as the transformation merely scales each basis vector by its eigenvalue without mixing components. The relationship between the original matrix A (with respect to the standard basis) and this diagonal form is given by the eigendecomposition A = P D P^{-1}, where the columns of P are the eigenvectors \mathbf{v}_1, \dots, \mathbf{v}_n, and P is invertible since the eigenvectors are linearly independent. This decomposition, known as diagonalization, expresses A in a form that reveals its intrinsic scaling behavior along principal directions defined by the eigenvectors. Not all transformation matrices are diagonalizable; for instance, a rotation in two dimensions (by an angle other than 0 or \pi) has no real eigenvectors and thus cannot be diagonalized over the reals. In such cases, more general forms like the Jordan canonical form extend the concept to a block-diagonal structure, though it requires generalized eigenvectors and is not fully diagonal. Diagonalization proves particularly valuable for computing powers of the transformation, as A^k = P D^k P^{-1}, where D^k = \operatorname{diag}(\lambda_1^k, \dots, \lambda_n^k) is straightforward to calculate, enabling efficient iteration or exponentiation without repeated matrix multiplication.
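
A NumPy sketch of diagonalization and fast matrix powers; the 2×2 matrix here is an arbitrary diagonalizable example (eigenvalues 5 and 2), not one from the text.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # arbitrary diagonalizable matrix

eigvals, P = np.linalg.eig(A)   # columns of P are the eigenvectors
D = np.diag(eigvals)

# Eigendecomposition: A = P D P^{-1}.
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# Fast powers: A^k = P D^k P^{-1}, with D^k computed entrywise.
k = 5
A_k = P @ np.diag(eigvals**k) @ np.linalg.inv(P)
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```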

Two-Dimensional Transformations

Scaling

In two-dimensional space, a scaling transformation stretches or compresses vectors along the coordinate axes by specified factors, represented by a diagonal matrix A = \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix}, where s_x and s_y are the scaling factors for the x- and y-directions, respectively. This matrix applies to a vector \begin{pmatrix} x \\ y \end{pmatrix} to produce the transformed vector \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} s_x x \\ s_y y \end{pmatrix}. When s_x = s_y = s, the transformation is uniform scaling, which preserves angles between vectors since it enlarges or reduces all directions proportionally. In the general non-uniform case, angles are not preserved unless the factors are equal. The determinant of the scaling matrix is \det(A) = s_x s_y, which gives the factor by which areas are scaled under the transformation; for instance, the unit square maps to a rectangle of area |s_x s_y|. Geometrically, scaling expands or contracts shapes relative to the origin along the principal axes, with uniform scaling maintaining the overall shape and proportions while non-uniform scaling distorts them by altering aspect ratios.
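
A brief NumPy illustration of scaling and its determinant; the factors here are arbitrary values chosen for the sketch.

```python
import numpy as np

sx, sy = 2.0, 0.5
S = np.array([[sx, 0.0],
              [0.0, sy]])

p = np.array([3.0, 4.0])
print(S @ p)   # [6. 2.]: each coordinate is scaled independently

# The determinant gives the area-scaling factor of the map.
assert np.isclose(np.linalg.det(S), sx * sy)
```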

Rotation

In two dimensions, a rotation is a linear transformation that preserves angles, lengths, and orientation, corresponding to a counterclockwise turn by an angle \theta around the origin. The matrix representation belongs to the special orthogonal group SO(2), consisting of 2×2 orthogonal matrices with determinant 1. The rotation matrix for a counterclockwise rotation by \theta is \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}. This matrix is obtained by applying the transformation to the standard basis vectors \mathbf{e}_1 = (1,0) and \mathbf{e}_2 = (0,1), yielding (\cos \theta, \sin \theta) and (-\sin \theta, \cos \theta), respectively. Rotation matrices in 2D are orthogonal, satisfying A^T A = I, which ensures preservation of lengths and angles, and have \det(A) = 1, confirming orientation preservation. The trace provides the rotation angle via \operatorname{tr}(A) = 2 \cos \theta. In applications such as computer graphics, rotation matrices are essential for orienting objects, often composed with scaling and translation to form transformation pipelines. For instance, they enable efficient rendering of rotated models by multiplying vertex coordinates.
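
These properties are straightforward to verify numerically. A NumPy sketch with an arbitrary angle:

```python
import numpy as np

def rotation_2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rotation_2d(np.pi / 3)

assert np.allclose(R.T @ R, np.eye(2))       # orthogonal: lengths preserved
assert np.isclose(np.linalg.det(R), 1.0)     # proper rotation, det = 1
assert np.isclose(np.trace(R), 2 * np.cos(np.pi / 3))   # tr(A) = 2 cos(theta)
```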

Shearing

A shear transformation in two dimensions displaces points parallel to one coordinate axis by an amount proportional to their distance from that axis, while keeping coordinates perpendicular to the shear direction unchanged. This results in a sliding effect that distorts shapes without altering their area. There are two primary types: horizontal shear, where the y-coordinate remains unchanged and the x-coordinate is adjusted as x' = x + k y with y' = y, and vertical shear, where the x-coordinate remains unchanged and the y-coordinate is adjusted as y' = y + k x with x' = x, where k is the shear factor. The matrix representation for a horizontal shear is \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}, which maps the standard basis vectors such that \mathbf{e}_1 remains fixed and \mathbf{e}_2 is shifted horizontally by k. For vertical shear, the matrix is \begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}, shifting \mathbf{e}_1 vertically by k while fixing \mathbf{e}_2. Both matrices have a determinant of 1, confirming that the transformation preserves areas, and a trace of 2, reflecting their upper- or lower-triangular structure with 1s on the diagonal. Geometrically, a shear transforms a rectangle into a parallelogram by slanting one pair of opposite sides parallel to the shear axis, while maintaining the lengths of the unsheared sides and the overall area. These matrices are not orthogonal, as they do not preserve angles between vectors, leading to distortion in non-perpendicular directions. In applications such as typography, shear transformations are used to create oblique fonts by slanting characters via a simple horizontal displacement proportional to vertical position.
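
A short NumPy sketch applying a horizontal shear to the corners of the unit square; the shear factor is an arbitrary choice for illustration.

```python
import numpy as np

k = 0.5
H = np.array([[1.0, k],
              [0.0, 1.0]])   # horizontal shear: x' = x + k y, y' = y

# Corners of the unit square, one per column.
square = np.array([[0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])
print(H @ square)   # the square slants into a parallelogram

assert np.isclose(np.linalg.det(H), 1.0)   # area is preserved
```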

Reflection

In two dimensions, a reflection is a linear transformation that fixes all points on a line passing through the origin and reverses the component of vectors perpendicular to that line. The matrix representing such a reflection across a line at angle \theta to the x-axis is given by A = \begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{pmatrix}. This form arises from the geometry of reflecting a vector \mathbf{x} across the line, which can be derived using the Householder-like reflection formula A = I - 2 \mathbf{n} \mathbf{n}^T, where \mathbf{n} is a unit vector normal to the line. For example, reflection across the x-axis, where \theta = 0, yields A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. This matrix leaves the x-coordinate unchanged while negating the y-coordinate. Reflection matrices in 2D are orthogonal, satisfying A^T A = I (and thus A^T = A), have determinant -1, and are involutory, meaning A^2 = I. The determinant of -1 reflects the orientation-reversing nature of the transformation. The product of two reflections is a rotation. Geometrically, these transformations mirror figures across the line, interchanging the two sides, and are used in 2D graphics for mirror effects.
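
A NumPy sketch checking that the angle form and the Householder form agree, along with the involutory and determinant properties; the angle is an arbitrary choice.

```python
import numpy as np

theta = np.pi / 6   # line at 30 degrees to the x-axis

# Angle form of the reflection matrix.
A = np.array([[np.cos(2 * theta),  np.sin(2 * theta)],
              [np.sin(2 * theta), -np.cos(2 * theta)]])

# Equivalent Householder form I - 2 n n^T, with n a unit normal to the line.
n = np.array([-np.sin(theta), np.cos(theta)])
assert np.allclose(A, np.eye(2) - 2.0 * np.outer(n, n))

assert np.allclose(A @ A, np.eye(2))        # involutory: A^2 = I
assert np.isclose(np.linalg.det(A), -1.0)   # orientation-reversing
```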

Orthogonal projection

Orthogonal projection is a linear transformation that maps a vector in the plane onto a specified line or one-dimensional subspace, with the property that the vector from the original point to its projection is perpendicular to the subspace. This operation finds the point on the line closest to the given vector, minimizing the Euclidean distance. In matrix form, the orthogonal projection onto the x-axis, which is the line spanned by the vector (1, 0), is represented by the matrix \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}. This matrix collapses any point (x, y) to (x, 0), effectively discarding the y-component while preserving the x-component unchanged. For a general line in the plane defined by a unit vector \mathbf{u} = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} (where \|\mathbf{u}\| = 1), the orthogonal projection matrix is given by the outer product \mathbf{A} = \mathbf{u} \mathbf{u}^T. This yields \mathbf{A} = \begin{pmatrix} u_1^2 & u_1 u_2 \\ u_1 u_2 & u_2^2 \end{pmatrix}, which projects any vector \mathbf{v} onto the line by \mathbf{A} \mathbf{v} = (\mathbf{v} \cdot \mathbf{u}) \mathbf{u}, the scalar projection scaled by the unit vector. If the line makes an angle \theta with the x-axis, then \mathbf{u} = \begin{pmatrix} \cos \theta \\ \sin \theta \end{pmatrix}, and the matrix simplifies to \mathbf{A} = \begin{pmatrix} \cos^2 \theta & \cos \theta \sin \theta \\ \cos \theta \sin \theta & \sin^2 \theta \end{pmatrix}. This form highlights the trigonometric dependence on the line's orientation. Orthogonal projection matrices exhibit key algebraic properties that distinguish them from invertible transformations. They are idempotent, satisfying \mathbf{A}^2 = \mathbf{A}, meaning applying the projection twice yields the same result as applying it once, as the image lies entirely within the subspace. Additionally, these matrices are singular, with \det \mathbf{A} = 0, since the kernel is the orthogonal complement of the line (a one-dimensional subspace in 2D), rendering them non-invertible. The symmetry \mathbf{A}^T = \mathbf{A} further ensures the projection is orthogonal, aligning with the perpendicularity condition. Geometrically, orthogonal projections model phenomena such as casting shadows onto a surface under parallel light, where the shadow is the projection of an object onto the plane. In computational contexts, they underpin least-squares approximations, finding the best linear fit by projecting data onto a subspace so as to minimize residual errors, a foundational technique in statistics and optimization.
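
A NumPy sketch of the outer-product construction and its algebraic properties; the angle and test vector are arbitrary illustrative values.

```python
import numpy as np

theta = np.pi / 4
u = np.array([np.cos(theta), np.sin(theta)])   # unit vector along the line

P = np.outer(u, u)   # projection matrix u u^T

v = np.array([2.0, 1.0])
assert np.allclose(P @ v, (v @ u) * u)     # A v = (v . u) u

assert np.allclose(P @ P, P)               # idempotent: A^2 = A
assert np.allclose(P.T, P)                 # symmetric
assert np.isclose(np.linalg.det(P), 0.0)   # singular, hence not invertible
```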

Three-Dimensional Transformations

Rotation

In three dimensions, a rotation is a linear transformation that preserves orientation and distances, corresponding to a counterclockwise turn by an angle \theta around a fixed axis passing through the origin. The matrix representation of such a rotation belongs to the special orthogonal group SO(3), consisting of 3×3 orthogonal matrices with determinant 1. For rotations around one of the coordinate axes, explicit matrix forms can be derived by considering the effect on the standard basis vectors. The rotation matrix for a counterclockwise rotation by angle \theta around the z-axis leaves the z-coordinate unchanged while rotating the x-y plane: \begin{pmatrix} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{pmatrix} This matrix is obtained by applying the transformation to the unit vectors \mathbf{e}_1 = (1,0,0), \mathbf{e}_2 = (0,1,0), and \mathbf{e}_3 = (0,0,1), where \mathbf{e}_3 remains fixed. Similar matrices exist for rotations around the x- and y-axes, forming the basis for composing general rotations via sequences of axis-aligned turns, such as in Euler angle representations. To obtain the rotation matrix for an arbitrary axis, Rodrigues' formula provides a direct expression. For a unit vector \mathbf{k} = (k_x, k_y, k_z) along the axis and rotation angle \theta, the matrix A is given by: A = I \cos \theta + (\mathbf{k} \mathbf{k}^T) (1 - \cos \theta) + K \sin \theta, where I is the 3×3 identity matrix, \mathbf{k} \mathbf{k}^T is the outer product, and K is the skew-symmetric cross-product matrix: K = \begin{pmatrix} 0 & -k_z & k_y \\ k_z & 0 & -k_x \\ -k_y & k_x & 0 \end{pmatrix}. This formula arises from decomposing a vector into components parallel and perpendicular to \mathbf{k}, rotating the perpendicular part in the plane orthogonal to \mathbf{k}, and recombining; the parallel component remains fixed. The result was originally derived by Olinde Rodrigues in 1840 as part of the geometric laws governing the displacements of rigid bodies. Rotation matrices in 3D exhibit key properties that distinguish them from general orthogonal matrices. They are orthogonal, satisfying A^T A = I, which ensures preservation of lengths and angles, and have determinant \det(A) = 1, confirming orientation preservation (unlike improper rotations with \det = -1). Additionally, the trace provides the rotation angle via \operatorname{tr}(A) = 1 + 2 \cos \theta, allowing extraction of \theta from the matrix without knowing the axis. In applications such as computer graphics, rotation matrices are essential for orienting objects and cameras in virtual scenes, often composed with translations and projections to form full transformation pipelines. For instance, they enable efficient rendering of rotated models by multiplying vertex coordinates, with Rodrigues' formula particularly useful for arbitrary orientations in real-time simulations.
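
A direct NumPy implementation of Rodrigues' formula, as a sketch; the axis and angle below are arbitrary choices (rotating about the z-axis recovers the axis-aligned matrix above).

```python
import numpy as np

def rodrigues(k, theta):
    """Rotation matrix about the unit axis k by angle theta (Rodrigues' formula)."""
    k = np.asarray(k, dtype=float)
    k /= np.linalg.norm(k)
    K = np.array([[0.0,  -k[2],  k[1]],
                  [k[2],  0.0,  -k[0]],
                  [-k[1], k[0],  0.0]])   # skew-symmetric cross-product matrix
    return (np.eye(3) * np.cos(theta)
            + np.outer(k, k) * (1.0 - np.cos(theta))
            + K * np.sin(theta))

R = rodrigues([0.0, 0.0, 1.0], np.pi / 2)
assert np.allclose(R.T @ R, np.eye(3))     # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)   # proper rotation
assert np.isclose(np.trace(R), 1 + 2 * np.cos(np.pi / 2))   # tr = 1 + 2 cos(theta)
```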

Reflection

In three dimensions, a reflection is a linear transformation that fixes all points in a plane passing through the origin and reverses the direction of vectors normal to that plane. The matrix representing such a reflection across a plane with unit normal \mathbf{n} is given by A = I - 2 \mathbf{n} \mathbf{n}^T, where I is the 3×3 identity matrix. This form, known as the Householder matrix, arises from the geometry of reflecting a vector \mathbf{x} as A\mathbf{x} = \mathbf{x} - 2 (\mathbf{n}^T \mathbf{x}) \mathbf{n}, which subtracts twice the projection onto the normal. For example, reflection across the xy-plane, where \mathbf{n} = [0, 0, 1]^T, yields A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}. This matrix leaves the first two coordinates unchanged while negating the z-coordinate. Reflection matrices exhibit key algebraic properties: they are orthogonal, satisfying A^T A = I (and thus A^T = A), have determinant -1, and are involutory, meaning A^2 = I. The determinant of -1 reflects the orientation-reversing nature of the transformation. The product of three reflections is an improper transformation with determinant -1. Geometrically, these transformations mirror volumes across the plane, interchanging left- and right-handed coordinate systems, and play a central role in describing symmetry operations in crystallography, where reflection matrices represent mirror planes in space groups. In two dimensions, reflections across lines through the origin are the analogous lower-dimensional case.
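
A minimal NumPy sketch of the Householder construction for the xy-plane example:

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])          # unit normal of the xy-plane
A = np.eye(3) - 2.0 * np.outer(n, n)   # Householder matrix I - 2 n n^T
print(A)   # diag(1, 1, -1): negates only the z-coordinate

assert np.allclose(A @ A, np.eye(3))        # involutory: A^2 = I
assert np.isclose(np.linalg.det(A), -1.0)   # orientation-reversing
```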

Operations on Transformation Matrices

Composition

In linear algebra, the composition of two linear transformations is represented by the product of their corresponding matrices, allowing sequential geometric transformations to be combined into a single matrix. If T: \mathbb{R}^n \to \mathbb{R}^n and S: \mathbb{R}^n \to \mathbb{R}^n are linear transformations with matrices A and B, respectively, and T is applied first followed by S, the composite transformation S \circ T has matrix BA. This order reflects the application sequence: the input v is first transformed by A and then by B. The defining equation for this composition is (S \circ T)(v) = B(Av), which directly corresponds to the matrix-vector product BA v. Matrix multiplication is associative, so (CB)A = C(BA) for compatible matrices C, B, A, mirroring the associativity of function composition. However, it is generally non-commutative, meaning BA \neq AB unless the transformations commute, which is rare for distinct geometric operations. Key properties of the product matrix include the multiplicative behavior of the determinant: \det(BA) = \det(B) \det(A) = \det(AB), preserving the scaling factor of volumes under the composite transformation. Similarly, the trace satisfies \operatorname{tr}(BA) = \operatorname{tr}(AB), but this equality does not imply commutativity. A representative example in two dimensions is composing a rotation by angle \theta followed by non-uniform scaling by factors s_x and s_y. The rotation matrix is R = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}, and the scaling matrix is S = \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix}. The composite matrix is SR = \begin{pmatrix} s_x \cos \theta & -s_x \sin \theta \\ s_y \sin \theta & s_y \cos \theta \end{pmatrix}, illustrating how the order affects the result: scaling after rotation stretches the rotated axes differently than the reverse.
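
A NumPy sketch of the rotation-then-scaling example, confirming both the left-multiplication order and the general non-commutativity; the parameter values are arbitrary.

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([2.0, 0.5])   # non-uniform scaling

v = np.array([1.0, 0.0])

# "Rotate, then scale" is the single matrix S R applied on the left.
assert np.allclose(S @ (R @ v), (S @ R) @ v)

# Matrix multiplication is generally non-commutative.
print(np.allclose(S @ R, R @ S))   # False for these choices
```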

Inversion

A linear transformation T: \mathbb{R}^n \to \mathbb{R}^n represented by an n \times n matrix A is invertible if and only if A is invertible, meaning T is bijective and there exists a unique inverse transformation T^{-1} such that T \circ T^{-1} = \mathrm{id} and T^{-1} \circ T = \mathrm{id}, where \mathrm{id} is the identity transformation. The matrix of the inverse transformation T^{-1} is precisely A^{-1}, so T^{-1}(\mathbf{v}) = A^{-1} \mathbf{v} for any vector \mathbf{v} \in \mathbb{R}^n. To compute A^{-1}, one standard method uses the determinant and adjugate: if \det A \neq 0, then A^{-1} = \frac{1}{\det A} \operatorname{adj} A, where \operatorname{adj} A is the adjugate, the transpose of the cofactor matrix of A. Alternatively, Gauss-Jordan elimination can be applied by augmenting A with the identity matrix I_n to form [A \mid I_n], then performing row operations to transform the left side into I_n, yielding A^{-1} on the right. This process verifies the inverse through matrix multiplication, as applying A followed by A^{-1} (or vice versa) results in the identity matrix. For orthogonal matrices A, which represent rotations and reflections preserving lengths and angles, a key property simplifies inversion: A^{-1} = A^T, the transpose of A, since A^T A = I_n. This holds because orthogonal transformations form a group under composition, and for its members the transpose operation coincides with inversion.
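
A NumPy sketch contrasting the adjugate formula (written out explicitly for the 2×2 case) with the transpose shortcut for orthogonal matrices; the matrices are arbitrary examples.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))   # A A^{-1} = I

# 2x2 adjugate: adj [[a, b], [c, d]] = [[d, -b], [-c, a]].
a, b, c, d = A.ravel()
adj = np.array([[d, -b],
                [-c, a]])
assert np.allclose(A_inv, adj / np.linalg.det(A))

# For an orthogonal matrix, the inverse is simply the transpose.
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # 90-degree rotation
assert np.allclose(np.linalg.inv(Q), Q.T)
```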

Extended Transformations

Affine transformations

An affine transformation in \mathbb{R}^n extends linear transformations by incorporating translations, defined mathematically as T(\mathbf{v}) = A \mathbf{v} + \mathbf{b}, where A is an n \times n matrix and \mathbf{b} is an n-dimensional translation vector. This form preserves collinearity and ratios of distances along lines but does not necessarily maintain angles or lengths unless A has specific properties. To enable representation through matrix multiplication alone, affine transformations employ homogeneous coordinates, which append a 1 to the original vector \mathbf{v} to form \tilde{\mathbf{v}} = \begin{pmatrix} \mathbf{v} \\ 1 \end{pmatrix}. The transformation then becomes \tilde{T}(\tilde{\mathbf{v}}) = \begin{pmatrix} A & \mathbf{b} \\ \mathbf{0}^T & 1 \end{pmatrix} \tilde{\mathbf{v}} = \begin{pmatrix} A \mathbf{v} + \mathbf{b} \\ 1 \end{pmatrix}, using an (n+1) \times (n+1) matrix. In two dimensions, this matrix takes the explicit form \begin{pmatrix} a_{11} & a_{12} & t_x \\ a_{21} & a_{22} & t_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} a_{11} x + a_{12} y + t_x \\ a_{21} x + a_{22} y + t_y \\ 1 \end{pmatrix}, where the first two components yield the transformed coordinates. Affine transformations exhibit key properties that facilitate their use in computational systems: they compose associatively via matrix multiplication in homogeneous coordinates, allowing sequential applications to be represented as a single matrix product. When the linear component A is orthogonal (i.e., A^T A = I), the transformation is an isometry, preserving distances and angles; when additionally \det A = 1, it is a rigid motion of the space. In computer graphics, affine transformations are fundamental for modeling rigid-body motions, such as combining rotations with translations to reposition objects while maintaining their shape and size, enabling efficient scene manipulation in rendering pipelines.
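
A NumPy sketch of the homogeneous-coordinate embedding, with an arbitrary rotation as the linear part and an arbitrary translation vector:

```python
import numpy as np

theta = np.pi / 6
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # linear part (a rotation)
b = np.array([3.0, -1.0])                          # translation part

# Embed as a single 3x3 matrix acting on homogeneous coordinates.
M = np.eye(3)
M[:2, :2] = A
M[:2, 2] = b

v = np.array([1.0, 2.0])
v_h = np.append(v, 1.0)   # homogeneous coordinates (x, y, 1)
assert np.allclose((M @ v_h)[:2], A @ v + b)

# Composition of two affine maps is a single matrix product.
M_twice = M @ M   # apply the same motion twice
```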

Perspective transformations

Perspective transformations model the projection of three-dimensional points onto a two-dimensional image plane, accounting for depth to produce realistic effects such as the apparent convergence of parallel lines at distant vanishing points. Unlike affine transformations, which preserve parallelism, perspective transformations introduce non-linear distortions in Cartesian coordinates by scaling objects based on their distance from the viewer, making distant features appear smaller and parallel lines recede toward vanishing points. This approach is fundamental in computer graphics for simulating human vision and has been employed in artistic rendering since the Renaissance to achieve depth in drawings. In homogeneous coordinates, a 3D point represented as (X, Y, Z, 1)^T is transformed using a 4×4 matrix to facilitate the projection. For a camera positioned at the origin looking toward the positive z-direction with the projection plane at z = d, a basic perspective projection is given by \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1/d & 0 \end{pmatrix}. Applying this matrix yields a homogeneous point (X, Y, Z, Z/d)^T, which is dehomogenized by dividing the first three components by the fourth coordinate to obtain the point (d X / Z, d Y / Z, d) on the projection plane; the projected 2D point is then (d X / Z, d Y / Z). This division by a depth-dependent factor ensures that objects farther along the z-axis are compressed, simulating the inverse relationship between perceived size and distance. An equivalent form with a negative sign in the (4,3) entry, such as -1/d, arises in conventions where the camera faces the negative z-direction toward the plane at z = 0, adjusting the denominator accordingly (with Z < 0 for points in front of the camera). These transformations exhibit key projective properties: they map straight lines to straight lines, preserving collinearity, but parallel lines in 3D that are not parallel to the image plane converge to a common vanishing point in the 2D image, reflecting the geometry of the projective plane. The non-linearity in Cartesian coordinates stems from the division, which cannot be expressed as a single linear map without the homogeneous extension. In two dimensions, projective transformations operate on (x, y, 1)^T via a general 3×3 matrix \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}, producing (x', y', w')^T where the final point is (x'/w', y'/w'); affine transformations emerge as a special case when the last row is [0, 0, 1], avoiding depth-dependent scaling. Applications of perspective transformations are widespread in 3D rendering pipelines, where they convert world coordinates to screen space for display on devices like monitors, enabling realistic scenes in video games and simulations. In art and architecture, they underpin techniques like one-point perspective, where all parallel lines converge to a single vanishing point on the horizon, enhancing spatial illusion without computational aids.
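
A NumPy sketch of the projection matrix above, applied to an arbitrary homogeneous point, including the perspective divide:

```python
import numpy as np

d = 1.0   # projection plane at z = d, camera at the origin facing +z
P = np.array([[1.0, 0.0, 0.0,     0.0],
              [0.0, 1.0, 0.0,     0.0],
              [0.0, 0.0, 1.0,     0.0],
              [0.0, 0.0, 1.0 / d, 0.0]])

X = np.array([2.0, 1.0, 4.0, 1.0])   # homogeneous 3D point (X, Y, Z, 1)
x = P @ X                            # gives (X, Y, Z, Z/d)
x /= x[3]                            # perspective divide by the fourth coordinate
print(x[:2])                         # projected 2D point (d X / Z, d Y / Z) = (0.5, 0.25)
```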
