
Viewport

A viewport is a polygonal (normally rectangular) area in computer graphics that is currently being viewed, representing the portion of an image or scene displayed on a screen. In web development, it refers to the area through which a user views a document in a web browser, encompassing the visible portion of the page within the browser window or on-screen display, such as on mobile devices or desktops. This concept is fundamental to rendering, as content outside the viewport remains invisible until the user scrolls or zooms it into view, and it directly influences how pages adapt to varying screen sizes and orientations.

The viewport comprises two primary components: the layout viewport, which defines the fixed reference area for CSS layout calculations and remains constant regardless of user interactions like zooming, and the visual viewport, which represents the actual visible region that can shrink or shift relative to the layout viewport during actions such as pinch-to-zoom on touch devices. In CSS specifications, the viewport establishes the initial containing block for continuous media, serving as the basis for relative units like viewport width (vw) and viewport height (vh), which allow elements to scale proportionally to the viewing area.

For mobile devices, the viewport is controlled primarily through the <meta name="viewport"> tag, which specifies attributes such as width (e.g., set to device-width to match the device's screen width in pixels), initial-scale (a factor from 0.0 to 10.0), and user-scalable (a yes/no value to enable or disable zooming, defaulting to yes). This tag addresses historical challenges with virtual viewports on mobile browsers, where pages not optimized for small screens would render at a desktop-like width (often 980 pixels), leading to excessive horizontal scrolling; by setting appropriate values, developers ensure content fits naturally and supports media queries for adaptive layouts. In embedded contexts like <iframe>, <svg>, or <object> elements, the viewport aligns with the element's inner dimensions, treating the visual and layout viewports as identical to facilitate precise rendering within constrained spaces. Overall, the viewport's flexibility underpins modern web standards, enabling seamless cross-device experiences while adhering to behaviors defined in CSS modules.

Overview

Definition

In computer graphics and software rendering, a viewport refers to the visible polygonal region—typically rectangular—on a display device or within an application window where graphical content is projected and rendered for viewing. This region serves as the final mapping target for scene elements after transformations, ensuring that only the intended portion of the virtual environment appears on the output surface. A key distinction exists between a viewport and a related term like "window": the window defines an abstract area in world or normalized coordinates selected for display, whereas the viewport specifies the concrete rendering area in device-specific coordinates, such as pixels on a screen. This separation allows for scalable mapping from conceptual scenes to physical outputs. Analogously, the viewport functions like a camera lens framing a scene, determining which part of the broader view is captured and presented to the observer. The core attributes of a viewport include its boundaries, defined by parameters such as lower-left corner coordinates (x, y) and dimensions (width, height) in pixel units, which precisely delimit the rendering space. By confining rendering operations to these bounds, the viewport optimizes computational resources, preventing the processing of content outside the visible area and enabling efficient display updates. This transformation from abstract coordinates to the viewport is often handled via a window-to-viewport mapping process.

Key Concepts

Viewports exhibit dynamic behaviors that adapt to changing display conditions and user actions, ensuring efficient and responsive rendering. Upon resizing, such as when a window is adjusted, the viewport dimensions must be updated to match the new surface area, typically by modifying parameters like width and height in the rendering API to prevent distortion or clipping artifacts. This adjustment often triggers a recomputation of the projection or aspect ratio to maintain and scale content appropriately. Panning, in contrast, involves shifting the visible portion of the scene by translating the view coordinates—such as moving the camera position—without altering the viewport boundaries themselves, allowing the same rendered content to be repositioned efficiently across frames. Clipping complements these by discarding portions of geometry that lie outside the viewport bounds during the rendering pipeline, using algorithms like outcode testing to classify and eliminate invisible primitives early, thereby optimizing resource use.

A core performance role of viewports lies in enabling efficient rendering through culling mechanisms that avoid processing invisible elements. View frustum culling, for instance, removes entire objects or primitives outside the defined viewing volume before rasterization, significantly reducing computational overhead in complex scenes by leveraging tests against the frustum planes. Similarly, viewport clipping ensures that only relevant fragments are rasterized by intersecting geometry with the 2D rectangle bounds, preventing unnecessary pixel operations. The concept of normalized device coordinates (NDC) further supports device-independent scaling, where coordinates are normalized to a canonical range of -1 to 1 across the x, y, and z axes post-projection; this abstraction allows rendering pipelines to clip out-of-range vertices uniformly regardless of display resolution, streamlining the transition to screen space and enhancing portability across hardware.

Viewports also serve as the primary interface for user input events, mapping device coordinates—such as mouse positions—to the coordinate space of the rendered scene. Input like mouse clicks or drags is captured relative to the viewport's origin (often the upper-left corner in screen space), requiring transformation to normalized or world coordinates for interaction with virtual objects; for example, screen pixel coordinates are inverted on the y-axis and scaled via the inverse viewport transform to align with the scene's geometry. This mapping ensures precise event handling, such as selecting elements within the visible area, while confining interactions to the active viewport region to avoid processing events outside the intended display bounds.
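To make the input-mapping step concrete, the following is a minimal, engine-agnostic TypeScript sketch of the inverse viewport transform described above; the Viewport interface, function name, and the assumption that input coordinates use a top-left origin are illustrative rather than drawn from any particular API.

```typescript
// Minimal sketch: invert the viewport transform to map a mouse position in
// screen pixels to normalized device coordinates (NDC) in the range [-1, 1].
interface Viewport {
  x: number;      // lower-left corner, in pixels
  y: number;
  width: number;  // in pixels
  height: number;
}

function screenToNdc(
  mouseX: number,        // pixels from the left edge of the window
  mouseY: number,        // pixels from the top edge (typical for input events)
  windowHeight: number,  // total window height in pixels
  vp: Viewport
): { x: number; y: number } | null {
  // Flip the y-axis: input events usually have y growing downward, while the
  // viewport origin sits at its lower-left corner.
  const yFlipped = windowHeight - mouseY;

  // Confine interaction to the active viewport region.
  if (mouseX < vp.x || mouseX > vp.x + vp.width ||
      yFlipped < vp.y || yFlipped > vp.y + vp.height) {
    return null;
  }

  // Inverse of the viewport transform: map [x, x+width] -> [-1, 1], same for y.
  return {
    x: ((mouseX - vp.x) / vp.width) * 2 - 1,
    y: ((yFlipped - vp.y) / vp.height) * 2 - 1,
  };
}

// Example: in an 800x600 window with a full-window viewport, a click at the
// center (400, 300) maps to NDC (0, 0).
console.log(screenToNdc(400, 300, 600, { x: 0, y: 0, width: 800, height: 600 }));
```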

Computer Graphics

2D Viewport Mapping

In computer graphics, 2D viewport mapping involves transforming coordinates from a world window—a rectangular region in the world coordinate system—to the viewport, which is the display area on the screen in device coordinates. This process ensures that the relevant portion of the scene is rendered accurately within the available pixels. The transformation is affine and consists of two main steps: scaling to match the ratios and sizes, followed by translation to position the scaled window within the viewport bounds. The scaling factors are computed separately for the x- and y-axes to handle potential differences in window and viewport dimensions. The x-scaling factor is S_x = \frac{x_{r2} - x_{r1}}{x_{w2} - x_{w1}}, where (x_{w1}, x_{w2}) define the left and right bounds of the world window, and (x_{r1}, x_{r2}) define the left and right bounds of the viewport; the y-scaling factor follows analogously as S_y = \frac{y_{r2} - y_{r1}}{y_{w2} - y_{w1}}. After scaling a point (x_w, y_w) in the window to (x_w', y_w') = (S_x x_w, S_y y_w), translation offsets are applied: T_x = x_{r1} - S_x x_{w1} and T_y = y_{r1} - S_y y_{w1}, yielding the final viewport coordinates (x_r, y_r) = (x_w' + T_x, y_w' + T_y), equivalent to x_r = x_{r1} + S_x (x_w - x_{w1}) and y_r = y_{r1} + S_y (y_w - y_{w1}). This mapping preserves straight lines and parallelism but may introduce distortion if the aspect ratios differ. Prior to applying the viewport mapping, clipping is essential to discard or adjust primitives outside the world window, preventing unnecessary computations and artifacts in the viewport. The Cohen-Sutherland algorithm, a seminal line-clipping method, assigns 4-bit outcodes to line endpoints based on their position relative to the window edges (left, right, top, bottom), then iteratively clips segments that straddle boundaries by computing intersections. This preprocessing ensures only visible portions are transformed, improving efficiency in rasterization pipelines. For instance, consider mapping a world window of 100x100 units, bounded by (0, 0) to (100, 100), to an 800x600 pixel viewport from (0, 0) to (800, 600). The scaling factors are S_x = 800 / 100 = 8 and S_y = 600 / 100 = 6, reflecting the differing aspect ratios. The translation offsets derive as T_x = 0 - 8 \cdot 0 = 0 and T_y = 0 - 6 \cdot 0 = 0, so a world point like (50, 50) maps to (400, 300) in the viewport after scaling. If clipping via Cohen-Sutherland identifies a line segment partially outside the window, only the interior portion undergoes this transformation.
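The following TypeScript sketch implements the window-to-viewport mapping just described and reproduces the worked example; the Rect interface and function name are illustrative assumptions, not part of any standard API.

```typescript
// Minimal sketch of the window-to-viewport mapping described above.
interface Rect {
  x1: number; x2: number;  // left and right bounds
  y1: number; y2: number;  // bottom and top bounds
}

function windowToViewport(
  p: { x: number; y: number },
  win: Rect,  // world window
  vp: Rect    // viewport, in device coordinates
): { x: number; y: number } {
  // Per-axis scaling factors S_x and S_y.
  const sx = (vp.x2 - vp.x1) / (win.x2 - win.x1);
  const sy = (vp.y2 - vp.y1) / (win.y2 - win.y1);

  // Translation offsets T_x and T_y.
  const tx = vp.x1 - sx * win.x1;
  const ty = vp.y1 - sy * win.y1;

  return { x: sx * p.x + tx, y: sy * p.y + ty };
}

// Worked example from the text: a 100x100 world window mapped to an
// 800x600 viewport sends (50, 50) to (400, 300).
const win: Rect = { x1: 0, x2: 100, y1: 0, y2: 100 };
const vp: Rect = { x1: 0, x2: 800, y1: 0, y2: 600 };
console.log(windowToViewport({ x: 50, y: 50 }, win, vp)); // { x: 400, y: 300 }
```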

3D Viewport Projection

In 3D computer graphics, the projection pipeline transforms 3D scene coordinates into 2D viewport coordinates to render scenes on a flat display. This process begins with the model-view transformation, which positions and orients 3D models relative to the camera by combining modeling transformations (to place objects in world space) and viewing transformations (to align the world with the camera's eye space). Following this, a projection matrix sets up the perspective divide to simulate depth-based scaling, mapping eye-space coordinates to clip space, where points behind the camera or beyond the clipping planes are discarded. The final viewport mapping then scales and translates the resulting normalized device coordinates (typically in the range [-1, 1]) to coordinates within the 2D viewport rectangle, completing the transformation to screen space.

The perspective projection matrix incorporates the field-of-view (FOV) angle, near-plane distance n, and far-plane distance f to define the viewing volume. For a viewport with an aspect ratio of 1 (square), where horizontal and vertical FOV coincide, it is given by

\begin{pmatrix} \frac{1}{\tan(\frac{\mathrm{FOV}}{2})} & 0 & 0 & 0 \\ 0 & \frac{1}{\tan(\frac{\mathrm{FOV}}{2})} & 0 & 0 \\ 0 & 0 & -\frac{f}{f - n} & -1 \\ 0 & 0 & -\frac{f n}{f - n} & 0 \end{pmatrix}

In general, for a vertical FOV and arbitrary aspect ratio a = width/height, the x-scaling factor (first row, first column) becomes \frac{1}{a \tan(\frac{\mathrm{FOV}}{2})}, while the y-scaling factor (second row, second column) remains \frac{1}{\tan(\frac{\mathrm{FOV}}{2})}. This ensures objects farther from the camera appear smaller, mimicking human vision, while the homogeneous w-coordinate produced by the matrix preserves depth information for the perspective divide in later stages.

The view frustum represents the 3D pyramidal volume visible through the viewport, bounded by six planes: near, far, left, right, top, and bottom, which extend from the camera position based on the FOV and clipping distances. Frustum culling optimizes rendering by excluding objects entirely outside this volume, preventing unnecessary transformations and rasterization of invisible geometry, which can reduce processed elements by factors of 5-10 in complex scenes. After projection and clipping to the frustum, depth buffering via the z-buffer resolves visibility during rasterization by storing the depth value of the closest fragment for each pixel. The z-buffer, a per-pixel depth array initialized to the maximum depth (e.g., the far plane), compares interpolated z-values from projected fragments; only the fragment with the smallest z updates the color buffer, discarding others. This algorithm, originally proposed by Edwin Catmull in 1974, enables efficient handling of overlapping polygons without sorting. For example, in rasterizing two overlapping triangles after projection, the z-buffer ensures only the nearer triangle's color is written to pixels where their fragments intersect, accurately depicting occlusion.
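As a small illustration of the final stage, the TypeScript sketch below maps normalized device coordinates into a pixel viewport using the same scale-and-translate convention as OpenGL's glViewport; the function name and parameter names are illustrative assumptions.

```typescript
// Minimal sketch of the final viewport mapping: scale and translate normalized
// device coordinates (x, y in [-1, 1]) into a pixel rectangle.
function ndcToWindow(
  ndcX: number, ndcY: number,
  vpX: number, vpY: number,       // lower-left corner of the viewport, in pixels
  vpWidth: number, vpHeight: number
): { x: number; y: number } {
  return {
    x: vpX + ((ndcX + 1) / 2) * vpWidth,
    y: vpY + ((ndcY + 1) / 2) * vpHeight,
  };
}

// Example: NDC (0, 0), the center of the canonical view volume, maps to the
// center of an 800x600 viewport anchored at the origin.
console.log(ndcToWindow(0, 0, 0, 0, 800, 600)); // { x: 400, y: 300 }
```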

Web Development

Browser Viewport Mechanics

In web browsers, the viewport establishes the initial containing block, serving as the root reference for CSS layout calculations and positioning of all elements in the document. This initial containing block defines the boundaries within which the document's content is rendered, influencing properties like absolute and fixed positioning, which are resolved relative to the viewport's dimensions. For instance, elements with fixed positioning are anchored to the viewport, remaining stationary even as the page scrolls.

The effective size of the viewport is determined by the window's dimensions minus the space occupied by browser chrome, such as toolbars, address bars, and status bars, which are not part of the renderable area. In fullscreen mode, the viewport expands to utilize nearly the entire screen, whereas dynamic elements like collapsible toolbars can temporarily alter the available space, prompting adjustments in layout. This exclusion of browser chrome ensures that rendering focuses solely on the visible area, but it means developers must account for variable effective sizes across user configurations.

Viewport resizes, often triggered by user actions like window dragging or orientation changes, initiate a reflow process where the browser recalculates positions, sizes, and the overall layout to fit the new dimensions. This reflow can cascade through the render tree, potentially affecting many elements and leading to performance overhead if frequent. Following reflow, a repaint occurs, where the browser redraws the visible pixels to reflect the updated layout, optimizing only the changed regions for efficiency. When content overflows the viewport—due to elements exceeding its width or height—the browser handles scrollable overflow by adding scrollbars to the viewport, enabling scrolling without resizing the layout itself; CSS properties like overflow: auto on the root element or individual containers can control this behavior.

Cross-browser implementations of viewport mechanics have evolved, with historical variations notably between older versions of Internet Explorer and modern Chromium-based browsers like Chrome. In Internet Explorer, particularly in quirks mode triggered by non-standard DOCTYPEs, viewport sizing could deviate due to inconsistent box model interpretations, leading to inflated or contracted effective areas compared to standards mode. Modern Chromium engines, adhering closely to W3C specifications, provide more predictable sizing by standardizing the initial containing block. A key example of such differences is the handling of the visual viewport versus the layout viewport: the layout viewport maintains a fixed size for CSS computations (often 980px wide on mobile for desktop-like layouts), while the visual viewport represents the currently visible portion, which shrinks with overlays like on-screen keyboards; older IE lacked this distinction, treating the viewport more uniformly and causing layout shifts absent in Chromium's layered approach.
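A brief example of querying the browser viewport and reacting to resizes, using the standard window.innerWidth/innerHeight properties and the resize event (TypeScript, assuming it runs in a browser context); the handler body is purely illustrative.

```typescript
// Minimal sketch: window.innerWidth/innerHeight report the layout viewport
// size in CSS pixels (including any scrollbar), and the resize event fires
// whenever that size changes.
function reportViewport(): void {
  console.log(`viewport: ${window.innerWidth} x ${window.innerHeight} CSS px`);
}

// Log the initial size, then again whenever the window is resized or the
// device orientation changes (both trigger reflow of the affected layout).
reportViewport();
window.addEventListener("resize", reportViewport);
```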

Mobile and Responsive Viewport Control

In mobile web development, the viewport meta tag is essential for controlling how web pages scale and display on smaller screens. The tag uses the syntax <meta name="viewport" content="key=value, key=value">, where common attributes include width=device-width to set the viewport width to match the device's screen width, initial-scale=1.0 to establish a 1:1 zoom ratio between device pixels and CSS pixels, maximum-scale and minimum-scale to limit zoom levels (though these are often ignored in modern iOS versions for accessibility reasons), and user-scalable=no to disable user zooming (not recommended, as it violates WCAG guidelines). By default, mobile browsers like Safari on iOS assume a wider virtual viewport (e.g., 980px) to render desktop-optimized sites without excessive zooming, which can distort layouts; however, including <meta name="viewport" content="width=device-width, initial-scale=1.0"> prevents this by ensuring the page renders at the device's actual width, avoiding unintended zooming and enabling proper responsive behavior.

A key distinction in mobile browsers arises between the layout viewport and the visual viewport, particularly in touch environments. The layout viewport represents the full area into which the webpage is rendered, including off-screen portions, and remains stable to preserve the page's structural integrity, such as fixed positioning and CSS calculations. In contrast, the visual viewport is the portion actually visible to the user on the screen, which can shrink dynamically—for instance, when the on-screen keyboard appears, reducing the visible height without altering the layout viewport's dimensions. This separation ensures that elements like input fields remain accessible and layouts do not reflow unexpectedly during keyboard activation, though developers must use APIs like the Visual Viewport API to detect and adjust for these changes in interactive applications (see the sketch at the end of this section).

Responsive web design techniques leverage the viewport's dimensions through CSS to adapt layouts across varying screen sizes, a practice that gained prominence following the iPhone's 2007 launch, which highlighted the need for mobile-optimized browsing. Ethan Marcotte formalized the term "responsive web design" in a 2010 A List Apart article, advocating for fluid grids, flexible images, and media queries as core pillars to ensure sites scale seamlessly. Media queries, standardized in CSS3, allow conditional styling based on viewport width, such as @media screen and (max-width: 600px) { body { font-size: 14px; } }, which applies smaller text on narrow screens to maintain readability. A practical example is preventing horizontal scrolling by combining the viewport meta tag with max-width: 100% on images and containers, ensuring content fits within the device's width without overflow, thus promoting a single-column layout on mobiles that expands to multi-column on larger viewports via queries like @media screen and (min-width: 600px) { .wrapper { display: flex; } }. This approach, rooted in the post-iPhone era, has become a standard for inclusive web experiences across devices.
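Sketch referenced above for detecting visual-viewport changes: it uses the standard Visual Viewport API (window.visualViewport with its resize and scroll events) behind a feature-detection guard; the handler body is illustrative.

```typescript
// Minimal sketch: react when the visible viewport shrinks (for example, when
// an on-screen keyboard appears) without the layout viewport changing.
const vv = window.visualViewport;
if (vv) {
  const onChange = () => {
    // width/height are in CSS pixels; offsetLeft/offsetTop give the visual
    // viewport's position relative to the layout viewport.
    console.log(
      `visual viewport: ${vv.width} x ${vv.height}, ` +
      `offset (${vv.offsetLeft}, ${vv.offsetTop}), scale ${vv.scale}`
    );
  };
  vv.addEventListener("resize", onChange);
  vv.addEventListener("scroll", onChange);
}
```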

Advanced Applications

Viewports in Virtual and Augmented Reality

In virtual reality (VR), viewports are adapted to simulate immersive environments by rendering stereoscopic images that mimic human binocular vision. Stereo rendering employs dual viewports, one for each eye, to create depth perception through slight disparities in the projected scenes. These viewports are typically configured as split-screen regions of a shared render texture, with the left-eye viewport occupying the left half and the right-eye viewport the right half of the display buffer. Headsets such as the Oculus Rift aim to approach the human horizontal field-of-view (FOV) of approximately 120 degrees binocularly, though practical implementations often target around 110 degrees to balance immersion and hardware constraints. As of 2023, newer headsets such as the Meta Quest 3 maintain a horizontal FOV of about 110 degrees, with improvements in resolution and tracking.

Head-tracked viewports in VR enable dynamic adjustment of the rendered scene based on the user's head movements, ensuring the viewport stays aligned with the user's perception. Inertial measurement unit (IMU) sensors, including gyroscopes, accelerometers, and magnetometers, track head orientation at high frequencies, such as 1000 Hz in devices like the Oculus Rift, integrating angular velocity data to update the viewport pose via quaternion representations. This compensation helps prevent motion sickness and maintains spatial consistency. To address optical distortions from lens curvature, such as pincushion effects in VR headsets, rendering applies barrel distortion and chromatic aberration corrections using a distortion mesh, processed by the device's compositor for each eye's viewport.

In augmented reality (AR), the viewport serves as the frame for the device's live camera feed, overlaying virtual content onto the real-world view while clipping elements to physical boundaries. ARCore's Frame API captures the camera image in real time and uses hit-testing to project rays from screen coordinates onto detected real-world geometry, ensuring virtual objects are anchored and clipped within environmental limits like surfaces or depths up to 65 meters. Similarly, ARKit integrates the camera feed as the primary viewport, employing world tracking to align and clip 3D virtual elements to the physical scene, preventing overlaps beyond detectable bounds. This approach maintains perceptual coherence by transforming coordinates between 2D screen space and 3D real-world poses. As of 2025, ARKit on devices with LiDAR supports depth mapping up to several meters with high precision for occlusion.
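As an engine-agnostic illustration of the dual-viewport layout used for stereo rendering, the TypeScript sketch below splits a render target into left- and right-eye halves; the PixelRect interface, function name, and example resolution are assumptions for illustration only.

```typescript
// Minimal sketch: divide a single render target into the two side-by-side
// per-eye viewports used for split-screen stereo rendering.
interface PixelRect { x: number; y: number; width: number; height: number; }

function stereoViewports(targetWidth: number, targetHeight: number):
    { left: PixelRect; right: PixelRect } {
  const half = Math.floor(targetWidth / 2);
  return {
    left:  { x: 0,    y: 0, width: half, height: targetHeight },  // left-eye half
    right: { x: half, y: 0, width: half, height: targetHeight },  // right-eye half
  };
}

// Example: a 2880x1600 render target yields two 1440x1600 eye viewports.
console.log(stereoViewports(2880, 1600));
```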

Viewports in Game Engines

In game engines, multi-viewport support enables features such as split-screen multiplayer or overlays, allowing multiple camera views to render simultaneously within a single display. In Unity, developers configure this by modifying the Camera.rect property, which specifies a normalized rectangle (ranging from (0,0) at the bottom-left to (1,1) at the top-right) defining the screen portion allocated to each camera's output. For split-screen setups, multiple cameras can target the same render texture or screen, with each assigned a distinct rect to divide the viewport—such as halving the width for two-player horizontal splits—facilitating local multiplayer without additional hardware. Similarly, Unreal Engine provides built-in split-screen functionality for local multiplayer, automatically generating additional player controllers and viewports upon detecting multiple inputs, with viewport divisions adjustable via project settings or nodes to support layouts like side-by-side or stacked views.

Dynamic viewport adjustments are essential for maintaining visual consistency across resolutions, often through scaling techniques that ensure resolution independence. In Unity, viewport scaling involves setting the camera's aspect ratio (width/height) dynamically based on the screen dimensions, combined with orthographic size adjustments for 2D cameras or field-of-view tweaks for perspective cameras to prevent distortion; this allows content designed at a reference resolution, such as 1920x1080, to scale proportionally without stretching or cropping. Performance optimizations like level-of-detail (LOD) further leverage viewport visibility, where Unity's LODGroup component switches to lower-detail meshes when objects shrink below a camera-relative screen-size threshold, reducing draw calls for distant elements visible in the viewport. In Unreal Engine, equivalent LOD transitions occur via static mesh settings, with distances defined in Cull Distance Volumes that attenuate rendering based on distance from the viewport camera, enabling efficient handling of large scenes by excluding off-screen or far objects.

Porting games to consoles and mobile devices post-2010 has emphasized adaptive viewport handling for diverse aspect ratios, such as transitioning from standard 16:9 on consoles to ultrawide 21:9 on PC monitors or variable ratios on modern smartphone models. In Unity workflows, developers use the Player Settings to define supported aspect ratios and employ scripts to detect runtime screen dimensions, applying letterboxing or pillarboxing via viewport rect constraints to preserve the intended composition without UI overlap, as sketched below. Unreal Engine supports this through project-wide aspect ratio constraints, automatically adding black bars for unsupported ratios (e.g., constraining 21:9 ports to 16:9 safe areas), a practice refined in mobile and console pipelines since Unreal Engine 4's 2014 release to streamline cross-platform builds while minimizing re-authoring. These techniques, integral to iterative porting processes, ensure fidelity across hardware variances, as seen in titles adapting from console to mobile without core viewport redesigns.
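The letterboxing/pillarboxing logic described above can be sketched in an engine-agnostic way as follows (TypeScript); the NormalizedRect interface and function name are illustrative, and the 0..1 coordinate convention simply mirrors normalized camera rects such as Unity's Camera.rect.

```typescript
// Minimal sketch: compute a normalized viewport rect that letterboxes or
// pillarboxes content so a target aspect ratio is preserved on any screen.
interface NormalizedRect { x: number; y: number; width: number; height: number; }

function constrainAspect(
  screenWidth: number, screenHeight: number,
  targetAspect: number  // e.g. 16 / 9
): NormalizedRect {
  const screenAspect = screenWidth / screenHeight;
  if (screenAspect > targetAspect) {
    // Screen is wider than the target: pillarbox (black bars left and right).
    const width = targetAspect / screenAspect;
    return { x: (1 - width) / 2, y: 0, width, height: 1 };
  } else {
    // Screen is taller/narrower: letterbox (black bars top and bottom).
    const height = screenAspect / targetAspect;
    return { x: 0, y: (1 - height) / 2, width: 1, height };
  }
}

// Example: constraining a 21:9-class ultrawide monitor (2560x1080) to 16:9
// content yields a centered rect covering 0.75 of the screen width.
console.log(constrainAspect(2560, 1080, 16 / 9));
```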

History and Standards

Early Developments

The concept of the viewport emerged in the early days of computer graphics as a means to manage the display of graphical content on limited hardware, beginning with foundational work in the 1960s. Ivan Sutherland's Sketchpad system, developed in 1963 as part of his doctoral thesis at MIT, introduced early notions of clipping windows by computing only the visible portions of curves and lines that appeared within the display scope—a vector display addressing 1024 positions per axis on a cathode-ray tube screen. This clipping mechanism ensured efficient rendering by detecting intersections with scope edges and generating solely the portions intended for human viewing, laying groundwork for abstracting the physical display boundary.

By the 1970s, these ideas were formalized in theoretical and practical frameworks. In their book Principles of Interactive Computer Graphics, William M. Newman and Robert F. Sproull explicitly defined the viewport as a rectangular region on the screen for displaying content, distinguished from the window—a region in world-coordinate space defining the scene portion to render. They described the window-to-viewport transformation as a scaling and translation process to map world coordinates onto screen space, with clipping algorithms like Cohen-Sutherland applied to remove invisible parts outside these boundaries, enhancing efficiency in interactive systems. This formalization addressed the need for device-independent abstractions in a growing range of graphics applications.

A key precursor to later standardization came in 1977 with the Graphics Standards Planning Committee (GSPC)'s Core system, outlined in their status report. The system defined window-viewport transformations as integral to both 2D and 3D viewing pipelines, treating the 2D case as a subset of 3D where a window in normalized device coordinates maps to a viewport on the display surface via affine transformations and clipping. This design emphasized portability across hardware, influencing subsequent standards by specifying primitives and functions for viewport specification and transformation.

These developments were driven by the constraints of early cathode-ray tube (CRT) displays prevalent in the 1960s and 1970s, such as vector refresh CRTs like those in the MIT TX-2 or IBM 2250, which featured small viewable areas (often 7x7 inches or less) and required constant redrawing at 30 Hz or higher to avoid flicker. Limited display capacity—typically a few hundred addressable points—and high computational demands for refresh cycles necessitated software abstractions like viewports to selectively render content within the physical screen bounds, avoiding overload on scarce memory and processing resources.

Modern Standardization

The modern standardization of viewports emphasizes interoperability across web browsers, mobile devices, and graphics APIs, addressing the proliferation of diverse screen sizes and rendering environments since the early 2010s. In HTML, the viewport meta tag within documents enables authors to specify the viewport's width, initial scale, and user scalability, crucial for responsive design on mobile devices. Introduced by Apple for Safari on iOS in 2007, this tag has evolved into a de facto standard supported by all major browsers, as documented in the HTML Living Standard's guidance on meta elements, though it remains a convention rather than a core normative feature. Complementing the meta tag, the W3C's CSS Viewport Module Level 1 formalizes how the viewport determines the initial containing block's size, zoom, and orientation, published as a First Public Working Draft in January 2024 to enable more precise viewport management without relying solely on the meta tag. Viewport-relative units such as vw (viewport width) and vh (viewport height), defined in the CSS Values and Units Module Level 3 (W3C Recommendation, 2024), further support adaptive layouts by tying measurements to the viewport dimensions, promoting fluid designs across devices. The CSSOM View Module (Working Draft, September 2025) standardizes interfaces like window.innerWidth and window.innerHeight for dynamically querying viewport dimensions, ensuring consistent access in client-side scripting.

In OpenGL, the Khronos Group's core specification, version 4.6 (released 2017), defines the viewport as a state-managed rectangle that maps normalized device coordinates to window coordinates via the glViewport command, a concept retained from OpenGL 1.0 (1992) but refined for modern pipelines. For web-based 3D rendering, WebGL 1.0 (standardized 2011) adopts this model from OpenGL ES 2.0, initializing the viewport to match the canvas element's dimensions upon context creation. WebGL 2.0 (2017) extends viewport control with support for multiple render targets and floating-point specifications, enhancing efficiency for complex scenes. Emerging as the successor to WebGL, WebGPU—developed by the W3C GPU for the Web Community Group and advanced to Candidate Recommendation in December 2024—introduces the GPURenderPassEncoder.setViewport() method to set the rasterization viewport dynamically, allowing offsets and dimensions that can extend beyond render target boundaries for advanced effects like multi-view rendering. These standards collectively ensure viewport consistency in high-performance applications, from responsive web pages to immersive graphics, while prioritizing portability and hardware efficiency.
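A hedged TypeScript sketch of setting the WebGPU rasterization viewport follows; it assumes a browser with WebGPU enabled, WebGPU type declarations available to the compiler, and a canvas element with the hypothetical id "gpu-canvas", and it omits pipeline setup and draw calls.

```typescript
// Minimal sketch: configure a canvas for WebGPU, then call setViewport() on a
// render pass to restrict rasterization to a pixel rectangle and depth range.
async function renderWithViewport(): Promise<void> {
  const canvas = document.getElementById("gpu-canvas") as HTMLCanvasElement;
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU not available");
  const device = await adapter.requestDevice();

  const context = canvas.getContext("webgpu") as GPUCanvasContext;
  context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(),
      loadOp: "clear",
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      storeOp: "store",
    }],
  });

  // Viewport: x, y, width, height in pixels, plus the minDepth/maxDepth range.
  pass.setViewport(0, 0, canvas.width, canvas.height, 0.0, 1.0);
  // ...pipeline setup and draw calls would go here...
  pass.end();
  device.queue.submit([encoder.finish()]);
}
```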
