Screen tearing
Screen tearing is a visual artifact that occurs in video displays when the graphics processing unit (GPU) renders frames at a rate that is not synchronized with the monitor's refresh rate, resulting in horizontal splits or distortions where parts of two different frames are shown on the screen simultaneously.[1][2][3] The mismatch is between the GPU's frame output, measured in frames per second (FPS), and the display's refresh rate, expressed in hertz (Hz); a 60 Hz monitor, for example, refreshes 60 times per second.[1][2] When the GPU delivers a new frame while the monitor is midway through refreshing the previous one, the display buffer updates incompletely, creating visible tears that are especially noticeable during fast horizontal motion in applications like video games or scrolling interfaces.[3][2] For instance, if a GPU produces 100 FPS on a 60 Hz monitor, multiple frames may queue up within a single refresh, leading to the monitor displaying portions of older and newer frames together.[1]

The effects of screen tearing include disrupted visual continuity, reduced immersion, and impaired performance in dynamic scenarios such as gaming, where smooth motion is critical.[2][3] It is particularly prevalent in high-frame-rate environments without proper synchronization, though it can also occur at lower rates if frame timing is misaligned.[1]

Several techniques mitigate screen tearing by synchronizing the GPU and display rates. Vertical synchronization (V-Sync) caps the frame rate at the refresh rate, for example limiting output to 60 FPS on a 60 Hz monitor, though it may introduce input lag.[3][2] Adaptive technologies such as NVIDIA's G-Sync and AMD's FreeSync instead dynamically adjust the monitor's refresh rate to align with the GPU's output within a supported range (e.g., 48–144 Hz), eliminating tearing with minimal added latency of about 1 ms.[3][1] Additional solutions include updating graphics drivers, adjusting resolution and refresh rates to compatible settings, or upgrading hardware to support higher refresh rates or adaptive sync.[2][1]

Definition and Causes
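The frame-rate/refresh-rate mismatch described in the overview above can be illustrated with a short numeric sketch. This is an illustrative model only; the `tear_fraction` helper and its simplifying assumption (that the whole refresh interval is active scanout, with no blanking) are not drawn from any cited source:

```python
def tear_fraction(fps: float, refresh_hz: float, swaps: int) -> list[float]:
    """For each unsynchronized buffer swap, estimate where on the screen the
    tear line lands, as a fraction of the scanout (0.0 = top, 1.0 = bottom).
    Simplification: the entire refresh interval is treated as active scanout."""
    frame_period = 1.0 / fps            # time between GPU frame completions
    refresh_period = 1.0 / refresh_hz   # time for one top-to-bottom scanout
    positions = []
    for n in range(1, swaps + 1):
        t = n * frame_period                            # when the swap occurs
        phase = (t % refresh_period) / refresh_period   # progress through scanout
        positions.append(round(phase, 3))
    return positions

# A GPU producing 100 FPS on a 60 Hz display: successive swaps drift through
# the scanout, so the tear line appears at a different height on each frame.
print(tear_fraction(100, 60, 4))  # → [0.6, 0.2, 0.8, 0.4]
```

Because 100 FPS and 60 Hz share no common period here, the swap phase never settles, which is why an unsynchronized tear line wanders up and down the screen.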
Definition
Screen tearing is a visual artifact in video displays that occurs when the graphics processing unit (GPU) delivers frames to the monitor at a rate that is not synchronized with the display's refresh rate, so that portions of two different frames are shown on the screen simultaneously. The mismatch produces a horizontal split or "tear" line, where the upper and lower parts of the image depict slightly different moments in the rendered scene, creating a disjointed appearance.[4][5]

The phenomenon arises during the rendering and display process, in which the GPU continuously generates new frames and swaps them into the frame buffer for the monitor to scan out. If the GPU completes a frame swap midway through the monitor's refresh cycle, which is common when the frame rate exceeds the refresh rate without synchronization, the monitor begins displaying the new frame while still finishing the previous one, producing the visible tear. The effect is most noticeable in fast-moving scenes, such as in video games or scrolling interfaces, where the displacement between successive frames is pronounced.[6][4]

Typically, screen tearing manifests as a straight horizontal line across the display, with the image above the line lagging behind the content below it; the exact position of the tear varies with the timing of the frame update relative to the scanout process. In extreme cases, multiple tear lines may appear if frame rates are significantly mismatched. The artifact is distinct from other display issues such as stuttering, as it specifically involves the blending of incomplete frames rather than frame drops or delays.[5][4]

Underlying Mechanisms
Screen tearing arises from a mismatch between the graphics processing unit (GPU) rendering process and the display's scanout mechanism. Displays refresh their image by progressively scanning pixels from top to bottom, line by line, at a fixed refresh rate, such as 60 Hz. This scanout process includes a brief vertical blanking interval (VBI) at the end of each refresh cycle, during which the display is not actively drawing and the frame buffer can be updated without visible artifacts. If the GPU updates the frame buffer during active scanout, while the display is midway through drawing lines from the current frame, the new frame data overwrites portions of the old one, resulting in horizontal discontinuities where parts of two different frames are visible simultaneously.[7][8]

To mitigate this, graphics systems employ double buffering: a front buffer holds the image currently displayed while the GPU renders the next frame off-screen into a back buffer. Once rendering completes, the buffers are swapped atomically, ideally during the VBI, so that the entire new frame is presented cohesively. This technique, standard in APIs like OpenGL and DirectX, prevents direct overwrites during display. Without synchronization, however, the swap can occur asynchronously with the scanout, leading to tearing if it happens mid-refresh. In NVIDIA's 3D API implementation, for instance, drawing directly to the front buffer produces tearing, while buffered swapping resolves it by isolating rendering from display.[8][9]

In more advanced scenarios, such as Vulkan's swapchain model, tearing is managed explicitly through presentation modes. The VK_PRESENT_MODE_IMMEDIATE_KHR mode presents buffers immediately, without waiting for the VBI, minimizing latency but permitting tearing because the display may scan out from a buffer while the GPU simultaneously updates it. Conversely, VK_PRESENT_MODE_FIFO_KHR enforces FIFO queuing with VSync-like synchronization, holding frames until the VBI to avoid mid-scanout swaps, though this introduces potential latency.

Tearing is particularly evident when the GPU frame rate exceeds or otherwise mismatches the display refresh rate, since unsynchronized frame deliveries cause the tear line to appear at varying vertical positions across frames. Even at matched rates, a phase misalignment between rendering and refresh timing can place the swap outside the VBI, perpetuating the artifact.[10][7][9]
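The scanout and buffer-swap behavior described above can be sketched as a toy simulation. This is a minimal model, not driver code: the display copies the front buffer line by line, and a swap that lands mid-scanout (as in an immediate-style present) mixes two frames, while a swap deferred to the VBI (as in a FIFO/VSync-style present) does not:

```python
LINES = 8  # scanlines per frame (tiny, for illustration)

def scanout(front, swap_line=None, next_frame=None):
    """Scan the front buffer top to bottom. If swap_line is given, the front
    buffer is replaced mid-scanout, modeling an unsynchronized present."""
    shown = []
    for line in range(LINES):
        if swap_line is not None and line == swap_line:
            front = next_frame  # swap arrives during active scanout
        shown.append(front[line])
    return shown

frame_a = ["A"] * LINES  # older frame
frame_b = ["B"] * LINES  # newer frame

# Immediate-style present: the swap lands at line 3, so the top of the
# screen shows the old frame and the bottom the new one — a tear line.
torn = scanout(frame_a, swap_line=3, next_frame=frame_b)
print(torn)   # → ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'B']

# FIFO/VSync-style present: the swap happened in the VBI, before scanout
# began, so the displayed image is a single coherent frame.
clean = scanout(frame_b)
print(clean)  # → ['B', 'B', 'B', 'B', 'B', 'B', 'B', 'B']
```

The `torn` output makes the tear concrete: everything above line 3 comes from the older frame, everything below it from the newer one, matching the horizontal discontinuity described above.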
The underlying issue stems from the decoupled nature of GPU rendering pipelines, which run at variable speeds determined by computational load, and the fixed-rate scanout hardware in displays. Without a mechanism such as vertical synchronization (VSync), which defers buffer swaps until VBI boundaries, the frame buffer becomes a shared resource vulnerable to concurrent access. This concurrency is exacerbated in high-frame-rate scenarios, where multiple frames may queue or overwrite one another rapidly, amplifying the visible splits. Early 3D accelerator architectures highlighted these challenges, leading to the standardized buffering and synchronization protocols that maintain visual integrity in modern APIs.[8][10]
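One consequence of deferring swaps to VBI boundaries, as traditional VSync does, is that the effective frame rate quantizes to integer divisors of the refresh rate: a frame that misses a VBI must wait for the next one. The sketch below illustrates this with assumed example timings (the `effective_fps` helper is hypothetical, not part of any graphics API):

```python
import math

def effective_fps(render_ms: float, refresh_hz: float = 60.0) -> float:
    """Effective frame rate under VSync on a fixed-rate display: each frame
    occupies a whole number of refresh intervals, because a swap that misses
    one VBI must wait for the next."""
    refresh_ms = 1000.0 / refresh_hz
    intervals = math.ceil(render_ms / refresh_ms)  # refreshes consumed per frame
    return refresh_hz / intervals

print(effective_fps(10))  # renders inside one 16.7 ms interval → 60.0 FPS
print(effective_fps(20))  # just misses one VBI                → 30.0 FPS
print(effective_fps(40))  # misses two VBIs                    → 20.0 FPS
```

This stepwise drop (60 → 30 → 20 FPS on a 60 Hz display) is the latency and smoothness cost of VSync that adaptive-sync schemes such as G-Sync and FreeSync avoid by varying the refresh interval itself.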