
Motion interpolation

Motion interpolation, also known as frame interpolation or motion-compensated frame interpolation (MCFI), is a computational technique used in video processing and computer animation to synthesize intermediate frames or poses between existing keyframes, thereby creating smoother and more fluid motion sequences. In video applications, it addresses discrepancies between source frame rates (e.g., 24 fps for films) and display refresh rates (e.g., 60 Hz or higher), reducing judder and motion blur by estimating and generating new frames along motion trajectories. In computer animation, it enables seamless transitions between example motions, whether captured via motion capture or hand-animated, often through parametric blending to produce variations such as speed or style adjustments.

Techniques in Video Processing

Video frame interpolation methods are broadly categorized into optical flow-based, kernel-based, and phase-based approaches, each leveraging different principles to estimate motion and synthesize frames.
  • Optical Flow-Based Methods: These estimate pixel-level motion vectors between frames using algorithms like Lucas-Kanade or Farneback, then warp and blend frames to create intermediates; early examples include block-matching techniques, while modern variants incorporate deep learning to handle occlusions and large displacements. For instance, recent methods for large-motion scenarios use neural networks to predict adaptive flows, improving accuracy in dynamic scenes such as sports footage (a minimal flow-based warping sketch appears after this list).
  • Kernel-Based Methods: These predict per-pixel kernels to aggregate information from input frames, often via convolutional neural networks (CNNs) for sub-pixel precision; hybrid models combine this with depth estimation to better manage disocclusions.
  • Phase-Based Methods: Decomposing frames into phase and amplitude components allows phase shifting for interpolation, which is efficient for periodic motions but less robust to complex deformations.
Recent integrations, such as transformer-based models focusing on motion regions, have enabled real-time processing at high resolutions, with applications in enhancing low-frame-rate content.
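A minimal sketch of the optical-flow-based approach described above, using OpenCV's Farneback estimator to compute dense flow in both directions, backward-warp each source frame toward the intermediate time, and blend the results; the function name, parameter values, and blending scheme are illustrative assumptions rather than any product's implementation.

# Flow-based frame interpolation sketch (assumes BGR uint8 frames of equal size).
import cv2
import numpy as np

def interpolate_midframe(frame0, frame1, t=0.5):
    """Synthesize a frame at time t in (0, 1) between frame0 and frame1."""
    gray0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)

    # Dense flow from frame0 to frame1 and from frame1 to frame0.
    flow_fw = cv2.calcOpticalFlowFarneback(gray0, gray1, None,
                                           0.5, 3, 15, 3, 5, 1.2, 0)
    flow_bw = cv2.calcOpticalFlowFarneback(gray1, gray0, None,
                                           0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = gray0.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    grid = np.stack([grid_x, grid_y], axis=-1).astype(np.float32)

    # Backward warping: approximate the flow from time t to each source frame
    # by scaling the estimated flows, then sample the sources at those positions.
    map0 = grid + t * flow_bw          # sampling coordinates into frame0
    map1 = grid + (1.0 - t) * flow_fw  # sampling coordinates into frame1
    warped0 = cv2.remap(frame0, map0[..., 0], map0[..., 1], cv2.INTER_LINEAR)
    warped1 = cv2.remap(frame1, map1[..., 0], map1[..., 1], cv2.INTER_LINEAR)

    # Blend the two warped estimates, weighting by temporal distance.
    return cv2.addWeighted(warped0, 1.0 - t, warped1, t, 0.0)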

Techniques in Animation

In animation, motion interpolation relies on interpolation between keyframes or multidimensional blending of example motions to generate paths and poses, reducing workload by automating in-betweening. These methods support hierarchical animations and real-time interactive control in games and simulations.
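The in-betweening described above reduces, in its simplest form, to blending transforms between two keyframes. The sketch below shows the two standard operations, linear interpolation (lerp) for positions and spherical linear interpolation (slerp) for rotation quaternions; the function names and example values are purely illustrative.

import numpy as np

def lerp(p0, p1, t):
    """Linear interpolation between two position vectors."""
    return (1.0 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to lerp and renormalize
        q = lerp(q0, q1, t)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# In-between pose of a single joint at t = 0.25 between two keyframes.
pos = lerp([0.0, 1.0, 0.0], [0.5, 1.2, 0.0], 0.25)
rot = slerp([1.0, 0.0, 0.0, 0.0], [0.7071, 0.7071, 0.0, 0.0], 0.25)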

Applications and Impact

Motion interpolation enhances consumer viewing experiences, such as in televisions via "motion smoothing" features that reduce judder and blur, albeit sometimes at the cost of the "soap opera effect", while enabling high-frame-rate playback. In professional contexts, it facilitates slow-motion generation from standard videos, improves compression efficiency by upsampling frames, and aids restoration of archival footage. For animation, it powers motion editing, reuse, and synthesis in virtual reality and film production, allowing expressive character behaviors from limited capture data. Challenges persist in handling occlusions, large motions, and artifacts, driving ongoing research toward more robust AI-driven solutions.

Fundamentals

Definition and purpose

Motion interpolation, also known as motion-compensated frame interpolation (MCFI), is a technique that generates intermediate frames between existing ones in a video sequence to create the illusion of a higher frame rate. This method analyzes the motion within the original frames to synthesize new content, rather than merely duplicating or averaging pixels, which helps produce smoother transitions and more natural-looking movement. The primary purpose of motion interpolation is to mitigate motion blur and judder, artifacts that occur when low-frame-rate content is displayed on high-refresh-rate screens, thereby enhancing visual fluidity and perceived quality during playback. It emerged as a tool for frame rate conversion in broadcast television, enabling the adaptation of content to standard TV formats without significant degradation. For instance, it facilitates the conversion of 24 frames per second (fps) cinematic footage to 60 fps for smoother presentation, preserving the artistic intent while accommodating display requirements. At its core, the workflow involves taking input frames, estimating motion vectors to track object movement across them, and then using those vectors to construct interpolated output frames that align temporally between the originals. This approach distinguishes motion interpolation from simpler techniques like frame duplication, which repeats existing frames and can introduce stuttering, or basic averaging, which often results in ghosting artifacts.

Motion estimation principles

Motion estimation forms the cornerstone of motion interpolation by identifying and quantifying the displacement of visual elements across consecutive frames in a video sequence. At its core, motion vectors represent the displacement of pixels, blocks, or features from one frame to the next, capturing how image content moves over time to enable the synthesis of intermediate frames. These vectors provide a compact description of motion, allowing pixels from source frames to be repositioned accurately in the interpolated frame. This principle underpins both dense (per-pixel) and sparse (selected points) estimation approaches in computer vision.

The mathematical foundation of motion estimation is the optical flow model, which assumes brightness constancy, the idea that the intensity of a point remains unchanged as it moves between frames. Under this assumption, for an image intensity function I(x, y, t), the constancy I(x, y, t) = I(x + u \Delta t, y + v \Delta t, t + \Delta t) leads, via first-order expansion, to the optical flow constraint equation: \frac{\partial I}{\partial x} u + \frac{\partial I}{\partial y} v + \frac{\partial I}{\partial t} = 0, where u and v are the horizontal and vertical components of the optical flow (motion velocity), and \frac{\partial I}{\partial x}, \frac{\partial I}{\partial y}, \frac{\partial I}{\partial t} are the spatial and temporal image gradients. This constrains the possible motion directions at each pixel but does not uniquely determine the flow vector, as it provides one equation for two unknowns. Seminal work in the 1980s, such as the Lucas-Kanade method, addressed this by assuming constant flow within local neighborhoods and solving the resulting overdetermined system via least-squares minimization to estimate motion robustly.

Estimation faces key challenges, including the aperture problem, where uniform or edge-like regions provide ambiguous motion cues: the constraint only resolves the motion component along the local intensity gradient (perpendicular to the edge), leaving the parallel component undetermined. This ambiguity arises because local intensity changes alone cannot distinguish true motion from aperture-induced illusions, necessitating additional smoothness assumptions or the aggregation of constraints over multiple points to resolve full vectors. Occlusion handling poses another hurdle, as regions visible in one frame may be hidden in another due to object motion, leading to unreliable vectors or estimation failures at boundaries; techniques must detect these regions and mitigate them by blending or extrapolating from surrounding areas.

For effective interpolation, estimated motion fields must support forward and backward mapping: forward mapping warps source pixels ahead using the motion vectors, while backward mapping traces from each target frame position back to source locations. This dual approach prevents holes (uncovered regions from occlusions) and overlaps (multiple sources mapping to one target), ensuring complete and artifact-free coverage in the new frame by averaging or selecting appropriate contributions. These principles, originating in early computer vision research during the 1980s with methods like Lucas-Kanade, laid the groundwork for practical motion analysis.
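As an illustration of the Lucas-Kanade idea above, the following sketch stacks the optical flow constraint I_x u + I_y v + I_t = 0 over a small window and solves for a single (u, v) by least squares; the helper name and window handling are illustrative assumptions, and the solve degrades exactly where the aperture problem predicts (near-singular normal equations in uniform or edge-like regions).

import numpy as np

def lucas_kanade_patch(I0, I1, y, x, win=7):
    """Estimate (u, v) for the window centered at (y, x) between grayscale frames I0, I1."""
    half = win // 2
    # Extract a slightly padded patch so gradients are valid over the window
    # (assumes the window lies inside the image).
    P0 = I0[y - half - 1:y + half + 2, x - half - 1:x + half + 2].astype(float)
    P1 = I1[y - half - 1:y + half + 2, x - half - 1:x + half + 2].astype(float)

    # Central-difference spatial gradients and forward temporal gradient.
    Iy, Ix = np.gradient(P0)
    It = P1 - P0

    # Keep the interior window and stack one constraint per pixel: Ix*u + Iy*v = -It.
    s = slice(1, -1)
    A = np.stack([Ix[s, s].ravel(), Iy[s, s].ravel()], axis=1)
    b = -It[s, s].ravel()

    # Least-squares solution; ill-conditioned where the aperture problem applies.
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv  # (u, v) displacement in pixels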

Frame rate relationships

Motion interpolation bridges discrepancies between the source frame rate of video content and the refresh rate of display devices by generating intermediate frames, thereby enhancing perceived smoothness without altering the original audio or content timing. For instance, cinematic content captured at 24 frames per second (fps) can be interpolated to match a 120 Hz display by inserting four synthetic frames between each pair of original frames, resulting in an effective output of 120 fps. This process follows the general relationship where the effective frame rate equals the source rate multiplied by (1 + interpolation factor), with the factor representing the number of inserted frames per original interval. Unlike native higher frame rates achieved through actual capture at elevated speeds, motion interpolation produces artificial frames via motion estimation and synthesis, which can introduce artifacts but avoids the need for re-recording content. A common example involves up-converting 30 fps video to a 60 Hz display using 2x interpolation, yielding 60 fps of synthesized motion rather than truly captured higher-rate footage. This distinction is crucial, as interpolated frames do not capture new visual information but approximate motion trajectories between existing frames.

Television manufacturers often advertise inflated "effective" refresh rates that incorporate motion interpolation, leading to potential misconceptions about performance. For example, a 60 Hz panel with 4x interpolation may be marketed as delivering "240 Hz effective" motion handling, though the actual native refresh rate remains 60 Hz and the perceived quality hinges on the interpolation algorithm's accuracy. Such claims typically derive from combining the panel's native rate with the number of generated frames, but they do not equate to genuine high-frame-rate capture. By quantifying these frame rate conversions, motion interpolation serves as a practical solution to display mismatches, such as adapting low-frame-rate sources to high-refresh-rate screens, while preserving synchronous audio playback. Motion estimation principles underpin the computation of these conversions, ensuring temporal consistency in the synthesized sequence.
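The stated relationship can be expressed as a one-line calculation; the helper below is purely illustrative.

def effective_fps(source_fps, inserted_per_interval):
    # effective rate = source rate * (1 + inserted frames per original interval)
    return source_fps * (1 + inserted_per_interval)

print(effective_fps(24, 4))   # 24 fps film with 4 synthetic frames per gap -> 120
print(effective_fps(30, 1))   # 2x interpolation of 30 fps video -> 60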

Techniques

Traditional algorithms

Traditional algorithms for motion interpolation rely on classical techniques to estimate motion vectors between consecutive frames, enabling the synthesis of intermediate frames without data-driven learning. These methods, developed primarily in the pre-deep learning era, focus on deterministic computations such as matching blocks or analyzing intensity gradients to approximate the motion field. Block matching and its variants represent foundational approaches, often optimized for efficiency in video compression and processing tasks.

Block matching divides each frame into non-overlapping macroblocks, typically of fixed size such as 16x16 pixels, and searches for the best-matching block in a reference frame within a defined search window. Motion vectors are computed by minimizing a dissimilarity metric between the current block and candidate blocks in the reference frame. Common metrics include the sum of absolute differences (SAD), defined as \sum |I_1(x) - I_2(x + \mathbf{mv})| over the block, where I_1 and I_2 are the intensities in the current and reference frames, respectively, and \mathbf{mv} is the motion vector; or mean squared error (MSE) for smoother estimates. Exhaustive search, known as full search, evaluates all positions in the window but has quadratic complexity O(n^2), where n is the search range, making it computationally intensive for real-time applications.

Phase correlation offers a frequency-domain technique suited for estimating global translational shifts between frames. It leverages the Fourier shift theorem by computing the normalized cross-power spectrum of the Fourier transforms of two frames: F_1(u,v) \cdot \overline{F_2(u,v)} / |F_1(u,v) \cdot \overline{F_2(u,v)}|, where F_1 and F_2 are the Fourier transforms, \overline{F_2} is the complex conjugate, and peaks in the inverse transform indicate the displacement. This method excels in scenarios with uniform motion but assumes purely translational shifts and struggles with rotations or deformations.

Optical flow variants model motion as a dense vector field across the entire image, enforcing constraints like brightness constancy. The Horn-Schunck algorithm, a seminal global method, minimizes an energy functional that balances data fidelity and smoothness: \iint \left( (I_x u + I_y v + I_t)^2 + \alpha (|\nabla u|^2 + |\nabla v|^2) \right) \, dx \, dy, where (u, v) is the flow field, (I_x, I_y, I_t) are the spatial and temporal image gradients, and \alpha > 0 is a regularization parameter controlling smoothness. This variational approach yields continuous flow estimates but requires iterative solving, increasing computational demands.

Hybrid approaches integrate local methods like block matching with global techniques such as optical flow to balance accuracy and efficiency. For instance, block matching can initialize coarse motion vectors, refined by optical flow in regions of ambiguity, reducing overall complexity from O(n^2) via strategies like logarithmic search that evaluate fewer candidates. These combinations leverage the strengths of discrete matching for speed and continuous flow for detail, and were common in early video codecs.

Despite their robustness in controlled conditions, traditional algorithms exhibit limitations such as high sensitivity to noise, which corrupts gradient computations in optical flow or matching scores in block methods, leading to erroneous vectors. They also falter in fast motion scenarios, where large displacements exceed search ranges or violate small-motion assumptions, resulting in artifacts like blurring in interpolated frames without adaptive learning mechanisms.
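A compact sketch of full-search block matching with the SAD metric defined above; the function name, block size, and search range are illustrative, and real encoders replace the exhaustive scan with faster strategies such as logarithmic or diamond search.

import numpy as np

def block_match_sad(cur, ref, by, bx, block=16, search_range=8):
    """Return the motion vector (dy, dx) minimizing SAD for one macroblock (grayscale frames)."""
    cur_block = cur[by:by + block, bx:bx + block].astype(int)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + block, x:x + block].astype(int)
            sad = np.abs(cur_block - cand).sum()  # sum of absolute differences
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv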

Machine learning methods

Machine learning methods have revolutionized motion interpolation by leveraging data-driven approaches to estimate and synthesize intermediate frames, surpassing the limitations of rule-based techniques in handling complex motion. These methods, primarily based on deep neural networks, learn motion patterns from large datasets, enabling adaptive prediction of displacements and frame synthesis. Convolutional neural networks (CNNs) form the backbone of early advancements, while more recent architectures incorporate generative models and attention mechanisms for enhanced realism.

One prominent category involves CNNs that predict motion through learned filters. For instance, SepConv employs adaptive separable convolutions to model local motion, generating intermediate frames by applying 1D kernels separately along the spatial dimensions of the input frames. Building on this, DAIN introduces depth-aware interpolation, utilizing adaptive warping informed by depth estimation to better handle occlusions and disocclusions in scenes with varying depths. These CNN-based models excel at capturing short-range motions but often require additional modules for robustness in challenging scenarios.

Generative adversarial networks (GANs) further improve interpolation quality by training a generator to synthesize realistic frames and a discriminator to distinguish them from real ones, fostering photorealistic outputs. The training typically optimizes a composite loss function, such as L = \lambda_{adv} \cdot L_{adv} + \lambda_{per} \cdot L_{per}, where L_{adv} is the adversarial loss, L_{per} is the perceptual loss derived from deep feature representations, and the \lambda weights balance the terms. This adversarial training mitigates the blurring artifacts common in direct regression approaches, particularly in regions with rapid motion changes.

Transformer-based models represent a post-2020 shift, leveraging self-attention to capture long-range dependencies across frames for more coherent interpolation. For instance, some transformer-based interpolators employ cross-scale window-based self-attention to model long-range pixel correlations and aggregate multi-scale information, overcoming the limited receptive fields of convolutions in large-motion scenarios. These models, often from 2023 onward, integrate attention with convolutional backbones for hybrid efficiency, outperforming prior techniques on benchmarks involving diverse scene complexities.

Advancements since 2020 emphasize real-time capabilities through lightweight networks, such as RIFE, which estimates intermediate optical flows directly via a compact flow network, achieving over 100 frames per second on modern GPUs. Training these models relies on datasets like Vimeo-90K, comprising 89,800 high-quality clips covering diverse motion patterns. This evolution marks a departure from traditional heuristics, as learned approaches better manage occlusions and intricate scenes by implicitly learning contextual priors from data. Open-source implementations, including those for foundational flow estimation, have accelerated adoption and further innovation in the field.

Diffusion-based methods have gained prominence since 2023 by modeling frame synthesis as an iterative denoising process, particularly effective for challenging scenarios with occlusions and large motions. Notable examples presented at CVPR 2025 enhance diffusion models for high-quality synthesis in dynamic scenes. These approaches often outperform prior neural methods on benchmarks like X4K1000FPS, as surveyed in recent comprehensive reviews.
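The composite GAN objective above can be sketched in PyTorch-style code as follows; the generator output, discriminator, and feature extractor are assumed to be defined elsewhere, and the weight values are placeholders rather than settings from any published model.

import torch
import torch.nn.functional as F

def generator_loss(fake_frame, real_frame, discriminator, feature_extractor,
                   lambda_adv=0.01, lambda_per=1.0):
    # Adversarial term: push the discriminator to score the synthesized frame as real.
    pred_fake = discriminator(fake_frame)
    l_adv = F.binary_cross_entropy_with_logits(pred_fake,
                                               torch.ones_like(pred_fake))

    # Perceptual term: L1 distance between deep feature representations of the
    # synthesized and ground-truth intermediate frames.
    l_per = F.l1_loss(feature_extractor(fake_frame),
                      feature_extractor(real_frame))

    # Composite loss L = lambda_adv * L_adv + lambda_per * L_per.
    return lambda_adv * l_adv + lambda_per * l_per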

Hardware applications

Consumer devices

Motion interpolation is widely implemented in consumer televisions and monitors to enhance perceived smoothness during fast-paced scenes, such as sports or action sequences. LG's TruMotion technology, for example, uses frame interpolation to generate intermediate frames between original content, effectively reducing motion blur on panels with native refresh rates of 120 Hz or 240 Hz. This process typically achieves 2x to 4x interpolation multipliers, converting standard 24 or 60 fps sources into higher effective rates, and is often paired with LED LCD backlights that employ scanning techniques for further blur mitigation. Similarly, Sony's Motionflow XR system on 120 Hz and 240 Hz panels doubles or quadruples frame rates through interpolation combined with image blur reduction, allowing smoother playback of varied content without introducing excessive artifacts in optimized modes.

In smartphones and tablets, motion interpolation is handled by mobile system-on-chips to support high-refresh-rate displays ranging from 90 Hz to 144 Hz, enabling fluid scrolling and gaming experiences on battery-constrained devices. Qualcomm's Snapdragon processors, for instance, incorporate the Adreno Frame Motion Engine (AFME) 2.0 and 3.0, which perform on-device frame interpolation to double frame rates, such as elevating 60 fps games to 120 fps, while minimizing latency and power draw through efficient GPU processing. This hardware-level integration allows devices such as those in the Galaxy S25 series to maintain high visual fidelity without proportionally increasing battery drain, though enabling interpolation can still trade off against extended usage time in power-sensitive scenarios compared to native low-frame-rate rendering.

Blu-ray players and set-top boxes often feature hardware decoders capable of frame rate conversion, transforming cinematic content to broadcast-compatible 60i formats to match display requirements and reduce judder during playback. These devices use dedicated chips for real-time processing, ensuring compatibility with interlaced outputs on older TVs while preserving detail through basic interpolation or pulldown techniques, though advanced motion compensation is typically deferred to the connected display.

The adoption of motion interpolation in consumer devices surged in the post-2010 era alongside the rise of high-resolution televisions, as manufacturers integrated higher refresh rates and processing power to handle ultra-high-definition content, making features like TruMotion and Motionflow standard for reducing blur in LED and OLED panels. In battery-powered mobile devices, this implementation introduces trade-offs, where interpolation boosts smoothness but chipset-level optimizations help contain the added power consumption. Often paired with resolution upscaling in these devices, motion interpolation remains distinct in focusing solely on temporal frame synthesis to align content with advertised refresh rates like 120 Hz or 240 Hz.

Integration with display technologies

Motion interpolation integrates seamlessly with modern display technologies by adapting content frame rates to the native refresh rates of screens, enhancing overall smoothness in video playback. The HDMI 2.1 standard plays a pivotal role in this synergy, supporting uncompressed bandwidth up to 48 Gbps and enabling high refresh rates such as 120 Hz, which facilitates the application of motion interpolation to bridge discrepancies between source material and display capabilities. Additionally, HDMI 2.1 incorporates Variable Refresh Rate (VRR) functionality, which dynamically adjusts the display's refresh rate to match incoming frame rates, reducing judder and tearing while complementing interpolation techniques for fluid motion. This VRR support is compatible with adaptive sync technologies like AMD FreeSync and NVIDIA G-Sync, allowing interpolated frames to align more precisely with variable content rates in gaming and video scenarios.

Display panel types further influence the reliance on motion interpolation. Organic Light-Emitting Diode (OLED) panels exhibit near-instantaneous pixel response times, typically under 0.1 ms, which inherently minimizes motion blur compared to Liquid Crystal Display (LCD) panels that often require 5-10 ms or more for full pixel transitions. As a result, OLEDs reduce the need for aggressive interpolation to combat blur but still employ it selectively for judder reduction in low-frame-rate content like 24 fps films. In contrast, LCDs benefit more substantially from interpolation to offset their slower response, though advancements in backlight scanning have narrowed this gap. Broadcast standards like ATSC 3.0 enhance this integration by supporting frame rates up to 120 fps in over-the-air transmissions, enabling real-time interpolation in compatible tuners to deliver smoother high-dynamic-range (HDR) programming without excessive processing demands on the display.

Emerging display technologies, such as MicroLED, are poised to further reshape motion interpolation's role through native high-frame-rate capabilities and ultra-fast response times. In 2024, manufacturers introduced expanded MicroLED lineups with sizes up to 114 inches, featuring modular designs and peak brightness exceeding 2,000 nits, which support high frame rates and reduce dependence on interpolation for blur mitigation thanks to response times below 1 ms. Numerous patents on MicroLED fabrication, including innovations in RGB LED arrays, target improved pixel-level control and motion fidelity, minimizing artifacts in high-speed content. Within broader home entertainment ecosystems, motion interpolation remains a key component of seamless playback, as it generates additional frames to match display refresh rates, preventing judder with typical 24-30 fps sources while preserving detail and color accuracy.

Software applications

Video processing tools

In video editing software, motion interpolation is commonly employed to retime clips smoothly, such as creating slow-motion sequences or adjusting playback speeds without introducing judder. Adobe Premiere Pro, for example, offers Optical Flow time interpolation, which analyzes pixel motion between frames to generate intermediate frames, enabling precise retiming of clips in editing workflows. This method is particularly effective for footage that was not captured at high frame rates, as it estimates motion vectors to synthesize new frames, though it requires significant computational resources for high-quality results. Similarly, DaVinci Resolve incorporates SpeedWarp, an AI-assisted interpolation tool that leverages neural networks to produce fluid retiming effects, outperforming traditional optical flow in handling complex scenes by reducing artifacts in variable-speed edits.

For video playback, plugins extend media players to apply motion interpolation in real time during viewing. Players such as mpv integrate with SmoothVideo Project (SVP), a frame interpolation engine that performs frame rate doubling or higher using motion vector analysis, converting standard rates like 24 fps to 60 fps for smoother playback on compatible hardware. Media Player Classic - Home Cinema (MPC-HC) supports similar enhancements through SVP or dedicated filters like DmitriRender, which enable real-time motion-compensated blending to interpolate missing frames, ideal for enhancing older or low-frame-rate content without altering the original file. For batch transcoding and conversion, FFmpeg's minterpolate filter offers a command-line solution that applies motion-compensated interpolation to upsample or downsample frame rates, such as increasing 30 fps video to 60 fps by estimating and inserting intermediate frames based on configurable motion-estimation search parameters (see the scripted example below).

In broadcasting, motion interpolation facilitates standards conversion between formats like PAL (25 fps) and NTSC (29.97 fps), ensuring seamless playback across regional systems in professional editing suites. Conversion tools automate this process by blending or interpolating frames to match the target rate, preserving audio synchronization while minimizing temporal artifacts during live or post-broadcast preparation. Post-production workflows frequently utilize motion interpolation to craft slow-motion effects from standard-rate footage, generating additional frames to extend clip duration without repetitive stuttering. Open-source alternatives like AviSynth provide scripting flexibility for custom interpolation, employing plugins such as MVTools or SVPflow to compute motion vectors and blend frames, allowing users to tailor scripts for specific enhancement needs. Users often balance quality and speed through adjustable settings, such as selecting lower-resolution motion estimation for faster rendering or enabling scene-change detection to avoid interpolation errors, with AI methods like those in SpeedWarp offering superior quality at the expense of longer processing times.
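As a scripted example of the FFmpeg usage mentioned above, the snippet below wraps the minterpolate filter in a small Python helper; the file names are placeholders, and option values beyond fps and mi_mode vary by use case and FFmpeg version.

import subprocess

def upconvert_to_60fps(src, dst):
    subprocess.run([
        "ffmpeg", "-i", src,
        # mi_mode=mci selects motion-compensated interpolation, as opposed to
        # simple frame duplication or blending.
        "-vf", "minterpolate=fps=60:mi_mode=mci",
        "-c:a", "copy",   # leave the audio stream untouched
        dst,
    ], check=True)

upconvert_to_60fps("input_30fps.mp4", "output_60fps.mp4")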

Gaming and real-time rendering

In gaming and real-time rendering, motion interpolation enables the generation of additional frames to achieve higher perceived frame rates, crucial for maintaining smooth visuals in performance-intensive scenarios like ray tracing. NVIDIA's DLSS 4, launched in January 2025 for RTX 50-series GPUs, uses AI-driven multi-frame generation to interpolate multiple new frames between rendered ones, boosting frame rates significantly in supported titles while preserving image quality. AMD's FidelityFX Super Resolution 4 (FSR 4), released in early 2025 and exclusive to RDNA 4 hardware, employs AI-based upscaling and frame generation driven by temporal data and motion vectors, delivering performance gains in over 30 games at launch, with further titles added throughout the year.

Virtual reality (VR) and augmented reality (AR) applications demand low-latency interpolation to align display refresh rates, often 90-120 Hz, with head-tracking sensors, preventing discomfort from judder or misalignment. The Meta Quest series implements asynchronous timewarp (ATW), a reprojection technique that warps the most recent rendered frame using current head pose data and motion vectors to simulate intermediate frames, reducing motion-to-photon latency to under 20 ms in optimal conditions. This method extends to asynchronous spacewarp (ASW) in PC VR setups, halving GPU load by predicting and interpolating poses for dropped frames (a conceptual reprojection sketch appears at the end of this section).

On consoles like the PlayStation 5 and Xbox Series X, motion interpolation integrates with variable rate shading (VRS) to optimize rendering by applying lower shading rates to less critical screen areas, freeing resources for frame synthesis. AMD's FSR frame generation, available on both platforms, interpolates frames to elevate inconsistent 40-60 fps outputs to smoother experiences on 120 Hz variable refresh rate (VRR) displays, as demonstrated in titles achieving significant performance uplifts on Series X. The Series X benefits from hardware-accelerated VRS tiers, enabling efficient motion vector-based interpolation, while the PS5 relies on software equivalents for comparable results.

A primary challenge in these contexts is input lag minimization, as frame generation requires buffering prior frames, potentially adding 16-33 ms of delay that can impair responsiveness in fast-paced games. Developers mitigate this through techniques like NVIDIA Reflex integration alongside DLSS 4, which synchronizes CPU-GPU pipelines to keep added latency low even with generated frames. These advancements yield tangible benefits, such as transforming native 30 fps output into a perceived 60 fps experience for enhanced fluidity, particularly in VR, where interpolation sustains immersion during high-motion sequences. In Unreal Engine 5, ongoing plugin support for DLSS 4 and FSR 4 facilitates integration, with 2025 plugin updates for UE 5.5 and 5.6 emphasizing low-latency enhancements for broader adoption in interactive titles.
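To illustrate the reprojection principle behind asynchronous timewarp, the sketch below re-warps a rendered frame by the pure-rotation homography H = K R K^{-1} induced by the change in head orientation; it is a conceptual illustration under simplifying assumptions (rotation only, pinhole intrinsics), not the implementation used by any headset runtime.

import cv2
import numpy as np

def reproject(frame, K, R_delta):
    """Warp a rendered frame to a newer head orientation.

    K       -- 3x3 pinhole intrinsics used to render the frame
    R_delta -- 3x3 rotation taking ray directions from the rendered camera
               frame to the current camera frame
    """
    # Pure-rotation homography mapping rendered image points to new image points.
    H = K @ R_delta @ np.linalg.inv(K)
    h, w = frame.shape[:2]
    # Resample the rendered frame at the latest pose.
    return cv2.warpPerspective(frame, H, (w, h))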

Effects and limitations

Visual artifacts

Motion interpolation, particularly through motion-compensated frame interpolation (MCFI), often introduces visual artifacts due to errors in motion vector estimation and compensation. These artifacts manifest as distortions that degrade perceived video quality, stemming from inaccuracies in predicting intermediate frames between original ones. Common types include ghosting, where trailing edges appear behind moving objects because mismatched motion vectors fail to align pixels correctly across frames. Warping occurs when objects in complex or nonlinear motion are distorted, as the interpolation algorithm incorrectly stretches or bends visual elements during frame synthesis. Haloing, another prevalent issue, produces bright or dark fringes around the edges of moving objects, resulting from over-sharpening or misalignment at boundaries during the compensation step.

These artifacts primarily arise from the algorithm's poor handling of occlusions, regions where parts of the scene become hidden or revealed between frames, or of abrupt scene changes such as fast camera movements. For instance, in panning shots, static backgrounds can exhibit a "swimming" effect, where non-moving elements appear to undulate unnaturally due to erroneous motion assignment from nearby dynamic areas. In film content displayed on televisions, interpolated frames can introduce unnatural sharpness and detail into originally low-frame-rate sequences, exacerbating these issues in scenes with rapid action or depth changes. Objective evaluation often reveals the resulting quality degradation; in error-prone scenes, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) scores drop noticeably compared to artifact-free interpolation, highlighting the impact on fidelity. Historically, these problems were more pronounced in early hardware implementations of MCFI, where rudimentary motion estimation led to frequent vector inaccuracies before advances in adaptive processing refined the results. Perceptually, such artifacts contribute to the "soap opera effect," in which interpolated motion feels overly smooth and artificial.
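For reference, the objective metrics mentioned above can be computed directly; the sketch below derives PSNR from its definition and obtains SSIM via scikit-image, which is assumed to be available (channel_axis applies to color frames in recent versions).

import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference, test, max_val=255.0):
    # PSNR = 10 * log10(MAX^2 / MSE)
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def evaluate(reference, interpolated):
    # channel_axis=-1 assumes H x W x C color frames; use None for grayscale.
    ssim = structural_similarity(reference, interpolated, channel_axis=-1)
    return psnr(reference, interpolated), ssim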

Performance impacts

Motion interpolation introduces additional input latency due to the computational overhead of estimating motion vectors and synthesizing intermediate frames, typically adding 1-2 frames of delay (16-33 ms at 60 fps), which can accumulate in applications like gaming and diminish user responsiveness. Traditional motion interpolation algorithms, primarily relying on optical flow estimation, incur significant computational demands, limiting their suitability for resource-constrained environments without hardware optimization. Machine learning approaches, while potentially more intensive in raw operations, leverage GPU acceleration to achieve real-time performance, enabling deployment on modern consumer hardware. Motion interpolation also increases power consumption, particularly on battery-powered devices, resulting in faster drain and elevated thermal output that necessitates cooling measures or reduced usage duration. Recent benchmarks of NVIDIA's DLSS frame generation demonstrate effective mitigation through predictive rendering techniques, where integration with latency-reduction features such as Reflex can offset the added delay, achieving near-native responsiveness in demanding titles at 4K resolutions. As of 2025, technologies like NVIDIA's DLSS 4 with Multi Frame Generation have further mitigated latency and artifacts through improved AI models, enabling up to 4x frame multiplication. Beyond device-level effects, motion interpolation facilitates bandwidth efficiencies in video streaming by enabling servers to transmit lower frame-rate content, such as 30 fps instead of 60 fps, while clients generate interpolated frames locally, yielding bandwidth reductions without perceptible quality loss.