VVC
Versatile Video Coding (VVC), also known as H.266 and standardized as ITU-T Recommendation H.266 and ISO/IEC 23090-3, is an international video compression standard designed to enable efficient encoding and decoding of high-resolution video content, including support for resolutions up to 8K, high dynamic range (HDR), wide color gamut, and 360-degree immersive video.[1][2] Developed collaboratively by the Joint Video Experts Team (JVET), comprising experts from the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), VVC achieves up to 50% better compression efficiency than its predecessor, High Efficiency Video Coding (HEVC or H.265), allowing bitrate to be reduced while preserving visual quality.[1][2] Finalized in July 2020 after a development process initiated in 2015, the standard targets diverse applications such as over-the-top (OTT) streaming, video conferencing, broadcast television, and mobile video delivery.[1][3]

The development of VVC began with a joint call for proposals in October 2017, following exploratory work on future video coding technologies starting in 2015, with the goal of addressing the growing demands for higher video resolutions and immersive formats beyond HEVC's capabilities.[1] Key technical advancements in VVC include a block-based hybrid coding structure with enhanced intra- and inter-prediction tools, more flexible partitioning schemes such as quadtrees with nested binary and ternary trees, improved transform coding, and advanced in-loop filtering techniques like adaptive loop filters.[1] These features enable support for bit depths up to 16 bits per channel and frame rates exceeding 120 Hz, making the standard suitable for professional and consumer-grade video workflows.[1] Unlike royalty-free alternatives such as AV1, VVC operates under a patent pool licensing model managed by organizations such as Access Advance and Via Licensing Alliance, which has influenced its rollout.[2]

As of 2025, VVC adoption is accelerating but remains selective due to its computational complexity, which demands significantly more processing power for encoding and decoding than HEVC, often 10 times higher for encoding.[2] Hardware support has emerged in modern chipsets, including Qualcomm's Snapdragon 8 Gen 3, Apple's M3 processors, and recent smart TVs from LG and Samsung, while software implementations are available in tools like FFmpeg and cloud services from AWS and Azure.[2] Major streaming platforms such as Netflix and Hulu are evaluating or testing VVC for 4K and 8K content delivery, particularly in bandwidth-constrained environments, though widespread deployment is tempered by licensing costs and the need for ecosystem maturity.[2] Ongoing extensions, including the third edition of the standard incorporating new supplemental enhancement information (SEI) messages and operational levels for 360-degree video (3DoF), continue to expand its versatility.[3]

Overview
Definition and Goals
Versatile Video Coding (VVC), standardized as ITU-T Recommendation H.266 and ISO/IEC 23090-3 (MPEG-I Part 3), is an international video compression standard designed as the successor to High Efficiency Video Coding (HEVC, also known as H.265).[4][5] It was developed collaboratively by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) through the Joint Video Experts Team (JVET).[5][6] The primary goals of VVC focus on substantially improving compression efficiency over previous standards while enhancing versatility for diverse applications. Specifically, it aims to achieve approximately 50% bit-rate reduction compared to HEVC at equivalent subjective video quality across a wide range of content types.[7] This efficiency target addresses the escalating demands for high-quality video transmission and storage in bandwidth-constrained environments.

Additionally, VVC is engineered to support a broad spectrum of video formats, including resolutions from standard definition (SD) to 8K ultra-high definition (UHD), high dynamic range (HDR), wide color gamut (WCG), and 360-degree immersive video.[4][8] In response to the rapid growth of video consumption for streaming, broadcasting, and emerging immersive technologies during the late 2010s, JVET began exploring next-generation coding requirements in 2015 before formalizing development in 2017.[5] The standard's versatility extends to applications such as cloud gaming, screen content coding, and adaptive streaming with region-based extraction, enabling efficient handling of ultra-high-definition content at rates comparable to high-definition.[8]

VVC was technically finalized in July 2020, with the first edition published that year, marking a significant advancement in video coding capable of supporting UHD video quality at HD-equivalent bit rates; subsequent editions as of 2025, including the third edition of ISO/IEC 23090-3, incorporate enhancements such as new SEI messages and operational levels for 360-degree video (3DoF).[8][7][3]

Key Technical Features
Versatile Video Coding (VVC) employs a block-based hybrid coding architecture that builds upon the foundational elements of its predecessor, High Efficiency Video Coding (HEVC), while introducing significant enhancements for improved efficiency and flexibility. At its core, VVC utilizes intra and inter prediction to exploit spatial and temporal redundancies within video frames, followed by transform coding to compact energy into fewer coefficients, and entropy coding to represent the quantized data efficiently using context-adaptive binary arithmetic coding (CABAC). A key advancement is the multi-type tree (MTT) partitioning scheme, which extends HEVC's quadtree structure by incorporating binary and ternary tree splits alongside the quadtree, enabling more adaptive and flexible block sizes down to 4×4, including separate partitioning for luma and chroma components in intra slices. This architecture supports coding block sizes up to 128×128, allowing VVC to handle a wide range of resolutions and content types effectively.[9]

VVC's design emphasizes versatility to address diverse application needs, including high dynamic range (HDR) content, higher bit depths, screen content, and scalable video delivery. It natively supports HDR through transfer functions such as perceptual quantizer (PQ) and hybrid log-gamma (HLG), facilitated by luma mapping with chroma scaling (LMCS) and supplemental enhancement information (SEI) messages for metadata signaling. Bit depths up to 16 bits per channel are supported across profiles, enabling richer color representation for professional and consumer applications without requiring external processing. For screen content coding (SCC), VVC incorporates tools such as palette mode, intra block copy, and adaptive color transform, optimized for graphics-heavy scenarios such as remote desktop sharing or gaming. Scalability extensions, including temporal, spatial, quality, and multiview layers, are provided via separate amendments to the base standard, using features like reference picture resampling to simplify inter-layer prediction.[9][10]

In terms of computational demands, VVC's reference software exhibits encoding complexity up to 10 times that of HEVC, reflecting the added sophistication of its tools, while decoding complexity is approximately twice that of HEVC, making it feasible for real-time applications with hardware acceleration. To accommodate varying use cases, VVC defines multiple profiles; for instance, the Main 10 profile supports 10-bit 4:2:0 HDR video, balancing efficiency and quality for broadcast and streaming, whereas higher-tier profiles enable 12-bit processing for studio workflows. The bitstream is organized into network abstraction layer (NAL) units, which encapsulate video coding layer (VCL) data such as slices, divided into rectangular or raster-scan segments for parallel decoding and error resilience, alongside non-VCL units for parameter sets (e.g., video, sequence, picture, and adaptation parameter sets) that convey global and picture-specific configurations, and SEI messages for auxiliary information like HDR metadata. This structure ensures compatibility with network environments and facilitates modular extensions.[9][10]
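The NAL unit framing described above can be illustrated with a short, self-contained parsing sketch. It is written for this article rather than taken from any implementation, and it assumes the two-byte header layout defined in H.266 (a forbidden zero bit, a reserved bit, a 6-bit layer identifier, a 5-bit NAL unit type, and a 3-bit temporal ID plus one):

```python
def parse_vvc_nal_unit_header(header: bytes) -> dict:
    """Illustrative parse of the two-byte VVC NAL unit header."""
    if len(header) < 2:
        raise ValueError("a VVC NAL unit header is two bytes long")
    b0, b1 = header[0], header[1]
    return {
        "forbidden_zero_bit": (b0 >> 7) & 0x1,   # must be 0 in a valid bitstream
        "nuh_reserved_zero_bit": (b0 >> 6) & 0x1,
        "nuh_layer_id": b0 & 0x3F,               # 6 bits: layer for multilayer/scalable use
        "nal_unit_type": (b1 >> 3) & 0x1F,       # 5 bits: VCL slices, parameter sets, SEI, ...
        "nuh_temporal_id_plus1": b1 & 0x07,      # 3 bits: temporal sublayer index + 1
    }

# Example: a header carrying layer 0, NAL unit type 1, temporal ID 0.
print(parse_vvc_nal_unit_header(bytes([0x00, 0x09])))
```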
Development
Joint Video Exploration Team
The Joint Video Exploration Team (JVET) was established in October 2015 through a collaboration between the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) to investigate video coding technologies that could achieve significant improvements beyond the High Efficiency Video Coding (HEVC) standard.[11] This informal exploratory group aimed to address emerging needs for higher compression efficiency in diverse video applications, including higher resolutions and immersive formats. The inaugural meeting occurred in Geneva, Switzerland, from 19 to 21 October 2015, where initial discussions focused on defining exploration objectives and test models.[12]

JVET's structure involves a collaborative framework of video coding specialists drawn from industry leaders such as Qualcomm, Huawei, and Apple, alongside representatives from academia and research institutions, fostering a broad exchange of technical expertise.[13] The team operates through systematic processes, including core experiments to rigorously test proposed coding tools under standardized conditions, calls for proposals (CfP) to solicit innovative submissions, and cross-checks to independently verify performance claims and ensure reproducibility of results.[14] These mechanisms enable iterative refinement of technologies, with core experiments often involving multiple independent implementations to compare efficacy across metrics such as bitrate reduction and visual quality.[6]

Key contributors to JVET encompassed over 100 companies, reflecting widespread industry participation in shaping future video standards. The group was co-chaired by Gary Sullivan for VCEG and Jens-Rainer Ohm for MPEG, supported by vice-chairs and specification editors, including Jill Boyce and Ye-Kui Wang, who helped guide meetings, coordinate experiments, and integrate contributions from global experts.[15] These roles ensured balanced representation and efficient progression of technical deliberations.

From 2015 to 2017, JVET's exploratory phase emphasized requirements gathering for next-generation coding, alongside extensive tool testing via the Joint Exploration Test Model (JEM) to benchmark potential advancements.[11] This period involved analyzing compression gains for various content types, culminating in the issuance of a joint Call for Proposals in October 2017 to invite formal technology submissions.[16] This foundational work directly informed the subsequent formal standardization of Versatile Video Coding.

Standardization Timeline
The standardization process for Versatile Video Coding (VVC) commenced with the issuance of a final Call for Proposals (CfP) by the Joint Video Exploration Team (JVET) in October 2017, seeking video coding technologies capable of achieving substantial compression improvements over High Efficiency Video Coding (HEVC).[17] Proposal registrations closed in December 2017, followed by subjective quality testing in March 2018 and objective evaluations in April 2018, during which submissions from 32 organizations were assessed, demonstrating average bit rate reductions of 40% or more relative to HEVC at equivalent PSNR levels across standard dynamic range (SDR) and high dynamic range (HDR) test conditions.[18] These results, including 22 submissions in the SDR category, confirmed the feasibility of efficiency gains ranging from 25% to 50% depending on content type and resolution.[19]

At the JVET's 10th meeting in San Diego, USA, from April 10–20, 2018, the evaluation outcomes led to the selection and integration of the highest-performing tools into the initial VVC Test Model (VTM-1.0), marking the official start of the collaborative standardization effort.[20] Development progressed through iterative core experiments and test model refinements across subsequent JVET meetings, with the decoding capability frozen at the 16th meeting in Geneva, Switzerland, from October 1–11, 2019, to stabilize decoder requirements and enable early conformance testing.[21] The COVID-19 pandemic caused delays, converting several in-person meetings to virtual formats from mid-2020 and extending the timeline for final reviews.[17]

The full technical specification was frozen at the 19th JVET meeting, held virtually from June 22 to July 1, 2020, approving the first edition of ITU-T Recommendation H.266.[18] This was followed by formal approval in August 2020, publication of H.266 on November 10, 2020, and publication of the identical ISO/IEC 23090-3 International Standard in February 2021. Post-standardization efforts have included the second edition of ISO/IEC 23090-3, published in September 2022, incorporating operation range extensions for higher resolutions and frame rates,[22] and the conformance specification ITU-T H.266.1 (equivalent to ISO/IEC 23090-15), released in September 2023 to define decoder and bitstream conformance tests.[23][24] The third edition of ISO/IEC 23090-3 was published in July 2024, adding a new level 15.5 for high-bitrate bitstreams, enhanced supplemental enhancement information (SEI) messages for green metadata, the video decoding interface, and neural-network post-filters, along with minor corrections.[25] As of November 2025, JVET continues work on amendments to the third edition, including additions and corrections via CDAM1 issued in February 2025, with further extensions under consideration.[3]

Technical Details
Core Coding Tools
The core coding tools in Versatile Video Coding (VVC) form the foundational algorithms for efficient video compression, encompassing intra and inter prediction to estimate pixel values, transform and quantization to represent residuals compactly, and loop processing to refine reconstructed signals. These tools build upon High Efficiency Video Coding (HEVC) while introducing enhancements for higher fidelity and adaptability, enabling VVC to achieve approximately 50% bitrate reduction over HEVC for equivalent quality. The selection and optimization of these tools rely on rate-distortion optimization, in which the cost J = D + λR is minimized, with D denoting distortion, R the bitrate, and λ the Lagrange multiplier balancing the trade-off (a toy illustration of this mode-decision process appears at the end of this subsection).

Intra prediction in VVC generates pixel estimates within a block using neighboring reconstructed samples, supporting 65 angular modes plus the non-directional planar and DC modes, for 67 intra modes in total compared with 35 in HEVC. The angular modes span a wider range of directions for finer gradient capture, while position-dependent intra prediction combination (PDPC) applies weighted blending of neighboring samples and predicted values to reduce edge discontinuities, particularly for wide angular modes. This expanded set improves prediction accuracy for diverse textures, with modes signaled via a most probable mode list and the remaining modes coded with a truncated binary code.

Inter prediction in VVC exploits temporal correlations across frames using motion compensation, incorporating an affine motion model to handle non-translational movements such as rotation, zoom, and shear through a 4-parameter or 6-parameter affine transformation applied to subblocks. Bi-prediction with CU-level weights (BCW) merges predictions from two reference pictures with adaptive weights to emphasize one direction based on content, improving accuracy for occlusions or fades. Additionally, decoder-side motion vector refinement (DMVR) refines motion vectors at the decoder using bilateral matching between the two prediction blocks of merge-mode bi-prediction, avoiding the bitrate overhead of signaling refined vectors. These features collectively improve motion modeling for complex scenes without substantially increasing decoder complexity.[26][27]

Transform and quantization process the prediction residual to compact energy into fewer coefficients, with multiple transform selection (MTS) allowing a choice among DCT-II for both horizontal and vertical directions, or alternatives such as DST-VII and DCT-VIII for intra-coded blocks to better match residual statistics. For inter-coded blocks, MTS is restricted to DCT-II combinations, while implicit MTS applies DST-VII for small intra residuals. Quantization employs dependent quantization, a state-based scalar quantization scheme in which the admissible reconstruction values for a coefficient depend on the previously coded coefficient levels in scan order, enabling finer granularity and up to 4% bitrate savings over independent quantization. Rate-distortion optimized quantization (RDOQ) further refines this by evaluating multiple quantization levels per coefficient to minimize the RD cost, integrated with trellis-based decisions for sequential dependencies.[28]

Loop processing applies post-reconstruction filters to mitigate artifacts, starting with the deblocking filter, which attenuates blocking discontinuities across transform block edges using adaptive strength based on gradient magnitudes. Sample adaptive offset (SAO), inherited from HEVC, then adds offsets to samples classified into edge or band categories to reduce ringing and banding. The adaptive loop filter (ALF) follows, applying Wiener-like filters for luma (7×7 diamond shape) and chroma (5×5 diamond shape), with filter coefficients carried in adaptation parameter sets, luma samples classified into up to 25 classes based on local gradient direction and activity, and per-component on/off control for chroma, achieving additional artifact suppression through rate-distortion optimized coefficient derivation. These sequential filters enhance reference frame quality for subsequent predictions.[29]
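The following toy sketch, written for this article, illustrates the Lagrangian mode decision referenced above: each candidate mode is evaluated by measuring the distortion of its reconstruction and estimating the bits needed to signal it, and the candidate minimizing J = D + λR wins. The candidate list, SSD distortion measure, and bit estimators are simplified stand-ins, not the VTM's actual interfaces.

```python
from typing import Callable, Iterable, Tuple

def ssd(block, reconstruction) -> float:
    """Sum of squared differences, used here as the distortion term D."""
    return sum((a - b) ** 2 for a, b in zip(block, reconstruction))

def rd_select(block,
              candidates: Iterable[Tuple[str, Callable, Callable]],
              lam: float):
    """Pick the candidate minimizing J = D + lambda * R.

    Each candidate is (name, reconstruct_fn, bits_fn); reconstruct_fn returns
    the reconstructed samples and bits_fn estimates the signaling plus
    residual bits. Both are placeholders for real encoder modules.
    """
    best = None
    for name, reconstruct_fn, bits_fn in candidates:
        recon = reconstruct_fn(block)
        cost = ssd(block, recon) + lam * bits_fn(block, recon)
        if best is None or cost < best[1]:
            best = (name, cost)
    return best

# Example with two dummy "modes": copy the block exactly (more bits, no error)
# versus flatten it to its mean (few bits, coarse approximation).
block = [10, 12, 11, 40, 42, 41, 10, 11]
mean = sum(block) / len(block)
candidates = [
    ("skip-like", lambda b: list(b), lambda b, r: 64),
    ("dc-like",   lambda b: [mean] * len(b), lambda b, r: 8),
]
print(rd_select(block, candidates, lam=5.0))
```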
Performance Enhancements
Versatile Video Coding (VVC) demonstrates substantial compression efficiency improvements over its predecessor, High Efficiency Video Coding (HEVC), achieving an average bitrate reduction of 30-50% while maintaining equivalent video quality. This performance is quantified through objective metrics such as Peak Signal-to-Noise Ratio (PSNR) and Video Multimethod Assessment Fusion (VMAF), as well as subjective tests of perceptual quality. For instance, under standardized evaluation conditions, VVC yields approximately 40% bitrate savings in PSNR-based measurements for standard dynamic range (SDR) content and up to 50% for high dynamic range (HDR) sequences.[30][31][32]

These gains are evaluated using the Joint Video Experts Team (JVET) common test conditions (CTC), which include ultra-high-definition (UHD) and HDR test sequences representative of real-world applications. The CTC specify configurations such as random access (RA) and all-intra (AI) modes, with test content at resolutions up to 4K and 8K. Bitrate reductions are notably higher at higher resolutions, reaching 50-60% in 4K and 8K scenarios owing to VVC's improved handling of complex motion and fine detail in larger frames. Subjective assessments under these conditions confirm similar perceptual benefits, with VVC outperforming HEVC in viewer preference tests for immersive content.[33][34]

Compression efficiency is formally measured using the Bjøntegaard Delta rate (BD-rate) metric, which fits each codec's rate-distortion points with a polynomial in the log-rate domain and integrates the difference between the fits over the overlapping PSNR range to obtain an average bitrate difference; negative BD-rate values indicate bitrate savings (a short script implementing this calculation follows at the end of this subsection).[35][36]

Despite these efficiency gains, VVC introduces increased computational complexity. In software implementations, encoding time rises by 4-10 times compared to HEVC depending on configuration (roughly 5-6.5× in low-delay modes and up to 30× in all-intra), while decoding complexity is approximately 1.5-2 times higher. Hardware implementations reflect this through elevated requirements, such as 2-3 times more logic gates for core processing units and 7-13 times greater memory access and bandwidth needs, primarily due to larger block sizes and advanced prediction tools.[37][38][39]

To balance performance and complexity, VVC defines profiles, tiers, and levels tailored to application needs. The Main tier targets lower-bitrate scenarios such as real-time communication and broadcasting, whereas the High tier permits the higher bitrates needed for professional workflows; higher bit depths (up to 16 bits) and high frame rates (up to 300 fps) are enabled through additional profiles and levels. Scalability is enabled through multi-layer extensions, allowing temporal, spatial, and quality scalability in layered bitstreams without full re-encoding. These structures ensure VVC's versatility across devices, from mobile handsets to high-end servers.[40][41]
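As a concrete illustration of the BD-rate metric referenced above, the sketch below applies the classic Bjøntegaard calculation: fit a cubic polynomial to each codec's (PSNR, log-bitrate) points, integrate both fits over the overlapping PSNR range, and convert the average log-rate difference into a percentage. It is a simplified stand-in for the piecewise-cubic interpolation used in JVET's official reporting templates, and the function name and sample RD points are ours.

```python
import numpy as np

def bd_rate(rates_anchor, psnr_anchor, rates_test, psnr_test):
    """Average bitrate difference (%) of the test codec versus the anchor
    at equal PSNR, computed from four rate-distortion points per codec."""
    # Work in the log-rate domain, as in Bjontegaard's original method.
    log_ra = np.log10(rates_anchor)
    log_rt = np.log10(rates_test)
    # Cubic fit of log-rate as a function of PSNR for each codec.
    pa = np.polyfit(psnr_anchor, log_ra, 3)
    pt = np.polyfit(psnr_test, log_rt, 3)
    # Integrate both curves over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    int_t = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
    # Average log-rate gap, converted to a percentage; negative = savings.
    avg_diff = (int_t - int_a) / (hi - lo)
    return (10 ** avg_diff - 1) * 100

# Hypothetical RD points (kbps, PSNR dB) for an HEVC anchor and a VVC test run.
hevc_rates, hevc_psnr = [2000, 4000, 8000, 16000], [34.0, 36.5, 39.0, 41.5]
vvc_rates,  vvc_psnr  = [1200, 2400, 4800,  9600], [34.2, 36.8, 39.3, 41.8]
print(f"BD-rate: {bd_rate(hevc_rates, hevc_psnr, vvc_rates, vvc_psnr):.1f}%")
```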
Supported Video Formats
VVC supports a broad spectrum of video resolutions, ranging from standard definition to ultra-high definition, with capabilities extending up to 16K (15360 × 8640 pixels) to accommodate emerging applications in immersive and high-resolution content delivery. This flexibility is defined through profiles, tiers, and levels in the standard, allowing maximum luma picture sizes that exceed 8K in higher levels such as 8.5. Chroma subsampling formats include 4:2:0 for typical consumer video, 4:2:2 for professional workflows, and 4:4:4 for high-fidelity applications such as screen sharing or RGB-coded content. Bit depths range from 8 to 16 bits per sample, enabling efficient handling of both SDR and advanced imaging requirements without compromising dynamic range (the sketch at the end of this subsection shows how these sampling choices affect raw frame sizes).[9][42]

For high dynamic range (HDR) and wide color gamut (WCG) content, VVC incorporates support for the Rec. ITU-R BT.2020 color space, along with transfer functions such as Perceptual Quantizer (PQ) and Hybrid Log-Gamma (HLG) as defined in Rec. ITU-R BT.2100. These features ensure compatibility with modern display ecosystems, including cross-component chroma prediction to optimize color reproduction in HDR scenarios. Metadata signaling via Video Usability Information (VUI) and Versatile Supplemental Enhancement Information (VSEI) further enhances color and HDR interoperability across devices.[9][43]

VVC addresses specialized video formats to meet diverse use cases. For 360-degree and immersive video, it facilitates coding of projected maps, such as equirectangular projection (ERP) or cube-map projection (CMP), using subpicture partitioning and wrap-around motion compensation at picture boundaries for seamless spherical rendering. Screen content, common in remote desktop and conferencing scenarios, is supported through dedicated profiles like Main 10 4:4:4, enabling efficient compression of graphics-heavy material. Point cloud video representation is enabled via Video-based Point Cloud Compression (V-PCC) as defined in ISO/IEC 23090-5, which uses VVC to compress the 2D video streams generated from projections of 3D point clouds, supporting volumetric media applications.[9][44]

Scalability is provided in a single-layer base configuration for straightforward applications, with multilayer extensions introduced in the third edition (ISO/IEC 23090-3:2024) supporting spatial, temporal, quality (SNR), and multiview scalability. These allow adaptive bitrate streaming by layering lower-resolution or lower-quality base layers with enhancement layers, using techniques like reference picture resampling (RPR) for flexible resolution changes. This multilayer approach is particularly useful for broadcast and streaming services requiring multiple quality levels from a single encode.[25][9]

Conformance to the VVC standard is verified through testing procedures implemented in the VVC Test Model (VTM) reference software maintained by the Joint Video Experts Team (JVET), which serves as the official validation toolset. The VTM includes bitstream conformance tests across various profiles, tiers, and levels, ensuring decoder compliance with syntax, semantics, and decoding processes as specified in ITU-T H.266 | ISO/IEC 23090-3. Additional conformance aspects, such as toolset restrictions via sub-profiles, are signaled to promote interoperability in commercial implementations.[45][42]
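To make the sampling formats concrete, the short sketch below (written for this article, not taken from the standard) computes the uncompressed size of a single frame for the chroma formats and bit depths listed above, which is the payload a VVC encoder must compress:

```python
# Samples per pixel for the chroma formats supported by VVC:
# 4:2:0 carries one chroma sample pair per 2x2 luma block, 4:2:2 per 2x1 block,
# and 4:4:4 per pixel.
CHROMA_SAMPLES_PER_PIXEL = {"4:0:0": 1.0, "4:2:0": 1.5, "4:2:2": 2.0, "4:4:4": 3.0}

def raw_frame_bytes(width: int, height: int, chroma: str, bit_depth: int) -> int:
    """Uncompressed size in bytes of one frame at the given format."""
    samples = width * height * CHROMA_SAMPLES_PER_PIXEL[chroma]
    return int(samples * bit_depth / 8)

# Example: one 4K UHD frame at 10-bit 4:2:0 versus 12-bit 4:4:4.
for chroma, depth in [("4:2:0", 10), ("4:4:4", 12)]:
    mb = raw_frame_bytes(3840, 2160, chroma, depth) / 1e6
    print(f"3840x2160 {chroma} {depth}-bit: {mb:.1f} MB per frame")
```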
Licensing and Intellectual Property
Patent Management
The management of patents essential to Versatile Video Coding (VVC), also known as ITU-T H.266 and ISO/IEC 23090-3, involves declarations by patent holders to the ITU-T and MPEG standards development organizations, where thousands of patents have been identified as potentially or actually essential to the standard. These declarations follow the common patent policy of ITU-T, ITU-R, and ISO/IEC, which requires participants to disclose patents that may be essential and to commit to licensing them on fair, reasonable, and non-discriminatory (FRAND) terms.[46]

The Media Coding Industry Forum (MC-IF) plays a central role in coordinating patent pooling to promote efficient, FRAND-based licensing and avoid fragmentation similar to that seen with HEVC. In 2021, MC-IF's VVC Pool Fostering initiative identified and endorsed two administrators to manage separate pools, aiming for broad coverage of essential patents while facilitating interoperability.[47][48] Access Advance launched its VVC Advance Patent Pool in 2021, focusing on independently evaluated essential patents issued across multiple jurisdictions. Via Licensing Alliance (Via LA), formed in 2023 when Via Licensing combined with MPEG LA's pool business, administers a complementary HEVC/VVC Patent Portfolio License that includes VVC-essential patents, with its patent list updated as recently as October 2025.[49][50] These pools collectively provide licensees with a one-stop solution for accessing VVC intellectual property, covering implementations for high-resolution and immersive video applications.

Major contributors, including Huawei, Ericsson, and Dolby Laboratories, have submitted patent lists to the standards bodies and pools, with essentiality determined through independent technical evaluations to confirm alignment with the VVC specification. For instance, Huawei joined Via LA as both licensor and licensee in March 2025, contributing its VVC portfolio; Ericsson has declared contributions from its extensive video coding research; and Dolby is among the licensors in the Access Advance pool.[51][52][53]

By November 2025, the Access Advance pool encompasses over 3,600 essential patents from 46 licensors, while Via LA's portfolio adds further coverage, resulting in more than 50 firms participating across the pools. Recent additions include Xiaomi, which joined Access Advance's VVC pool as both licensor and licensee on November 17, 2025. These updates reflect ongoing additions, including patents related to advanced features, ensuring comprehensive coverage for VVC deployments. FRAND disputes have also seen resolutions, such as the U.S. District Court's dismissal of Roku's lawsuit against Access Advance on July 28, 2025, declining to set global rates, and Microsoft's settlement with Via LA on October 9, 2025, leading to its joining the HEVC/VVC pool and ending German litigation.[54][53][55][56][57]

Royalty Structure
The royalty structure for Versatile Video Coding (VVC) is managed through two primary patent pools: the VVC Advance Patent Pool administered by Access Advance and the VVC (H.266) patent pool administered by Via Licensing Alliance (Via LA). These pools aggregate essential patents declared to the ITU-T and ISO/IEC, offering licensees fair, reasonable, and non-discriminatory (FRAND) terms for implementing the standard.[58][59]

Access Advance's VVC Advance Patent Pool employs a tiered per-device royalty model based on video resolution and product category, designed to reflect the standard's efficiency gains over HEVC while incorporating volume discounts and annual caps. For consumer VVC products, rates start at $0.20 per unit for devices supporting resolutions below 720p, scaling to $0.40 for HD (up to 1080p), $1.00 for UHD (up to 4K), and $1.60 for resolutions exceeding 4K (an illustrative calculation using these published rates appears at the end of this subsection). These rates apply to categories such as mobile devices, digital TVs, and set-top boxes, with a 50% regional discount for sales in lower-income countries (Region 2). Volume-based reductions can lower effective rates by up to 50% for high-volume licensees, and an enterprise-wide annual cap of $40 million limits total payments across all licensed products. Royalties accrue on first sales from January 1, 2022, with licenses offered in five-year terms; rates and caps for licensees signing by December 31, 2025, are locked through 2030 and cannot increase by more than 20% in subsequent renewals.[60][61]

Via Licensing Alliance's VVC pool uses a similar tiered per-unit structure, with rates ranging from $0.125 to $1.25 depending on volume and product type (hardware, paid software, or free software), capped at $30 million annually for hardware implementations and $8 million for software. Free software distributions incur no royalties for the first 100,000 units, transitioning to nominal fees thereafter. As of October 1, 2025, the updated structure introduces regional variations, with renewals effective January 1, 2026, at $0.24 per unit in high-income regions (R1, up from $0.20) and reduced rates in emerging markets (R2), with no royalties on the first 100,000 units per category.[62][63][64] Both pools offer a joint licensing option, allowing a single agreement to cover patents from both, collectively encompassing over 95% of declared VVC-essential patents and simplifying compliance for implementers.

VVC's royalty framework adheres to FRAND commitments mandated by the ITU-T/ISO/IEC Common Patent Policy, which requires holders of standard-essential patents (SEPs) to declare them and license on terms that are fair, reasonable, and non-discriminatory, prohibiting patent hold-up or excessive fees. This policy is intended to ensure broad accessibility, with no injunctions sought against compliant implementers during good-faith negotiations.[46][60]

In 2025, Access Advance introduced adjustments via its Video Distribution Patent Pool to address streaming services, incorporating per-subscriber and per-user fee models alongside content-based royalties, with early adopters receiving up to 25% discounts and waivers for prior use through 2024. These changes aim to accommodate the growing OTT market while maintaining FRAND principles, with tiered monthly caps of up to $5.25 million for large-scale providers.[65][66]
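The sketch below, written for this article, turns the Access Advance consumer-device rates quoted above into a simple per-unit lookup with the Region 2 discount applied; tier boundaries are approximated and volume discounts and caps are ignored, so it should be read as an illustration of the tiered structure rather than a statement of the actual license terms.

```python
def per_unit_royalty(max_vertical_resolution: int, region2: bool = False) -> float:
    """Illustrative per-unit royalty (USD) for a consumer VVC device,
    using the published Access Advance tiers quoted in the text above."""
    if max_vertical_resolution < 720:
        rate = 0.20   # below 720p
    elif max_vertical_resolution <= 1080:
        rate = 0.40   # HD, up to 1080p
    elif max_vertical_resolution <= 2160:
        rate = 1.00   # UHD, up to 4K
    else:
        rate = 1.60   # above 4K
    return rate * 0.5 if region2 else rate  # 50% Region 2 discount

# Example: a 4K television sold in Region 1 versus Region 2.
print(per_unit_royalty(2160))               # 1.00
print(per_unit_royalty(2160, region2=True)) # 0.50
```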
Adoption and Implementations
Software Ecosystem
The VVC Test Model (VTM), developed by the Joint Video Experts Team (JVET), functions as the primary reference software for verifying compliance with the VVC standard and testing its core functionality. VTM implements all specified coding tools in a straightforward manner to ensure accurate representation of the standard's performance, though it prioritizes fidelity over optimization, resulting in high computational demands during encoding and decoding.[67] To address VTM's efficiency limitations, the open-source alternatives VVenC and VVdeC have been developed by Fraunhofer HHI. VVenC is an optimized encoder based on VTM, incorporating SIMD instructions and multi-threading to achieve up to 10 times faster encoding while maintaining bitstream compatibility with any VVC-compliant decoder.[68] Similarly, VVdeC provides a high-performance decoder that supports real-time processing of VVC bitstreams generated by encoders such as VTM or VVenC, with optimizations for low-latency applications.[69]

In multimedia libraries, FFmpeg added VVC decoding support in version 5.1 (July 2022) via libvvdec, with a native decoder in 7.0 (April 2024) and encoding via libvvenc in 7.1 (September 2024), enabling inclusion in transcoding pipelines. FFmpeg 8.0 (August 2025) introduced VVC VA-API decoding for hardware acceleration (a short script for checking whether a local FFmpeg build exposes VVC codecs follows at the end of this subsection).[70][71] GStreamer incorporates VVC capabilities through its FFmpeg-based elements, facilitating playback and processing in pipeline-based workflows, though full optimization remains ongoing. Commercial libraries, such as MainConcept's VVC Encoder Plugin for FFmpeg, offer production-grade encoding with low-bitrate efficiency and support for the Main 10 profile up to 8K resolution.[72]

Media players have progressively adopted VVC decoding. VLC Media Player provides VVC playback via FFmpeg integration in 4.0 development builds (as of 2025), though software decoding performance may vary on lower-end hardware. MPV and Kodi provide VVC playback through FFmpeg dependencies or external player plugins, enabling users to handle VVC content in custom setups. Browser-based playback remains experimental.

For research and development, hybrid tools combining elements of the HEVC Test Model (HM) and VTM allow comparative analysis of VVC's advancements over prior standards, such as evaluating intra-coding efficiency gains of 19-25% for screen content.[73] CPU-based decoding, as demonstrated by VVdeC on modern processors, supports real-time 4K at 60 fps using multi-threading and vectorization, and benchmarks on systems with Intel Arc GPUs show software-only paths achieving similar rates without hardware acceleration.[74]

By 2025, VVC's software ecosystem has matured, seeing use in hybrid workflows that combine VVC with AV1 for adaptive bitrate streaming, driven by tools like FFmpeg that balance royalty costs and efficiency. Gaps such as real-time encoding have narrowed through AI-accelerated optimizations in encoders like VVenC, enabling up to 8K processing with reduced latency in professional pipelines.[75][76]
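As referenced above, the following sketch probes a local FFmpeg installation for VVC support by scanning the output of `ffmpeg -decoders` and `ffmpeg -encoders` for VVC-related entries (for example the native `vvc` decoder or the `libvvenc` wrapper). It only assumes that an `ffmpeg` binary is on the PATH; the codecs reported depend entirely on how that build was configured.

```python
import subprocess

def ffmpeg_vvc_support(binary: str = "ffmpeg") -> dict:
    """Return VVC-related decoder/encoder lines reported by an FFmpeg build."""
    support = {}
    for kind in ("decoders", "encoders"):
        out = subprocess.run(
            [binary, "-hide_banner", f"-{kind}"],
            capture_output=True, text=True, check=True,
        ).stdout
        # Keep lines mentioning VVC/H.266, e.g. the native "vvc" decoder or
        # the "libvvenc" encoder wrapper, when they were compiled in.
        support[kind] = [line.strip() for line in out.splitlines()
                         if "vvc" in line.lower() or "h.266" in line.lower()]
    return support

if __name__ == "__main__":
    for kind, lines in ffmpeg_vvc_support().items():
        print(f"{kind}: {lines or 'no VVC entries found'}")
```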
Hardware Support
The adoption of Versatile Video Coding (VVC, also known as H.266) in hardware has been gradual, driven primarily by dedicated system-on-chips (SoCs) for consumer electronics and emerging support in integrated graphics processors. Early implementations focused on decoding for high-resolution displays, with encoding remaining challenging due to VVC's computational complexity, which is approximately 10 times higher than that of its predecessor, High Efficiency Video Coding (HEVC).[2]

MediaTek's Pentonic 2000 SoC, announced in 2021 and deployed in flagship 8K TVs starting in 2022, marked the first commercial hardware decoder for VVC, enabling up to 8K resolution at 120 Hz alongside AV1 and HEVC support. This chip, built on TSMC's 7 nm process, facilitated efficient playback of VVC-encoded content in premium televisions from manufacturers such as Sony and Philips. Similarly, HiSilicon's Kirin chipsets, powering Huawei's Mate series smartphones and set-top boxes in 2025, introduced VVC decoding capabilities, making Huawei the first major mobile vendor to support the standard in consumer devices.[77][78][79]

Among CPU and GPU integrations, Intel led with VVC decode support in its Xe2 architecture. The earlier Arc Alchemist discrete GPUs (2022-2023), using Quick Sync Video, provided hardware acceleration for prior codecs such as AV1 but lacked native VVC decoding, and requests for it via firmware updates were never confirmed. AMD's RDNA 3 architecture in Radeon RX 7000-series GPUs (2022 onward) offered partial video processing enhancements via Video Core Next (VCN) 4.0, supporting HEVC and AV1, but did not include VVC decode or encode capabilities as of 2025. NVIDIA's RTX 40-series GPUs (2022-2024), leveraging the eighth-generation NVENC encoder, similarly prioritized AV1 for encoding and decoding up to 8K but omitted VVC hardware support.[80][81][82]

Dedicated application-specific integrated circuits (ASICs) have seen limited VVC integration. Broadcom's VideoCore series, commonly used in set-top boxes (STBs), supports legacy codecs like HEVC in chips such as the BCM7452 but had not publicly adopted VVC decoding in deployed STB solutions as of 2025, despite the company's advocacy for the standard. Samsung's Exynos SoCs, including the Exynos 2500 slated for 2025 smartphones, enable advanced video processing for AV1 and HEVC but lack confirmed hardware VVC decode, relying instead on software fallbacks. Encoding hardware remains confined to professional broadcast equipment, such as Rohde & Schwarz's AVHE100 systems, which handle VVC workflows in live production; the codec's high complexity restricts consumer-grade encoding chips, with most implementations using software-based solutions on high-end servers.[83][84][85]

By 2025, Intel's Lunar Lake processors (Core Ultra 200V series, released in 2024) achieved full VVC decode support via the Xe2 media engine, handling up to 8K at 60 fps and positioning Intel as the first GPU vendor, discrete or integrated, to ship it. In contrast, mobile SoCs such as Qualcomm's Snapdragon 8 Gen 4 (also known as Snapdragon 8 Elite, launched in 2024) exhibit incomplete coverage, with demonstrations relying on software decoders accelerated by the Adreno GPU rather than dedicated hardware, achieving real-time 4K playback at higher power cost. MediaTek's Pentonic 800 SoC further expanded TV support in 2025 models with VVC decode for 4K and 8K content.
Overall, VVC hardware remains niche, concentrated in premium displays and select professional tools, with broader GPU adoption trailing due to licensing and complexity barriers.[86][87][88][75]

Industry Applications
In broadcasting, the Digital Video Broadcasting (DVB) project incorporated Versatile Video Coding (VVC) as an optional codec in its updated specifications in 2022, enabling support for high-resolution content including 8K UHD through the DVB-UHD-2 framework. This integration allows broadcasters to leverage VVC's compression efficiency for next-generation services, with conformance points defined for resolutions up to 4K at standard frame rates with HDR, and extensions for higher capabilities. In Brazil, TV 3.0 standards, aligned with ATSC 3.0 technologies, specify VVC as the primary video codec for 4K and 8K transmissions, supporting HDR10 and immersive audio; field trials began in 2023, with full deployment planned for 2025 following ecosystem testing during events like the 2024 Olympics. In the United States, ATSC 3.0 pilots have incorporated VVC as an approved video compression option since July 2025, facilitating enhanced UHD delivery in ongoing market trials across major metropolitan areas.

VVC's adoption in streaming remains limited as of 2025, with major platforms prioritizing royalty-free alternatives like AV1 due to licensing complexities. However, hybrid approaches combining VVC with AV1 have been tested for premium 8K content to optimize bandwidth, particularly in scenarios requiring superior compression for HDR streams, though widespread rollout is pending hardware maturity.

Consumer devices have seen gradual VVC integration, particularly in high-end 8K televisions from manufacturers like Sony and LG since 2023, where it enables decoding of UHD content with up to 40% better efficiency than HEVC, supporting features like AI upscaling and immersive formats. LG's webOS platform, for instance, lists VVC compatibility for 8K playback in its 2023 and later models, while Sony's Bravia lineup incorporates it for professional-grade video handling. In VR/AR applications, devices such as the Meta Quest 3 demonstrate potential for VVC through software updates, though primary support remains on HEVC and AV1; early tests show VVC reducing latency in 360-degree streaming for immersive experiences.

Despite these advancements, VVC's industry uptake has been slow as of mid-2025, largely attributed to royalty structures managed by pools like Access Advance, which impose fees that deter broad adoption compared to open alternatives. In 2025, expansions into 5G mobile video have accelerated, with VVC enabling edge computing for lower-bandwidth streaming on smartphones and IoT devices, improving quality of experience by 30-50% over HEVC in live scenarios. Similarly, cloud gaming platforms are optimizing VVC for real-time rendering, as demonstrated in implementations that cut data usage while maintaining 4K fidelity, supporting the sector's projected growth to over $20 billion globally by 2030.

Comparisons and Alternatives
Versus HEVC
Versatile Video Coding (VVC), standardized as H.266, achieves approximately 50% bitrate savings compared to its predecessor High Efficiency Video Coding (HEVC, H.265) when encoding 4K high dynamic range (HDR) content at equivalent perceptual quality levels.[40] This efficiency stems from advancements in prediction and transform techniques, including an increase from 35 intra prediction modes in HEVC to 67 in VVC, enabling more precise modeling of spatial correlations in video frames.[89] Overall, VVC targets 30-50% bitrate reduction across resolutions from HD to 8K, with gains most pronounced in high-resolution scenarios.[90]

In terms of features, VVC introduces affine motion compensation to better handle complex deformations like rotation and scaling, which HEVC lacks in its core specification.[91] It also incorporates an adaptive loop filter (ALF) as an additional in-loop filtering stage that reduces artifacts more effectively than HEVC's deblocking filter and sample adaptive offset (SAO) alone.[92] VVC natively supports resolutions up to 16K (e.g., 15360×8640), surpassing HEVC's maximum of 8K (8192×4320), and includes built-in tools for HDR signaling and screen content coding, whereas HEVC requires extensions for these capabilities.[93][94]

VVC ensures some backward compatibility through multilayer profiles that allow scalable extensions, enabling lower layers to be decodable by HEVC-compliant devices in specific configurations like spatial scalability.[40] However, full VVC bitstreams are not decodable by standard HEVC hardware or software without upgrades, as the syntax and tools differ fundamentally.[95]

Adoption of VVC involves trade-offs: HEVC benefits from a mature ecosystem, including mandatory use in Ultra HD Blu-ray discs, widespread hardware integration in devices since 2013, and established royalty frameworks.[96] In contrast, VVC offers future-proofing for emerging 8K and HDR applications but at significantly higher complexity (up to 10 times for encoding and twice for decoding compared to HEVC), which may delay broad implementation.[97]

| Aspect | HEVC (H.265) | VVC (H.266) |
|---|---|---|
| Bitrate Savings (vs. prior gen) | ~50% over H.264 at 4K | ~50% over HEVC at 4K HDR |
| Encoding Complexity | Baseline | Up to 10× higher |
| Decoding Complexity | Baseline | ~2× higher |
| Max Resolution | 8K (8192×4320) | 16K (15360×8640) |
| HDR/Screen Tools | Extensions required | Native support |
Versus AV1 and Others
Versatile Video Coding (VVC) offers superior compression efficiency compared to AOMedia Video 1 (AV1), achieving approximately 20-30% better bitrate savings for equivalent quality, particularly in high-resolution scenarios like 8K and HDR content.[98][99] However, AV1 provides royalty-free licensing, avoiding the per-device patent fees associated with VVC, which start at $0.20 per unit and scale with resolution under the pools' tiered structures.[100] Encoding with AV1 is significantly faster, often 10-20 times quicker than VVC in software implementations, making it more suitable for real-time applications despite VVC's edge in advanced inter prediction tools, such as affine motion models and decoder-side motion vector refinement.[101][102] Both codecs support 8K and HDR, but VVC's integration into broadcast standards like DVB positions it as preferable for professional transmission workflows.[103]

In contrast, Essential Video Coding (EVC), standardized as MPEG-5 Part 1, features a baseline profile comparable to High Efficiency Video Coding (HEVC) in efficiency, with optional royalty-bearing tools enabling up to 30% additional compression gains over HEVC.[104] Low Complexity Enhancement Video Coding (LCEVC), MPEG-5 Part 2, is not a standalone codec but an enhancement layer that boosts the efficiency of legacy formats, delivering up to 48% better compression when layered over AVC/H.264 while maintaining low computational overhead.[105] VVC provides a more comprehensive set of tools for next-generation applications, surpassing EVC's hybrid approach and LCEVC's enhancement model in overall versatility, though EVC and LCEVC offer simpler royalty pools with fewer patent holders.[106]

The core trade-off between VVC and these alternatives lies in openness versus performance: AV1's royalty-free model drives its adoption in web ecosystems like WebM and YouTube, while VVC's royalty requirements favor controlled environments such as DVB broadcasting.[107][108]

| Codec | Compression Efficiency (vs. HEVC) | Licensing | Hardware Support (as of 2025) |
|---|---|---|---|
| VVC | 30-50% better | Royalty-bearing (multi-pool) | Emerging in broadcast chips (e.g., DVB set-tops); limited consumer devices |
| AV1 | 20-30% better | Royalty-free | Widespread in GPUs/CPUs (Intel, AMD, NVIDIA); browser-native |
| EVC | Baseline ~HEVC; optional +25-30% | Royalty-bearing (simplified) | Minimal; software-focused |
| LCEVC | +40-50% over AVC base | Low royalty | Compatible with existing hardware via base codec |