The Video Coding Experts Group (VCEG) is an informal group of technical experts operating under the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Study Group 21, dedicated to the development of international standards for the compression coding of digital video and still images to enable efficient transmission and storage in multimedia applications.[1]

VCEG's work traces its origins to the early 1980s, with the group's foundational efforts culminating in the first ITU-T video coding standard, H.120, published in 1984 for videoconferencing over primary digital group transmission and revised in 1988.[1] This marked the beginning of the H.26x series of standards, including H.261 (1988) for p×64 kbit/s videoconferencing and H.263 (1996) for low-bitrate video communication, which laid the groundwork for modern video technologies.[1] In recognition of its pioneering contributions, VCEG's efforts were voted the most influential work of ITU-T in 2006, and the group received the ITU 150th Anniversary Award in 2015.[1]

A hallmark of VCEG's approach has been its close collaboration with the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Joint Technical Committee 1 Subcommittee 29 Working Group 11 (MPEG), resulting in jointly developed standards that dominate global video deployment.[1] Notable joint achievements include H.262 (MPEG-2, 1995) for digital television broadcasting, H.264/Advanced Video Coding (AVC, 2003), which accounts for a significant portion of internet video traffic, and H.265/High Efficiency Video Coding (HEVC, 2013), offering up to 50% bitrate reduction over AVC for high-definition content.[1] VCEG also contributes to still image standards through partnerships with ISO/IEC JTC1/SC29/WG1, supporting families like JPEG and JBIG.[1]

In recent years, VCEG has focused on next-generation technologies through the Joint Video Experts Team (JVET), co-led with MPEG, which
finalized H.266/Versatile Video Coding (VVC) in 2020 to support ultra-high-definition video, 360-degree immersive content, and screen content with improved compression efficiency.[1] Ongoing activities include extensions to VVC for enhanced scalability and explorations into neural network-based video coding, as highlighted in joint ITU-MPEG workshops on future video technologies incorporating advanced signal processing and artificial intelligence.[1][2] Following a January 2025 joint workshop, VCEG and MPEG are preparing a Call for Evidence and Proposals for a new generation of video codec standards beyond VVC.[2]
Overview
Role and Mandate
The Video Coding Experts Group (VCEG) operates as Question 6/21 within the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Study Group 21 (SG21), effective from the 2025-2028 study period, following the consolidation of the former Study Group 16 (SG16) on multimedia technologies with Study Group 9 (SG9) on broadband cable and television, as decided at the World Telecommunication Standardization Assembly (WTSA) in 2024.[3][4] This positioning underscores VCEG's role in advancing standardized coding techniques for visual and related signals amid evolving multimedia ecosystems.[5]

VCEG's primary mandate is to develop and maintain ITU-T Recommendations on coding methods for visual, speech, audio, and other signal data, tailored for both conversational services—such as real-time videoconferencing and videotelephony requiring low-delay compression—and non-conversational applications like streaming and broadcast television.[6] This includes producing standards that support compression of video sequences, still images, graphics, stereoscopic and multi-view content, light fields, point clouds, volumetric imagery, computer-generated displays, medical imaging, 360-degree video, and virtual/augmented reality (VR/AR) elements, with an emphasis on flexibility across transport mechanisms like the Internet, 5G networks, and ITU-T's H.222.0 multiplexing framework.[6]

Central to VCEG's efforts is the pursuit of high compression efficiency that preserves audiovisual quality, enabling efficient bandwidth utilization for diverse applications including broadcasting, video streaming, and mobile communications, while balancing trade-offs in bit rate, quality, delay, and computational complexity.[6] These standards also incorporate features for error resilience and the integration of digital signing to ensure multimedia content authenticity and synchronization within coded streams.[6] VCEG collaborates closely with ISO/IEC JTC 1/SC 29/WG 11 (MPEG) on
joint initiatives, such as the Joint Video Experts Team (JVET), to create jointly developed standards like Versatile Video Coding (VVC).[7]

Since its origins in 1984 with the development of the H.120 digital video coding standard, VCEG has aimed to standardize methods for digital video and image coding to minimize bandwidth requirements in global telecommunication networks, fostering interoperability and efficiency in visual data transmission.[1]
Organizational Structure
The Video Coding Experts Group (VCEG) operates as a rapporteur group under Question 6/21 of ITU-T Study Group 21 (SG21), focusing on visual, audio, and signal coding technologies.[4] SG21 was established for the 2025-2028 study period to address multimedia, content delivery, and related applications. VCEG convenes through multiple meetings annually, typically including 3-4 plenary sessions aligned with SG21 schedules, supplemented by interim rapporteur group meetings, such as virtual sessions and in-person gatherings like the one held in Daejeon, South Korea, in June–July 2025.[8][9]

Leadership of VCEG is provided by a rapporteur and associate rapporteurs, elected by participants to guide technical coordination, manage contributions, and oversee the development of outputs. The current rapporteur is Gary Sullivan from Dolby Laboratories, USA, supported by associate rapporteurs who assist in specific areas of video coding standardization.[8] These roles ensure efficient progression of work, including the integration of collaborative efforts with bodies like the Joint Video Experts Team (JVET).

Participation in VCEG is open to ITU-T Member States, Sector Members (such as industry entities including Nokia, China Telecom, Ericsson, and Fraunhofer), Associates, and academic institutions, fostering contributions from diverse stakeholders.[10][11] Contributions from these participants are rigorously reviewed through mechanisms like core experiments—structured tests to evaluate proposed technologies—and calls for evidence, which solicit demonstrations of compression performance beyond existing standards, such as in the development of H.266/Versatile Video Coding (VVC).[12][13]

Decision-making in VCEG follows ITU-T's consensus-based approach, where agreements are reached through general discussion and resolution of objections among experts, avoiding formal voting except for approving final Recommendations by Member States.[14] Documents are classified into contributions (formal
inputs for discussion), temporary documents (informal working papers), and standards drafts, which evolve toward ITU-T Recommendations through iterative review.[15][16]

This structure evolved from VCEG's prior placement under Question 6/16 of Study Group 16 (SG16), following decisions at the World Telecommunication Standardization Assembly (WTSA-24) in 2024, which merged SG16 and SG9 into SG21 to enhance focus on multimedia technologies.[3]
History
Formation and Early Developments
The Video Coding Experts Group (VCEG) traces its origins to 1984, when it emerged as part of the International Telecommunication Union (ITU, whose standardization arm was then known as the CCITT) efforts to standardize visual telephony over emerging digital networks. This initiative responded to the transition from analog to digital telecommunications, particularly the rollout of Integrated Services Digital Network (ISDN), which promised higher bandwidth for multimedia applications. Initial meetings under the CCITT's Study Group XV focused on developing codecs for videoconferencing, led by the Specialists Group on Coding for Visual Telephony, chaired by Sakae Okubo of NTT. The group's mandate emphasized low-bitrate video compression to enable real-time transmission within the constraints of early digital infrastructure, such as primary rate ISDN channels at rates up to 2.048 Mbit/s.[17][18]

The first outcome of this work was the H.120 standard, published by the CCITT in 1984 and revised in 1988, marking the inaugural international recommendation for digital video coding. H.120 targeted videophones and videoconferencing over primary digital group channels, operating at 1.544 Mbit/s for 525-line systems and 2.048 Mbit/s for 625-line systems. It employed differential pulse code modulation (DPCM) with conditional replenishment to compress video signals with limited motion, suitable for the era's low-bandwidth channels. This standard facilitated the transmission of monochrome video at resolutions like 256×240 for NTSC or 256×288 for PAL, but its performance was limited by the absence of advanced prediction techniques, resulting in modest compression efficiency.[19][17][20]

Building on H.120's foundations, the group advanced to H.261 in 1990, informally known as the "Px64" standard, which introduced a hybrid coding framework that became pivotal for subsequent video standards.
H.261 utilized motion-compensated discrete cosine transform (DCT) coding, combining block-based motion prediction with transform-based residual compression to achieve better efficiency at bitrates from 64 kbit/s to 2 Mbit/s. It targeted resolutions of QCIF (176×144 pixels) and CIF (352×288 pixels), enabling real-time decoding on 1980s hardware like early digital signal processors. Key challenges included optimizing for computational constraints—such as limiting macroblock processing to ensure decoding within 33 ms per frame—and handling transmission errors over ISDN without robust error correction, which often led to visible artifacts in low-bitrate scenarios. These early developments laid the groundwork for block-based hybrid coding paradigms still in use today.[21][17][20]
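The conditional-replenishment principle behind H.120, transmitting only the regions that changed since the previous frame, can be sketched in a few lines of Python. The block size, the mean-absolute-difference change test, and the threshold below are illustrative choices for demonstration, not values taken from the H.120 specification:

```python
import numpy as np

def conditional_replenishment(prev_frame, cur_frame, block=16, threshold=12.0):
    """Transmit only blocks that changed noticeably since the previous frame.

    Returns a list of (row, col, block_pixels) for changed blocks, plus the
    frame a decoder would reconstruct by patching them onto prev_frame.
    """
    h, w = cur_frame.shape
    updates = []
    recon = prev_frame.copy()
    for r in range(0, h, block):
        for c in range(0, w, block):
            cur = cur_frame[r:r + block, c:c + block]
            ref = prev_frame[r:r + block, c:c + block]
            # Mean absolute difference decides whether the block is "changed".
            if np.mean(np.abs(cur.astype(float) - ref.astype(float))) > threshold:
                updates.append((r, c, cur.copy()))
                recon[r:r + block, c:c + block] = cur
    return updates, recon
```

For static scenes almost no blocks are sent, which is why the scheme worked for low-motion videophone content but degraded badly under camera pans, where nearly every block changes.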
Key Milestones and Evolution
The Video Coding Experts Group (VCEG) achieved a significant milestone in 1994 with the joint development of H.262 alongside the Moving Picture Experts Group (MPEG), resulting in the MPEG-2 Video standard, which became foundational for digital television broadcasting and DVD storage by supporting interlaced video and higher bit rates up to 20 Mbit/s.[22] This collaboration marked VCEG's shift from standalone ITU-T efforts toward integrated standardization with ISO/IEC, addressing the growing demand for multimedia applications beyond initial telephony uses.[22]

In 1996, VCEG released H.263, a pivotal advancement over H.261 that enhanced compression efficiency for low-bit-rate video transmission, making it suitable for emerging internet videoconferencing with features like negotiable options for adaptability.[22] Over the subsequent decade, H.263 evolved through multiple extensions, culminating in Version 3 in 2005, which incorporated improvements in error resilience and scalability to meet the needs of broadband and mobile networks.[22]

The 2000s saw VCEG deepen its partnership with MPEG through the formation of the Joint Video Team (JVT) in 2001, leading to the completion of H.264/Advanced Video Coding (AVC) in 2003, which offered roughly double the compression efficiency of prior standards and supported diverse profiles for broadcasting, streaming, and mobile devices.[23] This era reflected VCEG's adaptation to the proliferation of digital media, emphasizing robustness against transmission errors amid the rise of internet-based video.[22]

Entering the 2010s, VCEG collaborated via the Joint Collaborative Team on Video Coding (JCT-VC), established in 2010, to develop H.265/High Efficiency Video Coding (HEVC), finalized in 2013, achieving about 50% better compression than H.264/AVC to handle high-definition and ultra-high-definition content efficiently.[24] In 2015, the Joint Video Exploration Team (JVET) was formed to explore technologies beyond HEVC, incorporating
scalability and advanced error resilience for multimedia ecosystems driven by broadband expansion.[25] Overall, VCEG's progression from telephony-centric standards like H.261 to versatile multimedia solutions mirrored the transition to widespread digital broadband, prioritizing interoperability and efficiency for global video deployment.[22]

In 2024, the World Telecommunication Standardization Assembly consolidated ITU-T Study Group 16 (encompassing VCEG) with Study Group 9 into the new Study Group 21 for the 2025-2028 period, continuing multimedia coding work under an expanded mandate.[3]
Video Coding Standards
Early Video Standards
The Video Coding Experts Group (VCEG) established its foundational work with Recommendation ITU-T H.120, approved in 1984 and revised in 1988, marking the first international standard for digital video compression. The 1984 version (v1) utilized differential pulse code modulation (DPCM) and conditional replenishment for intra-frame coding of changed areas, processing video signals without inter-frame prediction or motion compensation to compress monochrome or color video for videoconferencing. The 1988 revision (v2) introduced motion compensation for changed pixels and background prediction. Operating at fixed bit rates of 1.544 Mbit/s for 525-line/60 fields per second systems (NTSC) and 2.048 Mbit/s for 625-line/50 fields per second systems (PAL), H.120 was designed for transmission over primary digital group channels, achieving limited compression suitable primarily for static or low-motion videophone scenarios.[22]

Advancing beyond intra-frame limitations, VCEG developed H.261 in 1990 (initial draft 1988), introducing the hybrid coding paradigm that combined block-based motion compensation with discrete cosine transform (DCT) coding for inter-frame prediction, enabling more efficient compression of temporal redundancies. Targeted at audiovisual services over integrated services digital network (ISDN) lines, H.261 supported flexible bit rates of p×64 kbit/s, where p = 1 to 30, commonly ranging from 384 to 1536 kbit/s for practical deployments. The algorithm divided frames into 16×16 luminance macroblocks (with 8×8 chrominance blocks in 4:2:0 sampling), applying motion compensation on 16×16 or 8×8 blocks followed by 8×8 DCT transformation and scalar quantization; an optional in-loop filter was incorporated to attenuate blocking artifacts arising from block boundaries in the reconstructed frames.
Motion estimation employed full-search block matching, selecting the displacement vector \mathbf{v} = (v_x, v_y) that minimizes the sum of absolute differences (SAD) over a predefined search window:

\text{SAD}(\mathbf{v}) = \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} \left| f(t, x + m, y + n) - f(t-1, x + m + v_x, y + n + v_y) \right|

where f(t, \cdot, \cdot) denotes the pixel intensity at time t, and N = 16 or 8 depending on the block size. This structure enabled compression ratios of 80:1 to 100:1 for quarter common intermediate format (QCIF) video, supporting real-time videotelephony with acceptable quality over ISDN channels.[22]

To address the needs of emerging low-bit-rate applications like internet streaming, VCEG finalized H.263 in 1996 as an evolution of H.261, incorporating five baseline picture formats—sub-QCIF (128×96), QCIF (176×144), CIF (352×288), 4CIF (704×576), and 16CIF (1408×1152)—along with support for custom formats up to 31 distinct resolutions for flexibility in diverse systems. Key enhancements included optional negotiable modes such as unrestricted motion vector mode, which permitted vectors to extend beyond picture boundaries using padding or replication to improve prediction accuracy in edge regions, and advanced prediction mode, featuring overlapped block motion compensation and up to four 8×8 motion vectors per macroblock for finer granularity in handling complex motion. These innovations, built on the H.261 hybrid framework with added deblocking filters and arithmetic coding options, boosted compression efficiency, achieving ratios up to 100:1 for QCIF at bit rates as low as 20 kbit/s, making H.263 pivotal for early web video codecs like RealVideo and enabling widespread adoption in ISDN-based telephony and dial-up internet streaming.[22]

These early VCEG standards provided the core architectural principles—intra/inter prediction, transform coding, and motion estimation—that influenced subsequent ITU-T developments, such as H.264.[22]
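The full-search SAD matching defined above can be sketched directly. The frame contents, block position, and search-window size in this example are illustrative (H.261 itself constrained vectors to roughly ±15 pixels):

```python
import numpy as np

def full_search_sad(prev, cur, bx, by, n=16, search=7):
    """Full-search block matching as in early hybrid coders.

    Finds the displacement (vx, vy) minimizing the SAD between the n x n
    block of `cur` anchored at (bx, by) and a shifted block in the reference
    frame `prev`, over a +/-`search` pixel window.
    """
    h, w = prev.shape
    block = cur[by:by + n, bx:bx + n].astype(int)
    best, best_sad = (0, 0), None
    for vy in range(-search, search + 1):
        for vx in range(-search, search + 1):
            x, y = bx + vx, by + vy
            if x < 0 or y < 0 or x + n > w or y + n > h:
                continue  # candidate block falls outside the reference frame
            cand = prev[y:y + n, x:x + n].astype(int)
            sad = int(np.abs(block - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (vx, vy)
    return best, best_sad
```

The exhaustive double loop is what made full search expensive on 1980s hardware; later standards adopted fast search patterns and sub-pixel refinement, but the SAD criterion itself survived largely unchanged.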
Advanced Video Standards
The Video Coding Experts Group (VCEG), in collaboration with the Moving Picture Experts Group (MPEG) through the Joint Video Team (JVT), developed H.264/Advanced Video Coding (AVC) in 2003 as a major advancement for high-definition video compression.[26] This standard introduced key innovations such as multiple reference frames for motion compensation, allowing up to 16 reference pictures to improve prediction accuracy; Context-Adaptive Binary Arithmetic Coding (CABAC) for entropy encoding, which provides higher efficiency than previous methods; and spatial intra-prediction modes to reduce redundancy within frames.[26] These features enabled up to 50% better compression efficiency compared to H.263 while maintaining similar video quality.[26] H.264/AVC also defined profiles tailored to applications, including the Main profile for broadcast and streaming, and the High profile for high-definition content in mobile and professional environments.[26]

Building on H.264/AVC, VCEG and MPEG advanced to H.265/High Efficiency Video Coding (HEVC) in 2013 via the Joint Collaborative Team on Video Coding (JCT-VC), targeting even greater efficiency for ultra-high-definition video.[27] HEVC employs larger Coding Tree Units (CTUs) up to 64×64 pixels, compared to the 16×16 macroblocks in H.264/AVC, which reduces overhead and supports higher resolutions.[27] It features flexible partitioning through a quadtree structure for Coding Units (CUs) and Prediction Units (PUs), including asymmetric modes, alongside advanced motion vector prediction using spatial and temporal candidates in merge and AMVP modes.[27] Rate-distortion optimization in HEVC encoders minimizes the Lagrangian cost function:

J = D + \lambda R

where D represents distortion (e.g., sum of squared differences), R is the bit rate, and \lambda is the Lagrange multiplier balancing the trade-off.[27]

H.264/AVC saw widespread adoption, serving as the primary codec for Blu-ray Disc high-definition video and YouTube streaming, enabling efficient delivery of broadcast-quality
content.[28] HEVC, in turn, became the standard for 4K Ultra HD Blu-ray, supporting higher bit depths and resolutions with reduced bandwidth needs.[29] Extensions enhanced versatility: Multiview Video Coding (MVC) for H.264/AVC enabled stereoscopic 3D, while Scalable HEVC (SHVC) added layered scalability to HEVC for adaptive streaming.[30][31]

Technical advances in these standards included provisions for parallel processing to address computational complexity in hardware implementations. HEVC introduced tiles for independent rectangular regions and slices for segmented decoding, facilitating multi-core processing without interdependencies.[27] These mechanisms reduced latency and enabled efficient real-time encoding/decoding on consumer devices.[27] Such innovations laid the groundwork for subsequent video coding efforts.
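The Lagrangian mode decision J = D + λR can be illustrated with a toy selection routine. The candidate modes and their (distortion, rate) numbers below are invented for demonstration, not measurements from any real encoder:

```python
def rd_select(candidates, lam):
    """Pick the coding choice with the smallest Lagrangian cost J = D + lam * R.

    `candidates` maps a mode name to a (distortion, rate) pair, e.g. distortion
    as sum of squared differences and rate in bits.
    """
    best_mode, best_j = None, float("inf")
    for mode, (dist, rate) in candidates.items():
        j = dist + lam * rate
        if j < best_j:
            best_mode, best_j = mode, j
    return best_mode, best_j

# Hypothetical (distortion, rate) pairs for three candidate modes.
modes = {"intra": (400.0, 120), "inter": (550.0, 40), "skip": (900.0, 2)}
```

The behavior matches the intuition behind the multiplier: at small λ the encoder pays bits to reduce distortion and picks the intra candidate, while at large λ the cheap skip candidate wins despite its higher distortion.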
Modern and Emerging Video Standards
The Versatile Video Coding (VVC) standard, formally known as H.266 and developed through the Joint Video Experts Team (JVET) comprising VCEG and MPEG, was finalized in 2020 to achieve substantial compression improvements for high-resolution content. VVC delivers 30-50% bitrate savings over its predecessor HEVC while maintaining equivalent video quality, as demonstrated in JVET's core experiments using BD-rate metrics. This efficiency stems from advanced tools such as affine motion compensation, which models complex non-translational motion using a four- or six-parameter affine model (two or three control-point motion vectors) for sub-block prediction, and decoder-side motion vector refinement (DMVR), which refines merge-mode motion vectors at the decoder without additional signaling overhead. Additionally, palette mode optimizes coding for screen-sharing and graphics content by representing blocks with a small set of representative colors, reducing residual data. VVC supports ultra-high definitions up to 8K resolution at 30 frames per second with bitrates under 100 Mbps, enabling practical transmission of immersive and broadcast-quality video.[32][33][34][35][36][37]

Extensions to VVC address emerging applications, including adaptations for 360-degree video that project spherical content onto two-dimensional frames (such as equirectangular projections) for efficient compression. Integration with point cloud compression leverages VVC's core engine in video-based point cloud coding (V-PCC), where 3D geometry and attributes are projected into 2D video streams for encoding, achieving higher fidelity for volumetric media compared to prior HEVC-based approaches. Ongoing amendments focus on low-latency enhancements, such as gradual decoder refresh (GDR) without picture reordering to minimize end-to-end delay in real-time streaming, and refined parameter sets for ultra-low latency applications like vehicular networks and live events.
These extensions maintain backward compatibility while expanding VVC's versatility for immersive and interactive formats.[38][39][40]

Since 2021, VCEG through JVET has explored next-generation codecs beyond VVC, with the Enhanced Compression Model (ECM) serving as a testbed for incremental improvements targeting 20-50% additional gains via AI-assisted tools like neural network-based prediction and in-loop filtering. Efforts emphasize support for 16K resolutions and machine-optimized coding for analytics-driven applications. As of 2025, video coding efforts have moved to ITU-T Study Group 21. A Joint Call for Evidence was issued in July 2025, with submissions evaluated in October 2025. Depending on results, a Call for Proposals will follow, targeting a new standard (potentially H.267) operational around 2029-2030, with AI integration emphasized in collaborations like those between Ericsson, Nokia, and Fraunhofer HHI for 6G-era applications. JVET tests of ECM versions confirm approximately 25-30% BD-rate savings over VVC (and thus ~60-70% over HEVC) in random access configurations, underscoring the trajectory for post-VVC advancements.[41][42][43][44][34][45][46][47]
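VVC's affine motion compensation derives a per-sub-block motion vector from control-point vectors at the block corners. A minimal sketch of the four-parameter form (two control points, modeling rotation and zoom plus translation) follows; the inputs and units here are illustrative, and VVC's integer/fractional-precision arithmetic is omitted:

```python
def affine_subblock_mv(mv0, mv1, width, x, y):
    """Four-parameter affine motion model with two control points.

    mv0 and mv1 are the (horizontal, vertical) control-point motion vectors at
    the block's top-left and top-right corners; (x, y) is a position inside
    the block, typically a 4x4 sub-block center. Returns the interpolated
    motion vector at that position.
    """
    a = (mv1[0] - mv0[0]) / width  # shared scale/rotation coefficient (cos term)
    b = (mv1[1] - mv0[1]) / width  # shared scale/rotation coefficient (sin term)
    mvx = mv0[0] + a * x - b * y
    mvy = mv0[1] + b * x + a * y
    return mvx, mvy
```

With identical control-point vectors the model degenerates to ordinary translational motion; differing vectors produce a smoothly varying field that approximates zoom and rotation without signaling a vector per sub-block.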
Image and Related Coding Standards
Image Coding Standards
The Video Coding Experts Group (VCEG), in collaboration with ISO/IEC JTC 1/SC 29/WG 1 (the JPEG group), has contributed to several still image compression standards, focusing on spatial redundancy reduction through techniques like block-based transforms and prediction without temporal elements. These efforts emphasize efficient coding for professional photography, archiving, and high-dynamic-range (HDR) content, supporting both lossless and lossy modes to preserve image quality while minimizing file sizes.[17]

A key contribution is ITU-T Recommendation H.273 (first published in 2014), which defines coding-independent code points (CICP) for identifying video signal types, including still images, and supports HDR extensions through color space transformations such as from YCbCr to ICtCp as specified in ITU-R BT.2100. This standard enables interoperability for HDR still images by standardizing metadata for color primaries, transfer characteristics, and matrix coefficients, facilitating backward compatibility with existing decoders.[48]

JPEG XT (ISO/IEC 18477 series, published starting 2015) is a backward-compatible extension of the baseline JPEG standard designed for HDR and scalable still image compression, developed by the JPEG committee. JPEG XT employs a base layer using traditional JPEG discrete cosine transform (DCT) coding, augmented by residual coding for enhanced dynamic range and bit depth, targeting continuous-tone photographic content with support for up to 16 bits per component.
This approach achieves approximately 20-30% better compression efficiency than JPEG 2000 for HDR images at equivalent quality levels, particularly in professional archiving where lossless modes are critical.[49][50]

Additionally, VCEG co-developed the Main Still Picture profile in H.265/HEVC (ITU-T Recommendation H.265, 2013), which applies intra-only coding tools—such as angular intra prediction modes and larger block transforms (up to 64×64)—exclusively to single-frame images, excluding inter-frame motion prediction. This profile leverages block-based transforms and spatial prediction to reduce redundancy within the image, offering lossy and lossless options suitable for high-resolution stills in applications like digital photography. Evaluations indicate HEVC intra coding provides 16-25% bitrate savings over JPEG 2000 for standard dynamic range images, with even greater gains for HDR content when combined with H.273 color mappings. These intra tools draw directly from HEVC's video intra-frame mechanisms but operate independently for still image use.[51]
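In practice, H.273's metadata amounts to three small integers per stream. The sketch below shows two common combinations as (colour_primaries, transfer_characteristics, matrix_coefficients) triplets; the numeric values follow the published H.273 code-point tables, while the dictionary keys and the "P/T/M" shorthand are just illustrative conventions:

```python
# Two example H.273 coding-independent code point (CICP) triplets:
# (colour_primaries, transfer_characteristics, matrix_coefficients).
CICP = {
    # HDTV: BT.709 primaries, BT.709 transfer, BT.709 matrix.
    "bt709": (1, 1, 1),
    # HDR: BT.2020/BT.2100 primaries, PQ (SMPTE ST 2084), ICtCp matrix.
    "bt2100_pq_ictcp": (9, 16, 14),
}

def describe(name):
    """Render a CICP triplet in the 'P/T/M' shorthand often seen in tooling."""
    p, t, m = CICP[name]
    return f"{p}/{t}/{m}"
```

Because decoders only need to match these integers against known tables, the same coded image payload can be tagged for SDR or HDR interpretation without changing the compression layer.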
Current Work and Collaborations
Ongoing Projects
As of 2025, the Video Coding Experts Group (VCEG), operating under ITU-T Study Group 21 (SG21) Question 6/21, continues to advance amendments to Versatile Video Coding (VVC, ITU-T H.266) through the Joint Video Experts Team (JVET). These amendments, initiated after the standard's finalization in 2020, include enhancements to in-loop filtering technologies such as adaptive loop filters (ALF) and cross-component ALF (CC-ALF), as well as provisions for transform skip modes to improve coding efficiency for specific content types like screen content. The work targets further compatibility with AI-driven processing pipelines, with ongoing refinements expected to culminate in additional recommendation updates by 2026, building on versions like H.266 (V3) released in 2023.[43]

VCEG's exploration of next-generation video coding has progressed since 2023 through dedicated exploration activities within JVET, emphasizing neural network-based coding tools for superior compression and adaptability. This includes Exploration Experiments (EE) such as EE1 on neural network-based video coding, launched around 2021 and continuing into 2025, which evaluates AI models for prediction, transformation, and filtering to achieve gains beyond VVC. Sustainability aspects are integrated through focuses on energy-efficient encoding algorithms, aiming to reduce computational overhead for deployment in resource-constrained environments while targeting at least 20-50% bitrate reduction over VVC at equivalent quality.[52][53][54]

Under SG21's broader multimedia framework, VCEG projects integrate video coding with emerging applications, such as efficient representation for immersive environments including metaverse scenarios, where high-fidelity volumetric and point cloud data require optimized compression. This involves coordination with other SG21 questions on multimedia systems and services, ensuring seamless interoperability for IP-based and cable delivery networks.
Specific outputs include advancements in coding for machines, as noted in liaison statements, to support AI analytics in video streams.[4][55]

Recent VCEG activities in 2024-2025 have centered on JVET meetings and workshops advancing a call for evidence through the aforementioned exploration experiments, evaluating proposals for 50% or greater efficiency improvements over VVC baselines. Key events include the JVET-AM meeting in Daejeon in June 2025, reviewing neural network and enhanced compression results, and the joint ITU-ISO/IEC workshop in Geneva from July 1-3, 2025, which discussed AI integration and standards beyond VVC. The inaugural SG21 meeting in Geneva (January 13-24, 2025) established priorities for visual coding under the new structure, with follow-up sessions in October 2025 further progressing these efforts.[53][8]

Ongoing challenges for VCEG include balancing patent licensing frameworks—where ITU-T standards like VVC incorporate royalty-bearing essential patents—against demands for more accessible, royalty-free alternatives in open ecosystems, while prioritizing hardware-accelerated implementations for edge devices to enable low-latency, power-efficient processing in IoT and mobile applications. These considerations are addressed through collaborative test conditions in the exploration experiments, ensuring practical deployability without excessive complexity.[56][57]
Partnerships and Joint Efforts
The Video Coding Experts Group (VCEG) has maintained a long-term partnership with the Moving Picture Experts Group (MPEG) since the 1990s, beginning with the collaborative development of the H.262 video coding standard, which was jointly published as ITU-T Recommendation H.262 and ISO/IEC 13818-2 (MPEG-2 Video). This cooperation continued with the formation of the Joint Video Team (JVT) in 2001, which developed the Advanced Video Coding (AVC) standard, finalized in 2003 as ITU-T H.264 and ISO/IEC 14496-10 (MPEG-4 Part 10). Subsequent joint efforts included the Joint Collaborative Team on Video Coding (JCT-VC) from 2010 to 2015 for High Efficiency Video Coding (HEVC, ITU-T H.265 | ISO/IEC 23008-2), and the Joint Video Experts Team (JVET) from 2015 to 2020 for Versatile Video Coding (VVC, ITU-T H.266 | ISO/IEC 23090-3). These partnerships enable dual approval under ITU-T and ISO/IEC, facilitating widespread adoption by combining ITU's telecommunication focus with ISO's broader industry branding.[1][58][59][60]

VCEG's contributions to multiview video coding, such as the H.264/MVC extension developed jointly with MPEG, have supported advancements in immersive technologies, including MPEG Immersive Video (MIV, ISO/IEC 23090-12) finalized in 2022, which enables six degrees of freedom (6DoF) rendering for volumetric content captured by multiple views. Shared testing frameworks, like core experiments conducted within these joint teams, have streamlined development by evaluating proposals collaboratively, ensuring efficient compression for diverse applications. The ISO branding, as seen with AVC's designation as MPEG-4 Part 10, has broadened market penetration in consumer electronics and streaming services.[61][62]

VCEG maintains ties with the 3rd Generation Partnership Project (3GPP) through the adoption of its standards in mobile video profiles, such as HEVC Main 10 Profile for 4K UHD in 3GPP Release 14 and VVC profiles for enhanced efficiency in 5G multimedia transmission.
Similarly, the Digital Video Broadcasting (DVB) Project incorporates VCEG-developed codecs like HEVC for UHD broadcast in DVB-T2 and added VVC support in 2022 to enable 8K delivery within existing bandwidth constraints. These integrations promote interoperability, with VCEG providing the core coding tools tailored for mobile and broadcast environments via liaisons and profile specifications.[63][64]

In recent years, VCEG has pursued enhanced collaborations on AI integration in video coding, highlighted by the January 2025 joint ITU-T SG21 and ISO/IEC JTC 1/SC 29 workshop on future video coding incorporating advanced signal processing and AI techniques beyond VVC. With the establishment of ITU-T Study Group 21 (SG21) in 2025, which merged former SG16 (home to VCEG) and SG9 (focused on broadcast and cable systems), coordination on systems integration has intensified, supporting holistic multimedia standards that combine coding with delivery architectures.[2][65]