
MPEG-4 Part 2

MPEG-4 Part 2, formally known as ISO/IEC 14496-2, is a video compression standard developed for the coded representation of visual information, encompassing natural and synthetic visual objects such as video sequences, still textures, and animations in multimedia applications. It provides a suite of video coding tools, object types, and profiles that enable efficient compression of rectangular and arbitrarily shaped video objects, scalable bitstreams, and error-resilient encoding. Published in its third edition in 2004 with subsequent amendments, the standard supports resolutions from sub-QCIF (128×96 pixels) up to HDTV (1920×1080 pixels), primarily with 4:2:0 chroma subsampling, making it suitable for diverse environments including mobile devices and broadcast systems.

Initiated by the Moving Picture Experts Group (MPEG) in 1995 and approved in stages through 1998–1999, MPEG-4 Part 2 builds on the ITU-T H.263 video coding standard to achieve greater compression efficiency than its predecessor MPEG-2, particularly for online and mobile video delivery. Developed under ISO/IEC JTC 1/SC 29, it defines the bitstream syntax, semantics, and decoding processes but leaves encoder implementations flexible and unspecified. The standard emphasizes object-based coding, allowing independent manipulation of visual elements within a scene, which facilitates advanced features like interactivity and content scalability.

MPEG-4 Part 2 includes approximately 21 profiles tailored to specific applications, such as the Simple Profile for low-complexity mobile and internet video (up to CIF resolution and 384 kbit/s bitrates), the Advanced Simple Profile for enhanced efficiency in streaming, and the Main Profile for broadcast-quality interlaced content supporting up to 32 objects and 38 Mbit/s. Specialized profiles like Simple Face and Basic Animated Texture address synthetic content for virtual meetings and animations at low bitrates. Notable implementations include encoders like DivX and Xvid, which utilize the Advanced Simple Profile for consumer video distribution in formats such as AVI and MP4. The standard was widely adopted in the early 2000s for streaming media, CD/DVD distribution, and devices like the Apple iPod (which has used the Simple Profile since 2005), though its use has declined since the emergence of MPEG-4 Part 10 (H.264/AVC) around 2003, which offers superior efficiency. All patents related to MPEG-4 Part 2 expired worldwide by January 28, 2024 (except in Brazil). As an open standard, MPEG-4 Part 2 remains documented and supported for legacy preservation, requiring compatible decoders for playback.

Overview

Definition and Purpose

MPEG-4 Part 2, formally known as ISO/IEC 14496-2 or MPEG-4 Visual, is an international standard for video encoding that enables the efficient compression of both natural and synthetic visual content, including rectangular frame-based video objects and arbitrarily shaped objects. Developed by the Moving Picture Experts Group (MPEG), it defines the syntax, semantics, and decoding processes for representing picture information in forms such as video sequences, still images, and computer-generated graphics, while leaving encoder implementations flexible.

The primary purposes of MPEG-4 Part 2 are to facilitate low-bitrate video transmission suitable for emerging applications like internet streaming, mobile devices, and digital broadcast, while introducing object-based coding to support enhanced interactivity and content manipulation in scenes. The standard builds on earlier MPEG technologies by improving compression efficiency for bandwidth-constrained environments, allowing for scalable and error-resilient bitstreams that adapt to varying network conditions and device capabilities. For instance, it enables users to interact with individual video objects, such as selecting or animating specific elements within a scene, which was a significant advancement for interactive multimedia.

Key features include block-based motion compensation for temporal redundancy reduction, the discrete cosine transform (DCT) for intra-frame spatial compression, and backward compatibility with H.263 in short header mode for video telephony applications. These tools support a range of bitrates from 5 kbit/s up to 38 Mbit/s, making the standard versatile for both low-end mobile video and high-fidelity professional production. The scope of MPEG-4 Part 2 encompasses natural and synthetic video, including progressive and interlaced formats from sub-QCIF resolutions up to HDTV (1920×1080 pixels), but it does not cover the more advanced coding techniques addressed in subsequent MPEG-4 parts such as Part 10 (AVC). It enhances the coding efficiency of prior standards like MPEG-2 and H.263 through refinements in prediction and scalability, without altering their core hybrid coding framework.

Relation to Other Standards

MPEG-4 Part 2, also known as MPEG-4 Visual, ensures backward compatibility with the H.263 baseline profile, allowing MPEG-4 decoders to correctly process basic H.263 bitstreams for applications like video conferencing. It extends the capabilities of earlier standards such as MPEG-1 Video (ISO/IEC 11172-2) and MPEG-2 Video (ISO/IEC 13818-2) by incorporating advanced features like arbitrary shape coding for non-rectangular video objects and sprite coding for efficient representation of static backgrounds or global motion. In terms of compression efficiency, MPEG-4 Part 2 is generally less effective than its successor, H.264 (MPEG-4 Part 10), requiring up to 50% higher bit rates to achieve equivalent video quality, particularly in high-definition streaming scenarios. This positions MPEG-4 Part 2 as a transitional standard that bridges the gap between MPEG-2, commonly used for DVD and digital television, and the more advanced H.264 for modern broadband delivery.

Within the broader MPEG-4 framework (ISO/IEC 14496), Part 2 focuses exclusively on video object coding, complementing Part 3 for audio compression and Part 11 for scene description and interactive behavior of audiovisual objects. It builds upon the foundational systems layer defined in MPEG-4 Part 1, enabling integrated multimedia streams. MPEG-4 Part 2 served as the basis for several widely adopted codecs, including DivX and Xvid, which implement its Advanced Simple Profile for consumer video encoding and playback. These implementations influenced subsequent MPEG standards by demonstrating practical applications of object-based video tools, paving the way for enhanced profiles in later parts like MPEG-4 Part 10.

History and Development

Standardization Process

The development of MPEG-4 Part 2, also known as MPEG-4 Visual, was undertaken by the Moving Picture Experts Group (MPEG), a working group under ISO/IEC JTC 1/SC 29/WG 11, beginning in 1993 as part of the broader MPEG-4 standardization phase aimed at object-based audiovisual coding. This effort sought to extend beyond prior standards like MPEG-2 by enabling functionalities such as content interactivity and scalable compression for diverse applications, including mobile and internet video. Key milestones in the standardization process included a call for proposals issued by MPEG in 1995, with initial video submissions evaluated at the January 1996 MPEG Video Group meeting, leading to the establishment of the first Video Verification Model (VVM). Verification model testing and iterative refinements occurred from 1997 to 1998, incorporating core experiments to assess tools such as global motion compensation (GMC) for improved efficiency in handling camera movements across frames. The final committee draft was completed in late 1998, followed by the publication of the international standard ISO/IEC 14496-2 on December 16, 1999.

The process emphasized collaboration with the ITU-T to align certain aspects of MPEG-4 Visual with the H.263 video coding standard, particularly in baseline compression techniques for low-bitrate applications, ensuring compatibility for video telephony and similar uses. Post-standardization maintenance involved multiple amendments to the core specification, with the last major update being Amendment 5 in 2009, which added Levels 5 and 6 to the Simple Studio Profile for enhanced professional video handling.

Key Contributors

Touradj Ebrahimi of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and Caspar Horne served as primary architects of MPEG-4 Part 2, authoring key technical overviews and driving the specification's design for natural video coding. Ebrahimi played a pivotal role in advancing object-based coding, developing verification models that enabled content representation through separable audiovisual objects rather than pixel-based frames, which became foundational to the standard's interactivity features. Horne contributed extensively to the standard's verification models, serving as the first editor of the MPEG-4 Synthetic and Natural Hybrid Coding (SNHC) Verification Model and leading integration efforts for visual components.

Several organizations provided essential input during the development of MPEG-4 Part 2, including Thomson and other consumer electronics and telecommunications companies, which contributed technical proposals and submissions that shaped core encoding tools. Japanese firms demonstrated heavy involvement, particularly in tool selection for advanced features such as motion compensation and shape coding, with companies such as Mitsubishi Electric and Hitachi playing prominent roles through their expertise in video compression algorithms. The technological foundation of MPEG-4 Part 2 draws from over 870 early-filed patents across 29 entities, reflecting collaborative innovation in video encoding; major holders include companies like Mitsubishi Electric, Hitachi, and Panasonic, which together accounted for a substantial portion of essential patents on techniques such as global motion compensation.

Editions

Initial Release

The first edition of MPEG-4 Part 2, formally designated ISO/IEC 14496-2:1999, was published in December 1999 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). This release established the foundational baseline for video coding in the MPEG-4 standard, focusing on efficient compression of natural video objects for multimedia applications. It defined the initial profiles, including the Simple Profile (SP) for basic compatibility, thereby providing a flexible framework for various decoding capabilities; the Advanced Simple Profile (ASP) for enhanced functionality was added through later amendments. Key innovations associated with the standard included advanced tools such as quarter-sample motion vectors, which improved prediction accuracy over prior standards like H.263 by allowing finer granularity in motion compensation. Support for interlaced coding was also introduced to accommodate broadcast and legacy content formats. The standard targeted low-bitrate scenarios, supporting up to 384 kbit/s for Common Intermediate Format (CIF) resolutions in the Simple Profile, positioning it as a successor to H.263 for internet-based video transmission.

Development of MPEG-4 Part 2 originated from efforts in the mid-1990s, when the MPEG committee issued calls for proposals to advance beyond existing video coding paradigms toward object-based and network-friendly compression. In July 2000, Amendment 1 (ISO/IEC 14496-2:1999/Amd 1:2000) was published, extending error resilience mechanisms and data partitioning tools to enable more reliable delivery over error-prone channels like wireless networks. These enhancements allowed for resynchronization after transmission errors and prioritized critical data in the bitstream, improving overall robustness without altering the core profiles.

Amendments and Updates

The second edition of MPEG-4 Part 2, published as ISO/IEC 14496-2:2001, integrated prior amendments into the initial 1999 baseline and introduced studio profiles tailored for professional production and post-processing applications, enabling higher bit depths and lossless coding modes for enhanced fidelity in studio environments. Amendment 2 to the second edition, released in 2002 as ISO/IEC 14496-2:2001/Amd 2:2002, added the Streaming Video Profile for improved scalability in network environments. Amendment 3 to the second edition, released in 2003 as ISO/IEC 14496-2:2001/Amd 3:2003, introduced new levels and tools for MPEG-4 Visual, including support for higher resolutions and enhanced scalability in the Advanced Simple Profile.

The third edition, ISO/IEC 14496-2:2004, consolidated these and subsequent updates into a unified specification, incorporating technical revisions for broader compatibility and performance while maintaining backward compatibility with earlier versions. Amendment 4 to the third edition, published in 2008 as ISO/IEC 14496-2:2004/Amd 4:2008, introduced Simple Profile Level 6 to support resolutions up to 1280×720, and Amendment 5 in 2009 added Simple Studio Profile Levels 5 and 6 to finalize capabilities for high-resolution content up to 4K, addressing demands for progressive workflows. These updates, spanning three main editions and multiple amendments (including corrigenda up to 2013) and culminating in the stabilization of major features by 2009, primarily addressed limitations of the original 1999 baseline by improving scalability for network transmission, enhancing compression quality through advanced prediction tools, and adding support for high-definition formats alongside robustness for error-prone channels such as wireless networks.

Profiles and Levels

Simple Profile

The Simple Profile (SP) of MPEG-4 Part 2 represents the baseline visual coding configuration, optimized for minimal computational complexity to enable deployment on resource-constrained devices such as early mobile phones and portable equipment. Its design prioritizes low-latency encoding and decoding suitable for real-time applications in bandwidth-limited environments, while providing efficient compression for rectangular video objects without support for advanced features like interlacing or arbitrary shapes. To achieve this, SP restricts frame types to intra-coded (I-frames) and predicted (P-frames) only, omitting bi-directionally predicted B-frames and global motion compensation (GMC), which would increase processing demands.

Key coding tools in SP include constrained variable block-size motion compensation, limited to 16×16 macroblocks or 8×8 blocks for inter-frame prediction, which balances efficiency and simplicity. Motion compensation is aligned with the baseline profile of H.263, incorporating half-pel accuracy and unrestricted motion vectors to facilitate compatibility with existing low-bitrate video systems while avoiding higher-complexity options like overlapped block motion compensation. These elements ensure robust performance in error-prone channels, with optional data partitioning for enhanced error resilience at low bitrates.

SP defines six levels, each imposing constraints on parameters such as maximum bit rate, picture size, frame rate, and decoder buffer sizes to delineate conformance points for interoperability. The levels progressively scale capabilities from basic mobile video to moderate-resolution streaming, as summarized below:
| Level | Max Bit Rate | Max Resolution | Max Frame Rate | Key Buffer/Rate Constraints |
|-------|--------------|----------------|----------------|-----------------------------|
| 0 | 64 kbit/s | QCIF (176×144) | 10 Hz | VBV buffer: 24 kbytes; decoder rate: 14,400 samples/s [ISO/IEC 14496-2:2004] |
| 1 | 128 kbit/s | QCIF (176×144) | 15 Hz | VBV buffer: 32 kbytes; decoder rate: 21,600 samples/s [ISO/IEC 14496-2:2004] |
| 2 | 384 kbit/s | CIF (352×288) | 30 Hz | VBV buffer: 79 kbytes; decoder rate: 86,400 samples/s [ISO/IEC 14496-2:2004] |
| 3 | 2 Mbit/s | 720×576 | 30 Hz | VBV buffer: 396 kbytes; decoder rate: 345,600 samples/s [ISO/IEC 14496-2:2004] |
| 4 | 8 Mbit/s | 720×576 | 30 Hz | VBV buffer: 2,621 kbytes; decoder rate: 1,417,280 samples/s (variants for progressive scan) [ISO/IEC 14496-2:2004/Amd.2:2005] |
| 5 | 8 Mbit/s | 1,280×720 | 30 Hz | Extended VBV and decoder rates for basic SP enhancements [ISO/IEC 14496-2:2004/Amd.4:2008] |
These levels establish operational ranges for video, with higher levels accommodating larger pictures and higher temporal rates while maintaining the profile's core simplicity. SP finds primary applications in video telephony over narrowband networks and early internet-based video streaming, where its H.263 compatibility enables seamless integration with legacy systems on low-end consumer devices. By focusing on straightforward, error-resilient coding, it laid foundational support for mobile multimedia services in the late 1990s and early 2000s.
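
The level constraints above lend themselves to a simple programmatic conformance check. The following sketch uses only the resolution, frame-rate, and bit-rate limits from the table (omitting the normative VBV buffer rules) to pick the lowest Simple Profile level that can accommodate a given stream; it is illustrative rather than a substitute for the conformance procedures in ISO/IEC 14496-2.

```python
# A minimal sketch of a Simple Profile level-conformance check, using the
# parameter limits from the table above. Buffer constraints are omitted for
# brevity; consult ISO/IEC 14496-2 for the normative VBV rules.

SP_LEVELS = {
    0: {"max_bitrate": 64_000,    "max_w": 176,  "max_h": 144, "max_fps": 10},
    1: {"max_bitrate": 128_000,   "max_w": 176,  "max_h": 144, "max_fps": 15},
    2: {"max_bitrate": 384_000,   "max_w": 352,  "max_h": 288, "max_fps": 30},
    3: {"max_bitrate": 2_000_000, "max_w": 720,  "max_h": 576, "max_fps": 30},
    4: {"max_bitrate": 8_000_000, "max_w": 720,  "max_h": 576, "max_fps": 30},
    5: {"max_bitrate": 8_000_000, "max_w": 1280, "max_h": 720, "max_fps": 30},
}

def lowest_conforming_level(width, height, fps, bitrate):
    """Return the lowest SP level whose limits accommodate the given stream,
    or None if no level fits."""
    for level, lim in sorted(SP_LEVELS.items()):
        if (width <= lim["max_w"] and height <= lim["max_h"]
                and fps <= lim["max_fps"] and bitrate <= lim["max_bitrate"]):
            return level
    return None

print(lowest_conforming_level(352, 288, 25, 300_000))  # -> 2 (CIF at 300 kbit/s)
```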

Advanced Simple Profile

The Advanced Simple Profile (ASP) extends the capabilities of the Simple Profile in MPEG-4 Part 2 by adding advanced compression tools aimed at improving efficiency for consumer video applications, such as streaming and storage, while maintaining relatively low complexity. Defined in ISO/IEC 14496-2, ASP targets single-layer, frame-based video distribution across a broad range of bitrates suitable for both mobile devices and broadcast scenarios. It builds on the baseline rectangular video objects of the Simple Profile but introduces enhancements for better handling of complex motion and scene changes, enabling higher quality at comparable bitrates.

Key tools in ASP include B-frames (bidirectionally predicted video object planes) for improved temporal prediction, quarter-pixel motion compensation for finer motion representation, and global motion compensation (GMC) to efficiently encode global camera movements like panning or zooming. An optional deblocking filter is provided as post-processing to reduce blocking artifacts, and resync markers support error recovery in noisy transmission environments. Additionally, up to four motion vectors per macroblock are permitted, particularly in direct mode for B-frames, allowing greater flexibility in motion representation without excessive computational overhead.

ASP comprises six levels (0 through 5), each specifying constraints on bitrate, resolution, and frame rate to ensure decoder conformance across varying application needs. These levels scale from low-bandwidth video to near-broadcast quality, with support for interlaced coding introduced at the higher levels. Representative capabilities include:
| Level | Maximum Bitrate | Typical Resolution | Key Capabilities |
|-------|-----------------|--------------------|------------------|
| 0 | 128 kbit/s | QCIF (176×144) | Basic video |
| 1 | 128 kbit/s | QCIF (176×144) | Enhanced QCIF support |
| 2 | 384 kbit/s | CIF (352×288) | Higher resolution and frame rates |
| 3 | 768 kbit/s | CIF (352×288) | Full CIF quality |
| 4 | 3 Mbit/s | 352×576 | Half-D1 resolution; interlaced coding support |
| 5 | 8 Mbit/s | 720×576 @ 30 Hz | Interlaced coding; broadcast-like quality |
These parameters establish important context for deployment, balancing quality and resource demands. In practice, ASP found widespread use in codecs such as DivX and Xvid and in early consumer playback systems, where its efficiency enabled high-quality compression of standard-definition content for distribution over limited-bandwidth connections.
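
The quarter-pixel motion compensation tool listed above can be illustrated with a small interpolation routine. The sketch below uses plain bilinear weighting to fetch a reference sample at a quarter-pel offset; this is a simplification for illustration only, since ISO/IEC 14496-2 specifies particular interpolation filters that conforming decoders must use.

```python
import numpy as np

def bilinear_subpel(ref, x, y, dx, dy):
    """Fetch one luma sample at sub-pel position (x + dx/4, y + dy/4), where
    dx, dy are quarter-pel offsets in [0, 3]. Simplified bilinear weighting
    for illustration; the normative interpolation filters are defined in
    ISO/IEC 14496-2."""
    fx, fy = dx / 4.0, dy / 4.0
    x1 = min(x + 1, ref.shape[1] - 1)
    y1 = min(y + 1, ref.shape[0] - 1)
    top = (1 - fx) * ref[y, x] + fx * ref[y, x1]
    bot = (1 - fx) * ref[y1, x] + fx * ref[y1, x1]
    return (1 - fy) * top + fy * bot

ref = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_subpel(ref, 1, 1, 2, 2))  # half-pel position between four samples -> 7.5
```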

Simple Studio Profile

The Simple Studio Profile (SStP) in MPEG-4 Part 2 is designed for professional video applications, employing intra-frame coding exclusively to support editing workflows where random frame access is essential without dependencies on preceding frames. This profile facilitates high-quality intermediate formats in production pipelines, accommodating resolutions from standard-definition television (SDTV) up to 4K ultra-high definition (UHD). It supports bit depths ranging from 8 to 12 bits per component, enabling precise color representation suitable for post-processing tasks like grading. Chroma subsampling formats of 4:2:2 and 4:4:4 are supported for studio-quality color fidelity, including both RGB and YCbCr color spaces.

Key features of SStP emphasize tools tailored for studio environments, including spatial scalability to allow layered encoding for flexible resolution adjustments during editing. Notably, it omits inter-frame prediction mechanisms, ensuring each frame is independently encoded to preserve editing flexibility and avoid artifacts from motion compensation. The high bit-depth capability is particularly valuable for color grading workflows, providing extended dynamic range and reduced quantization errors in professional color correction. Introduced in the second edition of ISO/IEC 14496-2 in 2001, SStP addresses the need for a near-lossless codec in broadcast and production settings.

SStP defines six levels, each imposing specific constraints on parameters such as resolution, chroma subsampling, bit depth, frame rate, and bitrate to match diverse professional use cases from standard-definition work to ultra-high-resolution production. These levels ensure compatibility with studio equipment while scaling computational demands appropriately. Levels 5 and 6 were added in Amendment 5 (2009).
| Level | Max Resolution | Chroma Subsampling / Bit Depth | Max Frame Rate | Max Bitrate (Mbit/s) |
|-------|----------------|--------------------------------|----------------|----------------------|
| 1 | SDTV (e.g., 720×576) | 4:2:2 / 10-bit | Up to 50i/60i | 180 |
| 2 | SDTV/HD transition | 4:2:2 / 10-bit | Up to 50i/60i | 600 |
| 3 | HD (1920×1080) | 4:2:2 / 10- or 12-bit | Up to 60p | 900 |
| 4 | 2K (2048×1080) | 12-bit | Up to 60p | 1,800 |
| 5 | UHD (3840×2160) | 4:2:2 / 12-bit | Up to 30p | 1,800 |
| 6 | 4K×2K (4096×2160) | 12-bit | Up to 60p | 3,600 |
Note: Bit depths and frame rates align with ITU-R BT.709/BT.2020 and SMPTE standards for levels 1–4; higher levels extend to wider color gamuts. In practice, SStP serves as a codec for broadcast contribution links and studio production environments, where its intra-frame nature supports efficient proxy editing and high-fidelity archiving without recompression losses. It integrates into workflows such as the Interoperable Master Format (IMF) for distribution mastering, providing a bridge between acquisition and final delivery formats.
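
A rough calculation makes the level constraints above concrete. The sketch below compares the raw data rate of an illustrative 1080p60, 4:2:2, 10-bit source against the 900 Mbit/s Level 3 cap from the table; the source parameters are assumptions chosen for illustration.

```python
# Rough compression-ratio estimate for SStP Level 3 (cap from the table
# above; the 1080p60 4:2:2 10-bit source parameters are illustrative).
width, height, fps = 1920, 1080, 60
bits_per_sample = 10
samples_per_pixel = 2            # 4:2:2 -> 1 luma + 0.5 Cb + 0.5 Cr per pixel

uncompressed_bps = width * height * samples_per_pixel * bits_per_sample * fps
level3_cap_bps = 900_000_000     # 900 Mbit/s cap from the Level 3 row

print(f"Uncompressed: {uncompressed_bps / 1e9:.2f} Gbit/s")               # ~2.49 Gbit/s
print(f"Minimum ratio at Level 3 cap: {uncompressed_bps / level3_cap_bps:.1f}:1")  # ~2.8:1
```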

Technical Specifications

Compression Techniques

MPEG-4 Part 2, also known as MPEG-4 Visual, employs a hybrid video compression framework that combines motion-compensated prediction with transform coding to exploit spatial and temporal redundancies in video sequences. This approach builds on earlier standards like H.263 and MPEG-2, enabling efficient coding of rectangular frames or arbitrarily shaped video objects while supporting interactivity in multimedia applications. The core techniques prioritize block-based processing for computational efficiency, with optional advanced tools for enhanced performance in specific profiles.

Video frames, termed Video Object Planes (VOPs) in the standard, are categorized into three types: intra-coded (I-VOPs), which are encoded without reference to other frames using only spatial information; predicted (P-VOPs), which use forward prediction from previous I- or P-VOPs; and bi-directionally predicted (B-VOPs), available in the Advanced Simple Profile (ASP) for prediction between past and future VOPs to improve compression efficiency. B-VOPs allow multiple motion vectors per macroblock but are optional, to maintain low-latency decoding in simpler profiles.

Motion compensation in MPEG-4 Visual is block-based, dividing frames into 16×16 luma macroblocks (with corresponding 8×8 chroma blocks) for prediction. Motion vectors are estimated with half-pixel accuracy in the Simple Profile (SP) using bilinear interpolation, while the ASP extends this to quarter-pixel accuracy when the quarter_sample flag is enabled, using an FIR interpolation filter for half-pixel luma positions and bilinear averaging for quarter-pixel positions; chroma uses half-sample bilinear interpolation, allowing finer temporal alignment and modest bitrate savings in motion-heavy scenes. Global Motion Compensation (GMC) further optimizes warped backgrounds by applying affine transformations to entire sprites or frames, reducing local motion vector overhead for static or camera-panned content. Motion vector prediction uses differential coding, where the predicted vector \mathbf{MV}_{\mathrm{pred}} = \operatorname{median}(\mathbf{MV}_A, \mathbf{MV}_B, \mathbf{MV}_C) is the component-wise median of three neighboring candidate vectors (left, above, and above-right, subject to availability), so that only the residual relative to this predictor is transmitted.

Transform coding applies an 8×8 Discrete Cosine Transform (DCT) to intra blocks and prediction residuals, concentrating energy into low-frequency coefficients for subsequent quantization. For object-based coding, a Shape-Adaptive DCT (SA-DCT) handles boundary blocks containing fewer than 64 opaque pixels, enabling efficient coding of irregular object boundaries. Quantization follows a scalar uniform scheme with a zigzag scan to reorder coefficients from low to high frequency, facilitating run-length encoding of zeros and prioritizing significant components for rate-distortion optimization.

Entropy coding relies on Variable-Length Codes (VLCs) for coefficients, motion vectors, and macroblock types, closely mirroring H.263's tables for compatibility and efficiency, with adaptive selection among up to 12 VLC sets for AC coefficients based on prior symbols to approach Huffman optimality. Rate control is achieved through Video Buffering Verifier (VBV) buffer management, dynamically adjusting quantization parameters to maintain target bitrates and prevent decoder buffer overflows, often using quadratic rate-distortion models for frame allocation.
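
The median-based motion-vector prediction described above is straightforward to express in code. The following sketch computes the component-wise median predictor and the differential vector that would actually be coded; candidate selection and availability rules are simplified relative to the full specification.

```python
# A minimal sketch of the motion-vector prediction rule described above:
# the predictor is the component-wise median of three neighbouring candidate
# vectors, and only the difference from this predictor is coded. Candidate
# availability handling is simplified compared with the full specification.

def median3(a, b, c):
    """Component-wise median of three (x, y) motion vectors."""
    return tuple(sorted(vals)[1] for vals in zip(a, b, c))

def mv_residual(mv, mv_left, mv_above, mv_above_right):
    """Return the differential motion vector actually transmitted, plus the predictor."""
    pred = median3(mv_left, mv_above, mv_above_right)
    return (mv[0] - pred[0], mv[1] - pred[1]), pred

# Example: neighbour vectors (in half-pel units) and the current block's true vector.
residual, predictor = mv_residual(
    mv=(5, -2), mv_left=(4, -1), mv_above=(6, -3), mv_above_right=(3, 0))
print(predictor, residual)   # predictor (4, -1) -> residual (1, -1)
```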
Additional tools include object-based segmentation, which partitions video into foreground and background via alpha planes and arbitrary shapes, though it is rarely deployed owing to segmentation complexity; and sprite coding, which constructs a large static background (sprite) from multiple frames and warps it using GMC parameters for efficient representation of unchanging scenes, reducing redundancy in panned or zoomed content. These features, while innovative for content-based manipulation, are profile-dependent and see limited use outside research.
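
Sprite coding reconstructs each background VOP by warping the static sprite with global motion parameters. The sketch below applies a simple affine model with inverse mapping and nearest-neighbour sampling; the parameterization and sampling are simplifications for illustration, whereas the normative GMC/sprite warping in ISO/IEC 14496-2 is defined in terms of warping points and sub-pel interpolation.

```python
import numpy as np

def warp_sprite_affine(sprite, params, out_h, out_w):
    """Reconstruct a VOP-sized background from a static sprite using an
    affine model (a, b, c, d, e, f): x_s = a*x + b*y + c, y_s = d*x + e*y + f.
    Inverse mapping with nearest-neighbour sampling is used here for
    simplicity; the normative warping uses warping-point parameters and
    sub-pel interpolation."""
    a, b, c, d, e, f = params
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    src_x = np.clip(np.rint(a * xs + b * ys + c).astype(int), 0, sprite.shape[1] - 1)
    src_y = np.clip(np.rint(d * xs + e * ys + f).astype(int), 0, sprite.shape[0] - 1)
    return sprite[src_y, src_x]

# A 10x10 toy "sprite"; simulate a 2-pixel horizontal pan of an 8x8 VOP window.
sprite = np.arange(100, dtype=np.uint8).reshape(10, 10)
vop_background = warp_sprite_affine(sprite, (1.0, 0.0, 2.0, 0.0, 1.0, 0.0), 8, 8)
print(vop_background[0, :4])  # first row starts at sprite column 2 -> [2 3 4 5]
```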

Bitstream Structure

The MPEG-4 Part 2 bitstream, also known as MPEG-4 Visual, employs a hierarchical structure to organize video data, enabling flexible encoding of visual objects within a scene. At the highest level, the Visual Object Sequence (VOS) header encapsulates the entire sequence, followed by Video Object (VO) descriptors that define individual objects. Each VO is associated with one or more Video Object Layers (VOLs), which specify encoding parameters such as profile and level. VOLs may include optional Groups of Video Object Planes (GOVs) as random access points, grouping multiple Video Object Planes (VOPs). A VOP represents a snapshot of a video object at a specific time instance, comprising shape, motion, and texture information. This object-based hierarchy contrasts with frame-based structures in prior standards, supporting arbitrarily shaped objects and scalability.

The VOL header establishes foundational syntax elements, including timing information via vop_time_increment_resolution, the aspect ratio through aspect_ratio_info, and the profile/level indication with profile_and_level_indication to define supported tools. The VOP header follows, providing the temporal reference with vop_time_increment and flags for the frame type (Intra I-VOP for independent coding, Predictive P-VOP for forward prediction, or Bi-directional B-VOP for bidirectional prediction), along with quantization parameters. At the lowest level, macroblock syntax encodes spatial details: motion vectors are differentially coded with half-pixel precision using variable-length codes, while texture data uses coded block patterns (coded_block_pattern) to signal which blocks contain quantized DCT coefficients, together with flags for intra/inter coding modes. The syntax ensures byte alignment through procedures like next_start_code(), facilitating straightforward parsing during decoding.

Error resilience features are integrated into the bitstream to mitigate transmission errors, particularly in error-prone channels. Resynchronization markers (resync_marker) are periodically inserted at macroblock boundaries, delineating video packets for recovery from bit errors. Data partitioning separates each video packet into independently decodable sections for motion/header information and for texture data, allowing partial decoding if one partition is corrupted. For low-delay applications, a short header mode provides a simplified syntax compatible with baseline H.263, reducing overhead; resilience is further maintained through reversible variable-length codes and header extension methods that duplicate critical information. These tools enable graceful degradation without halting the entire stream.

Syntax variations occur across profiles to balance complexity and functionality. The Simple Profile (SP) adheres to the basic syntax, supporting I- and P-VOPs with core motion compensation and no advanced warping tools, ensuring compatibility with low-complexity decoders. In contrast, the Advanced Simple Profile (ASP) extends this with additional parameters for Global Motion Compensation (GMC), including warping parameters encoded in the GOV or VOP headers to describe affine transformations for background motion, while maintaining byte-aligned structures for efficient parsing. These profile-specific elements allow the bitstream to adapt to diverse application needs without altering the core hierarchy.
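
A small scanner for the byte-aligned start codes that delimit this hierarchy illustrates how a parser locates VOS, VOL, GOV, and VOP boundaries. The start-code values below follow commonly documented assignments in ISO/IEC 14496-2 and should be verified against the specification; header-field parsing is omitted, and the input file name is a placeholder.

```python
# Illustrative scan for byte-aligned start codes (0x000001 prefix) in an
# MPEG-4 Part 2 elementary stream, labelling the hierarchy levels described
# above. Verify the start-code values against ISO/IEC 14496-2 before
# relying on them.

START_CODE_NAMES = {
    0xB0: "visual_object_sequence_start_code",
    0xB1: "visual_object_sequence_end_code",
    0xB2: "user_data_start_code",
    0xB3: "group_of_vop_start_code",
    0xB5: "visual_object_start_code",
    0xB6: "vop_start_code",
}

def scan_start_codes(data: bytes):
    """Yield (offset, label) for each byte-aligned 0x000001xx start code."""
    i = 0
    while i + 3 < len(data):
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            code = data[i + 3]
            if 0x20 <= code <= 0x2F:
                label = "video_object_layer_start_code"
            elif code <= 0x1F:
                label = "video_object_start_code"
            else:
                label = START_CODE_NAMES.get(code, f"start_code_0x{code:02X}")
            yield i, label
            i += 4
        else:
            i += 1

with open("clip.m4v", "rb") as f:   # placeholder raw MPEG-4 Visual stream
    for offset, label in scan_start_codes(f.read()):
        print(f"{offset:#010x}  {label}")
```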

Implementation and Applications

Software Encoders and Decoders

MPEG-4 Part 2 software encoders primarily target the Advanced Simple Profile (ASP) for its balance of compression efficiency and compatibility, though some support the Simple Profile (SP) as well. Xvid, an open-source encoder, focuses on ASP and is renowned for delivering high-quality output through advanced features like quarter-pixel motion compensation and global motion compensation. Developed as a fork of the OpenDivX project in July 2001, Xvid has been maintained by a community of developers and integrated into various tools for its reliability in producing efficient video streams. FFmpeg's libavcodec library provides versatile encoding support for the major MPEG-4 Part 2 profiles, with particular optimization for ASP via its native mpeg4 encoder or the libxvid wrapper. This integration, available since the early 2000s, allows FFmpeg to handle a wide range of input sources and output bitrates, making it a staple in open-source workflows. DivX Pro, a proprietary encoder from DivX, LLC, is ASP-based and emphasizes user-friendly encoding with features like scene detection, catering to consumer video creation since its release alongside the DivX codec family.

Other notable encoder implementations include Nero Digital, which offers an ASP-focused encoder integrated into Nero's software suite for DVD authoring and video conversion, providing robust rate-distortion performance. Early versions of Apple's QuickTime, starting with version 6.0 in 2002, supported SP decoding natively, enabling playback of basic MPEG-4 Part 2 streams in QuickTime-based applications. Xvid's two-pass rate control mode enhances encoding quality by first analyzing the video for complexity distribution and then optimizing bitrate allocation in the second pass, often yielding better visual fidelity at constrained bitrates than single-pass methods.

For decoding, VLC Media Player offers full support for MPEG-4 Part 2 across profiles, leveraging libavcodec with hardware-accelerated playback where available, ensuring compatibility with SP and ASP streams in containers such as AVI and MP4. Windows Media Player handles MPEG-4 Part 2 decoding through installed DirectShow codecs, supporting SP and ASP for standard video playback on Windows systems. The libavcodec decoder within FFmpeg provides comprehensive SP and ASP support, serving as a backend for many applications and libraries focused on efficient software-based decoding.
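
As a concrete example of the FFmpeg support described above, the following sketch invokes FFmpeg's native mpeg4 encoder from Python. The file names are placeholders and the option values are illustrative rather than recommended settings; the libxvid encoder can be substituted where FFmpeg is built with Xvid support.

```python
import subprocess

# Minimal sketch: encode a clip to MPEG-4 Part 2 (ASP) with FFmpeg's native
# "mpeg4" encoder. File names are placeholders; quality and option choices
# are illustrative rather than recommended settings.
cmd = [
    "ffmpeg",
    "-i", "input.mov",        # placeholder source file
    "-c:v", "mpeg4",          # native MPEG-4 Part 2 encoder ("libxvid" is an alternative)
    "-qscale:v", "4",         # fixed quantizer; lower values mean higher quality
    "-vtag", "xvid",          # FourCC that many legacy players expect for ASP in AVI
    "-c:a", "copy",           # pass audio through unchanged
    "output.avi",
]
subprocess.run(cmd, check=True)
```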

Usage in Media

MPEG-4 Part 2, also known as MPEG-4 Visual, saw early adoption in internet streaming as an alternative to established proprietary formats, enabling more efficient delivery of content over bandwidth-limited connections in the early 2000s. It was positioned to succeed MPEG-2 for streaming applications, leveraging improved compression for interactive multimedia experiences. In mobile streaming, MPEG-4 Part 2 profiles were integrated into standards for 3G and 2.5G phones, with adoption by organizations such as 3GPP and 3GPP2 to support video transmission on early cellular networks during the 2000s. This facilitated person-to-person video communication and content capture on devices with limited processing power.

For storage and distribution, implementations like DivX, based on MPEG-4 Part 2, gained popularity for creating high-quality, compact video files suitable for CD and DVD backups, allowing near-DVD-quality playback from smaller media in the early 2000s. In broadcast applications, MPEG-4 Part 2 supported standard-definition television (SDTV) contribution feeds, including scenarios in digital broadcasting where it enabled efficient transmission of program content. It also found legacy use in cameras, particularly for low-resolution, low-bitrate video encoding.

Following the expiration of nearly all associated patents by early 2024, MPEG-4 Part 2 became freely usable without licensing fees, opening opportunities for implementation in resource-constrained environments like embedded systems. Although it has become rare in mainstream streaming due to the superior efficiency of successors like H.264 (MPEG-4 Part 10), which reduces bandwidth needs by up to 50% while maintaining quality, MPEG-4 Part 2 persists in niche video applications suited to its low-complexity profiles. MPEG-4 Part 2 video is commonly encapsulated in .mp4 containers (defined in MPEG-4 Part 14), supporting streaming and storage of compressed audiovisual data across various media formats. This structure was utilized in early online video platforms, including pre-2005 uploads that leveraged MPEG-4 for web-based distribution before the widespread shift to more advanced codecs.

Licensing and Patents

Patent Holders

The essential patents for MPEG-4 Part 2 (Visual) are pooled and licensed through the MPEG-4 Visual Patent Portfolio License, administered by Via Licensing Alliance (formerly MPEG LA, L.L.C.), which began licensing the portfolio in the early 2000s. This arrangement covers essential patents required to implement key profiles of the standard, such as Simple, Advanced Simple, and Main, enabling implementers to access rights from multiple holders via a single license agreement.

Major patent holders include Mitsubishi Electric Corporation, with contributions in technologies like Global Motion Compensation (GMC); Hitachi, Ltd.; and Panasonic Corporation (formerly Matsushita Electric Industrial Co., Ltd.), with developments in areas such as error resilience. Other significant holders are Sony Group Corporation (over 150 patents), Koninklijke Philips N.V. (over 100 patents), Toshiba Corporation, and Sharp Corporation. The portfolio draws from a broad base of licensors, including France Télécom, Microsoft Corporation, Fujitsu Limited, and SANYO Electric Co., Ltd. Japanese firms, including Sony, Toshiba, and Sharp, are among the major patent holders, reflecting their substantial role in the development of core compression and video processing features. Contributions from U.S. and European entities, such as Columbia University and École Polytechnique Fédérale de Lausanne (EPFL), focus on innovative aspects like object-based coding and advanced transform methods. In total, the portfolio encompasses over 1,000 patents held by 29 organizations, including approximately 346 in the United States, ensuring comprehensive coverage for MPEG-4 Part 2 implementations.

Expiration and Impact

All patents essential to MPEG-4 Part 2 (also known as MPEG-4 Visual) have now largely lapsed, with almost all worldwide patents expiring by early 2024, thereby concluding royalty payments under the MPEG-4 Visual Patent Portfolio License (now administered by Via Licensing Alliance) for the majority of global implementations. As of November 2025, two patents remain active, exclusively in Brazil: BR PI0109962-0 (expiring July 19, 2026) and BR PI0113271-7 (expiring January 26, 2026). The latter relates to watermarking and may not be essential for core implementations. These holdouts do not significantly affect the standard's otherwise royalty-free status outside Brazil.

Historically, the license imposed fees of approximately $0.20 to $0.25 per decoder unit after initial volume thresholds, subject to annual caps such as $4.75 million per legal entity, which contributed to limited adoption of MPEG-4 Part 2 in cost-sensitive markets and open-source projects prior to expiration. These fees created barriers to widespread deployment, particularly in emerging applications like mobile video and internet streaming, where alternatives with more favorable terms gained traction.

The lapse of these patents now permits royalty-free encoding and decoding implementations, particularly benefiting open-source libraries and legacy hardware systems that previously avoided full compliance due to cost. This shift holds potential for renewed use in low-cost devices, such as budget consumer electronics or embedded systems in developing regions, where MPEG-4 Part 2's balance of compression efficiency and low computational demands remains viable despite its age. However, given the superior performance of successor standards like H.264/AVC (MPEG-4 Part 10) and HEVC, which offer better efficiency without ongoing royalty burdens in many scenarios, no significant development of new MPEG-4 Part 2 encoders or decoders is anticipated.

On a broader scale, the expiration aligns with the transition of earlier MPEG technologies, such as MPEG-2, into royalty-free status, promoting unrestricted access to foundational video compression tools and easing the decoding of archival media content encoded in these formats for preservation and research purposes. This development reduces legal risks for content distributors and software developers handling historical digital libraries, fostering greater interoperability in multimedia ecosystems.

Criticisms and Limitations

Technical Shortcomings

One notable technical shortcoming of MPEG-4 Part 2 is its lack of an in-loop deblocking filter, which results in prominent blocking artifacts, particularly at low bitrates. Unlike H.264/AVC, which employs an adaptive in-loop filter to mitigate these artifacts during the decoding loop, MPEG-4 Part 2 relies on optional post-processing filters that operate outside the loop and are less effective at reducing visible discontinuities between blocks.

The inclusion of Global Motion Compensation (GMC) in the Advanced Simple Profile (ASP) introduces additional computational complexity due to parameter estimation and sprite warping, compared to the standard prediction modes in the Simple Profile (SP). While GMC enhances efficiency for scenes with camera panning or zooming by modeling global motion with warped sprites, its implementation demands intensive arithmetic operations, often increasing decoder complexity without proportional gains in most applications.

MPEG-4 Part 2 also exhibits limitations in handling high-resolution content due to its rudimentary scalability features, which do not support efficient layered coding for progressive resolution enhancement. Although basic temporal and spatial scalability tools exist, they provide only marginal performance benefits and are not optimized for resolutions beyond standard definition, leading to inefficient bitrate allocation and reduced quality in demanding scenarios. The standard's reliance on Variable-Length Coding (VLC) for entropy encoding further hampers compression efficiency: VLC's fixed code tables lack the adaptive context modeling of H.264/AVC's CABAC, contributing to that standard's overall superior coding efficiency, especially for low-probability events, and resulting in higher bitrates for equivalent quality.

Error concealment mechanisms in MPEG-4 Part 2 are inadequate for transmission over noisy channels, as the standard provides only basic spatial and temporal replacement strategies that fail to robustly recover from packet losses or bit errors. This vulnerability stems from the absence of advanced error-resilient tools like slice-level partitioning or redundant slices, leading to propagated artifacts in decoded frames. The object-based coding paradigm, a core feature of MPEG-4 Part 2 intended for content manipulation and interactivity, remains underutilized owing to its high computational complexity in segmentation, shape coding, and sprite handling. Despite enabling arbitrarily shaped video objects, the associated overhead in encoder and decoder processing has limited its adoption to niche applications, with rectangular video modes dominating practical use. Overall, these design choices contribute to ASP's inferior performance, requiring approximately 50% higher bitrates than H.264/AVC to achieve comparable PSNR levels across various sequences and resolutions.

Adoption Challenges

The adoption of MPEG-4 Part 2 was significantly hindered by the complexities of its licensing structure, managed through the MPEG-4 Visual Patent Portfolio License and involving numerous licensors. This arrangement required implementers to navigate multiple agreements or individual negotiations with patent holders outside the pool, creating administrative and financial barriers that delayed widespread integration into products and services. In the early 2000s, industry concerns over the proposed licensing terms amplified fears of high costs, further dampening enthusiasm for the standard among multimedia developers and hardware manufacturers.

A key market shift occurred as H.264 (MPEG-4 Part 10) gained prominence due to its superior compression efficiency, reducing bandwidth needs by up to 50% compared to MPEG-4 Part 2, coupled with a more streamlined single-pool licensing model that simplified royalty payments. This transition confined MPEG-4 Part 2 primarily to niche applications, such as DivX-encoded home videos, where its capabilities sufficed for consumer playback but lacked the versatility for broader streaming or broadcasting demands. Consequently, the standard experienced lackluster uptake among content distributors and device makers.

Compatibility challenges exacerbated these issues, with inconsistent decoder support across platforms before 2010 leading to unreliable playback and integration hurdles in diverse ecosystems. High licensing costs added another layer of deterrence; for large-scale deployments, fees were capped at approximately $2 million per year for the MPEG-4 Part 2 pool, representing a substantial recurring investment for manufacturers. Experts such as Ben Waggoner highlighted the "patent thicket" surrounding standards like MPEG-4 Part 2 as a major barrier, describing its licensing route as a "profound dead end" because the entangled patent rights stifled innovation and adoption. Even following the expiration of most MPEG-4 Part 2 patents in 2024, with the remaining few expiring in 2026 in Brazil, the standard remains overshadowed by more efficient successors like HEVC and AV1 in contemporary applications.
