
Intra-frame coding

Intra-frame coding, also known as intra-coding, is a video compression technique that encodes each individual frame independently by exploiting spatial redundancies within the frame itself, treating it similarly to a standalone image without referencing data from other frames. This method reduces file sizes and bitrates by eliminating redundant pixel information, such as repeated colors or patterns, through processes like transform coding and quantization. Intra-frame coding plays a central role in major video compression standards, including the earlier MPEG standards, H.264/AVC (also known as MPEG-4 Part 10), HEVC (H.265), AV1 (2018), and VVC (H.266, 2020), where it is used to create I-frames (intra-coded frames) that serve as key access points for decoding and editing in a video sequence. Most of these standards were developed jointly by ITU-T's Video Coding Experts Group (VCEG) and ISO/IEC's Moving Picture Experts Group (MPEG), evolving from earlier technologies such as H.262/MPEG-2 (1994), with each generation roughly doubling compression efficiency while maintaining quality. In H.264/AVC, finalized in 2003, intra-frame coding employs advanced spatial prediction techniques, including nine prediction modes for 4x4 luma blocks, four for 16x16 luma blocks, and four for chroma, which predict pixel values from adjacent neighboring pixels to minimize residual data before transformation. This approach uses a 4x4 integer transform instead of the 8x8 discrete cosine transform (DCT) of prior standards, reducing blocking artifacts and enabling exact reconstruction without floating-point rounding errors. Key advantages of intra-frame coding include enhanced error resilience, as corruption in one frame does not propagate to others, and simplified editing and random access to specific frames in a stream. However, it is less bandwidth-efficient than inter-frame coding for sequences with temporal redundancy, such as low-motion video, because it does not exploit similarities across frames. Despite this, its integration with deblocking filters in standards like H.264 further improves visual quality by smoothing block boundaries after prediction and quantization.

Introduction

Definition and Principles

Intra-frame coding is a data compression technique applied to individual frames in video or image sequences, operating in either a lossless or lossy manner to reduce file sizes by exploiting spatial redundancies—correlations between adjacent pixels—within each frame independently of others. This method treats every frame as a standalone entity, similar to still-image compression, enabling efficient encoding without reliance on temporal information from preceding or subsequent frames. At its core, intra-frame coding relies on intra-prediction, which estimates pixel values in a given region from surrounding reconstructed samples within the same frame, thereby minimizing spatial redundancy by assuming higher correlation among nearby pixels. Following prediction, the differences between actual and predicted values are transformed into a frequency-domain representation to further decorrelate the data and facilitate efficient encoding. These principles underpin the method's ability to achieve compression ratios suitable for storage and transmission while preserving essential visual details. The basic workflow begins with dividing the frame into smaller blocks, typically of sizes such as 8×8 or 16×16 pixels, to enable localized processing. Intra-prediction is then applied block-wise to generate approximations, after which the residuals undergo transformation, quantization to discard less perceptible high-frequency components in a lossy setup, and finally entropy coding to assign shorter codes to more frequent symbols. This independence from temporal data distinguishes intra-frame coding, as it allows random access to any individual frame for decoding without dependencies on sequence context, making it ideal for scenarios requiring frame-specific retrieval. In contrast to inter-frame coding, which leverages redundancies across multiple frames, intra-frame coding focuses solely on spatial correlations within each frame.
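The block-wise workflow described above (predict, form the residual, quantize, reconstruct) can be sketched in a few lines. This toy example uses a simple DC prediction and a uniform quantizer, both simplifications rather than any standard's exact design, to show how the decoder recovers a block to within the quantization step:

```python
import numpy as np

def dc_predict(left, top):
    """DC intra prediction: fill the block with the mean of its
    reconstructed left-column and top-row neighbours."""
    return np.full((8, 8), (left.mean() + top.mean()) / 2.0)

def encode_block(block, left, top, step=8):
    """Predict, form the residual, and uniformly quantize it (the lossy step)."""
    pred = dc_predict(left, top)
    q_residual = np.round((block - pred) / step)
    return pred, q_residual

def decode_block(pred, q_residual, step=8):
    """Dequantize the residual and add the prediction back."""
    return pred + q_residual * step

rng = np.random.default_rng(0)
block = rng.integers(100, 120, size=(8, 8)).astype(float)  # smooth region
left = np.full(8, 110.0)   # reconstructed column to the left of the block
top = np.full(8, 110.0)    # reconstructed row above the block

pred, q = encode_block(block, left, top)
recon = decode_block(pred, q)
assert np.abs(recon - block).max() <= 4   # error bounded by step / 2
```

With a finer quantization step the reconstruction error shrinks at the cost of more bits per residual, which is exactly the lossy trade-off the text describes.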

Historical Development

The origins of intra-frame coding trace back to the 1970s, when differential pulse-code modulation (DPCM) was adapted for image coding to exploit spatial redundancy by predicting pixel values from neighboring samples within a single frame. This approach, initially patented for general signal coding in the early 1950s but applied to digital images in the early 1970s, marked an early shift toward efficient still-image encoding without temporal dependencies. By the 1980s, research advanced to block-based methods, incorporating techniques such as the discrete cosine transform (DCT), proposed by Nasir Ahmed in 1972, to decorrelate data in fixed-size blocks for better compression ratios. A pivotal milestone came with the JPEG standard, finalized in 1992 by the Joint Photographic Experts Group (JPEG) under ISO/IEC JTC1, which established the first widely adopted intra-frame image codec using 8x8 DCT blocks followed by quantization and entropy coding. In video contexts, intra-frame coding was introduced earlier through H.261 in 1990, which employed DCT-based intra modes for standalone frames in low-bitrate videoconferencing, providing a foundation for hybrid video codecs. This was extended in MPEG-1 (ISO/IEC 11172), published in 1993 by the Moving Picture Experts Group (MPEG) under the same ISO/IEC JTC1 umbrella, where I-frames used similar DCT intra-coding to anchor group-of-pictures structures for digital storage media like Video CDs. The digital video boom of the 1990s, fueled by consumer adoption of Video CDs and early streaming, rapidly propelled these standards into widespread use. Subsequent evolutions focused on efficiency gains in intra coding. H.264/AVC, jointly developed by VCEG and MPEG and finalized in 2003, introduced directional intra-prediction modes (up to 9 for 4x4 blocks) to reduce residual data, achieving about 50% better compression than prior standards for intra-coded content. HEVC (H.265), standardized in 2013, further refined intra coding with 35 prediction modes for luma (33 angular plus planar and DC), larger block sizes up to 64x64, and a planar mode for smooth regions, yielding up to 50% bitrate savings over H.264 for high-resolution video.
The latest advancement, VVC (H.266), approved in 2020, enhances intra-frame efficiency with 67 prediction modes and matrix-based intra prediction, targeting 30-50% further compression gains for emerging applications like 8K and immersive media. These developments by ITU-T and ISO/IEC JTC1 committees have continually adapted intra-frame coding to meet growing demands for bandwidth-efficient visual data.

Technical Foundations

Spatial Redundancy and Compression Basics

Spatial redundancy in images arises from the statistical dependencies among neighboring pixels, which allow for efficient data reduction without significant loss of perceptual quality. This redundancy manifests in two primary forms: spatial correlations, where adjacent pixels exhibit high similarity due to horizontal and vertical patterns in natural scenes, such as smooth gradients or edges, and frequency-based redundancies, where low-frequency components dominate the energy content of typical images, carrying the bulk of structural information while high-frequency details contribute less to perceived quality. The compression pipeline for intra-frame coding begins with partitioning the frame into smaller blocks, such as macroblocks or coding units, to enable localized processing and prediction. Intra-prediction then estimates the value of each block based on surrounding reconstructed samples within the same frame, exploiting local correlations to generate a predicted block. The residual is subsequently calculated as the difference between the original and predicted blocks, capturing only the unpredicted variations for further compression. Quantization follows by scaling down the transform coefficients of the residual, effectively discarding less perceptible high-frequency details to achieve data reduction while minimizing visual distortion. This process incorporates rate-distortion optimization, which balances the trade-off between bitrate and quality by selecting quantization parameters that minimize a cost function combining distortion metrics and rate constraints. Finally, entropy coding compresses the quantized data by assigning shorter codes to more frequent symbols, leveraging the non-uniform symbol distribution resulting from redundancy removal. Common techniques include Huffman coding, which uses variable-length prefix codes based on symbol frequencies, and arithmetic coding, which achieves finer granularity by encoding the entire sequence into a single fractional number between 0 and 1. These basics have been historically applied in standards like JPEG for still images.
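As a concrete illustration of the entropy-coding step, the following sketch (Python standard library only; a toy, not the table-driven coder a real codec would use) builds a Huffman code from symbol frequencies and confirms that more frequent symbols receive shorter codewords:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table by repeatedly merging the two
    least-frequent subtrees, prefixing '0'/'1' to their codewords."""
    freq = Counter(data)
    # Heap entries: (frequency, tiebreak, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("aaaaaaabbbccd")       # 'a' is most frequent
assert len(codes["a"]) < len(codes["d"])     # frequent symbols get shorter codes
```

Arithmetic coding, by contrast, would represent the whole string as a single fractional interval, gaining efficiency when symbol probabilities are far from powers of one half.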

Core Algorithms and Transforms

The discrete cosine transform (DCT) serves as a foundational algorithm in intra-frame coding by exploiting spatial redundancy through a frequency-domain representation, concentrating image energy in low-frequency coefficients for efficient compression. The 2D DCT applied to an N \times N block is defined as: C(u,v) = \alpha(u)\alpha(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \cos\left[\frac{\pi (2x+1)u}{2N}\right] \cos\left[\frac{\pi (2y+1)v}{2N}\right], where \alpha(0) = \sqrt{1/N}, \alpha(k) = \sqrt{2/N} for k > 0, and f(x,y) denotes the input pixel values. This transform achieves energy compaction by mapping correlated spatial data into a set of decorrelated coefficients, where most energy resides in the low-frequency components near (u,v) = (0,0), allowing higher-frequency coefficients to be discarded or quantized with minimal perceptual loss. Intra-prediction modes enhance DCT efficiency by estimating pixel values from neighboring blocks within the same frame, reducing residual data before transformation. In standards like H.264/AVC, nine directional modes for 4x4 luma blocks include horizontal, vertical, and diagonal predictions, each extrapolating pixels along a specific angle to minimize residuals. For instance, the vertical mode predicts a pixel at position (x,y) as p(x,y) = s(x,-1), where s(x,-1) is the neighboring sample directly above, while the horizontal mode uses p(x,y) = s(-1,y) from the column to the left; diagonal modes predict using weighted averages of neighboring samples along the prediction direction, such as the down-left diagonal mode extrapolating from the upper and left neighbors. These modes are selected via rate-distortion optimization to best approximate local textures. Quantization follows to control bitrate by scaling the transform coefficients, introducing controlled distortion in lossy compression: frequency-dependent scaling and rounding are applied using matrices derived from the quantization parameter (QP), effectively discarding fine details in high frequencies while preserving the low-frequency structure essential for image quality.
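A minimal sketch of three of these H.264-style 4x4 modes follows (vertical, horizontal, and DC only; the rounding and coordinate conventions are simplified relative to the standard's exact specification):

```python
import numpy as np

def predict_4x4(mode, top, left):
    """Toy versions of three H.264 4x4 luma intra modes.
    top:  4 reconstructed samples in the row above the block
    left: 4 reconstructed samples in the column to its left"""
    if mode == "vertical":      # copy the row above down each column
        return np.tile(top, (4, 1))
    if mode == "horizontal":    # copy the left column across each row
        return np.tile(left[:, None], (1, 4))
    if mode == "dc":            # flat block at the rounded mean of all neighbours
        return np.full((4, 4), (top.sum() + left.sum() + 4) // 8)
    raise ValueError(mode)

top = np.array([10, 20, 30, 40])
left = np.array([50, 60, 70, 80])
print(predict_4x4("vertical", top, left)[3])   # → [10 20 30 40]
```

An encoder would compute the residual for each candidate mode and keep the one whose rate-distortion cost is lowest, as the text describes.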
Alternative transforms address limitations of floating-point DCT implementations, such as rounding errors and drift between encoder and decoder. Integer DCT approximations, as in H.264/AVC, replace the floating-point transform with integer multiplications and shifts to enable exact inverses and efficient implementation without floating-point operations. For lossless intra-frame coding, transforms like the Cohen-Daubechies-Feauveau (CDF) 5/3 wavelet filter in JPEG 2000 provide reversible integer-to-integer mappings, decomposing images into subbands for progressive transmission while ensuring perfect reconstruction.
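The energy-compaction claim can be checked directly by transcribing the 2D DCT formula above into code (a naive numpy version for clarity, not the fast or integer variants used in practice):

```python
import numpy as np

def dct2(block):
    """Naive 2D DCT-II of an NxN block, transcribing the textbook formula."""
    n = block.shape[0]
    k = np.arange(n)
    alpha = np.full(n, np.sqrt(2.0 / n))
    alpha[0] = np.sqrt(1.0 / n)
    # basis[u, x] = alpha(u) * cos(pi * (2x + 1) * u / (2N))
    basis = alpha[:, None] * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    return basis @ block @ basis.T   # separable: transform rows, then columns

# A smooth gradient block: its energy should concentrate near (u, v) = (0, 0)
block = np.add.outer(np.arange(8.0), np.arange(8.0))
coeffs = dct2(block)
energy = coeffs ** 2
low_share = energy[:2, :2].sum() / energy.sum()
assert low_share > 0.99   # almost all energy sits in the low-frequency corner
```

Because this DCT is orthonormal, total energy is preserved; quantizing away the tiny high-frequency coefficients of such a block therefore costs almost nothing perceptually.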

Implementation in Codecs

Role in Image Compression Standards

Intra-frame coding serves as the foundational mechanism in several key image compression standards, enabling efficient reduction of spatial redundancy within individual images without reliance on temporal data. The baseline JPEG standard, defined in ISO/IEC 10918-1, employs a complete intra-frame process that divides images into 8x8 blocks, applies the discrete cosine transform (DCT) to each block, quantizes the coefficients, and reorders them using zigzag scanning to prioritize low-frequency components. The DC coefficient, representing the block's average intensity, is encoded differentially across blocks, while AC coefficients undergo run-length encoding for zero runs followed by Huffman coding using separate tables for DC differences and AC amplitudes to achieve variable-length compression. The file structure incorporates markers such as SOI (Start of Image, 0xFFD8) to initiate the file and SOS (Start of Scan, 0xFFDA) to precede the compressed scan data, ensuring modular parsing and compatibility across applications. Building on transform-based approaches, the JPEG 2000 standard (ISO/IEC 15444-1) utilizes wavelet transforms for intra-frame coding, decomposing the image into subbands and coding coefficients with embedded bit-plane techniques that evolved from the embedded zerotree wavelet (EZW) and set partitioning in hierarchical trees (SPIHT) algorithms. EZW, introduced by Shapiro, exploits the hierarchical similarity among wavelet coefficients by treating insignificant trees as single symbols for progressive bitstream generation, while SPIHT refines this with spatial orientation trees for improved efficiency in identifying significant coefficients. This enables transmission by resolution or quality, and supports lossless compression through reversible integer transforms, such as the 5/3 LeGall filter, which map integers to integers without information loss. Other standards incorporate intra-frame techniques tailored for specific needs, such as lossless preservation.
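The zigzag reordering can be reproduced compactly. This sketch uses a diagonal-sorting trick equivalent to JPEG's fixed 8x8 scan table, showing low-frequency coefficients emerging first so that trailing zeros group together for run-length coding:

```python
import numpy as np

def zigzag_indices(n=8):
    """JPEG-style zigzag order: walk anti-diagonals, alternating direction,
    so low-frequency (row, col) positions come first."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

coeffs = np.zeros((8, 8), int)
coeffs[0, 0], coeffs[0, 1], coeffs[1, 0] = 50, -3, 2   # only low frequencies survive quantization
scan = [coeffs[r, c] for r, c in zigzag_indices()]
print(scan[:4])                       # → [50, -3, 2, 0]
assert all(v == 0 for v in scan[3:])  # one long zero run, ideal for RLE
```

In the real codec, the leading value (the DC coefficient) is pulled out and coded differentially against the previous block before the AC run-length pass.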
The PNG format (ISO/IEC 15948) relies on intra-frame compression using the DEFLATE algorithm, which combines LZ77 dictionary-based matching to replace repeated sequences with back-references and Huffman coding for entropy reduction on literals and distances, ensuring perfect reconstruction for raster images. Similarly, WebP employs VP8-derived intra modes with arithmetic entropy coding, where blocks are predicted from neighboring pixels using directional modes (e.g., horizontal, vertical, or diagonal) before residual encoding, enhancing compression for web-optimized images while supporting both lossy and lossless variants. In lossy modes, these standards exhibit typical compression ratios of 10:1 to 20:1 for baseline JPEG, balancing file size against quality: higher ratios introduce artifacts but maintain acceptable peak signal-to-noise ratio (PSNR) values around 30-40 dB for visually lossless results. JPEG 2000 often achieves superior PSNR trade-offs at equivalent ratios due to its wavelet foundation, preserving more high-frequency details compared to DCT-based methods.
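PNG's losslessness rests on the DEFLATE stage, and a quick round-trip with Python's `zlib` module (which implements DEFLATE) demonstrates both the size reduction on repetitive pixel data and the bit-exact reconstruction. Note that real PNG additionally applies per-scanline prediction filters before DEFLATE, omitted here:

```python
import zlib

# Repetitive "pixel" data, as in flat or patterned image regions
pixels = bytes(range(256)) * 64             # 16 KiB with strong repetition
compressed = zlib.compress(pixels, level=9)

assert zlib.decompress(compressed) == pixels   # bit-exact reconstruction
assert len(compressed) < len(pixels) // 10     # large savings on repetitive data
```

The per-scanline filters PNG adds on top of this serve the same role as intra-prediction: they convert pixel values into small residuals that DEFLATE compresses far better than raw samples.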

Integration in Video Compression Frameworks

In hybrid video compression frameworks, intra-frame coding forms the foundation for I-frames, which act as independent reference points within group of pictures (GOP) structures to facilitate random access, error recovery, and scene transitions. These I-frames are typically inserted every 12 to 15 frames in GOP configurations, with the exact placement depending on the desired balance between compression efficiency and decoding flexibility. This periodic structure ensures that decoders can start playback or seek to specific points without relying on prior inter-frame dependencies, while also resetting temporal prediction chains at scene changes. Video standards have evolved intra-frame techniques to better exploit spatial redundancy in these key frames. In H.264/AVC, intra prediction supports 4x4 luma blocks with 9 directional modes, 16x16 luma blocks with 4 modes (vertical, horizontal, DC, and plane), and 4 modes for 8x8 chroma blocks to handle color components efficiently. HEVC (H.265) extends this with 35 intra prediction modes per coding unit, including 33 angular directions for precise edge modeling, plus planar and DC modes, with block sizes up to 64x64, improving efficiency for high-resolution content. Newer standards further advance these capabilities: VVC (H.266), standardized in 2020, increases this to 67 intra modes (65 angular plus planar and DC) for blocks up to 128x128, enhancing prediction accuracy and compression for ultra-high-definition content. Similarly, AV1 (2018) supports up to 56 directional modes plus non-directional and chroma-from-luma prediction, optimized for royalty-free web and streaming applications. These adaptations, often using the discrete cosine transform (DCT) as in image coding, allow intra-coded blocks to predict from neighboring pixels within the frame. Due to the absence of temporal prediction, I-frames demand significantly higher bit allocation—often 5 to 10 times that of P- or B-frames—to maintain comparable quality, as they encode full spatial details without referencing other frames.
Rate control strategies in codecs like H.264 and HEVC dynamically adjust quantization parameters and bit budgets across GOPs to mitigate this overhead, ensuring overall stream bitrate stability while prioritizing I-frame fidelity. For error resilience in transmission over unreliable channels, intra-refresh patterns integrate intra coding into inter frames by periodically forcing macroblocks or slices to use intra modes, gradually refreshing the entire frame to contain error propagation without full I-frames. In H.264, this often employs vertical or cyclic refresh lines, while HEVC supports similar techniques with larger coding units to limit drift in predicted frames. Such methods enhance robustness in low-delay applications like streaming, balancing error resilience with minimal bitrate increase.
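The two placement strategies described here, periodic I-frames versus gradual intra refresh, can be contrasted with a small sketch (the GOP length and row counts are illustrative choices, not values mandated by any standard):

```python
def frame_types(n_frames, gop=12):
    """Frame-type pattern for a simple IPPP... GOP: one I-frame every
    `gop` frames with P-frames in between (B-frames omitted for brevity)."""
    return ["I" if i % gop == 0 else "P" for i in range(n_frames)]

def cyclic_intra_refresh(n_rows, frame_idx):
    """Row-based intra refresh: instead of whole I-frames, force one row of
    macroblocks per frame into intra mode, cycling through the frame."""
    return frame_idx % n_rows          # index of the intra-coded row this frame

print("".join(frame_types(14)))                          # → IPPPPPPPPPPPIP
print([cyclic_intra_refresh(9, f) for f in range(10)])   # → [0, 1, ..., 8, 0]
```

After `n_rows` frames of cyclic refresh, every region of the picture has been re-coded in intra mode, bounding how long a transmission error can persist without ever paying for a full-size I-frame.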

Advantages, Limitations, and Comparisons

Key Benefits and Use Cases

Intra-frame coding provides random access to individual frames by allowing each to be decoded independently without reliance on preceding or subsequent frames, which is essential for efficient editing workflows and quick seeking during playback. This independence eliminates the need to decode entire sequences, streamlining operations in environments where frames must be isolated and manipulated. A primary benefit is enhanced error resilience, as the absence of temporal dependencies prevents the propagation of errors or artifacts from one frame to the next, making it particularly suitable for transmission over unreliable channels such as wireless networks. In these scenarios, intra-coded frames limit damage to the affected frame alone, reducing visible distortions in streaming applications prone to packet loss. The placement of intra-frames at the start of a group of pictures (GOP) further supports this by serving as recovery points in error-prone streams. Key use cases include still-image extraction from video sequences, where intra-coded frames can be output directly as standalone images without decoding dependencies, facilitating tasks like thumbnail generation or archiving. Intra-frame coding also excels in low-latency applications such as video conferencing, where independent frame processing minimizes buffering delays and supports real-time interaction over variable networks. Additionally, in forensic video analysis, the ability to isolate and examine single frames independently aids detailed scrutiny without interference from temporal artifacts. Quantitative evaluations in H.264 demonstrate improved robustness, with intra-frame approaches achieving up to 2.5 dB higher peak signal-to-noise ratio (PSNR) under high packet-loss rates compared to inter-frame methods reliant on motion compensation, enabling 20-30% better visual recovery in simulated tests.
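PSNR, the metric quoted above, is straightforward to compute. This helper uses the standard definition for 8-bit data; the synthetic test frame and noise level are arbitrary choices for illustration:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit imagery."""
    err = original.astype(float) - reconstructed.astype(float)
    mse = np.mean(err ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(frame + rng.normal(0.0, 4.0, frame.shape), 0, 255)

assert psnr(frame, frame) == float("inf")     # identical frames: no distortion
assert 33.0 < psnr(frame, noisy) < 40.0       # sigma≈4 noise lands in the mid-30s dB
```

On this decibel scale a 2.5 dB gain, as cited for intra-frame coding under packet loss, corresponds to roughly a 44% reduction in mean squared error.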

Drawbacks and Contrasts with Inter-frame Coding

Intra-frame coding suffers from higher bitrate requirements because it relies solely on spatial prediction within individual frames, without exploiting temporal redundancies across frames; each intra-coded frame typically requires 2 to 5 times more bits than an inter-coded frame of comparable quality. This lack of temporal exploitation leads to larger overall file sizes and increased bandwidth demands, particularly in video sequences where frame-to-frame similarities are prevalent. A key limitation of intra-frame coding is its compression inefficiency for scenes with motion or subtle temporal changes, such as gradual shifts in static backgrounds, where inter-frame methods excel by using motion estimation to predict and encode only the differences between frames. In contrast, inter-frame coding employs block matching, where motion vectors are determined by searching for the best match between blocks in consecutive frames, typically by minimizing the sum of absolute differences (SAD), defined as \sum |original - predicted| over the block pixels. This temporal approach yields substantially better overall bitrate efficiency than pure intra-frame coding, with inter-frame integration often achieving 50-70% bitrate savings in standards like MPEG by reducing redundant data across frames. To mitigate these drawbacks, modern video codecs employ hybrid group of pictures (GOP) structures that incorporate intra-frames periodically within a sequence dominated by inter-frames, balancing compression efficiency with requirements for random access and error resilience. This strategic placement of intra-frames, such as every 12-15 frames in typical GOP configurations, allows inter-frame prediction to dominate for bitrate savings while using intra-frames as recovery points to limit error propagation.
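The SAD-driven block search that gives inter-frame coding its advantage can be sketched as an exhaustive search over a small window (a toy full search; real encoders use fast search patterns and sub-pixel refinement):

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences, the block-matching cost from the text."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def best_match(ref, block, center, radius=4):
    """Full search in a small window of the reference frame: return the
    motion vector (dy, dx) minimizing SAD, plus its cost."""
    h, w = block.shape
    cy, cx = center
    best, best_cost = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                cost = sad(ref[y:y+h, x:x+w], block)
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
    return best, best_cost

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
block = ref[10:18, 13:21]                  # region displaced (+2, +3) from (8, 10)
mv, cost = best_match(ref, block, center=(8, 10))
print(mv, cost)   # → (2, 3) 0
```

When the match is exact, as here, the encoder transmits only the motion vector and a zero residual, which is precisely the redundancy that pure intra-frame coding cannot exploit.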

Applications and Future Directions

Practical Deployments

Intra-frame coding is extensively deployed in broadcast and streaming applications, where I-frames and Instantaneous Decoder Refresh (IDR) frames serve as key reference points in protocols like HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH). These frames define segment boundaries, enabling independent decoding of video chunks and supporting dynamic quality adjustments to match varying network conditions without propagating errors across segments. Major streaming platforms leverage High Efficiency Video Coding (HEVC), which incorporates sophisticated intra-frame prediction techniques, to deliver content with reduced bandwidth demands—achieving up to 50% compression efficiency over prior codecs while preserving visual fidelity. In consumer devices, intra-frame coding underpins image capture in digital cameras and smartphones, particularly in burst mode, where rapid successive shots are compressed individually to facilitate quick storage and post-processing without relying on temporal data. Game consoles likewise apply intra-frame compression for screenshots, optimizing storage for high-resolution gameplay captures while minimizing file sizes for sharing. Medical imaging relies on intra-frame coding within the Digital Imaging and Communications in Medicine (DICOM) standard, employing lossless methods like JPEG-LS and JPEG 2000 for X-ray and MRI data storage. These approaches ensure exact reconstruction of images, critical for diagnostic integrity, with compression ratios often exceeding 2:1 for volumetric scans. Industry adoption underscores the prevalence of intra-frame techniques; for example, JPEG accounted for about 50% of web images by volume in 2020, reflecting its role as the baseline for static content delivery. In 5G video delivery, I-frames contribute to bitrate variability, as their larger size increases average rates in short GOP structures, though 5G's elevated throughput supports this for low-latency applications.
Intra-frame coding also bolsters error resilience, aiding deployments in mobile networks susceptible to intermittent connectivity.

Emerging Techniques and Research

Recent advancements in intra-frame coding have increasingly incorporated machine learning techniques to enhance prediction accuracy and reduce encoding complexity. Neural network-based intra-prediction methods, such as convolutional neural networks (CNNs) for mode selection, have been explored to optimize encoding in codecs like VVC by predicting optimal partition modes and angular directions from spatial features, achieving reductions in encoding time with minimal bitrate loss. Similarly, autoencoders have emerged as a tool for learning adaptive transforms in intra-frame coding, where conditional autoencoder structures enable multi-mode prediction by training on neighboring contexts, resulting in improved efficiency for diverse content. These AI-driven approaches leverage end-to-end learning to capture complex spatial redundancies beyond traditional directional modes. In the Versatile Video Coding (VVC) standard, intra-prediction has been expanded to 67 intra modes, including 65 angular modes, along with tools that allow finer sub-block-level prediction within frames to better handle non-uniform textures. Complementing this, matrix-weighted intra prediction (MIP) applies low-rank matrix multiplications to downsampled boundary samples, enabling efficient prediction without full mode evaluation and contributing to VVC's overall 30-50% bitrate savings over HEVC for intra-coded content. Ongoing research also targets history-based complexity reduction techniques that utilize statistics from previously encoded blocks or frames to prune redundant mode decisions in VVC intra-coding, such as tracking CU partition patterns to skip exhaustive searches, achieving significant time savings with minimal impact on BD-rate.
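The MIP mechanism, a small matrix applied to a downsampled boundary, can be illustrated with a toy version. The matrix below is random and purely illustrative; VVC's actual MIP matrices are trained offline and fixed in the standard:

```python
import numpy as np

def mip_predict(top, left, rng):
    """Toy matrix-based intra prediction: downsample the boundary to a few
    samples, apply a small matrix, and reshape to the block.
    (Illustrative only; not VVC's trained matrices or exact normalization.)"""
    boundary = np.concatenate([top, left])          # 16 boundary samples
    reduced = boundary.reshape(4, -1).mean(axis=1)  # downsample to 4 samples
    A = rng.normal(0.0, 0.5, size=(16, 4))          # stand-in for a trained matrix
    offset = np.full(16, reduced.mean())            # anchor at the DC level
    return (A @ (reduced - reduced.mean()) + offset).reshape(4, 4)

rng = np.random.default_rng(3)
top = np.full(8, 100.0)
left = np.full(8, 100.0)
pred = mip_predict(top, left, rng)
assert np.allclose(pred, 100.0)   # a flat boundary yields a flat prediction
```

The appeal of the approach is cost: one small matrix-vector product replaces the per-angle sample interpolation that conventional directional modes require.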
Hybrid intra-inter frameworks in neural codecs represent another key direction, with unified models processing both frame types through shared autoencoder architectures, as demonstrated in recent studies that integrate temporal context into intra-prediction for seamless video compression, yielding improved PSNR at equivalent bitrates compared to separate intra-inter pipelines. Looking ahead, AI-driven intra-frame coding holds promise for substantial efficiency gains in high-resolution applications such as 8K video, augmented reality (AR), and virtual reality (VR), where neural methods could enable further bitrate reductions over current standards by 2030 through scalable learned representations tailored to immersive content demands.

References

  1. [1]
    Video Encoding - Interra Systems
    Intra-frame Coding (Intra-coding): Compresses each frame independently, focusing on spatial redundancy within the frame. Inter-frame Coding (Inter-coding): ...
  2. [2]
    [PDF] H.264/MPEG-4 AVC Video Compression Tutorial - UCLA CS
    The upcoming H.264/MPEG-4 AVC video compression standard promises a significant improvement over all previous video compression standards. In terms of coding ...
  3. [3]
    [PDF] The H.264/MPEG-4 Advanced Video Coding (AVC) Standard - ITU
    Jul 22, 2005 · Intra-frame. Prediction. Deblocking. Filter. Output. Video. Signal. Page 11. H.264/AVC July '05. Gary Sullivan. 10. Entropy. Coding. Scaling & ...
  4. [4]
    Interframe vs. Intraframe Compression - Verkada
    Intraframe compression (also called spatial compression) reduces the size of each individual frame independently by reducing redundant information within the ...
  5. [5]
    [PDF] An Optimized Template Matching Approach to Intra Coding in Video ...
    Intra-frame coding is a key component in video/image compression system. It predicts from previously recon- structed neighboring pixels to largely remove ...
  6. [6]
    None
    ### Summary of Intra-Frame Coding Principles from https://arxiv.org/pdf/2407.15730
  7. [7]
    Intra Coding of the HEVC Standard
    **Summary of Intra Coding Design Principles:**
  8. [8]
    Review of image compression technology: from traditional methods ...
    May 28, 2025 · Cutler of Bell Labs patented the DPCM system. After the 1970s, this method was widely applied in the field of image compression. The core ...
  9. [9]
    (PDF) A Brief History of Video Coding - ResearchGate
    As early as 1929, Ray Davis Kell described a form of video compression and was granted a patent for it. He wrote, "It has been customary in the past to transmit ...
  10. [10]
    The History of Video Compression Standards, From 1929 Until Now
    Jun 8, 2021 · Previously, DPCM was used for audio (and still is today). DPCM is a technique where you take samples of an image and then predict future sample ...Missing: origins | Show results with:origins
  11. [11]
    JPEG-1 standard 25 years: past, present, and future reasons for a ...
    Aug 31, 2018 · JPEG-1, developed in the late 1980s, is a successful standard due to its efficiency, versatility, robustness, and resilience, and is used in ...
  12. [12]
  13. [13]
    The true history of MPEG's first steps - Leonardo's Blog
    Dec 28, 2019 · In this article I will tell the (short) story of how the impossible happened. Video coding in the 1970's and 1980's. I was hired to do research ...
  14. [14]
    About MPEG
    The Moving Picture Experts Group (MPEG) is a set of working group of ISO/IEC in charge of the development of international standards for compression, ...<|control11|><|separator|>
  15. [15]
    [PDF] A Survey: Various Techniques of Image Compression - arXiv
    This type of redundancy is sometime also called spatial redundancy. This redundancy can be explored in several ways, one of which is by predicting a pixel value ...Missing: explanation | Show results with:explanation
  16. [16]
    [PDF] Haar Wavelet Based Approach for Image Compression and Quality ...
    In general, three types of redundancy can be identified: (a). Spatial Redundancy or correlation between neighboring pixel ... interaction among adjacent pixels.<|control11|><|separator|>
  17. [17]
    FastDINOv2: Frequency Based Curriculum Learning Improves ...
    Jul 4, 2025 · In natural images, low-frequency components dominate, carrying most ... Hybrid effects: Elastic transform, JPEG compression, pixelate, saturate, ...
  18. [18]
  19. [19]
    The JPEG still picture compression standard - IEEE Xplore
    The first international compression standard for continuous-tone still images, both grayscale and color.
  20. [20]
    [PDF] Image Compression Using the Discrete Cosine Transform
    The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components. It is widely used in image compression.
  21. [21]
    H.264/AVC 4x4 Transform and Quantization — Vcodex BV
    In H.264, 4x4 blocks use a scaled DCT, then quantization. The process is structured into a core and scaling part to minimize complexity.
  22. [22]
    [PDF] Image compression using wavelets and JPEG2000: a tutorial
    This paper presents a tutorial on the discrete wavelet transform (DWT) and introduces its application to the new JPEG2000* image compression standard. We start ...
  23. [23]
    [PDF] The JPEG Still Picture Compression Standard
    To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT-based ...
  24. [24]
    Description of Exif file format - MIT Media Lab
    In JPEG format, some of Markers describe data, then SOS(Start of stream) Marker placed. After the SOS marker, JPEG image stream starts and terminated by EOI ...
  25. [25]
    An overview of the JPEG 2000 still image compression standard
    Aug 9, 2025 · ... JPEG-2000 wavelet-based ... Invertible wavelet transforms that map integers to integers have important applications in lossless coding.
  26. [26]
    Embedded image coding using zerotrees of wavelet coefficients
    The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the ...
  27. [27]
    A new, fast, and efficient image codec based on set partitioning in ...
    Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our ...
  28. [28]
    [PDF] PNG (Portable Network Graphics) Specification Version 1.0 - W3C
    Feb 4, 2010 · The PNG format provides a portable, legally unencumbered, well-compressed, well-specified standard for lossless bitmapped image files. Although ...Missing: intra- | Show results with:intra-
  29. [29]
    RFC 1951 DEFLATE Compressed Data Format Specification ver 1.3
    This specification defines a lossless compressed data format that compresses data using a combination of the LZ77 algorithm and Huffman coding.
  30. [30]
    Compression Techniques | WebP - Google for Developers
    Aug 7, 2025 · WebP uses Arithmetic entropy encoding, achieving better compression compared to the Huffman encoding used in JPEG. VP8 Intra-prediction Modes.
  31. [31]
    RFC 9649 - WebP Image Format - IETF Datatracker
    Nov 18, 2024 · Lossy compression is achieved using VP8 intra-frame encoding [RFC6386]. The lossless algorithm (Section 3) stores and restores the pixel ...
  32. [32]
    [PDF] Image compression methods for efficient storage and transmission
    JPEG Lossy 10:1 - 20:1 20 - 40 Moderate Widely used; may introduce artifacts at high compression. JPEG 2000 Lossy 20:1 - 50:1 30 - 50 High Better quality and ...
  33. [33]
    JPEG2000: A review and its performance comparison with JPEG
    The JPEG2000 standard is a wavelet based image compression system that is capable of providing effective lossy and lossless compression.
  34. [34]
    [DOC] Joint Video Team (MPEG+ITU) Document
    Codec. MPEG-2. Sequence. “Cheer”, “Mobile”, “Flower”, “Football”. 720x480; interlaced; 30Hz; 30frames. QP. QI=QP=16; QB=18 (fixed). GOP structure. N=15; M=3.<|separator|>
  35. [35]
    H.264 : Advanced video coding for generic audiovisual services
    **Summary of Intra Prediction Modes in H.264 (ITU-T Rec. H.264)**
  36. [36]
    H.265 : High efficiency video coding
    **Summary of Intra Prediction in HEVC (H.265):**
  37. [37]
    [DOC] CONTRIBUTION - ITU
    This means that an I frame takes approximately ten times the number of Transport Units (RTP or MPEG) or IP packets than a B or P frame. Impact of lost packets.
  38. [38]
    [DOC] Joint Video Team (MPEG+ITU) Document
    Our proposed rate control scheme is composed of two layers: GOP layer rate control and frame layer rate control if the basic unit is selected as a frame. ...
  39. [39]
    [PDF] Adaptive Intra-Refresh for Low-Delay Error- Resilient Video Coding
    Besides offering low-delay, intra-refresh coding schemes also provide good error-resilience performance. Since the whole frame is completely refreshed after a ...
  40. [40]
    Adaptive intra-refresh for low-delay error-resilient video coding
    In this paper, we present an efficient intra-refresh cycle-size selection model depending on the network packet loss rate and the motion in the video content.
  41. [41]
    Low delay error resilience algorithm for H.265|HEVC video ...
    Nov 20, 2019 · Their proposed error resilience coding scheme is based on adaptive intra-refresh. The error resilience coding scheme selects the important ...
  42. [42]
    Video Intra Coding for Compression and Error Resilience: A Review
    Aug 10, 2025 · Intra-refresh coding, which embeds intra coded regions into inter frames can achieve a relatively smooth bit-rate and terminate the error ...
  43. [43]
    [PDF] Unequal packet loss resilience for fine-granular-scalability video
    However, since the packet loss probability for the FGS base-layer is lower than for the single-layer at the same transmission rate, FGS performance is less ...
  44. [44]
    Understanding Video Inter-Frame Compression Techniques - FastPix
    Jan 10, 2025 · Inter-frame compression works by encoding only the differences between frames (P-frames and B-frames) rather than encoding each frame independently (I-frames).
  45. [45]
    Error Resilient Coding Techniques for Video Delivery over Vehicular ...
    Oct 17, 2018 · Intra refresh is an error resilience technique that forces to periodically intra encode (refresh) certain frame areas in P frames, in order ...
  46. [46]
  47. [47]
    [PDF] Block matching algorithm based on Differential Evolution for motion ...
    In this procedure, the motion vector is obtained by minimizing the sum of absolute differences (SAD) produced by the MB of the current frame over a determined ...
  48. [48]
    Things You Wanted to Know About Compression but Were Afraid to ...
    Dec 4, 2017 · Intra-frame means that all the compression is done within that single frame and generates what is sometimes referred to as an i-frame. Inter- ...
  49. [49]
    Hybrid Coder - an overview | ScienceDirect Topics
    A hybrid coder is defined as a block-based architecture that integrates various coding concepts and has been universally adopted for applications in TV, ...
  50. [50]
    I, P, and B-frames - Differences and Use Cases Made Easy
    Dec 14, 2020 · Using I-frames for Refreshing Video Quality​​ I-frames are generally inserted to designate the end of a GOP (Group of Pictures) or a video ...
  51. [51]
    Decoding the Video Codec Wars: H.264, HEVC, and AV1 Compared ...
    Aug 26, 2024 · Netflix adopted HEVC to deliver 4K streaming content. This transition enabled Netflix to provide high-quality video while managing bandwidth ...
  52. [52]
    New Report Highlights Impact of HEVC Codec on Streaming Industry
    Dec 6, 2023 · According to the report, Amazon Prime, Apple TV+, Netflix, Hulu, Disney, NBC, Paramount and Warner Bros. all utilize HEVC for content delivery.
  53. [53]
    [PDF] Burst photography for high dynamic range and low-light imaging on ...
    approach to burst mode photography is to capture each image in the burst ... For each burst we include our merged raw output and final JPEG output.
  54. [54]
    Why, oh why did Nintendo use JPG for screenshots? - GameFAQs
    Apr 9, 2017 · Because PNG would take a lot more space than JPG for something like a videogame screenshot, and the Switch doesn't exactly have a 1TB harddrive ...
  55. [55]
    Does PS4 screenshot sharing downsample the game's graphics?
    Oct 12, 2016 · Well, your problem is that you've got your PS4 set up to take lossy/compressed screenshots. Set it to PNG.
  56. [56]
    The Current Role of Image Compression Standards in Medical ... - NIH
    This paper discusses the current status of these image compression standards in medical imaging applications together with some of the legal and regulatory ...
  57. [57]
    Media | 2021 | The Web Almanac by HTTP Archive
    Dec 15, 2021 · Format adoption​​ A pie chart breaking down each format's share of the web's images. JPEG comes in first, at 41.8%.
  58. [58]
    Video Encoding Tips for Optimized Latency & Bandwidth - Haivision
    Jul 13, 2022 · Choosing the right combination and number of I, P, and B frames is key to optimizing video quality. For the Makito X4, you can choose from ...
  59. [59]
    Dynamic optimizer — a perceptual video encoding optimization ...
    Mar 5, 2018 · H264/AVC [3] claimed 50% less bits than MPEG-2 [4] and HEVC claimed 50% less bits than AVC. Yet, in practical systems, these savings never quite ...