
Macroblock

A macroblock is the fundamental processing unit in block-based video compression standards, consisting of a 16×16 block of luminance (luma) samples and two corresponding 8×8 blocks of chrominance (chroma) samples in the common 4:2:0 color subsampling format. This structure enables efficient handling of spatial and temporal redundancy through techniques such as motion estimation and compensation, intra- and inter-prediction, and application of the discrete cosine transform (DCT) followed by quantization and entropy coding. Macroblocks form the basis of hybrid coding schemes that divide video frames into a grid for processing, allowing reduced bitrate while maintaining perceptual quality in applications like streaming, broadcasting, and storage.

The macroblock concept originated in the ITU-T H.261 standard, released in 1990 as the first practical digital video codec for low-bitrate videoconferencing over ISDN lines, where it was defined as a 16×16 luma block with 8×8 chroma components to support block-matching motion compensation. It was subsequently adopted and refined in successive standards, including MPEG-1 (1992) for CD-ROM video, MPEG-2 (1995) for DVD and digital TV, H.263 (1996) for improved low-bitrate internet video, and MPEG-4 Part 2 (1999) for object-based coding. The most widespread evolution occurred in H.264/AVC (Advanced Video Coding, 2003), a joint ITU-T and ISO/IEC standard that enhanced macroblock flexibility by allowing partitions into smaller sub-blocks (down to 4×4) for more precise motion vectors and transform sizes, achieving up to 50% bitrate savings over prior codecs.

In modern video coding, the fixed 16×16 macroblock has largely been supplanted by more adaptive structures, such as the coding tree units (CTUs) in HEVC/H.265 (2013), which support larger blocks up to 64×64 pixels for better efficiency in high-resolution content like 4K and 8K video. Nonetheless, macroblocks remain relevant in legacy systems, ongoing H.264 deployments, and as a foundational element influencing block-based partitioning in emerging standards like VVC/H.266 (2020), where they inform hierarchical coding decisions for ultra-high-definition and immersive media.

Fundamentals

Definition and Purpose

A macroblock serves as the fundamental processing unit in block-based video codecs, such as those defined in the ITU-T H.261 and H.264 standards. It typically comprises a 16×16 array of luma samples, along with associated chroma samples—such as two 8×8 arrays for the Cb and Cr components in the 4:2:0 color sampling format. This structure allows the macroblock to represent a compact spatial region within a video frame, facilitating localized analysis and manipulation of pixel data during compression.

The primary purpose of the macroblock is to enable efficient reduction of spatial and temporal redundancy by grouping pixels into discrete units suitable for motion estimation, intra- and inter-prediction, and transform coding. In motion estimation, for instance, the macroblock is matched against reference blocks from previous or future frames to compute motion vectors, exploiting temporal redundancy across video sequences. Similarly, spatial prediction within the macroblock leverages adjacent pixel correlations to minimize residual data, which is then transformed (e.g., via the discrete cosine transform) and quantized to further reduce bitrate while preserving essential visual information. This block-based approach originated in early standards like H.261 for videoconferencing and has been refined in subsequent codecs to achieve higher compression ratios.

Key benefits of using macroblocks include simplified parallelism in encoding and decoding pipelines, as operations are confined to fixed-size blocks rather than processing the entire frame holistically, which optimizes hardware and software implementations. This partitioning enhances overall efficiency by allowing adaptive techniques, such as variable block partitioning for better prediction accuracy, leading to improved video quality at lower bitrates without excessive computational overhead. In standard-definition video (e.g., 720×480 resolution), a single macroblock might cover a small detail like part of a face or a uniform background patch, demonstrating its role in balancing detail preservation with data reduction.
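As a rough illustration, the following Python sketch (the function name is invented for this example, not taken from any codec's reference software) computes the macroblock grid for a frame and the per-macroblock sample count in 4:2:0:

```python
# Illustrative sketch: macroblock grid for a frame, and the 4:2:0
# per-macroblock sample count described above.

def macroblock_grid(width: int, height: int, mb_size: int = 16):
    """Return (mbs_per_row, mbs_per_col, total), padding up to multiples of 16."""
    mbs_per_row = (width + mb_size - 1) // mb_size   # ceiling division
    mbs_per_col = (height + mb_size - 1) // mb_size
    return mbs_per_row, mbs_per_col, mbs_per_row * mbs_per_col

# A 720x480 standard-definition frame: 45 x 30 = 1350 macroblocks.
print(macroblock_grid(720, 480))  # (45, 30, 1350)

# Samples per 4:2:0 macroblock: 16*16 luma + two 8*8 chroma blocks = 384.
print(16 * 16 + 2 * (8 * 8))      # 384
```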

Historical Development

The macroblock concept emerged in the late 1980s amid the development of early block-based video codecs, addressing the need for efficient compression in bandwidth-limited telecommunication applications. It was first formalized in the ITU-T H.261 standard, ratified in 1990 for video telephony over ISDN lines at bitrates ranging from 64 to 2048 kbit/s. H.261 specified the macroblock as a 16×16 luma block accompanied by corresponding 8×8 chroma blocks, serving as the fundamental unit for differenced inter-frame coding through motion estimation and compensation. This structure set the template for subsequent standards, enabling practical transmission in resource-constrained environments.

Subsequent adoption expanded the macroblock's role across storage and broadcast media. The ISO/IEC MPEG-1 standard, released in 1993, incorporated H.261's 16×16 macroblock framework for CD-ROM-based video at approximately 1.5 Mbit/s, introducing bi-directional prediction to enhance temporal redundancy reduction for resolutions such as SIF (352×240 or 352×288). This was followed by MPEG-2 (ITU-T H.262), standardized in 1995 through joint ITU-T and ISO/IEC MPEG collaboration, which retained the fixed 16×16 macroblock while adding interlaced-scan support for digital television and DVD applications at 2–20 Mbps. These milestones reflected the era's hardware limitations, prioritizing algorithms that balanced compression efficiency with feasible real-time processing on 1990s-era processors.

The early 2000s brought evolutionary refinements driven by rising demands for streaming and higher resolutions, amid persistent bandwidth constraints. The H.264/AVC standard, finalized in 2003 by the Joint Video Team of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), preserved the 16×16 macroblock as the processing unit but introduced variable-size partitions down to 4×4 for more adaptive motion compensation, achieving roughly double the compression efficiency of prior standards. Building on this, the High Efficiency Video Coding (HEVC, ITU-T H.265) standard, approved in 2013, shifted from fixed macroblocks to larger Coding Tree Units (CTUs) up to 64×64 with recursive adaptive subdivisions, optimizing for HD and Ultra HD video while further reducing bitrate needs by about 50% compared to H.264 at equivalent quality.

Technical Specifications

Macroblock Structure

A macroblock serves as the fundamental processing unit in video compression standards like H.264/AVC, comprising 256 luma samples arranged in a 16×16 grid, accompanied by chroma samples in the YCbCr color space. In the prevalent 4:2:0 chroma subsampling format, which is widely used for standard-definition and high-definition video, the macroblock includes two 8×8 blocks—one for the blue-difference (Cb) component and one for the red-difference (Cr) component—resulting in 128 chroma samples overall. This structure totals 384 samples per macroblock, calculated as:

\text{Total samples} = 256 \, (Y) + 64 \, (Cb) + 64 \, (Cr) = 384

The YCbCr color space separates luminance (Y) from chrominance (Cb and Cr), enabling efficient compression by exploiting human visual sensitivity to brightness over color detail. H.264/AVC supports multiple chroma subsampling ratios to accommodate varying applications: 4:2:0, where chroma resolution is quartered relative to luma (common in consumer video); 4:2:2, with horizontal chroma subsampling by a factor of 2 (used in professional broadcast and editing workflows); and 4:4:4, preserving full chroma resolution (suited for high-fidelity graphics or screen content). For instance, in 4:2:2 format, each macroblock features 256 luma samples alongside 256 chroma samples (two 8×16 blocks of 128 samples each for Cb and Cr), resulting in 512 samples total. In 4:4:4 format, the macroblock includes two 16×16 blocks for Cb and Cr, each with 256 samples (512 chroma samples total, 768 samples per macroblock).

Macroblocks tile the video frame contiguously without overlap, ensuring complete coverage of the picture area. To facilitate this tiling, frame dimensions in luma samples are typically padded during preprocessing to multiples of 16 in both width and height, avoiding partial macroblocks at the edges. In progressive-scan video, all samples within a macroblock are processed uniformly as a single spatial unit. For interlaced video, however, the macroblock may be adaptively split into top-field and bottom-field components via macroblock-adaptive frame-field (MBAFF) coding, allowing independent processing of the interlaced lines to better handle motion artifacts.
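The per-format totals above can be verified with a short sketch; the chroma block dimensions below simply restate the text and are not drawn from reference code:

```python
# Per-macroblock sample counts for the three chroma formats discussed above.
# Chroma block dimensions per plane: 4:2:0 -> 8x8, 4:2:2 -> 8x16, 4:4:4 -> 16x16.

CHROMA_BLOCK = {"4:2:0": (8, 8), "4:2:2": (8, 16), "4:4:4": (16, 16)}

def samples_per_macroblock(fmt: str) -> int:
    cw, ch = CHROMA_BLOCK[fmt]
    luma = 16 * 16            # 256 Y samples
    chroma = 2 * cw * ch      # Cb plane + Cr plane
    return luma + chroma

for fmt in CHROMA_BLOCK:
    print(fmt, samples_per_macroblock(fmt))
# 4:2:0 384, 4:2:2 512, 4:4:4 768
```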

Subdivisions and Blocks

In video coding standards such as H.264/AVC, a macroblock is subdivided into smaller blocks to enable more flexible and efficient processing for prediction and transform coding. These subdivisions allow the encoder to adapt to varying content characteristics, such as using larger blocks for uniform areas and smaller ones for detailed regions like edges. For inter prediction, macroblocks can be partitioned into rectangular blocks of 16×16, 16×8, 8×16, and 8×8 sizes, with the 8×8 partitions further divisible into 8×4, 4×8, or 4×4 sub-partitions to refine motion compensation. Intra prediction, in contrast, operates on square blocks of 4×4 or 16×16 within the macroblock, facilitating directional spatial prediction.

Transform blocks in H.264 are square and applied to the residual data after prediction, typically using 4×4 or 8×8 integer transforms akin to the discrete cosine transform (DCT). These block sizes balance computational efficiency with compression performance, with 4×4 blocks capturing high-frequency details in complex textures and 8×8 blocks handling smoother areas more effectively. The choice of subdivision is determined by the encoder's rate-distortion optimization, which selects the partitions that minimize bitrate for a given quality level, often resulting in finer splits for high-motion or textured content.

Building on H.264, the HEVC (H.265) standard evolves the macroblock concept into larger coding tree units (CTUs) of up to 64×64 pixels, which are recursively partitioned using a quad-tree structure down to minimum blocks of 4×4. This hierarchical approach allows for greater adaptability, where coding units (CUs) derived from CTU splits serve as the basis for prediction blocks ranging from 64×64 to 4×4, including non-square options like 16×8 for motion compensation in irregular motion patterns. Transform blocks in HEVC extend to larger square sizes up to 32×32, also applying DCT-like operations to residuals, enabling better energy compaction in high-resolution video while maintaining finer granularity for detailed areas. The quad-tree partitioning promotes content-adaptive decisions, such as deeper splits around object edges to preserve sharpness without excessive bitrate overhead. The table below summarizes representative block sizes, and a toy partitioning sketch follows it.
Standard      | Prediction Block Examples                     | Transform Block Sizes
H.264/AVC     | 16×16, 16×8, 8×16, 8×8, 4×4                   | 4×4, 8×8
HEVC (H.265)  | 64×64 to 4×4 (including rectangles like 16×8) | 4×4 to 32×32
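To make the quad-tree idea concrete, here is a toy Python sketch of recursive CTU splitting; the variance threshold and split rule are invented stand-ins for the rate-distortion decisions a real HEVC encoder performs:

```python
# Toy quadtree split in the spirit of HEVC CTU partitioning. The variance
# test is an invented homogeneity heuristic, not the normative RDO decision.

import numpy as np

def quadtree_partition(block: np.ndarray, x: int, y: int,
                       min_size: int = 4, var_threshold: float = 100.0):
    """Recursively split a square block; return a list of (x, y, size) leaves."""
    size = block.shape[0]
    if size <= min_size or block.var() < var_threshold:
        return [(x, y, size)]            # homogeneous enough: keep as one unit
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            sub = block[dy:dy + half, dx:dx + half]
            leaves += quadtree_partition(sub, x + dx, y + dy,
                                         min_size, var_threshold)
    return leaves

ctu = np.random.randint(0, 256, (64, 64)).astype(float)
print(len(quadtree_partition(ctu, 0, 0)))  # number of leaf units (256 for pure noise)
```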

Compression Processing

Prediction Mechanisms

Prediction mechanisms in macroblock-based video coding exploit spatial and temporal redundancies to generate a predicted macroblock, from which a residual is computed and encoded. In standards like H.264/AVC, macroblocks employ both intra and inter prediction to minimize data transmission while preserving quality. The residual, defined as the difference between the current macroblock and its prediction, is given by:

\text{residual} = \text{current macroblock} - \text{predicted macroblock}

This equation captures the prediction error, which is subsequently transformed and quantized.

Intra prediction leverages spatial correlations within the same frame by estimating the current macroblock from neighboring encoded blocks. For luma components in H.264/AVC, two primary modes exist: intra 4×4, with 9 prediction modes (8 directional, including vertical, horizontal, and diagonal, plus DC), applied to smaller blocks for textured regions; and intra 16×16, with 4 modes (vertical, horizontal, DC, and plane), suited for uniform areas. These modes use extrapolated or averaged samples from adjacent blocks, with unavailable boundary samples handled by replication or default values to ensure decoder synchronization. Chroma intra prediction operates analogously on the whole chroma block (8×8 per component in 4:2:0) with 4 modes mirroring the intra 16×16 set. No prediction occurs across slice boundaries, preserving slice independence.

Inter prediction utilizes temporal redundancies by matching the current macroblock to blocks in one or more reference frames via motion vectors, enabling efficient encoding of motion-dominated scenes. In P-slices, prediction draws from a single reference list with possible partitions down to 4×4 sub-blocks; B-slices support bi-prediction from two lists for enhanced accuracy. Motion vectors, typically at quarter-pixel precision for luma (using 6-tap interpolation for half-samples and bilinear interpolation for quarter-samples), are differentially coded relative to predictors derived from neighboring vectors. Multiple reference frames (up to 16 in H.264/AVC) further improve prediction by selecting the best match.

Motion estimation identifies the optimal matching block in the reference frame by searching within a defined window, often using metrics like the sum of absolute differences (SAD) to evaluate candidates. The process starts with integer-pixel searches (e.g., spiral or diamond patterns) centered on a predicted motion vector, followed by sub-pixel refinement for fractional accuracy; costs incorporate both distortion and vector bitrate via rate-distortion optimization, such as J = \text{SAD} + \lambda \times \text{mvbits}, where \lambda scales with the quantization parameter.

A specialized skip mode optimizes static regions by directly copying the reference macroblock without transmitting residuals or explicit motion data, relying on a predicted (zero or median) motion vector. This mode, applicable in P- and B-slices, significantly reduces bitrate in unchanged scenes while signaling only the mode flag. Macroblocks may be subdivided into smaller partitions for refined inter prediction, as detailed in block subdivision techniques.
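A minimal sketch of integer-pel motion estimation with a Lagrangian cost follows; the mv_bits helper is a crude invented stand-in for the true vector-coding cost, and real encoders replace the exhaustive loop with faster search patterns and sub-pixel refinement:

```python
# Full-search block matching minimizing J = SAD + lambda * mv_bits,
# assuming a 16x16 block and a small integer-pel search window.

import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.abs(a.astype(int) - b.astype(int)).sum())

def mv_bits(dx: int, dy: int) -> int:
    # Crude stand-in for the bits needed to code the vector difference.
    return abs(dx) + abs(dy) + 2

def full_search(cur: np.ndarray, ref: np.ndarray, mx: int, my: int,
                rng: int = 8, lam: float = 4.0):
    """Return (dx, dy, cost) minimizing the Lagrangian block-matching cost."""
    block = cur[my:my + 16, mx:mx + 16]
    best = (0, 0, float("inf"))
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            rx, ry = mx + dx, my + dy
            if 0 <= rx <= ref.shape[1] - 16 and 0 <= ry <= ref.shape[0] - 16:
                cost = sad(block, ref[ry:ry + 16, rx:rx + 16]) + lam * mv_bits(dx, dy)
                if cost < best[2]:
                    best = (dx, dy, cost)
    return best

cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
ref = np.roll(cur, (2, 3), axis=(0, 1))   # reference shifted by a known offset
print(full_search(cur, ref, 16, 16))      # recovers (3, 2) with zero SAD
```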

Transform Operations

In video compression standards such as MPEG-2 and H.264/AVC, transform operations convert the residual data—obtained after prediction—into the frequency domain to facilitate efficient quantization and entropy coding by concentrating energy in low-frequency coefficients. This process emphasizes lower frequencies, which typically carry the bulk of visual information, allowing higher-frequency components to be more aggressively discarded for compression. At the decoder, an inverse transform reconstructs the residual for addition to the predicted macroblock.

Early standards like MPEG-2 (ITU-T H.262) employ the discrete cosine transform (DCT) on 8×8 blocks derived from macroblock residuals, enabling decorrelation and energy compaction suitable for quantization. In contrast, H.264/AVC introduces integer approximations of the DCT to avoid encoder–decoder mismatch from floating-point arithmetic, reducing computational complexity while maintaining performance close to the full DCT; the primary transform operates on 4×4 blocks for luma and chroma residuals, with an optional 8×8 transform available in high profiles for smoother areas. These integer transforms are designed as scaled versions of the DCT, ensuring exact invertibility in integer arithmetic.

The H.264 transform is implemented separably using 1D operations along rows and columns, starting with a core 4-point 1D transform for intra and inter residuals, followed by a 4×4 Hadamard transform on the DC coefficients of luma blocks in intra 16×16 mode, and a 2×2 Hadamard transform specifically for chroma DC coefficients to further compact low-frequency energy. This hierarchical approach enhances coding efficiency for chroma components, which often exhibit lower detail.

Following transformation, the coefficients undergo quantization, where a quantization parameter (QP) scales the step size to control the trade-off between bitrate and quality; in H.264, QP ranges from 0 to 51, the step size doubles for every increase of 6 in QP, and each unit increment reduces bitrate by roughly 12% while increasing distortion. Coarser quantization (higher QP) discards more high-frequency detail, potentially leading to blocky artifacts, whereas finer quantization preserves quality at higher bitrates.

The foundational 1D DCT, approximated in these codecs, is defined for a block of length N as:

X_k = \sum_{n=0}^{N-1} x_n \cos\left( \frac{\pi (2n+1) k}{2N} \right), \quad k = 0, 1, \dots, N-1

where x_n are the input samples and X_k are the transform coefficients, with modern approximations scaling the basis factors to enable fixed-point computation.
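The 4×4 core transform can be illustrated with the well-known H.264 integer matrix; in this sketch the round-trip uses a floating-point inverse purely to demonstrate invertibility, whereas the actual codec uses an integer inverse transform with scaling folded into quantization:

```python
# H.264 4x4 forward core transform Y = C X C^T using the standard's integer
# matrix. The floating-point inverse below is only a demonstration of exact
# invertibility; it is not the normative integer inverse transform.

import numpy as np

C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

def forward_4x4(residual: np.ndarray) -> np.ndarray:
    return C @ residual @ C.T          # separable: rows, then columns

X = np.random.randint(-32, 32, (4, 4))   # toy residual block
Y = forward_4x4(X)
Ci = np.linalg.inv(C.astype(float))
X_rec = Ci @ Y @ Ci.T                    # exact reconstruction
print(np.allclose(X_rec, X))             # True
```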

Encoding Representation

Bitstream Syntax

In video compression standards such as MPEG-2 and H.264, macroblocks are encoded within a hierarchical bitstream structure organized into frames or fields, which are further divided into slices for error resilience and parallel processing. Each slice contains a sequence of macroblocks processed in raster-scan order, with syntax elements including headers that specify the macroblock type (intra-coded I, predicted P, or bi-predicted B), partition information for motion compensation, and associated data such as residuals and motion parameters. This structure ensures that decoders can reconstruct the video by parsing the bitstream sequentially, starting from picture-level headers and progressing to macroblock-level details within each slice.

Key syntax elements for macroblocks include the macroblock address increment, which signals the position of the current macroblock relative to the previous one, typically using a variable-length code to skip over unchanged areas efficiently. Another critical element is the coded block pattern (CBP), a bitmask that indicates which of the macroblock's 8×8 (or smaller) blocks contain non-zero quantized transform coefficients, allowing decoders to skip empty blocks and reduce bitstream overhead. For example, in H.264, the CBP is encoded as a 6-bit value for luma and chroma components, derived from the presence of coefficient data in the four 8×8 luma blocks and the two chroma components.

Entropy coding in modern standards like H.264 employs variable-length techniques to represent macroblock modes, motion vectors, and transform coefficients compactly, with two primary methods: context-adaptive variable-length coding (CAVLC) and context-adaptive binary arithmetic coding (CABAC). CAVLC uses predefined code tables that adapt based on neighboring macroblock statistics, assigning shorter codes to frequent symbols like small motion vector differences or trailing ones in coefficient levels, while CABAC binarizes syntax elements into binary strings and applies arithmetic coding with context models for roughly 10–15% better compression efficiency over CAVLC. These methods ensure that prediction modes (e.g., intra or inter) and motion vector components are encoded with minimal bits for typical video content.

In the MPEG-2 standard, the macroblock syntax begins with the address delta (macroblock_address_increment), followed by motion vectors if applicable, and then the 8×8 DCT coefficients scanned in zigzag order to group low-frequency components first for run-length coding. This order facilitates efficient run-length encoding of the DC and AC coefficients, starting from the top-left (DC) coefficient and traversing diagonally to prioritize energy compaction.

To handle rare events such as unusually large motion vectors or coefficient levels beyond the standard codeword ranges, escape codes are incorporated into the variable-length coding schemes, allowing explicit representation of these outliers while maintaining compactness for common cases in natural video sequences. This mechanism, present in both MPEG-2's VLC tables and H.264's CAVLC/CABAC, prevents codeword expansion from infrequent anomalies and supports robust decoding.
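The zigzag order itself is easy to generate programmatically; this short sketch (an illustration, not reference code) produces the (row, column) visit order for an 8×8 block:

```python
# Generate the 8x8 zigzag scan order used for frame-coded blocks, ordering
# coefficients along anti-diagonals from low to high frequency.

def zigzag_order(n: int = 8):
    """Return the (row, col) visit order for an n x n block."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],                 # anti-diagonal
                                  rc[0] if (rc[0] + rc[1]) % 2   # odd: rows ascend
                                  else rc[1]))                   # even: cols ascend

scan = zigzag_order()
print(scan[:6])  # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```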

Parameter Encoding

In H.264/AVC, motion vectors for inter-predicted macroblocks are encoded differentially relative to predictors derived from neighboring blocks to exploit spatial correlations and reduce bitrate. The predictor is typically the median of the motion vectors from the left, upper, and upper-right neighboring blocks (or substitutes if unavailable), and the resulting motion vector difference (MVD) is entropy-coded using exponential-Golomb variable-length codes that assign shorter bit sequences to small displacements, which predominate in natural video motion.

Prediction modes, particularly for intra macroblocks, employ context-adaptive encoding to signal the selected mode efficiently based on adjacent blocks. A key technique is the most probable mode (MPM), which identifies one or more likely modes from the left and upper neighbors; if the actual mode matches an MPM, it is indicated with a single flag (1 bit), otherwise a fixed-length code or enumerated index is used for the remaining possibilities, thereby minimizing bits for common spatial redundancies.

Residual data after prediction is transformed, quantized, and represented as a sequence of coefficients that are entropy-coded using run-length techniques to group runs of zeros, paired with level values for non-zero coefficients, and terminated by an end-of-block indicator signaling the position of the last significant coefficient in the zigzag scan order. Quantization parameter (QP) adjustments specific to each macroblock are encoded via a signed exponential-Golomb-coded value (mb_qp_delta), allowing fine-grained control over compression strength without altering the slice-level QP.

The macroblock type (mb_type), which defines the overall coding mode (e.g., intra, inter partitions, skip) and associated flags, is encoded using exponential-Golomb codes in H.264/AVC, covering inter partition choices and intra variants (differentiated in part by coded block patterns) in P slices, with codeword lengths biased toward frequent types like 16×16 inter partitions to balance codeword length against frequency of occurrence in typical content. Rate-distortion optimization (RDO) during encoding evaluates candidate parameter sets—such as motion vectors, modes, and QP deltas—by minimizing a Lagrangian cost function combining distortion and estimated bitrate, thereby influencing parameter selections for efficiency, while the standardized bitstream syntax independently ensures deterministic decodability at the receiver.
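The exponential-Golomb codes mentioned above are simple to sketch; the following functions implement the standard unsigned and signed mappings (output as bit strings for readability):

```python
# Unsigned and signed exponential-Golomb codes, as used by H.264 for syntax
# elements such as mb_type, mb_qp_delta, and motion vector differences.
# Small magnitudes receive the shortest codewords.

def ue(v: int) -> str:
    """Unsigned Exp-Golomb: v >= 0 -> leading zeros + binary(v + 1)."""
    code = bin(v + 1)[2:]                 # binary of v+1, no '0b' prefix
    return "0" * (len(code) - 1) + code

def se(v: int) -> str:
    """Signed Exp-Golomb: map v to a non-negative codeNum, then ue()."""
    code_num = 2 * v - 1 if v > 0 else -2 * v
    return ue(code_num)

for v in range(4):
    print(v, ue(v))     # 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100'
print(se(1), se(-1))    # '010' (codeNum 1), '011' (codeNum 2)
```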

Artifacts and Limitations

Macroblocking Phenomenon

Macroblocking is a prominent visual artifact in block-based video compression, characterized by noticeable discontinuities that form a grid-like pattern across the image, resembling tiled squares. This phenomenon stems from the independent processing of macroblocks, where each 16×16 pixel unit undergoes separate motion prediction and quantization, resulting in abrupt transitions at block boundaries that do not align with the original continuous scene. Such mismatches are exacerbated by the discrete cosine transform (DCT) quantization applied within each block, which discards high-frequency details unevenly and introduces artificial edges, especially in regions with low spatial variation.

The artifact typically appears as "blockiness" or "pixelation," most evident in flat, uniform areas like solid skies or shadowed regions, where subtle gradients are replaced by harsh, squared-off demarcations. At low bitrates, corresponding to high compression ratios, these effects intensify as coarser quantization amplifies the boundary discrepancies, reducing overall smoothness. Transmission errors, common in error-prone environments such as wireless channels, further aggravate macroblocking by corrupting macroblock data, leading to incomplete decoding and intensified block visibility. In video encoded at typical DVD bitrates around 5 Mbps, this artifact frequently emerges in low-contrast scenes such as dark interiors or clear skies, highlighting the limitations of early block-based codecs under bitrate constraints.

Quantification of macroblocking often relies on objective metrics like peak signal-to-noise ratio (PSNR) and the Structural Similarity Index (SSIM), which capture degradation in fidelity and perceptual structure. Blocking significantly impairs quality when PSNR drops below 30 dB, a threshold at which artifacts dominate and viewer satisfaction declines sharply, since coarser quantization preserves less detail across block edges. The term "macroblocking" originated in the 1990s amid the development of standards like H.261 and MPEG-1/2, which popularized macroblock-based encoding and whose technical literature first documented this boundary-induced distortion.
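PSNR, the metric cited above, can be computed directly; this minimal sketch compares a reference frame against a degraded one:

```python
# Minimal PSNR computation between a reference and a decoded frame; values
# below ~30 dB typically correspond to visible blocking for 8-bit video.

import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(float) - decoded.astype(float)) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (480, 720), dtype=np.uint8)
noisy = np.clip(ref + np.random.normal(0, 8, ref.shape), 0, 255).astype(np.uint8)
print(round(psnr(ref, noisy), 1))    # roughly 30 dB for noise sigma ~= 8
```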

Mitigation Approaches

To mitigate macroblocking artifacts arising from block-based quantization in video compression, codec advancements incorporate in-loop filters that smooth discontinuities at block boundaries during decoding. In the H.264/AVC standard, the adaptive deblocking filter operates within the coding loop to attenuate blocking effects by selectively averaging samples across edges. This filter first determines a boundary strength (Bs) based on prediction modes and motion vector differences between adjacent blocks, then applies thresholds β (for large discontinuities) and t_C (for small changes), both derived from lookup tables indexed by the quantization parameter (QP). Filtering involves low-pass operations, such as averaging across four samples with clipping, to avoid excessive blurring in detailed areas, thereby preserving sharpness while reducing visible seams.

Subsequent standards like HEVC (H.265) build on this by introducing smaller, adaptive block sizes through a quadtree partitioning structure, where coding tree units (CTUs) up to 64×64 pixels can be recursively split into coding units (CUs) as small as 8×8, minimizing the perceptual impact of boundaries in uniform regions. Content-aware partitioning further avoids unnecessary splits in textured or complex areas by aligning blocks with image features, such as edges or motion, which reduces quantization mismatches and visible blocking compared to H.264's fixed 16×16 macroblocks. HEVC also enhances deblocking with classification-based decisions that adjust filter strength based on local gradients and QP variations.

Post-processing techniques complement in-loop methods by applying filters to decoded output without re-encoding, blending blocks without altering the bitstream. Traditional approaches use low-pass filters across suspected boundaries to attenuate high-frequency discontinuities, often guided by knowledge of the block grid to target artifact-prone areas selectively. More recent advancements employ AI-based upscaling in media players, where convolutional neural networks (CNNs) learn to inpaint and denoise blocky regions from training on paired low- and high-quality datasets, effectively restoring detail during playback.

The H.264 deblocking filter specifically applies offsets β and t_C from QP-derived tables, clipping pixel modifications to ±t_C to balance smoothing against over-filtering; simulations show it yields average PSNR gains of 0.25–0.35 dB and up to 9–10% bitrate savings at equivalent quality by curbing blocking propagation in prediction references.

Looking to future trends, the AV1 codec (finalized in 2018) integrates a switchable loop restoration filter—using Wiener or self-guided restoration on units of up to 256×256 pixels—to recover high-frequency details lost to quantization, further diminishing artifacts in real-time decoding scenarios. Emerging integrations of deep learning models, such as CNN-based post-filters tailored for AV1, enable predictive correction of residual blocking during playback, enhancing efficiency for streaming applications.
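As a purely illustrative toy, the following sketch smooths a vertical block boundary in the spirit of a deblocking filter; the flatness test and averaging are simplified stand-ins for H.264's normative Bs/β/t_C logic:

```python
# Toy boundary smoothing: averages the two columns straddling a block edge
# unless the discontinuity is large (likely a real image edge). This is an
# invented simplification, not the normative H.264 filter.

import numpy as np

def smooth_vertical_boundary(frame: np.ndarray, col: int,
                             beta: float = 20.0) -> None:
    """Filter across the vertical block edge at `col`, modifying in place."""
    p0 = frame[:, col - 1].astype(float)   # last column of the left block
    q0 = frame[:, col].astype(float)       # first column of the right block
    flat = np.abs(p0 - q0) < beta          # skip likely real image edges
    delta = (q0 - p0) / 4.0                # mild low-pass correction
    frame[:, col - 1] = np.where(flat, p0 + delta, p0).astype(frame.dtype)
    frame[:, col] = np.where(flat, q0 - delta, q0).astype(frame.dtype)

frame = np.random.randint(0, 256, (16, 32), dtype=np.uint8)
smooth_vertical_boundary(frame, 16)        # edge between two 16-wide blocks
```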

    Sep 15, 2024 · In this paper, we propose an AV1 CNN with a Multi-type Self-Attention (MTSA) network designed to enhance the quality of AV1-decoded videos by ...