
Deblocking filter

A deblocking filter is a technique applied to decoded compressed images or frames to mitigate blocking artifacts, the visible discontinuities at the boundaries of coding blocks caused by quantization in block-based transform coding schemes such as the discrete cosine transform (DCT). By adaptively smoothing pixel values across these boundaries, it enhances subjective visual quality and improves prediction efficiency in subsequent frames, without introducing excessive blurring of genuine edges. An optional in-loop deblocking mode appeared in H.263 Annex J (1998), and the filter became a mandatory normative in-loop tool in the H.264/AVC (Advanced Video Coding) standard finalized in 2003 by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), where it operates on 4×4 or 8×8 luma and chroma block edges, with filter strength determined by boundary type, quantization parameter (QP), and local pixel differences. In earlier standards like MPEG-2 (1996), deblocking was an optional post-processing step rather than an integral encoding tool, limiting its impact on compression efficiency. Subsequent standards, including HEVC/H.265 (2013) and VVC/H.266 (2020), have refined the algorithm with longer filter taps, content-adaptive adjustments, and complementary tools like sample adaptive offset (SAO) to further suppress artifacts while supporting higher resolutions and bit depths. The filter's implementation typically relies on parallelizable architectures to meet real-time decoding demands, achieving bitrate savings of up to 10% for equivalent perceptual quality in H.264/AVC by reducing artifacts in the reference frames used for prediction. Its widespread adoption has made it essential in modern video codecs, streaming services, and applications ranging from broadcast to mobile video, where blocking artifacts would otherwise degrade viewer experience at high compression ratios.

Fundamentals

Blocking Artifacts

Blocking artifacts manifest as visible discontinuities or a tiling effect at the boundaries of processing blocks in decoded video frames, creating an unnatural grid-like appearance in the image. These distortions occur because block-based compression algorithms divide frames into fixed-size blocks, such as 8×8 or 16×16 pixels, and process each independently without fully accounting for inter-block correlations. The primary cause of blocking artifacts is the coarse quantization of discrete cosine transform (DCT) coefficients applied to these blocks, which discards high-frequency detail to achieve compression and introduces sharp, artificial edges at block boundaries due to differing quantization levels across adjacent blocks. This quantization process, essential for reducing data volume, becomes more pronounced in lossy schemes where transform coefficients are rounded or zeroed out, leading to a loss of smoothness in the reconstructed image. These artifacts degrade subjective visual quality, especially at low bitrates where aggressive quantization amplifies the effect, making block boundaries highly distracting and reducing perceived sharpness and naturalness. In early standards from the H.26x family, such as H.261 and H.263, blocking is particularly evident during high-motion scenes or at compression ratios targeting rates below 1.5 Mbit/s, often resulting in a mosaic-like appearance that impairs viewer experience. The impact of blocking can be measured using specialized metrics, such as the no-reference Blocking Metric (BM), which assesses boundary discontinuity strength by modeling block edges as step functions, or through PSNR variants like PSNR-B that isolate blocking-induced degradation from overall noise. Blocking became particularly prominent with the JPEG still-image standard, published in 1992, and the MPEG-1 video standard of 1993, as rising demands for efficient storage and transmission intensified compression ratios and highlighted the limitations of block-DCT approaches.
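A minimal sketch of a no-reference blockiness measure is shown below; it assumes an 8×8 coding grid and a simple boundary-versus-interior ratio, and is only an illustrative stand-in for published metrics such as BM or PSNR-B, not their actual definitions.

```python
import numpy as np

def blockiness_score(gray, block=8):
    """Crude no-reference blockiness estimate for a grayscale frame.

    Compares the mean absolute neighbour difference across assumed block
    boundaries (every `block` pixels) with the mean difference elsewhere;
    a ratio well above 1.0 suggests visible blocking. Illustrative only.
    """
    img = np.asarray(gray, dtype=np.float64)
    dh = np.abs(np.diff(img, axis=1))           # horizontal neighbour differences
    dv = np.abs(np.diff(img, axis=0))           # vertical neighbour differences

    h_bound = (np.arange(dh.shape[1]) % block) == block - 1   # straddle a vertical boundary
    v_bound = (np.arange(dv.shape[0]) % block) == block - 1   # straddle a horizontal boundary

    boundary = np.concatenate([dh[:, h_bound].ravel(), dv[v_bound, :].ravel()])
    interior = np.concatenate([dh[:, ~h_bound].ravel(), dv[~v_bound, :].ravel()])
    return float(boundary.mean() / (interior.mean() + 1e-9))
```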

Deblocking Principles

Deblocking filters operate on images or video frames compressed with block-based transform coding, typically built on the discrete cosine transform (DCT) and its inverse (IDCT), where quantization introduces visible discontinuities at block boundaries. The primary goal is to smooth these artificial edges to enhance visual quality while carefully preserving genuine image details and true edges, thereby avoiding excessive blurring that could degrade sharpness or introduce new artifacts like ringing. This balance is crucial for maintaining perceptual fidelity in compressed media. Key techniques involve linear low-pass filtering applied selectively across block boundaries to attenuate high-frequency discontinuities. Adaptive approaches refine this by detecting edges through gradient analysis or pixel-difference thresholds, enabling the filter to apply varying strengths only where discontinuities are likely artificial rather than structural features. Boundaries are often classified by artifact severity, strong for pronounced discontinuities and weak for subtle ones, allowing targeted smoothing that minimizes the impact on textured regions. Deblocking filters are distinguished by their placement in the coding pipeline: in-loop filters are integrated within the encoding and decoding loop, modifying the reference frames used for motion-compensated prediction so that improvements propagate to subsequent frames and boost overall compression efficiency; out-of-loop filters, conversely, apply only to the final output for display, enhancing viewer experience without affecting the prediction process. Common filter types include spatial-domain methods such as low-pass averaging across edges, frequency-domain adjustments that attenuate the DCT coefficients associated with blocking, and emerging learning-based techniques that learn artifact patterns for more precise restoration. These methods entail trade-offs between computational complexity and quality gains; while effective in reducing artifacts and saving roughly 6-11% in bitrate, they can demand significant processing resources, potentially comprising one-third of decoder operations, and risk blurring real textures if not adaptively controlled. In-loop implementations offer superior long-term benefits but require encoder-decoder synchronization, whereas out-of-loop options provide flexibility at the cost of forgoing prediction improvements.
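The sketch below illustrates the basic adaptive idea for a single vertical boundary: smooth only where the cross-boundary step is small enough to be an artifact. The threshold value and filter taps are illustrative assumptions, not taken from any particular standard.

```python
import numpy as np

def smooth_vertical_boundary(img, col, edge_thresh=12):
    """Smooth one vertical block boundary lying between columns `col` and
    `col + 1` of a grayscale frame, in place.

    For each row, the step across the boundary |p0 - q0| is compared with
    `edge_thresh`: small steps are treated as quantization artifacts and
    low-pass filtered, large steps are assumed to be genuine edges and
    left untouched.
    """
    p1 = img[:, col - 1].astype(int)
    p0 = img[:, col].astype(int)
    q0 = img[:, col + 1].astype(int)
    q1 = img[:, col + 2].astype(int)

    artifact = np.abs(p0 - q0) < edge_thresh        # likely artificial discontinuity
    new_p0 = (p1 + 2 * p0 + q0 + 2) >> 2            # gentle low-pass across the edge
    new_q0 = (p0 + 2 * q0 + q1 + 2) >> 2

    img[:, col] = np.where(artifact, new_p0, p0)
    img[:, col + 1] = np.where(artifact, new_q0, q0)
    return img
```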

In-Loop Deblocking in Video Standards

H.263 Annex J

Annex J of the H.263 standard, finalized in February 1998, introduces an optional deblocking filter mode designed specifically for low-bitrate video telephony applications, targeting formats such as sub-QCIF, QCIF, CIF, 4CIF, and 16CIF. This filter operates within the video coding loop, applying smoothing to both luminance and chrominance components across the boundaries of 8×8 blocks in reconstructed I-, P-, EP-, or EI-pictures, or the P-picture portion of improved PB-frames. By integrating the filter into the prediction process, it uses filtered reference frames for motion compensation, which helps mitigate blocking artifacts that arise from quantization in block-based discrete cosine transform (DCT) coding. The mode is signaled via external negotiation (e.g., H.245) and indicated in the picture header's PTYPE field, making it selectively enableable to balance quality and computational demands.

The filter is applied after inverse-DCT reconstruction, targeting horizontal and vertical edges within macroblocks but excluding picture, slice, or group-of-blocks (GOB) boundaries. It processes a four-pixel window (A, B from one block and C, D from the adjacent block) using a one-dimensional low-pass filtering approach that adjusts values based on local differences. Specifically, the filter computes a difference metric d = \frac{A - 4B + 4C - D}{8}, then applies adjustments such as B_1 = \clip(B + d_1) and C_1 = \clip(C - d_1), where d_1 is d modulated by a ramp function whose strength parameter (STRENGTH) is derived from the quantizer (QUANT), with values ranging from 1 for QUANT=1 to 12 for QUANT=31. Edge detection relies on a basic threshold: the adjustment is non-zero only if |d| < 2 \times STRENGTH and d \neq 0, so the filter skips strong edges and uncoded blocks (where COD=1 and not INTRA), while the strength parameter provides mild adaptation to quantization levels without complex classification. This fixed-strength application per qualifying edge keeps the process simple compared to later standards.

In low-bitrate scenarios typical of H.263, such as below 64 kbit/s for QCIF or SQCIF resolutions, the filter enhances prediction efficiency by 5-10%, allowing bitrate reductions for equivalent subjective quality through smoother reference frames that reduce residual errors in motion-compensated prediction. It effectively diminishes visible blocking artifacts, improving overall visual smoothness without significantly altering peak signal-to-noise ratio (PSNR), though gains of around 0.8 dB in quality metrics have been observed in evaluations. As the first normative in-loop deblocking mechanism in ITU-T video coding standards, Annex J influenced subsequent H.26x developments, such as the more advanced filters in H.264/AVC. However, its computational overhead, particularly when combined with features like unrestricted motion vectors (Annex D) or advanced prediction (Annex F), posed challenges for late-1990s hardware, often leading to optional disablement in resource-constrained implementations.
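A simplified sketch of the edge update described above follows. The ramp function and rounding shown here are assumptions consistent with the description (full adjustment for small |d|, zero once |d| reaches about 2×STRENGTH); the actual annex also nudges the outer pixels A and D and defines STRENGTH via a table indexed by QUANT.

```python
def clip255(x):
    return max(0, min(255, int(round(x))))

def up_down_ramp(d, strength):
    # Full adjustment for small |d|, tapering to zero once |d| reaches
    # roughly 2*STRENGTH, which is what skips strong (likely genuine) edges.
    mag = max(0.0, abs(d) - max(0.0, 2 * (abs(d) - strength)))
    return mag if d >= 0 else -mag

def filter_edge_annex_j(A, B, C, D, strength):
    """Simplified Annex J-style update of the two pixels nearest the edge
    between one block (pixels A, B) and its neighbour (C, D). The actual
    annex also adjusts A and D and specifies exact integer rounding."""
    d = (A - 4 * B + 4 * C - D) / 8.0      # boundary difference metric
    d1 = up_down_ramp(d, strength)         # strength-modulated adjustment
    return clip255(B + d1), clip255(C - d1)
```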

H.264/AVC

The deblocking filter in H.264/AVC, standardized in 2003 as part of the Advanced Video Coding (AVC) specification, is a normative in-loop filter that mitigates blocking artifacts resulting from 4×4 integer transform quantization and block-based motion compensation. It operates on reconstructed pictures within the encoding and decoding loops, improving reference frame quality for subsequent predictions. The filter is applied to luma samples along 4×4 sub-block edges and separately to the corresponding chroma block edges, processing all vertical edges within a macroblock before horizontal edges, and excluding slice or frame boundaries to avoid inter-slice interactions.

Central to the filter is the boundary strength parameter Bs, an integer from 0 to 4 that quantifies the discontinuity severity at each edge and determines whether filtering occurs. Bs=0 skips processing entirely, while higher values trigger stronger smoothing. Classification depends on adjacent block coding modes (intra or inter), motion vector and reference differences (e.g., Bs=1 if the vectors differ by at least four quarter-pel units or the reference pictures differ), and coded block pattern flags (Bs=2 for inter-coded edges where either block carries residual coefficients, escalating to Bs=3 or 4 for intra-coded edges). Chroma edges inherit Bs from the corresponding luma edges. This adaptive scheme prioritizes filtering at transform and prediction discontinuities while preserving uniform regions.

Filtering decisions incorporate QP-dependent thresholds α (larger, for the cross-boundary difference) and β (smaller, for within-block gradients) to avoid smoothing sharp edges. An edge qualifies for modification if Bs > 0, |p₀ - q₀| < α, and the side gradients satisfy |p₁ - p₀| < β and |q₁ - q₀| < β, where p and q denote pixels on either side of the edge; additional tests of |p₂ - p₀| and |q₂ - q₀| against β decide whether the second samples p₁ and q₁ are also modified. These conditions ensure selective application, balancing artifact reduction with detail retention; α and β increase with QP to accommodate coarser quantization.

For Bs=4, a strong filter smooths up to three samples per side; the boundary sample is computed as p_0' = \frac{p_2 + 2p_1 + 2p_0 + 2q_0 + q_1 + 4}{8}, with a symmetric formula for q₀' (swapping p and q), and shorter variants such as p_1' = \frac{p_2 + p_1 + p_0 + q_0 + 2}{4} applied when the additional gradient conditions are satisfied. For Bs=1 to 3, a normal filter adjusts p₀ and q₀ by an offset \Delta_0 = \text{Clip3}\left(-t_c, t_c, \frac{4(q_0 - p_0) + (p_1 - q_1) + 4}{8}\right), where t_c scales with Bs and QP (higher QP yields a larger t_c and thus more aggressive filtering) and the clip bounds prevent over-smoothing; p₁ and q₁ receive smaller, conditionally applied corrections. All operations use integer arithmetic for efficiency. This filter reduces bitrate by 5–10% at equivalent PSNR, particularly benefiting high-definition video by suppressing visible blocks without introducing excessive blur, thus enhancing both subjective quality and compression efficiency.
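The sketch below shows the Bs classification and the normal-mode filtering of one row of samples, following the formulas above. The QP-indexed lookup of α, β, and t_c is omitted (the values are simply passed in), and only p₀/q₀ are modified, so this is a simplified illustration rather than a complete implementation of the standard.

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def boundary_strength(p_intra, q_intra, mb_edge, p_coeffs, q_coeffs,
                      mv_diff_ge_4, ref_differs):
    """Simplified H.264-style Bs derivation for a luma edge; the flags
    describe the two adjacent blocks p and q."""
    if p_intra or q_intra:
        return 4 if mb_edge else 3          # intra edges get the strongest filtering
    if p_coeffs or q_coeffs:                # residual coefficients on either side
        return 2
    if ref_differs or mv_diff_ge_4:         # motion discontinuity >= 1 integer sample
        return 1
    return 0

def normal_filter_line(p1, p0, q0, q1, alpha, beta, tc):
    """Normal-mode (Bs 1..3) filtering of one row of samples across an edge;
    alpha, beta and tc come from QP-indexed tables in the standard."""
    if not (abs(p0 - q0) < alpha and abs(p1 - p0) < beta and abs(q1 - q0) < beta):
        return p0, q0                       # conditions fail: keep the edge untouched
    delta = clip3(-tc, tc, (4 * (q0 - p0) + (p1 - q1) + 4) >> 3)
    return clip3(0, 255, p0 + delta), clip3(0, 255, q0 - delta)
```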

HEVC and VVC

The deblocking filter in High Efficiency Video Coding (HEVC, ITU-T H.265, standardized in 2013) is applied to the boundaries of coding units (CUs) within coding tree blocks (CTBs) of up to 64×64 samples, extending the adaptive approach of prior standards to larger block structures while accounting for transform and prediction splits. Edge flags are derived from these splits to determine filter applicability, so that processing occurs only on relevant vertical and horizontal boundaries aligned to an 8×8 sample grid. The boundary strength (Bs) ranges from 0 to 2: Bs=0 indicates no filtering, Bs=1 covers inter-prediction discontinuities such as motion or residual differences, and Bs=2 applies when at least one adjacent block is intra coded; the choice between normal and strong luma filtering is then made from local sample gradients rather than from Bs alone. Restricting filtering to the 8×8 grid also supports better parallelization than earlier methods, since independent regions can be processed concurrently.

The core HEVC luma filter examines four samples on each side of an edge (p3 to p0 and q0 to q3) and modifies up to three per side, applying a normal filter that adjusts the boundary samples subject to quantization parameter (QP)-derived thresholds β and t_C, which control edge-activity decisions and bound the correction. A strong filter option is available for pronounced artifacts, modifying up to three samples per side with fixed coefficients for aggressive smoothing. For chroma, filtering is activated only when Bs equals 2, with the threshold t_C adjusted by chroma-specific offsets (e.g., slice_tc_offset_div2) to account for color-component differences. In the normal filter, the primary offset \Delta_0 = \text{Clip3}\left(-t_C, t_C, \frac{9(q_0 - p_0) - 3(q_1 - p_1) + 8}{16}\right) modifies p0 and q0, and a secondary offset such as \Delta p_1 = \text{Clip3}\left(-\frac{t_C}{2}, \frac{t_C}{2}, \frac{((p_2 + p_0 + 1) \gg 1) - p_1 + \Delta_0}{2}\right) conditionally refines p1, with clipping ensuring bounded adjustments. This contributes to HEVC's overall performance, which achieves PSNR improvements of 0.5–1 dB over H.264/AVC in typical sequences, through reduced blocking artifacts and enhanced subjective quality.

Versatile Video Coding (VVC, ITU-T H.266, standardized in 2020) extends the deblocking filter to CTBs of up to 128×128 samples, addressing artifacts in high-resolution content such as 4K and 8K while maintaining compatibility with HEVC's foundational principles. It introduces a weak filter mode for subtle boundary discontinuities, applying minimal smoothing to preserve detail in smooth regions, alongside the normal and strong modes, and a classification process that considers local sample levels for better adaptation to high-dynamic-range content. VVC applies adaptive processing at sub-block boundaries on a 4×4 luma grid (8×8 for chroma), with filter lengths varying according to transform splits and block sizes to suit diverse content. In the weak filter mode, the boundary samples p0 and q0 are filtered with a clipped low-pass combination of their neighbors, for example p_0' = \text{Clip3}(p_0 - \delta, p_0 + \delta, (p_2 + 2 p_1 + 2 p_0 + 2 q_0 + q_1 + 4) \gg 3), with the symmetric formula for q_0', where \delta is a QP-dependent limit; p1 and q1 may optionally be filtered in a second stage if gradient conditions are met. These simplified decisions and fewer operations per boundary reduce deblocking complexity by approximately 20% compared to HEVC, enabling efficient hardware implementations, and in 8K scenarios VVC's deblocking contributes roughly a 5% bitrate-efficiency gain over HEVC by better handling large-block artifacts without excessive computation.
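A sketch of the HEVC-style weak (normal) filtering of one sample line, using the offsets given above, is shown below. The β/t_C table lookups, the per-segment on/off decision, and the gradient tests that gate the secondary p1/q1 offsets are omitted, so the code is an illustration under those simplifying assumptions.

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def hevc_weak_filter_line(p2, p1, p0, q0, q1, q2, tc):
    """Sketch of HEVC-style weak deblocking of one sample line. The primary
    offset d0 modifies p0/q0; the secondary offsets for p1/q1 are shown
    unconditionally, whereas the standard enables them only when extra
    gradient tests pass and also applies a per-segment on/off decision."""
    d0 = clip3(-tc, tc, (9 * (q0 - p0) - 3 * (q1 - p1) + 8) >> 4)
    p0n = clip3(0, 255, p0 + d0)
    q0n = clip3(0, 255, q0 - d0)

    dp1 = clip3(-(tc >> 1), tc >> 1, (((p2 + p0 + 1) >> 1) - p1 + d0) >> 1)
    dq1 = clip3(-(tc >> 1), tc >> 1, (((q2 + q0 + 1) >> 1) - q1 - d0) >> 1)
    return clip3(0, 255, p1 + dp1), p0n, q0n, clip3(0, 255, q1 + dq1)
```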

AV1

The deblocking filter serves as the first stage in the in-loop filtering pipeline of the AV1 codec, finalized in 2018 by the Alliance for Open Media (AOMedia) as an open-source, royalty-free video compression standard. It targets blocking artifacts at boundaries between transform blocks within 64×64 or larger superblocks, applying smoothing only to edge segments of 8×8 or smaller blocks where quantization-induced discontinuities are detected. Key parameters include the loop filter level (ranging from 0 to 63, where level 0 disables filtering) and sharpness (0 to 7), which are derived through rate-distortion optimization during encoding and signaled in the bitstream to balance artifact reduction against preserved detail. Drawing on its VP9 heritage, the AV1 deblocking filter incorporates directional awareness, adapting to edge orientations by processing horizontal and vertical boundaries separately with finite impulse response (FIR) low-pass filters of 4 to 14 taps for luma (4 or 6 taps for chroma), selected according to the adjacent transform block sizes (e.g., the 14-tap filter for blocks larger than 16×16). Filtering is skipped for boundaries lacking a coded block flag, such as fully skipped or lossless intra blocks, and is further conditioned on local sample checks to prevent over-smoothing of true edges: high edge variance (|p1 - p0| > T0 or |q1 - q0| > T0, where T0 is a per-superblock threshold) or insufficient flatness (the longer filters require |p_k - p0| ≤ 1 for k = 1 to 6) disables application. A basic two-tap adjustment exemplifies the core operation for adjacent pixels p0 (left) and q0 (right):
p_0' = \text{Clip3}(p_0 - t_c, p_0 + t_c, p_0 - \Delta)
where \Delta is derived from neighbor differences (e.g., (p_1 + p_0 + q_0 + 1) \gg 2 in simple cases), t_c is the clipping threshold scaled by the loop filter level and sharpness, and Clip3 bounds the result to limit the modification. More advanced multi-tap filters extend this with predefined coefficients for broader smoothing.
Filter strength is adaptively tuned via per-frame deltas for reference frames and prediction modes, signaled as loop_filter_ref_deltas and loop_filter_mode_deltas, allowing up to four updates per frame for fine-grained adaptation to the content (e.g., higher levels increase t_c for stronger smoothing). In the AV1 pipeline, deblocking precedes the Constrained Directional Enhancement Filter (CDEF) for ringing reduction and the Loop Restoration filter (Wiener or self-guided) for overall quality enhancement, forming a multi-stage in-loop chain whose filtered output serves as the reference for subsequent predictions. This integration significantly mitigates artifacts in web-delivered video as deployed by major streaming platforms. The deblocking stage contributes to AV1's overall compression efficiency, which enables roughly 30–50% bitrate savings over earlier codecs such as VP9 and H.264/AVC at equivalent quality, while maintaining hardware-friendly designs suitable for ultra-high-definition (UHD) decoding; decoder complexity is roughly three times that of VP9 but is optimized for parallel processing.
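The sketch below illustrates only the clipping principle in the expression above: the boundary samples may move toward a local low-pass estimate but never by more than t_c. The tap weights are illustrative assumptions and are not AV1's table-defined 4- to 14-tap filters.

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def clipped_boundary_adjust(p1, p0, q0, q1, tc):
    """Clipped boundary adjustment: p0 and q0 move toward low-pass
    estimates, with the change bounded by +/- tc, standing in for the
    threshold derived from the signalled loop filter level and sharpness."""
    target_p0 = (p1 + 2 * p0 + q0 + 2) >> 2     # low-pass estimate for p0
    target_q0 = (p0 + 2 * q0 + q1 + 2) >> 2     # low-pass estimate for q0
    p0n = clip3(p0 - tc, p0 + tc, target_p0)    # limit the change to +/- tc
    q0n = clip3(q0 - tc, q0 + tc, target_q0)
    return p0n, q0n
```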

Post-Processing Deblocking Filters

General Approaches

Post-processing deblocking filters, also known as out-of-loop filters, are non-normative techniques applied after video decoding to enhance visual quality by mitigating blocking artifacts without influencing the decoding process or subsequent predictions. These filters operate independently of the encoding loop, focusing solely on improving the perceptual appearance of the decoded frames for display. General approaches to post-processing deblocking can be categorized into several broad types based on their underlying mechanisms. Linear methods, such as simple averaging or low-pass filtering across block boundaries, provide straightforward smoothing but risk blurring details. Non-linear techniques, such as median or bilateral filtering, preserve edges by adaptively weighting pixels based on similarity and spatial proximity, reducing over-smoothing in textured regions. Frequency-based approaches, such as transform-coefficient thresholding or other DCT-domain adjustments, target the high-frequency components associated with artifacts, suppressing them while retaining the low-frequency content essential for structure. Since the late 2010s, learning-based methods using convolutional neural networks (CNNs) have emerged, training models to detect and remove artifacts by learning patterns from pairs of distorted and clean images, often outperforming traditional filters in complex scenarios. Block boundary detection is a critical step in these approaches, typically employing edge operators such as the Sobel filter to identify discontinuities along predicted block edges, or analyzing coefficients in the DCT domain to quantify quantization-induced variations. Filter strength is often adapted to local variance or activity measures, such as horizontal and vertical pixel differences, applying stronger smoothing in uniform areas and weaker adjustments in high-detail regions to balance artifact reduction with detail preservation. A key advantage of post-processing deblocking is its flexibility: it can be applied to legacy codecs such as MPEG-2 without modifying the bitstream or incurring additional bitrate costs. However, these filters cannot enhance inter-frame prediction accuracy, unlike in-loop methods, and may introduce over-smoothing in areas with significant motion, potentially degrading temporal consistency. Historically, post-processing deblocking first gained prominence in software decoders around 1995, where simple low-pass filters were used to address artifacts from block-based DCT coding in early applications, and the techniques have since evolved to incorporate AI-driven approaches in the late 2010s and 2020s to handle the more sophisticated compression artifacts of modern standards.
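A minimal spatial-domain post filter combining the two steps described above, boundary detection on an assumed 8×8 grid and variance-based strength adaptation, might look like the following; the grid size and variance threshold are illustrative assumptions, and DCT-domain or CNN-based variants are not shown.

```python
import numpy as np

def postprocess_deblock(gray, block=8, flat_var=20.0):
    """Out-of-loop deblocking sketch for one decoded grayscale frame.

    Assumes an 8x8 coding grid (typical of JPEG/MPEG-2-era codecs) and
    adapts strength to local activity: rows whose boundary neighbourhood
    has low variance are smoothed, textured rows are left alone.
    """
    out = np.asarray(gray, dtype=np.float64).copy()
    h, w = out.shape
    for col in range(block, w - 2, block):        # each vertical block boundary
        p1 = out[:, col - 2].copy()
        p0 = out[:, col - 1].copy()
        q0 = out[:, col].copy()
        q1 = out[:, col + 1].copy()
        flat = np.stack([p1, p0, q0, q1], axis=1).var(axis=1) < flat_var
        out[:, col - 1] = np.where(flat, (p1 + 2 * p0 + q0 + 2) / 4, p0)
        out[:, col] = np.where(flat, (p0 + 2 * q0 + q1 + 2) / 4, q0)
    # A second pass over horizontal boundaries (rows) would follow the same pattern.
    return out.astype(np.asarray(gray).dtype)
```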

Specific Implementations

Software implementations of post-processing deblocking filters include the deblock filter in FFmpeg, which applies an adaptive low-pass filter across block boundaries to reduce blocking artifacts in decoded video, with a configurable strength option set to "weak" or "strong" for balancing artifact removal and detail preservation. Additionally, FFmpeg's legacy libpostproc module offers subfilters such as hdeblock (hb, horizontal deblocking) and vdeblock (vb, vertical deblocking) with adjustable parameters such as a difference factor (higher values give stronger deblocking, default 32) and a flatness threshold (default 39, lower values enabling more aggressive filtering). These tools are widely used to enhance low-bitrate or heavily compressed sources, such as web streams or archived footage. Another notable software example is the MSU Deblocking filter (version 2.2, released in the mid-2000s), developed specifically for restoring quality in DVD rips and other blocky compressed videos such as VideoCD material. It supports spatial modes for single-frame processing and temporal modes that use inter-frame information to smooth artifacts while minimizing temporal artifacts, making it suitable for offline video restoration workflows.

Hardware implementations often rely on GPU acceleration for real-time post-processing. Deblocking is also commonly integrated into TV system-on-chips (SoCs) for MPEG-2 upscaling, where dedicated hardware modules apply adaptive filters to legacy broadcast content during resolution enhancement, reducing visible blocks on high-definition displays without significant latency. Modern advancements feature AI-based deblocking, exemplified by Topaz Video AI (introduced in the late 2010s and refined in the 2020s), which employs convolutional neural networks to detect and remove blocking artifacts in upscaled or low-quality footage, often outperforming traditional filters by preserving fine detail through patterns learned from large datasets. These neural approaches, emerging after 2018, enable context-aware restoration that is particularly effective for archival or web-sourced video.

A notable application of deblocking beyond video is image restoration, where the Shape-Adaptive DCT (SA-DCT) method reconstructs blocked regions by applying localized DCT inverses along image contours, and has been implemented as plugins for image-editing tools. The technique yields typical PSNR gains of 1-2 dB on low-quality JPEGs (e.g., quality factors 10-20), significantly improving structural fidelity without over-smoothing edges.
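A short sketch of how the FFmpeg filters mentioned above might be driven from a script is given below. The `deblock` filter's `filter` and `block` options and the `pp=hb/vb` subfilter string follow the FFmpeg filter documentation, but their availability depends on how the local FFmpeg binary was built, so treat this as an illustrative invocation rather than a guaranteed recipe.

```python
import subprocess

def ffmpeg_deblock(src, dst, strong=True):
    """Apply FFmpeg's post-processing `deblock` filter to a decoded file.

    Replacing the -vf argument with "pp=hb/vb" would instead invoke the
    legacy libpostproc horizontal/vertical deblocking subfilters.
    """
    vf = "deblock=filter={}:block=8".format("strong" if strong else "weak")
    cmd = ["ffmpeg", "-y", "-i", src, "-vf", vf, "-c:a", "copy", dst]
    subprocess.run(cmd, check=True)

# Example: ffmpeg_deblock("low_bitrate_input.mp4", "deblocked_output.mp4")
```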
