
Sum of absolute differences

In digital image processing, the sum of absolute differences (SAD) is a metric that measures the similarity between two image blocks or regions by computing the total sum of the absolute differences between their corresponding values, without requiring division or complex operations. The formula for SAD is given by \sum |s_1 - s_2|, where s_1 and s_2 represent the pixel intensities in the respective blocks, and the summation occurs over all pixels in the block. Primarily applied in motion estimation algorithms for video compression standards such as MPEG and H.264, SAD helps identify the best matching block in a reference frame to predict motion, thereby reducing temporal redundancy and improving encoding efficiency. Compared to alternatives like the sum of squared differences (SSD) or mean squared error (MSE), SAD offers lower computational cost due to the absence of multiplications, making it suitable for hardware implementations, and is more robust to outliers in pixel values. Extensions such as the sum of absolute transformed differences (SATD) incorporate frequency-domain transformations to enhance rate-distortion performance in modern codecs.

Definition and Mathematics

Formal Definition

The sum of absolute differences (SAD) is a fundamental metric in signal processing and computer vision that quantifies the dissimilarity between two sequences or arrays by aggregating the absolute deviations of their corresponding elements. For two finite sequences x = (x_1, x_2, \dots, x_n) and y = (y_1, y_2, \dots, y_n) of equal length n, the SAD is formally defined as \text{SAD}(x, y) = \sum_{i=1}^n |x_i - y_i|. This L1-norm-based measure emphasizes robustness to outliers compared to squared-error alternatives, as it does not amplify larger deviations disproportionately.

In applications involving two-dimensional data, such as digital images, the SAD extends naturally to arrays by summing absolute differences across spatial coordinates. For an image block I and a template T of dimensions m \times p, the SAD is expressed as \text{SAD}(I, T) = \sum_{i=1}^m \sum_{j=1}^p |I(i,j) - T(i,j)|. This formulation serves as a core similarity criterion in tasks like block comparison, where lower SAD values indicate greater alignment between the regions. The SAD metric traces its origins to early literature in the 1970s, where it emerged as a simple yet effective error measure for block matching and motion estimation in emerging video coding systems.
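As a minimal illustration of these two definitions, the following NumPy sketch (function names are illustrative, not from any standard library) computes SAD for 1D sequences and for 2D blocks:

import numpy as np

def sad_1d(x, y):
    # SAD(x, y) = sum_i |x_i - y_i| for equal-length sequences
    a = np.asarray(x, dtype=np.int64)
    b = np.asarray(y, dtype=np.int64)
    return int(np.abs(a - b).sum())

def sad_2d(block, template):
    # Same sum taken over all spatial coordinates of an m x p block;
    # signed arithmetic avoids wraparound with unsigned (e.g., uint8) pixels
    a = np.asarray(block, dtype=np.int64)
    b = np.asarray(template, dtype=np.int64)
    return int(np.abs(a - b).sum())

print(sad_1d([1, 3, 2], [2, 2, 3]))  # 3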

Key Properties

The sum of absolute differences (SAD), defined as \sum_{i=1}^n |x_i - y_i| for vectors x and y in \mathbb{R}^n, is a convex function of its arguments. This convexity arises because the absolute value function |\cdot| is convex, and the sum of convex functions is convex, ensuring that any local minimum is global and facilitating reliable optimization in problems where SAD serves as an objective.

SAD satisfies the triangle inequality: \mathrm{SAD}(x, z) \leq \mathrm{SAD}(x, y) + \mathrm{SAD}(y, z) for any vectors x, y, z \in \mathbb{R}^n. This property follows directly from the triangle inequality for the absolute value, |a - c| \leq |a - b| + |b - c|, applied componentwise and then summed over i = 1 to n, making SAD a true metric consistent with the L1 norm.

Unlike measures based on squared differences, SAD exhibits reduced sensitivity to outliers because it penalizes deviations linearly rather than quadratically; large errors contribute proportionally to the total without amplification, enhancing robustness in noisy scenarios. Computing SAD requires only O(n) time for n elements, involving a linear pass to take absolute differences and sum them, with no multiplications needed, which underscores its efficiency for real-time applications. However, SAD is non-differentiable at points where any component x_i = y_i, due to the kink in the absolute value function at zero; this necessitates subgradient methods or smoothed approximations for gradient-based optimization techniques.
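A brief numerical check of these properties, under the illustrative assumption of random real vectors; the smoothed variant shown is one common surrogate for gradient-based methods, not a canonical definition:

import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 100))

def sad(a, b):
    return np.abs(a - b).sum()

# Metric property: SAD(x, z) <= SAD(x, y) + SAD(y, z)
assert sad(x, z) <= sad(x, y) + sad(y, z)

# Smoothed approximation sqrt((a-b)^2 + eps) is differentiable everywhere,
# sidestepping the kink of |a - b| at zero for gradient-based optimization
def smoothed_sad(a, b, eps=1e-8):
    d = a - b
    return np.sqrt(d * d + eps).sum()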

Computation and Algorithms

Basic Calculation

The sum of absolute differences (SAD), also known as the L₁ distance or Manhattan distance, measures the dissimilarity between two sequences of equal length by summing the absolute differences of their corresponding elements. To compute SAD, align the sequences element-wise, calculate the absolute difference for each pair of elements at the same position, and then sum these differences across all positions. Mathematically, for two sequences \mathbf{x} = (x_1, x_2, \dots, x_n) and \mathbf{y} = (y_1, y_2, \dots, y_n) of length n, the SAD is given by \text{SAD}(\mathbf{x}, \mathbf{y}) = \sum_{i=1}^{n} |x_i - y_i|. This direct summation provides a simple metric of dissimilarity, where smaller values indicate greater similarity between the sequences.

A numerical example illustrates the process: consider the sequences \mathbf{x} = [1, 3, 2] and \mathbf{y} = [2, 2, 3]. The pairwise absolute differences are |1 - 2| = 1, |3 - 2| = 1, and |2 - 3| = 1. Summing these yields \text{SAD}(\mathbf{x}, \mathbf{y}) = 1 + 1 + 1 = 3. This calculation highlights the metric's sensitivity to deviations in individual elements, making it useful for assessing alignment in discrete data.

The SAD is defined only for sequences of equal length, as the pairwise comparison requires corresponding elements at each position. When sequences have mismatched lengths, practical approaches include truncating the longer sequence to match the shorter one or padding the shorter sequence (e.g., with zeros) to equalize lengths before computation; these methods adapt the standard definition to real-world scenarios.

Special cases arise with edge conditions. For zero-length sequences (empty sets), the SAD is 0 by mathematical convention, as the summation evaluates to the empty sum. If the sequences are identical (x_i = y_i for all i), each absolute difference is 0, resulting in SAD = 0. Similarly, for two all-zero sequences, SAD = 0, consistent with the identical case. These outcomes follow directly from the formula and underscore the metric's behavior at extremes.
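The following sketch implements the basic calculation together with the zero-padding adaptation for mismatched lengths described above (the padding policy is an application choice, not part of the standard definition):

def sad(x, y, pad_value=0):
    # Pad the shorter sequence with pad_value so lengths match,
    # then sum the element-wise absolute differences
    n = max(len(x), len(y))
    x = list(x) + [pad_value] * (n - len(x))
    y = list(y) + [pad_value] * (n - len(y))
    return sum(abs(a - b) for a, b in zip(x, y))

print(sad([1, 3, 2], [2, 2, 3]))  # 1 + 1 + 1 = 3
print(sad([], []))                # empty sum -> 0
print(sad([5, 5], [5, 5]))        # identical sequences -> 0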

Efficient Implementation Techniques

Computing the sum of absolute differences (SAD) efficiently is crucial for large-scale applications such as real-time motion estimation, where naive implementations become prohibitive due to high computational demands. One foundational technique involves the use of integral images, also known as summed-area tables, which precompute prefix sums to enable constant-time queries for rectangular region sums. Originally proposed for texture mapping, this allows the sum of values over any rectangular area to be retrieved in O(1) time after O(N) preprocessing, where N is the number of pixels in the image.

In the context of 2D SAD for block matching, integral images are applied to precomputed absolute-difference maps between reference and target blocks. For two images u and v, the SAD for a block offset is given by:

\text{SAD}(p, q) = \sum_{t \in B(0, r)} |u(p + t) - v(q + t)|

where B(0, r) denotes the block of size (2r+1) × (2r+1). To optimize, compute the absolute-difference image d(x, y) = |u(x, y) - v(x, y)|, then construct an integral image ii such that ii(x, y) represents the sum of d over the rectangle from (1,1) to (x,y). The sum over any rectangle (x1, y1) to (x2, y2) is then ii(x2, y2) - ii(x1-1, y2) - ii(x2, y1-1) + ii(x1-1, y1-1). This reduces the per-offset SAD computation from O(r^2) to O(1), transforming the overall block matching complexity from O(M N r^2) (for M offsets and N pixels) to O(M N + N) with preprocessing.

The construction of the integral image can be implemented efficiently using a scalar accumulator approach for speed, as follows in pseudocode:
# assumes ii(x, 0) = 0 for all x
for y = 1 to height:
    s = 0                          # running sum of row y
    for x = 1 to width:
        s += d(x, y)               # prefix sum of row y up to column x
        ii(x, y) = ii(x, y-1) + s  # sum above row y, plus current row prefix
This method processes the image row-by-row, achieving up to 2x faster computation than recursive alternatives and enabling parallelization across horizontal bands. For precision in low-bit-depth images, preprocessing steps like subtracting block means can mitigate rounding errors in the sums.

While full search exhaustively evaluates all candidate offsets to minimize SAD, yielding optimal matches but with quadratic complexity in the search window size, reduced-resolution techniques offer approximations for faster processing. Logarithmic search, for instance, performs a multi-stage coarse-to-fine evaluation, starting with large step sizes and refining to pixel accuracy, evaluating only 5–9 points per stage instead of the full window. The algorithm initializes the step size s = ⌈log₂(d+1)⌉ for maximum displacement d, checks cross-pattern points around the current best, and halves s until s = 1, then evaluates a final neighborhood. This reduces average computations per block to about 14 for typical parameters, compared to hundreds for full search, while maintaining comparable peak signal-to-noise ratio (PSNR) in video sequences. Subsampling further approximates the SAD by downscaling images or blocks before computation, trading minor accuracy loss for proportional reductions in computational cost. These methods are particularly valuable in motion estimation for video, where real-time constraints demand balancing speed and quality.

Hardware accelerations leverage the parallelism inherent in SAD. On CPUs, single instruction, multiple data (SIMD) instructions such as SSE2 and AVX enable vectorized absolute difference and summation. For example, the PSADBW instruction computes packed sums of absolute differences on 16-byte vectors, processing up to 16 pixels simultaneously, while PADDD accumulates results. In high-efficiency video coding (HEVC) motion estimation, these yield up to a 50% reduction in SAD complexity and a 30–35% overall encoding speedup on x86 architectures. On GPUs, CUDA implementations parallelize block matching across thousands of threads, assigning each macroblock or offset to a thread block for concurrent SAD evaluation. A shared-memory strategy minimizes global access latency, achieving high throughput on commodity hardware for full-search motion estimation.

Overall, these techniques shift SAD computation from the O(n^2) regime of naive block matching to near-linear O(n) regimes with preprocessing and parallelism, enabling scalable deployment in resource-constrained environments.
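A NumPy sketch of the integral-image optimization for one candidate offset (the helper names are illustrative; coordinates are 1-based to match the formulas above):

import numpy as np

def integral_image(d):
    # Zero-padded 2D prefix sums: ii[x, y] holds the sum of d over
    # the rectangle (1,1)..(x,y), with ii[0, :] = ii[:, 0] = 0
    ii = np.zeros((d.shape[0] + 1, d.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = d.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x1, y1, x2, y2):
    # O(1) sum of d over the inclusive rectangle (x1,y1)..(x2,y2)
    return ii[x2, y2] - ii[x1 - 1, y2] - ii[x2, y1 - 1] + ii[x1 - 1, y1 - 1]

# For one offset, build d = |u - v| once; every block SAD is then
# a four-term lookup instead of an O(r^2) loop
u = np.random.randint(0, 256, (64, 64)).astype(np.int64)
v = np.random.randint(0, 256, (64, 64)).astype(np.int64)
ii = integral_image(np.abs(u - v))
print(rect_sum(ii, 1, 1, 8, 8))  # SAD of the top-left 8x8 block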

Applications in Signal Processing

Motion Estimation in Video

In video coding, the block-matching algorithm employs the sum of absolute differences (SAD) as a primary metric for motion estimation, which predicts inter-frame motion to reduce temporal redundancy and improve compression efficiency. The current frame is divided into macroblocks, typically of size 16×16 pixels in standards like H.264/AVC, though variable sizes down to 4×4 are supported for finer granularity. For each macroblock in the current frame, the algorithm searches for the most similar matching block within a defined search window in the reference frame (usually the previous frame), selecting the candidate that minimizes the SAD value. This process enables efficient temporal prediction by exploiting the correlation between consecutive frames.

The output of the block-matching process is a motion vector (d_x, d_y) for each macroblock, representing the horizontal and vertical displacement to the best-matching position in the reference frame where SAD is minimized. These motion vectors are encoded and transmitted alongside the residual frame (the difference between the original and predicted blocks), allowing the decoder to reconstruct the predicted frame by shifting and adding the reference block. This approach significantly reduces the data required for transmission, as only the residuals and vectors need to be coded rather than full frames. In practice, full search within the window evaluates all possible displacements, though efficient techniques like diamond search can approximate the minimum SAD with fewer computations.

SAD is a core component of motion estimation in major video coding standards, including H.264/AVC (ITU-T H.264, 2003) and its successor HEVC (H.265, 2013), where it serves as the metric for integer-pixel motion vector selection. In H.264/AVC, SAD is computed during full-pel motion estimation within the reference software (JM), while sub-pixel refinement may use the sum of absolute transformed differences (SATD). HEVC extends this with larger block sizes up to 64×64 and more reference frames, but retains SAD for its balance of accuracy and speed in block matching.

For illustration, consider a 4×4 example: the SAD for a candidate displacement (d_x, d_y) is calculated as \sum_{i=0}^{3} \sum_{j=0}^{3} |C(i,j) - R(i+d_x, j+d_y)|, where C is the current block and R is the reference block; the minimum over candidate vectors yields the motion estimate. This integration enables real-time encoding in hardware implementations.

SAD's computational efficiency stems from requiring only absolute values and summations, avoiding the multiplications needed for the sum of squared differences (SSD), which provides a notable speed advantage in real-time video encoding applications, particularly on resource-constrained hardware. Motion estimation using SAD contributes to substantial compression gains; for instance, HEVC achieves approximately 50% bit-rate reduction over H.264/AVC at equivalent quality, largely due to refined block-matching predictions. This demonstrates SAD's role in balancing quality and bitrate.

Despite its effectiveness, SAD-based block matching can introduce visible block artifacts, especially in scenes with non-translational motion (e.g., rotation or zoom), where discrete block boundaries fail to capture smooth motion fields. This limitation motivates complementary techniques in modern codecs, such as variable block partitioning, sub-pixel interpolation, and deblocking filters, to mitigate artifacts and improve perceptual quality.
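A full-search sketch in NumPy illustrating the minimum-SAD motion vector selection described above (block size, search range, and variable names are illustrative):

import numpy as np

def full_search(cur_block, ref, cx, cy, search_range):
    # Exhaustively test every displacement (dx, dy) in the window and
    # return the motion vector minimizing SAD against the reference frame
    n = cur_block.shape[0]
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            x, y = cx + dx, cy + dy
            if x < 0 or y < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue  # candidate block would fall outside the frame
            cand = ref[y:y + n, x:x + n].astype(np.int64)
            s = np.abs(cur_block.astype(np.int64) - cand).sum()
            if best_sad is None or s < best_sad:
                best_sad, best_mv = s, (dx, dy)
    return best_mv, best_sad

In a real encoder this search would run per macroblock, with the winning (dx, dy) entropy-coded alongside the residual block.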

Template Matching in Images

Template matching in images utilizes the sum of absolute differences (SAD) to identify and locate a predefined template within a larger search image by measuring pixel-wise dissimilarity. The process begins by sliding the template across the search image, aligning it with every possible position where the template fits entirely within the image boundaries. At each position, SAD is computed as the sum of the absolute differences between the intensities of corresponding pixels in the template and the overlapping region of the search image; the position yielding the minimum SAD value indicates the best match, and a threshold is applied to determine if the match is sufficiently strong to confirm detection. This approach is computationally straightforward and widely implemented in computer vision libraries for real-time applications.

In object detection tasks, SAD-based template matching enables the localization of specific patterns, such as human faces in digital photographs or barcodes in scanned documents, by comparing the template against candidate regions in the image. For instance, in face detection, templates derived from average facial features can scan input images to highlight regions with low SAD scores, facilitating subsequent recognition steps. Similarly, barcode detection employs line-pattern templates to identify encoded symbols amid cluttered backgrounds, with SAD providing a similarity score for verification. To achieve scale and rotation invariance, techniques such as resizing the template across multiple scales or applying rotational offsets before SAD computation are integrated, enhancing detection reliability in varied imaging conditions.

A representative example involves searching for a 10×10 template, such as a small facial outline, within a 100×100 image. The template is positioned at each (i, j) coordinate where 1 ≤ i ≤ 91 and 1 ≤ j ≤ 91 to ensure full overlap, with SAD calculated for each alignment; the coordinate with the lowest SAD value is selected as the detected location, assuming it meets a threshold such as 20% of the maximum possible SAD for the template size. This efficiently narrows down matches but requires careful threshold selection to balance false positives and misses.

To address variations in object size, multi-scale SAD incorporates image pyramids, constructing successive lower-resolution versions of both the search image and template through Gaussian blurring and downsampling. Matching proceeds hierarchically: coarse SAD computations at low-resolution pyramid levels quickly eliminate unlikely regions, followed by refinement at higher resolutions around promising candidates, reducing overall computation while maintaining accuracy. Experimental evaluations of pyramid-based SAD show significant speedups compared to single-scale searches.

SAD offers trade-offs in accuracy, exhibiting robustness to minor lighting changes and outliers (owing to its linear penalization of differences, which avoids amplifying small intensity variations, unlike squared metrics), yet remaining sensitive to exact pixel alignments and substantial illumination shifts that alter global brightness. This makes SAD preferable for controlled environments but often necessitates preprocessing such as intensity normalization for broader robustness. Additionally, its outlier resistance supports reliable matching in images with noise or partial occlusions.
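A sliding-window sketch of SAD template matching with the thresholding step (the 20% threshold mirrors the example above; names are illustrative and 8-bit pixels are assumed):

import numpy as np

def match_template_sad(image, template, threshold_frac=0.2):
    # Slide the template over every position where it fully overlaps
    # the image; keep the (row, col) with the minimum SAD
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, None
    for i in range(ih - th + 1):
        for j in range(iw - tw + 1):
            window = image[i:i + th, j:j + tw].astype(np.int64)
            s = np.abs(window - template.astype(np.int64)).sum()
            if best is None or s < best:
                best, best_pos = s, (i, j)
    # Accept only if the best SAD falls below a fraction of the
    # maximum possible SAD (255 per pixel for 8-bit images)
    detected = best is not None and best < threshold_frac * 255 * th * tw
    return best_pos, best, detected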

Comparisons and Alternatives

Versus Sum of Squared Differences

The sum of squared differences (SSD) is a dissimilarity measure defined as the sum over all elements of the squared deviations between two corresponding values, given by \text{SSD} = \sum_{i} (x_i - y_i)^2, where x_i and y_i are the i-th elements of the input sequences or blocks being compared. This quadratic penalization amplifies large deviations, making SSD particularly sensitive to outliers in the data.

In contrast, the sum of absolute differences (SAD) corresponds to the L1 norm (Manhattan distance), while SSD aligns with the squared L2 norm (Euclidean distance); the L1 formulation treats all deviations linearly, rendering SAD more robust to outliers than the outlier-amplifying L2 approach. For instance, in the presence of erroneous large residuals, L1 minimization yields solutions closer to the true underlying structure, whereas L2 minimization can skew results toward accommodating the anomalies.

SAD finds primary use in robust matching tasks involving noisy data, such as motion estimation in video where pixel intensities may include transmission errors or noise, while SSD suits scenarios requiring smooth gradients, like least-squares optimization in signal modeling. In video coding, SAD's lower computational demand (avoiding the multiplications needed for squaring) supports real-time processing in bandwidth-constrained environments like video conferencing.

To illustrate the divergence under outliers, consider two 4-element vectors: a clean pair (1,2,3,4) and (1,2,3,5) (small deviation), versus an outlier-affected pair where the second vector becomes (1,2,3,20). The table below computes SAD and SSD for both cases:
Scenario              Vectors                     SAD    SSD
Clean (no outlier)    (1,2,3,4) vs (1,2,3,5)        1      1
With outlier          (1,2,3,4) vs (1,2,3,20)      16    256
The SSD value escalates dramatically with the outlier, highlighting its heightened sensitivity compared to SAD's linear growth. Historically, early video codecs such as H.261 (1988) and MPEG-1 (1993) favored SAD in motion estimation for its computational efficiency in block matching, enabling feasible integer-pel searches on limited hardware. Modern standards like HEVC (2013) incorporate hybrids, using both SAD and SSD in rate-distortion optimization to balance speed and precision across block sizes.
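The table's values can be reproduced directly:

import numpy as np

def sad(x, y):
    return np.abs(x - y).sum()

def ssd(x, y):
    return ((x - y) ** 2).sum()

a = np.array([1, 2, 3, 4])
print(sad(a, np.array([1, 2, 3, 5])), ssd(a, np.array([1, 2, 3, 5])))    # 1 1
print(sad(a, np.array([1, 2, 3, 20])), ssd(a, np.array([1, 2, 3, 20])))  # 16 256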

Versus Mean Absolute Error

The Mean Absolute Error (MAE) is a statistical measure defined as the average of the absolute differences between predicted and actual values across a dataset of size n, given by the formula \text{MAE} = \frac{1}{n} \sum_{i=1}^{n} |x_i - y_i|, where x_i and y_i represent corresponding elements from the two sequences. This normalization by n distinguishes MAE from the Sum of Absolute Differences (SAD), which computes the total unnormalized sum \sum_{i=1}^{n} |x_i - y_i|. Consequently, SAD is scale-dependent, as its magnitude varies with the dataset size, making direct comparisons across differing lengths challenging and potentially misleading in terms of overall performance. In contrast, MAE provides a scale-invariant error per observation, enhancing interpretability by yielding a consistent scale regardless of n, which facilitates easier assessment of prediction accuracy.

In practical applications, SAD is commonly employed in domains like video compression where block sizes are fixed, such as in block-matching algorithms that compare blocks of predetermined dimensions (e.g., 16×16 pixels) to minimize computational overhead during encoding. Here, the lack of normalization is advantageous because all comparisons occur within uniform block sizes, ensuring SAD directly reflects the cumulative mismatch without needing adjustment. Conversely, MAE finds greater use in statistical model evaluation, where datasets may vary in size, allowing for standardized benchmarking of predictive models across experiments. For instance, in machine learning, MAE's averaging property supports robust comparisons of model performance, particularly when outlier sensitivity is a concern, as it treats all deviations linearly without amplification.

The relationship between the two metrics is straightforward: \text{MAE} = \frac{\text{SAD}}{n}, enabling easy conversion when the dataset length is known. This normalization can alter performance rankings between models or methods. For example, consider two prediction scenarios: one with SAD = 20 over n = 10 observations (MAE = 2.0), and another with SAD = 15 over n = 5 observations (MAE = 3.0). SAD would rank the second as superior due to its lower total error, but MAE ranks the first higher by emphasizing per-observation accuracy, highlighting how unnormalized totals may favor smaller datasets in direct comparisons.

Regarding advantages and limitations, SAD's computation relies solely on integer absolute differences and summations, avoiding the division required for MAE, which often produces fractional values and demands additional arithmetic precision; this makes SAD preferable for hardware implementations like FPGA-based video processors, where integer-only operations reduce power consumption and circuit complexity. However, this simplicity comes at the cost of reduced comparability across datasets of varying sizes. MAE, by normalizing, excels in cross-dataset evaluations, enabling fair assessments of model generalizability in diverse statistical contexts, though it may introduce minor overhead in resource-constrained environments due to the averaging step.
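The ranking flip in the example above follows directly from the conversion MAE = SAD / n:

def mae(sad_total, n):
    # Normalize the total absolute error by the number of observations
    return sad_total / n

# Scenario 1: SAD = 20 over n = 10; Scenario 2: SAD = 15 over n = 5
print(mae(20, 10))  # 2.0 -> ranked first by MAE
print(mae(15, 5))   # 3.0, despite the lower total SAD of 15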

Extensions and Variants

Weighted Sum of Absolute Differences

The weighted sum of absolute differences (WSAD) is a variant of the sum of absolute differences that assigns varying importance to individual elements through non-uniform weights, defined as \sum_{i} w_i |x_i - y_i|, where x_i and y_i are corresponding elements from two sequences or signals, and w_i \geq 0 are weights that modulate the contribution of each difference. This generalizes the basic sum of absolute differences, which corresponds to the special case where all w_i = 1.

Weights w_i can be predefined based on spatial or structural properties; for instance, Gaussian weights emphasize central elements in a window, as in the Gaussian weighted sum of absolute differences (GWSAD), where w_{i,j} = \exp\left(-\frac{i^2 + j^2}{2\sigma^2}\right) for a patch, with \sigma controlling the spread. In adaptive schemes, weights are computed dynamically from local image features. Computation involves first calculating the absolute differences |x_i - y_i|, multiplying each by the corresponding w_i, and summing the results, maintaining linear O(n) complexity identical to the unweighted case, where n is the number of elements. This process is straightforward to implement in pixel-wise operations for images or blocks in video, with weights often precomputed or looked up from tables for efficiency.

In video processing, WSAD enhances motion estimation by incorporating weights that reflect motion vector reliability; for instance, in frame rate up-conversion, normalized weighted SAD values from neighboring blocks assess bidirectional motion consistency, enabling mode decisions that fill gaps in low-frame-rate sequences and improve PSNR by up to 5 dB. Weights here may derive from block activity or vector agreement, prioritizing reliable estimates in textured or occluded regions. WSAD improves matching accuracy in non-uniform data, such as textured regions or scenes with varying illumination, by focusing computation on relevant elements and mitigating the averaging effects of uniform weights, leading to more robust performance in homogeneous areas and at boundaries.
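A sketch of the Gaussian-weighted variant for square patches (the value of \sigma and the patch handling are illustrative):

import numpy as np

def gwsad(patch_a, patch_b, sigma=1.5):
    # Gaussian weights centered on a (2r+1)x(2r+1) patch:
    # w_{i,j} = exp(-(i^2 + j^2) / (2 * sigma^2)),
    # so central pixels dominate the weighted sum of absolute differences
    r = patch_a.shape[0] // 2
    i, j = np.mgrid[-r:r + 1, -r:r + 1]
    w = np.exp(-(i**2 + j**2) / (2.0 * sigma**2))
    diff = np.abs(patch_a.astype(np.float64) - patch_b.astype(np.float64))
    return (w * diff).sum()

Setting sigma very large makes the weights nearly uniform, recovering the ordinary SAD up to a constant factor.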

Multidimensional SAD

The sum of absolute differences (SAD) extends naturally to multidimensional settings by aggregating absolute differences across all elements of d-dimensional arrays, or tensors, X and Y, defined over a common index grid. Formally, for arrays of size n_1 \times \cdots \times n_d,

\text{SAD}(X, Y) = \sum_{i_1=1}^{n_1} \cdots \sum_{i_d=1}^{n_d} |X_{i_1, \dots, i_d} - Y_{i_1, \dots, i_d}|.

This L1-based metric quantifies dissimilarity between multidimensional signals, such as volumetric data or feature spaces, preserving the robustness of absolute differences to outliers while scaling to higher dimensions.

In 3D medical imaging, multidimensional SAD is widely applied for volume registration, aligning scans from modalities like MRI, CT, and PET to enable accurate diagnosis and treatment planning; for instance, it measures voxel-wise similarity in non-rigid transformations to correct for patient motion or deformation. In hyperspectral image analysis, which treats data as 3D cubes (two spatial dimensions plus numerous spectral bands), SAD evaluates reconstruction errors across channels in super-resolution tasks, supporting applications in remote sensing and material identification by highlighting spatial and spectral discrepancies.

A primary challenge in multidimensional SAD is the growth in computational demands, which scales exponentially with the number of dimensions d, as the operation requires processing O(\prod_{k=1}^d n_k) elements, leading to high costs for large volumes like 512³ CT datasets. This is often mitigated through separable filtering approaches, where the sum is decomposed into independent 1D accumulations along each dimension (equivalent to applying box filters sequentially) to reduce complexity from O(n^d) per window to near-linear in practice for fixed window sizes. For example, in CT scan analysis, 3D SAD facilitates voxel matching for intra-modality registration, achieving low target registration errors in clinical evaluations.

When d = 2, multidimensional SAD reduces to the conventional form for images, while the tensor notation offers a unified, formal definition adaptable to arbitrary dimensions beyond spatial volumes, such as spatiotemporal sequences or multichannel signals. Efficient techniques like multidimensional integral images can further accelerate these computations across dimensions.
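Because the definition only sums over matching indices, a single NumPy routine covers any dimensionality (the 32³ volumes below are illustrative):

import numpy as np

def sad_nd(x, y):
    # Valid for tensors of any rank with identical shapes:
    # vectors (d=1), images (d=2), volumes (d=3), and beyond
    assert x.shape == y.shape
    return int(np.abs(x.astype(np.int64) - y.astype(np.int64)).sum())

vol_a = np.random.randint(0, 256, (32, 32, 32))  # e.g., a small image volume
vol_b = np.random.randint(0, 256, (32, 32, 32))
print(sad_nd(vol_a, vol_b))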

    Jun 1, 2016 · This approach has been successfully used in both CT-MRI registration ... sum of absolute differences (SAD) between vectors to calculate ...Experiments · Brainweb And Rire · Clinical Mri And Us Image<|control11|><|separator|>