
Histogram equalization

Histogram equalization is a fundamental technique in digital image processing designed to enhance the contrast of an image by redistributing its intensity values to achieve a more uniform distribution across the available gray levels. This automatic method transforms the input intensities using a mapping derived from the cumulative distribution function (CDF) of the image histogram, effectively stretching the dynamic range and making details more visible in low-contrast regions. The process begins with computing the histogram, which represents the frequency of each intensity level in the image, followed by normalization to obtain the probability density function. The discrete transformation function is then given by s_k = (L-1) \sum_{j=0}^{k} p_r(r_j), where L is the number of gray levels, p_r(r_j) is the normalized histogram value for intensity r_j, and s_k is the corresponding output intensity for input r_k. This monotonically increasing function ensures that the output histogram approximates uniformity, maximizing the use of the full intensity range from 0 to L-1. In practice, histogram equalization is widely applied in preprocessing for tasks like medical imaging, where it improves visibility of structures in X-ray or MRI scans by enhancing global contrast without user-defined parameters. For color images, the technique is often restricted to the luminance or intensity channel (e.g., the intensity component in the HSI color space) to preserve hue and saturation while avoiding unnatural color shifts. Although highly effective for images with narrow intensity distributions, it can amplify noise or introduce artifacts in high-contrast areas, prompting the development of adaptive variants for local enhancement.

Overview

Definition and Purpose

Histogram equalization is a method in image processing that adjusts the contrast of an image by modifying its pixel intensity distribution to utilize the full dynamic range. This technique spreads out the most frequent intensity values, thereby enhancing areas of lower local contrast and improving overall image visibility. The primary purpose of histogram equalization is to enhance the global contrast of images, particularly those where intensities are clustered in a narrow range, making underlying features more discernible without introducing additional noise or losing original information. It achieves this by redistributing intensity values to approximate a uniform distribution, which results in a more even spread across the available intensity range, such as 0 to 255 for 8-bit images. Developed in the 1970s as part of early digital image processing techniques, histogram equalization was initially applied for preprocessing low-contrast images in medical and scientific imaging contexts. Early implementations, such as those explored for radiographic enhancement, demonstrated its utility in handling uneven illumination and improving detail in such domains.

Benefits and Limitations

Histogram equalization offers several key benefits in image enhancement, primarily by improving overall contrast through the redistribution of intensity values to approximate a uniform distribution. This process effectively reveals hidden details in underexposed or overexposed regions, making it particularly useful for images captured under non-uniform lighting conditions, such as medical X-rays or outdoor photographs with varying illumination. Additionally, the technique automates contrast adjustment without requiring manual parameter tuning, providing a straightforward, parameter-free approach that enhances visibility across a broad range of applications. From a computational perspective, histogram equalization is highly efficient, operating in O(N) time complexity where N is the number of pixels, which makes it suitable for real-time processing on resource-constrained devices. In many cases, it also preserves edges and textures by maintaining the relative intensity relationships within local areas, avoiding significant distortion of structural features. Despite these advantages, histogram equalization has notable limitations that can affect its suitability for certain images. It can amplify noise in flat or homogeneous regions, where subtle variations are stretched, leading to grainy artifacts that degrade perceived quality. Furthermore, the method often causes over-enhancement, resulting in unnatural appearances, such as washed-out colors or halo effects around high-contrast boundaries, particularly in consumer photographs. Histogram equalization performs poorly on images with bimodal histograms or uneven illumination, where the global transformation fails to balance disparate intensity distributions, potentially suppressing details in one mode while exaggerating the other. It is not ideal for already well-contrasted images, as applying it can introduce unnecessary alterations without benefit; for scenarios requiring local enhancement, adaptive variants may be more appropriate to address regional variations without global over-correction.

Theoretical Foundation

Image Histograms

In digital image processing, particularly for grayscale images, an image histogram serves as a graphical representation of the frequency distribution of pixel intensities, illustrating how many pixels exhibit each possible intensity value. For an 8-bit grayscale image, intensity levels typically range from 0 (black) to 255 (white), resulting in 256 discrete bins, each corresponding to one intensity level r_k where k = 0, 1, \dots, L-1 and L = 256. This histogram provides a visual summary of the image's intensity distribution, enabling analysis of its tonal range and overall quality. The computation of an image histogram begins by counting the occurrences of each intensity level across all pixels in the image. For an image containing N total pixels, the histogram h(r_k) is defined as the number of pixels n_k that have intensity r_k, such that h(r_k) = n_k. To facilitate statistical interpretation, the normalized histogram, or probability density estimate, is given by p(r_k) = \frac{n_k}{N}, where the sum of p(r_k) over all k equals 1, treating the intensities as a probability distribution. This process assumes a grayscale image, where each pixel has a single intensity value, and forms the basis for statistical analysis in image enhancement techniques. Key properties of histograms include their ability to reveal structural characteristics of the image. The cumulative histogram, computed as the running sum of frequencies up to a given level, s(r_k) = \sum_{j=0}^{k} h(r_j), indicates the total number of pixels with intensities less than or equal to r_k, providing insight into the distribution's spread. Histograms that are skewed—such as those clustered toward low intensities (dark images) or high intensities (bright images)—or narrow with a concentrated peak signal poor contrast due to limited use of the available dynamic range and clustering of pixel values, which obscures details. These properties make histograms foundational for identifying images that benefit from enhancement methods like histogram equalization.
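As a concrete illustration of this computation, the following minimal Python sketch (assuming NumPy is available; the random image and variable names are illustrative) builds the raw and normalized histograms:

import numpy as np

# Example 8-bit grayscale image (random data for demonstration)
rng = np.random.default_rng(42)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

L = 256
hist = np.bincount(img.ravel(), minlength=L)  # h(r_k): pixel count at each level
p = hist / img.size                           # normalized histogram p(r_k) = n_k / N
print(p.sum())                                # 1.0: an empirical probability distribution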

Probability Density Function and Cumulative Distribution Function

In image processing, the probability density function (PDF) characterizes the distribution of pixel intensities within an image. For continuous intensity values r, typically normalized to the range [0, 1], the PDF p(r) represents the relative frequency of occurrence of intensity r, such that the probability of a pixel having an intensity between r and r + dr is p(r) \, dr. This function integrates to 1 over the full intensity range, providing a normalized measure of intensity distribution. For discrete digital images with L intensity levels r_k where k = 0, 1, \dots, L-1, the PDF is approximated using the normalized histogram as p(r_k) = \frac{h(r_k)}{N}, where h(r_k) is the number of pixels with intensity r_k and N is the total number of pixels in the image. This approximation treats the histogram frequencies as empirical probabilities, enabling probabilistic analysis of image data. The cumulative distribution function (CDF) extends the PDF by accumulating probabilities up to a given intensity. In the continuous case, it is defined as s(r) = \mathrm{CDF}(r) = \int_{0}^{r} p(u) \, du, which yields values from 0 to 1 as r spans the intensity range. For the discrete case, the CDF at level k is s_k = \mathrm{CDF}(r_k) = \sum_{i=0}^{k} p(r_i). Both forms are monotonically non-decreasing, ensuring they preserve the order of intensities. In histogram equalization, the CDF plays a central role as a mapping function due to its monotonicity and range from 0 to 1, which allows transformation of the original intensity distribution into one that approximates uniformity. This redistribution enhances contrast by spreading intensities more evenly. A defining property of the equalized image is that its resulting CDF becomes approximately linear, corresponding to a constant (uniform) PDF across the output intensity levels, thereby achieving the desired equalization effect.
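Building on the histogram, the discrete PDF and CDF follow from a single cumulative sum; the short NumPy sketch below (with an illustrative example histogram) mirrors the discrete formulas:

import numpy as np

hist = np.array([5, 6, 5, 0, 0, 0, 0, 0])  # example histogram h(r_k) for L = 8
p = hist / hist.sum()                      # discrete PDF: p(r_k) = h(r_k) / N
cdf = np.cumsum(p)                         # discrete CDF: s_k = sum_{i=0}^{k} p(r_i)
print(cdf)                                 # monotonically non-decreasing, ends at 1.0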

Equalization Algorithm

Transformation Function Derivation

The transformation function in histogram equalization is designed to map the input intensity levels such that the output image has a uniform intensity distribution, thereby maximizing contrast utilization across the available dynamic range. Consider a continuous random variable r representing the input intensity, with probability density function p_r(r) defined over the interval [0, 1] for normalized intensities. The goal is to find a transformation s = T(r), where s is also in [0, 1], such that the output density p_s(s) = 1 for all s \in [0, 1]. By the probability integral transform theorem, if r follows a continuous distribution with cumulative distribution function (CDF) G(r) = \int_0^r p_r(u) \, du, then the transformed variable s = G(r) follows a uniform distribution on [0, 1]. This follows from differentiating the CDF of s: let s = G(r), so r = G^{-1}(s) assuming G is strictly increasing and invertible; then p_s(s) = p_r(G^{-1}(s)) \cdot \left| \frac{d}{ds} G^{-1}(s) \right| = p_r(G^{-1}(s)) \cdot \frac{1}{p_r(G^{-1}(s))} = 1. Thus, the transformation T(r) = G(r) = \int_0^r p_r(u) \, du achieves the desired uniform output distribution. For discrete images with L intensity levels (typically L = 256), the intensities are scaled to integers from 0 to L-1. The continuous transformation is approximated by normalizing the histogram probabilities and applying the discrete CDF: T(r_k) = (L-1) \sum_{j=0}^k \frac{n_j}{N}, where r_k is the k-th intensity level, n_j is the number of pixels at level j, and N is the total number of pixels. This form approximates the continuous integral via summation of normalized histogram bins, yielding T(r) values in [0, L-1]. The function T(r) is monotonically non-decreasing because the CDF G(r) is non-decreasing by definition, and strictly increasing where p_r(r) > 0, ensuring that the order of pixel intensities is preserved and no information is lost due to mapping multiple inputs to the same output unless inherently tied in the input distribution.
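As a worked instance of this derivation, take the illustrative density p_r(r) = 2r on [0, 1]. The transformation is T(r) = \int_0^r 2u \, du = r^2, so s = r^2 and r = \sqrt{s}. Substituting into the change-of-variables formula gives p_s(s) = p_r(\sqrt{s}) \cdot \left| \frac{d}{ds} \sqrt{s} \right| = 2\sqrt{s} \cdot \frac{1}{2\sqrt{s}} = 1 for s \in [0, 1], confirming that the CDF-based mapping yields a uniform output density.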

Step-by-Step Process

The histogram equalization algorithm proceeds through a series of procedural steps to remap the intensity values of an image, aiming to produce a more uniform distribution across the intensity range. This process leverages the image's histogram to derive a monotonic, non-decreasing transformation function that spreads out the pixel intensities, thereby enhancing overall contrast without requiring prior knowledge of the image content. The steps are typically applied to grayscale images with discrete intensity levels ranging from 0 to L-1, where L is the number of possible intensity levels (commonly 256 for 8-bit images).
  1. Compute the histogram: First, calculate the histogram h(r_k) for each intensity level k = 0, 1, \dots, L-1, where h(r_k) represents the number of pixels in the image that have intensity value r_k. This step quantifies the frequency of intensities in the original image.
  2. Derive the probability density function (PDF) and cumulative distribution function (CDF): Normalize the histogram to obtain the PDF by dividing each bin count by the total number of pixels N: p(r_k) = \frac{h(r_k)}{N}. Then, compute the CDF by taking the cumulative sum up to level k: \text{CDF}(r_k) = \sum_{j=0}^{k} p(r_j). The CDF serves as the basis for the intensity transformation, as it is inherently monotonic and spans from 0 to 1.
  3. Normalize the CDF to create the mapping table: Scale the CDF values to the full intensity range by multiplying by L-1 and rounding to the nearest integer to ensure valid discrete output levels: s_k = \mathrm{round}\left((L-1) \cdot \text{CDF}(r_k)\right). This generates a lookup table that maps each original intensity r_k to a new intensity s_k, approximating a uniform distribution.
  4. Apply the mapping to the image pixels: Traverse each pixel in the original image and replace its intensity value r with the corresponding mapped value s_r from the lookup table. This step produces the equalized output image.
The resulting equalized image exhibits an approximately uniform histogram, which redistributes the intensity values to utilize the full dynamic range more effectively, often revealing details that were obscured in low-contrast regions of the original.
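The four steps above translate directly into a compact implementation; the following Python sketch (a minimal illustration using NumPy, not a library API) performs all four in order:

import numpy as np

def equalize_histogram(img):
    # img: 2-D uint8 array holding an 8-bit grayscale image
    L = 256
    hist = np.bincount(img.ravel(), minlength=L)    # Step 1: histogram
    cdf = np.cumsum(hist) / img.size                # Step 2: PDF, then CDF via cumulative sum
    lut = np.round((L - 1) * cdf).astype(np.uint8)  # Step 3: scaled, rounded mapping table
    return lut[img]                                 # Step 4: remap every pixel via the table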

Implementation Details

Pseudocode and Computation

The implementation of histogram equalization typically involves computing the image histogram, deriving the cumulative distribution function (CDF), and applying a transformation via a lookup table for efficient pixel remapping. A standard pseudocode outline for grayscale images with intensity levels from 0 to L-1 (where L is the number of discrete levels, such as 256 for 8-bit images) proceeds as follows:
Initialize histogram h[0..L-1] to 0
For each pixel with intensity r in the image:
    h[r] += 1
N = total number of pixels
cdf[0] = h[0] / N
For k = 1 to L-1:
    cdf[k] = cdf[k-1] + h[k] / N
For k = 0 to L-1:
    lut[k] = round( (L-1) * cdf[k] )
For each pixel with original value old_value:
    new_value = lut[old_value]
    Set pixel to new_value
This algorithm builds the histogram in a single pass over all N pixels, computes the normalized CDF as a running sum in O(L) time, generates the lookup table (LUT) in O(L) time, and remaps pixels in a second pass over N pixels, yielding a total complexity of O(N + L). Since L is typically small (e.g., 256 for 8-bit images), the complexity is effectively linear in the number of pixels, making it suitable for large images. To accelerate the CDF computation, cumulative-sum functions available in many libraries can be employed. In practice, libraries provide optimized implementations that handle integer arithmetic to prevent floating-point errors during CDF normalization and LUT creation. For instance, OpenCV's cv::equalizeHist() function processes 8-bit single-channel images by internally computing the histogram, CDF, and LUT, then applying the mapping, ensuring output values remain in the [0, 255] range without overflow. Similarly, MATLAB's histeq function equalizes the histogram of grayscale images or indexed images with colormaps, using integer-compatible operations to map intensities while minimizing discrepancies between input and target cumulative histograms; it defaults to 64 bins but supports custom bin counts and returns values scaled appropriately for the input data type (e.g., uint8). These functions leverage vectorized operations for speed, with notes in their documentation emphasizing integer arithmetic for discrete levels to maintain exact mappings.
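In practice these library calls are one-liners; for example, a typical use of OpenCV's documented equalizeHist function from Python looks like the following (the file names are illustrative):

import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # 8-bit single-channel input
equalized = cv2.equalizeHist(img)                    # histogram, CDF, and LUT handled internally
cv2.imwrite("equalized.png", equalized)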

Discrete Intensity Handling

In digital images, intensity levels are discrete and finite, typically ranging from 0 to L-1, where L is the number of possible gray levels (e.g., L = 256 for 8-bit images). The histogram equalization transformation for such discrete cases is defined as s_k = \mathrm{round}\left( (L-1) \sum_{j=0}^{k} p_r(r_j) \right), where r_k is the k-th input intensity level, p_r(r_j) = n_j / (MN) is the estimated probability of level r_j, with n_j as the pixel count of level r_j and MN as the total number of pixels, and \mathrm{round} denotes rounding to the nearest integer. The rounding step in this discrete mapping often causes multiple consecutive input levels r_k to collapse onto the identical output level s_k, especially when the cumulative distribution function (CDF) increments are small relative to the quantization step of 1. This clustering effect flattens portions of the transformation curve, causing the output histogram to deviate from uniformity, with some bins receiving more pixels than others and potentially empty bins appearing. In general, discrete histogram equalization does not produce a perfectly flat output histogram due to these quantization constraints. To address the non-uniformity arising from rounding, consistent application of the floor or round function is recommended to ensure a monotonic non-decreasing mapping, preserving the order of intensities. For improved uniformity, one common adjustment is to add 0.5 to the argument before truncating, effectively shifting the quantization boundaries to distribute output levels more evenly across the range. More advanced solutions involve histogram specification methods, which directly target a predefined uniform or near-uniform output histogram rather than relying solely on the CDF-derived mapping. Quantization effects become particularly pronounced in low-bit-depth images, where L is small (e.g., L = 16), as the equalization stretches the limited intensity range, often revealing or exacerbating banding artifacts—visible steps in smooth gradients that the sparse levels cannot resolve adequately. Although dithering techniques, which add controlled noise to break up bands, could mitigate these artifacts in principle, they are not typically integrated into standard histogram equalization implementations, as the method prioritizes global contrast over local smoothness. Due to the inherent discreteness of intensity levels, the resulting equalized histogram is only approximately uniform, with bin probabilities deviating from the ideal 1/L by amounts on the order of 1/L, reflecting the granularity of the quantization process. This approximation holds well for high L and large images, where statistical averaging minimizes irregularities, but underscores the theoretical limitations of applying continuous-domain concepts to discrete data.
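The level-merging behavior described above can be reproduced with a small numeric sketch (illustrative toy values, assuming NumPy): a skewed histogram whose upper levels carry tiny probabilities collapses several consecutive inputs onto one output.

import numpy as np

L = 16                                            # low bit depth: 4-bit image
hist = np.array([200, 1, 1, 1, 1, 1] + [0] * 10)  # toy histogram dominated by level 0
cdf = np.cumsum(hist) / hist.sum()
lut = np.round((L - 1) * cdf).astype(int)
print(lut[:6])                                    # [15 15 15 15 15 15]: levels 0-5 merge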

Extensions to Color and Advanced Variants

Color Image Application

Applying histogram equalization to color images requires careful consideration to prevent unwanted color distortions, as the technique, originally designed for grayscale images, can alter hues if applied naively across channels. The standard approach involves transforming the image into a color space that decouples luminance from chrominance, such as YUV, and equalizing only the luminance channel while leaving the chrominance components unchanged. This preserves the relative color information, enhancing contrast without shifting hues. In the YUV color space, the Y component captures the luminance, derived from the RGB values as Y = 0.299R + 0.587G + 0.114B, while U and V encode the color differences. Histogram equalization is performed exclusively on the Y channel using the transformation s_Y(k) = (L-1) \sum_{j=0}^{k} p_r(j), where L is the number of gray levels (typically 256 for 8-bit images), p_r(j) is the probability density of level j in the Y histogram, and the sum is the cumulative distribution function (CDF). The equalized Y values are then recombined with the original U and V components to reconstruct the image, often converting back to RGB for display. This method leverages the sensitivity of human vision to luminance, focusing enhancement on brightness distribution. An alternative is to apply histogram equalization independently to each RGB channel, treating them as separate grayscale images. While straightforward and computationally efficient, this can cause significant hue shifts because the channels are interdependent; for instance, over-amplifying one channel relative to others may desaturate or tint areas like skin tones, leading to unnatural appearances. The luminance-based method excels in maintaining color fidelity and perceptual naturalness, as chrominance remains intact, but it may underperform in boosting color saturation or vibrancy in low-contrast scenes. Conversely, independent channel equalization simplifies implementation without color space conversions but introduces artifacts like color bleeding or over-saturation in specific regions, making it less suitable for applications requiring accurate color reproduction.
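A typical luminance-only workflow can be sketched with OpenCV's documented color conversion and equalization routines (the file name is illustrative):

import cv2

bgr = cv2.imread("photo.png")                   # 8-bit, 3-channel color image
yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)      # decouple luminance (Y) from chrominance (U, V)
yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])   # equalize only the Y channel
result = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)   # recombine; hues are largely preserved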

Adaptive and Contrast-Limited Variants

Adaptive histogram equalization (AHE) addresses the limitations of global histogram equalization by performing local contrast enhancement on image regions with varying intensity distributions. It divides the input image into small, typically non-overlapping tiles, computes a histogram and cumulative distribution function (CDF) for each tile independently, and applies the equalization transformation derived from that local CDF to pixels within the tile. For pixels near tile boundaries, bilinear interpolation of the transformation functions from adjacent tiles is used to ensure smooth transitions and avoid artifacts. This approach enhances local details in areas of low contrast, such as shadows or highlights, making it particularly useful for images with non-uniform illumination, including medical and natural scenes. However, AHE can amplify noise in homogeneous regions due to the redistribution of intensities in local histograms, leading to over-enhancement where small intensity variations are exaggerated. To mitigate this, contrast-limited adaptive histogram equalization (CLAHE) was developed as an improvement, introducing a clip limit to control noise amplification. In CLAHE, before computing the CDF for each tile, histogram bins exceeding a predefined clip limit—typically 3 to 4 times the average bin count—are truncated, and the excess counts are redistributed uniformly across the histogram to prevent excessive stretching in flat regions. This limits noise amplification while preserving meaningful local contrasts, with common parameters including tile sizes of 8x8 pixels and a clip factor adjusted based on image content. Bilinear interpolation remains employed for seamless blending across tiles. Introduced in 1994 specifically for medical imaging applications like chest radiographs, CLAHE balances the detail enhancement of AHE with reduced artifacts, achieving better visualization of structures in low-contrast areas without the global uniformity imposed by standard methods. Unlike AHE, which may produce halo effects or excessive noise in uniform areas, CLAHE maintains perceptual quality by enforcing a contrast ceiling, making it widely adopted in fields requiring precise local enhancement, such as medical and scientific imaging.
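CLAHE is available directly in OpenCV; a minimal sketch using the documented createCLAHE API, with the clip limit and tile size mentioned above (the file name is illustrative), is:

import cv2

gray = cv2.imread("radiograph.png", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))  # clip factor and 8x8 tiles
enhanced = clahe.apply(gray)                                 # locally equalized, contrast-limited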

Examples

Numerical Example

To illustrate histogram equalization on a small example, consider a 4×4 grayscale image with intensity levels from 0 to 7 (L = 8 gray levels) and N = 16 pixels total. The input image, which has intensities clustered toward lower values for demonstration, is given by:
0  0  1  1
0  0  1  1
0  1  2  2
1  2  2  2
The corresponding histogram h(r), counting occurrences of each intensity r, is:
r      0  1  2  3  4  5  6  7
h(r)   5  6  5  0  0  0  0  0
The probability density function is p(r) = h(r)/N, yielding p(0) = 5/16 = 0.3125, p(1) = 6/16 = 0.375, p(2) = 5/16 = 0.3125, and p(r) = 0 for r ≥ 3. The cumulative distribution function CDF(r) = ∑_{k=0}^r p(k) is then computed as CDF(0) = 0.3125, CDF(1) = 0.6875, CDF(2) = 1, and CDF(r) = 1 for r ≥ 3. The transformation function for discrete intensities is T(r) = round[ CDF(r) × (L - 1) ], so T(0) = round(0.3125 × 7) = round(2.1875) = 2, T(1) = round(0.6875 × 7) = round(4.8125) = 5, T(2) = round(1 × 7) = 7, and T(r) = 7 for r ≥ 3. Applying T(r) to each pixel produces the equalized image:
2  2  5  5
2  2  5  5
2  5  7  7
5  7  7  7
The output histogram h'(r) is:
r      0  1  2  3  4  5  6  7
h'(r)  0  0  5  0  0  6  0  5
This results in intensities spread across the full 0–7 range, with the output more uniformly distributed than the input (which was concentrated at low values), enhancing contrast as intended by histogram equalization.
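The worked example can be verified with a few lines of NumPy (a checking sketch that reproduces the tables above):

import numpy as np

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 1, 2, 2],
                [1, 2, 2, 2]])
L = 8
hist = np.bincount(img.ravel(), minlength=L)  # [5 6 5 0 0 0 0 0]
cdf = np.cumsum(hist) / img.size              # [0.3125 0.6875 1. ... 1.]
T = np.round((L - 1) * cdf).astype(int)       # [2 5 7 7 7 7 7 7]
print(T[img])                                 # matches the equalized image above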

Visual Demonstration

To illustrate the practical effects of histogram equalization, consider a low-contrast grayscale photograph, such as the Moon image commonly used in image processing demonstrations, where details are obscured by uniform tones. In the original image, the histogram is narrowly concentrated, resulting in limited dynamic range and indistinct features. After applying histogram equalization, the intensity distribution spreads across the full range from black to white, enhancing global contrast and revealing previously hidden details without introducing significant noise in smooth areas. Another representative example is a chest X-ray image featuring dark, underexposed regions, such as a radiograph with obscured lung fields. The original histogram clusters toward lower intensities, compressing subtle anatomical details like bone edges or tissue boundaries. Histogram equalization redistributes the intensities to utilize the entire spectrum, uncovering these faint structures and improving diagnostic visibility, particularly in regions with inherently low contrast, while maintaining overall image integrity in non-noisy areas. Comparisons between original and equalized images often demonstrate measurable improvements, such as an increase in the standard deviation of intensities, which quantifies enhanced spread and perceptual sharpness—though this process can amplify noise in textured or artifact-prone regions, leading to visible graininess if the input contains substantial sensor noise. Such visual demonstrations are commonly generated using open-source software like ImageJ or scikit-image, where side-by-side views of the original image, equalized result, and corresponding histograms highlight the transformation's impact on contrast and detail recovery.
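Such a demonstration can be generated in a few lines with scikit-image's documented equalize_hist function, using the bundled low-contrast Moon sample image:

from skimage import data, exposure

img = data.moon()                 # classic low-contrast grayscale sample
eq = exposure.equalize_hist(img)  # equalized result as floats in [0, 1]
# Plotting img, eq, and their histograms side by side shows the narrow
# input distribution spreading across the full intensity range.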

    Process Menu
    Enhances image contrast by using either histogram stretching or histogram equalization. Both methods are described in detail in the Hypermedia Image Processing ...Enhance Contrast · Noise> · Binary> · Math>