Gamma correction
Gamma correction is a nonlinear image processing technique used to adjust the brightness and contrast of digital images by compensating for the inherent nonlinearity in the way display devices, such as cathode ray tube (CRT) monitors, convert input voltage signals to output light intensity.[1] This nonlinearity arises because CRTs produce light intensity proportional to the input voltage raised to a power, typically around 2.2 to 2.5, causing midtones to appear darker than intended without correction.[2][3] By applying an inverse power-law transformation, known as gamma encoding, to the image data before storage or transmission, gamma correction makes the overall system response approximately linear, so that the displayed light remains proportional to the original scene light and perceptual accuracy is preserved.[4]
The mathematical foundation of gamma correction is the power function I_{out} = I_{in}^{\gamma}, where \gamma (gamma) characterizes the display's response curve; correction pre-compensates by raising the signal to the power 1/\gamma, so that the display's own response restores linear light.[5] For example, in traditional video systems, linear light intensity is transformed into a nonlinear video signal via gamma correction to match the CRT's behavior, a standard practice since the inception of analog television.[6] This precompensation not only counters hardware limitations but also aligns with human visual perception, which follows a roughly power-law response to brightness changes, allowing more efficient allocation of digital bits to the darker tones where the eye is more sensitive.[4] Without proper gamma handling, images can appear washed out or overly contrasted across different devices.[7]
Historically, gamma correction emerged in the early 20th century alongside the development of photographic film and television technology, where it was essential for accurate tone reproduction in CRT-based systems.[8] By the 1950s, it became integral to broadcast standards like NTSC, encoding signals to optimize signal-to-noise ratio in transmission while compensating for display gamma.[8] In computer graphics, pioneers like Alvy Ray Smith formalized its role in the 1990s, emphasizing the need for consistent terminology and application to avoid errors in rendering and compositing.[9]
In contemporary digital imaging, gamma correction persists beyond CRTs in standards like sRGB, which defines a gamma of approximately 2.2 using a piecewise function for encoding nonlinear RGB values, ensuring compatibility across monitors, printers, and web content.[5][10] Modern displays, including LCDs and OLEDs, often emulate this curve for consistency, while graphics pipelines in software like OpenGL decode gamma-encoded inputs to linear space, apply linear operations for lighting and shading calculations to ensure physical accuracy, and then encode the results for display.[11] Failure to account for gamma can lead to artifacts in image processing tasks, underscoring its ongoing importance in fields from photography to video production.[12]
Fundamentals
Definition and Purpose
Gamma correction is a nonlinear process that applies a power-law transformation to input pixel values, effectively encoding or decoding luminance signals to compensate for the inherent non-linearity in imaging and display systems.[13] This adjustment counters the response characteristics of devices like cathode-ray tubes (CRTs), where light output does not vary linearly with input voltage.[9] By reshaping the signal, gamma correction ensures that the reproduced image more faithfully represents the original scene intensities.[14]
The primary purpose of gamma correction is to achieve perceptual uniformity in brightness, aligning the system's output with the nonlinear sensitivity of human vision to light intensity changes.[4] Without this compensation, device non-linearities would cause images to appear either washed-out, with insufficient contrast in shadows and highlights, or excessively dark, wasting dynamic range and introducing visible noise.[13] This perceptual optimization is crucial for efficient signal transmission and storage, as it maximizes the utility of limited bit depths while minimizing quantization errors in perceptually important regions.[15]
For instance, if linear light intensities are fed directly into a nonlinear display, the human eye perceives them as unevenly bright, with mid-tones appearing too dark relative to highlights and shadows, resulting in distorted contrast and poor image fidelity.[4] Gamma correction mitigates this by pre-distorting the signal to "undo" the device's curve, yielding a more natural viewing experience.[16]
Terminology
In the context of image processing and display technology, the term "gamma" specifically denotes the exponent in the power-law relationship that characterizes the nonlinear response of imaging devices, such as the transfer function relating input voltage to output luminance in cathode-ray tubes (CRTs).[17] This device gamma, often denoted by the symbol γ, quantifies the degree of nonlinearity in the system's intensity reproduction, typically around 2.2 to 2.5 for traditional displays.[18] In contrast, "gamma correction" refers to the deliberate application of an inverse nonlinear transformation to input signals, compensating for the device's inherent gamma to achieve linear light representation or perceptual uniformity.[19] This distinction is crucial: raising normalized image values to a power γ > 1 darkens midtones, whereas gamma correction for a device with that γ applies the inverse exponent 1/γ, lightening midtones to counteract the device's nonlinearity.[20]
Related terminology includes "encoding gamma," which describes the nonlinear adjustment applied to linear-light signals during capture, storage, or transmission to optimize bit depth usage and reduce quantization noise in shadows, and "decoding gamma," the inverse process performed at the display to restore linear light output.[21] For example, in the sRGB color space, the transfer function is a piecewise curve consisting of a linear segment near black (below 0.0031308 in linear light, or 0.04045 in the encoded domain) followed by a power-law segment with exponent 1/2.4 for encoding (2.4 for decoding), overall approximating an effective gamma of 2.2 for perceptual encoding.[22] This sRGB gamma ensures efficient representation of human vision's nonlinear sensitivity while maintaining compatibility with 8-bit digital pipelines.[23]
A common misconception is that gamma correction functions similarly to brightness or contrast controls; however, brightness adjustments uniformly shift all intensity levels, potentially clipping shadows or highlights, while contrast scales the difference between dark and light areas linearly without altering midtone curvature.[24] Gamma correction, by contrast, applies a specific nonlinear remapping of intensities, preserving black and white points but redistributing midtones to match perceptual or device requirements, thus avoiding the uniform tonal shifts of brightness or the edge-emphasizing effects of contrast.[25]
The term "gamma" originates from the Greek letter γ (gamma), first used in 1890 by Ferdinand Hurter and Vero Charles Driffield in the context of photographic film to denote the slope of the characteristic curve.[26] This nomenclature persisted into video and imaging standards, where it provided a concise descriptor for the observed power-law behaviors in light intensity mapping.[17]
Mathematical Models
Power Law Relationship
The power law relationship forms the core of gamma correction, modeling the nonlinear response of display devices through a simple exponential function. For a display with gamma value \gamma, the output intensity I_{\text{out}} is related to the input intensity I_{\text{in}} by the equation
I_{\text{out}} = I_{\text{in}}^{\gamma},
where intensities are normalized between 0 and 1.[27] To counteract this nonlinearity and achieve linear light representation, gamma correction applies the inverse transformation:
I_{\text{corrected}} = I_{\text{linear}}^{1/\gamma}.
This inverse ensures that the corrected signal, when passed through the display, produces output proportional to the original linear light values.[28]
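The transformation pair can be illustrated with a brief sketch (the function names and the use of NumPy are illustrative, not drawn from a particular library); it assumes intensities normalized to [0, 1] and a display gamma of 2.2:

import numpy as np

def gamma_encode(linear, gamma=2.2):
    # Pre-compensate linear intensities with the inverse power law (exponent 1/gamma).
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def gamma_decode(encoded, gamma=2.2):
    # Model the display response: raise the encoded signal to the power gamma.
    return np.clip(encoded, 0.0, 1.0) ** gamma

# Round trip: encoding followed by a gamma-2.2 display recovers the linear values.
linear = np.array([0.05, 0.18, 0.5, 1.0])
assert np.allclose(gamma_decode(gamma_encode(linear)), linear)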
The power law arises from the interplay between human visual perception and device characteristics. According to the Weber-Fechner law, perceived brightness is roughly logarithmic with respect to actual light intensity, meaning the human eye responds nonlinearly to luminance changes.[29] The power law in gamma correction compensates for the display's inherent nonlinearity (in CRTs, the nonlinear relationship between the electron gun's drive voltage and beam current) while aligning the overall system response more closely with this perceptual model, thereby allowing brightness levels to be encoded efficiently.[30]
A historical example illustrates this in cathode ray tube (CRT) displays, which typically exhibited a gamma of approximately 2.5 due to electrostatic effects in the electron gun.[3] For such systems, images captured under linear light conditions required encoding with a gamma of about 0.4 (the inverse of 2.5) to ensure accurate reproduction when decoded and displayed.[31]
This model assumes application to additive RGB color systems, where red, green, and blue channels are combined linearly to produce perceived color.[5] It further presumes uniform gamma across channels, disregarding minor color-specific variations that can occur in real devices.[29]
Generalized Gamma
The generalized gamma extends the basic power law model by incorporating piecewise or more complex functions to better align with the non-linear characteristics of human visual perception, particularly in regions of low and high intensity where uniform exponents can lead to loss of detail or inefficient quantization. These extensions address limitations in simple power laws by introducing segments that approximate perceptual response more closely, ensuring that encoded values distribute dynamic range in a way that minimizes visible artifacts in digital representations.[23]
A key example is the sRGB transfer function, standardized in IEC 61966-2-1, which combines a short linear segment near black with a power law segment to approximate an overall gamma of 2.2. For a normalized linear input value V \in [0, 1], the encoded sRGB value V' is computed as:
V' = \begin{cases}
12.92\,V & \text{if } V \leq 0.0031308 \\
1.055\,V^{1/2.4} - 0.055 & \text{if } V > 0.0031308
\end{cases}
This piecewise approach enhances shadow detail by avoiding the steep slope of a pure power function at low values, thereby reducing over-darkening and improving perceptual uniformity without wasting code values on imperceptible near-black variations.[32][23]
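The piecewise curve and its inverse follow directly from the constants above; the following sketch assumes NumPy and normalized values, with illustrative function names:

import numpy as np

def srgb_encode(v):
    # Linear light in [0, 1] -> sRGB-encoded value per IEC 61966-2-1.
    v = np.clip(v, 0.0, 1.0)
    return np.where(v <= 0.0031308,
                    12.92 * v,
                    1.055 * np.power(v, 1.0 / 2.4) - 0.055)

def srgb_decode(v_prime):
    # sRGB-encoded value -> linear light (inverse of srgb_encode).
    v_prime = np.clip(v_prime, 0.0, 1.0)
    return np.where(v_prime <= 0.04045,
                    v_prime / 12.92,
                    np.power((v_prime + 0.055) / 1.055, 2.4))

x = np.linspace(0.0, 1.0, 11)
assert np.allclose(srgb_decode(srgb_encode(x)), x, atol=1e-7)

The encoded-domain threshold 0.04045 is simply 12.92 × 0.0031308, which keeps the two segments continuous.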
The ITU-R BT.709 standard adopts a similar piecewise structure for high-definition television, featuring a linear segment for inputs below 0.018 followed by a power exponent of 0.45 (equivalent to a decoding gamma of approximately 2.22), which refines the curve to match display characteristics and human sensitivity more effectively than a single exponent. In contrast to the uniform application of a simple power law, which treats all intensity levels equally and can compress shadows excessively, these generalized models dynamically adjust the transfer curve to preserve perceptual accuracy across the luminance range, with the linear near-black portion specifically countering noise amplification and quantization issues in low-light areas.[23]
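For comparison, a sketch of the BT.709 OETF, with the scale and offset constants taken from the Rec. 709 specification (NumPy assumed):

import numpy as np

def bt709_oetf(l):
    # Linear scene light in [0, 1] -> Rec. 709 encoded signal.
    l = np.clip(l, 0.0, 1.0)
    return np.where(l < 0.018,
                    4.5 * l,
                    1.099 * np.power(l, 0.45) - 0.099)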
For high dynamic range (HDR) applications, perceptual quantizers like the one in SMPTE ST 2084 employ non-power curves derived from models of absolute visual perception, such as contrast sensitivity functions, to encode luminance up to 10,000 cd/m² efficiently; this allows for smooth gradients in both deep shadows and bright highlights without the limitations of power-based approximations.
Historical Applications
Film Photography
In film photography, the characteristic curve describes the response of photographic emulsions to light exposure. Negative film typically exhibits a gamma of approximately 0.6, measured as the slope of the straight-line portion of its characteristic curve, with nonlinear toe and shoulder regions at the extremes, providing a low-contrast foundation for subsequent processing. Positive print film, by contrast, features a higher gamma of approximately 2.5 to 3.0, enabling the enhancement of tonal separation during printing. These values ensure that the overall system gamma, the product of negative and print gammas, approximates 1.5 to 1.8 for visually pleasing contrast in the final image.[33][34]
Gamma correction in analog film workflows occurs primarily during printing, where contact printing inverts the tones of the negative while applying the print material's gamma to counteract the negative's low slope, transforming the inverted low-contrast image into a positive with balanced densities. When using an enlarger for larger prints, photographers adjust exposure times and contrast filters to compensate for the film's inherent non-linearities, particularly in the toe and shoulder areas, thereby achieving neutral density reproduction that aligns the print's tonal scale with the original scene's luminance without excessive compression or expansion.[35]
The systematic integration of gamma correction into film photography practices emerged in the 1930s alongside advancements in densitometry, which enabled precise quantification of density variations on film for consistent processing control. Ansel Adams' Zone System, formulated in the late 1930s with Fred Archer, further refined this by emphasizing gamma matching through targeted exposure and development adjustments, allowing photographers to position scene tones along the characteristic curve for optimal negative contrast suited to printing papers.[36][37]
Analog film's non-uniform gamma, evident in the curved toe and shoulder regions flanking the straighter mid-tones, often resulted in uneven tonal rendering that global adjustments alone could not fully resolve, necessitating manual interventions like dodging and burning during enlargement. Dodging involves selectively blocking light to lighten underexposed areas in the toe, while burning extends exposure to darken overexposed highlights in the shoulder, thereby locally correcting for the film's variable response and enhancing overall image detail and balance.[38]
Analog Television
In analog television, gamma correction was first formalized in the United States through the National Television System Committee (NTSC) standards adopted by the Federal Communications Commission (FCC) in 1941 for monochrome broadcasting, establishing a nonlinear transfer characteristic to match the response of cathode-ray tube (CRT) displays and optimize signal perception.[39] This initial specification addressed the inherent gamma of approximately 2.2 in CRT receivers by applying an encoding gamma at the transmitter, ensuring that the reproduced image aligned with human visual sensitivity despite the limitations of early electronic transmission.[18]
The NTSC color standard, introduced in 1953 and approved by the FCC, refined this approach with an explicit encoding gamma of 0.45 (equivalent to 1/2.2) applied to linear camera signals, compensating for the receiver's display gamma of approximately 2.2 to achieve accurate luminance reproduction. This encoding compressed the dynamic range of brighter areas while expanding detail in shadows, improving the signal-to-noise ratio in dark regions during analog transmission, a critical efficiency for the limited spectrum allocated to broadcast signals.[18] In CRT televisions, the picture tube's own nonlinear response acted as the decoder, restoring the signal to approximately linear light and enabling faithful rendering of scene luminance.
European analog systems, such as PAL and SECAM introduced in the late 1960s, assumed a display gamma of approximately 2.8 for CRT compatibility, with encoding adjusted accordingly to maintain consistent brightness perception during the transition from monochrome to color broadcasting. This ensured backward compatibility with existing black-and-white receivers while preserving perceptual uniformity across varying lighting conditions in homes. The approach in these systems echoed the NTSC refinements, prioritizing transmission efficiency by gamma-encoding camera outputs to allocate signal resources effectively to shadow details. Influenced by earlier film photography practices, which also employed gamma adjustments for density control, analog television gamma correction became a foundational element in achieving natural-looking images over imperfect channels.[18][17]
Modern Standards
Video and Broadcast Standards
In high-definition television (HDTV) production and broadcasting, the Rec. 709 standard defines parameter values for colorimetry and transfer characteristics, while the companion display standard ITU-R BT.1886 specifies a reference gamma of 2.4 for rendering under typical home viewing conditions. This display gamma helps preserve perceptual uniformity by compensating for the non-linear response of human vision and the dimmer ambient light of home environments compared to brighter studio setups. Cameras capture linear light values and apply an opto-electronic transfer function (OETF) whose power-law segment has an exponent of 0.45, producing an encoded video signal that facilitates efficient transmission and storage while preserving dynamic range; combined with the 2.4 display gamma, the end-to-end system gamma sits slightly above unity, compensating for the dim viewing surround.
For digital cinema distribution, the DCI-P3 color space adopts a gamma of 2.6, tailored for projection in darkened theaters to achieve consistent brightness and contrast from studio grading through to exhibition.[40] This higher gamma value enhances shadow detail and mid-tone reproduction under the dark surround of cinema projection, ensuring the creative intent of filmmakers is accurately conveyed without over-brightening.[40]
The ATSC 3.0 standard, approved in 2017 and deployed commercially in the 2020s, incorporates support for Hybrid Log-Gamma (HLG) as a transfer function for high dynamic range (HDR) broadcasting, enabling seamless compatibility with both standard dynamic range (SDR) and HDR receivers over existing broadcast infrastructure.[41] HLG blends a logarithmic curve for highlights with a gamma-like response for lower tones, utilizing static parameters for scene-referred encoding while allowing optional dynamic metadata in ATSC 3.0 implementations to refine tone mapping on varied displays.[41]
Streaming platforms such as Netflix require BT.2020 colorimetry combined with the Perceptual Quantizer (PQ) transfer curve for 4K Ultra HD HDR content, addressing the expanded color gamut and luminance range of modern displays.[42] The PQ curve, defined in ITU-R BT.2100, provides absolute perceptual uniformity up to 10,000 nits, with the PQ encoding mapping linear scene light into a 10-bit or higher signal so that highlights are preserved without clipping on wide-gamut consumer TVs.[42]
Computer Displays and Graphics
In computer displays and graphics, the sRGB color space serves as the foundational standard, incorporating a piecewise gamma transfer function that effectively approximates a gamma value of 2.2 for perceptual uniformity across typical viewing conditions. Defined in the IEC 61966-2-1:1999 specification, this function applies a linear segment for low-intensity values (below 0.0031308) followed by a power-law segment with an exponent of 1/2.4, ensuring efficient encoding for 8-bit per channel data while compensating for human vision's non-linear response. Since its proposal by HP and Microsoft in 1996, sRGB has been the default color space for web content and Microsoft Windows operating systems, enabling consistent rendering on consumer monitors without explicit calibration.[23][43]
Historically, Apple's macOS diverged from this norm by adopting a gamma of 1.8 as the standard for CRT displays, which provided a brighter appearance suited to early Macintosh workflows and print matching. However, with the transition to LCD panels in the mid-2000s, Apple updated the default to gamma 2.2 starting with OS X 10.6 Snow Leopard in 2009, aligning macOS more closely with sRGB for cross-platform compatibility and modern display characteristics. This shift addressed visual discrepancies in shared digital ecosystems, as LCDs inherently exhibit less non-linearity than CRTs, requiring adjusted correction for accurate tone reproduction.[44][45]
Graphics APIs integrate sRGB gamma handling to streamline rendering pipelines. OpenGL and DirectX presume sRGB encoding in textures and framebuffers, automatically applying the inverse transfer function (decoding to linear light) during sampling and re-encoding outputs when sRGB formats like GL_SRGB8_ALPHA8 are enabled, which prevents double gamma application in shading computations. Vulkan extends this capability with explicit support for sRGB formats (e.g., VK_FORMAT_R8G8B8A8_SRGB) and optional extensions for custom transfer functions, facilitating professional workflows in color-critical applications such as VFX and CAD where non-standard gammas or wide-gamut spaces are required.[43]
The Adobe RGB (1998) space, while also using a straight gamma of 2.2 (precisely 563/256 ≈ 2.199), defines a broader color gamut to encompass more saturated greens and cyans for professional printing and photography. Without color management systems to transform between spaces, viewing Adobe RGB images on sRGB-calibrated displays results in clipped saturations and perceptual color shifts, as out-of-gamut colors are desaturated or remapped inaccurately. This mismatch underscores the need for ICC profiles in graphics software to maintain fidelity across diverse hardware.[46]
Implementation Techniques
Methods in Computing
In software implementations, gamma correction is commonly achieved using lookup tables (LUTs) for real-time efficiency, where a precomputed array maps input pixel values to their gamma-corrected outputs, avoiding costly exponentiation operations during runtime. This method is widely used in image processing libraries and graphics pipelines to accelerate transformations on high-resolution imagery.[47] For GPU-based rendering, correction is often applied in fragment shaders via the power function to linearize textures; a typical OpenGL example for decoding is:
// Decode an sRGB-encoded texture sample into linear light for shading
vec3 linearColor = pow(texture2D(texSampler, texCoord).rgb, vec3(gamma));
gl_FragColor = vec4(linearColor, 1.0);
where gamma is set to 2.2 for sRGB compliance, ensuring linear light space for accurate lighting calculations.[48]
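A minimal sketch of the lookup-table method mentioned above, assuming NumPy, an 8-bit encoded image, and a pure power-law decode of 2.2 (the names are illustrative):

import numpy as np

# Precompute a 256-entry table mapping each encoded 8-bit value to linear light,
# so per-pixel exponentiation is replaced by an array lookup at runtime.
gamma = 2.2
lut = (np.arange(256) / 255.0) ** gamma

def decode_image(img_u8):
    # Index the LUT with the 8-bit image to obtain linear float values in [0, 1].
    return lut[img_u8]

encoded = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
linear = decode_image(encoded)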
On the hardware side, GPU drivers automatically perform gamma decoding when sRGB textures and framebuffers are enabled, converting encoded values to linear space during sampling and applying inverse correction post-shader for output. This hardware acceleration, supported in modern APIs like OpenGL via GL_FRAMEBUFFER_SRGB, offloads the computation from software.[49] ICC profiles further integrate gamma correction into color management by embedding transfer function curves—such as the sRGB gamma tag—that dictate how devices apply encoding and decoding for consistent cross-platform rendering.[50]
The overall pipeline begins with encoding at the capture stage, where linear sensor data from a camera's RAW format is transformed into gamma-encoded JPEG output using a curve like gamma ≈ 0.45 to optimize perceptual bit depth allocation. Decoding then occurs at the display end, where the inverse transformation linearizes the signal to counteract the monitor's inherent gamma, achieving a net system response near 1.0 for faithful image reproduction.[51]
A notable recent advancement is in Autodesk 3ds Max 2025, which defaults to OpenColorIO (OCIO) color management over the legacy gamma workflow, enhancing support for the ACES standard in VFX pipelines as refined in the 2024 update for improved scene-referred color handling.[52]
Monitor Calibration and Tests
Monitor calibration for gamma involves verifying that the display's luminance response matches the intended nonlinear encoding, typically through visual patterns and measurement tools. A fundamental method is the grayscale ramp test, where a pattern displaying successive intensity levels from 0 to 255 is rendered on the screen. Users visually inspect the ramp for uniform perceptual steps across the range; ideally, the transitions should appear smooth without abrupt jumps or compressed shadows/midtones, indicating correct gamma application. Tools such as the Lagom LCD test generate these patterns, featuring vertical bars with blended bands that align at the 2.2 gamma mark when viewed from a normal distance, allowing users to adjust monitor settings for optimal blending.[53][54]
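The principle behind such visual checks can be sketched as follows (assuming NumPy and Pillow; this produces a simplified half-dither comparison rather than the exact Lagom pattern): a patch of alternating black and white rows emits roughly 50% linear light, so it should visually match a solid patch at 255 × 0.5^(1/γ) when the display's effective gamma equals γ.

import numpy as np
from PIL import Image

gamma = 2.2
h, w = 128, 128
stripes = np.zeros((h, w), dtype=np.uint8)
stripes[::2, :] = 255                                       # alternating black/white rows, ~50% linear light
solid = np.full((h, w), round(255 * 0.5 ** (1 / gamma)), dtype=np.uint8)   # value 186 for gamma 2.2
Image.fromarray(np.hstack([stripes, solid])).save("gamma_check_2.2.png")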
To assess gamma in scenarios involving image scaling and compositing, tests with overlapping gradients are employed. These patterns superimpose multiple linear or nonlinear gradients, revealing artifacts from mismatched gamma during blending operations. Incorrect gamma encoding can manifest as visible banding in smooth areas or unwanted haloing around edges, particularly when content is resized or layered, as the perceptual uniformity breaks down. Such tests ensure that the display handles blended regions consistently, preventing distortions that affect image fidelity in applications like photo editing.[55]
For precise verification beyond visual inspection, hardware tools like colorimeters measure the actual luminance curve. Devices such as the X-Rite i1Display Pro capture light output at various input levels, computing the effective gamma by fitting the data to a power law model. Accompanying software, including DisplayCAL, processes these measurements to generate ICC profiles that apply corrections via the graphics card's lookup table, ensuring the display's response aligns with standards. This approach quantifies deviations and automates adjustments for accuracy.[56][57]
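The curve-fitting step can be illustrated with a short sketch; the luminance readings below are hypothetical placeholders for colorimeter measurements, and NumPy is assumed:

import numpy as np

inputs = np.array([0.1, 0.25, 0.5, 0.75, 1.0])           # normalized input levels
luminance = np.array([0.8, 6.0, 38.0, 105.0, 200.0])     # hypothetical measured cd/m^2

# Fit log(Y / Y_max) = gamma * log(input) by least squares through the origin.
x = np.log(inputs)
y = np.log(luminance / luminance[-1])
gamma_est = np.sum(x * y) / np.sum(x * x)
print(f"estimated display gamma ≈ {gamma_est:.2f}")       # ≈ 2.4 for this data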
The standard target for gamma in sRGB workflows is 2.2, derived from the average response of typical displays to match perceptual uniformity under dim viewing conditions. Deviations exceeding 0.1 from this value can introduce noticeable perceptual errors, such as washed-out colors or crushed details, impacting color accuracy and image interpretation. Industry guidelines recommend maintaining gamma within ±0.1 to preserve consistent quality across devices.[54][58]
Advanced Techniques
Adaptive Gamma Correction
Adaptive gamma correction represents a dynamic approach to image enhancement, where the gamma parameter is adjusted based on the statistical properties of the input image, such as its intensity histogram, to achieve more precise contrast improvement compared to static power-law transformations. This method analyzes the distribution of pixel intensities to derive a tailored gamma value, enabling better preservation of details in both dark and bright areas without introducing excessive noise or distortion. Unlike fixed gamma values, which apply a uniform nonlinear mapping across the entire image, adaptive variants respond to local or global image characteristics for context-aware processing.[59]
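One simple histogram-based heuristic in this spirit (not the specific published algorithms discussed below) chooses gamma so that the image's mean intensity is mapped toward mid-gray; the sketch assumes NumPy and intensities normalized to [0, 1]:

import numpy as np

def adaptive_gamma(img):
    # Choose gamma = log(0.5) / log(mean): underexposed images (mean < 0.5)
    # receive gamma < 1, which brightens midtones; overexposed images are darkened.
    mean = float(np.clip(img.mean(), 1e-4, 1.0 - 1e-4))
    gamma = np.log(0.5) / np.log(mean)
    return np.clip(img, 0.0, 1.0) ** gamma

dark = np.random.rand(256, 256) * 0.3    # simulated underexposed image
enhanced = adaptive_gamma(dark)          # mean pulled toward 0.5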
One prominent technique is Tuning Adaptive Gamma Correction (TAGC), developed in 2025 for low-light image enhancement, which computes the image's average color factor and luminance to determine an optimal gamma value that can be applied regionally for localized adjustments. In TAGC, the process begins with statistical analysis of the input to quantify underexposure levels, followed by a gamma computation that stretches the dynamic range while maintaining natural color balance and detail retention. Qualitative and quantitative evaluations of TAGC demonstrate its effectiveness in preserving image details and enhancing contrast in challenging low-light scenarios.[60]
Applications of adaptive gamma correction include low-light enhancement in portable devices like smartphone cameras, where it aids in real-time processing of underexposed scenes to improve visibility and color fidelity. Additionally, it is employed in night image processing workflows, such as those implemented in MATLAB, to boost contrast in low-illumination environments like surveillance or automotive vision systems. For instance, multi-linear adaptive gamma correction has been adapted for real-time night driving enhancement, reducing artifacts and improving object detection in dark conditions.[61][62]
A representative example is the Gamma-Based Light-Enhancement (GLE) curve, introduced in 2025 as part of a zero-reference deep learning framework for low-light image enhancement. The GLE curve modifies traditional gamma correction by parameterizing a nonlinear mapping that directly enhances pixel intensities, thereby increasing contrast and brightness without causing overexposure or halo effects in brighter regions. This approach integrates seamlessly with neural networks, achieving superior performance on benchmark low-light datasets by balancing enhancement across varying exposure levels.[63]
Integration with HDR and Color Management
In high dynamic range (HDR) imaging, gamma correction evolves beyond the relative power-law functions used in standard dynamic range (SDR) systems to incorporate absolute luminance mapping. The Perceptual Quantizer (PQ), defined in SMPTE ST 2084, serves as an electro-optical transfer function (EOTF) that maps digital code values to absolute light levels, supporting peak luminances up to 10,000 nits without relying on traditional gamma curves.[64] This absolute approach ensures precise tone reproduction across a wide luminance range, enabling HDR content to maintain perceptual uniformity on reference displays.[65]
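The PQ curve can be written compactly from the constants published in SMPTE ST 2084; the following sketch (NumPy assumed, illustrative names) implements the inverse EOTF (encoding) and the EOTF (decoding):

import numpy as np

# SMPTE ST 2084 constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(luminance_nits):
    # Absolute luminance (cd/m^2, up to 10,000) -> PQ signal in [0, 1].
    y = np.clip(luminance_nits / 10000.0, 0.0, 1.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def pq_decode(signal):
    # PQ signal in [0, 1] -> absolute luminance in cd/m^2.
    e = np.clip(signal, 0.0, 1.0) ** (1.0 / M2)
    y = np.maximum(e - C1, 0.0) / (C2 - C3 * e)
    return 10000.0 * y ** (1.0 / M1)

nits = np.array([0.1, 1.0, 100.0, 1000.0, 10000.0])
assert np.allclose(pq_decode(pq_encode(nits)), nits, rtol=1e-6)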
Another key advancement is the Hybrid Log-Gamma (HLG) transfer function, standardized as ARIB STD-B67, which facilitates HDR broadcasting while ensuring backward compatibility with SDR equipment. HLG combines a logarithmic curve for highlights with a gamma-like curve for shadows, allowing the same signal to render acceptably on both SDR and HDR displays without mandatory metadata.[66] Optional metadata can enable automatic adaptation for optimal tone mapping on compatible receivers, enhancing flexibility in live production workflows.[67]
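A sketch of the HLG opto-electronic transfer function, using the constants from ARIB STD-B67 / ITU-R BT.2100 (NumPy assumed; scene light normalized to [0, 1]):

import numpy as np

A = 0.17883277
B = 1.0 - 4.0 * A
C = 0.5 - A * np.log(4.0 * A)

def hlg_oetf(e):
    # Square-root segment for the lower range, logarithmic segment for highlights.
    e = np.clip(e, 0.0, 1.0)
    return np.where(e <= 1.0 / 12.0,
                    np.sqrt(3.0 * e),
                    A * np.log(np.maximum(12.0 * e - B, 1e-12)) + C)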
In modern color management systems, gamma handling integrates with workflows like the Academy Color Encoding System (ACES) via tools such as OpenColorIO (OCIO). In Autodesk 3ds Max 2025, the default OCIO configuration automates gamma correction within ACES pipelines, applying linear scene-referred rendering and output transforms to support wide color gamuts and HDR.[52] This replaces legacy manual gamma workflows (e.g., 2.2 for sRGB), reducing errors in VFX and animation by enforcing consistent color spaces from input to final render.[68]
Recent developments further blend gamma correction with HDR color processing, as seen in AGCSNet, a 2025 neural network model for high-contrast image exposure correction. AGCSNet applies separate gamma corrections to underexposed and overexposed regions along with saturation correction, using attention-based illumination maps to fuse the results and improve perceptual fidelity without introducing artifacts in wide-gamut environments.[69]