
Relative luminance

Relative luminance is the relative brightness of a color stimulus as perceived by the human visual system, quantified by the Y tristimulus value in the CIE 1931 XYZ color space, where Y directly corresponds to luminance weighted by the spectral luminous efficiency function V(λ). This value is normalized such that Y = 100 for a perfect reflecting diffuser (reference white) under the specified illuminant, making it a dimensionless measure independent of absolute light intensity. The Y component arises from the transformation of earlier RGB color-matching functions, chosen to ensure additivity and all-positive color-matching functions in color specification. Developed as part of the International Commission on Illumination (CIE) 1931 standard observer model, relative luminance enables precise color quantification by integrating the spectral power distribution of a stimulus with the CIE color-matching functions x̄(λ), ȳ(λ), and z̄(λ), where ȳ(λ) matches the photopic luminosity function. For non-self-luminous stimuli, such as reflecting or transmitting objects, Y is computed as Y = k ∫ φ(λ) ȳ(λ) dλ, with k a normalization constant and φ(λ) the product of spectral reflectance (or transmittance) and illuminant power. This framework underpins modern colorimetry, providing a measure of brightness independent of hue or saturation. In practical applications, relative luminance is essential for assessing visual accessibility and display performance; for instance, in the sRGB color space, it is approximated via the formula L = 0.2126 R + 0.7152 G + 0.0722 B, where R, G, and B are linearized components, normalizing L from 0 (black) to 1 (white). The Web Content Accessibility Guidelines (WCAG) 2.1 leverage this to define contrast ratios as (L_1 + 0.05) / (L_2 + 0.05), where L_1 and L_2 are the relative luminances of foreground and background colors, ensuring readability for users with low vision. Beyond digital media, it informs lighting design, photometry, and color reproduction standards, such as ISO/CIE norms for object color specification.

Introduction

Definition

Relative luminance, denoted as Y, is a photometric quantity in colorimetry that measures the proportion of luminance contributed by a color stimulus relative to a reference white under identical illumination conditions. It ranges from 0, corresponding to black with no reflectance or emission, to 1, representing the full luminance of the reference white, such as a perfect reflecting diffuser. This normalization allows Y to capture relative brightness without dependence on absolute light level, making it essential for comparing stimuli across different viewing environments. The value of Y is derived by weighting the spectral power distribution of the stimulus according to the human visual system's sensitivity, specifically through the CIE 1931 luminous efficiency function V(\lambda), which approximates the eye's photopic response across wavelengths from approximately 380 nm to 780 nm, peaking near 555 nm for green light. This weighting ensures that Y reflects how the human eye perceives brightness, prioritizing wavelengths to which it is most sensitive while diminishing the contribution of others, such as deep reds and blues. Unlike absolute luminance, which is measured in physical units such as candelas per square meter (cd/m²), relative Y is typically expressed as a ratio between 0 and 1 or as a percentage from 0% to 100%, emphasizing proportional relationships rather than measurable flux. In modern applications, the reference white is commonly defined using the CIE D65 illuminant, simulating average daylight, where the white point is assigned Y = 1 to standardize comparisons. As the luminance component of the CIE tristimulus values, Y provides a foundation for broader color representation and analysis.

Historical Context

The development of relative luminance originated in the early twentieth century through the International Commission on Illumination (CIE), as the field evolved from absolute photometry—concerned with physical light intensity—to relative perceptual measures for device-independent color representation. This transition addressed the limitations of photometry in capturing human color perception under varying conditions, emphasizing standardized responses from "standard observers" to enable reproducible specifications for industries like textiles and dyes. Key motivations included operationalizing color measurement for practical applications and accommodating perceptual variability, shifting focus from absolute intensity to weighted luminous efficiency based on visual sensitivity. In the late 1920s, pioneering color-matching experiments by W. David Wright, using ten observers, and John Guild, using seven, provided empirical data on spectral sensitivity, forming the basis for tristimulus color models. These studies measured how observers mixed primary lights to match spectral colors, revealing the need for a framework that separated luminance from hue and saturation. In 1931, the CIE formalized the XYZ color space from this data, designating the Y tristimulus value to represent relative brightness independently of chromaticity, thus establishing relative luminance as a core element of device-agnostic colorimetry. Post-World War II refinements enhanced the precision of luminous efficiency functions integral to relative luminance. In 1964, the CIE introduced a supplementary standard colorimetric observer for 10-degree visual fields, improving accuracy in broader-field photometric applications. During the 1970s, analog video standards adapted this framework for broadcast; while the original 1953 NTSC system incorporated CIE 1931-derived luminance for monochrome compatibility, the European Broadcasting Union (EBU) in 1970 revised PAL and SECAM colorimetry to new primaries, retaining the Y-based relative luminance for perceptual fidelity and international compatibility.
In the digital era, relative luminance achieved widespread standardization for multimedia. The sRGB color space, jointly developed by HP and Microsoft in 1996 and codified in IEC 61966-2-1, defined relative luminance computation from linear RGB to align with CIE XYZ, ensuring consistent brightness rendering on consumer displays and web content. Concurrently, ITU-R Recommendation BT.709, revised in 2002, specified relative luminance weights for high-definition television, harmonizing with sRGB primaries to support global digital video production and transmission. These milestones bridged photometric principles to computational color management, influencing standards from digital imaging to web accessibility.

Mathematical Foundations

Formulation in CIE Colorimetry

In CIE colorimetry, relative luminance is quantified as the Y tristimulus value within the XYZ color space, which represents the luminance component of a color stimulus relative to a reference white. This formulation derives from the spectral properties of light and the human visual system's sensitivity, ensuring that Y correlates closely with perceived brightness under photopic conditions. The XYZ color space, established in 1931 and refined in subsequent standards, separates color into tristimulus values X, Y, and Z to enable device-independent color specification. The Y value is computed through spectral integration of the stimulus's power distribution weighted by the color-matching function \bar{y}(\lambda), which matches the photopic luminous efficiency function V(\lambda). Specifically, for a light source or reflecting object, Y = k \int_{380}^{780} P(\lambda) \bar{y}(\lambda) \, d\lambda, where P(\lambda) is the spectral power distribution of the stimulus (in energy units per wavelength interval), and the integral spans the visible spectrum (typically 380–780 nm). The function \bar{y}(\lambda) is defined for the CIE 1931 standard colorimetric observer (2° field of view) or the 1964 supplementary observer (10° field), with values tabulated at 1 nm or 5 nm intervals; it peaks at unity (\bar{y}(555) = 1) near the eye's maximum sensitivity for green light. In discrete form for practical computation, this becomes Y = k \sum_i P(\lambda_i) \bar{y}(\lambda_i) \Delta\lambda, where \Delta\lambda is the wavelength step (e.g., 5 nm). The constant k normalizes the result, often set such that Y = 100 for a perfect reflecting (or transmitting) diffuser under the reference illuminant, making Y a relative measure rather than absolute luminance. For self-luminous sources, P(\lambda) represents the absolute spectral radiance, and k may incorporate the maximum luminous efficacy (683 lm/W at 555 nm).
Normalization ensures comparability across stimuli: for the reference white, such as the equal-energy illuminant (where all wavelengths have equal power) or CIE D65 (daylight simulation), Y_n = 100 or 1 depending on the chosen scale. For any stimulus, the relative luminance is then Y = Y_{\text{stimulus}} / Y_{\text{reference}}, expressing the stimulus's luminance as a fraction of the reference white (e.g., 0 to 1 for displays). This relative measure assumes the reference is a 100% reflecting diffuser under the same illumination, aligning Y directly with perceptual ratios. For monochromatic light, CIE tables yield Y \approx 1 at 555 nm for unit power input, dropping to near 0 at the spectrum edges (e.g., Y \approx 0.001 at 400 nm), illustrating the function's bell-shaped sensitivity curve. This formulation rests on key assumptions: color matches obey Grassmann's laws, colors mix additively, and the standard observer represents average trichromatic vision without adaptation effects beyond the specified field of view. These assumptions enable the transformation from spectral data to tristimulus values while preserving luminance as a linear-light quantity proportional to physical radiance, weighted by visual sensitivity. Deviations occur for mesopic or scotopic vision, but the model excels for typical daylight (photopic) applications.
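The discrete sum above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the `YBAR` table here is a coarse, hand-picked subsample of the CIE 1931 ȳ(λ) values for demonstration only (real computations use the official 1 nm or 5 nm tabulations), and the function names are my own.

```python
# Sketch of the discrete form Y = k * sum(P(λ_i) * ybar(λ_i) * Δλ), with k
# chosen so the reference white comes out at Y = 100.

# Coarse, illustrative subsample of the CIE 1931 ybar(λ) table:
# wavelength (nm) -> approximate ybar(λ).
YBAR = {400: 0.0004, 450: 0.038, 500: 0.323, 555: 1.000,
        600: 0.631, 650: 0.107, 700: 0.0041}

def relative_Y(spd, reference_spd, dl=50.0):
    """Relative luminance of a stimulus versus a reference white.

    spd, reference_spd: dicts mapping wavelength (nm) -> spectral power.
    dl: wavelength step Δλ in nm (matches the coarse table above).
    """
    def raw(p):
        return sum(p.get(wl, 0.0) * y * dl for wl, y in YBAR.items())
    k = 100.0 / raw(reference_spd)   # normalize reference white to 100
    return k * raw(spd)

# Equal-energy reference white; a stimulus with half the power at every
# wavelength must come out at exactly half the reference, Y = 50.
white = {wl: 1.0 for wl in YBAR}
half = {wl: 0.5 for wl in YBAR}
print(relative_Y(white, white))  # 100 by construction
print(relative_Y(half, white))   # 50
```

The normalization constant k cancels any absolute power units, which is exactly what makes Y a relative rather than absolute quantity.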

Computation from RGB Values

To compute relative luminance from RGB color values, the input components must first be converted from the gamma-encoded space to linear RGB, as relative luminance is defined in a linear-light colorimetric space. The sRGB encoding applies a nonlinear opto-electronic transfer function (OETF) during capture or encoding, which must be inverted before luminance can be computed. For each sRGB component c (where c is R', G', or B' in the range [0, 1]), the linear value c_{lin} is obtained as follows: c_{lin} = \begin{cases} \frac{c}{12.92} & \text{if } c \leq 0.04045 \\ \left( \frac{c + 0.055}{1.055} \right)^{2.4} & \text{otherwise} \end{cases} This linearization ensures that the subsequent summation reflects actual light intensities rather than perceptual encodings. Once linearized, relative luminance Y (the Y tristimulus value in CIE XYZ, normalized relative to a reference white of 1.0) is computed as a weighted sum of the linear RGB components, using coefficients derived from the sRGB primaries and D65 white point: Y = 0.2126 R_{lin} + 0.7152 G_{lin} + 0.0722 B_{lin} These weights correspond to the second row of the 3×3 transformation matrix M that converts linear sRGB to CIE XYZ tristimulus values: \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{pmatrix} \begin{pmatrix} R_{lin} \\ G_{lin} \\ B_{lin} \end{pmatrix} The matrix elements are calculated from the chromaticities of the sRGB primaries (red: x=0.6400, y=0.3300; green: x=0.3000, y=0.6000; blue: x=0.1500, y=0.0600) and the D65 illuminant (x=0.3127, y=0.3290), ensuring the Y component matches the luminance definition in CIE colorimetry. Relative luminance is specifically the Y value from this transformation, scaled such that mid-gray (sRGB 0.5) yields approximately 0.214.
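The two steps above — linearize, then weight — translate directly to code. The following is a minimal sketch with function names of my own choosing; it assumes sRGB components already scaled to [0, 1].

```python
def srgb_to_linear(c):
    """Invert the sRGB transfer function for one component in [0, 1]."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r, g, b):
    """Relative luminance Y of an sRGB color (components in [0, 1])."""
    rl, gl, bl = (srgb_to_linear(c) for c in (r, g, b))
    # Second row of the sRGB -> XYZ matrix:
    return 0.2126 * rl + 0.7152 * gl + 0.0722 * bl

print(relative_luminance(1.0, 1.0, 1.0))  # white: 1 (weights sum to 1)
print(relative_luminance(0.0, 0.0, 0.0))  # black: 0
print(relative_luminance(0.5, 0.5, 0.5))  # mid-gray: ~0.214, not 0.25
```

Note that sRGB 0.5 yields about 0.214 rather than 0.25 — a direct consequence of the nonlinear encoding, and the reason the linearization step cannot be skipped.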
Input RGB values are typically clamped to the [0, 1] range before processing to handle any out-of-gamut or negative values, preventing invalid results. For high-dynamic-range (HDR) content exceeding this range, such as HLG- or PQ-encoded signals, the linear values are scaled relative to the peak white of the display or reference level (e.g., 1000 cd/m²), normalizing Y to the maximum achievable white rather than 1.0.

Integration with Color Spaces

Linear Colorimetric Spaces

In the CIE XYZ color space, the tristimulus value Y directly corresponds to the relative luminance of a stimulus, scaled such that Y = 100 represents the luminance of a perfect diffuser under the reference illuminant, while the X and Z tristimulus values capture the chromatic components. This structure provides a device-independent framework for color specification, where all real colors have non-negative tristimulus values, avoiding the negative lobes present in the color-matching functions of earlier RGB systems. The CIE xyY color space transforms XYZ coordinates into chromaticity values x = X/(X + Y + Z) and y = Y/(X + Y + Z), retaining Y as the luminance measure to facilitate visualization and specification of colors on a 2D chromaticity diagram. This representation is widely used for defining illuminants; for instance, the daylight illuminant D65, simulating average midday light in Western and Northern Europe, has chromaticity coordinates x = 0.3127 and y = 0.3290 with Y normalized to 1. Specifying illuminants in xyY allows straightforward scaling of Y to adjust for varying light intensities while preserving chromaticity, as in adapting white levels for different viewing conditions. Key advantages of these linear colorimetric spaces include their support for additive mixing, where the XYZ tristimulus values of superimposed lights sum linearly to yield the mixture's values, making them suitable for computational color reproduction. They also scale predictably with illuminant changes, simply by multiplying tristimulus values by the illuminant's intensity factor, and form the basis for the color rendering index (CRI), which quantifies a source's ability to render colors accurately relative to a reference illuminant. However, a notable limitation is the lack of perceptual uniformity in CIE XYZ and xyY, where equal increments in Y do not produce equal perceived brightness differences due to the nonlinear nature of human vision.
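The XYZ ↔ xyY relationship and the "scaling preserves chromaticity" property can be verified in a short sketch (function names are mine; the D65 values are those quoted above):

```python
def xyz_to_xyY(X, Y, Z):
    """Project XYZ tristimulus values onto chromaticity (x, y), keeping Y."""
    s = X + Y + Z
    return X / s, Y / s, Y

def xyY_to_xyz(x, y, Y):
    """Recover XYZ from chromaticity (x, y) and relative luminance Y."""
    X = x * Y / y
    Z = (1 - x - y) * Y / y
    return X, Y, Z

# D65 white point: x = 0.3127, y = 0.3290, Y normalized to 1.
X, Y, Z = xyY_to_xyz(0.3127, 0.3290, 1.0)
print(X, Z)  # ~0.9505 and ~1.0891, the familiar D65 tristimulus values

# Halving the light scales Y but leaves chromaticity (x, y) unchanged:
x2, y2, Y2 = xyz_to_xyY(X * 0.5, Y * 0.5, Z * 0.5)
print(x2, y2, Y2)  # same (x, y), Y halved
```

This is exactly why xyY is convenient for illuminant specification: intensity lives entirely in Y, and the 2D chromaticity coordinates are invariant under dimming.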

Gamma-Encoded Spaces

Gamma encoding applies a nonlinear opto-electronic transfer function (OETF) to linear RGB values, transforming them into encoded R'G'B' values for storage and transmission, with the sRGB OETF approximating a power law of γ ≈ 2.2 to allocate more code values to darker tones and minimize visible quantization noise in limited bit-depth formats like 8-bit per channel. This encoding efficiently utilizes the available code space by aligning with human vision's greater sensitivity to relative changes at low light levels, preventing banding artifacts in shadows that would occur with linear encoding. In gamma-encoded spaces, computing relative luminance Y from encoded R'G'B' values without decoding leads to inaccuracies, as the direct weighted sum of encoded components misstates the true linear luminance due to the compressive nature of the encoding curve. Accurate calculation requires first applying the electro-optical transfer function (EOTF) to linearize R'G'B' back to linear RGB, then applying the linear weights (as detailed in the computation from RGB values) to obtain Y. For sRGB, this linearization step ensures Y reflects the photometric contribution correctly, normalized such that RGB = (1, 1, 1) yields Y = 1. Standards for gamma-encoded RGB spaces define specific primaries and transformation matrices that determine the luminance weighting coefficients, with ITU-R BT.709 specifying coefficients of 0.2126 for R, 0.7152 for G, and 0.0722 for B, derived from its primaries and D65 white point to match human visual sensitivity. Adobe RGB (1998), designed for wider-gamut printing, employs a different matrix with coefficients of approximately 0.297 for R, 0.627 for G, and 0.075 for B, normalized to sum to 1 for Y at white. Modern variants like Display P3 (introduced in 2016 for wide-gamut displays) adapt the DCI-P3 primaries to a D65 white point, with luminance coefficients of approximately 0.229 for R, 0.692 for G, and 0.079 for B, while retaining an sRGB-like transfer curve.
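The luminance coefficients quoted above are not arbitrary: they fall out of solving for the primary scales that make RGB = (1, 1, 1) reproduce the white point. The sketch below derives them from chromaticities alone, using a hand-rolled Cramer's-rule solve to stay dependency-free; the function name and structure are my own.

```python
def luminance_weights(primaries, white):
    """Derive the RGB -> Y luminance coefficients of an RGB space.

    primaries: [(xr, yr), (xg, yg), (xb, yb)] chromaticities.
    white: (xw, yw) white-point chromaticity.
    Solves P * s = W for the primary scales s, where each column of P is a
    primary's XYZ at unit Y; the Y row of the RGB->XYZ matrix is then s.
    """
    cols = [(x / y, 1.0, (1 - x - y) / y) for x, y in primaries]
    W = [white[0] / white[1], 1.0, (1 - white[0] - white[1]) / white[1]]
    A = [[cols[j][i] for j in range(3)] for i in range(3)]

    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det(A)
    s = []
    for k in range(3):  # Cramer's rule: replace column k with W
        Ak = [[W[i] if j == k else A[i][j] for j in range(3)] for i in range(3)]
        s.append(det(Ak) / D)
    # The middle (Y) component of every column is 1, so the Y row equals s.
    return tuple(s)

# sRGB / BT.709 primaries with D65 white:
w = luminance_weights([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
                      (0.3127, 0.3290))
print(w)  # ~ (0.2126, 0.7152, 0.0722)
```

Feeding in Adobe RGB or Display P3 chromaticities instead reproduces their respective coefficient sets, which is why each gamma-encoded space carries its own luminance weights.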
Failing to linearize gamma-encoded values before luminance or contrast computations introduces significant errors, such as inflated contrast ratios and distorted perceived brightness, because the nonlinear encoding breaks the additive properties essential for accurate Y derivation. This issue is prevalent in legacy video workflows using BT.601 for standard-definition content, where coefficients of 0.299 R + 0.587 G + 0.114 B—derived from the original NTSC/PAL primaries—result in mismatched luminance if decoded assuming modern BT.709 parameters, leading to washed-out or oversaturated renders without proper correction.
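The size of the error from skipping linearization is easy to demonstrate. A minimal sketch (names mine), comparing the weighted sum applied to encoded sRGB values against the correct linearize-first computation for mid-gray:

```python
def srgb_to_linear(c):
    """Invert the sRGB transfer function for one component in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

W = (0.2126, 0.7152, 0.0722)          # BT.709 / sRGB luminance weights
encoded = (0.5, 0.5, 0.5)             # mid-gray as stored in an sRGB file

wrong = sum(w * c for w, c in zip(W, encoded))                 # weights on R'G'B'
right = sum(w * srgb_to_linear(c) for w, c in zip(W, encoded)) # linearize first

print(wrong)  # 0.5 -- treats code values as if they were light
print(right)  # ~0.214 -- the true relative luminance
```

For this gray the shortcut overstates relative luminance by more than a factor of two, which is why contrast ratios computed on encoded values come out inflated.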

Perceptual Uniformity Spaces

In perceptual uniformity color spaces, relative luminance Y is transformed through nonlinear functions to approximate the human visual system's response, aiming to create scales where equal numerical differences correspond to equal perceived differences in lightness. These transformations account for the compressive nature of perception, where perceived lightness changes more slowly than physical luminance, particularly under typical photopic conditions. Such spaces, developed by the CIE, provide a foundation for color-difference metrics and appearance modeling by mapping tristimulus values to coordinates that better align with psychophysical data. A prominent example is the CIELAB color space, where the lightness component L^* is derived from relative luminance as follows: L^* = 116 f\left(\frac{Y}{Y_n}\right) - 16 Here, Y_n is the Y tristimulus value of the reference white (often normalized to 1 for illuminant D65), and the function f(t) is defined piecewise to ensure perceptual uniformity across low and high luminance ranges: f(t) = \begin{cases} t^{1/3} & \text{if } t > \left(\frac{6}{29}\right)^3 \\ \frac{1}{3} \left(\frac{29}{6}\right)^2 t + \frac{4}{29} & \text{otherwise} \end{cases} This formulation, introduced in the CIE 1976 recommendations, approximates uniform perceived differences, such that a constant \Delta L^* corresponds to a roughly constant perceived lightness step, based on experimental data from color-matching and scaling studies. The cube-root compression in f(t) for mid-to-high luminances reflects the nonlinear compression of cone responses and post-receptoral processing in the visual pathway. The CIELUV color space employs the same cube-root function f(Y/Y_n) for its lightness coordinate L^*, achieving approximate perceptual uniformity while being optimized for different applications, such as additive mixtures on displays. This shared structure underscores the CIE's effort to standardize uniform spaces that outperform earlier linear models in predicting suprathreshold color differences.
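The piecewise L* transform above is short enough to implement directly (function name mine; Y_n defaults to a reference white of 1):

```python
def L_star(Y, Yn=1.0):
    """CIE 1976 lightness L* from relative luminance Y (CIELAB f function)."""
    t = Y / Yn
    if t > (6 / 29) ** 3:                      # cube-root branch
        f = t ** (1 / 3)
    else:                                      # linear branch near black
        f = (1 / 3) * (29 / 6) ** 2 * t + 4 / 29
    return 116 * f - 16

print(L_star(1.0))   # reference white: L* = 100
print(L_star(0.0))   # black: L* = 0
print(L_star(0.18))  # 18% gray: L* ~ 49.5, near the middle of the scale
```

The 18% gray case illustrates the compression the section describes: a surface reflecting less than a fifth of the light sits near the perceptual midpoint of the lightness scale.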
Stevens' power law provides a foundational psychophysical basis for these cube-root approximations, positing that perceived brightness \psi scales with relative luminance Y as \psi \propto Y^{0.33} near typical adaptation levels, derived from magnitude estimation experiments on isolated stimuli. Subsequent psychophysical validation confirmed the utility of these models but highlighted limitations, such as deviations in uniformity under varying illuminants or for highly chromatic stimuli; for instance, studies using paired comparisons and category scaling showed that while \Delta L^* = 1 approximates a just-noticeable difference on average, actual perceptual steps vary by up to 20% depending on context. These findings, from researchers such as Fairchild, informed refinements in subsequent color appearance models while affirming the enduring role of CIELAB and CIELUV in practical colorimetry.

Applications and Standards

In Digital Imaging and Displays

In digital imaging pipelines, relative luminance plays a key role in tone-mapping operations, particularly during the conversion from high dynamic range (HDR) to standard dynamic range (SDR) content. This process scales the relative luminance component (Y) to fit the target display's dynamic range and peak brightness capabilities, ensuring perceptual consistency without clipping highlights or shadows. For instance, in the Perceptual Quantizer (PQ) electro-optical transfer function (EOTF) defined in Recommendation ITU-R BT.2100, the non-linear signal is mapped to display luminance values, where Y represents the normalized luminance relative to a reference white, allowing controlled tone mapping in HDR-to-SDR workflows. Color management systems leverage relative luminance through ICC profiles to maintain accurate reproduction across devices. In black point compensation (BPC), the luminance (Y) of the source black point is mapped to the destination's minimum luminance, using a scale factor derived from Y values in the profile connection space (PCS), typically CIE XYZ, to prevent shadow detail loss during conversions. For gamut mapping, ICC profiles apply transformations in the PCS where relative Y guides the adjustment of out-of-gamut colors, preserving overall lightness while clipping or compressing chromaticities to fit the destination gamut, as specified in the ICC.1:2022 standard. Display characterization involves measuring luminance in nits (cd/m²) and normalizing it to relative luminance Y for accurate calibration. Instruments capture the absolute output, which is then scaled relative to a D65 target (where Y = 1 corresponds to the reference white of approximately 80–100 cd/m² in typical viewing environments), ensuring neutral greyscale tracking and color accuracy. This normalization aligns the display's Y response with CIE colorimetric standards, compensating for variations in backlight intensity or aging. In ICC-aware imaging software, the relative colorimetric rendering intent preserves relative luminance ratios by mapping the source white point to the destination white point and clipping out-of-gamut colors without altering in-gamut Y proportions, which is essential for proofing and cross-device consistency.
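The black point compensation idea can be sketched for the Y component alone. This is an illustrative simplification of the ICC approach (which rescales all of XYZ); the function name and example black levels are assumptions of mine, not values from any profile.

```python
def bpc_scale_Y(Y, src_black_Y, dst_black_Y, white_Y=1.0):
    """Sketch of black-point-compensation-style rescaling of luminance.

    Linearly remaps Y so the source black point lands on the destination
    black point while the white point stays fixed, preserving shadow
    detail instead of clipping it.
    """
    scale = (white_Y - dst_black_Y) / (white_Y - src_black_Y)
    offset = dst_black_Y - src_black_Y * scale
    return scale * Y + offset

# Source medium reaches Y = 0.001; destination (e.g., a print) only 0.02.
print(bpc_scale_Y(0.001, 0.001, 0.02))  # source black -> destination black
print(bpc_scale_Y(1.0, 0.001, 0.02))    # white stays at 1.0
```

Everything between black and white is compressed proportionally, which is exactly the "scale factor derived from Y values in the PCS" described above.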
Modern video codecs, such as AV1 developed by the Alliance for Open Media, use relative luminance as the basis for luma coding in their hybrid compression framework. The luma plane (Y) is encoded first using block-based prediction and transform techniques, with luminance values represented according to the BT.709 or BT.2020 color space to optimize bitrate efficiency while maintaining perceptual brightness fidelity.

In Web Accessibility and Contrast Calculation

Relative luminance is integral to web accessibility standards, serving as the basis for calculating contrast ratios in the Web Content Accessibility Guidelines (WCAG) 2.1 to promote readability and inclusivity for users with visual impairments. In WCAG 2.1, the contrast ratio between text and its background is defined as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 represent the relative luminances of the lighter and darker colors, respectively. The small constant (0.05) accounts for ambient light reflected from typical displays, keeping the ratio bounded near black. For Level AA conformance, normal text requires a minimum ratio of 4.5:1, while large text (18 point, or 14 point bold) needs only 3:1; Level AAA elevates these to 7:1 and 4.5:1, respectively, to support users with more significant vision loss. To compute relative luminance for contrast, sRGB color values are first converted to linear light by inverting the transfer function—for each channel C (R, G, or B), if C_sRGB ≤ 0.03928, then C_linear = C_sRGB / 12.92, else C_linear = ((C_sRGB + 0.055) / 1.055)^2.4—yielding linear RGB components that feed into the luminance formula L = 0.2126 × R + 0.7152 × G + 0.0722 × B. This process approximates how colors appear on typical displays. Tools such as the WAVE evaluation suite automate these calculations during accessibility audits, flagging elements that fail WCAG thresholds based on relative luminance differences. Legal requirements under the Americans with Disabilities Act (ADA) increasingly reference WCAG 2.1 Level AA as a benchmark for accessibility, mandating the 4.5:1 minimum to avoid discrimination claims; for instance, the U.S. Department of Justice's 2024 rule explicitly ties compliance to these ratios for public entities' web content. Achieving AAA-level 7:1 ratios, which demand greater relative luminance separation, further mitigates risk in high-stakes contexts.
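The WCAG 2.1 pipeline — linearize, weight, then form the ratio — fits in a few lines. A minimal sketch with 8-bit RGB inputs (function names mine; note WCAG 2.x keeps the older 0.03928 threshold from the original sRGB draft):

```python
def linearize(c):
    """WCAG 2.x channel linearization (0.03928 threshold per the spec)."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def wcag_luminance(rgb):
    """Relative luminance L of an 8-bit sRGB color per WCAG 2.1."""
    r, g, b = (linearize(c / 255) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio (L1 + 0.05) / (L2 + 0.05), with L1 >= L2."""
    l1, l2 = sorted((wcag_luminance(fg), wcag_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))   # black on white: 21:1
print(contrast_ratio((118, 118, 118), (255, 255, 255)))  # #767676: ~4.54:1
```

The second example (#767676 on white) lands just above the 4.5:1 AA threshold, which is why it is often cited as roughly the lightest gray usable for normal body text on a white background.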
The WCAG 3.0 draft, updated in September 2025, proposes moving beyond these ratios by adopting the Advanced Perceptual Contrast Algorithm (APCA), which better models the visual response to luminance differences while retaining relative luminance as a core input for foundational assessment.

Versus Luma in Video Signals

Relative luminance Y, a linear photometric quantity derived from tristimulus values, differs fundamentally from luma Y', the nonlinear brightness component of video signals. Luma is computed as a weighted sum of gamma-encoded RGB components, typically using the formula Y' = 0.299 R' + 0.587 G' + 0.114 B' as specified in Recommendation ITU-R BT.601 for standard-definition television. This approximation applies directly to encoded RGB values without linearization, prioritizing perceptual encoding over strict photometric accuracy. The primary distinction arises from their domains: relative luminance Y represents linear light intensity, suitable for colorimetric computations, whereas luma Y' incorporates gamma compensation to match the nonlinear response of cathode-ray tube (CRT) displays prevalent in early television systems. Historically, this made Y' a practical proxy for perceived brightness in analog video transmission, as it aligned signal levels with human vision's sensitivity under display constraints. In video encoding, the use of luma enables significant bandwidth efficiencies through Y'CbCr subsampling schemes such as 4:2:0, where the chroma components (Cb and Cr) are reduced to one-quarter the resolution of luma, achieving approximately 50% overall bandwidth savings compared to unsubsampled formats while preserving perceived detail, since human vision prioritizes luma resolution. While legacy broadcast standards adhere to nonlinear luma Y', modern codecs such as HEVC (H.265) and AV1 exploit cross-plane relationships for better compression; for instance, AV1's Chroma from Luma (CfL) mode models chroma as a linear function of reconstructed luma samples to improve intra-frame efficiency in high-dynamic-range content. However, the core signal representation remains gamma-encoded luma to maintain compatibility with existing pipelines.
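The Y versus Y' distinction is easiest to see numerically. A minimal sketch (function names mine) comparing BT.601 luma, computed on encoded values, against relative luminance, computed after linearization, for a saturated blue:

```python
def srgb_to_linear(c):
    """Invert the sRGB transfer function for one component in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luma_601(rp, gp, bp):
    """BT.601 luma Y': weighted sum of gamma-encoded (primed) components."""
    return 0.299 * rp + 0.587 * gp + 0.114 * bp

def luminance_709(rp, gp, bp):
    """Relative luminance Y: linearize first, then apply BT.709 weights."""
    r, g, b = (srgb_to_linear(c) for c in (rp, gp, bp))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Fully saturated blue: the two quantities disagree by ~60%.
print(luma_601(0.0, 0.0, 1.0))       # Y' = 0.114
print(luminance_709(0.0, 0.0, 1.0))  # Y  = 0.0722
```

The gap reflects both the different coefficient sets (BT.601 vs. BT.709 primaries) and, more fundamentally, the different domains: Y' is a code-value quantity, Y a linear-light one.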

Versus Perceptual Lightness

Relative luminance, denoted as Y in the CIE XYZ color space, represents a linear photometric measure of the light reaching the eye, normalized between 0 and 1, and depends directly on the spectrum of the illuminant lighting the surface. In contrast, perceptual lightness is a psychophysical attribute describing the perceived reflectance of a surface, designed to remain approximately constant across changes in illuminant color or intensity, as exemplified by the Munsell value scale, which arranges grays in perceptually equal steps from black (value 0) to white (value 10). This invariance arises because the visual system computes lightness from contextual cues such as relative contrasts and anchors, rather than absolute light levels, allowing observers to perceive a gray surface as consistently light or dark regardless of whether it is viewed under daylight or incandescent light. The Weber–Fechner law posits that just-noticeable differences (JNDs) in stimulus intensity are proportional to the background intensity, expressed as \Delta L / L \approx k where k is a constant, implying a logarithmic relationship between physical luminance and perceived magnitude. However, relative luminance assumes a linear response to light, which mismatches human vision's compressive nonlinearity; perceptual lightness models address this by applying cube-root or power-law transformations, such as L^* \propto Y^{1/3}, to approximate equal perceptual steps. Psychophysical experiments by S. S. Stevens in the 1940s and 1950s demonstrated that brightness perception follows a power law \psi = k I^n with n \approx 0.33 for luminance, supporting the need for nonlinear scaling in lightness over linear relative luminance. Color appearance models incorporate relative luminance Y as a foundational input but extend it with the von Kries transform, applied to Hunt-Pointer-Estevez cone responses, to model adaptation under changing illumination, enabling predictions of lightness constancy that account for illuminant shifts.
For instance, two gray patches with identical relative luminance may appear to differ in lightness due to simultaneous contrast effects, where a gray on a dark surround seems lighter than the same gray on a light surround, as shown in classic experiments revealing low-level inhibitory mechanisms in early vision. Under colored illuminants, such as one patch under reddish light and another under bluish light, chromatic adaptation models compensate via von Kries transforms to maintain perceived equivalence, whereas raw relative luminance would vary with the illuminant spectrum.

References

  1. [1]
    CIE XYZ Color Space - SPIE Digital Library
    The Y tristimulus value directly corresponds to the luminance of the color represented by X, Y, Z since the coefficients that define ¯y(λ) y ¯ ( λ ) correspond ...
  2. [2]
    None
    Summary of each segment:
  3. [3]
  4. [4]
    Colorimetry, 3rd edition | CIE
    CIE 15:2004 Colorimetry represents the latest edition of these recommendations and contains information on standard illuminants; standard colorimetric ...Missing: relative luminance
  5. [5]
    CIE standard illuminant D65
    CIE standard illuminant D65 is a dataset with 1 nm wavelength steps, from ISO/CIE FDIS 11664-2:2022, Table B.1, and is from the International Commission on ...Missing: relative Y=
  6. [6]
    [PDF] The Construction of Colorimetry by Committee
    Thus, as played out on the CIE committee, two aspects of color measurement became important from the early 1920s: first, a dramatic shift of research impetus ...
  7. [7]
    CIE Standard Observers and calculation of CIE X, Y, Z color values
    Jul 21, 2022 · The CIE X, Y, Z tristimulus values are calculated from the CIE Standard Observer functions, a selected CIE illuminant and the reflectance or transmittance of ...Missing: luminance | Show results with:luminance
  8. [8]
    [PDF] The CIE XYZ and xyY Color Spaces
    Mar 21, 2010 · The XYZ color space itself has a fascinating genesis. Its nature, history, and role in both theoretical and practical color science are.
  9. [9]
    [PDF] Colorimetric standards in colour television - ITU
    In the 625-line PAL and SECAM systems, the colorimetry is now based upon the three specific primary. 7. colours: **. Red: X = 0.64. У. 0.33. Green: X= 0.29 y.
  10. [10]
    A Standard Default Color Space for the Internet - sRGB - W3C
    The current proposal assumes an encoding ambient luminance level of 64 lux which is more representative of a dim room in viewing computer generated imagery.
  11. [11]
    [PDF] COLORIMETRY - NIST Technical Series Publications
    As it is possible to measure with a spectrophoto- meter the spectral energy distribution of any light beam, and as the color oi a light correlates closely.
  12. [12]
    Relative Luminance - an overview | ScienceDirect Topics
    DR is defined as the ratio of luminance of the lightest and darkest elements of a scene or image. In an absolute sense, this ratio can be considered the ...
  13. [13]
    None
    ### sRGB Specification Details
  14. [14]
    sRGB luminance: technology, standards, and color science
    sRGB luminance is calculated using the formula: Luminance = R*Y_r + G*Y_g + B*Y_b, where R, G, B are RGB values and Y_r, Y_g, Y_b are sRGB's Y values.
  15. [15]
    [PDF] Adobe RGB (1998) Color Image Encoding
    May 5, 2005 · The reference display black point shall have the same chromaticity as the reference display white point, and a luminance equal to 0.34731% of ...
  16. [16]
    Display P3 - INTERNATIONAL COLOR CONSORTIUM
    Adapted white point luminance: unspecified. Adapted white point chromaticity: unspecified. Reference medium. White point luminance: 80 cd/m2. White point ...Missing: coefficients | Show results with:coefficients
  17. [17]
    Colorimetry - Part 4: CIE 1976 L*a*b* Colour space
    The purpose of this CIE Standard is to define procedures for calculating the coordinates of the CIE 1976 L*a*b* (CIELAB) colour space and the Euclidean colour ...
  18. [18]
    Colorimetry: CIELAB Color Space - SPIE
    The 1976 CIELAB coordinates (L*, a*, b*) in this color space can be calculated from the tristimulus values XYZ with the following formulas. The subscript n ...<|control11|><|separator|>
  19. [19]
    Colorimetry: CIELUV Color Space - SPIE
    The CIELUV coordinates (L*,u*,v*) can be calculated from the tristimulus values XYZ or the chromaticity coordinates (x,y) with the following formulas. The ...
  20. [20]
    [PDF] Image appearance modeling - Mark Fairchild
    Color appearance modeling research applied to digital imaging systems was very active throughout the 1990s culminating with the recommendation of the CIECAM97s ...
  21. [21]
    None
    ### Summary of Tone Mapping and HDR to SDR Conversion Involving Relative Luminance Y or Scaling Luminance
  22. [22]
    [PDF] Black-point compensation: theory and application (ICC White Paper ...
    ... Relative Colorimetric input transform to determine its color in the PCS. The luminance (Y) of this color can then be used as the black-point luminance in the ...
  23. [23]
    [PDF] Specification ICC.1:2022 - INTERNATIONAL COLOR CONSORTIUM
    Jan 3, 2022 · - tone scale and gamut mapping to map the scene colours onto the dynamic range and colour gamut of the reproduction, and. - applying ...
  24. [24]
    Display calibration - INTERNATIONAL COLOR CONSORTIUM
    This page summarises methods of calibrating displays for different purposes. Calibration and profiling are used in combination to achieve colour consistency.
  25. [25]
    Color management settings for print in Photoshop Elements
    Jan 12, 2022 · In Photoshop Elements, choose File > Print, or press Cmd + P. Select your printer, click More Options, and then choose Color Management from the left column.
  26. [26]
    AV1 Bitstream & Decoding Process Specification
    May 25, 2023 · This document defines the bitstream formats and decoding process for the Alliance for Open Media AV1 video codec.
  27. [27]
  28. [28]
    Understanding Success Criterion 1.4.3: Contrast (Minimum) | WAI
    Key Terms. For the sRGB colorspace, the relative luminance of a color is defined as L = 0.2126 * R + 0.7152 * G + 0.0722 * B where R, G and B are defined as: ...
  29. [29]
    WAVE Web Accessibility Evaluation Tools
    WAVE can identify many accessibility and Web Content Accessibility Guideline (WCAG) errors, but also facilitates human evaluation of web content.
  30. [30]
    Fact Sheet: New Rule on the Accessibility of Web Content ... - ADA.gov
    Apr 8, 2024 · WCAG 2.1, Level AA requires a color contrast ratio of 4.5:1 for this text. It can be hard for some people with vision disabilities to see text ...
  31. [31]
    W3C Accessibility Guidelines (WCAG) 3.0
    Sep 4, 2025 · W3C Accessibility Guidelines (WCAG) 3.0 will provide a wide range of recommendations for making web content more accessible to users with disabilities.
  32. [32]
    Gamma FAQ - Frequently Asked Questions about Gamma
    So in image science, luminance is more properly called relative luminance. (A different quantity, luma, is often carelessly referred to as luminance, as I will ...
  33. [33]
    Introduction to Color Spaces in Video
    Subsampling size saving: In 4:2:2, every two pixels will have four bytes of data. This gives an average of two bytes per pixel (33% bandwidth reduction). In 4: ...
  34. [34]
    [PDF] Tool Description for AV1 and libaom - Alliance for Open Media
    Oct 4, 2021 · The framework of the Alliance for Open Media Video 1 (AV1) codec is based on a hybrid video coding structure that consists of a few major ...
  35. [35]
  36. [36]
    Munsell System - SPIE
    The position along this axis, with ten steps from zero to nine, is called the value, representing the perceived lightness, which is nonlinear with the luminance ...
  37. [37]
    [PDF] Lightness Perception in Complex Scenes - York University
    Jun 24, 2021 · Abstract. Lightness perception is the perception of achromatic surface colors: black, white, and shades of grey. Lightness has long been a ...
  38. [38]
    [PDF] Weber's Law and Fechner's Law Introduction
    Weber's law expresses a general relationship between a quantity or intensity of something and how much more needs to be added for us to be able to tell that ...
  39. [39]
    [PDF] Scaling Lightness Perception and Differences Above and Below ...
    This is illustrated in Fig. 1 with the example of Munsell Value scale, where the CIELAB lightness equation exhibits a good approximation. CIELAB L* is scaled ...
  40. [40]
    [PDF] Modeling Human Color Perception under Extended Luminance Levels
    It aims to accurately predict lightness, colorfulness and hue, including the Hunt effect (colorfulness increases with luminance levels), the Stevens effect ( ...
  41. [41]
    Mechanisms Underlying Simultaneous Brightness Contrast
    May 25, 2020 · Simultaneous brightness contrast is when patches appear different brightness despite being the same. It's based on low-level, innate mechanisms ...
  42. [42]
    A colour-appearance transform for the CIE 1931 Standard ...
    Aug 7, 2025 · The one-step von Kries CAT integrated with Hunt-Pointer-Estevez (HPE) transformation matrix [29, 30] was used to predict the corresponding color ...