Color Rendering Index
The Color Rendering Index (CRI), also known as the general color rendering index (Ra), is a standardized quantitative metric developed by the International Commission on Illumination (CIE) to evaluate the ability of a light source to accurately reproduce the colors of objects as they would appear under a reference illuminant, such as daylight or a blackbody radiator.[1] The index ranges from 0 to 100, where 100 indicates perfect color fidelity matching the reference, while lower values signify increasing color distortion; values above 90 are generally considered excellent for applications requiring precise color perception, such as retail, photography, and medical lighting.[1] Introduced in its initial form in 1965 through CIE Publication 13 and refined in subsequent editions (CIE 13.2 in 1974 and CIE 13.3 in 1995), the CRI remains the most widely adopted global standard for assessing color quality in artificial lighting, despite recognized limitations in handling modern solid-state sources like LEDs.[2]
Calculation involves comparing the chromaticity shifts of eight standardized, moderately saturated test color samples (TCS 1–8, drawn from the Munsell system) under the test light source versus the reference illuminant in the CIE 1964 uniform color space, with the general index Ra as the arithmetic mean of individual special rendering indices (Ri = 100 – 4.6 × ΔE_i, where ΔE_i is the color difference).[1] Reference illuminants are selected based on correlated color temperature (CCT): a Planckian radiator for CCT below 5000 K or phases of daylight for 5000 K and above, with the chromaticity difference between test source and reference required to stay within a strict tolerance (ΔC < 5.4 × 10⁻³ in the CIE 1960 uv chromaticity diagram).[1]
While CRI emphasizes color fidelity through average performance on neutral-to-vivid hues, it does not account for the direction of shifts, metamerism, or perceptual preferences beyond accuracy, prompting ongoing research into supplementary metrics like the IES TM-30 standard and a 2025 CIE recommendation to transition to the Color Fidelity Index (Rf) for more comprehensive evaluation.[3][4] In practice, high-CRI lighting (80+) is essential for environments where color accuracy impacts human perception and task performance, influencing industries from architecture to horticulture.[1]
Fundamentals
Definition and Scale
The color rendering index (CRI) is a quantitative metric that evaluates the ability of a light source to accurately reproduce the colors of objects as they would appear under a reference illuminant, such as a blackbody radiator for correlated color temperatures below 5000 K or a standardized daylight illuminant for higher temperatures.[5] This index assesses color fidelity by comparing the chromaticity shifts of test color samples under the test light source versus the reference, providing a standardized way to quantify how naturally colors are rendered.[2] The CRI scale ranges from 0 to 100, where a value of 100 indicates perfect color rendering with no perceptible differences from the reference illuminant, while lower values reflect increasing color distortion.[5]
The general color rendering index, denoted Ra, represents the overall CRI and is computed as the arithmetic mean of eight special color rendering indices (Ri, for i = 1 to 8), which evaluate rendering for a set of moderately saturated test colors spanning the visible spectrum.[2] Individual Ri values measure the rendering accuracy for specific test color samples, allowing assessment of performance on particular hues, with Ra serving as the primary metric for broad comparisons across light sources.[5] Additional special indices (R9 to R14) can be calculated for more saturated or application-specific colors, such as skin tones or foliage, but are not included in Ra.[2] The CRI was standardized by the International Commission on Illumination (CIE) in 1974 through the revision of its earlier method, establishing it as the globally adopted metric for light source color quality evaluation.[5]
At its core, each special index Ri is derived from the formula R_i = 100 - 4.6 \Delta E_i, where \Delta E_i is the Euclidean distance in the CIE 1964 U*V*W* color space representing the perceptual color difference for the i-th test sample between the test and reference illuminants (full details of the \Delta E_i computation are provided in the measurement procedures below).[2] This linear scaling ensures that small color shifts yield Ri values close to 100, emphasizing the metric's focus on perceptual uniformity.[5]
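The relationship between the special indices and the general index can be illustrated with a short Python sketch. It assumes the color differences ΔE_i have already been obtained by the full CIE procedure and is not a complete CRI implementation; the function names are illustrative only.

```python
def special_cri(delta_e: float) -> float:
    """Special color rendering index: R_i = 100 - 4.6 * dE_i."""
    return 100.0 - 4.6 * delta_e


def general_cri(delta_e_1_to_8: list[float]) -> float:
    """General index Ra: the arithmetic mean of R1..R8 (exactly eight samples)."""
    if len(delta_e_1_to_8) != 8:
        raise ValueError("Ra is defined over the first eight test color samples")
    return sum(special_cri(de) for de in delta_e_1_to_8) / 8.0


# A source with uniformly small shifts of dE = 1.0 on all eight samples scores Ra = 95.4.
print(general_cri([1.0] * 8))  # 95.4
```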
Reference Illuminants
In the evaluation of the color rendering index (CRI), reference illuminants serve as ideal benchmarks against which the color appearance of objects under a test light source is compared. These illuminants are mathematically defined spectral power distributions (SPDs) that approximate natural or ideal light sources, ensuring a standardized basis for assessing how faithfully a light source reproduces colors.[1] For light sources with a correlated color temperature (CCT) below 5000 K, the reference illuminant is a Planckian radiator, representing the continuous spectrum of a blackbody at thermal equilibrium. At 5000 K and above, the reference shifts to one of the CIE standard daylight illuminants from the D series, such as D65, which simulates average midday sunlight with a CCT of approximately 6500 K. This selection ensures the reference illuminant's chromaticity closely matches that of the test source, with a tolerance limit of ΔC < 5.4 × 10⁻³ in the CIE 1960 uv chromaticity diagram to minimize discrepancies in color evaluation.[1][6]
The criteria for choosing these reference illuminants emphasize spectral power distributions that closely mimic natural lighting conditions, such as blackbody radiation for warmer tones or daylight phases for cooler ones. This approach aims to reduce metamerism—the phenomenon where colors appear to match under one illuminant but differ under another—by providing a smooth, full-spectrum reference that avoids the spectral gaps common in artificial sources. As a result, the reference illuminant facilitates accurate quantification of color shifts without introducing artifacts from mismatched spectra.[1][7] For light sources whose chromaticity deviates slightly from the Planckian locus or the daylight locus, the correlated color temperature is calculated as an approximation and used to select and scale the reference illuminant's SPD, ensuring the reference's CCT aligns with the test source's perceived color temperature.[1]
Incandescent lamps exemplify near-perfect alignment with the reference illuminant, achieving a CRI of essentially 100 because their thermal emission spectrum matches the Planckian radiator for CCTs below 5000 K. In contrast, light-emitting diodes (LEDs) often require spectral adjustments, such as phosphor conversion, to approach high CRI values, as their inherently discrete emission bands deviate from the continuous reference SPD, leading to lower scores unless optimized.[8][9]
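A minimal sketch of the selection rule, assuming relative (unnormalized) spectra are sufficient: the function below picks the reference family from the CCT and evaluates Planck's law for the blackbody case. Generating a CIE daylight-phase SPD additionally requires the standard D-series component tables, which are omitted here; the function names are illustrative.

```python
import math


def reference_family(cct_k: float) -> str:
    """Planckian radiator below 5000 K, a CIE daylight phase at 5000 K and above."""
    return "planckian" if cct_k < 5000.0 else "daylight phase"


def planckian_spd(cct_k: float, wavelengths_nm=range(380, 781, 5)) -> list[float]:
    """Relative spectral power of a blackbody at cct_k from Planck's law, peak-normalized."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # Planck constant, speed of light, Boltzmann constant
    spd = []
    for wl_nm in wavelengths_nm:
        wl = wl_nm * 1e-9
        spd.append((2.0 * h * c**2 / wl**5) / (math.exp(h * c / (wl * k * cct_k)) - 1.0))
    peak = max(spd)
    return [value / peak for value in spd]


print(reference_family(3000.0))    # planckian
print(len(planckian_spd(3000.0)))  # 81 samples from 380 nm to 780 nm in 5 nm steps
```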
History
Origins and Development
The origins of the Color Rendering Index (CRI) can be traced to early 20th-century investigations into how artificial light sources alter color perception, particularly with the advent of fluorescent lamps. In the 1930s, Dutch physicist Piet J.H. Bouma conducted foundational studies on color rendition under these new lights, proposing an eight-band spectral method to measure the similarity between a light source's spectrum and a reference spectrum, such as daylight or incandescent light. This approach aimed to quantify deviations in color appearance by dividing the visible spectrum into discrete bands and assessing their relative power, laying groundwork for later fidelity-based metrics.[10]
By the 1950s, the widespread adoption of fluorescent lighting in commercial and residential settings highlighted the need for systematic evaluation of color quality, leading to influential work on color preference and appearance. Dorothy Nickerson, a prominent color scientist at the U.S. Department of Agriculture, developed preference scales in the mid-1950s that rated light sources based on subjective assessments of color vividness and naturalness using standardized samples. Collaborating with C.W. Jerome, Nickerson advanced the idea of employing a set of test color samples to objectively measure rendering effects, emphasizing average color shifts as a proxy for overall quality; their proposals, including an initial framework with around 10 samples, directly shaped the methodological basis for CRI.[11]
The 1960s marked a pivotal era for formalizing these concepts, driven by the Illuminating Engineering Society (IES) and the International Commission on Illumination (CIE) amid the explosive growth of fluorescent lighting technologies. Recognizing the limitations of spectral similarity alone, these organizations launched collaborative initiatives to create a standardized index that compared color appearance under test lights to reference illuminants using human vision models. In 1965, the CIE released Publication 13, introducing the initial CRI calculation method: eight medium-saturation test color samples evenly spaced in hue, with the general index Ra computed as the average of special rendering indices based on chromaticity differences. This design evolved from IES-backed proposals that had tested 8–10 samples, and the sample set was later expanded to 14 for broader hue coverage, while the method continued to prioritize conceptual fidelity over exhaustive spectral analysis.[2][12]
Standardization and Evolution
The formal standardization of the Color Rendering Index (CRI) began in 1964, when the Illuminating Engineering Society (IES) adopted it as a metric for evaluating light source color rendition, marking the first industry standard for this purpose.[13] This adoption laid the groundwork for quantitative assessment, focusing on how light sources reproduce colors relative to a reference illuminant.[14]
In 1974, the International Commission on Illumination (CIE) formalized the CRI through Publication 13 (second edition), establishing the method with 14 test color samples—eight low-to-medium-chroma colors for the general index Ra (the average of their special indices Ri) and six supplementary samples, including saturated colors, skin, and foliage, for additional evaluation.[5] This publication defined Ra as the primary metric, calculated by comparing color shifts in a uniform color space under the test source versus a reference, emphasizing practical application for the fluorescent and incandescent sources prevalent at the time. The CIE refined the CRI in 1995 with Publication 13.3, updating the 1974 method to align with contemporary spectroradiometric practices and incorporating refinements to the von Kries chromatic adaptation transform for more accurate color shift calculations, alongside adjustments to the spectral data of the test color samples for better representation across illuminants.[2] These changes improved computational precision without altering the core Ra framework, ensuring compatibility with evolving measurement technologies.[15]
By the 2010s, growing adoption of light-emitting diodes (LEDs) highlighted CRI limitations, such as poor correlation with visual preferences for sources with spectral gaps in the red region, prompting CIE and IES discussions on enhancements.[16] These led to proposals like the R96a method, developed within CIE Technical Committee 1-33 and later evaluated in studies by NIST researcher Yoshi Ohno, which modifies the CRI calculation with an updated color space, adaptation transform, and sample set to better assess modern sources, though it remains a research proposal rather than a standard.[17] Complementing this, the IES introduced TM-30 in 2015 (with updates through 2018, 2020 as TM-30-20, and 2024 as TM-30-24) as a multifaceted metric, originating from efforts by the IES Color Committee to provide fidelity, gamut, and local chroma measures beyond CRI's scope.[18][19] In January 2025, the CIE issued Position Statement PS 002:2025, recommending that the lighting community adopt the general color fidelity index Rf from TM-30 in place of the general color rendering index Ra for evaluating the color rendering properties of light sources.[20]
Measurement Methods
Standard Test Procedure
The standard test procedure for computing the Color Rendering Index (CRI), as defined by the International Commission on Illumination (CIE), evaluates the color rendering performance of a light source by comparing the chromaticities of test color samples under the test illuminant to those under a reference illuminant. This involves selecting an appropriate reference illuminant based on the test source's correlated color temperature (CCT), applying chromatic adaptation to account for differences in white point, calculating color differences (ΔE) for 14 standardized test color samples (yielding indices R1 through R14), and deriving the individual special color rendering indices (Ri) from these differences. The general color rendering index (Ra) is then obtained as the arithmetic mean of the first eight Ri values (R1 to R8), providing an overall measure of color fidelity.[1]
The procedure begins with determining the CCT of the test illuminant, denoted CCT_c, which serves as the basis for selecting the reference illuminant—a Planckian blackbody radiator for CCTs below 5000 K or a phase of CIE daylight at the same CCT for higher temperatures—to ensure a close match in chromaticity. Next, the spectral power distributions (SPDs) of the test and reference illuminants are used to compute CIE 1931 XYZ tristimulus values for each test sample, and a chromatic adaptation transform (a von Kries-type transform) is applied to simulate human visual adaptation between the illuminants. Color differences ΔE are then calculated in the CIE 1964 U*V*W* uniform color space for samples 1 through 14, quantifying the perceptual shift in hue, chroma, and lightness for each. Finally, Ra is computed as Ra = (1/8) × (R1 + R2 + ... + R8), with the result rounded to the nearest integer.[1][5]
The special color rendering index for each sample i is given by the formula R_i = 100 - 4.6 \times \Delta E_i, where ΔE_i is the Euclidean distance in the uniform color space between the adapted colors under the test and reference illuminants. The scaling is such that ΔE_i = 0 yields Ri = 100 (perfect rendering) and ΔE_i ≈ 21.7 yields Ri = 0, with the factor 4.6 derived empirically from judgments of tolerable color differences. Ri can be negative if ΔE_i > 21.7, indicating severe color distortion, and such values are included as calculated in the average for Ra.[1][5] The procedure specifies rounding of the final indices, but the color differences themselves are treated continuously, with no threshold below which a shift is considered negligible.[1]
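The final stage of this pipeline, converting adapted tristimulus values into the CIE 1964 U*V*W* space and applying the Ri/Ra formulas, can be sketched as follows. The spectral integration and the chromatic adaptation step are assumed to have been carried out already, so this is a partial illustration rather than a complete implementation; the function names are illustrative.

```python
def uv_1960(x: float, y: float, z: float) -> tuple[float, float]:
    """CIE 1960 chromaticity (u, v) from XYZ tristimulus values."""
    d = x + 15.0 * y + 3.0 * z
    return 4.0 * x / d, 6.0 * y / d


def to_uvw(x: float, y: float, z: float, u0: float, v0: float) -> tuple[float, float, float]:
    """CIE 1964 U*V*W* coordinates relative to a white point at (u0, v0); Y on a 0-100 scale."""
    u, v = uv_1960(x, y, z)
    w_star = 25.0 * y ** (1.0 / 3.0) - 17.0
    return 13.0 * w_star * (u - u0), 13.0 * w_star * (v - v0), w_star


def delta_e_uvw(sample_test, sample_ref, white_uv) -> float:
    """Euclidean distance in U*V*W* between a sample's adapted test and reference coordinates."""
    u0, v0 = white_uv
    t = to_uvw(*sample_test, u0, v0)
    r = to_uvw(*sample_ref, u0, v0)
    return sum((a - b) ** 2 for a, b in zip(t, r)) ** 0.5


def general_index(delta_es: list[float]) -> float:
    """Ra from the first eight special indices, rounded as in the standard."""
    special = [100.0 - 4.6 * de for de in delta_es[:8]]
    return round(sum(special) / 8.0)
```

In the full procedure, the test-side tristimulus values are first chromatically adapted, and both sets of coordinates are referred to the reference illuminant's white point, whose (u, v) chromaticity is what would be passed in here.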
Chromatic Adaptation Transform
The chromatic adaptation transform plays a crucial role in the color rendering index (CRI) calculation by accounting for the human visual system's adaptation from the reference illuminant to the test illuminant, thereby minimizing bias in color appearance comparisons due to differing chromaticities. This step ensures that the perceived color shifts of test samples are evaluated as if viewed under equivalent adaptation states, aligning the test source's rendering with the reference's ideal conditions.[1]
In the standard CRI method, a Von Kries transform is applied, which models chromatic adaptation as independent scaling of the long- (L), medium- (M), and short-wavelength (S) cone responses. This approach assumes a diagonal adaptation in the LMS cone space, where the transform adjusts the cone excitations in proportion to the illuminants' white points. The adaptation matrix is D = \begin{pmatrix} d_L & 0 & 0 \\ 0 & d_M & 0 \\ 0 & 0 & d_S \end{pmatrix}, where each diagonal element d_i (for i = L, M, S) is the ratio of the reference illuminant's cone response to the test illuminant's cone response in that channel, with the cone responses derived from the illuminants' XYZ values. This matrix is incorporated into the overall transformation from test to reference conditions, typically via a full chromatic adaptation model M^{-1} D M, where M converts between the XYZ and LMS spaces using cone sensitivity matrices. The resulting adapted coordinates are then used to compute color differences in a uniform color space. The standard employs Judd's fundamental primaries for this transform.[14][1]
For enhanced accuracy in modern applications, later color appearance modeling work introduced alternatives to the basic Von Kries transform, such as the Bradford adaptation model, which employs a sharpened spectral sensitivity basis for better handling of cone crosstalk and real-world corresponding colors. Similarly, the CAT02 transform, embedded in the CIECAM02 model, refines adaptation through a sharpened LMS space and a degree-of-adaptation parameter, improving predictions for non-achromatic illuminants and reducing errors in rendering assessments. These methods are increasingly adopted in revised color rendering procedures to address limitations of the original Von Kries implementation.[21]
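A diagonal, von Kries-style adaptation can be sketched as below. The Bradford matrix is used here only as one well-known choice of cone-like primaries, so treat this as an illustration of the M⁻¹DM structure rather than the exact CIE 13.3 transform; NumPy is assumed to be available.

```python
import numpy as np

# Bradford XYZ -> cone-like RGB matrix (one common choice of sharpened primaries).
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])


def von_kries_adapt(xyz: np.ndarray, white_test: np.ndarray, white_ref: np.ndarray) -> np.ndarray:
    """Map a color seen under the test illuminant to its corresponding color under the reference.

    The adaptation is a diagonal scaling in the cone-like space: each channel is multiplied
    by the ratio of the reference white's response to the test white's response.
    """
    lms = M_BRADFORD @ xyz
    gain = (M_BRADFORD @ white_ref) / (M_BRADFORD @ white_test)  # element-wise channel ratios
    return np.linalg.inv(M_BRADFORD) @ (gain * lms)


# Sanity check: the test illuminant's white maps exactly onto the reference white.
w_test = np.array([109.85, 100.0, 35.58])   # illuminant A white point (XYZ, Y = 100)
w_ref = np.array([95.047, 100.0, 108.883])  # D65 white point
print(np.allclose(von_kries_adapt(w_test, w_test, w_ref), w_ref))  # True
```

The check at the end confirms the defining property of any von Kries-type transform: the adapting white of the test condition is carried exactly onto the reference white.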
Test Color Samples
The original set of 14 test color samples (TCS), designated TCS1 through TCS14, forms the basis for evaluating color rendering in the CIE method as defined in CIE Publication 13.3 (1995). These samples consist of matte-surfaced pigments selected from the Munsell Book of Color to represent typical object colors encountered in everyday viewing conditions.[1][22]
TCS1 through TCS8 are medium-chroma colors with relatively neutral hues, designed to span the Munsell hue circle evenly while maintaining similar lightness levels (approximately Munsell Value 6). Their spectral radiance factors, tabulated at 5 nm intervals from 360 nm to 830 nm, approximate the average reflectance spectra of multiple real samples under daylight illumination for each hue category, ensuring broad representation of common pastel-like colors.[1][22] This selection rationale prioritizes uniform distribution across color space to assess general color fidelity without bias toward extreme saturations.
TCS9 is a strongly saturated red (Munsell 5R 4/14), featuring high reflectance at long wavelengths (above roughly 600 nm) to test the light source's ability to render vivid reds accurately. TCS10 through TCS12 represent strongly saturated colors in yellow, green, and blue hues, respectively, with TCS10 emphasizing broad reflectance above 500 nm, TCS11 showing a peak in the green region, and TCS12 reflecting primarily below 500 nm. TCS13 simulates Caucasian skin tone (a light yellowish pink with moderate reflectance across visible wavelengths), while TCS14 depicts foliage (a moderate olive green with lower overall reflectance and emphasis in the yellow-green spectrum). These additional samples (TCS9–TCS14) incorporate higher chroma and varied lightness to probe rendering performance for more saturated or application-specific colors such as reds, vegetation, and human skin.[1][22][23]
In computation, the general color rendering index Ra is the arithmetic mean of the special color rendering indices Ri for TCS1 through TCS8 only, focusing on balanced everyday rendering. The full CRI assessment includes individual Ri values for all 14 samples to provide a more comprehensive evaluation, particularly highlighting potential weaknesses in saturated hues.[1][14]
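The split between the eight Ra samples and the six supplementary samples can be captured in a small lookup structure. The short descriptions paraphrase the characterizations above and are informal labels rather than official CIE designations; the helper shows how Ra is formed from R1–R8 while R9–R14 are reported individually.

```python
# TCS1-8 feed Ra; TCS9-14 are reported only as individual special indices.
TEST_COLOR_SAMPLES = {
    **{i: f"medium-chroma hue sample {i} (Munsell Value ~6)" for i in range(1, 9)},
    9: "strong red", 10: "strong yellow", 11: "strong green", 12: "strong blue",
    13: "light yellowish pink (Caucasian skin)", 14: "moderate olive green (foliage)",
}


def summarize(ri: dict[int, float]) -> dict:
    """Ra over TCS1-8; supplementary indices such as R9 are reported separately."""
    ra = sum(ri[i] for i in range(1, 9)) / 8.0
    return {"Ra": round(ra), "R9": ri.get(9), "R13": ri.get(13)}


# Example: decent average fidelity but weak deep-red rendering.
print(summarize({**{i: 82.0 for i in range(1, 9)}, 9: 5.0, 13: 80.0}))
# {'Ra': 82, 'R9': 5.0, 'R13': 80.0}
```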
Updated Methods
R96a Procedure
The R96a procedure represents a refinement of the color rendering index calculation method, developed under the auspices of CIE Technical Committee 1-33 and further evaluated by TC 1-62 to address limitations in handling the spectral mismatches common in fluorescent lamps and early white LED sources. This update aimed to improve accuracy for modern light sources by incorporating more representative color evaluation techniques, as detailed in the committee's 1999 chairman's report and subsequent CIE technical reports.[24]
Key modifications in the R96a procedure include an extended set of 14 test color samples—adding six more (R9–R14) to the original eight—to better capture a wider range of color appearances, along with color differences (ΔE) calculated in CIELAB space and the CIE 1994 chromatic adaptation transform (CIECAT94), which maps both the test and reference illuminants to a D65 white point for consistent evaluation. The test colors are derived from the Macbeth ColorChecker chart to provide more realistic object reflectances. The procedure employs six discrete reference illuminants (D65, D50, and blackbody radiators at 4200 K, 3450 K, 2950 K, and 2700 K) instead of a continuous Planckian locus, reducing errors in high-CCT scenarios.[24]
In terms of formula adjustments, the standard 1995 method uses the linear relation R_i = 100 - 4.6 \Delta E_i for the individual color rendering indices; the R96a procedure applies a similar linear scaling to ΔE*ab values computed in CIELAB space across the extended sample set, and the adjusted R_i values are then averaged to obtain the general color rendering index R_a.[24]
The R96a procedure gained traction in some regulatory contexts and research applications during the late 1990s and early 2000s for evaluating emerging LED technologies, but it was not incorporated into the core CIE standard (CIE 13.3), serving instead as a bridge to later developments until more comprehensive updates were adopted.[24]
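Since R96a-style scoring applies the linear index to ΔE*ab, the color-difference arithmetic reduces to a standard XYZ-to-CIELAB conversion followed by a Euclidean distance. The sketch below shows only that arithmetic, assuming the CIECAT94 adaptation to D65 and the choice among the six fixed reference illuminants have already been handled; the function names are illustrative.

```python
def xyz_to_lab(x: float, y: float, z: float, white: tuple[float, float, float]) -> tuple[float, float, float]:
    """CIE 1976 L*a*b* coordinates relative to the given reference white (Xn, Yn, Zn)."""
    def f(t: float) -> float:
        # Standard CIELAB companding with the (6/29)^3 linear segment.
        return t ** (1.0 / 3.0) if t > (6.0 / 29.0) ** 3 else t / (3.0 * (6.0 / 29.0) ** 2) + 4.0 / 29.0
    fx, fy, fz = (f(c / n) for c, n in zip((x, y, z), white))
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)


def delta_e_ab(lab1, lab2) -> float:
    """Euclidean CIELAB color difference dE*ab."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5


def r96a_style_index(delta_e: float) -> float:
    """Linear scaling analogous to the classic formula, applied to dE*ab."""
    return 100.0 - 4.6 * delta_e


d65 = (95.047, 100.0, 108.883)
lab_test = xyz_to_lab(41.2, 35.8, 20.1, d65)
lab_ref = xyz_to_lab(42.0, 36.5, 19.6, d65)
print(round(r96a_style_index(delta_e_ab(lab_test, lab_ref)), 1))  # a small dE*ab gives a score in the low 90s here
```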
Revised Test Color Samples
The R96a method introduces an expanded and modified set of test color samples to enhance the evaluation of color rendering properties, particularly for light sources with narrow spectral bands such as LEDs. These revisions aim to provide a more comprehensive assessment by incorporating samples that better capture color shifts in high-saturation regions and realistic object tones. The changes were developed by CIE Technical Committee 1-33 to address limitations in the original 1974 method, focusing on improved fidelity for modern lighting technologies.[24]
Key additions in the R96a method include six new samples: four high-saturation colors (R9–R12) and two skin tone samples (R13–R14), which target issues not adequately represented in the original set of eight samples. These samples feature reflectance spectra with elevated saturation levels and realistic object properties, enabling better detection of color distortion in the more saturated regions of color space. By including them, the method increases sensitivity to how light sources reproduce vivid hues and skin tones, so that rendering problems are not underestimated in applications involving diverse materials.[24]
Modifications were also applied to selected existing samples for greater realism, with the test colors derived from the Macbeth ColorChecker chart. For R9, the deep red sample, the reflectance spectrum was updated to emphasize longer wavelengths, allowing more accurate evaluation of red rendering under sources with weak deep-red emission, a common shortcoming of phosphor-converted LEDs. Similarly, R13, representing Caucasian skin tones, received revised reflectance data to align more closely with actual human skin spectral properties, incorporating variations for improved metameric discrimination. These updates prioritize biological and perceptual relevance over the original Munsell-based approximations.[24]
The primary rationale for these revised samples is to heighten the method's responsiveness to metamerism—the phenomenon where colors appear consistent under one illuminant but differ under another—especially for narrow-band spectra like those from phosphor-converted LEDs. This addresses a shortcoming of the original CRI, in which the low-saturation samples failed to reveal gamut compression or expansion, leading to overly optimistic scores for certain sources. The revisions promote a more robust framework for specifying rendering performance across varied correlated color temperatures.[25]
Interpretation
Calculation Examples
To illustrate the computation of the general color rendering index (Ra), consider a halogen lamp with a correlated color temperature (CCT) of 3000 K, compared against a blackbody reference illuminant at the same CCT. The spectral power distribution (SPD) of the halogen lamp closely approximates the reference, leading to small color differences (ΔE) for the eight standard test color samples (R1 to R8). In this simplified walkthrough, the ΔE values are derived by transforming the SPDs through the CIE 1931 XYZ color space to obtain tristimulus values for each sample, applying a von Kries chromatic adaptation transform to match the illuminants, and then converting to CIELAB coordinates (L*, a*, b*); note that the official CIE 13.3 procedure computes the color difference in the CIE 1964 U*V*W* space instead, so the CIELAB figures here are illustrative only. The color difference for each sample is calculated as \Delta E_i = \sqrt{ (\Delta L^*_i)^2 + (\Delta a^*_i)^2 + (\Delta b^*_i)^2 }, where the subscript i denotes the test color sample, and the special color rendering index follows as R_i = 100 - 4.6 \times \Delta E_i; for a high-fidelity source such as this lamp, the ΔE_i values are small (well under 5 units). Representative ΔE values for this halogen lamp, based on measured SPDs, are approximately 0.2 for R1 (light greyish red), 0.4 for R2 (dark greyish yellow), 0.3 for R3 (strong yellow green), 0.5 for R4 (moderate yellowish green), 0.1 for R5 (light bluish green), 0.6 for R6 (light blue), 0.3 for R7 (light violet), and 0.4 for R8 (light reddish purple).[26][2] Applying the formula yields R1 ≈ 99.1, R2 ≈ 98.2, R3 ≈ 98.6, R4 ≈ 97.7, R5 ≈ 99.5, R6 ≈ 97.2, R7 ≈ 98.6, and R8 ≈ 98.2. The average Ra is then (99.1 + 98.2 + 98.6 + 97.7 + 99.5 + 97.2 + 98.6 + 98.2)/8 ≈ 98. This high Ra reflects the lamp's smooth, continuous spectrum, which minimizes color distortion across the visible range.[27]
In contrast, a cool white fluorescent lamp with a CCT around 4100 K typically yields a lower Ra of about 72 when evaluated against its reference illuminant, which for a CCT below 5000 K is a Planckian radiator at the same CCT. The fluorescent lamp's discontinuous spectrum, dominated by mercury emission lines and phosphor bands, causes larger color shifts, particularly in the red region. For R1 to R8, ΔE values average around 6 units, resulting in Ri values in the low-to-mid 70s. Notably, the supplemental index R9 (saturated red) is low, often around −50 to −90, due to weak red emission in the SPD; for this example, ΔE_9 ≈ 41 leads to R9 ≈ -89. This negative R9 highlights the lamp's poor rendering of deep reds, such as skin tones or produce, without significantly pulling down Ra (which excludes R9).[28][29]
A spreadsheet-style breakdown for the halogen example can organize the computation as follows, using simplified input data (normalized SPD values at key wavelengths for brevity; full calculations require finely sampled SPDs, e.g. at 5 nm steps, across the visible range):

| Test Sample | SPD_Test (key λ, rel. power) | SPD_Ref (key λ, rel. power) | XYZ_Test | XYZ_Ref | L*_Test | a*_Test | b*_Test | L*_Ref | a*_Ref | b*_Ref | ΔL* | Δa* | Δb* | ΔE | Ri |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| R1 | 450:0.8, 550:1.0, 650:0.9 | 450:0.8, 550:1.0, 650:0.9 | 0.45,0.48,0.35 | 0.45,0.48,0.35 | 75.2 | 5.1 | 12.3 | 75.0 | 5.0 | 12.1 | 0.2 | 0.1 | 0.2 | 0.2 | 99.1 |
| R2 | 450:0.7, 550:0.9, 650:1.1 | 450:0.7, 550:0.9, 650:1.1 | 0.52,0.42,0.28 | 0.52,0.42,0.28 | 68.4 | 28.5 | 45.2 | 68.0 | 28.1 | 44.8 | 0.4 | 0.4 | 0.4 | 0.4 | 98.2 |
| R3 | 450:1.2, 550:0.6, 650:0.4 | 450:1.2, 550:0.6, 650:0.4 | 0.25,0.18,0.55 | 0.25,0.18,0.55 | 45.1 | -15.2 | -20.1 | 44.8 | -15.5 | -19.8 | 0.3 | -0.3 | 0.3 | 0.3 | 98.6 |
| R4 | 450:0.6, 550:1.1, 650:0.5 | 450:0.6, 550:1.1, 650:0.5 | 0.38,0.55,0.22 | 0.38,0.55,0.22 | 82.3 | -12.4 | 35.6 | 81.8 | -12.8 | 35.1 | 0.5 | -0.4 | 0.5 | 0.5 | 97.7 |
| R5 | 450:0.5, 550:1.2, 650:0.3 | 450:0.5, 550:1.2, 650:0.3 | 0.22,0.62,0.15 | 0.22,0.62,0.15 | 55.6 | -25.3 | 18.9 | 55.5 | -25.2 | 18.8 | 0.1 | 0.1 | 0.1 | 0.1 | 99.5 |
| R6 | 450:1.3, 550:0.5, 650:0.2 | 450:1.3, 550:0.5, 650:0.2 | 0.18,0.12,0.68 | 0.18,0.12,0.68 | 32.4 | -8.7 | -45.2 | 31.8 | -9.3 | -44.6 | 0.6 | -0.6 | 0.6 | 0.6 | 97.2 |
| R7 | 450:0.9, 550:0.7, 650:0.8 | 450:0.9, 550:0.7, 650:0.8 | 0.35,0.28,0.42 | 0.35,0.28,0.42 | 50.2 | 45.1 | -15.3 | 49.9 | 44.8 | -15.0 | 0.3 | 0.3 | 0.3 | 0.3 | 98.6 |
| R8 | 450:0.4, 550:0.8, 650:1.3 | 450:0.4, 550:0.8, 650:1.3 | 0.48,0.35,0.52 | 0.48,0.35,0.52 | 62.5 | 52.3 | 28.4 | 62.1 | 51.9 | 28.0 | 0.4 | 0.4 | 0.4 | 0.4 | 98.2 |
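The arithmetic in the two examples above can be reproduced directly from the quoted ΔE values. The snippet below is a check of those figures rather than a CRI implementation; the ΔE inputs are the illustrative values listed in the text.

```python
# Halogen example: dE for R1..R8 as quoted above.
halogen_de = [0.2, 0.4, 0.3, 0.5, 0.1, 0.6, 0.3, 0.4]
halogen_ri = [100 - 4.6 * de for de in halogen_de]
print([round(r, 1) for r in halogen_ri])  # [99.1, 98.2, 98.6, 97.7, 99.5, 97.2, 98.6, 98.2]
print(round(sum(halogen_ri) / 8))         # 98

# Cool white fluorescent example: the saturated-red supplementary index for dE9 ~ 41.
print(round(100 - 4.6 * 41))              # -89
```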
Typical Values by Light Source
The color rendering index (CRI), specifically the general index Ra, varies significantly across lighting technologies, reflecting differences in their spectral power distributions relative to reference illuminants. Incandescent and tungsten-halogen lamps achieve the highest Ra values, serving as the reference standard with Ra = 100 and individual color rendering indices Ri typically near 100 across the test samples, due to their continuous, blackbody-like spectra.[32] Fluorescent lamps generally exhibit Ra values between 50 and 85, with cool white variants around 70 and triphosphor types reaching 80 or higher, depending on phosphor blends that enhance spectral coverage. High-intensity discharge (HID) lamps such as metal halide offer Ra from 60 to 90, while sodium-vapor lamps perform poorly owing to their narrow emission lines: high-pressure sodium typically reaches only about 20–30, and low-pressure sodium's nearly monochromatic output yields Ra approaching zero. Light-emitting diodes (LEDs) span Ra 65 to 95, with standard white LEDs at about 80 and high-CRI designs exceeding 90, enabled by optimized phosphor conversions or multi-channel spectra.[32]
The following table summarizes representative Ra values for common light sources, grouped by correlated color temperature (CCT) where applicable, based on manufacturer data and standards. Values can vary with specific formulations, and higher CCTs often correlate with slightly lower Ra in non-incandescent sources due to spectral gaps in the blue-green region.

| Light Source Type | Typical Ra Range | Example CCT (K) | Notes on Variability |
|---|---|---|---|
| Incandescent/Tungsten | 100 | 2700 (warm) | Reference standard; all Ri ≈ 100; minimal variation across CCT. |
| Fluorescent (Cool White) | 60–75 | 4100 (cool) | Basic halophosphate phosphors; lower Ra due to mercury lines. |
| Fluorescent (Triphosphor) | 80–85 | 3000–5000 | Rare-earth phosphors improve red rendering; up to 90 in premium types. |
| LED (Standard White) | 70–85 | 2700–5000 | Phosphor-converted; ENERGY STAR minimum 80 for interiors.[32] |
| LED (High-CRI) | 90–95 | 2700–4000 | Multi-phosphor or hybrid designs; approaches incandescent fidelity.[32] |
| Metal Halide (HID) | 60–85 | 3000–4000 | Ceramic metal halide versions reach about 90; varies with halide mix. |
| High-Pressure Sodium (HID) | 20–30 | 2000–2200 (warm) | Limited spectrum; unsuitable for color-critical tasks. |
| Low-Pressure Sodium | ~0 | 1800 (warm) | Nearly monochromatic yellow; non-yellow hues render poorly, so Ra approaches 0. |