Temperature
Temperature is a physical quantity that serves as a measure of the average kinetic energy of the microscopic particles—such as atoms and molecules—within a substance, reflecting the degree of hotness or coldness relative to a reference point.[1] In kinetic terms, it is proportional to the average kinetic energy of the random motion of these particles, and the thermodynamic (absolute) scale places its zero at the point where this thermal motion theoretically ceases.[2] The concept underpins thermal equilibrium: two systems in contact reach the same temperature, at which point no net heat transfer occurs between them.[3]

The most widely used temperature scales are the Celsius (°C), Fahrenheit (°F), and Kelvin (K) scales, each defined by specific reference points such as the freezing and boiling points of water under standard atmospheric pressure.[4] The Celsius scale sets the freezing point of water at 0°C and the boiling point at 100°C, making it intuitive for everyday applications.[5] The Fahrenheit scale, common in the United States, assigns 32°F to water's freezing point and 212°F to its boiling point, giving finer degree intervals over the range of human-perceived temperatures.[6] The Kelvin scale, the SI unit of thermodynamic temperature, starts at absolute zero (0 K, equivalent to -273.15°C), where molecular motion theoretically ceases, and is essential for scientific calculations involving gases and absolute energy measures.[7]

Temperature is measured with thermometers that exploit physical properties that change with thermal energy, such as the expansion of liquids or the electrical resistance of metals.[8] Traditional liquid-in-glass thermometers, often filled with mercury or alcohol, rely on the volume expansion of the liquid to indicate temperature on a calibrated scale.[9] Modern electronic methods, including thermocouples (which generate a voltage across junctions of dissimilar metals) and resistance temperature detectors (RTDs, which measure changes in electrical resistance), offer higher precision and are used in industrial, medical, and environmental monitoring.[10] Air temperature, for instance, is gauged with electronic thermometers capable of resolutions down to fractions of a degree Celsius.[10] In physics and related fields, temperature plays a central role in thermodynamics, governing heat transfer, phase changes, and chemical reaction rates, while its variations drive weather patterns, biological processes, and material behaviors.[11]
Core Concepts
Definition
Temperature is fundamentally defined through the zeroth law of thermodynamics, which states that if two systems are separately in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.[12] This law establishes temperature as the property shared by systems in thermal equilibrium, where no net heat transfer occurs between them because no temperature gradient exists.[12] Thermal equilibrium thus serves as the empirical basis for measuring and comparing temperatures across thermodynamic systems.

As an intensive property, temperature does not depend on the size or amount of the system but reflects the average kinetic energy of its microscopic particles, such as atoms or molecules.[13] In thermodynamic contexts, it quantifies the tendency of a system to exchange energy with its surroundings via heat transfer.[14] Importantly, temperature must be distinguished from heat: temperature is a state variable characterizing the system's internal energy distribution, while heat is the process of energy transfer driven by a temperature difference between systems.[15] Objects possess temperature but not heat in isolation; heat arises only during transfer.[14]

In the specific case of an ideal gas, temperature relates directly to the average translational kinetic energy per molecule, given by \frac{3}{2} k T = \frac{1}{2} m \langle v^2 \rangle, where T is the temperature, k is Boltzmann's constant, m is the molecular mass, and \langle v^2 \rangle is the mean square speed of the molecules.[16] This proportionality underscores temperature's role as a macroscopic measure of microscopic agitation, applicable under conditions where intermolecular forces are negligible.[16]
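The relation above can be inverted to estimate typical molecular speeds at a given temperature. The short Python sketch below computes the root-mean-square speed v_rms = \sqrt{3kT/m} for a nitrogen molecule at room temperature; the molar mass value and the 300 K operating point are illustrative assumptions rather than figures from the text.

```python
# Illustrative sketch: rms molecular speed from (3/2) k T = (1/2) m <v^2>,
# i.e. v_rms = sqrt(3 k T / m). The inputs (N2 at 300 K) are example values.
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K (exact since 2019)
N_A = 6.02214076e23       # Avogadro constant, 1/mol (exact since 2019)

def v_rms(temperature_k: float, molar_mass_kg_per_mol: float) -> float:
    """Root-mean-square speed of an ideal-gas molecule in m/s."""
    m = molar_mass_kg_per_mol / N_A          # mass of one molecule, kg
    return math.sqrt(3.0 * K_B * temperature_k / m)

if __name__ == "__main__":
    # Nitrogen (N2), molar mass ~0.028 kg/mol, at 300 K -> roughly 520 m/s
    print(f"v_rms(N2, 300 K) = {v_rms(300.0, 0.028):.0f} m/s")
```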
Equilibrium and Non-Equilibrium
In thermodynamic equilibrium, a system achieves a uniform temperature throughout when it is isolated or in contact with a heat bath, such that no net heat flows between parts of the system or with the surroundings, consistent with the zeroth law of thermodynamics.[2] This state implies that macroscopic properties like pressure and density are also uniform, and the system remains unchanged over time unless perturbed.[17] For instance, two objects in thermal contact reach the same temperature when equilibrium is attained, ceasing any energy exchange.[18]

In non-equilibrium steady states, temperature is not uniform; instead, persistent gradients drive continuous heat flow, as in steady-state heat conduction, where the system's properties remain constant in time despite the imbalance.[19] Here the overall energy input equals the output, maintaining a constant flux, but local temperatures vary spatially—for example, in a rod with fixed hot and cold ends, a linear temperature profile develops along its length. Such states are analyzed using extended irreversible thermodynamics, where temperature becomes a local quantity adjusted for dissipative effects like heat flux.

Non-steady states, or transient conditions, feature time-varying temperatures as the system evolves toward equilibrium or another steady configuration, such as during the initial heat flow in an insulated body suddenly exposed to a temperature difference.[19] In these scenarios, temperature profiles change dynamically, with heat diffusion governed by the unsteady heat equation, leading to temporary gradients that diminish over time (see the sketch after this subsection).[20] Unlike steady states, no constant flux persists, and the system's departure from uniformity is both spatial and temporal.

Local thermodynamic equilibrium (LTE) approximates equilibrium conditions in specific regions of an otherwise non-equilibrium system, where collision rates among particles are high enough to maintain Maxwell-Boltzmann distributions locally, despite global imbalances.[21] This assumption is valid in dense plasmas or stellar atmospheres when radiative processes are negligible compared to collisions, allowing temperature to be defined via local energy equipartition.[22] For example, in the solar photosphere, LTE holds over small scales where temperature gradients are shallow, enabling equilibrium statistical descriptions amid outward energy transport.[23]

An axiomatic approach in non-equilibrium thermodynamics treats temperature as a functional of the system's state variables, extending equilibrium definitions to include fluxes and gradients for consistency with the second law. In extended irreversible thermodynamics, this involves a non-equilibrium entropy depending on energy density and heat flux, yielding a local temperature via the derivative ∂s/∂u, where deviations from the equilibrium temperature scale with the flux magnitude, such as θ ≈ T (1 - α q²) for small perturbations.[24] This framework ensures that thermodynamic relations hold formally, as in the generalized Gibbs equation ds = θ⁻¹ du + dissipative terms.
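To make the transient case concrete, the following Python sketch integrates the one-dimensional unsteady heat equation ∂T/∂t = α ∂²T/∂x² with an explicit finite-difference scheme for a rod whose ends are held at fixed temperatures; the rod length, diffusivity, and boundary temperatures are arbitrary illustrative values, not parameters from the text. It shows the interior profile relaxing toward the linear steady-state distribution described above.

```python
# Minimal sketch (illustrative parameters): explicit finite-difference solution of
# the 1-D unsteady heat equation dT/dt = alpha * d2T/dx2 for a rod with fixed-end
# temperatures. The profile relaxes toward the linear steady-state distribution.
import numpy as np

alpha = 1e-4                 # thermal diffusivity, m^2/s (assumed value)
length = 0.1                 # rod length, m
nx = 51                      # number of grid points
dx = length / (nx - 1)
dt = 0.4 * dx * dx / alpha   # time step within the explicit stability limit

T = np.full(nx, 300.0)       # initial uniform temperature, K
T[0], T[-1] = 400.0, 300.0   # hot and cold ends held fixed, K

for _ in range(20000):       # march in time toward the steady state
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

steady = np.linspace(T[0], T[-1], nx)            # expected linear profile
print("max deviation from linear profile:", np.abs(T - steady).max(), "K")
```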
Scales and Units
Empirical Scales
Empirical temperature scales are defined by reproducible physical phenomena, such as the freezing and boiling points of water under standard atmospheric pressure, without reference to any underlying molecular or thermodynamic theory. These scales emerged in the 18th century as practical tools for measuring temperature variations observable through thermometric fluids like mercury or alcohol, prioritizing human-relevant reference points over universal absolutes.

The Fahrenheit scale, proposed by German physicist Daniel Gabriel Fahrenheit in 1724, was one of the earliest standardized empirical scales. Fahrenheit calibrated his mercury-in-glass thermometers using three fixed points: the temperature of a brine mixture of ice, water, and ammonium chloride at 0°F; the freezing point of water at 32°F; and the average human body temperature, which he placed at 96°F (later measurements put it near 98.6°F on the modern scale). The boiling point of water falls at 212°F, giving 180 divisions between freezing and boiling. The scale's fine graduations (one degree being 1/96 of the interval between the brine point and body temperature) aimed for precision in meteorological and medical applications.[25][26]

In 1731, French naturalist René Antoine Ferchault de Réaumur introduced his scale based on the volumetric expansion of alcohol in a thermometer tube. He defined 0°R (Réaumur) as the freezing point of water and 80°R as the boiling point under standard conditions, dividing the interval into 80 equal parts to reflect alcohol's expansion coefficient, which he measured as about 1/1000 of its volume per degree. This choice made the scale convenient for instruments using alcohol, which expands more than mercury, and it gained popularity in continental Europe for scientific and industrial uses, such as monitoring fermentation processes.[27]

The Celsius scale, developed by Swedish astronomer Anders Celsius in 1742, refined earlier proposals by using water's phase changes as fixed points in a decimal system. In his publication Observations of Two Persistent Degrees on a Thermometer, Celsius initially proposed 0°C for water's boiling point and 100°C for its freezing point, but the scale was inverted shortly after his death to the modern convention of 0°C at freezing and 100°C at boiling, dividing the interval into 100 equal degrees. This centigrade (hundred-grade) approach emphasized simplicity and universality for astronomical and everyday measurements, quickly supplanting other scales in scientific contexts.[28]

Conversions between these empirical scales account for their differing zero points and degree sizes, derived from the ratios of their intervals between water's freezing and boiling points. The formula to convert Celsius to Fahrenheit is
^\circ\mathrm{F} = ^\circ\mathrm{C} \times \frac{9}{5} + 32
and the inverse is
^\circ\mathrm{C} = (^\circ\mathrm{F} - 32) \times \frac{5}{9}.
For the Réaumur scale, which divides the same interval into 80 rather than 100 degrees, the conversions are
^\circ\mathrm{R} = ^\circ\mathrm{C} \times \frac{4}{5}
and
^\circ\mathrm{C} = ^\circ\mathrm{R} \times \frac{5}{4}.
These relations highlight the proportional differences: the Fahrenheit degree is 5/9 of a Celsius degree, while the Réaumur degree is 5/4 of a Celsius degree.[29]

A key limitation of empirical scales is their arbitrary zero points, which are set by convenient references like brine mixtures or human body temperature rather than any intrinsic physical limit, leading to non-intuitive values for common phenomena across scales. In addition, intervals are not directly comparable between scales without conversion; a change of 100°R, for instance, does not correspond to a change of 100°C. These features make empirical scales practical for relative measurements but less suitable for precise scientific calculations involving energy or entropy.
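As a worked illustration of these linear relations, the Python sketch below converts a Celsius reading to Fahrenheit and Réaumur and back; the sample value of 25°C is arbitrary.

```python
# Minimal sketch of the empirical-scale conversions quoted above.
# F = C * 9/5 + 32,  C = (F - 32) * 5/9,  Re = C * 4/5,  C = Re * 5/4.

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9.0 / 5.0 + 32.0

def fahrenheit_to_celsius(f: float) -> float:
    return (f - 32.0) * 5.0 / 9.0

def celsius_to_reaumur(c: float) -> float:
    return c * 4.0 / 5.0

def reaumur_to_celsius(re: float) -> float:
    return re * 5.0 / 4.0

if __name__ == "__main__":
    c = 25.0                                   # arbitrary example reading
    f = celsius_to_fahrenheit(c)               # 77.0 °F
    re = celsius_to_reaumur(c)                 # 20.0 °R (Réaumur)
    assert abs(fahrenheit_to_celsius(f) - c) < 1e-9
    assert abs(reaumur_to_celsius(re) - c) < 1e-9
    print(f"{c} °C = {f} °F = {re} °R (Réaumur)")
```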
Absolute Scales
Absolute scales of temperature are defined by an invariant reference point at absolute zero, the lowest conceivable temperature, at which thermal motion theoretically ceases, providing a universal foundation independent of the specific substances or arbitrary fixed points used in empirical scales. This zero point emerges from extrapolations of gas behavior at constant pressure, as described by Charles's law, which states that the volume of a gas is directly proportional to its absolute temperature.[30] At absolute zero, marked as 0 on these scales, a system's internal energy reaches its minimum and its entropy approaches a minimum value for ideal cases, establishing a physical limit to cooling processes.[31]

The Kelvin scale, the SI unit of thermodynamic temperature denoted by K, anchors its definition to fundamental physical constants rather than material properties alone. Prior to 2019, it was defined through the triple point of water, fixed at exactly 273.16 K, where water coexists in solid, liquid, and vapor phases in equilibrium.[32] Since the 2019 redefinition, the kelvin is defined by assigning the exact value k = 1.380649 \times 10^{-23} J/K to the Boltzmann constant, linking temperature directly to the average kinetic energy per degree of freedom in a system.[33] This ensures the scale's invariance and precision in scientific measurements, with intervals equal to those of the Celsius scale but starting at absolute zero.

The Rankine scale (°R, sometimes written °Ra to distinguish it from the Réaumur degree), the absolute counterpart to the Fahrenheit scale, uses the same degree size as Fahrenheit but sets absolute zero at 0 °R. Introduced by Scottish engineer William John Macquorn Rankine in the 19th century, it places the freezing point of water at 491.67 °R and the boiling point at 671.67 °R under standard pressure.[34] Primarily used in English-unit engineering contexts, such as thermodynamics practice in the United States, it facilitates calculations involving absolute temperatures without negative values.[35]

Thermodynamic temperature, the core concept underlying absolute scales, is defined independently of any working substance through the efficiency of reversible heat engines, as established by the Carnot cycle. The maximum efficiency \eta of such an engine operating between hot reservoir temperature T_h and cold reservoir temperature T_c is \eta = 1 - \frac{T_c}{T_h}, where temperatures are measured on an absolute scale, ensuring that the ratio reflects intrinsic thermal properties rather than arbitrary calibrations.[36] This formulation, derived from Sadi Carnot's 1824 analysis, guarantees that all reversible engines operating between the same reservoirs achieve identical efficiency, defining temperature ratios universally.[37]

Gas thermometers calibrate absolute scales using the ideal gas law, PV = nRT, where P is pressure, V is volume, n is the amount of substance, R is the gas constant, and T is the absolute temperature in kelvins. By maintaining constant volume and measuring pressure changes with temperature, or vice versa, the law extrapolates to absolute zero, where the volume or pressure would theoretically vanish.[38] This method provides a practical realization of the thermodynamic scale, approximating ideal behavior with dilute gases such as helium at low pressures.[39]
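The scale relations and the Carnot bound can both be checked numerically. The sketch below converts between kelvins, degrees Celsius, and degrees Rankine, and evaluates the Carnot efficiency for an example pair of reservoir temperatures; the 500 K and 300 K reservoirs are illustrative choices, not values from the text.

```python
# Minimal sketch: absolute-scale conversions and the Carnot efficiency bound.
# K = C + 273.15,  Rankine = K * 9/5,  eta = 1 - Tc/Th (absolute temperatures).

def celsius_to_kelvin(c: float) -> float:
    return c + 273.15

def kelvin_to_rankine(k: float) -> float:
    return k * 9.0 / 5.0

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require 0 < Tc < Th on an absolute scale")
    return 1.0 - t_cold_k / t_hot_k

if __name__ == "__main__":
    print(kelvin_to_rankine(celsius_to_kelvin(0.0)))    # 491.67 °R (ice point)
    print(kelvin_to_rankine(celsius_to_kelvin(100.0)))  # 671.67 °R (steam point)
    print(carnot_efficiency(500.0, 300.0))              # 0.4 for example reservoirs
```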
Theoretical Foundations
Kinetic Theory
The kinetic theory of gases interprets temperature as a measure of the average kinetic energy of particles in random motion, providing a microscopic foundation for macroscopic thermodynamic properties. This approach assumes that a gas consists of a large number of point-like particles with negligible volume compared to the container, moving in straight lines between elastic collisions that conserve both momentum and kinetic energy, and that intermolecular forces are absent except during collisions.[40] These assumptions idealize the gas as non-interacting except at instantaneous collisions, allowing observable properties to be derived from particle dynamics.

To derive the pressure exerted by the gas on the container walls, consider the momentum change from a particle colliding elastically with a wall perpendicular to the x-direction. A particle of mass m with velocity component v_x imparts an impulse of 2 m v_x upon reversal, and with particles distributed uniformly, the number of collisions per unit time per unit area yields the pressure P = \frac{1}{3} \rho \langle v^2 \rangle, where \rho is the mass density and \langle v^2 \rangle is the mean square speed.[41] For N particles in volume V, this simplifies to P V = \frac{1}{3} N m \langle v^2 \rangle, establishing the kinetic equation of state. Linking the average kinetic energy \frac{1}{2} m \langle v^2 \rangle = \frac{3}{2} k T—where k is the Boltzmann constant and T the temperature—yields the ideal gas law P V = N k T, directly connecting temperature to microscopic motion.[41]

The speeds of particles follow the Maxwell-Boltzmann distribution, derived from the assumption of isotropic random motion and conservation laws in collisions. The probability density function for speed v is f(v) = \left( \frac{m}{2 \pi k T} \right)^{3/2} 4 \pi v^2 \exp\left( -\frac{m v^2}{2 k T} \right), which predicts that the most probable speed scales as \sqrt{T/m} and the root-mean-square speed as \sqrt{3 k T / m}.[41] This distribution emerges from maximizing the number of microstates consistent with fixed energy under the classical assumptions, ensuring that the average kinetic energy per particle is \frac{3}{2} k T.

The equipartition theorem underpins this energy-temperature relation, stating that in thermal equilibrium each quadratic degree of freedom in the Hamiltonian contributes \frac{1}{2} k T to the average energy. For a monatomic ideal gas particle with three translational degrees of freedom (kinetic energy terms \frac{1}{2} m v_x^2, \frac{1}{2} m v_y^2, \frac{1}{2} m v_z^2), the total average kinetic energy is thus \frac{3}{2} k T.[41] This theorem arises from the equal weighting of phase-space volumes in classical mechanics, explaining why temperature quantifies the average translational energy in gases.

While developed for gases, kinetic theory extends to liquids and solids by considering bound particles with vibrational motion. In solids, atoms oscillate around lattice sites, contributing quadratic terms for both kinetic and potential energy in the harmonic approximation, leading to an average energy of k T per mode via equipartition. For a three-dimensional lattice, each atom has six degrees of freedom (three kinetic, three potential), yielding a total energy of 3 k T per atom and molar specific heats approaching 3R at high temperatures, as observed in many metals.[42] This vibrational contribution links temperature to lattice dynamics, though quantum effects limit equipartition at low temperatures.
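The link between the distribution and the equipartition result can be demonstrated by direct sampling: the sketch below draws velocity components from the Gaussian distribution implied by Maxwell-Boltzmann statistics and confirms that the mean kinetic energy per particle approaches \frac{3}{2} k T. The gas parameters (helium at 300 K) and the sample size are illustrative assumptions.

```python
# Minimal sketch: sample Maxwell-Boltzmann velocities and check <(1/2) m v^2> = (3/2) k T.
# Each Cartesian velocity component is Gaussian with variance k T / m.
import numpy as np

K_B = 1.380649e-23            # Boltzmann constant, J/K
m = 6.646e-27                 # mass of a helium atom, kg (illustrative choice)
T = 300.0                     # temperature, K (illustrative choice)

rng = np.random.default_rng(0)
sigma = np.sqrt(K_B * T / m)                     # per-component standard deviation
v = rng.normal(0.0, sigma, size=(1_000_000, 3))  # (vx, vy, vz) samples

mean_ke = 0.5 * m * np.mean(np.sum(v**2, axis=1))
print(f"sampled mean KE : {mean_ke:.3e} J")
print(f"(3/2) k T       : {1.5 * K_B * T:.3e} J")
print(f"sampled v_rms   : {np.sqrt(np.mean(np.sum(v**2, axis=1))):.1f} m/s")
print(f"sqrt(3 k T / m) : {np.sqrt(3 * K_B * T / m):.1f} m/s")
```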
Thermodynamic Approach
In thermodynamics, temperature emerges as a fundamental parameter through the zeroth law, which establishes that if two systems are each in thermal equilibrium with a third system, they are in thermal equilibrium with each other; this transitivity allows temperature to be defined as the property shared by systems in mutual thermal equilibrium.[12][43]

The first law of thermodynamics relates temperature to energy changes in a system via the conservation of energy, expressed as dU = \delta Q - \delta W, where dU is the change in internal energy, \delta Q is the heat added to the system, and \delta W is the work done by the system.[44][45] For processes at constant volume, where \delta W = 0, this simplifies to dU = \delta Q, and the heat capacity at constant volume is defined as C_V = \left( \frac{\delta Q}{dT} \right)_{V} = \left( \frac{\partial U}{\partial T} \right)_{V}, quantifying how much heat is needed to change the temperature while the volume is held fixed.[46]

The second law introduces entropy S as a state function that governs irreversible processes, with the differential form for reversible processes given by dS = \frac{\delta Q_{\text{rev}}}{T}, linking temperature directly to the entropy change accompanying reversible heat transfer.[47][48] From this, temperature can be rigorously defined in terms of fundamental thermodynamic potentials as \frac{1}{T} = \left( \frac{\partial S}{\partial U} \right)_{V,N}, where the partial derivative is taken at constant volume V and particle number N, emphasizing temperature's role as the inverse of the entropy's sensitivity to internal energy.[49][50]

As an intensive property, temperature remains uniform throughout a system in thermodynamic equilibrium and does not depend on the system's size or the amount of matter present, distinguishing it from extensive properties like internal energy or entropy.[51][52] This uniformity ensures that, in equilibrium, all parts of an isolated system attain the same temperature regardless of scale.[53]

In the context of heat engines, temperature's thermodynamic role is exemplified by the Carnot cycle, an idealized reversible cycle comprising two isothermal and two adiabatic processes, whose efficiency \eta = 1 - \frac{T_C}{T_H} depends solely on the absolute temperatures of the hot reservoir T_H and cold reservoir T_C, providing a scale-independent upper limit on the conversion of heat to work.[54][55] This efficiency underscores temperature's function as a universal measure of thermal potential in macroscopic systems.[56]
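These relations can be verified numerically for a simple system. For a monatomic ideal gas, U = \frac{3}{2} N k T and the energy-dependent part of S(U) is \frac{3}{2} N k \ln U, so 1/T = (\partial S / \partial U)_{V,N} recovers the temperature, and integrating dS = \delta Q_{\text{rev}}/T at constant volume gives \Delta S = C_V \ln(T_2/T_1). The sketch below checks both statements; the particle number and the temperatures are arbitrary example values.

```python
# Minimal sketch: check 1/T = dS/dU and the constant-volume entropy change
# Delta S = C_V ln(T2/T1) for a monatomic ideal gas (U = 3/2 N k T, C_V = 3/2 N k).
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K
N = 1.0e22              # number of particles (example value)
C_V = 1.5 * N * K_B     # heat capacity at constant volume, J/K

def entropy_vs_energy(U):
    # Only the energy-dependent part of the ideal-gas entropy is needed here:
    # S(U) = (3/2) N k ln(U) + const at fixed V and N.
    return 1.5 * N * K_B * np.log(U)

T1, T2 = 300.0, 600.0                       # example temperatures, K
U1, U2 = 1.5 * N * K_B * T1, 1.5 * N * K_B * T2

# 1/T = dS/dU, estimated by a small finite difference around U1
dU = 1e-6 * U1
invT = (entropy_vs_energy(U1 + dU) - entropy_vs_energy(U1 - dU)) / (2 * dU)
print("temperature from 1/(dS/dU):", 1.0 / invT, "K (expect ~300 K)")

# Delta S two ways: S(U2) - S(U1) versus C_V ln(T2/T1)
print("S(U2) - S(U1) :", entropy_vs_energy(U2) - entropy_vs_energy(U1), "J/K")
print("C_V ln(T2/T1) :", C_V * np.log(T2 / T1), "J/K")
```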
Statistical Mechanics
In statistical mechanics, temperature emerges as a parameter characterizing the distribution of energy among microscopic states in a system at thermal equilibrium. This probabilistic framework bridges the macroscopic thermodynamic properties, such as those defined by the zeroth law, to the underlying microstates, providing a fundamental interpretation of temperature through ensemble theory.[57]

The microcanonical ensemble describes an isolated system with fixed energy E, volume V, and particle number N, where all accessible microstates are equally likely. The entropy S is given by S = k \ln \Omega, with k the Boltzmann constant and \Omega the number of microstates corresponding to energy E. Temperature is then defined as the inverse of the rate of change of entropy with energy, T = \left( \frac{\partial S}{\partial E} \right)_{V,N}^{-1} = \frac{1}{k} \left( \frac{\partial \ln \Omega}{\partial E} \right)_{V,N}^{-1}, linking macroscopic temperature directly to the density of states. This relation, originating from Boltzmann's foundational work, ensures consistency with the second law of thermodynamics by maximizing entropy under the given constraints.[57]

In the canonical ensemble, the system exchanges energy with a heat reservoir at fixed temperature T, while V and N remain constant. The probability of a microstate with energy E_i is proportional to e^{-\beta E_i}, where \beta = 1/(kT) is the inverse temperature. The partition function Z = \sum_i e^{-\beta E_i} normalizes this distribution and encodes thermodynamic quantities, such as the Helmholtz free energy F = -kT \ln Z. This ensemble, formalized by Gibbs, facilitates calculations for systems in contact with a bath, where temperature controls the Boltzmann factor's weighting of states.

In non-equilibrium or non-ideal systems, the thermodynamic temperature—measured via equilibrium criteria like the zeroth law—may diverge from the statistical temperature, defined through local ensemble averages or kinetic definitions. For instance, in driven systems or those with spatial gradients, the statistical temperature can reflect microscopic fluctuations differently from the macroscopic equilibrium value, leading to inconsistencies resolved only in the thermodynamic limit. Such differences highlight the limitations of ensemble equivalence outside ideal conditions.[58]

Negative temperatures arise in systems with bounded energy spectra, such as nuclear spin systems, where population inversion occurs—more particles occupy higher-energy states than lower ones. Here the canonical distribution yields T < 0, since \beta < 0, corresponding to states hotter than any positive temperature yet unstable against energy exchange with positive-temperature reservoirs. This concept was experimentally realized in lithium fluoride nuclear spins, where rapid magnetic field reversal induced inversion, confirming the thermodynamic consistency of negative T.[59]

Quantum extensions of statistical mechanics incorporate particle indistinguishability via Fermi-Dirac and Bose-Einstein statistics for fermions and bosons, respectively, which become particularly relevant for degenerate gases at low temperatures where quantum effects dominate. In a degenerate Fermi gas, the Fermi-Dirac distribution f(\epsilon) = [e^{(\epsilon - \mu)/kT} + 1]^{-1} fills states up to the Fermi energy \epsilon_F, with degeneracy setting in when T \ll T_F = \epsilon_F / k, leading to Pauli-blocked excitations and a finite pressure even at T = 0. For bosons, the Bose-Einstein distribution f(\epsilon) = [e^{(\epsilon - \mu)/kT} - 1]^{-1} allows condensation below a critical temperature T_c set by the condition n \lambda^3 \approx 2.612, where n is the number density and \lambda = h/\sqrt{2 \pi m k T} is the thermal de Broglie wavelength; below T_c a macroscopic ground-state occupation emerges in the ideal gas. These statistics, developed by Fermi and Dirac for fermions and by Bose and Einstein for bosons, underpin phenomena like white dwarf stability and superfluidity.
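A two-level system makes the canonical picture and the origin of negative temperature concrete: the sketch below computes the partition function and level populations at a given \beta, then inverts the population ratio to recover an effective temperature, which becomes negative when the upper level is more populated. The level spacing and temperatures are arbitrary illustrative values.

```python
# Minimal sketch: canonical two-level system with energies 0 and eps.
# Populations follow Boltzmann factors; an inverted population corresponds to T < 0.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
eps = 1.0e-21           # level spacing, J (illustrative value)

def populations(T: float):
    """Ground/excited occupation probabilities at temperature T (canonical ensemble)."""
    beta = 1.0 / (K_B * T)
    Z = 1.0 + math.exp(-beta * eps)           # partition function Z = sum_i exp(-beta E_i)
    return 1.0 / Z, math.exp(-beta * eps) / Z

def effective_temperature(p_ground: float, p_excited: float) -> float:
    """Invert p_excited/p_ground = exp(-eps/(k T)); negative if p_excited > p_ground."""
    return -eps / (K_B * math.log(p_excited / p_ground))

p0, p1 = populations(300.0)
print("populations at 300 K:", p0, p1)
print("recovered T:", effective_temperature(p0, p1), "K")

# Swap the populations (population inversion): the same formula gives a negative T.
print("inverted-population T:", effective_temperature(p1, p0), "K")
```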
Measurement Methods
Historical Devices
The earliest devices for detecting temperature changes were thermoscopes, which indicated variations qualitatively, without a quantitative scale. In the 3rd century BCE, Philo of Byzantium described an apparatus consisting of a hollow sphere connected to a tube submerged in water, where heating or cooling caused the enclosed air to expand or contract, displacing the water level to show temperature differences.[60] This primitive design relied on the volumetric expansion of air and marked the first recorded attempt to observe thermal effects mechanically. Around 1593, Galileo Galilei improved upon such concepts by building an air thermoscope, a sealed bulb attached to a tube in a water reservoir, where rising or falling water levels in the tube visually demonstrated air's expansion with heat and contraction with cold, aiding early meteorological and experimental observations.[61][62]

Advancements in the 17th century shifted toward more precise liquid-based instruments suitable for medical applications. In 1611, Italian physician Santorio Santorio adapted air thermoscopes for clinical use, employing them to monitor patients' body temperatures by observing fluid level changes, thus pioneering quantitative physiological measurements in medicine.[63][64] By sealing the devices to prevent interference from atmospheric pressure, these early thermometers became more reliable. In 1714, German physicist Daniel Gabriel Fahrenheit introduced the first practical mercury-in-glass thermometer, exploiting mercury's high thermal expansion and low freezing point for greater sensitivity and accuracy than alcohol or air variants, enabling finer gradations in temperature readings.[65][66]

The 19th century brought electrical methods that transformed temperature measurement. In 1821, Thomas Johann Seebeck discovered the thermoelectric effect, observing that a junction of two dissimilar metals, such as bismuth and copper, generated a voltage proportional to the temperature difference (ΔT) between the hot and cold junctions, laying the foundation for thermocouples as robust sensors for high-temperature environments.[67][68] This Seebeck effect allowed indirect electrical detection of temperature changes, with the voltage output serving as a measurable proxy for ΔT. Complementing this, in 1887 British physicist Hugh Longbourne Callendar developed the platinum resistance thermometer, using a platinum wire coil whose electrical resistance varies predictably with temperature according to the relation R = R_0 (1 + \alpha \Delta T), where R_0 is the resistance at a reference temperature, \alpha is the temperature coefficient of resistance, and \Delta T is the temperature change.[69][70] This design offered high precision and stability, becoming a standard for calibration owing to platinum's consistent properties.

Calibration of these historical devices relied on reproducible fixed points, particularly the freezing and boiling points of water under standard atmospheric pressure, which provided natural benchmarks for scaling temperature intervals. Early thermometers, such as those by Fahrenheit, used the ice point (freezing of water at 32°F) and steam point (boiling of water at 212°F) to define gradations, ensuring consistency across instruments despite variations in materials.[71] These points, later refined in the 18th century by scientists such as Anders Celsius, who inverted his scale to set freezing at 0°C and boiling at 100°C, allowed for empirical standardization without absolute theoretical foundations.
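The resistance relation quoted above is easily inverted in practice: given a measured resistance, the temperature change follows as \Delta T = (R/R_0 - 1)/\alpha. The sketch below applies this to a platinum sensor; the R_0 = 100 Ω reference and the coefficient \alpha \approx 0.00385 per °C are typical textbook values for a Pt100 element, used here only as illustrative assumptions.

```python
# Minimal sketch: invert R = R0 * (1 + alpha * dT) to read temperature from an RTD.
# R0 and alpha below are typical Pt100 textbook values, used as illustrative inputs.

R0 = 100.0          # resistance at the reference temperature (0 °C), ohms
ALPHA = 0.00385     # temperature coefficient of resistance, 1/°C

def rtd_temperature(resistance_ohm: float, t_ref_c: float = 0.0) -> float:
    """Temperature in °C inferred from a measured RTD resistance (linear model)."""
    delta_t = (resistance_ohm / R0 - 1.0) / ALPHA
    return t_ref_c + delta_t

if __name__ == "__main__":
    for r in (100.0, 119.25, 138.5):          # example readings in ohms
        print(f"R = {r:7.2f} ohm  ->  T = {rtd_temperature(r):6.1f} °C")
```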
Modern Techniques
Modern temperature measurement techniques leverage advanced electronic, optical, and superconducting principles to achieve high precision across diverse environments, from cryogenic conditions to extreme high temperatures, often enabling non-contact and remote sensing. These methods surpass traditional mechanical devices by offering resolutions down to millikelvin scales and response times in microseconds, which is essential for applications in aerospace, materials processing, and scientific research. Recent advances include atom-based thermometers using Rydberg atoms, which provide ultra-high accuracy for fundamental metrology, as demonstrated in developments reported in 2025.[72][73]

Infrared thermometry, particularly through pyrometers, measures temperature by detecting the thermal radiation emitted by objects, assuming blackbody behavior in the ideal case. Pyrometers exploit the fact that all bodies above absolute zero emit infrared radiation, with intensity governed by Planck's law, allowing non-contact measurements up to 3000°C without interference from the sensor. For blackbody radiators, Wien's displacement law relates the peak emission wavelength to temperature via \lambda_{\max} T = 2897.77 \, \mu\text{m} \cdot \text{K}, enabling temperature determination from spectral analysis (see the sketch after the fixed-point table below). This technique is widely used in industrial furnaces and remote sensing, though accuracy depends on correcting for emissivity variations in real materials.[73][74]

Optical methods provide versatile non-contact sensing for gases, fluids, and solids. Laser-induced fluorescence (LIF) excites fluorescent molecules with a laser and measures the intensity or spectral shift of the emitted light to infer temperature, since fluorescence yield decreases with rising thermal energy. Planar LIF enables two-dimensional mapping in combustion chambers and microfluidic devices, with resolutions below 1 K in flows up to 2000 K. Raman spectroscopy detects temperature via shifts in the wavelengths of light scattered from molecular vibrations, using the anti-Stokes to Stokes intensity ratio for calibration. This approach achieves microscale resolution (~1.5 μm) in biological samples and harsh environments, with sensitivities up to 1.20 %/K at 300 K using titanium dioxide probes. Both techniques excel in transient, high-speed scenarios like engine testing, where physical probes would disrupt the medium.[75][76]

For cryogenic applications near absolute zero, superconducting transition edge sensors (TES) offer exceptional sensitivity in the millikelvin range. TES devices consist of thin superconducting films, such as molybdenum-gold bilayers, biased at their critical temperature (~100 mK), where a small temperature rise induces a sharp resistance change due to the superconductor-normal metal transition. This enables energy resolutions of ~1.4 eV for X-ray detection at 100 mK, far superior to semiconductor alternatives, and is pivotal in bolometers for astrophysics and in quantum computing cryostats. The sensitivity parameter \alpha = \frac{T}{R} \frac{dR}{dT} quantifies performance, with noise minimized at low base temperatures.[77][78]

High-temperature measurements in extreme environments, such as plasmas, employ robust optical and acoustic approaches. Optical fiber Bragg gratings (FBG) inscribed in silica or sapphire fibers detect temperature through shifts in the reflected Bragg wavelength, caused by thermal expansion and refractive index changes, with sensitivities of 10–15 pm/K. Regenerated FBGs withstand up to 1173 K in radiation-heavy settings like nuclear reactors, while sapphire variants reach 2173 K with 1 K resolution, outperforming thermocouples in corrosive conditions. For plasmas, acoustic thermometry uses laser-induced breakdowns to generate sound waves and measures their propagation speed—which depends on the gas temperature—to infer values up to 1000 K with ±16 K accuracy. This method, validated against thermocouples, suits fusion and combustion diagnostics where optical access is limited.[79][80]

The International Temperature Scale of 1990 (ITS-90) standardizes these measurements using 17 defining fixed points from 0.65 K (³He vapor pressure) to 1357.77 K (the freezing point of copper), ensuring global consistency through reproducible phase transitions of pure substances. Key points include the triple points of equilibrium hydrogen (13.8033 K) and water (273.16 K), the melting point of gallium (302.9146 K), and the freezing points of indium (429.7485 K), tin (505.078 K), zinc (692.677 K), aluminum (933.473 K), silver (1234.93 K), and copper (1357.77 K), with temperatures between them interpolated via resistance thermometers or radiation laws in defined subranges. This scale, adopted by the International Committee for Weights and Measures, underpins calibrations for all modern sensors, with uncertainties below 0.001 K at many points.[81][82]

| Substance | Temperature (K) | Type |
|---|---|---|
| ³He | 0.65 | Vapor pressure |
| e-H₂ | 13.8033 | Triple point |
| Ne | 24.5561 | Triple point |
| O₂ | 54.3584 | Triple point |
| Ar | 83.8058 | Triple point |
| Hg | 234.3156 | Triple point |
| H₂O | 273.16 | Triple point |
| Ga | 302.9146 | Melting point |
| In | 429.7485 | Freezing point |
| Sn | 505.078 | Freezing point |
| Zn | 692.677 | Freezing point |
| Al | 933.473 | Freezing point |
| Ag | 1234.93 | Freezing point |
| Au | 1337.33 | Freezing point |
| Cu | 1357.77 | Freezing point |
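As referenced above, Wien's displacement law gives the peak emission wavelength of a blackbody at a given temperature, which is the basis of spectral pyrometry. The sketch below evaluates \lambda_{\max} for a few of the ITS-90 fixed-point temperatures listed in the table; the choice of points is purely illustrative.

```python
# Minimal sketch: Wien's displacement law, lambda_max * T = 2897.77 um*K,
# evaluated at a few ITS-90 fixed-point temperatures from the table above.

WIEN_CONSTANT_UM_K = 2897.77   # Wien displacement constant, um*K

def peak_wavelength_um(temperature_k: float) -> float:
    """Peak blackbody emission wavelength in micrometres."""
    return WIEN_CONSTANT_UM_K / temperature_k

if __name__ == "__main__":
    fixed_points = {          # substance: temperature in K (from the ITS-90 table)
        "H2O triple point": 273.16,
        "Zn freezing point": 692.677,
        "Ag freezing point": 1234.93,
        "Cu freezing point": 1357.77,
    }
    for name, t in fixed_points.items():
        print(f"{name:18s} T = {t:8.3f} K  ->  lambda_max = {peak_wavelength_um(t):6.2f} um")
```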