Boltzmann constant
The Boltzmann constant, denoted as k or k_B, is a fundamental physical constant that relates the average kinetic energy of particles in a gas to the absolute temperature of the gas, serving as a bridge between microscopic statistical descriptions and macroscopic thermodynamic properties.[1] In the International System of Units (SI), it is defined exactly as 1.380649 \times 10^{-23} joules per kelvin (J/K).[1] Named after the Austrian physicist Ludwig Boltzmann (1844–1906), the constant emerged from his pioneering work in statistical mechanics during the late 19th century, where it quantified the connection between entropy and the number of microscopic configurations of a system.[2] Boltzmann developed the relation between entropy and probability in his 1877 work, expressing it as S \propto \ln W, where W is the multiplicity of microstates, thereby providing a probabilistic foundation for the second law of thermodynamics. The constant k itself was introduced by Max Planck in 1900, who named it in Boltzmann's honor. The constant's significance extends across physics: in the ideal gas law (PV = NkT, where N is the number of particles and T is temperature), it links pressure, volume, and temperature at the molecular level; in statistical mechanics, it scales the energy distribution in systems such as the Maxwell-Boltzmann distribution; and in quantum statistics, it appears in the Fermi-Dirac and Bose-Einstein formulations.[1] Its exact value was fixed in the 2019 SI redefinition, which redefined the kelvin in terms of k rather than the triple point of water, enabling more precise and universal temperature measurements independent of material artifacts.[1] This redefinition, based on advanced techniques such as Johnson noise thermometry, underscores k's role as an invariant cornerstone of modern metrology.[1]
Definition and Value
Physical Significance
The Boltzmann constant, denoted as k or k_B, is the fundamental proportionality factor that relates the average thermal energy of particles in a system to the absolute temperature, enabling the direct conversion between thermal energy scales (such as [kT](/page/KT)) and mechanical or electrical energy units.[1] This scaling allows physicists to quantify the energy associated with temperature in microscopic processes, where T is the thermodynamic temperature in kelvins.[1] As a universal bridge between the macroscopic laws of phenomenological thermodynamics and the microscopic probabilities of statistical mechanics, the Boltzmann constant links observable bulk properties—like pressure and volume in gases—to the random motions and configurations of individual particles in thermal equilibrium.[1] Its applicability spans diverse systems, from ideal gases to complex materials, emphasizing the shared statistical foundations of thermal phenomena across physics.[1] The dimensions of the Boltzmann constant are those of energy per unit temperature, expressed as joules per kelvin (J/K) or, in base SI units, kg·m²·s⁻²·K⁻¹.[3] The symbol k_B, with the subscript B, distinguishes it from other constants denoted by k, such as the Coulomb constant k_e = 1/(4\pi\epsilon_0). Named after Austrian physicist Ludwig Boltzmann (1844–1906) for his foundational work in statistical mechanics, the constant underscores the probabilistic interpretation of thermodynamic quantities.[1]
Numerical Value
The Boltzmann constant k is defined exactly as k = 1.380649 \times 10^{-23} J/K following the 2019 revision of the International System of Units (SI), which fixed its numerical value to define the kelvin independently of experimental measurements.[4] This exactness eliminates any uncertainty in k itself, thereby shifting metrological efforts toward refining measurements of other defining constants, such as the Planck constant h.[5] Prior to the 2019 redefinition, the Committee on Data for Science and Technology (CODATA) recommended a value of k = 1.38064852(79) \times 10^{-23} J/K based on the 2014 adjustment, with a relative standard uncertainty of 5.7 \times 10^{-7}.[6] The exact value of k facilitates conversions between energy and temperature scales in various units, essential for interdisciplinary applications. The table below lists selected common conversions derived from the fixed SI value.

| Unit | Value of k |
|---|---|
| J/K (SI) | 1.380649 \times 10^{-23} |
| eV/K | 8.617333262145 \times 10^{-5} |
| cal/K (thermochemical, 1 cal = 4.184 J) | 3.29983 \times 10^{-24} |
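These conversions follow directly from the exact SI value and the joule equivalents of the target units. A minimal sketch in Python (variable names are illustrative; the thermochemical calorie, 1 cal = 4.184 J, is assumed):

```python
import math

K_B_J_PER_K = 1.380649e-23   # Boltzmann constant, exact SI value (J/K)
E_CHARGE = 1.602176634e-19   # elementary charge, exact SI value (C)
CAL = 4.184                  # thermochemical calorie in joules (exact)

# Divide by the unit's size in joules to re-express k in that unit.
k_eV_per_K = K_B_J_PER_K / E_CHARGE    # ~8.6173e-5 eV/K
k_cal_per_K = K_B_J_PER_K / CAL        # ~3.2998e-24 cal/K

assert math.isclose(k_eV_per_K, 8.617333262e-5, rel_tol=1e-9)
```

Because all three constants are exact by definition, the eV/K entry in the table above is itself exact (though non-terminating in decimal form).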
Roles in Statistical Mechanics
Equipartition of Energy
The equipartition theorem states that, in a classical system at thermal equilibrium, each quadratic degree of freedom contributes an average energy of \frac{1}{2} k [T](/page/Temperature) to the total energy, where k is the Boltzmann constant and T is the absolute temperature.[8] This theorem provides a fundamental link between microscopic energy distribution and macroscopic temperature in statistical mechanics.[9] For a system possessing f independent quadratic degrees of freedom, the total average energy per particle is given by \langle E \rangle = \frac{f}{2} k T. [8] For instance, the translational motion of a monatomic gas molecule has three quadratic degrees of freedom (one for each Cartesian momentum component), yielding \langle E \rangle = \frac{3}{2} k T for the average translational kinetic energy.[10] The theorem derives from Maxwell-Boltzmann statistics, where the average value of an energy term is obtained by integrating over phase space with the Boltzmann weight e^{-E / k T}. Consider a quadratic energy contribution \epsilon(p_i) = b p_i^2 depending on a single momentum coordinate p_i, separable from the rest of the system's energy E'. The average \overline{\epsilon_i} is then \overline{\epsilon_i} = \frac{\int_{-\infty}^{\infty} \epsilon(p_i) \, e^{-\beta \epsilon(p_i)} \, dp_i}{\int_{-\infty}^{\infty} e^{-\beta \epsilon(p_i)} \, dp_i}, [8] with \beta = 1 / k T. 
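This phase-space average can be checked numerically before the integrals are carried out analytically. The sketch below evaluates both integrals with a simple Riemann sum for an arbitrary illustrative coefficient b (the function and parameter values are not from the source):

```python
import math

def avg_quadratic_energy(b, kT, n=200_000):
    """Average eps(p) = b*p^2 under the Boltzmann weight exp(-eps/kT)."""
    # The integrand is negligible beyond a few thermal widths sqrt(kT/b).
    p_max = 10.0 * math.sqrt(kT / b)
    dp = 2.0 * p_max / n
    num = den = 0.0
    for i in range(n + 1):
        p = -p_max + i * dp
        w = math.exp(-b * p * p / kT)
        num += b * p * p * w * dp   # numerator: energy times weight
        den += w * dp               # denominator: weight (normalization)
    return num / den

kT = 4.141947e-21                   # k * 300 K, in joules
avg = avg_quadratic_energy(b=1.0e18, kT=kT)
assert math.isclose(avg, 0.5 * kT, rel_tol=1e-4)  # equipartition: kT/2
```

The result is independent of b, as the analytic evaluation in the text confirms.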
Evaluating the Gaussian integrals yields \overline{\epsilon_i} = \frac{1}{2} k T, establishing the \frac{1}{2} k T contribution per quadratic term.[8] This result extends to position-dependent quadratic terms by analogy.[11] In the kinetic theory of ideal gases, the three translational degrees of freedom lead to an average kinetic energy of \frac{3}{2} k T per molecule, independent of molecular mass or interactions in the dilute limit.[10] For a classical harmonic oscillator, the energy includes both kinetic (\frac{1}{2} m v^2) and potential (\frac{1}{2} \kappa x^2, where \kappa denotes the spring constant rather than the Boltzmann constant) quadratic terms, resulting in a total average energy of k T per mode.[11] The equipartition theorem applies strictly in the classical regime, where the thermal energy k T greatly exceeds the quantum energy level spacing, allowing continuous phase space sampling.[12] It fails at quantum scales, particularly at low temperatures, where modes like vibrational oscillators in solids retain zero-point energy and do not achieve full classical excitation, leading to heat capacity deficits.[13]
Boltzmann Factors
In statistical mechanics, the Boltzmann factor quantifies the relative likelihood of a system occupying different energy states in thermal equilibrium within the canonical ensemble. For two states with energies E_i and E_j, the ratio of their probabilities is given by \frac{P_i}{P_j} = \exp\left( -\frac{E_i - E_j}{kT} \right), where k is the Boltzmann constant and T is the absolute temperature.[14] This exponential form arises because higher-energy states are less probable, with the factor kT determining the characteristic energy scale set by thermal fluctuations.[15] For a system with discrete energy levels, the full probability distribution incorporates degeneracy, the number of microstates g(E) associated with energy E. The probability P(E) of finding the system at energy E is then P(E) \propto g(E) \exp\left( -\frac{E}{kT} \right), normalized such that the sum over all states equals unity.[16] This distribution emerges from the principle of maximum entropy, where the most probable configuration maximizes the Shannon entropy subject to constraints on average energy and particle number, yielding the exponential weighting as the unique solution.[17] Alternatively, it can be derived by projecting the uniform distribution of the microcanonical ensemble—valid for an isolated system of fixed energy—onto a smaller subsystem in contact with a large heat reservoir; the probability then becomes proportional to the reservoir's density of states at the complementary energy, approximating the exponential form in the thermodynamic limit.[18] The Boltzmann factor finds direct application in determining the population ratios of excited states in atoms and molecules at thermal equilibrium. 
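This normalization is easy to carry out explicitly for a small set of discrete levels. A minimal sketch, with hypothetical energies and degeneracies chosen purely for illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_probs(levels, T):
    """Normalized occupation probabilities for (energy_J, degeneracy) pairs."""
    kT = K_B * T
    weights = [g * math.exp(-E / kT) for E, g in levels]
    Z = sum(weights)              # partition function (normalization)
    return [w / Z for w in weights]

kT300 = K_B * 300.0
# Hypothetical three-level system with spacings of order kT at 300 K
levels = [(0.0, 1), (1.0 * kT300, 1), (2.0 * kT300, 2)]
p = boltzmann_probs(levels, T=300.0)
assert math.isclose(sum(p), 1.0, rel_tol=1e-12)
assert p[0] > p[1]  # per degeneracy, higher-energy states are less occupied
```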
For instance, in a two-level system like the ground and first excited electronic states of an atom, the ratio of populations N_{\rm excited}/N_{\rm ground} = (g_{\rm excited}/g_{\rm ground}) \exp(-\Delta E / kT), where \Delta E is the energy difference; at room temperature, states separated by several kT are sparsely populated, explaining the dominance of ground states in dilute gases.[19] Similarly, in molecular spectroscopy, vibrational or rotational excited states follow this distribution, enabling temperature measurements from spectral line intensities. In the context of ideal gases, the Boltzmann factor underpins the Maxwell-Boltzmann speed distribution. For non-interacting particles, the probability density f(v) for speeds v is obtained by considering the phase space volume and applying the factor to the kinetic energy E = \frac{1}{2} m v^2, yielding f(v) \, dv \propto 4\pi v^2 \exp\left( -\frac{m v^2}{2 kT} \right) \, dv, which describes the distribution of molecular speeds up to a normalization constant fixed by requiring the total probability to equal unity.[20] This probabilistic weighting highlights how the Boltzmann constant establishes the thermal energy scale: at T = 300 K, kT \approx 4.14 \times 10^{-21} J, of the order of weak intermolecular binding energies but much smaller than electronic transition energies, thus dictating the extent of thermal excitation.[1] The averages derived from this distribution, such as mean kinetic energies, connect to broader principles like equipartition in quadratic systems.[21]
Statistical Definition of Entropy
In statistical mechanics, the Boltzmann constant serves as the proportionality factor that connects the microscopic multiplicity of states to the macroscopic entropy, quantifying the degree of disorder or uncertainty in a physical system. The seminal expression for this is the Boltzmann entropy formula
S = k \ln W,
where S denotes the entropy, k is the Boltzmann constant, and W is the number of accessible microstates corresponding to a given macrostate.[22] This formulation establishes entropy as a logarithmic measure of probabilistic possibilities, with k ensuring the result has dimensions of energy per temperature (joules per kelvin).[23] A broader derivation emerges from the canonical ensemble in equilibrium statistical mechanics, yielding the Gibbs entropy formula
S = -k \sum_i p_i \ln p_i,
where p_i is the probability of occupation of microstate i.[23] When the system has W equally likely microstates, each with p_i = 1/W, the sum simplifies to S = k \ln W, recovering the original form.[23] This probabilistic expression closely resembles the Shannon entropy from information theory,
H = -\sum_i p_i \log_2 p_i,
which measures uncertainty in bits; the statistical mechanical version employs the natural logarithm and scales by k to confer thermodynamic units, thereby linking abstract information content to physical energy scales.[24][25] For a monatomic ideal gas, the Sackur-Tetrode equation illustrates k's scaling role:
S \approx Nk \left[ \ln \left( \frac{V}{N} \right) + \frac{3}{2} \ln T + c \right],
where N is the particle number, V the volume, T the temperature, and c a constant incorporating mass and quantum effects; here, k multiplies the logarithmic multiplicity terms to produce an extensive entropy proportional to system size.[26] In a two-state paramagnet with N spins, each able to align up or down in a magnetic field, the multiplicity for a configuration with N_+ up-spins is W = \binom{N}{N_+}, so S = k \ln W, approaching a maximum of S \approx Nk \ln 2 at equal populations (exact in the large-N limit) and underscoring k's conversion of combinatorial growth to thermal disorder.[27] The logarithmic form, scaled by k, guarantees entropy additivity for composite systems of independent subsystems, as W_\text{total} = W_1 W_2 implies S_\text{total} = S_1 + S_2, aligning statistical predictions with the extensive nature of thermodynamic entropy.[28] It also resolves the Gibbs paradox in gas mixing, where treating particles as indistinguishable avoids unphysical entropy jumps, with k preserving the correct scaling for identical versus distinct components.[28]
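The two-state paramagnet example lends itself to a quick numerical check. The sketch below computes S = k ln W from the binomial multiplicity via the log-gamma function (to avoid overflowing the factorials) and verifies the approach to Nk ln 2; the function name is illustrative:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def paramagnet_entropy(N, N_up):
    """S = k ln C(N, N_up) for N two-state spins with N_up aligned up."""
    ln_W = (math.lgamma(N + 1) - math.lgamma(N_up + 1)
            - math.lgamma(N - N_up + 1))
    return K_B * ln_W

N = 1_000_000
S_max = paramagnet_entropy(N, N // 2)    # equal populations
ratio = S_max / (N * K_B * math.log(2))  # compare with Nk ln 2
assert 0.99 < ratio < 1.0                # slightly below, approaching 1
```

Additivity can be checked the same way: the multiplicities of independent subsystems multiply, so their logarithms, and hence their entropies, add.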
Applications in Physics and Engineering
Ideal Gas Law
The Boltzmann constant k plays a central role in the microscopic interpretation of the ideal gas law, bridging the kinetic behavior of individual particles to macroscopic thermodynamic properties. In kinetic theory, the pressure P exerted by an ideal gas arises from the momentum flux of particles colliding with the container walls. Considering a gas of N particles, each with mass m, the pressure is derived as P = \frac{1}{3} \frac{N m \langle v^2 \rangle}{V}, where V is the volume and \langle v^2 \rangle is the mean square speed.[29] The equipartition theorem assigns an average translational kinetic energy of \frac{3}{2} kT per particle, leading to \frac{1}{2} m \langle v^2 \rangle = \frac{3}{2} kT. Substituting this relation yields the microscopic form of the ideal gas law: PV = NkT.[29] This microscopic equation connects directly to the empirical molar form PV = nRT, where n is the number of moles and R is the molar gas constant. Here, N = n N_A, with N_A being Avogadro's number, so R = N_A k. Historically, the value of k emerged from dividing the measured R by N_A, providing a per-particle energy scale for temperature. Following the 2019 redefinition of the SI units, k was fixed exactly at 1.380649 \times 10^{-23} J/K, rendering both N_A and R exact as well.[30] The ideal gas law with k underpins classical gas behaviors, such as Boyle's law (PV = constant at fixed T), which follows from P \propto 1/V at constant NkT, and Charles's law (V \propto T at fixed P), reflecting thermal expansion tied to increasing kinetic energies \propto kT. For real gases, the van der Waals equation (P + \frac{a n^2}{V^2})(V - n b) = n R T introduces corrections for intermolecular forces (a) and molecular volume (b), but reduces to the ideal form PV = n R T (or PV = N k T) in the limit of low density where these effects vanish. Experimental verifications of this framework link macroscopic observables back to kT energy scales.
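The agreement between the microscopic form PV = NkT and the molar form PV = nRT, with R = N_A k, can be verified directly; the thermodynamic state chosen below is illustrative:

```python
K_B = 1.380649e-23    # Boltzmann constant, J/K (exact)
N_A = 6.02214076e23   # Avogadro constant, 1/mol (exact)
R = N_A * K_B         # molar gas constant, J/(mol K)

# One mole of ideal gas near standard conditions (illustrative values)
n, T, V = 1.0, 273.15, 0.0224          # mol, K, m^3
N = n * N_A                            # number of particles
P_micro = N * K_B * T / V              # PV = N k T
P_molar = n * R * T / V                # PV = n R T

assert abs(P_micro - P_molar) / P_micro < 1e-12
assert abs(R - 8.314462618) < 1e-8     # matches the known value of R
```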
For instance, the speed of sound in an ideal monatomic gas is v = \sqrt{\frac{\gamma k T}{m}}, where \gamma = \frac{5}{3} is the adiabatic index, directly incorporating the thermal kinetic energy per particle. Similarly, the molar specific heat at constant volume for a monatomic gas, C_V = \frac{3}{2} R = \frac{3}{2} N_A k, confirms the three translational degrees of freedom each contributing \frac{1}{2} kT per particle.
Thermal Voltage
In semiconductor physics, the thermal voltage V_T is a key parameter defined as V_T = \frac{k T}{q}, where k is the Boltzmann constant, T is the absolute temperature in kelvin, and q is the elementary charge, exactly 1.602176634 × 10^{-19} C (since the 2019 SI redefinition).[1] This quantity represents the thermal energy scale in electron volts, bridging temperature to electrical potential across junctions. At room temperature (300 K), V_T evaluates to approximately 25.85 mV, providing a characteristic voltage for charge carrier dynamics in devices.[31] The thermal voltage appears prominently in the Shockley diode equation, which describes the current through a p-n junction:
I = I_s \left( \exp\left( \frac{V}{V_T} \right) - 1 \right),
where I is the diode current, I_s is the reverse saturation current, and V is the applied forward bias voltage.[32] This exponential form arises from the Boltzmann factors that dictate the concentration of minority carriers injected across the junction, with V_T setting the steepness of the current rise. In bipolar junction transistors (BJTs), V_T similarly governs the base-emitter junction, where the collector current follows I_C = I_S \exp\left( \frac{V_{BE}}{V_T} \right), causing the base-emitter voltage to scale logarithmically with current and linearly with temperature. These relations enable precise modeling of p-n junctions in diodes and transistors, essential for amplification and switching in electronic circuits. The temperature dependence of V_T, which increases proportionally with T, significantly impacts device performance; for instance, higher temperatures reduce the forward voltage drop across a diode at fixed current, with a temperature coefficient of roughly −2 mV/K, due to the enhanced thermal generation of carriers.[33] This effect is measured via the forward voltage drop technique, where the voltage across a biased diode or transistor junction is monitored as a proxy for temperature, often using integrated sensors in integrated circuits. In electronics, the same thermal energy scale kT also underlies thermal noise, known as Johnson-Nyquist noise, where the root-mean-square noise voltage across a resistor R in bandwidth \Delta f scales as \sqrt{4 k T R \Delta f}, limiting signal integrity in low-noise amplifiers and sensors.[34] In degenerate semiconductors, where the Fermi level lies within the conduction or valence band, the thermal voltage V_T (or equivalently [kT](/page/KT)) establishes the energy scale relative to the Fermi energy E_F; when E_F \gg kT, Fermi-Dirac statistics replace classical Boltzmann approximations, altering carrier concentrations and transport properties in heavily doped materials used in high-speed devices.[35]
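These relations are straightforward to evaluate numerically. The sketch below computes the thermal voltage, an ideal Shockley diode current (with a hypothetical saturation current I_s), and the Johnson-Nyquist noise voltage for illustrative component values:

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K (exact)
Q_E = 1.602176634e-19  # elementary charge, C (exact)

def thermal_voltage(T):
    """V_T = kT/q in volts."""
    return K_B * T / Q_E

def shockley_current(V, T, I_s=1e-12):
    """Ideal diode current; I_s (A) is a hypothetical saturation current."""
    return I_s * (math.exp(V / thermal_voltage(T)) - 1.0)

V_T = thermal_voltage(300.0)
assert abs(V_T - 0.02585) < 1e-4            # ~25.85 mV at 300 K
assert shockley_current(0.0, 300.0) == 0.0  # zero bias, zero net current

# RMS Johnson-Nyquist noise for R = 1 kOhm over a 1 MHz bandwidth at 300 K
v_noise = math.sqrt(4.0 * K_B * 300.0 * 1e3 * 1e6)
assert 3e-6 < v_noise < 5e-6                # a few microvolts
```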
Historical Development
Origins and Naming
The development of the Boltzmann constant emerged in the mid-19th century amid the rise of kinetic theory of gases, which sought to explain macroscopic thermodynamic properties through the microscopic behavior of molecules. In 1860, James Clerk Maxwell published his seminal paper "Illustrations of the dynamical theory of gases," deriving the distribution of molecular velocities in an ideal gas based on assumptions of random collisions and elastic interactions. This distribution implicitly incorporated a proportionality constant relating the average kinetic energy per molecule to temperature, equivalent to what would later be identified as the Boltzmann constant k, appearing as k = R / N where R is the gas constant and N is the number of molecules (or Avogadro's number N_A). Maxwell's work laid the groundwork for statistical interpretations of thermal equilibrium, though the constant itself remained unnamed and not explicitly isolated at the time.[36] Ludwig Boltzmann advanced this framework significantly from the mid-1860s onward through his investigations into the statistical mechanics of gases.
In his 1866 paper "Über die mechanische Bedeutung des zweiten Hauptsatzes der Wärmetheorie," Boltzmann explored the equilibrium distribution of molecular energies, building on Maxwell's velocity distribution and introducing probabilistic considerations for energy sharing among particles, which foreshadowed the equipartition theorem.[37] His most influential contribution came in 1877 with the paper "Über die Beziehung zwischen dem zweiten Hauptsatz der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung," where he formulated the entropy of a system in terms of its microscopic states as (in modern notation) S = k \ln W, with W representing the number of possible microstates and k the proportionality constant linking thermodynamic entropy to probability.[22] This expression, introduced in this work—which builds on Boltzmann's earlier H-theorem (1872) by providing a probabilistic foundation for the approach to equilibrium—explicitly introduced k as a fundamental bridge between macroscopic and microscopic scales, addressing objections to a mechanical account of irreversibility, such as Loschmidt's reversibility paradox, through statistical averaging.[2] Early experimental validation of the constant's role came from studies connecting gas laws and specific heats to molecular scales.
By the early 20th century, Jean Baptiste Perrin's 1908 experiments on Brownian motion provided indirect confirmation of k by determining Avogadro's number N_A through observations of colloidal particle displacements, yielding k = R / N_A consistent with theoretical predictions from kinetic theory.[38] These results, detailed in Perrin's work "Mouvement brownien et réalité moléculaire," supported the atomic hypothesis and quantified k's value via sedimentation equilibrium and diffusion measurements.[38] The constant was formally named the "Boltzmann constant" in the early 20th century in recognition of Ludwig Boltzmann's pioneering statistical formulations, with Max Planck noting its common attribution to Boltzmann by 1920 in his Nobel lecture.[1] To avoid ambiguity with other physical constants like the wave number k (in k = 2\pi / \lambda) or the Coulomb constant k_e = 1/(4\pi\epsilon_0), it is conventionally denoted with a subscript as k_B.[1]
Measurement and 2019 Redefinition
The experimental determination of the Boltzmann constant (k) has evolved significantly over the past century, driven by advances in precision metrology to support its role in linking thermodynamic temperature to microscopic energy scales. In the 1920s, initial estimates were derived from X-ray scattering and diffraction experiments, which enabled measurements of the Avogadro constant (N_A) through crystal lattice spacings and densities; combined with the known molar gas constant (R), these yielded early values of k = R / N_A with relative uncertainties on the order of 0.1% or larger.[39] By the 1970s, CODATA adjustments refined k to 1.380 \times 10^{-23} J/K with a relative uncertainty of about 8 \times 10^{-5}, incorporating data from speed-of-sound measurements and electrochemical cells.[40] Further progress in the 2010s achieved precisions to parts per billion, facilitated by linkages to other fundamental constants via watt balance experiments (for the Planck constant h) and silicon sphere volumetry (for N_A). The 2014 CODATA recommended value was k = 1.38064852(79) \times 10^{-23} J/K, with a relative standard uncertainty of 5.7 \times 10^{-7}, reflecting contributions from multiple independent methods.[40] Key experimental approaches included acoustic gas thermometry, which measures the speed of sound in gases like argon at known pressures and volumes to relate macroscopic thermodynamic properties to k; Johnson noise thermometry, which quantifies thermal voltage fluctuations across a resistor proportional to kT (where T is temperature); and Doppler broadening spectroscopy, analyzing the thermal broadening of spectral lines in gases such as helium to infer k from line widths.
These methods provided consistent results with uncertainties below 1 part per million, essential for the impending SI revision.[1][41] The 2019 revision of the International System of Units (SI), effective May 20, 2019, fixed k exactly at 1.380649 \times 10^{-23} J/K, alongside the Planck constant (h), elementary charge (e), and Avogadro constant (N_A). This redefinition anchors the kelvin to a fundamental constant, such that the kelvin, symbol K, is the SI unit of thermodynamic temperature defined by taking the fixed numerical value of k to be 1.380649 \times 10^{-23} when expressed in the unit J/K = kg m^2 s^{-2} K^{-1}, where the kilogram, meter, and second are defined in terms of h, the speed of light c, and the hyperfine transition frequency \Delta \nu_{\rm Cs}. The triple point of water, previously the basis for the kelvin, now serves as a secondary reference point near 273.16 K, ensuring continuity with prior scales.[42][5] Challenges in these measurements include systematic errors from thermophysical properties (e.g., gas impurities or virial coefficients in acoustic methods) and inter-method discrepancies, which required rigorous uncertainty budgeting to achieve pre-redefinition consensus. Post-redefinition, calibration standards like the International Temperature Scale of 1990 (ITS-90) must be traceable to k via primary thermometry, potentially introducing small non-uniqueness in fixed-point realizations (e.g., up to ±1.23 mK deviations in mercury triple points), necessitating updates to practical scales.[1][43] In the 2020s, ongoing experiments focus on consistency checks across methods, including refined Johnson noise thermometry targeting 0.1 ppm uncertainties below 25 K via Coulomb blockade effects, and high-temperature acoustic gas thermometry up to 3000 K using carbon eutectics for traceability above 1300 K.
These efforts aim to validate the fixed value and support advanced applications in quantum thermometry and cryogenics.[43][41]
Units and Dimensionless Quantities
SI and Conventional Units
The Boltzmann constant k is exactly defined in the International System of Units (SI) as k = 1.380649 \times 10^{-23} J/K, where the joule (J) is the unit of energy and the kelvin (K) is the unit of temperature.[3] This value establishes the scale between thermal energy and temperature in SI, with the equivalent expression in base SI units being k = 1.380649 \times 10^{-23} kg m² s⁻² K⁻¹.[3] In conventional unit systems, the Boltzmann constant takes on corresponding values to maintain dimensional consistency, where its dimensions are always energy per unit temperature ([M] [L]² [T]⁻² [Θ]⁻¹, with [M] mass, [L] length, [T] time, and [Θ] temperature). In the centimeter-gram-second (CGS) system, k = 1.380649 \times 10^{-16} erg/K, since 1 J = 10⁷ erg.[3] A value frequently used in semiconductor physics and electron spectroscopy is k = 8.617333262145 \times 10^{-5} eV/K.[7] The Boltzmann constant relates to the molar gas constant R via R = N_A k, where N_A is Avogadro's constant, exactly defined as N_A = 6.02214076 \times 10^{23} mol⁻¹ following the 2019 SI redefinition.[44] This yields the exact value R = 8.31446261815324 J mol⁻¹ K⁻¹ (commonly rounded to 8.314462618), which scales k to molar quantities in thermodynamics and chemistry.[45] The following table summarizes the Boltzmann constant in atomic and spectroscopic units, derived from its SI value and standard conversions for precision in quantum chemistry and molecular spectroscopy:

| Unit | Value | Reference |
|---|---|---|
| hartree/K (E_h/K) | 3.1668114 \times 10^{-6} E_h K⁻¹ | https://physics.nist.gov/cgi-bin/cuu/Value?k, https://physics.nist.gov/cgi-bin/cuu/Value?hrj |
| cm⁻¹/K | 0.6950348 cm⁻¹ K⁻¹ | https://physics.nist.gov/cgi-bin/cuu/Value?kshcminv |
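Both table entries can be reproduced from the defining constants; only the hartree energy carries a (tiny) experimental uncertainty. A minimal sketch, using CODATA values:

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K (exact)
H = 6.62607015e-34          # Planck constant, J s (exact)
C = 2.99792458e8            # speed of light, m/s (exact)
E_H = 4.3597447222071e-18   # hartree energy, J (CODATA 2018, not exact)

k_hartree_per_K = K_B / E_H           # k expressed in E_h/K
k_cm1_per_K = K_B / (H * C) / 100.0   # k/(hc), converted from m^-1 to cm^-1

assert math.isclose(k_hartree_per_K, 3.1668114e-6, rel_tol=1e-6)
assert math.isclose(k_cm1_per_K, 0.6950348, rel_tol=1e-6)
```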