Physical constant
A physical constant is a fundamental quantity in physics that remains invariant across space and time, appearing universally in the basic equations and theories that describe natural phenomena.[1] These constants serve as unchanging reference points for scientific measurements and theoretical predictions, enabling the consistency and precision required in fields ranging from quantum mechanics to cosmology.[2] Notable examples include the speed of light in vacuum (c = 299792458 m/s), which defines the maximum speed of information propagation and underpins special relativity; Planck's constant (h = 6.62607015 × 10⁻³⁴ J s), linking energy and frequency in quantum theory; the elementary charge (e = 1.602176634 × 10⁻¹⁹ C), the basic unit of electric charge; and the fine-structure constant (α ≈ 1/137.035999), which characterizes the strength of electromagnetic interactions.[3] Their values are determined through rigorous experimental measurements and are periodically refined by international bodies like the CODATA Task Group on Fundamental Physical Constants, with the most recent adjustment based on data through December 31, 2022, ensuring self-consistent sets for global use in science and metrology.[4] These constants not only test the validity of physical theories but also play a critical role in defining the International System of Units (SI), such as fixing h and c to establish the kilogram and meter.[5]
Definition and Characteristics
Core Definition
A physical constant is a physical quantity that remains invariant regardless of location, time, or surrounding conditions, serving as a fundamental parameter in the core equations of physics. These quantities, such as the speed of light or Planck's constant, underpin theories from electromagnetism to quantum mechanics, enabling precise predictions about natural phenomena.[1] Unlike physical variables, which fluctuate based on specific systems or measurements—such as position or velocity in a given experiment—physical constants maintain fixed values across all contexts. Parameters, by contrast, often represent adjustable or model-dependent values tailored to particular scenarios, lacking the universal scope of true constants; for example, a mass in a classical mechanics problem might function as a parameter rather than an invariant like the gravitational constant.[6][1] The concept of physical constants originated in classical mechanics with Newton's introduction of the gravitational constant G in 1687 and evolved significantly in the late 19th and early 20th centuries with the formalization of electromagnetic theory (Maxwell's equations highlighting the speed of light c as an invariant) and quantum theory (Planck's constant h in 1900). This progression marked a shift toward recognizing these quantities as unexplained building blocks of physical laws, essential for bridging classical and relativistic frameworks.[1]
Key Properties
Physical constants exhibit universality, meaning their values remain identical across all regions of the observable universe, independent of location or epoch. This property is foundational to the consistency of physical laws, as indicated by tight constraints from spectroscopic observations of distant quasars and the cosmic microwave background radiation, which show no significant variation in constants like the fine-structure constant over billions of years (Δα/α < 10^{-5}).[7][8] These constants also demonstrate invariance under coordinate transformations, such as Lorentz boosts in special relativity, ensuring that physical laws maintain their form regardless of the observer's frame of reference. For instance, the speed of light c serves as the invariant scale that defines the structure of spacetime in relativistic theories, remaining unchanged under such transformations. This invariance extends to other constants like Planck's constant h, which preserves quantum relations across inertial frames.[1] Physical constants play a crucial role in scaling laws and symmetries, dictating the dimensional structure and symmetry principles of fundamental equations. In quantum field theory, constants such as h and c set the scales for renormalization group flows, which describe how coupling strengths evolve under changes in energy scale while preserving gauge symmetries. Similarly, in general relativity, the gravitational constant G governs scaling in virial theorems for self-gravitating systems, ensuring symmetry under diffeomorphisms. These roles ensure that theoretical frameworks yield consistent predictions across scales, from subatomic particles to cosmological structures. Conceptually, physical constants bridge theoretical predictions with experimental observations by appearing in dimensionless combinations that parameterize the strength of interactions and test the validity of models. 
The fine-structure constant \alpha \approx \frac{1}{137}, defined as \alpha = \frac{e^2}{4\pi \epsilon_0 \hbar c}, exemplifies this, quantifying electromagnetic coupling in a frame-independent manner and enabling precise comparisons between quantum electrodynamics calculations and atomic spectra measurements. Such combinations highlight the indispensability of constants in unifying disparate phenomena and refining our understanding of nature's fundamental scales.[1]
Classification of Constants
Dimensional Constants
Dimensional constants are fundamental physical quantities that carry units of measurement, reflecting their role in connecting abstract laws to observable scales in the universe. Unlike dimensionless ratios, these constants incorporate dimensions such as mass [M], length [L], time [T], and temperature [Θ], ensuring that physical equations remain dimensionally homogeneous. For instance, the speed of light in vacuum, c, has dimensions [L T^{-1}] and serves as the universal speed limit for information propagation in special relativity. Similarly, the Newtonian gravitational constant G possesses dimensions [M^{-1} L^3 T^{-2}], quantifying the strength of gravitational attraction between masses. These examples illustrate how dimensional constants bridge theoretical principles with empirical measurements, as documented in the CODATA recommended values.[3] In dimensional analysis, these constants are essential for maintaining the consistency of physical equations, where every term must share the same dimensional structure. The Buckingham π theorem provides a formal framework for this, asserting that if a physical problem involves n variables with m fundamental dimensions, it can be reduced to n - m independent dimensionless π groups, allowing the derivation of scaling relations without solving the full equations. This theorem, originally formulated by Edgar Buckingham, enables scientists to identify key dependencies in complex systems, such as fluid dynamics or electromagnetism, by grouping dimensional constants with variables to form dimensionless combinations that govern universal behaviors. For example, in deriving the Reynolds number for fluid flow, dimensional quantities like density and viscosity combine with flow speed and a characteristic length to yield a scale-invariant parameter. The presence of dimensional constants profoundly influences the scalability of physical phenomena, as variations in their values would rescale fundamental lengths, times, and energies across the cosmos.
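The Buckingham π reduction sketched above can be made concrete with the classic Reynolds-number example: four dimensional quantities (n = 4) spanning three fundamental dimensions (m = 3) collapse into a single dimensionless group. The fluid values below are illustrative assumptions for water in a pipe, not figures from the text:

```python
# Reynolds number Re = rho * v * L / mu: four dimensional quantities
# reduce to one dimensionless pi group (n - m = 4 - 3 = 1), as the
# Buckingham pi theorem predicts.
rho = 998.0    # density, kg/m^3 (illustrative: water)
v = 2.0        # flow speed, m/s (illustrative)
L = 0.05       # pipe diameter, m (illustrative)
mu = 1.0e-3    # dynamic viscosity, Pa s (illustrative)

Re = rho * v * L / mu   # dimensionless: all units cancel
print(Re)               # ~1e5, well into the turbulent regime
```

Because Re is dimensionless, the same value characterizes any geometrically similar flow regardless of the unit system used to express the inputs.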
Hypothetical changes to these constants could, for instance, expand or contract atomic radii by altering the interplay with electromagnetic forces, setting the stage for how dimensionless parameters like the fine-structure constant dictate relative sizes. Such scalability underscores their dependence on chosen unit systems, where redefining units adjusts their numerical values while preserving physical predictions. Planck's constant h, with dimensions [M L^2 T^{-1}] (equivalent to energy times time, or action), quantifies the granularity of quantum processes, such as photon energy E = h \nu. The Boltzmann constant k_B, dimensioned as [M L^2 T^{-2} Θ^{-1}] (energy per temperature), links macroscopic thermodynamics to microscopic statistical mechanics, appearing in equations like the ideal gas law PV = N k_B T. Their exact or measured values, such as h = 6.62607015 \times 10^{-34} J s, anchor these scales in the SI system.[3]
Dimensionless Constants
Dimensionless physical constants arise as ratios of physical quantities sharing identical dimensions, resulting in pure numerical values independent of any chosen unit system.[1] A prominent example is the fine-structure constant, denoted α, which quantifies the strength of the electromagnetic interaction and is defined as α = e² / (4πε₀ ℏ c), where e is the elementary charge, ε₀ the vacuum permittivity, ℏ the reduced Planck constant, and c the speed of light; its measured value is approximately 7.2973525643 × 10^{-3}, or equivalently 1/α ≈ 137.035999177 (CODATA 2022).[9] Another key instance is the proton-to-electron mass ratio, μ = m_p / m_e, representing the ratio of the proton's rest mass to that of the electron, with a value of approximately 1836.15267343 (CODATA 2022).[10] These constants hold profound theoretical significance because they encapsulate the intrinsic properties of fundamental interactions without contamination from arbitrary unit choices, making them pivotal parameters in theoretical frameworks such as quantum field theory (QFT).[11] In QFT, dimensionless coupling constants like α govern the perturbative expansion and renormalization of interactions, ensuring the theory's consistency across energy scales by avoiding dimensional inconsistencies in Feynman diagrams.
For instance, the renormalizability of quantum electrodynamics relies on α being dimensionless, allowing ultraviolet divergences to be absorbed without altering the theory's predictive power.[12] Similarly, mass ratios like μ parameterize the structure of hadronic and atomic systems in the Standard Model, influencing phenomena from spectral lines to binding energies.[13] Deriving the specific numerical values of these constants from first principles remains a central challenge in theoretical physics, as current frameworks such as the Standard Model treat them as free parameters that must be determined empirically rather than predicted.[13] This empirical necessity underscores the incompleteness of existing theories, prompting ongoing research into unification schemes like grand unified theories, which aim to relate dimensionless constants but have yet to yield precise derivations. An illustrative case is the cosmological constant Λ, which possesses dimensions of inverse length squared in natural units; when rendered dimensionless via multiplication by the square of the Planck length (l_p² Λ), it assumes an extraordinarily small value on the order of 10^{-122}, highlighting the "cosmological constant problem" where quantum field theory predictions vastly exceed observations.[14] Such discrepancies emphasize the experimental input required to fix these values, with high-precision measurements from facilities like those contributing to CODATA adjustments providing the foundational data.[15]
Relation to Physical Units
Numerical Determination
The numerical values of physical constants are determined through meticulously designed experiments that exploit quantum and classical phenomena to achieve extreme precision. For the speed of light in vacuum, denoted c, laser interferometry has been a cornerstone method, involving the direct measurement of a laser's frequency f and wavelength \lambda to compute c = f \lambda. A pivotal experiment at the National Institute of Standards and Technology (NIST) in 1972 utilized a methane-stabilized helium-neon laser operating at 3.39 μm, yielding c = 299792458 m/s with a standard uncertainty of 4 m/s, representing an order-of-magnitude improvement in accuracy over prior optical methods.[16] This measurement demonstrated the feasibility of tying the meter to the speed of light, culminating in the 1983 redefinition of the meter such that c was fixed exactly at 299792458 m/s, eliminating measurement uncertainty for this constant. Similarly, the Planck constant h is measured via techniques that link mechanical and electrical quantities, such as the Kibble (watt) balance, which operates on the principle of equating mechanical power to electrical power using the Josephson effect for voltage and the quantum Hall effect for resistance. At NIST, the NIST-4 Kibble balance provided a key determination in 2016, measuring h = 6.62607004 \times 10^{-34} J s with a relative standard uncertainty of 34 parts per billion, contributing significantly to the pre-redefinition consensus value. Another approach for h involves cavity resonance in superconducting microwave cavities, where the resonant frequency of electromagnetic modes relates to quantized energy levels, though the Kibble method has dominated recent high-precision efforts due to its direct traceability to SI units.
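The frequency-wavelength product at the heart of the laser method can be checked numerically. The figures below are illustrative values close to those reported for the methane-stabilized laser line, not the exact published data:

```python
# c = f * lambda for the ~3.39 um methane-stabilized He-Ne laser line.
# Frequency and wavelength here are illustrative values close to the
# published 1972 figures, not the exact experimental data.
f = 88.376181627e12    # laser frequency, Hz
lam = 3.392231397e-6   # laser wavelength, m

c_measured = f * lam
print(c_measured)      # ~2.9979e8 m/s
```

The product reproduces the defined value of c to within a few metres per second, which is why fixing c and measuring frequency became the preferred route to realizing the metre.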
Following the 2019 SI redefinition, h was fixed exactly at 6.62607015 \times 10^{-34} J s, alongside the elementary charge e, Boltzmann constant k_B, and Avogadro constant N_A, shifting focus to refining other derived constants. The Committee on Data for Science and Technology (CODATA), under the International Science Council, plays a central role in synthesizing these experimental results into globally recommended values through a rigorous least-squares adjustment procedure. This process integrates hundreds of input measurements from international laboratories, resolving inconsistencies and ensuring self-consistency across the network of constants; the 2022 CODATA evaluation, for instance, adjusted 133 input data points to derive values for 79 constants, incorporating advancements up to December 2022.[17] Uncertainties in individual measurements are propagated via covariance matrices in this adjustment, where correlations between experiments (e.g., shared auxiliary constants like atomic masses) are accounted for, yielding final uncertainties that reflect the global data set's reliability rather than isolated errors—for example, the 2022 value of the gravitational constant G carries a relative uncertainty of about 22 parts per million due to propagated experimental variances.[4] Historical refinements illustrate the progressive tightening of these values through technological evolution. The speed of light, first approximately determined by Hippolyte Fizeau and Léon Foucault in the 1840s–1860s at around 298000–300000 km/s with uncertainties exceeding 0.1%, saw dramatic improvements with Michelson's rotating mirror apparatus in the early 20th century (reaching ~0.01% precision) and laser-based methods in the 1970s, ultimately rendering c exact in 1983.
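A toy version of the covariance-weighted combination used in such adjustments can be sketched as follows. The two input values and their covariance matrix are invented for illustration and are not real CODATA inputs:

```python
import numpy as np

# Toy least-squares adjustment: two correlated measurements of the same
# constant (here, G-like values) are combined with inverse-covariance
# weighting. Input values and covariances are invented for illustration.
x = np.array([6.6743e-11, 6.6741e-11])   # two hypothetical measurements
cov = np.array([[4e-30, 1e-30],          # variances on the diagonal,
                [1e-30, 9e-30]])         # a shared-systematic covariance off it

w = np.linalg.solve(cov, np.ones(2))     # unnormalized weights C^-1 * 1
w /= w.sum()                             # normalize so weights sum to 1
x_best = w @ x                           # adjusted (combined) value
sigma_best = (np.ones(2) @ np.linalg.solve(cov, np.ones(2))) ** -0.5
print(x_best, sigma_best)
```

The adjusted value lands between the inputs, pulled toward the more precise one, and its uncertainty reflects the full covariance structure rather than either input alone; the real CODATA adjustment does the same thing with hundreds of correlated inputs and dozens of constants simultaneously.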
Likewise, h's value has evolved from Millikan's 1916 photoelectric-effect measurement (~0.5% uncertainty) to sub-ppm precision by the 2010s, driven by quantum electrical standards, enabling the 2019 fixes and ongoing refinements for constants like the fine-structure constant \alpha. These updates underscore how successive generations of experiments reduce uncertainties, enhancing the foundational accuracy of physical theories.
Role in SI Units
The 2019 redefinition of the International System of Units (SI), effective from 20 May 2019, fundamentally anchored four base units—the kilogram, ampere, kelvin, and mole—to exact numerical values of physical constants, thereby eliminating reliance on physical artifacts for their definitions.[18][19] This shift fixed the values of the Planck constant h = 6.626\,070\,15 \times 10^{-34} J s, the speed of light in vacuum c = 299\,792\,458 m/s, the elementary charge e = 1.602\,176\,634 \times 10^{-19} C, and the Boltzmann constant k = 1.380\,649 \times 10^{-23} J/K, along with the hyperfine transition frequency of caesium-133 \Delta \nu_{\text{Cs}} = 9\,192\,631\,770 Hz for the second.[18] By doing so, the SI system transitioned from definitions based on reproducible prototypes to ones rooted in invariant properties of nature, enhancing the framework's universality.[20] This redefinition offers significant advantages, including improved long-term stability, as physical constants do not drift or degrade like material artifacts, such as the former International Prototype of the Kilogram.[19][20] It promotes global consistency in measurements by allowing any laboratory worldwide to realize the units through fundamental physical processes, rather than calibrating against centralized standards, thus reducing uncertainties and fostering technological advancements in metrology.[18][20] The base units are now explicitly defined through these constants. The metre is the distance travelled by light in vacuum during a time interval of 1/299\,792\,458 of a second, directly fixing c.[18] \text{metre} \equiv \frac{c}{299\,792\,458} \times \text{second} The kilogram is defined such that the Planck constant has its exact value, linking mass to quantum mechanical energy-frequency relations, where the joule is \text{kg} \cdot \text{m}^2 \cdot \text{s}^{-2}.[19][18] Realizations often involve watt balances or silicon spheres, tying the unit to h, c, and \Delta \nu_{\text{Cs}}.
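The metre and kilogram definitions above can be exercised numerically. A minimal sketch using the exact fixed values; the 1 kg photon-frequency figure is purely an illustration of the h-mass link, not a practical realization:

```python
h = 6.62607015e-34   # Planck constant, J s (exact since 2019)
c = 299792458.0      # speed of light, m/s (exact)

# Metre: the distance light travels in 1/299792458 of a second
metre = c * (1.0 / 299792458.0)
print(metre)         # 1.0 (to floating-point precision)

# Kilogram via E = h*f and E = m*c^2: the photon frequency whose
# energy equals the rest energy of 1 kg (illustrative of the h-mass
# link exploited, indirectly, by Kibble balances)
f_kg = c**2 / h
print(f_kg)          # ~1.36e50 Hz
```

In practice a Kibble balance never produces such a photon; it equates mechanical and electrical power, but the traceability chain still bottoms out in the fixed values of h and c used here.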
The ampere is the electric current corresponding to the flow of exactly 1/(1.602\,176\,634 \times 10^{-19}) elementary charges per second, quantizing charge in terms of e.[18] \text{ampere} \equiv \frac{e}{1.602\,176\,634 \times 10^{-19}} \times \frac{1}{\text{second}} The kelvin is defined by fixing the Boltzmann constant, connecting temperature to thermal energy in statistical mechanics, where the joule relates energy to particle motion.[19][18] \text{kelvin} \equiv \frac{1.380\,649 \times 10^{-23}}{k} \times \text{joule} These definitions tie the SI units to core quantum phenomena—such as the quantization of action (h), electromagnetic invariance (c), charge discreteness (e), and thermodynamic equilibrium (k)—enabling metrology to leverage atomic-scale precision for macroscopic standards and supporting innovations in quantum technologies like Josephson junctions and single-electron devices.[18][19]
Use in Natural Units
Natural units are systems of measurement in theoretical physics where specific fundamental constants are assigned the value of 1, thereby eliminating them from equations and revealing the intrinsic scales of physical phenomena. These systems simplify mathematical expressions by absorbing constants into the definitions of units themselves. Prominent examples include Planck units, which set the speed of light c, the reduced Planck constant \hbar, and the gravitational constant G to 1, providing a framework independent of human-defined prototypes and rooted in the properties of spacetime, quantum mechanics, and gravity. Another example is atomic units, commonly used in quantum chemistry and atomic physics, where \hbar = 1, the electron mass m_e = 1, and the elementary charge e = 1 (with 4\pi\epsilon_0 = 1 in the corresponding electrostatic units), streamlining calculations involving electron interactions and atomic structure. The primary benefit of natural units lies in their ability to highlight fundamental scales without extraneous numerical factors, making theoretical derivations more elegant and focused on physical content. For instance, in Planck units, the Planck length l_P = \sqrt{\frac{\hbar G}{c^3}} is defined as 1, corresponding to approximately 1.616255 \times 10^{-35} m in SI units, representing the scale at which quantum gravitational effects become significant. This approach reduces the complexity of equations, such as those in general relativity or quantum field theory, by removing constants that would otherwise require dimensional tracking, thus emphasizing relationships between quantities like energy and length. In particle physics, natural units with c = \hbar = 1 are widely applied, equating the dimensions of mass, energy, and inverse length, which simplifies relativistic kinematics and Feynman diagram calculations in quantum chromodynamics and the Standard Model.
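The Planck scales that serve as the base units of this system follow directly from the three constants set to 1. A quick computation in SI units from CODATA values:

```python
import math

# CODATA values in SI units
hbar = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0         # speed of light, m/s

l_P = math.sqrt(hbar * G / c**3)   # Planck length, m
t_P = math.sqrt(hbar * G / c**5)   # Planck time, s
m_P = math.sqrt(hbar * c / G)      # Planck mass, kg

print(l_P)  # ~1.616e-35 m
print(t_P)  # ~5.39e-44 s
print(m_P)  # ~2.18e-8 kg
```

Note that t_P = l_P / c, as it must be in a system where c = 1 makes length and time interchangeable.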
Similarly, in cosmology, these units aid in modeling the universe's expansion and structure formation, where setting c = \hbar = 1 (and often G = 1) facilitates the integration of gravitational and quantum effects in equations governing cosmic evolution, such as the Friedmann equations. To restore constants when transitioning back to conventional units, dimensional analysis is employed: for example, the relativistic energy-momentum relation E = mc^2 simplifies to E = m when c = 1, and c^2 is reinserted by matching the dimensions of energy to mass times velocity squared.
Fundamental Constants
Definition and Count
Fundamental physical constants are universal quantities that appear in the fundamental equations of physics and cannot be derived from other principles or more basic constants; they must instead be determined through experimental measurement. In modern theoretical physics, these constants primarily parameterize the interactions and properties of elementary particles and fields in the Standard Model of particle physics, as well as the gravitational and cosmological sectors. They represent the irreducible inputs required to fully specify the laws governing the observable universe, distinguishing them from derived quantities or those fixed by symmetries.[1] The Standard Model, which describes electromagnetic, weak, and strong nuclear forces, incorporates approximately 26 such fundamental parameters. These include three gauge couplings for the respective forces, six quark masses (or Yukawa couplings), three charged lepton masses, the Higgs boson parameters, four parameters in the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix, and seven parameters accounting for neutrino masses and the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixing matrix. Adding the Newtonian gravitational constant G and the cosmological constant \Lambda from general relativity and cosmology yields a total of around 28 fundamental constants. Debates persist regarding the true independence of these parameters, as some may emerge as effective descriptions of deeper underlying dynamics. Theoretical advancements aim to reduce this count by imposing additional symmetries or unifications. For instance, grand unified theories (GUTs) propose embedding the Standard Model's three gauge couplings into a single unified coupling at high energies, potentially eliminating independent values for the strong, weak, and electromagnetic interactions. 
Such reductions reflect an ongoing pursuit of greater theoretical economy, where fewer free parameters could signal progress toward a more complete theory of fundamental interactions.[21] Philosophically, the proliferation of these parameters underscores a perceived incompleteness in current physical laws, prompting efforts to minimize their number as a hallmark of deeper understanding. This minimalism drives research beyond the Standard Model, seeking frameworks where constants are predicted rather than postulated, though no such theory has yet been experimentally confirmed.[22]
Examples and Table
Physical constants underpin the fundamental laws of physics, and a selection of the most significant ones includes those that define key scales in relativity, quantum mechanics, gravity, electromagnetism, and statistical mechanics. These constants are chosen for their central roles in theoretical frameworks and experimental determinations, such as the speed of light c, which sets the universal speed limit, and the fine-structure constant \alpha, which governs atomic interactions. The values presented here are the internationally recommended CODATA 2022 set, reflecting the latest least-squares adjustment of experimental data as of 2024.[3] While many physical constants are independent fundamentals, others like the vacuum electric permittivity \varepsilon_0 are derived from primaries via relations such as \varepsilon_0 = 1/(\mu_0 c^2), where \mu_0 is the magnetic constant; the table below prioritizes the primary constants to avoid redundancy.[3]
| Name | Symbol | Value | Uncertainty | SI Units | Brief Description |
|---|---|---|---|---|---|
| Speed of light in vacuum | c | 299792458 | exact | m s⁻¹ | Exact value defining the speed of light, fundamental to special relativity and electromagnetism.[3] |
| Planck constant | h | 6.62607015 \times 10^{-34} | exact | J s | Exact quantum of action, linking energy and frequency in quantum mechanics.[3] |
| Reduced Planck constant | \hbar | 1.054571817 \times 10^{-34} | exact | J s | Exact h / 2\pi, central to quantum angular momentum and wave functions.[3] |
| Newtonian constant of gravitation | G | 6.67430 \times 10^{-11} | 0.00015 \times 10^{-11} | m³ kg⁻¹ s⁻² | Determines the strength of gravitational attraction between masses, key to general relativity and celestial mechanics.[3] |
| Elementary charge | e | 1.602176634 \times 10^{-19} | exact | C | Exact magnitude of the charge on an electron or proton, basis for electromagnetic interactions.[3] |
| Fine-structure constant | \alpha | 7.2973525643 \times 10^{-3} | 0.0000000011 \times 10^{-3} | (dimensionless) | Dimensionless measure of the strength of electromagnetic interactions between elementary charged particles.[3] |
| Boltzmann constant | k_B | 1.380649 \times 10^{-23} | exact | J K⁻¹ | Exact link between temperature and energy in statistical mechanics and thermodynamics.[3] |
| Avogadro constant | N_A | 6.02214076 \times 10^{23} | exact | mol⁻¹ | Exact number of specified entities in one mole of substance, bridging microscopic and macroscopic scales.[3] |
| Magnetic constant | \mu_0 | 1.25663706127 \times 10^{-6} | 0.00000000020 \times 10^{-6} | N A⁻² | Permeability of free space, relating magnetic fields to electric currents in vacuum.[3] |
| Electric constant | \varepsilon_0 | 8.8541878188 \times 10^{-12} | 0.0000000014 \times 10^{-12} | F m⁻¹ | Permittivity of free space, characterizing electric field responses to charges in vacuum.[3] |
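As a consistency check on the table, the derived relation \varepsilon_0 = 1/(\mu_0 c^2) noted earlier, together with the definition of \alpha, can be verified numerically from the tabulated values:

```python
import math

# Cross-check of the table: derive eps_0 from mu_0 and c, then recover
# the tabulated fine-structure constant from e, eps_0, hbar, and c.
c = 299792458.0            # m/s (exact)
mu_0 = 1.25663706127e-6    # N A^-2 (CODATA 2022, measured)
e = 1.602176634e-19        # C (exact)
hbar = 1.054571817e-34     # J s

eps_0 = 1.0 / (mu_0 * c**2)                     # ~8.854e-12 F/m
alpha = e**2 / (4 * math.pi * eps_0 * hbar * c)  # dimensionless
print(eps_0)
print(1 / alpha)   # ~137.036
```

Both derived numbers agree with the tabulated entries to within the listed uncertainties, illustrating how the CODATA set is internally self-consistent rather than a list of independent measurements.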