Physical constant

A physical constant is a fundamental quantity in physics that remains invariant across space and time, appearing universally in the basic equations and theories that describe natural phenomena. These constants serve as unchanging reference points for scientific measurements and theoretical predictions, enabling the consistency and precision required in fields ranging from particle physics to cosmology. Notable examples include the speed of light in vacuum (c = 299792458 m/s), which defines the maximum speed of information propagation and underpins special relativity; Planck's constant (h = 6.62607015 × 10⁻³⁴ J s), linking energy and frequency in quantum mechanics; the elementary charge (e = 1.602176634 × 10⁻¹⁹ C), the basic unit of electric charge; and the fine-structure constant (α ≈ 1/137.035999), which characterizes the strength of electromagnetic interactions. Their values are determined through rigorous experimental measurements and are periodically refined by international bodies like the CODATA Task Group on Fundamental Physical Constants, with the most recent adjustment based on data through December 31, 2022, ensuring self-consistent sets for global use in science and technology. These constants not only test the validity of physical theories but also play a critical role in defining the International System of Units (SI), such as fixing h and c to establish the kilogram and metre.

Definition and Characteristics

Core Definition

A physical constant is a physical quantity that remains invariant regardless of location, time, or surrounding conditions, serving as a fundamental parameter in the core equations of physics. These quantities, such as the speed of light or Planck's constant, underpin theories from classical mechanics to quantum field theory, enabling precise predictions about natural phenomena. Unlike physical variables, which fluctuate based on specific systems or measurements, such as temperature or velocity in a given experiment, physical constants maintain fixed values across all contexts. Parameters, by contrast, often represent adjustable or model-dependent values tailored to particular scenarios, lacking the universal scope of true constants; for example, a mass in a mechanics problem might function as a parameter rather than an invariant like the speed of light. The concept of physical constants originated in classical mechanics with Newton's law of universal gravitation in 1687, which implicitly introduced the gravitational constant G (made explicit only in later formulations), and evolved significantly in the late 19th and early 20th centuries with the formalization of electromagnetic theory (Maxwell's equations highlighting the speed of light c as an invariant) and quantum theory (Planck's introduction of h in 1900). This progression marked a shift toward recognizing these quantities as unexplained building blocks of physical laws, essential for bridging classical and relativistic frameworks.

Key Properties

Physical constants exhibit universality, meaning their values remain identical across all regions of the universe, independent of location or epoch. This property is foundational to the consistency of physical laws, as indicated by tight constraints from spectroscopic observations of distant quasars and the cosmic microwave background radiation, which show no significant variation in constants like the fine-structure constant over billions of years (Δα/α < 10^{-5}). These constants also demonstrate invariance under coordinate transformations, such as Lorentz boosts in special relativity, ensuring that physical laws maintain their form regardless of the observer's frame of reference. For instance, the speed of light c serves as the invariant scale that defines the structure of spacetime in relativistic theories, remaining unchanged under such transformations. This invariance extends to other constants like Planck's constant h, which preserves quantum relations across inertial frames.

Physical constants play a crucial role in scaling laws and symmetries, dictating the dimensional structure and symmetry principles of fundamental equations. In quantum field theory, constants such as h and c set the scales for renormalization group flows, which describe how coupling strengths evolve under changes in energy scale while preserving gauge symmetries. Similarly, in general relativity, the gravitational constant G governs scaling in virial theorems for self-gravitating systems, ensuring symmetry under diffeomorphisms. These roles ensure that theoretical frameworks yield consistent predictions across scales, from subatomic particles to cosmological structures.

Conceptually, physical constants bridge theoretical predictions with experimental observations by appearing in dimensionless combinations that parameterize the strength of interactions and test the validity of models. The fine-structure constant \alpha \approx 1/137, defined as \alpha = e^2 / (4\pi \epsilon_0 \hbar c), exemplifies this, quantifying electromagnetic coupling in a frame-independent manner and enabling precise comparisons between quantum electrodynamics calculations and atomic spectra measurements. Such combinations highlight the indispensability of constants in unifying disparate phenomena and refining our understanding of nature's fundamental scales.
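The defining relation for α can be checked directly from the constants quoted in this article. A minimal Python sketch (the script is illustrative, not part of any standard; the constant values are the CODATA figures cited here):

```python
import math

# CODATA values as quoted in this article (e, hbar, c are exact in the SI;
# eps0 is a measured quantity).
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 299792458.0          # speed of light in vacuum, m/s
eps0 = 8.8541878188e-12  # vacuum electric permittivity, F/m

# Fine-structure constant: alpha = e^2 / (4*pi*eps0*hbar*c), dimensionless.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(f"alpha   = {alpha:.12e}")     # ~7.297353e-3
print(f"1/alpha = {1 / alpha:.9f}")  # ~137.035999
```

Because every unit cancels in the ratio, the same number results in any consistent unit system.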

Classification of Constants

Dimensional Constants

Dimensional constants are fundamental physical quantities that carry units of measurement, reflecting their role in connecting abstract laws to observable scales in the universe. Unlike dimensionless ratios, these constants incorporate dimensions such as mass [M], length [L], time [T], and temperature [Θ], ensuring that physical equations remain dimensionally homogeneous. For instance, the speed of light in vacuum, c, has dimensions [L T^{-1}] and serves as the universal speed limit for information propagation in special relativity. Similarly, the Newtonian gravitational constant G possesses dimensions [M^{-1} L^3 T^{-2}], quantifying the strength of gravitational attraction between masses. These examples illustrate how dimensional constants bridge theoretical principles with empirical measurements, as documented in the CODATA recommended values.

In dimensional analysis, these constants are essential for maintaining the consistency of physical equations, where every term must share the same dimensional structure. The Buckingham π theorem provides a formal framework for this, asserting that if a physical problem involves n variables with m fundamental dimensions, it can be reduced to n − m independent dimensionless π groups, allowing the derivation of scaling relations without solving the full equations. This theorem, formulated by Edgar Buckingham, enables scientists to identify key dependencies in complex systems, such as fluid dynamics or electromagnetism, by grouping dimensional constants with variables to form dimensionless combinations that govern universal behaviors. For example, in deriving the Reynolds number for fluid flow, constants like density and viscosity (both dimensional) combine with flow velocity and a characteristic length to yield a scale-invariant parameter (see the sketch after this section's examples).

The presence of dimensional constants profoundly influences the scalability of physical phenomena, as variations in their values would rescale fundamental lengths, times, and energies across the cosmos. Hypothetical changes to these constants could, for instance, expand or contract atomic radii by altering the interplay with electromagnetic forces, setting the stage for how dimensionless parameters like the fine-structure constant dictate relative sizes. Such scalability underscores their dependence on chosen unit systems, where redefining units adjusts their numerical values while preserving physical predictions. The Planck constant h, with dimensions [M L^2 T^{-1}] (equivalent to energy times time, or action), quantifies the granularity of quantum processes, such as the photon energy E = h\nu. The Boltzmann constant k_B, with dimensions [M L^2 T^{-2} Θ^{-1}] (energy per temperature), links macroscopic thermodynamics to microscopic statistical mechanics, appearing in equations like the ideal gas law PV = N k_B T. Their exact or measured values, such as h = 6.62607015 \times 10^{-34} J s, anchor these scales in the SI system.
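To make the π-theorem bookkeeping concrete, consider the Reynolds number mentioned above: four variables (density, velocity, length, viscosity) span three base dimensions (M, L, T), so exactly 4 − 3 = 1 dimensionless group exists. A minimal sketch; the numerical inputs are illustrative values for water flow, not taken from any source cited here:

```python
# Buckingham pi illustration: 4 variables (rho, v, L, eta) in 3 base
# dimensions (M, L, T) yield 4 - 3 = 1 dimensionless group: the Reynolds number.

def reynolds_number(rho, v, L, eta):
    """Re = rho * v * L / eta -- all dimensions cancel."""
    return rho * v * L / eta

# Assumed illustrative values for water in a 2 cm pipe.
rho = 998.0    # density, kg/m^3
v = 1.5        # flow speed, m/s
L = 0.02       # pipe diameter, m
eta = 1.0e-3   # dynamic viscosity, Pa s

Re = reynolds_number(rho, v, L, eta)
print(f"Re = {Re:.0f}")  # ~29940; the same value in any consistent unit system
```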

Dimensionless Constants

Dimensionless physical constants arise as ratios of physical quantities sharing identical dimensions, resulting in pure numerical values independent of any chosen unit system. A prominent example is the fine-structure constant, denoted α, which quantifies the strength of the electromagnetic interaction and is defined as α = e² / (4πε₀ ℏ c), where e is the elementary charge, ε₀ the vacuum permittivity, ℏ the reduced Planck constant, and c the speed of light in vacuum; its measured value is approximately 7.2973525643 × 10^{-3}, or equivalently 1/α ≈ 137.035999177 (CODATA 2022). Another key instance is the proton-to-electron mass ratio, μ = m_p / m_e, representing the ratio of the proton's rest mass to that of the electron, with a value of approximately 1836.152673426 (CODATA 2022).

These constants hold profound theoretical significance because they encapsulate the intrinsic properties of fundamental interactions without contamination from arbitrary unit choices, making them pivotal parameters in theoretical frameworks such as quantum field theory (QFT). In QFT, dimensionless coupling constants like α govern the perturbative expansion and renormalization of interactions, ensuring the theory's consistency across energy scales by avoiding dimensional inconsistencies in Feynman diagrams. For instance, the renormalizability of quantum electrodynamics relies on α being dimensionless, allowing ultraviolet divergences to be absorbed without altering the theory's predictive power. Similarly, mass ratios like μ parameterize the structure of hadronic and atomic systems in the Standard Model, influencing phenomena from spectral lines to binding energies.

Deriving the specific numerical values of these constants from first principles remains a central challenge in theoretical physics, as current frameworks such as the Standard Model treat them as free parameters that must be determined empirically rather than predicted. This empirical necessity underscores the incompleteness of existing theories, prompting ongoing research into unification schemes like grand unified theories, which aim to relate dimensionless constants but have yet to yield precise derivations. An illustrative case is the cosmological constant Λ, which possesses dimensions of inverse length squared in natural units; when rendered dimensionless via multiplication by the square of the Planck length (l_p² Λ), it assumes an extraordinarily small value on the order of 10^{-122}, highlighting the "cosmological constant problem" where quantum field theory predictions vastly exceed observations. Such discrepancies emphasize the experimental input required to fix these values, with high-precision measurements from facilities like those contributing to CODATA adjustments providing the foundational data.
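The cosmological constant example can be made concrete with a short calculation. In the sketch below, the observed value Λ ≈ 1.1 × 10^{-52} m^{-2} is an assumed round figure from standard ΛCDM fits (it is not quoted in this article); the other constants are those cited here:

```python
import math

# Making the cosmological constant dimensionless with the Planck length.
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 299792458.0          # speed of light, m/s
Lambda = 1.1e-52         # cosmological constant, m^-2 (assumed approximate value)

l_p = math.sqrt(hbar * G / c**3)          # Planck length, ~1.616e-35 m
dimensionless_Lambda = l_p**2 * Lambda    # pure number, unit-system independent

print(f"l_p            = {l_p:.6e} m")
print(f"l_p^2 * Lambda = {dimensionless_Lambda:.2e}")  # ~3e-122
```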

Relation to Physical Units

Numerical Determination

The numerical values of physical constants are determined through meticulously designed experiments that exploit quantum and classical phenomena to achieve extreme precision. For the speed of light in vacuum, denoted c, laser interferometry has been a cornerstone method, involving the direct measurement of a laser's frequency f and wavelength \lambda to compute c = f \lambda. A pivotal experiment at the National Institute of Standards and Technology (NIST, then the National Bureau of Standards) in 1972 utilized a methane-stabilized helium-neon laser operating at 3.39 μm, yielding c = 299792458 m/s with a standard uncertainty of 4 m/s, representing an order-of-magnitude improvement in accuracy over prior optical methods. This measurement demonstrated the feasibility of tying the metre to the speed of light, culminating in the 1983 redefinition of the metre such that c was fixed exactly at 299792458 m/s, eliminating measurement uncertainty for this constant.

Similarly, the Planck constant h is measured via techniques that link mechanical and electrical quantities, such as the Kibble (watt) balance, which operates on the principle of equating mechanical power to electromagnetic power using the Josephson effect for voltage and the quantum Hall effect for resistance. At NIST, the NIST-4 Kibble balance provided a key determination in 2016, measuring h = 6.62607004 \times 10^{-34} J s with a relative standard uncertainty of 34 parts per billion, contributing significantly to the pre-redefinition consensus value. Another approach for h involves cavity resonance in superconducting microwave cavities, where the resonant frequency of electromagnetic modes relates to quantized energy levels, though the Kibble method has dominated recent high-precision efforts due to its direct traceability to SI units. Following the 2019 SI redefinition, h was fixed exactly at 6.62607015 \times 10^{-34} J s, alongside the elementary charge e, the Boltzmann constant k_B, and the Avogadro constant N_A, shifting focus to refining other derived constants.

The Committee on Data for Science and Technology (CODATA), under the International Science Council, plays a central role in synthesizing these experimental results into globally recommended values through a rigorous least-squares adjustment procedure. This process integrates hundreds of input measurements from international laboratories, resolving inconsistencies and ensuring self-consistency across the network of constants; the 2022 CODATA evaluation, for instance, adjusted 133 input data points to derive values for 79 constants, incorporating advancements up to December 2022. Uncertainties in individual measurements are propagated via covariance matrices in this adjustment, where correlations between experiments (e.g., shared auxiliary constants like atomic masses) are accounted for, yielding final uncertainties that reflect the global data set's reliability rather than isolated errors; for example, the 2022 value of the gravitational constant G carries a relative uncertainty of about 22 parts per million due to propagated experimental variances.

Historical refinements illustrate the progressive tightening of these values through technological evolution. The speed of light, first determined terrestrially by Hippolyte Fizeau and Léon Foucault in the 1840s–1860s at around 298000–300000 km/s with uncertainties exceeding 0.1%, saw dramatic improvements with Michelson's rotating-mirror apparatus in the early 20th century (reaching ~0.01% precision) and laser-based methods in the 1970s, ultimately rendering c exact in 1983.
Likewise, the value of h has evolved from Millikan's 1916 photoelectric-effect measurement (~0.5% uncertainty) to sub-ppm precision by the 2010s, driven by quantum electrical standards, enabling the 2019 fixes and ongoing refinements for constants like the fine-structure constant \alpha. These updates underscore how successive generations of experiments reduce uncertainties, enhancing the foundational accuracy of physical theories.
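The covariance-based least-squares adjustment can be illustrated in miniature. The sketch below combines two hypothetical, correlated measurements of G using generalized least squares; the input values and the correlation coefficient are invented for illustration and are not actual CODATA inputs:

```python
import numpy as np

# Toy version of a CODATA-style adjustment: combine two correlated
# measurements of the same constant via their covariance matrix.
y = np.array([6.6743e-11, 6.6741e-11])  # two hypothetical measurements of G
u = np.array([1.5e-15, 2.0e-15])        # their standard uncertainties
rho = 0.3                               # assumed correlation (shared apparatus)

Sigma = np.array([[u[0]**2, rho * u[0] * u[1]],
                  [rho * u[0] * u[1], u[1]**2]])  # covariance matrix

ones = np.ones(2)
W = np.linalg.inv(Sigma)
g_hat = (ones @ W @ y) / (ones @ W @ ones)  # generalized least-squares mean
u_hat = np.sqrt(1.0 / (ones @ W @ ones))    # propagated uncertainty

print(f"adjusted G  = {g_hat:.6e}")
print(f"uncertainty = {u_hat:.2e}")  # ~1.36e-15, below either input alone
```

The real adjustment solves the same weighted problem over hundreds of inputs and dozens of constants simultaneously, which is why each recommended value reflects the whole data set rather than any single experiment.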

Role in SI Units

The 2019 redefinition of the International System of Units (SI), effective from 20 May 2019, fundamentally anchored four base units (the kilogram, ampere, kelvin, and mole) to exact numerical values of physical constants, thereby eliminating reliance on physical artifacts for their definitions. This shift fixed the values of the Planck constant h = 6.626\,070\,15 \times 10^{-34} J s, the speed of light in vacuum c = 299\,792\,458 m/s, the elementary charge e = 1.602\,176\,634 \times 10^{-19} C, and the Boltzmann constant k = 1.380\,649 \times 10^{-23} J/K, along with the hyperfine transition frequency of caesium-133, \Delta \nu_{\text{Cs}} = 9\,192\,631\,770 Hz, which defines the second. By doing so, the SI system transitioned from definitions based on reproducible prototypes to ones rooted in invariant properties of nature, enhancing the framework's universality.

This redefinition offers significant advantages, including improved long-term stability, as physical constants do not drift or degrade like material artifacts, such as the former International Prototype of the Kilogram. It promotes global consistency in measurements by allowing any laboratory worldwide to realize the units through fundamental physical processes, rather than calibrating against centralized standards, thus reducing uncertainties and fostering technological advancements in metrology.

The base units are now explicitly defined through these constants. The metre is the distance travelled by light in vacuum during a time interval of 1/299\,792\,458 of a second, directly fixing c:

\text{metre} \equiv \frac{c}{299\,792\,458} \times \text{second}

The kilogram is defined such that the Planck constant has its exact value, linking mass to quantum mechanical energy-frequency relations, where the joule is \text{kg} \cdot \text{m}^2 \cdot \text{s}^{-2}. Realizations often involve watt balances or silicon spheres, tying the unit to h, c, and \Delta \nu_{\text{Cs}}. The ampere is the electric current corresponding to the flow of exactly 1/(1.602\,176\,634 \times 10^{-19}) elementary charges per second, quantizing charge in terms of e:

\text{ampere} \equiv \frac{e}{1.602\,176\,634 \times 10^{-19}} \times \frac{1}{\text{second}}

The kelvin is defined by fixing the Boltzmann constant, connecting temperature to thermal energy in statistical mechanics, where k relates energy to particle motion:

\text{kelvin} \equiv \frac{1.380\,649 \times 10^{-23}}{k} \times \text{joule}

These definitions tie the SI units to core quantum phenomena, such as the quantization of action (h), electromagnetic invariance (c), charge discreteness (e), and thermodynamic equilibrium (k), enabling metrology to leverage atomic-scale precision for macroscopic standards and supporting innovations in quantum technologies like Josephson junctions and single-electron devices.
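One practical consequence is that the quantum electrical standards follow from the fixed constants with zero uncertainty. A minimal consistency check, where K_J and R_K denote the Josephson and von Klitzing constants:

```python
# With h and e exact in the 2019 SI, the quantum electrical standards
# are exact by construction.
h = 6.62607015e-34   # Planck constant, J s (exact)
e = 1.602176634e-19  # elementary charge, C (exact)

K_J = 2 * e / h      # Josephson constant, Hz/V (voltage standard)
R_K = h / e**2       # von Klitzing constant, ohm (resistance standard)

print(f"K_J = {K_J:.6e} Hz/V")  # ~4.835978e14
print(f"R_K = {R_K:.6f} ohm")   # ~25812.807459
```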

Use in Natural Units

Natural units are systems of measurement in theoretical physics where specific fundamental constants are assigned the value of 1, thereby eliminating them from equations and revealing the intrinsic scales of physical phenomena. These systems simplify mathematical expressions by absorbing constants into the definitions of units themselves. Prominent examples include Planck units, which set the speed of light c, the reduced Planck constant \hbar, and the gravitational constant G to 1, providing a framework independent of human-defined prototypes and rooted in the properties of spacetime, quantum mechanics, and gravity. Another example is atomic (Hartree) units, commonly used in quantum chemistry and atomic physics, where \hbar = 1, the electron mass m_e = 1, and the elementary charge e = 1 (with 4\pi\epsilon_0 = 1 in the corresponding electrostatic convention), streamlining calculations involving electron interactions and atomic structure.

The primary benefit of natural units lies in their ability to highlight fundamental scales without extraneous numerical factors, making theoretical derivations more elegant and focused on physical content. For instance, in Planck units, the Planck length l_P = \sqrt{\hbar G / c^3} is defined as 1, corresponding to approximately 1.616255 \times 10^{-35} m in SI units, representing the scale at which quantum gravitational effects become significant. This approach reduces the complexity of equations, such as those in general relativity or quantum field theory, by removing constants that would otherwise require dimensional tracking, thus emphasizing relationships between quantities like energy and length.

In particle physics, natural units with c = \hbar = 1 are widely applied, equating the dimensions of mass, energy, and inverse length, which simplifies relativistic kinematics and Feynman diagram calculations in quantum electrodynamics and the Standard Model. Similarly, in cosmology, these units aid in modeling the universe's expansion and structure formation, where setting c = \hbar = 1 (and often G = 1) facilitates the integration of gravitational and quantum effects in equations governing cosmic evolution, such as the Friedmann equations. To restore constants when transitioning back to conventional units, dimensional analysis is employed: for example, the relativistic energy-momentum relation E = mc^2 simplifies to E = m when c = 1, and c^2 is reinserted by matching the dimensions of energy to mass times velocity squared.
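A short sketch can make the conversion between natural and SI units explicit, computing the Planck scales from the SI values quoted in this article and reinserting c² in E = m (the electron-mass figure is an approximate illustrative input):

```python
import math

# Planck units from SI constants, and restoring constants by
# dimensional analysis.
hbar = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0         # speed of light, m/s

l_P = math.sqrt(hbar * G / c**3)  # Planck length, m
t_P = math.sqrt(hbar * G / c**5)  # Planck time, s
m_P = math.sqrt(hbar * c / G)     # Planck mass, kg

print(f"l_P = {l_P:.6e} m")   # ~1.616255e-35
print(f"t_P = {t_P:.6e} s")   # ~5.39e-44
print(f"m_P = {m_P:.6e} kg")  # ~2.18e-8

# Restoring constants: in natural units E = m; dimensional analysis
# reinserts c^2 to give E = m c^2 in SI.
m_electron = 9.1093837e-31     # kg (approximate)
E = m_electron * c**2
print(f"E = {E:.3e} J")        # ~8.19e-14 J
```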

Fundamental Constants

Definition and Count

Fundamental physical constants are universal quantities that appear in the fundamental equations of physics and cannot be derived from other principles or more basic constants; they must instead be determined through experimental measurement. In modern theoretical physics, these constants primarily parameterize the interactions and properties of elementary particles and fields in the Standard Model, as well as the gravitational and cosmological sectors. They represent the irreducible inputs required to fully specify the laws governing the observable universe, distinguishing them from derived quantities or those fixed by symmetries.

The Standard Model, which describes the electromagnetic, weak, and strong nuclear forces, incorporates approximately 26 such fundamental parameters. These include three gauge couplings for the respective forces, six quark masses (or Yukawa couplings), three charged lepton masses, two Higgs sector parameters, four parameters in the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix, the strong CP phase, and seven parameters accounting for neutrino masses and the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixing matrix. Adding the Newtonian gravitational constant G and the cosmological constant \Lambda from general relativity and cosmology yields a total of around 28 fundamental constants. Debates persist regarding the true independence of these parameters, as some may emerge as effective descriptions of deeper underlying dynamics.

Theoretical advancements aim to reduce this count by imposing additional symmetries or unifications. For instance, grand unified theories (GUTs) propose embedding the Standard Model's three gauge couplings into a single unified coupling at high energies, potentially eliminating independent values for the strong, weak, and electromagnetic interactions. Such reductions reflect an ongoing pursuit of greater theoretical economy, where fewer free parameters could signal progress toward a more complete theory of fundamental interactions. Philosophically, the proliferation of these parameters underscores a perceived incompleteness in current physical laws, prompting efforts to minimize their number as a hallmark of deeper understanding. This minimalism drives research beyond the Standard Model, seeking frameworks where constants are predicted rather than postulated, though no such theory has yet been experimentally confirmed.

Examples and Table

Physical constants underpin the fundamental laws of physics, and a selection of the most significant ones includes those that define key scales in relativity, quantum mechanics, gravity, electromagnetism, and statistical mechanics. These constants are chosen for their central roles in theoretical frameworks and experimental determinations, such as the speed of light c, which sets the universal speed limit, and the fine-structure constant \alpha, which governs atomic interactions. The values presented here are the internationally recommended CODATA 2022 set, reflecting the latest least-squares adjustment of experimental data as of 2024. While many physical constants are independent fundamentals, others like the vacuum electric permittivity \varepsilon_0 are derived from primaries via relations such as \varepsilon_0 = 1/(\mu_0 c^2), where \mu_0 is the magnetic constant; the table below prioritizes the primary constants to avoid redundancy.
| Name | Symbol | Value | Uncertainty | SI Units | Brief description |
| --- | --- | --- | --- | --- | --- |
| Speed of light in vacuum | c | 299792458 | exact | m s⁻¹ | Exact value defining the speed of light, fundamental to special relativity and electromagnetism. |
| Planck constant | h | 6.62607015 × 10⁻³⁴ | exact | J s | Exact quantum of action, linking energy and frequency in quantum mechanics. |
| Reduced Planck constant | ħ | 1.054571817 × 10⁻³⁴ | exact | J s | Exact h/2π, central to quantum angular momentum and wave functions. |
| Newtonian constant of gravitation | G | 6.67430 × 10⁻¹¹ | 0.00015 × 10⁻¹¹ | m³ kg⁻¹ s⁻² | Determines the strength of gravitational attraction between masses, key to general relativity and celestial mechanics. |
| Elementary charge | e | 1.602176634 × 10⁻¹⁹ | exact | C | Exact magnitude of the charge on an electron or proton, basis for electromagnetic interactions. |
| Fine-structure constant | α | 7.2973525643 × 10⁻³ | 0.0000000011 × 10⁻³ | (dimensionless) | Dimensionless measure of the strength of electromagnetic interactions between elementary charged particles. |
| Boltzmann constant | k_B | 1.380649 × 10⁻²³ | exact | J K⁻¹ | Exact link between temperature and energy in statistical mechanics and thermodynamics. |
| Avogadro constant | N_A | 6.02214076 × 10²³ | exact | mol⁻¹ | Exact number of specified entities in one mole of substance, bridging microscopic and macroscopic scales. |
| Magnetic constant | μ₀ | 1.25663706127 × 10⁻⁶ | 0.00000000020 × 10⁻⁶ | N A⁻² | Permeability of free space, relating magnetic fields to electric currents in vacuum. |
| Electric constant | ε₀ | 8.8541878188 × 10⁻¹² | 0.0000000014 × 10⁻¹² | F m⁻¹ | Permittivity of free space, characterizing electric field responses to charges in vacuum. |
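As a cross-check of the table, the derived relation ε₀ = 1/(μ₀c²) mentioned above can be evaluated numerically; a quick sketch using the table's values:

```python
# Checking the derived constant eps0 = 1/(mu0 * c^2) against the table.
mu0 = 1.25663706127e-6  # magnetic constant, N A^-2 (measured)
c = 299792458.0         # speed of light, m/s (exact)

eps0 = 1.0 / (mu0 * c**2)
print(f"eps0 = {eps0:.10e} F/m")  # ~8.8541878188e-12, matching the table row
```

Because μ₀ is measured rather than exact in the 2019 SI, ε₀ inherits its relative uncertainty, which is why both carry nonzero uncertainties in the table.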

Tests of Constancy

Historical Tests

In 1937, Paul Dirac proposed the large numbers hypothesis, noting striking coincidences among dimensionless ratios in physics, such as the approximate equality between the ratio of the observable universe's radius to the classical electron radius (around 10^{40}) and the ratio of the electromagnetic force to the gravitational force between an electron and a proton. To explain these "large numbers," Dirac suggested that the gravitational constant G varies inversely with the age of the universe, G ∝ 1/t, implying a slow decrease over cosmic time, which could be tested through astronomical observations like the secular acceleration of the Moon or the expansion of the universe. This hypothesis prompted early efforts to probe the temporal stability of fundamental constants using available data, though subsequent analyses of solar-system dynamics and geological records found no evidence for such variation in G.

One of the earliest experimental confirmations of a constant speed of light c came from the 1887 Michelson-Morley experiment, which aimed to detect Earth's motion through the hypothetical luminiferous aether by measuring interference fringes in light beams split along perpendicular paths. The null result (no detectable shift in fringe patterns corresponding to an expected velocity of about 30 km/s) demonstrated that c is invariant with respect to the observer's direction of motion, to within 1/40th of the anticipated effect, laying foundational support for the constancy of c in classical physics.

For the gravitational constant G, early tests in the early 1900s relied on precision measurements of the equivalence principle, notably through Loránd Eötvös's torsion balance experiments conducted between 1885 and 1922. These involved suspending pairs of dissimilar materials (e.g., platinum and aluminum) and observing any differential torque due to Earth's gravity, finding the gravitational and inertial masses equal to within 2.5 × 10^{-9}; this assumes and confirms G's universality and apparent constancy across substances, without evidence of temporal change in laboratory settings.

The stability of the fine-structure constant α, which governs electromagnetic interactions, was inferred from the consistency of atomic spectral lines measured over decades using early spectroscopy techniques. For instance, Balmer series lines in hydrogen, first precisely cataloged in the 1880s by Johann Balmer and later refined through interferometry in the 1920s and 1930s, showed Rydberg constants stable to parts in 10^5 when compared across laboratories, implying no detectable drift in α over half a century, as variations would shift line positions proportionally to α^2.

A key milestone in testing constancy over cosmological timescales emerged in the 1990s from analyses of quasar absorption spectra by John Webb and collaborators, who compared fine-structure multiplets (e.g., Fe II and Mg II lines) in intervening gas clouds at redshifts z ≈ 0.5–1.5, spanning about 10 billion years. Their method achieved an order-of-magnitude improvement in sensitivity by modeling line separations, yielding Δα/α = (0.72 ± 0.59) × 10^{-5} × z, consistent with no variation in α to within 10^{-5} over this epoch, marking the first robust astronomical constraint against Dirac-like changes.
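Dirac's force-ratio coincidence is easy to reproduce. The sketch below evaluates the ratio of electric to gravitational attraction between a proton and an electron; the particle masses are approximate CODATA-style values supplied here for illustration:

```python
import math

# One of Dirac's "large numbers": the ratio of electric to gravitational
# force between a proton and an electron (the separation cancels out).
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878188e-12  # vacuum permittivity, F/m
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.67262192e-27     # proton mass, kg (approximate)
m_e = 9.1093837e-31      # electron mass, kg (approximate)

ratio = e**2 / (4 * math.pi * eps0 * G * m_p * m_e)
print(f"F_elec / F_grav ~ {ratio:.3e}")  # ~2.3e39, of order 10^40
```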

Modern Experimental Probes

Modern experimental probes of physical constants' potential variations leverage advanced atomic clocks to compare hyperfine transition frequencies in atoms like cesium and hydrogen, which are largely insensitive to the fine-structure constant α, against optical transitions that depend strongly on α. Over multi-year monitoring periods, such comparisons have constrained the temporal variation to |\dot{\alpha}/\alpha| < 10^{-17} \ \mathrm{yr}^{-1}, consistent with no detectable change.

Cosmological and geological observations provide complementary constraints over vast timescales. Analysis of isotopic ratios in uranium ores from the Oklo natural nuclear reactor, active approximately 2 billion years ago, limits changes in α to |\Delta \alpha / \alpha| < 6 \times 10^{-9} at 2\sigma confidence, assuming minimal variation in quark masses. Similarly, cosmic microwave background (CMB) anisotropies measured by the Planck satellite constrain variations in α from the epoch of recombination (z \approx 1100) to the present, yielding \Delta \alpha / \alpha = (0.4 \pm 1.4) \times 10^{-3} for independent changes in α and the electron mass m_e.

In laboratory settings, high-resolution laser spectroscopy of molecular species like H_2 and HD targets variations in the proton-to-electron mass ratio \mu = m_p / m_e. Recent measurements using ultracold KRb molecules and precision vibrational transitions have set stringent limits of |\dot{\mu}/\mu| < 4 \times 10^{-17} \ \mathrm{yr}^{-1} over laboratory timescales, with no evidence of variation. Complementary tests employ gravitational redshift experiments with atomic clocks positioned at varying heights to verify local position invariance (LPI) under the equivalence principle. Differential comparisons using chip-scale rubidium clocks separated by 33 cm have confirmed the predicted redshift to within 20 ppm, bounding LPI violations that could signal constant drifts to less than 10^{-6} relative to general relativity.

Observations from the James Webb Space Telescope (JWST) in 2023–2025 have extended these probes to the early universe via high-redshift (z > 6) quasars and emission-line galaxies. Spectroscopic analysis of fine-structure lines in these objects tightens constraints on α variation to |\Delta \alpha / \alpha| < 10^{-5} at z \approx 7–9, surpassing prior ground-based limits. Additionally, JWST photometry and dynamics of massive galaxies at z > 10 inform dark energy models using the Chevallier–Polarski–Linder parameterization, providing constraints on the equation-of-state parameters w_0 and w_a.
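The scale of the clock-based redshift test can be estimated from the standard weak-field formula Δf/f = gΔh/c². A back-of-envelope sketch for the 33 cm separation quoted above, taking g = 9.81 m/s² as an assumed local value:

```python
# Expected fractional gravitational redshift for clocks separated by 33 cm,
# from the weak-field formula df/f = g * dh / c^2.
g = 9.81         # local gravitational acceleration, m/s^2 (assumed)
dh = 0.33        # height separation, m
c = 299792458.0  # speed of light, m/s

shift = g * dh / c**2
print(f"df/f = {shift:.2e}")  # ~3.6e-17: such tests require clocks with
                              # fractional frequency stability near 1e-17
```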

Implications in Physics

Fine-Tuning and the Universe

The fine-tuning of physical constants refers to the observation that small variations in their values would render the universe inhospitable to complex structures and life as we know it. For instance, the fine-structure constant (α), which governs the strength of electromagnetic interactions, is approximately 1/137; a change of just 4% in α would prevent the formation of stable atoms by disrupting electron orbits and binding energies, leading to a universe dominated by unbound particles rather than chemistry-supporting matter. Similarly, the gravitational constant (G), which dictates the force of gravity, must lie within a narrow range for a habitable cosmos: if G were slightly weaker, matter would fail to coalesce into stars and planets; if stronger, stars would burn too rapidly, exhausting their fuel before allowing time for planetary systems or life to develop.

The weak anthropic principle provides a framework for understanding this apparent tuning, positing that we observe these precise values because only in such a universe could observers like ourselves exist to make the observation. Formulated by Brandon Carter, this principle states that the universe's conditions must be compatible with the existence of observers, thereby explaining the selection effect without invoking design or necessity. It contrasts with stronger versions that imply the universe must support life, focusing instead on the bias introduced by our presence in a life-permitting cosmos.

One proposed explanation for fine-tuning is the multiverse hypothesis, particularly within string theory, which predicts a vast "landscape" of approximately 10^{500} possible vacua, each with different values for the physical constants. In this scenario, our universe is one of many, and the observed constants are simply those that allow for observers, with life emerging preferentially in habitable regions of the landscape. This idea, advanced by Leonard Susskind, suggests that the apparent tuning arises from the statistical inevitability across an immense ensemble of universes.

Critiques of fine-tuning arguments highlight alternatives, such as theories of varying constants over cosmic time or space, which challenge the assumption of fixed, universal values. Victor Stenger argued that many claimed tunings are overstated when considering correlated parameter changes or alternative physical laws, potentially allowing life in a broader range of conditions without invoking a designer. In response, Luke Barnes counters that such variations do not eliminate the sensitivity of key constants like α and G to small perturbations, maintaining that the universe's habitability remains improbably narrow under standard models.

Role in Theoretical Frameworks

In the Standard Model of particle physics, physical constants serve as essential free parameters that underpin the theory's predictive power, with 19 such parameters (six quark masses, three charged lepton masses, four parameters in the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix, three gauge couplings, two Higgs sector parameters, and the strong CP-violating phase θ_QCD) requiring experimental input to fix their values. These constants enable calculations of key phenomena, such as the Higgs boson mass, which emerges from the interplay of the Higgs vacuum expectation value v and self-coupling λ via the relation m_H = √(2λ) v, where v ≈ 246 GeV is determined by the electroweak scale. Without these inputs, the model remains incomplete, highlighting the empirical foundation of its successes in describing electroweak interactions and quantum chromodynamics.

Beyond the Standard Model, theoretical frameworks seek to impose relations on these constants to achieve greater unification and explanatory depth. In supersymmetry (SUSY), for instance, the theory predicts interconnections among parameters, such as improved convergence of the gauge couplings at high energies, potentially resolving discrepancies in the Standard Model's running couplings and stabilizing the Higgs mass against quantum corrections. Similarly, loop quantum gravity quantizes Newton's constant G non-perturbatively, incorporating it into a background-independent formalism of spin networks to reconcile general relativity with quantum mechanics, though G itself remains an input parameter rather than a derived quantity.

Grand Unified Theories (GUTs) represent a key effort to reduce the proliferation of constants by embedding the Standard Model's SU(3) × SU(2) × U(1) gauge structure into a single gauge group, such as SU(5) or SO(10), where the three gauge couplings α_1, α_2, and α_3 unify to a common value α_GUT at an energy scale around 10^{16} GeV. This unification not only diminishes the number of independent couplings but also implies testable predictions like proton decay, and supersymmetric GUTs enhance the viability by aligning the running couplings more precisely with observations.

Despite these advances, physical constants reveal profound open issues in theoretical frameworks. The hierarchy problem questions why the electroweak scale parameter μ (related to the Higgs mass) remains minuscule compared to the Planck scale M_Pl ≈ 1.22 × 10^{19} GeV, demanding unnatural fine-tuning to avoid quantum corrections that would push μ toward M_Pl. The cosmological constant problem similarly puzzles over why the observed vacuum energy density Λ ≈ 10^{-47} GeV^4 is some 120 orders of magnitude smaller than expectations from the Planck scale. Recent 2025 constraints from the KATRIN experiment have refined the effective electron antineutrino mass upper limit to < 0.45 eV at 90% confidence level, while the DESI survey provides a bound on the sum of neutrino masses ∑m_ν < 0.064 eV at 95% confidence level, influencing the effective parameter count in neutrino-extended models by constraining three additional masses and mixing parameters.
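The Higgs relation quoted above can be inverted to show how a measured mass fixes a Standard Model parameter. A minimal sketch, assuming the illustrative inputs m_H ≈ 125.25 GeV and v ≈ 246 GeV (neither figure is quoted elsewhere in this article):

```python
import math

# Inverting m_H = sqrt(2 * lambda) * v to extract the Higgs self-coupling
# from assumed measured inputs.
m_H = 125.25  # Higgs boson mass, GeV (assumed illustrative value)
v = 246.0     # electroweak vacuum expectation value, GeV

lam = m_H**2 / (2 * v**2)
print(f"lambda = {lam:.3f}")                      # ~0.13
print(f"check: m_H = {math.sqrt(2*lam)*v:.2f} GeV")  # recovers 125.25
```

This is the sense in which λ is a free parameter: nothing in the theory predicts 0.13, and the value exists only because m_H has been measured.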
