
Theoretical physics

Theoretical physics is the subfield of physics dedicated to interpreting and codifying experimental data into a coherent body of knowledge, known as physical theories, to explain and predict the behavior of physical systems in the natural world. It relies on abstract models, computations, and conceptual frameworks rather than direct experimentation to rationalize natural phenomena, drawing on disciplines such as mathematics, statistics, astronomy, biology, chemistry, and geology. Theoretical physicists develop hypotheses and predictive models that guide experiments, often exploring fundamental questions about the origins and structure of the universe, matter, space, and time.

The historical roots of theoretical physics trace back to natural philosophy in antiquity, but the modern field crystallized in the early 20th century amid crises in classical physics, leading to revolutionary developments in quantum theory and relativity. Quantum theory began in 1900 when Max Planck proposed that energy is emitted in discrete quanta to resolve the blackbody radiation problem, a hypothesis Einstein extended in 1905 to explain the photoelectric effect using light quanta (photons), for which he received the 1921 Nobel Prize in Physics. Concurrently, Einstein's special theory of relativity, published in 1905, unified space and time into spacetime and established the equivalence of mass and energy (E=mc²), addressing inconsistencies between Newtonian mechanics and electromagnetism. This culminated in the 1915 formulation of general relativity, which redefined gravity as the curvature of spacetime caused by mass and energy, profoundly influencing cosmology and astrophysics.

Contemporary theoretical physics encompasses diverse branches, including quantum field theory, which provides the mathematical framework for the Standard Model of elementary particles and their interactions; general relativity for gravitational phenomena; and condensed matter theory for understanding solids, liquids, and complex materials. Efforts to unify these frameworks persist, notably in string theory, which posits that fundamental particles are vibrating strings in higher dimensions, and quantum gravity approaches aiming to reconcile quantum mechanics with general relativity. Theoretical physicists employ advanced methods such as vector analysis, differential equations, Fourier transforms, group theory, and numerical simulations to model systems ranging from subatomic particles to the cosmos. These theories not only predict observable effects, like the gravitational waves confirmed in 2015, but also drive technological innovations and deepen our comprehension of the universe's fundamental laws.

Definition and Scope

Core Definition

Theoretical physics is the branch of physics that employs mathematical abstractions, hypotheses, and logical reasoning to explain and predict physical phenomena, focusing on constructing abstract models of natural laws rather than direct empirical testing. This approach emphasizes the development of conceptual frameworks that capture the underlying principles governing the universe, allowing physicists to derive consequences from assumed axioms and compare them with observable outcomes. Key characteristics of theoretical physics include its reliance on deductive methods, where conclusions are logically inferred from foundational premises, and the use of idealized models to simplify complex systems for analysis. Examples of such models include point particles, which treat objects as having zero spatial extent to facilitate calculations in mechanics and particle physics, and continuous fields, which represent forces like electromagnetism as smooth distributions across space rather than discrete entities. Additionally, theoretical physics prioritizes the universality of laws, seeking principles that apply consistently across all scales and conditions, independent of specific local contexts. The term "theoretical physics" emerged in the 19th century, particularly in German-speaking academic circles, to delineate this deductive, model-based pursuit from the more applied or experimentally oriented aspects of the discipline. Its scope encompasses phenomena from the subatomic realm, such as the strong interactions described by quantum chromodynamics, to vast cosmic structures governed by general relativity, unifying diverse scales under a coherent theoretical umbrella.

Distinction from Experimental Physics

Theoretical physics primarily involves the development of mathematical models and hypotheses to explain and predict physical phenomena, relying on deduction from abstract principles rather than direct observation. In contrast, experimental physics focuses on designing and conducting measurements to collect empirical data, testing hypotheses through controlled observations and instrumentation. The two fields are interdependent, with theoretical models guiding experimental design by specifying which phenomena to investigate and predicting outcomes to verify. For instance, the Higgs boson was theoretically predicted in 1964 as part of the electroweak symmetry-breaking mechanism in the Standard Model, directing experimental searches at particle accelerators. Conversely, experimental results can refine or falsify theories; the 1887 Michelson-Morley experiment's null result, which failed to detect the luminiferous ether, undermined classical ether theories and paved the way for special relativity. Philosophically, theoretical physics employs the hypothetico-deductive method, where hypotheses are formulated and logical consequences are derived to make testable predictions. Experimental physics, however, often utilizes inductive reasoning, generalizing broader principles from accumulated specific observations and data patterns. A key challenge in distinguishing the fields arises from computational simulations, which blend theoretical modeling with experimental-like validation by numerically solving equations to mimic real-world systems, often serving as a bridge between pure prediction and empirical testing.

Historical Development

Ancient and Classical Foundations

The foundations of theoretical physics trace back to ancient philosophical inquiries into the nature of motion and change, particularly among Greek thinkers. Aristotle (384–322 BCE), in his work Physics, proposed a teleological framework where natural phenomena are explained through four causes: material (the substance composing an object), formal (its structure or essence), efficient (the agent producing change), and final (its purpose or end goal). He distinguished natural motion—such as the downward fall of earth or upward rise of fire—as inherent to elements seeking their "natural place," contrasting it with violent motion imposed by external forces. This qualitative approach dominated early conceptions of dynamics, influencing subsequent thought for over a millennium.

Building on these early ideas, Archimedes (c. 287–212 BCE) advanced quantitative methods in statics and hydrostatics through his treatises On Floating Bodies and On the Equilibrium of Planes. In On Floating Bodies, he formulated the principle that a body immersed in a fluid experiences an upward buoyant force equal to the weight of the displaced fluid, enabling precise calculations for floating objects and laying groundwork for hydrostatics. His work on levers established the law of the lever, with balancing weights inversely proportional to their distances from the fulcrum, expressed as the condition where moments balance: for weights w_1 and w_2 at distances d_1 and d_2, w_1 d_1 = w_2 d_2. These contributions shifted focus toward mathematical rigor in analyzing forces and equilibrium.

During the medieval period, Islamic scholars refined observational and analytical techniques, bridging ancient and modern paradigms. Ibn al-Haytham (c. 965–1040 CE), in his Book of Optics, pioneered an experimental methodology by systematically testing hypotheses on light propagation, reflection, and refraction, demonstrating that light travels in straight lines from objects to the eye and refuting emission theories of vision. His controlled experiments with pinhole cameras and lenses emphasized repeatable observations, prefiguring the scientific method and laying empirical foundations for later physical theories. In medieval Europe, Nicole Oresme (c. 1320–1382) introduced graphical representations of motion in his Tractatus de configurationibus qualitatum et motuum, plotting velocity against time to visualize uniform acceleration as a linear increase, allowing geometric proofs of the mean speed theorem without algebraic notation. This innovation facilitated conceptual analysis of changing qualities like speed, influencing later kinematic thought.

The Scientific Revolution marked a pivotal shift toward empirical and mathematical modeling of motion. Galileo Galilei (1564–1642), in Two New Sciences (1638), developed kinematics by studying inclined planes and pendulums, establishing that objects in free fall accelerate uniformly regardless of mass and introducing the concept of inertia: bodies maintain uniform motion in the absence of friction or external forces. His resolution of projectile trajectories into horizontal (constant velocity) and vertical (accelerated) components provided a vectorial framework for dynamics. Complementing this, Johannes Kepler (1571–1630) derived his three laws of planetary motion from the precise astronomical data of Tycho Brahe (1546–1601), published in Astronomia Nova (1609) and Harmonices Mundi (1619): planets orbit in ellipses with the Sun at one focus; a line from the Sun to a planet sweeps equal areas in equal times (indicating conserved angular momentum); and the square of the orbital period is proportional to the cube of the semi-major axis (T^2 \propto a^3). These empirical laws challenged geocentric models and demanded a unified theoretical explanation. Isaac Newton's Philosophiæ Naturalis Principia Mathematica (1687) synthesized these developments into a comprehensive mechanical framework.
Newton unified terrestrial and celestial motion through his three laws of motion—the first stating the law of inertia, the second relating force to acceleration (F = ma), and the third describing action-reaction pairs—and his law of universal gravitation, positing that every mass attracts every other with a force proportional to the product of their masses and inversely proportional to the square of their separation: F = G \frac{m_1 m_2}{r^2}, where G is the gravitational constant. By demonstrating that Kepler's laws follow from this inverse-square force applied to elliptical orbits, Newton established a deterministic, mathematical basis for classical mechanics, transforming theoretical physics into a predictive science.
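As a simple numerical check of this synthesis, the sketch below (not from the original text; the gravitational constant, solar mass, and rounded semi-major axes are example inputs) evaluates Kepler's third law in its Newtonian form, T = 2\pi \sqrt{a^3 / (G M_\odot)}, for three planets:

```python
# Illustrative sketch: orbital periods from Newton's form of Kepler's third
# law, T = 2*pi*sqrt(a^3 / (G*M_sun)).  Constants and semi-major axes are
# rounded example values, not figures asserted by the text above.
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg

def period_years(a_meters):
    T_seconds = 2 * math.pi * math.sqrt(a_meters**3 / (G * M_sun))
    return T_seconds / (365.25 * 24 * 3600)

planets = {"Mercury": 5.79e10, "Earth": 1.496e11, "Jupiter": 7.785e11}
for name, a in planets.items():
    print(f"{name:8s}: a = {a:.3e} m  ->  T = {period_years(a):.2f} yr")
# Expected output: roughly 0.24, 1.00, and 11.9 years, matching observation.
```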

19th and Early 20th Century Advances

The 19th century marked a pivotal shift in theoretical physics toward unifying disparate phenomena through mathematical frameworks, building upon classical mechanics to address heat, electricity, and magnetism. In thermodynamics, Sadi Carnot introduced the concept of an ideal heat engine in 1824, describing a reversible cycle that maximizes work output from heat transfer between reservoirs at different temperatures, laying the groundwork for the second law of thermodynamics. This model, analyzed without knowledge of energy conservation, emphasized efficiency limits based on temperature differences. Rudolf Clausius formalized entropy in 1865 as a state function quantifying irreversible processes, defined mathematically as S = \int \frac{dQ_{\text{rev}}}{T}, where dQ_{\text{rev}} is reversible heat transfer and T is absolute temperature, establishing that entropy increases in isolated systems. Ludwig Boltzmann advanced this in the late 19th century through statistical mechanics, linking macroscopic thermodynamic properties to microscopic particle states; his 1877 formula S = k \ln W, with k as Boltzmann's constant and W as the number of microstates, probabilistically explained entropy as a measure of disorder, bridging atomic chaos to observable irreversibility.

Electromagnetism saw profound unification with James Clerk Maxwell's 1865 equations, a set of four partial differential equations that integrated electric and magnetic fields into a single electromagnetic field theory, predicting that changing electric fields generate magnetic fields and vice versa. These equations implied the existence of electromagnetic waves propagating at speed c = \frac{1}{\sqrt{\epsilon_0 \mu_0}}, where \epsilon_0 and \mu_0 are the permittivity and permeability of free space, respectively, aligning theoretically with the measured speed of light and foreshadowing light as an electromagnetic phenomenon. This framework resolved inconsistencies in earlier theories, such as Ampère's law, by incorporating displacement currents, enabling predictions of phenomena like radio waves.

Advances in atomic theory emerged from experimental insights interpreted theoretically. J.J. Thomson proposed the plum pudding model in 1904, envisioning the atom as a uniform sphere of positive charge embedding negatively charged electrons to achieve electrical neutrality, with electrons oscillating to explain spectral lines and stability. This model accounted for the atom's overall neutrality and size based on electron discovery data. In 1911, Ernest Rutherford refined this through scattering experiments, proposing a nuclear model where most of the mass and positive charge concentrate in a tiny central nucleus, with electrons orbiting at a distance, as evidenced by alpha particles deflecting sharply from gold foil, implying a dense core rather than diffuse charge.

Early 20th-century relativity addressed inconsistencies between Newtonian mechanics and electromagnetism. Hendrik Lorentz developed the Lorentz transformations in 1904 to reconcile mechanics with the invariance of light speed in the ether frame, introducing length contraction and time dilation factors \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} for moving observers, preserving electromagnetic laws across inertial frames. Albert Einstein's 1905 special theory of relativity dispensed with the ether, positing that light speed is constant in all inertial frames and the laws of physics are identical therein, leading to the equivalence of mass and energy via E = mc^2, where m is rest mass and c is the speed of light, revolutionizing concepts of space, time, and energy.
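Two of the quantitative claims above lend themselves to a quick numerical check; the following sketch (not part of the original text; the constants are rounded example values) evaluates the wave speed c = 1/\sqrt{\epsilon_0 \mu_0} implied by Maxwell's equations and the Lorentz factor for several speeds:

```python
# Illustrative sketch: the electromagnetic wave speed 1/sqrt(eps0*mu0) and the
# Lorentz factor gamma = 1/sqrt(1 - v^2/c^2) quoted in the text above.
import math

eps0 = 8.854e-12           # permittivity of free space, F/m
mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

c = 1.0 / math.sqrt(eps0 * mu0)
print(f"c = 1/sqrt(eps0*mu0) = {c:.4e} m/s")   # ~2.998e8 m/s, the speed of light

def lorentz_gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for beta in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {beta:.2f} c  ->  gamma = {lorentz_gamma(beta * c):.3f}")
```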

Post-World War II Developments

Following World War II, theoretical physics saw significant advancements in quantum field theory, particularly through the resolution of infinities plaguing perturbative calculations in quantum electrodynamics (QED). In the late 1940s, Sin-Itirō Tomonaga, Julian Schwinger, and Richard Feynman independently developed the renormalization technique, which systematically absorbs infinite quantities into redefined physical parameters like charge and mass, yielding finite, accurate predictions for electromagnetic interactions. Tomonaga's covariant formulation in 1946 provided a relativistically invariant framework for handling field interactions, while Schwinger's 1948 approach used canonical transformations to derive renormalized equations, and Feynman's path-integral method introduced diagrammatic representations that simplified computations. Their work, unified by Freeman Dyson's 1949 synthesis, restored QED as a predictive theory, matching experimental precision to parts per thousand for phenomena like the electron's anomalous magnetic moment.

The 1960s marked a pivotal shift with the introduction of spontaneous symmetry breaking in gauge theories, enabling massive gauge bosons without violating gauge invariance. This mechanism, explored by Yoichiro Nambu in analogy to superconductivity and formalized by Peter Higgs, François Englert, Robert Brout, and collaborators, posits a scalar field acquiring a nonzero vacuum expectation value, "hiding" symmetries and generating particle masses. In 1964, Higgs demonstrated how this applies to gauge theories, producing massive vector bosons alongside a neutral scalar remnant. This breakthrough resolved longstanding issues in weak interactions, paving the way for electroweak unification. Sheldon Glashow's 1961 SU(2) × U(1) gauge model laid the groundwork, but it predicted massless bosons; Steven Weinberg's 1967 incorporation of the Higgs mechanism yielded massive W and Z bosons, with the photon remaining massless, while Abdus Salam independently developed a parallel formulation in 1968. These efforts culminated in the electroweak theory, predicting neutral currents later confirmed experimentally.

The emergence of the Standard Model in the 1970s integrated electroweak theory with quantum chromodynamics (QCD), unifying the electromagnetic, weak, and strong forces. The Glashow-Weinberg-Salam framework, augmented by the quark model proposed by Murray Gell-Mann and George Zweig in 1964, gained empirical support through deep inelastic scattering experiments at SLAC in the late 1960s and early 1970s. These experiments probed proton structure at high energies, revealing scaling behavior consistent with point-like quarks as predicted by James Bjorken and Richard Feynman's parton model, confirming quarks as fundamental constituents with fractional charges. By 1973, asymptotic freedom in QCD, calculated by David Gross, Frank Wilczek, and David Politzer, explained why the strong coupling grows at low energies, consistent with quark confinement, while allowing perturbative calculations at high energies, solidifying the Standard Model's core.

In cosmology, post-war theoretical efforts refined the Big Bang model using general relativity, emphasizing the Friedmann-Lemaître-Robertson-Walker (FLRW) metric to describe an expanding, homogeneous, isotropic universe. Originally formulated in the 1920s and 1930s, the metric underwent post-1945 enhancements to incorporate radiation, matter, and vacuum energy densities, with the scale factor a(t) governing expansion. The line element is given by ds^2 = -dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - kr^2} + r^2 d\theta^2 + r^2 \sin^2\theta d\phi^2 \right], where k denotes spatial curvature (k = +1, 0, -1) and the coordinates are comoving. Refinements from the late 1940s through the 1960s, including Gamow's nucleosynthesis predictions and Peebles' recombination calculations, aligned the model with observations like the cosmic microwave background, establishing the hot Big Bang as the consensus framework by the 1970s.
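To make the role of the FLRW scale factor concrete, the sketch below (not from the source; the density parameters and Hubble constant are rounded example inputs) integrates the first Friedmann equation for a flat universe to obtain its age:

```python
# Illustrative sketch: age of a flat FLRW universe, t0 = integral_0^1 da/(a*H(a)),
# with H(a) = H0*sqrt(Om/a^3 + Or/a^4 + OL).  Parameter values are rounded
# example inputs, not values asserted by the text above.
import math
from scipy.integrate import quad

H0_km_s_Mpc = 67.7
KM_PER_MPC = 3.086e19
H0 = H0_km_s_Mpc / KM_PER_MPC          # Hubble constant in 1/s
Om, Orad, OL = 0.31, 9e-5, 0.69        # matter, radiation, vacuum energy

def hubble(a):
    return H0 * math.sqrt(Om / a**3 + Orad / a**4 + OL)

age_s, _ = quad(lambda a: 1.0 / (a * hubble(a)), 1e-10, 1.0)
print(f"t0 ~ {age_s / 3.156e16:.2f} Gyr")    # roughly 13.8 Gyr
```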

Fundamental Methods and Tools

Mathematical Frameworks

Theoretical physics relies on a variety of mathematical frameworks to model physical phenomena, ranging from classical to quantum regimes. These tools provide the language for formulating laws, deriving equations, and uncovering symmetries, enabling predictions and conceptual insights. Central to this are differential and integral calculus, which underpin variational principles; linear algebra and tensor analysis, essential for describing spacetime and fields; group theory, which captures symmetries and conservation laws; and functional analysis, crucial for quantum descriptions.

Differential and integral calculus forms the foundational toolkit for theoretical physics, particularly through variational methods that extremize functionals to yield equations of motion. In classical mechanics, the Euler-Lagrange equations are derived from the principle of least action, expressed as \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}} \right) - \frac{\partial L}{\partial q} = 0, where L is the Lagrangian function depending on generalized coordinates q and velocities \dot{q}. This formulation, introduced by Joseph-Louis Lagrange, reformulates Newtonian mechanics in a coordinate-independent way, facilitating the treatment of constraints and complex systems.

Linear algebra and tensor calculus extend these ideas to multidimensional spaces and curved geometries, with tensors representing physical quantities that transform covariantly under coordinate changes. The Einstein summation convention simplifies tensor expressions by implying summation over repeated indices, such as in A^i = B^{ij} C_j, avoiding explicit \sum symbols and streamlining calculations in relativity. In general relativity, the metric tensor g_{\mu\nu} defines distances in curved spacetime via the line element ds^2 = g_{\mu\nu} dx^\mu dx^\nu, originating from Bernhard Riemann's work on differential geometry, which provides the geometric structure for gravitational fields. This tensor encodes the spacetime curvature, allowing the formulation of geodesic equations and field equations without reference to flat-space coordinates.

Group theory offers a powerful framework for understanding symmetries in physical laws, where continuous transformation groups classify particles and interactions. For instance, the SU(3) flavor symmetry underlies the Eightfold Way in hadron physics, organizing hadrons into multiplets like the octet of baryons, as proposed by Murray Gell-Mann. Noether's theorem links these symmetries to conservation laws: for every differentiable symmetry of the action, there exists a corresponding conserved quantity, such as time-translation invariance implying conservation of energy via \frac{\partial L}{\partial \dot{q}} \dot{q} - L = \text{constant}. Formally stated in Emmy Noether's 1918 work, the theorem applies to Lagrangian systems and has profound implications for invariance principles across physics.

Functional analysis generalizes these structures to infinite-dimensional spaces, vital for quantum mechanics. Hilbert spaces, complete inner product spaces, serve as the arena for quantum states, where wave functions are vectors and observables are self-adjoint operators, formalized by John von Neumann to rigorize the probabilistic interpretation. In the path integral formulation, quantum amplitudes are computed as \langle q_f | q_i \rangle = \int \mathcal{D}q(t) \, e^{i S/\hbar}, summing over all paths from initial to final configurations weighted by the action S, as developed by Richard Feynman to bridge classical and quantum dynamics. This approach unifies quantum mechanics and quantum field theory, emphasizing functional integrals over configuration spaces.
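As a concrete illustration of the variational machinery described above, the following SymPy sketch (not from the source; the one-dimensional harmonic oscillator and its symbols are chosen purely for the example) derives and solves the Euler-Lagrange equation for L = \frac{1}{2} m \dot{x}^2 - \frac{1}{2} k x^2:

```python
# Illustrative sketch: symbolic Euler-Lagrange equation for a 1D harmonic
# oscillator, using SymPy's euler_equations helper.
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m, k = sp.symbols('t m k', positive=True)
x = sp.Function('x')(t)

# Lagrangian L = T - V for the oscillator.
L = sp.Rational(1, 2) * m * sp.diff(x, t)**2 - sp.Rational(1, 2) * k * x**2

# d/dt(dL/dxdot) - dL/dx = 0, i.e. m*x'' + k*x = 0 up to an overall sign.
eom = euler_equations(L, x, t)[0]
print(eom)
print(sp.dsolve(eom, x))   # harmonic solution with angular frequency sqrt(k/m)
```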

Computational Approaches

Computational approaches in theoretical physics involve numerical techniques to approximate solutions to complex systems that are intractable analytically, particularly for nonlinear partial differential equations and many-body interactions. These methods discretize continuous problems into computable forms, enabling simulations on digital computers to explore theoretical predictions. Unlike purely analytical frameworks, such as tensor-based formulations in general relativity, computational methods emphasize iterative algorithms and stochastic sampling to handle high-dimensional spaces and quantum effects.

Monte Carlo methods employ stochastic sampling to estimate integrals and averages in statistical mechanics and quantum systems, providing reliable results for equilibrium properties through repeated random trials. In quantum many-body physics, quantum Monte Carlo (QMC) techniques, such as variational Monte Carlo and diffusion Monte Carlo, project the many-body wave function onto trial states to compute ground-state energies and correlations, overcoming the exponential scaling of exact diagonalization for systems with dozens of particles. A foundational application is the simulation of the Ising model, where the Metropolis algorithm generates spin configurations by accepting or rejecting spin flips based on the Boltzmann probability, allowing estimation of phase transitions and critical exponents in ferromagnetic lattices (a minimal code sketch of this update rule appears at the end of this subsection). These methods have been pivotal in validating theoretical models of condensed matter, with QMC achieving chemical accuracy for solid-state properties such as cohesive energies.

Finite element methods (FEM) approximate solutions to partial differential equations by dividing the domain into a mesh of elements and solving variational principles locally, offering flexibility for irregular geometries and adaptive refinement. In theoretical fluid dynamics, FEM discretizes the Navier-Stokes equations to model incompressible flows, capturing turbulence and boundary layers in viscous regimes through stabilized formulations that mitigate numerical instabilities. For numerical relativity, adaptive FEM simulates black hole mergers by evolving the Einstein equations on moving meshes, resolving the highly dynamic curvature during inspiral and ringdown phases with error controls below 1% in waveform amplitudes. This approach has enabled predictions of gravitational-wave signals consistent with observations, highlighting the merger's nonlinear dynamics without encountering singularities.

Lattice gauge theory discretizes spacetime into a hypercubic grid to formulate non-Abelian gauge theories like quantum chromodynamics (QCD) on a lattice, allowing non-perturbative computations via path integrals. Introduced by Kenneth Wilson in 1974, this framework explains quark confinement through strong-coupling expansions and enables Monte Carlo sampling of gauge configurations to compute hadron masses and decay constants. In lattice QCD simulations, supercomputers perform hybrid Monte Carlo updates on lattices with spacings below 0.1 fm, achieving precision of 1-2% for light hadron spectra and the strong coupling constant α_s at 1.5 GeV. These calculations, requiring petaflop-scale resources, have confirmed quark confinement, with recent exascale efforts reducing continuum extrapolation errors.

Recent advances incorporate machine learning to enhance pattern recognition in high-dimensional data from particle colliders, accelerating theoretical interpretations post-2010. Neural networks classify event topologies in LHC data, identifying anomalies beyond the Standard Model with sensitivities improved by factors of 2-5 over traditional cuts, as in jet substructure analysis for quark-gluon discrimination.
Generative models like variational autoencoders simulate rare events in QCD processes, reducing computational costs by orders of magnitude while preserving theoretical correlations. These techniques bridge theoretical predictions with collider outputs, enabling faster hypothesis testing for new physics.
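The Metropolis update mentioned above can be stated compactly in code; the sketch below (not from the source; lattice size, temperature, and sweep count are arbitrary example choices, with J = k_B = 1) samples the two-dimensional Ising model and prints an estimate of the mean absolute magnetization:

```python
# Illustrative sketch: Metropolis Monte Carlo for the 2D Ising model on a
# periodic square lattice, in units where J = k_B = 1.
import numpy as np

rng = np.random.default_rng(0)
L, T, n_sweeps = 16, 2.27, 1000          # temperature near the critical point

spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(spins, T):
    """Attempt L*L single-spin flips with the Metropolis acceptance rule."""
    for _ in range(spins.size):
        i, j = rng.integers(0, L, size=2)
        # Sum over the four nearest neighbours with periodic boundaries.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn       # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

mags = []
for sweep in range(n_sweeps):
    metropolis_sweep(spins, T)
    if sweep >= n_sweeps // 2:            # discard the first half as burn-in
        mags.append(abs(spins.mean()))

print(f"<|m|> ~ {np.mean(mags):.3f} at T = {T}")
```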

Mainstream Theories

Classical Mechanics and Electromagnetism

Classical mechanics forms a cornerstone of theoretical physics, providing the foundational framework for describing the motion of macroscopic objects under deterministic forces. Building upon Newtonian principles, advanced formulations like Lagrangian and Hamiltonian mechanics offer elegant, coordinate-independent approaches to solving complex dynamical systems. These methods emphasize variational principles and symmetry, enabling the derivation of equations of motion through optimization of action integrals rather than direct force balances.

Lagrangian mechanics, introduced by Joseph-Louis Lagrange in his 1788 treatise Mécanique Analytique, reformulates the laws of motion using generalized coordinates and the principle of least action. The Lagrangian function is defined as L = T - V, where T is the kinetic energy and V is the potential energy. The equations of motion emerge from the Euler-Lagrange equation: \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_i} \right) - \frac{\partial L}{\partial q_i} = 0, for each generalized coordinate q_i. This approach simplifies problems involving constraints by incorporating them via Lagrange multipliers, and it naturally accommodates time-dependent or dissipative forces through generalized potentials.

Hamiltonian mechanics extends this framework, as developed by William Rowan Hamilton in his 1834 paper "On a General Method in Dynamics." It employs the Hamiltonian H(q, p, t), typically the total energy expressed in terms of generalized coordinates q and conjugate momenta p = \partial L / \partial \dot{q}. The dynamics are governed by Hamilton's equations: \dot{q}_i = \frac{\partial H}{\partial p_i}, \quad \dot{p}_i = -\frac{\partial H}{\partial q_i}. This phase-space representation, where trajectories evolve on a 2n-dimensional manifold for n degrees of freedom, reveals conserved quantities through Poisson brackets and facilitates the study of integrability and chaos in nonlinear systems. Hamiltonian methods are particularly powerful for adiabatic invariants and canonical transformations, which preserve the form of the equations.

In celestial mechanics, these tools address the challenges of multi-body interactions, notably the three-body problem, which lacks a general closed-form solution beyond Keplerian two-body orbits. Approximations via perturbation theory, pioneered by Lagrange in Mécanique Analytique, treat deviations from integrable cases as small corrections. For instance, in the restricted three-body problem—where one body has negligible mass—the motion is expanded in series around equilibrium points, using secular perturbations to average over fast orbital periods. This yields insights into stability, such as the Lagrange points L1–L5, and long-term orbital evolution, as refined by later works including Poincaré's analysis of non-integrability. Perturbative methods enable predictions of planetary perturbations, like those explaining most of Mercury's perihelion precession before general relativity.

Electromagnetism achieves a unified theoretical description through Maxwell's equations, synthesized by James Clerk Maxwell in his 1865 paper "A Dynamical Theory of the Electromagnetic Field." These four coupled partial differential equations encapsulate electric and magnetic phenomena: Gauss's law for electricity, \nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0}, relates the divergence of the electric field to the charge density; Gauss's law for magnetism, \nabla \cdot \mathbf{B} = 0, asserts the absence of magnetic monopoles; Faraday's law, \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, describes induced electric fields from changing magnetic flux; and Ampère's law with Maxwell's correction, \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}, links magnetic curls to currents and time-varying electric fields.
Maxwell's addition of the displacement current term ensures consistency with charge conservation and predicts electromagnetic waves propagating at speed c = 1/\sqrt{\mu_0 \epsilon_0}.

The energy dynamics of electromagnetic fields are quantified by the Poynting vector, derived by John Henry Poynting in his 1884 paper "On the Transfer of Energy in the Electromagnetic Field." Defined as \mathbf{S} = \frac{1}{\mu_0} \mathbf{E} \times \mathbf{B}, it represents the directional energy flux density, with units of power per area. Integrated over a closed surface, \oint \mathbf{S} \cdot d\mathbf{A}, it yields the rate of energy flow out of a volume, complementing the conservation law for electromagnetic energy (Poynting's theorem). In plane waves, |\mathbf{S}| = \frac{E_0 B_0}{\mu_0} = \frac{E_0^2}{c \mu_0}, illustrating the intensity and momentum transport of radiation.

Relativistic extensions of classical electrodynamics incorporate radiation reaction effects, notably the Abraham-Lorentz force, which accounts for the self-force on an accelerating charge due to its own emitted radiation. Originally derived by Max Abraham in 1903 and refined by Hendrik Lorentz in 1904, the non-relativistic form is \mathbf{F}_{AL} = \frac{\mu_0 q^2}{6 \pi c} \dot{\mathbf{a}}, where \dot{\mathbf{a}} is the jerk (the time derivative of the acceleration \mathbf{a}). This term modifies Newton's second law to m \mathbf{a} = \mathbf{F}_{ext} + \mathbf{F}_{AL}, capturing energy loss to radiation and resolving inconsistencies in point-charge models by introducing a characteristic time scale \tau = \frac{\mu_0 q^2}{6 \pi m c} \approx 10^{-23} s for electrons. Despite challenges like runaway solutions, it provides essential corrections for ultra-relativistic particles in accelerators.
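To show how the Hamiltonian formulation translates into practice, the sketch below (not from the source; the pendulum parameters and tolerances are example choices) integrates Hamilton's equations for a simple pendulum and checks that the numerical energy stays nearly constant:

```python
# Illustrative sketch: Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq for
# a simple pendulum with H = p^2/(2*m*l^2) - m*g*l*cos(q).
import numpy as np
from scipy.integrate import solve_ivp

m, l, g = 1.0, 1.0, 9.81      # example mass, length, gravitational acceleration

def hamilton_rhs(t, y):
    q, p = y
    dq = p / (m * l**2)            # dH/dp
    dp = -m * g * l * np.sin(q)    # -dH/dq
    return [dq, dp]

sol = solve_ivp(hamilton_rhs, (0.0, 10.0), [1.0, 0.0], rtol=1e-9, atol=1e-9)

def energy(q, p):
    return p**2 / (2 * m * l**2) - m * g * l * np.cos(q)

E0, Ef = energy(1.0, 0.0), energy(*sol.y[:, -1])
print(f"relative energy drift over 10 s: {abs(Ef - E0) / abs(E0):.2e}")
```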

Special and General Relativity

Special relativity, formulated by Albert Einstein in 1905, establishes that the laws of physics remain invariant under Lorentz transformations and that the speed of light in vacuum is constant for all observers, regardless of their relative motion. This framework resolves inconsistencies between Newtonian mechanics and Maxwell's electromagnetism by treating space and time as interconnected components of a unified four-dimensional continuum known as spacetime, introduced by Hermann Minkowski in 1908. Lorentz invariance ensures that physical quantities transform covariantly, preserving the structure of spacetime intervals ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2. Key implications include time dilation, where the time interval t measured by an observer for a moving clock is longer than the proper time t_0 elapsed on the clock itself, given by the formula t = \frac{t_0}{\sqrt{1 - \frac{v^2}{c^2}}}, with v as the relative velocity and c the speed of light. Another cornerstone is mass-energy equivalence, expressed as E = mc^2, which demonstrates that mass and energy are interchangeable forms, derived from considerations of energy and momentum conservation in relativistic systems. These relations underpin phenomena such as the relativistic increase in inertial mass and the invariance of the spacetime interval, fundamentally altering classical notions of simultaneity and absolute time.

General relativity extends special relativity to include gravitation, positing in 1915 that gravity arises from the curvature of spacetime induced by mass and energy, with objects following geodesic paths in this curved geometry. The theory's mathematical foundation is the Einstein field equations, G_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}, where G_{\mu\nu} is the Einstein tensor encoding spacetime curvature, T_{\mu\nu} the stress-energy tensor representing matter and energy distribution, G Newton's gravitational constant, and c the speed of light. Geodesic motion describes the free-fall trajectories of particles and light, determined by the metric tensor g_{\mu\nu} via the geodesic equation \frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta} \frac{d x^\alpha}{d\tau} \frac{d x^\beta}{d\tau} = 0, where \Gamma are the Christoffel symbols and \tau the proper time.

A pivotal exact solution is the Schwarzschild metric, derived by Karl Schwarzschild in 1916 for the vacuum around a non-rotating, spherically symmetric mass M: ds^2 = \left(1 - \frac{2GM}{c^2 r}\right) c^2 dt^2 - \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2 - r^2 (d\theta^2 + \sin^2\theta d\phi^2). This predicts an event horizon at the Schwarzschild radius r_s = 2GM/c^2 and a central singularity at r = 0, where curvature invariants diverge, signaling a breakdown of classical predictability. The theory also implies singularities as generic features in gravitational collapse, where curvature becomes infinite under certain initial conditions.

Theoretical predictions of general relativity include gravitational waves, linearized perturbations of the metric propagating at light speed, first derived by Einstein in 1916 from the field equations in weak-field approximations. Another forecast is the deflection of light by gravitational fields; during the 1919 solar eclipse, expeditions led by Arthur Eddington measured starlight bending near the Sun, confirming the predicted deflection angle of 1.75 arcseconds to within experimental error. Exotic structures like wormholes, exemplified by the Einstein-Rosen bridge in 1935, emerge as topological connections between distant spacetime regions in certain solutions, though they are unstable in classical general relativity. To construct a static cosmological model in 1917, Einstein introduced the cosmological constant \Lambda into the field equations, modifying them to G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}, representing a uniform energy density inherent to empty space.
In contemporary cosmology, this term is reinterpreted as dark energy, accounting for the observed accelerated expansion of the universe, consistent with data showing \Lambda dominating the energy budget at approximately 70%.
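The Schwarzschild quantities quoted above can be evaluated directly; the sketch below (not from the source; the constants and sample inputs are rounded example values) computes the Schwarzschild radius of the Sun and the two time-dilation factors discussed in this section:

```python
# Illustrative sketch: Schwarzschild radius r_s = 2*G*M/c^2, special-relativistic
# time dilation, and gravitational time dilation from the Schwarzschild metric.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

def schwarzschild_radius(M):
    return 2.0 * G * M / c**2

def moving_clock_interval(t0, v):
    """t = t0 / sqrt(1 - v^2/c^2): interval measured for a clock moving at v."""
    return t0 / math.sqrt(1.0 - (v / c)**2)

def gravitational_dilation(t_far, r, M):
    """Proper time at radius r relative to a distant observer's interval."""
    return t_far * math.sqrt(1.0 - schwarzschild_radius(M) / r)

print(f"r_s(Sun) = {schwarzschild_radius(M_sun) / 1e3:.2f} km")          # ~2.95 km
print(f"1 s of proper time at v = 0.9c: {moving_clock_interval(1.0, 0.9 * c):.3f} s")
print(f"1 s at the solar surface:       {gravitational_dilation(1.0, 6.96e8, M_sun):.9f} s")
```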

Quantum Mechanics and Field Theory

Quantum mechanics revolutionized theoretical physics by introducing a probabilistic framework for describing the behavior of matter and energy at atomic and subatomic scales, fundamentally differing from classical mechanics through the incorporation of wave-particle duality. This duality posits that particles, such as electrons, exhibit both particle-like and wave-like properties depending on the experimental context. Louis de Broglie proposed in 1924 that every particle of momentum p has an associated wavelength \lambda = h / p, where h is Planck's constant, extending the wave nature previously observed in light to matter. This hypothesis was experimentally verified through electron diffraction experiments, confirming the wave aspect of particles. Complementing this, Werner Heisenberg's uncertainty principle, formulated in 1927, establishes a fundamental limit on the precision with which certain pairs of physical properties, such as position x and momentum p, can be simultaneously known: \Delta x \Delta p \geq \hbar / 2, where \hbar = h / 2\pi. This principle arises from the non-commutative nature of quantum operators and underscores the inherent indeterminacy in quantum systems, prohibiting classical trajectories and emphasizing statistical predictions over deterministic outcomes.

The mathematical foundation of non-relativistic quantum mechanics is provided by the Schrödinger equation, introduced by Erwin Schrödinger in 1926, which governs the time evolution of the wave function \psi representing the quantum state of a system. The time-dependent form is given by i \hbar \frac{\partial \psi}{\partial t} = \hat{H} \psi, where \hat{H} is the Hamiltonian operator encapsulating the total energy, including kinetic and potential terms. Solutions to this equation yield eigenvalues corresponding to measurable observables, such as energy levels in atoms, with the wave function's squared modulus |\psi|^2 providing the probability density for finding a particle in a given region. For stationary states, the time-independent equation \hat{H} \psi = E \psi determines discrete energy spectra, explaining phenomena like atomic stability and spectral lines, which eluded classical models. This framework successfully described early atomic models by incorporating quantization, marking a shift from semi-classical approximations to a fully wave-based theory.

Quantum field theory (QFT) extends quantum mechanics to relativistic regimes and incorporates particle creation and annihilation, essential for describing interactions in particle physics. A cornerstone is the Dirac equation, derived by Paul Dirac in 1928, which combines quantum mechanics with special relativity for spin-1/2 fermions like electrons: (i \gamma^\mu \partial_\mu - m) \psi = 0, where \gamma^\mu are the Dirac matrices, \partial_\mu represents spacetime derivatives, m is the particle mass, and \psi is a four-component spinor. This equation predicts the existence of antimatter, such as the positron, and naturally incorporates electron spin, resolving inconsistencies in earlier relativistic quantum treatments. In QFT, particles are excitations of underlying fields, and interactions are computed perturbatively using Feynman diagrams, introduced by Richard Feynman in 1948 as a graphical method to represent terms in the perturbative expansion of scattering amplitudes. These diagrams visually encode particle exchanges and loops, simplifying calculations in quantum electrodynamics (QED) and enabling precise predictions for processes like electron-photon scattering, verified to extraordinary accuracy. The Standard Model of particle physics, developed in the 1970s, represents the culmination of QFT applications, unifying the electromagnetic, weak, and strong nuclear interactions through a gauge-invariant Lagrangian.
The core structure includes fields for the SU(3)_C × SU(2)_L × U(1)_Y gauge group, describing quarks and leptons interacting via gluons (strong interaction), W and Z bosons (weak interaction), and photons (electromagnetic interaction). Fermion masses arise from Yukawa couplings in terms like \bar{\psi}_L \phi \psi_R, where \phi is the Higgs field, which also generates gauge boson masses via spontaneous symmetry breaking. The full Lagrangian is \mathcal{L}_{SM} = \mathcal{L}_{gauge} + \mathcal{L}_{fermion} + \mathcal{L}_{Yukawa} + \mathcal{L}_{Higgs}, with the gauge sector featuring terms like -\frac{1}{4} F_{\mu\nu}^a F^{a\mu\nu} for the field strengths. This framework, building on electroweak unification by Glashow, Weinberg, and Salam, has been extensively validated by experiments, though it leaves gravity outside its scope and requires 19 free parameters.
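As a minimal numerical counterpart to the stationary Schrödinger equation, the sketch below (not from the source; natural units \hbar = m = L = 1 and the grid size are assumptions made for the example) diagonalizes a finite-difference Hamiltonian for a particle in a one-dimensional infinite well and compares the lowest levels with the analytic spectrum E_n = n^2 \pi^2 \hbar^2 / (2 m L^2):

```python
# Illustrative sketch: finite-difference solution of H*psi = E*psi for a
# particle in a 1D infinite square well, in units hbar = m = L = 1.
import numpy as np

N = 1000                     # interior grid points
dx = 1.0 / (N + 1)

# Kinetic operator -(1/2) d^2/dx^2 via central differences; the infinite
# walls are imposed by keeping only interior points.
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

E_num = np.linalg.eigvalsh(H)[:3]
E_exact = np.array([n**2 * np.pi**2 / 2.0 for n in (1, 2, 3)])

for n, (e_n, e_x) in enumerate(zip(E_num, E_exact), start=1):
    print(f"n = {n}: numerical {e_n:.4f}   analytic {e_x:.4f}")
```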

Proposed and Emerging Theories

Grand Unified Theories

Grand unified theories (GUTs) seek to unify the electromagnetic, weak, and strong nuclear forces of the Standard Model into a single gauge interaction at high energies, providing a more fundamental description beyond the separate gauge groups SU(3)_C × SU(2)_L × U(1)_Y. These theories predict new phenomena, such as baryon number violation, arising from the enlarged gauge structure.

The simplest GUT is the SU(5) model proposed by Howard Georgi and Sheldon Glashow in 1974, which embeds the Standard Model gauge group into the simple Lie group SU(5). In this framework, the fermions of one generation are accommodated in the 10 and \bar{5} representations, while the Higgs sector includes a 5 and \bar{5} to break the electroweak symmetry. A key prediction is proton decay mediated by heavy gauge bosons (X and Y), which act as leptoquarks by coupling quarks to leptons, violating both baryon and lepton number conservation. The dominant decay mode is p → e^+ π^0, with a predicted lifetime around 10^{30} to 10^{31} years in the minimal non-supersymmetric version, though experimental lower limits from Super-Kamiokande exceed 10^{34} years, constraining the model.

Extensions to larger groups, such as SO(10), address limitations of SU(5) by incorporating right-handed neutrinos into the fermion spectrum. In the SO(10) model, introduced by Georgi in 1975, each fermion generation fits into a single 16-dimensional spinor representation, naturally including a right-handed neutrino ν_R as a singlet under SU(5). This enables the seesaw mechanism, where heavy right-handed neutrinos with masses near the unification scale suppress the light neutrino masses via m_ν ≈ (y_ν v)^2 / M_R, explaining observed neutrino oscillations without unnaturally small parameters. SO(10) also allows for intermediate symmetry-breaking patterns, such as SO(10) → SU(5) × U(1) or SU(4)_C × SU(2)_L × SU(2)_R, enhancing predictive power for fermion masses and mixings.

The theoretical motivation for GUTs stems from the renormalization group evolution of gauge couplings, which run logarithmically with energy: in the Standard Model they approach one another at a high scale, and they meet precisely in minimal supersymmetric extensions. Specifically, the one-loop beta functions lead to unification around 10^{16} GeV, where α_1, α_2, and α_3 converge, assuming threshold corrections from heavy particles. However, challenges persist, including the hierarchy problem, where the large separation between the electroweak scale (~100 GeV) and the unification scale requires fine-tuning to keep the Higgs mass light without supersymmetry. Additionally, the lack of experimental evidence for superpartners or leptoquarks at LHC energies up to several TeV has strained minimal GUT realizations, prompting explorations of non-minimal or flavored variants.
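The logarithmic running that motivates unification can be reproduced in a few lines; the sketch below (not from the source; the couplings at M_Z and the one-loop coefficients are standard textbook inputs quoted here as assumptions, and the MSSM case ignores the sparticle threshold) evolves the inverse gauge couplings to the putative GUT scale:

```python
# Illustrative sketch: one-loop running of the inverse gauge couplings,
# alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i/(2*pi) * ln(mu/M_Z).
import numpy as np

M_Z = 91.19                                  # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])   # U(1)_Y (GUT-normalised), SU(2)_L, SU(3)_C
b_SM = np.array([41 / 10, -19 / 6, -7.0])    # Standard Model coefficients
b_MSSM = np.array([33 / 5, 1.0, -3.0])       # MSSM coefficients

def run(alpha_inv0, b, mu):
    return alpha_inv0 - b / (2 * np.pi) * np.log(mu / M_Z)

mu_gut = 2e16                                # GeV, near the quoted GUT scale
for label, b in (("SM  ", b_SM), ("MSSM", b_MSSM)):
    print(label, np.round(run(alpha_inv_MZ, b, mu_gut), 1))
# The SM values only roughly approach one another, while the MSSM values
# nearly coincide around alpha^{-1} ~ 24, illustrating the text's point.
```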

String Theory and Quantum Gravity

String theory emerged as a promising framework for unifying quantum mechanics and general relativity by positing that the fundamental constituents of the universe are one-dimensional strings rather than point particles, with their vibrations giving rise to particles and forces, including gravity. In this approach, strings propagate in a higher-dimensional spacetime, and interactions occur through string splitting and joining, naturally incorporating gravitons as massless string modes. The earliest formulation, bosonic string theory, describes closed and open strings in 26 dimensions, where the critical dimension arises from requiring Lorentz invariance and the absence of anomalies in the quantized theory. However, this theory suffers from issues like tachyons (states with negative mass squared) and lacks fermions, limiting its physical relevance. To address these, superstring theory incorporates supersymmetry, extending the framework to 10 dimensions and including both bosonic and fermionic degrees of freedom, eliminating tachyons and enabling consistent quantization. There are five consistent superstring theories—Type I, Type IIA, Type IIB, and two heterotic variants—unified under M-theory in 11 dimensions, which includes extended objects like branes.

To connect to our observed four-dimensional universe, string theory requires compactification of the extra dimensions, typically on a six-dimensional Calabi-Yau manifold for superstring theories, which preserves supersymmetry and yields a rich landscape of possible low-energy effective theories with varying particle spectra and couplings. Calabi-Yau spaces are Ricci-flat Kähler manifolds that allow for mirror symmetry, relating different compactifications with identical physics. A key insight in string theory is the AdS/CFT correspondence, proposed by Juan Maldacena in 1997, which posits a holographic duality between type IIB superstring theory on anti-de Sitter (AdS) space times a five-sphere in 10 dimensions and a four-dimensional \mathcal{N}=4 supersymmetric Yang-Mills conformal field theory (CFT) on the boundary. This equivalence implies that quantum gravity in the bulk AdS space is fully captured by a non-gravitational CFT, providing a non-perturbative definition of string theory and tools to study gravity at strong coupling.

An alternative approach to quantum gravity is loop quantum gravity (LQG), a background-independent quantization of general relativity using Ashtekar variables, where the Hilbert space is spanned by spin networks—graphs labeled by SU(2) representations (spins) at edges and intertwiners at vertices, representing the quantum geometry of space. Spin networks encode the diffeomorphism-invariant states of the theory, with volume and area operators acting on them to yield discrete spectra. In particular, the area operator for a surface pierced by spin network links has eigenvalues bounded below by a minimal value, A \geq 8\pi \gamma \ell_p^2, where \gamma is the Immirzi parameter and \ell_p^2 = \hbar G / c^3 is the square of the Planck length, implying a granular structure to spacetime at the Planck scale.

Quantum gravity theories like string theory and LQG also address black hole thermodynamics, where the Bekenstein-Hawking entropy formula S = \frac{A}{4 \ell_p^2}—with A the event horizon area—assigns an entropy proportional to the horizon area, derived semi-classically by Jacob Bekenstein in 1973 and confirmed by Stephen Hawking in 1975 through black hole evaporation via Hawking radiation. This formula receives microscopic support in string theory through counting microstates on the horizon or via D-branes, matching the entropy exactly for certain extremal black holes.
In LQG, the entropy arises from counting spin network configurations puncturing the horizon, yielding the Bekenstein-Hawking formula S = \frac{A}{4 \ell_p^2} in the large-area limit after fixing \gamma to match the semi-classical value. The black hole information paradox, highlighted by Hawking in 1976, arises because Hawking radiation appears thermal and independent of the black hole's formation history, suggesting irreversible information loss in unitary quantum evolution. Resolutions in string theory leverage the AdS/CFT duality, where the unitarity of the CFT ensures information preservation, with radiation entangled in a way that reconstructs the interior via holography. In LQG, the paradox is averted by the absence of singularities due to a quantum geometry bounce, forming Planck-scale remnants or transitioning to white holes that release the information.
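The Bekenstein-Hawking formula can be evaluated numerically; the sketch below (not from the source; the SI constants are rounded example inputs) computes the horizon entropy and Hawking temperature for a solar-mass black hole:

```python
# Illustrative sketch: Bekenstein-Hawking entropy S = k_B*A*c^3/(4*G*hbar) and
# Hawking temperature T_H = hbar*c^3/(8*pi*G*M*k_B) for a solar-mass black hole.
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
hbar = 1.055e-34    # J s
k_B = 1.381e-23     # J/K
M_sun = 1.989e30    # kg

def horizon_area(M):
    r_s = 2 * G * M / c**2
    return 4 * math.pi * r_s**2

def bh_entropy(M):
    return k_B * horizon_area(M) * c**3 / (4 * G * hbar)

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(f"S / k_B ~ {bh_entropy(M_sun) / k_B:.2e}")        # ~1e77 for one solar mass
print(f"T_H     ~ {hawking_temperature(M_sun):.2e} K")   # ~6e-8 K, far below the CMB
```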

Applications and Interplay with Experiment

Predictive Power and Verification

Theoretical physics excels in generating precise, testable predictions that have been repeatedly verified through experiments, establishing its foundational role in understanding fundamental forces and particles. A seminal example is the Dirac equation, formulated in 1928, which relativistically quantized the electron and predicted the existence of a positively charged counterpart, the positron, to resolve its negative-energy solutions. This prediction was confirmed just four years later in 1932 by Carl Anderson, who observed positron tracks in cosmic-ray experiments using a cloud chamber, marking the first experimental evidence of antimatter. Another landmark prediction arose from Big Bang cosmology, where Ralph Alpher, Robert Herman, and George Gamow calculated in 1948 that primordial nucleosynthesis would leave a relic radiation field, now known as the cosmic microwave background (CMB), with a temperature around 5 K. This was serendipitously detected in 1965 by Arno Penzias and Robert Wilson, who measured an excess antenna temperature of 3.5 K uniform across the sky, providing strong corroboration for the hot Big Bang model and transforming cosmology.

Verification in theoretical physics often involves high-precision comparisons between theory and experiment, such as in quantum electrodynamics (QED). The Adler-Bell-Jackiw anomaly, a chiral symmetry violation predicted in 1969, manifests in processes like the decay of the neutral pion to two photons (π⁰ → γγ), where the decay rate is dominated by the anomalous triangle diagram. Experimental measurements of this decay, refined over decades to a width of (7.8 ± 0.1) eV, match the anomaly-based prediction at the percent level, confirming its role in the Standard Model. Similarly, precision tests of the Standard Model, exemplified by the muon's anomalous magnetic moment (g-2), probe quantum corrections from virtual particles; the latest measurement yields a_μ^exp = 116 592 070.5(11) × 10^{-11}, determined to roughly a tenth of a part per million, with comparisons against Standard Model calculations revealing subtle tensions that spur further research.

Case studies highlight dramatic confirmations, such as the detection of gravitational waves by LIGO in 2015, which directly observed ripples in spacetime from merging black holes, precisely matching general relativity's waveform predictions from binary inspirals. The signal's frequency evolution and amplitude, detected on September 14, 2015, validated Einstein's 1915 theory in the strong-field regime, opening multimessenger astronomy. Particle accelerators play a crucial role in such verifications; the Large Hadron Collider's ATLAS and CMS experiments discovered the Higgs boson in 2012, observing a resonance at 125 GeV in multiple decay channels with 5σ significance, fulfilling the Standard Model's mechanism for electroweak symmetry breaking. This mass value, predicted within theoretical bounds from radiative corrections, underscores how accelerators test unified theories against data.

Thought Experiments and Conceptual Tools

Thought experiments have long served as essential tools in theoretical physics, enabling physicists to explore abstract concepts and probe the implications of physical laws without the need for empirical apparatus. These mental constructs allow for the examination of hypothetical scenarios that reveal inconsistencies or profound insights into theories, often highlighting counterintuitive aspects of reality. By isolating variables in idealized settings, thought experiments facilitate the development and refinement of theoretical frameworks, such as relativity and quantum mechanics.

In general relativity, Albert Einstein's elevator thought experiment illustrates the equivalence principle, positing that the effects of gravity are indistinguishable from those of acceleration in a local frame. Einstein envisioned an observer in a sealed elevator: if accelerating upward in empty space, the observer would feel a force akin to gravity, leading to the conclusion that light would bend in a gravitational field just as it would in an accelerating frame. This 1907 conceptualization laid the groundwork for general relativity by equating inertial and gravitational mass. The twin paradox, another cornerstone of special relativity, addresses apparent asymmetries in time dilation. Consider two twins: one remains on Earth while the other travels at near-light speed to a distant star and returns; upon reunion, the traveling twin is younger due to time dilation and the asymmetry introduced by the turnaround acceleration, resolving the paradox without violating the theory's postulates. Einstein first alluded to this scenario in his 1905 paper on special relativity, emphasizing its consistency with Lorentz transformations.

Shifting to quantum mechanics, Erwin Schrödinger's cat paradox underscores the peculiarities of superposition and measurement. In this 1935 setup, a cat in a sealed box is linked to a quantum event, such as radioactive decay triggering poison release; until observed, the cat exists in a superposition of alive and dead states, challenging the classical intuition of definite outcomes and highlighting the measurement problem in the Copenhagen interpretation. Schrödinger devised this to critique the probabilistic nature of quantum theory, illustrating how microscopic quantum rules clash with macroscopic reality.

Conceptual tools complement thought experiments by providing abstract frameworks for computation and interpretation in theoretical physics. Feynman diagrams, introduced by Richard Feynman in 1948, offer a pictorial representation of particle interactions in quantum field theory, where lines depict propagating particles and vertices show interactions, simplifying perturbative calculations of scattering amplitudes. These diagrams revolutionized quantum electrodynamics by making complex integrals intuitive and verifiable. Renormalization serves as a key conceptual method in quantum field theory to handle infinities arising in perturbative expansions, redefining parameters like mass and charge to absorb divergences and yield finite, observable predictions. Developed in the late 1940s through contributions from physicists including Tomonaga, Schwinger, and Feynman, it transforms seemingly pathological theories into predictive ones, as seen in the precise agreement of QED with experiment. Despite their power, thought experiments and conceptual tools have limitations; they cannot fully supplant physical experiments for determining empirical parameters or resolving ambiguities in untested regimes, as their outcomes depend on the assumed theoretical framework.

Current Challenges and Frontiers

Unification Efforts

A theory of everything (TOE) aims to unify the Standard Model of particle physics, which describes the electromagnetic, weak, and strong nuclear forces, with general relativity, which governs gravity, into a single coherent framework. This ambitious goal seeks to resolve inconsistencies between quantum field theory and gravitational theory at high energies, potentially explaining all fundamental interactions and the structure of the universe from first principles.

One major obstacle to such unification is the non-renormalizability of general relativity when treated as a quantum field theory. In renormalizable theories like the Standard Model, infinities arising in perturbative calculations can be absorbed through renormalization procedures, but gravity's coupling constant has negative mass dimension, leading to an infinite number of counterterms required at higher orders and rendering the theory unpredictive beyond the Planck scale. Another profound challenge is the vacuum energy problem, or cosmological constant problem, where quantum field theory predicts a vacuum energy density on the order of the Planck scale, approximately 10^{120} times larger than the observed value inferred from cosmological measurements. This discrepancy highlights a fundamental mismatch between theoretical expectations and empirical data, complicating efforts to incorporate gravity into a quantum framework.

Supersymmetry (SUSY) proposes a symmetry between bosons and fermions, predicting superpartner particles (sfermions and gauginos) for each Standard Model particle, which could stabilize the Higgs mass hierarchy and facilitate unification by extending the symmetry group. In minimal supersymmetric extensions, SUSY breaking is expected at the TeV scale to avoid excessive fine-tuning, yet no superpartners have been detected in LHC experiments at collision energies up to about 13 TeV, pushing the lower mass limits for many SUSY particles into the multi-TeV range and prompting refinements or alternatives to the theory.

Models with large extra dimensions offer a potential resolution to the hierarchy problem—the vast disparity between the electroweak scale (~100 GeV) and the Planck scale (~10^{19} GeV)—by allowing gravity to propagate in additional spatial dimensions compactified at scales as large as millimeters. Proposed by Arkani-Hamed, Dimopoulos, and Dvali in 1998, these models dilute gravitational strength in our four-dimensional brane while permitting it to spread into the bulk, naturally lowering the fundamental Planck scale without invoking supersymmetry or other mechanisms. Such frameworks predict observable effects like Kaluza-Klein graviton production at colliders, though none have been confirmed, and they inspire broader explorations including warped-geometry variants.
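The dilution of gravity by extra dimensions follows from a simple scaling relation; the sketch below (an illustration under the simplified assumption M_{Pl}^2 \sim M_*^{\,n+2} R^n, ignoring order-one factors from the compactification volume) estimates how large n equal extra dimensions must be to bring the fundamental scale down to 1 TeV:

```python
# Illustrative sketch: compactification radius R from M_Pl^2 = M_*^(n+2) * R^n
# in natural units, converted to metres with hbar*c.
import math

HBARC_M_GEV = 1.973e-16      # (1 GeV)^-1 expressed in metres
M_PLANCK = 1.22e19           # GeV
M_STAR = 1.0e3               # GeV (1 TeV), the assumed fundamental scale

def compactification_radius(n, m_star=M_STAR):
    R_inverse_gev = (M_PLANCK**2 / m_star**(n + 2)) ** (1.0 / n)
    return R_inverse_gev * HBARC_M_GEV

for n in (1, 2, 6):
    print(f"n = {n}: R ~ {compactification_radius(n):.1e} m")
# n = 1 gives a solar-system-sized radius (long excluded), while n = 2 gives
# millimetre scales of the kind probed by short-range gravity experiments.
```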

Open Problems in Cosmology and Particle Physics

One of the most profound challenges in theoretical physics arises from the composition of the universe at large scales, where observations indicate that ordinary baryonic matter accounts for only about 5% of the total energy density, while dark matter and dark energy dominate with approximately 25% and 70%, respectively. These components, inferred from gravitational effects rather than direct detection, reveal discrepancies between standard theoretical predictions and empirical data, such as the flat rotation curves of galaxies that suggest the presence of non-baryonic mass. In particle physics, similar puzzles emerge at small scales, including neutrino masses and the observed matter-antimatter imbalance, which the Standard Model alone cannot accommodate without extensions.

Dark matter, posited as a non-luminous form of matter necessary to explain gravitational phenomena, remains undetected despite extensive searches, with galaxy rotation curves providing key evidence for its existence. Observations of spiral galaxies, such as those conducted by Vera Rubin in the 1970s and 1980s, showed that orbital velocities of stars and gas remain nearly constant at large radii, rather than declining as expected from visible mass alone under Newtonian gravity, implying an additional unseen mass component that is non-baryonic to avoid conflicting with Big Bang nucleosynthesis constraints. Leading candidates include weakly interacting massive particles (WIMPs), which could arise as thermal relics from the early universe with masses around the electroweak scale, and axions, ultralight particles motivated by the solution of the strong CP problem. While WIMPs are probed through direct-detection experiments seeking nuclear recoils and indirect signals like gamma rays, axions are targeted via their conversion to photons in magnetic fields, yet neither has been confirmed, leaving the nature of dark matter an open question.

The accelerated expansion of the universe, discovered through Type Ia supernova observations in 1998, points to dark energy as a repulsive component counteracting gravity on cosmic scales. Independent teams led by Saul Perlmutter and Brian Schmidt analyzed distant supernovae, finding that their luminosities indicated a universe expanding faster than in a matter-dominated model, with the data favoring a positive \Lambda in the \LambdaCDM framework. In this concordance model, dark energy is parameterized as a constant vacuum energy density driving late-time acceleration, consistent with cosmic microwave background and large-scale structure data. Alternatives like quintessence propose a dynamical scalar field with an evolving equation-of-state parameter w > -1, potentially resolving the coincidence problem of why dark energy dominates today, though such models must mimic \LambdaCDM observations to remain viable.

Neutrino oscillations, confirmed by experiments like Super-Kamiokande, demonstrate that neutrinos have non-zero masses, challenging the massless assumption in the minimal Standard Model and implying physics beyond it. The atmospheric neutrino data revealed muon neutrino disappearance with oscillation parameters indicating a mass-squared difference \Delta m^2 \sim 10^{-3} \, \mathrm{eV}^2 for the \nu_\mu - \nu_\tau sector, establishing mixing among neutrino flavors. The existence of sterile neutrinos, right-handed counterparts that do not interact via the weak force, has been suggested by short-baseline anomalies such as those from LSND and MiniBooNE, potentially explaining excess electron-neutrino appearances, but recent null results from experiments like MicroBooNE have tightened constraints without ruling them out entirely.

The baryon asymmetry of the universe, quantified by the baryon-to-photon ratio \eta \approx 6 \times 10^{-10}, describes why matter outnumbers antimatter despite a presumably symmetric initial state, requiring mechanisms that violate baryon number conservation.
Andrei Sakharov outlined three essential conditions in 1967: baryon number violation, charge-parity (CP) violation, and departure from thermal equilibrium, which must occur in the early universe to generate a net asymmetry. One prominent explanation is leptogenesis, where a lepton asymmetry arises from the out-of-equilibrium decays of heavy right-handed neutrinos in seesaw models, subsequently converted to a baryon asymmetry via sphaleron processes during the electroweak epoch, with the required CP violation linked to neutrino mixing parameters. This framework ties the observed asymmetry to the same physics generating neutrino masses, though the exact scale and viability depend on unresolved details like the heavy neutrino masses.
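The oscillation phenomenology described above reduces, in the two-flavour approximation, to a single formula; the sketch below (not from the source; the mixing angle and baselines are example inputs, with the atmospheric mass splitting quoted above) evaluates P(\nu_\mu \to \nu_\tau) = \sin^2(2\theta)\,\sin^2(1.267\,\Delta m^2 L / E) for several path lengths:

```python
# Illustrative sketch: two-flavour neutrino oscillation probability with
# dm2 in eV^2, L in km, and E in GeV.
import math

def oscillation_probability(dm2_ev2, theta, L_km, E_gev):
    return math.sin(2 * theta)**2 * math.sin(1.267 * dm2_ev2 * L_km / E_gev)**2

dm2 = 2.5e-3          # eV^2, atmospheric mass-squared splitting
theta = math.pi / 4   # near-maximal nu_mu - nu_tau mixing (example value)

for L in (15.0, 500.0, 12800.0):   # km: downward, intermediate, upward-going paths
    p = oscillation_probability(dm2, theta, L, E_gev=1.0)
    print(f"L = {L:8.0f} km, E = 1 GeV: P(nu_mu -> nu_tau) = {p:.3f}")
```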

    Feb 20, 2008 · We explain the motivation for leptogenesis. We review the basic mechanism, and describe subclasses of models. We then focus on recent ...