
Quantum field theory

Quantum field theory (QFT) is the theoretical framework that combines the principles of quantum mechanics and special relativity to describe the behavior of subatomic particles and their interactions through quantized fields that permeate all of spacetime. In this paradigm, particles such as electrons and photons are viewed as excitations, or quanta, of underlying fields, enabling a consistent treatment of phenomena like particle creation, annihilation, and relativistic invariance. This approach resolves inconsistencies in non-relativistic quantum mechanics, such as negative probability densities in relativistic contexts, by promoting fields to operator status in a procedure known as second quantization. QFT emerged in the mid-20th century as physicists sought to reconcile quantum mechanics with Einstein's special relativity, building on earlier work in quantum electrodynamics (QED). Pioneering contributions came from Paul Dirac in the late 1920s, who formulated the Dirac equation for relativistic electrons, and later from Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga in the 1940s, who developed renormalization techniques to handle infinities in perturbative calculations. These advancements culminated in QED as the first successful QFT, accurately predicting phenomena like the Lamb shift and the anomalous magnetic moment of the electron to high precision. By the 1970s, QFT had expanded to encompass the strong and weak nuclear forces through quantum chromodynamics (QCD) and electroweak theory, forming the basis of the Standard Model of particle physics. At its core, QFT employs Lagrangian densities to encode the dynamics of fields, from which equations of motion and symmetries are derived. For instance, scalar fields like the Higgs field are described by terms involving kinetic energy and a potential, while fermionic fields (e.g., quarks and leptons) use Dirac Lagrangians, and gauge fields (e.g., photons, gluons) incorporate local symmetries via the Yang-Mills action. Interactions are computed perturbatively using Feynman diagrams, which represent scattering amplitudes as series expansions in coupling constants, with propagators for particle lines and vertices for interactions. 
Renormalization ensures finite, observable predictions by absorbing divergences into redefined parameters, a procedure essential for theories like QED and QCD. QFT underpins modern particle physics, providing the mathematical language for the Standard Model, which has been experimentally validated at accelerators like the Large Hadron Collider through discoveries such as the Higgs boson in 2012. It also extends to condensed matter physics, describing phenomena like superconductivity via effective field theories, and serves as a foundation for attempts to unify gravity with quantum mechanics in quantum gravity research. Despite its successes, challenges remain, including the hierarchy problem and the lack of a complete theory incorporating gravity.

Overview

Definition and basic concepts

Quantum field theory (QFT) is the quantum mechanical description of relativistic systems, providing a framework that unifies quantum mechanics and special relativity by treating particles as excitations of underlying fields pervading spacetime. In this approach, the fundamental entities are not point-like particles but fields, which resolve inconsistencies arising in single-particle quantum mechanics, such as the variable particle number in high-energy processes. QFT emerged from the need to combine quantum mechanics with special relativity, enabling a consistent treatment of systems where both quantum effects and relativistic speeds are significant. Fields in QFT are operator-valued functions defined on spacetime, satisfying specific commutation or anticommutation relations to incorporate quantum uncertainty and relativistic invariance. These fields are expanded in terms of creation and annihilation operators, which act on multi-particle states to generate or remove quanta corresponding to particles; for instance, the annihilation operator a(\mathbf{k}) destroys a particle with momentum \mathbf{k}, while the creation operator a^\dagger(\mathbf{k}) adds one. The theory distinguishes between different types of fields based on their transformation properties under the Lorentz group: scalar fields (spin-0), which are invariant under rotations; spinor fields (spin-1/2), describing fermions like electrons; and vector fields (spin-1), associated with bosons like photons. A foundational example is the real scalar field, governed by the Klein-Gordon equation for free particles: (\Box + m^2) \phi(x) = 0, where \Box = \partial^\mu \partial_\mu is the d'Alembertian and m is the particle mass, ensuring Lorentz invariance and relativistic propagation. The vacuum state in QFT, denoted |0\rangle, represents the lowest-energy configuration with no particles present and is annihilated by all annihilation operators, such that a(\mathbf{k}) |0\rangle = 0. Particle states are constructed in Fock space, a Hilbert space built by applying creation operators successively to the vacuum, allowing for variable particle numbers and enabling the description of processes like particle annihilation and pair creation. 
This structure underpins QFT's ability to model both high-energy particle phenomena and aspects of condensed matter systems through effective field theories.
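The creation and annihilation operator algebra described above can be checked numerically on a truncated single-mode Fock space. The following Python sketch is an illustration under that truncation (not part of any standard QFT library): it builds a as a matrix, then verifies the commutator [a, a†] = 1 on the untruncated states, the vacuum condition a|0⟩ = 0, and that a† creates a one-particle state.

```python
import numpy as np

# Truncated Fock space: basis |0>, |1>, ..., |N-1> for a single field mode.
N = 8

# Annihilation operator a: a|n> = sqrt(n)|n-1>, as an N x N matrix.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
a_dag = a.conj().T  # creation operator a†

# Canonical commutator [a, a†] = 1 holds exactly on states below the
# truncation boundary (the top corner is a truncation artifact).
comm = a @ a_dag - a_dag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))  # True

# The vacuum |0> is annihilated: a|0> = 0.
vac = np.zeros(N)
vac[0] = 1.0
print(np.allclose(a @ vac, 0))  # True

# a† acting on the vacuum creates the one-particle state |1>.
one = a_dag @ vac
print(int(np.argmax(np.abs(one))))  # 1
```

The truncation is harmless as long as one only probes states well below the cutoff N, which is why the commutator check excludes the last basis state.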

Scope and applications

Quantum field theory (QFT) provides the foundational framework for describing relativistic quantum systems involving many particles and fields, particularly those governed by special relativity, in contrast to non-relativistic quantum mechanics, which suffices for atomic and molecular scales without high speeds or energies. This scope encompasses interactions at fundamental scales where particles are excitations of underlying fields, enabling predictions for phenomena from subatomic to cosmological levels, though it is optimized for scenarios with Lorentz invariance. Key applications of QFT include high-energy particle collisions, where it models scattering amplitudes and decay processes in accelerators like the Large Hadron Collider, allowing verification of particle properties through Feynman diagrams and perturbative expansions. In quantum electrodynamics (QED), a cornerstone QFT, it precisely calculates atomic spectra, such as the Lamb shift in hydrogen, achieving agreement with experiments to parts per million, and explains the electron's anomalous magnetic moment to parts per trillion. For strong interactions, quantum chromodynamics (QCD) applies QFT to describe quark and gluon dynamics within hadrons, capturing the confinement and asymptotic freedom that govern nuclear forces. QFT serves as the theoretical backbone of the Standard Model, unifying the electromagnetic, weak, and strong forces through gauge symmetries, with QED for electromagnetism, electroweak theory for weak interactions, and QCD for the strong force, successfully predicting particle masses and couplings observed in experiments. Beyond full theories, QFT enables effective field theories (EFTs) that approximate low-energy behaviors by integrating out high-energy degrees of freedom; for instance, chiral perturbation theory models pion interactions in quantum chromodynamics at energies below 1 GeV, providing accurate descriptions of meson scattering and decays. 
A primary limitation of QFT is its incompatibility with general relativity, as attempts to quantize gravity yield a non-renormalizable theory with infinities that cannot be systematically absorbed, necessitating separate treatments or extensions like string theory for unification at the Planck scale. This boundary highlights QFT's success with three of the four fundamental forces while underscoring ongoing challenges in incorporating gravity quantum mechanically.

Historical development

Early theoretical foundations

The foundations of quantum field theory trace back to classical field theories, particularly classical electromagnetism, which provided the first comprehensive framework for describing forces as propagating fields rather than instantaneous actions at a distance. In 1865, James Clerk Maxwell unified electricity, magnetism, and light through a set of four partial differential equations that govern the behavior of electric and magnetic fields. These equations, known as Maxwell's equations, are: \nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0}, \quad \nabla \cdot \mathbf{B} = 0, \quad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \quad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}, where \mathbf{E} is the electric field, \mathbf{B} the magnetic field, \rho the charge density, \mathbf{J} the current density, \epsilon_0 the vacuum permittivity, and \mu_0 the vacuum permeability. This formulation revealed electromagnetic waves traveling at the speed of light, establishing fields as fundamental entities with their own dynamics, independent of material sources. Maxwell's work laid the groundwork for relativistic invariance in field descriptions, as the equations are Lorentz covariant, setting the stage for merging field theory with quantum mechanics. The advent of special relativity in 1905 highlighted the need to reconcile quantum theory with relativistic principles, as the non-relativistic Schrödinger equation failed for high-speed particles. An initial attempt was the Klein-Gordon equation of 1926, proposed independently by Oskar Klein and Walter Gordon, a relativistic generalization of the Schrödinger equation for scalar particles: (\square + m^2) \phi = 0, where \square = \partial^\mu \partial_\mu is the d'Alembertian operator and m the mass. However, interpreting this as a single-particle wave equation led to severe challenges, including a probability density \rho = \phi^* \overleftrightarrow{\partial_t} \phi that could take negative values, violating the positivity required for a single-particle probability interpretation. 
This issue, along with negative-energy solutions, prompted interpretations like "hole theory," in which the negative-energy states were filled by a sea of particles, but it remained problematic for a consistent quantum description. In 1928, Paul Dirac resolved many of these issues with his relativistic wave equation for spin-1/2 particles, the Dirac equation: (i \gamma^\mu \partial_\mu - m) \psi = 0, where \gamma^\mu are 4x4 Dirac matrices satisfying the Clifford algebra \{\gamma^\mu, \gamma^\nu\} = 2 g^{\mu\nu}, \psi is a four-component spinor, and natural units are used. This first-order equation yielded positive-definite probability densities and correctly incorporated electron spin, but it also predicted negative-energy states, leading Dirac to propose in 1930 his hole theory: the vacuum as a filled Fermi sea of negative-energy electrons, with "holes" interpreted as positively charged antiparticles (positrons), later confirmed experimentally in 1932. Dirac's 1927 paper had earlier laid foundational ideas for quantum electrodynamics by treating radiation absorption and emission via non-commuting operators, bridging quantum mechanics and classical fields. Parallel early efforts to quantize fields emerged in the late 1920s, likewise promoting classical field amplitudes to non-commuting operators and treating photons as field excitations to resolve wave-particle duality in radiation. Building on this, Werner Heisenberg and Wolfgang Pauli developed a systematic canonical quantization scheme in their 1929–1930 papers, introducing field operators \hat{\phi}(x) and \hat{\pi}(x) satisfying equal-time commutation relations [\hat{\phi}(\mathbf{x},t), \hat{\pi}(\mathbf{y},t)] = i \delta^3(\mathbf{x} - \mathbf{y}), enabling a quantum description of relativistic fields while addressing interactions perturbatively. These works established the operator formalism central to quantum field theory, though infinities in calculations foreshadowed later challenges.
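The Clifford algebra that the Dirac matrices must satisfy can be verified by direct computation. This NumPy sketch constructs the matrices in the standard Dirac representation (one common convention among several) from the Pauli matrices and checks \{\gamma^\mu, \gamma^\nu\} = 2 g^{\mu\nu} for every pair of indices.

```python
import numpy as np

# Pauli matrices, the 2x2 building blocks.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])
Z2 = np.zeros((2, 2))

# Dirac (standard) representation: gamma^0 = diag(I, -I),
# gamma^i = [[0, sigma_i], [-sigma_i, 0]].
gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

# Minkowski metric with signature (+, -, -, -).
g = np.diag([1.0, -1.0, -1.0, -1.0])

# Verify the Clifford algebra {gamma^mu, gamma^nu} = 2 g^{mu nu} * identity.
for mu in range(4):
    for nu in range(4):
        anticomm = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anticomm, 2 * g[mu, nu] * np.eye(4))
print("Clifford algebra verified")
```

The same check passes in any other representation (Weyl, Majorana), since the algebra, not the specific matrices, is what the Dirac equation requires.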

Emergence of quantum electrodynamics

The formulation of quantum electrodynamics (QED) began in the late 1920s with efforts to reconcile quantum mechanics and special relativity in describing the interaction between electromagnetic fields and charged particles, particularly electrons. Paul Dirac initiated this development in 1927 by proposing a quantum theory of the emission and absorption of radiation, treating the electromagnetic field as quantized oscillators interacting with atomic systems through non-commuting variables. Building on this, Pascual Jordan and collaborators (Wolfgang Pauli on covariant commutation relations, Eugene Wigner on spinor fields) advanced the quantization procedure in 1928, applying it systematically to scalar and spinor fields, including the Dirac field for electrons, to ensure relativistic invariance in the field operators and commutation relations. These works established the foundational framework for QED by combining the quantized electromagnetic field with fermionic matter fields, laying the groundwork for treating particles as excitations of underlying fields. A significant milestone came with the inclusion of positrons, predicted by Dirac's 1930 hole theory, which motivated full quantization of electron-positron fields (governed by the Dirac equation) coupled to the electromagnetic field; however, the interacting Dirac field quantization faced challenges with negative-energy states and infinities. In parallel, Pauli and Victor Weisskopf developed a consistent quantization of the scalar relativistic Klein-Gordon equation in the context of electrodynamics in 1934, providing a framework for spin-0 charged particles and their antiparticles via creation and annihilation processes while preserving a positive-definite energy. This scalar theory offered an alternative prototype to the fermionic approach, unifying quantum principles with Maxwell's classical electrodynamics, developed in the 1860s, into a relativistic framework. However, early calculations revealed infinite self-energies and vacuum polarization effects, as noted by J. Robert Oppenheimer and Ivar Waller in 1930, highlighting unresolved divergences in higher-order terms. The theory gained empirical validation in the late 1940s through precise predictions matching experiments. 
In 1947, Willis Lamb and Robert Retherford observed a small energy splitting between hydrogen's 2S and 2P states, known as the Lamb shift, deviating from Dirac's relativistic atomic theory. Hans Bethe promptly calculated this shift using a non-relativistic approximation with a momentum cutoff, attributing it to vacuum fluctuations and electron self-interaction, yielding a value of approximately 1040 MHz in close agreement with the measured 1058 MHz. Similarly, in 1948, Julian Schwinger computed the electron's anomalous magnetic moment, predicting a deviation from the Dirac value g=2 due to radiative corrections. His result for the anomalous moment a_e = (g-2)/2 was \alpha/(2\pi), where \alpha is the fine-structure constant, corresponding to a leading-order correction to the magnetic moment: \vec{\mu} = \left(1 + \frac{\alpha}{2\pi} + \cdots \right) \frac{e \hbar}{2m} \vec{\sigma}. This matched experimental measurements to high precision, confirming QED's predictive power. These successes were enabled by covariant perturbation theories developed independently in the mid-1940s. Sin-Itiro Tomonaga introduced a relativistically invariant formalism in 1946, using a generalized spacelike hypersurface to define field equations and resolve non-covariant issues in earlier hole-theory approaches. Schwinger extended this in 1948 with canonical transformations ensuring Lorentz invariance in interaction terms, while Richard Feynman provided an alternative space-time path integral approach in 1949, diagrammatically representing processes via electron and photon propagators. Freeman Dyson's 1949 synthesis unified these methods, proving equivalence and enabling systematic calculations. This work, recognized by the 1965 Nobel Prize in Physics awarded to Tomonaga, Schwinger, and Feynman, established QED as a consistent theory despite lingering infinities in unrenormalized expressions.
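Schwinger's one-loop result is a concrete number. A two-line calculation, using a CODATA-style value of the fine-structure constant (assumed here), reproduces the leading-order anomalous moment and the corresponding g-factor:

```python
import math

# Fine-structure constant (assumed CODATA-style value).
alpha = 1 / 137.035999084

# Schwinger's 1948 one-loop result: a_e = (g - 2)/2 = alpha / (2 pi).
a_e = alpha / (2 * math.pi)
print(f"a_e (one loop) = {a_e:.7f}")  # ~0.0011614

# Leading-order g-factor: g = 2 (1 + alpha/(2 pi)).
g = 2 * (1 + a_e)
print(f"g = {g:.6f}")  # ~2.002323
```

The measured value, a_e ≈ 0.00115965, differs from this leading term in the fourth significant figure; higher-order QED diagrams account for the remainder.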

Renormalization and infinities

In quantum field theory, ultraviolet divergences arise in perturbative calculations involving loop diagrams, where high-momentum virtual particles contribute divergent results. For instance, the one-loop self-energy correction to the electron in quantum electrodynamics (QED) yields a divergent integral of the schematic form \int \frac{d^4 k}{(2\pi)^4} \frac{1}{k^2}, reflecting the unbounded contribution from arbitrarily large momenta k in vacuum polarization or self-interaction processes. These infinities first became evident in early QED computations, such as those for the electron self-energy, where the electron's interaction with its own electromagnetic field led to unphysical energy shifts. The historical resolution of these divergences began in the late 1940s with Hans Bethe's calculation of the Lamb shift, in which he introduced mass renormalization by subtracting the infinite self-energy contribution from the bare electron mass to match observed atomic spectra. This approach was extended in the late 1940s and early 1950s by Julian Schwinger, Richard Feynman, Sin-Itiro Tomonaga, and Freeman Dyson, who developed a systematic procedure for renormalization, redefining the bare charge e, mass m, and field normalizations to absorb infinities order by order in perturbation theory. Dyson's work, in particular, unified the diagrammatic methods of Feynman with the operator formalism of Schwinger and Tomonaga, demonstrating that renormalization restores finite, gauge-invariant predictions for scattering amplitudes. To handle these divergences practically, regularization methods were employed to temporarily render integrals finite before renormalization. One early technique involved imposing a momentum cutoff \Lambda, limiting integrations to |k| < \Lambda and later taking \Lambda \to \infty after counterterm subtraction. A more elegant covariant approach, proposed by Wolfgang Pauli and Felix Villars, introduced fictitious "regulator" fields with large masses M, modifying propagators as 1/(k^2 - M^2) to suppress high-momentum contributions while preserving Lorentz invariance and gauge symmetry in the limit M \to \infty. These regulators ensure that loop integrals converge without altering the low-energy physics of the original theory. 
The key insight into QED's consistency came from proofs of its renormalizability, showing that all divergences could be absorbed using only a finite number of counterterms corresponding to the charge e, electron mass m, and wave function renormalization constants Z_2 for the electron field and Z_3 for the photon field (with the vertex renormalization Z_1 = Z_2 enforced by the Ward identity). In his 1949 analysis, Dyson provided a perturbative proof by examining the structure of Feynman diagrams (visual representations of loop corrections) and demonstrating that higher-order infinities factorize into products of lower-order divergent subgraphs, allowing complete cancellation via the aforementioned counterterms. This resummation of the perturbation series yielded finite, unambiguous predictions for physical observables, such as electron scattering cross-sections, validating QED as a predictive theory despite its apparent infinities.
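The contrast between a hard momentum cutoff and Pauli-Villars regularization can be illustrated with a toy logarithmically divergent integral, \int_0^\Lambda k\,dk/(k^2 + m^2); this is a schematic stand-in chosen for its closed form, not the actual QED self-energy integral. The cutoff result grows like ln Λ, while the Pauli-Villars-subtracted integrand approaches a finite, Λ-independent limit.

```python
import math

def cutoff_integral(m, lam):
    """Toy log-divergent loop integral: ∫_0^Λ k dk / (k² + m²),
    evaluated in closed form as ½ ln((Λ² + m²)/m²)."""
    return 0.5 * math.log((lam**2 + m**2) / m**2)

def pauli_villars(m, M, lam):
    """Same integral with a Pauli-Villars regulator of mass M:
    the integrand becomes k/(k²+m²) − k/(k²+M²)."""
    return cutoff_integral(m, lam) - cutoff_integral(M, lam)

m, M = 1.0, 100.0
for lam in (1e3, 1e6, 1e9):
    print(f"Lambda={lam:9.0e}  cutoff={cutoff_integral(m, lam):8.3f}  "
          f"PV={pauli_villars(m, M, lam):8.5f}")
# The cutoff column grows like ln(Lambda), while the Pauli-Villars column
# approaches the finite limit ½ ln(M²/m²) = ln(M/m) ≈ 4.60517.
```

The finite remainder still depends on the regulator mass M, which is exactly the dependence that renormalization then absorbs into redefined parameters.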

Gauge theories and the Standard Model

The development of gauge theories beyond quantum electrodynamics (QED) began with the generalization to non-Abelian gauge symmetries, providing a framework for describing strong and weak interactions within quantum field theory. In 1954, Chen Ning Yang and Robert Mills proposed a gauge theory based on non-Abelian Lie groups, such as SU(2) for isotopic spin invariance, extending the Abelian U(1) structure of QED. This theory introduces multiple gauge fields A_\mu^a, where a labels the group generators, and the field strength tensor takes the form F_{\mu\nu}^a = \partial_\mu A_\nu^a - \partial_\nu A_\mu^a + g f^{abc} A_\mu^b A_\nu^c, with g the coupling constant and f^{abc} the structure constants of the gauge group, capturing self-interactions among the gauge bosons that are absent in QED. Although initially challenged by issues like non-renormalizability in massive cases, Yang-Mills theory laid the groundwork for modern particle physics by enabling unified descriptions of forces under local symmetry transformations. Building on this, the electroweak theory unified the electromagnetic and weak forces through a non-Abelian gauge structure. In 1961, Sheldon Glashow introduced a model based on the gauge group SU(2) × U(1), where SU(2) governs the charged weak current and U(1) the hypercharge, with the photon emerging as a massless combination after symmetry breaking. This was fully realized in the 1960s and 1970s through independent contributions by Steven Weinberg and Abdus Salam, who incorporated spontaneous symmetry breaking via the Higgs mechanism to generate masses for the weak bosons while keeping the photon massless. The resulting Glashow-Weinberg-Salam (GWS) electroweak theory predicted neutral weak currents and intermediate vector bosons, resolving long-standing puzzles in weak interaction phenomenology. Parallel advances led to quantum chromodynamics (QCD), the gauge theory of the strong force, based on the non-Abelian SU(3) color group. 
Formulated by Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler in the early 1970s, QCD describes quarks interacting via gluons, with the Yang-Mills structure ensuring color confinement at low energies. A pivotal discovery was asymptotic freedom, demonstrated independently by David Gross and Frank Wilczek, and by David Politzer, in 1973, showing that the strong coupling g weakens at high energies (short distances). This behavior is encoded in the one-loop beta function \beta(g) = -\left( \frac{11}{3} N_c - \frac{2}{3} N_f \right) \frac{g^3}{16\pi^2}, where N_c = 3 is the number of colors and N_f the number of active quark flavors, enabling perturbative calculations for high-energy processes like deep inelastic scattering. Asymptotic freedom resolved the failure of earlier strong interaction models and confirmed QCD's consistency as a renormalizable quantum field theory. The Standard Model synthesizes these gauge theories into a unified framework for electromagnetic, weak, and strong interactions, excluding gravity. Its Lagrangian is structured into gauge, fermion kinetic, Yukawa, and Higgs sectors: \mathcal{L}_\text{SM} = \mathcal{L}_\text{gauge} + \mathcal{L}_\text{fermions} + \mathcal{L}_\text{Yukawa} + \mathcal{L}_\text{Higgs}, where \mathcal{L}_\text{gauge} encompasses the SU(3)_c × SU(2)_L × U(1)_Y invariant terms for gluons, W/Z bosons, and the photon; \mathcal{L}_\text{fermions} describes Dirac kinetic terms for quarks and leptons; \mathcal{L}_\text{Yukawa} couples fermions to the Higgs doublet for mass generation; and \mathcal{L}_\text{Higgs} includes the scalar potential driving electroweak symmetry breaking. This structure, finalized by the mid-1970s, accommodates three generations of fermions and predicts precise relations among couplings and particle properties. Experimental confirmation of the electroweak sector came with the discovery of the W and Z bosons at CERN's Super Proton Synchrotron in 1983. 
The UA1 and UA2 collaborations observed W bosons decaying to electron-neutrino pairs at 80 GeV and Z bosons to electron-positron pairs at 95 GeV, with masses and production rates matching GWS predictions to within experimental precision, solidifying the Standard Model's validity. These events marked a triumph for non-Abelian gauge theories, enabling further tests of the model's unification principles.
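Asymptotic freedom follows directly from the sign of the beta function quoted above. The sketch below solves the corresponding one-loop running of α_s = g²/(4π) in closed form, starting from an assumed reference value α_s(M_Z) ≈ 0.118; N_f = 5 is held fixed for simplicity, ignoring flavor thresholds, so the low-scale values are only indicative.

```python
import math

def alpha_s(Q, mu=91.19, alpha_mu=0.118, Nf=5):
    """One-loop running strong coupling alpha_s(Q), from the solution of
    d alpha_s / d ln Q = -(b0 / 2 pi) alpha_s^2, with b0 = 11 - 2 Nf / 3
    (the one-loop coefficient for N_c = 3)."""
    b0 = 11 - 2 * Nf / 3
    return alpha_mu / (1 + alpha_mu * b0 / (2 * math.pi) * math.log(Q / mu))

# Asymptotic freedom: the coupling shrinks as the energy scale grows.
for Q in (10, 91.19, 1000, 10000):
    print(f"alpha_s({Q:>8} GeV) = {alpha_s(Q):.4f}")
```

Running the loop shows α_s falling monotonically with Q, the hallmark behavior that makes perturbative QCD reliable for short-distance processes while signaling strong coupling, and confinement, at hadronic scales.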

Post-Standard Model advances

Following the successful formulation of the electroweak theory by Sheldon Glashow, Abdus Salam, and Steven Weinberg, which unified the electromagnetic and weak interactions into a renormalizable quantum field theory, these contributions were recognized with the 1979 Nobel Prize in Physics. This model, incorporating spontaneous symmetry breaking via the Higgs mechanism, provided a framework for the weak sector of the Standard Model. Similarly, the discovery of asymptotic freedom in quantum chromodynamics by David Gross, Frank Wilczek, and David Politzer in 1973, demonstrating that the strong coupling constant decreases at high energies, earned them the 2004 Nobel Prize in Physics and solidified QCD as the theory of strong interactions. These achievements marked the completion of the Standard Model by the late 1970s, prompting explorations beyond it to unify all fundamental forces and address unresolved issues like the hierarchy problem and flavor structure. One major advance was the development of Grand Unified Theories (GUTs), which extend the Standard Model gauge group to a larger simple group, unifying the strong, weak, and electromagnetic interactions at high energies. The seminal SU(5) model, proposed by Howard Georgi and Sheldon Glashow in 1974, embeds the Standard Model SU(3) × SU(2) × U(1) into SU(5), predicting that quarks and leptons reside in unified multiplets and that the proton is unstable due to baryon-number-violating interactions mediated by heavy gauge bosons. This leads to proton decay modes such as p \to e^+ + \pi^0, with an expected lifetime around 10^{31} years in minimal implementations, though no such decays have been observed in experiments like Super-Kamiokande, constraining GUT scales above 10^{16} GeV (as of 2025). GUTs also naturally generate small neutrino masses via mechanisms like the seesaw, influencing early beyond-Standard-Model phenomenology. 
To address limitations in the Standard Model's flavor sector, such as the origin of fermion masses and mixing angles, renormalizable extensions incorporating additional Higgs sectors have been pursued. These models, like the two-Higgs-doublet model (2HDM), introduce extra scalar doublets to provide flavor-dependent Yukawa couplings while maintaining renormalizability and gauge invariance. Such extensions suppress unwanted flavor-changing neutral currents through alignments or symmetries, offering explanations for phenomena like CP violation beyond the Cabibbo-Kobayashi-Maskawa matrix, and have been instrumental in model-building for collider searches. Parallel to these developments, alternative axiomatic approaches to quantum field theory emerged to reformulate foundational aspects. Julian Schwinger's source theory, developed from the mid-1960s through the 1970s, posits that observable phenomena arise from the dynamics of external sources interacting with fields, emphasizing Green's functions as fundamental objects rather than operator fields. This framework avoids divergences by focusing on source correlations, providing a basis for non-perturbative insights and influencing later axiomatic QFT efforts, though it did not supplant the canonical formalism. Early non-perturbative advances within QCD highlighted the role of topological configurations in the vacuum structure. In the 1970s, instantons—self-dual solutions to the Yang-Mills equations—were discovered by Alexander Belavin, Alexander Polyakov, Albert Schwarz, and Yuri Tyupkin in 1975, revealing non-perturbative effects that break chiral symmetries and contribute to the QCD eta-prime meson mass via the U(1) anomaly. Gerard 't Hooft further applied instantons in 1976 to compute multi-fermion interactions, demonstrating their relevance to processes like baryon number violation in electroweak theory and providing a bridge to lattice QCD simulations. 
These insights underscored the limitations of perturbation theory and spurred developments in effective field theories for low-energy hadron physics.

Fundamental principles

Classical field theory prerequisites

Classical field theory provides the foundational framework for quantum field theory by describing relativistic systems through continuous fields propagating in spacetime, governed by principles that ensure consistency with special relativity. The dynamics of such fields are formulated using the action principle, where the action S is defined as the spacetime integral of a Lagrangian density \mathcal{L}(\phi, \partial_\mu \phi), given by S = \int \mathcal{L}(\phi, \partial_\mu \phi) \, d^4x, with the integral taken over Minkowski spacetime. The equations of motion are obtained by requiring the action to be stationary under small variations of the field \delta \phi, leading to the Euler-Lagrange equations: \partial_\mu \left( \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} \right) - \frac{\partial \mathcal{L}}{\partial \phi} = 0. This variational approach extends the principles of classical mechanics to infinite degrees of freedom, ensuring relativistic covariance. A fundamental example is the real scalar field, described by the Lagrangian density \mathcal{L} = \frac{1}{2} \partial^\mu \phi \partial_\mu \phi - \frac{1}{2} m^2 \phi^2, which yields the Klein-Gordon equation (\square + m^2) \phi = 0, where \square = \partial^\mu \partial_\mu is the d'Alembertian. This equation governs massive spin-0 particles in relativistic settings, with solutions representing waves propagating at or below the speed of light. For fermionic fields, the Lagrangian density for a spin-1/2 field \psi is \mathcal{L} = \overline{\psi} (i \gamma^\mu \partial_\mu - m) \psi, producing the Dirac equation (i \gamma^\mu \partial_\mu - m) \psi = 0, which incorporates spin and ensures first-order dynamics suitable for relativistic electrons. In the electromagnetic sector, the Maxwell Lagrangian density \mathcal{L} = -\frac{1}{4} F^{\mu\nu} F_{\mu\nu}, with F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu the field strength tensor, leads to the equations \partial_\mu F^{\mu\nu} = 0 in vacuum, describing the propagation of photons as classical waves. 
These examples illustrate how classical field theories model fundamental interactions while respecting special relativity. Symmetries of the action play a central role in classical field theory, as encapsulated by Noether's theorem, which establishes a one-to-one correspondence between continuous symmetries and conserved quantities. For an infinitesimal field transformation \delta \phi = \varepsilon K[\phi], where \varepsilon is a constant parameter and K is the generator, the theorem implies the existence of a conserved current J^\mu satisfying \partial_\mu J^\mu = 0 on-shell (i.e., when the Euler-Lagrange equations hold). The explicit form of the current is J^\mu = \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} K - \xi^\mu \mathcal{L}, where \xi^\mu accounts for any accompanying spacetime transformation x^\mu \to x^\mu + \xi^\mu; for internal symmetries without spacetime variation, \xi^\mu = 0. This result, derived from the invariance of the action under the symmetry, yields conservation laws such as charge conservation from U(1) phase rotations of the complex scalar or Dirac field. The theorem was originally formulated in the context of variational problems, highlighting its broad applicability to field systems. Lorentz invariance is a cornerstone of classical relativistic field theories, requiring the Lagrangian to transform as a scalar under Lorentz transformations x^\mu \to \Lambda^\mu{}_\nu x^\nu, ensuring that physical laws are independent of the observer's inertial frame. This symmetry manifests in the use of the Minkowski metric \eta_{\mu\nu} = \operatorname{diag}(1, -1, -1, -1) and covariant derivatives, preserving the form of equations like the Klein-Gordon or Dirac equation across boosts and rotations. Causality follows naturally from Lorentz invariance, as field propagators are confined within the light cone: influences cannot propagate faster than light, preventing acausal effects in initial value problems where data on a spacelike hypersurface determine future evolution uniquely. 
For instance, the retarded Green's function for the wave equation enforces this by sourcing only future-directed signals. These properties ensure the physical consistency of classical theories before quantization. The stress-energy tensor T^{\mu\nu}, derived via Noether's theorem from spacetime translation invariance x^\mu \to x^\mu + \varepsilon^\mu, encodes the energy-momentum distribution of the field. For a general scalar field, the canonical form is T^{\mu\nu} = \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} \partial^\nu \phi - g^{\mu\nu} \mathcal{L}, with g^{\mu\nu} = \eta^{\mu\nu} the inverse metric; its vanishing divergence \partial_\mu T^{\mu\nu} = 0 reflects four-momentum conservation. In the electromagnetic case, T^{\mu\nu} = F^{\mu\lambda} F^\nu{}_\lambda - \frac{1}{4} g^{\mu\nu} F^{\rho\sigma} F_{\rho\sigma} (up to factors), describing the Poynting vector and electromagnetic stress. This tensor is crucial for coupling fields to gravity in general relativity, though in flat spacetime it solely governs local conservation laws.
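The Euler-Lagrange machinery above can be carried out symbolically. This SymPy sketch, restricted to 1+1 dimensions for brevity, derives the Klein-Gordon equation from the scalar Lagrangian density and confirms that a plane wave with the relativistic dispersion relation \omega^2 = k^2 + m^2 solves it.

```python
import sympy as sp

# Coordinates and a real scalar field phi(t, x) in 1+1 dimensions.
t, x, m = sp.symbols('t x m', real=True)
phi = sp.Function('phi')(t, x)

# Lagrangian density: L = 1/2 (d_t phi)^2 - 1/2 (d_x phi)^2 - 1/2 m^2 phi^2
L = (sp.diff(phi, t)**2 - sp.diff(phi, x)**2 - m**2 * phi**2) / 2

# Euler-Lagrange equation: d_mu (dL / d(d_mu phi)) - dL/dphi = 0.
eom = (sp.diff(L.diff(sp.diff(phi, t)), t)
       + sp.diff(L.diff(sp.diff(phi, x)), x)
       - L.diff(phi))
print(eom)  # the Klein-Gordon equation: phi_tt - phi_xx + m^2 phi

# A plane wave cos(omega t - k x) with omega^2 = k^2 + m^2 solves it.
w, k = sp.symbols('omega k', positive=True)
wave = sp.cos(w * t - k * x)
residual = eom.subs(phi, wave).doit().subs(w, sp.sqrt(k**2 + m**2))
print(sp.simplify(residual))  # 0
```

The same pattern extends to 3+1 dimensions and to other Lagrangians by adding the extra spatial derivative terms to `eom`.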

Canonical quantization

Canonical quantization provides a systematic procedure for constructing quantum field theories by promoting classical fields to operators acting on a Hilbert space, preserving the canonical structure of the classical theory while incorporating quantum commutation relations. This method extends the quantization rules from non-relativistic quantum mechanics to relativistic field theories, ensuring compatibility with special relativity through appropriate commutation relations that enforce causality. The approach was pioneered by Paul Dirac in his 1927 formulation of the quantum theory of radiation for the electromagnetic field. For a free real scalar field obeying the Klein-Gordon equation, derived from the classical Lagrangian density \mathcal{L} = \frac{1}{2} \partial^\mu \phi \partial_\mu \phi - \frac{1}{2} m^2 \phi^2, the field \phi(x) and its canonical momentum \pi(x) = \dot{\phi}(x) are elevated to operator-valued distributions \hat{\phi}(x) and \hat{\pi}(x). The fundamental postulate of canonical quantization imposes equal-time commutation relations on these operators, analogous to the position-momentum relations in quantum mechanics: [\hat{\phi}(t, \mathbf{x}), \hat{\pi}(t, \mathbf{y})] = i \hbar \delta^3(\mathbf{x} - \mathbf{y}), with [\hat{\phi}(t, \mathbf{x}), \hat{\phi}(t, \mathbf{y})] = [\hat{\pi}(t, \mathbf{x}), \hat{\pi}(t, \mathbf{y})] = 0. These relations were explicitly applied to the scalar field in the seminal work of Pauli and Weisskopf, who quantized the relativistic scalar wave equation to describe spin-0 particles. The time evolution of the operators follows the Heisenberg picture, governed by the field equations promoted to operator form, preserving the Lorentz-invariant dynamics of the classical theory. 
To diagonalize the Hamiltonian and interpret the theory in terms of particles, the field operator is expanded in a Fourier mode decomposition over momentum space: \hat{\phi}(x) = \int \frac{d^3 p}{(2\pi)^3} \frac{1}{\sqrt{2 \omega_p}} \left( \hat{a}_{\mathbf{p}} e^{-i p \cdot x} + \hat{a}_{\mathbf{p}}^\dagger e^{i p \cdot x} \right), where p^0 = \omega_p = \sqrt{\mathbf{p}^2 + m^2}, and the creation and annihilation operators satisfy the bosonic commutation relations [\hat{a}_{\mathbf{p}}, \hat{a}_{\mathbf{q}}^\dagger] = (2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q}), with all other commutators vanishing. This expansion, which transforms the infinite degrees of freedom of the field into an infinite set of harmonic oscillators, was developed in the early formulations of quantum electrodynamics. Each mode corresponds to a particle with momentum \mathbf{p} and energy \omega_p, allowing the field excitations to be interpreted as relativistic particles. The state space of the theory is the Fock space, a direct sum of symmetrized n-particle Hilbert spaces for varying numbers of particles, constructed by acting with creation operators on the vacuum state |0\rangle, defined by \hat{a}_{\mathbf{p}} |0\rangle = 0 for all \mathbf{p}. Multi-particle states are built as |\{n_{\mathbf{p}}\}\rangle = \prod_{\mathbf{p}} \frac{(\hat{a}_{\mathbf{p}}^\dagger)^{n_{\mathbf{p}}}}{\sqrt{n_{\mathbf{p}}!}} |0\rangle, where n_{\mathbf{p}} is the occupation number for mode \mathbf{p}. This infinite-dimensional Hilbert space framework, essential for describing variable particle number, was introduced by Fock to formalize second quantization. The total energy is represented by the Hamiltonian operator, obtained by quantizing the classical expression and normal-ordering to regulate divergences: \hat{H} = \int d^3 x \, :\frac{1}{2} \left( \hat{\pi}^2 + (\nabla \hat{\phi})^2 + m^2 \hat{\phi}^2 \right): , where normal ordering : \hat{O} : places all creation operators to the left of annihilation operators.
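The oscillator algebra behind the mode expansion can be illustrated numerically for a single mode. In the sketch below (a finite truncation of one Fock-space mode, purely for illustration), the matrix representations of \hat{a} and \hat{a}^\dagger reproduce [\hat{a}, \hat{a}^\dagger] = 1 away from the truncation edge, and the normal-ordered single-mode Hamiltonian \omega \hat{a}^\dagger \hat{a} has the expected integer-spaced spectrum:

```python
import numpy as np

N = 12                       # truncation of the single-mode Fock space
n = np.arange(N)

# Annihilation operator a|n> = sqrt(n)|n-1> as an N x N matrix
a = np.diag(np.sqrt(n[1:]), k=1)
adag = a.conj().T

# [a, a^dagger] = 1 holds exactly except in the last (truncated) row/column
comm = a @ adag - adag @ a
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))

# Normal-ordered single-mode Hamiltonian omega * a^dagger a: eigenvalues omega * n
omega = 2.0
H = omega * (adag @ a)
evals = np.sort(np.linalg.eigvalsh(H))
assert np.allclose(evals, omega * n)
print(evals[:4])             # equally spaced oscillator levels
```

The full field Hamiltonian is an integral of such oscillators, one per momentum mode, which is exactly what the normal-ordered expression above encodes.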
In the mode basis, this simplifies to \hat{H} = \int \frac{d^3 p}{(2\pi)^3} \omega_p \hat{a}_{\mathbf{p}}^\dagger \hat{a}_{\mathbf{p}}, confirming the particle interpretation with the vacuum energy subtracted. This form directly follows from the Legendre transform of the classical Lagrangian under quantization, as detailed for the scalar field. Relativistic consistency requires microcausality, ensuring that observables at spacelike separation commute, i.e., [\hat{\phi}(x), \hat{\phi}(y)] = 0 when (x - y)^2 < 0. In canonical quantization, this is achieved by extending the equal-time commutators using the field equation, yielding the full commutator proportional to the Pauli-Jordan function, which vanishes outside the light cone. This locality condition, crucial for avoiding superluminal signaling, emerges naturally from the mode expansion and was verified in early formulations of scalar field theory.

Path integral formulation

The path integral formulation of quantum field theory offers an alternative to the canonical quantization approach, expressing quantum amplitudes as sums over all possible field configurations weighted by the phase factor of the classical action. This sum-over-histories perspective, originally developed for non-relativistic quantum mechanics, was extended to relativistic quantum fields, providing a framework that unifies quantum mechanics and special relativity while facilitating calculations in interacting theories. In this formalism, the transition amplitude between an initial field configuration |i\rangle and a final configuration |f\rangle is given by \langle f | i \rangle = \int \mathcal{D}\phi \, \exp\left( \frac{i}{\hbar} S[\phi] \right), where the integral is a functional integral over all field paths \phi(x) connecting the initial and final states, and S[\phi] is the classical action functional S[\phi] = \int \mathcal{L}(\phi, \partial \phi) \, d^4x, with \mathcal{L} the Lagrangian density. This expression generalizes the path integral from point-particle trajectories to field configurations, treating spacetime as the arena for propagation. The formulation demonstrates equivalence to the operator methods of canonical quantization through explicit mappings in simple cases, such as free scalar fields. For interacting quantum field theories, the path integral is most practically implemented via the generating functional Z[J], which incorporates external sources J(x) to generate correlation functions: Z[J] = \int \mathcal{D}\phi \, \exp\left( \frac{i}{\hbar} \int \left( \mathcal{L}[\phi] + J(x) \phi(x) \right) d^4x \right). Here, \mathcal{L}[\phi] includes both free and interaction terms, and Z[0] normalizes the vacuum persistence amplitude.
This functional encodes all dynamics, with vacuum expectation values of field products obtained as functional derivatives: \langle \phi(x_1) \cdots \phi(x_n) \rangle = \frac{(-i\hbar)^n}{Z[0]} \frac{\delta^n Z[J]}{\delta J(x_1) \cdots \delta J(x_n)} \Big|_{J=0}. The approach stems from variational principles in quantum dynamics, as formalized in Schwinger's quantum action principle, which represents transformation functions between quantum states through variations of the action and leads naturally to the path integral representation. Perturbative expansions arise by splitting the action into free and interaction parts, S[\phi] = S_0[\phi] + S_{\rm int}[\phi], and expanding the exponential of the interaction term: Z[J] = \int \mathcal{D}\phi \, \exp\left( \frac{i}{\hbar} S_0[\phi] \right) \exp\left( \frac{i}{\hbar} S_{\rm int}[\phi] \right). The second exponential expands as a power series in the coupling constants within S_{\rm int}, yielding a perturbative series where each order corresponds to integrals over free propagators weighted by interaction vertices. This structure enables systematic computations in weakly coupled regimes, such as quantum electrodynamics. A key advantage of the path integral formulation is its suitability for non-perturbative methods, particularly through Euclidean continuation. By performing a Wick rotation, t \to -i\tau, the Minkowski spacetime metric converts to Euclidean, transforming the oscillatory integral into a convergent one: \int \mathcal{D}\phi \, \exp\left( i \int \mathcal{L}_M d^4x \right) \to \int \mathcal{D}\phi_E \, \exp\left( - \int \mathcal{L}_E d^4x_E \right), where \mathcal{L}_E is the Euclidean Lagrangian. This rotation facilitates numerical lattice simulations of quantum fields, as the positive-definite measure avoids sign problems in many cases, and analytic continuation back to real time recovers Minkowski results under suitable conditions.

Correlation functions

In quantum field theory, correlation functions represent the vacuum expectation values of time-ordered products of quantum fields and constitute the primary objects for describing the theory's dynamics and deriving physical observables such as scattering amplitudes. These functions encapsulate the probabilistic structure of particle interactions in a relativistic setting, providing a bridge between abstract field operators and measurable quantities. The general n-point correlation function is defined as
G^{(n)}(x_1, \dots, x_n) = \langle 0 | T \phi(x_1) \cdots \phi(x_n) | 0 \rangle,
where T denotes time-ordering, \phi is a scalar field operator, and |0\rangle is the vacuum state. This definition arises within the framework of axiomatic quantum field theory, ensuring consistency with causality and Lorentz invariance. For n=2, the two-point function reduces to the Feynman propagator:
\langle 0 | T \phi(x) \phi(y) | 0 \rangle = i \Delta_F(x - y),
which satisfies the inhomogeneous Klein-Gordon (Green's function) equation
(\square + m^2) \Delta_F(z) = -\delta^4(z)
with appropriate boundary conditions to incorporate causality, distinguishing it from other propagators like the retarded or advanced ones.
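The role of the boundary conditions can be made concrete in a one-dimensional Euclidean analogue (an illustrative toy, not the four-dimensional \Delta_F itself): G(x) = e^{-m|x|}/(2m) is the Green's function of (-d^2/dx^2 + m^2), solving the homogeneous equation away from the origin while the kink at x = 0 supplies the delta function. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', real=True)
m = sp.symbols('m', positive=True)

# Green's function of (-d^2/dx^2 + m^2) on the real line, branch by branch
Gp = sp.exp(-m * x) / (2 * m)   # branch for x > 0
Gm = sp.exp(m * x) / (2 * m)    # branch for x < 0

# Homogeneous equation holds on each side of the origin
assert sp.simplify(-sp.diff(Gp, x, 2) + m**2 * Gp) == 0
assert sp.simplify(-sp.diff(Gm, x, 2) + m**2 * Gm) == 0

# The derivative jump at x = 0 produces the delta: G'(0+) - G'(0-) = -1,
# so integrating (-G'' + m^2 G) across the origin yields exactly 1
jump = (sp.diff(Gp, x) - sp.diff(Gm, x)).subs(x, 0)
assert sp.simplify(jump) == -1
print("Green's function conditions verified")
```

Choosing which exponential decays on which side is the one-dimensional counterpart of choosing Feynman, retarded, or advanced boundary conditions.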
These correlation functions connect directly to observable processes through the Lehmann-Symanzik-Zimmermann (LSZ) reduction formula, which expresses S-matrix elements in terms of the Fourier transforms of the correlation functions. Specifically, for a scattering process with n external particles, the amplitude is obtained via
S_{fi} = \lim_{p_j^2 \to m^2} \prod_{j=1}^n \left( \frac{i}{\sqrt{Z}} \int d^4 x_j \, e^{i p_j \cdot x_j} (\square_j + m^2) \right) G^{(n)}(x_1, \dots, x_n),
where Z is the field renormalization constant, highlighting how asymptotic states emerge from the field's correlations.
To organize the information content, correlation functions are often decomposed into connected and one-particle-irreducible (1PI) components using generating functionals and Legendre transforms. The full n-point functions derive from the generating functional Z[J], while connected functions come from its logarithm W[J] = -i \ln Z[J], and 1PI functions from the Legendre effective action Γ[Φ], where Φ = δW/δJ is the expectation value of the field; this transform isolates the irreducible vertices essential for resummed perturbation theory. The Wightman axioms provide the rigorous mathematical foundation for these correlation functions, positing that they are boundary values of analytic functions in complex Minkowski space and form positive-definite distributions to ensure the Hilbert space structure and spectrum condition, thereby guaranteeing the theory's unitarity and stability. Computations of these functions can also be performed using the path integral formulation, integrating over field configurations weighted by the action.
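The connected/full decomposition can be illustrated on a single Gaussian mode: for a free theory all connected functions beyond the two-point function vanish, so the full four-point function obeys Wick's theorem, \langle \phi^4 \rangle = 3 \langle \phi^2 \rangle^2, mirroring the fact that W[J] = -i \ln Z[J] generates only connected pieces. A Monte Carlo sketch (an illustrative zero-dimensional "field", assumptions mine):

```python
import numpy as np

rng = np.random.default_rng(1)
# A single Gaussian mode stands in for a free field; sigma is arbitrary
phi = rng.normal(0.0, 1.7, size=2_000_000)

two_pt = np.mean(phi**2)
four_pt = np.mean(phi**4)

# Wick's theorem for a Gaussian: <phi^4> = 3 <phi^2>^2 (three pairings)
assert abs(four_pt - 3 * two_pt**2) / four_pt < 0.02

# The connected 4-point function is the fourth cumulant; it vanishes
# for a free theory and would be nonzero only with interactions
connected = four_pt - 3 * two_pt**2
print(connected / four_pt)   # consistent with zero up to sampling error
```

A nonvanishing connected four-point function is precisely the signal of an interaction term in the action, which is what perturbation theory expands in.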

Feynman diagrams

Feynman diagrams provide a graphical method to represent and compute the terms in the perturbative expansion of scattering amplitudes in quantum field theory, facilitating the visualization of particle interactions as space-time processes. Developed by Richard Feynman, these diagrams depict particles as lines, with interactions occurring at vertices, allowing for systematic calculation of matrix elements in the S-matrix formalism. The S-matrix, which describes transition probabilities between initial and final states, is expressed as a Dyson series—a time-ordered exponential of the interaction Hamiltonian—where each term corresponds to a specific diagram contributing to the amplitude at a given order in the coupling constant. This series organizes the infinite set of diagrams into a perturbative expansion, enabling predictions for physical processes like particle scattering. The Feynman rules translate the Lagrangian of a quantum field theory into diagrammatic elements for amplitude computation. For a scalar field theory with a cubic interaction term \mathcal{L}_\text{int} = -\frac{g}{3!} \phi^3, the rules specify that each vertex contributes a factor of -i g, each propagator (representing free particle propagation) is \frac{i}{p^2 - m^2 + i\epsilon} where p is the four-momentum and m the mass, and momentum is conserved at each vertex such that the sum of incoming momenta equals the sum of outgoing momenta. Internal lines in loops require integration over their momenta, \int \frac{d^4 k}{(2\pi)^4}, with symmetry factors for diagrams with identical substructures. These rules derive from the path integral formulation or canonical quantization, ensuring Lorentz invariance and unitarity in the calculations. Higher-order corrections involve loops, leading to integrals that can exhibit divergences.
For instance, the one-loop self-energy diagram in scalar \phi^3 theory, where a particle emits and reabsorbs a virtual particle, yields (up to coupling and symmetry factors) the correction \Pi(p^2) = \int \frac{d^4 k}{(2\pi)^4} \frac{1}{(k^2 - m^2 + i\epsilon) ((p - k)^2 - m^2 + i\epsilon)}, which contributes to mass renormalization. In quantum electrodynamics, the one-loop vertex correction diagram—in which a virtual photon connects the incoming and outgoing electron lines across the electron-photon vertex—modifies that vertex and introduces a logarithmic ultraviolet divergence; its finite part, first computed by Schwinger, yields the anomalous magnetic moment of the electron at order \alpha / 2\pi. For two-to-two particle scattering processes, kinematics are described using Mandelstam variables, introduced by Stanley Mandelstam: s = (p_1 + p_2)^2 (center-of-mass energy squared), t = (p_1 - p_3)^2 (momentum transfer squared), and u = (p_1 - p_4)^2 (the other transfer), satisfying s + t + u = \sum m_i^2 for particles with four-momenta p_i and masses m_i. These invariants parameterize the physical region of diagrams, such as tree-level exchanges or loop contributions, and are essential for evaluating amplitudes in perturbative gauge and scalar theories. Feynman diagrams thus serve as the primary tool for computing the connected correlation functions that underlie observable scattering cross-sections.
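The tree-level rules can be exercised directly: for 2 → 2 scattering in \phi^3 theory, the amplitude is the sum of s-, t-, and u-channel exchanges, each a propagator dressed with two vertex factors. The sketch below (equal masses, illustrative kinematic values, i\epsilon omitted since no propagator pole is hit) also checks the Mandelstam identity s + t + u = 4m^2:

```python
import numpy as np

m, g = 1.0, 0.1          # particle mass and cubic coupling (illustrative)

def mdot(p, q):
    """Minkowski dot product, signature (+, -, -, -)."""
    return p[0] * q[0] - np.dot(p[1:], q[1:])

# Center-of-mass 2 -> 2 kinematics: energy E per particle, angle theta
E, theta = 2.0, 0.7
k = np.sqrt(E**2 - m**2)
p1 = np.array([E, 0, 0,  k]); p2 = np.array([E, 0, 0, -k])
p3 = np.array([E, k * np.sin(theta), 0, k * np.cos(theta)])
p4 = p1 + p2 - p3                       # momentum conservation

s = mdot(p1 + p2, p1 + p2)
t = mdot(p1 - p3, p1 - p3)
u = mdot(p1 - p4, p1 - p4)
assert np.isclose(s + t + u, 4 * m**2)  # Mandelstam identity, equal masses

# Tree amplitude: factor (-ig) per vertex, i/(q^2 - m^2) per internal line
def propagator(q2):
    return 1j / (q2 - m**2)             # i*epsilon unnecessary off the pole

M_tree = (-1j * g)**2 * (propagator(s) + propagator(t) + propagator(u))
print(M_tree)
```

Each of the three terms is one tree diagram; adding loop diagrams would bring in the momentum integrals and divergences discussed above.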

Symmetries and advanced theories

Gauge symmetries

Gauge symmetries represent a fundamental extension of global symmetries in quantum field theory, where the transformation parameters are allowed to vary independently at each spacetime point, leading to local (or gauge) invariance. This principle ensures that the laws of physics remain unchanged under these position-dependent transformations, imposing stringent constraints on the structure of interactions. In contrast to global symmetries, which are constant across space and time and typically yield conserved charges via Noether's theorem, local symmetries necessitate the introduction of auxiliary fields to maintain invariance, thereby generating the forces mediated by gauge bosons. Consider the simplest case of a U(1) global symmetry, under which a complex scalar field \phi transforms as \phi \to e^{i\alpha} \phi, with \alpha constant. Promoting this to a local symmetry requires \alpha \to \alpha(x), but the ordinary derivative \partial_\mu \phi then transforms inhomogeneously as \partial_\mu \phi \to e^{i\alpha(x)} (\partial_\mu \phi + i (\partial_\mu \alpha) \phi), violating invariance. To restore it, the derivative is replaced by the covariant derivative D_\mu = \partial_\mu - i g A_\mu, where A_\mu is the gauge field (photon in QED) and g is the coupling constant; under the local transformation, A_\mu \to A_\mu + \frac{1}{g} \partial_\mu \alpha(x), ensuring D_\mu \phi \to e^{i\alpha(x)} D_\mu \phi. The gauge field thus acts as a connection on the fiber bundle of the theory, compensating for the local phase changes. These gauge transformations shift A_\mu by the pure-gauge piece \frac{1}{g} \partial_\mu \alpha(x) and leave the physical field strength F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu unchanged, highlighting the redundancy in the description. For non-Abelian gauge groups like SU(N), the structure generalizes to matrix-valued fields in the adjoint representation.
The gauge fields A_\mu = A_\mu^a T^a, where T^a are the generators satisfying [T^a, T^b] = i f^{abc} T^c with structure constants f^{abc}, transform as A_\mu \to U A_\mu U^{-1} - \frac{i}{g} (\partial_\mu U) U^{-1}, with U(x) = e^{i \alpha^a(x) T^a}. The covariant derivative becomes D_\mu = \partial_\mu - i g A_\mu, and the field strength tensor acquires a non-linear term: F_{\mu\nu}^a = \partial_\mu A_\nu^a - \partial_\nu A_\mu^a + g f^{abc} A_\mu^b A_\nu^c, which encodes self-interactions among the gauge bosons, absent in the Abelian U(1) case. These interactions arise directly from the non-commutativity of the group, leading to a rich dynamics essential for describing the strong and weak forces. Gauge invariance implies powerful constraints on correlation functions through Ward identities. For a conserved current J_\mu associated with the symmetry, the identity \partial^\mu \langle J_\mu(x) O \rangle = 0 holds, where O is any local operator, ensuring transversality and relating vertex functions to propagators. These identities, derived from the invariance of the path integral or S-matrix elements, facilitate proofs of unitarity and renormalization in gauge theories. Historically, the gauge principle originated with Hermann Weyl's 1918 attempt to unify gravity and electromagnetism via local scale transformations, though it faced criticism for predicting unobserved length changes; it was revived in the context of quantum mechanics by Weyl himself in 1929, who reinterpreted it as phase invariance of the electron wave function, laying the groundwork for modern gauge theories.
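The Abelian covariance argument can be verified symbolically. The sketch below works in one dimension with the convention D = \partial_x - i g A, under which the compensating shift is A \to A + \frac{1}{g} \partial_x \alpha; it confirms that D\phi transforms homogeneously while the ordinary derivative does not:

```python
import sympy as sp

x, g = sp.symbols('x g', real=True, nonzero=True)
alpha = sp.Function('alpha')(x)   # local gauge parameter
phi = sp.Function('phi')(x)       # charged scalar (symbolic)
A = sp.Function('A')(x)           # gauge field, one dimension for simplicity

# Covariant derivative, convention D = d/dx - i g A
def D(f, Afield):
    return sp.diff(f, x) - sp.I * g * Afield * f

# Gauge transformation: phi -> e^{i alpha} phi, A -> A + (1/g) d alpha
phi_t = sp.exp(sp.I * alpha) * phi
A_t = A + sp.diff(alpha, x) / g

# Covariance: D'(phi') must equal e^{i alpha} D(phi)
lhs = D(phi_t, A_t)
rhs = sp.exp(sp.I * alpha) * D(phi, A)
assert sp.simplify(sp.expand(lhs - rhs)) == 0

# The ordinary derivative alone fails: the inhomogeneous alpha' term survives
bad = sp.simplify(sp.diff(phi_t, x) - sp.exp(sp.I * alpha) * sp.diff(phi, x))
assert bad != 0
print("covariant derivative transforms homogeneously")
```

In the non-Abelian case the same check goes through with matrix-valued A and U, at the cost of keeping track of ordering, which is exactly where the extra f^{abc} term in F_{\mu\nu}^a originates.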

Spontaneous symmetry breaking

Spontaneous symmetry breaking occurs when the ground state, or vacuum, of a quantum field theory does not respect the full symmetry of the Lagrangian, even though the Lagrangian itself is invariant under the symmetry transformations. This phenomenon leads to a degenerate set of vacua, with the true vacuum selected by the dynamics, hiding the symmetry in the low-energy spectrum. The Goldstone theorem states that for every spontaneously broken continuous global symmetry generator, there exists a corresponding massless scalar boson, known as a Nambu-Goldstone boson. This theorem applies to systems where the symmetry is exact and global, ensuring that the broken generators correspond to zero-momentum excitations with vanishing mass in the infrared limit. In relativistic quantum field theories, these modes propagate as massless particles, restoring an effective realization of the symmetry at long distances through the Ward identities associated with the current algebra. A prototypical example arises in scalar field theories with a potential exhibiting spontaneous breaking, such as the Mexican hat potential given by V(\phi) = -\mu^2 |\phi|^2 + \lambda |\phi|^4, where \mu^2 > 0 and \lambda > 0. The minimum of this potential occurs at |\phi| = v/\sqrt{2}, with v = \sqrt{\mu^2 / \lambda}, forming a circle of degenerate vacua in field space for a single complex \phi. Expanding around one such vacuum, \phi = (v + h)/\sqrt{2}, yields a massive Higgs field h and a massless Goldstone mode, corresponding to rotations in the degenerate vacuum manifold. In gauge theories, spontaneous symmetry breaking via the Higgs mechanism modifies this picture: the would-be Nambu-Goldstone bosons are absorbed as longitudinal degrees of freedom by the gauge bosons associated with the broken generators, rendering those gauge bosons massive while preserving unitarity and gauge invariance.
This absorption occurs through a redefinition of the gauge fields, where the Goldstone fields mix into the gauge boson propagators, effectively giving the massive vector bosons three polarization states. A key application is in the electroweak sector of the Standard Model, where a complex scalar doublet \Phi acquires a vacuum expectation value v \approx 246 GeV under the potential V(\Phi) = -\mu^2 \Phi^\dagger \Phi + \lambda (\Phi^\dagger \Phi)^2, spontaneously breaking the SU(2)_L \times U(1)_Y gauge symmetry to U(1)_{EM}. This breaking generates masses for the W^\pm and Z^0 bosons: m_W = g v / 2 and m_Z = \sqrt{g^2 + g'^2} v / 2, where g and g' are the SU(2)_L and U(1)_Y coupling constants, respectively, while the photon remains massless due to the unbroken electromagnetic symmetry. The three broken generators correspond to the longitudinal components of these massive gauge bosons, with the radial excitation manifesting as the Higgs boson. An analogous situation appears in condensed matter systems, such as superconductors, where the Bardeen-Cooper-Schrieffer pairing condensate spontaneously breaks the U(1) symmetry of electron number, leading to Nambu-Goldstone modes that describe collective fluctuations of the condensate phase. In the presence of long-range interactions, these modes acquire a plasma frequency, effectively gapping them, but the underlying mechanism parallels that in Higgs models.
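The mass spectrum of the Mexican hat potential can be read off by expanding around the vacuum. The sympy sketch below uses the normalization V = -\mu^2 |\phi|^2 + \lambda |\phi|^4 with |\phi| = (v + h)/\sqrt{2}, derives the vacuum value from the minimization condition, and confirms the massive radial mode (m_h^2 = 2\mu^2) alongside the flat Goldstone direction:

```python
import sympy as sp

h, theta = sp.symbols('h theta', real=True)
mu, lam = sp.symbols('mu lambda', positive=True)

v = sp.sqrt(mu**2 / lam)        # vacuum value, from dV/d|phi| = 0
r = (v + h) / sp.sqrt(2)        # radial field around the chosen vacuum

# Mexican hat potential with |phi| = r; theta parameterizes the vacuum circle
V = -mu**2 * r**2 + lam * r**4

# h = 0 is indeed an extremum of the potential
assert sp.simplify(sp.diff(V, h).subs(h, 0)) == 0

# Mass-squared values = second derivatives at the minimum
m_h_sq = sp.diff(V, h, 2).subs(h, 0)    # radial (Higgs-like) mode
m_theta_sq = sp.diff(V, theta, 2)       # angular (Goldstone) mode

assert sp.simplify(m_h_sq - 2 * mu**2) == 0  # massive: m_h^2 = 2 mu^2
assert m_theta_sq == 0                       # exactly flat: massless Goldstone
print(sp.simplify(m_h_sq))
```

In the gauged version, the flat \theta direction is precisely the degree of freedom absorbed as the longitudinal polarization of the massive vector boson.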

Supersymmetry

Supersymmetry (SUSY) is a symmetry principle in quantum field theory that relates bosonic fields, which mediate forces, to fermionic fields, which describe matter particles, by positing an equal number of bosonic and fermionic degrees of freedom in the theory. This symmetry extends the Poincaré group of spacetime symmetries through fermionic generators known as supercharges Q_\alpha, which carry spin \frac{1}{2} and map bosonic states to fermionic states (and vice versa) under transformations. The supercharges satisfy the super-Poincaré algebra, with a key anticommutation relation \{ Q_\alpha, Q^\dagger_\beta \} = 2 \delta_{\alpha\beta} H (in a simplified notation where H is the Hamiltonian), ensuring that supersymmetry commutes with translations and protects the vacuum energy from large corrections. The Haag-Łopuszański-Sohnius theorem establishes that supersymmetry is the only nontrivial extension of the Poincaré algebra compatible with the structure of interacting quantum field theories, allowing for S-matrix symmetries beyond internal symmetries. In the quantum field theory formulation of supersymmetry, fields are packaged into supermultiplets using superspace, a coordinate extension that includes Grassmann-odd parameters \theta^\alpha and \bar{\theta}^{\dot{\alpha}}. Chiral superfields describe matter content, combining a complex scalar \phi, a Weyl fermion \psi, and an auxiliary scalar F, while vector superfields encode gauge interactions, containing a gauge field, a gaugino, and a real auxiliary D-term. The supersymmetric Lagrangian is constructed from these superfields to ensure invariance under SUSY transformations; for instance, the kinetic term for a chiral superfield \Phi is \int d^4\theta \, \Phi^\dagger \Phi, which expands to the standard kinetic terms for the scalar and fermion plus an auxiliary |F|^2 term, and the gauge kinetic term for a vector superfield V is \int d^2\theta \, W^\alpha W_\alpha + \mathrm{h.c.}, where W_\alpha is the field-strength superfield.
These formulations preserve the equality of boson and fermion masses at tree level in unbroken SUSY, leading to degenerate multiplets. Supersymmetry is typically broken in realistic models to match observations, where bosons and fermions have distinct masses. Soft SUSY breaking introduces explicit mass terms, such as scalar squared masses m^2 |\phi|^2, gaugino masses M \lambda \lambda, and trilinear couplings A \phi \psi \psi, which preserve renormalizability and do not reintroduce the quadratic divergences that SUSY originally cancels. These soft terms can arise from higher-scale dynamics like supergravity or string theory, maintaining the theory's ultraviolet finiteness properties. While spontaneous SUSY breaking is possible in principle, as discussed in the context of general symmetry breaking mechanisms, phenomenological models favor soft breaking for its flexibility in generating the observed particle spectrum. A primary application of supersymmetry in quantum field theory is addressing the hierarchy problem in the Standard Model, where radiative corrections to the Higgs mass would otherwise require unnatural fine-tuning between bare parameters and loop contributions. Superpartners—such as squarks (scalar partners of quarks), sleptons (scalar partners of leptons), and gauginos (fermionic partners of gauge bosons)—contribute oppositely in loops to the Higgs self-energy, canceling the quadratic divergences and stabilizing the electroweak scale against Planck-scale physics. This naturalness protection extends to grand unification, where SUSY enables the convergence of the three gauge couplings of the Standard Model at a high scale around 10^{16} GeV, facilitating embedding into a unified group like SU(5) or SO(10). The Minimal Supersymmetric Standard Model (MSSM) provides the canonical framework for incorporating supersymmetry into particle physics, extending the Standard Model with superpartners for all particles and two Higgs doublets to ensure anomaly cancellation and allow for up- and down-type quark masses via the superpotential.
In the MSSM, the particle content includes chiral superfields for three generations of quarks and leptons (with scalar partners), vector superfields for the SU(3) × SU(2) × U(1) gauge groups (with gauginos), and the two Higgs chiral superfields H_u and H_d, whose scalar components give masses to fermions while their fermionic components (Higgsinos) form part of the spectrum. Soft breaking parameters in the MSSM, such as universal scalar masses m_0 and gaugino masses m_{1/2} at a high scale, are evolved via renormalization group equations to predict low-energy spectra, enabling gauge coupling unification while addressing the hierarchy problem without excessive fine-tuning. As of 2025, extensive searches at the Large Hadron Collider have not detected supersymmetric particles, placing strong constraints on many models while prospects for discovery remain at higher luminosities.
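The algebraic core of the section, the anticommutator relating supercharges to the Hamiltonian, can be made concrete in supersymmetric quantum mechanics (a standard toy model, not the field-theoretic construction itself). Pairing a truncated bosonic oscillator with a two-state fermion gives a nilpotent supercharge Q and a Hamiltonian H = \{Q, Q^\dagger\} whose levels come in boson-fermion degenerate pairs:

```python
import numpy as np

N = 10                                     # bosonic Fock-space truncation
b = np.diag(np.sqrt(np.arange(1, N)), 1)   # boson annihilation operator

# Two-state fermion: f lowers the occupied state
f = np.array([[0.0, 1.0], [0.0, 0.0]])

# Supercharge Q = f^dagger (x) b : trades one boson for one fermion
Q = np.kron(f.conj().T, b)
Qd = Q.conj().T

# Q is nilpotent, and the Hamiltonian is the anticommutator {Q, Q^dagger}
assert np.allclose(Q @ Q, 0)
H = Q @ Qd + Qd @ Q

evals = np.sort(np.linalg.eigvalsh(H))
print(np.round(evals, 6))
# Nonzero levels pair up between the boson and fermion sectors; in the exact
# (untruncated) model the zero-energy ground state is unique, and the second
# zero seen here is an artifact of the finite Fock space.
assert np.isclose(evals[0], 0)             # supersymmetric ground state at E = 0
assert np.isclose(evals[2], evals[3])      # first excited level doubly degenerate
```

The pairwise degeneracy is the quantum-mechanical shadow of the tree-level boson-fermion mass equality in unbroken SUSY, and lifting it is exactly what the soft-breaking terms do.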

Topological quantum field theories

Topological quantum field theories (TQFTs) are quantum field theories whose observables, such as partition functions, are topological invariants of the underlying manifold, independent of the choice of metric or geometric details. These theories emerge from gauge-theoretic constructions where the action is diffeomorphism-invariant and metric-independent, ensuring that physical predictions remain unchanged under continuous deformations of spacetime. A prominent example is the topological Yang-Mills theory in four dimensions, defined by S = \int \operatorname{tr}(F \wedge F), where F is the curvature two-form of the gauge connection; this action is metric-independent and topological, leading to observables that probe the global structure of four-manifolds. In three dimensions, Chern-Simons theory exemplifies a TQFT with the action S = \frac{k}{4\pi} \int \operatorname{tr}\left(A \wedge dA + \frac{2}{3} A \wedge A \wedge A\right), where A is the gauge-field one-form and k is an integer level parameter determining the quantization. The partition function of this theory on a closed three-manifold yields a topological invariant, while the expectation value of Wilson loop operators—traces of the holonomy around closed curves—computes knot and link invariants, such as the Jones polynomial, providing a field-theoretic origin for these mathematical objects. The Donaldson-Witten theory in four dimensions, derived by topological twisting of \mathcal{N}=2 supersymmetric Yang-Mills theory, connects directly to Donaldson polynomials, which classify smooth four-manifolds through invariants built from the moduli space of anti-self-dual connections. In this framework, correlation functions of twist-invariant operators correspond to these polynomials, offering a quantum field theoretic reinterpretation of Donaldson's original gauge-theoretic constructions.
A foundational mathematical link in TQFTs is provided by the Atiyah-Singer index theorem, which equates the analytical index of a Dirac-type operator D on a compact manifold M to a topological integral over the manifold: \operatorname{index}(D) = \int_M \operatorname{ch}(F) \wedge \hat{A}(M), where \operatorname{ch}(F) is the Chern character of the curvature F and \hat{A}(M) is the A-roof genus; this theorem underpins the computation of zero modes and partition functions in supersymmetric TQFTs, ensuring their topological nature. Beyond pure mathematics, TQFTs find applications in condensed matter physics, particularly in describing the edge states of the quantum Hall effect as chiral conformal field theories that arise from the bulk topological order. The effective theory for these edge modes captures the universal transport properties, such as quantized Hall conductance, through a chiral boson framework derived from the abelian Chern-Simons description of the bulk.
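The bulk topological invariant behind the quantized Hall response can be computed directly on a lattice. The sketch below evaluates the Chern number of the lower band of a standard two-band model (the Qi-Wu-Zhang Hamiltonian, an illustrative choice not taken from the text) using the discretized Berry-curvature method of Fukui, Hatsugai, and Suzuki:

```python
import numpy as np

def chern_number(u, L=40):
    """Chern number of the lower band of
    h(k) = sin(kx) sx + sin(ky) sy + (u + cos kx + cos ky) sz,
    via plaquette products of Berry-phase link variables."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    ks = 2 * np.pi * np.arange(L) / L

    psi = np.empty((L, L, 2), dtype=complex)       # lower-band eigenvectors
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            h = np.sin(kx) * sx + np.sin(ky) * sy \
                + (u + np.cos(kx) + np.cos(ky)) * sz
            _, v = np.linalg.eigh(h)
            psi[i, j] = v[:, 0]

    total = 0.0
    for i in range(L):
        for j in range(L):                          # plaquette field strength
            U1 = np.vdot(psi[i, j], psi[(i + 1) % L, j])
            U2 = np.vdot(psi[(i + 1) % L, j], psi[(i + 1) % L, (j + 1) % L])
            U3 = np.vdot(psi[(i + 1) % L, (j + 1) % L], psi[i, (j + 1) % L])
            U4 = np.vdot(psi[i, (j + 1) % L], psi[i, j])
            total += np.angle(U1 * U2 * U3 * U4)
    return round(total / (2 * np.pi))

# Topological phase (|C| = 1) versus trivial phase (C = 0)
print(chern_number(1.0), chern_number(3.0))
assert abs(chern_number(1.0)) == 1 and chern_number(3.0) == 0
```

The integer output is the coefficient of the effective abelian Chern-Simons term for the bulk, and its nonzero value is what forces gapless chiral edge modes to appear.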

QFT in curved spacetimes

Quantum field theory in curved spacetimes treats the spacetime geometry as a fixed classical background metric g_{\mu\nu}, while quantizing matter fields propagating on this geometry. This semiclassical approximation allows the study of quantum effects in the presence of gravity without fully quantizing the gravitational field. The procedure, adapted from flat-spacetime methods, involves expanding fields in terms of modes that satisfy the curved-space field equation, with the choice of vacuum state playing a crucial role. For instance, in de Sitter space, the Bunch-Davies vacuum is selected as the unique state invariant under the de Sitter group, defined by analytic continuation from Euclidean signature and reducing to positive-frequency modes in the appropriate short-distance limit. A key phenomenon arising in this framework is the Unruh effect, where an observer undergoing uniform acceleration a in the Minkowski vacuum perceives a thermal bath of particles with temperature T = a / (2\pi) in natural units (\hbar = c = k_B = 1). This effect highlights the observer-dependence of the vacuum state, as the Rindler modes used by the accelerated observer mix positive and negative frequency Minkowski modes, leading to particle creation. The Unruh temperature arises from the periodicity in imaginary time along the accelerated trajectory, analogous to thermal field theory. The most celebrated application is Hawking radiation from black holes, predicted by analyzing quantum fields in the spacetime of a collapsing star. In his 1974 calculation, Hawking demonstrated that particles are created near the event horizon due to the mismatch between ingoing and outgoing vacuum modes, resulting in thermal emission with temperature T_H = \kappa / (2\pi), where \kappa is the surface gravity. For a Schwarzschild black hole of mass M, this yields T_H = 1/(8\pi M), implying gradual evaporation via energy loss. The semiclassical backreaction incorporates this through the expectation value of the stress-energy tensor \langle T_{\mu\nu} \rangle, sourced in the Einstein equations as G_{\mu\nu} = 8\pi \langle T_{\mu\nu} \rangle.
In four dimensions, for conformal fields, the trace anomaly contributes \langle T^\mu_\mu \rangle = \frac{1}{360 (4\pi)^2} (c C^2 + a E_4), where C^2 is the square of the Weyl tensor and E_4 the Euler density; a related form for the renormalized stress tensor near the horizon is \langle T_{\mu\nu} \rangle = \frac{1}{2880 \pi^2} (R_{\mu\rho\sigma\lambda} R^{\mu\rho\sigma\lambda} - R_{\mu\nu} R^{\mu\nu} + \frac{1}{30} \square R + \cdots), driving the evaporation process.
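Restoring SI units, the Hawking temperature reads T_H = \hbar c^3 / (8\pi G M k_B). A quick numerical sketch (standard constant values; the solar-mass example is illustrative) shows how cold astrophysical black holes are:

```python
import numpy as np

# Physical constants (SI)
hbar = 1.054_571_817e-34    # J s
c    = 2.997_924_58e8       # m / s
G    = 6.674_30e-11         # m^3 / (kg s^2)
k_B  = 1.380_649e-23        # J / K
M_sun = 1.988_47e30         # kg

def hawking_temperature(M):
    """SI form of T_H = 1/(8 pi M): T_H = hbar c^3 / (8 pi G M k_B)."""
    return hbar * c**3 / (8 * np.pi * G * M * k_B)

T = hawking_temperature(M_sun)
print(f"T_H for a solar-mass black hole: {T:.2e} K")
# ~6e-8 K, far below the cosmic microwave background temperature,
# so such a hole currently absorbs more radiation than it emits
assert 5e-8 < T < 7e-8
```

The inverse scaling with M means evaporation accelerates as the hole shrinks, which is the runaway implied by the backreaction discussion above.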

Renormalization and effective descriptions

Renormalization procedures

In quantum field theory, renormalization procedures address the ultraviolet divergences arising in perturbative calculations by systematically redefining the theory's parameters to absorb these infinities into finite counterterms, ensuring that physical observables remain well-defined and independent of the regularization scheme. This involves distinguishing between bare parameters, which are unphysical and cutoff-dependent, and renormalized parameters, which correspond to measurable quantities. The process relies on the structure of one-particle irreducible (1PI) correlation functions, where divergences are isolated and subtracted order by order in perturbation theory. The bare parameters, such as the bare mass m_0 and bare field \phi_0, are related to the renormalized ones via renormalization constants Z: for instance, m_0 = Z_m m and \phi_0 = \sqrt{Z} \phi, where Z_m and Z are determined from the divergent parts of the 1PI self-energy and two-point functions, respectively. These Z factors are computed perturbatively from Feynman diagrams contributing to the relevant 1PI functions, ensuring that the renormalized Green's functions are finite. In quantum electrodynamics (QED), the charge renormalization factor Z_3 specifically arises from the vacuum polarization diagram, which introduces a logarithmic divergence that is subtracted to yield the physical charge. A common regularization method is dimensional regularization, where spacetime is continued to d = 4 - \epsilon dimensions, producing poles in \epsilon that signal divergences. The minimal subtraction (MS) scheme then defines counterterms to cancel only these poles, without including finite parts, leading to renormalization constants of the form Z = 1 + \sum_{n=1}^\infty \frac{a_n}{\epsilon^n}.
This scheme preserves the simplicity of perturbative expansions and is widely used in gauge theories due to its manifest maintenance of symmetries. The running of couplings under changes in the renormalization scale \mu is captured by the beta function, defined as \beta(g) = \mu \frac{dg}{d\mu}, where g is the renormalized coupling; this quantifies how the effective coupling evolves to compensate for scale dependence in loop corrections. Renormalizability is established through power-counting arguments, which assess whether divergences can be absorbed by a finite number of counterterms. For \phi^4 theory in four dimensions, the superficial degree of divergence \delta for a diagram with E external legs is \delta = 4 - E, indicating that only mass, field, and coupling renormalizations suffice, as higher-point functions converge for E > 4. This criterion confirms the theory's renormalizability, distinguishing it from non-renormalizable interactions, whose superficial degree of divergence grows with the order of the expansion.
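The power-counting claim can be made mechanical. For a \phi^4 diagram with V vertices and E external legs in d = 4, the number of internal lines is I = (4V - E)/2 and the number of loops is L = I - V + 1, so \delta = 4L - 2I collapses to 4 - E for every V. A small sketch:

```python
def superficial_degree(V, E):
    """Superficial degree of divergence of a phi^4 diagram in d = 4,
    with V vertices and E external legs (E even, E <= 4V)."""
    I = (4 * V - E) // 2        # internal lines: each uses two vertex legs
    L = I - V + 1               # independent loop momenta
    return 4 * L - 2 * I        # d^4k per loop, 1/k^2 per propagator

# delta = 4 - E at every order: divergences confined to E <= 4 functions
for V in range(1, 8):
    for E in range(2, 4 * V + 1, 2):
        assert superficial_degree(V, E) == 4 - E
print(superficial_degree(2, 4), superficial_degree(3, 6))
```

Since \delta is independent of V, adding loops never worsens the divergence of a given Green's function, which is exactly why a fixed, finite set of counterterms suffices.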

Renormalization group flow

The renormalization group (RG) flow in quantum field theory describes the scale dependence of physical parameters, such as coupling constants and masses, as the renormalization scale μ varies, enabling the study of how effective theories emerge across different energy regimes. This evolution arises from the invariance of physical observables under changes in the cutoff or regularization scheme, leading to trajectories in the space of couplings that connect ultraviolet (high-energy) and infrared (low-energy) behaviors. Fixed points of this flow, where the beta function vanishes, correspond to scale-invariant theories, such as conformal field theories at critical points. The RG framework, pioneered by Kenneth Wilson in the early 1970s, revolutionized the understanding of critical phenomena and quantum chromodynamics (QCD) by revealing universal scaling behaviors independent of microscopic details. A central tool for analyzing RG flow is the Callan-Symanzik equation, which governs the scale dependence of correlation functions G in renormalized perturbation theory. For a theory with coupling g and mass m, the equation takes the form \left( \mu \frac{\partial}{\partial \mu} + \beta(g) \frac{\partial}{\partial g} - \gamma m \frac{\partial}{\partial m} \right) G = 0, where β(g) = μ dg/dμ is the beta function encoding the running of the coupling, and γ is the anomalous dimension of the mass. This differential equation, derived independently by Curtis Callan and Kurt Symanzik, ensures that Green's functions remain finite and scale appropriately after renormalization, allowing predictions for physical quantities at arbitrary energies from high-energy data. Solutions to the equation yield scaling relations, such as power-law behaviors near fixed points, which underpin the computation of critical exponents in statistical mechanics models. 
Fixed points occur at couplings g* satisfying β(g*) = 0, dividing the flow into relevant, irrelevant, and marginal directions according to the eigenvalues of the linearized flow. A seminal example is the Wilson-Fisher fixed point in the ε-expansion of φ^4 theory, where ε = 4 - d and d is the spacetime dimension; this nontrivial fixed point governs the critical behavior of the Ising universality class in three dimensions, with the coupling g* ≈ ε/3 + O(ε^2) to leading order in a standard normalization. Near such fixed points, correlation functions exhibit scaling forms G(r, μ) ~ μ^{Δ} f(r μ), where Δ is the scaling dimension, capturing universal properties such as the critical exponents of the transition. This perturbative approach, developed by Wilson and Michael Fisher, bridges continuum field theory with lattice models of phase transitions. In non-Abelian gauge theories like QCD, the RG flow displays asymptotic freedom: the strong coupling α_s(Q) weakens at high momentum scales Q, approaching zero logarithmically as α_s(Q) ≈ 1 / (b \ln(Q^2/Λ^2)), with b = (11 N_c - 2 N_f)/(12 π) > 0 for N_c = 3 colors and N_f ≤ 16 flavors. This behavior, discovered by David Gross, Frank Wilczek, and David Politzer, implies that perturbative methods apply at short distances, explaining the point-like behavior of quarks in deep inelastic scattering while confinement dominates at long distances. The positive coefficient b ensures that the coupling flows to zero in the ultraviolet, in contrast to theories such as QED, whose coupling grows toward a Landau pole at high energies. Operators in the effective action are classified by their relevance under the flow according to scaling dimensions Δ = d - y, where y is the RG eigenvalue and d the spacetime dimension; operators with y > 0 (Δ < d) are relevant and grow in the infrared, those with y < 0 (Δ > d) are irrelevant and fade, while those with y = 0 (Δ = d) are marginal and may run logarithmically. This classification determines which terms dominate the low-energy effective action, with relevant operators such as mass terms driving flows away from fixed points and irrelevant ones justifying truncations in effective descriptions.
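The logarithmic fall-off of α_s can be evaluated directly from the one-loop formula; the sketch below treats Λ ≈ 0.2 GeV and N_f = 5 as illustrative inputs, not fitted values:

```python
import math

def alpha_s(Q: float, Lambda: float = 0.2, Nc: int = 3, Nf: int = 5) -> float:
    """One-loop running coupling alpha_s(Q) = 1 / (b ln(Q^2/Lambda^2)),
    with b = (11 Nc - 2 Nf) / (12 pi); Q and Lambda in GeV.
    b > 0 for Nf <= 16, so the coupling falls logarithmically with Q:
    asymptotic freedom."""
    b = (11 * Nc - 2 * Nf) / (12.0 * math.pi)
    return 1.0 / (b * math.log(Q**2 / Lambda**2))

# Weak at collider scales, growing toward Lambda at low energies:
values = [round(alpha_s(Q), 3) for Q in (2.0, 10.0, 100.0, 1000.0)]
```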
This classification of relevant, irrelevant, and marginal operators originates from Wilson's analysis of operator mixing under rescaling, highlighting universality classes in which irrelevant microscopic details decouple. The conceptual foundation of RG flow was advanced in the 1970s through block-spin transformations, in which lattice degrees of freedom are coarse-grained by averaging spins over blocks of the lattice to generate a sequence of effective Hamiltonians at coarser scales, revealing fixed points iteratively. Kenneth Wilson introduced this real-space method to approximate the continuum limit, while Alexander Polyakov applied similar rescaling ideas to gauge theories, connecting block averaging to the renormalization group equations in perturbative contexts. These transformations provide an intuitive picture of how short-distance fluctuations are integrated out, yielding scale-dependent couplings that match the continuum equations.
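The block-spin idea can be illustrated with a toy majority-rule coarse-graining of an Ising spin configuration; this sketch implements only the averaging step of a real-space RG transformation, not a full RG calculation:

```python
import numpy as np

def block_spin(spins: np.ndarray) -> np.ndarray:
    """Coarse-grain a 2D Ising configuration (+1/-1 entries) by a
    2x2 majority rule, ties broken toward +1. Each step halves the
    linear size, integrating out short-distance fluctuations."""
    L = spins.shape[0]
    block_sums = spins.reshape(L // 2, 2, L // 2, 2).sum(axis=(1, 3))
    return np.where(block_sums >= 0, 1, -1)

rng = np.random.default_rng(0)
config = rng.choice([-1, 1], size=(64, 64))   # disordered starting configuration
coarse = block_spin(block_spin(config))        # two RG steps: 64 -> 32 -> 16
```

Iterating the map and tracking effective couplings (e.g., via measured correlations) is what reveals the fixed-point structure in Wilson's construction.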

Effective field theories

Effective field theories (EFTs) provide a systematic framework for describing the physics of quantum field theories at energy scales much lower than the fundamental high-energy scales of the underlying theory, by integrating out heavy degrees of freedom to obtain an effective low-energy description. This approach, rooted in renormalization group ideas, allows for the construction of Lagrangians that capture the long-distance behavior, with short-distance effects suppressed by powers of the cutoff scale \Lambda. In EFTs, the effective Lagrangian is expanded in terms of local operators ordered by their mass dimension, enabling power-counting rules to assess the importance of different contributions at low energies. The Wilsonian integration procedure forms the basis for deriving these effective theories, where high-momentum modes above a cutoff are integrated out to yield an effective action for the low-energy modes. The resulting effective Lagrangian takes the form \mathcal{L}_\text{eff} = \mathcal{L}_0 + \sum_n \frac{c_n}{\Lambda^n} \mathcal{O}_n, where \mathcal{L}_0 is the leading renormalizable part, the \mathcal{O}_n are local operators of dimension greater than four, the coefficients c_n are dimensionless numbers of order one, and \Lambda is the high-energy scale associated with the integrated-out physics. This expansion organizes interactions hierarchically, with higher-dimensional operators contributing corrections suppressed by powers of the low-energy scale over \Lambda. The validity of the EFT holds for processes with energies E \ll \Lambda, beyond which new physics from the ultraviolet (UV) completion becomes relevant. A prominent example is chiral effective field theory (ChEFT), which describes low-energy pion interactions arising from the spontaneous breaking of chiral symmetry in quantum chromodynamics (QCD).
The leading-order Lagrangian is \mathcal{L} = \frac{f^2}{4} \operatorname{Tr} (\partial^\mu U \partial_\mu U^\dagger) + \cdots, where U = \exp(i \pi^a \tau^a / f) parameterizes the pion fields \pi^a, f \approx 93 MeV is the pion decay constant, and the ellipsis denotes higher-order terms. Power counting in ChEFT organizes the expansion in powers of momentum p over the chiral symmetry breaking scale \Lambda \sim 1 GeV, with contributions scaling as (p / \Lambda)^{2k} at order k, allowing precise predictions for pion scattering and other low-energy processes. The coefficients of higher-order terms are determined by matching to the underlying QCD dynamics or to experimental data. Matching conditions ensure that the EFT reproduces the predictions of the full UV theory at the scale where heavy particles are integrated out, by equating matrix elements or Green's functions computed in both frameworks. A classic illustration is the four-fermion Fermi theory of weak interactions, which serves as the low-energy EFT of the electroweak theory below the W-boson mass scale M_W \approx 80 GeV; the effective coupling G_F / \sqrt{2} = 1 / (2 v^2) matches the tree-level exchange of W bosons, with v the Higgs vacuum expectation value, enabling accurate descriptions of processes such as beta decay and the muon lifetime at energies much below M_W. The Appelquist-Carazzone decoupling theorem formalizes how heavy particles influence low-energy physics, stating that their effects appear only as local operators in the EFT suppressed by powers of their mass M, specifically contributing corrections of order 1/M^2 to light-particle amplitudes when E \ll M. This theorem, proven perturbatively, guarantees that heavy fields do not propagate at low energies and that their virtual contributions can be absorbed into the effective coefficients, preserving the separation of scales in asymptotically free theories like QCD. Exceptions occur in cases of long-range forces or exact symmetries, but in generic quantum field theories decoupling holds, justifying the EFT approximation.
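The tree-level matching of Fermi theory onto W exchange can be checked numerically; a sketch assuming the standard electroweak value v ≈ 246 GeV as input:

```python
import math

def fermi_constant(v_gev: float = 246.22) -> float:
    """Tree-level EFT matching: G_F / sqrt(2) = 1 / (2 v^2), i.e. the
    four-fermion coupling inherited from W-boson exchange once the W
    is integrated out (valid for E << M_W). Returns G_F in GeV^-2."""
    return math.sqrt(2) / (2.0 * v_gev**2)

G_F = fermi_constant()
# Tree-level matching reproduces the measured G_F ~ 1.166e-5 GeV^-2.
```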
An important application is Heavy Quark Effective Theory (HQET), tailored for systems containing a heavy quark of mass m_Q \gg \Lambda_\text{QCD}, such as b-quarks in B mesons. In HQET, the heavy quark field is redefined as h_v = e^{-i m_Q v \cdot x} P_+ Q, where v is the quark velocity and P_+ a positive-energy projector, leading to an effective Lagrangian \mathcal{L}_\text{HQET} = \bar{h}_v i v \cdot D h_v + \mathcal{O}(1/m_Q), which organizes non-perturbative QCD effects and simplifies calculations of heavy hadron spectra and decay form factors. This framework has been crucial for interpreting B-meson decays at B-factory experiments, providing model-independent predictions aligned with heavy quark symmetry. The running of EFT coefficients can be analyzed using renormalization group flows to resum large logarithms between scales.

Non-renormalizable theories

In quantum field theories, renormalizability is assessed through power counting, which evaluates the superficial degree of divergence δ for Feynman diagrams as a function of the number of loops L, external legs, and interaction vertices. For non-renormalizable theories, δ > 0 and grows with L (typically δ ∝ L), implying that higher-order loop corrections introduce new divergent structures not absorbable by a finite set of counterterms, thus requiring infinitely many parameters for renormalization. This contrasts with renormalizable cases, where δ is bounded independently of the loop order, allowing divergences to be controlled with finitely many adjustments. A concrete example is the scalar φ⁶ theory in four spacetime dimensions, where the interaction term λ φ⁶ / 6! has a coupling λ with mass dimension [λ] = -2, leading to δ = 2 for the one-loop self-energy diagram and higher values at more loops, confirming its non-renormalizable nature. In comparison, the φ⁴ theory in three dimensions serves as a superrenormalizable contrast, with [λ] = 1 > 0, yielding a degree of divergence that decreases with additional loops, so only finitely many diagrams diverge. Non-renormalizable theories also suffer from violations of perturbative unitarity at high energies. Specifically, tree-level scattering amplitudes for processes involving 2n external legs grow as |A| ∼ (E / Λ)^{2n-4}, where E is the center-of-mass energy and Λ is a characteristic scale set by the coupling dimensions; this exceeds the unitarity bound (more precisely, the partial-wave bound |a_l| ≤ 1) when E ≫ Λ, signaling the breakdown of perturbation theory.
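The power-suppression of higher-dimensional operators, and its breakdown as E approaches Λ, can be made concrete with a one-line estimate; the 1 TeV scale below is purely illustrative:

```python
def operator_suppression(E: float, Lambda: float, dim: int) -> float:
    """Relative size (E/Lambda)^(dim-4) of a dimension-`dim` operator's
    contribution at energy E: negligible for E << Lambda, but of order
    one (perturbative unitarity lost) as E approaches Lambda."""
    return (E / Lambda) ** (dim - 4)

Lambda = 1000.0  # illustrative cutoff of 1 TeV (in GeV)
low = operator_suppression(10.0, Lambda, dim=6)    # safely suppressed
high = operator_suppression(900.0, Lambda, dim=6)  # expansion breaking down
```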
Historically, Einstein's general relativity provides a prominent example when quantized as an effective QFT around flat spacetime, with the Einstein-Hilbert action ∫ √-g R / (16π G_N) featuring the Newton constant G_N of mass dimension [G_N] = -2, which generates non-renormalizable vertex factors scaling as κ E² (where κ ∼ √G_N) and thus δ = 2 + 2L > 0. The resolution to these pathologies lies in treating non-renormalizable theories as effective field theories valid only below the scale Λ, where the effective Lagrangian is systematically expanded and truncated to low powers of (E/Λ), absorbing divergences order by order while anticipating an ultraviolet completion by new physics at Λ to preserve unitarity non-perturbatively. This EFT framework, as elaborated in the effective field theories section, restores predictive power for low-energy phenomena without requiring full renormalizability.

Perturbative and non-perturbative methods

Perturbative expansions

In quantum field theory, perturbative expansions provide a systematic approach to calculating physical quantities by expanding around the free-field limit, treating interactions as small perturbations parameterized by a coupling constant. These expansions express correlation functions or scattering amplitudes as power series in the coupling, where each term corresponds to contributions from increasingly complex interaction processes. Feynman diagrams serve as a graphical basis for organizing these terms, facilitating the computation of matrix elements at successive orders in the coupling. A foundational tool for evaluating these perturbative series is the Dyson-Wick theorem, which expresses time-ordered products of quantum fields in the interaction picture as sums of normal-ordered products plus all possible full contractions, with contractions defined by the free-field propagators. This theorem, combining Dyson's time-ordering formalism with Wick's contraction rules, reduces the evaluation of vacuum expectation values to combinatorial sums over Wick contractions, enabling the explicit computation of higher-order terms in the expansion of the S-matrix or generating functional. The theorem was originally developed in the context of quantum electrodynamics but applies generally to interacting field theories. Perturbative series in quantum field theory are typically asymptotic rather than convergent, meaning they diverge for any finite coupling but provide accurate approximations when truncated at an optimal order. To improve convergence, Borel resummation transforms the series \sum_{n=0}^\infty a_n g^n, where a_n \sim n!, into its Borel transform B(t) = \sum_{n=0}^\infty \frac{a_n}{n!} t^n and integrates along a suitable contour: the resummed function is given by \frac{1}{g} \int_0^\infty dt \, e^{-t/g} B(t). This method is particularly effective for series whose divergence arises from instanton contributions or renormalons, yielding a finite result despite the divergence of the original series, provided the Borel plane lacks singularities on the positive real axis.
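Borel resummation can be demonstrated on the textbook model series \sum_n (-1)^n n! g^n, whose Borel transform 1/(1+t) has no singularities on the positive axis; a numerical sketch (plain NumPy, trapezoidal quadrature):

```python
import numpy as np

# Model divergent series: sum_n a_n g^n with a_n = (-1)^n n!, so the
# Borel transform is B(t) = sum_n (-t)^n = 1/(1 + t).

def borel_sum(g: float) -> float:
    """Borel sum f(g) = (1/g) int_0^inf e^{-t/g} B(t) dt; substituting
    t = g s gives f(g) = int_0^inf e^{-s} / (1 + g s) ds."""
    s = np.linspace(0.0, 60.0, 600_001)      # e^{-60} makes the tail negligible
    vals = np.exp(-s) / (1.0 + g * s)
    return float(np.sum((vals[1:] + vals[:-1]) * 0.5 * np.diff(s)))  # trapezoid

def truncated_series(g: float, order: int) -> float:
    """Partial sum of the divergent series; accurate only up to the
    optimal truncation order n ~ 1/g."""
    total, a_n = 0.0, 1.0
    for n in range(order + 1):
        total += a_n * g**n
        a_n *= -(n + 1)                      # a_{n+1} = -(n+1) a_n
    return total

f_resummed = borel_sum(0.1)                  # finite despite the n! growth
f_truncated = truncated_series(0.1, 10)      # agrees to ~1e-3 at g = 0.1
```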
Seminal analyses demonstrated Borel summability for planar diagrams in large-N gauge theories. Another powerful perturbative technique is the large-N expansion, where 1/N serves as the small expansion parameter in theories with global O(N) symmetry, such as the O(N) vector model. In the limit N \to \infty, interactions become exactly solvable via saddle-point methods, with quantum fluctuations captured by a systematic 1/N series; for instance, in the O(N) \phi^4 theory, the two-point function exhibits a resummed bubble chain that generates a non-trivial anomalous dimension. This approach isolates leading diagrammatic contributions, such as planar graphs, and has been applied to critical phenomena and matrix models. The method originated in studies of the O(N) nonlinear sigma model and was extended to gauge and matrix quantum field theories. In scalar \phi^4 theory, Schwinger-Dyson equations offer a non-perturbative framework that, when truncated appropriately, allows summation of perturbative series to all orders by relating correlation functions through functional differential equations derived from the path integral or effective action. For the \phi^4 model with Lagrangian \mathcal{L} = \frac{1}{2} (\partial \phi)^2 + \frac{m^2}{2} \phi^2 + \frac{\lambda}{4!} \phi^4, the equations for the two- and four-point functions close in the large-N limit, yielding exact solutions that resum daisy and superdaisy diagrams, thereby capturing effects like screening masses beyond fixed-order perturbation theory. These equations, originally formulated for quantum electrodynamics, have been rigorously applied to \phi^4 theory to study triviality and the continuum limit. Perturbative calculations in massless theories often encounter infrared and collinear divergences, arising from soft or collinear emissions, which spoil individual contributions but cancel in inclusive observables.
These divergences are handled through factorization theorems, which separate the cross section into hard, soft, and collinear factors, with the latter regulated by renormalization-group evolution; for example, in deep inelastic scattering, the parton distribution functions absorb collinear singularities, while jet functions capture collinear dynamics. This framework, rooted in the Kinoshita-Lee-Nauenberg theorem, ensures infrared safety and enables precise predictions for collider processes such as jet production. Seminal developments in abelian and non-abelian gauge theories established the universal structure of these factorizations.

Non-perturbative techniques

Non-perturbative techniques in quantum field theory address phenomena that cannot be captured by perturbative expansions around free-field configurations, particularly in regimes of strong coupling or where non-analytic effects like tunneling dominate. These methods often rely on exact or semi-classical solutions to the classical field equations in Euclidean signature, providing insights into vacuum structure, confinement, and symmetry breaking. Instantons, as finite-action solutions representing tunneling between vacua, exemplify such approaches by contributing exponentially suppressed terms to correlation functions. Instantons arise as saddle-point contributions to the Euclidean path integral, which formalizes the theory's partition function and observables in a non-perturbative manner. In pure Yang-Mills theory, the seminal single-instanton solution for the SU(2) gauge group was constructed by Belavin, Polyakov, Schwartz, and Tyupkin in 1975, revealing self-dual field configurations that minimize the action subject to non-trivial topology. These pseudoparticle solutions describe tunneling processes in the inverted potential, with the classical action given by S = \frac{8\pi^2}{g^2} for topological charge q = 1, where g is the Yang-Mills coupling constant; this value saturates the topological lower bound on the action, S \geq \frac{8\pi^2 |q|}{g^2}. The instanton action leads to a non-perturbative factor e^{-S} in the path integral, encoding effects invisible to any weak-coupling series. The presence of fermions introduces zero modes in the instanton background, arising from the index theorem and reflecting broken chiral symmetries. 't Hooft demonstrated that integrating over these collective coordinates generates an effective multi-fermion vertex, which violates the axial U(1) symmetry but preserves the vector symmetries. This 't Hooft vertex, for instance, in QCD with three light flavors, takes the form of a 6-fermion interaction that contributes to processes like the eta-prime meson mass, resolving the U(1) problem through instanton-induced chiral symmetry breaking.
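The topological bound and the resulting exponential suppression are easy to quantify; a minimal sketch of the single-instanton weight:

```python
import math

def instanton_action(g: float, q: int = 1) -> float:
    """Topological bound S >= 8 pi^2 |q| / g^2, saturated by
    (anti-)self-dual configurations of topological charge q."""
    return 8.0 * math.pi**2 * abs(q) / g**2

def tunneling_weight(g: float) -> float:
    """Semiclassical suppression e^{-S} of a single instanton:
    non-analytic in g, hence invisible at any finite order of
    perturbation theory."""
    return math.exp(-instanton_action(g))

# Even a moderate coupling suppresses tunneling enormously:
w = tunneling_weight(1.0)   # e^{-8 pi^2}, roughly 5e-35
```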
In strong-coupling regimes, where perturbation theory fails due to large g, expansions around the disordered phase provide an alternative non-perturbative tool. The strong-coupling expansion in lattice gauge theories employs a character expansion of the plaquette action in terms of group representations, allowing systematic computation of Wilson loops and string tensions as power series in \beta = 2N/g^2 for SU(N). This method reveals confinement for small \beta, with the area law for Wilson loops emerging from minimal surfaces tiled by plaquettes, offering quantitative tests of the phase structure without relying on weak-coupling approximations. Dualities offer another powerful framework, relating strong- and weak-coupling descriptions through transformations of the theory's parameters. In N=4 super Yang-Mills theory, S-duality acts as an SL(2,Z) transformation under which the coupling transforms as g \to 1/g while interchanging electric and magnetic variables; this exchanges perturbative and solitonic sectors, with monopoles becoming perturbative particles at strong coupling. Evidence for this duality stems from exact computations of the partition function and scattering amplitudes, confirming its consistency across gauge groups.

Lattice field theory

Lattice field theory provides a non-perturbative framework for quantum field theory by discretizing continuous spacetime into a finite lattice of points separated by a spacing a, enabling numerical computations via Monte Carlo evaluation of path integrals, especially for strongly interacting systems like quantum chromodynamics (QCD). This approach was pioneered by Kenneth Wilson in the 1970s to address phenomena beyond the reach of perturbative methods, such as quark confinement in QCD. The core of lattice gauge theories lies in the formulation of the action for the gauge fields. For SU(N) gauge theories, the standard plaquette action is employed: S = \beta \sum_{\text{plaquettes}} \left(1 - \frac{1}{N} \Re \operatorname{tr} U_p \right), where U_p represents the oriented product of link variables U_\mu around an elementary plaquette, \beta = 2N/g^2 with g the bare coupling, and the sum runs over all plaquettes of the lattice. This approximates the continuum Yang-Mills action while preserving local gauge invariance through the link variables, which are elements of the SU(N) group. Wilson introduced this formulation in 1974 to model non-Abelian gauge theories on the lattice and explore their phase structure, including the confinement-deconfinement transition. Incorporating dynamical fermions requires careful discretization to avoid artifacts like fermion doublers—unwanted additional massless modes arising from the naive lattice derivative. Wilson's formulation addresses this by introducing the Wilson-Dirac operator: D_W = m + \sum_\mu \left[ \gamma_\mu \frac{\nabla_\mu + \nabla_\mu^*}{2} - \frac{r}{2} \nabla_\mu^* \nabla_\mu \right], where \nabla_\mu \psi(x) = U_\mu(x) \psi(x + \hat{\mu}) - \psi(x) is the forward covariant difference, \nabla_\mu^* \psi(x) = \psi(x) - U_\mu^\dagger(x - \hat{\mu}) \psi(x - \hat{\mu}) the backward one, m the bare mass, and r = 1 the Wilson parameter, which suppresses the doublers by assigning them masses of order 1/a while the physical mode remains light for small m.
This operator enters the path integral through the fermion determinant alongside the gauge action, but it explicitly breaks chiral symmetry, complicating simulations of processes sensitive to chiral dynamics. To realize chiral fermions on the lattice without doublers or explicit symmetry breaking, advanced formulations have been developed. Domain-wall fermions, proposed by Kaplan in 1992, construct chiral modes as bound states on a defect (domain wall) in an extra (fifth) dimension, where the bulk theory uses Wilson-like fermions with opposite masses on either side of the wall; in the limit of infinite extra dimension, exact chirality emerges at the wall while the doublers are gapped. Overlap fermions, introduced by Neuberger in the late 1990s, provide an alternative by constructing a lattice Dirac operator that satisfies the Ginsparg-Wilson relation, ensuring a modified but exact chiral symmetry at finite a. These methods have become essential for precision studies requiring chiral symmetry, such as weak matrix elements. A key feature of lattice field theory is the continuum limit, obtained by extrapolating simulations to a \to 0 while tuning parameters to hold physical correlation lengths fixed in lattice units; this ensures universality and recovery of the continuum quantum field theory, with lattice artifacts scaling as powers of a. In practice, this involves performing computations on ensembles with multiple lattice spacings and fitting to obtain continuum-extrapolated results. For QCD, lattice simulations using these formulations have yielded precise determinations of hadron masses, with quantities such as the nucleon mass agreeing with experiment to within ~1% or better after continuum and chiral extrapolations, as of 2024, validating the approach for non-perturbative strong-interaction phenomenology. Recent advances include multigrid solvers for faster computations and applications to beyond-Standard Model physics such as axion models.
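The doubler-lifting effect of the Wilson term can be seen already in the free theory; the sketch below evaluates the momentum-dependent mass term of the free Wilson-Dirac operator along one lattice direction:

```python
import math

def wilson_mass(p: float, m: float = 0.0, r: float = 1.0, a: float = 1.0) -> float:
    """Momentum-dependent mass term of the free Wilson-Dirac operator
    along one lattice direction: m + (r/a) (1 - cos(p a)).
    At p = 0 the physical mode keeps its bare mass m, while the
    would-be doubler at p = pi/a is lifted to m + 2r/a."""
    return m + (r / a) * (1.0 - math.cos(p * a))

physical = wilson_mass(0.0)        # stays at the bare mass (here 0)
doubler = wilson_mass(math.pi)     # 2r/a: decouples in the continuum limit
```

As a shrinks, the doubler mass 2r/a diverges, which is precisely how the spurious modes decouple from the continuum theory.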

Mathematical rigor and foundations

Axiomatic quantum field theory

Axiomatic quantum field theory seeks to provide a mathematically rigorous foundation for quantum field theory by specifying abstract postulates that encode key physical principles such as locality, relativistic covariance, and positivity, without relying on perturbative expansions or specific models. These frameworks emerged in the mid-20th century to address inconsistencies in early quantum field theory formulations and to enable proofs of fundamental theorems. Central to this approach are the Wightman axioms, developed by Arthur Wightman in the early 1950s and formally published with Lars Gårding in 1964, which define quantum fields as operator-valued tempered distributions on Minkowski spacetime. The axioms consist of five main postulates: a separable complex Hilbert space \mathcal{H} carrying the states of the theory; a unitary representation of the Poincaré group on \mathcal{H} ensuring relativistic invariance; a unique, Poincaré-invariant vacuum vector \Omega \in \mathcal{H} with positive energy spectrum (spectral condition), meaning the generator of time translations has spectrum bounded below by zero; field operators \phi(f) smeared with test functions f satisfying microcausality, where fields at spacelike-separated points commute; and cluster decomposition, which enforces locality by requiring that correlations between distant regions vanish in the limit of large spatial separation. Significant consequences follow from the axioms, including the Reeh-Schlieder theorem, proved in 1961, which demonstrates that the vacuum vector \Omega is cyclic and separating for the algebra of local observables in any nonempty open region. This implies that applying polynomials in the local fields to the vacuum generates a dense subspace of the full Hilbert space, underscoring the highly entangled nature of the vacuum state in relativistic quantum field theory.
During the 1950s and 1960s, Rudolf Haag and Huzihiro Araki advanced these ideas through seminal works on the structure of local observables and scattering theory, emphasizing the role of Poincaré invariance and the spectrum condition in deriving properties like the spin-statistics and PCT theorems, thereby solidifying the axiomatic framework's consistency. Cluster decomposition, as part of the Wightman postulates, further ensures the theory's locality by stipulating both weak and strong forms: the weak form requires vanishing correlations at large separations, while the strong form incorporates the uniqueness of the vacuum to prevent long-range order in interacting theories. To bridge abstract axioms with constructive methods, Konrad Osterwalder and Robert Schrader introduced a complementary set of axioms in 1973 for Euclidean Green's functions, or Schwinger functions, defined on Euclidean spacetime. These axioms include regularity (as tempered distributions), Euclidean invariance under translations and rotations, reflection positivity (ensuring a positive-definite inner product upon reflection about a time hyperplane), and an Euclidean analog of cluster decomposition for asymptotic independence. The Osterwalder-Schrader reconstruction theorem then establishes an equivalence between theories satisfying these axioms and those fulfilling the Wightman axioms in Minkowski spacetime, provided the Euclidean functions admit analytic continuation to Minkowski coordinates while preserving locality and the spectrum condition; this framework proved particularly useful for non-perturbative constructions in lower dimensions.

Constructive approaches

Constructive approaches in quantum field theory seek to establish the mathematical existence of interacting models on continuum spacetime, typically starting from Euclidean functional integrals and verifying satisfaction of axioms such as those proposed by Osterwalder and Schrader. These methods rely on rigorous control of infinite-volume limits and ultraviolet divergences, often using probabilistic techniques to define the theory without cutoffs. Successes have been limited to low dimensions, where ultraviolet problems are milder, providing concrete examples of non-trivial quantum fields. A foundational class of models is the P(\phi)_2 theories, which describe self-interacting scalar fields in two Euclidean dimensions with polynomial potentials P of even degree. These were constructed as probability measures on the space of distributions using reflection positivity and correlation inequalities, confirming the existence of the infinite-volume theory and its Osterwalder-Schrader reconstruction to a Wightman theory. The approach leverages analogies with classical statistical mechanics to bound correlation functions and establish cluster properties. The Glimm-Jaffe program advanced these constructions through cluster expansions for the \phi^4_2 model in two spacetime dimensions, treating the interaction as a perturbation of the free massive field. By deriving inductive bounds on multi-scale cluster contributions and applying iterative renormalization, they proved the existence of the continuum theory without spatial or temporal cutoffs, including the construction of local field operators and a unique physical vacuum state satisfying the Wightman axioms. This technique demonstrated non-trivial scattering and a mass gap. Within the Osterwalder-Schrader framework, reflection positivity ensures the correlation functions encode a positive-definite Hilbert space structure upon reconstruction, while infrared bounds control long-distance behavior to justify the infinite-volume limit. These tools were essential for verifying axiomatic properties in the above models. In the 1970s, efforts extended to higher dimensions and to theories with fermions, including constructive treatments of \phi^4 models in three dimensions.
A major challenge persists in higher dimensions: no interacting \phi^4 theory has been constructed in four space-time dimensions, owing to triviality, whereby the renormalized coupling vanishes in the continuum limit, reducing the theory to free fields—a consequence linked to the Landau pole in perturbation theory. Rigorous proofs using lattice approximations and renormalization group methods confirm this triviality for the four-dimensional model.

Challenges in rigorous formulation

One of the central challenges in the rigorous formulation of quantum field theory (QFT) is the triviality of the φ⁴ theory in four spacetime dimensions (φ⁴_4). Perturbative renormalization group analysis reveals a Landau pole, where the running coupling constant diverges at high energies, implying that to reach the continuum limit, the renormalized coupling must approach zero, resulting in a free (Gaussian) theory devoid of interactions. Rigorous non-perturbative proofs confirm triviality for φ⁴ theories in dimensions d > 4, showing that the continuum limits of Euclidean lattice fields are free fields. In four dimensions, the situation is marginal, with "marginal triviality" established through bounds on the renormalized coupling that force it to vanish in the scaling limit, linking directly to the critical four-dimensional Ising model. Seminal 1980s results by Aizenman and Fröhlich used geometric analysis and random-current representations to prove these bounds, providing strong evidence for triviality even in the borderline case of d = 4; more recent work in 2021 by Aizenman and Duminil-Copin strengthened these results for the scaling limits. Another major open problem is the Yang-Mills existence and mass gap problem, one of the Clay Mathematics Institute's Millennium Prize Problems. This requires proving that, for any compact simple gauge group G, a non-trivial quantum Yang-Mills theory exists on ℝ⁴ satisfying the Wightman axioms (or their Euclidean counterpart), and that its Hamiltonian has a mass gap Δ > 0, meaning the energy spectrum above the vacuum is bounded below by a positive value. In the context of quantum chromodynamics (QCD), this gap corresponds to the absence of massless excitations and the presence of massive particles, but a fully rigorous proof remains elusive despite extensive lattice simulations supporting the gap's existence. Haag's theorem poses a foundational obstacle to the interaction picture in QFT.
It states that, for non-free theories, the Hilbert space representations of the free-field and interacting-field observables are unitarily inequivalent, rendering the standard interaction picture—where one evolves states with the free theory and treats interactions as perturbations—mathematically inconsistent except in the trivial free case. This theorem, first proved in 1955, implies that perturbative expansions cannot be rigorously justified within the usual framework for interacting relativistic fields, necessitating alternative approaches such as the Wightman or LSZ frameworks for scattering theory. Lattice models provide analogs for addressing QFT challenges, such as the seven-dimensional Ising model, which serves as a counterpart to φ⁴ theory above its upper critical dimension. Rigorous analyses in dimensions d ≥ 5, including d = 7, establish mean-field critical exponents (e.g., β = 1/2, γ = 1, ν = 1/2), with logarithmic corrections appearing at the upper critical dimension d = 4, confirming the absence of non-trivial interactions in the continuum limit, akin to QFT triviality. These results, building on Aizenman's geometric methods, highlight how high-dimensional theories achieve exact solvability for critical exponents, offering insights into QFT scaling but underscoring the difficulty of constructions in lower dimensions.

Applications beyond particle physics

Condensed matter physics

Quantum field theory (QFT) provides a powerful framework for describing collective phenomena in condensed matter systems, where interactions among many particles lead to emergent excitations that behave like fields. In solids and superfluids, quasiparticles such as phonons, magnons, and electron-like excitations are treated as quanta of underlying fields, allowing the application of QFT techniques to compute correlation functions, response functions, and phase transitions. This approach bridges microscopic Hamiltonians with macroscopic properties, emphasizing low-energy effective theories that capture universal behaviors beyond simple single-particle pictures. Fermi liquid theory reformulates the behavior of interacting fermions in metals as a QFT of quasiparticles, where low-energy excitations near the Fermi surface are long-lived particles with renormalized masses and interactions, analogous to free fermions but with Landau parameters describing scattering processes. This framework, developed by Landau, explains thermodynamic properties like specific heat and susceptibility through quasiparticle interactions treated perturbatively in QFT. In one dimension, however, interactions destroy the quasiparticle picture, leading to Luttinger liquid behavior characterized by bosonic collective modes for charge and spin densities, with power-law correlations instead of Fermi liquid quasiparticles. The Luttinger model, solvable exactly via bosonization, maps the interacting fermion system to a free bosonic QFT, revealing non-Fermi liquid properties like spin-charge separation. Superconductivity exemplifies QFT applications through the Bardeen-Cooper-Schrieffer (BCS) theory, which uses a mean-field approximation to describe electron pairing into a condensate, but the full quantum treatment employs the Nambu-Gorkov formalism to incorporate anomalous propagators for Cooper pairs as field excitations. 
This QFT approach captures the Higgs-like mechanism in which the superconducting condensate breaks gauge symmetry, leading to massive photon modes and the Meissner effect. For quantum spin chains, Haldane's mapping derives a low-energy effective QFT as the O(3) nonlinear sigma model, where the spin operators correspond to a unit vector field \mathbf{n} constrained to |\mathbf{n}|=1, with a topological \theta-term at \theta=\pi for half-integer spins; integer-spin chains are predicted to be gapped (the Haldane gap), while half-integer chains remain gapless. Renormalization group (RG) methods from QFT, pioneered by Kenneth Wilson in the 1970s, revolutionized the study of critical phenomena in condensed matter by treating phase transitions as fixed points of RG flows, applied to models like the \phi^4 theory that maps to the Ising universality class. The \epsilon-expansion, working around the upper critical dimension d_c=4 with \epsilon=4-d, computes critical exponents perturbatively, such as the anomalous dimension \eta \approx \epsilon^2/54 to leading order for the Ising case, enabling quantitative predictions in three dimensions. Wilson's renormalization group, initially developed in the context of particle physics, was adapted to condensed matter Hamiltonians via real-space block-spin transformations, resolving long-standing issues in understanding scaling near criticality. In three-dimensional topological insulators, the quantized electromagnetic response is captured by a topological field theory: an axion-like \theta-term \mathcal{L}_\theta = \frac{\theta e^2}{32\pi^2} \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma} with \theta=\pi enforced by time-reversal symmetry, implying protected gapless surface modes with a Chern-Simons-like surface response.
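The leading-order ε-expansion can be evaluated for general O(N) models; the sketch below uses the standard one-loop formulas for ν and η (the same normalization in which η = ε²/54 for the Ising case N = 1):

```python
def wilson_fisher_exponents(N: int, eps: float = 1.0):
    """Leading-order epsilon-expansion exponents at the O(N)
    Wilson-Fisher fixed point (eps = 4 - d):
        nu  = 1/2 + (N+2) / (4 (N+8)) * eps   + O(eps^2)
        eta = (N+2) / (2 (N+8)^2)    * eps^2  + O(eps^3)
    Setting eps = 1 extrapolates crudely to three dimensions."""
    nu = 0.5 + (N + 2) / (4.0 * (N + 8)) * eps
    eta = (N + 2) / (2.0 * (N + 8) ** 2) * eps**2
    return nu, eta

nu_ising, eta_ising = wilson_fisher_exponents(N=1)  # ~ (0.583, 1/54)
```

Higher orders in ε plus resummation are needed for precision values, but even this leading term captures the departure from the mean-field result ν = 1/2.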

Quantum information and computing

Quantum field theory (QFT) provides foundational tools for understanding quantum information concepts, particularly through the lens of entanglement and its quantification in many-body systems. In relativistic QFTs, entanglement entropy measures the quantum correlations across spatial bipartitions, revealing universal scaling behaviors tied to the theory's symmetries and dimensions. For instance, in (1+1)-dimensional conformal field theories (CFTs), the entanglement entropy S for a subsystem consisting of an interval of length L in an infinite system is given by S = \frac{c}{3} \ln \left( \frac{L}{a} \right) + \text{constant}, where c is the central charge characterizing the CFT, and a is a short-distance cutoff. This formula, derived using the replica trick and conformal mapping, highlights how entanglement entropy diverges logarithmically with subsystem size, distinguishing critical systems from gapped ones and enabling the extraction of the central charge without direct computation of correlation functions. The AdS/CFT correspondence further bridges QFT to quantum information by interpreting holographic dualities as quantum error-correcting codes. In this framework, the bulk AdS spacetime emerges from boundary CFT degrees of freedom, where local bulk operators are reconstructible from highly entangled boundary subregions via entanglement wedge reconstruction. This structure ensures that bulk locality is protected against certain "errors" or erasures on the boundary, analogous to quantum error correction where logical information is redundantly encoded across physical qubits. Seminal work demonstrated that the Ryu-Takayanagi formula for holographic entanglement entropy aligns with these quantum error-correcting properties, allowing bulk recovery only from boundary regions whose entanglement wedge, bounded by the minimal surface, contains the bulk operator. Such insights suggest that AdS/CFT not only models quantum gravity but also inspires fault-tolerant quantum computing architectures leveraging holographic redundancy.
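The logarithmic scaling S = (c/3) ln(L/a) can be checked numerically for the simplest c = 1 example, a free-fermion chain at half filling, using the correlation-matrix method: the entropy of L contiguous sites follows from the eigenvalues of the two-point function restricted to the interval. This is a sketch; the matrix elements sin(π(i−j)/2)/(π(i−j)) are the known half-filling correlations of the infinite chain, and the fitted slope should approach c/3 = 1/3.

```python
import numpy as np

def entropy(L):
    """Entanglement entropy of L contiguous sites of an infinite
    free-fermion chain at half filling (correlation-matrix method)."""
    i = np.arange(L)
    d = i[:, None] - i[None, :]
    # C_ij = sin(pi*(i-j)/2) / (pi*(i-j)), with C_ii = 1/2
    C = np.where(d == 0, 0.5,
                 np.sin(np.pi * d / 2) / (np.pi * np.where(d == 0, 1, d)))
    nu = np.linalg.eigvalsh(C)
    nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]  # drop numerically pure modes
    return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))

S20, S100 = entropy(20), entropy(100)
slope = (S100 - S20) / np.log(100 / 20)
print(slope)  # close to c/3 = 1/3 for this c = 1 CFT
```

The slope extracted from just two interval sizes already sits within a few percent of 1/3, illustrating how the central charge can be read off from entanglement alone.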
Quantum simulations of QFTs on controllable quantum hardware offer a pathway to study non-perturbative dynamics intractable on classical computers, with lattice formulations providing a natural interface. Rydberg atom arrays, exploiting strong dipole blockade interactions, have emerged as a versatile platform for simulating gauge theories, where atomic states encode gauge fields and matter fields. For example, periodically driven Rydberg chains realize the real-time evolution of U(1) or Z_2 gauge theories, capturing phenomena like string confinement and deconfinement through emergent gauge-invariant dynamics. These setups scale to tens of sites, enabling observation of gauge string breaking and flux avalanches, as validated in experiments with neutral atom arrays. Proposals from the early 2010s targeted the Schwinger model—(1+1)-dimensional quantum electrodynamics—as a benchmark for quantum simulation, focusing on pair production and confinement. Early schemes using trapped ions mapped the staggered-fermion Schwinger Hamiltonian to spin chains, allowing digital simulation of vacuum decay via Trotterized time evolution under the gauge-invariant Hamiltonian. Feasibility studies extended this to ultracold atoms, assessing noise resilience and gate requirements for observing Schwinger mechanism analogs, with initial implementations demonstrating particle-antiparticle creation in small lattices by the mid-2010s. In 2025, advances in quantum hardware have enabled further progress, such as qudit-based quantum computers simulating quantum field theories to study particle interactions, and digital simulations of QFT processes. In (2+1)-dimensional topological quantum field theories (TQFTs), anyons serve as robust quasiparticles for topological quantum computing, exploiting non-Abelian braiding statistics for fault-tolerant gates. These theories describe gapped phases where excitations obey fractional statistics, with fusion rules and braiding matrices encoding quantum information non-locally.
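The Trotterized evolution mentioned above approximates exp(−iHt) by alternating short exponentials of noncommuting pieces of H. The toy two-spin Hamiltonian below is a hypothetical stand-in (not the actual Schwinger-model mapping); it demonstrates the characteristic 1/n error decay of a first-order Trotter product.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def U(H, t):
    """exp(-i H t) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

# Toy split H = A + B with [A, B] != 0, standing in for the
# interaction and mass/field terms of a lattice gauge Hamiltonian.
A = np.kron(X, X)
B = np.kron(Z, I2) + np.kron(I2, Z)
H = A + B
t = 1.0

def trotter(n):
    step = U(A, t / n) @ U(B, t / n)
    return np.linalg.matrix_power(step, n)

err = lambda n: np.linalg.norm(trotter(n) - U(H, t), 2)
print(err(10), err(20))  # first-order Trotter: error falls roughly like 1/n
```

Doubling the number of steps roughly halves the error, consistent with the leading bound (t²/2n)‖[A,B]‖; on hardware this trade-off sets the gate budget for a target accuracy.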
Kitaev's toric code model, a Z_2 TQFT, realizes Abelian anyons as e, m, and ε particles on a square lattice, where logical qubits are stored in the ground-state degeneracy and manipulated via anyon worldlines. Universal computation, however, requires non-Abelian anyons such as the Fibonacci type, whose braid-group representations enable Clifford and non-Clifford operations with exponential suppression of errors due to topological protection.
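The toric code's ground-state degeneracy and anyon gap can be verified by exact diagonalization on the smallest torus: a 2×2 periodic lattice with eight edge qubits. The edge indexing below is one arbitrary convention chosen for this sketch; the four degenerate ground states at energy −8 and the gap of 4 (one pair of anyons) follow from the stabilizer structure.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def string(op, sites, n=8):
    """Tensor product with `op` on `sites` and identity elsewhere."""
    return reduce(np.kron, [op if q in sites else I2 for q in range(n)])

# Edges of a 2x2 periodic square lattice: h(x,y) -> x+2y, v(x,y) -> 4+x+2y
h = lambda x, y: (x % 2) + 2 * (y % 2)
v = lambda x, y: 4 + (x % 2) + 2 * (y % 2)

# Star operators A_s: product of X on the four edges meeting vertex (x,y)
stars = [string(X, {h(x, y), h(x - 1, y), v(x, y), v(x, y - 1)})
         for x in (0, 1) for y in (0, 1)]
# Plaquette operators B_p: product of Z around plaquette (x,y)
plaqs = [string(Z, {h(x, y), h(x, y + 1), v(x, y), v(x + 1, y)})
         for x in (0, 1) for y in (0, 1)]

H = -sum(stars) - sum(plaqs)
e = np.sort(np.linalg.eigvalsh(H))
print(e[:5])  # four ground states at -8, then a gap of 4 to the first anyon pair
```

The fourfold degeneracy matches 2^{2g} on a genus g = 1 torus, and the gap of 4 reflects that stabilizer violations (anyons) are always created in pairs.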

Cosmology and gravity interfaces

Quantum field theory (QFT) plays a central role in inflationary cosmology, where a scalar field known as the inflaton drives the rapid exponential expansion of the early universe. The field \phi evolves according to its potential V(\phi), which dominates the energy density and leads to a quasi-de Sitter phase with nearly constant Hubble parameter H. This phase resolves classical cosmological puzzles such as the horizon and flatness problems by stretching initial quantum fluctuations to macroscopic scales. Perturbations in the inflaton field, denoted \delta\phi, originate as quantum fluctuations during inflation. These fluctuations, governed by QFT in curved spacetime, are amplified by the expansion, seeding the primordial density perturbations observed in the cosmic microwave background. In the new inflationary scenario, the spectrum of these perturbations arises primarily from quantum fluctuations of the Higgs-like field, resulting in a nearly scale-invariant amplitude \Delta_\phi \approx H / (2\pi). A key development in the 1980s was the chaotic inflation model, formulated as a QFT framework where inflation occurs for arbitrary initial values of the field in a broad class of potentials, such as V(\phi) = \frac{1}{2} m^2 \phi^2. This model, proposed by Andrei Linde in 1983, demonstrates that inflation is a generic outcome of chaotic initial conditions in the early universe, without requiring fine-tuned phase transitions. It provides a robust QFT-based description compatible with observations of large-scale structure. Following inflation, the universe reheats through the decay of the oscillating inflaton field into particles. This process often proceeds via parametric resonance, a QFT mechanism where the rapidly oscillating inflaton induces exponential growth in the occupation numbers of produced particles, akin to an instability in the Mathieu equation. In models with quartic interactions, this leads to efficient preheating, rapidly thermalizing the universe within a few oscillations. Semiclassical gravity provides an interface between QFT and general relativity in cosmological settings, treating the spacetime metric as classical while quantizing matter fields.
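For the quadratic potential above, the standard slow-roll formulas give concrete observational predictions. The sketch below evaluates them in reduced Planck units (M_Pl = 1), with inflation taken to end where the slow-roll parameter ε reaches 1; the choice N = 60 e-folds is an illustrative assumption.

```python
import math

def slow_roll(N):
    """Slow-roll predictions of V = (1/2) m^2 phi^2, reduced Planck units."""
    phi_end = math.sqrt(2.0)                 # epsilon(phi_end) = 1 ends inflation
    phi_i = math.sqrt(4.0 * N + phi_end**2)  # from N = (phi_i^2 - phi_end^2)/4
    eps = 2.0 / phi_i**2                     # epsilon = (1/2)(V'/V)^2
    eta = 2.0 / phi_i**2                     # eta = V''/V for the quadratic potential
    n_s = 1.0 - 6.0 * eps + 2.0 * eta        # spectral index, ~ 1 - 2/N
    r = 16.0 * eps                           # tensor-to-scalar ratio, ~ 8/N
    return n_s, r

ns, r = slow_roll(60)
print(ns, r)  # roughly 0.967 and 0.13
```

The predicted n_s ≈ 0.967 agrees well with CMB measurements, while r ≈ 0.13 exceeds current upper bounds, which is why the simple quadratic model is now observationally disfavored despite its pedagogical value.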
The backreaction of quantum fields on spacetime is captured by the semiclassical Einstein equations: G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G \langle T_{\mu\nu} \rangle, where \langle T_{\mu\nu} \rangle is the expectation value of the energy-momentum tensor in a quantum state. This framework accounts for effects like vacuum polarization and particle creation in expanding universes, influencing the dynamics of inflation and late-time acceleration. In cosmological contexts, the Hawking effect manifests as thermal particle production near horizons, analogous to black hole evaporation but in de Sitter-like spacetimes. Quantum fields in the Bunch-Davies vacuum exhibit a Gibbons-Hawking temperature T = H / (2\pi) due to the de Sitter horizon, leading to a steady flux of created particles that contributes to the effective cosmological constant. This semiclassical prediction highlights the interplay between QFT and curved geometry in driving late-universe expansion. The Unruh effect, extended to cosmology, interprets the thermal bath perceived by accelerated observers in de Sitter space as arising from the Hubble horizon. In this setting, de Sitter invariance implies a dynamical Unruh temperature associated with the expanding horizon, where uniformly accelerated trajectories detect particles with a Planckian spectrum modified by the expansion rate. This provides a unified QFT perspective on particle production in accelerating cosmologies. Eternal inflation emerges as a consequence of quantum fluctuations in the inflaton field, where stochastic jumps \delta\phi \sim H / (2\pi) prevent the field from uniformly reaching the end of inflation. In chaotic models, regions where \phi remains large continue inflating indefinitely, spawning an infinite multiverse of bubble universes with varying properties. This QFT-driven scenario, first introduced by Paul Steinhardt in 1983 and further developed by Alexander Vilenkin in the same year, implies a self-reproducing inflationary universe.
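As an order-of-magnitude illustration, the Gibbons-Hawking temperature T = ħH/(2πk_B) of the present-day Hubble horizon can be evaluated directly; the value H0 ≈ 70 km/s/Mpc used below is an illustrative assumption.

```python
import math

# Gibbons-Hawking temperature T = hbar * H / (2 * pi * k_B), SI units
hbar = 1.054571817e-34   # J s
k_B = 1.380649e-23       # J / K
Mpc = 3.0857e22          # m

H0 = 70e3 / Mpc                      # assumed Hubble rate, s^-1
T = hbar * H0 / (2 * math.pi * k_B)  # kelvin
print(T)  # of order 1e-30 K
```

The resulting temperature, roughly 3 × 10⁻³⁰ K, is utterly negligible today; during inflation, by contrast, H was enormously larger and the same T = H/(2π) sets the amplitude of the fluctuations discussed above.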

References

  1. [1]
    [PDF] Introductory Lectures on Quantum Field Theory
    – The need to introduce quantum fields, with the great complexity this implies. – Quantization of gauge theories and the role of topology in quantum phenomena.
  2. [2]
    [PDF] Quantum Field Theory - UCSB Physics
    Quantum field theory is the basic mathematical language that is used to describe and analyze the physics of elementary particles.
  3. [3]
    [PDF] AN INTRODUCTION TO QUANTUM FIELD THEORY - Erik Verlinde
    INTRODUCTION. Quantum field theory provides a successful theoretical framework for describing elementary particles and their interactions.
  4. [4]
    [PDF] Physics 215A: Quantum Field Theory Fall 2023
    Mar 20, 2024 · Quantum field theory (QFT) is the quantum mechanics of extensive degrees of freedom. What I mean by this is that at each point of space, ...
  5. [5]
    [PDF] A Very Short Introduction to Quantum Field Theory
    Nov 21, 2007 · Quantum electrodynamics, QED for short, is the theory that describes the interactions of photons with charged particles, particularly electrons ...
  6. [6]
    [PDF] HarlowQFT1.pdf - MIT
    1.3 Quantum field theory in quantum gravity. We have seen that quantum field theory gives a way to successfully combine quantum mechanics and special relativity ...
  7. [7]
    [PDF] The Search for Unity: Notes for a History of Quantum Field Theory
    Quantum field theory is the theory of matter and its interactions, which grew out of the fusion of quantum mechanics and special relativity in the late.
  8. [8]
    [PDF] Field Theory and Standard Model - arXiv
    Electromagnetic, weak, strong and also gravitational interactions are all related to local symmetries and described by Abelian and non-Abelian gauge ...
  9. [9]
    Quantum Field Theory, String Theory, and Predictions - Matt Strassler
    Sep 23, 2013 · Quantum field theory is the mathematical language of particle physics; quantum field theory equations are used to describe and predict the behavior of the ...
  10. [10]
    [PDF] Effective Field Theories Lecture 1 - ICTP – SAIFR
    Chiral perturbation theory: Describes the low energy interactions of mesons and baryons. The full theory is QCD, but the relation between the two theories (and.
  11. [11]
    The Standard Model | CERN
    The Standard Model includes the electromagnetic, strong and weak forces and all their carrier particles, and explains well how these forces act on all of the ...
  12. [12]
    [2006.01430] Chiral Perturbation Theory at NNNLO - arXiv
    Jun 2, 2020 · Chiral perturbation theory is a much successful effective field theory of quantum chromodynamics at low energies. The effective Lagrangian is ...
  13. [13]
    The deepest problem: some perspectives on quantum gravity - arXiv
Feb 16, 2022 · Quantum gravity is likely the deepest problem facing current physics. While traditionally associated with short distance nonrenormalizability.
  14. [14]
    Open Limitations of Quantum Gravity: a Brief Overview - ResearchGate
    Aug 2, 2024 · Relativistic quantum field theory (QFT) describes fundamental interactions between elementary particles occurring in an energy range up to ...
  15. [15]
    VIII. A dynamical theory of the electromagnetic field - Journals
  16. [16]
    [PDF] Improving our Understanding of the Klein-Gordon Equation
A problem raised early in the KG equation's history was that a second-order differential ... The question of apparently negative probability densities is more ...
  17. [17]
    The quantum theory of the electron - Journals
  18. [18]
    The quantum theory of the emission and absorption of radiation
    The new quantum theory, based on the assumption that the dynamical variables do not obey the commutative law of multiplication, has by now been developed ...
  19. [19]
    Zur Quantenmechanik der Gasentartung. Offprint from: Zeitschrift für ...
    First edition, extremely rare offprint, of this important paper, in which Jordan introduces his approach to quantum field theory, independent of Dirac's.
  20. [20]
  21. [21]
    [PDF] Quantum theory of wave fields II - Neo-classical physics
    On the quantum theory of wave fields, II. By W. Heisenberg in Leipzig and W. Pauli in Zurich. (Received on 7 September 1929). Translated by D. H. ...
  22. [22]
    The Electromagnetic Shift of Energy Levels | Phys. Rev.
The Electromagnetic Shift of Energy Levels, H. A. Bethe, Cornell University, Ithaca, New York, Phys. Rev. 72, 339 – Published 15 August, 1947.
  23. [23]
    On a Relativistically Invariant Formulation of the Quantum Theory of ...
S. Tomonaga; On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields, Progress of Theoretical Physics, Volume 1, Issue 2, 1 August 1946.
  24. [24]
    The Radiation Theories of Tomonaga, Schwinger, and Feynman
A unified development of the subject of quantum electrodynamics is outlined, embodying the main features both of the Tomonaga-Schwinger and of the Feynman ...
  25. [25]
    Conservation of Isotopic Spin and Isotopic Gauge Invariance
    The paper explores local isotopic spin rotations, leading to isotopic gauge invariance and a b field related to isotopic spin, similar to the electromagnetic ...
  26. [26]
    Partial-symmetries of weak interactions - ScienceDirect.com
    February 1961, Pages 579-588. Nuclear Physics. Partial-symmetries of weak interactions ... Glashow. Phys. Rev. Letters, 3 (1959), p. 570. View in Scopus. 2). J ...
  27. [27]
    A Model of Leptons | Phys. Rev. Lett. - Physical Review Link Manager
    A model of leptons. Steven Weinberg Laboratory for Nuclear Science and Physics Department, Massachusetts Institute of Technology, Cambridge, Massachusetts.
  28. [28]
    Weak and Electromagnetic Interactions - Inspire HEP
    Weak and Electromagnetic Interactions. Abdus Salam(. Imperial Coll., London and; ICTP, Trieste. ) May, 1968. 11 pages. Published in: Conf.Proc.C 680519 (1968) ...
  29. [29]
    Asymptotically Free Gauge Theories. I | Phys. Rev. D
Asymptotically free gauge theories of the strong interactions are constructed and analyzed. The reasons for doing this are recounted.
  30. [30]
    [PDF] The Standard Model | DAMTP - University of Cambridge
    The Standard Model is a subject covered in lectures on Particle Physics, with elementary introductions available, and assumes familiarity with quantum field ...
  31. [31]
    The Discovery of the W and Z Particles - Inspire HEP
The discovery of W and Z particles involved modifying a proton accelerator into a proton-antiproton collider at CERN, and designing detectors for evidence of ...
  32. [32]
    The Nobel Prize in Physics 2004 - NobelPrize.org
The Nobel Prize in Physics 2004 was awarded jointly to David J. Gross, H. David Politzer and Frank Wilczek for the discovery of asymptotic freedom.
  33. [33]
    Steven Weinberg and Higgs physics - ScienceDirect.com
    Finally, we summarize his important contributions in model-building of new physics with extended Higgs sectors and their possible impact in flavor physics and ...
  34. [34]
    Julian Schwinger: Source Theory and the UCLA Years - hep-ph - arXiv
May 12, 1995 · Julian Schwinger began the construction of Source Theory in 1966 in response to the then apparent failure of quantum field theory to describe strong ...
  35. [35]
    Lagrangian formalism for fields - Scholarpedia
    Aug 30, 2010 · The action principle states that the classical motion of a given physical system is such that it extremizes a certain functional of dynamical ...
  36. [36]
    1 Classical Field Theory - DAMTP
    We can determine the equations of motion by the principle of least action. ... Euler-Lagrange equations of motion for the fields ϕ a ,. ∂ μ ⁡ ( ∂ ⁡ ℒ ...
  37. [37]
    [PDF] Invariant Variation Problems
    Invariant Variation Problems. Emmy Noether. M. A. Tavel's English translation of “Invariante Variationsprobleme,” Nachr. d. König. Gesellsch. d. Wiss. zu ...
  38. [38]
    [PDF] A short review on Noether's theorems, gauge symmetries and ... - arXiv
Aug 30, 2017 · This is Noether's first theorem in field theory. Before moving to the examples, we show how to build a conserved charge from a conserved current ...
  39. [39]
    [PDF] Causality in Classical Field Theory - Clear Physics
Aug 21, 2022 · Abstract. In special relativity, the causality principle says that the speed at which information propagates from one place to another.
  40. [40]
    Zur Quantenelektrodynamik ladungsfreier Felder
    Cite this article. Jordan, P., Pauli, W. Zur Quantenelektrodynamik ladungsfreier Felder. Z. Physik 47, 151–173 (1928). https://doi.org/10.1007/BF02055793.
  41. [41]
    References - Pauli's Exclusion Principle - Cambridge University Press
    Pauli, W. and Weisskopf, V. (1934) 'Über die Quantisierung der skalaren relativistischen Wellengleichung', Helvetica Physica Acta 7, 709–31.Google Scholar.
  42. [42]
    Konfigurationsraum und zweite Quantelung | Zeitschrift für Physik A ...
    Fock, V. Konfigurationsraum und zweite Quantelung. Z. Physik 75, 622–647 (1932). https://doi.org/10.1007/BF01344458. Download citation. Received: 10 March 1932.
  43. [43]
    Space-Time Approach to Non-Relativistic Quantum Mechanics
    Non-relativistic quantum mechanics is formulated here in a different way. It is, however, mathematically equivalent to the familiar formulation.
  44. [44]
    Space-Time Approach to Quantum Electrodynamics | Phys. Rev.
In this paper two things are done. (1) It is shown that a considerable simplification can be attained in writing down matrix elements for complex processes ...
  45. [45]
    The Theory of Quantized Fields. I | Phys. Rev.
    The fundamental dynamical principle is stated as a variational equation for the transformation function connecting eigenvectors associated with different ...
  46. [46]
    gravitation and the electron - PNAS
    new principle of gauge invariance, which may go by the same name, has the character of general relativity since it contains an arbitrary func- tion X, and can ...
  47. [47]
    Broken Symmetries | Phys. Rev. - Physical Review Link Manager
    Abstract. Some proofs are presented of Goldstone's conjecture, that if there is continuous symmetry transformation under which the Lagrangian is invariant, then ...
  48. [48]
    Broken Symmetries and the Masses of Gauge Bosons
    Oct 11, 2013 · The 2013 Nobel Prize in Physics has been awarded to two of the theorists who formulated the Higgs mechanism, which gives mass to fundamental particles.
  49. [49]
  50. [50]
    [hep-ph/9709356] A Supersymmetry Primer - arXiv
    Sep 16, 1997 · I provide a pedagogical introduction to supersymmetry. The level of discussion is aimed at readers who are familiar with the Standard Model and quantum field ...
  51. [51]
    All possible generators of supersymmetries of the S-matrix
All possible generators of supersymmetries of the S-matrix; Rudolf Haag, Jan T. Łopuszański, Martin Sohnius ...
  52. [52]
    [hep-ph/9707209] Soft supersymmetry-breaking terms from ... - arXiv
    Sep 25, 1997 · Abstract: We review the origin of soft supersymmetry-breaking terms in N=1 supergravity models of particle physics.
  53. [53]
    [hep-ph/9908491] Naturalness and Supersymmetry - arXiv
    Aug 27, 1999 · Supersymmetry solves the gauge hierarchy problem of the Standard Model if the masses of supersymmetric partners of the SM particles are close ...
  54. [54]
    The Minimal Supersymmetric Standard Model (MSSM) - hep-ph - arXiv
    Jun 22, 1996 · The structure of the MSSM is reviewed. We first motivate the particle content of the theory by examining the quantum numbers of the known ...
  55. [55]
    Quantum field theory in de Sitter space: renormalization by point ...
    We examine the modes of a scalar field in de Sitter space and construct quantum two-point functions. These are then used to compute a finite stress tensor.
  56. [56]
    Particle creation by black holes | Communications in Mathematical ...
    Cite this article. Hawking, S.W. Particle creation by black holes. Commun.Math. Phys. 43, 199–220 (1975). https://doi.org/10.1007/BF02345020. Download citation.
  57. [57]
    The renormalization method in quantum electrodynamics - Journals
    A new technique has been developed for carrying out the renormalization of mass and charge in quantum electrodynamics, which is completely general.
  58. [58]
    [PDF] Lectures on effective field theory - ICTP – SAIFR
    (b) The effective vertex in the low energy effective theory (Fermi interaction). ... Rajendran, Cosmological Relaxation of the. Electroweak Scale, Phys. Rev. Lett ...
  59. [59]
    [PDF] Quantum Field Theory - DAMTP
    The second volume covers material lectured in “AQFT”. • L. Ryder, Quantum Field Theory. This elementary text has a nice discussion of much of the material in ...
  60. [60]
    [PDF] Renormalization Of A Class Of Non-Renormalizable Theories - arXiv
    The divergences of power-counting non-renormalizable theories are commonly subtracted away introducing infinitely many independent couplings in the theory. In ...
  61. [61]
    Quarks and Strings on a Lattice - SpringerLink
    Wilson, K.G. (1977). Quarks and Strings on a Lattice. In: Zichichi, A. (eds) New Phenomena in Subnuclear Physics. The Subnuclear Series, vol 13. Springer ...
  62. [62]
    A Practical Implementation of the Overlap Dirac Operator
Nov 9, 1998 · A practical implementation of the overlap Dirac operator $[1+\gamma_5\,\epsilon(H)]/2$ is presented.
  63. [63]
    Light hadron masses from lattice QCD | Rev. Mod. Phys.
Apr 4, 2012 · This article reviews lattice QCD results for light hadron masses, discussing formulations, and how to extract masses from lattice QCD ...
  64. [64]
  65. [65]
    The $\lambda(\varphi^4)_2$ quantum field theory without cutoffs. II ...
The λ(φ^4)_2 quantum field theory without cutoffs. II. The field operators and the approximate vacuum. Pages 362-401 from Volume 91 (1970), Issue 2 by James ...
  66. [66]
    Axioms for Euclidean Green's functions
About this article. Cite this article. Osterwalder, K., Schrader, R. Axioms for Euclidean Green's functions. Commun. Math. Phys. 31, 83–112 (1973).
  67. [67]
    Proof of the Triviality of $\phi^4_d$ Field Theory and Some Mean-Field ...
    Jul 6, 1981 · It is rigorously proved that the continuum limits of Euclidean $\phi^4_d$ lattice fields are free fields in d > 4.
  68. [68]
    Marginal triviality of the scaling limits of critical 4D Ising and $\phi^4_4$ ...
    Abstract. We prove that the scaling limits of spin fluctuations in four-dimensional Ising-type models with nearest-neighbor ferromagnetic interaction at or ...
  69. [69]
    Yang-Mills & the Mass Gap - Clay Mathematics Institute
    Experiment and computer simulations suggest the existence of a “mass gap” in the solution to the quantum versions of the Yang-Mills equations.
  70. [70]
    [PDF] Fermi-Liquid Theory - LPTMC
    Jan 16, 2025 · In a Fermi liquid, the elementary excitations (quasi-particles and quasi-holes) are in direct correspondence with the (particle or hole) ...
  71. [71]
    Renormalization Group and Critical Phenomena. I. Renormalization ...
    Nov 1, 1971 · Renormalization Group and Critical Phenomena. I. Renormalization Group and the Kadanoff Scaling Picture. Kenneth G. Wilson.
  72. [72]
    Topological Field Theory of Time-Reversal Invariant Insulators - arXiv
    Feb 24, 2008 · We show that the fundamental time reversal invariant (TRI) insulator exists in 4+1 dimensions, where the effective field theory is described by the 4+1 ...
  73. [73]
    [hep-th/0405152] Entanglement Entropy and Quantum Field Theory
May 18, 2004 · Entanglement Entropy and Quantum Field Theory, by Pasquale Calabrese and John Cardy.
  74. [74]
    [0905.4013] Entanglement entropy and conformal field theory - arXiv
May 25, 2009 · Entanglement entropy and conformal field theory, by Pasquale Calabrese and one other author.
  75. [75]
    [2408.02733] Quantum simulation of dynamical gauge theories in ...
    Aug 5, 2024 · Abstract page for arXiv paper 2408.02733: Quantum simulation of dynamical gauge theories in periodically driven Rydberg atom arrays.
  76. [76]
    Inflationary universe: A possible solution to the horizon and flatness ...
This collection of seminal papers from PRD highlights research that ... quantum field and string theory, gravitation, cosmology, and particle astrophysics.
  77. [77]
    Fluctuations in the New Inflationary Universe | Phys. Rev. Lett.
    Oct 11, 1982 · The spectrum of density perturbations is calculated in the new-inflationary-universe scenario. The main source is the quantum fluctuations of the Higgs field.
  78. [78]
    Reheating after Inflation | Phys. Rev. Lett.
    Dec 12, 1994 · We have found that typically at the first stage of reheating the classical inflation field 𝜑 rapidly decays into 𝜑 particles or into other ...
  79. [79]
    [1011.3336] On the Unruh effect in de Sitter space - arXiv
    Nov 15, 2010 · We give an interpretation of the temperature in de Sitter universe in terms of a dynamical Unruh effect associated with the Hubble sphere.
  80. [80]
    [astro-ph/0002156] Inflation and Eternal Inflation - arXiv
    Feb 7, 2000 · Title:Inflation and Eternal Inflation. Authors:Alan H. Guth (MIT). View a PDF of the paper titled Inflation and Eternal Inflation, by Alan H.