Renormalization

Renormalization is a systematic procedure in quantum field theory (QFT) that addresses ultraviolet divergences—infinities arising in perturbative expansions—by redefining bare parameters like masses, charges, and fields in terms of finite, observable quantities through the introduction of counterterms, thereby rendering calculations predictive and consistent with experimental data. The origins of renormalization trace back to the late 1940s amid challenges in quantum electrodynamics (QED), where calculations of processes like the Lamb shift yielded infinite results due to interactions with the vacuum. In 1947, Hans Bethe pioneered its application by computing the electromagnetic shift in hydrogen energy levels, effectively absorbing the divergence into the electron's mass renormalization to match the observed shift of approximately 1057 MHz. This insight was rapidly generalized by Julian Schwinger, Richard Feynman, and Sin-Itiro Tomonaga, who developed covariant formulations of QED, with Freeman Dyson providing a rigorous perturbative framework in 1949 that demonstrated the renormalizability of the theory to all orders in the coupling. These efforts transformed QED from a divergence-plagued theory into one capable of predictions accurate to parts per billion, such as the electron's anomalous magnetic moment. Beyond QED, renormalization proved essential for non-Abelian gauge theories, including quantum chromodynamics (QCD) and electroweak theory, forming the backbone of the Standard Model. In 1971, Gerard 't Hooft established the renormalizability of these theories, confirming that infinities could be absorbed into a finite number of parameters while preserving gauge invariance. 't Hooft and Martinus Veltman later introduced dimensional regularization in 1972 as a key tool for handling these infinities. The renormalization group (RG), first conceptualized by Ernst Stueckelberg and André Petermann in 1951 as a transformation group acting on coupling constants, later revealed how parameters "run" with energy scale via beta functions, explaining phenomena like the unification of forces at high energies. Kenneth Wilson's 1971 formulation of the RG for lattice models bridged QFT with statistical mechanics, enabling the study of critical phenomena and phase transitions, for which he received the 1982 Nobel Prize in Physics. Today, renormalization remains indispensable for beyond-Standard-Model physics, effective field theories, and lattice simulations, with techniques like the Bogoliubov-Parasiuk-Hepp-Zimmermann (BPHZ) theorem ensuring its mathematical rigor in handling all-order divergences. Its success underscores QFT's power in describing nature across scales, from subatomic particles to condensed matter systems.

Motivations from Classical Electrodynamics

Self-interactions in electrodynamics

In classical electrodynamics, a charged particle interacts with its own electromagnetic field, leading to self-interactions that manifest as both radiation reaction and self-energy contributions. These effects become problematic for point-like charges, as the particle's own field exerts a back-reaction force during acceleration, known as the radiation reaction or self-force. The Abraham-Lorentz formula captures this self-force in the non-relativistic limit, expressing it as \mathbf{F}_{\mathrm{rad}} = \frac{2 e^2}{3 c^3} \dot{\mathbf{a}}, where e is the charge, c is the speed of light, and \dot{\mathbf{a}} is the time derivative of the acceleration (jerk). This formula arises from the Larmor radiation power and momentum conservation, but its derivation assumes a finite-sized charge distribution to avoid immediate infinities in the field; for a true point particle, the accelerating charge would experience an ill-defined, divergent self-force due to the singular nature of its own Coulomb field. A related issue emerges in the computation of the electron's electromagnetic self-energy, which represents the infinite energy stored in the particle's own Coulomb field. For a point charge, the self-energy is obtained by integrating the electrostatic energy density u = \frac{E^2}{8\pi} (in Gaussian units) over all space: U = \int \frac{e^2}{8\pi r^4} \, 4\pi r^2 \, dr = \frac{e^2}{2} \int_{r_{\min}}^\infty \frac{dr}{r^2}, which diverges linearly as 1/r_{\min} when the lower limit r_{\min} \to 0. To render this finite, early models introduced a finite radius R as a cutoff, modeling the electron as a charged sphere; the resulting electromagnetic energy is then U = \frac{2}{3} \frac{e^2}{R}, which diverges as R \to 0 and implies an infinite electromagnetic mass contribution m_{\mathrm{em}} = \frac{2}{3} \frac{e^2}{R c^2}. This divergence highlighted the instability of point-particle models, as the self-energy would dominate and render the electron's total mass infinite without an arbitrary cutoff. In 1938, Paul Dirac addressed these infinities in his analysis of the classical theory of radiating electrons, starting from the Lorentz model of the electron as a small charged sphere and isolating the divergent self-energy so that the observed mass and the equations of motion remain well defined. Dirac's treatment incorporated radiation reaction while keeping physical quantities finite, though the stabilizing mechanisms required by an extended charge remained unresolved. These classical divergences in self-interactions foreshadowed similar infinities in quantum field theory, necessitating renormalization techniques.
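
A short numerical sketch can make the linear divergence explicit: evaluating the integral above with a lower cutoff r_{\min} (in arbitrary Gaussian-style units with e = 1, a purely illustrative choice) reproduces U = e^2/(2 r_{\min}) and shows it growing without bound as the cutoff is removed.

```python
import numpy as np
from scipy.integrate import quad

e = 1.0  # electron charge in arbitrary Gaussian-style units (illustrative)

def self_energy(r_min, r_max=1e6):
    """Electrostatic field energy of a point charge,
    U = (e^2/2) * int_{r_min}^{infinity} dr / r^2,
    evaluated with a finite lower cutoff r_min (the integral converges at large r)."""
    integrand = lambda r: e**2 / (2.0 * r**2)
    val, _ = quad(integrand, r_min, r_max)
    return val

for r_min in [1.0, 0.1, 0.01, 0.001]:
    print(f"r_min = {r_min:7.3f}  ->  U = {self_energy(r_min):10.2f}"
          f"   (analytic e^2/(2 r_min) = {e**2 / (2 * r_min):10.2f})")
```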

Early historical attempts

In the early twentieth century, classical electron theory faced challenges from the infinite self-energy of point charges, prompting initial efforts to manage these divergences through model adjustments. Hendrik Lorentz developed a model of the electron as a charged sphere with a finite radius, introducing a natural cutoff to the electromagnetic self-energy integral; this allowed the divergent electromagnetic mass to be absorbed into the observed inertial mass, marking an early form of mass renormalization. Henri Poincaré extended this framework in his 1905 and 1906 papers, addressing inconsistencies such as the "4/3 problem" where the electromagnetic momentum did not match the expected inertial mass. He proposed balancing the electromagnetic mass with non-electromagnetic "Poincaré stresses" or cohesive forces within the electron, effectively renormalizing the total mass by compensating for the electromagnetic contribution without altering the underlying theory. By the 1930s, as quantum mechanics advanced toward field theory formulations, Werner Heisenberg and Wolfgang Pauli encountered similar issues in their attempts to quantize electrodynamics. In their work on the quantum dynamics of wave fields, they identified divergent integrals in the electron self-energy and vacuum polarization, suggesting the inclusion of infinite counterterms to cancel these infinities and restore finite observables, though without a systematic procedure. Heisenberg further elaborated in 1934 on positron theory, highlighting logarithmic divergences that required such counterterms for consistency in perturbative expansions. A pivotal semi-empirical advance came in 1947 with Hans Bethe's calculation of the Lamb shift, observed experimentally that year. Bethe treated the effect as a finite energy shift by imposing a cutoff at the electron's rest energy, yielding a correction of approximately 1040 MHz to the 2S state energy without invoking full renormalization; this approach reconciled theory with measurement and foreshadowed modern techniques.

Divergences in Quantum Field Theory

Loop divergences in QED

In quantum electrodynamics (QED), perturbative calculations reveal ultraviolet divergences arising from loop diagrams in Feynman perturbation theory, where virtual particles propagate in closed loops with arbitrarily high momenta. These infinities first became apparent in the late 1940s through detailed computations of higher-order corrections to basic processes. Freeman Dyson analyzed them systematically in 1949, demonstrating that they appear throughout QED's perturbative expansions and necessitate a reappraisal of the theory's foundational parameters. A prominent example is the one-loop correction to the electron self-energy, depicted in the Feynman diagram where an electron line emits a virtual photon that loops back to rejoin the same line. This diagram contributes to the electron's mass renormalization, yielding a divergent shift \delta m \propto m \ln(\Lambda / m). The divergence is logarithmic, arising from the high-momentum region of the loop integral involving the photon and electron propagators. This echoes classical electrodynamics' infinite self-energy for a point charge, but in QED, it emerges quantum mechanically from the exchange of virtual photons. Another key loop diagram is vacuum polarization, where a photon propagator is corrected by a closed loop of electron-positron pairs. This insertion modifies the photon's effective charge screening and introduces a logarithmic divergence in the photon self-energy function \Pi(q^2), proportional to \ln(\Lambda^2 / m^2) at large momenta, where m is the electron mass. Dyson showed that such loop contributions pervade higher-order amplitudes, with divergences isolated to a few primitive graphs like the self-energy, vacuum polarization, and vertex corrections.
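
The logarithmic growth with the cutoff can be illustrated with a toy Euclidean loop integral of the type that appears in these corrections; the scalar integrand below, \int d^4k / (k^2 + m^2)^2, is a simplified stand-in rather than the full QED expression, and the mass and cutoff values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

m = 1.0  # mass scale (illustrative)

def toy_loop(cutoff):
    """Euclidean 4D integral int d^4k / (k^2 + m^2)^2 up to |k| = cutoff,
    using the angular volume 2*pi^2 of the unit 3-sphere."""
    integrand = lambda k: 2 * np.pi**2 * k**3 / (k**2 + m**2) ** 2
    val, _ = quad(integrand, 0.0, cutoff)
    return val

for L in [10.0, 1e2, 1e3, 1e4]:
    # Large-cutoff behaviour: pi^2 * (ln(L^2/m^2) - 1) + O(m^2/L^2), i.e. logarithmic growth.
    analytic = np.pi**2 * (np.log(L**2 / m**2) - 1)
    print(f"Lambda = {L:8.1f}   integral = {toy_loop(L):10.3f}   pi^2*(ln(L^2/m^2)-1) = {analytic:10.3f}")
```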

General perturbative expansions

In perturbative quantum field theories beyond quantum electrodynamics (QED), such as scalar and gauge theories, the expansion in powers of the coupling constant reveals divergences arising from loop integrals in Feynman diagrams, similar to the loop corrections observed in QED. These divergences manifest in higher-order terms of the S-matrix elements or Green's functions, necessitating regularization and renormalization to extract finite physical predictions. The structure of these expansions depends on the dimensions of the fields and the form of the interactions, with power-counting providing an initial assessment of potential divergences. Infrared divergences can also appear in theories with massless particles, requiring additional resummation techniques like the Bloch-Nordsieck resummation in QED. A prototypical example is scalar \phi^4 theory in four dimensions, described by the Lagrangian \mathcal{L} = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \frac{1}{2} m^2 \phi^2 - \frac{\lambda}{4!} \phi^4. At one loop, the tadpole diagram—a propagator closing into a loop attached to an external leg—generates a quadratically divergent contribution to the self-energy \Sigma(p^2), which shifts the bare mass parameter and requires mass renormalization to absorb the infinity. At two loops, the sunset diagram, in which two vertices are joined by three internal propagators, introduces further divergences, including logarithmic terms that contribute to both mass and field-strength (wave-function) renormalizations, as well as influencing the coupling constant through higher-order vertex corrections. These diagrams illustrate how subdivergences and overall divergences in \phi^4 theory can be systematically absorbed into redefinitions of the bare parameters, rendering the theory renormalizable. In non-Abelian gauge theories, such as quantum chromodynamics (QCD), perturbative expansions encounter more complex divergences due to the self-interacting nature of the gauge bosons. To handle gauge fixing and preserve the structure of the theory, Faddeev-Popov ghost fields are introduced, which are anticommuting scalar fields that cancel unphysical degrees of freedom in the path integral. The Becchi-Rouet-Stora-Tyutin (BRST) symmetry, a nilpotent global transformation mixing gauge fields, ghosts, and antighosts, ensures that renormalization respects gauge invariance, allowing divergent contributions from gluon self-interactions and quark loops to be absorbed into renormalized parameters without violating the Ward-Slavnov-Taylor identities. A key distinction in non-Abelian theories like QCD is the phenomenon of asymptotic freedom, where the strong coupling \alpha_s decreases as the energy scale increases, in contrast to the running coupling in QED, which grows logarithmically at high energies. This behavior, arising from the negative contribution of non-Abelian terms in the beta function, implies that perturbative expansions become more reliable at short distances or high momenta. The discovery of asymptotic freedom explained the near-free behavior of quarks probed at short distances and validated QCD as the theory of the strong force. To classify the severity of divergences in general perturbative diagrams, power-counting yields the superficial degree of divergence D = 4L - \sum_{\rm propagators} \delta_p I_p + \sum_{\rm vertices} \delta_v V_v, where L is the number of loops, \delta_p is the power of momentum in the denominator of the propagator (e.g., 2 for scalars and photons, 1 for fermions), I_p the number of internal propagators of that type, \delta_v the number of momentum factors (derivatives) at a vertex of type v (0 for \phi^4, 1 for the triple-gauge vertex), and V_v the number of such vertices. If D \geq 0, the diagram is potentially divergent, guiding the identification of counterterms needed for renormalization.
For QED specifically, D = 4 - \frac{3}{2} E_e - E_\gamma, where E_e and E_\gamma are the numbers of external electron and photon legs, respectively.
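
As a sketch, this QED power-counting formula can be applied mechanically to the classic one-loop amplitudes; the leg assignments below are standard bookkeeping, and the comments note where symmetries reduce the actual divergence below the superficial estimate.

```python
def qed_superficial_divergence(n_electron_legs, n_photon_legs):
    """Superficial degree of divergence D = 4 - (3/2) E_e - E_gamma for a QED amplitude."""
    return 4 - 1.5 * n_electron_legs - n_photon_legs

amplitudes = {
    "electron self-energy": (2, 0),   # D = 1, actual divergence only logarithmic
    "vacuum polarization": (0, 2),    # D = 2, reduced to logarithmic by gauge invariance
    "vertex correction": (2, 1),      # D = 0, logarithmic
    "light-by-light scattering": (0, 4),  # D = 0, but finite thanks to gauge invariance
    "electron-electron scattering": (4, 0),  # D = -2, superficially convergent
}
for name, (E_e, E_gamma) in amplitudes.items():
    D = qed_superficial_divergence(E_e, E_gamma)
    print(f"{name:30s} (E_e={E_e}, E_gamma={E_gamma}): D = {D:g}")
```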

Regularization Techniques

Momentum cutoff methods

Momentum cutoff methods regularize ultraviolet divergences in quantum field theory by imposing an upper limit \Lambda on the momenta entering loop integrals, thereby rendering them finite while preserving the four-dimensional structure of the theory. This artificial scale \Lambda represents a high-energy cutoff, beyond which contributions from virtual particles are neglected, allowing perturbative calculations to proceed systematically. The method introduces no fundamental new physics but serves as a temporary tool, with physical predictions obtained in the limit \Lambda \to \infty after renormalization absorbs cutoff-dependent terms into redefined parameters. A prominent implementation is the hard cutoff, which sharply restricts integration to regions where the momentum magnitude satisfies |p| < \Lambda. This is often applied directly to propagators, modifying the free scalar propagator to \frac{1}{p^2 - m^2 + i\epsilon} \theta(\Lambda - |p|), where \theta denotes the Heaviside step function. Such a truncation makes divergent integrals converge but can violate symmetries like gauge invariance unless carefully adjusted. To address these issues while maintaining covariance, the Pauli-Villars regulator introduces auxiliary "ghost" fields with large fictitious masses M_i (which play the role of the cutoff scale), contributing oppositely to the original fields and canceling divergences in combinations like \sum_i (-1)^i / (p^2 - M_i^2). This technique, originally developed by Pauli and Villars for Lorentz-invariant regularization in relativistic quantum field theories, effectively mimics a cutoff through the rapid falloff of the regulator propagators at high energies. In contrast, soft cutoff schemes apply gradual suppression to high momenta, avoiding the discontinuities of hard cutoffs that may complicate analytic continuation or numerical stability. A typical form involves multiplying integrands by factors like e^{-\delta k^2 / \Lambda^2}, where \delta > 0 controls the decay rate, providing an exponentially damped tail that preserves more symmetries and eases the treatment of Lorentz-invariant theories. These smooth regulators are particularly advantageous in momentum-space formulations where sharp boundaries might introduce artifacts. Momentum cutoff regularization was instrumental in Kenneth G. Wilson's pioneering work on the renormalization group, where it naturally arises in lattice discretizations of field theories, enforcing a finite momentum range through the inverse lattice spacing a^{-1} \sim \Lambda.
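
The different regulators can be compared on a simple logarithmically divergent toy integral, \int_0^\infty k \, dk/(k^2 + m^2), a stand-in for a generic divergent loop rather than any specific amplitude; the mass and cutoff values below are illustrative.

```python
import numpy as np
from scipy.integrate import quad

m, M = 1.0, 100.0  # physical mass and Pauli-Villars regulator mass (illustrative)

def hard_cutoff(Lam):
    # int_0^Lam k dk / (k^2 + m^2) = (1/2) ln((Lam^2 + m^2)/m^2): grows like ln(Lam)
    return quad(lambda k: k / (k**2 + m**2), 0, Lam)[0]

def soft_cutoff(Lam):
    # same integrand damped by exp(-k^2/Lam^2) instead of a sharp step
    return quad(lambda k: k * np.exp(-k**2 / Lam**2) / (k**2 + m**2), 0, np.inf)[0]

def pauli_villars(Lam):
    # subtract the same integrand with a heavy regulator mass M; finite as Lam -> infinity
    return quad(lambda k: k / (k**2 + m**2) - k / (k**2 + M**2), 0, Lam)[0]

for Lam in [1e2, 1e3, 1e4]:
    print(f"Lambda={Lam:8.0f}  hard={hard_cutoff(Lam):7.3f}  soft={soft_cutoff(Lam):7.3f}  "
          f"PV={pauli_villars(Lam):7.3f}  (PV limit (1/2)ln(M^2/m^2)={0.5*np.log(M**2/m**2):7.3f})")
```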

Dimensional regularization

Dimensional regularization is a technique in quantum field theory that addresses ultraviolet divergences by analytically continuing Feynman integrals from four dimensions to a general complex dimension d = 4 - \epsilon, where \epsilon is a small parameter, before expanding around \epsilon = 0 to isolate and handle the resulting poles. This method was introduced in 1972 by several groups, including Gerard 't Hooft and Martinus Veltman in their paper on the regularization and renormalization of gauge fields, and independently by C. G. Bollini and J. J. Giambiagi and by J. F. Ashmore, providing a framework particularly suited for theories with symmetries, such as gauge invariance, which might be violated by other regularization approaches. In practice, loop momentum integrals are evaluated in d dimensions, yielding expressions that are finite for non-integer d but develop simple poles 1/\epsilon as \epsilon \to 0, corresponding to the logarithmic divergences of the original four-dimensional theory. These poles arise from the analytic structure of the integrals and are systematically subtracted in the renormalization procedure, while finite parts contribute to physical predictions. The approach introduces an arbitrary mass scale \mu to maintain dimensional consistency, as the coupling constants and fields acquire non-integer mass dimensions under this continuation. A key tool in evaluating these integrals is the representation in terms of gamma functions, which facilitate the analytic continuation in d. For instance, the basic scalar one-loop integral takes the form \int \frac{d^d k}{(2\pi)^d} \frac{1}{(k^2 + \Delta)^{\alpha}} = \frac{\Gamma\left(\alpha - \frac{d}{2}\right)}{(4\pi)^{d/2} \Gamma(\alpha)} \Delta^{\frac{d}{2} - \alpha}, where \Delta is the (Euclidean) mass-squared combination in the propagator denominator and the gamma function \Gamma(z) encodes the poles when \alpha - d/2 is a non-positive integer. This formula, derived from hyperspherical coordinates and the properties of the gamma function, allows explicit computation of one-loop and higher-order integrals by reducing them to combinations of such propagators. Dimensional regularization proves especially advantageous for massless theories, as it automatically preserves gauge invariance without introducing spurious mass scales that could break Lorentz or gauge symmetries, unlike explicit cutoff methods which impose hard limits. Additionally, scaleless integrals—those without any characteristic mass or momentum scale, such as \int d^d k \, (k^2)^{-\alpha}—vanish identically in this scheme, because the analytic continuation assigns equal and opposite contributions to the ultraviolet and infrared regions, which cancel. This property simplifies calculations in conformal or massless limits but requires careful handling of potential infrared issues separately.
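
The pole structure can be checked symbolically: the gamma-function prefactor \Gamma(2 - d/2) of a typical logarithmically divergent one-loop integral reduces to \Gamma(\epsilon/2) at d = 4 - \epsilon, and its Laurent expansion exhibits the 1/\epsilon pole together with the Euler-Mascheroni constant (a minimal sketch using sympy; all normalization factors are omitted).

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)

# Gamma(2 - d/2) with d = 4 - epsilon reduces to Gamma(epsilon/2),
# the prefactor of a typical logarithmically divergent one-loop integral.
prefactor = sp.gamma(eps / 2)

# Laurent expansion around epsilon = 0: 2/epsilon - EulerGamma + O(epsilon)
print(sp.series(prefactor, eps, 0, 2))
```

The 2/\epsilon term in the output is exactly the pole that minimal-subtraction-type schemes remove.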

Core Concepts of Renormalization

Bare versus renormalized parameters

In quantum field theory, perturbative calculations often yield divergent expressions due to ultraviolet divergences, necessitating a distinction between bare parameters, which are unphysical quantities appearing in the original Lagrangian, and renormalized parameters, which correspond to observable physical quantities. The bare parameters, such as the bare mass m_0 and bare charge e_0, are infinite in the limit where the regulator is removed, but at any finite regulator they are finite, regulator-dependent values that absorb these divergences. The relationship between bare and renormalized parameters is established through renormalization constants, for example, m_0 = Z_m m and e_0 = Z_e e, where m and e are the renormalized mass and charge, respectively, and the constants take the form Z_m = 1 + \delta m with \delta m containing the divergent contributions. These renormalization constants Z are determined order by order in perturbation theory by requiring that physical observables remain finite and match experimental values. The bare Lagrangian density is expressed in terms of the renormalized fields and parameters through field rescalings \phi_0 = Z_\phi^{1/2} \phi_R and renormalization constants that vary by term, ensuring each operator in \mathcal{L}_R is multiplied by its appropriate Z factor (e.g., the kinetic term by Z_\phi, the mass term by Z_m). This framework ensures that the theory's predictions are regulator-independent and physically meaningful, with bare parameters serving as auxiliary constructs rather than directly measurable entities.

Counterterms and renormalization conditions

In renormalized perturbation theory, counterterms are additional terms introduced into the Lagrangian to systematically cancel the ultraviolet divergences arising from loop integrals in perturbative expansions. These counterterms are constructed such that their divergent contributions precisely offset the infinities in the bare Green's functions, rendering the renormalized Green's functions finite. For a fermionic field \psi, the counterterm Lagrangian typically takes the form \delta \mathcal{L} = -\delta m \, \bar{\psi} \psi - \delta Z_\psi \, \bar{\psi} i \not{\partial} \psi + \cdots, where \delta m and \delta Z_\psi are the mass and field renormalization counterterms, respectively. This structure ensures that the divergent parts are absorbed into redefinitions of the bare parameters. The procedure for implementing counterterms involves computing the perturbative expansion of Green's functions, isolating their divergent portions using a regularization scheme (such as dimensional regularization), and then subtracting these divergences through the counterterms. For instance, the one-particle irreducible self-energy function \Sigma(p) of a field receives divergent corrections from loops, and the counterterms are chosen to cancel the poles (e.g., 1/\epsilon terms in dimensional regularization) in the Laurent expansion around the physical point. This subtraction leaves finite remainders that correspond to physical observables. In renormalizable theories, this process applies multiplicatively to all orders, meaning that renormalized Green's functions are related to bare ones by renormalization factors Z_i, ensuring consistent finite predictions without introducing new types of divergences at higher loops. Renormalization conditions are imposed to fix the finite portions of the counterterms, thereby defining the renormalized parameters in terms of measurable quantities. In the on-shell scheme, for example, the self-energy is required to vanish at the physical mass shell, \Sigma(\not{p} = m) = 0, and the wave function renormalization ensures a unit residue at the pole, \frac{d}{d\not{p}} \Sigma(\not{p}) \big|_{\not{p}=m} = 0. These conditions uniquely determine the counterterms beyond their divergent parts, bridging the bare-renormalized distinction to yield physical results independent of the intermediate scheme. Seminal work demonstrated that multiplicative renormalizability in theories like quantum electrodynamics guarantees this finiteness to arbitrary perturbative order.
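
The mechanics of fixing counterterms can be illustrated with a deliberately invented toy self-energy (the functional form of Sigma_loop below is fabricated purely for illustration and is not a real QED or scalar-theory result): adding counterterm pieces \delta Z \, p^2 - \delta m^2 and imposing the two on-shell conditions determines both counterterms, divergent and finite parts alike.

```python
import sympy as sp

p2, m, mu, lam, eps, dZ, dm2 = sp.symbols('p2 m mu lambda epsilon deltaZ deltam2', positive=True)

# Toy one-loop self-energy: a 1/eps pole plus an arbitrary finite, p2-dependent remainder.
# The coefficients and functional form are invented for illustration only.
Sigma_loop = (lam / (16 * sp.pi**2)) * ((m**2 - p2 / 3) * (1 / eps + sp.log(mu**2 / m**2))
                                        + p2**2 / (6 * m**2))

# Renormalized self-energy with counterterms (scalar-field conventions)
Sigma_R = Sigma_loop + dZ * p2 - dm2

# On-shell conditions: Sigma_R(m^2) = 0 and d Sigma_R / d p2 = 0 at p2 = m^2
cond1 = sp.Eq(Sigma_R.subs(p2, m**2), 0)
cond2 = sp.Eq(sp.diff(Sigma_R, p2).subs(p2, m**2), 0)

sol = sp.solve([cond1, cond2], [dZ, dm2], dict=True)[0]
print("deltaZ  =", sp.simplify(sol[dZ]))
print("deltam2 =", sp.simplify(sol[dm2]))
```

Both counterterms come out containing the 1/\epsilon pole plus finite pieces fixed entirely by the two conditions, which is the general pattern described above.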

Renormalization in Quantum Electrodynamics

Step-by-step procedure in QED

In quantum electrodynamics (QED), the renormalization procedure at one loop addresses ultraviolet divergences arising from loop corrections in Feynman diagrams, ensuring that physical observables are finite and independent of the regularization parameter. This process involves computing the one-particle irreducible (1PI) corrections to the electron propagator, the electron-photon vertex, and the photon propagator, then introducing counterterms to absorb the divergences into redefinitions of the bare parameters. The foundational establishment of this renormalizability occurred through the efforts of Sin-Itiro Tomonaga, Julian Schwinger, Richard Feynman, and Freeman Dyson between 1947 and 1951, culminating in a perturbative framework where infinities are systematically canceled order by order in the coupling e. The procedure begins with the electron self-energy \Sigma(p), which corrects the bare electron propagator S(p) = i/(\not{p} - m_0), where m_0 is the bare mass. At one loop, \Sigma(p) arises from the diagram with a virtual photon emitted and reabsorbed along the electron line: \Sigma(p) = -ie^2 \int \frac{d^4 k}{(2\pi)^4} \gamma^\mu \frac{i}{\not{p} - \not{k} - m_0} \gamma^\nu \frac{-i g_{\mu\nu}}{k^2 + i\epsilon} \, . Using a momentum cutoff \Lambda for regularization, the divergent part of \Sigma(p) includes logarithmic terms proportional to \ln(\Lambda^2 / m^2), which contribute to both the wave function renormalization Z_2 (rescaling the electron field \psi_0 = Z_2^{1/2} \psi) and the mass renormalization \delta m = (Z_m - 1) m_0. Specifically, the divergent contribution to Z_2 - 1 is -\frac{e^2}{8\pi^2} \ln(\Lambda^2 / \mu^2), where \mu is a reference scale, ensuring the renormalized propagator has a residue of 1 at the physical mass pole. Next, the vertex function \Lambda^\mu(p, q) corrects the bare interaction -ie_0 \bar{\psi} \gamma^\mu \psi A_\mu, with p and q the incoming and outgoing electron momenta. The one-loop diagram involves a photon exchanged across the vertex, forming a loop with two internal fermion propagators: \Lambda^\mu(p, q) = ie^2 \int \frac{d^4 k}{(2\pi)^4} \gamma^\nu \frac{i}{\not{p} - \not{k} - m_0} \gamma^\mu \frac{i}{\not{q} - \not{k} - m_0} \gamma^\rho \frac{-i g_{\nu\rho}}{k^2 + i\epsilon} \, . This yields a divergent structure similar to the self-energy, but gauge invariance imposes the Ward-Takahashi identity q_\mu \Lambda^\mu(p, q) = \Sigma(p) - \Sigma(q), which relates the vertex renormalization constant Z_1 to the electron wave function renormalization: Z_1 = Z_2. This identity, a direct consequence of QED's U(1) gauge symmetry, eliminates the need for an independent vertex counterterm and simplifies the overall renormalization. Finally, the vacuum polarization \Pi(q^2) corrects the photon propagator D_{\mu\nu}(q) = -i g_{\mu\nu}/q^2, arising from the electron-positron loop: \Pi_{\mu\nu}(q) = (q^2 g_{\mu\nu} - q_\mu q_\nu) \Pi(q^2) \, , where the scalar function \Pi(q^2) at one loop is \Pi(q^2) = -\frac{e^2}{2\pi^2} \int_0^1 dx \, x(1-x) \ln\left( \frac{m_0^2 - q^2 x(1-x)}{\mu^2} \right) + \text{divergent terms} \, . With a cutoff \Lambda, the divergent part leads to the photon wave function renormalization Z_3, given by Z_3 = 1 - \frac{e^2}{12\pi^2} \ln(\Lambda^2 / \mu^2), which rescales the photon field A_{0\mu} = Z_3^{1/2} A_\mu. The renormalized charge then relates to the bare charge via e = Z_3^{1/2} e_0, since the Ward identity Z_1 = Z_2 cancels the vertex and electron wave-function factors, confining charge renormalization solely to the photon sector. This step-by-step absorption of divergences yields finite Green's functions, with physical quantities like the anomalous magnetic moment emerging as small, computable corrections of order \alpha / 2\pi.
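
The momentum dependence of the vacuum polarization can be checked numerically from the Feynman-parameter integral quoted above; the sketch below evaluates the dimensionless integral \int_0^1 dx \, x(1-x) \ln[(m^2 - q^2 x(1-x))/m^2] for spacelike q^2 < 0 (the overall -e^2/2\pi^2 prefactor is omitted) and confirms that it grows like \frac{1}{6}\ln(-q^2/m^2) up to a constant, the logarithm behind the e^2/12\pi^2 coefficient in Z_3.

```python
import numpy as np
from scipy.integrate import quad

m = 1.0  # electron mass in units where m = 1

def pi_hat(q2):
    """Dimensionless Feynman-parameter integral
       I(q^2) = int_0^1 dx x(1-x) ln[(m^2 - q^2 x(1-x)) / m^2], for spacelike q^2 < 0."""
    integrand = lambda x: x * (1 - x) * np.log((m**2 - q2 * x * (1 - x)) / m**2)
    return quad(integrand, 0.0, 1.0)[0]

for Q2 in [1e2, 1e4, 1e6]:        # Q^2 = -q^2, spacelike momentum transfer
    I = pi_hat(-Q2)
    print(f"-q^2 = {Q2:9.0f}   I = {I:8.4f}   (1/6) ln(-q^2/m^2) = {np.log(Q2) / 6:8.4f}")
```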

Running couplings and the beta function

In quantum electrodynamics (QED), the renormalization procedure reveals that the renormalized coupling depends on the energy scale at which it is measured, a phenomenon known as the running of the coupling. This scale dependence arises because the effective interaction strength changes with the momentum transfer due to quantum corrections, particularly from vacuum-polarization loops that screen or antiscreen charges. The Callan-Symanzik equation provides a framework for describing this behavior, relating changes in the renormalization scale \mu to the evolution of Green's functions and parameters. The beta function \beta(e) encapsulates the running of the coupling e, defined as \beta(e) = \mu \frac{de}{d\mu}, where the positive sign indicates that the coupling increases with scale in QED. This equation, derived within the Callan-Symanzik formalism, embodies the renormalization group (RG) approach in perturbation theory by capturing how scale transformations affect renormalized quantities. At one loop in QED with a single charged fermion, the beta function is \beta(e) = \frac{e^3}{12\pi^2} > 0, computed from the divergent part of the vacuum polarization that governs charge renormalization. The positive beta function implies that the effective coupling grows logarithmically with energy, leading to a Landau pole where the coupling diverges at a finite high-energy scale. Integrating the one-loop equation yields the running coupling \alpha(\mu) = \frac{\alpha(m)}{1 - \frac{\alpha(m)}{3\pi} \ln(\mu^2 / m^2)}, where \alpha = e^2 / 4\pi is the fine-structure constant measured at the scale m (e.g., the electron mass). This formula shows the coupling \alpha(\mu) increasing as \mu rises, with the pole occurring when the denominator vanishes, signaling a breakdown of perturbation theory at \mu \approx m \exp\left(\frac{3\pi}{2\alpha(m)}\right). The 1970 formulation by Callan and Symanzik marked a pivotal step in unifying renormalization group ideas with perturbative quantum field theory, enabling precise predictions of scale-dependent phenomena in QED without resolving ultraviolet divergences explicitly.
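
A quick numerical sketch of the one-loop formula (electron loop only, so the high-scale values understate the full Standard Model running, which includes all charged fermions; the input numbers are approximate):

```python
import numpy as np

alpha_me = 1 / 137.036        # fine-structure constant measured near the electron mass scale
m_e = 0.000511                # electron mass in GeV

def alpha_running(mu_GeV):
    """One-loop QED running with a single electron loop:
       alpha(mu) = alpha(m) / (1 - (alpha(m)/3pi) ln(mu^2/m^2))."""
    return alpha_me / (1 - (alpha_me / (3 * np.pi)) * np.log(mu_GeV**2 / m_e**2))

for mu in [m_e, 1.0, 91.19, 1e4]:      # up to roughly the Z mass and beyond
    print(f"mu = {mu:10.4g} GeV   1/alpha(mu) = {1 / alpha_running(mu):8.2f}")

# Landau pole: the scale where the denominator vanishes
mu_landau = m_e * np.exp(3 * np.pi / (2 * alpha_me))
print(f"Landau pole at mu ~ m_e * exp(3*pi/(2*alpha)) = {mu_landau:.3e} GeV (far beyond any physical scale)")
```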

Renormalization Schemes

Minimal subtraction scheme

The minimal subtraction scheme (MS), introduced by Gerard 't Hooft in 1973, is a renormalization procedure tailored for quantum field theories regularized via dimensional continuation, where the spacetime dimension is set to d = 4 - \epsilon with \epsilon \to 0. In this approach, ultraviolet divergences manifest as poles 1/\epsilon^n (for n = 1, 2, \dots) in perturbative expansions, and the scheme subtracts precisely these divergent terms while preserving all finite contributions. This minimal intervention ensures that counterterms are determined solely by the singular structure of Feynman integrals, without incorporating any additional finite adjustments. The renormalization factors Z in the MS scheme take the explicit form Z = 1 + \sum_{n=1}^\infty \frac{a_n}{\epsilon^n}, where the coefficients a_n are finite numbers extracted from the Laurent expansion of bare quantities around \epsilon = 0. These Z factors multiply the bare parameters (such as fields, masses, and couplings) to yield renormalized ones, effectively canceling the poles order by order in perturbation theory. For instance, in gauge theories, the a_n arise from the divergent parts of one-loop and higher diagrams computed in dimensional regularization. A key distinction of the MS scheme is that it leaves in place the universal finite terms that accompany the poles, such as \ln(4\pi) - \gamma (where \gamma is the Euler-Mascheroni constant), which are instead subtracted along with the poles in the modified minimal subtraction scheme (MS-bar). This purity simplifies the structure of counterterms, as they consist only of pole terms in \epsilon with coupling-dependent coefficients, free of logarithms or finite additive constants. The MS scheme's advantages include its computational efficiency for automated higher-order calculations, as the pole-only subtractions reduce the complexity of algebraic manipulations in perturbative series. It has become a standard tool in perturbative quantum chromodynamics (QCD), enabling precise determinations of running couplings and parton distribution functions in collider physics analyses. Moreover, the beta function coefficients in MS and MS-bar coincide, since the two schemes differ only by a rescaling of the reference scale \mu, ensuring identical renormalization group behavior for the running couplings.
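
A minimal symbolic sketch of the bookkeeping: for a schematic one-loop amplitude containing the characteristic combination 1/\epsilon - \gamma_E + \ln 4\pi plus a physical logarithm (the amplitude below is generic, not a specific diagram), MS removes only the pole while MS-bar removes the full universal combination.

```python
import sympy as sp

eps, mu, Q, a = sp.symbols('epsilon mu Q a', positive=True)

# Schematic one-loop amplitude: pole, universal constants, and a physical logarithm
amplitude = a * (1/eps - sp.EulerGamma + sp.log(4*sp.pi) + sp.log(mu**2 / Q**2))

counterterm_MS    = -a / eps                                            # subtract the pole only
counterterm_MSbar = -a * (1/eps - sp.EulerGamma + sp.log(4*sp.pi))      # subtract the universal combination

print("MS-renormalized:    ", sp.simplify(amplitude + counterterm_MS))
print("MS-bar-renormalized:", sp.simplify(amplitude + counterterm_MSbar))
```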

On-shell and momentum subtraction schemes

In the on-shell renormalization scheme, the renormalization conditions are imposed directly on physical quantities at the poles of the propagators, ensuring that the renormalized parameters correspond to physical masses and couplings. For instance, in theories with massive fermions, the self-energy function \Sigma(p^2) is required to satisfy \Sigma(m^2) = 0 to define the physical mass m, and \Sigma'(m^2) = 0 to set the wave function renormalization such that the residue of the propagator at the pole is unity. These conditions make the scheme particularly intuitive for connecting perturbative calculations to experimental measurements, as the counterterms absorb divergences while preserving finite physical values at on-shell points. The momentum subtraction (MOM) scheme, in contrast, defines renormalization conditions by subtracting the divergent parts of Green's functions at specific off-shell momentum points, often chosen for computational convenience. A common prescription involves evaluating amputated Green's functions at a symmetric Euclidean point where p^2 = -\mu^2 for all external legs, with \mu serving as the renormalization scale. For example, in the renormalization of the quark-gluon vertex, the condition \Gamma(0,0,\mu^2) = -i g (where g is the renormalized coupling) ensures that the renormalized vertex matches the tree-level form at this point. The renormalization constants in the MOM scheme, such as the gauge coupling renormalization Z_g = Z_1 / (Z_2 \sqrt{Z_3}) (with Z_1, Z_2, and Z_3 being the vertex, fermion, and gauge-boson wave function renormalizations, respectively), are computed by evaluating these Green's functions at the subtraction point. Unlike minimal subtraction schemes that remove only the divergent poles, both on-shell and MOM schemes incorporate finite parts to satisfy their respective conditions, leading to scheme-dependent finite corrections in higher-order calculations. The MOM scheme finds extensive application in lattice QCD simulations, where it facilitates matching of lattice operators to continuum quantities through the regularization-independent variant (RI/MOM), allowing for the extraction of physical parameters like quark masses and couplings from numerical data.
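
A toy symbolic example of a momentum-subtraction condition (the one-loop vertex form below is invented to show the bookkeeping and does not correspond to an actual QCD computation): demanding that the renormalized vertex equal its tree-level value at the Euclidean point p^2 = -M^2 fixes the renormalization constant, finite parts included.

```python
import sympy as sp

g, a, eps, mu, M = sp.symbols('g a epsilon mu M', positive=True)
p2 = sp.symbols('p2')
Z_vertex = sp.symbols('Z_vertex')

# Toy one-loop vertex: tree-level coupling g plus a divergent, momentum-dependent correction
Gamma_bare = g * (1 + a * (1/eps + sp.log(mu**2 / (-p2))))

# MOM-style condition: the renormalized vertex equals g at the subtraction point p2 = -M^2
condition = sp.Eq((Z_vertex * Gamma_bare).subs(p2, -M**2), g)
Z_sol = sp.solve(condition, Z_vertex)[0]

Gamma_MOM = sp.simplify(Z_sol * Gamma_bare)
print("Z_vertex =", sp.simplify(Z_sol))
print("Gamma_MOM(p2) =", Gamma_MOM)
print("check at p2 = -M^2:", sp.simplify(Gamma_MOM.subs(p2, -M**2)))
```

The resulting Z absorbs the 1/\epsilon pole together with a finite \ln(\mu^2/M^2) piece, which is precisely how MOM-type schemes differ from minimal subtraction.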

Renormalizability

Power-counting criteria

Power-counting criteria provide a systematic way to assess the ultraviolet (UV) behavior of Feynman diagrams in perturbative quantum field theories, determining whether divergences can be absorbed into a finite set of counterterms. The superficial degree of divergence \delta for a given diagram quantifies the leading power of momentum in the integrand at large momenta, serving as a first diagnostic of convergence without detailed subgraph analysis. According to Weinberg's power-counting theorem, a Feynman integral converges absolutely if \delta < 0 for the diagram and for all of its subintegrations, provided the integrand satisfies certain smoothness conditions away from singularities. The superficial degree of divergence is given by the formula \delta = d - \sum_f n_f d_f + \sum_v n_v \Delta_v, where d is the spacetime dimension, the sum over f runs over external fields with n_f the number of external legs of field type f and d_f the canonical dimension of that field ((d-2)/2 for scalars and gauge bosons, (d-1)/2 for fermions), and the sum over v runs over vertex types with n_v the number of vertices of each type carrying excess dimension \Delta_v (defined as the dimension of the interaction operator minus d). This expression arises from dimensional analysis of the momentum-space integral, accounting for loop measures (+d per loop), propagator denominators (related to field dimensions), and vertex contributions. Weinberg's theorem establishes this power-counting rule for general Lagrangians, showing that the high-energy behavior is controlled by these engineering dimensions, enabling classification of theories prior to explicit computation. A theory is classified as renormalizable by power counting if every interaction has \Delta_v \leq 0, so that \delta is fixed by the external-leg content alone and does not grow with the order of perturbation theory; divergences then occur only in a finite set of amplitude types and can be canceled by counterterms of the original form. Interactions with \Delta_v > 0 instead make \delta increase with the number of vertices, signaling divergences that would require an ever-growing set of new counterterms. For instance, in scalar \phi^4 theory in d=4, the interaction term \frac{\lambda}{4!} \phi^4 has \Delta_v = 0 (marginal), so \delta = 4 - E depends only on the number E of external legs, and only the two- and four-point functions are superficially divergent, confirming renormalizability. In contrast, Einstein gravity features graviton vertices derived from the Ricci scalar whose couplings carry negative mass dimension (G_N \sim M_{\rm Pl}^{-2}), giving \Delta_v > 0, so that \delta = 2 + 2L grows with the number of loops L, rendering the theory non-renormalizable by power counting.
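
A small routine applying this power counting (with the canonical dimensions quoted above, and \Delta_v entering with a plus sign) reproduces the familiar verdicts: for \phi^4 in four dimensions \delta depends only on the external legs, while inserting an operator whose coupling has negative mass dimension (illustrated here with a hypothetical dimension-six \phi^6/\Lambda^2 interaction) makes \delta grow with each additional vertex.

```python
def superficial_degree(d, external, vertices):
    """delta = d - sum_f n_f * d_f + sum_v n_v * Delta_v.
    `external`: list of (n_legs, field_dimension); `vertices`: list of (n_vertices, Delta_v)."""
    return d - sum(n * dim for n, dim in external) + sum(n * Dv for n, Dv in vertices)

d, d_phi = 4, 1.0   # spacetime dimension and scalar field dimension (d-2)/2 = 1

# phi^4 (Delta_v = 0, marginal): delta depends only on the external legs;
# the number of phi^4 vertices (here 3) is irrelevant because Delta_v = 0.
for E in (2, 4, 6):
    print(f"phi^4, {E} external legs: delta = {superficial_degree(d, [(E, d_phi)], [(3, 0)]):g}")

# A dimension-6 operator such as phi^6/Lambda^2 (Delta_v = +2): delta grows with each extra vertex
for V in (1, 2, 3):
    print(f"phi^6 insertion x{V}, 4 external legs: delta = "
          f"{superficial_degree(d, [(4, d_phi)], [(V, 2)]):g}")
```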

Implications for effective field theories

In quantum field theories where power-counting reveals non-renormalizable interactions, these theories are interpreted as effective field theories (EFTs) that provide accurate descriptions only up to a cutoff scale \Lambda, above which additional physics is required to maintain consistency. The Wilsonian approach to EFTs involves systematically integrating out high-momentum modes with energies greater than \Lambda from the underlying fundamental theory, thereby generating an effective action that includes all symmetry-allowed local operators; higher-dimensional operators in this action are naturally suppressed by inverse powers of \Lambda, reflecting the separation of scales between low- and high-energy physics. In 1979, Steven Weinberg developed the modern framework for EFTs through the use of phenomenological Lagrangians, a logic exemplified by the weak interactions, where the point-like Fermi four-fermion theory serves as a low-energy EFT valid below the electroweak scale \Lambda \approx 100 GeV, the mass scale of the intermediate vector bosons. The structure of such an EFT is captured by the effective Lagrangian \mathcal{L}_{\rm eff} = \mathcal{L}_4 + \sum_{d>4} \frac{c_d}{\Lambda^{d-4}} \mathcal{O}_d, where \mathcal{L}_4 denotes the renormalizable terms of dimension four or less, \mathcal{O}_d are composite operators of dimension d > 4, and the Wilson coefficients c_d are dimensionless constants typically of order unity, determined by matching to the underlying theory. This organized expansion in powers of the small parameter E/\Lambda (with E \ll \Lambda the relevant low-energy scale) endows the EFT with predictive power: observables can be computed perturbatively to any desired accuracy by including a finite number of terms at each order in the expansion, while unitarity and consistency are preserved for processes with energies below \Lambda.
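
The weak-interaction example can be made quantitative with approximate inputs (the coupling and mass values below are rounded, illustrative numbers): tree-level matching of the dimension-six four-fermion operator gives G_F/\sqrt{2} = g^2/(8 M_W^2), and the ratio (E/M_W)^2 measures how strongly that operator is suppressed at low energies.

```python
import numpy as np

g_weak = 0.653            # SU(2) gauge coupling (approximate)
M_W = 80.4                # W boson mass in GeV
G_F_measured = 1.1664e-5  # Fermi constant in GeV^-2

# Tree-level matching of the four-fermion operator: G_F / sqrt(2) = g^2 / (8 M_W^2)
G_F_matched = np.sqrt(2) * g_weak**2 / (8 * M_W**2)
print(f"G_F from matching: {G_F_matched:.3e} GeV^-2   (measured: {G_F_measured:.3e} GeV^-2)")

# Suppression of the dimension-six operator relative to the renormalizable terms at low energy
for E in [0.1, 1.0, 10.0]:   # typical process energy in GeV
    print(f"E = {E:5.1f} GeV   (E/M_W)^2 = {(E / M_W)**2:.2e}")
```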

Renormalization Group Approach

Wilsonian renormalization group

The Wilsonian renormalization group, developed by Kenneth G. Wilson in his work from 1971 to 1974, provides a non-perturbative framework for understanding how physical theories change under scale transformations, effectively bridging concepts from quantum field theory and statistical physics. This approach earned Wilson the 1982 Nobel Prize in Physics for its role in elucidating critical phenomena and renormalization. Unlike perturbative methods that track running couplings via the beta function, the Wilsonian method systematically integrates out short-distance fluctuations in the path integral formulation to derive effective theories at longer scales. Central to this method is the coarse-graining procedure, which eliminates high-momentum modes in a controlled manner. Starting with a theory regulated by an ultraviolet cutoff \Lambda, one integrates over field fluctuations \phi_{\rm high} with momenta in the shell \Lambda/b < |k| < \Lambda, where b > 1 is a rescaling parameter. After this integration, the remaining low-momentum modes are rescaled in momentum space by a factor b (and the fields by an appropriate power to preserve the kinetic term), restoring the cutoff to \Lambda. This process generates a new effective action describing the low-energy physics, with parameters that flow under repeated applications. The renormalization group transformation is formally expressed as S'[g'] = -\ln \int e^{-S[g]} \, D\phi_{\rm high}, followed by the rescaling \phi'(x') = z \phi(b x') and x' = x/b, where g denotes the couplings in the original action S and z is a field rescaling factor. This induces a flow in the space of couplings toward lower energies, under which higher-dimensional (irrelevant) operators are generated but remain suppressed relative to the relevant and marginal ones, justifying the use of effective theories below certain scales.

Fixed points and scaling behaviors

In the renormalization group (RG) framework, fixed points characterize the long-distance behavior of physical systems by identifying scale-invariant theories whose couplings remain unchanged under RG transformations. These fixed points are defined by the condition \beta(g^*) = 0, where \beta(g) is the beta function describing the flow of the coupling g with respect to the logarithmic scale parameter l, such that \frac{dg}{dl} = \beta(g). Near four dimensions, two prominent fixed points emerge in scalar field theories: the Gaussian fixed point, corresponding to the free theory with vanishing interactions (g^* = 0), which is stable above the upper critical dimension d=4, and the Wilson-Fisher fixed point, an interacting fixed point with g^* > 0 that governs critical behavior below d=4. The nature of RG flows near these fixed points determines scaling behaviors, particularly through the linearization of the RG transformation around g^*. The eigenvalues y_i of the stability matrix, obtained from the Jacobian of the flow equations, classify operators as relevant (y_i > 0), irrelevant (y_i < 0), or marginal (y_i = 0), dictating how perturbations evolve under rescaling. Critical exponents, such as the correlation length exponent \nu = 1/y_t (where y_t is the eigenvalue for the thermal perturbation) and the anomalous dimension \eta (related to the scaling of the field operator), are directly derived from this spectrum, providing universal quantities independent of microscopic details. A key example is the three-dimensional Ising model, whose fixed point—the O(1) Wilson-Fisher fixed point—has been computed perturbatively using the epsilon expansion around d=4-\epsilon, yielding estimates for exponents like \nu \approx 0.63 and \eta \approx 0.036 that align well with numerical simulations. Near the critical point, the approach to the fixed point manifests in scaling laws for the couplings: an irrelevant coupling with eigenvalue y_g < 0 decays toward its fixed-point value under the flow, producing corrections to scaling that vanish as a positive power of the reduced temperature t = (T - T_c)/T_c (conventionally written as t^{\omega \nu} with \omega = -y_g > 0). This behavior underscores the universality of critical phenomena, as trajectories in coupling space converge to the fixed point along the critical surface, leading to power-law behaviors in observables like correlation functions.
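
The leading-order content of this analysis fits in a few lines: with the one-loop beta function \beta(\lambda) = -\epsilon\lambda + 3\lambda^2/(16\pi^2) for the \lambda\phi^4/4! coupling and the one-loop O(N) result 1/\nu = 2 - \frac{N+2}{N+8}\epsilon, the Wilson-Fisher coupling and an Ising (N=1) estimate of \nu can be evaluated at \epsilon = 1; these are only the first terms of an expansion whose resummation approaches the quoted \nu \approx 0.63.

```python
import numpy as np

def beta_one_loop(lam, eps):
    """One-loop beta function of lambda*phi^4/4! theory in d = 4 - eps dimensions."""
    return -eps * lam + 3 * lam**2 / (16 * np.pi**2)

def wilson_fisher_coupling(eps):
    """Nontrivial zero of the one-loop beta function."""
    return 16 * np.pi**2 * eps / 3

def nu_one_loop(eps, N=1):
    """Leading epsilon-expansion estimate 1/nu = 2 - (N+2)/(N+8) * eps (N=1 is the Ising class)."""
    return 1.0 / (2.0 - (N + 2) / (N + 8) * eps)

eps = 1.0  # d = 3
lam_star = wilson_fisher_coupling(eps)
print(f"Wilson-Fisher coupling lambda* = {lam_star:.3f},  beta(lambda*) = {beta_one_loop(lam_star, eps):.2e}")
print(f"Gaussian fixed point:            beta(0) = {beta_one_loop(0.0, eps):.1f}")
print(f"One-loop Ising estimate nu = {nu_one_loop(eps):.3f}  (accepted 3D value ~ 0.63)")
```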

Applications in Statistical Physics

Critical phenomena and phase transitions

Renormalization group (RG) methods provide a powerful framework for understanding second-order phase transitions, where physical properties exhibit scaling behaviors near the critical temperature T_c. By iteratively coarse-graining the system, the RG reveals how microscopic interactions flow under rescaling, leading to fixed points that dictate the universal critical behavior independent of short-distance details. In 1966, Leo Kadanoff introduced the concept of block spins, where groups of spins are replaced by effective single spins to capture scaling near criticality in the Ising model, laying the groundwork for the renormalization group by emphasizing the irrelevance of microscopic scales at long distances. This approach inspired Kenneth Wilson's development of the modern renormalization group in 1971, transforming it into a systematic tool for computing critical exponents and explaining why seemingly different systems display identical scaling laws. A hallmark of critical phenomena is the divergence of the correlation length \xi, which measures the spatial extent of fluctuations and scales as \xi \sim |T - T_c|^{-\nu} as the temperature approaches T_c, with \nu a universal critical exponent determined by the RG fixed point. Hyperscaling relations, such as 2 - \alpha = d \nu linking the specific heat exponent \alpha to the dimensionality d, arise naturally from the RG analysis of how the free energy density scales under length rescaling by a factor b, and hold below the upper critical dimension. The Ising model exemplifies RG's role in unifying critical behaviors across dimensions: in two dimensions, the exact Onsager solution yields critical exponents like \nu = 1, while in three dimensions approximate RG transformations flow to the non-Gaussian fixed point governing the 3D Ising universality class, confirming the consistency of scaling laws derived from block-spin transformations. RG explains the universality of critical exponents, whereby diverse physical systems fall into classes sharing the same exponents due to identical long-wavelength physics. For instance, the liquid-gas transition and the uniaxial ferromagnet both belong to the 3D Ising universality class, exhibiting the same \nu \approx 0.63, while the superconducting transition aligns with the 3D XY class due to its continuous U(1) symmetry, unifying these phenomena under RG fixed points that dictate shared scaling despite differing microscopics.
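
The hyperscaling relation can be checked directly against commonly quoted 3D Ising exponents (the values below are approximate literature values):

```python
d = 3
alpha = 0.110   # specific-heat exponent, 3D Ising (approximate accepted value)
nu = 0.630      # correlation-length exponent, 3D Ising (approximate accepted value)

lhs = 2 - alpha
rhs = d * nu
print(f"2 - alpha = {lhs:.3f}   d * nu = {rhs:.3f}   difference = {abs(lhs - rhs):.3f}")
```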

Block spin and real-space methods

In real-space methods, the block spin transformation, introduced by Leo P. Kadanoff in 1966, provides a coarse-graining procedure for lattice spin models in statistical physics. The lattice is divided into blocks of linear size b, an integer scale factor, where each block contains b^d sites in d dimensions. The multiple spins within a block are replaced by a single effective block spin, often defined as the spatial average S_i' = \frac{1}{b^d} \sum_{\langle j \in i \rangle} S_j, where S_j are the original spins and the sum is over sites j in block i. This reduces the number of degrees of freedom by a factor of b^d, generating a renormalized Hamiltonian on the coarser lattice that approximates the original long-wavelength behavior. For the Ising model, the block spin procedure yields a recursion relation for the nearest-neighbor coupling K = \beta J, where \beta = 1/(k_B T) and J is the exchange constant, of the form K' = f(K). The function f(K) is derived by computing the effective interactions after averaging or integrating out the block-internal spins, often approximately via a cumulant expansion or mean-field-like summation in higher dimensions. Fixed points of this recursion, where K^* = f(K^*), characterize the possible large-scale behaviors; iterating the map from an initial K reveals flows toward trivial (high- or low-temperature) or critical fixed points. In one dimension, the analogous decimation procedure is exact for the Ising chain, preserving the partition function precisely and yielding the known absence of a finite-temperature phase transition, while in higher dimensions it provides controlled approximations for critical couplings and exponents. The Migdal-Kadanoff approximation refines these real-space techniques for hypercubic lattices by incorporating bond moving to handle connectivity and dimensionality effects more effectively. Prior to decimation, all bonds along one direction are moved (strengthened by a factor of b^{d-1}) to lie parallel, effectively reducing the problem to a set of one-dimensional chains that can be solved exactly via transfer-matrix methods. This hierarchical approximation, originally proposed by A. A. Migdal in 1975 and formalized by Kadanoff in 1976, simplifies computations on regular lattices by mapping them to solvable structures, though it overestimates critical temperatures in dimensions greater than one. The resulting recursion remains of the form K' = f(K), with fixed points determined iteratively; for the two-dimensional Ising model, it predicts a critical coupling K_c \approx 0.305 compared to the exact value \frac{1}{2}\ln(1 + \sqrt{2}) \approx 0.441.
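
Both recursions are easy to iterate numerically: the exact b = 2 decimation for the 1D Ising chain, \tanh K' = \tanh^2 K, flows to K = 0 for any finite coupling (no finite-temperature transition), while a b = 2 Migdal-Kadanoff relation for the 2D Ising model of the form \tanh K' = \tanh^2(2K) (bond moving doubles the coupling before decimation) has a nontrivial fixed point reproducing the K_c \approx 0.305 quoted above.

```python
import numpy as np
from scipy.optimize import brentq

def decimate_1d(K):
    """Exact b=2 decimation for the 1D Ising chain: tanh K' = tanh^2 K."""
    return np.arctanh(np.tanh(K) ** 2)

def migdal_kadanoff_2d(K):
    """b=2 Migdal-Kadanoff recursion in d=2: move bonds (K -> 2K), then decimate."""
    return np.arctanh(np.tanh(2 * K) ** 2)

# 1D: any finite coupling flows toward K = 0 (the high-temperature fixed point)
K = 2.0
for step in range(6):
    K = decimate_1d(K)
print(f"1D flow from K=2.0 after 6 steps: K = {K:.4f} (no finite-T transition)")

# 2D Migdal-Kadanoff: nontrivial fixed point K* solving K = arctanh(tanh^2(2K))
K_star = brentq(lambda K: migdal_kadanoff_2d(K) - K, 0.05, 1.0)
K_exact = 0.5 * np.log(1 + np.sqrt(2))
print(f"Migdal-Kadanoff fixed point K* = {K_star:.3f}  (exact 2D Ising K_c = {K_exact:.3f})")
```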

Interpretations and Modern Perspectives

Historical attitudes toward divergences

In the 1940s, divergences in quantum electrodynamics (QED) calculations prompted widespread skepticism among physicists, who viewed them as indications of fundamental flaws in the theory. Early classical attempts, such as Hendrik Lorentz's mass renormalization in electron theory to address self-force infinities, had already highlighted the issue but offered only ad hoc solutions. Renowned figures like Richard Feynman described renormalization—the technique to absorb these infinities into redefined physical parameters—as a "dippy process" and "hocus-pocus," acknowledging its practical utility in yielding finite predictions while questioning its mathematical rigor and the theory's self-consistency. By the 1950s, attitudes began to shift amid both challenges and empirical successes. Lev Landau and his collaborators identified the "Landau pole," a high-energy singularity where the effective coupling diverges, intensifying concerns that QED might collapse into a trivial theory at short distances and fueling a crisis of confidence in quantum field theory. However, renormalization's predictive power was validated by precise agreements with experiments, such as Julian Schwinger's calculation of the electron's anomalous magnetic moment (g-2), which matched observations to high accuracy and demonstrated the method's effectiveness despite underlying divergences. In the 1980s, Joseph Polchinski's formulation of an exact renormalization group equation provided a deeper understanding, framing divergences as artifacts of perturbative expansions rather than intrinsic defects, by integrating effective field theory principles and scale-dependent Lagrangians. This work shifted perspectives toward viewing renormalization as a systematic tool for understanding theory behavior across scales. Today, infinities in renormalization are interpreted as signals of new physics emerging at extremely high energies, such as the Planck scale, where quantum field theories like the Standard Model are expected to require ultraviolet completions beyond current frameworks.

Role in effective field theories and beyond

In effective field theories (EFTs), renormalization systematically organizes the effects of unknown ultraviolet (UV) physics by absorbing divergences into a finite set of low-energy constants, enabling predictive calculations at accessible energy scales. This approach treats higher-energy degrees of freedom as integrated out, with renormalization ensuring the theory remains consistent despite incomplete knowledge of the full UV completion. For instance, in chiral perturbation theory (ChPT), the low-energy EFT of quantum chromodynamics (QCD) for light quarks, renormalization handles loop divergences to compute pion scattering and other processes accurately, relying on experimental inputs for the counterterms that encode UV ignorance. Beyond traditional quantum field theory (QFT), renormalization group (RG) flows find holographic interpretations in the AdS/CFT correspondence, where radial evolution in anti-de Sitter (AdS) space is dual to the RG flow of the boundary conformal field theory (CFT). This maps deformations of CFTs to geometric flows in the bulk spacetime, providing a gravitational dual to Wilsonian renormalization and underpinning results such as the monotonic decrease of the central charge along RG trajectories. Recent insights from 2023 to 2025 have analogized deep neural networks to discrete RG transformations, where layer-wise feature abstraction mimics coarse-graining, revealing universal scaling laws in learning curves governed by fixed points akin to Gaussian processes. Advances in 2025 have extended thermal RG methods to cosmology, particularly in asymptotically safe quantum gravity, where temperature-dependent flows transition the cosmological constant from negative values at high temperatures—consistent with expectations—to the observed positive value as the universe cools, via transitions in the Einstein-Hilbert truncation. In the context of hybrid stars, RG-consistent Nambu–Jona-Lasinio models improve equations of state for quark-hadron transitions, constraining parameters with astrophysical observations while yielding mass-radius relations largely indistinguishable from purely hadronic neutron stars. Non-perturbative lattice renormalization, essential for QCD simulations, employs schemes like RI/MOM and the gradient flow to define renormalized operators without perturbative assumptions, with recent developments enhancing the precision of operator and matrix-element computations. In quantum gravity, the asymptotic safety program posits a non-Gaussian fixed point in the RG flow of the Einstein-Hilbert action, rendering the theory UV complete by controlling divergences without new physics, as evidenced by functional renormalization group studies confirming relevant scaling behaviors for the cosmological constant and Newton coupling.
