Lattice QCD

Lattice Quantum Chromodynamics (Lattice QCD) is a non-perturbative numerical framework for studying quantum chromodynamics (QCD), the theory of the strong interaction, obtained by discretizing continuous spacetime into a four-dimensional, hypercubic lattice of points with finite spacing a. This approach, which preserves the gauge invariance and local symmetries of QCD, allows for the computation of physical observables—such as hadron masses, decay constants, and scattering amplitudes—directly from first principles using Monte Carlo techniques on the path integral of the theory. Proposed by Kenneth G. Wilson in 1974 as a means to investigate confinement and the dynamics of quarks and gluons, Lattice QCD addresses the limitations of perturbative methods, which break down at low energies where phenomena like confinement and hadron formation dominate. In this formulation, quark fields reside on lattice sites, while gluon fields are represented by link variables U_\mu(x) in the SU(3) gauge group, enabling simulations that incorporate the full quantum effects of the strong interaction without approximations beyond discretization. The continuum limit is recovered by extrapolating results to a \to 0, with the lattice spacing serving as an ultraviolet regulator at momentum scale \pi/a. Key challenges in Lattice QCD include managing discretization errors (e.g., O(a) or O(a^2) effects from fermion actions like Wilson or staggered formulations), mitigating finite-volume effects that arise on lattices of size L (requiring M_\pi L \gtrsim 5 for accuracy, where M_\pi is the pion mass), and handling the computational intensity of including dynamical quarks via algorithms like Hybrid Monte Carlo. Despite these hurdles, advances in supercomputing and improved actions have enabled precise predictions for Standard Model parameters, such as the strong coupling constant \alpha_s, light quark masses, and electroweak matrix elements, providing critical tests of QCD and insights into beyond-Standard-Model physics. Lattice QCD's applications extend to hadron spectroscopy, finite-temperature studies of the quark-gluon plasma, and weak decay processes, with ongoing efforts focusing on isospin-breaking corrections and chiral extrapolations to physical quark masses using effective theories like chiral perturbation theory. Recent reviews highlight its role in determining CKM matrix elements and hadron properties, underscoring its status as the only systematic, first-principles method for non-perturbative QCD calculations.

Introduction

Overview and Motivation

Lattice QCD is a non-perturbative formulation of quantum chromodynamics (QCD) that discretizes the theory on a hypercubic lattice in four-dimensional Euclidean spacetime, with quark fields residing on lattice sites and gauge fields represented by link variables connecting nearest-neighbor sites. This approach replaces the continuous spacetime of the continuum theory with a discrete grid characterized by lattice spacing a and finite volume L^4, where L is the linear extent in lattice units, providing a natural ultraviolet cutoff while preserving local gauge invariance. The primary goal is to recover full continuum QCD in the limit a \to 0 at fixed physical volume, ensuring that lattice artifacts vanish and results match those of the underlying theory. The motivation for Lattice QCD arises from the dual nature of QCD: at high energies or short distances, asymptotic freedom allows perturbative calculations, since the strong coupling constant \alpha_s decreases logarithmically with energy, enabling precise predictions for processes like deep inelastic scattering. However, at low energies or long distances, the coupling becomes strong, leading to quark confinement—where quarks and gluons are bound into color-neutral hadrons—and other non-perturbative phenomena such as spontaneous chiral symmetry breaking, which cannot be reliably addressed by perturbation theory due to the absence of a small expansion parameter. Traditional analytic methods fail here because the theory's divergences and complex vacuum structure preclude simple closed-form solutions, necessitating a framework like Lattice QCD for computations of hadron properties from first principles. In Lattice QCD, the theory is defined through the Euclidean path integral Z = \int \mathcal{D}U \, \mathcal{D}\psi \, \mathcal{D}\bar{\psi} \, e^{-S[U, \psi, \bar{\psi}]}, where S is the discretized action incorporating gauge and fermion terms, U denotes the gauge links, and \psi, \bar{\psi} are the Grassmann-valued quark fields; this integral is typically evaluated using Monte Carlo simulations as the primary computational tool. Observables, such as hadron masses, are obtained from vacuum expectation values of operators, particularly through two-point functions \langle O_X(t) O_Y^\dagger(0) \rangle, whose large-time exponential decay yields the ground-state energy E_0, corresponding to the hadron mass when the momentum is zero. This setup bridges the low-energy hadronic regime with perturbative QCD by relating lattice parameters to the QCD scale \Lambda_{QCD}, allowing quantitative tests of the theory's consistency across energy scales.
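
As a concrete illustration of the correlator method, the following minimal Python sketch (synthetic numbers, not lattice data; the energies and amplitudes are assumptions of the toy) builds a two-point function with one excited-state contamination term and extracts the ground-state energy from the effective mass m_\mathrm{eff}(t) = \ln[C(t)/C(t+1)], which plateaus at E_0 once the excited state has decayed:

    import numpy as np

    # Toy two-point correlator: ground state E0 plus one excited state E1
    # (all values in lattice units; amplitudes and energies are invented).
    A0, E0, A1, E1 = 1.0, 0.45, 0.3, 1.2
    t = np.arange(1, 20)
    C = A0 * np.exp(-E0 * t) + A1 * np.exp(-E1 * t)

    # Effective mass m_eff(t) = log(C(t)/C(t+1)) plateaus at E0 at large t.
    m_eff = np.log(C[:-1] / C[1:])
    print(np.round(m_eff, 4))   # approaches 0.45 as contamination dies off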

Historical Development

Lattice QCD originated in the early 1970s as a non-perturbative approach to quantum chromodynamics (QCD), driven by the need to address confinement and low-energy phenomena that perturbative methods could not handle. Kenneth G. Wilson introduced the foundational framework in his 1974 paper, proposing a discrete lattice regularization of gauge theories to enable numerical computations of strongly interacting systems, demonstrating quark confinement in the strong-coupling limit. Building on this, John Kogut and Leonard Susskind developed staggered fermion discretizations in 1974–1977 to incorporate quarks while preserving some chiral symmetry properties on the lattice. These innovations by Wilson, Kogut, and Susskind established lattice gauge theory as a rigorous tool for QCD simulations, shifting focus from continuum approximations to computable path integrals. The 1980s marked the practical adoption of lattice QCD through pioneering numerical simulations. Michael Creutz performed the first Monte Carlo calculations of pure SU(2) lattice gauge theory in 1980, validating Wilson's formalism by computing string tensions and phase transitions on small lattices. By the mid-1980s, collaborations such as the Columbia group extended these methods to QCD, producing initial quenched estimates of hadron masses, though limited by coarse lattices and unphysically heavy quark masses. G. Peter Lepage contributed significantly during this era, developing improved actions and effective field theory methods to enhance accuracy despite computational constraints. In the 1990s, algorithmic advancements, including multiboson techniques and hybrid Monte Carlo methods, enabled larger lattices and the inclusion of dynamical quarks, reducing systematic errors from unphysically heavy light-quark masses (around 50–100 MeV). The 2000s saw precision breakthroughs, exemplified by the MILC collaboration's 2004 determination of light and strange quark masses using improved staggered fermions, achieving values like m_ud ≈ 2.9 ± 0.2 MeV and m_s ≈ 77 ± 7 MeV in the MS-bar scheme at 2 GeV, which informed CKM fits. These results, with light quark masses down to 10–20 MeV, highlighted lattice QCD's role in flavor physics. The 2010s brought widespread use of chiral fermion formulations, such as domain-wall and overlap actions, allowing simulations at or near physical quark masses and minimizing extrapolation uncertainties. Large-scale efforts by groups like RBC/UKQCD and ETMC delivered sub-percent precision for hadron spectra and decay constants, supporting beyond-Standard-Model searches. Entering the 2020s, the transition to exascale computing via initiatives like the U.S. Department of Energy's Exascale Computing Project has enabled finer lattices (a ≈ 0.05 fm) and unprecedented volumes, facilitating calculations of hadronic matrix elements and rare decays with minimal discretization errors.

Mathematical Formulation

Discretization of QCD on a Lattice

Lattice QCD is formulated in Euclidean space-time, which is obtained from the Minkowski space-time of the continuum theory via a Wick rotation, t \to -i \tau, transforming the oscillatory path-integral weight into a form amenable to numerical evaluation using Monte Carlo methods. This rotation ensures that the Boltzmann weights in the partition function are positive definite, facilitating importance sampling in simulations. The space-time is discretized on a hypercubic lattice with sites located at integer coordinates n = (n_1, n_2, n_3, n_4), where the lattice spacing a sets the scale, and the physical volume is finite, with periodic boundary conditions imposed to approximate the infinite-volume limit. Gauge fields, represented as link variables between neighboring sites, and quark fields, residing at lattice sites, replace the continuum fields, while derivatives in the action are approximated by finite-difference operators. For instance, the covariant derivative D_\mu is discretized using forward or symmetric differences over lattice links, preserving the local gauge invariance of the action up to lattice artifacts. The continuum QCD action, S = \int d^4x \, \bar{\psi} (i D_\mu \gamma^\mu - m) \psi - \frac{1}{4} F_{\mu\nu}^a F_{\mu\nu}^a, is mapped to a lattice version in which the fermion bilinear is summed over sites with the discretized Dirac operator, and the gauge field strength F_{\mu\nu}^a is replaced by products of link variables around elementary plaquettes, the smallest closed loops on the lattice. This discretization introduces lattice-spacing-dependent effects, such as fermion doublers and explicit chiral symmetry breaking, which must be addressed through improved actions or renormalization procedures. To recover continuum QCD, lattice parameters are tuned such that as the lattice spacing a \to 0 at fixed physical volume, the lattice theory approaches the continuum limit through renormalization of operators and couplings, ensuring universality and matching of observables like hadron masses and decay constants. This limit is verified by extrapolating results from multiple lattice spacings, typically in the range a \approx 0.05 to 0.1 fm, where fm denotes femtometers. The non-Abelian gauge fields of QCD are discretized on the hypercubic lattice using link variables U_\mu(x) \in SU(3), which represent the parallel transporters along the lattice links from site x to x + \hat{\mu}. These variables approximate the continuum gauge links via U_\mu(x) \approx \exp(i a g A_\mu(x + a \hat{\mu}/2)), where a is the lattice spacing, g is the coupling constant, and A_\mu is the gluon field. Under local SU(3) gauge transformations g(x) \in SU(3), the link variables transform as U_\mu(x) \to g(x) U_\mu(x) g^\dagger(x + \hat{\mu}), preserving the gauge symmetry of quantum chromodynamics (QCD). This formulation, introduced by Kenneth Wilson, ensures that the lattice theory recovers the continuum QCD action in the limit a \to 0. The dynamics of the gauge fields are encoded in the gauge action, with the simplest choice being the Wilson plaquette action, given by S_g[U] = \beta \sum_{x, \mu < \nu} \left( 1 - \frac{1}{3} \Re \Tr P^{\mu\nu}(x) \right), where \beta = 6/g^2 sets the coupling scale, and P^{\mu\nu}(x) = U_\mu(x) U_\nu(x + \hat{\mu}) U_\mu^\dagger(x + \hat{\nu}) U_\nu^\dagger(x) is the plaquette variable, the oriented product of four link variables around an elementary 1 \times 1 square in the \mu\nu-plane. This corresponds to the leading-order discretization of the Yang-Mills action and exhibits O(a^2) lattice artifacts, meaning discretization errors vanish quadratically as a \to 0. The Wilson plaquette action has been the foundational choice for lattice simulations due to its simplicity and exact gauge invariance on the lattice.
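
As a concrete illustration, the following minimal Python sketch (illustrative assumptions: a crude Haar-like SU(3) sampler, a tiny 2^4 periodic volume, and \beta = 6.0) builds random link variables, forms plaquettes P^{\mu\nu}(x), and evaluates the Wilson gauge action:

    import numpy as np

    rng = np.random.default_rng(0)

    def random_su3():
        """Approximately Haar-random SU(3): QR of a complex Gaussian, phase-fixed."""
        z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
        q, r = np.linalg.qr(z)
        q *= np.diag(r) / np.abs(np.diag(r))      # fix the column phases
        return q / np.linalg.det(q) ** (1 / 3)    # project U(3) -> SU(3)

    L = 2   # tiny 2^4 lattice with periodic boundaries (toy size)
    U = np.array([[random_su3() for _ in range(4)]
                  for _ in range(L ** 4)]).reshape(L, L, L, L, 4, 3, 3)

    def shift(x, mu):
        y = list(x); y[mu] = (y[mu] + 1) % L
        return tuple(y)

    def plaquette(x, mu, nu):
        """P_{mu nu}(x) = U_mu(x) U_nu(x+mu) U_mu(x+nu)^dag U_nu(x)^dag."""
        return (U[x][mu] @ U[shift(x, mu)][nu]
                @ U[shift(x, nu)][mu].conj().T @ U[x][nu].conj().T)

    beta, S = 6.0, 0.0
    for x in np.ndindex(L, L, L, L):
        for mu in range(4):
            for nu in range(mu + 1, 4):
                S += beta * (1 - np.real(np.trace(plaquette(x, mu, nu))) / 3)
    print("Wilson gauge action S_g =", S)
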
To mitigate higher-order discretization errors, improved gauge actions have been developed using the Symanzik effective continuum theory, which systematically corrects for lattice artifacts by including additional short-distance operators in the action. The Symanzik improvement program constructs actions that eliminate errors up to a desired order in a by matching the lattice theory to the continuum via perturbative expansions. A prominent example is the tree-level Symanzik-improved (Lüscher-Weisz) gauge action, which incorporates both plaquettes and 1 \times 2 rectangles: S_g[U] = \beta \left[ c_0 \sum_{x, \mu < \nu} \left( 1 - \frac{1}{3} \Re \Tr P^{\mu\nu}(x) \right) + c_1 \sum_{x, \mu \neq \nu} \left( 1 - \frac{1}{3} \Re \Tr R^{\mu\nu}(x) \right) \right], with coefficients c_0 = \frac{5}{3} and c_1 = -\frac{1}{12} (consistent with the normalization condition c_0 + 8 c_1 = 1) chosen to cancel the O(a^2) errors at tree level in perturbation theory, leaving artifacts of O(a^4). Non-perturbative improvements extend this by tuning coefficients via simulations to further reduce artifacts, enhancing the approach to the continuum limit in precision calculations. The QCD vacuum structure includes non-perturbative topological features, incorporated on the lattice via the theta term S_\theta = i \theta Q, where Q is the topological charge and \theta parameterizes the theta vacuum; the experimentally inferred smallness of \theta constitutes the strong CP problem. The topological charge Q is defined using the field-theoretic approach, Q = \frac{a^4 g^2}{32\pi^2} \sum_x \epsilon_{\mu\nu\rho\sigma} \Tr \left[ F_{\mu\nu}(x) F_{\rho\sigma}(x) \right], discretized with clover-improved field strengths F_{\mu\nu} derived from link variables to ensure integer-valued Q in the continuum limit. Lattice simulations often encounter topological freezing at fine spacings, where transitions between sectors Q = n (with n \in \mathbb{Z}) are suppressed, requiring techniques like open boundary conditions or reweighting to sample the full theta vacuum. Although gauge-invariant observables do not require gauge fixing in lattice QCD, it is employed for quantities like gluon and quark propagators in specific gauges, such as the Landau gauge, which maximizes the functional R = \sum_{x,\mu} \Re \Tr U_\mu(x) via Fourier-accelerated steepest descent or overrelaxation algorithms. Gauge fixing introduces ambiguities due to Gribov copies—multiple gauge-equivalent configurations that are distinct local maxima of the same functional—complicating the uniqueness of fixed configurations. These ambiguities are mitigated by selecting the global maximum or using stochastic methods, but they persist as a challenge in non-perturbative studies.
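
The iterative character of lattice gauge fixing, and the Gribov ambiguity, can be made concrete in a compact toy model. The Python sketch below (a 2D compact U(1) field stands in for SU(3); the site-by-site relaxation is an illustrative stand-in for overrelaxation) maximizes the gauge-fixing functional by choosing, at each site, the gauge phase that locally maximizes the real parts of the attached links; different random starts can terminate in different local maxima, a toy analogue of Gribov copies:

    import numpy as np

    rng = np.random.default_rng(6)
    L = 8
    theta = rng.uniform(-np.pi, np.pi, size=(L, L, 2))  # compact U(1) link angles in 2D

    # Relaxation sweeps for the functional R = sum_{x,mu} cos(theta_mu(x))
    for sweep in range(50):
        for x in range(L):
            for y in range(L):
                out = np.exp(1j * theta[x, y, :])                       # links leaving (x, y)
                inc = np.exp(1j * np.array([theta[(x - 1) % L, y, 0],   # links entering (x, y)
                                            theta[x, (y - 1) % L, 1]]))
                phi = -np.angle(out.sum() + inc.conj().sum())           # locally optimal gauge phase
                theta[x, y, :] += phi
                theta[(x - 1) % L, y, 0] -= phi
                theta[x, (y - 1) % L, 1] -= phi
    print("R per link after fixing:", np.cos(theta).sum() / theta.size)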

Fermion Discretizations

In lattice quantum chromodynamics (QCD), the naive discretization of the Dirac operator using finite differences for the covariant derivative leads to the fermion doubling problem, where each continuum quark flavor is replicated 16 times (the "doublers") due to additional zeros of the operator at the corners of the Brillouin zone in four spacetime dimensions. This artifact arises because the naive lattice Dirac operator D = \sum_\mu \gamma_\mu \frac{\nabla_\mu + \nabla_{-\mu}}{2}, with forward and backward differences \nabla_\mu \psi(x) = \psi(x+\hat{\mu}) - \psi(x) and \nabla_{-\mu} \psi(x) = \psi(x) - \psi(x-\hat{\mu}), has a spectrum that includes unphysical low-energy modes at all momenta whose components are 0 or \pi/a, such as (\pi/a, 0, 0, 0) and its permutations, in accordance with the Nielsen-Ninomiya no-go theorem, which forbids a local, doubler-free, chirally symmetric lattice fermion. To remove the doublers while retaining locality, Wilson fermions introduce a non-chiral Laplacian term that gives the doubler modes masses of order 1/a, decoupling them in the continuum limit a \to 0. The Wilson fermion action is S_w = \sum_x \bar{\psi}(x) (D_w + m) \psi(x), where the Dirac-Wilson operator is D_w = \sum_\mu \left( \gamma_\mu \frac{\nabla_\mu + \nabla_{-\mu}}{2} - \frac{a}{2} \nabla_\mu \nabla_{-\mu} \right), with the second term being the Wilson term that breaks chiral symmetry explicitly and induces additive mass renormalization, requiring tuning to recover the continuum chiral limit. This formulation, originally proposed by Wilson, enables efficient simulations but introduces O(a) discretization errors, which can be improved to O(a^2) via clover terms or other enhancements. Staggered fermions, also known as Kogut-Susskind fermions, mitigate the doubling problem by incorporating a staggered phase factor \eta_\mu(x) = (-1)^{\sum_{\nu < \mu} x_\nu} into the naive operator, reducing the 16 doublers to 4 "tastes" that correspond to degenerate flavors in the continuum limit, with the spinor structure emerging from the taste degrees of freedom. The resulting action S_s = \sum_x \bar{\chi}(x) \sum_\mu \eta_\mu(x) [U_\mu(x) \chi(x+\hat{\mu}) - U^\dagger_\mu(x-\hat{\mu}) \chi(x-\hat{\mu})] + m \sum_x \bar{\chi}(x) \chi(x) preserves an exact U(1) chiral symmetry even at finite lattice spacing a, avoiding the need for mass tuning, though taste-breaking interactions at O(a^2) mix the tastes and require rooting procedures for N_f < 4 in unquenched simulations. For formulations that restore exact chiral symmetry on the lattice, domain-wall fermions embed the theory in five dimensions with quark fields confined to a four-dimensional slice via a domain-wall profile in the extra dimension, effectively separating left- and right-handed modes and exponentially suppressing mixing as the wall separation L_s increases. This approach, pioneered by Kaplan, yields a residual chiral symmetry breaking controlled by a residual mass m_{res} \propto e^{- \lambda L_s / a}, where \lambda is the domain-wall height parameter, allowing near-chiral behavior at finite a but at higher computational cost due to the extra dimension. Overlap fermions provide a rigorous solution by constructing a lattice Dirac operator that satisfies the Ginsparg-Wilson relation \{\gamma_5, D\} = a D \gamma_5 D, ensuring an exact lattice chiral symmetry \delta \psi = i \epsilon \gamma_5 (1 - \frac{a}{2} D) \psi at finite a.
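
The doubling pattern and its removal by the Wilson term can be verified directly in the free theory. This minimal Python sketch (lattice units a = 1, Wilson parameter r = 1) evaluates the naive operator's squared magnitude and the Wilson mass term at the 16 Brillouin-zone corners:

    import numpy as np
    from itertools import product

    # Free-field check: |D_naive(p)|^2 = sum_mu sin^2(p_mu) vanishes at every
    # corner p_mu in {0, pi}, while the Wilson term 2r sum_mu sin^2(p_mu / 2)
    # lifts the 15 unphysical doublers to masses of order 1/a.
    r = 1.0
    for corner in product([0.0, np.pi], repeat=4):
        p = np.array(corner)
        naive = np.sum(np.sin(p) ** 2)                     # zero at all 16 corners
        doubler_mass = 2 * r * np.sum(np.sin(p / 2) ** 2)  # 0 at p=0, up to 8r at (pi,pi,pi,pi)
        print(corner, f"|D_naive|^2 = {naive:.1f}, Wilson mass term = {doubler_mass:.1f}")
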
The overlap operator is given by D = \frac{1}{a} \left(1 + \gamma_5 \epsilon(H) \right), where H = \gamma_5 (D_w + m_0) is the Hermitian Wilson operator with negative mass parameter m_0 \in (-2, 0), and \epsilon(H) = H / \sqrt{H^2} is the matrix sign function that projects onto the chiral structure of the spectrum, derived from the overlap formulation of chiral lattice fermions. Developed by Narayanan and Neuberger, this operator exactly preserves the chiral anomaly and topology, making it ideal for phenomena sensitive to chiral symmetry, though its non-local structure and sign-function evaluation increase computational demands. Twisted mass fermions extend the Wilson action by adding a twisted mass term i \mu \bar{\psi} \gamma_5 \tau^3 \psi in the two-flavor case, rotating the mass in chiral space to suppress exceptional configurations and automatically O(a)-improve physical observables without clover tuning, particularly useful in unquenched simulations where dynamical quark effects are included. This formulation, introduced by Frezzotti and collaborators, enhances stability near the chiral limit by protecting against zero modes, with the twist angle \omega = \tan^{-1}(\mu / m) tuned to maximal twist for parity symmetry restoration in the continuum. Mixed action approaches combine different fermion discretizations for valence and sea quarks, such as overlap valence quarks on a staggered or Wilson sea, to leverage computational efficiency while controlling systematic errors from unitarity violations and taste mixing, enabling hybrid simulations for precision spectroscopy in unquenched QCD. These methods, explored in works by Orginos and others, introduce partial quenching artifacts like double poles in neutral meson propagators but allow flexible ensembles for studying quark mass hierarchies.
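
The defining algebra of the overlap construction can be checked numerically with a small stand-in matrix. In the Python sketch below, a random Hermitian matrix replaces \gamma_5 (D_w + m_0), which suffices because the Ginsparg-Wilson relation follows for any Hermitian kernel with \epsilon(H)^2 = 1; the relation then holds to machine precision:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 8                                      # toy matrix dimension (assumed)

    # gamma_5 analogue: diagonal +/-1 entries, so gamma5^2 = 1
    g5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))

    # Random Hermitian stand-in for H = gamma5 (D_w + m0)
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = (A + A.conj().T) / 2

    # Matrix sign function eps(H) = H / sqrt(H^2) via eigendecomposition
    w, V = np.linalg.eigh(H)
    eps = V @ np.diag(np.sign(w)) @ V.conj().T

    D = np.eye(n) + g5 @ eps                   # overlap operator with a = 1

    # Ginsparg-Wilson relation: {gamma5, D} = a D gamma5 D
    lhs = g5 @ D + D @ g5
    rhs = D @ g5 @ D
    print("Ginsparg-Wilson violation:", np.linalg.norm(lhs - rhs))   # ~1e-15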

Computational Techniques

Monte Carlo Methods

Lattice QCD simulations rely on Monte Carlo methods to evaluate the path integral non-perturbatively by generating ensembles of gauge field configurations distributed according to the Boltzmann weight e^{-S_g[U]}, where S_g[U] is the gauge action depending on the link variables U. This is achieved through importance sampling via Markov Chain Monte Carlo (MCMC) algorithms, which produce a sequence of configurations whose distribution approaches the desired equilibrium distribution after sufficient iterations, allowing statistical averages of observables to approximate the path integral. These methods are essential because direct integration over the infinite-dimensional field space is intractable, and the stochastic approach enables reliable estimates despite finite sample sizes. For pure gauge theories without dynamical fermions, local update algorithms such as the Metropolis and heat bath methods are employed to sample configurations. The Metropolis algorithm proposes small random changes to individual link variables and accepts or rejects them based on the Metropolis criterion, ensuring detailed balance and ergodicity to explore the configuration space. This method, first applied to lattice gauge theories in early simulations of Abelian and non-Abelian models, provides a baseline for generating configurations but can suffer from high rejection rates for larger proposed updates. In contrast, the heat bath algorithm directly samples new link variables from the conditional probability distribution given the neighboring links, achieving higher acceptance rates and faster equilibration, particularly for SU(2) gauge groups where exact sampling is feasible using group properties. These local updates are typically applied in a checkerboard pattern to parallelize computations across the lattice sites. To include dynamical fermions, which introduce a determinant factor \det(D) from the Dirac operator D into the measure, the Hybrid Monte Carlo (HMC) algorithm extends local methods by proposing global updates through fictitious molecular dynamics trajectories in an extended phase space of gauge fields and conjugate momenta. In HMC, a trajectory of fixed length is evolved using the leapfrog integrator to discretize Hamilton's equations derived from a Hamiltonian incorporating the gauge action and momenta, followed by a Metropolis accept/reject step to correct for integration errors and preserve detailed balance. This approach efficiently samples the full QCD measure for even numbers of degenerate quark flavors by representing the fermion determinant via pseudofermion fields \phi, where the effective action becomes S_{\text{eff}} = S_g + \phi^\dagger (D^\dagger D)^{-1} \phi, with \phi drawn from a Gaussian distribution. A key challenge in these MCMC methods is critical slowing down, where autocorrelation times \tau scale unfavorably with the lattice spacing a, typically as \tau \sim a^{-z} with dynamical exponent z \approx 1-2 for HMC (and considerably larger for slow modes such as the topological charge), leading to steeply increasing computational costs near the continuum limit. Autocorrelation times measure the number of updates needed for successive configurations to become statistically independent, and prolonged correlations in slow modes, such as low-momentum gauge fields, amplify statistical errors and limit precision in physical predictions. Efforts to mitigate this include optimizing trajectory lengths, step sizes, and integrators in HMC to keep acceptance rates around 60-80% while minimizing reversibility violations.
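
The anatomy of a single HMC update (momentum heat bath, leapfrog trajectory, Metropolis correction) is captured by this minimal Python sketch, in which a Gaussian toy action stands in for the gauge-plus-pseudofermion action:

    import numpy as np

    rng = np.random.default_rng(2)

    def S(q):               # toy action standing in for S_g + S_pf
        return 0.5 * np.dot(q, q)

    def grad_S(q):
        return q

    def hmc_update(q, n_steps=10, dt=0.1):
        """Momentum refresh, leapfrog trajectory, Metropolis accept/reject."""
        p = rng.normal(size=q.shape)                  # conjugate momenta from a heat bath
        H_old = S(q) + 0.5 * np.dot(p, p)
        qn, pn = q.copy(), p.copy()
        pn -= 0.5 * dt * grad_S(qn)                   # leapfrog half step in p
        for _ in range(n_steps - 1):
            qn += dt * pn
            pn -= dt * grad_S(qn)
        qn += dt * pn
        pn -= 0.5 * dt * grad_S(qn)                   # closing half step
        dH = S(qn) + 0.5 * np.dot(pn, pn) - H_old
        if rng.random() < np.exp(-dH):                # exactly corrects integrator error
            return qn, True
        return q, False

    q, accepted = np.zeros(100), 0
    for _ in range(1000):
        q, acc = hmc_update(q)
        accepted += acc
    print("acceptance rate:", accepted / 1000)        # tuned via dt in production runs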

Handling Fermions in Simulations

In lattice QCD simulations with dynamical fermions, the fermion determinant \det(D) must be incorporated into the Monte Carlo sampling of gauge configurations to account for quark loops. The pseudofermion technique achieves this by representing \det(D^\dagger D) as a bosonic integral over auxiliary fields \phi, known as pseudofermions, with the effective action S_\mathrm{pf} = \phi^\dagger (D^\dagger D)^{-1} \phi. This Gaussian integral exactly equals \det(D^\dagger D) up to a normalization constant, allowing the determinant to be included in the path integral without direct computation. Introduced in early numerical studies, this method enables efficient sampling using hybrid Monte Carlo (HMC) algorithms, where pseudofermions contribute to the Hamiltonian and their forces are computed via stochastic estimates. For two degenerate quark flavors, the pseudofermion action is Hermitian and positive definite, facilitating integration with standard HMC dynamics. In practice, the inverse (D^\dagger D)^{-1} is not computed explicitly; instead, pseudofermion fields are generated from a heat bath, and forces are derived from changes in the action during molecular dynamics evolution. This approach has been foundational for generating unquenched ensembles since the mid-1980s, though it introduces noise that grows as the quark mass decreases. Computing quark propagators and related observables requires solving linear systems D x = b on fixed gauge fields from Monte Carlo ensembles. For Hermitian positive definite operators, such as the even-odd preconditioned D^\dagger D in Wilson or staggered formulations, the conjugate gradient (CG) method is the standard iterative solver due to its efficiency and convergence guarantees. CG minimizes the residual iteratively, with the number of iterations scaling as the square root of the condition number, which worsens near the chiral limit. Preconditioning techniques, like domain decomposition, can reduce iteration counts by factors of 2–5. For non-Hermitian Dirac operators, such as the unpreconditioned Wilson-Dirac matrix, BiCGStab serves as a robust Krylov subspace method, avoiding the need to solve the normal equations and often achieving faster convergence in practice. It combines biconjugate gradient steps with stabilizing polynomials to damp oscillations, typically requiring fewer matrix-vector multiplications than CG applied to the normal equations. Both solvers are implemented with even-odd preconditioning to halve the effective system size, and their performance is critical, as propagator inversions dominate the computational cost of dynamical simulations. Alternative approaches like the multiboson algorithm address limitations of pseudofermions, particularly for light quarks where CG iterations proliferate. In this method, the determinant is represented by an integral over multiple bosonic fields via a polynomial approximation P(x) \approx 1/x of the inverse: using \gamma_5-Hermiticity the polynomial factorizes over its complex roots z_k, so that \det(D^\dagger D) \approx \left[ \det P(D^\dagger D) \right]^{-1} = \int \prod_k \mathcal{D}\phi_k^\dagger \, \mathcal{D}\phi_k \, \exp\Big( - \sum_k \phi_k^\dagger (D - z_k)^\dagger (D - z_k) \phi_k \Big), with the polynomial degree tuned for accuracy. This avoids iterative solvers during sampling, replacing them with local bosonic updates, and becomes exact in the limit of infinitely many boson fields. Originally proposed by Lüscher for two-flavor QCD, it mitigates critical slowing down but requires careful polynomial optimization to control systematic errors.
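
The conjugate gradient kernel at the heart of these inversions is compact. Here is a minimal Python sketch (a random Hermitian positive definite matrix stands in for the even-odd preconditioned D^\dagger D):

    import numpy as np

    def cg(apply_A, b, tol=1e-10, max_iter=1000):
        """Conjugate gradient for A x = b, A Hermitian positive definite."""
        x = np.zeros_like(b)
        r = b - apply_A(x)
        p = r.copy()
        rr = np.vdot(r, r).real
        for _ in range(max_iter):
            Ap = apply_A(p)
            alpha = rr / np.vdot(p, Ap).real
            x += alpha * p
            r -= alpha * Ap
            rr_new = np.vdot(r, r).real
            if np.sqrt(rr_new) < tol:
                break
            p = r + (rr_new / rr) * p
            rr = rr_new
        return x

    rng = np.random.default_rng(3)
    M = rng.normal(size=(50, 50)) + 1j * rng.normal(size=(50, 50))
    A = M.conj().T @ M + 0.1 * np.eye(50)     # Hermitian positive definite stand-in
    b = rng.normal(size=50) + 1j * rng.normal(size=50)
    x = cg(lambda v: A @ v, b)
    print("residual:", np.linalg.norm(A @ x - b))
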
Exact algorithms, such as those based on polynomial HMC or local bosonic representations, extend multiboson ideas to eliminate approximations entirely, ensuring unbiased sampling even for non-degenerate masses. These methods represent the determinant as a ratio of bosonic integrals and use exact accept/reject steps to remove residual systematic error, and are particularly useful in two-flavor theories where the determinant is positive. While computationally intensive for large volumes, they provide a benchmark for pseudofermion accuracy and help circumvent sign issues in phase-quenched simulations. Observables involving quark-line-disconnected diagrams, such as flavor-singlet correlators, require stochastic estimation to approximate inverses and traces efficiently. This involves injecting random noise vectors \eta with \langle \eta \eta^\dagger \rangle = \mathbb{1} and estimating \mathrm{Tr}(D^{-1}) \approx \frac{1}{N} \sum_{i=1}^N \eta_i^\dagger (D^{-1} \eta_i), where N is the number of noise sources, typically 10–100 for percent-level precision. For propagators, multi-source stochastic methods reduce variance by averaging over diluted noise vectors, enabling reliable computation of flavor-singlet correlators. Variance reduction via low-mode deflation or hierarchical probing further improves signal-to-noise ratios by factors of 5–10. To handle fractional powers of the determinant and multiple quark masses simultaneously, rational hybrid Monte Carlo (RHMC) extends pseudofermions using rational approximations, \det(D^\dagger D)^{1/2} = \int \mathcal{D}\phi^\dagger \mathcal{D}\phi \, \exp\left(-\phi^\dagger R(D^\dagger D) \phi\right), where R(x) \approx x^{-1/2} is written in multi-shift partial-fraction form, R(x) = \alpha_0 + \sum_k \alpha_k (x + \beta_k)^{-1}. This allows a single trajectory to sample multiple determinant factors, with forces computed via a few multi-shift CG solves. Developed for efficient multi-flavor simulations, RHMC reduces autocorrelation times by incorporating exact multi-mass solvers and has become standard for 2+1 flavor ensembles. Reweighting techniques complement these methods by generating ensembles at one quark mass and reweighting to nearby parameters, \langle O \rangle_m = \frac{\langle O w \rangle_{m_0}}{\langle w \rangle_{m_0}}, where w = \det(D_m)/\det(D_{m_0}) is estimated stochastically. This is particularly effective for fine-tuning the strange quark mass or exploring the phase diagram in heavy-ion contexts, with detunings limited to roughly 10–20% to maintain sufficient ensemble overlap. All-mode averaging and Taylor expansions further stabilize estimates, enabling precise control over systematic errors in physical-point extrapolations.
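
The stochastic trace estimator described above amounts to a few lines. In this Python sketch, a random dense matrix stands in for D^{-1} (in practice the inverse is applied by a solver, never formed explicitly):

    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    M = rng.normal(size=(n, n))               # stand-in for D^{-1}

    def stochastic_trace(apply_M, dim, n_sources=100):
        """Hutchinson estimator with Z2 noise: Tr(M) ~ (1/N) sum eta^T M eta,
        unbiased because <eta eta^T> is the identity."""
        acc = 0.0
        for _ in range(n_sources):
            eta = rng.choice([-1.0, 1.0], size=dim)   # Z2 noise source
            acc += eta @ apply_M(eta)
        return acc / n_sources

    print("stochastic estimate:", stochastic_trace(lambda v: M @ v, n))
    print("exact trace:        ", np.trace(M))

The statistical error of such an estimator falls like 1/\sqrt{N}, which is why dilution and low-mode deflation matter for noisy flavor-singlet signals.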

Perturbative Expansions on the Lattice

Lattice perturbation theory (LPT) adapts the standard perturbative expansion of quantum chromodynamics (QCD) to the discrete lattice regularization, enabling analytical computations of short-distance quantities where the continuum limit can be approached systematically. In LPT, Feynman diagrams are constructed using lattice-specific rules, where the gluon propagator in momentum space takes the form G_{\mu\nu}^{ab}(k) = \delta^{ab} \frac{a^2}{4 \sum_\lambda \sin^2(ak_\lambda/2)} \left[ \delta_{\mu\nu} - (1-\alpha) \frac{\sin(ak_\mu/2) \sin(ak_\nu/2)}{\sum_\lambda \sin^2(ak_\lambda/2)} \right], incorporating the lattice spacing a and gauge parameter \alpha. For fermions, such as in the Wilson discretization, the quark propagator is S^{ab}(k, m_0) = \delta^{ab} \, a \, \frac{-i \sum_\mu \gamma_\mu \sin(ak_\mu) + a m_0 + 2r \sum_\mu \sin^2(ak_\mu/2)}{\sum_\mu \sin^2(ak_\mu) + \left(a m_0 + 2r \sum_\mu \sin^2(ak_\mu/2) \right)^2}, with Wilson parameter r typically set to 1 to suppress doublers. Vertices, like the quark-quark-gluon interaction, are modified accordingly, e.g., V^a_{\mu, bc}(p_1, p_2) = -g_0 (T^a)_{bc} \left[ i \gamma_\mu \cos(a(p_1+p_2)_\mu/2) + r \sin(a(p_1+p_2)_\mu/2) \right], reflecting the exponential link variables in the lattice action. These rules allow perturbative series expansions in the bare coupling g_0, facilitating the study of ultraviolet behavior near the continuum limit. To relate lattice results to continuum QCD, LPT computes matching coefficients that convert bare lattice parameters to renormalized quantities in schemes like \overline{\rm MS}, often employing Symanzik improvement to remove discretization errors. The Symanzik effective theory expands the lattice action as S_{\rm lat} = S_{\rm cont} + a \sum_i K_i O_i + a^2 \sum_j L_j P_j + \cdots, where improvement coefficients like the Sheikholeslami-Wohlert term c_{\rm sw} for clover fermions cancel O(a) errors; for the plaquette gauge action with N_f = 0, the non-perturbative determination is c_{\rm sw} = \frac{1 - 0.656 g_0^2 - 0.152 g_0^4 - 0.054 g_0^6}{1 - 0.922 g_0^2}. Renormalization factors, such as for quark bilinears Z_O(a\mu, g_0) = 1 - \frac{g_0^2 C_F}{16\pi^2} \left[ \gamma^{(0)} \log(a^2\mu^2) + \Delta R \right], ensure scheme equivalence up to higher orders. This matching is crucial for precision, as demonstrated in early viability studies showing LPT agreement with Monte Carlo data for Wilson loops after tadpole resummation. The running of the lattice coupling is governed by the beta function \beta(g_0) = -b_0 g_0^3 - b_1 g_0^5 - b_2 g_0^7 - \cdots, with universal coefficients b_0 = \frac{11 - 2N_f/3}{(4\pi)^2} and b_1 = \frac{102 - 38N_f/3}{(4\pi)^4}, matched to continuum asymptotics. LPT applies these tools to short-distance observables, such as renormalized quark masses m_R = Z_m (m_0 - m_c), where the critical mass shift for Wilson fermions is m_c \approx -0.3257 \, g_0^2 C_F at one loop, enabling determinations with sub-percent precision after continuum extrapolation. Similarly, pseudoscalar decay constants like f_\pi are extracted from correlators, with perturbative corrections ensuring matching to \overline{\rm MS} values, e.g., f_\pi = 130.2(1.0) MeV from improved actions. In contrast, non-perturbative renormalization via the RI/MOM scheme computes Z factors from Landau-gauge Green's functions at fixed momentum, avoiding perturbative truncation errors but requiring additional matching to \overline{\rm MS}.
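
A quick numerical illustration of the universal running encoded in these coefficients (a minimal Python sketch; the starting value g = 2.0, the scale range, and the simple Euler integration are assumptions of the toy):

    import numpy as np

    # Two-loop universal beta function, as quoted above:
    # b0 = (11 - 2 Nf/3)/(4 pi)^2, b1 = (102 - 38 Nf/3)/(4 pi)^4.
    Nf = 3
    b0 = (11 - 2 * Nf / 3) / (4 * np.pi) ** 2
    b1 = (102 - 38 * Nf / 3) / (4 * np.pi) ** 4

    def beta(g):
        return -b0 * g ** 3 - b1 * g ** 5

    # Integrate dg / dln(mu) = beta(g) upward over two decades in scale.
    g, n_steps = 2.0, 1000
    dlog = np.log(100.0) / n_steps
    for _ in range(n_steps):
        g += dlog * beta(g)
    print("g at the high scale:", g)   # decreases with scale: asymptotic freedom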

Advanced and Emerging Approaches

Quantum Computing for Lattice QCD

Quantum computing presents a transformative approach to Lattice QCD simulations by exploiting quantum superposition and entanglement to encode the dynamics of gauge and fermion fields, thereby mitigating the exponential computational cost of classical methods for large lattices or complex observables. Variational quantum algorithms, such as the variational quantum eigensolver (VQE), enable the approximation of ground states and real-time evolution in lattice gauge theories through parameterized quantum circuits optimized classically. These methods avoid the fermion sign problem inherent in classical Monte Carlo sampling, allowing simulations at finite baryon density or out-of-equilibrium conditions that are intractable classically. Quantum Monte Carlo on quantum hardware further enhances this by facilitating efficient sampling of gauge configurations via quantum amplitude estimation, potentially yielding polynomial or exponential speedups over classical importance sampling for non-perturbative QCD dynamics. Specific algorithms tailored to Lattice QCD include the Quantum Approximate Optimization Algorithm (QAOA) for generating gauge-invariant configurations by optimizing over constrained Hilbert spaces, and linear-system solvers like the Harrow-Hassidim-Lloyd (HHL) algorithm for computing fermion propagators by inverting the Dirac operator as a sparse linear system. The HHL approach, in particular, leverages quantum phase estimation and can offer large asymptotic speedups over classical iterative solvers, subject to well-known caveats on state preparation and readout. Key challenges in implementing these on quantum devices include the rapid scaling of qubit requirements with lattice volume—for instance, simulating a physical 96³ lattice is estimated to demand 10⁷ to 10⁸ logical qubits—and the need for fault-tolerant error correction, where each logical qubit may require over 1,000 physical qubits to suppress noise below threshold levels. Current noisy intermediate-scale quantum (NISQ) devices exacerbate these issues with gate error rates of 0.1% to 1%, necessitating advanced error mitigation techniques like zero-noise extrapolation. Recent progress up to 2025 includes proof-of-concept demonstrations on NISQ hardware for small lattices, such as VQE-based computations of hadron masses in (1+1)D SU(2) gauge theory using 10–100 qubits on superconducting platforms. Hybrid quantum-classical workflows have advanced further, integrating quantum circuits for state preparation with classical tensor networks for validation, as seen in real-time evolution simulations of (1+1)D SU(3) models mimicking QCD dynamics. By 2025, experiments exceeding 100 qubits have explored simplified 2+1D gauge theories, paving the way for scaling to QCD-relevant volumes. Compared to classical limits, quantum approaches offer potential exponential speedups in sign-problem regions, such as finite-density QCD, where classical signal-to-noise ratios degrade exponentially, enabling access to thermodynamic phases and nuclear matter properties previously beyond reach.

Multigrid and Domain Decomposition Methods

Multigrid solvers address the critical slowing down observed in traditional iterative methods for inverting the Dirac operator in lattice QCD simulations, where convergence rates degrade as lattice volumes increase due to the accumulation of low-lying eigenvalues. By employing a hierarchy of grids, multigrid methods accelerate the solution process by smoothing high-frequency errors on fine grids and correcting low-frequency modes on coarser grids, thereby maintaining near-optimal scaling even near the continuum limit. This approach is particularly effective for the Wilson-Dirac operator, where standard conjugate gradient solvers exhibit iteration counts scaling as O(N^{1.5}) to O(N^2) for lattice volume N, leading to prohibitive computational costs on large lattices. Adaptive multigrid techniques further enhance this framework by dynamically constructing coarse-grid operators tailored to the specific gauge field configuration, separating low-mode corrections (handled on coarse levels) from high-mode smoothing (on fine levels). In adaptive setups, near-null-space vectors are identified through relaxation or probing, enabling robust performance across topological sectors and mass regimes without reliance on fixed geometric coarsening. For unstructured or gauge-disordered operators, algebraic multigrid variants dispense with explicit grid hierarchies, instead using aggregation-based coarsening to approximate the low-lying spectrum and achieve similar efficiency. These methods have demonstrated iteration reductions by factors of 10-100 in benchmarks on 4D lattices, significantly boosting the feasibility of precision calculations. Domain decomposition methods complement multigrid by partitioning the lattice into subdomains for parallel computation, leveraging techniques like Schur complement systems and additive Schwarz preconditioners to minimize inter-domain communication. The Schur complement approach reformulates the global problem as local solves coupled via boundary conditions, while Schwarz methods overlap subdomains to improve convergence through iterative information exchange. These preconditioners, often integrated with Krylov subspace solvers like GCR or BiCGStab, reduce the condition number of the preconditioned operator by factors of 10-15 and enable strong scaling on thousands of nodes with low communication overhead. In lattice QCD, such methods enhance fermion solvers by distributing the Dirac inversion workload, achieving sustained performance of hundreds of Gflop/s per node on clusters. The combined use of multigrid and domain decomposition yields key improvements in overall complexity, reducing the total cost of Dirac inversions from O(N^2) toward O(N) operations, which is essential for simulating physical volumes at fine lattice spacings. This near-optimal scaling facilitates exascale deployments in the 2020s, with implementations achieving petaflop-scale performance on systems like Frontier at Oak Ridge National Laboratory and Aurora at Argonne National Laboratory, enabling simulations with unprecedented precision for hadron structure and interactions.
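
The mechanism is easy to exhibit in a toy setting. The following Python sketch (illustrative assumptions: a 1D Laplacian stands in for the Dirac normal operator, linear interpolation defines the grid transfer, and weighted Jacobi is the smoother) implements a two-grid V-cycle with a Galerkin coarse-grid correction:

    import numpy as np

    n = 64                                     # fine-grid size (toy)
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stand-in

    nc = n // 2
    P = np.zeros((n, nc))                      # prolongation: linear interpolation
    for j in range(nc):
        P[2 * j, j] = 1.0
        P[2 * j + 1, j] = 0.5
        if j + 1 < nc:
            P[2 * j + 1, j + 1] = 0.5
    R = P.T / 2                                # restriction
    Ac = R @ A @ P                             # Galerkin coarse-grid operator

    def two_grid(b, x, nu=3, omega=2 / 3):
        d = np.diag(A)
        for _ in range(nu):                    # pre-smoothing (weighted Jacobi)
            x = x + omega * (b - A @ x) / d
        x = x + P @ np.linalg.solve(Ac, R @ (b - A @ x))   # coarse correction
        for _ in range(nu):                    # post-smoothing
            x = x + omega * (b - A @ x) / d
        return x

    b, x = np.ones(n), np.zeros(n)
    for k in range(10):
        x = two_grid(b, x)
        print(k, np.linalg.norm(b - A @ x))    # residual shrinks by a roughly constant factor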

Applications

Hadron Spectrum and Masses

In lattice QCD, the masses of hadrons are determined from the asymptotic behavior of two-point correlation functions at large Euclidean time separation t. These functions are constructed as C(t) = \sum_{\vec{x}} \langle O(\vec{x},t) O(0,0)^\dagger \rangle, where O represents a local interpolating operator that creates or annihilates the hadron of interest from the vacuum, such as bilinears of quark fields for mesons or three-quark operators for baryons. For sufficiently large t, the correlator decays exponentially as C(t) \sim | \langle 0 | O | H \rangle |^2 e^{-m_H t} / (2 m_H), where m_H is the ground-state hadron mass and \langle 0 | O | H \rangle is the overlap amplitude; fitting this decay yields m_H after accounting for contributions from excited states at shorter times. Pioneering lattice QCD calculations in the late 1990s and early 2000s, using dynamical quark simulations, successfully reproduced the pion and nucleon masses in agreement with experimental values within the prevailing statistical and systematic errors, providing early validation of the non-perturbative approach. For instance, two-flavor simulations at pion masses around 500 MeV demonstrated nucleon masses consistent with experiment after extrapolation to the physical point. These results marked a shift from quenched approximations to full QCD, highlighting the importance of sea quark effects in the hadron spectrum. For light quarks, lattice results are extrapolated to the physical regime using chiral perturbation theory (ChPT), which systematically incorporates the small up, down, and strange quark masses through an effective low-energy expansion matching QCD symmetries. This enables reliable predictions for pseudoscalar meson masses and baryon octet/decuplet splittings, with lattice data aligning well with ChPT fits. For heavier quarks like charm and bottom, heavy quark effective theory (HQET) is applied to treat the large quark masses non-relativistically, facilitating computations of heavy-light meson and baryon masses by separating the scales of light and heavy degrees of freedom. Recent advancements include the inclusion of isospin-breaking effects from the up-down quark mass difference and quantum electrodynamics (QED), which contribute to mass splittings such as the experimental charged-neutral pion difference of about 4.6 MeV, with electromagnetic effects providing the dominant contribution. Lattice QCD+QED simulations on fine lattices have quantified these corrections, achieving percent-level precision for electromagnetic contributions to light hadron masses. By the 2020s, major collaborations have reached 1% relative accuracy for light hadron masses, including pions, kaons, and nucleons, as assessed in comprehensive reviews that average results across ensembles with physical pion masses and controlled systematics (as of 2024).
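
A minimal fitting sketch in Python (synthetic correlator values, not lattice data; the fit window t \ge 6 is an assumption of the toy) shows how m_H is extracted from the large-t tail:

    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic correlator: ground state (m = 0.5) plus an excited state whose
    # contamination is negligible in the fit window.
    t = np.arange(0, 24)
    C = 2.0 * np.exp(-0.5 * t) + 0.8 * np.exp(-1.1 * t)

    def ground_state(t, A, m):
        return A * np.exp(-m * t)

    # Fit only t >= 6, where the excited state has decayed away.
    popt, pcov = curve_fit(ground_state, t[6:], C[6:], p0=(1.0, 0.4))
    print("fitted mass:", popt[1], "+/-", np.sqrt(pcov[1, 1]))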

QCD Thermodynamics and Phase Transitions

Lattice QCD simulations of QCD thermodynamics are conducted in a finite-temperature framework by compactifying the temporal extent of Euclidean spacetime, sometimes employing anisotropic lattices where the temporal lattice spacing a_t is finer than the spatial one a_s to improve resolution near the phase transition. The temperature T is set by the inverse of the temporal circumference, T = 1/(N_t a_t), with N_t denoting the number of temporal sites. This setup allows probing high-temperature regimes relevant to the early universe and heavy-ion collisions, where quark-gluon plasma (QGP) forms. Key observables include order parameters that signal symmetry changes: the chiral condensate \langle \bar{\psi} \psi \rangle, which measures spontaneous chiral symmetry breaking and decreases toward zero above the transition temperature, and the renormalized Polyakov loop, whose expectation value rises sharply to indicate deconfinement of color charges in the transition from hadronic matter to the QGP. For QCD with 2+1 flavors of light quarks (up, down, strange) at physical masses, lattice results establish a rapid crossover transition rather than a true phase transition, centered around T_c \approx 155 MeV, as determined from peaks in susceptibilities of these order parameters and the energy density. This crossover nature persists due to explicit chiral symmetry breaking by the quark masses, contrasting with the true phase transition expected in the chiral limit. The sign problem restricts direct simulations at nonzero real baryon chemical potential \mu_B, forcing reliance on the alternative approaches noted below. The equation of state (EoS), relating pressure P, energy density \epsilon, and entropy density s via thermodynamic relations such as s = (\epsilon + P)/T and the trace anomaly \epsilon - 3P = T^5 \frac{\partial}{\partial T}\left(P/T^4\right), has been computed up to temperatures of several hundred MeV using the integral method applied to the trace anomaly on the lattice. These results, validated with physical quark masses and continuum extrapolations, provide essential input for hydrodynamic modeling of heavy-ion collisions at facilities like RHIC and LHC, accurately describing QGP expansion and particle spectra. To access finite-density effects despite the sign problem, simulations at imaginary \mu_B exploit the Roberge-Weiss periodicity for analytic continuation to real densities, enabling continuum-extrapolated predictions of the phase diagram up to moderate \mu_B/T \lesssim 3. In the 2020s, advances in lattice techniques have enabled first-principles calculations of transport coefficients, including the bulk viscosity \zeta via Kubo relations from retarded correlators of the trace anomaly \theta_{\mu\mu}: \zeta = -\lim_{\omega \to 0} \frac{1}{\omega} \mathrm{Im}\, G_R^{\theta\theta}(\omega, \mathbf{0}). For pure SU(3) gauge theory near 1.5 T_c, \zeta/s \approx 0.2-0.3 (where s is the entropy density), peaking close to the transition due to the breaking of conformal symmetry. Extensions to full QCD with dynamical quarks are emerging, incorporating gradient flow for correlators and improving signal-to-noise for the shear viscosity \eta and the electrical conductivity, enhancing QGP hydrodynamics, with recent 2025 calculations including heavy quark diffusion coefficients at physical masses.
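
The integral method can be sketched numerically as follows (Python; the trace-anomaly curve is a smooth toy parameterization rather than lattice data, and the integration constant at the lowest temperature is set to zero for illustration):

    import numpy as np

    # Toy trace anomaly I(T) = epsilon - 3P peaking near the crossover.
    T = np.linspace(130.0, 500.0, 400)                        # MeV
    I = 4.0 * T ** 4 * np.exp(-((T - 200.0) / 80.0) ** 2)

    # Integral method: P(T)/T^4 = P(T0)/T0^4 + int_{T0}^{T} dT' I(T') / T'^5
    integrand = I / T ** 5
    p_T4 = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))))
    eps_T4 = I / T ** 4 + 3 * p_T4                            # epsilon / T^4
    s_T3 = eps_T4 + p_T4                                      # s/T^3 = (eps + P)/T^4
    print("p/T^4, s/T^3 at T = 500 MeV:", p_T4[-1], s_T3[-1])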

Challenges and Limitations

The Fermion Sign Problem

The fermion sign problem in lattice QCD arises primarily when simulating systems at finite baryon density, where a real chemical potential μ is introduced to couple to the quark number. In the path integral formulation, the partition function involves the fermion determinant det(D(μ)), which becomes complex for real μ > 0 due to the non-Hermitian nature of the Dirac operator D in the presence of μ. This complex weight prevents the use of standard importance-sampling Monte Carlo methods, which rely on a positive-definite measure, leading to the need for phase quenching or other approximations that introduce systematic biases. The severity of the sign problem is quantified by the average phase factor of the fermion determinant, \langle e^{i\theta} \rangle, where θ is the phase of det(D(μ)). This factor decays exponentially with the spacetime volume V as \langle e^{i\theta} \rangle \sim e^{-\Delta F / T}, with ΔF representing the free-energy difference between the full theory and the phase-quenched ensemble (a quantity that grows with V), and T the temperature. This exponential suppression implies that the signal-to-noise ratio in Monte Carlo estimates deteriorates rapidly, rendering simulations infeasible for large volumes or low temperatures relevant to physical QCD conditions. Several workarounds have been developed to circumvent or mitigate the problem. One approach is the Taylor expansion of observables in powers of μ/T around μ = 0, where simulations are sign-problem-free, allowing extrapolation to finite μ using coefficients computed at zero density. Another method involves reweighting ensembles generated at μ = 0 to incorporate the complex phase of det(D(μ)), though this becomes computationally prohibitive as \langle e^{i\theta} \rangle shrinks. More recently, the Lefschetz thimble method deforms the integration contour in complex field space to paths where the imaginary part of the action is constant, potentially reducing oscillations and enabling sampling. The sign problem severely restricts direct lattice QCD studies of finite-density phenomena, such as the equation of state in neutron star interiors where densities exceed nuclear saturation, and the high-density regions probed in heavy-ion collisions at facilities like RHIC and LHC. These limitations force reliance on models or extrapolations, impacting predictions for dense matter properties and phase transitions. Recent progress up to 2025 includes advancements in density-of-states methods, which reconstruct the partition function from a measured density of states, enabling controlled expansions for real μ without direct sampling. Additionally, complex Langevin dynamics have shown promise for full QCD at moderate densities, but convergence issues persist, particularly near transitions where boundary terms in the drift force lead to incorrect results unless carefully monitored with techniques like gauge cooling. Ongoing efforts, including the SIGN25 workshop in January 2025, continue to explore these and other emerging approaches for mitigating the sign problem.
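
The exponential collapse of the average phase factor is easy to demonstrate in a toy model (Python; the Gaussian phase with variance growing linearly in the volume is an illustrative assumption, not QCD data):

    import numpy as np

    rng = np.random.default_rng(5)

    # Phase theta per "configuration" with variance 0.1 * V, so analytically
    # <e^{i theta}> = exp(-0.05 V): exponentially small in the volume.
    for V in [1, 4, 16, 64, 256]:
        theta = rng.normal(scale=np.sqrt(0.1 * V), size=200_000)
        est = np.mean(np.exp(1j * theta)).real
        print(f"V = {V:4d}  estimate = {est:+.5f}  exact = {np.exp(-0.05 * V):.5f}")

    # At large V the estimate is swamped by statistical noise of order
    # 1/sqrt(n_samples): the signal-to-noise collapse described above.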

Scaling and Systematic Errors

In lattice QCD simulations, discretization errors arise from the finite lattice spacing a, which introduces lattice artifacts that must be systematically reduced to recover continuum QCD results. These errors are analyzed using Symanzik's effective field theory, which maps the lattice theory onto a continuum effective theory with higher-dimensional operators suppressed by powers of a. For standard Wilson fermions, errors scale as O(a), but improved actions, such as the clover fermion action with the Sheikholeslami-Wohlert term, achieve tree-level O(a^2) improvement by canceling leading-order artifacts. Extrapolation to the continuum limit typically involves fitting lattice data across multiple spacings to forms guided by Symanzik theory, ensuring the residual discretization error at the finest spacing is below a few percent for physical quantities like hadron masses. Finite volume effects occur due to the finite spatial extent of lattice simulations, leading to systematic shifts in observables that depend on the spatial size L. For single-particle masses, such as the pion, these effects include contributions from virtual particle exchanges around the periodic volume, known as wrapping effects, which are exponentially suppressed as \exp(-m_\pi L), where m_\pi is the pion mass. The Lüscher formalism provides a quantitative framework for these shifts, relating finite-volume energy levels to infinite-volume scattering properties, particularly for two-particle states, and applicable to stable hadrons via its single-particle formulation. To minimize these effects, simulations require m_\pi L \gtrsim 5, with residual shifts estimated using effective field theory and subtracted or extrapolated accordingly. Chiral extrapolation is necessary because lattice simulations often use quark masses heavier than physical values to reduce computational cost, requiring fits toward the chiral limit using chiral perturbation theory (χPT). In lattice χPT, adapted for discretization effects and partial quenching, observables like hadron masses are expanded in powers of the quark mass m_q, incorporating non-analytic terms such as m_q^2 \ln m_q. Fits typically employ next-to-leading or next-to-next-to-leading order SU(2) or SU(3) χPT formulas to interpolate or extrapolate to physical light- and strange-quark masses, with systematic uncertainties assessed by varying the fit range and order. The continuum limit is obtained by fitting lattice results for a given observable, such as a hadron mass m_\mathrm{lat}, as a function of a: m_\mathrm{lat}(a) = m_\mathrm{cont} + c_1 a^2 + c_2 a^4 + \cdots for O(a^2)-improved actions, where higher powers account for subleading discretization errors. Multiple ensembles at different spacings (typically 3 or more, with the coarsest a \approx 0.1 fm) are required for reliable extrapolation, with goodness-of-fit tests ensuring consistency. The Flavour Lattice Averaging Group (FLAG) quantifies systematic errors across lattice collaborations by averaging results only from simulations meeting strict criteria on continuum, finite-volume, and chiral controls, providing error budgets for key quantities like the pion decay constant or light quark masses. For instance, in recent averages (as of 2024), systematic errors from discretization, chiral extrapolation, and finite volume are each at the sub-percent level, contributing comparably to the total uncertainty in f_\pi.
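
A minimal continuum-extrapolation sketch in Python (the three spacings and mass values are invented for illustration; real analyses propagate correlated statistical errors):

    import numpy as np

    a = np.array([0.09, 0.07, 0.05])             # lattice spacings in fm
    m_lat = np.array([0.9815, 0.9760, 0.9722])   # toy masses in GeV

    # Leading Symanzik form for an O(a)-improved action: m(a) = m_cont + c1 * a^2
    c1, m_cont = np.polyfit(a ** 2, m_lat, 1)    # linear fit in a^2
    print(f"m_cont = {m_cont:.4f} GeV, c1 = {c1:.2f} GeV/fm^2")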

    Oct 17, 2012 · In this paper, we study the effect of the phase quenching within the frameworks of effective models and holographic models. We show, in a ...Missing: ΔF/ | Show results with:ΔF/