Introduction to Lattice QCD E.-M. Ilgenfritz

Veksler and Baldin Lab for High Energy Physics, JINR Dubna

School on High Energy Physics “Phenomenology of QCD and Lattice QCD”

Bogolubov Institute for Theoretical Physics of the Ukrainian NAS Kiev, September 9 - 13, 2013

Outline:
1. Introduction
2. Formulation of gauge theories on the lattice
3. What characterizes confinement ? The string tension !
4. How is confinement and deconfinement characterized at T ≠ 0 ?
5. Which are the difficulties to include quarks as active partners ?
6. How hadron masses are calculated ?
7. What can be said about the structure of a confining string ?
8. How hadron structure functions are calculated ?
9. What is “vacuum structure” and how can one manipulate it ?
10. How to describe high densities of energy/baryonic charge ?

1. Introduction

Phenomenological challenges

1. Is QCD really the correct theory of strong interactions ? Quarks and gluons are not observed, but can one somehow reproduce the properties of mesons, baryons, ... ? Asymptotic Freedom (1973) was a good argument in favor of QCD. How to reconcile this with confinement ? This is the qualitative aspect.

2. If QCD is correct, will we be able to determine its parameters ? These are the coupling constant and the quark masses, to be fixed by matching the hadron masses to the observed values. This is the quantitative aspect.

3. Why are the prominent symmetries that QCD would have in the limiting case of massless quarks (chiral limit, chiral symmetry) so badly violated ? The quark masses are too small compared to all hadronic scales to explain this ! This is the symmetry aspect.

4. Can we understand quark confinement from QCD ? Is there an unknown symmetry related to confinement ? Which models of confinement (strings, condensation of topological objects ...) are qualitatively and quantitatively correct descriptions ? This is the modeling aspect.

[Figure: quark-flow diagram of a semileptonic D → K decay, a c → s transition via W emission producing e⁻ ν, with a spectator d quark.]

5. Is Weinberg-Salam the correct theory of weak interactions ? Even though weak interactions are perturbative, initial and final states may be hadrons, which can only be described by QCD. Full control of QCD plays an essential role in giving an answer. One task is to determine the CKM matrix with good precision ! This is the service aspect to other parts of the Standard Model.

6. How do quarks and gluons behave at extremely high temperatures relevant for the Early Universe ? Heavy Ion Collision Experiments give (not completely clean !) access. The lattice may be the ultimate way ! One of the first lattice simulations showed that a phase transition may appear. This was completely unknown territory.

7. How do quarks and gluons behave at extremely high densities, as in the cores of neutron stars ? The lattice may be the only way ! It is important to understand observed properties of compact stars like the
• cooling rate
• mass vs. radius relationship !
Low-energy Heavy Ion Collisions (FAIR, NICA, BES at RHIC) may provide data that have to be explained. This has remained poorly known territory for lattice QCD until today.

8. A practical benchmark problem: get the proton and neutron properties right ! This includes: masses, form factors, structure functions, polarizability, EDM of neutron, ... It took decades of lattice QCD to reach high precision !

The importance of Quantum Field Theory for these questions is undisputed, but for a long time perturbation theory was the only method to produce results. This was considered not satisfactory.
• QED: α_em ≈ 1/137, perturbation theory works extremely well !
• weak interactions: α_weak ≈ 1/30, perturbation theory works pretty well.
• strong interactions: α_strong ≈ 0.1 ... 1, perturbation theory works in some cases only (e.g. amplitudes for processes with high momentum transfer Q²). This is called “Asymptotic Freedom”.

First principal question: “How to define QFT outside perturbation theory ? Which Constructive Field Theory is also Practicable ?”
• How to define observables for which no perturbative expression exists ? Examples: masses, matrix elements, e.g. m ∼ e^{−const/g²} (essentially singular).
• Perturbative series are generically nonconvergent asymptotic series. Dyson’s rule of thumb says : “For given α, the series should be summed up to the smallest term. The next order term is an estimate for the error.”
• There is no way to systematically improve the result simply by investing a bigger effort !

Second principal question: “Why should we be interested in mesoscopic QCD at O(10) fermi ?”
• This is the size of the region of vacuum which is disturbed in a typical hA or AA collision.
• The symmetries of the QCD Lagrangian (at least in the limit of massless current quarks) are badly broken in the daily-life world: chiral symmetry breaking is responsible for the weight of nucleons, i.e. of human beings.
• The fundamental particles (quarks and gluons) featuring in the QCD Lagrangian do not exist as separate particles under usual conditions. This is called “Confinement”.

• Neither property is realized under all circumstances: clearly, high temperature destroys confinement and restores the (approximate) chiral symmetry (lattice and HEP experiment). Whether both happen simultaneously at high baryonic density is a subject of present experimental investigation.
• The phase diagram is full of phase transitions or smooth crossovers. QCD must describe all forms of hadronic matter on an equal footing, without a priori knowledge.
• External fields (magnetic field) clearly modify the mentioned phase structure : the relevant phase diagram is at least three-dimensional in T − µ_B − B space.

• These mechanisms can be studied only in statistical QCD, as a system of many degrees of freedom with a priori unknown composition. The description of phase transitions is beyond perturbation theory.
• What actually happens in a hA or AA collision is the subject of non-equilibrium statistical QCD.
Conclusion: Discretization (a finite number of sites or a countably infinite number of sites) might be a viable alternative to perturbation theory. The sources of error are clearly related to the resources. Memory and computing time are dictated by the number of sites and the required precision.

Notice that
• discretization works differently for scalar and for gauge fields,
• gauge invariance is the guiding principle for doing the latter.
Third principal question: “What is the relation between Lattice QCD and other (continuum) non-perturbative approaches ?” Examples : Schwinger-Dyson equations, the Functional Renormalization Group et al. They contain a hierarchy of equations for Green’s functions. Truncations are unavoidable, which are in a process of gradual improvement.

In many talks about QCD one can see curves from several models, approximation schemes etc. and, in addition, one labelled “lattice”.

It is obvious:
• deviations are understood as deficits of the respective model, of the chosen approximation/truncation etc.,
• lattice data seem to have the rank of ultimate truth,
• theoretical progress is measured by analytical results approaching the lattice result better and better.

What does this mean ?

• Lattice gauge theory (in QCD, electroweak theory etc.) is an active part of the wider particle phenomenology, competing with (computationally) cheaper approaches around. Approximative or not ?

• The lattice technique apparently has no approximation problem. This is not true; several limits have to be taken ! They might be temporarily beyond the available computing power. • Lattice data are considered “non-perturbative”, i.e. undoubtedly beyond perturbation theory. Strictly speaking, it makes no sense to ask for the “perturbative part”. • Lattice results are “objective”, generated by computer simulation (this is mostly true, unless stated otherwise).

Some words about the relation between lattice and perturbation theory

• Sometimes, a perturbative result (in a certain loop order) is subtracted from lattice data in order to define what is then called the “purely non-perturbative” part. • Perturbative results (expansion in the coupling constant) are usually available through analytic or computer-supported analytical work, which requires regularization, a renormalization procedure, etc. • Nowadays, in addition, the lattice allows one to imitate perturbation theory by so-called “Numerical Stochastic Perturbation Theory”. • Unfortunately, perturbation theory is generically non-convergent. Borel and other resummation techniques sometimes help.

Something that I was involved in in the last two years

• Numerical lattice PT up to very high orders helps clarifying these subtle questions met in perturbation theory:
– What is the pattern of perturbative coefficients c_n ?
– What in particular is the growth with n ?
– Is there a non-vanishing radius of convergence of Σ_n c_n g^{2n} ?
– Essentially singular quantities have zero radius of convergence !
– Is the series resummable ? How does the convergence improve ?

Lattice results are computer-made. What is their limitation ?

What comes first to mind ? What affects any simulation method ? • lattice techniques require discretization: finite lattice spacing ! • numerical lattice simulations require a finite space-time lattice: finite volume ! • lattice Monte Carlo yields results with statistical errors : error ∝ 1/√N_sampling

Statistical errors → Lattice computations may be very time-consuming ! The results can be controlled by a → 0 (continuum) and V → ∞ extrapolations ! Then calculations become even more time consuming because of many pairs (a, V ) that have to be considered !

Only high quality Monte Carlo results can be really convincing, they are costly ! Further limitations more specific for QCD will be discussed later.

2. Formulation of gauge theories on the lattice

One wants to do Quantum Chromo Dynamics simulations; these three aspects will be discussed next.
• Quantum aspect: find a universal, really versatile formulation of Quantum Mechanics (extendable to Quantum Field Theory). This universal framework is Feynman’s path integral approach.
Example: Quantum Mechanics of one degree of freedom (also known as “field theory in 0+1 dimensions”):
“field”: the coordinate q(τ) of a particle moving in one dimension
“basic space”: the time axis, discretized in N_τ steps, τ = n Δτ
“partition function” (the limit β = N_τ Δτ → ∞ covers the case of zero temperature T = 0, else fixed β = inverse temperature):

Z(β) = ∫ Π_{i=0}^{N_τ} dq_i  Π_{j=0}^{N_τ−1} e^{−Δτ V(q_{j+1})/2} e^{−(q_{j+1}−q_j)²/(2Δτ)} e^{−Δτ V(q_j)/2}  δ(q_0 − q_{N_τ})

This is the thermal partition function written as a Feynman path integral ! Here, the “Euclidean” action, S_E = ∫ dτ (T_kin + V), is required.

The partition function defines a “measure” for periodic trajectories {q_{N_τ}, q_{N_τ−1}, ...., q_2, q_1, q_0}, such that trajectories can be truly “sampled” (with q_{N_τ} = q_0). Sampling, i.e. fluctuations, is the essence of quantum effects in this setting ! Lowest-action trajectories, i.e. classical solutions of the EoM (equation of motion), like the “kink” or “instanton”, may be useful to assess the validity of certain semiclassical approximations.
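To make the sampling idea concrete, here is a minimal Python sketch (my own illustration, not part of the lecture), assuming a harmonic potential V(q) = q²/2, unit mass, and a simple local Metropolis update of the periodic trajectory:

import numpy as np

# Metropolis sampling of the discretized Euclidean path integral for one
# degree of freedom with V(q) = q^2/2. Periodicity q[N_tau] = q[0] is
# implemented via the modulo in the neighbour lookup.
rng = np.random.default_rng(0)
N_tau, dtau, n_sweeps, step = 64, 0.5, 5000, 1.0
q = np.zeros(N_tau)

def V(qi):
    return 0.5 * qi**2

def local_action(q, i, qi):
    # the part of S_E that depends on site i when its value is qi
    qp, qm = q[(i + 1) % N_tau], q[(i - 1) % N_tau]
    return ((qp - qi)**2 + (qi - qm)**2) / (2 * dtau) + dtau * V(qi)

history = []
for sweep in range(n_sweeps):
    for i in range(N_tau):
        trial = q[i] + step * rng.uniform(-1, 1)
        dS = local_action(q, i, trial) - local_action(q, i, q[i])
        if dS < 0 or rng.random() < np.exp(-dS):
            q[i] = trial                       # accept the local update
    history.append(np.mean(q**2))

# exact lattice value for this action: <q^2> = 1/sqrt(4 + dtau**2) ~ 0.485;
# the continuum harmonic oscillator gives 0.5 (discretization effect visible)
print("estimate of <q^2> =", np.mean(history[n_sweeps // 2:]))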

Some remarks:
– the limit V = 0 gives the “Wiener measure” (also in Brownian motion)
– correlators like ⟨q_{τ₁} q_{τ₂}⟩ represent time-ordered Green’s functions, however in Euclidean time, such that analytic continuation to real (Minkowski) time is needed !
– All functionals of trajectories (observables) are easily sampled.
– The average is automatically the expectation value of the corresponding time-ordered operator product !
– This makes the numerical path integral technique attractive for Quantum Field Theory.

Generalize this quantum formulation to an interacting scalar field theory:
“field”: Φ_{x⃗,t}, given on the sites of a 3-dimensional lattice L³ consisting of points (x₁ = n₁a, x₂ = n₂a, x₃ = n₃a), replicated for a lattice T of discrete times t = n₄a (which address different “timeslices” L³)
“basic space”: L³ × T, the 4-dimensional lattice.

Z = ∫ Π_{x⃗∈L³} Π_{t∈T} dΦ_{x⃗,t} e^{−S_E[Φ;χ]}

with Euclidean action (χ stands for some parameters)

S_E[Φ; χ] = (a⁴/2) Σ_{x⃗∈L³,t∈T} (Φ_{x⃗,t+a} − Φ_{x⃗,t})²/a² + (a⁴/2) Σ_{x⃗∈L³,t∈T} Σ_{i=1}^{3} (Φ_{x⃗+aî,t} − Φ_{x⃗,t})²/a² + a⁴ Σ_{x⃗∈L³,t∈T} V(Φ_{x⃗,t}; χ)

The potential is

V(Φ_{x⃗,t}) = (1/2) m₀² Φ²_{x⃗,t} + (g₀/4) Φ⁴_{x⃗,t}

Before simulation, a is eliminated by a rescaling of the field Φ. This should not matter because of the integration limits ±∞. After rescaling

Φ = (√(2κ)/a) ϕ ,   g₀ = 6λ/κ² ,   m₀² = (1 − 2λ − 8κ)/(κ a²)

the action takes a standard form:

S_E = −2κ Σ_{x,µ} ϕ_x ϕ_{x+µ̂} + λ Σ_x (ϕ_x² − 1)² + Σ_x ϕ_x²

κ = “hopping parameter”, λ = “four-particle coupling”.
What is here the continuum limit ? Measure the correlation length ξ of the two-point function

G_{x,y} = ⟨ϕ_x ϕ_y⟩ ∼ exp( −|x − y|/ξ )

The continuum limit requires ξ >> a; this requires to study the theory near a second order phase transition characterized by ξ/a → ∞ while ratios of various masses m₁/m₂ → const.

Well-known limiting cases: λ → ∞ leads to the 4-dimensional Ising model with a second order phase transition at κ_c = 0.0748. For λ < ∞ one has to find the critical line κ_c(λ). One can define a renormalized coupling λ_renorm → 0 for κ → κ_c(λ). This means that the ϕ⁴ theory is a
– non-interacting (free) theory in the continuum limit,
– non-trivial, interacting theory as an effective theory at finite cutoff a.
As a field theory in the continuum sense, a bit pathological !

Coming back to theories with a nontrivial continuum limit ! One is working on finite lattices, n_i = 1, ..., N_i for all i = 1, ..., 4:
3-volume V₃ = N₁ N₂ N₃ a³ ,   inverse temperature 1/T = β = N₄ a ,   4-volume V₄ = N₁ N₂ N₃ N₄ a⁴

If possible, all the following limits have to be taken :
– continuum limit at fixed temperature : N₄ → ∞ such that N₄ a = 1/T is fixed, and N_i → ∞ such that V₃ remains fixed.
– later, eventually, the “thermodynamic limit” (in the sense of usual thermodynamics) : V₃ → ∞ and a → 0.
– continuum limit as a field theory at zero temperature : N₄ → ∞ and N_i → ∞ and a → 0 such that V₄ = N₁ N₂ N₃ N₄ a⁴ remains fixed. Later, eventually, let V₄ → ∞ (pseudo-thermodynamic limit) !

To make sure that there are no infrared effects hidden by finite V4 !

In an asymptotically free gauge theory like QCD, the continuum limit requires to let the gauge coupling constant g_lattice → 0.

For the definition of gauge coupling constant see later !

There is no other constant controlling the lattice spacing ! The spacing a does not appear explicitly in the lattice QCD action ! The other parameters of real QCD (and lattice QCD) are the quark masses. That is all ! The lattice spacing a must be determined by “calibration”, i.e. by measuring an experimentally known dimensionful observable O which will come out as O = number/a^d. Compare this with the physical value and fix a ! Then you know all other dimensionful observables in physical units. This is an empirical way. More precisely, see later for QCD.

Calculation of energies in a Quantum Field Theory

How to get the ground state energy E₀, the free energy F, the thermal average of the energy ⟨E⟩(β) = U, also known as “internal energy” ? Let’s begin with bulk quantities !

In field theory the 4-dimensional volume V₄ = V₃ β replaces β; the pressure p or the free energy density f are obtained from the partition function with action S_E[Φ; χ]:

p V₄ = −f V₄ = log Z = log ∫ dΦ e^{−S_E[Φ;χ]}

Notice : the partition function itself is not calculable by sampling ! By parameter-differentiation (wrt. χ) of log Z,

⟨∂S_E/∂χ⟩ = ∫ dΦ (∂S_E/∂χ) e^{−S_E[Φ;χ]} / ∫ dΦ e^{−S_E[Φ;χ]} = −∂ log Z/∂χ

Measuring the resulting observable at a set of values χ′ and integrating this expectation value over χ′,

log Z(χ) = log Z(χ₀) − ∫_{χ₀}^{χ} ⟨∂S_E/∂χ′⟩ dχ′

one gets access to the EoS (equation of state) : p V₄ = log Z(χ), and can derive the thermodynamic quantities for a homogeneous system (bulk quantities, given here in dimensionless ratios):
– pressure
  p/T⁴ = (1/(V₃T³)) log Z
– energy density
  ε/T⁴ = −(1/(V₃T⁴)) ∂ log Z/∂(1/T)
The rise (vs. T) of these two quantities signals the proliferation of unconfined degrees of freedom ! (“onset of deconfinement”)

– quark number density
  n_q/T³ = (1/(V₃T³)) ∂ log Z/∂(µ_q/T)
The net baryonic density vanishes at low T for µ_q < µ_onset ! (this is the “silver blaze” problem, the sudden “baryonic onset”)

– quark number susceptibility
  χ_q/T² = (1/(V₃T³)) ∂² log Z/∂(µ_q/T)² ,   with ∂² log Z/∂(µ_q/T)² = ⟨N_q²⟩ − ⟨N_q⟩²
The peaks of this quantity signal the phase transition !

So far, only bulk quantities (interesting for the thermodynamics of QCD) have been considered. In the history of LGT, long before thermodynamics, hadron masses/bound state energies were at the center of interest (because of the “confinement problem”). The following questions have been asked : “What energy is necessary to separate the (anti)quarks in a bound state ? Does this energy rise infinitely with distance ? Or does the string break ? If so, at which distance ? ” These are local excitations, therefore they can/must be embedded by sources into the (otherwise unexplored) gluonic soup (and the fermi sea).

Consider again the QM example ! Lowest energy ? First excited state ? From the partition function in 0 + 1 dimensions,

Z(β) ≈ e^{−βE₀} × ( 1 + e^{−β(E₁−E₀)} + ... )

one might think to obtain the ground state energy (not density) from

E₀ = − lim_{β→∞} (log Z(β))/β

the so-called “Kac formula” (in the path integral context), not useful ! Better one calculates instead the limit

E₀ = lim_{β→∞} ⟨E⟩(β)

from the thermal average of the energy, which can be easily sampled.

The mass gap (which is relevant for field theory !) follows from a correlation function; it gives the mass of a localized excitation above the vacuum. Consider the original quantum mechanical expression for the two-point function of a “source operator” q̂:

⟨q_{τ₁} q_{τ₂}⟩ = (1/Z) ∫ dq ⟨q| e^{−Ĥ(β−τ₂)} q̂ e^{−Ĥ(τ₂−τ₁)} q̂ e^{−Ĥ(τ₁−0)} |q⟩
 = (1/Z) { Σ_n |⟨0|q̂|n⟩|² e^{−(E_n−E₀)(τ₂−τ₁)} } e^{−E₀β} + ....
 = (1/Z) { |⟨0|q̂|0⟩|² + |⟨0|q̂|1⟩|² e^{−(E₁−E₀)(τ₂−τ₁)} } e^{−E₀β} + ....

Actually, one has to calculate the connected two-point function

⟨q_{τ₁} q_{τ₂}⟩_connected = ⟨q_{τ₁} q_{τ₂}⟩ − ⟨q̂⟩² ∝ e^{−(τ₂−τ₁)(E₁−E₀)} (1 + ....)

It allows to extract the “mass gap” E₁ − E₀ from the slope of its exponential decay.

All this holds for QFT in 3 + 1 dimensions too, with localized sources : this gives masses of embedded “hadrons” in QCD.

G(p⃗ = 0, t) ∼ cosh( (N₄/2 − t) M )

The hyperbolic cosine reflects the periodicity in “time” t.

Example : mesonic correlator (propagator) on a time-periodic lattice.

[Figure: log₁₀ C₂(t_x) of a mesonic two-point correlator plotted versus t_x = 0 ... 16.]

• Chromo aspect: what is the color degree of freedom ? Kinematics of color ? How to formulate a gauge field theory ?
Consider first a scalar field theory with global symmetry, Φ_{x⃗,t} → Φ^α_{x⃗,t}, a complex-valued N_c-component “vector”:

S_E[Φ; κ] = (a⁴/2) Σ_{x⃗∈L³,t∈T} |Φ⃗_{x⃗,t+a} − Φ⃗_{x⃗,t}|²/a² + (a⁴/2) Σ_{x⃗∈L³,t∈T} Σ_{i=1}^{3} |Φ⃗_{x⃗+aî,t} − Φ⃗_{x⃗,t}|²/a² + a⁴ Σ_{x⃗∈L³,t∈T} V(|Φ⃗_{x⃗,t}|; κ)

The action is invariant under Φ⃗ → Φ⃗′ according to the global symmetry

Φ^α_{x⃗,t} → Φ′^α_{x⃗,t} = G^{αβ} Φ^β_{x⃗,t}

with a globally uniform N_c × N_c matrix G^{αβ}. The indices run over α, β = 1, ..., N_c. A mass term is contained in V(|Φ⃗|) as usual. N_c will become the number of “colors”.

So far, this is nothing but a spin system in 3 + 1 dimensions. Next crucial step : one has to “gauge” this symmetry, i.e. make it local, with G_{x⃗,t} allowed to be space-time dependent ! This requires to modify the usual partial derivatives (now in 4-dimensional notation µ = 1, ..., 4, x = (x₁, x₂, x₃, x₄)),

(∂_µΦ)^α_x = (Φ^α_{x+aµ̂} − Φ^α_x)/a

to the “covariant derivatives” (the principle of minimal coupling),

(D_µΦ)^α_x = ( U^{αβ}_{x,x+aµ̂} Φ^β_{x+aµ̂} − Φ^α_x )/a

with a so-called “transporter” U^{αβ}_{x,y} between neighbouring lattice sites x, y derived from a (“compensating”) gauge field A_µ(x).

Here A_µ(x) = Σ_{b=1}^{N_c²−1} A^b_µ(x) T_b is an N_c × N_c matrix field ∈ su(N_c), the Lie algebra spanned by T_b = λ_b/2 (with the Gell-Mann matrices λ_b). These are N_c² − 1 hermitean matrices. (8 gluons for N_c = 3)

A_µ is the gluon field; it is matrix-valued in the Lie algebra. In shorthand, the shortest transporters on the lattice links (“link matrices”), U_{x,x+aµ̂} = U_{x,µ}, contain the gauge field exponentiated with path ordering:

U^link_{x,µ}  := P exp( ig ∫_x^{x+aµ̂} A_µ dx_µ ) ≃ 1 + iga A_µ(x + aµ̂/2)
U^link_{x,−µ} := P exp( ig ∫_x^{x−aµ̂} A_µ dx_µ ) = U^{link,†}_{x−aµ̂,µ} ≃ 1 − iga A_µ(x − aµ̂/2)

The link matrices must be subject to gauge transformations, too:

U^link_{x,µ} → U^link′_{x,µ} = G_x U^link_{x,µ} G†_{x+aµ̂}

Notice that this, in the continuum limit, corresponds to the inhomogeneous gauge transformation for the gluon field A_µ:

A′_µ(x) = G_x A_µ(x) G†_x − (i/g₀) (∂_µ G_x) G†_x

With these covariant derivatives, the scalar action can be written (again quadratic in the derivative)

S^scalar_E[Φ] = (a⁴/2) Σ_x Σ_{µ=1}^{4} |(D_µ Φ⃗)_x|² + mass term + self interaction

Together with the gauge field action (see next) this is a suitable setting for studies of SU(2) Higgs field dynamics !
– Matter field in the fundamental representation: some physically important questions of the standard Higgs sector can be investigated in an effective theory framework (i.e. without taking the continuum limit): the temperature and the order of the electroweak phase transition are strongly related to the assumed Higgs mass M_Higgs.
– Matter field in the adjoint representation (U^link replaced by the adjoint link in D_µ): investigation of Grand Unification, monopoles etc.

These are not the only examples of Lattice Gauge Field Theory being applied outside QCD – strongly coupled QED (existence as interacting separate theory ?) – Higgs-Yukawa model (bounds for the Higgs mass, existence of a fourth generation ?) – breaking of Supersymmetry – unphysical modifications of QCD : Nc → ∞, Nf large, theories with fermions in the adjoint representation (considered as a theoretical laboratory) – High-Tc superconductivity – properties of graphene – .... and many more systems

The gauge field acquires an action, too ! The gauge action must be defined in a gauge-invariant way, in terms of closed loops. The most local loops are plaquettes:

S^gauge_E := β Σ_{x,µ<ν} [ 1 − (1/N_c) Re tr P_µν(x) ]

with four links :

P_µν := U_{x,µ} U_{x+aµ̂,ν} U_{x+aµ̂+aν̂,−µ} U_{x+aν̂,−ν}

This “Wilson action” satisfies in the continuum limit

S^gauge_E = a⁴ Σ_x (1/4) F^a_µν(x) F^a_µν(x) + O(a²) = a⁴ Σ_x (1/2) tr F_µν(x) F_µν(x) + O(a²)

β = 2N_c/g²(a) (the inverse of the coupling squared) then leads to the standard form of the action.
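As an illustration of how the plaquette and the Wilson action are assembled from link matrices, here is a minimal Python sketch (my own example, assuming SU(2) links stored as numpy matrices and a cold start in which all links are unit matrices, so the action vanishes):

import numpy as np

# Wilson plaquette action on an L^4 lattice. U has shape (L,L,L,L, 4, Nc, Nc):
# one Nc x Nc link matrix per site and direction mu.
L, Nc, beta = 4, 2, 2.3
U = np.zeros((L, L, L, L, 4, Nc, Nc), dtype=complex)
U[..., :, :] = np.eye(Nc)                 # cold start: all links = 1

def shift(x, mu):
    y = list(x); y[mu] = (y[mu] + 1) % L  # periodic boundary conditions
    return tuple(y)

def plaquette(U, x, mu, nu):
    # P_{mu nu}(x) = U_mu(x) U_nu(x+mu) U_mu(x+nu)^dagger U_nu(x)^dagger
    return (U[x][mu] @ U[shift(x, mu)][nu]
            @ U[shift(x, nu)][mu].conj().T @ U[x][nu].conj().T)

def wilson_action(U):
    S = 0.0
    for x in np.ndindex(L, L, L, L):
        for mu in range(4):
            for nu in range(mu + 1, 4):
                S += 1.0 - plaquette(U, x, mu, nu).trace().real / Nc
    return beta * S

print("S_gauge (cold start) =", wilson_action(U))   # prints 0.0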

An Abelian plaquette represents the Abelian field strength tensor: point x in the center, directions µ (vertical) and ν (horizontal). The links U_µ(x) = e^{iθ_µ(x)} = e^{iaA_µ(x)} are located at
x − µ̂/2, pointing forward in ν direction : U_ν(x − µ̂/2)
x + ν̂/2, pointing upward in µ direction : U_µ(x + ν̂/2)
x + µ̂/2, pointing backward in −ν direction : U*_ν(x + µ̂/2)
x − ν̂/2, pointing downward in −µ direction : U*_µ(x − ν̂/2)

Consider the counter-clockwise plaquette around x:

U_νµ(x) = exp( iθ_ν(x − µ̂/2) + iθ_µ(x + ν̂/2) − iθ_ν(x + µ̂/2) − iθ_µ(x − ν̂/2) )
 = exp( i(θ_ν(x) − (a/2)∂_µθ_ν(x)) + i(θ_µ(x) + (a/2)∂_νθ_µ(x)) ) × exp( −i(θ_ν(x) + (a/2)∂_µθ_ν(x)) − i(θ_µ(x) − (a/2)∂_νθ_µ(x)) )
 = exp( ia(∂_νθ_µ(x) − ∂_µθ_ν(x)) ) = exp( ia² F_νµ(x) )

S_gauge = Σ_{x,ν<µ} [ 1 − cos(a² F_νµ(x)) ] ≈ (1/2) Σ_{x,ν<µ} a⁴ F_νµ(x) F_νµ(x)

A non-Abelian plaquette represents the non-Abelian field strength tensor : point x in the center, directions µ (vertical) and ν (horizontal). The link matrices U_µ(x) = e^{iagA_µ(x)} (with A_µ ∈ su(N)) are located at
x − µ̂/2, pointing forward in ν direction : U_ν(x − µ̂/2)
x + ν̂/2, pointing upward in µ direction : U_µ(x + ν̂/2)
x + µ̂/2, pointing backward in −ν direction : U†_ν(x + µ̂/2)
x − ν̂/2, pointing downward in −µ direction : U†_µ(x − ν̂/2)

Consider the counter-clockwise plaquette around x (using the Baker-Campbell-Hausdorff formula):

U_νµ(x) = U_ν(x − µ̂/2) U_µ(x + ν̂/2) U†_ν(x + µ̂/2) U†_µ(x − ν̂/2)
 = exp( iag( +A_ν + A_µ + (a/2)(∂_νA_µ − ∂_µA_ν) + (iag/2)[A_ν, A_µ] ) + O(a²) ) × exp( iag( −A_ν − A_µ + (a/2)(∂_νA_µ − ∂_µA_ν) + (iag/2)[A_ν, A_µ] ) + O(a²) )

U_νµ(x) = exp( ia²g( ∂_νA_µ − ∂_µA_ν + ig[A_ν, A_µ] ) + O(a²) ) = exp( ia²g F_νµ(x) )

S_gauge = (2N_c/g²) Σ_{x,ν<µ} [ 1 − (1/N_c) Re Tr exp( ia²g F_νµ(x) ) ] ≈ (1/2) Σ_{x,c,ν<µ} a⁴ F^c_νµ(x) F^c_νµ(x)

The field strength contains the gluon field A_µ in a non-linear way ! With the structure coefficients f^{abc} of the Lie group it can be expressed in color-component-wise notation:

F^c_νµ(x) = ∂_ν A^c_µ − ∂_µ A^c_ν + igf^{cab} A^a_ν(x) A^b_µ(x)

There are possibilities to improve the lattice gauge action:

Aim : suppression of O(a)-corrections in final results – add larger loop contributions to the action (this is similar to the matter action, where extended “templates” would have to be included) – add contributions from loops taken in higher representations

Examples of gauge-invariant expressions on the lattice

[Figure: (a), (b) examples of gauge-invariant expressions on the lattice, built as closed loops of link variables such as U_x(x,y), U_y(x+4,y), U_x(x,y+3), U_y(x,y).]

The fermion action

The fermionic degrees of freedom, the quarks, are also defined on the lattice sites x; α is the color index, σ is the spinor index :

ψ^α_{x,σ}   (coming eventually in several flavors)

For fermions the covariant derivative (here the two-sided derivative)

(D_µψ)_{x,σ} = (1/(2a)) ( U_{x,µ} ψ_{x+aµ̂} − U_{x,−µ} ψ_{x−aµ̂} )

enters the action linearly (the γ_µ matrices act on the spinor index):

S^quark_E := a⁴ Σ_x ψ̄(x) ( γ^µ D_µ + m ) ψ(x)

This linearity is the reason for the doubling problem. Moreover, quark fields are Grassmann fields. This means : they are anticommuting, not representable by usual numbers. Fortunately, the action is bilinear (i.e. in principle integrable).

As in the QM case one is interested in a Euclidean formulation of Lattice QCD. This is achieved by performing a Wick rotation,

i x₄ → x₀ ,   x_i → x_i

In the result the action becomes real in the exponential weight factor, compared to the Minkowski space path integral:

e^{i S_M} = e^{i a⁴ Σ_x L}  →  e^{−S_E} = e^{−a⁴ Σ_x L_E}

In terms of links, any correlation function in QCD can therefore be obtained in the form of a well-defined path integral over U, ψ, ψ̄ :

⟨0|O(...)|0⟩ := (1/Z) ∫ [dU][dψ][dψ̄] O(...) e^{−S_E[U,ψ,ψ̄]}

where (in QCD there are no scalars !) S_E = S^gauge_E[U] + S^quark_E[U, ψ, ψ̄].

More about the integration measure

– For the gauge field : [dU] is the Haar measure on the Lie group. Its abstract properties:
∫_G [dU] = 1   (normalization)
∫_G f(U)[dU] = ∫_G f(VU)[dU] = ∫_G f(UV)[dU]   (left/right shift invariance)
∫_G f(U)[dU] = ∫_G f(U^{−1})[dU]   (inversion invariance)

If written in terms of Lie group parameters : near a group element U = e^{iω^c T_c} one has first defined a metric

g_{ab} = tr( (∂U†/∂ω^a) (∂U/∂ω^b) )

and then the integration measure

[dU] = √(det g) Π_a dω^a .

Concretely, for the simplest examples:
for U(1) : U = e^{iθ}, then [dU] = dθ/(2π)   (uniform measure on the circle)
for SU(2) : U = B⁰ + iB⃗ · σ⃗ with (B⁰)² + B⃗·B⃗ = 1, then
[dU] = (1/π²) δ( (B⁰)² + B⃗·B⃗ − 1 ) dB⁰ dB¹ dB² dB³   (uniform measure on the sphere S³)

– Remarks : 1. The generic form is important for the perturbative treatment. 2. Then also gauge fixing is necessary. For both aspects see the book of H. J. Rothe. 3. In numerical simulations, because of shift invariance, it is necessary to know the measure only in the neighborhood of the unit element of the gauge group when formulating the so-called “heat bath algorithm” (see later).
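A small Python sketch (my own illustration) of the SU(2) case just described: Haar-distributed matrices U = B⁰ + iB⃗·σ⃗ are obtained from points distributed uniformly on the sphere S³ (a normalized 4-dimensional Gaussian vector is uniform on S³):

import numpy as np

rng = np.random.default_rng(1)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),       # Pauli matrices
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def random_su2():
    B = rng.normal(size=4)
    B /= np.linalg.norm(B)                 # uniform point on S^3
    U = B[0] * np.eye(2, dtype=complex)
    for k in range(3):
        U += 1j * B[k + 1] * sigma[k]      # U = B0 + i B.sigma
    return U

U = random_su2()
print(np.allclose(U @ U.conj().T, np.eye(2)))   # unitarity
print(np.isclose(np.linalg.det(U), 1.0))        # det U = 1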

What about the integration rules for fermions ? For the fermion field, the measure is only formally defined by the Grassmann calculus, in particular the integration rules. Here are the first, most important fermionic integrals :

∫ [dψ][dψ̄] exp( ψ̄ M[U] ψ ) = det M[U]
∫ [dψ][dψ̄] ψ_x ψ̄_y exp( ψ̄ M[U] ψ ) = ( M[U]^{−1} )_{xy} det M[U]
etc.

This is enough to know, since the fermion action is bilinear in ψ¯ and ψ. Even if the determinant is neglected (so-called “quenched gauge theory”), the symbols ψx and ψ¯y occurring in operators representing physical quantities are substituted by the corresponding fermionic propagators M [U ]−1 xy in all possible pairings (including sign factors from anti-commutativity) . Then there is no “sea”. Then one speaks of the “valence approximation”. The quarks’ backreaction on the gluon field configurations is neglected.

Since the fermion action is bilinear in ψ̄ and ψ,

S^quark_E[U, ψ, ψ̄] = Σ_{x,y} ψ̄_x M[U]_{xy} ψ_y

(symbolically, with the lattice Dirac operator M[U]), the innermost integral can be formally performed (several times for N_f flavors):

⟨0|O(...)|0⟩ := (1/Z) ∫ [dU] O(...) e^{−S^gauge_E[U]} ( det M[U] )^{N_f}

For many years, to take the very (!) nonlocal determinantal factor into account was computationally prohibitive. As long as it is neglected, one remains within the quenched approximation (“gluodynamics”). If quantities with quarks are calculated, these quarks are “merely valence quarks”. Once this approximation is abandoned, one speaks of full QCD, nonquenched QCD or QCD with dynamical quarks. The quarks’ backreaction on the gluon field configurations is then included.

• Simulation aspect: quenched QCD : – In the quenched approximation, the gluonic field (individual links) can easily be simulated by very effective Markoff chain Monte Carlo algorithms. – The plaquette structure of the action makes it possible to easily define the local field acting on each link, and the measure factor for each link in interaction with the neighboring links can either be directly sampled by the heatbath algorithm or dealt with by the Metropolis method. – Everybody can simulate this on her/his laptop.

With the Wilson action, through the dependence of the action on a single link U, the probability distribution for this link is of the form

P(U)[dU] ∼ exp( (β/N_c) Re tr(U × S) ) [dU]

S is the sum of the six (2 × (d − 1)) incomplete plaquettes (“staples”) which, multiplied by U, form that part of the action that depends on U. S is a sum of SU(N) matrices. For the case of SU(2) one can see that S = √(det S) V where V ∈ SU(2). Then, for W = U × V ∈ SU(2), the probability distribution is

P(U)[dU] = R(W)[dW] ∼ exp( (β/2) √(det S) Re tr W ) [dW]

From the explicit form of R(W) one can sample an SU(2) matrix W (W ≈ 1 for β large) and obtain U = W × V†.

Higher groups are updated subsequently in all SU(2) subgroups ! The width of the distribution R(W) is not uniform over the whole lattice ! It depends on the local microscopic “force” S that acts on U.

This is the “heatbath algorithm”. The “heat bath” is not the local environment, but a reservoir that provides and absorbs the change of action resulting from the update.

Alternative : “Metropolis algorithm”. Here a fixed-width distribution R(W) provides (prefabricated or on demand) random matrices W for a random shift U → W × U = U′. This shift is accepted or not depending on the change of action:

   if S(U′) < S(U) then
      U = U′
   else if exp( S(U) − S(U′) ) > ξ   (here, 0 < ξ < 1 is a random number)
      U = U′
   endif

Otherwise the link U is not updated !
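A minimal Python sketch of this Metropolis logic (my own illustration for compact U(1) in two dimensions, where the links are simple phases; the accept/reject step carries over link by link to SU(N) matrices):

import numpy as np

# Metropolis update of compact U(1) gauge theory on an L x L lattice.
# Links are phases theta[x, y, mu]; S = beta * sum_x (1 - cos(theta_P)).
rng = np.random.default_rng(2)
L, beta, eps, n_sweeps = 8, 2.0, 1.0, 100
theta = np.zeros((L, L, 2))

def plaq_angle(theta, x, y):
    # plaquette angle at site (x, y) in the (0,1) plane
    return (theta[x, y, 0] + theta[(x + 1) % L, y, 1]
            - theta[x, (y + 1) % L, 0] - theta[x, y, 1])

def action(theta):
    # recomputing the full action is wasteful; a real code uses only the
    # two plaquettes touching the updated link
    return beta * sum(1.0 - np.cos(plaq_angle(theta, x, y))
                      for x in range(L) for y in range(L))

for sweep in range(n_sweeps):
    for x in range(L):
        for y in range(L):
            for mu in range(2):
                old, S_old = theta[x, y, mu], action(theta)
                theta[x, y, mu] = old + eps * rng.uniform(-1, 1)  # random shift
                dS = action(theta) - S_old
                # accept with probability min(1, exp(-dS)), else restore
                if dS > 0 and rng.random() >= np.exp(-dS):
                    theta[x, y, mu] = old

avg_plaq = np.mean([np.cos(plaq_angle(theta, x, y))
                    for x in range(L) for y in range(L)])
# should approach the exact 2d U(1) value I1(beta)/I0(beta) ~ 0.70 at beta = 2
print("average plaquette =", avg_plaq)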

nonquenched QCD : – For full QCD, the method of Hamiltonian equations of motion is used to generate not too small moves in configuration space of the whole configuration under the influence of the bosonic “force” (from the plaquettes) and an additional fermionic “force” (derived from the determinant). – The evaluation of the fermionic “force” requires a huge number of inversions of fermionic matrices (like our M ). This is the bottleneck of each fermionic simulation, where a very low quark mass becomes a deadly serious obstacle ! – At the end of such “trajectories” the new configuration is accepted (or not !) according to a Metropolis criterion. This “Hybrid Monte Carlo algorithm” (a hybrid of molecular dynamics and Monte Carlo decisions) is a prototype which lays the ground for all algorithms for fully dynamical QCD.

The cost factor (estimate by Ukawa (2001)): the CPU flop rate required for finishing a certain project in a reasonable time is

CPU rate ≈ F × (m_ρ/m_π)^{z_π} × (L/a)^{z_L} × (r₀/a)^{z_a}

with F ≈ 6 · 10⁶ flops, z_π ≈ 6, z_L ≈ 5, z_a ≈ 2.
m_π/m_ρ is an a posteriori quality measure depending on the quark mass !

So far, simulations only for T ≠ 0 in equilibrium (including T = 0 as a limit !). What about real-time simulations ?
– The (classical, not Euclidean) equations of motion of the lattice degrees of freedom can also be solved in Minkowski space.
– So far only classical event-by-event simulations have been performed.
– Eventually, an estimate of quantum corrections is possible (Wigner function simulations).
– This is very important for the description of the early stage of Heavy Ion Collisions, before the hydrodynamical regime sets in. This defines the “Color Glass Condensate” (McLerran/Venugopalan). Simulations by Fukushima and Gelis, Berges and not many others !

Applications/Questions to be addressed :
– which instabilities occur in the non-Abelian plasma ?
– how does fast equilibration to local equilibrium occur (as observed at RHIC and LHC) ?
– how does turbulence set in ?

3. What characterizes confinement ? The string tension !

The confinement problem was the main concern when lattice gauge theory was proposed by Wilson in 1974. Imagine a charge pair is inserted in the fluctuating gauge field, and compare this situation with the pure gauge field. Consider a process of
• pair production at ( (X⃗₁ + X⃗₂)/2, t = 0 ),
• pair separation to A = (X⃗₁, t = 0) and B = (X⃗₂, t = 0), increasing the distance to R = |X⃗₁ − X⃗₂|,
• simultaneous propagation between t = 0 and t = T at fixed positions X⃗₁ and X⃗₂, to A′ = (X⃗₁, t = T) and B′ = (X⃗₂, t = T),
• finally annihilation at ( (X⃗₁ + X⃗₂)/2, t = T ).
This is represented by a closed “Wilson loop” A → A′ → B′ → B → A,

[Figure: a rectangular Wilson loop with spatial side r and temporal side t; caption “Wilson loop: (screening) masses”.]

... a rectangular loop with side lengths R (spatial) and T (temporal). Charges moving from A to B are represented by long “transporters”

U_BA := P exp( ig ∫_A^B A_µ dx_µ )

At zero temperature, the Wilson loop expectation value

⟨W⟩ = ⟨ tr( U_{AA′} U_{A′B′} U_{B′B} U_{BA} ) ⟩ = Z(with quark pair) / Z(without quark pair)

should represent the exponential of the energy difference ∆E × T. With a linearly rising quark-antiquark potential V = σ R, one expects the area law with string tension σ :

⟨W⟩ ∝ e^{−σ R T}
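One common practical recipe for extracting σ from measured loop averages, not spelled out in the lecture, is the Creutz ratio; the sketch below (my own illustration) applies it to synthetic data that obey an exact area-plus-perimeter law:

import numpy as np

# Creutz ratio chi(R,T) = -log[ W(R,T) W(R-1,T-1) / ( W(R,T-1) W(R-1,T) ) ],
# which cancels perimeter and constant terms and isolates sigma*a^2.
# Synthetic "data" with sigma*a^2 = 0.05 are used here instead of real loops.
sigma, p, c = 0.05, 0.12, 0.3
R = T = np.arange(1, 8)
W = np.exp(-sigma * np.outer(R, T) - p * np.add.outer(R, T) - c)

def creutz_ratio(W, r, t):
    # indices shifted by 1 because W[0, 0] corresponds to R = T = 1
    return -np.log(W[r - 1, t - 1] * W[r - 2, t - 2]
                   / (W[r - 1, t - 2] * W[r - 2, t - 1]))

for r in range(2, 8):
    print(r, creutz_ratio(W, r, r))   # each value reproduces sigma*a^2 = 0.05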

At strong coupling, everything reduces to the counting of plaquettes covering the operators, the so-called tiling. If β is small, one expands the factor e^{−S^gauge_E[U]} in powers of β. Integrated together with the Wilson loop, only terms with a complete tiling of the Wilson loop by plaquettes give a non-vanishing integral under the group integration ∫ dU to be performed for all links.

Minimal tiling gives the leading term

⟨W(area)⟩ ∝ β^{area/a²} ,    σ ∼ −(1/a²) log β

(A) Minimum tiling of a 6x6 Wilson loop.

(B) Tiling of one face of a plaquette-plaquette correlation function.

At the same time the correlator between two remote plaquettes, the connected glueball propagator, goes like

G_glueball(distance) ∝ β^{4 · distance/a}

This establishes a relation between string tension and glueball mass,

m_glueball ∼ −(4/a) log β

which, however, is valid only in the strong coupling region. According to this approximation, all compact groups would give rise to a confining gauge theory. This is a paradox that one should evade ! The best-known counter-example is electrodynamics.

Explanation : The physical phase of electrodynamics is separated from the strong coupling region by a (zero temperature) phase transition at β ≈ 1.0 . Confinement at weak lattice coupling (in the continuum limit) is the crucial issue to prove, even if the problem is restricted to heavy, spinless charges (this is what the Wilson loop actually describes). For SU (3) gauge theory, strong coupling region and continuum limit are connected without phase transition ! This can be checked in a space of extended actions (fundamental and adjoint action) where the plaquette evaluated in both representations enters the action.

[Figure: phase structure in the plane of fundamental and adjoint couplings (β_F, β_A), indicating the Z(N), SO(N), SU(2), SU(3) and SU(4) limits.]

How does the heavy quark-antiquark potential obtained from Wilson loops look ? What is the influence of dynamical quarks ? (“String breaking”) The (strictly speaking unphysical) “heavy quark potential” is still an important tool for calibration (scale setting) and for comparing various simulations, including unphysical ones with gauge groups like SU(2) or SU(10) or G2 (the exceptional group) etc. The Sommer scale parameter r₀ (Rainer Sommer) replaces the string tension σ = (440 MeV)² for scale setting purposes (earlier preferred, taken from the slope of the Regge trajectories) !

r² dV/dr |_{r=r₀} = 1.65

Its presently agreed value is r₀ = 0.468(4) fm. This allows to characterize a given simulation by giving dimensionful numbers proper units, by specifying dimensionless products/ratios like
• lattice spacing a/r₀
• π mass m_π r₀
• ρ mass m_ρ r₀
• transition temperature T_c r₀
• energy density ε r₀⁴
etc.

Once more about scale setting: if universal scale setting is possible anywhere, then in the limit β → ∞, approaching the continuum limit !

Then, for example, σ/m²_glueball → const means “Absolute Scaling”. Assume for a physical observable Ω the following behavior in the continuum limit:

lim_{a→0} Ω(g₀(a), a) = Ω_contin

Then the following renormalization group equation must hold:

dΩ/d log a = ( ∂/∂ log a − β(g₀) ∂/∂g₀ ) Ω(g₀, a) = 0

Here β(g₀) = −∂g₀/∂ log a is known from perturbation theory,

β(g)/g³ = −β₀ − β₁ g² + O(g⁴)

with

β₀ = (1/(4π)²) ( (11/3) N_c − (2/3) N_f )
β₁ = (1/(4π)⁴) ( (34/3) N_c² − (10/3) N_c N_f − ((N_c² − 1)/N_c) N_f )

being coefficients independent of the renormalization scheme. This dictates in the continuum limit the relation defining g₀(a):

a Λ_lattice = ( β₀ g₀² )^{−β₁/(2β₀²)} exp( −1/(2β₀g₀²) ) × ( 1 + O(g₀²) )     (∗)

1/a → ∞ for g₀ → 0 or β̄ → ∞ expresses “Asymptotic Freedom”, corresponding to a second order phase transition at g₀ = 0.

If simultaneously a → 0 and β̄ = 2N_c/g₀² → ∞ according to (∗), then ratios of different quantities of the same dimensionality remain constant ! This means “Asymptotic Scaling”. Λ_lattice characterizes the lattice set-up (the action etc.). Corrections beyond the two-loop relation (∗) are possible if scaling of observables is enforced !

4. How is confinement and deconfinement characterized at T ≠ 0 ?

The time direction takes a special role; Lorentz symmetry is broken. Instead of the Wilson loop, a correlator of Polyakov loops plays the role of defining the interaction between heavy spinless charges. The Polyakov loop

P(x⃗) = (1/N_c) tr Π_{x₀=1}^{N₄} U_{(x⃗,x₀),4}

is built of only timelike links. The Polyakov correlator

⟨P(x⃗) P(y⃗)*⟩ ∝ e^{−F_{QQ̄}(R,T)/T}

is a function of the distance R = |x⃗ − y⃗| and of the temperature.

F_{QQ̄}(R) is a color-averaged “excess of free energy” due to a QQ̄ pair. Coulomb gauge-fixing would allow to extract the excess free energy separately in the singlet and octet configurations of the QQ̄ pair. If confinement persists (for T < T_deconf),

lim_{R→∞} F_{QQ̄}(R) = ∞

This requires that the average of a single Polyakov loop vanishes:

L = (1/V₃) Σ_{x⃗} ⟨P(x⃗)⟩ = 0
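A minimal Python sketch (my own illustration) of how L is measured on a given configuration of links; here a cold configuration is used, for which the result is trivially 1:

import numpy as np

# Volume-averaged Polyakov loop on an Ns^3 x Nt lattice of SU(2) links.
Ns, Nt, Nc = 4, 4, 2
U = np.zeros((Ns, Ns, Ns, Nt, 4, Nc, Nc), dtype=complex)
U[..., :, :] = np.eye(Nc)                  # cold start: all links = 1

def polyakov_loop(U):
    total = 0.0
    for x, y, z in np.ndindex(Ns, Ns, Ns):
        P = np.eye(Nc, dtype=complex)
        for t in range(Nt):
            P = P @ U[x, y, z, t, 3]       # product of time-like links (mu = 3)
        total += P.trace().real / Nc
    return total / Ns**3                   # spatial volume average

print("L =", polyakov_loop(U))   # 1.0 here; ~0 on confined-phase configurations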

If deconfinement has set in at T > T_deconf, this (thermal and volume) average becomes non-vanishing. Excess free energy (averaged and singlet) in deconfinement with dynamical quarks included in the simulation (2+1 flavors) : (Petreczky)

[Figures (Petreczky): the color-averaged free energy F(r,T) [GeV] and the singlet free energy F₁(r,T) [GeV] versus r [fm], for temperatures T = 175, 189, 200, 217, 234, 338, 421, 546, 686 MeV.]

While the original action (without quarks) possesses a so-called Z(N_c) center symmetry with respect to the replacements U_{x⃗,t₀,4} → U′_{x⃗,t₀,4} = z U_{x⃗,t₀,4} for a selected time slice t₀ and any z ∈ Z(N_c), this symmetry would be broken by a non-vanishing expectation value L of the Polyakov loop. Therefore L is called the “order parameter of deconfinement”. Evidence for deconfinement has been the first genuine prediction of numerical lattice simulations almost applicable to the real world : McLerran/Svetitsky, Kuti/Polonyi/Szlachanyi (both 1981). Although not vanishing in the low-temperature phase due to the presence of dynamical quarks, the qualitative behavior of the Polyakov loop remains the same for full QCD. (see Figure)

[Figure: the renormalized Polyakov loop L_ren(T) versus T [MeV], from HISQ/tree simulations at N_τ = 6, 8, 12 and a stout continuum estimate, compared with pure SU(3) and SU(2) gauge theory.]

Spacelike Wilson loops (extending in two spacelike directions) remain confining in the “deconfined phase”, i.e. satisfy an area law with a string tension depending on temperature, σ_spat(T). This provides evidence for strong coupling in the magnetic sector even at the highest temperatures. The square root of the spatial string tension σ_spat rises approximately proportional to the temperature at T > T_deconf.
→ Figure showing T/√(σ_spat(T))

[Figure: T/√(σ_s(T)) versus T/T₀ for N_τ = 4, 6, 8, compared with an AdS/CFT curve.]

Another order parameter for the thermal transition is the chiral condensate,

⟨ψ̄ψ⟩ ∝ ⟨ tr M^{−1} ⟩

which characterizes the breaking of chiral symmetry. It is non-vanishing in the chirally broken (low-temperature) phase and vanishes in the chirally restored (high-temperature) phase. Under normal conditions (not N_c → ∞, no baryonic µ_B ≠ 0 etc.) both transitions (with fundamental quarks only !) are coupled to each other.
→ Figure showing chiral condensate and Polyakov loop
→ Figure showing chiral condensate and light quark number fluctuation
The latter (a susceptibility !) increases in the deconfined phase, not like other susceptibilities, which show a peak at the phase transition !

[Figures: the renormalized Polyakov loop L_ren and the subtracted chiral condensate ⟨ψ̄ψ⟩_sub versus T [MeV]; the light quark number susceptibility χ_l/T² and ⟨ψ̄ψ⟩_sub versus T [MeV] (both at N_τ = 8, f_K scale).]

Equation of state (showing the liberation of d.o.f. due to deconfinement). As indicated earlier in a more general context, in QCD one gets the pressure from

p(T)/T⁴ − p(T₀)/T₀⁴ = ∫_{T₀}^{T} dT′ ( ε(T′) − 3 p(T′) ) / T′⁵

The integrand contains the “interaction measure” or “trace anomaly” that can be expressed by the expectation value of the derivative of (all parts of) the action with respect to the lattice spacing a.
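A small Python sketch of this “integral method” (my own illustration, using a made-up smooth parametrization of the trace anomaly instead of real lattice data):

import numpy as np

T = np.linspace(130.0, 500.0, 400)                               # MeV
trace_anomaly = 4.0 * np.exp(-0.5 * ((T - 200.0) / 60.0) ** 2)   # toy (e-3p)/T^4

integrand = trace_anomaly / T            # (e-3p)/T^5 expressed via (e-3p)/T^4
p_over_T4 = np.zeros_like(T)
for i in range(1, len(T)):               # cumulative trapezoidal integration
    p_over_T4[i] = p_over_T4[i - 1] + 0.5 * (integrand[i] + integrand[i - 1]) * (T[i] - T[i - 1])
# p(T0)/T0^4 was set to 0 at the lowest temperature (a common approximation)
e_over_T4 = trace_anomaly + 3.0 * p_over_T4   # energy density: e = (e-3p) + 3p
print("p/T^4 and e/T^4 at T = 500 MeV:", p_over_T4[-1], e_over_T4[-1])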

→ Figure showing the trace anomaly as a function of T, with m_l = m_s/20 and different fermion actions, compared with the HRG (hadron resonance gas) model
→ Figure showing pressure and energy density as functions of temperature

[Figures: the trace anomaly (ε−3p)/T⁴ versus T [MeV] for HISQ (N_τ = 6, 8), asqtad (N_τ = 8, 12), p4 (N_τ = 8) and a stout continuum estimate, compared with the HRG parametrization s95p-v1; the energy density ε/T⁴ and 3p/T⁴ versus T [MeV] (and T r₀) for the p4 and asqtad actions, with the Stefan-Boltzmann value ε_SB/T⁴ indicated.]

Columbia plot The real nature of the phase transition depends on the quark masses mu = md (abscissa) and ms (ordinate in the Columbia plot), if quark masses could be varied at will. This is a physical, not a simulation question ! This idealized situation is targeted in a “2 + 1 flavor simulation”. More realistic would be a “1 + 1 + 1 flavor simulation” (mu 6= md). Nowadays “2 + 1 + 1 flavor simulations” (targeting light plus strange plus charmed quarks) have become possible. For 2 +1 flavors see Figure

[Figure: the Columbia plot in the (m_{u,d}, m_s) plane: first-order regions near the quenched corner (m_PS ≈ 3.0 GeV) and near the chiral corner (boundaries around ≈ 5, 70, 90 MeV), separated from the crossover region containing the physical point by Z(2) second-order lines, with an O(4) line and the tricritical point m_s^{TCP} on the m_{u,d} = 0 axis.]

5. Which are the difficulties to include quarks as active partners ?

Quarks appear in a twofold role:
• sea quarks : Dynamical quarks need to be considered in the simulation algorithm; the mass plays a crucial role, hampering the performance. (see the cost factor !!!)
• valence quarks: The quark propagators are used to construct operators, for example the propagators entering the hadron propagators. Here the chiral properties of the quarks play a crucial role; bad chiral properties may spoil the physics outcome in principle.

Why is chiral symmetry so important ? • Masslessness of quarks is only approximately realized in nature ! • Dynamical breaking of chiral symmetry (in addition to the explicit breaking by mu 6= 0, md 6= 0) is crucial for hadron physics. • It gives mass to the nucleons while making the PS mesons nearly massless, apart from the η ′. • Topological effects take care of the η ′ mass. • This is accomplished by the formation of “constituent quarks”. • The starting Lagrangian should be (apart from masses) chirally symmetric and contain current quarks. • This is difficult (and only approximately) to achieve within a discretized space-time.

Unwanted problems and their solution

• Fermion doubling
From the covariant two-sided derivative (with the gauge field switched off) the propagator in momentum space is

S(p) = a / ( i γ^µ sin(p_µ a) + a m )

which has 16 zeros in the Brillouin zone in the limit m → 0, to be confronted with the single zero of the continuum propagator

S(p) = 1 / ( i γ^µ p_µ + m )

This trouble is known as the doubling problem of the “naive lattice fermions”.
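A trivial Python check (my own illustration) that counts these zeros, and anticipates how the Wilson term of the next item lifts all doublers except p = 0:

import numpy as np

# The denominator i*gamma^mu*sin(p_mu a) vanishes whenever all sin(p_mu a) = 0,
# i.e. at the 2^4 corners of the Brillouin zone with p_mu a = 0 or pi.
# The Wilson term adds a mass (r/a)*sum_mu (1 - cos(p_mu a)) at each corner.
r = 1.0
corners = [0.0, np.pi]
n_zeros = 0
for idx in np.ndindex(2, 2, 2, 2):
    pa = np.array([corners[i] for i in idx])
    if np.allclose(np.sin(pa), 0.0):
        n_zeros += 1
        wilson_mass = r * np.sum(1.0 - np.cos(pa))   # in units of 1/a
        print(pa, "extra Wilson mass =", wilson_mass) # nonzero except at p = 0
print("number of doublers:", n_zeros)                 # 16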

• Wilson’s “solution”, sacrificing chiral symmetry
To avoid the proliferation of zero modes Wilson proposed to add to the action a term of order a of the form

−a⁵ (r/2) Σ_{x,µ} ψ̄(x) (1/a²) [ U_µ(x) ψ(x + aµ̂) − 2 ψ(x) + U_{−µ}(x) ψ(x − aµ̂) ]

This term explicitly avoids doubling but breaks chiral symmetry. → “Wilson fermions” have no chiral symmetry.
• O(a) improvement for Wilson fermions
When correlation functions are computed, one is striving for the limit a → 0. It is desirable to improve the convergence of correlation functions from order a to order a². This can be achieved by adding to the action another O(a) term to compensate for the discretization errors. The “clover” term serves this purpose (a Pauli term) :

−a⁵ (c_SW/4) Σ_{x,µ>ν} ψ̄(x) γ^µ γ^ν G_µν(x) ψ(x)

G_µν is a discretized version of the chromo-electromagnetic field strength tensor, and the constant c_SW must be fixed otherwise: c_SW must be found as a function of β (i.e. of the lattice spacing). With these O(a) corrections, the quark action can be written as

S^quark_E[U, ψ, ψ̄] = (a³/(2κ)) Σ_{x,y} ψ̄(x) Q_{xy}[U] ψ(y)

where (with β = 6/g²(a), κ = 1/(2ma + 8r) [bare mass], c_SW = 1 at tree level)

Q_{xy}[U] = δ_{xy} − κ Σ_µ [ (r − γ^µ) U_µ(x) δ_{x,y−µ̂} + (r + γ^µ) U_{−µ}(x) δ_{x,y+µ̂} ] − (κ c_SW/2) Σ_{µ>ν} γ^µ γ^ν G_µν(x) δ_{xy}

These are the “clover-improved Wilson fermions” (Sheikholeslami-Wohlert action).
• Twisted mass Wilson fermions
Another way to get O(a²) improvement without a clover term ! It works for pairs of flavors; the kinematics is like that of Wilson fermions. Therefore it is restricted to N_f = 2 or N_f = 2 + 2 flavors. With ψ = (ψ_u, ψ_d)^T,

S^quark_E[U, ψ, ψ̄] = a⁴ Σ_{x,y} ψ̄(x) [ D_W + m₀ + i µ₀ γ₅ τ³_flavor ]_{x,y} ψ(y)

Automatic O(a) improvement with maximal twist if κ → κ_c(β). Mainly concentrated in two collaborations :
– European Twisted Mass Collaboration (ETM Collaboration)
– twisted mass finite temperature (tmfT) Collaboration

• Staggered (“Kogut-Susskind”) fermions
Another way to circumvent the proliferation of species ! (Kogut and Susskind 1975)
– Use the naive discretization and diagonalize the action with respect to the spinor degrees of freedom.
– Neglect three of the four degenerate Dirac components → keep only one one-component “spinor”.
– Attribute now the 16 fermionic degrees of freedom localized on each 2⁴ hypercube (at sites r_µ = 2n_µ + ρ_µ, in a lattice decomposition into such elementary 2⁴ cells) to four tastes with four Dirac indices each.

M^{αβ}_{x,y} = Σ_µ (1/2) η_µ(x) ( U^{αβ}_{x,µ} δ_{x+µ̂,y} − U^{αβ}_{x,−µ} δ_{x−µ̂,y} )

with a simple “staggered factor”

η_µ(x) = (−1)^{x₁+x₂+...+x_{µ−1}}    (η₁(x) = 1)

Chiral symmetry is partly preserved, the flavor assignment is a bit obscured. Four degenerate flavors (= tastes) are described. If one wants to describe N_f different flavors, one has to represent each of them in the Monte Carlo weight by a factor

( det M )^{1/4}

This is the “rooting trick”, still under debate. Actually, taste symmetry is broken. This can be ameliorated by smoothing the links. Simulations with staggered fermions are very fast. They are mostly used in thermodynamics: MILC and HotQCD (Bielefeld/RIKEN-BNL) collaborations. Improvement of discretization effects requires going beyond a single-link action: p4 action, HISQ action etc.

• Chirally improved fermion actions
Chiral symmetry would require

γ₅ D_Latt + D_Latt γ₅ = 0

This is impossible on a regular lattice. The best imaginable way to respect chirality approximately is

γ₅ D_Latt + D_Latt γ₅ = a D_Latt γ₅ D_Latt

This is the famous Ginsparg-Wilson (1982) relation. Forgotten for 16 years ! Rediscovered by P. Hasenfratz. One can construct Dirac operators more extended than simple links such that this relation is fulfilled (Gattringer):

D^{αβ}_{Latt,x,y} = Σ_path w(path) U(path)^{αβ}_{x,y} Γ(path)

with Γ built of γ-matrices.

• Overlap fermions
Use a specific form of solution of the GW relation with an arbitrary input kernel (for example the Wilson Dirac operator D_W). The Neuberger overlap operator is such an exact solution; it replaces

D_W → D_N = (1/2) ( 1 + A (A†A)^{−1/2} ) ,   with A = 1 + s − D_W

Properties/problems:
– (A†A)^{−1/2} is numerically involved (a polynomial approximation is needed)
– det D_N is hard to compute; a full QCD simulation would need that.
– The spectrum is exactly as it should be: pairwise non-zero eigenvalues,
– |Q| zero-modes of the same chirality (gives the topological charge Q).
– Discretization errors are still unimproved.
Remarks: Ideally suited for chirally invariant valence quarks (whatever sea quarks are used for the simulation) ! Ideally suited for topology and chiral properties of the vacuum ! No massive dynamical QCD simulations so far, methodical work is still going on. Equivalent to domain wall fermions.

• Domain wall fermions
Extension to 4+1 dimensions (additional dimension s, also discretized):

D_Latt = (1/2) [ γ₅ (∂*_s + ∂_s) − a_s ∂*_s ∂_s ] + D_W − ρ/a

with 0 < ρ < 2 and boundary conditions P₊ψ(s = 0, x) = P₋ψ((N_s + 1)a_s, x) = 0, where P± = (1 ± γ₅)/2, with N_s → ∞ and a_s → 0.
Kaplan 1992, Shamir 1993. Used by HotQCD and the Columbia University group.

6. How hadron masses are calculated ? Two-point functions have to be evaluated, looking for an exponential slope. • Source/sink have to be inserted at two Euclidean times 0 and t (if extended sources are used, they have to be constructed with gauge links in order to get the object gauge invariant), • quark propagators (for each given gauge field configuration) have to be inserted, • all has to be contracted into a single “loop”, • this object is averaged over an ensemble of gauge configurations. For a meson this is represented in the Figure:

[Figures: the mesonic two-point function built from quark propagators S_F between (0,0) and (x,t); (A) local interpolating operators, (B) extended interpolating operators with gauge links connecting (x,t) to (z,t) and (0,0) to (y,0). Below: log₁₀ C₂(t_x) of such a correlator versus t_x = 0 ... 16.]

• Contaminated by higher mass states • cosh-like effect of periodicity

The exponential slope sets in only at large enough (but not too large !) times (due to the arbitrariness of the source/sink, which are not variationally optimized). Therefore an “effective mass plot” has to clarify the situation.
→ Figure for the pion effective mass
→ Figure for the nucleon effective mass
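For reference, a minimal Python sketch (my own illustration with synthetic, noise-free cosh data) of how such an effective mass is extracted from a periodic correlator:

import numpy as np

# Synthetic correlator on Nt = 32 time slices: ground state M = 0.5 plus an
# excited-state contamination with Mexc = 1.1, both periodic (cosh-like).
Nt, M, Mexc = 32, 0.5, 1.1
t = np.arange(Nt)
C = (np.exp(-M * t) + np.exp(-M * (Nt - t))
     + 0.3 * (np.exp(-Mexc * t) + np.exp(-Mexc * (Nt - t))))

def m_eff(C, t):
    # for a pure cosh(M*(Nt/2 - t)): (C[t+1] + C[t-1]) / (2 C[t]) = cosh(M)
    return np.arccosh((C[t + 1] + C[t - 1]) / (2.0 * C[t]))

for tt in range(2, 15):
    print(tt, m_eff(C, tt))   # drifts down onto the plateau at M = 0.5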

(A) Quenched QCD: quark loops neglected

(B) Full QCD

7. What can be said about the structure of a confining string ? The following construction works to get a profile of the string • Take the operator that would represent the measurement of bound state mass, i.e. Wilson loop or corresponding object for a three-quark object (baryon) denoted as W • calculate the expectation value if multiplied with a local (at y = (~y , τ2 )) observable S which can be either – action density or – energy density or – (local) chiral condensate or – topological charge density.

• divide by the expectation value of the Wilson or baryon loop hW i • normalize by the global average of the observable hSi

from Bali et al. hep-th/9709114 Abelian action density: full Abelian action

from Bali et al. hep-th/9709114 Abelian action density: monopole contribution only

from Bali et al. hep-th/9709114 Abelian action density: monopoles subtracted

from Bali et al. hep-th/9802005 Energy distribution within a flux tube of length 2 fermi between a QQ pair in MAGP

from Bali et al. hep-th/9802005 Checking the localisations of div E on source and sink, at R = 15.


For a baryon (three-quark object) see the Figure (Leinweber et al. arXiv:0910.0958):

C_3Q(y⃗; r⃗₁, r⃗₂, r⃗₃; τ) = ⟨ W_3Q(r⃗₁, r⃗₂, r⃗₃; τ) S(y⃗, τ/2) ⟩ / ( ⟨W_3Q(τ)⟩ ⟨S(y⃗, τ/2)⟩ )

C_QQ̄(y⃗; r⃗₁, r⃗₂; τ) = ⟨ W_QQ̄(r⃗₁, r⃗₂; τ) S(y⃗, τ/2) ⟩ / ( ⟨W_QQ̄(τ)⟩ ⟨S(y⃗, τ/2)⟩ )

The following three pictures show the action profile of • a quark-diquark flux tube (Leinweber et al. arXiv:0910.0958) • a quark-antiquark flux tube • the transversal profile of both (at the middle) They find few-% reduction relative to the action in vacuum. The profile of the quark-antiquark condensate would be similar !

8. How hadron structure functions are calculated ?

Structure functions, Generalized Parton Distributions (GPD). The principle is the same : calculate hadronic matrix elements of operators coupling to external currents (begun by the German QCDSF collaboration, Schierholz et al.):

structure function moments = three-point function / two-point function

This gives the moments of f_u(x), f_d(x), f_s(x).
GPD : describes the transversal (impact parameter) dependence.
More recently : Wigner function picture of the nucleon (in phase space); last summer : Lorce/Pasquini arXiv:1206.3143.

A parallel between structure functions and semileptonic decays

[Figure: quark-flow diagram of a semileptonic D → K decay, a c → s transition via W emission producing e⁻ ν, with a spectator d quark.]

9. What is “vacuum structure” and why should it be manipulated ?

All unusual properties of QCD are believed to be related to a weird “vacuum structure”. The typical vacuum field without any particle sources should be complicated enough to explain all oddities (confinement, chiral symmetry breaking etc.), however by recognizable structures. Two classes of models/pattern recognition searches exist:
• Semiclassical models
– A semiclassical model has been developed, with “instantons” as the prominent structure at T = 0, and “calorons” at T ≠ 0. Both are quantized lumps of winding number (topological charge). Instantons cannot explain confinement !
– Calorons are instantons at T ≠ 0. They have an additional degree of freedom. Thanks to their composite structure at “nontrivial holonomy”, they have the potential to confine.
– Dyons, the caloron constituents (N_c dyons per caloron), are probably the agents of confinement.
• Searching for defects : monopoles and vortices
– On the other hand, gauge fixing and “projection” to Abelian fields has been used to localize special structures (monopoles of U(1)^{N_c−1} and vortices of Z(N_c) gauge theory) which each alone can mimic confinement (in the respective reduced theory).
– Removing one sort of them destroys the other type of defects and leads to the loss of confinement.
– It also leads to the loss of objects of topological charge (instantons, calorons).
Conclusion : Topological charge density seems to link all these features together.

q(x) ∝ ε_{µνσρ} G^a_{µν} G^a_{σρ} ∝ E⃗^a · B⃗^a

(this is the density of winding number [Pontryagin number]) To map out the typical structure of topological charge, taking snapshots from the Markoff chain of configurations is sufficient. It needs (direct or indirect) smoothing (cooling, filtering, Wilson flow ...) of the gauge field. Structures appear which extend in space-time and are long-living down the Markoff chain. They are called “infrared structures”.

Without extensive smoothing these structures are overlaid (but not hidden) by a membrane-like structure. The infrared features become visible only when the resolution is reduced. Remarkably, the same structure is obtained applying different smoothing techniques: – active : smoothing the link field (slightly modifying the configuration) – passive : spectral cut-off of “diagnostic” quark fermion modes The following two pictures show the same configuration – after nsw = 48 smoothing steps – with a cut-off of λcut = 634MeV

The following two pictures show the same configuration after only n_sw = 5 smoothing steps, which is expected to be equivalent to no cutoff at all beyond the lattice discretization, λ_cut = 1/a, i.e. including all modes of the fermion spectrum up to the lattice UV cutoff. What do we see ?
– positive and negative topological charges form entangling 3-dimensional membranes filling all of space
– positive topological charges alone form a sponge

The following two pictures (from Ph. de Forcrand) show
– an instanton-antiinstanton pair together with the corresponding low-energy quark (almost zero-) modes. The topological charge could be mapped out so clearly only after a fair amount of smoothing.
– The almost-zero modes, however, have been identified in the original non-smoothed configuration.
– a schematic view of how to reconcile the UV structure with the IR structure


10. How can high densities of energy/baryonic charge be described ?

High energy density by high temperature (only equilibrium !). The situation holds in the central rapidity region at RHIC and LHC. A surplus of baryonic charge becomes important at lower collision energy (NICA and FAIR, and the spectator region at RHIC and LHC). Add a chemical potential (Grand canonical ensemble) (mostly a potential counting quark minus antiquark, i.e. net baryon number). Remember the Polyakov loop P(x⃗) vs. P*(y⃗) for quark and antiquark ! Replace all timelike links (at all positions in space, all timeslices x₀):

U_{(x⃗,x₀),ν=4} → U_{(x⃗,x₀),ν=4} e^{µ_q a}

This amounts to giving all possible circulating quark loops (forward in time) in a configuration an enhancing factor e^{μ_q/T} and all possible circulating antiquark loops (backward in time) a suppression factor e^{−μ_q/T}. For 3 quarks passing (no matter whether they go correlated or uncorrelated !), a factor

e^{3μ_q/T} = e^{μ_B/T}

and for 3 antiquarks passing (correlated or uncorrelated !), a factor

e^{−3μ_q/T} = e^{−μ_B/T}

appears in the weight (like in the HRG model for baryons and antibaryons), therefore μ_B = 3 μ_q . HRG = hadron resonance gas
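A minimal numerical sketch of this prescription (U(1) phases standing in for SU(3) link matrices, all numbers invented): multiplying every timelike link by e^{μ_q a} enhances a quark line winding forward around the time direction by e^{μ_q/T} and suppresses the backward-running antiquark line by e^{−μ_q/T}.

import numpy as np

# Toy sketch: apply the chemical-potential factor to the timelike links along
# one spatial site and compare quark/antiquark Polyakov-loop-like worldlines.
rng = np.random.default_rng(2)
Nt, a, mu_q = 8, 0.1, 0.3                           # time extent, spacing, chem. potential
U_t = np.exp(1j * rng.uniform(-np.pi, np.pi, Nt))   # timelike link "matrices" (here phases)

P0 = np.prod(U_t)                                   # quark loop at mu = 0
P_q = np.prod(U_t * np.exp(mu_q * a))               # quark loop: forward in time
P_aq = np.prod(1.0 / (U_t * np.exp(mu_q * a)))      # antiquark loop: inverse links, backward in time

T = 1.0 / (a * Nt)
print(abs(P_q / P0), np.exp(mu_q / T))              # enhancement exp(+mu_q/T)
print(abs(P_aq * P0), np.exp(-mu_q / T))            # suppression exp(-mu_q/T)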

This additional factor multiplies all timelike links in the Dirac operator and therefore enters the quark determinant. In a corresponding way quarks (antiquarks) could be weighted according to their isospin. If baryon number is counted → the fermion determinant becomes complex-valued, giving rise to the sign problem. Importance sampling does not work anymore ! → Workshop in Regensburg (Germany), September 19-22, 2012, see http://www.physik.uni-regensburg.de/sign2012/talks.shtml If total isospin is counted (irrespective of baryon number) → the fermion determinant is real-valued and positive (no phase problem).

The weight factor contains det M = |det M| e^{i arg(det M)}. If μ_q = 0, all non-zero eigenvalues λ_n appear in complex-conjugate pairs ±iλ_n, and the number of zero modes (λ_0 = 0) is equal to |Q| (Q = topological charge). This is the Atiyah-Singer index theorem ! With a small mass m,

det M = (λ_0 + m)^{|Q|} Π_n [(iλ_n + m)(−iλ_n + m)] = m^{|Q|} Π_n (λ_n^2 + m^2) = real and positive.
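A quick numerical check of this statement with a toy spectrum (not a real Dirac operator): put |Q| zero modes and the remaining eigenvalues in ±iλ_n pairs, and the massive determinant comes out real and positive, matching the closed form above.

import numpy as np

# Toy mu_q = 0 spectrum: |Q| zero modes plus pairs +i*lambda, -i*lambda.
rng = np.random.default_rng(3)
m, Q = 0.05, 2
lam = rng.uniform(0.1, 2.0, size=50)
eigvals = np.concatenate([np.zeros(Q), 1j * lam, -1j * lam])

detM = np.prod(eigvals + m)                 # det M = prod (eigenvalue + m)
print(detM)                                 # imaginary part ~ 0, real part > 0
print(m**Q * np.prod(lam**2 + m**2))        # equals the closed form from the slide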

It is tempting to treat the phase problem for μ_q ≠ 0 by the phase quenching method : for any observable A, the expectation value at μ ≠ 0 is

⟨A⟩ = ⟨ A e^{i arg(det M)} ⟩_{phase quenched} / ⟨ e^{i arg(det M)} ⟩_{phase quenched}

but the denominator goes like

⟨ e^{i arg(det M)} ⟩_{phase quenched} ∼ e^{−c V_4}

and becomes lost in noise for any physically reasonable volume. This trick does not really work for physically interesting volumes !!
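The severity of the problem can be seen in a toy model (my assumption: Gaussian phase fluctuations whose width grows linearly with the 4-volume): the average phase factor falls like e^{−c V_4} while the statistical noise stays at the 1/√N level, so the signal disappears long before realistic volumes are reached.

import numpy as np

# Toy model of the overlap problem: the average reweighting factor <exp(i*theta)>
# falls like exp(-c*V4) and drowns in the statistical noise.
rng = np.random.default_rng(4)
N = 10_000                                  # number of "configurations"
for V4 in (10, 40, 160, 640):
    theta = rng.normal(scale=np.sqrt(0.02 * V4), size=N)   # phase fluctuations grow with V4
    w = np.cos(theta)                       # real part of exp(i*theta)
    mean = w.mean()
    err = w.std() / np.sqrt(N)
    print(f"V4={V4:4d}  <cos(theta)> = {mean:+.4f}  "
          f"expected exp(-c*V4) = {np.exp(-0.01 * V4):.4f}  statistical noise ~ {err:.4f}")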

How to circumvent the phase (sign) problem is presently under intensive research :
• strong coupling approximations
• mean field methods ( .... EMI 1984)
• flux representation of the partition function (Gattringer)
• lattice-suggested effective models
• complex Langevin simulation (Aarts, EMI 1986)
• more radical solutions (saddle point integration in complexified space, DiRenzo)
• effective models like PNJL, PQM etc. (which have to be arranged such that they match lattice QCD at μ = 0)

See my lectures given at the Helmholtz International Summer School “Dense Matter in Heavy-Ion Collisions and Astrophysics” (DM 2012) at JINR, Dubna, August 28 - September 8, 2012, see the BLTP web page http://theor.jinr.ru/ dm12/list-of-lecturers.htm

11. Instead of a conclusion Coming back more explicitly to the origin of systematic errors (of an otherwise absolutely safe ab initio method)

• Discretization errors because of the finite lattice spacing a
Physical results must be extracted in the limit a → 0. This limit cannot be reached in a real lattice simulation. The discrepancy between the computed results and their continuum limit is usually of the order of a (or a^2 for improved lattice actions). In typical simulations 1/a = 1 - 3 GeV. (A sketch of a continuum extrapolation follows below.)
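A sketch of such a continuum extrapolation with invented numbers (leading O(a^2) artefacts assumed, as for an improved action):

import numpy as np

# Hypothetical data: an observable measured at several lattice spacings is
# extrapolated linearly in a^2 to a -> 0.
a = np.array([0.12, 0.09, 0.06, 0.045])          # lattice spacings in fm (invented)
O_lat = 1.0 + 2.5 * a**2 + np.random.default_rng(5).normal(0, 0.003, a.size)

coeff = np.polyfit(a**2, O_lat, 1)               # linear fit in a^2
O_cont = np.polyval(coeff, 0.0)                  # value at a^2 = 0
print(f"continuum-extrapolated value: {O_cont:.4f}  (true value of the toy model: 1.0)")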

• Statistical errors because of finite computing time
All Monte Carlo simulations are based on statistical sampling. Therefore they carry a statistical error that decreases like 1/√N, where N is the number of independent measurements. Notice that N is the number of uncorrelated field configurations. Since the gauge field configurations are created using a Markov chain (with small changes of the field variables), one should monitor the autocorrelations of the observables under consideration; many configurations might have to be discarded. (A sketch of a binning analysis follows below.) With present-day computational power it is possible to generate reasonable statistical samples of uncorrelated configurations. Except for some very important (“infrared”) observables, there is no major problem left.
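A minimal binning (blocking) analysis on a synthetic autocorrelated chain (an AR(1) process standing in for a real Monte Carlo history) shows how the naive error estimate grows with the bin size until the bins become effectively independent:

import numpy as np

# Synthetic autocorrelated "Monte Carlo history" and a binning error analysis.
rng = np.random.default_rng(6)
N, rho = 20_000, 0.9                       # chain length, autocorrelation parameter
x = np.empty(N)
x[0] = 0.0
for i in range(1, N):                      # AR(1) process as a stand-in for a MC history
    x[i] = rho * x[i - 1] + rng.normal()

def binned_error(data, bin_size):
    nb = len(data) // bin_size
    bins = data[:nb * bin_size].reshape(nb, bin_size).mean(axis=1)
    return bins.std(ddof=1) / np.sqrt(nb)

for b in (1, 10, 100, 1000):
    print(f"bin size {b:5d}:  error estimate {binned_error(x, b):.4f}")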

• Finite volume because of limited memory and production speed
Instead of dealing with an open volume, periodic or anti-periodic boundary conditions are imposed on the (bosonic or fermionic) fields. Therefore every observable computed on the lattice suffers from unphysical contributions of mirror states. These finite-volume contributions to the correlation functions fall off exponentially, ∼ exp(−L m_π), with the lattice extent L, and they are usually negligible for a lattice size L bigger than 5/m_π.

• Quenching, i.e. neglecting the fermion feed-back on the gluons
This approximation is the hardest to justify. Its only reason was the temporary limitation of computational power. One can argue that unquenched simulations have suggested that the effects of quark loops on the mass spectrum are small (hadron mass ratios are relatively unaffected). But this does not apply to other aspects. The error introduced by quenching in the determination of a^{−1} can be as big as 10%-20%, which limits the absolute calibration. Quenched calculations are outdated, except for specialized (that means methodical) investigations.

• Chiral extrapolation, i.e. reaching realistic quark masses
The finite volume acts as a natural infrared cut-off of the lattice. Therefore the u and d quarks are actually too light to be simulated directly, even on present-day computers. One usually performs lattice simulations for values of m_u = m_d much bigger than the physical values (typically bigger than 50 MeV) and then extrapolates the results to the physical point or to the limit m_u = m_d = 0. This is called the chiral extrapolation, guided by predictions of the Chiral Lagrangian (encoded in Chiral Perturbation Theory, χPT) such as the Gell-Mann-Okubo formula. (A toy extrapolation is sketched below.) The chiral limit corresponds to some limit of the bare lattice parameters, like the limit κ → κ_crit, which practically cannot be taken in the simulation.
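A toy version of such a chiral extrapolation (all numbers invented, leading behaviour linear in m_π^2 assumed):

import numpy as np

# Invented data: a hadron mass measured at unphysically heavy pion masses,
# extrapolated in m_pi^2 down to the physical pion mass.
mpi = np.array([0.60, 0.50, 0.40, 0.30])            # simulated pion masses in GeV (invented)
M_had = 1.10 + 0.9 * mpi**2 + np.random.default_rng(7).normal(0, 0.005, mpi.size)

coeff = np.polyfit(mpi**2, M_had, 1)                # linear fit in m_pi^2
M_phys = np.polyval(coeff, 0.135**2)                # evaluate at the physical m_pi
print(f"extrapolated hadron mass at m_pi = 135 MeV: {M_phys:.3f} GeV")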

• Heavy quarks
The c and b quarks are very heavy, such that not all of their modes can propagate on a typical lattice. To solve the problem there are three conventional approaches:
1. Simulate these quarks with a mass smaller than the physical one and extrapolate to the physical mass (guided by the so-called Heavy Quark Effective Theory, HQET).
2. Implement HQET itself on the lattice. This means that one considers the heavy quark as almost static (non-relativistic) and computes corrections to this approximation in a 1/m_h expansion.
3. The so-called Fermilab heavy quark approach, based on HQET.

• Matching between lattice and continuum renormalization schemes
Experimental data are analyzed using certain continuum renormalization schemes, usually dimensional regularization with the MS-bar prescription. To confront lattice QCD results with phenomenology it is therefore necessary to match matrix elements between the two different schemes. In general

⟨0|O_i(...)|0⟩_{MS-bar} = Z_ij ⟨0|O_j(...)|0⟩_{latt} = (δ_ij + O(α_s)) ⟨0|O_j(...)|0⟩_{latt}

where the Z_ij are called matching coefficients and have a perturbative origin. The matching coefficients can be computed in perturbation theory and usually they are known only at 1-loop. Since α_s is big at the typical lattice energy scale, higher orders in α_s can contribute an error in the matching of as much as 10%. Moreover, in the matching procedure matrix elements of a continuum operator commonly mix with matrix elements of specific new operators that appear on the lattice, because Z_ij is in general non-diagonal. The contribution of these operators can be big and must be taken into account. (A schematic example follows below.)
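Schematically, with an invented 2×2 matching matrix, the conversion including operator mixing looks as follows (illustration only, not actual matching coefficients):

import numpy as np

# Invented Z_ij = delta_ij + O(alpha_s), with small off-diagonal entries
# encoding operator mixing on the lattice.
alpha_s = 0.25
Z = np.array([[1.0 + 0.08 * alpha_s, 0.03 * alpha_s],
              [0.02 * alpha_s,       1.0 - 0.05 * alpha_s]])
O_latt = np.array([0.142, 0.061])          # bare lattice matrix elements (invented)

O_MSbar = Z @ O_latt                       # matched (continuum-scheme) matrix elements
print(O_MSbar)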
