Lecture Notes For The C6 Theory Option
F.H.L. Essler
The Rudolf Peierls Centre for Theoretical Physics
Oxford University, Oxford OX1 3NP, UK
Contents
5 The Ising Model 28
5.1 Statistical mechanics of the Ising model . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.2 The One-Dimensional Ising Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.2.1 Transfer matrix approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.2.2 Averages of observables in the transfer matrix formalism . . . . . . . . . . 30
5.2.3 The related zero-dimensional quantum model . . . . . . . . . . . . . . . . . . 31
5.3 The Two-Dimensional Ising Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.3.1 Transfer Matrix Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.3.2 Spontaneous Symmetry Breaking . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.4 Homework Questions 8-10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6 Second Quantization 37
6.1 Systems of Independent Particles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.1.1 Occupation Number Representation . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.2 Fock Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.2.1 Creation and Annihilation Operators . . . . . . . . . . . . . . . . . . . . . . . 39
6.2.2 Basis of the Fock Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.3 Homework Questions 11-13 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.3.1 Change of Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.4 Second Quantized Form of Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.4.1 Occupation number operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.4.2 Single-particle operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.4.3 Two-particle operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
6.5 Homework Question 14 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
10 Path Integral for interacting Bose systems 64
10.1 Coherent States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.2 Partition Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Part I
Functional Methods in Quantum Mechanics
1 Some Mathematical Background
Functional Methods form a central part of modern theoretical physics. In the following we introduce the
notion of functionals and how to manipulate them.
1.1 Functionals
What is a functional? You all know that a real function can be viewed as a map from e.g. an interval [a, b] to the real numbers,
f : [a, b] \to \mathbb{R} , \quad x \to f(x). (1)
A functional is similar to a function in that it maps all elements in a certain domain to real numbers; however, the nature of its domain is very different. Instead of acting on all points of an interval or some other subset of the real numbers, the domain of functionals consists of (suitably chosen) classes of functions. In other words, given some class {f} of functions, a functional F is a map
F : \{f\} \to \mathbb{R} , \quad f \to F[f]. (2)
1. The distance between two points. A very simple functional F consists of the map which assigns to all
paths between two fixed points the length of the path. To write this functional explicitly, let us consider
a simple two-dimensional situation in the (x, y) plane and choose two points (x1 , y1 ) and (x2 , y2 ). We
consider the set of paths that do not turn back, i.e. paths along which x increases monotonically as we
go from (x1 , y1 ) to (x2 , y2 ). These can be described by the set of functions {f } on the interval [x1 , x2 ]
satisfying f (x1 ) = y1 and f (x2 ) = y2 . The length of a path is then given by the well-known expression
F[f(x)] = \int_{x_1}^{x_2} dx' \sqrt{1 + f'(x')^2} . (3)
2. Action Functionals. These are very important in Physics. Let us recall their definition in the context
of classical mechanics. Start with n generalised coordinates q(t) = (q1 (t), . . . , qn (t)) and a Lagrangian
L = L(q, q̇). Then, the action functional S[q] is defined by
S[q] = \int_{t_1}^{t_2} dt\, L(q(t), \dot{q}(t)) . (4)
It depends on classical paths q(t) between times t1 and t2 satisfying the boundary conditions q(t1 ) = q1
and q(t2 ) = q2 .
1.2 Functional differentiation
In both the examples given above a very natural question to ask is what function extremizes the functional.
In the first example this corresponds to wanting to know the path that minimizes the distance between two
points. In the second example the extremum of the action functional gives the solutions to the classical
equations of motion. This is known as Hamilton’s principle. In order to figure out what function extremizes
the functional it is very useful to generalize the notion of a derivative. For our purposes we define the
functional derivative by
\frac{\delta F[f(x)]}{\delta f(y)} = \lim_{\epsilon \to 0} \frac{F[f(x) + \epsilon\,\delta(x-y)] - F[f(x)]}{\epsilon} . (5)
Here, as usual, we should think of the δ-function as being defined as the limit of a test function, e.g.
\delta(x) = \lim_{a \to 0} \frac{1}{\sqrt{\pi}\, a}\, e^{-x^2/a^2} , (6)
and take the limit a → 0 only in the end (after commuting the limit with all other operations such as the \lim_{\epsilon\to 0} in (5)). Importantly, the derivative defined in this way is a linear operation which satisfies the
product and chain rules of ordinary differentiation and commutes with ordinary integrals and derivatives.
Let us see how functional differentiation works for our two examples.
1. The distance between two points. In analogy with finding stationary points of functions we want to
extremize (3) by setting its functional derivative equal to zero
0 = \frac{\delta F[f(x)]}{\delta f(y)} . (7)
To work this out we consider how F changes when we replace f(x) \to f(x) + \epsilon\,\delta(x-y),
F[f(x) + \epsilon\,\delta(x-y)] = \int_{x_1}^{x_2} dx' \sqrt{1 + [f'(x') + \epsilon\,\delta'(x'-y)]^2} . (8)
Expanding the square root to first order in \epsilon gives
\sqrt{1 + [f'(x') + \epsilon\,\delta'(x'-y)]^2} = \sqrt{1 + [f'(x')]^2} + \epsilon\, \frac{f'(x')\,\delta'(x'-y)}{\sqrt{1 + [f'(x')]^2}} + O(\epsilon^2) , (9)
where \delta'(x) is the derivative of the delta-function and O(\epsilon^2) denotes terms proportional to \epsilon^2. Substituting this back into (8) we have¹
\frac{\delta F[f(x)]}{\delta f(y)} = \int_{x_1}^{x_2} dx'\, \frac{\delta'(x'-y)\, f'(x')}{\sqrt{1 + [f'(x')]^2}} = -\frac{d}{dy}\, \frac{f'(y)}{\sqrt{1 + [f'(y)]^2}} . (11)
The solution to (7) is thus
f'(y) = \text{const}, (12)
which describes a straight line. In practice we don’t really go back to the definition of the functional
derivative any more than we use the definition of an ordinary derivative to work it out, but proceed
as follows.
¹ In the last step we have used
\int_a^b dx'\, \delta'(x'-y)\, g(x') = -g'(y) , (10)
which can be proved by “integration by parts”.
• We first interchange the functional derivative and the integration
\frac{\delta F[f(x)]}{\delta f(y)} = \int_{x_1}^{x_2} dx'\, \frac{\delta}{\delta f(y)} \sqrt{1 + [f'(x')]^2} . (13)
• Then we use the chain rule of functional differentiation together with
\frac{\delta f'(x')}{\delta f(y)} = \frac{d}{dx'} \frac{\delta f(x')}{\delta f(y)} = \frac{d}{dx'}\, \delta(x'-y) . (15)
Now we can put everything together and arrive at the same answer (11).
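The result (11) can also be checked numerically, which is a useful exercise in its own right. The following sketch (not part of the original notes) discretizes the length functional on a grid; the test path f(x) = sin(x) and the interval are illustrative choices.

    import numpy as np

    # Numerical check of Eq. (11): the functional derivative of the length
    # functional F[f] = int dx' sqrt(1 + f'(x')^2) equals -d/dy [ f'(y)/sqrt(1+f'(y)^2) ].
    def length_functional(f, dx):
        fp = np.gradient(f, dx)          # f'(x) by finite differences
        return np.sum(np.sqrt(1.0 + fp**2)) * dx

    x = np.linspace(0.0, 3.0, 400)
    dx = x[1] - x[0]
    f = np.sin(x)                        # hypothetical test path

    # delta F / delta f(y_j) ~ [F(f + eps*delta_j) - F(f)] / eps,
    # where delta_j is a discrete approximation to delta(x - y_j) of weight 1/dx.
    eps = 1e-6
    num = np.empty_like(f)
    for j in range(len(x)):
        bump = np.zeros_like(f)
        bump[j] = 1.0 / dx
        num[j] = (length_functional(f + eps * bump, dx) - length_functional(f, dx)) / eps

    fp = np.gradient(f, dx)
    exact = -np.gradient(fp / np.sqrt(1.0 + fp**2), dx)   # right-hand side of Eq. (11)
    # compare away from the boundaries, where the discrete delta is well resolved
    print(np.max(np.abs(num[10:-10] - exact[10:-10])))    # small, discretization error only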
Exercise
2. Next we want to try out these ideas on our second example and extremize the classical action (4) in order
to obtain the classical equations of motion. We first interchange functional derivative and integration and
then use the chain rule to obtain
\frac{\delta S[q]}{\delta q_i(t)} = \frac{\delta}{\delta q_i(t)} \int_{t_1}^{t_2} d\tilde{t}\, L(q(\tilde{t}), \dot{q}(\tilde{t})) (16)
= \int_{t_1}^{t_2} d\tilde{t}\, \left[ \frac{\partial L}{\partial q_j}(q, \dot{q})\, \frac{\delta q_j(\tilde{t})}{\delta q_i(t)} + \frac{\partial L}{\partial \dot{q}_j}(q, \dot{q})\, \frac{\delta \dot{q}_j(\tilde{t})}{\delta q_i(t)} \right] , (17)
where repeated indices j are summed over. We now use that \frac{\delta \dot{q}_j(\tilde{t})}{\delta q_i(t)} = \frac{d}{d\tilde{t}} \frac{\delta q_j(\tilde{t})}{\delta q_i(t)} and integrate by parts with respect to \tilde{t}
\frac{\delta S[q]}{\delta q_i(t)} = \int_{t_1}^{t_2} d\tilde{t}\, \left[ \frac{\partial L}{\partial q_j}(q, \dot{q}) - \frac{d}{d\tilde{t}} \frac{\partial L}{\partial \dot{q}_j}(q, \dot{q}) \right] \frac{\delta q_j(\tilde{t})}{\delta q_i(t)} (19)
= \int_{t_1}^{t_2} d\tilde{t}\, \left[ \frac{\partial L}{\partial q_j}(q, \dot{q}) - \frac{d}{d\tilde{t}} \frac{\partial L}{\partial \dot{q}_j}(q, \dot{q}) \right] \delta_{ij}\, \delta(\tilde{t} - t) = \frac{\partial L}{\partial q_i}(q, \dot{q}) - \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_i}(q, \dot{q}) . (20)
Hamilton's principle \frac{\delta S[q]}{\delta q_i(t)} = 0 thus gives the Euler–Lagrange equations of classical mechanics
\frac{\partial L}{\partial q_i}(q, \dot{q}) - \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_i}(q, \dot{q}) = 0. (22)
Nice.
Exercise
1.3 Multidimensional Gaussian Integrals
As a reminder, we start with a simple one-dimensional Gaussian integral over a single variable y. It is given
by
I(z) \equiv \int_{-\infty}^{\infty} dy\, \exp\!\left(-\tfrac{1}{2} z y^2\right) = \sqrt{\frac{2\pi}{z}} , (23)
where z is a complex number with Re(z) > 0. The standard proof of this relation involves writing I(z)^2 as a two-dimensional integral over y_1 and y_2 and then introducing two-dimensional polar coordinates r = \sqrt{y_1^2 + y_2^2} and \varphi. Explicitly,
I(z)^2 = \int_{-\infty}^{\infty} dy_1 \exp\!\left(-\tfrac{1}{2} z y_1^2\right) \int_{-\infty}^{\infty} dy_2 \exp\!\left(-\tfrac{1}{2} z y_2^2\right) = \int_{-\infty}^{\infty} dy_1 \int_{-\infty}^{\infty} dy_2 \exp\!\left(-\tfrac{1}{2} z (y_1^2 + y_2^2)\right) (24)
= \int_0^{2\pi} d\varphi \int_0^{\infty} dr\, r \exp\!\left(-\tfrac{1}{2} z r^2\right) = \frac{2\pi}{z} . (25)
We next consider n-dimensional Gaussian integrals
W_0(A) \equiv \int d^n y\, \exp\!\left(-\tfrac{1}{2} \mathbf{y}^T A \mathbf{y}\right) (26)
over variables \mathbf{y} = (y_1, \dots, y_n), where A is a symmetric, positive definite matrix (all its eigenvalues are positive). This integral can be reduced to a product of one-dimensional Gaussian integrals by diagonalising
positive). This integral can be reduced to a product of one-dimensional Gaussian integrals by diagonalising
the matrix A. Consider an orthogonal rotation O such that A = ODOT with a diagonal matrix D =
diag(a1 , . . . , an ). The eigenvalues ai are strictly positive since we have assumed that A is positive definite.
Introducing new coordinates \tilde{\mathbf{y}} = O^T \mathbf{y} we can write
\mathbf{y}^T A \mathbf{y} = \tilde{\mathbf{y}}^T D \tilde{\mathbf{y}} = \sum_{i=1}^n a_i \tilde{y}_i^2 , (27)
where the property OT O = 1 of orthogonal matrices has been used. Note further that the Jacobian of
the coordinate change y → ỹ is one, since |det(O)| = 1. Hence, using Eqs. (23) and (27) we find for the
integral (26)
W_0(A) = \prod_{i=1}^n \int d\tilde{y}_i \exp\!\left(-\tfrac{1}{2} a_i \tilde{y}_i^2\right) = (2\pi)^{n/2} (a_1 a_2 \dots a_n)^{-1/2} = (2\pi)^{n/2} (\det A)^{-1/2} . (28)
To summarise, we have found for the multidimensional Gaussian integral (26) that
W_0(A) = (2\pi)^{n/2} (\det A)^{-1/2} , (29)
a result which will be of some importance in the following. We note that if we multiply the matrix A by a
complex number z with Re(z) > 0 and then follow through exactly the same steps, we find
W_0(zA) = \left(\frac{2\pi}{z}\right)^{n/2} (\det A)^{-1/2} . (30)
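A minimal numerical check of (28)/(29), not part of the original notes: the matrix A below is an arbitrarily chosen 2 × 2 positive definite example, and the integration box is a hypothetical cutoff large enough for the Gaussian to be negligible outside it.

    import numpy as np
    from scipy.integrate import dblquad

    # Check W0(A) = (2*pi)^(n/2) * det(A)^(-1/2) for a 2x2 example.
    A = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

    integrand = lambda y2, y1: np.exp(-0.5 * (A[0, 0]*y1*y1 + 2*A[0, 1]*y1*y2 + A[1, 1]*y2*y2))
    W0_numerical, _ = dblquad(integrand, -10, 10, lambda y1: -10, lambda y1: 10)
    W0_formula = (2 * np.pi) ** (2 / 2) / np.sqrt(np.linalg.det(A))

    print(W0_numerical, W0_formula)   # agree to quadrature accuracy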
One obvious generalisation of the integral (26) involves adding a term linear in y in the exponent, that is
W_0(A, J) \equiv \int d^n y\, \exp\!\left(-\tfrac{1}{2} \mathbf{y}^T A \mathbf{y} + \mathbf{J}^T \mathbf{y}\right) . (31)
Here J = (J1 , . . . , Jn ) is an n-dimensional vector. Changing variables y → ỹ, where
y = A−1 J + ỹ (32)
where n = n(X) is a function. (The minima of this functional can be interpreted as light rays propagating in a
medium with refractive index n.)
a) Derive the differential equation which has to be satisfied by minimal paths X.
b) Consider a two-dimensional situation with paths X(τ ) = (X(τ ), Y (τ )) in the x, y plane and a function
n = n0 + (n1 − n0 ) θ(x). (The Heaviside function θ(x) is defined to be 0 for x < 0 and 1 for x ≥ 0. Recall
that θ0 (x) = δ(x).) Solve the differential equation in a) for this situation, using the coordinate x as parameter τ
along the path.
c) Show that the solution in b) leads to the standard law for refraction at the boundary between two media with
refractive indices n0 and n1 .
for a complex constant z. What is the requirement on z for the integral to exist?
b) The gamma function Γ is defined by
\Gamma(s+1) = \int_0^{\infty} dx\, x^s e^{-x} .
2 Path Integrals in Quantum Mechanics
So far you have encountered two ways of doing QM:
1. Following Schrödinger, we can solve the Schrödinger equation for the wave function → Fun with
PDEs...
2. Following Heisenberg, we can work with operators, commutation relations, eigenstates → Fun with
Linear Algebra...
Historically it took some time for people to realize that these are in fact equivalent. To quote the great
men: I knew of Heisenberg’s theory, of course, but I felt discouraged, not to say repelled, by the methods
of transcendental algebra, which appeared difficult to me, and by the lack of visualizability. (Schrödinger in
1926)
The more I think about the physical portion of Schrödinger’s theory, the more repulsive I find it. What
Schrödinger writes about the visualizability of his theory is probably not quite right, in other words it’s crap.
(Heisenberg, writing to Pauli in 1926)
There is a third approach to QM, due to Feynman. He developed it when he was a graduate student,
inspired by a mysterious remark in a paper by Dirac. Those were the days! Feynman’s approach is partic-
ularly useful for QFTs and many-particle QM problems, as it makes certain calculations much easier. We
will now introduce it by working backwards. The central object in Feynman’s method is something called
a propagator. We’ll now work out what this is using the Heisenberg/Schrödinger formulation of QM you
know and love. After we have done that, we formulate QM à la Feynman.
where |\vec{x}\rangle are the simultaneous eigenstates of the position operators x̂, ŷ and ẑ. The propagator is the probability amplitude for finding our QM particle at position \vec{x}' at time t, if it started at position \vec{x} at time t'. To keep notations simple, we now consider a particle moving in one dimension with time-independent Hamiltonian
H = \hat{T} + \hat{V} = \frac{\hat{p}^2}{2m} + V(\hat{x}). (41)
We want to calculate the propagator
\langle x_N | U(t;0) | x_0 \rangle . (42)
It is useful to introduce small time steps
t_n = n\epsilon , \quad n = 0, \dots, N , \quad \epsilon = t/N. (43)
The propagator is
\langle x_N | U(t;0) | x_0 \rangle = \langle x_N | e^{-\frac{i}{\hbar}\epsilon H} \cdots e^{-\frac{i}{\hbar}\epsilon H} | x_0 \rangle
= \int dx_{N-1} \dots \int dx_1\, \langle x_N | e^{-\frac{i}{\hbar}\epsilon H} | x_{N-1} \rangle \langle x_{N-1} | e^{-\frac{i}{\hbar}\epsilon H} | x_{N-2} \rangle \cdots \langle x_1 | e^{-\frac{i}{\hbar}\epsilon H} | x_0 \rangle . (45)
This expression now has a very nice and intuitive interpretation, see Fig. 1: The propagator, i.e. the
probability amplitude for finding the particle at position xN and time t given that it was at position x0 at
time 0 is given by the sum over all “paths” going from x0 to xN (as x1 ,. . . , xN −1 are integrated over).
In the next step we determine the “infinitesimal propagator”
\langle x_{n+1} | e^{-\frac{i}{\hbar}\epsilon H} | x_n \rangle . (47)
So up to terms of order \epsilon^2 we have
\langle x_{n+1} | e^{-\frac{i}{\hbar}\epsilon H} | x_n \rangle \simeq \langle x_{n+1} | e^{-\frac{i}{\hbar}\epsilon \hat{T}} e^{-\frac{i}{\hbar}\epsilon \hat{V}} | x_n \rangle = \langle x_{n+1} | e^{-\frac{i}{\hbar}\epsilon \hat{T}} | x_n \rangle\, e^{-\frac{i}{\hbar}\epsilon V(x_n)} , (50)
where we have used that \hat{V}|x\rangle = V(x)|x\rangle. As \hat{T} = \hat{p}^2/2m it is useful to insert a complete set of momentum eigenstates² to calculate
\langle x_{n+1} | e^{-\frac{i}{\hbar}\epsilon \hat{T}} | x_n \rangle = \int \frac{dp}{2\pi\hbar}\, \langle x_{n+1} | e^{-\frac{i\epsilon \hat{p}^2}{2m\hbar}} | p \rangle \langle p | x_n \rangle = \int \frac{dp}{2\pi\hbar}\, e^{-\frac{i\epsilon p^2}{2m\hbar}}\, e^{-\frac{i}{\hbar} p (x_n - x_{n+1})}
= \sqrt{\frac{m}{2\pi i \hbar \epsilon}}\, e^{\frac{im}{2\hbar\epsilon}(x_n - x_{n+1})^2} . (51)
In the second step we have used that \hat{p}|p\rangle = p|p\rangle and that
\langle x | p \rangle = e^{\frac{ipx}{\hbar}} . (52)
The integral over p is performed by changing variables to p' = p + \frac{m}{\epsilon}(x_n - x_{n+1}) (and giving \epsilon a very small imaginary part in order to make the integral convergent). Substituting (51) and (50) back into our expression (45) for the propagator gives
\langle x_N | U(t;0) | x_0 \rangle = \lim_{N\to\infty} \left[\frac{m}{2\pi i \hbar \epsilon}\right]^{\frac{N}{2}} \int dx_1 \dots dx_{N-1} \exp\!\left( \frac{i\epsilon}{\hbar} \sum_{n=0}^{N-1} \left[ \frac{m}{2}\left(\frac{x_{n+1}-x_n}{\epsilon}\right)^2 - V(x_n) \right] \right) . (53)
2.2 Quantum Mechanics à la Feynman
Feynman’s formulation of Quantum Mechanics is based on the single postulate that the probability amplitude for propagation from a position x_0 to a position x_N is obtained by summing over all possible paths connecting x_0 and x_N, where each path is weighted by a phase factor \exp\!\left(\frac{i}{\hbar}S\right), where S is the classical action of the path. To understand how the classical limit emerges, it is useful to first consider ordinary integrals of the form
g(a) = \int_{-\infty}^{\infty} dt\, h_1(t)\, e^{iah_2(t)}
when we take the real parameter a to infinity. In this case the integrand will oscillate wildly as a function of t because the phase of \exp(iah_2(t)) will vary rapidly. The dominant contribution will arise from the points where the phase changes slowly, which are the stationary points h_2'(t_0) = 0.
The integral can then be approximated by expanding around the stationary points. Assuming that there is a single stationary point at t_0,
g(a \gg 1) \approx \int_{-\infty}^{\infty} dt\, \left[ h_1(t_0) + (t - t_0) h_1'(t_0) + \dots \right] e^{iah_2(t_0) + i\frac{a h_2''(t_0)}{2}(t-t_0)^2} , (60)
Changing integration variables to t' = t - t_0 (and giving a a small imaginary part to make the integral converge at infinity) we obtain a Gaussian integral that we can take using (23),
g(a \gg 1) \approx \sqrt{\frac{2\pi i}{a h_2''(t_0)}}\, h_1(t_0)\, e^{iah_2(t_0)} . (61)
Subleading contributions can be evaluated by taking higher order contributions in the Taylor expansions
into account. If we have several stationary points we sum over their contributions. The method we have
just discussed is known as stationary phase approximation.
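The stationary phase formula (61) is easy to test numerically. In the sketch below the functions h_1, h_2 are hypothetical choices with a single stationary point (they are not taken from the notes), and the comparison holds up to 1/a corrections.

    import numpy as np
    from scipy.integrate import quad

    # Numerical illustration of Eq. (61) for g(a) = int dt h1(t) exp(i a h2(t)).
    h1 = lambda t: np.exp(-t**2)              # slowly varying prefactor
    h2 = lambda t: 0.5 * (t - 1.0)**2         # single stationary point t0 = 1, h2''(t0) = 1

    a = 50.0
    re, _ = quad(lambda t: h1(t) * np.cos(a * h2(t)), -6, 6, limit=2000)
    im, _ = quad(lambda t: h1(t) * np.sin(a * h2(t)), -6, 6, limit=2000)
    g_numerical = re + 1j * im

    t0, h2pp = 1.0, 1.0
    g_stationary_phase = np.sqrt(2j * np.pi / (a * h2pp)) * h1(t0) * np.exp(1j * a * h2(t0))

    print(g_numerical, g_stationary_phase)    # agree up to 1/a corrections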
The generalization to path integrals is now clear: in the limit \hbar \to 0 the path integral is dominated by the vicinity of the stationary points of the action S,
\frac{\delta S}{\delta x(t')} = 0. (62)
These are precisely the classical equations of motion,
m\ddot{x}(t') + V'(x(t')) = 0. (63)
It is useful to change integration variables to
y_j = x_j - x_N , \quad j = 1, \dots, N-1. (65)
Carrying out the resulting Gaussian integrals one finds
\langle x_N | U(t;0) | x_0 \rangle = \sqrt{\frac{m}{2\pi i \hbar t}}\, e^{\frac{im}{2\hbar t}(x_0 - x_N)^2} . (70)
For a free particle we can evaluate the propagator directly in a much simpler way:
\langle x_N | U(t;0) | x_0 \rangle = \int_{-\infty}^{\infty} \frac{dp}{2\pi\hbar}\, \langle x_N | e^{-\frac{i\hat{p}^2 t}{2m\hbar}} | p \rangle \langle p | x_0 \rangle = \int_{-\infty}^{\infty} \frac{dp}{2\pi\hbar}\, e^{-\frac{ip^2 t}{2m\hbar}}\, e^{-\frac{i}{\hbar} p (x_0 - x_N)}
= \sqrt{\frac{m}{2\pi i \hbar t}}\, e^{\frac{im}{2\hbar t}(x_0 - x_N)^2} . (71)
Aside: Lattice Laplacian
The matrix A is related to the one dimensional Lattice Laplacian. Consider functions of a variable z0 ≤ z ≤ zN
with “hard-wall boundary conditions”
f (z0 ) = f (zN ) = 0. (72)
The Laplace operator D acts on these functions as
D f \equiv \frac{d^2 f(z)}{dz^2} . (73)
Discretizing the variable z by introducing N-1 points
z_n = z_0 + n a_0 , \quad n = 1, \dots, N-1, (74)
where a_0 = (z_N - z_0)/N is a “lattice spacing”, maps the function f(z) to an (N-1)-dimensional vector f = (f(z_1), \dots, f(z_{N-1})). (75)
Recalling that
\frac{d^2 f}{dz^2}(z) = \lim_{a_0 \to 0} \frac{f(z + a_0) + f(z - a_0) - 2f(z)}{a_0^2} , (76)
we conclude that the Laplacian is discretized as follows
D f \to a_0^{-2}\, \Delta f , (77)
where
\Delta_{jk} = \delta_{j,k+1} + \delta_{j,k-1} - 2\delta_{j,k} . (78)
Our matrix A is equal to \frac{im}{\hbar\epsilon}\Delta. The eigenvalue equation
\Delta\, \mathbf{a}_n = \lambda_n\, \mathbf{a}_n , \quad n = 1, \dots, N-1 (79)
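The spectrum of the lattice Laplacian is easy to check on a computer. The following sketch (an aside, not part of the notes) builds Δ of Eq. (78) and compares its eigenvalues with the standard closed form λ_n = −4 sin²(πn/2N), quoted here as a known result rather than derived in this excerpt; it also verifies the identity (88) used below.

    import numpy as np

    N = 12
    Delta = -2 * np.eye(N - 1) + np.eye(N - 1, k=1) + np.eye(N - 1, k=-1)

    numerical = np.sort(np.linalg.eigvalsh(Delta))
    analytic = np.sort(np.array([-4 * np.sin(np.pi * n / (2 * N))**2 for n in range(1, N)]))
    print(np.allclose(numerical, analytic))          # True

    # the identity (88) used in the homework question
    print(sum(np.cos(np.pi * j / (2 * N))**2 for j in range(1, N)), (N - 1) / 2)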
Now work out J^T A^{-1} J by working in the eigenbasis of A^{-1} (Hint: write this as J^T A^{-1} J = J^T O^T O A^{-1} O^T O J, where O^T O = 1 and O A^{-1} O^T is a diagonal matrix you have already calculated above.). A useful identity you may use is
\sum_{j=1}^{N-1} \cos^2(\pi j / 2N) = \frac{N-1}{2} . (88)
e) Use the result you have obtained to write an explicit expression for the propagator.
Show that the wave function \psi(t,x) = \langle x | \Psi(t) \rangle, where |\Psi(t)\rangle is a solution to the time-dependent Schrödinger equation
i\hbar \frac{\partial}{\partial t} |\Psi(t)\rangle = H |\Psi(t)\rangle , (90)
fulfils the integral equation
\psi(t,x) = \int_{-\infty}^{\infty} dx'\, K(t, x; t', x')\, \psi(t', x') , (91)
where
K(t, x; t', x') = \langle x | e^{-\frac{i}{\hbar} H (t - t')} | x' \rangle . (93)
b) Using that the propagation for 0 ≤ t < T and T ≤ t < T + τ is that of a free particle, obtain an explicit
integral representation for the wave function.
c) Show that the wave function can be expressed in terms of the Fresnel integrals
C(x) = \int_0^x dy\, \cos(\pi y^2 / 2) , \quad S(x) = \int_0^x dy\, \sin(\pi y^2 / 2) , (94)
and the parameter
\gamma = \frac{m b^2 (1 + \tau/T)}{\hbar \tau} . (95)
Discuss your findings.
3 Path Integrals in Quantum Statistical Mechanics
Path integrals can also be used to describe quantum systems at finite temperatures. To see how this works
we now consider a quantum mechanical particle coupled to a heat bath at a temperature T . An important
quantity in Statistical Mechanics is the partition function
Z(\beta) = \mathrm{Tr}\left[ e^{-\beta H} \right] , (96)
where H is the Hamiltonian of the system, Tr denotes the trace over the Hilbert space of quantum mechanical states, and
\beta = \frac{1}{k_B T} . (97)
Ensemble averages of the quantum mechanical observable O are given by
\langle O \rangle_\beta = \frac{1}{Z(\beta)} \mathrm{Tr}\left[ e^{-\beta H} O \right] . (98)
Taking the trace over a basis of eigenstates of H with H|n\rangle = E_n |n\rangle gives
\langle O \rangle_\beta = \frac{1}{Z(\beta)} \sum_n e^{-\beta E_n} \langle n | O | n \rangle , \qquad Z(\beta) = \sum_n e^{-\beta E_n} . (99)
where |0\rangle is the ground state of the system. Let us consider a QM particle with Hamiltonian
H = \frac{\hat{p}^2}{2m} + V(\hat{x}), (101)
coupled to a heat bath at temperature T. The partition function can be written in a basis of position eigenstates
Z(\beta) = \int dx\, \langle x | e^{-\beta H} | x \rangle = \int dx \int dx'\, \langle x | x' \rangle\, \langle x' | e^{-\beta H} | x \rangle . (102)
Here
\langle x' | e^{-\beta H} | x \rangle (103)
is very similar to the propagator
\langle x' | e^{-\frac{i(t-t')}{\hbar} H} | x \rangle . (104)
Formally (103) can be viewed as the propagator in imaginary time τ = it, where we consider propagation
from τ = 0 to τ = β~. Using this interpretation we can follow through precisely the same steps as before
and obtain
\langle x_N | e^{-\beta H} | x_0 \rangle = \lim_{N\to\infty} \left[\frac{m}{2\pi\hbar\epsilon}\right]^{\frac{N}{2}} \int dx_1 \dots dx_{N-1} \exp\!\left( -\frac{\epsilon}{\hbar} \sum_{n=0}^{N-1} \left[ \frac{m}{2}\left(\frac{x_{n+1}-x_n}{\epsilon}\right)^2 + V(x_n) \right] \right) , (105)
where now
\epsilon = \frac{\hbar\beta}{N} . (106)
We again can interpret this in terms of a sum over paths x(\tau) weighted by the Euclidean action
S_E[x(\tau)] = \int_0^{\hbar\beta} d\tau \left[ \frac{m}{2} \left(\frac{dx}{d\tau}\right)^2 + V(x(\tau)) \right] , (107)
\langle x_N | e^{-\beta H} | x_0 \rangle = \mathcal{N} \int Dx(\tau)\, e^{-\frac{1}{\hbar} S_E[x(\tau)]} , (108)
and the path integral is over all paths that start at x0 and end at xN . Substituting (108) into the expression
for the partition function we find that
Z(\beta) = \mathcal{N} \int Dx(\tau)\, e^{-\frac{1}{\hbar} S_E[x(\tau)]} , (110)
H = \frac{\hat{p}^2}{2m} + \frac{\kappa}{2} \hat{x}^2 . (112)
The physical quantities we want to work out are the averages of powers of the position operator,
\langle \hat{x}^n \rangle_\beta = \frac{1}{Z(\beta)} \mathrm{Tr}\left[ e^{-\beta H}\, \hat{x}^n \right] . (113)
If we know all these moments, we can work out the probability distribution for a position measurement
giving a particular result x. At zero temperature this is just given by the absolute value squared of the
ground state wave function. The coupling to the heat bath will generate “excitations” of the harmonic
oscillator and thus affect this probability distribution. We have
\langle x | e^{-\beta H} | x \rangle = \mathcal{N} \int Dx(\tau)\, e^{-\frac{1}{\hbar} \int_0^{\hbar\beta} d\tau \left[ \frac{m}{2} \left(\frac{dx}{d\tau}\right)^2 + \frac{\kappa}{2} x^2 \right]} , (114)
where the path integral is over all paths with x(0) = x(\beta\hbar). Integrating by parts we can write the action as
-\frac{1}{\hbar} S_E[x(\tau)] = -\frac{1}{\hbar} \int_0^{\hbar\beta} d\tau \left[ \frac{m}{2} \left(\frac{dx}{d\tau}\right)^2 + \frac{\kappa}{2} x^2 \right] = -\frac{1}{2} \int_0^{\hbar\beta} d\tau\, x(\tau) \hat{D} x(\tau) - \frac{m}{2\hbar} \Big[ x(\tau) \dot{x}(\tau) \Big]_0^{\hbar\beta} , (115)
where
\hat{D} = -\frac{m}{\hbar} \frac{d^2}{d\tau^2} + \frac{\kappa}{\hbar} . (116)
The contributions from the integration boundaries in (115) don’t play a role in the logic underlying the
following steps leading up to (124) and work out in precisely the same way as the “bulk” contributions. In
order to show that we’re not dropping anything important we’ll keep track of them anyway. We now define
the generating functional
W[J] \equiv \mathcal{N} \int Dx(\tau)\, e^{-\frac{1}{\hbar} S_E[x(\tau)] + \int_0^{\hbar\beta} d\tau\, J(\tau) x(\tau)} . (117)
Here the functions J(τ ) are called sources. The point of the definition (117) is that we can obtain hx̂n iβ by
taking functional derivatives
\langle \hat{x}^n \rangle_\beta = \frac{1}{W[0]} \left. \frac{\delta}{\delta J(0)} \cdots \frac{\delta}{\delta J(0)}\, W[J] \right|_{J=0} . (118)
We now could go ahead and calculate the generating functional by going back to the definition of the path integral in terms of a multidimensional Gaussian integral. In practice we manipulate the path integral itself as follows. Apart from the contribution from the integration boundaries the structure of (115) is
completely analogous to the one we encountered for Gaussian integrals, cf (31). The change of variables
(32) suggests that we should shift our “integration variables” by a term involving the inverse of the integral
operator D̂. The latter corresponds to the Green’s function defined by
D̂τ G(τ − τ 0 ) = δ(τ − τ 0 ) , G(0) = G(β~). (119)
We then change variables in the path integral in order to “complete the square”,
y(\tau) = x(\tau) - \int d\tau'\, G(\tau - \tau') J(\tau') . (120)
In the last step you need to use (119) and integrate by parts twice to simplify the last term in the second line.
Exercise
Putting everything together we arrive at
-\frac{1}{\hbar} S_E[x(\tau)] + \int_0^{\hbar\beta} d\tau\, J(\tau) x(\tau) = -\frac{1}{\hbar} S_E[y(\tau)] + \frac{1}{2} \int_0^{\hbar\beta} d\tau\, d\tau'\, J(\tau) G(\tau - \tau') J(\tau') . (123)
On the other hand, the Jacobian of the change of variables (120) is 1 as we are shifting all paths by the
same constant (you can show this directly by going back to the definition of the path integral in terms of
multiple Gaussian integrals). Hence we have Dy(τ ) = Dx(τ ) and our generating functional becomes
W[J] = W[0]\, e^{\frac{1}{2} \int d\tau\, d\tau'\, J(\tau) G(\tau - \tau') J(\tau')} . (124)
Now we are ready to calculate (118). The average position is zero,
\langle \hat{x} \rangle_\beta = \frac{1}{W[0]} \left. \frac{\delta}{\delta J(0)} W[J] \right|_{J=0} = \left. \frac{1}{2W[0]} \int d\tau\, d\tau' \left[ \delta(\tau) G(\tau - \tau') J(\tau') + J(\tau) G(\tau - \tau') \delta(\tau') \right] W[J] \right|_{J=0} = 0. (125)
Here we have used that
δJ(τ )
= δ(τ − τ 0 ). (126)
δJ(τ 0 )
The expression (125) vanishes, because we have a “left over” J and obtain zero when setting all sources to
zero in the end of the calculation. By the same mechanism we have
hx̂2n+1 iβ = 0. (127)
Next we turn to
\langle \hat{x}^2 \rangle_\beta = \frac{1}{W[0]} \left. \frac{\delta}{\delta J(0)} \frac{\delta}{\delta J(0)} W[J] \right|_{J=0}
= \frac{1}{W[0]} \left. \frac{\delta}{\delta J(0)}\, \frac{1}{2} \int d\tau\, d\tau' \left[ \delta(\tau) G(\tau - \tau') J(\tau') + J(\tau) G(\tau - \tau') \delta(\tau') \right] W[J] \right|_{J=0} = G(0). (128)
So the mean square deviation of the oscillator’s position is equal to the Green’s function evaluated at zero.
G(\tau) = \frac{1}{\beta\kappa} \sum_{n=-\infty}^{\infty} \frac{\omega^2}{\omega^2 + \omega_n^2}\, e^{i\omega_n \tau} , (132)
where \omega = \sqrt{\kappa/m} and \omega_n = 2\pi n/(\hbar\beta) are the Matsubara frequencies. Using contour integration techniques this can be rewritten as
G(\tau) = \frac{\hbar\omega}{2\kappa} \left[ \frac{e^{\omega|\tau|}}{e^{\hbar\beta\omega} - 1} + \frac{e^{-\omega|\tau|}}{1 - e^{-\hbar\beta\omega}} \right] . (133)
Setting \tau = 0 gives
G(0) = \frac{\hbar\omega}{2\kappa}\, \frac{1}{\tanh(\beta\hbar\omega/2)} = \frac{\hbar\omega}{\kappa} \left[ \frac{1}{e^{\beta\hbar\omega} - 1} + \frac{1}{2} \right] . (134)
We can relate this result to things we already know: using equipartition
H = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2}
and wavefunctions obey \psi(x + L) = \psi(x). We want to determine the imaginary time propagator
\langle x_1 | \exp(-\beta H) | x_2 \rangle .
a) What are the eigenstates and eigenvalues of H? As we are dealing with a free particle, we can determine
the propagator as in the lectures in a simple way by inserting resolutions of the identity in terms of the eigenstates
of H. Show that this leads to the following result
\langle x_1 | \exp(-\beta H) | x_2 \rangle = \sum_{n=-\infty}^{\infty} \frac{1}{L} \exp\!\left( -\frac{\beta (2\pi n)^2 \hbar^2}{2mL^2} + 2\pi i n\, \frac{[x_1 - x_2]}{L} \right) . (139)
b) Next, approach this using a path integral in which paths x(τ ) for 0 ≤ τ ≤ β~ satisfy the boundary
conditions x(0) = x1 and x(β~) = x2 . The special feature of a particle moving on a circle is that such paths
may wind any integer number l times around the circle. To build in this feature, write
x(\tau) = x_1 + \left[ (x_2 - x_1) + lL \right] \frac{\tau}{\beta\hbar} + s(\tau),
where the contribution s(\tau) obeys the simpler boundary conditions s(0) = s(\beta\hbar) = 0 and does not wrap around the circle. Show that the Euclidean action for the system on such a path is
S[x(\tau)] = S_l + S[s(\tau)] \quad \text{where} \quad S_l = \frac{m}{2\beta\hbar} \left[ (x_2 - x_1) + lL \right]^2 \quad \text{and} \quad S[s(\tau)] = \int_0^{\beta\hbar} d\tau\, \frac{m}{2} \left(\frac{ds}{d\tau}\right)^2 .
where Z0 is the diagonal matrix element hx|e−βH |xi for a free particle (i.e. without periodic boundary conditions)
moving in one dimension.
d) Argue on the basis of the result you obtained in Qu 3. for the propagator of a free particle that
Z_0 = \left( \frac{m}{2\pi\beta\hbar^2} \right)^{1/2} . (141)
e) Show that the expressions in Eq. (139) and Eq. (140) are indeed equal. To do so, you should use the Poisson
summation formula
\sum_{l=-\infty}^{\infty} \delta(y - l) = \sum_{n=-\infty}^{\infty} e^{-2\pi i n y}
(think about how to justify this). Introduce the left hand side of this expression into Eq. (140) by using the relation, valid for any smooth function f(y),
\sum_{l=-\infty}^{\infty} f(l) = \int_{-\infty}^{\infty} dy \sum_{l=-\infty}^{\infty} \delta(y - l)\, f(y) ,
substitute the right hand side of the summation formula, carry out the (Gaussian) integral on y, and hence
establish the required equality.
What is their significance? Graphically, the path integral in (142) is represented in Fig. 2. It consists of
[Figure 2: a path contributing to (142), with intermediate times \tau_1, \tau_2, \dots, \tau_n and positions x(0), x(\tau_1), \dots, x(\hbar\beta) marked along the \tau axis.]
several parts. The first part corresponds to propagation from x(0) to x(τ1 ) and the associated propagator is
The second part corresponds to propagation from x(τ1 ) to x(τ2 ), and we have a multiplicative factor of x(τ1 )
as well. This is equivalent to a factor
Repeating this analysis for the other pieces of the path we obtain
\prod_{j=1}^{n} \langle x(\tau_{j+1}) | e^{-H(\tau_{j+1} - \tau_j)/\hbar}\, \hat{x} | x(\tau_j) \rangle\, \langle x(\tau_1) | e^{-H\tau_1/\hbar} | x(0) \rangle , (145)
where \tau_{n+1} = \hbar\beta. Finally, in order to represent the full path integral (142) we need to integrate over the intermediate positions x(\tau_j) and impose periodicity of the path. Using that 1 = \int dx\, |x\rangle\langle x| and that W[0] = Z(\beta) we arrive at
\frac{1}{Z(\beta)} \int dx(0)\, \langle x(0) | e^{-H(\hbar\beta - \tau_n)/\hbar}\, \hat{x}\, e^{-H(\tau_n - \tau_{n-1})/\hbar}\, \hat{x} \dots \hat{x}\, e^{-H(\tau_2 - \tau_1)/\hbar}\, \hat{x}\, e^{-H\tau_1/\hbar} | x(0) \rangle
= \frac{1}{Z(\beta)} \mathrm{Tr}\left[ e^{-\beta H}\, \bar{x}(\tau_n) \bar{x}(\tau_{n-1}) \dots \bar{x}(\tau_1) \right] , (146)
Finally, if we analytically continue from imaginary time to real time \tau_j \to i t_j, the operators \bar{x}(\tau) turn into Heisenberg-picture operators
\hat{x}(t) \equiv e^{\frac{it}{\hbar} H}\, \hat{x}\, e^{-\frac{it}{\hbar} H} . (150)
The quantities that we get from (149) after analytic continuation are called n-point correlation functions
\langle T\, \hat{x}(t_1) \hat{x}(t_2) \dots \hat{x}(t_n) \rangle_\beta \equiv \frac{1}{Z(\beta)} \mathrm{Tr}\left[ e^{-\beta H}\, T\, \hat{x}(t_1) \hat{x}(t_2) \dots \hat{x}(t_n) \right] . (151)
Here T is a time-ordering operator that arranges the x̂(tj )’s in chronologically increasing order from right
to left. Such correlation functions are the central objects in both quantum field theory and many-particle
quantum physics.
then taking the functional derivatives, and finally setting all sources to zero we find that
\frac{1}{W[0]} \prod_{j=1}^{n} \left. \frac{\delta}{\delta J(\tau_j)}\, W[J] \right|_{J=0} = \sum_{P(1,\dots,n)} G(\tau_{P_1} - \tau_{P_2}) \dots G(\tau_{P_{n-1}} - \tau_{P_n}) . (153)
Here the sum is over all possible pairings of \{1, 2, \dots, n\} and G(\tau) is the Green's function (132). In particular we have
\langle T_\tau\, \bar{x}(\tau_1) \bar{x}(\tau_2) \rangle_\beta = G(\tau_1 - \tau_2). (154)
The fact that for “Gaussian theories” 3 like the harmonic oscillator n-point correlation functions can be
expressed as simple products over 2-point functions is known as Wick’s theorem.
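Wick's theorem can be illustrated by brute force for an ordinary Gaussian distribution: for zero-mean Gaussian variables the fourth moment equals the sum over the three pairings of two-point functions. The covariance matrix below is an arbitrary positive definite example chosen for this sketch.

    import numpy as np

    rng = np.random.default_rng(1)
    G = np.array([[1.0, 0.6, 0.3, 0.1],
                  [0.6, 1.2, 0.5, 0.2],
                  [0.3, 0.5, 0.9, 0.4],
                  [0.1, 0.2, 0.4, 1.1]])

    x = rng.multivariate_normal(np.zeros(4), G, size=2_000_000)
    lhs = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])
    rhs = G[0, 1] * G[2, 3] + G[0, 2] * G[1, 3] + G[0, 3] * G[1, 2]
    print(lhs, rhs)   # agree within Monte Carlo error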
So the expectation value of the delta-function indeed gives the correct result for the probability distribution
of a position measurement, namely the absolute value squared of the wave function. We then have
\langle \delta(\hat{x} - x_0) \rangle_\beta = \int_{-\infty}^{\infty} \frac{dk}{2\pi}\, \langle e^{ik(\hat{x} - x_0)} \rangle_\beta
= \int_{-\infty}^{\infty} \frac{dk}{2\pi}\, e^{-ikx_0}\, \frac{\mathcal{N}}{W[0]} \int Dx(\tau)\, e^{-\frac{1}{2} \int_0^{\hbar\beta} d\tau\, x(\tau) \hat{D} x(\tau) + \int_0^{\hbar\beta} d\tau\, x(\tau)\, ik\,\delta(\tau)} . (156)
This is a special case of our generating functional, where the source is given by J(τ ) = ikδ(τ ). We therefore
can use (124) to obtain
\langle \delta(\hat{x} - x_0) \rangle_\beta = \int_{-\infty}^{\infty} \frac{dk}{2\pi}\, e^{-ikx_0}\, \frac{W[ik\delta(\tau)]}{W[0]} = \int_{-\infty}^{\infty} \frac{dk}{2\pi}\, e^{-ikx_0}\, e^{\frac{1}{2} \int_0^{\hbar\beta} d\tau \int_0^{\hbar\beta} d\tau'\, ik\delta(\tau)\, G(\tau - \tau')\, ik\delta(\tau')}
= \frac{1}{\sqrt{2\pi G(0)}}\, e^{-x_0^2 / 2G(0)} . (157)
To go from the first to the second line we have taken the integrals over τ and τ 0 (which are straightforward
because of the two delta functions) and finally carried out the k-integral using the one dimensional version
of (34). We see that our probability distribution is a simple Gaussian with a variance that depends on
temperature through G(0). Note that at zero temperature (157) reduces, as it must, to |ψ0 (x0 )|2 , where
ψ0 (x) is the ground state wave function of the harmonic oscillator.
³ These are theories in which the Lagrangian is quadratic in the generalized co-ordinates.
The partition function is
Zλ (β) = Wλ [0] . (160)
The idea is to expand (159) perturbatively in powers of \lambda:
W_\lambda[J] = \mathcal{N} \int Dx(\tau) \left[ 1 - \frac{\lambda}{4!\hbar} \int_0^{\hbar\beta} d\tau'\, x^4(\tau') + \dots \right] e^{-\frac{1}{\hbar} S_E[x(\tau)] + \int_0^{\hbar\beta} d\tau\, J(\tau) x(\tau)}
= \mathcal{N} \int Dx(\tau) \left[ 1 - \frac{\lambda}{4!\hbar} \int_0^{\hbar\beta} d\tau' \left[ \frac{\delta}{\delta J(\tau')} \right]^4 + \dots \right] e^{-\frac{1}{\hbar} S_E[x(\tau)] + \int_0^{\hbar\beta} d\tau\, J(\tau) x(\tau)}
= \mathcal{N} \int Dx(\tau)\, e^{-\frac{\lambda}{4!\hbar} \int_0^{\hbar\beta} d\tau' \left[ \frac{\delta}{\delta J(\tau')} \right]^4}\, e^{-\frac{1}{\hbar} S_E[x(\tau)] + \int_0^{\hbar\beta} d\tau\, J(\tau) x(\tau)}
= e^{-\frac{\lambda}{4!\hbar} \int_0^{\hbar\beta} d\tau' \left[ \frac{\delta}{\delta J(\tau')} \right]^4}\, W_0[J]. (161)
(b) The interaction vertex -\frac{\lambda}{4!\hbar} \int_0^{\hbar\beta} d\tau is represented by the diagram in Fig. 3.
Combining these two elements, we can express the integral \lambda\gamma_1(\beta) by the diagram in Fig. 4. Here the factor of 3 is a combinatorial factor associated with the diagram.
Figure 3: Graphical representation of the interaction vertex.
Figure 4: Feynman diagram for the 1st order perturbative contribution to the partition function.
\lambda^2 \gamma_2(\beta) = \frac{1}{2} \left( \frac{\lambda}{4!\hbar} \right)^2 72 \int_0^{\hbar\beta} d\tau \int_0^{\hbar\beta} d\tau'\, G(\tau - \tau)\, G^2(\tau - \tau')\, G(\tau' - \tau')
+ \frac{1}{2} \left( \frac{\lambda}{4!\hbar} \right)^2 24 \int_0^{\hbar\beta} d\tau \int_0^{\hbar\beta} d\tau'\, G^4(\tau - \tau')
+ \frac{1}{2} \left( \frac{\lambda}{4!\hbar} \right)^2 9 \left[ \int_0^{\hbar\beta} d\tau\, G^2(\tau - \tau) \right]^2 . (165)
The corresponding Feynman diagrams are shown in Fig.5. They come in two types: the first two are
connected, while the third is disconnected.
Figure 5: Feynman diagram for the 2nd order perturbative contribution to the partition function.
The point about the Feynman diagrams is that rather than carrying out functional derivatives and then
representing various contributions in diagrammatic form, in practice we do the calculation by writing down
the diagrams first and then working out the corresponding integrals! How do we know what diagrams to
draw? As we are dealing with the partition function, we can never produce a diagram with a line sticking
out: all (imaginary) times must be integrated over. Such diagrams are sometimes called vacuum diagrams.
Now, at first order in λ, we only have a single vertex, i.e. a single integral over τ . The combinatorics works
out as follows:
1. We have to count the number of ways of connecting a single vertex to two lines, that reproduce the
diagram we want.
The last term we have written is the one that gives rise to our diagram, so we have a factor
\frac{1}{2^3} (167)
to begin with.
3. Now, the combinatorics of acting with the functional derivatives is the same as the one of connecting
a single vertex to two lines. There are 4 ways of connecting the first line to the vertex, and 3 ways of
connecting the second. Finally there are two ways of connecting the end of the first line to the vertex
as well. The end of the second line must then also be connected to the vertex to give our diagram,
but there is no freedom left. Altogether we obtain a factor of 24. Combining this with the factor of
1/8 we started with gives a combinatorial factor of 3. That’s a Bingo!
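The factor of 3 can also be checked against an “exact” calculation. The sketch below (parameter values and basis truncation are assumptions, not taken from the notes) diagonalizes H = p²/2m + κx²/2 + λx⁴/4! in a truncated harmonic oscillator basis and compares the first-order change of the partition function with λγ₁(β) = −λβG(0)²/8, i.e. the figure-eight diagram with its combinatorial factor 3.

    import numpy as np

    hbar, m, omega, beta, lam = 1.0, 1.0, 1.0, 2.0, 1e-3
    kappa = m * omega**2
    M = 80                                    # basis truncation (assumption of this sketch)

    n = np.arange(M)
    a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator in the HO basis
    x = np.sqrt(hbar / (2 * m * omega)) * (a + a.T)
    H0 = np.diag(hbar * omega * (n + 0.5))
    H = H0 + lam * np.linalg.matrix_power(x, 4) / 24.0

    Z0 = np.sum(np.exp(-beta * np.linalg.eigvalsh(H0)))
    Zlam = np.sum(np.exp(-beta * np.linalg.eigvalsh(H)))

    G0 = hbar * omega / (2 * kappa * np.tanh(beta * hbar * omega / 2))   # Eq. (134)
    first_order = -lam * beta * G0**2 / 8.0

    print((Zlam - Z0) / Z0, first_order)      # agree up to O(lambda^2)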
H(\lambda_1, \lambda_2) = \frac{\hat{p}^2}{2m} + \frac{\kappa}{2} \hat{x}^2 + \frac{\lambda_1}{3!} \hat{x}^3 + \frac{\lambda_2}{4!} \hat{x}^4 , (168)
where \kappa, \lambda_{1,2} > 0 and \lambda_1^2 - 3\kappa\lambda_2 < 0. Define a generating functional by
W_{\lambda_1,\lambda_2}[J] = \mathcal{N} \int Dx(\tau)\, \exp\!\left\{ -\frac{1}{\hbar} S_E[x(\tau)] + \int_0^{\hbar\beta} d\tau\, J(\tau) x(\tau) + U[x(\tau)] \right\} , (169)
where
U[x(\tau)] = -\frac{1}{\hbar} \int_0^{\hbar\beta} d\tau \left[ \frac{\lambda_1}{3!} x^3(\tau) + \frac{\lambda_2}{4!} x^4(\tau) \right] , \qquad \hat{D} = -\frac{m}{\hbar} \frac{d^2}{d\tau^2} + \frac{\kappa}{\hbar} . (170)
a) Show that the partition function is equal to
c) Determine the first order perturbative corrections in λ1 and λ2 to the partition function. Draw the corresponding
Feynman diagrams.
d) Determine the perturbative correction to the partition function proportional to λ21 . Draw the corresponding
Feynman diagrams. Are there corrections of order λ1 λ2 ?
e)∗ Determine the first order corrections to the two-point function
Draw the corresponding Feynman diagrams. What diagrams do you get in second order in perturbation theory?
Part II
Path Integrals and Transfer Matrices
4 Relation of D dimensional quantum systems to D + 1 dimensional
classical ones
Let’s start by defining what we mean by the spatial “dimension” D of a system. Let us do this by considering
a (quantum) field theory. There the basic objects are fields, that depend on time and are defined at all
points of a D-dimensional space. This value of D defines what we mean by the spatial dimension. For
example, in electromagnetism we have D = 3. In this terminology a single quantum mechanical particle or
spin are zero-dimensional systems. On the other hand, a linear chain of spins is a one-dimensional system,
while a bcc lattice of spins has D = 3. Interestingly, there is a representation of D dimensional quantum
systems in terms of D + 1 dimensional classical ones. We will now establish this for the particular case of
the simple quantum mechanical harmonic oscillator.
In classical statistical mechanics the partition function of the system is
Z = \sum_{\text{configurations } C} e^{-\beta E(C)} . (174)
Here the sum is over all possible configurations C, and E(C) is the corresponding energy. Thermal averages
of observables are given by
\langle O \rangle_\beta = \frac{1}{Z} \sum_{\text{configurations } C} O(C)\, e^{-\beta E(C)} , (175)
where O(C) is the value of the observable O in configuration C. The average energy is
E = \frac{1}{Z} \sum_{\text{configurations } C} E(C)\, e^{-\beta E(C)} = -\frac{\partial}{\partial\beta} \ln(Z). (176)
Z(\beta) = \lim_{N\to\infty} \left[\frac{m}{2\pi\hbar\epsilon}\right]^{\frac{N}{2}} \int dx \int dx_1 \dots dx_{N-1} \exp\!\left( -\frac{\epsilon}{\hbar} \sum_{n=0}^{N-1} \left[ \frac{m}{2}\left(\frac{x_{n+1}-x_n}{\epsilon}\right)^2 + V(x_n) \right] \right) , (179)
where we have set x0 = xN = x. For a given value of N , this can be interpreted as the partition function
of N classical degrees of freedom xj , that can be thought of as deviations of classical particles from their
equilibrium positions, cf. Fig. 6. In this interpretation V (xj ) is simply a potential energy associated with
moving the j-th particle a distance x_j away from its equilibrium position, while \frac{m}{2}(x_{n+1} - x_n)^2/\epsilon^2 describes
Figure 6: Periodic array of classical particles.
an interaction energy that favours equal displacements, i.e. xn = xn+1 . Importantly, the temperature Tcl of
this one-dimensional classical model equals
~
kB Tcl = = N kB T. (180)
So for large values of N (and fixed T ) this temperature is very large. A convenient way for working out
partition functions in classical statistical mechanics is by using transfer matrices. In the case at hand, this
is defined as an integral operator \hat{T} with kernel
T(x, x') = \sqrt{\frac{m}{2\pi\hbar\epsilon}}\, e^{-\frac{\beta}{N} E_{cl}(x, x')} , \qquad E_{cl}(x, x') = \frac{m}{2} \left( \frac{x - x'}{\epsilon} \right)^2 + \frac{V(x) + V(x')}{2} . (181)
By construction \hat{T} is a real, symmetric operator and can therefore be diagonalized. Hence the partition function can be expressed in terms of the eigenvalues of \hat{T} using
\mathrm{Tr}(\hat{T}^N) = \sum_n \lambda_n^N . (184)
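The transfer matrix (181) can be put on a computer directly: discretizing x on a finite grid turns T̂ into an ordinary matrix, and Tr(T̂^N) can then be compared with the exact harmonic oscillator result Z = 1/(2 sinh(βħω/2)). Grid, box size and N below are illustrative choices, in units with ħ = 1.

    import numpy as np

    hbar, m, omega, beta, N = 1.0, 1.0, 1.0, 1.0, 40
    kappa = m * omega**2
    eps = hbar * beta / N

    x = np.linspace(-6, 6, 400)
    dx = x[1] - x[0]
    X, Xp = np.meshgrid(x, x, indexing="ij")
    V = lambda y: 0.5 * kappa * y**2

    E_cl = 0.5 * m * ((X - Xp) / eps)**2 + 0.5 * (V(X) + V(Xp))
    T = np.sqrt(m / (2 * np.pi * hbar * eps)) * np.exp(-(eps / hbar) * E_cl) * dx

    lam = np.linalg.eigvalsh(0.5 * (T + T.T))     # symmetrize against rounding
    Z_transfer = np.sum(lam**N)
    Z_exact = 1.0 / (2 * np.sinh(beta * hbar * omega / 2))
    print(Z_transfer, Z_exact)                    # agree up to discretization errors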
In order to get a clearer idea how to use transfer matrices in statistical mechanics problems we now turn
to a simpler example, the celebrated Ising model. This is in fact the key paradigm in the theory of phase
transitions.
5 The Ising Model
Ferromagnetism is an interesting phenomenon in solids. Some metals (like Fe or Ni) are observed to acquire a
finite magnetization below a certain temperature. Ferromagnetism is a fundamentally quantum mechanical
effect, and arises when electron spins spontaneously align along a certain direction. The Ising model is a
very crude attempt to model this phenomenon. It is defined as follows. We have a lattice in D dimensions
with N sites. On each site j of this lattice sits a “spin” variable σj , which can take the two values ±1.
These are referred to as “spin-up” and “spin-down” respectively. A given set {σ1 , σ2 , . . . , σN } specifies a
configuration. The corresponding energy is taken to be of the form
E(\{\sigma_j\}) = -J \sum_{\langle ij \rangle} \sigma_i \sigma_j - h \sum_{j=1}^N \sigma_j , (185)
where \langle ij \rangle denote nearest-neighbour bonds on our lattice and J > 0. The first term favours alignment of neighbouring spins, while h is like an applied magnetic field. Clearly, when h = 0 the lowest energy
states are obtained by choosing all spins to be either up or down. The question of interest is whether the
Ising model displays a finite temperature phase transition between a ferromagnetically ordered phase at low
temperatures, and a paramagnetic phase at high temperatures.
5.2.1 Transfer matrix approach
The general idea is to rewrite Z as a product of matrices. The transfer matrix T is taken to be a 2 × 2
matrix with elements
T_{\sigma\sigma'} = e^{-\beta E(\sigma, \sigma')} . (193)
Its explicit form is
T = \begin{pmatrix} T_{++} & T_{+-} \\ T_{-+} & T_{--} \end{pmatrix} = \begin{pmatrix} e^{\beta(J+h)} & e^{-\beta J} \\ e^{-\beta J} & e^{\beta(J-h)} \end{pmatrix} . (194)
The partition function can be expressed in terms of the transfer matrix as follows
Z = \sum_{\sigma_1, \dots, \sigma_N} T_{\sigma_1\sigma_2} T_{\sigma_2\sigma_3} \dots T_{\sigma_{N-1}\sigma_N} T_{\sigma_N\sigma_1} (195)
= \mathrm{Tr}\left( T^N \right) . (196)
The trace arises because we have imposed periodic boundary conditions. As T is a real symmetric matrix, it can be diagonalized, i.e.
U^\dagger T U = \begin{pmatrix} \lambda_+ & 0 \\ 0 & \lambda_- \end{pmatrix} , (197)
where U is a unitary matrix and
\lambda_\pm = e^{\beta J} \cosh(\beta h) \pm \sqrt{ e^{2\beta J} \sinh^2(\beta h) + e^{-2\beta J} } . (198)
Z = \mathrm{Tr}\left[ U U^\dagger T^N \right] = \mathrm{Tr}\left[ U^\dagger T^N U \right] = \mathrm{Tr}\left[ (U^\dagger T U)^N \right] = \mathrm{Tr} \begin{pmatrix} \lambda_+^N & 0 \\ 0 & \lambda_-^N \end{pmatrix} = \lambda_+^N + \lambda_-^N . (199)
So for large N, which is the case we are interested in, we have with exponential accuracy
Z \simeq \lambda_+^N . (201)
Given the partition function, we can now easily calculate the magnetization per site
m(h) = \frac{1}{N\beta} \frac{\partial}{\partial h} \ln(Z). (202)
In Fig. 7 we plot m(h) as a function of inverse temperature β = 1/kB T for two values of magnetic field h.
We see that for non-zero h, the magnetization per site takes its maximum value m = 1 at low temperatures.
At high temperatures it goes to zero. This is as expected, as at low T the spins align along the direction
of the applied field. However, as we decrease the field, the temperature below which m(h) approaches unity
decreases. In the limit h → 0, the magnetization per site vanishes at all finite temperatures. Hence there is
no phase transition to a ferromagnetically ordered state in the one dimensional Ising model.
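A few lines of code reproduce this behaviour from λ₊ of Eq. (198), using m(h) = (1/β) ∂ln(λ₊)/∂h, valid in the N → ∞ limit. The numerical-derivative step and the β values below are arbitrary illustrative choices (units k_B = J = 1).

    import numpy as np

    def lam_plus(beta, h, J=1.0):
        return np.exp(beta * J) * np.cosh(beta * h) + np.sqrt(
            np.exp(2 * beta * J) * np.sinh(beta * h)**2 + np.exp(-2 * beta * J))

    def m(beta, h, dh=1e-6):
        return (np.log(lam_plus(beta, h + dh)) - np.log(lam_plus(beta, h - dh))) / (2 * beta * dh)

    for beta in (1.0, 4.0, 8.0):
        print(beta, m(beta, 0.1), m(beta, 1e-5))
    # m(h = 0.1) is already close to 1 for beta of order 2, while for h = 1e-5
    # the rise towards 1 sets in only around beta ~ 5 (cf. Fig. 7).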
[Two panels: m(h = 0.1) (left) and m(h = 10^{-5}) (right) as functions of \beta, 2 \le \beta \le 10.]
Figure 7: Magnetization per site as a function of inverse temperature for two values of applied magnetic
field. We see that when we reduce the magnetic field, the temperature region in which the magnetization is
essentially zero grows.
Using that
(T\sigma^z)_{\sigma_{j-1}\sigma_j} = T_{\sigma_{j-1}\sigma_j}\, \sigma_j , (205)
where \sigma^z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} is the Pauli matrix, we obtain
\langle \sigma_j \rangle_\beta = \frac{1}{Z} \mathrm{Tr}\left[ T^{j-1} \sigma^z T^{N-j+1} \right] = \frac{1}{Z} \mathrm{Tr}\left[ T^N \sigma^z \right] . (206)
Diagonalizing T by means of a unitary transformation as before, this becomes
\langle \sigma_j \rangle_\beta = \frac{1}{Z} \mathrm{Tr}\left[ U^\dagger T^N U\, U^\dagger \sigma^z U \right] = \frac{1}{Z} \mathrm{Tr}\left[ \begin{pmatrix} \lambda_+^N & 0 \\ 0 & \lambda_-^N \end{pmatrix} U^\dagger \sigma^z U \right] . (207)
The matrix U can be expressed in terms of the normalized eigenvectors |\pm\rangle of T as
U = (|+\rangle, |-\rangle). (209)
For h = 0 we have
U\big|_{h=0} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} . (210)
This gives
\langle \sigma_j \rangle_\beta \Big|_{h=0} = 0. (211)
For general h the expression is more complicated,
U = \begin{pmatrix} \frac{\alpha_+}{\sqrt{1+\alpha_+^2}} & \frac{\alpha_-}{\sqrt{1+\alpha_-^2}} \\ \frac{1}{\sqrt{1+\alpha_+^2}} & \frac{1}{\sqrt{1+\alpha_-^2}} \end{pmatrix} , \qquad \alpha_\pm = \sqrt{1 + e^{4\beta J} \sinh^2(\beta h)} \pm e^{2\beta J} \sinh(\beta h). (212)
This now allows us to prove that in the one dimensional Ising model there is no phase transition at any finite temperature:
\lim_{h\to 0} \lim_{N\to\infty} \langle \sigma_j \rangle_\beta = 0 , \quad \beta < \infty. (214)
Note the order of the limits here: we first take the infinite volume limit at finite h, and only afterwards
take h to zero. This procedure allows for spontaneous symmetry breaking to occur, but the outcome of our
calculation is that the spin reversal symmetry remains unbroken at any finite temperature.
Similarly, we find
\langle \sigma_j \sigma_{j+r} \rangle_\beta = \frac{1}{Z} \mathrm{Tr}\left[ T^{j-1} \sigma^z T^r \sigma^z T^{N+1-j-r} \right] = \frac{1}{Z} \mathrm{Tr}\left[ U^\dagger \sigma^z U \begin{pmatrix} \lambda_+^r & 0 \\ 0 & \lambda_-^r \end{pmatrix} U^\dagger \sigma^z U \begin{pmatrix} \lambda_+^{N-r} & 0 \\ 0 & \lambda_-^{N-r} \end{pmatrix} \right] . (215)
So in zero field the two-point function decays exponentially with correlation length
\xi = \frac{1}{\ln \coth(\beta J)} . (217)
5.3 The Two-Dimensional Ising Model
We now turn to the 2D Ising model on a square lattice with periodic boundary conditions. The spin variables
have now two indices corresponding to rows and columns of the square lattice respectively
σj,k = ±1 , j, k = 1, . . . , N. (223)
The boundary conditions are \sigma_{k,N+1} = \sigma_{k,1} and \sigma_{N+1,j} = \sigma_{1,j}, which correspond to the lattice “living” on a torus.
The idea of the transfer matrix method is again to write this in terms of matrix multiplications. The
difference to the one dimensional case is that the transfer matrix will now be much larger. We start by
expressing the partition function in the form
Z = \sum_{\{\sigma_{j,k}\}} e^{-\beta \sum_{k=1}^N E(k; k+1)} , (226)
where
E(k; k+1) = -J \sum_{j=1}^N \left\{ \sigma_{k,j}\sigma_{k+1,j} + \frac{1}{2} \left[ \sigma_{k,j}\sigma_{k,j+1} + \sigma_{k+1,j}\sigma_{k+1,j+1} \right] \right\} . (227)
This energy depends only on the configurations of spins on rows k and k + 1, i.e. on spins σk,1 , . . . , σk,N
and σk+1,1 , . . . , σk+1,N . Each configuration of spins on a given row specifies a sequence s1 , s2 , . . . , sN with
sj = ±1. Let us associate a vector
|si (228)
with each such sequence. By construction there are 2^N such vectors. We then define a scalar product on the space spanned by these vectors by
\langle t | s \rangle = \prod_{j=1}^N \delta_{t_j, s_j} . (229)
With this definition, the vectors {|si} form an orthonormal basis of a 2N dimensional linear vector space.
In particular we have
I = \sum_s |s\rangle\langle s|. (230)
The point of this construction is that the partition function can now be written in the form
Z = \sum_{\sigma_1} \sum_{\sigma_2} \cdots \sum_{\sigma_N} \langle \sigma_1 | T | \sigma_2 \rangle \langle \sigma_2 | T | \sigma_3 \rangle \dots \langle \sigma_{N-1} | T | \sigma_N \rangle \langle \sigma_N | T | \sigma_1 \rangle . (232)
We now may use (230) to carry out the sums over spins, which gives
Z = \mathrm{Tr}\left[ T^N \right] , (233)
where the trace is over our basis \{|s\rangle \,|\, s_j = \pm 1\} of our 2^N dimensional vector space. Like in the 1D case, thermodynamic properties involve only the largest eigenvalues of T. Indeed, we have
Z = \sum_{j=1}^{2^N} \lambda_j^N , (234)
where λmax is the largest eigenvalue of T , which we assume to be unique. As |λj /λmax | < 1, the second
contribution in (235) is bounded by −kB T N ln(2), and we see that in the thermodynamic limit the free
energy per site is
f = \lim_{N\to\infty} \frac{F}{N^2} = \lim_{N\to\infty} -\frac{k_B T}{N} \ln(\lambda_{max}). (236)
Thermodynamic quantities are obtained by taking derivatives of f and hence only involve the largest eigen-
value of T. The main complication we have to deal with is that T is still a very large matrix. This poses the question of why we should bother with a transfer matrix description at all. Calculating Z from its basic definition (225) involves a sum with 2^{N^2} terms, i.e. at least 2^{N^2} operations on a computer. Finding the largest eigenvalue of an M \times M matrix involves O(M^2) operations, which in our case amounts to O(2^{2N}). For large values of N this amounts to an enormous simplification.
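For small N the 2^N × 2^N transfer matrix (226)-(227) can even be built by brute force and its largest eigenvalue used to estimate the free energy per site; the value of β and the range of N below are illustrative choices (units k_B = J = 1), and the convergence with N illustrates why only λ_max matters.

    import numpy as np
    from itertools import product

    def free_energy_per_site(N, beta, J=1.0):
        configs = np.array(list(product([1, -1], repeat=N)))          # 2^N row configurations
        T = np.empty((2**N, 2**N))
        for a, s in enumerate(configs):
            for b, t in enumerate(configs):
                E = -J * np.sum(s * t
                                + 0.5 * (s * np.roll(s, -1) + t * np.roll(t, -1)))
                T[a, b] = np.exp(-beta * E)
        lam_max = np.linalg.eigvalsh(T).max()
        return -np.log(lam_max) / (beta * N)

    beta = 0.3
    for N in (2, 3, 4, 5, 6, 7):
        print(N, free_energy_per_site(N, beta))   # converges quickly with N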
Figure 9: Phase Transition in the square lattice Ising model.
The operation (237) is a discrete (two-fold) symmetry of the Ising model. Because we have translational
invariance, the magnetization per site is
m = hσj,k iβ . (238)
Hence a non-zero value of m signifies the spontaneous breaking of the discrete symmetry (237). In order to
describe this effect mathematically, we have to invoke a bit of trickery. Let us consider zero temperature.
Then there are exactly two degenerate lowest energy states: the one with all spins σj,k = +1 and the one
with all spins \sigma_{j,k} = -1. We now apply a very small magnetic field \epsilon > 0 to the system, i.e. add a term
\delta E = -\epsilon \sum_{j,k} \sigma_{j,k} (239)
to the energy. This splits the two states, which now have energies
E_\pm = -J N_B \mp \epsilon N^2 , (240)
where N_B is the number of bonds. The next step is key: we now define the thermodynamic limit of the free energy per site as
f(T) \equiv \lim_{\epsilon\to 0} \lim_{N\to\infty} \frac{-k_B T \ln(Z)}{N^2} . (241)
The point is that the contributions Z_\pm = e^{-\beta E_\pm} of the two states to Z are such that
\frac{Z_-}{Z_+} = e^{-2\epsilon N^2 / k_B T} . (242)
This goes to zero when we take N to infinity! So in the above sequence of limits, only the state with all
spins up contributes to the partition function, and this provides a way of describing spontaneous symmetry
breaking! The key to this procedure is that
The procedure we have outlined above, i.e. introducing a symmetry breaking field, then taking the infinite
volume limit, and finally removing the field, is very general and applies to all instances where spontaneous
symmetry breaking occurs.
5.4 Homework Questions 8-10
Question 8. A lattice model for non-ideal gas is defined as follows. The sites i of a lattice may be empty or
occupied by at most one atom, and the variable ni takes the values ni = 0 and ni = 1 in the two cases. There
is an attractive interaction energy J between atoms that occupy neighbouring sites, and a chemical potential µ.
The model Hamiltonian is
H = -J \sum_{\langle ij \rangle} n_i n_j - \mu \sum_i n_i , (244)
where \sum_{\langle ij \rangle} is a sum over neighbouring pairs of sites.
(a) Describe briefly how the transfer matrix method may be used to calculate the statistical-mechanical properties
of one-dimensional lattice models with short range interactions. Illustrate your answer by explaining how the
partition function for a one-dimensional version of the lattice gas, Eq. (1), defined on a lattice of N sites with
periodic boundary conditions, may be evaluated using the matrix
T = \begin{pmatrix} 1 & e^{\beta\mu/2} \\ e^{\beta\mu/2} & e^{\beta(J+\mu)} \end{pmatrix} .
(b) Derive an expression for hni i in the limit N → ∞, in terms of elements of the eigenvectors of this matrix.
(c) Show that
\langle n_i \rangle = \frac{1}{1 + e^{-2\theta}} ,
where
\sinh(\theta) = \exp(\beta J / 2)\, \sinh(\beta[J + \mu]/2) .
Sketch \langle n_i \rangle as a function of \mu for \beta J \gg 1, and comment on the physical significance of your result.
Question 9. The one-dimensional 3-state Potts model is defined as follows. At lattice sites i = 0, 1, . . . , L
“spin” variables σi take integer values σi = 1, 2, 3. The Hamiltonian is then given by
H = -J \sum_{i=0}^{L-1} \delta_{\sigma_i, \sigma_{i+1}} , (245)
Question 10. Consider a one dimensional Ising model on an open chain with N sites, where N is odd. On all
even sites a magnetic field 2h is applied, see Fig. 10. The energy is
E = -J \sum_{j=1}^{N-1} \sigma_j \sigma_{j+1} + 2h \sum_{j=1}^{(N-1)/2} \sigma_{2j} . (248)
Figure 10: Open Ising chain with magnetic field applied to all even sites.
(a) Show that the partition function can be written in the form
where T is an appropriately constructed transfer matrix, and |u\rangle and |v\rangle are two-dimensional vectors. Give explicit expressions for T, |u\rangle and |v\rangle.
(b) Calculate Z for the case h = 0.
Part III
Many-Particle Quantum Mechanics
In the basic QM course you encountered only quantum systems with very small numbers of particles. In
the harmonic oscillator problem we are dealing with a single QM particle, when solving the hydrogen atom
we had one electron and one nucleus. Perhaps the most important field of application of quantum physics
is to systems of many particles. Examples are the electronic degrees of freedom in solids, superconductors,
trapped ultra-cold atomic gases, magnets and so on. The methods you have encountered in the basic QM
course are not suitable for studying such problems. In this part of the course we introduce a framework that will allow us to study the QM of many-particle systems. This new way of looking at things will also
reveal very interesting connections to Quantum Field Theory.
6 Second Quantization
The formalism we develop in the following is known as second quantization.
H_j = \frac{\hat{p}_j^2}{2m} + V(\hat{\mathbf{r}}_j) = -\frac{\hbar^2}{2m} \nabla_j^2 + V(\hat{\mathbf{r}}_j). (251)
The key to solving such problems is that [Hj , Hl ] = 0. We’ll now briefly review the necessary steps,
switching back and forth quite freely between using states and operators acting on them, and the position
representation of the problem (i.e. looking at wave functions).
• Step 1. Solve the single-particle problem
This follows from the fact that in the position representation Hj is a differential operator that acts
only on the j’th position rj . The corresponding eigenstates are tensor products
• Step 3. Impose the appropriate exchange symmetry for indistinguishable particles, e.g.
\psi_\pm(\mathbf{r}_1, \mathbf{r}_2) = \frac{1}{\sqrt{2}} \left[ \phi_l(\mathbf{r}_1)\phi_m(\mathbf{r}_2) \pm \phi_l(\mathbf{r}_2)\phi_m(\mathbf{r}_1) \right] , \quad l \neq m. (257)
Generally we require
ψ(. . . , ri , . . . , rj , . . . ) = ±ψ(. . . , rj , . . . , ri , . . . ) , (258)
where the + sign corresponds to bosons and the − sign to fermions. This is achieved by taking
\psi_{l_1 \dots l_N}(\mathbf{r}_1, \dots, \mathbf{r}_N) = \mathcal{N} \sum_{P \in S_N} (\pm 1)^{|P|}\, \phi_{l_{P_1}}(\mathbf{r}_1) \dots \phi_{l_{P_N}}(\mathbf{r}_N), (259)
where the sum is over all permutations of (1, 2, . . . , N ) and |P | is the number of pair exchanges required
to reduce (P1 , . . . , PN ) to (1, . . . , N ). The normalization constant N is
\mathcal{N} = \frac{1}{\sqrt{N!\, n_1!\, n_2! \dots}} , (260)
where nj is the number of times j occurs in the set {l1 , . . . , lN }. For fermions the wave functions can
be written as Slater determinants
\psi_{l_1 \dots l_N}(\mathbf{r}_1, \dots, \mathbf{r}_N) = \frac{1}{\sqrt{N!}} \det \begin{pmatrix} \phi_{l_1}(\mathbf{r}_1) & \dots & \phi_{l_1}(\mathbf{r}_N) \\ \vdots & & \vdots \\ \phi_{l_N}(\mathbf{r}_1) & \dots & \phi_{l_N}(\mathbf{r}_N) \end{pmatrix} . (261)
where Q is an arbitrary permutation of (1, \dots, N). As the overall sign of the state is irrelevant, we can therefore choose them without loss of generality as
| \underbrace{1 \dots 1}_{n_1} \underbrace{2 \dots 2}_{n_2} \underbrace{3 \dots 3}_{n_3} 4 \dots \rangle \equiv |n_1 n_2 n_3 \dots \rangle. (264)
In (264) we have as many n_j's as there are single-particle eigenstates, i.e. dim H⁴. For fermions we have n_j = 0, 1 only as a consequence of the Pauli principle. The representation (264) is called occupation number representation. The n_j's tell us how many particles are in the single-particle state |j\rangle. By construction the states \{|n_1 n_2 n_3 \dots\rangle \,|\, \sum_j n_j = N\} form an orthonormal basis of our N-particle problem,
\langle m_1 m_2 m_3 \dots | n_1 n_2 n_3 \dots \rangle = \prod_j \delta_{n_j, m_j} , (265)
6.2 Fock Space
We now want to allow the particle number to vary. The main reason for doing this is that we will encounter
physical problems where particle number is in fact not conserved. Another motivation is that experimental
probes like photoemission change particle number, and we want to be able to describe these. The resulting
space of states is called Fock Space.
1. The state with no particles is called the vacuum state and is denoted by |0i.
2. N-particle states are |n_1 n_2 n_3 \dots\rangle with \sum_j n_j = N.
c_l |n_1 n_2 \dots \rangle = \sqrt{n_l}\, (\pm 1)^{\sum_{j=1}^{l-1} n_j}\, |n_1 n_2 \dots n_l - 1 \dots \rangle . (267)
Similarly we have
c_m c_l^\dagger | \dots n_l \dots n_m \dots \rangle = \sqrt{n_l + 1}\, \sqrt{n_m}\, (-1)^{1 + \sum_{j=l}^{m-1} n_j}\, | \dots n_l + 1 \dots n_m - 1 \dots \rangle. (272)
The case l < m works in the same way. This leaves us with the case l = m. Here we have
c_l^\dagger c_l | \dots n_l \dots \rangle = c_l^\dagger \sqrt{n_l}\, (-1)^{\sum_{j=1}^{l-1} n_j}\, | \dots n_l - 1 \dots \rangle = n_l\, | \dots n_l \dots \rangle. (275)
c_l c_l^\dagger | \dots n_l \dots \rangle = \begin{cases} c_l \sqrt{n_l + 1}\, (-1)^{\sum_{j=1}^{l-1} n_j}\, | \dots n_l + 1 \dots \rangle & \text{if } n_l = 0 , \\ 0 & \text{if } n_l = 1 , \end{cases} \;=\; \begin{cases} | \dots n_l \dots \rangle & \text{if } n_l = 0 , \\ 0 & \text{if } n_l = 1 . \end{cases} (276)
{c†l , cl } = 1. (278)
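These relations can be made very concrete by writing the fermionic operators as explicit matrices on the 2^L-dimensional Fock space of L single-particle states, with the string of signs of Eq. (267) (a Jordan–Wigner-type construction; the choice L = 3 below is illustrative, not part of the notes).

    import numpy as np
    from functools import reduce

    sp = np.array([[0.0, 1.0], [0.0, 0.0]])      # lowers n = 1 -> n = 0 in a single mode
    sz = np.diag([1.0, -1.0])                    # (-1)^n on a single mode (n = 0, 1)
    I2 = np.eye(2)

    def kron_all(ops):
        return reduce(np.kron, ops)

    L = 3
    c = [kron_all([sz] * l + [sp] + [I2] * (L - l - 1)) for l in range(L)]

    def anticomm(A, B):
        return A @ B + B @ A

    ok = all(np.allclose(anticomm(c[l], c[m].T), np.eye(2**L) * (l == m))
             and np.allclose(anticomm(c[l], c[m]), 0)
             for l in range(L) for m in range(L))
    print(ok)                                     # True: canonical anticommutation relations
    print(np.allclose(c[0].T @ c[0], kron_all([np.diag([0.0, 1.0]), I2, I2])))  # number operator n_1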
• Single-particle states
|0 \dots 0 \underbrace{1}_{l}\, 0 \dots \rangle = c_l^\dagger |0\rangle . (280)
• N-particle states
|n_1 n_2 \dots \rangle = \prod_j \frac{1}{\sqrt{n_j!}} \left( c_j^\dagger \right)^{n_j} |0\rangle . (281)
Question 12. A quantum-mechanical Hamiltonian for a system of an even number N of point unit masses
interacting by nearest-neighbour forces in one dimension is given by
H = \frac{1}{2} \sum_{r=1}^N \left[ p_r^2 + (q_{r+1} - q_r)^2 \right] ,
where the Hermitian operators qr , pr satisfy the commutation relations [qr , qs ] = [pr , ps ] = 0, [qr , ps ] = iδrs , and
where qr+N = qr . New operators Qk , Pk are defined by
q_r = \frac{1}{\sqrt{N}} \sum_k Q_k e^{ikr} \quad \text{and} \quad p_r = \frac{1}{\sqrt{N}} \sum_k P_k e^{-ikr} ,
and determine the canonical commutation relations for ak and a†p . Construct the Fock space of states and de-
termine the eigenstates and eigenvalues of H.
Question 13. Bosonic creation operators are defined through their action on basis states in the occupation
number representation as
c_l^\dagger |n_1 n_2 \dots \rangle = \sqrt{n_l + 1}\, |n_1 n_2 \dots n_l + 1 \dots \rangle , (282)
a) Deduce from this how bosonic annihilation operators act.
b) Show that the creation and annihilation operators fulfil canonical commutation relations
You know from second year QM that it is often convenient to switch from one basis to another, e.g. from
energy to momentum eigenstates. This is achieved by a unitary transformation
where
|\alpha\rangle = \sum_l \underbrace{\langle l | \alpha \rangle}_{U_{l\alpha}}\, |l\rangle. (286)
By construction
\sum_\alpha U_{l\alpha} U^\dagger_{\alpha m} = \sum_\alpha \langle l | \alpha \rangle \langle \alpha | m \rangle = \langle l | m \rangle = \delta_{lm} . (287)
We now want to “lift” this unitary transformation to the level of the Fock space. We know that
d_\alpha = \sum_l U^\dagger_{\alpha l}\, c_l . (291)
We emphasize that these transformation properties are compatible with the (anti)commutation relations (as
they must be). For fermions
\{d_\alpha, d_\beta^\dagger\} = \sum_{l,m} U^\dagger_{\alpha l} U_{m\beta} \underbrace{\{c_l, c_m^\dagger\}}_{\delta_{l,m}} = \sum_l U^\dagger_{\alpha l} U_{l\beta} = (U^\dagger U)_{\alpha\beta} = \delta_{\alpha,\beta} . (292)
where the operator ôj acts only on the j’th particle. Examples are kinetic and potential energy operators
\hat{T} = \sum_j \frac{\hat{p}_j^2}{2m} , \qquad \hat{V} = \sum_j V(\hat{x}_j). (296)
We want to represent Ô on the Fock space built from single-particle eigenstates |αi. We do this in two steps:
• Step 1: We first represent Ô on the Fock space built from the eigenstates of ô
As |n1 n2 . . . i constitute a basis, this together with (294) imply that we can represent Ô in the form
\hat{O} = \sum_k \lambda_k\, \hat{n}_k = \sum_k \lambda_k\, c_k^\dagger c_k . (300)
• Step 2: Now that we have a representation of Ô on the Fock space built from the single-particle states
|li, we can use a basis transformation to the basis {|αi} to obtain a representation on a general Fock
space. Using that hk|Ô|k 0 i = δk,k0 λk we can rewrite (300) in the form
\hat{O} = \sum_{k,k'} \langle k' | \hat{O} | k \rangle\, c_{k'}^\dagger c_k . (301)
Then we apply our general rules for a change of single-particle basis of the Fock space
c_k^\dagger = \sum_\alpha U^\dagger_{\alpha k}\, d_\alpha^\dagger . (302)
This gives
\hat{O} = \sum_{\alpha,\beta} \sum_{k',k} \underbrace{U^\dagger_{\alpha k'} \langle k'|}_{\langle\alpha|}\, \hat{O}\, \underbrace{|k\rangle U_{k\beta}}_{|\beta\rangle}\, d_\alpha^\dagger d_\beta = \sum_{\alpha,\beta} \langle \alpha | \hat{O} | \beta \rangle\, d_\alpha^\dagger d_\beta . (303)
We now work out a number of explicit examples of Fock space representations for single-particle operators.
Remark
These are shorthand notations for
and
hpx , py , pz |kx , ky , kz i = (2π~)3 δ(kx − px )δ(ky − py )δ(kz − pz ) . (308)
Remark
Using our general result for representing single-particle operators in a Fock space built from their
eigenstates (300) we have
\hat{\mathbf{P}} = \int \frac{d^3 p}{(2\pi\hbar)^3}\, \mathbf{p}\, c^\dagger(\mathbf{p}) c(\mathbf{p}) , \qquad [c^\dagger(\mathbf{k}), c(\mathbf{p})\} = (2\pi\hbar)^3 \delta^{(3)}(\mathbf{p} - \mathbf{k}). (309)
2. Single-particle Hamiltonian:
H = \sum_{j=1}^N \frac{\hat{p}_j^2}{2m} + V(\hat{x}_j). (315)
(i) Let us first consider H in the single-particle basis of energy eigenstates H|li = El |li, |li = c†l |0i.
Our result (300) tells us that
H = \sum_l E_l\, c_l^\dagger c_l . (316)
(ii) Next we consider the position representation, i.e. we take position eigenstates |xi = c† (x)|0i as a
basis of single-particle states. Then by (305)
Z
H = d3 xd3 x0 hx0 |H|xi c† (x0 )c(x). (317)
hx0 |V (x̂)|xi = V (x)δ (3) (x − x0 ) , hx0 |p̂2 |xi = −~2 ∇2 δ (3) (x − x0 ) , (318)
we arrive at the position representation
H = \int d^3x\, c^\dagger(\mathbf{x}) \left[ -\frac{\hbar^2 \nabla^2}{2m} + V(\mathbf{x}) \right] c(\mathbf{x}). (319)
(iii) Finally we consider the momentum representation, i.e. we take momentum eigenstates |pi =
c† (p)|0i as a basis of single-particle states. Then by (305)
H = \int \frac{d^3p\, d^3p'}{(2\pi\hbar)^6}\, \langle \mathbf{p}' | H | \mathbf{p} \rangle\, c^\dagger(\mathbf{p}') c(\mathbf{p}). (320)
where Ṽ (p) is essentially the three-dimensional Fourier transform of the (ordinary) function V (x).
Hence
H = \int \frac{d^3p}{(2\pi\hbar)^3}\, \frac{\mathbf{p}^2}{2m}\, c^\dagger(\mathbf{p}) c(\mathbf{p}) + \int \frac{d^3p\, d^3p'}{(2\pi\hbar)^6}\, \tilde{V}(\mathbf{p} - \mathbf{p}')\, c^\dagger(\mathbf{p}') c(\mathbf{p}). (323)
On the Fock space built from single-particle position eigenstates this is represented as
\hat{V} = \frac{1}{2} \int d^3r\, d^3r'\, c^\dagger(\mathbf{r}) c^\dagger(\mathbf{r}') V(\mathbf{r}, \mathbf{r}') c(\mathbf{r}') c(\mathbf{r}). (325)
Note that when writing down the first quantized expression (324), we assumed that the operators acts
specifically on states with N particles. On the other hand, (325) acts on the Fock space, i.e. on states
where the particle number can take any value. The action of (325) on N -particle states (where N is fixed
but arbitrary) is equal to the action of (324).
Derivation of (325)
Let us concentrate on the fermionic case. The bosonic case can be dealt with analogously. We start with our
original representation of N -particle states (262)
|\mathbf{r}_1, \dots, \mathbf{r}_N\rangle = \mathcal{N} \sum_{P \in S_N} (-1)^{|P|}\, |\mathbf{r}_{P_1}\rangle \otimes \dots \otimes |\mathbf{r}_{P_N}\rangle . (326)
Then
\hat{V} |\mathbf{r}_1, \dots, \mathbf{r}_N\rangle = \sum_{i<j} V(\mathbf{r}_i, \mathbf{r}_j) |\mathbf{r}_1, \dots, \mathbf{r}_N\rangle = \frac{1}{2} \sum_{i \neq j} V(\mathbf{r}_i, \mathbf{r}_j) |\mathbf{r}_1, \dots, \mathbf{r}_N\rangle . (327)
Now consider
c(\mathbf{r}) |\mathbf{r}_1, \dots, \mathbf{r}_N\rangle = c(\mathbf{r}) \prod_{j=1}^N c^\dagger(\mathbf{r}_j) |0\rangle = \Big[ c(\mathbf{r}), \prod_{j=1}^N c^\dagger(\mathbf{r}_j) \Big\} |0\rangle , (329)
where in the last step we have used that c(\mathbf{r})|0\rangle = 0, and [A, B\} is an anticommutator if both A and B involve an odd number of fermions and a commutator otherwise. In our case we have a commutator for even N and an anticommutator for odd N.
By repeatedly adding and subtracting terms we find that
\Big[ c(\mathbf{r}), \prod_{j=1}^N c^\dagger(\mathbf{r}_j) \Big\} = \{c(\mathbf{r}), c^\dagger(\mathbf{r}_1)\} \prod_{j=2}^N c^\dagger(\mathbf{r}_j) - c^\dagger(\mathbf{r}_1) \{c(\mathbf{r}), c^\dagger(\mathbf{r}_2)\} \prod_{j=3}^N c^\dagger(\mathbf{r}_j)
+ \dots + (-1)^{N-1} \prod_{j=1}^{N-1} c^\dagger(\mathbf{r}_j)\, \{c(\mathbf{r}), c^\dagger(\mathbf{r}_N)\}. (330)
Hence
\underbrace{c^\dagger(\mathbf{r}') c(\mathbf{r}')}_{\text{number op.}}\, c(\mathbf{r}) |\mathbf{r}_1, \dots, \mathbf{r}_N\rangle = \sum_{n=1}^N (-1)^{n-1} \delta^{(3)}(\mathbf{r} - \mathbf{r}_n) \sum_{m \neq n} \delta^{(3)}(\mathbf{r}' - \mathbf{r}_m)\, |\mathbf{r}_1 \dots \overbrace{\mathbf{r}_n}^{\text{missing}} \dots \mathbf{r}_N\rangle, (332)
and finally
c^\dagger(\mathbf{r}) c^\dagger(\mathbf{r}') c(\mathbf{r}') c(\mathbf{r}) |\mathbf{r}_1, \dots, \mathbf{r}_N\rangle = \sum_{n=1}^N \delta^{(3)}(\mathbf{r} - \mathbf{r}_n) \sum_{m \neq n} \delta^{(3)}(\mathbf{r}' - \mathbf{r}_m)\, |\mathbf{r}_1 \dots \mathbf{r}_n \dots \mathbf{r}_N\rangle. (333)
c^\dagger(\mathbf{r}) = \sum_l \langle l | \mathbf{r} \rangle\, c_l^\dagger , (335)
We can rewrite this by using that the action of \hat{V} on two-particle states is obtained by taking N = 2 in (324), which tells us that \hat{V}\, |\mathbf{r}\rangle \otimes |\mathbf{r}'\rangle = V(\mathbf{r}, \mathbf{r}')\, |\mathbf{r}\rangle \otimes |\mathbf{r}'\rangle. This implies
V(\mathbf{r}, \mathbf{r}') \langle l | \mathbf{r} \rangle \langle l' | \mathbf{r}' \rangle \langle \mathbf{r}' | m' \rangle \langle \mathbf{r} | m \rangle = V(\mathbf{r}, \mathbf{r}') \left( \langle l | \otimes \langle l' | \right) \left( |\mathbf{r}\rangle \otimes |\mathbf{r}'\rangle \right) \left( \langle \mathbf{r} | \otimes \langle \mathbf{r}' | \right) \left( |m\rangle \otimes |m'\rangle \right)
to obtain
\hat{V} = \frac{1}{2} \sum_{l,l',m,m'} \left( \langle l | \otimes \langle l' | \right) \hat{V} \left( |m\rangle \otimes |m'\rangle \right) c_l^\dagger c_{l'}^\dagger c_{m'} c_m . (339)
Finally we can express everything in terms of states with the correct exchange symmetry
|m m'\rangle = \frac{1}{\sqrt{2}} \left( |m\rangle \otimes |m'\rangle \pm |m'\rangle \otimes |m\rangle \right) \quad (m \neq m'). (340)
in the form
\hat{V} = \sum_{(ll'),(mm')} \langle l l' | \hat{V} | m m' \rangle\, c_l^\dagger c_{l'}^\dagger c_{m'} c_m . (341)
Here the sums are over a basis of 2-particle states. In order to see that (339) is equal to (341) observe that
\sum_{m,m'} \big[|m\rangle \otimes |m'\rangle\big] c_{m'} c_m = \frac{1}{2} \sum_{m,m'} \big[|m\rangle \otimes |m'\rangle \pm |m'\rangle \otimes |m\rangle\big] c_{m'} c_m = \frac{1}{\sqrt{2}} \sum_{m,m'} |m m'\rangle\, c_{m'} c_m .   (342)
Here the first equality follows from relabelling summation indices m \leftrightarrow m' and using the (anti)commutation relations between c_m and c_{m'} to bring them back in the right order. The second equality follows from the definition of 2-particle states |m m'\rangle. Finally we note that because |m m'\rangle = \pm |m' m\rangle (the minus sign is for fermions) we have
\frac{1}{\sqrt{2}} \sum_{m,m'} |m m'\rangle\, c_{m'} c_m = \sqrt{2} \sum_{(m m')} |m m'\rangle\, c_{m'} c_m ,   (343)
where the sum is now over a basis of 2-particle states with the appropriate exchange symmetry. The
representation (341) generalizes to arbitrary two-particle operators O.
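The algebra above can be checked numerically on a small system. The sketch below (my own construction, not from the notes) builds fermionic creation and annihilation operators on a four-site lattice as Jordan-Wigner matrices and verifies that the second-quantized operator (325) acts on a two-particle position state as multiplication by V(r, r'), exactly as the first-quantized form (324) requires.

import numpy as np
from functools import reduce

# Jordan-Wigner construction of fermionic operators on a small 1D "lattice"
# of M sites; purely a toy check.
M = 4
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilates the occupied local state

def kron_list(ops):
    return reduce(np.kron, ops)

def c(j):
    """Annihilation operator on site j (0-based), with Jordan-Wigner string."""
    return kron_list([Z] * j + [a] + [I2] * (M - j - 1))

cs = [c(j) for j in range(M)]
cds = [op.conj().T for op in cs]

# A symmetric two-body "potential" V(r, r') chosen at random
rng = np.random.default_rng(0)
V = rng.normal(size=(M, M))
V = 0.5 * (V + V.T)

# Second-quantized two-particle operator, eq. (325) on the lattice:
# V_hat = 1/2 sum_{r != r'} V(r,r') c^dag_r c^dag_r' c_r' c_r
Vhat = sum(0.5 * V[r, rp] * cds[r] @ cds[rp] @ cs[rp] @ cs[r]
           for r in range(M) for rp in range(M) if r != rp)

# Check: on the two-particle state c^dag_1 c^dag_3 |0>, V_hat acts as
# multiplication by V(1,3), in agreement with the first-quantized form (324).
vac = np.zeros(2**M); vac[0] = 1.0
state = cds[1] @ cds[3] @ vac
print(np.allclose(Vhat @ state, V[1, 3] * state))   # True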
where V(\hat{r}_i, \hat{r}_j) = V(\hat{r}_j, \hat{r}_i). Show that in second quantization it is expressed as
\hat{V} = \frac{1}{2} \int d^3r\, d^3r'\ V(r, r')\, c^\dagger(r)\, c^\dagger(r')\, c(r')\, c(r).
where n_j is the occupation number of the j-th single-particle state. Argue that in an arbitrary basis of single-particle eigenstates |l\rangle, \hat{V} can be expressed in the form
\{c_\sigma(p), c_\tau(k)\} = 0 = \{c^\dagger_\sigma(p), c^\dagger_\tau(k)\}\ ,\qquad \{c_\sigma(p), c^\dagger_\tau(k)\} = \delta_{\sigma,\tau}\, (2\pi\hbar)^3\, \delta^{(3)}(k - p).   (345)
Here \mu > 0 is the chemical potential. As c^\dagger_\sigma(p) c_\sigma(p) = \hat{n}_\sigma(p) is the number operator for spin-\sigma fermions with momentum p, we can easily deduce the action of the Hamiltonian on states in the Fock space:
\big[H - \mu \hat{N}\big]\, |0\rangle = 0 ,
\big[H - \mu \hat{N}\big]\, c^\dagger_\sigma(p)|0\rangle = \epsilon(p)\, c^\dagger_\sigma(p)|0\rangle ,
\big[H - \mu \hat{N}\big] \prod_{j=1}^{n} c^\dagger_{\sigma_j}(p_j)|0\rangle = \Big[\sum_{k=1}^{n} \epsilon(p_k)\Big] \prod_{j=1}^{n} c^\dagger_{\sigma_j}(p_j)|0\rangle .   (347)
Importantly, we can now normalize the eigenstates to 1, i.e.
\psi_k(r) = \frac{1}{L^{3/2}}\, e^{\frac{i}{\hbar} k \cdot r} .   (353)
Hence
\langle k|k'\rangle = \int d^3r\ \psi^*_k(r)\, \psi_{k'}(r) = \delta_{k,k'} .   (354)
\{c_\sigma(p), c_\tau(k)\} = 0 = \{c^\dagger_\sigma(p), c^\dagger_\tau(k)\}\ ,\qquad \{c^\dagger_\sigma(p), c_\tau(k)\} = \delta_{\sigma,\tau}\, \delta_{k,p} .   (355)
The Hamiltonian is
H - \mu\hat{N} = \sum_{p} \sum_{\sigma=\uparrow,\downarrow} \epsilon(p)\, c^\dagger_\sigma(p)\, c_\sigma(p) .   (356)
We define a Fermi momentum by
\frac{p_F^2}{2m} = \mu .   (357)
The ground state is obtained by filling all single-particle states with negative energy \epsilon(p) < 0, i.e. |p| < p_F,
|GS\rangle = \prod_{\sigma} \prod_{|p| < p_F} c^\dagger_\sigma(p)\, |0\rangle ,   (358)
and its energy is
E_{GS} = \sum_{\sigma} \sum_{|p| < p_F} \epsilon(p) .   (359)
This is extensive (proportional to the volume) as expected. You can see the advantage of working in a finite volume: the product in (358) involves only a finite number of factors and the ground state energy is finite. The ground state momentum is
P_{GS} = \sum_{\sigma} \sum_{|p| < p_F} p = 0 .   (360)
The ground state momentum is zero because, if a state with momentum p contributes to the sum, then so does the state with momentum -p.
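As an illustration of working in a finite periodic box (a sketch with illustrative parameters, in units where \hbar = m = 1), one can sum \epsilon(p) = p^2/2m - \mu over the finitely many occupied momenta p = 2\pi n/L and watch the energy per volume converge to the continuum value -p_F^5/(15\pi^2) as L grows, which also confirms that the ground-state energy itself is extensive.

import numpy as np

# Sketch (units hbar = m = 1): ground-state energy of H - mu*N for free
# spin-1/2 fermions in a periodic box of linear size L, eq. (359), compared
# with the continuum (L -> infinity) value.
mu = 1.0
pF = np.sqrt(2.0 * mu)

def Egs_per_volume(L):
    nmax = int(np.ceil(pF * L / (2 * np.pi))) + 1
    n = np.arange(-nmax, nmax + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    p2 = (2 * np.pi / L) ** 2 * (nx**2 + ny**2 + nz**2)
    eps = p2 / 2.0 - mu
    return 2.0 * np.sum(eps[p2 < pF**2]) / L**3      # factor 2: spin up/down

# Continuum: 2 * int_{|p|<pF} d^3p/(2 pi)^3 (p^2/2 - mu) = -pF^5/(15 pi^2)
print("continuum:", -pF**5 / (15 * np.pi**2))
for L in (10.0, 30.0, 100.0):
    print(L, Egs_per_volume(L))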
Figure 11: Ground state in the one-dimensional case (dispersion \epsilon(p)). Blue circles correspond to “filled” single-particle states.
7.1.2 Excitations
• Particle excitations
c^\dagger_\sigma(k)\, |GS\rangle \quad \text{with } |k| > p_F .   (361)
Their energies and momenta are E = \epsilon(k) = \frac{k^2}{2m} - \mu > 0 and P = k.
• Hole excitations
c_\sigma(k)\, |GS\rangle \quad \text{with } |k| < p_F .   (363)
Their energies and momenta are E = -\epsilon(k) = \mu - \frac{k^2}{2m} > 0 and P = -k.
• Particle-hole excitations
c^\dagger_\sigma(k)\, c_\tau(p)\, |GS\rangle \quad \text{with } |k| > p_F > |p| .   (365)
Their energies and momenta are E = \epsilon(k) - \epsilon(p) and P = k - p.
Figure 12: Some simple excited states: (a) particle, (b) hole, (c) particle-hole.
1. One-point function.
We now want to determine the ground-state expectation value of the density operator \rho(r) = \sum_\sigma c^\dagger_\sigma(r) c_\sigma(r),
\langle GS|\rho(r)|GS\rangle = \sum_\sigma \langle GS|\, c^\dagger_\sigma(r)\, c_\sigma(r)\, |GS\rangle .   (369)
A crucial observation is that the ground state has a simple description in terms of the Fock space built from momentum eigenstates. Hence what we want to do is to work out the momentum representation of \rho(r). We know from our general formula (291) that
c_\sigma(r) = \sum_{p} \underbrace{\langle r|p\rangle}_{L^{-3/2} e^{\frac{i}{\hbar} p \cdot r}}\, c_\sigma(p) .   (370)
Substituting this as well as the analogous expression for the creation operator into (369), we obtain
\langle GS|\rho(r)|GS\rangle = \sum_\sigma \frac{1}{L^3} \sum_{p,p'} e^{\frac{i}{\hbar}(p - p') \cdot r}\, \langle GS|\, c^\dagger_\sigma(p')\, c_\sigma(p)\, |GS\rangle .   (371)
For the expectation value \langle GS| c^\dagger_\sigma(p') c_\sigma(p) |GS\rangle to be non-zero, we must have that c^\dagger_\sigma(p') c_\sigma(p)|GS\rangle reproduces |GS\rangle itself. The only way this is possible is if |p| < p_F (so that the c pokes a hole in the Fermi sea) and p' = p (so that the c^\dagger precisely fills the hole made by the c). By this reasoning we have
\langle GS|\, c^\dagger_\sigma(p')\, c_\tau(p)\, |GS\rangle = \delta_{\sigma,\tau}\, \delta_{p,p'}\, \theta(p_F - |p'|) .   (372)
Similarly we can show that
So our expectation value gives precisely the particle density. This is expected because our system is
translationally invariant and therefore hGS|ρ(r)|GSi cannot depend on r.
2. Two-point function.
Next we want to determine the two-point function
\langle GS|\rho(r)\rho(r')|GS\rangle = \sum_{\sigma,\tau} \frac{1}{L^6} \sum_{p,p'} \sum_{k,k'} e^{\frac{i}{\hbar}(p - p') \cdot r}\, e^{\frac{i}{\hbar}(k - k') \cdot r'}\, \langle GS|\, c^\dagger_\sigma(p')\, c_\sigma(p)\, c^\dagger_\tau(k')\, c_\tau(k)\, |GS\rangle .   (375)
The expectation value \langle GS| c^\dagger_\sigma(p') c_\sigma(p) c^\dagger_\tau(k') c_\tau(k) |GS\rangle can be calculated by thinking about how the creation and annihilation operators act on the ground state, and then concentrating on the processes that reproduce the ground state itself in the end (see Fig. 13). The result is
\langle GS|\, c^\dagger_\sigma(p')\, c_\sigma(p)\, c^\dagger_\tau(k')\, c_\tau(k)\, |GS\rangle = \delta_{k,k'}\, \delta_{p,p'}\, \theta(p_F - |p|)\, \theta(p_F - |k|) + \delta_{\sigma,\tau}\, \delta_{p,k'}\, \delta_{k,p'}\, \theta(|k'| - p_F)\, \theta(p_F - |k|) .   (376)
Observe that by virtue of (372) and (373) this can be rewritten in the form
\langle GS| c^\dagger_\sigma(p') c_\sigma(p) |GS\rangle\, \langle GS| c^\dagger_\tau(k') c_\tau(k) |GS\rangle + \langle GS| c^\dagger_\sigma(p') c_\tau(k) |GS\rangle\, \langle GS| c_\sigma(p) c^\dagger_\tau(k') |GS\rangle .   (377)
[Figure 13]
The fact that the 4-point function (376) can be written as a sum over products of two-point functions is a reflection of Wick’s theorem for noninteracting spin-1/2 fermions. This is not part of the syllabus and we won’t dwell on it, but apart from extra minus signs, this says that 2n-point functions are given by the sum over all possible “pairings”, giving rise to a product of two-point functions. In our particular case this gives
\langle c^\dagger_\sigma(p') c_\sigma(p) c^\dagger_\tau(k') c_\tau(k) \rangle = \langle c^\dagger_\sigma(p') c_\sigma(p) \rangle \langle c^\dagger_\tau(k') c_\tau(k) \rangle - \langle c^\dagger_\sigma(p') c^\dagger_\tau(k') \rangle \langle c_\sigma(p) c_\tau(k) \rangle + \langle c^\dagger_\sigma(p') c_\tau(k) \rangle \langle c_\sigma(p) c^\dagger_\tau(k') \rangle ,   (378)
and using that the two point function of two creation or two annihilation operators is zero we obtain (377).
Substituting (376) back into (375) gives
\langle GS|\rho(r)\rho(r')|GS\rangle = \sum_{\sigma,\sigma'} \frac{1}{L^6} \sum_{k,p} \theta(p_F - |k|)\, \theta(p_F - |p|) + \sum_{\sigma} \frac{1}{L^6} \sum_{k,k'} \theta(|k| - p_F)\, \theta(p_F - |k'|)\, e^{\frac{i}{\hbar}(k - k') \cdot (r - r')}
= \langle GS|\rho(r)|GS\rangle\, \langle GS|\rho(r')|GS\rangle + 2\, \Big[\frac{1}{L^3} \sum_{|k| > p_F} e^{\frac{i}{\hbar} k \cdot (r - r')}\Big] \Big[\frac{1}{L^3} \sum_{|k'| < p_F} e^{-\frac{i}{\hbar} k' \cdot (r - r')}\Big] .   (379)
Remark
Evaluating the k sums for large L: The idea is to turn sums into integrals
\frac{1}{L^3} \sum_{|k| < p_F} e^{\frac{i}{\hbar} k \cdot R} \longrightarrow \int \frac{d^3k}{(2\pi\hbar)^3}\, \theta(p_F - |k|)\, e^{\frac{i}{\hbar} k \cdot R} = \int_0^\infty dp\, p^2 \int_0^\pi d\vartheta\, \sin\vartheta \int_0^{2\pi} d\varphi\ \frac{\theta(p_F - \hbar p)}{(2\pi)^3}\, e^{i p |R| \cos\vartheta}
= \int_0^{p_F/\hbar} \frac{dp}{(2\pi)^2}\, \frac{2 p \sin(p|R|)}{|R|} = \frac{\sin(p_F |R|/\hbar) - (p_F |R|/\hbar) \cos(p_F |R|/\hbar)}{2\pi^2 |R|^3} \equiv h(|R|) .   (380)
Here we have introduced spherical polar coordinates such that the z-axis of our co-ordinate system is along the R direction, and
k_x = \hbar p \sin\vartheta \cos\varphi\ ,\quad k_y = \hbar p \sin\vartheta \sin\varphi\ ,\quad k_z = \hbar p \cos\vartheta .   (381)
Here we have used standard definitions for Fourier series, cf Riley/Hobson/Bence 12.7.
Remark
Using these simplifications for large L we arrive at our final answer
\langle GS|\rho(r)\rho(r')|GS\rangle = \langle GS|\rho(r)|GS\rangle^2 + \langle GS|\rho(r)|GS\rangle\, \delta^{(3)}(r - r') - 2\, \big[h(|r - r'|)\big]^2 .   (385)
The first two terms are the same as for a classical ideal gas, while the third contribution is due to the fermionic statistics (Pauli exclusion: “fermions don’t like to be close to one another”).
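A short numerical check of the function h(|R|) defined in (380) (a sketch in units with \hbar = 1, so the p_F below plays the role of the Fermi wave number):

import numpy as np
from scipy.integrate import quad

pF = 1.3

def h_closed(R):
    x = pF * R
    return (np.sin(x) - x * np.cos(x)) / (2 * np.pi**2 * R**3)

def h_numeric(R):
    # h(R) = 1/(2 pi^2 R) * Integral_0^{pF} dp p sin(p R), cf. eq. (380)
    val, _ = quad(lambda p: p * np.sin(p * R), 0.0, pF)
    return val / (2 * np.pi**2 * R)

for R in (0.5, 2.0, 7.0):
    print(R, h_numeric(R), h_closed(R))

# The connected part of (385) is -2 h(R)^2 < 0 at separations R ~ 1/pF:
# fermions avoid one another (the "exchange hole").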
Express \rho(x) in terms of \Psi^\dagger_p and \Psi_q, and show from this that
N = \sum_k \Psi^\dagger_k \Psi_k .
Let |0\rangle be the vacuum state (containing no particles) and define |\phi\rangle by
where u_k and v_k are complex numbers depending on the label k, and A is a normalisation constant. Evaluate (i) |A|^2, (ii) \langle\phi|N|\phi\rangle, and (iii) \langle\phi|N^2|\phi\rangle. Under what conditions is |\phi\rangle an eigenstate of particle number?
Question 16. Consider a system of fermions in which the functions \varphi_\ell(x), \ell = 1, 2, \dots, N, form a complete orthonormal basis for single particle wavefunctions.
a) Explain how Slater determinants may be used to construct a complete orthonormal basis for n-particle states with n = 2, 3, \dots, N. Calculate the normalisation constant for such a Slater determinant at a general value of n. How many independent n-particle states are there for each n?
b) Let C^\dagger_\ell and C_\ell be fermion creation and destruction operators which satisfy the usual anticommutation relations. The quantities a_k are defined by
a_k = \sum_{\ell=1}^{N} U_{k\ell}\, C_\ell ,
where U_{k\ell} are elements of an N \times N matrix, U. Write down an expression for a^\dagger_k. Find the condition which must be satisfied by the matrix U in order that the operators a^\dagger_k and a_k also satisfy fermion anticommutation relations.
c) Non-interacting spinless fermions move in one dimension in an infinite square-well potential, with position coordinate 0 \leq x \leq L. The normalised single particle energy eigenstates are
\varphi_\ell(x) = \Big(\frac{2}{L}\Big)^{1/2} \sin\Big(\frac{\ell \pi x}{L}\Big) ,
Sketch this function and comment briefly on its behaviour for x \to 0 and x \to \infty.
Here we have assumed that our system is enclosed in a large, periodic box of linear dimension L. The boson-boson interaction is most easily expressed in position space
\hat{V} = \frac{1}{2} \int d^3r\, d^3r'\ c^\dagger(r)\, c^\dagger(r')\, V(r,r')\, c(r')\, c(r) ,   (387)
and we take a delta-function interaction, V(r,r') = U\, \delta^{(3)}(r - r'), i.e. bosons interact only if they occupy the same point in space. Changing to the momentum space description
c(r) = \frac{1}{L^{3/2}} \sum_{p} e^{\frac{i}{\hbar} p \cdot r}\, c(p) ,   (389)
we have
\hat{V} = \frac{U}{2L^3} \sum_{p_1, p_2, p_3} c^\dagger(p_1)\, c^\dagger(p_2)\, c(p_3)\, c(p_1 + p_2 - p_3) .   (390)
However,
[c^\dagger(0)\, c(0), \hat{V}] \neq 0 ,   (394)
so that the number of p = 0 bosons is not conserved, and the ground state |GS\rangle will be a superposition of states with different numbers of p = 0 bosons. However, for the ground state and low-lying excited states we will have
\langle \Psi|\, c^\dagger(0)\, c(0)\, |\Psi\rangle \simeq N_0 ,   (395)
where N_0, crucially, is a very large number. The Bogoliubov approximation states that, when acting on the ground state or low-lying excited states, we in fact have
c^\dagger(0) \simeq \sqrt{N_0}\ ,\qquad c(0) \simeq \sqrt{N_0} ,   (396)
i.e. creation and annihilation operators are approximately diagonal. This is a much stronger statement than (395), and at first sight looks rather strange. It amounts to making an ansatz for low-energy states |\psi\rangle that fulfils
\langle \psi'|\, c(0)\, |\psi\rangle = \sqrt{N_0}\, \langle \psi'|\psi\rangle + \dots ,   (397)
where the dots denote terms that are small compared to \sqrt{N_0}. We’ll return to what this implies for the structure of |\psi\rangle a little later. Using (396) we may expand H in inverse powers of N_0:
H = \sum_{p} \frac{p^2}{2m}\, c^\dagger(p)\, c(p) + \frac{U}{2L^3} N_0^2 + \frac{U N_0}{2L^3} \sum_{k \neq 0} \Big[ 2 c^\dagger(k) c(k) + 2 c^\dagger(-k) c(-k) + c^\dagger(k) c^\dagger(-k) + c(-k) c(k) \Big] + \dots   (398)
Eliminating N_0 in favour of the total particle number N and the density \rho = N/L^3 we obtain
H = \frac{U\rho}{2}\, N + \sum_{p \neq 0} \Big[ \underbrace{\frac{p^2}{2m} + U\rho}_{\epsilon(p)}\ c^\dagger(p)\, c(p) + \frac{U\rho}{2} \big( c^\dagger(p)\, c^\dagger(-p) + c(-p)\, c(p) \big) \Big] + \dots   (401)
The Bogoliubov approximation has reduced the complicated four-boson interaction to two-boson terms. The
price we pay is that we have to deal with the “pairing”-terms quadratic in creation/annihilation operators.
where
E(p) = \sqrt{\Big(\frac{p^2}{2m} + U\rho\Big)^2 - (U\rho)^2} .   (407)
We note that
E(p) \longrightarrow \frac{p^2}{2m} \quad \text{for } |p| \to \infty ,   (408)
which tells us that at high momenta (and hence high energies) we recover the quadratic dispersion. In this limit \theta_p \to 0, so that the Bogoliubov bosons reduce to the “physical” bosons we started with. On the other hand
E(p) \longrightarrow \sqrt{\frac{U\rho}{m}}\, |p| \quad \text{for } |p| \to 0 .   (409)
So here we have a linear dispersion.
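A minimal numerical look at the crossover encoded in (407)-(409), with illustrative parameters and \hbar = 1:

import numpy as np

m, U, rho = 1.0, 0.5, 1.0

def E(p):
    eps = p**2 / (2 * m) + U * rho
    return np.sqrt(eps**2 - (U * rho)**2)

c_sound = np.sqrt(U * rho / m)          # slope of the linear regime, eq. (409)
for p in (1e-3, 1e-2, 1e-1):
    print(p, E(p) / p, "->", c_sound)    # E(p)/p approaches the sound velocity
for p in (10.0, 30.0):
    print(p, E(p) / (p**2 / (2 * m)))    # ratio -> 1: free-particle regime (408)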
b(p)\, |\tilde{0}\rangle = 0 .   (411)
Clearly, for p \neq 0 we have E(p) > 0, and hence no Bogoliubov quasiparticles will be present in the ground state. On the other hand, a basic assumption we made was that
\langle GS|\, b(0)\, |GS\rangle \simeq \sqrt{N_0} .   (412)
In order to get an idea what this implies for the structure of the ground state, let us express it in the general form
|GS\rangle = \sum_{n=0}^{\infty} \alpha_n \big(b^\dagger(0)\big)^n |\tilde{0}\rangle .   (413)
Low-lying excited states can now be obtained by creating Bogoliubov quasiparticles, e.g.
b^\dagger(q)\, |GS\rangle ,   (416)
8.5 Ground state correlation functions
We are now in a position to work out correlation functions in the ground state such as
Using that
\langle GS|\, b^\dagger(p) = 0 = b(q)\, |GS\rangle ,   (419)
we find that
This tells us that, in contrast to the ideal Bose gas, in the ground state of the interacting Bose gas we have
a finite density of bosons with non-zero momentum
Another feature of the ground state is that the two-point function of two annihilation/creation operators is
non-zero
These imply that boson number is not a good quantum number in the ground state. More formally, we say that the ground state spontaneously breaks the U(1) symmetry of the Hamiltonian H = \hat{T} + \hat{V}. Let us explain that statement. The Hamiltonian is invariant under the symmetry operation
\hat{U}\, c(p)\, \hat{U}^\dagger = e^{i\phi}\, c(p)\ ,\qquad \hat{U}\, c^\dagger(p)\, \hat{U}^\dagger = e^{-i\phi}\, c^\dagger(p) ,   (423)
i.e.
\hat{U} H \hat{U}^\dagger = H .   (424)
The reason for this is that all terms in H involve the same number of creation as annihilation operators, and the total particle number is therefore conserved. This is referred to as a global U(1) symmetry (as the transformations (423) form a group called U(1)). Let us now investigate how ground state expectation values transform. We have
\langle GS|\, c(p)\, c(q)\, |GS\rangle = \langle GS|\, \hat{U}^\dagger \hat{U}\, c(p)\, \hat{U}^\dagger \hat{U}\, c(q)\, \hat{U}^\dagger \hat{U}\, |GS\rangle = e^{2i\phi}\, \langle GS|\, \hat{U}^\dagger\, c(p)\, c(q)\, \hat{U}\, |GS\rangle .   (425)
If the ground state were invariant under the symmetry, we would have \hat{U}|GS\rangle = |GS\rangle. Eqn (425) would then imply that \langle GS| c(p) c(q) |GS\rangle = 0. Reversing the argument, we see that a non-zero value of the expectation value (422) implies that the ground state cannot be invariant under the U(1) symmetry, and in fact “breaks it spontaneously”.
8.6 Depletion of the Condensate
We started out by asserting that for small interactions U > 0 we retain a Bose-Einstein condensate, i.e. the condensate fraction N_0/N remains large. We can now check that this assumption is self-consistent. We have
N_0 = N - \sum_{p \neq 0} c^\dagger(p)\, c(p) .   (426)
We again turn this into an integral and evaluate it in spherical polar coordinates, which gives
\frac{N_0}{N} \approx 1 - \frac{2\pi}{\rho} \int_0^\infty \frac{dp}{(2\pi\hbar)^3}\ p^2 \left[ \frac{1}{\sqrt{1 - \big[\frac{U\rho}{\epsilon(p)}\big]^2}} - 1 \right] .   (429)
By means of the substitution p = \sqrt{2 m U \rho}\, z we can see that the integral is proportional to U^{3/2} and thus indeed small for small U.
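This scaling can be confirmed numerically; the sketch below (units \hbar = m = \rho = 1, purely illustrative) evaluates the integral in (429) for a few values of U and shows that the depletion is proportional to U^{3/2}.

import numpy as np
from scipy.integrate import quad

m = rho = 1.0

def depletion(U):
    def integrand(p):
        eps = p**2 / (2 * m) + U * rho
        return p**2 * (1.0 / np.sqrt(1.0 - (U * rho / eps)**2) - 1.0)
    val, _ = quad(integrand, 0.0, np.inf, limit=400)
    return 2 * np.pi * val / ((2 * np.pi)**3 * rho)

for U in (0.01, 0.04, 0.16):
    print(U, depletion(U), depletion(U) / U**1.5)   # last column ~ constant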
S^\alpha_r\ ,\quad \alpha = x, y, z ,   (430)
We will assume that the spins are large in the sense that
\mathbf{S}^2_r = \sum_\alpha \big(S^\alpha_r\big)^2 = s(s+1) \gg 1 .   (432)
Let us begin by constructing a basis of quantum mechanical states. At each site we have 2s+1 eigenstates of S^z_r,
S^z_r\, |m\rangle_r = m\, |m\rangle_r\ ,\quad m = s, s-1, \dots, -s .   (433)
They can be constructed from |s\rangle_r using spin lowering operators S^-_r = S^x_r - i S^y_r:
|s - n\rangle_r = \frac{1}{\mathcal{N}_n} \big(S^-_r\big)^n |s\rangle_r\ ,\quad n = 0, 1, \dots, 2s ,   (434)
where \mathcal{N}_n are normalization constants. A basis of states is then given by
\prod_r |s_r\rangle_r\ ,\quad -s \leq s_r \leq s \ \text{(spin projection on site } r\text{)}.   (435)
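The construction (433)-(435) is easy to check explicitly. The following sketch (not part of the notes) builds the (2s+1)-dimensional spin matrices for an example value s = 2 and verifies that normalizing (S^-)^n |s\rangle produces the S^z eigenstates |s-n\rangle.

import numpy as np

s = 2                                   # example value of the spin
m = np.arange(s, -s - 1, -1)            # basis ordered |s>, |s-1>, ..., |-s>
dim = len(m)

Sz = np.diag(m.astype(float))
# <m-1|S^-|m> = sqrt(s(s+1) - m(m-1))
Sminus = np.zeros((dim, dim))
for j in range(dim - 1):
    Sminus[j + 1, j] = np.sqrt(s * (s + 1) - m[j] * (m[j] - 1))

highest = np.zeros(dim); highest[0] = 1.0    # the state |s>
for n in range(2 * s + 1):
    v = np.linalg.matrix_power(Sminus, n) @ highest
    v = v / np.linalg.norm(v)                # divide by the normalization N_n
    print(n, "Sz eigenvalue:", v @ Sz @ v)   # equals s - n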
9.1 Heisenberg model and spin-rotational SU(2) symmetry
An appropriate Hamiltonian for a ferromagnetic insulator was derived by Heisenberg:
H = -J \sum_{\langle r, r'\rangle} \mathbf{S}_r \cdot \mathbf{S}_{r'} .   (436)
Here \langle r, r'\rangle denote nearest-neighbour pairs of spins and we will assume that J > 0. The model (436) is known as the ferromagnetic Heisenberg model. You can check that the Hamiltonian (436) commutes with the three total spin operators,
[H, S^\alpha] = 0\ ,\qquad S^\alpha = \sum_r S^\alpha_r .   (437)
These imply that the Hamiltonian is invariant under general rotations (in spin space),
e^{i\boldsymbol{\alpha}\cdot\mathbf{S}}\, H\, e^{-i\boldsymbol{\alpha}\cdot\mathbf{S}} = H .   (438)
The transformations (438) form a group known as SU(2), and the Heisenberg Hamiltonian (436) is invariant under them.
where N_B is the total number of bonds in our lattice. The total spin lowering operator S^- = \sum_r S^-_r commutes with H by virtue of (437) and hence
|GS, n\rangle = \frac{1}{\mathcal{N}_n} \big(S^-\big)^n |GS\rangle\ ,\quad 0 \leq n \leq 2sN ,   (441)
are ground states as well (as they have the same energy). Here \mathcal{N}_n is a normalization constant.
Remark
Proof that |GS\rangle is a ground state:
Here \mathbf{J}^2 is the total angular momentum squared. Its eigenvalues follow from the theory of adding angular momenta to be
This tells us that the maximal eigenvalue of \mathbf{J}^2 is 2s(2s+1), and by expanding |\psi\rangle in a basis of eigenstates of \mathbf{J}^2 we can easily show that
\langle\psi|\mathbf{J}^2|\psi\rangle = \sum_{j,m,j',m'} \langle\psi|j,m\rangle \langle j,m|\mathbf{J}^2|j',m'\rangle \langle j',m'|\psi\rangle = \sum_{j,m} |\langle\psi|j,m\rangle|^2\, j(j+1) \leq 2s(2s+1) \sum_{j,m} |\langle\psi|j,m\rangle|^2 = 2s(2s+1) .   (444)
This tells us that
\langle\psi|\, \mathbf{S}_r \cdot \mathbf{S}_{r'}\, |\psi\rangle \leq s^2 .   (445)
This provides us with a bound on the eigenvalues of the Hamiltonian, as
\langle\psi| H |\psi\rangle \geq -J \sum_{\langle r, r'\rangle} s^2 = -J s^2 N z .   (446)
The state we have constructed saturates this bound, so must be a ground state.
Remark
Let us now see how the SU(2) symmetry is reflected in expectation values of operators O. At finite temperature we have
\langle O \rangle_\beta = \frac{1}{Z(\beta)} \mathrm{Tr}\big[ e^{-\beta H} O \big] ,   (447)
where Z(\beta) = \mathrm{Tr}[e^{-\beta H}] is the partition function and \beta = 1/k_B T. In the T \to 0 limit we have
\langle O \rangle_\infty = \frac{1}{2sN + 1} \sum_{n=0}^{2sN} \langle GS, n|\, O\, |GS, n\rangle ,   (448)
i.e. we average over all ground states. The thermal average, as well as its T = 0 limit, are invariant under rotations in spin space. Indeed, under a rotation in spin space we have
\langle e^{i\boldsymbol{\alpha}\cdot\mathbf{S}}\, O\, e^{-i\boldsymbol{\alpha}\cdot\mathbf{S}} \rangle_\beta = \frac{1}{Z(\beta)} \mathrm{Tr}\big[ e^{-\beta H}\, e^{i\boldsymbol{\alpha}\cdot\mathbf{S}}\, O\, e^{-i\boldsymbol{\alpha}\cdot\mathbf{S}} \big] ,   (449)
where \mathbf{S} = \sum_r \mathbf{S}_r are the global spin operators. Using the cyclicity of the trace and the fact that H commutes with the global spin operators, we see that this equals \langle O \rangle_\beta. If we choose as our operator O any of the global spin operators, and consider a rotation by \pi around one of the orthogonal axes, we see that the magnetization always vanishes:
\langle S^\alpha \rangle_\beta = 0\ ,\quad \alpha = x, y, z .   (450)
Physically this is what one would expect for a system that is spin rotationally invariant, i.e. looks the same in any direction in spin space.
This means that if we define the thermodynamic limit in the above way, then the only surviving ground states will have magnetization per site s, i.e. contain only a non-extensive number of spin flips. In all of these remaining ground states the spin rotational symmetry has been broken. As we have taken the symmetry-breaking field to zero, our Hamiltonian is again SU(2) symmetric, but the remaining ground states “spontaneously” break this symmetry.
9.4 Holstein-Primakoff Transformation
We succeeded in finding the ground states of H because of their simple structure. For more general spin Hamiltonians, or even the Hamiltonian (436) with a negative value of J, this will no longer work, and we need a more general but approximate way of dealing with such problems. This is provided by (linear) spinwave theory.
As shown by Holstein and Primakoff, spin operators can be represented in terms of bosonic creation and annihilation operators as follows:
S^z_r = s - a^\dagger_r a_r\ ,\qquad S^+_r = S^x_r + i S^y_r = \sqrt{2s}\, \sqrt{1 - \frac{a^\dagger_r a_r}{2s}}\ a_r .   (453)
You can check that the bosonic commutation relations
[a_r, a^\dagger_{r'}] = \delta_{r,r'}\ ,\qquad [a_r, a_{r'}] = 0 ,   (454)
imply that
[S^\alpha_r, S^\beta_{r'}] = \delta_{r,r'}\, i\, \epsilon^{\alpha\beta\gamma}\, S^\gamma_r .   (455)
However, there is a caveat: the spaces of QM states are different! At site r we have the 2s+1 states
\big(S^-_r\big)^n |s\rangle_r\ ,\quad n = 0, \dots, 2s ,   (456)
for spins, but for bosons there are infinitely many states
\big(a^\dagger_r\big)^n |0\rangle_r\ ,\quad n = 0, \dots, \infty .   (457)
To make things match, we must impose the constraint that there are at most 2s bosons per site. Now we take advantage of the fact that we have assumed s to be large: in the ground state there are no bosons present, because
\langle GS|\, s - a^\dagger_r a_r\, |GS\rangle = \langle GS|\, S^z_r\, |GS\rangle = s\ ,\quad \text{i.e.}\quad \langle GS|\, a^\dagger_r a_r\, |GS\rangle = 0 .   (458)
Low-lying excited states will only have a few bosons, so for large enough s we don’t have to worry about the constraint. Using the Holstein-Primakoff transformation, we can rewrite H in a 1/s expansion
H = -J \sum_{\langle r, r'\rangle} \Big[ s^2 - s\big( a^\dagger_r a_r + a^\dagger_{r'} a_{r'} - a^\dagger_r a_{r'} - a^\dagger_{r'} a_r \big) + \dots \Big] .   (459)
Here the dots indicate terms proportional to s^0, s^{-1}, etc. Once again using that s is large, we drop these terms (for the time being). We then can diagonalize H by going to momentum space,
a_r = \frac{1}{\sqrt{N}} \sum_{k} e^{i k \cdot r}\, a(k)\ ,\qquad [a(k), a^\dagger(p)] = \delta_{k,p} ,   (460)
which gives
H = -J s^2 N z + \sum_{q} \epsilon(q)\, a^\dagger(q)\, a(q) + \dots   (461)
For a simple cubic lattice the energy is
\epsilon(q) = 2Js \big[ 3 - \cos(q_x a_0) - \cos(q_y a_0) - \cos(q_z a_0) \big] ,   (462)
where a_0 is the lattice spacing; for small q this vanishes quadratically, \epsilon(q) \approx J s\, q^2 a_0^2.
In the context of spontaneous symmetry breaking these gapless excitations are known as Goldstone modes. Let us now revisit the logic underlying our 1/s expansion. For things to be consistent, we require that the terms of order s in (461) provide only a small correction to the leading contribution proportional to s^2. This will be the case as long as we are interested only in states |\Psi\rangle such that
\langle\Psi|\, a^\dagger(q)\, a(q)\, |\Psi\rangle \ll s .   (464)
This condition is certainly fulfilled for the ground state and low-lying excited states.
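A small numerical illustration of the gapless, quadratic spin-wave dispersion quoted above (lattice spacing set to 1, parameters chosen purely for illustration):

import numpy as np

J, s = 1.0, 5.0

def eps(q):
    # simple cubic spin-wave energy, eq. (462) with a0 = 1
    return 2 * J * s * (3 - np.cos(q[0]) - np.cos(q[1]) - np.cos(q[2]))

for qmag in (0.3, 0.1, 0.03):
    q = qmag * np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # |q| = qmag
    print(qmag, eps(q), J * s * qmag**2)   # quadratic and gapless as q -> 0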
H = J \sum_{\langle r, r'\rangle} \mathbf{S}_r \cdot \mathbf{S}_{r'} ,   (465)
where \langle r, r'\rangle denote nearest-neighbour pairs of spins on a simple cubic lattice and J > 0. Compared to (436) all we have done is to switch the overall sign of H, but this has important consequences. In particular, it is no longer possible to obtain an exact ground state for the model. Instead, we start by considering our spins to be classical. This is justified if we are interested only in states with large spin quantum numbers. We will assume this to be the case and check the self-consistency of our assumption later. In the classical limit we can think of the spins as three-dimensional vectors. The lowest energy configuration is then one where all neighbouring spins point in opposite directions, i.e. along the three crystal axes the spin configuration looks like \uparrow\downarrow\uparrow\downarrow\uparrow\downarrow\dots. This is known as a Néel state. It is convenient to subdivide our lattice into two sublattices: on sublattice A all spins point in the same direction, while on sublattice B all spins point in the opposite direction. Like the ferromagnet, the model (465) has a global spin-rotational symmetry that will be spontaneously broken in the ground state. By choosing our spin quantization axis appropriately, the classical ground state can then be written in the form
\prod_{r \in A} |s\rangle_r \prod_{r' \in B} |-s\rangle_{r'} .   (466)
The idea is now to map this state to a ferromagnetic one by inverting the spin quantization axis on the B sublattice. After that we can employ the Holstein-Primakoff transformation to carry out a 1/s expansion. As a result of the rotation of the spin quantization axis on the B sublattice, the part of the Hamiltonian of order s now contains terms involving two annihilation or two creation operators. Diagonalizing the Hamiltonian then requires a Bogoliubov transformation.
where the i, j are nearest neighbours, respectively on the A and B sublattices. J is positive. Show that the classical ground state has all the A spins ferromagnetically aligned in one direction and all the B spins ferromagnetically aligned in the opposite direction. Assume the quantum mechanical ground state is well approximated by the classical one. To a first approximation the spin operators are given in terms of boson operators a, b by
A sublattice:  S^z_i = S_A - a^\dagger_i a_i ,  S^+_i \equiv S^x_i + i S^y_i \simeq (2 S_A)^{1/2}\, a_i ,  S^-_i \equiv S^x_i - i S^y_i \simeq (2 S_A)^{1/2}\, a^\dagger_i ;
B sublattice:  S^z_j = -S_B + b^\dagger_j b_j ,  S^+_j \equiv S^x_j + i S^y_j \simeq (2 S_B)^{1/2}\, b^\dagger_j ,  S^-_j \equiv S^x_j - i S^y_j \simeq (2 S_B)^{1/2}\, b_j .
Discuss the validity of this approximation. Use these relations to express the Hamiltonian in terms of the boson
operators to quadratic order.
Transforming to crystal momentum space using (with N the number of sites on one sublattice)
a_i = N^{-1/2} \sum_{k} e^{-i k \cdot r_i}\, a_k\ ,\qquad b_j = N^{-1/2} \sum_{k} e^{i k \cdot r_j}\, b_k ,
write the Hamiltonian in terms of a_k, b_k and determine the coefficients. Hence calculate the spectrum of excitations at low momenta. Consider both the cases with S_A = S_B and S_A \neq S_B and comment on your results.
Question 18. (optional) Consider the ideal Fermi gas at finite density N/V in a periodic 3-dimensional box
of length L.
(a) Give an expression of the ground state in terms of creation operators for momentum eigenstates.
(b) Calculate the single-particle Green’s function
G_{\sigma\tau}(\omega, q) = \int dt\ e^{i\omega(t - t')} \int d^3r\ e^{-i q \cdot (r - r')}\ G_{\sigma\tau}(t, r; t', r') ,
where T denotes time-ordering (i.e. T\, O(t_1) O(t_2) = \theta(t_1 - t_2)\, O(t_1) O(t_2) - \theta(t_2 - t_1)\, O(t_2) O(t_1) for fermionic operators), and
c_\sigma(r, t) \equiv e^{\frac{i}{\hbar} H t}\, c_\sigma(r)\, e^{-\frac{i}{\hbar} H t} .   (468)
First express the creation/annihilation operators c^\dagger_\sigma(r, t), c_\sigma(r, t) in terms of creation/annihilation operators in momentum space c^\dagger_\sigma(p, t), c_\sigma(p, t). Then show that for annihilation operators in momentum space we have
c_\sigma(p, t) \equiv e^{\frac{i}{\hbar} H t}\, c_\sigma(p)\, e^{-\frac{i}{\hbar} H t} = c_\sigma(p)\, e^{-\frac{i}{\hbar} t\, \epsilon(p)} ,   (469)
Now insert (470) into (467) and evaluate the ground state expectation value to obtain an integral representation for G_{\sigma\tau}(t, r; t', r'). Why does the Green’s function only depend on t - t' and r - r'? Finally, calculate G_{\sigma\tau}(\omega, q).
Note: the imaginary part of the single-particle Green’s function is (approximately) measured by angle resolved
photoemission (ARPES) experiments.
10.1 Coherent States
In order to deal with many-boson systems, we require a convenient analog on the Fock space. This is provided by coherent states
|\phi\rangle = \exp\Big( \sum_\ell \phi_\ell\, a^\dagger_\ell \Big) |0\rangle\ ,\quad \phi_\ell \in \mathbb{C} ,   (472)
where a_\ell denotes the bosonic annihilation operator for the single-particle state labeled by \ell and |0\rangle is the Fock vacuum. If N_{SP} is the number of single-particle states, then \phi is an N_{SP}-dimensional complex vector. The states (472) are simultaneous eigenstates of all annihilation operators:
a_j\, |\phi\rangle = \phi_j\, |\phi\rangle .   (473)
The commutator is easily calculated by expanding the exponential in its power series,
\big[ a_j, \exp\big(\phi_j a^\dagger_j\big) \big] = \phi_j\, \exp\big(\phi_j a^\dagger_j\big) ,   (475)
and substituting this back into (474) establishes (473). Coherent states are not mutually orthogonal. In fact, they fulfil
\langle\psi|\phi\rangle = e^{\sum_\ell \psi^*_\ell \phi_\ell} .   (476)
This result for the scalar product can be obtained by applying the Baker-Campbell-Hausdorff (BCH) formula, which states that for two operators such that [A, [A, B]] = 0 = [B, [A, B]] we have
e^A e^B = e^{A+B}\, e^{\frac{1}{2}[A,B]} = e^B e^A\, e^{[A,B]} .   (477)
Setting A = \sum_\ell \psi^*_\ell a_\ell, B = \sum_j \phi_j a^\dagger_j, using the BCH formula, and then noting that A|0\rangle = 0 = \langle 0|B, we obtain (476). While coherent states do not form an orthogonal set, they nevertheless provide a resolution of the identity on the Fock space:
1 = \int \underbrace{\prod_j \frac{d^2\phi_j}{\pi}\, e^{-\sum_\ell |\phi_\ell|^2}}_{d(\phi, \phi^*)}\ |\phi\rangle\langle\phi| .   (478)
Here d^2\phi_\ell denotes the integration over the complex variable \phi_\ell, e.g. in polar co-ordinates we have
\int d^2\phi_j = \int_0^\infty dr_j\, r_j \int_0^{2\pi} d\varphi_j\ ,\qquad \phi_j = r_j\, e^{i\varphi_j} .   (479)
where |n_1 n_2 \dots\rangle is a state in the occupation number representation. Hence
|\phi\rangle\langle\phi| = \sum_{n_1, n_2, \dots} \sum_{m_1, m_2, \dots} \frac{\phi_1^{n_1} (\phi_1^*)^{m_1}\, \phi_2^{n_2} (\phi_2^*)^{m_2} \dots}{\sqrt{n_1!\, m_1!\, n_2!\, m_2! \dots}}\ |n_1 n_2 n_3 \dots\rangle\langle m_1 m_2 m_3 \dots| .   (481)
Inspection of (478) and (481) shows that the integral over \phi_j and \phi^*_j is
\frac{1}{\pi} \int_0^\infty dr_j \int_0^{2\pi} d\varphi_j\ r_j^{\,n_j + m_j + 1}\, e^{-r_j^2}\, e^{i\varphi_j (n_j - m_j)} = n_j!\ \delta_{n_j, m_j} .   (482)
The right hand side is a resolution of the identity in the occupation number representation.
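These coherent-state identities are easy to verify numerically on a truncated Fock space. The sketch below (single mode, truncation at n_max = 40, my own check rather than part of the notes) confirms the eigenvalue property (473) and the overlap formula (476).

import numpy as np
from scipy.linalg import expm

n_max = 40
n = np.arange(1, n_max)
a = np.zeros((n_max, n_max)); a[n - 1, n] = np.sqrt(n)    # a|n> = sqrt(n)|n-1>
ad = a.conj().T
vac = np.zeros(n_max); vac[0] = 1.0

def coherent(phi):
    # |phi> = exp(phi a^dag)|0>, truncated at n_max bosons
    return expm(phi * ad) @ vac

phi, psi = 0.7 + 0.2j, -0.3 + 0.5j
ket_phi, ket_psi = coherent(phi), coherent(psi)

# Eigenstate property (473): a|phi> = phi |phi>  (up to truncation error)
print(np.allclose(a @ ket_phi, phi * ket_phi, atol=1e-8))
# Overlap formula (476): <psi|phi> = exp(psi^* phi)
print(np.conj(ket_psi) @ ket_phi, np.exp(np.conjugate(psi) * phi))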
We first want to derive a path integral representation for the partition function
Z(\beta) = \mathrm{Tr}\big[ e^{-\beta\hat{H}} \big] = \sum_n \langle n| e^{-\beta\hat{H}} |n\rangle .   (485)
Inserting a resolution of the identity in terms of coherent states this can be rewritten as
\int d(\psi, \psi^*)\, e^{-\sum_\ell |\psi_\ell|^2} \sum_n \langle n|\psi\rangle\, \langle\psi| e^{-\beta\hat{H}} |n\rangle = \int d(\psi, \psi^*)\, e^{-\sum_\ell |\psi_\ell|^2} \sum_n \langle\psi| e^{-\beta\hat{H}} |n\rangle \langle n|\psi\rangle = \int d(\psi, \psi^*)\, e^{-\sum_\ell |\psi_\ell|^2}\, \langle\psi| e^{-\beta\hat{H}} |\psi\rangle .   (486)
where
H(\psi^*, \psi') = \sum_{i,j} h_{ij}\, \psi^*_i \psi'_j + \sum_{i,j,k,l} V_{ijkl}\, \psi^*_i \psi^*_j \psi'_k \psi'_l .   (489)
In going from the first to the second line in (488) we have used that
After these steps we end up with a representation of the form
Z(\beta) = \lim_{N \to \infty} \int \prod_{m=1}^{N} d(\psi^{(m)}, \psi^{(m)*})\ \exp\Big( - \sum_{n=0}^{N-1} \Big[ \big(\psi^{(n)*} - \psi^{(n+1)*}\big) \cdot \psi^{(n)} + \epsilon\, H\big(\psi^{(n+1)*}, \psi^{(n)}\big) \Big] \Big) ,   (491)
where \epsilon = \beta/N and \psi^{(N)} \equiv \psi^{(0)}. In complete analogy to what we did in the single-particle case, we can now interpret the sequence \psi^{(1)}, \psi^{(2)}, \dots, \psi^{(N-1)} as a discretization of a path on the space of N_{SP}-dimensional complex vectors. In the limit N \to \infty this goes over into a vector-valued function of imaginary time \psi(\tau), and the partition function acquires the following formal expression
Z(\beta) = \int D\big[\psi^*(\tau), \psi(\tau)\big]\ e^{-S[\psi^*(\tau), \psi(\tau)]} ,   (493)
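As a consistency check of the discretized representation (491) (a sketch under the conventions reconstructed above: step \epsilon = \beta/N, periodic \psi^{(N)} = \psi^{(0)}, measure \prod d^2\psi/\pi), consider a single bosonic mode with H = \omega a^\dagger a, so that H(\psi^*, \psi') = \omega \psi^* \psi'. The integral is then Gaussian and can be evaluated as a determinant, which converges to the exact partition function 1/(1 - e^{-\beta\omega}) as N grows.

import numpy as np

# Single bosonic mode, H = omega a^dag a, hbar = 1. The discretized exponent
# is -psi^dag M psi with M_{nn} = 1 and M_{n+1,n} = -(1 - eps*omega)
# (indices mod N), so Z_N = 1/det(M) = 1/(1 - (1 - eps*omega)^N).
omega, beta = 1.0, 2.0

def Z_discretized(N):
    eps = beta / N
    M = np.eye(N, dtype=float)
    for n in range(N):
        M[(n + 1) % N, n] = -(1.0 - eps * omega)
    return 1.0 / np.linalg.det(M)

print("exact:", 1.0 / (1.0 - np.exp(-beta * omega)))
for N in (10, 100, 1000):
    print(N, Z_discretized(N))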