Advanced Quantum Mechanics Concepts
In this section we discuss why working with wavefunctions is not a good idea in systems with
N \sim 10^{23} particles, and introduce a much more suitable notation for this purpose.
For a single particle with spin S we use the shorthand x = (\vec r, S_z) for its position and spin projection, and |x\rangle = |\vec r, S_z\rangle for the corresponding basis states, which satisfy

\langle x|x'\rangle = \langle\vec r|\vec r\,'\rangle\,\langle S, S_z|S, S_z'\rangle = \delta(\vec r - \vec r\,')\,\delta_{S_z,S_z'} \equiv \delta_{x,x'}    (1)

and

1 = \sum_{S_z=-S}^{S} \int d\vec r\, |\vec r, S_z\rangle\langle\vec r, S_z| \equiv \int dx\, |x\rangle\langle x|    (2)
The last notation should hopefully remind us to sum over all discrete variables, and integrate over
all continuous ones. In this basis, any state can be decomposed as:
|\psi\rangle = \int dx\, |x\rangle\langle x|\psi\rangle = \sum_{S_z=-S}^{S} \int d\vec r\, \psi(\vec r, S_z)\, |\vec r, S_z\rangle
where ψ(x) = ψ(~r, Sz ) = h~r, Sz |ψi, also known as the wavefunction, is the amplitude of probability
that the particle in state |ψi is at position ~r with spin Sz . This amplitude is generally a complex
number. Note: if the particle has a spin S > 0, then \langle\vec r|\psi\rangle is a spinor with 2S + 1 entries,

\langle\vec r|\psi\rangle = \psi(\vec r) = \begin{pmatrix} \psi(\vec r, S) \\ \psi(\vec r, S-1) \\ \vdots \\ \psi(\vec r, -S) \end{pmatrix}
Then, for instance, the probability to find the particle at \vec r is \psi^\dagger(\vec r)\psi(\vec r) = \sum_{S_z} |\psi(\vec r, S_z)|^2, as
expected. Clearly, if we know the wavefunctions ψ(x) ∀x, we know the state of the system. Any
equation for |ψi can be turned into an equation for ψ(x) by simply acting on it with hx|, for
example: |\psi_1\rangle = |\psi_2\rangle \to \psi_1(x) = \psi_2(x). However, we also have to deal with expressions of the
form \langle x|\hat A|\psi\rangle, where \hat A is some abstract operator. In fact, the only operators that appear in the
Hamiltonian are (combinations of):
\langle x|\hat{\vec r}\,|\psi\rangle = \vec r\, \psi(\vec r, S_z)

\langle x|\hat{\vec p}\,|\psi\rangle = -i\hbar\nabla\psi(\vec r, S_z)

\langle x|\hat S_z|\psi\rangle = \hbar S_z\, \psi(\vec r, S_z); \qquad \langle x|\hat S_\pm|\psi\rangle = \hbar\sqrt{(S \pm S_z)(S \mp S_z + 1)}\; \psi(\vec r, S_z \mp 1)
You should convince yourselves that the two equations for the wavefunctions ψ(~r, Sz ) that we
obtain by projecting the abstract equation onto h~r, Sz | are equivalent to this one spinor equation.
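If you want to double-check the ladder-operator coefficients quoted above on a computer, here is a minimal sketch (Python/NumPy, not part of the original notes; hbar is set to 1 and the value of S is an arbitrary choice) which builds the spin matrices from those matrix elements and verifies the algebra they must satisfy:

import numpy as np

# Minimal sketch (assumptions: hbar = 1, S = 1 chosen arbitrarily): build S_z, S_+, S_- from
# <S, Sz+1| S_+ |S, Sz> = sqrt((S - Sz)(S + Sz + 1)) and check the su(2) algebra and Casimir.
S = 1.0
m = np.arange(S, -S - 1, -1)              # Sz values ordered S, S-1, ..., -S
dim = len(m)

Sz = np.diag(m)
Sp = np.zeros((dim, dim))
for col in range(1, dim):                 # column col has Sz = m[col]; S_+ raises it to m[col-1]
    Sp[col - 1, col] = np.sqrt((S - m[col]) * (S + m[col] + 1))
Sm = Sp.T

assert np.allclose(Sp @ Sm - Sm @ Sp, 2 * Sz)          # [S+, S-] = 2 Sz
S2 = Sm @ Sp + Sz @ Sz + Sz                            # S^2 = S_- S_+ + Sz^2 + Sz
assert np.allclose(S2, S * (S + 1) * np.eye(dim))      # eigenvalue S(S+1), as it must be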
All this may seem rather trivial and somewhat of a waste of time. However, it is useful to
remember that using wavefunctions \psi(x) is a choice and should be done only when convenient.
When not convenient, we may use another representation (for instance momentum states, i.e.
projecting onto the |\vec k, S_z\rangle basis, or coherent states, or any number of other options). Or, we may decide
to work directly in the abstract space.
For N particles, the many-body wavefunction is \Psi(x_1, \ldots, x_N, t) = \langle x_1, \ldots, x_N|\Psi(t)\rangle,
which is the amplitude of probability that at time t, particle 1 is at x_1 = (\vec r_1, \sigma_1), etc. Note that if
the number of particles is not fixed (as is the case in a grand-canonical ensemble) we're in trouble
deciding what bra \langle x_1, \ldots, x_N| to use. But let us ignore this for the moment, and see how we get in trouble even
if the number of particles is fixed.
We almost always decompose wavefunctions in a given basis. For a single particle, we know
that any wavefunction \Psi(x, t) can be decomposed in terms of a complete and orthonormal basis
\phi_\alpha(x) = \langle x|\alpha\rangle as \Psi(x, t) = \sum_\alpha c_\alpha(t)\,\phi_\alpha(x) – this reduces the problem to that of working with the
time-dependent complex numbers c_\alpha(t).
If we have a complete basis for a single-particle Hilbert space, we can immediately generate
a complete basis for the N -particle Hilbert space (the particles are identical), as the products of
one-particle basis states:
\Psi(x_1, \ldots, x_N, t) = \sum_{\alpha_1, \ldots, \alpha_N} c_{\alpha_1, \ldots, \alpha_N}(t)\, \phi_{\alpha_1}(x_1)\phi_{\alpha_2}(x_2)\cdots\phi_{\alpha_N}(x_N)
As usual, cα1 ,...,αN (t) is the amplitude of probability to find particle 1 in state α1 and located at
x1 , etc.
However, because the particles are identical we know that the wavefunctions must be symmetric
(for bosons) or antisymmetric (for fermions) under the interchange of any particles:

\Psi(x_{P_1}, \ldots, x_{P_N}, t) = \xi^P\, \Psi(x_1, \ldots, x_N, t)

where

P = \begin{pmatrix} 1 & 2 & \ldots & N \\ P_1 & P_2 & \ldots & P_N \end{pmatrix}

is any permutation, \xi^P is its sign (P counts the number of transpositions), and from now on we use the notation:

\xi = \begin{cases} -1, & \text{for fermions} \\ +1, & \text{for bosons} \end{cases}    (3)
It follows immediately that c_{\ldots,\alpha_i,\ldots,\alpha_j,\ldots} = \xi\, c_{\ldots,\alpha_j,\ldots,\alpha_i,\ldots} and so c_{\ldots,\alpha_i,\ldots,\alpha_i,\ldots} = 0 for fermions:
we cannot have two or more fermions occupying identical states αi – the Pauli principle is
automatically enforced through this symmetry.
Because of this requirement, the physically meaningful many-body fermionic (bosonic) wave-
functions are from the antisymmetric (symmetric) sector of the N -particle Hilbert space, and we
only need to keep the properly symmetrized basis states, which we denote as:
\phi_{\alpha_1, \ldots, \alpha_N}(x_1, \ldots, x_N) = \sqrt{\frac{\prod_i n_i!}{N!}} \sum_{P \in S_N} \xi^P\, \phi_{\alpha_1}(x_{P_1})\, \phi_{\alpha_2}(x_{P_2}) \cdots \phi_{\alpha_N}(x_{P_N})    (4)

The factor in front is the normalization constant; n_i is the total number of particles in the same
state \alpha_i (only important for bosonic systems; for fermionic ones all n_i = 1), and the summation is
over all possible permutations, of which there are N!/\prod_i n_i! distinct ones. You should check that
indeed, these functions are properly normalized. For fermions, such a properly antisymmetrized
product of one-particle states is called a Slater determinant.
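As a concrete illustration of Eq. (4) for fermions, here is a minimal sketch (Python/NumPy, not from the notes; the three 1D "orbitals" below are arbitrary toy choices and are not assumed orthonormal) showing that the antisymmetrized product can be evaluated as a determinant and flips sign when two coordinates are exchanged:

import numpy as np
from math import factorial, sqrt

# Toy 1D "orbitals" (illustrative only)
orbitals = [lambda x: np.exp(-x**2),
            lambda x: x * np.exp(-x**2),
            lambda x: (2 * x**2 - 1) * np.exp(-x**2)]

def slater(xs):
    """phi_{a1..aN}(x1..xN) = det[ phi_i(x_j) ] / sqrt(N!)   (all n_i = 1 for fermions)."""
    N = len(xs)
    M = np.array([[orbitals[i](xs[j]) for j in range(N)] for i in range(N)])
    return np.linalg.det(M) / sqrt(factorial(N))

x = [0.3, -1.1, 0.7]
x_swapped = [-1.1, 0.3, 0.7]                  # interchange particles 1 and 2
print(slater(x), slater(x_swapped))           # same magnitude, opposite sign (xi = -1)
assert np.isclose(slater(x), -slater(x_swapped))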
Then, any many-body wavefunction can be written as:
\Psi(x_1, \ldots, x_N, t) = \sum_{\alpha_1, \ldots, \alpha_N} c_{\alpha_1, \ldots, \alpha_N}(t)\, \phi_{\alpha_1, \ldots, \alpha_N}(x_1, \ldots, x_N)
This means that even if we are extremely lucky, and only a single combination of states α1 , . . . , αN
is occupied, so that the sum contains a single φα1 ,...,αN (x1 , . . . , xN ) basis wavefunction, this alone
contains on the order of N ! terms (from Eq. (4)). If there are more basis states involved in
the decomposition of Ψ, then there are that many more terms on the right-hand side of the
decomposition.
So now we can see why it is inconvenient to use this approach. Even if the number of particles
is fixed (which is usually not the case); and even if we have managed to solve somehow the problem
and find the many-body wavefunction Ψ(x1 , . . . , xN , t) (which is to say, we know its decomposition
coefficients cα1 ,...,αN (t) for the given basis) – what we really need in the end are expectation values of
single particle operators (such as the total momentum, or particle density, or whatever interests us)
or two-particle operators (such as a Coulomb potential interaction). Any single particle operator
is of the general form \hat A = \sum_{i=1}^N \hat A_i, where \hat A_i is the operator acting in the single-particle Hilbert space
of particle i. Similarly, two-particle operators are of the general form \hat B = \sum_{i<j} \hat B_{ij}, with a total of
N(N-1)/2 terms in the sum. So what we typically have to calculate is something of the form:

\langle\Psi|\hat O|\Psi\rangle = \sum_{\sigma_1, \ldots, \sigma_N} \int d\vec r_1 \cdots \int d\vec r_N\, \Psi^*(x_1, \ldots, x_N, t)\, \hat O\, \Psi(x_1, \ldots, x_N, t)
so that we have to perform N integrals over real space (all operators we deal with are diagonal in
positions); in general 2N sums over spin indices (which reduce to N if the operator is diagonal in
spin-space, as I assumed here) ... and all of this over a product of order (N!)^2 terms
contained in \Psi\Psi^*, times the N or N(N-1)/2 terms from the action of the operator. Not impossible,
but exceedingly unpleasant to keep track of all these terms. And as I said, if the number of particles
is not fixed (if we work in a grand-canonical ensemble) things become that much uglier.
The origin of these complications is the fact that we insisted on working with wavefunctions,
which contain a lot of useless information (which particle is where). In fact, because the particles
are indistinguishable, all we need to know is which one-particle states are occupied, and we do
not need to bother listing which particle is where – we know that all possible permutations will
appear, anyway. Keeping only the necessary information is precisely what 2nd quantization does.
3 2nd quantization
Notation: from now on, I will use a single index α to label states of a complete single-particle
basis (including the spin). Which basis to use depends on the problem at hand: for instance, for
translationally invariant problems, we will use α = (~k, σ) as quantum numbers, whereas if we deal
with an atom, we can use α = (n, l, m, σ) for its one-particle eigenstates (the usual hydrogen-like
orbitals). So α is simply a shorthand notation for the collection of all quantum numbers needed
to characterize the single-particle state: if there is a single particle in the system, we can identify
each of its possible states by a unique set of values for the numbers making up α.
Next, we define an ordering for these states (α1 , α2 , ...); for instance, we order them in increasing
order of energy for some single-particle Hamiltonian Eα1 < Eα2 < ... etc. If there are degenerate
states, we define some rule to order the states, e.g. spin-up first and spin-down second. You may
worry that if ~k is one of the quantum numbers, we cannot order continuous variables – however,
we will place the system in a box of volume V so that only discrete ~k values are allowed (which
we can order), and then let V → ∞ at the end of all calculations. So, in practice there is always
some way to order these one-particle states.
Once this ordering is agreed upon, we define the abstract vectors:
|n1 , n2 , . . .i (5)
as being the state with n1 particles in state “1” with index α1 , n2 particles in state “2” with index
α2 , etc. We have to list occupation numbers for all possible states – for an empty state, ni = 0.
Of course, for fermionic systems we can only have ni = 0 or 1, for any i (Pauli’s principle). For
bosons, ni = 0, 1, 2, ... can be any non-negative integer.
Examples (we’ll discuss more of these in class): the ground-state for N non-interacting fermions
is represented as |1, 1, ..., 1, 0, 0, ...i (first lowest-energy N states are occupied, the other ones are
all empty) whereas the ground-state of a non-interacting bosonic system is |N, 0, 0, ...i (all bosons
in the lowest energy state). The vacuum is |0, 0, ...i in both cases. Etc.
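If it helps to see such occupation-number states explicitly, here is a tiny sketch (illustrative only; M = 4 modes and N = 2 fermions are arbitrary choices, not taken from the notes) that lists the fermionic Fock basis states with a fixed particle number:

from itertools import combinations

M, N = 4, 2                 # 4 ordered one-particle states, 2 fermions
for occupied in combinations(range(M), N):
    n = ["1" if i in occupied else "0" for i in range(M)]
    print("|" + ",".join(n) + ">")
# For non-interacting fermions the ground state is the first one printed, |1,1,0,0>:
# the N lowest-energy one-particle states are filled.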
The ensemble of all possible states {|n1 , n2 , . . .i} is a complete orthonormal basis of the so-
called Fock space. The Fock space is the union of the Hilbert (fermionic or bosonic) spaces
with any number of particles, from zero (the vacuum) to any N \to \infty. If the number of particles
is fixed, we work in the Hilbert subspace of the Fock space defined by the constraint N = \sum_i n_i.
The link between these abstract states and the N -particle basis states of Eq. (4) is straightfor-
ward. For fermions, the Slater determinant \phi_{\alpha_1, \ldots, \alpha_N}(x_1, \ldots, x_N) = \langle x_1, \ldots, x_N|\ldots, n_{\alpha_1} = 1, \ldots, n_{\alpha_N} = 1, \ldots\rangle
is the wavefunction associated with the abstract state that has the one-particle states
\alpha_1, \ldots, \alpha_N occupied, while all other one-particle
states are empty; similarly for bosons, but there the occupation numbers can be larger than
1. As advertised, the abstract state contains only the key information of which states are occupied.
By contrast, the Slater determinants (and their bosonic analogues) also contain the unnecessary
information of which particle is where.
We would like now to be able to generate easily these abstract states, and also be able to
work with operators represented directly in this Fock space, so that computing matrix elements is
straightforward. Remember that these are identical particles, so these basis states must obey the
proper statistics. We enforce this in the following way: to each single-particle state α we associate
a pair of operators c_\alpha, c^\dagger_\alpha which obey the algebra:

[c_\alpha, c^\dagger_\beta]_\xi = \delta_{\alpha,\beta}; \qquad [c_\alpha, c_\beta]_\xi = [c^\dagger_\alpha, c^\dagger_\beta]_\xi = 0    (6)
We use the notation [a, b]ξ = ab − ξba to deal simultaneously with both fermions (ξ = −1)
and bosons (ξ = 1). When dealing with a well-defined type, we will use the usual notation for
commutators [, ]+ = [, ] and for anticommutators [, ]− = {, }. Also, in this section I will call
fermionic operators a_\alpha, a^\dagger_\alpha, and bosonic operators b_\alpha, b^\dagger_\alpha. I will use the c operators when I want
to deal with both types simultaneously. These operators are called creation and annihilation
operators. This is because as we will see next, a†α |Ψi is the state we obtain by adding a particle
in state α to the state |Ψi; similarly, aα |Ψi is the state we obtain by removing a particle in state
α from state |Ψi (if there is no particle in this state, then aα |Ψi = 0).
For fermions, Pauli’s principle is automatically obeyed because the second line of the Wigner-
Pauli algebra (Eq. (6)) gives that: (a†α )2 = (aα )2 = 0: we can’t create or annihilate two fermions
in/from the same state.
Let n̂α = c†α cα - as we will see now, this is the number operator, which tells us how many
particles are in state α.
For fermions n̂α n̂α = a†α aα a†α aα = a†α (1 − a†α aα )aα = n̂α . As a result, the operator n̂α can
only have eigenvalues 0 or 1 (which, of course, is precisely what Pauli said). Let us look at each α
individually (and not write its label α, for the moment). Then the eigenstates of this operator are
n̂|0i = 0; n̂|1i = |1i. Let’s now see how the individual operators a, a† act on these two states. Start
from n̂|0i = a† a|0i = 0 and act on both sides with a. Using aa† a = (1 − a† a)a = a because a2 = 0,
we find a|0i = 0. Which hopefully feels re-assuring, as it says that we cannot remove a particle if
the state is already empty. Let’s now see what is the answer for a|1i (what is your expectation?).
For this, I’ll calculate n̂a|1i = 0 because, again, a2 = 0 (Pauli is so useful!) But we know that
n̂|0i = 0, so we must have a|1i = C|0i where C is some normalization constant, which is easily
shown to be C = 1. So a|1i = |0i , i.e. removing a particle from the state with one particle, we
get to the empty state (no particle). Now for the creation operator: a† |0i = a† a|1i = n̂|1i = |1i,
so adding a particle to the empty state gives us the state with one particle (no more comments
needed, hopefully). And finally, a† |1i = (a† )2 |0i = 0, so indeed we cannot add a second particle if
there is already one in that state – Pauli.
To summarize, after a bit of algebra we found that for fermions:

a|0\rangle = 0, \quad a|1\rangle = |0\rangle, \quad a^\dagger|0\rangle = |1\rangle, \quad a^\dagger|1\rangle = 0

or, equivalently, |n\rangle = (a^\dagger)^n|0\rangle for n = 0, 1. For bosons one proceeds along the same lines: the eigenvalues of \hat n = b^\dagger b are now n = 0, 1, 2, \ldots and b^\dagger, b raise and lower n by one; the
normalization is then found to be:

b^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle; \qquad b|n\rangle = \sqrt{n}\,|n-1\rangle

and again, one can express the state with n bosons in state \alpha as:

|n\rangle = \frac{(b^\dagger)^n}{\sqrt{n!}}\,|0\rangle
for any n = 0, 1, 2, .... Formally this looks just like what we obtained for fermions, so we can again
deal with both cases simultaneously.
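A quick way to convince yourself of these relations is to write b as a matrix in the |n\rangle basis. The sketch below (Python/NumPy, not part of the notes; the cutoff nmax is an unavoidable truncation of the bosonic Hilbert space) checks them explicitly:

import numpy as np

nmax = 6                                             # keep only n <= nmax
b = np.diag(np.sqrt(np.arange(1.0, nmax + 1)), k=1)  # <n-1| b |n> = sqrt(n)
bd = b.T

assert np.allclose(np.diag(bd @ b), np.arange(nmax + 1))   # n_hat |n> = n |n>
comm = b @ bd - bd @ b
assert np.allclose(comm[:-1, :-1], np.eye(nmax))           # [b, b^dag] = 1 away from the cutoff

n = 3
ket = np.eye(nmax + 1)[n]                                  # the state |n>
assert np.allclose(bd @ ket, np.sqrt(n + 1) * np.eye(nmax + 1)[n + 1])   # b^dag|n> = sqrt(n+1)|n+1>
assert np.allclose(b @ ket,  np.sqrt(n) * np.eye(nmax + 1)[n - 1])       # b|n>     = sqrt(n)|n-1>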
Definition: we define a general Fock basis state as:

|n_1, n_2, \ldots\rangle = \frac{(c^\dagger_1)^{n_1}}{\sqrt{n_1!}}\, \frac{(c^\dagger_2)^{n_2}}{\sqrt{n_2!}} \cdots |0\rangle    (7)

Let us see how c_\alpha acts on such a state. Because of the algebra of Eq. (6),
c_\alpha (anti)commutes with all operators it meets before arriving near (c^\dagger_\alpha)^{n_\alpha}. There are n_1 + \ldots + n_{\alpha-1} such
operators. We define the integers:

S_\alpha = \sum_{i=1}^{\alpha-1} n_i

and so:

c_\alpha\, \frac{(c^\dagger_1)^{n_1}}{\sqrt{n_1!}} \cdots \frac{(c^\dagger_\alpha)^{n_\alpha}}{\sqrt{n_\alpha!}} \cdots |0\rangle = \xi^{S_\alpha}\, \frac{(c^\dagger_1)^{n_1}}{\sqrt{n_1!}} \cdots c_\alpha\, \frac{(c^\dagger_\alpha)^{n_\alpha}}{\sqrt{n_\alpha!}} \cdots |0\rangle
Since (this is true for both fermions and bosons, see above)

c_\alpha|n_\alpha\rangle = \sqrt{n_\alpha}\,|n_\alpha - 1\rangle

it follows that:

c_\alpha|n_1, \ldots, n_\alpha, \ldots\rangle = \xi^{S_\alpha}\sqrt{n_\alpha}\,|n_1, \ldots, n_\alpha - 1, \ldots\rangle
This looks similar to what we had above, except for the factor ξ Sα = ±1 which keeps track of the
ordering and how many other states in “front” of state α are occupied. This sign is obviously very
important for fermions (for bosons it is always 1). Similarly, one finds that:
c^\dagger_\alpha|n_1, \ldots, n_\alpha, \ldots\rangle = \xi^{S_\alpha}\sqrt{\xi n_\alpha + 1}\,|n_1, \ldots, n_\alpha + 1, \ldots\rangle
So indeed, as advertised, creation operators add one more particle in the corresponding state, i.e.
increase that occupation number by 1 (for fermions this is only allowed if nα = 0, of course, and
the prefactor takes care of that); while annihilation operators remove one particle from that state.
Since this is true for any state in the basis, it will also be true for any general states because they
can be decomposed as a linear combination of basis states.
With these two equations, we can now compute the action of any operator because, as we show
next, any operator can be written in terms of creation and annihilation operators.
Just one more example, to see that we get sensible results. Applying twice these rules, we find
c^\dagger_\alpha c_\alpha|n_1, \ldots, n_\alpha, \ldots\rangle = \xi^{S_\alpha}\sqrt{n_\alpha}\; c^\dagger_\alpha|n_1, \ldots, n_\alpha - 1, \ldots\rangle = \xi^{S_\alpha}\sqrt{n_\alpha}\; \xi^{S_\alpha}\sqrt{\xi(n_\alpha - 1) + 1}\,|n_1, \ldots, n_\alpha, \ldots\rangle

Since \xi^2 = 1, \xi^{2S_\alpha} = 1. If \xi = 1, then \sqrt{n_\alpha[\xi(n_\alpha - 1) + 1]} = n_\alpha. If \xi = -1, then \sqrt{n_\alpha[\xi(n_\alpha - 1) + 1]} =
\sqrt{n_\alpha(2 - n_\alpha)} = n_\alpha, because in this case we're dealing with fermions, and we can only have
n_\alpha = 0, 1. So we find, as expected, that:

\hat n_\alpha|n_1, \ldots, n_\alpha, \ldots\rangle = n_\alpha\,|n_1, \ldots, n_\alpha, \ldots\rangle
In other words, this operator indeed counts how many particles are in that state.
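Everything derived so far can also be checked by brute force on a small Fock space. The sketch below (Python/NumPy, not part of the notes; M = 3 modes is an arbitrary choice) builds the matrices of c_\alpha and c^\dagger_\alpha for fermions, including the \xi^{S_\alpha} sign, and verifies the algebra of Eq. (6) and the fact that \hat n_\alpha only has eigenvalues 0 and 1:

import numpy as np

M = 3                         # number of ordered one-particle states
dim = 2**M                    # fermionic Fock space: each basis state is a bit pattern n_1 n_2 n_3

def fermi_ops(M):
    """Matrices of c_a and c_a^dag in the occupation-number basis (bit a of the index is n_{a+1})."""
    dim = 2**M
    cs = []
    for a in range(M):
        mat = np.zeros((dim, dim))
        for s in range(dim):
            if (s >> a) & 1:                                   # c_a gives zero unless n_a = 1
                S_a = bin(s & ((1 << a) - 1)).count("1")       # number of occupied states before a
                mat[s ^ (1 << a), s] = (-1) ** S_a             # xi^{S_a} sign for fermions
        cs.append(mat)
    return cs, [m.T for m in cs]

c, cd = fermi_ops(M)

for a in range(M):
    for b in range(M):
        assert np.allclose(c[a] @ cd[b] + cd[b] @ c[a], (a == b) * np.eye(dim))   # {c_a, c_b^dag} = delta
        assert np.allclose(c[a] @ c[b] + c[b] @ c[a], 0)                          # {c_a, c_b} = 0

n_ops = [cd[a] @ c[a] for a in range(M)]
for n in n_ops:
    assert np.allclose(n @ n, n)          # idempotent, so its eigenvalues can only be 0 or 1 (Pauli)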
The only other question we need to answer now is how to write general operators in terms of
these creation and annihilation operators. Here's the answer:

Theorem: If \hat A = \sum_{i=1}^N A_i is a single-particle operator, with A_i acting only on particle "i", then:

\hat A = \sum_{\alpha,\alpha'} \langle\alpha|A|\alpha'\rangle\, c^\dagger_\alpha c_{\alpha'}    (8)

where

\langle\alpha|A|\alpha'\rangle = \sum_{\sigma,\sigma'} \int d\vec r\, \phi^*_\alpha(\vec r, \sigma)\, \langle\vec r, \sigma|A|\vec r, \sigma'\rangle\, \phi_{\alpha'}(\vec r, \sigma')    (9)
I will not reproduce the proof here because it’s long. We will discuss it in class and you can also
find it in any standard textbook, for example Orland and Negele, or Fetter and Walecka. However,
let me list the steps here. First, since a single-particle operator acts on just one particle (can be
any of them), all it can do is either leave it in the state it was in, α → α, or change its state α0 → α.
All terms in Eq. (8) describe such processes. The question is how to find the coefficients for each
process, namely hα|A|α0 i. This is done so that the matrix elements of  in a complete basis are
correct – and it is a straightforward, but boring and time-consuming task to verify that indeed Eq.
(9) is the correct answer. Note: in writing Eq. (9), I assumed that the single-particle operator A
is diagonal in position, which is basically always the case. Before showing some examples, let me
state the result for two-particle operators so that we’re done with formalities:
Theorem: If \hat B = \frac{1}{2}\sum_{i \neq j} B_{i,j} is a two-particle operator, with B_{i,j} acting only on particles "i"
and "j" (1 \le i, j \le N), then:

\hat B = \frac{1}{2} \sum_{\alpha,\alpha',\beta,\beta'} \langle\alpha, \beta|B|\alpha', \beta'\rangle\, c^\dagger_\alpha c^\dagger_\beta c_{\beta'} c_{\alpha'}    (10)

where

\langle\alpha, \beta|B|\alpha', \beta'\rangle = \sum_{\sigma_1,\sigma_1',\sigma_2,\sigma_2'} \int d\vec r_1 \int d\vec r_2\, \phi^*_\alpha(\vec r_1, \sigma_1)\,\phi^*_\beta(\vec r_2, \sigma_2)\, \langle\vec r_1, \sigma_1; \vec r_2, \sigma_2|B|\vec r_1, \sigma_1'; \vec r_2, \sigma_2'\rangle\, \phi_{\alpha'}(\vec r_1, \sigma_1')\,\phi_{\beta'}(\vec r_2, \sigma_2')    (11)
NOTE the order of listing the annihilation operators in Eq. (10) !!! Do not list them in
the “expected” order cα0 cβ 0 because for fermions that implies changing the sign of the interaction
from attractive to repulsive (or vice-versa) and that’s bound to make all the subsequent results
very very wrong!
All I’m going to say about two-particle operators is that since they act on two particles, they
can change the states of up to two particles, and that’s precisely what all terms in Eq. (10)
describe. And probably I’ll give you the proof of this theorem as a homework, because it might be
good to force you to calculate such an expectation value with wavefunctions, once in your lives.
After that “enjoyable” experience, you’ll adopt this 2nd quantization notation much more happily.
Let’s see some examples, before you start thinking that this is all too complicated. We start
first with some single-particle operators:
(a) The kinetic energy \hat T = \sum_{i=1}^N \hat{\vec p}_i^{\,2}/2m. Since

\langle\vec r, \sigma|\frac{\hat{\vec p}^{\,2}}{2m}|\vec r, \sigma'\rangle = \delta_{\sigma,\sigma'}\, \frac{-\hbar^2}{2m}\nabla^2

we get from Eq. (9):

\langle\alpha|\frac{\vec p^{\,2}}{2m}|\alpha'\rangle = \sum_\sigma \int d\vec r\, \phi^*_\alpha(\vec r, \sigma)\, \frac{-\hbar^2}{2m}\nabla^2\, \phi_{\alpha'}(\vec r, \sigma)

In particular, for the plane-wave basis \alpha = (\vec k, \sigma):

\langle\vec k, \sigma|\frac{\vec p^{\,2}}{2m}|\vec k', \sigma'\rangle = \delta_{\sigma,\sigma'}\, \delta_{\vec k,\vec k'}\, \frac{\hbar^2 k^2}{2m}

(which you can either write down directly, or check by doing the integral I wrote above for these
plane-wave basis states), and so, in this basis:

\hat T = \sum_{\vec k,\sigma} \frac{\hbar^2 k^2}{2m}\, c^\dagger_{\vec k,\sigma} c_{\vec k,\sigma}    (12)
This, you should agree, makes a lot of sense. What this tells us is that this operator counts how
many particles have a given momentum \vec k and spin \sigma (i.e., \hat n_{\vec k,\sigma} = c^\dagger_{\vec k,\sigma} c_{\vec k,\sigma}), multiplies that number
by the kinetic energy \hbar^2 k^2/2m associated with this state, and sums over all possible states. You should
also check that the operator for the total number of particles, \hat N = \sum_{i=1}^N 1 (in first quantization),
becomes \sum_\alpha c^\dagger_\alpha c_\alpha (in second quantization), no matter what basis we use. Hopefully this also makes
perfect sense to you!
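If you prefer to check the plane-wave matrix element numerically rather than analytically, here is a small sketch (1D ring of length L with hbar = m = 1; all numerical values are arbitrary choices, not taken from the notes) that evaluates the integral of Eq. (9) with a finite-difference Laplacian:

import numpy as np

L, npts = 10.0, 2000
x = np.linspace(0.0, L, npts, endpoint=False)       # periodic grid
dx = x[1] - x[0]
ks = 2 * np.pi * np.arange(-3, 4) / L               # a few allowed momenta on the ring

def phi(k):
    return np.exp(1j * k * x) / np.sqrt(L)          # plane-wave basis state <x|k>

def melem(k, kp):
    f = phi(kp)
    lap = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2      # periodic second derivative
    return np.sum(np.conj(phi(k)) * (-0.5) * lap) * dx          # int dx phi_k^* (p^2/2m) phi_kp

for k in ks:
    for kp in ks:
        expected = k**2 / 2 if np.isclose(k, kp) else 0.0
        assert np.isclose(melem(k, kp), expected, atol=1e-3)    # = (hbar^2 k^2/2m) delta_{k,k'}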
Note: the operator in the second-quantized form is independent of the number of particles N
of the state it acts upon! In the first quantization, if we have a wavefunction with N particles,
then the kinetic energy must be the sum of the N single-particle kinetic energies. If we have
N + 2 particles, then the kinetic energy is a different operator, with N + 2 terms. By contrast,
in the second-quantization the kinetic energy always is like in Eq. (12) (if we choose this basis).
This means we can act with it on wavefunctions which are superpositions of states with different
numbers of particles, no problem! This is a significant improvement.
I will show one more example, and give as assignment a few more usually encountered single-
particle operators, so that you see that the final result is always what common-sense would predict.
(b) The total spin operator, \vec S = \sum_{i=1}^N \vec s_i. For simplicity, let us assume we're still in the basis
\alpha = (\vec k, \sigma). We need the matrix elements; let's compute them for each spin projection separately.
I will assume that we work with spins 1/2, which can be written in terms of the Pauli matrices,
\vec s = \frac{\hbar}{2}\vec\sigma. Then:

\langle\vec k, \sigma|s_z|\vec k', \sigma'\rangle = \delta_{\vec k,\vec k'}\, \delta_{\sigma,\sigma'}\, \frac{\hbar}{2}\sigma
so that
\hat S_z = \frac{\hbar}{2} \sum_{\vec k,\sigma} \sigma\, c^\dagger_{\vec k,\sigma} c_{\vec k,\sigma} = \frac{\hbar}{2} \sum_{\vec k} \left( c^\dagger_{\vec k\uparrow} c_{\vec k\uparrow} - c^\dagger_{\vec k\downarrow} c_{\vec k\downarrow} \right)

i.e., we simply subtract the number of particles with spin down from those with spin up, and multiply
by \hbar/2 to find the total spin in the z-direction. Similarly:
\hat S_+ = \hat S_x + i\hat S_y = \hbar \sum_{\vec k} c^\dagger_{\vec k\uparrow} c_{\vec k\downarrow}; \qquad \hat S_- = \hat S_x - i\hat S_y = \hbar \sum_{\vec k} c^\dagger_{\vec k\downarrow} c_{\vec k\uparrow}
are indeed raising and lowering the total spin, by flipping spins-down into spins-up, or vice versa
(while leaving the translational part, e.g. the momentum carried by the particles, unchanged). Putting it all
together, we can write:
\hat{\vec S} = \sum_{\vec k,\sigma,\sigma'} c^\dagger_{\vec k,\sigma}\, \frac{\hbar\,\vec\sigma_{\sigma\sigma'}}{2}\, c_{\vec k,\sigma'}

where \vec\sigma_{\sigma\sigma'} are the matrix elements of the Pauli matrices. Again, this is correct no matter how
many particles are in the system!
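As a sanity check that these second-quantized expressions really behave like a spin, the sketch below (Python/NumPy, not from the notes; a single \vec k point and hbar = 1 are assumed, and the small fermi_ops helper just repeats the construction from the earlier sketch so the snippet runs on its own) verifies the su(2) commutators on the full Fock space:

import numpy as np

def fermi_ops(M):
    dim = 2**M
    cs = []
    for a in range(M):
        mat = np.zeros((dim, dim))
        for s in range(dim):
            if (s >> a) & 1:
                mat[s ^ (1 << a), s] = (-1) ** bin(s & ((1 << a) - 1)).count("1")
        cs.append(mat)
    return cs, [m.T for m in cs]

(c_up, c_dn), (cd_up, cd_dn) = fermi_ops(2)     # the two modes are (k, up) and (k, down)

Sp = cd_up @ c_dn                               # S_+ = c^dag_{k up} c_{k down}   (hbar = 1)
Sm = cd_dn @ c_up                               # S_- = c^dag_{k down} c_{k up}
Sz = 0.5 * (cd_up @ c_up - cd_dn @ c_dn)

assert np.allclose(Sp @ Sm - Sm @ Sp, 2 * Sz)   # [S+, S-] = 2 Sz
assert np.allclose(Sz @ Sp - Sp @ Sz, Sp)       # [Sz, S+] = +S+
assert np.allclose(Sz @ Sm - Sm @ Sz, -Sm)      # [Sz, S-] = -S-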
(c) Two-particle operator \hat V = \frac{1}{2}\sum_{i \neq j} u(\vec r_i - \vec r_j) for two-particle interactions (for instance,
u(\vec r) = e^2/r can be the Coulomb repulsion between electrons). For simplicity, I assume that the
interactions depend only on the distance between particles, but not on their spins (true for Coulomb
and all other examples we'll consider in this course. But more general options exist, and you can
deal with them using Theorem 2).

We'll continue to work in the basis \alpha = (\vec k, \sigma), so that the basis wavefunctions are simple
plane waves \langle\vec r|\vec k\rangle = e^{i\vec k\cdot\vec r}/\sqrt{V}, where V is the volume of the system. (I will assume V finite to begin
with, since in this case the allowed \vec k values are discrete, and it makes sense to write \sum_{\vec k}, not
\int d\vec k. We can let the volume V \to \infty at the end, if need be.) Since:

\langle\vec r_1, \sigma_1; \vec r_2, \sigma_2|\,\hat u(\hat{\vec r} - \hat{\vec r}\,')\,|\vec r_1, \sigma_1'; \vec r_2, \sigma_2'\rangle = u(\vec r_1 - \vec r_2)\, \delta_{\sigma_1,\sigma_1'}\, \delta_{\sigma_2,\sigma_2'}
Eq. (11) becomes:
\langle\vec k_1, \sigma_1; \vec k_2, \sigma_2|u|\vec k_3, \sigma_3; \vec k_4, \sigma_4\rangle = \delta_{\sigma_1,\sigma_3}\, \delta_{\sigma_2,\sigma_4} \int d\vec r_1 \int d\vec r_2\, u(\vec r_1 - \vec r_2)\, \frac{e^{-i\vec k_1\cdot\vec r_1 - i\vec k_2\cdot\vec r_2}\, e^{i\vec k_3\cdot\vec r_1 + i\vec k_4\cdot\vec r_2}}{V^2}
We now define the Fourier transform of the interaction potential:

u(\vec r) = \frac{1}{V}\sum_{\vec q} e^{i\vec q\cdot\vec r}\, u_{\vec q}; \qquad u_{\vec q} = \int d\vec r\, e^{-i\vec q\cdot\vec r}\, u(\vec r)

Using the identity \int d\vec r\, e^{i(\vec k - \vec k')\cdot\vec r} = V\delta_{\vec k,\vec k'} we can now do the two integrals over \vec r_1, \vec r_2 easily. The
result is:

\hat V = \frac{1}{2V} \sum_{\vec k, \vec k', \vec q} \sum_{\sigma,\sigma'} u_{\vec q}\, c^\dagger_{\vec k+\vec q,\sigma}\, c^\dagger_{\vec k'-\vec q,\sigma'}\, c_{\vec k',\sigma'}\, c_{\vec k,\sigma}
In other words, two particles that initially were in the states (~k, σ), (k~0 , σ 0 ) have interacted with
one another, exchanged some momentum ~q (but with total momentum conserved, as it should be
since this interaction is invariant to global translations), and end up in the states (~k + ~q, σ), (k~0 −
~q, σ 0 ). The vertex uq~ is directly determined by u(~r) and basically shows which ~q is more likely
to be exchanged during the interaction. Because this two-particle interaction does not act on
spins, these remain unchanged between the initial and final states. Of course, if we choose a
more complicated two-particle interaction that also involves the spins somehow, then its second
quantization expression will reflect that. In this course we will only consider Coulomb interactions
which do not depend on spins, so the equation listed above is the only one we will need.
We can similarly write the second-quantization expression for any operator, in terms of creation
and annihilation operators. Because we know how c, c† act on the Fock basis states |n1 , ...i, we
can now easily and efficiently perform calculations such as finding matrix elements of various
operators.
In particular, consider how much easier it is now to calculate \langle\Psi|\hat O|\Psi\rangle for some |\Psi\rangle =
\sum_{n_1,n_2,\ldots} c_{n_1,n_2,\ldots}\, |n_1, n_2, \ldots\rangle, i.e. if we work in the abstract space using the 2nd quantization
(contrast this with the earlier discussion of expectation values computed with wavefunctions). Unlike there, here
we don't have to perform the N integrals over positions, the 2N sums over spins, and moreover
each |n_1, n_2, \ldots\rangle is a single object, not an (antisymmetrized) sum of N! objects.
So doing calculations is much simpler once we adopt the second quantization notation, and in
addition we are no longer forced to work with states that have a fixed number of particles, so we can easily
deal with grand-canonical ensembles. You might say, though, that having to list an infinite number
of occupation numbers n1 , n2 , ... for each basis state is less than ideal. And you are right, but in
practice we don’t do that. Let me denote the state without particles |n1 = 0, n2 = 0, ....i ≡ |0i
(we could call this the vacuum, but don’t forget that it’s only the vacuum for the particles we’re
describing as possibly occupying these one-particle states – example, it could be the vacuum for
the valence electrons, but that still leaves lots of core electrons and nuclei in the system). This
|0i is the only state with N = 0. Consider now states with N = 1, i.e. with one particle. If the
particle occupies state α, this is the state |..., 0, 1, 0, ...i where the 1 corresponds to nα = 1, and
all other nβ = 0. As discussed, this state is generated from the vacuum as |..., 0, 1, 0, ...i = c†α |0i.
Similarly, a state with one particle in state \alpha and one in \beta \neq \alpha is |\ldots, 0, 1, 0, \ldots, 0, 1, 0, \ldots\rangle = c^\dagger_\alpha c^\dagger_\beta|0\rangle.
In practice, we'll use the right-hand side notation for states, which is more compact than the left-hand side one. In other
words, we don’t bother listing all the empty states, we only specify which ones are occupied.
Before moving on, let me comment on a special one-particle basis, namely α = (~r, σ) – the basis
associated with the position and spin operators. Can we use this basis for the 2nd quantization?
The answer is of course yes, because it is a complete one-particle basis set, and that is all we asked
for. It is customary to use special notation for the creation and annihilation operators associated
with adding/removing a particle with spin \sigma from position \vec r, namely \hat\Psi^\dagger_\sigma(\vec r), \hat\Psi_\sigma(\vec r) (instead of
c^\dagger_{\vec r,\sigma}, c_{\vec r,\sigma}, although they are precisely the same objects), and to call them field operators. We
will not need these operators in this course, so I will not insist on this topic. However, you will
need them in Phys503 and in general, so it is useful to at least have some idea how they work
(which, really, is just like any other set of creation and annihilation operators). I added some extra
material at the end of this section if you want to read more about them. I also explain there why
this notation is called the second quantization.
where the electronic Hamiltonian is:
\hat H_{el} = \sum_{i=1}^N \frac{\hat{\vec p}_i^{\,2}}{2m} + \frac{1}{2}\sum_{i \neq j} u(\vec r_i - \vec r_j)    (14)
We now go to the second quantization, and use the (\vec k, \sigma) basis, to find:

\hat H = \sum_{\vec k,\sigma} \frac{\hbar^2 k^2}{2m}\, c^\dagger_{\vec k\sigma} c_{\vec k\sigma} + \frac{1}{2V} \sum_{\vec k, \vec k', \vec q} \sum_{\sigma,\sigma'} u_{\vec q}\, c^\dagger_{\vec k+\vec q,\sigma}\, c^\dagger_{\vec k'-\vec q,\sigma'}\, c_{\vec k',\sigma'}\, c_{\vec k\sigma} - \frac{N^2}{2V}\, u_0
(these terms were discussed individually in the previous section). Let us consider the q = 0
contribution from the electron-electron interaction:
\frac{u_0}{2V} \sum_{\vec k,\vec k',\sigma,\sigma'} c^\dagger_{\vec k,\sigma}\, c^\dagger_{\vec k',\sigma'}\, c_{\vec k',\sigma'}\, c_{\vec k\sigma} = \frac{u_0}{2V} \sum_{\vec k,\vec k',\sigma,\sigma'} c^\dagger_{\vec k,\sigma} c_{\vec k\sigma}\, c^\dagger_{\vec k',\sigma'} c_{\vec k',\sigma'} = \frac{u_0}{2V}\, N^2
Here, in the first equality we use the fact that (\vec k\sigma) \neq (\vec k',\sigma'), because if they are equal then c_{\vec k\sigma} c_{\vec k\sigma} =
0 (electrons are definitely fermions). As a result, we can anticommute c_{\vec k\sigma} past the two other
operators to obtain the middle expression. In the last equality, we used the fact that \sum_{\vec k\sigma} c^\dagger_{\vec k,\sigma} c_{\vec k\sigma} =
N if the system contains N electrons (strictly speaking, the condition (\vec k\sigma) \neq (\vec k',\sigma') means that we
have N(N-1) instead of N^2, but in the thermodynamic limit N \to \infty this makes no difference).
So we see that this exactly cancels the term left over from the background-background and electron-background Hamiltonians (so
we are no longer troubled even if µ → 0), and we obtain the jellium model Hamiltonian:
\hat H = \sum_{\vec k,\sigma} \frac{\hbar^2 k^2}{2m}\, c^\dagger_{\vec k\sigma} c_{\vec k\sigma} + \frac{1}{2V} \sum_{\vec k, \vec k', \vec q \neq 0} \sum_{\sigma,\sigma'} u_{\vec q}\, c^\dagger_{\vec k+\vec q,\sigma}\, c^\dagger_{\vec k'-\vec q,\sigma'}\, c_{\vec k',\sigma'}\, c_{\vec k\sigma}    (15)
Note: this looks the same whatever the interaction happens to be (so long as it is spin inde-
pendent, which again, is the only kind of interaction we’ll consider in this course). For instance,
we may want to consider short-range interactions u(~r) = U δ(~r) → uq~ = U instead of the Coulomb
interaction. The Hamiltonian has the same form, we just have to use the appropriate uq .
Consider now electrons moving in a lattice of nuclei frozen at positions \vec R_i, and let |i, n, \sigma\rangle denote the state of an electron with spin \sigma in the atomic orbital n centered at site \vec R_i, so that

\langle\vec r, \sigma|i, n, \sigma'\rangle = \delta_{\sigma,\sigma'}\, \phi_n(\vec r - \vec R_i)

is the atomic-like wavefunction associated with this state. We then define a^\dagger_{i,n,\sigma}, a_{i,n,\sigma} as the
associated creation and annihilation operators for electrons with spin \sigma in the orbital n of
the i-th atom.
Let us start again from our CM “Theory of everything” + BOA, but let’s not make the ions vs.
valence electrons approximation yet. In other words, we have the nuclei frozen at their equilibrium
lattice positions, and all the electrons (both core and valence) occupying various orbitals about
various nuclei. We’ll see at the end why/when we can ignore the core electrons and return to
thinking about ions + valence electrons only.
With this approximation, the Hamiltonian in the second quantization becomes:

\hat H = \sum_{i,n,\sigma;\, i',n',\sigma'} \langle i, n, \sigma|\hat T + \hat V_{ext}|i', n', \sigma'\rangle\, a^\dagger_{i,n,\sigma}\, a_{i',n',\sigma'}
+ \frac{1}{2} \sum_{\substack{i_1,n_1,\sigma_1,\, i_1',n_1',\sigma_1' \\ i_2,n_2,\sigma_2,\, i_2',n_2',\sigma_2'}} \langle i_1, n_1, \sigma_1; i_2, n_2, \sigma_2|\hat V_{e-e}|i_1', n_1', \sigma_1'; i_2', n_2', \sigma_2'\rangle\, a^\dagger_{i_1,n_1,\sigma_1}\, a^\dagger_{i_2,n_2,\sigma_2}\, a_{i_2',n_2',\sigma_2'}\, a_{i_1',n_1',\sigma_1'}

The first line has the contribution from the one-particle operators, i.e. the kinetic energy \hat T = \sum_i \hat{\vec p}_i^{\,2}/2m
and the interaction of the electrons with the potential created by the frozen ions, \hat V_{ext} = -Z\sum_i v(\vec r - \vec R_i); and the second one has the two-particle operators, i.e. the electron-electron repulsion \hat V_{e-e} =
v(~r − ~r0 ), where v(r) is the (maybe screened) Coulomb interaction. Of course, there is also a
contribution from the nucleus-nucleus repulsion, but if the nuclei are frozen that is just some
overall constant and I am not writing it here (that constant is very important for the cohesion
energy of the material, but it’s not affecting what the electrons do).
Let us consider (\hat T + \hat V_{ext})|j, m, \sigma'\rangle first. We can divide the potential into the term corresponding to this particular nucleus j and the interaction with all other nuclei: \hat V_{ext} = -Zv(\vec r - \vec R_j) -
Z\sum_{l \neq j} v(\vec r - \vec R_l). If |j, m, \sigma'\rangle is an atomic orbital, then (\hat T - Zv(\vec r - \vec R_j))|j, m, \sigma'\rangle = E_m|j, m, \sigma'\rangle,
i.e. it is an eigenstate of the one-particle problem with only that one nucleus in the system. Here,
Em are the atomic energies of the 1s, 2s, ... corresponding levels. As a result, we can write the first
line as:
H_1 = \sum_{i,n,\sigma} E_n\, a^\dagger_{i,n,\sigma} a_{i,n,\sigma} - \sum_{i,n,\sigma;\, j,m} t_{i,n;j,m}\, a^\dagger_{i,n,\sigma} a_{j,m,\sigma}

where

t_{i,n;j,m} = \langle i, n, \sigma|\, Z \sum_{l \neq j} v(\vec r - \vec R_l)\, |j, m, \sigma\rangle
is due to the attraction that the electron in the state j, m, σ feels from all other nuclei l 6= j in
the system. Note that this term is diagonal in spin. So looking at H_1, the first term counts the
energy of the electrons if they were in isolated atoms – how many are on each level, times their
atomic level energy. The second reflects the influence of the other nuclei. The second term can
be split into two parts: there are terms where i = j, n = m, namely -\sum_{i,n,\sigma} t_{i,n;i,n}\, a^\dagger_{i,n,\sigma} a_{i,n,\sigma}.
By symmetry, t_{i,n;i,n} \equiv t_n cannot depend on which nucleus we're talking about. So these can be
combined with the atomic terms and change E_n \to E_n - t_n \equiv \tilde E_n. This shows that one effect of
the other nuclei is to lower the value of the effective atomic energies, simply because an electron
feels attraction not just from its own nucleus but there is also some negative potential created by
all other nuclei in the system. Depending on the crystal structure, this additional potential may lift
some of the degeneracies that are present in the isolated atom, for example between the p_x, p_y, p_z
orbitals, or between the d_{xy}, d_{xz}, d_{yz}, d_{x^2-y^2}, d_{3z^2-r^2} orbitals. These are known as crystal field effects. In
class we'll briefly discuss a simple example of how the degeneracy between a d_{xy} and a d_{x^2-y^2}
orbital is lifted in a square lattice.
The remaining terms change the state of the electron because they move it from one orbital into
a different one. There are terms with i = j but n \neq m, which keep the electron in the same atom
but change its orbital. These are always ignored, so far as I know. One possible explanation is as
follows. Consider the matrix element \langle i, n, \sigma|Z\sum_{l \neq i} v(\vec r - \vec R_l)|i, m, \sigma\rangle associated with such terms.
The lattice is symmetric (eg, cubic) so the total potential created by all other nuclei is almost
spherical, because the other nuclei are placed very orderly around the site i. If the potential was
truly spherical, then angular momentum conservation would guarantee that these terms vanish for
any orbitals n, m with different angular momentum quantum numbers. So how good or bad is this
approximation is a matter of how close or far is this potential from being spherically symmetric.
For inner orbitals that are located very close to nucleus i, the approximation is very good. For more
extended orbitals the approximation is more and more dubious. Depending on the symmetry of the
crystal and if one uses the so-called real wavefunctions, i.e. p_x, p_y, p_z instead of \phi_{n,l=1,m}(\vec r) with
m = -1, 0, 1 and d_{xy}, d_{xz}, d_{yz}, d_{x^2-y^2}, d_{3z^2-r^2} instead of \phi_{n,l=2,m}(\vec r) with m = -2, \ldots, 2, additional
symmetries may also guarantee that these off-diagonal matrix elements vanish. In any event, such
terms are ignored and we will do so as well in the following discussion.
Finally, we also have terms with i 6= j, which show that due to attraction from other nuclei,
an electron can “hop” from one orbital of one atom into another orbital of another atom. The
energies t_{i,n;j,m} associated with these processes are called hopping matrix elements or hopping integrals. Again,
it is customary to make some approximations and not keep all such terms. A very reasonable one
comes from keeping only terms where i and j are nearest-neighbor sites, because the atomic orbital
wavefunctions decay exponentially so these terms will be the largest of all. Not surprisingly, this is
called the nearest-neighbor hopping model. (Of course, when necessary we can include longer
range hopping as well). Because the nuclei are on a lattice, all these matrix elements have equal
magnitude although the sign may change, depending on the sign of the orbitals involved. To be
more specific, let i and j be two adjacent sites, and let me approximate:
t_{i,n;j,m} \approx \langle i, n, \sigma|Zv(\vec r - \vec R_i)|j, m, \sigma\rangle = Z \int d\vec r\, \phi^*_n(\vec r - \vec R_i)\, v(\vec r - \vec R_i)\, \phi_m(\vec r - \vec R_j)

where I kept from \sum_{l \neq j} only the term l = i, which will contribute most to the matrix element (all
other nuclei are even farther away so they create a much smaller potential). Given that all pairs
of nearest-neighbor nuclei are separated by the same distance |\vec R_i - \vec R_j|, this integral must
have the same magnitude for all such hoppings.
The sign, however, can change depending on details. By definition v(r) is a positive quantity, so
the sign of this integral will depend on the signs of the orbitals. If they are both 1s-type then they
are positive everywhere and the corresponding t is positive and the same for all pairs of neighbor
atoms. But if one of them is s-type and the other is a p-type orbital, then the question is whether
the p lobe pointing towards the s-orbital is positive or negative – that will decide the sign of the
matrix element. If you think about it, if we consider hopping from a p-orbital into orbitals of
atoms to its right and its left, those will have different signs (but the same magnitude) because
one integral is controlled mostly by the negative lobe, and one by the positive one. You can also
see that symmetry will set some of these hoppings to zero, for instance if we have two atoms along
the x axis, there will be zero hopping between s orbitals of one atom and py and pz orbitals of
the other one. And so on and so forth. We’ll discuss several examples in class, because this is an
important point.
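The sign argument can be made quantitative with a one-dimensional toy model. In the sketch below (Python/NumPy; the Gaussian-type orbitals, the potential and the distance d are all arbitrary illustrations, not taken from the notes) the s-p hopping integral to the right and to the left neighbour comes out with equal magnitude and opposite sign, exactly as argued above:

import numpy as np

x = np.linspace(-12.0, 12.0, 6001)
dx = x[1] - x[0]
phi_s = lambda u: np.exp(-u**2)                 # s-like orbital (positive everywhere)
phi_p = lambda u: u * np.exp(-u**2)             # p-like orbital (one positive and one negative lobe)
v = lambda u: np.exp(-np.abs(u))                # a positive, decaying potential centred on the atom
d = 1.5                                         # toy nearest-neighbour distance

# t ~ Z * integral of phi_s(x - R_i) v(x - R_i) phi_p(x - R_j), with R_i = 0 and R_j = +/- d
t_right = np.sum(phi_s(x) * v(x) * phi_p(x - d)) * dx
t_left  = np.sum(phi_s(x) * v(x) * phi_p(x + d)) * dx

print(t_right, t_left)
assert np.isclose(t_right, -t_left)             # opposite signs, equal magnitude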
To summarize what we have so far: the single-particle operators contribution to the total
Hamiltonian is:
H_1 = \sum_{i,n,\sigma} \tilde E_n\, a^\dagger_{i,n,\sigma} a_{i,n,\sigma} - \sum_{\langle i,j\rangle, n, m, \sigma} t_{i,n;j,m}\, a^\dagger_{i,n,\sigma} a_{j,m,\sigma}
where the notation hi, ji is used to show that the sum is only over pairs of nearest neighbor
sites. If we stop here (i.e., ignore electron-electron interactions), we have a hopping Hamiltonian,
as I said. As is the case for any model Hamiltonian, we don’t know the proper values for its
parameters Ẽn , ti,n;j,m although as discussed above, we know the signs of the hoppings. Usually
these quantities are used as free parameters and their values are adjusted so as to obtain agreement
with experimentally measured quantities.
Now consider the two-particle contribution, due to the electron-electron repulsion. The general
expression shows that this repulsion, when acting on electrons occupying orbitals i_1', n_1' and
i_2', n_2', can scatter them into the orbitals i_1, n_1 and i_2, n_2. Because the repulsion does not depend
on spin, the spins of the electrons remain unchanged: \sigma_1 = \sigma_1', \sigma_2 = \sigma_2'. Again, some of these
terms will be considerably bigger than others, and in fact you can easily see that the largest terms
are the ones with i_1 = i_1' = i_2 = i_2', because then all 4 atomic orbitals are located at the same site so their
overlap (which controls the size of the matrix element) is as big as possible. Physically, this makes
good sense: the strongest interactions must be between electrons belonging to the same nucleus,
because they are closest together. If we make the approximation that we only keep these terms, we
say that we only keep on-site repulsions. Again, if needed, we can also add repulsion between
nearest-neighbors and even longer-range ones.
Even if we keep only on-site repulsion, we could still scatter two electrons from any two orbitals
of any site into any two other orbitals of the same site, so there is still a huge number of possibilities.
So let me now divide the electrons into core electrons and valence electrons.
For the core electrons, which are very close to the nucleus, \tilde E_n \approx E_n and t_{i,n;j,m} \approx 0, because
corrections from other nuclei are tiny compared to the very strong potential of its own nucleus.
We can also ignore the contributions from the electron-electron repulsion term, for the following
reason: because all core states are full, repulsion can only scatter two core electrons into two
empty levels (Pauli principle). But those are located at energies that are much higher, and this
difference in energy is considerably bigger than the typical repulsion matrix element. Roughly put,
the repulsion is not strong enough so the probability of such processes is very small and can safely
be ignored. So you can see that we arrived at the conclusion that the core electrons basically are
not influenced by the presence of the other atoms, they just sit on their atomic levels of energy En
and are inert, they do not hop to other atoms or be scattered because of electron-electron repulsion
(similar arguments offer a second explanation as to why we can ignore the single-particle hopping
between different orbitals of the same atom). Which is why we can leave out the core electrons
altogether, we already know precisely what they’re doing and we do not need to make any further
calculations concerning them.
For the valence electrons, however, the hopping integrals cannot be ignored (remember that
these are the most spatially extended orbitals, so they have the largest hopping integrals). We also
cannot ignore their Coulomb repulsions, because these orbitals are usually only partially filled, so
there are empty orbitals with the same energy into which electrons can be scattered by repulsion.
For simplicity, let me assume that the valence electrons are in a partially-filled s-type orbital.
Then, there is a single on-site repulsion matrix element associated with scattering, i.e.:
U = \langle i, s; i, s|V_{e-e}|i, s; i, s\rangle = \int d\vec r \int d\vec r\,'\, |\phi_s(\vec r)|^2\, |\phi_s(\vec r\,')|^2\, v(\vec r - \vec r\,')
Of course, these valence electrons could also be scattered from their s orbital into higher, empty
orbitals (the lower ones are all filled with core electrons). But if those are at much higher energies
than the typical repulsion matrix element, such processes are again very unlikely to occur and are
ignored. So in this case, all that is left of the two-particle repulsion term is:
H_2 = \frac{U}{2} \sum_{i,\sigma,\sigma'} a^\dagger_{i,s,\sigma}\, a^\dagger_{i,s,\sigma'}\, a_{i,s,\sigma'}\, a_{i,s,\sigma} = U \sum_i \hat n_{i,s,\uparrow}\, \hat n_{i,s,\downarrow}
The second equality follows because we must have \sigma \neq \sigma', otherwise a_{i,s,\sigma} a_{i,s,\sigma} = 0. We can then
rearrange that term by anticommuting a_{i,s,\sigma} past the two \sigma' operators, and rewriting it in terms of \hat n_{i,s,\sigma} =
a^\dagger_{i,s,\sigma} a_{i,s,\sigma} (we assume spins 1/2). This is known as the on-site Hubbard repulsion, and if we add the
hopping terms (only for these valence electrons), we get the Hubbard Hamiltonian
\hat H = -t \sum_{\langle i,j\rangle, \sigma} a^\dagger_{i\sigma} a_{j\sigma} + U \sum_i \hat n_{i\uparrow}\, \hat n_{i\downarrow}    (16)
Here I no longer bother to specify that the valence orbitals are s, because they are now
the only orbitals left in the problem. I also used the fact that for s orbitals, all nearest-neighbor
hoppings have the same sign, so t > 0.
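To see the Hubbard Hamiltonian of Eq. (16) in action, here is a minimal exact-diagonalization sketch (Python/NumPy, not part of the notes; two sites and the values t = 1, U = 4 are arbitrary choices, and the small fermi_ops helper repeats the construction from the earlier sketch so the snippet runs on its own). In the two-electron sector the lowest eigenvalue reproduces the well-known two-site result (U - sqrt(U^2 + 16 t^2))/2:

import numpy as np

def fermi_ops(M):
    dim = 2**M
    cs = []
    for a in range(M):
        mat = np.zeros((dim, dim))
        for s in range(dim):
            if (s >> a) & 1:
                mat[s ^ (1 << a), s] = (-1) ** bin(s & ((1 << a) - 1)).count("1")
        cs.append(mat)
    return cs, [m.T for m in cs]

t, U = 1.0, 4.0
mode = {(i, s): 2 * i + s for i in range(2) for s in range(2)}    # (site, spin): spin 0 = up, 1 = down
c, cd = fermi_ops(4)

H = np.zeros((16, 16))
for s in range(2):                                                # hopping, both directions of <i,j>
    H += -t * (cd[mode[(0, s)]] @ c[mode[(1, s)]] + cd[mode[(1, s)]] @ c[mode[(0, s)]])
for i in range(2):                                                # on-site repulsion U n_up n_dn
    H += U * (cd[mode[(i, 0)]] @ c[mode[(i, 0)]]) @ (cd[mode[(i, 1)]] @ c[mode[(i, 1)]])

N_op = sum(cd[a] @ c[a] for a in range(4))
assert np.allclose(H @ N_op, N_op @ H)                            # particle number is conserved

two_el = [s for s in range(16) if bin(s).count("1") == 2]         # the N = 2 sector
E0 = np.linalg.eigvalsh(H[np.ix_(two_el, two_el)]).min()
assert np.isclose(E0, (U - np.sqrt(U**2 + 16 * t**2)) / 2)        # exact two-site Hubbard result
print("two-site Hubbard ground state energy:", E0)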
Of course, this model can be extended to include hopping to second-nearest neighbors etc, and
also longer-range interaction terms, between atoms which are on neighbor sites. Another type of
generalization is to multiple orbitals. If, for example, we have a Wannier orbital of p-type, i.e.
threefold degenerate, then we would have 6 valence states associated with each atom (3 orbitals × 2
spin projections), and if these orbitals remain degenerate in the solid then we would need to keep
all of them in the Hamiltonian. In this case, we have extra indices to indicate which of the three
orbitals is involved, and hopping will strongly depend on the combination of orbitals involved, as
discussed. The on-site interaction terms also become much more complicated because electrons
can scatter from any 2 of these orbitals into any 2 others (inside the same atom), so more terms
appear. They can be calculated based on the symmetries of the orbitals and they are tabulated
in books. So things can quickly become very complicated, it all depends on the complexity of the
material we are trying to describe. If we have multiple types of atoms, then that adds an extra layer
of complexity, but if you think about it, the modelling will proceed along somewhat similar
lines: we have to figure out which are the valence electrons, what sort of hopping they can have
and what sort of on-site interactions they can have, consistent with the symmetries of the problem
at hand. That gives us the simplest model Hamiltonian. If it works well (i.e. if its predictions agree
with experimental measurements) then great, the problem is solved. If not, we have to go back,
figure out what are the next biggest terms we ignored, include them and keep trying until we get
the “right” model. So you can see how this is less pleasant than dealing with ab-initio models,
where there is no such guessing because we know the Hamiltonian with all its parameters; but you
can also see that these model Hamiltonians are much simpler than the ab-initio ones, so we have
some chance to get some answers.
To put things in perspective, let me mention that even the Hubbard Hamiltonian of Eq. (16)
is not yet understood (except in some particular cases), despite around 50 years of intense study!
Before moving on, let me also say that the “derivation” I offered here, while more detailed than
what is typically done in textbooks, is still sweeping many things under the carpet. For instance,
you might wonder why the hopping integrals are controlled by the bare electron-nucleus potential
(proportional to Z) and not the screened one (proportional to Z ∗ ); in other words, why are the core
electrons not screening this potential? In fact they do, if you track carefully the effect of some of
the terms I conveniently “forgot” to mention - for instance, what happens when a valence electron
scatters on a core electron. The core electron can stay where it was and the valence electron can be
scattered to another atom, so this looks like an effective hopping, and indeed you should convince
yourself that such terms will screen the potential. But the bottom line is that we do not calculate
the value of these hopping integrals anyway, they are free parameters, so what matters is to figure
out what are the largest contributions possible (hoppings and repulsions) and what restrictions
must be imposed due to symmetries. That is enough to set up a model Hamiltonian. We’ll discuss
some more examples in class, if we have time.
(but now for bosonic operators) we find that in the 2nd quantization:
\hat H = \sum_{\vec k} \frac{\hbar^2 k^2}{2M}\, b^\dagger_{\vec k} b_{\vec k} + \frac{1}{2V} \sum_{\vec k, \vec k', \vec q} V_{\vec q}\, b^\dagger_{\vec k+\vec q}\, b^\dagger_{\vec k'-\vec q}\, b_{\vec k'}\, b_{\vec k}    (18)
where, again, Vq~ is the Fourier transform of V (~r). This looks formally very similar to the jellium
Hamiltonian that describes a liquid of electrons, apart from the fact that those operators were
fermionic, while these are bosonic. This is another nice feature of 2nd quantization, many equations
look similar for fermions and bosons so one does not need to remember twice as many formulae.
Hopefully these examples gave you a good enough idea of how we can obtain various model
Hamiltonians, and what are their general expressions in second quantization notation. As adver-
tised before, we will next discuss how to solve purely electronic Hamiltonians, and then move on
to the other steps discussed at the end of the previous section.
To see how this works, to each point \vec r we associate the field operators

\hat\Psi(\vec r) = \sum_\alpha \phi_\alpha(\vec r)\, c_\alpha; \qquad \hat\Psi^\dagger(\vec r) = \sum_\alpha \phi^\dagger_\alpha(\vec r)\, c^\dagger_\alpha    (19)

where the \phi_\alpha(\vec r) = \langle\vec r|\alpha\rangle are 2S + 1-spinors. Using the completeness of the basis \alpha (this can be any one-particle basis we
like, e.g. (\vec k, \sigma)) and the commutation relations [c_\alpha, c^\dagger_\beta]_\xi = \delta_{\alpha,\beta} etc., it is straightforward to show
that the field operators satisfy the proper commutation relations like any other choice of creation
and annihilation operators:
[Ψ̂(~r), Ψ̂† (r~0 )]ξ = δ(~r − r~0 ); [Ψ̂(~r), Ψ̂(r~0 )]ξ = [Ψ̂† (~r), Ψ̂† (r~0 )]ξ = 0 (20)
Before going on, let’s see what is the meaning of these operators. Consider the action of Ψ̂† (~r) on
the vacuum:
\hat\Psi^\dagger(\vec r)|0\rangle = \sum_\alpha \phi^\dagger_\alpha(\vec r)\, c^\dagger_\alpha|0\rangle = \sum_\alpha \phi^\dagger_\alpha(\vec r)\, |0, \ldots, n_\alpha = 1, 0, \ldots\rangle

I can also write |0, \ldots, n_\alpha = 1, 0, \ldots\rangle = |\alpha\rangle, since this is the state with a single particle in state \alpha,
and so it follows that:

\langle\vec r\,'|\hat\Psi^\dagger(\vec r)|0\rangle = \sum_\alpha \phi^\dagger_\alpha(\vec r)\, \phi_\alpha(\vec r\,') = \delta(\vec r - \vec r\,')
since the basis is complete. This is possible for any \vec r, \vec r\,' only if:

\hat\Psi^\dagger(\vec r)|0\rangle = |\vec r\rangle
i.e. this operator creates a particle at position ~r (with any spin! That is why these operators are
spinors; we’ll specialize to a specific spin projection below). It can be similarly shown that this is
true in any state (not only vacuum). Similarly, it can be shown that the operator Ψ̂(~r) destroys
(removes) a particle from position ~r. So these operators are simply the creation and annihilation
operators associated with the basis |~ri; we give them special names and notation just because we’re
biased in thinking that “space” is somehow more special than other representations.
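To demystify the field operators a bit, here is a toy check (Python/NumPy, not from the notes; a ring of M = 3 sites replaces continuous space, so \delta(\vec r - \vec r\,') becomes \delta_{jl}, and the small fermi_ops helper repeats the construction from the earlier sketch so the snippet runs on its own): building \Psi_j = \sum_k \langle j|k\rangle c_k out of plane-wave modes indeed gives operators obeying the lattice version of Eq. (20) for fermions.

import numpy as np

def fermi_ops(M):
    dim = 2**M
    cs = []
    for a in range(M):
        mat = np.zeros((dim, dim))
        for s in range(dim):
            if (s >> a) & 1:
                mat[s ^ (1 << a), s] = (-1) ** bin(s & ((1 << a) - 1)).count("1")
        cs.append(mat)
    return cs, [m.T for m in cs]

M = 3
dim = 2**M
c, cd = fermi_ops(M)
ks = 2 * np.pi * np.arange(M) / M
phi = np.array([[np.exp(1j * k * j) / np.sqrt(M) for k in ks] for j in range(M)])   # <j|k>

Psi  = [sum(phi[j, a] * c[a] for a in range(M)) for j in range(M)]                  # Psi(r_j)
Psid = [sum(np.conj(phi[j, a]) * cd[a] for a in range(M)) for j in range(M)]        # Psi^dag(r_j)

for j in range(M):
    for l in range(M):
        anti = Psi[j] @ Psid[l] + Psid[l] @ Psi[j]
        assert np.allclose(anti, (j == l) * np.eye(dim))         # {Psi_j, Psi_l^dag} = delta_jl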
Using again the completeness and orthonormality of the basis α, we can invert Eqs. (19) to
find:

c_\alpha = \int d\vec r\, \phi^\dagger_\alpha(\vec r)\, \hat\Psi(\vec r); \qquad c^\dagger_\alpha = \int d\vec r\, \hat\Psi^\dagger(\vec r)\, \phi_\alpha(\vec r)    (21)
Using the two theorems above, one-particle and two-particle operators then become:

\hat A = \sum_i A_i \to \int d\vec r\, \hat\Psi^\dagger(\vec r)\, A_{\vec r}\, \hat\Psi(\vec r)    (22)

and

\hat B = \frac{1}{2}\sum_{i \neq j} B_{ij} \to \frac{1}{2} \int d\vec r \int d\vec r\,'\, \hat\Psi^\dagger(\vec r)\, \hat\Psi^\dagger(\vec r\,')\, B_{\vec r,\vec r\,'}\, \hat\Psi(\vec r\,')\, \hat\Psi(\vec r)    (23)
(note the order of operators!!!), where I used the simplified notation A_{\vec r} = \langle\vec r|A|\vec r\rangle – this is a
(2S+1) \times (2S+1) matrix, because of the spin – and similarly for B (which is a direct product of two
(2S+1) \times (2S+1) matrices). (Don't worry, we'll soon introduce friendlier notation, so we won't need
to deal with these matrices. But let’s be formal just for a bit more.)
It follows that:

(a) the kinetic energy: \hat T = \int d\vec r\, \hat\Psi^\dagger(\vec r) \left(-\frac{\hbar^2}{2m}\nabla^2\right) \hat\Psi(\vec r);

(b) the density operator: \hat n(\vec r) = \sum_{i=1}^N \delta(\vec r - \vec r_i) \to \int d\vec r\,'\, \hat\Psi^\dagger(\vec r\,')\, \delta(\vec r - \vec r\,')\, \hat\Psi(\vec r\,') = \hat\Psi^\dagger(\vec r)\hat\Psi(\vec r);

(c) the total number of particles: \hat N = \sum_{i=1}^N 1 \to \int d\vec r\, \hat\Psi^\dagger(\vec r)\hat\Psi(\vec r) = \int d\vec r\, \hat n(\vec r);

(d) the Coulomb interaction: \hat V = \frac{1}{2} \int d\vec r \int d\vec r\,'\, \hat\Psi^\dagger(\vec r)\hat\Psi^\dagger(\vec r\,')\, u(\vec r - \vec r\,')\, \hat\Psi(\vec r\,')\hat\Psi(\vec r), etc.
In particular, the Hamiltonian for particles in an external field and also interacting with one
another becomes:

H = \int d\vec r\, \hat\Psi^\dagger(\vec r) \left[-\frac{\hbar^2}{2m}\nabla^2 + u_{ext}(\vec r)\right] \hat\Psi(\vec r) + \frac{1}{2} \int d\vec r \int d\vec r\,'\, \hat\Psi^\dagger(\vec r)\hat\Psi^\dagger(\vec r\,')\, u(\vec r - \vec r\,')\, \hat\Psi(\vec r\,')\hat\Psi(\vec r)    (24)

This looks very appealing, and is again independent of the number of particles that may be in the
system, or of whether that number is fixed or not. As I said, this makes it very easy to work with states in
which the number of particles is not fixed, such as is the case in grand-canonical ensembles, for
instance.
When you learn about Green's functions (in Phys503) you will derive the equation of motion
for the field operator \hat\Psi(\vec r) and see that it is very similar in form to Schrödinger's equation. In the
absence of particle-particle interactions, it turns out that

i\hbar\frac{d}{dt}\hat\Psi(\vec r, t) = \left[-\frac{\hbar^2}{2m}\nabla^2 + u_{ext}(\vec r)\right] \hat\Psi(\vec r, t).
Interactions add a second term. The reason I'm mentioning this now is that it gives you a sense
of why this is called the second quantization. In the first quantization (going from classical to
quantum mechanics) we quantize the operators, and use a wavefunction φ(~r) to characterize the
state of the system. This wavefunction φ(~r) can be regarded as a “classical field” of its own. In
the second quantization, it is as if we now also quantize the wavefunction into the field operator
Ψ̂(~r).
Now, let us simplify the notation and remove the need of working with matrices if the spin
S > 0. In all cases we can choose the basis α = (α̃, σ), i.e. one of the quantum numbers is the
spin, and then α̃ contains all other quantum numbers needed to describe the translational sector
of the Hilbert space (e.g., the momentum of the particle, or hydrogen orbitals, or whatever).
We can then define spin-components of the field operators:
\hat\Psi_\sigma(\vec r) = \sum_{\tilde\alpha} \phi_{\tilde\alpha}(\vec r)\, c_{\tilde\alpha,\sigma}; \qquad \hat\Psi^\dagger_\sigma(\vec r) = \sum_{\tilde\alpha} \phi^*_{\tilde\alpha}(\vec r)\, c^\dagger_{\tilde\alpha,\sigma}    (25)

If \tilde\alpha = \vec k, then \phi_{\tilde\alpha}(\vec r) = e^{i\vec k\cdot\vec r}/\sqrt{V} is a simple number, not a (2S+1)-dimensional spinor. So the spin-components \hat\Psi_\sigma(\vec r), \hat\Psi^\dagger_\sigma(\vec r) are simple operators, not vectors of operators. Their meaning is that
they annihilate or create a particle with spin \sigma at position \vec r.
It is now straightforward to verify (by repeating previous calculations) that:
[Ψ̂σ (~r), Ψ̂†σ0 (r~0 )]ξ = δσ,σ0 δ(~r − r~0 ); [Ψ̂σ (~r), Ψ̂σ0 (r~0 )]ξ = [Ψ̂†σ (~r), Ψ̂†σ0 (r~0 )]ξ = 0 (26)
and, in complete analogy with Eqs. (22) and (23),

\hat A = \sum_i A_i \to \sum_{\sigma,\sigma'} \int d\vec r\, \hat\Psi^\dagger_\sigma(\vec r)\, \langle\sigma|A_{\vec r}|\sigma'\rangle\, \hat\Psi_{\sigma'}(\vec r)    (27)

\hat B = \frac{1}{2}\sum_{i \neq j} B_{ij} \to \frac{1}{2} \sum_{\sigma_1,\sigma_2,\sigma_3,\sigma_4} \int d\vec r \int d\vec r\,'\, \hat\Psi^\dagger_{\sigma_1}(\vec r)\, \hat\Psi^\dagger_{\sigma_2}(\vec r\,')\, \langle\sigma_1, \sigma_2|B_{\vec r,\vec r\,'}|\sigma_3, \sigma_4\rangle\, \hat\Psi_{\sigma_4}(\vec r\,')\, \hat\Psi_{\sigma_3}(\vec r)    (28)
where \langle\sigma|A_{\vec r}|\sigma'\rangle = \langle\vec r, \sigma|A|\vec r, \sigma'\rangle and \langle\sigma_1, \sigma_2|B_{\vec r,\vec r\,'}|\sigma_3, \sigma_4\rangle = \langle\vec r, \sigma_1; \vec r\,', \sigma_2|B|\vec r, \sigma_3; \vec r\,', \sigma_4\rangle. In particular, if the operators are spin independent, then these matrix elements are proportional to \delta_{\sigma,\sigma'},
respectively \delta_{\sigma_1,\sigma_3}\delta_{\sigma_2,\sigma_4}. As a result, a typical Hamiltonian becomes:

H = \sum_\sigma \int d\vec r\, \hat\Psi^\dagger_\sigma(\vec r) \left[-\frac{\hbar^2}{2m}\nabla^2 + u_{ext}(\vec r)\right] \hat\Psi_\sigma(\vec r) + \frac{1}{2}\sum_{\sigma,\sigma'} \int d\vec r \int d\vec r\,'\, \hat\Psi^\dagger_\sigma(\vec r)\, \hat\Psi^\dagger_{\sigma'}(\vec r\,')\, u(\vec r - \vec r\,')\, \hat\Psi_{\sigma'}(\vec r\,')\, \hat\Psi_\sigma(\vec r)    (29)