PHYS 359 Winter 2024 - Final Exam Review

Prof. Alan Jamison

I. THE BIG PICTURE

Statistical mechanics studies the properties of systems with many parts. Problems involv-
ing one or two particles are common in other parts of physics. What we find in statistical
mechanics is that when the number of particles in a system becomes enormous, we care
little about what individual particles are doing. Instead, we are concerned with averaged
quantities (such as energy per particle or pressure). The averaging process becomes more
and more precise as the number of particles increases. We find ourselves in the situation that
we cannot possibly solve the microscopic dynamics for a system of 3 or more particles, but
we can calculate averaged quantities of interest that improve as we move to more particles.
The standard example is that of a gas. The motion of any individual gas particle would
be both intractable and uninteresting. However, if we can calculate the pressure and heat
capacity from the underlying microscopic physics, we have valuable tools for designing en-
gines, which was the original goal of thermodynamics. We’ve seen in this course that we
can understand the behaviours of white dwarves and astrophysical plasmas, magnetizations
and heat capacities in condensed matter systems, and blood oxygenation through the use of
statistical techniques.
The key formula for finding averages is
$$\bar{x} = \sum_{\text{states},\,s} x(s)\, p(s),$$

where p(s) is the probability of realizing state s. Recall that the probability of some event
A is the number of outcomes represented by A divided by the total number of possible
outcomes.
The starting point for understanding these systems with many particles is the consider-
ation of an isolated system.

A. The Microcanonical Ensemble

The microcanonical ensemble is the formal description for an isolated system. An isolated
system has no contact with the outside world, so all of its macroscopic variables (energy U ,
number N , volume V , etc.) are fixed. Given those fixed values, there will be a large number
of accessible states, i.e., states that match the fixed values of macroscopic variables.

We denote the number of accessible states as Ω. The fundamental assumption of statistical
mechanics is that, in an isolated system, all accessible states are equally likely. If we have a
number of accessible states Ω, then each one has a probability Ω1 . The entropy S of a system
counts the disorder of a system, or equally the information content of the system. These two
things are closely related. In a perfectly ordered system, say all particles at position x = 0,
there is almost no information needed to describe the system. In a highly disordered system,
you will need far more information to describe the microscopic state. The thermodynamic
entropy of a closed system is S = k log Ω. As you saw on a homework (PS3, E1), this
connects directly to an alternate definition of entropy from information theory.
There is little more to say about the microcanonical ensemble in this course, though it
is an important topic in further studies of statistical mechanics. Its importance to us is
as a tool to understand systems you can actually interact with, systems that can exchange
energy or particles with their environments.

B. The Canonical Ensemble

A system that can exchange energy with a much larger system, referred to as the thermal
reservoir or thermal bath, is described by the canonical ensemble. We assume that the
reservoir is much larger than the system such that when the system exchanges energy with
it, the reservoir is essentially unchanged. Macroscopic descriptions like the total energy of
the reservoir would depend on details of its structure that we don’t want to have to think
about. Instead, we can describe a large variety of reservoirs by assuming only that a reservoir
exists at a temperature T and is big enough that exchanging energy with our system doesn’t
change that temperature. If our system is at equilibrium with a reservoir of temperature T ,
then our system is by definition also at temperature T .
The importance of the closed system comes when we consider the system + thermal
reservoir as itself being a large, closed system. If we see these as a closed system, then
relative probabilities can be computed from ratios of accessible microstates. Since we’ve
assumed the reservoir to be much larger than the system of interest, the change in number
of accessible states due to energy transferring from one to the other should be dominated
by the change in accessible states of the reservoir. So, to understand the probabilities in a
system coupled to a thermal reservoir, we can just focus on the reservoir.

We said above that we don’t want to make many assumptions about the nature of the
reservoir, other than its size and temperature. If we need to count its microstates, then we’d
need to know way more about it. The way around this is to use something we learned from
thermodynamics: dU = T dS − p dV . We can rearrange that to dS = (1/T)(dU + p dV ). We also
assume that the system and reservoir cannot change volume, so the dV term becomes zero.
We know that the difference in entropy can be related to the ratio of microstates through $S = k \log \Omega$, which gives
$$S_{\text{res},2} - S_{\text{res},1} = k \log\!\left(\frac{\Omega_{\text{res},2}}{\Omega_{\text{res},1}}\right), \quad\text{or equivalently}\quad \frac{\Omega_{\text{res},2}}{\Omega_{\text{res},1}} = e^{(S_{\text{res},2} - S_{\text{res},1})/k}.$$
Since all accessible microstates are equally likely,
$$\frac{p(\text{state}_2)}{p(\text{state}_1)} = \frac{\Omega_{\text{res},2}}{\Omega_{\text{res},1}} = e^{(S_{\text{res},2} - S_{\text{res},1})/k}.$$
The ratio of probabilities is just what we are looking for, but it's still not quite in a
useful form. Since our reservoir never changes temperature, the differential relation from
thermodynamics can be integrated to give $S_{\text{res},2} - S_{\text{res},1} = (U_{\text{res},2} - U_{\text{res},1})/T$. Plugging this
into the equations from above, we get
$$\frac{p(\text{state}_2)}{p(\text{state}_1)} = e^{(U_{\text{res},2} - U_{\text{res},1})/(kT)}.$$
We'd really like this relation in terms of the energy in our system rather than the reservoir. Since the system
+ reservoir is closed, $\Delta U_{\text{sys}} = -\Delta U_{\text{res}}$. This finally brings us to the key equation for the
canonical ensemble:
$$\frac{p(\text{state}_2)}{p(\text{state}_1)} = e^{-(E_2 - E_1)/(kT)} = \frac{e^{-\beta E_2}}{e^{-\beta E_1}},$$
where $\beta \equiv 1/(kT)$ and $E_i$ is the energy of the system in state i (PS1,P2). A single exponential
$e^{-\beta E_A}$ is referred to as the Boltzmann factor for state A. Using the formalism of the closed
system, we have been able to derive fundamental equations for an open system in contact
with a thermal reservoir!
It would be nice to have a formula that gives us the probability for occupying a particular
state. We can achieve this by using the fact that probabilities sum to 1 to find an appropriate
normalization factor. If we assume all probabilities have the form $p(A) = e^{-\beta E_A}/Z$, then we
find that
$$Z = \sum_{\text{states},\,s} e^{-\beta E_s},$$

which defines the partition function. We saw that Z is extremely useful, because from it we
can derive average values for any function of energy f (E) by taking derivatives with respect
to β (PS1,P3). As a simple example, the average energy is $-\frac{1}{Z}\frac{\partial Z}{\partial \beta}$. These formulas arise
because taking the β derivative n times brings n copies of the energy out of the exponent
for each term of the partition function. If you then divide by Z, you get terms of the form
$(E_i)^n \frac{e^{-\beta E_i}}{Z} = (E_i)^n\, p(i)$. Adding these gives us exactly the average value $\overline{E^n}$.
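As a concrete check of this machinery, here is a short Python sketch (an illustration, not part of the course materials) for a made-up three-level spectrum. It computes Z and the Boltzmann probabilities, then compares the direct average of the energy with the numerical derivative −(1/Z) ∂Z/∂β.

```python
# Minimal sketch: check <E> = -(1/Z) dZ/dbeta for a hypothetical
# three-level spectrum, using a finite-difference derivative.
import numpy as np

energies = np.array([0.0, 1.0, 2.5])   # made-up level energies

def Z(beta):
    return np.sum(np.exp(-beta * energies))

beta = 0.7
p = np.exp(-beta * energies) / Z(beta)       # Boltzmann probabilities
E_direct = np.sum(energies * p)              # sum over states of E * p(state)

h = 1e-6                                     # finite-difference step
dZ_dbeta = (Z(beta + h) - Z(beta - h)) / (2 * h)
E_from_Z = -dZ_dbeta / Z(beta)

print(E_direct, E_from_Z)    # the two values should agree closely
```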

For non-interacting systems (e.g., the ideal gas), we assume every particle is independent.
In this case, we can show that the partition function ZN for the full system of N independent
particles is
$$Z_N = (Z_1)^N,$$

where Z1 is the partition function for a single particle. This is a huge simplification that
makes classical statistical mechanics far more approachable.
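A quick way to convince yourself of this factorization is brute-force enumeration. The sketch below (an illustration, not from the problem sets) enumerates all joint states of N = 3 independent two-level particles with hypothetical energies 0 and ε and checks that the full sum equals (Z_1)^N.

```python
# Minimal sketch: brute-force check that Z_N = (Z_1)^N for N = 3
# independent (distinguishable, non-interacting) two-level particles.
import itertools
import numpy as np

beta, eps, N = 1.3, 0.8, 3       # made-up numbers for illustration
levels = [0.0, eps]

Z1 = sum(np.exp(-beta * E) for E in levels)

# enumerate every joint state of the N independent particles
ZN = sum(np.exp(-beta * sum(state))
         for state in itertools.product(levels, repeat=N))

print(ZN, Z1**N)   # the two values should agree
```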

C. The Grand Canonical Ensemble

The other important ensemble we discussed is the grand canonical ensemble. This de-
scribes a system in contact with a reservoir that can exchange both energy and particles.
We introduce a quantity called the chemical potential, symbolized by µ. This characterizes
the particle flow from the reservoir in analogy to how the temperature characterizes the
energy flow from the reservoir. The chemical potential is defined as the change in energy
of the system required to add a new particle without changing the system entropy. For a
classical system, the chemical potential is usually negative, since adding an extra particle
without changing the energy would generally give many more accessible states. As such,
you must remove energy when adding a particle to keep the number of accessible states (i.e.,
the entropy) the same.
We follow the same chain of reasoning as we used for analyzing the system in contact
with a thermal reservoir, except we need to use the more complete thermodynamic relation
dU = T dS − p dV + µ dN (and again setting dV = 0). This leads to the key formula for the
grand canonical ensemble
$$\frac{p(2)}{p(1)} = \frac{e^{-\beta(E_2 - \mu N_2)}}{e^{-\beta(E_1 - \mu N_1)}}.$$
A single exponential $e^{-\beta(E_A - \mu N_A)}$ is referred to as the Gibbs factor for state A (PS4,P1).
Again replicating the chain of reasoning that led to the partition function for the canon-
ical ensemble, we arrive at the grand partition function for the grand canonical ensemble:
$$Z = \sum_{\text{states},\,s} e^{-\beta(E_s - \mu N_s)}.$$

Derivatives with respect to µ allow us to produce the average of any function of N , f (N ),
in the same way that derivatives with respect to β of the partition function lead to averages
of f (E) (PS4,P2).

D. Quantum Statistical Mechanics

Quantum statistical mechanics is fundamentally different from classical. When we deal
with systems of identical quantum particles, the identity of the particles as either fermions
or bosons becomes important. Because of the need to anti-symmetrize the many-body
wavefunction, two fermions cannot occupy the same state. Since bosonic wavefunctions
must be symmetrized, there is no constraint on how many bosons can occupy a state.
These effects can be accounted for in non-interacting systems by considering an energy
level as our fundamental object, as opposed to classical stat mech where a particle is our
fundamental object. We then consider the occupation of an energy level in the grand canon-
ical ensemble. For an energy level of energy $\epsilon$, the total energy is $E = N\epsilon$, that is, the
number of particles in the level times the energy of a particle in that level. This simplifies
the Gibbs factor for this energy level to $e^{-\beta(\epsilon - \mu)N}$. In particular, this will make it possible
to calculate the expected number of particles in a general energy level.
To find the expected number of particles in a level $\epsilon$, we need to calculate the grand
partition function. For fermions this is straightforward, since there are only two permissible
numbers of particles in a state: 0 or 1. This gives $Z_F = 1 + e^{-\beta(\epsilon-\mu)}$. For bosons we get
a geometric series:
$$Z_B = \sum_{N=0}^{\infty} e^{-\beta(\epsilon-\mu)N} = \sum_{N=0}^{\infty} \left(e^{-\beta(\epsilon-\mu)}\right)^N = \frac{1}{1 - e^{-\beta(\epsilon-\mu)}}.$$
The expected number $\bar{N} = \frac{kT}{Z}\frac{\partial Z}{\partial \mu}$ can be calculated for each. For fermions this leads to the Fermi-Dirac
distribution (PS5,E1):
$$n_{FD} = \frac{1}{e^{\beta(\epsilon-\mu)} + 1}.$$
For bosons it leads to the Bose-Einstein distribution:
$$n_{BE} = \frac{1}{e^{\beta(\epsilon-\mu)} - 1}.$$
A common procedure in quantum statistical mechanics is to sum the distribution function
over all energy levels while leaving µ as an unknown. Since this sum gives the total number
of particles, the result can be used to calculate the chemical potential for a system with a
known number of total particles.
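A minimal numerical sketch of that procedure, assuming a hypothetical ladder of equally spaced levels (the spacing, temperature, and particle number are all made up for illustration), adjusts µ until the summed Fermi-Dirac occupations reproduce the desired total N:

```python
# Minimal sketch (assumed spectrum): fix the total particle number by
# adjusting mu, summing the Fermi-Dirac distribution over the levels.
import numpy as np
from scipy.optimize import brentq

kT = 0.5
levels = np.arange(0, 200) * 0.1      # eps_n = 0.0, 0.1, 0.2, ... (made up)
N_target = 30

def n_FD(eps, mu):
    return 1.0 / (np.exp((eps - mu) / kT) + 1.0)

def excess_N(mu):
    return np.sum(n_FD(levels, mu)) - N_target

mu = brentq(excess_N, levels.min() - 10, levels.max() + 10)
print(mu)      # chemical potential that reproduces N_target at this kT
```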

E. Adding Interactions

Adding interactions invalidates some of the most valuable simplifying assumptions we’ve
discussed. In a classical system, we lose the property of independence for particles. In other
words, we can no longer assume that ZN = (Z1 )N . Instead of studying a single particle in
a gas, or a single spin in a lattice, as our object of interest, we have to treat the system as
a whole as our object of interest.
However, for weak interactions we can treat the system as containing approximately
independent particles. This led us to mean-field theory for a Bose-Einstein condensate
(described in section II.E.). Mean-field theory ignores correlations between pairs (or larger
collections) of particles. For the Ising model, we found that as the spins become more
interconnected, mean-field theory becomes better even for strong interactions.
For a quantum mechanical system, the assumption that the energy per particle of a state
is unaffected by the number of particles in the state ceases to be true. This makes life really
hard, so we didn’t go down this road. It’s something to look forward to in the future!

II. TOOLS

Here is a collection of the most important new tools that we’ve learned this term. The
one old tool I will point out that I expected you to bring with you to the course (aside from
standard calculus) is summing a geometric series

$$\sum_{n=0}^{\infty} r^n = \frac{1}{1-r} \quad (\text{for } |r| < 1).$$

A. Ways of Counting

To count the number of ways to select r objects from a set of n, where the order of
selection matters, we use permutations, n Pr . If the order of selection doesn’t matter, we
have a combination, n Cr . You probably encountered these before this course:
$${}_nP_r = \frac{n!}{(n-r)!} \quad\text{and}\quad {}_nC_r = \frac{n!}{(n-r)!\,r!}.$$
We also discussed ball and stick counting. If you want to find the number of ways to
sort n interchangeable objects into r categories, you can map this to the problem of putting
n balls and r − 1 sticks in order. The sticks mark the boundaries of the r categories: left
of the first stick is the first category and so on to the (r − 1)th stick. To the right of it is
the rth category. Since the objects are interchangeable, we can see this problem as choosing
which of our n + r − 1 positions has the sticks. The order in which the stick locations are
chosen doesn’t matter, so we find that the number of ways to put n objects into r categories
is ${}_{(n+r-1)}C_{(r-1)} = \frac{(n+r-1)!}{n!\,(r-1)!}$.
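All three counting formulas are easy to sanity-check with Python's math module (a quick illustration, not from the course):

```python
# Minimal sketch: permutations, combinations, and ball-and-stick counting.
import math

n, r = 6, 2
print(math.perm(n, r))     # nPr = n!/(n-r)!          -> 30
print(math.comb(n, r))     # nCr = n!/((n-r)! r!)     -> 15

# ball-and-stick: ways to sort n interchangeable objects into r categories
n_obj, r_cat = 4, 3
print(math.comb(n_obj + r_cat - 1, r_cat - 1))   # (n+r-1)!/(n!(r-1)!) -> 15
```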
Often, we want to simplify expressions involving large numbers. We can use Stirling’s
approximation

$$N! \approx N^N e^{-N}$$

to make these simplifications. A more accurate version, $N! \approx N^N e^{-N}\sqrt{2\pi N}$, is available
but for statistical mechanics-sized numbers it is not generally needed.
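A quick numerical check (not from the course materials) of how good the approximation is for ln N!:

```python
# Minimal sketch: compare ln(N!) with Stirling's approximation.
import math

for N in (10, 100, 1000):
    exact = math.lgamma(N + 1)                        # ln(N!)
    simple = N * math.log(N) - N                      # ln(N^N e^-N)
    better = simple + 0.5 * math.log(2 * math.pi * N) # with the sqrt(2 pi N) factor
    print(N, exact, simple, better)
# the relative error of the simple form shrinks as N grows
```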

B. Expansions and Limits

One of the most important ways to understand any physical system is to look at limiting
cases. In statistical mechanics, two limits are particularly important: the high-temperature
and low-temperature limits. In any discussions of limiting behaviour, you will need to
identify the energy or temperature scale to which you want to compare kT or T , respectively.
The high-temperature limit generally involves Taylor expanding an expression in terms
of β = 1/(kT), which becomes small in the high-temperature limit. This is similar to expansions
you’ve used in previous courses, for instance the small-angle approximation in a pendulum
where you Taylor expand sin θ ≈ θ. In that example, you would not just say sin θ = 0,
though that is the zeroth order behaviour. Rather, you want to expand out to the first term
with θ dependence, so you can solve for nontrivial motion. Similarly, in the high-temperature
limit, you don’t simply set β = 0. You want to expand your function until you get the first
non-zero term with β dependence.
The low-temperature limit is more subtle (PS2, P1/P2). Here we look to make an expan-
sion in Boltzmann factors, since $e^{-\beta E}$ becomes a small parameter when kT ≪ E. As with
the high-temperature limit, you don’t want to expand only to zeroth order, but rather you
want to find the largest term with some β (i.e., temperature) dependence.
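Both kinds of expansion can be automated symbolically. The sketch below (an illustration, assuming a two-level system with energies 0 and ε, which is not tied to any particular problem set) expands the average energy in small β for the high-temperature limit and in the small Boltzmann factor x = e^(−βε) for the low-temperature limit.

```python
# Minimal sketch (assumed two-level system with energies 0 and eps):
# expand the average energy in the high- and low-temperature limits.
import sympy as sp

beta, eps, x = sp.symbols('beta epsilon x', positive=True)
E_avg = eps * sp.exp(-beta * eps) / (1 + sp.exp(-beta * eps))

# high-T limit: expand in small beta, keep the first beta-dependent term
print(sp.series(E_avg, beta, 0, 2))        # eps/2 - beta*eps**2/4 + ...

# low-T limit: expand in the small Boltzmann factor x = exp(-beta*eps)
E_in_x = eps * x / (1 + x)
print(sp.series(E_in_x, x, 0, 3))          # eps*x - eps*x**2 + ...
```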

C. Specific Integration Techniques

Because gaussian integrals come up quite often in statistical mechanics, you will need to
remember the formula for these. The goal is to get the exponent into the form $-\frac{x^2}{2\sigma^2}$. From
there you can use the simple formula derived in class
$$\int_{-\infty}^{\infty} dx\; e^{-x^2/(2\sigma^2)} = \sqrt{2\pi\sigma^2}.$$

Other integrals involving a gaussian function can be reduced to this form through integration
by parts. Also common is turning an integral of an even function from 0 to ∞ into 1/2 times
the integral from −∞ to ∞.
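A one-line numerical sanity check of the Gaussian formula (illustration only):

```python
# Minimal sketch: numerically confirm the Gaussian integral formula.
import numpy as np
from scipy.integrate import quad

sigma = 1.7
val, _ = quad(lambda x: np.exp(-x**2 / (2 * sigma**2)), -np.inf, np.inf)
print(val, np.sqrt(2 * np.pi * sigma**2))   # the two numbers should match
```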
The other technique we’ve used in several ways to approximate integrals is turning the
integrand into a localized function and then extending the limits of integration out to ±∞.
This is an important step in the Sommerfeld expansion where we integrate by parts to turn
the Fermi-Dirac function into its derivative, which is a function sharply peaked at the
Fermi energy. A similar idea appears when counting the number of microstates in a large
system. The distribution you find is often so highly peaked that you can approximate it by
a gaussian, even if the function technically can’t extend to the whole real line (for instance,
the length distribution for a polymer chain (PS1,P1)).

D. Density of States

The density of states, $g(\epsilon)$, is the number of states that you will find per unit energy in a
small interval of energies. We introduced this in the context of degenerate Fermi gases where
every state has exactly one particle up to the Fermi energy. However, it ends up being useful
for both fermions and bosons, at zero temperature and non-zero temperature. The general
technique of counting in n space and then changing variables to $\epsilon$ using the spectrum of
the confining potential allowed us to find the density of states in many situations including
boxes and harmonic oscillator-shaped traps in 1, 2, or 3 dimensions. Once you have found
a way to count states and changed variables to $\epsilon$, you can find the density of states simply
as $g(\epsilon) = dN_{\text{states}}/d\epsilon$.
Remember that the density of states is a property of the confinement, not the particles
in it, so it will be the same for a given box or trapping potential whether you put fermions
or bosons into it.
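Here is a small numerical sketch of the counting-in-n-space technique for a 3D box, in assumed units where h²/(8mL²) = 1 and ignoring any spin degeneracy; it compares a direct count of states to the analytic result g(ε) = (π/4)√ε for these units.

```python
# Minimal sketch (assumed units, no spin factor): count 3D-box states with
# energy nx^2 + ny^2 + nz^2 <= eps and compare g(eps) to (pi/4) sqrt(eps).
import numpy as np

nmax = 60
n = np.arange(1, nmax + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing='ij')
energies = nx**2 + ny**2 + nz**2          # box spectrum in these units

def N_states(eps):
    return np.count_nonzero(energies <= eps)

eps, deps = 900.0, 20.0
g_counted = (N_states(eps + deps) - N_states(eps - deps)) / (2 * deps)
print(g_counted, np.pi / 4 * np.sqrt(eps))   # should be roughly equal
```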

E. Mean-Field Theory

In lecture I gave a four-step process for building a mean-field approximation:

1. Assume all particles have identical behaviour.

2. Select one particle to analyze and derive its behaviour as a function of the other
particles.

3. Using the assumption that your selected particle is identical to all others, derive a
self-consistent equation.

4. Solve.

For the B = 0 Ising model from lecture, we calculated the expected magnetization of a
single spin as a function of the average spins of its neighbors (step 2). Then, with the step
1 assumption in mind, we set the average magnetization of our particle equal to the average
magnetization of its neighbors. This gave us a self-consistent equation (step 3) for the
magnetization in the mean-field approximation. For deriving Gross-Pitaevskii, we assumed
the product form for the many-body wavefunction (step 1). We then considered a single
atom to find the energy shift it would feel from all other atoms (step 2). Finally, we said that
our chosen particle should follow a Schrödinger equation with that extra energy, which gives
us a self-consistent description of the wavefunction φ (step 3), namely the Gross-Pitaevskii
equation (PS9,E1 & PS10,E1/P1).
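As a sketch of step 4, assuming the standard B = 0 mean-field self-consistency equation m = tanh(βnεm), with n nearest neighbours and coupling ε (the exact symbols in lecture may differ), fixed-point iteration finds the magnetization on either side of the mean-field critical temperature kT_c = nε:

```python
# Minimal sketch (assumed form): solve the B = 0 mean-field Ising
# self-consistency m = tanh(beta * n * eps * m) by fixed-point iteration.
import numpy as np

n_neigh, eps = 4, 1.0            # square lattice, hypothetical coupling
kTc = n_neigh * eps              # mean-field critical temperature

for kT in (0.5 * kTc, 0.9 * kTc, 1.1 * kTc):
    beta = 1.0 / kT
    m = 0.9                       # start from a magnetized guess
    for _ in range(2000):
        m = np.tanh(beta * n_neigh * eps * m)
    print(kT / kTc, m)            # m > 0 below Tc, m -> 0 above Tc
```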

III. IMPORTANT SYSTEMS

In the problem sets, you’ve applied the big picture and tools we’ve learned this term across
a vast range of systems, from polymer chains to black holes. As our last lecture hinted, we
can move beyond even this broad collection to consider brains and economies with these
tools. However, to keep a tight focus, we’ve used the following systems as touchstones along
the way.

A. Einstein Solid

Seen in (PS2,P2)
An Einstein solid is a collection of identical harmonic oscillators. Einstein proposed this
as a model for the heat capacity of a solid, with the idea that each atom vibrates in its locally
bound state, and each atom vibrates with the same oscillator frequency. We’ve considered
this system in terms of ball and stick counting to understand the number of microstates.
We’ve also calculated the heat capacity of an Einstein solid and considered its low and
high-temperature limits. Since the oscillators are assumed to be independent, we can use the
classical ZN = (Z1 )N result to understand the Einstein solid from the spectrum of a single
simple harmonic oscillator: En = hf n.
Adding interactions led us to a completely new spectrum of oscillations including col-
lective oscillations of many atoms in synchrony. This spectrum gave us the Debye model,
which corrected the low-temperature behaviour of the Einstein solid model, but had the
same high-temperature behaviour.
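For reference, the heat capacity per oscillator that follows from Z_1 = Σ_n e^(−βhfn) = 1/(1 − e^(−βhf)) is C = k(βhf)² e^(βhf)/(e^(βhf) − 1)². The sketch below (in made-up units with k = hf = 1) evaluates it and shows both limits: exponentially suppressed at low temperature and approaching k at high temperature.

```python
# Minimal sketch: heat capacity of a single Einstein oscillator (spectrum
# E_n = h f n), checking the low- and high-temperature limits.
import numpy as np

k, hf = 1.0, 1.0                            # units with k = hf = 1

def C_per_oscillator(kT):
    x = hf / kT                             # x = beta * h * f
    return k * x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

for kT in (0.05, 0.5, 5.0, 50.0):
    print(kT, C_per_oscillator(kT))
# low T: C is exponentially small; high T: C -> k (equipartition value)
```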

B. Paramagnet

Seen in (PS6,E2)
The paramagnet was our first look at a spin system. In a paramagnet, each spin is
independent (i.e., there are no interactions). Like the Einstein solid, we can understand the
whole system by considering a single element via ZN = (Z1 )N . An individual paramagnet
spin has two states, up and down, and the spin is at lower energy when it is aligned with an
external magnetic field. Adding interactions between spins leads to the Ising model.

C. Ideal gas

The original domain of interest in thermodynamics was in ideal and nearly ideal gases.
As such, it’s not surprising that we spent a lot of time on ideal gases.

1. Classical Ideal Gas

Seen in (PS3,P1/P2 & PS4,P1)


The classical ideal gas is the system in which we introduced continuous variables. In this
case we had to integrate over Boltzmann factors rather than doing a discrete sum to get
the partition function. A classical ideal gas has kinetic energy $\frac{\vec{p}^{\,2}}{2m}$ and possibly a potential
energy function $U(\vec{x})$. This gives a Boltzmann factor and partition function
$$e^{-\beta\left(\frac{\vec{p}^{\,2}}{2m} + U(\vec{x})\right)} \;\rightarrow\; Z_{\text{ext}} = \frac{1}{h^3}\int d^3x\, d^3p\; e^{-\beta\left(\frac{\vec{p}^{\,2}}{2m} + U(\vec{x})\right)},$$
where one copy of one over Planck's constant is needed per dimension to make the partition
function unitless. This is clearly ad hoc, but we can verify that it gives the correct result for
a gas in a box by considering a single particle in a box with the correct quantum mechanical
spectrum for the box, $E_{\vec{n}} = \frac{h^2}{8mL^2}\vec{n}^{\,2}$. We can use these continuous variable techniques
to derive the Maxwell-Boltzmann distribution, which gives the probability distribution for
speeds (as opposed to velocities, which the Boltzmann distribution describes) in a gas.
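One way to see that the 1/h factor does the right thing is to compare the continuous-variable integral with the discrete quantum sum for a one-dimensional box. In the sketch below (an illustration in units with h = m = L = 1), the two partition functions approach each other as kT grows.

```python
# Minimal sketch (units with h = m = L = 1): compare the continuous
# partition function (1/h) * Int dx dp exp(-beta p^2 / 2m) for a 1D box
# with the discrete sum over box levels E_n = h^2 n^2 / (8 m L^2).
import numpy as np

def Z_classical(kT):
    return np.sqrt(2 * np.pi * kT)          # L * sqrt(2 pi m kT) / h

def Z_quantum(kT, nmax=100000):
    n = np.arange(1, nmax + 1)
    return np.sum(np.exp(-(n**2 / 8.0) / kT))

for kT in (1.0, 10.0, 100.0, 1000.0):
    print(kT, Z_classical(kT), Z_quantum(kT))
# the two agree better and better as kT grows (the classical limit)
```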
The individual particles can have internal structure. Since the internal degrees of freedom
are assumed to be independent of the external state (position and momentum), the partition
function has a product form Z = Zext Zint . We saw that we can treat the addition of weak
interactions to the ideal gas through the cluster expansion.

2. Degenerate Fermi Gas

Seen in (PS5,P1/P2 & PS6,E2/P1)


Moving to the quantum mechanical ideal gases we see qualitatively different behaviour
due to the interference of particles at low temperatures. For fermions, the Pauli exclusion
principle tells us that we cannot have two identical fermions in the same state. At zero
temperature, the particles stack up, one per state, starting from the lowest energy state,
until all particles are placed.
The chemical potential at T = 0 is known as the Fermi energy. For a large system, we
can usually approximate it as either the highest filled energy level or the lowest unfilled
energy level. This is the most important energy scale for a degenerate gas of fermions. At
low but non-zero temperatures, the distribution of occupation versus energy (i.e., nFD ()) is
modified in a region around the Fermi energy with a width of roughly kT . The fundamental
relations that allow us to understand the degenerate Fermi gas are
$$N = \int d\epsilon\; g(\epsilon)\, n_{FD}(\epsilon) \quad\text{and}\quad U = \int d\epsilon\; \epsilon\, g(\epsilon)\, n_{FD}(\epsilon),$$

where $g(\epsilon)$ is the density of states. At zero temperature, the Fermi-Dirac distribution $n_{FD}(\epsilon)$
becomes a step function with its edge at the Fermi energy, which simplifies calculations. A
degenerate Fermi gas in a box has a large energy ($U = \frac{3}{5}N\epsilon_F$) even at zero temperature. This
gives rise to the degeneracy pressure, which we can find from $p = -\frac{\partial U}{\partial V}$. This is responsible
for the stability of white dwarves and neutron stars.
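A small check of the zero-temperature relations (illustration only, with an arbitrary constant in an assumed g(ε) = C√ε for a 3D box): filling every state up to ε_F and integrating reproduces U = (3/5)Nε_F.

```python
# Minimal sketch (assumed g(eps) = C*sqrt(eps), zero temperature):
# check that U = (3/5) N eps_F when every state below eps_F is filled.
import numpy as np
from scipy.integrate import quad

C, eps_F = 2.0, 5.0                                 # arbitrary constants
g = lambda eps: C * np.sqrt(eps)

N, _ = quad(g, 0, eps_F)                            # N = Int g(eps) d eps
U, _ = quad(lambda eps: eps * g(eps), 0, eps_F)     # U = Int eps g(eps) d eps

print(U, 3.0 / 5.0 * N * eps_F)                     # should match
```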

3. Degenerate Bose Gas

Seen in (PS9,P1/P2)
Bosons have very different behaviour from fermions. The important relations for bosons
look identical to those for fermions, but with the Bose-Einstein distribution instead of the
Fermi-Dirac distribution:
$$N = \int d\epsilon\; g(\epsilon)\, n_{BE}(\epsilon) \quad\text{and}\quad U = \int d\epsilon\; \epsilon\, g(\epsilon)\, n_{BE}(\epsilon).$$

Rather than being excluded from occupying the same state, we found that at low tem-
peratures bosons prefer to occupy the same state. Setting the chemical potential to zero in
the first relation above for N , we find a definite relationship between number and tempera-
ture. This gives us a critical temperature, TC , for a given number of bosons. At this critical
temperature there is a phase transition from a regular gas to a Bose-Einstein condensate.
This is a state where a macroscopic number of particles all occupy the ground-state of our
potential. At all temperatures below TC , the chemical potential is zero. This is because we
can add a particle to the condensate without changing either the energy or entropy of the
system. We also saw that if the bosons interact, in which case it is no longer an ideal gas,
the chemical potential of the BEC becomes positive and depends on the number of particles.
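As an illustration of finding T_C (with an assumed g(ε) = C√ε and made-up numbers, not values from the problem sets), one can set µ = 0 and solve N = ∫ dε g(ε) n_BE(ε) for the temperature numerically:

```python
# Minimal sketch (assumed g(eps) = C*sqrt(eps)): find the condensation
# temperature kT_c by setting mu = 0 in N = Int g(eps) n_BE(eps) d eps.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

C, N_target = 1.0, 1.0e4                      # hypothetical numbers

def N_excited(kT):
    # particles in excited states when mu = 0
    integrand = lambda eps: C * np.sqrt(eps) / np.expm1(eps / kT)
    val, _ = quad(integrand, 0, np.inf)
    return val

kTc = brentq(lambda kT: N_excited(kT) - N_target, 1e-3, 1e3)
print(kTc)    # below this temperature the extra particles form a condensate
```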

D. Blackbody Radiation

Seen in (PS7 & PS8)


We can understand blackbody radiation as an ideal gas of massless bosons. I set it apart
from the ideal gas case because one of our principal interests in blackbody radiation is that
it radiates, i.e. moves from one system to another, which we have not considered for any
other ideal gas. The Planck spectrum $u(\epsilon)$ tells us the energy density per unit energy of
blackbody radiation: $u(\epsilon) = \frac{\partial}{\partial \epsilon}\left(\frac{U}{V}\right)$. Looking at our two principal relations for an ideal Bose
gas, we can also write this in terms of the density of states for massless particles, which have
an energy relation $E = pc$: $u(\epsilon) = \epsilon\, g(\epsilon)\, n_{BE}(\epsilon, T)/V$. Thinking in this way, we will set µ = 0,
since we can always add a massless particle of arbitrarily low energy to a system.
The radiation from a blackbody is summarized by Stefan's law

$$P = \sigma e A T^4,$$

where σ is the Stefan-Boltzmann constant, A is the surface area of the object, and the
emissivity e quantifies how close an object is to being a perfect blackbody. An object that
absorbs all radiation that strikes it has e = 1, while a body that reflects all radiation that
strikes it has e = 0. We also learned that emissivity can be a function of the frequency of
the radiation. This is important in explaining the greenhouse effect.
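The T⁴ dependence in Stefan's law follows from integrating the Planck spectrum. A quick numerical illustration (not course material), using g(ε) ∝ ε² for massless particles and µ = 0: doubling the temperature multiplies the total energy density by 2⁴ = 16.

```python
# Minimal sketch: with g(eps) proportional to eps^2 and mu = 0, the total
# energy density scales as T^4 (Stefan's law).
import numpy as np
from scipy.integrate import quad

def energy_integral(kT):
    # Integral of eps^3 / (exp(eps/kT) - 1), proportional to U/V
    val, _ = quad(lambda eps: eps**3 / np.expm1(eps / kT), 0, np.inf)
    return val

print(energy_integral(2.0) / energy_integral(1.0))   # should be 2^4 = 16
# analytically each integral equals (pi^4 / 15) * (kT)^4
```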

E. Ising Model

Seen in (PS10)
The Ising model considers spins positioned at the points of a lattice. We considered chain,
square, and cubical lattices in 1, 2, or 3 dimensions. A spin can point either up (s = +1) or
down (s = −1). Each spin interacts with its nearest neighbors: $E = -\epsilon \sum_{\text{neigh}} s_i s_j$. Energy
is minimized when the spins align with one another. In one dimension we were able to solve
the model exactly by iteratively considering the last spin in the chain and finding the sum
over its possible states. In higher dimensions we used a mean-field approximation to calculate
the critical temperature at which the system makes a transition from disordered to oriented
(i.e., magnetized). Ising proposed the model as an explanation for ferromagnetic materials,
which it does resemble qualitatively. The idea that a local preference for aligning can lead to
a transition from a disordered phase to an ordered phase has far broader applicability that
has led the Ising model to appear in various guises across physics and out into biological
and even social studies.
