Source: sites.science.oregonstate.edu/~roundyd/courses/ph441/thermal-physics.pdf

Thermal Physics

David Roundy


Contents

1 Spring 2018 Physics 441
   Introduction and course philosophy
   Brief review of Energy and Entropy

2 Week 1: Gibbs entropy approach
   Microstates vs. macrostates
   Probabilities of microstates
   Energy as a constraint
   Lagrange multipliers
   Maximizing entropy
   Homework for week 1 (PDF)

3 Week 2: Entropy and Temperature (K&K 2, Schroeder 6)
   Quick version
   Multiplicity of a paramagnet
   Entropy of our spins
   Thermal contact
   Homework for week 2 (PDF)

4 Week 3: Boltzmann distribution and Helmholtz (K&K 3, Schroeder 6)
   Internal energy
   Pressure
   Helmholtz free energy
   Using the free energy
   Ideal gas with just one atom
   Ideal gas with multiple atoms
   Homework for week 3 (PDF)

5 Week 4: Thermal radiation and Planck distribution (K&K 4, Schroeder 7.4)
   Harmonic oscillator
   Summing over microstates
   Black body radiation
   Low temperature heat capacity
   Homework for week 4 (PDF)


6 Week 5: Chemical potential and Gibbs distribution (K&K 9, Schroeder 7.1)
   Chemical potential
   Gibbs factor and sum
   Homework for week 5 (PDF)

7 Week 6: Ideal gas (K&K 6, Schroeder 6.7)
   Midterm on Monday
   Motivation
   Quantum mechanics and orbitals
   Fermi-Dirac distribution
   Bose-Einstein distribution
   Entropy
   Classical ideal gas
   Homework for week 6 (PDF)

8 Week 7: Fermi and Bose gases (K&K 7, Schroeder 7)

9 Notes from last year
   Density of (orbital) states
   Finding the density of states
   Using the density of states
   Fermi gas at finite temperature
   Bose gas
   Homework for week 7 (PDF)

10 Week 8: Work, heat, and cycles (K&K 8, Schroeder 4)
   Heat and work
   Homework for week 8 (PDF)

11 Week 9: Phase transformations (K&K 10, Schroeder 5.3)
   Coexistence
   Clausius-Clapeyron
   van der Waals
   van der Waals and liquid-vapor phase transition
   Examples of phase transitions
   Landau theory
   Homework for week 9 (PDF)

12 Review
   Equations to remember
   Equations not to remember

13 Solutions
   Solution for week 1
   Solution for week 2
   Solution for week 3
   Solution for week 4


   Solution for week 5
   Solution for week 6
   Solution for week 7
   Solution for week 8
   Solution for week 9


Chapter 1

Spring 2018 Physics 441

Office hours David Roundy: TF 1-2, 401B Weniger (or possibly look for me in 477 Weniger) or upon request. My door is open when I’m in my office, and students are welcome to enter with questions.

Syllabus The syllabus is here.

Textbook Thermal Physics by Kittel and Kroemer. The textbook is not required, but the course will follow the text reasonably closely.

Course notes If you wish, you may download this entire website as a PDF file.

Homework Homework will be due in class on Wednesday of each week (but not the first week of class). You should be able to start each homework the week before it is due. See the syllabus for details on homework grading. You may use the solutions (or any other resource you wish to use), but at the end of each problem, please cite what resources you used (students you worked with, whether you looked at the solutions, etc.). Note that verbatim copying from any source is plagiarism. I recommend not using the solutions.

Introduction and course philosophy

This is your second course in thermal physics. Energy and Entropy took a thermodynamics-first approach, with primary emphasis on how you could measure something, and only later introducing how you could predict it. I strongly support this approach, but it is not the most common approach to thermal physics.

I will teach this course in a more traditional order and approach, following the text of Kittel and Kroemer. This is the textbook that I used as an undergraduate. It’s an excellent book, but it very much uses a more mathematical, theory-first approach than I prefer.

Since this is now your second course in thermal physics, this should balance things out. With your Energy and Entropy background, you should be able to make physical connections with the math more easily. By organizing this course in this different way, I hope to broaden and deepen your understanding of thermal physics, while at the same time showing you a wide range of different physical systems.

Brief review of Energy and Entropy

Extensive/intensive (Schroeder 5.2)

If you consider two identical systems taken together (e.g. two cups of water, or two identical cubes of metal), each thermodynamic property either doubles or remains the same.

Extensive An extensive property, such as mass, will double when you’ve got twice as much stuff.

Intensive An intensive property, such as density, will be the same regardless of how much stuff you’ve got.


We care about extensivity and intensivity for several reasons. In one sense it functions like dimensions as a way to check our work. In another sense, it is a fundamental aspect of each measurable property, and once you are accustomed to this, you will feel very uncomfortable if you don’t know whether a property is extensive or intensive.

How to measure things

Volume Measure dimensions and compute it. (extensive)

Pressure Force per area. Can equalize if systems can exchange volume. (intensive) (Schroeder 1.2)

Temperature Find something that depends on temperature, and calibrate it. Alternatively use an ideal gas. Equalizes when systems are in contact. (intensive)

Energy Challenging. . . measure work and heat (e.g. by measuring power into a resistor). (extensive)

W = −∫p dV (1.1)

Entropy (extensive) Measure heat for a quasistatic process and find

∆S = ∫ dQ/T (1.2)

(Schroeder 3.2)

Derivatives Measure changes of one thing as the other changes, with the right stuff held fixed.

First Law (Energy conservation, Schroeder 1.4)

dU = dQ + dW (1.3)

Second Law (Entropy increases, Schroeder 2.3)

∆Ssystem + ∆Senvironment ≥ 0 (1.4)

Thermodynamic identity (Schroeder 3.4)

dU = TdS − pdV (1.5)

Thermodynamic potentials (Schroeder 1.6, 5.1)

Helmholtz free energy

F = U − TS (1.6)
dF = dU − TdS − SdT (1.7)
   = −SdT − pdV (1.8)

Enthalpy

H = U + pV (1.9)
dH = dU + pdV + Vdp (1.10)
   = TdS + Vdp (1.11)

Gibbs free energy

G = H − TS (1.12)
  = U − TS + pV (1.13)
dG = dH − TdS − SdT (1.14)
   = −SdT + Vdp (1.15)

Statistical entropy (Schroeder 2.6, Problem 6.43)

Boltzmann formulation (microcanonical or for large N):

S(E) = kB ln g (1.16)

5

Page 7: Thermal Physicssites.science.oregonstate.edu/~roundyd/COURSES/ph441/thermal-physics.pdf · Wecareaboutextensivityandintensivityforseveral reasons. In one sense it functions like dimensions

Gibbs formulation (always true):

S(E) = −kB ∑(all states i) Pi ln Pi (1.17)

Boltzmann ratio (Schroeder 6.1)

Pi/Pj = e^(−(Ei−Ej)/kBT) (1.18)

Pi = e^(−Ei/kBT) / ∑(all states j) e^(−Ej/kBT) (1.19)

Thermal averages (Schroeder 6.2)

The average value of any quantity is given by the weighted average

⟨X⟩ = ∑(all states i) Pi Xi (1.20)
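As a quick numerical check of the weighted average ⟨X⟩ = ∑i Pi Xi (eq. 1.20), here is a minimal sketch; the probabilities and energies are made-up illustrative values, not from the notes:

```python
def thermal_average(probabilities, values):
    """<X> = sum_i P_i X_i, a plain weighted average (eq. 1.20)."""
    assert abs(sum(probabilities) - 1.0) < 1e-12, "probabilities must sum to 1"
    return sum(p * x for p, x in zip(probabilities, values))

# Example: three states with energies -1, 0, +1 and a symmetric distribution.
P = [0.25, 0.5, 0.25]
E = [-1.0, 0.0, 1.0]
print(thermal_average(P, E))  # symmetric about 0, so the average energy is 0.0
```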


Chapter 2

Week 1: Gibbs entropy approach

The only reference in the text is (Schroeder Problem 6.43).

There are two different approaches for deriving the results of statistical mechanics. These approaches differ in what fundamental postulates are taken, but agree in the resulting predictions. The textbook takes a traditional microcanonical Boltzmann approach.

This week, before using that approach, we will reach the same results using the Gibbs formulation of the entropy (sometimes referred to as the “information theoretic entropy”), as advocated by Jaynes. Note that Boltzmann also used the more general Gibbs entropy, even though it doesn’t appear on his tombstone.

Microstates vs. macrostates

You can think of a microstate as a quantum mechanical energy eigenstate. As you know from quantum mechanics, once you have specified an energy eigenstate, you know all that you can about the state of a system. Note that before quantum mechanics, this was more challenging. You can define a microstate classically, and people did so, but it was harder. In particular, the number of microstates classically is generally both infinite and non-countable, since any real number for position and velocity is possible. Quantum mechanics makes this all easier, since any finite (in size) system will have an infinite but countable number of microstates.

When you have a non-quantum mechanical system (or one that you want to treat classically), a microstate represents one of the “primitive” states of the system, in which you have specified all possible variables. In practice, it is common when doing this to specify what we might call a “mesostate”, but call it a microstate. e.g. you might hear someone describe a microstate of a system of marbles in urns as defined by how many marbles of each color are in each urn. Obviously there are many quantum mechanical microstates corresponding to each of those states.

Small White Boards Write down a description of one particular macrostate.

A macrostate is a state of a system in which we have specified all the properties of the system that will affect any measurements we may care about. For instance, when defining a macrostate of a given gas or liquid, we could specify the internal energy, the number of molecules (or equivalently mass), and the volume. We need to specify all three properties (if we want to ask, for instance, for the entropy), because otherwise we won’t have a unique answer. For different sorts of systems there are different ways that we can specify a macrostate. In this way, macrostates have a flexibility that real microstates do not. e.g. I could argue that the macrostate of a system of marbles in urns would be defined by the number of marbles of each color in each urn. After all, each macrostate would still correspond to many different energy eigenstates.


Probabilities of microstates

The name of the game in statistical mechanics is determining the probabilities of the various microstates, which we call {Pi}, where i represents a microstate. I will note here the term ensemble, which refers to a set of microstates with their associated probabilities. We define ensembles according to what constraints we place on the microstates, e.g. in this discussion we will constrain all microstates to have the same volume and number of particles, which defines the canonical ensemble. Next week/chapter we will discuss the microcanonical ensemble (which also constrains all microstates to have identical energy), and other ensembles will follow. Today’s discussion, however, will be largely independent of which ensemble we choose to work with; the choice generally depends on what processes we wish to consider.

Normalization

The total probability of all microstates added up must be one:

∑(all µstates i) Pi = 1 (2.1)

This may seem obvious, but it is very easy to forget when lost in algebra!

From probabilities to observables

If we want to find the value that will be measured for a given observable, we will use the weighted average. For instance, the internal energy of a system is given by:

U = ∑(all µstates i) Pi Ei (2.2)
  = ⟨Ei⟩ (2.3)

where Ei is the energy eigenvalue of a given microstate. The ⟨Ei⟩ notation simply denotes a weighted average of E. The subscript in this notation is optional.

This may seem wrong to you. In quantum mechanics, you were taught that the outcome of a measurement was always an eigenvalue of the observable, not the expectation value (which is itself an average). The difference is in how we are imagining performing a measurement, and what the size of the system is thought to be.

In contrast, imagine measuring the mass of a liter of water, for instance using a balance. While you are measuring its mass, there are water molecules leaving the glass (evaporating), and other water molecules from the air are entering the glass and condensing. The total mass is fluctuating as this occurs, far more rapidly than the scale can tip up or down. It reaches balance when the weights on the other side balance the average weight of the glass of water.

The processes of measuring pressure and energy are similar. There are continual fluctuations going on, as energy goes back and forth between your system and the environment, and the process of measurement (which is slow) will end up measuring the average.

In contrast, when you perform spectroscopy on a system, you do indeed see lines corresponding to discrete eigenvalues, even though you are using a macroscopic amount of light on what may be a macroscopic amount of gas. This is because each photon that is absorbed by the system will be absorbed by a single molecule (or perhaps by two that are in the process of colliding). Thus you don’t measure averages in a direct way.

In thermal systems such as we are considering in this course, we will consider the kind of observable for which the measured value is the average value. This is why statistics are relevant!

Energy as a constraint

Energy is one of the most fundamental concepts. When we describe a macrostate, we will (almost) always need to constrain the energy. For real systems, there are always an infinite number of microstates with no upper bound on energy. Since we never have infinite energy in our labs or kitchens, we know that there is a practical bound on the energy.

We can think of this as applying a mathematical constraint on the system: we specify a U, and this disallows any set of probabilities {Pi} that have a different U.

Small Group Question Consider a system that has just three microstates, with energies −ε, 0, and ε. Construct three sets of probabilities corresponding to U = 0.

I picked an easy U . Any “symmetric” distribution ofprobabilities will do. You probably chose somethinglike:

Ei: −ε 0 εPi: 0 1 0Pi: 1

2 0 12

Pi: 13

13

13

Question Given that each of these answers has the same U, how can we find the correct set of probabilities for this U? Vote on which you think most likely!

The most “mixed up” would be ideal. But how do we define mixed-up-ness? The “mixed-up-ness” of a probability distribution can be quantified via the Gibbs formulation of entropy:

S = −k ∑(all µstates i) Pi ln Pi (2.4)
  = ∑(all µstates i) Pi (−k ln Pi) (2.5)
  = ⟨−k ln Pi⟩ (2.6)

So entropy is a kind of weighted average of −k ln Pi.

The Gibbs entropy expression (sometimes referred to as the information theory entropy, or Shannon entropy) can be shown to be the only possible entropy function (of {Pi}) that has a reasonable set of properties:

1. It must be extensive. If you subdivide your system into uncorrelated and noninteracting subsystems (or combine two noninteracting systems), the entropy must just add up. Solve problem 2 on the homework this week to show this. (Technically, it must be additive even if the systems interact, but that is more complicated.)

2. The entropy must be a continuous function of the probabilities {Pi}. Realistically, we want it to be analytic.

3. The entropy shouldn’t change if we shuffle around the labels of our states, i.e. it should be symmetric.

4. When all microstates are equally likely, the entropy should be maximized.

5. When all microstates have zero probability except one, the entropy should be minimized.

Note The constant k is called Boltzmann’s constant, and is sometimes written as kB. Kittel and Kroemer prefer in effect to set kB = 1, and define σ as S/kB to make this explicit. I will include kB, but you can and should keep in mind that it is just a unit conversion constant. Note also that changing the base of the logarithm in effect just changes this constant.

How is this mixed up?

Small Group Question Compute S for each of the above probability distributions.

Answer

Ei:   −ε    0    ε      S
Pi:    0    1    0      0
Pi:   1/2   0   1/2     k ln 2
Pi:   1/3  1/3  1/3     k ln 3

You can see that if more states are probable, the entropy is higher. Or alternatively you could say that if the probability is more “spread out”, the entropy is higher.
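The entropies of these three distributions are easy to check numerically. A minimal sketch, in units where k = 1 (so the values come out as 0, ln 2 ≈ 0.693, and ln 3 ≈ 1.099):

```python
import math

def gibbs_entropy(probs):
    """S/k = -sum_i P_i ln P_i, using the convention 0 ln 0 = 0."""
    return -sum(p * math.log(p) for p in probs if p > 0)

for P in ([0, 1, 0], [0.5, 0, 0.5], [1/3, 1/3, 1/3]):
    print(gibbs_entropy(P))  # 0.0, then ln 2, then ln 3
```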


Maximize entropy

The correct distribution is that which maximizes the entropy of the system, its Gibbs entropy, subject to the appropriate constraints. Yesterday we tackled a pretty easy 3-level system with U = 0. If I had chosen a different energy, it would have been much harder to find the distribution that gave the highest entropy.

Small Group Question Find the probability distribution for our 3-state system that maximizes the entropy, given that the total energy is U.

Answer We have two constraints,

∑µ Pµ = 1 (2.7)
∑µ Pµ Eµ = U (2.8)

and we want to maximize

S = −k ∑µ Pµ ln Pµ. (2.9)

Fortunately, we’ve only got three states, so we write down each sum explicitly, which will make things easier.

P− + P0 + P+ = 1 (2.10)
−εP− + εP+ = U (2.11)
P+ = P− + U/ε (2.12)
P− + P0 + P− + U/ε = 1 (2.13)
P0 = 1 − 2P− − U/ε (2.14)

Now that we have all our probabilities in terms of P−, we can simplify our entropy:

−S/k = P− ln P− + P0 ln P0 + P+ ln P+ (2.16)
     = P− ln P− + (P− + U/ε) ln(P− + U/ε) + (1 − 2P− − U/ε) ln(1 − 2P− − U/ε) (2.17)

Now we can maximize this entropy by setting its derivative to zero!

−(1/k) dS/dP− = 0 (2.18)
= ln P− + 1 − 2 ln(1 − 2P− − U/ε) − 2 + ln(P− + U/ε) + 1 (2.19)
= ln P− − 2 ln(1 − 2P− − U/ε) + ln(P− + U/ε) (2.20)
= ln[ P−(P− + U/ε) / (1 − 2P− − U/ε)² ] (2.21)

1 = P−(P− + U/ε) / (1 − 2P− − U/ε)² (2.22)

And now it is just a polynomial equation. . .

P−(P− + U/ε) = (1 − 2P− − U/ε)² (2.23)
P−² + (U/ε)P− = 1 − 4P− − 2U/ε + 4P−² + 4P−U/ε + U²/ε² (2.24)

At this stage I’m going to stop. Clearly you could keep going and solve for P− using the quadratic equation, but we wouldn’t learn much from doing so. The point here is that we can solve for the three probabilities given the internal energy constraint. However, doing so is a major pain, and the result is not looking promising in terms of simplicity. There is a better way!
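Instead of finishing the quadratic by hand, the same maximization can be sketched numerically: scan P− over its allowed range for a fixed U/ε and keep the distribution with the largest entropy. This grid search, and the sample value U/ε = −0.2, are my own illustration, not from the notes; the final check anticipates the Boltzmann form derived below, for which equally spaced energies make the probabilities a geometric sequence:

```python
import math

def entropy(probs):
    """S/k = -sum P ln P, skipping zero entries."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def max_entropy_three_state(u, n=100_000):
    """Brute-force the entropy-maximizing (P-, P0, P+) for energies
    (-eps, 0, +eps) with fixed U/eps = u, by scanning P- on a grid."""
    best = None
    for k in range(1, n):
        p_minus = k / n
        p_plus = p_minus + u            # energy constraint, eq. (2.12)
        p_zero = 1 - 2 * p_minus - u    # normalization, eq. (2.14)
        if p_plus <= 0 or p_zero <= 0:
            continue
        s = entropy([p_minus, p_zero, p_plus])
        if best is None or s > best[0]:
            best = (s, p_minus, p_zero, p_plus)
    return best

s, pm, p0, pp = max_entropy_three_state(-0.2)
print(pm, p0, pp)
# At the maximum, the probabilities form a geometric sequence
# (the Boltzmann form), so P0**2 is very nearly P- * P+.
print(abs(p0**2 - pm * pp) < 1e-3)
```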

Lagrange multipliers

If you have a function of N variables, and want to apply a single constraint, one approach is to use the constraint to algebraically eliminate one of your variables. Then you can set the derivatives with respect to all remaining variables to zero to maximize. This is what you presumably did in the last activity. However, in many cases this isn’t feasible. And when there are many variables, it is almost universally inconvenient. A nicer approach for maximization under constraints is the method of Lagrange multipliers.

The idea of Lagrange multipliers is to introduce an additional variable (called the Lagrange multiplier) rather than eliminating one. This may seem counterintuitive, but it allows you to create a new function that can be maximized by setting its derivatives with respect to all N variables to zero, while still satisfying the constraint.

Suppose we have a situation where we want to maximize F under some constraints:

F = F(w, x, y, z) (2.25)
f1(w, x, y, z) = C1 (2.26)
f2(w, x, y, z) = C2 (2.27)

We define a new variable L as follows:

L ≡ F + λ1(C1 − f1(w, x, y, z)) + λ2(C2 − f2(w, x, y, z)) (2.28)

Note that L = F provided the constraints are satisfied, since each constraint means that C1 − f1(w, x, y, z) = 0. We then maximize L by setting its derivatives to zero:

(∂L/∂w)x,y,z = 0 (2.29)
   = (∂F/∂w)x,y,z − λ1 ∂f1/∂w − λ2 ∂f2/∂w (2.30)

(∂L/∂x)w,y,z = 0 (2.31)
   = (∂F/∂x)w,y,z − λ1 ∂f1/∂x − λ2 ∂f2/∂x (2.32)

(∂L/∂y)w,x,z = 0 (2.33)
   = (∂F/∂y)w,x,z − λ1 ∂f1/∂y − λ2 ∂f2/∂y (2.34)

(∂L/∂z)w,x,y = 0 (2.35)
   = (∂F/∂z)w,x,y − λ1 ∂f1/∂z − λ2 ∂f2/∂z (2.36)

This gives us four equations. But we need to keep in mind that we also have the two constraint equations:

f1(w, x, y, z) = C1 (2.37)
f2(w, x, y, z) = C2 (2.38)

We now have six equations and six unknowns, since λ1 and λ2 have also been added as unknowns, and thus we can solve all these equations simultaneously, which will give us the maximum under the constraints. We also get the λ values for free.

The meaning of the Lagrange multiplier

So far, this approach probably seems pretty abstract, and the Lagrange multiplier λi seems like a strange number that we just arbitrarily added in. Even were there no more meaning in the multipliers, this method would be a powerful tool for maximization (or minimization). However, as it turns out, the multiplier often (but not always) has deep physical meaning. Examining the Lagrangian L, we can see that

(∂L/∂C1)w,x,y,z,C2 = λ1 (2.39)

so the multiplier is a derivative of the Lagrangian with respect to the corresponding constraint value. This doesn’t seem too useful.


More importantly (and less obviously), we can now think about the original function we maximized, F, as a function (after maximization) of just C1 and C2. If we do this, then we find that

(∂F/∂C1)C2 = λ1 (2.40)

I think this is incredibly cool! And it is a hint that Lagrange multipliers may be related to Legendre transforms.
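The claim that the multiplier equals the derivative of the maximized function with respect to the constraint value (eq. 2.40) can be checked on a toy problem of my own (not from the notes): maximize F(x, y) = −(x² + y²) subject to x + y = C. The Lagrange conditions, solved by hand, give x = y = C/2 and λ = −C, and a finite difference of the maximized F against C reproduces λ:

```python
def maximize(C):
    """Maximize F(x,y) = -(x**2 + y**2) subject to x + y = C.
    The Lagrange conditions -2x = lam, -2y = lam, x + y = C
    give x = y = C/2 and lam = -C (worked out by hand)."""
    x = y = C / 2
    F = -(x**2 + y**2)
    lam = -C
    return F, lam

C, h = 1.0, 1e-6
F_hi, _ = maximize(C + h)
F_lo, _ = maximize(C - h)
dF_dC = (F_hi - F_lo) / (2 * h)   # derivative of the maximized F w.r.t. C
_, lam = maximize(C)
print(dF_dC, lam)  # both come out to -1.0, as eq. (2.40) predicts
```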

Maximizing entropy

When maximizing the entropy, we need to apply two constraints. We must hold the total probability to 1, and we must fix the mean energy to be U. This time I’m going to call my Lagrange multipliers αkB and βkB, so as to reduce the number of subscripts required and also to make all the Boltzmann constants go away.

L = S + αkB (1 − ∑i Pi) + βkB (U − ∑i Pi Ei) (2.41)
  = −kB ∑i Pi ln Pi + αkB (1 − ∑i Pi) + βkB (U − ∑i Pi Ei) (2.42)

where α and β are the two Lagrange multipliers. I’ve added here a couple of factors of kB, mostly to make the kB in the entropy disappear. We want to maximize this, so we set its derivatives to zero:

∂L/∂Pi = 0 (2.43)
       = −kB (ln Pi + 1) − kBα − βkB Ei (2.44)
ln Pi = −1 − α − βEi (2.45)
Pi = e^(−1−α−βEi) (2.46)

So now we know the probabilities in terms of the two Lagrange multipliers, which already tells us that the probability of a given microstate is exponentially related to its energy. At this point, it is convenient to invoke the normalization constraint. . .

1 = ∑i Pi (2.47)
  = ∑i e^(−1−α−βEi) (2.48)
  = e^(−1−α) ∑i e^(−βEi) (2.49)
e^(1+α) = ∑i e^(−βEi) (2.50)

where we define the normalization factor as

Z ≡ ∑(all states i) e^(−βEi) (2.52)

which is called the partition function. Putting this together, the probability is

Pi = e^(−βEi) / Z (2.53)
   = Boltzmann factor / partition function (2.54)
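The partition function and Boltzmann probabilities translate directly into a few lines of code. A minimal sketch using the three-state system from class, with ε = 1 and an arbitrary illustrative β = 1 (both made-up values):

```python
import math

def boltzmann_probs(energies, beta):
    """P_i = exp(-beta*E_i) / Z, with Z = sum_j exp(-beta*E_j) (eqs. 2.52-2.53)."""
    weights = [math.exp(-beta * E) for E in energies]
    Z = sum(weights)
    return [w / Z for w in weights]

P = boltzmann_probs([-1.0, 0.0, 1.0], beta=1.0)
print(P)        # lower-energy states are more probable at positive temperature
print(sum(P))   # normalized: 1.0
```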

At this point, we haven’t yet solved for β, and to do so, we’d need to invoke the internal energy constraint:

U = ∑i Ei Pi (2.55)
U = ∑i Ei e^(−βEi) / Z (2.56)

As it turns out, β = 1/(kBT). This follows from my claim that the Lagrange multiplier is the partial derivative with respect to the constraint value:

kBβ = (∂S/∂U)Normalization=1 (2.57)

However, I did not prove this to you. I will leave demonstrating this as a homework problem.
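Before doing the homework, the claim kBβ = ∂S/∂U can at least be checked numerically. This sketch (my own illustration, in units with kB = 1, using the three-state energies from class) computes U(β) and S(β) from the Boltzmann distribution and compares a finite-difference dS/dU against β:

```python
import math

def u_and_s(energies, beta):
    """U = sum P_i E_i and S = -sum P_i ln P_i (k_B = 1) for Boltzmann P_i."""
    Z = sum(math.exp(-beta * E) for E in energies)
    P = [math.exp(-beta * E) / Z for E in energies]
    U = sum(p * E for p, E in zip(P, energies))
    S = -sum(p * math.log(p) for p in P)
    return U, S

E = [-1.0, 0.0, 1.0]
beta, h = 1.0, 1e-5
U_lo, S_lo = u_and_s(E, beta - h)
U_hi, S_hi = u_and_s(E, beta + h)
print((S_hi - S_lo) / (U_hi - U_lo))  # ≈ beta = 1.0
```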

Homework for week 1 (PDF)

1. Energy, Entropy, and Probabilities The goal of this problem is to show that once we have maximized the entropy and found the microstate probabilities in terms of a Lagrange multiplier β, we can prove that β = 1/kT based on the statistical definitions of energy and entropy and the thermodynamic definition of temperature embodied in the thermodynamic identity.

The internal energy and entropy are each defined as a weighted average over microstates:

U = ∑i Ei Pi        S = −kB ∑i Pi ln Pi (2.58)

We saw in class that the probability of each microstate can be given in terms of a Lagrange multiplier β as

Pi = e^(−βEi) / Z        Z = ∑i e^(−βEi) (2.59)

Put these probabilities into the above weighted averages in order to relate U and S to β. Then make use of the thermodynamic identity

dU = TdS − pdV (2.60)

to show that β = 1/kT.

2. Gibbs entropy is extensive Consider two noninteracting systems A and B. We can either treat these systems as separate, or as a single combined system AB. We can enumerate all states of the combined system by enumerating all states of each separate system. The probability of the combined state (iA, jB) is given by P^AB_ij = P^A_i P^B_j. In other words, the probabilities combine in the same way as two dice rolls would, or the probabilities of any other uncorrelated events.

a) Show that the entropy of the combined system S^AB is the sum of entropies of the two separate systems considered individually, i.e. S^AB = S^A + S^B. This means that entropy is extensive. Use the Gibbs entropy for this computation. You need make no approximation in solving this problem.

b) Show that if you have N identical noninteracting systems, their total entropy is NS1, where S1 is the entropy of a single system.

Note In real materials, we treat properties as being extensive even when there are interactions in the system. In this case, extensivity is a property of large systems, in which surface effects may be neglected.

3. Boltzmann probabilities Consider the three-state system with energies (−ε, 0, ε) that we discussed in class.

a) At infinite temperature, what are the probabilities of the three states being occupied?

b) At very low temperature, what are the three probabilities?

c) What happens to the probabilities if you allow the temperature to be negative?
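The limits in this problem are easy to explore numerically. Below is a minimal sketch (Python, with ε and k_B set to 1; the function name is my own invention) that computes the Boltzmann probabilities for the three-state system at high, low, and negative temperature:

```python
import math

def boltzmann_probs(energies, kT):
    """Boltzmann probabilities P_i = exp(-E_i / kT) / Z for a list of energies."""
    weights = [math.exp(-E / kT) for E in energies]
    Z = sum(weights)
    return [w / Z for w in weights]

eps = 1.0
energies = [-eps, 0.0, eps]

for kT in [100.0, 0.1, -0.1]:  # high T, low T, and a negative temperature
    probs = boltzmann_probs(energies, kT)
    print(f"kT = {kT:6.1f}: P = {[round(p, 4) for p in probs]}")
```

At high temperature the three probabilities approach 1/3 each; at low temperature the lowest level dominates; at negative temperature the weighting inverts and the highest level dominates.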


Chapter 3

Week 2: Entropy and Temperature (K&K 2, Schroeder 6)

This week we will be following Chapter 2 of Kittel and Kroemer, which uses a microcanonical approach (or Boltzmann entropy approach) to relate entropy to temperature. This is an alternative derivation to the Gibbs approach we used last week, and it can be helpful to have seen both. In a few ways the Boltzmann approach is conceptually simpler, while there are a number of other ways in which the Gibbs approach is simpler.

Fundamental assumption

The difference between these two approaches is in what is considered the fundamental assumption. In the Gibbs entropy approach we assumed that the entropy was a "nice" function of the probabilities of microstates, which gave us the Gibbs formula. From there, we could maximize the entropy to find the probabilities under some set of constraints.

The Boltzmann approach makes a perhaps simpler assumption, which is that if only microstates with a given energy are permitted, then all of the microstates with that energy are equally probable. (This scenario with all microstates having the same energy is the microcanonical ensemble.) Thus the macrostate with the most corresponding microstates will be the most probable macrostate. The number of microstates corresponding to a given macrostate is called the multiplicity g(E, V). In this approach, multiplicity (which did not show up last week!) becomes a fundamentally important quantity, since the macrostate with the highest multiplicity is the most probable macrostate.

Outline of the week One or two topics per day:

1. Quick version showing the conclusions we will reach.

2. Finding the multiplicity of a paramagnet (Chapter 1).

3. Combining two non-interacting systems; defining temperature.

4. Central limit theorem and how "large N" does its magic.

Quick version

This quick version will tell you all the essential physics results for the week, without proof. The beauty of statistical mechanics (whether following the text or using the information-theory approach of last week) is that you don't actually need to take on faith or from experiment the connection between the statistical theory and the empirical definitions used in thermodynamics.

14

Page 16: Thermal Physicssites.science.oregonstate.edu/~roundyd/COURSES/ph441/thermal-physics.pdf · Wecareaboutextensivityandintensivityforseveral reasons. In one sense it functions like dimensions

Entropy

The multiplicity sounds sort of like entropy (since it is maximized), but the multiplicity is not extensive (nor intensive), because the number of microstates for two identical systems taken together is the square of the number of microstates available to one of the single systems. This naturally leads to the Boltzmann definition of the entropy, which is

S(E, V) = k_B ln g(E, V).    (3.1)

The logarithm converts the multiplicity into an extensive quantity, in a way that is directly analogous to the logarithm that appears in the Gibbs entropy.

For large systems (e.g. systems composed of ∼10²³ particles), the most probable configuration is essentially the same as any remotely probable configuration. This comes about for the same reason that if you flip 10²³ coins, you will get 5 × 10²² ± 10¹² heads. On an absolute scale, that's a lot of uncertainty in the number of heads that would show up, but on a fractional scale, you're pretty accurate if you assume that 50% of the flips will be heads.

Temperature

From Energy and Entropy (and last week), you will remember that dU = TdS − pdV, which tells us that T = (∂U/∂S)_V. If we assume that only states with one particular energy E have a non-zero probability of being occupied, then U = E, i.e. the thermodynamic internal energy is the same as the energy of any allowed microstate. Then we can replace U with E and conclude that

T = (∂E/∂S)_V    (3.2)

1/T = (∂S/∂E)_V    (3.3)

= (∂[k_B ln g(E, V)]/∂E)_V    (3.4)

= (k_B/g) (∂g/∂E)_V    (3.5)

From this perspective, it looks like our job is to learn to solve for g(E) and from that to find S(E), and once we have done those tasks we will know the temperature (and soon everything else).

Differentiable multiplicity The above assumes that g(E) is a differentiable function, which means that the number of microstates must be a continuous function of energy! This highlights one of the distinctions between the microcanonical approach and our previous (canonical) Gibbs approach.

In reality, we know from quantum mechanics that any system of finite size has a finite number of eigenstates within any given energy range, and thus g(E) cannot be either continuous or differentiable. Boltzmann, of course, did not know this, and assumed that there were an infinite number of microstates possible within any energy range, and would strictly speaking interpret g(E) in terms of a volume of phase space.

The resolution to this conundrum is to invoke large numbers, and to assume that we are averaging g(E) over a range of energies in which there are many, many states. For real materials with N ≈ 10²³, this assumption is pretty valid. Much of this chapter will involve learning to work with this large-N assumption, and to use it to extract physically meaningful results. In the Gibbs approach this large-N assumption was not needed.

As Kittel discusses towards the end of the chapter, we only really need to know g(E) up to some


constant factor, since a constant factor in g becomes a constant additive change in S, which doesn't have any physical impact.

The "real" g(E) is a smoothed average over a range of energies. In practice, doing this can be confusing, and so we tend to focus on systems where the energy is always an integer multiple of some constant. Thus a focus on spins in a magnetic field, and harmonic oscillators.

Multiplicity of a paramagnet

So now the question becomes how to find the number of microstates that correspond to a given energy, g(E). Once we have this in an analytically tractable form, we can compute everything else we might care for (with effort). This is essentially a counting problem, and much of what you need is introduced in Chapter 1. We will spend some class time going over one example of computing the multiplicity. Consider a paramagnetic system consisting of spin-1/2 particles that can be either up or down. Each spin has a magnetic moment in the z direction of ±m, and we are interested in the total magnetic moment µ_tot, which is the sum of all the individual magnetic moments. Note that the magnetization M used in electromagnetism is just the total magnetic moment of the material divided by its volume.

M ≡ µ_tot/V    (3.6)

Note It is confusingly common to refer to the total magnetic moment as the magnetization. Given either a numerical value or an expression, it's usually easy to tell what you've got by checking the dimensions.

Small Group Question Work out how many ways a system of 4 spins can have each possible magnetization by enumerating all the microstates corresponding to each magnetization.

Now find a mathematical expression that will tell you the multiplicity of a system with an even number N of spins and just one ↑ spin. Then find the multiplicity for two ↑ spins, and for three ↑ spins.

Now find a mathematical expression that will tell you the multiplicity of a system with an even number N of spins and total magnetic moment µ_tot = 2sm, where s is an integer. We call s the spin excess, since N↑ = ½N + s. Alternatively, you could write your expression in terms of the number of up spins N↑ and the number of down spins N↓.

Answer We can enumerate all spin microstates:

µ_tot = −4m: ↓↓↓↓  (g = 1)
µ_tot = −2m: ↓↓↓↑ ↓↓↑↓ ↓↑↓↓ ↑↓↓↓  (g = 4)
µ_tot = 0: ↓↓↑↑ ↓↑↑↓ ↑↑↓↓ ↑↓↓↑ ↑↓↑↓ ↓↑↓↑  (g = 6)
µ_tot = 2m: ↑↑↑↓ ↑↑↓↑ ↑↓↑↑ ↓↑↑↑  (g = 4)
µ_tot = 4m: ↑↑↑↑  (g = 1)

To generalize this to g(N, s), we need to come up with a systematic way to count the states that have the same spin excess s. Clearly if s = ±N/2, g = 1, since that means that all the spins are pointed the same way, and there is only one way to do that.

g(N, s = ±½N) = 1    (3.7)

Now if we have just one spin going the other way, there are going to be N ways we could manage that:

g(N, s = ±(½N − 1)) = N    (3.8)

Now when we go to flip it so we have two spins up, there will be N − 1 ways to flip the second spin. But then, when we do this we will end up counting every possibility twice, which means that we will need to divide by two.


g(N, s = ±(½N − 2)) = N(N − 1)/2    (3.9)

When we get to adding the third ↑ spin, we'll have N − 2 spins to flip. But now we have to be even more careful, since for the same three up-spins, we have several ways to reach that microstate. In fact, we will need to divide by 6, or 3 × 2, to get the correct answer (as we can check for our four-spin example).

g(N, s = ±(½N − 3)) = N(N − 1)(N − 2)/3!    (3.10)

At this stage we can start to see the pattern, which comes out to

g(N, s) = N!/[(½N + s)! (½N − s)!]    (3.11)

= N!/(N↑! N↓!)    (3.12)

Stirling’s approximation

As you can see, we now have a bunch of factorials. Once we compute the entropy, we will have a bunch of logarithms of factorials.

N! = ∏_{i=1}^{N} i    (3.13)

ln N! = ln(∏_{i=1}^{N} i)    (3.14)

= Σ_{i=1}^{N} ln i    (3.15)

So you can see that the log of a factorial is a sum of logs. When the number of things being summed is large, we can approximate this sum with an integral. This may feel like a funny business, particularly for those of you who took my computational class, where we frequently used sums to approximate integrals! But the approximation can go both ways. In this case, if we approximate the sum as an integral we can find an analytic expression for the factorial:

ln N! = Σ_{i=1}^{N} ln i    (3.16)

≈ ∫_1^N ln x dx    (3.17)

= [x ln x − x]_1^N    (3.18)

= N ln N − N + 1    (3.19)

At this point, we should recognize that the 1 that we see is much smaller than the other two terms, and is actually likely to be wrong. Importantly, there is a larger error being made here, which we can see if we zoom into the upper end of our integral. We are missing ½ ln N! The reason is that our integral went precisely to N, but if we imagine a midpoint-rule picture (or trapezoidal rule) we are missing half of that last point. This gives us:

ln N! ≈ (N + ½) ln N − N    (3.20)

We could find the constant term correctly (it is not 1), but that is more work, and even the ½ above is usually omitted when using Stirling's approximation, since it is much smaller than the other terms when N ≫ 1.

Entropy of our spins

I'm going to use a different approach than the text to find the entropy of this spin system when there are many spins and the spin excess is relatively small.


S = k ln g(N, s)    (3.21)

= k ln(N!/[(½N + s)! (½N − s)!])    (3.22)

= k ln(N!/(N↑! N↓!))    (3.23)

= k ln(N!/[(h + s)! (h − s)!])    (3.24)

At this point I'm going to define for convenience h ≡ ½N, just to avoid writing so many ½'s. I'm also going to focus on the s dependence of the entropy.

S/k = ln(N!) − ln(N↑!) − ln(N↓!)    (3.25)

= ln N! − ln(h + s)! − ln(h − s)!    (3.26)

= ln N! − Σ_{i=1}^{h+s} ln i − Σ_{i=1}^{h−s} ln i    (3.27)

At the last step, I wrote the log of the factorial as a sum of logs. This is still looking pretty hairy. So let's now consider the difference between the entropy with s and the entropy when s = 0 (which I will call here S_0 for compactness and convenience).

(S(s) − S_0)/k_B = −Σ_{i=1}^{h+s} ln i − Σ_{i=1}^{h−s} ln i + Σ_{i=1}^{h} ln i + Σ_{i=1}^{h} ln i    (3.28)

= −Σ_{i=h+1}^{h+s} ln i + Σ_{j=h−s+1}^{h} ln j    (3.29)

where I have changed the sums to account for the difference between the sums with s and those without. At this stage, our indices are starting to feel a little inconvenient given the short range we are summing over, so let's redefine our index of summation so the sums will run up to s. In preparation for this, at the last step, I renamed one of my dummy indices.

i = h + k        j = h + 1 − k    (3.30)

With these indices, each sum can go from k = 1 to k = s, which will enable us to combine our sums into one.

(S − S_0)/k = −Σ_{k=1}^{s} ln(h + k) + Σ_{k=1}^{s} ln(h + 1 − k)    (3.31)

= Σ_{k=1}^{s} [ln(h + 1 − k) − ln(h + k)]    (3.32)

At this point, if you're anything like me, you're thinking "I could turn that difference of logs into a log of a ratio!" Sadly, this doesn't turn out to help us. Instead, we are going to start trying to get the h out of the way in preparation for taking the limit as s ≪ h.

(S − S_0)/k = Σ_{k=1}^{s} [ln h + ln(1 − (k − 1)/h) − ln h − ln(1 + k/h)]    (3.33)

= Σ_{k=1}^{s} [ln(1 − (k − 1)/h) − ln(1 + k/h)]    (3.34)

It is now time to make our first approximation: we assume s ≪ N, which means that s ≪ h. That enables us to simplify these logarithms drastically!

(S − S_0)/k ≈ Σ_{k=1}^{s} [−(k − 1)/h − k/h]    (3.35)

= −(2/h) Σ_{k=1}^{s} (k − ½)    (3.36)

= −(4/N) Σ_{k=1}^{s} (k − ½)    (3.37)


Figure 3.1: Converting the sum to an integral, shown for s = 5: the stair-step plot of k − ½ against the straight line x.

Now we have this sum to solve. You can find this sum either geometrically or with calculus. The calculus involves turning the sum into an integral. As you can see in the figure, the integral

∫_0^s x dx = ½s²    (3.38)

has the same value as the sum, since the area under the orange curve (which is the sum) is equal to the area under the blue curve (which is the integral).

The geometric way to solve this looks visually very much the same as the integral picture, but instead of computing the area from the straight line, we cut the stair-step area in "half" and fit the two pieces together such that they form a rectangle with width s/2 and height s.

Taken together, this tells us that when s ≪ N

S(N, s) ≈ S(N, s = 0) − k (4/N)(s²/2)    (3.39)

= S(N, s = 0) − k 2s²/N    (3.40)

This means that the multiplicity is Gaussian:

Figure 3.2: Solving the sum geometrically, shown for s = 5: the stair-step area of k − ½ rearranged into a rectangle.

S = k ln g    (3.41)

g(N, s) = e^{S(N,s)/k}    (3.42)

= e^{S(N,s=0)/k − 2s²/N}    (3.43)

= g(N, s = 0) e^{−2s²/N}    (3.44)

Thus the multiplicity (and thus probability) is peaked at s = 0 as a Gaussian with width ∼√N. This tells us that the width of the peak increases as we increase N. However, the excess spin per particle decreases as ∼1/√N. So that means that our fractional polarization becomes far more sharply peaked as we increase N.
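You can check how quickly the Gaussian form becomes accurate, even for quite modest N. A sketch (Python; the exact multiplicity comes from `math.comb`, and the function names are mine):

```python
from math import comb, exp

def g_exact(N, s):
    """Exact multiplicity N! / ((N/2 + s)! (N/2 - s)!) for even N."""
    return comb(N, N // 2 + s)

def g_gaussian(N, s):
    """Gaussian approximation g(N, 0) exp(-2 s^2 / N), valid for s << N."""
    return g_exact(N, 0) * exp(-2 * s * s / N)

N = 100
for s in [0, 2, 5, 10]:
    ratio = g_gaussian(N, s) / g_exact(N, s)
    print(f"s = {s:2d}: exact g = {g_exact(N, s):.4e}, gaussian/exact = {ratio:.4f}")
```

Even at N = 100 the Gaussian tracks the exact multiplicity to better than a percent out to s = 10, i.e. well past the ∼√N width of the peak.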

Thermal contact

Suppose we put two systems in contact with one another. This means that energy can flow from one system to the other. We assume, however, that the contact between the two systems is weak enough that their energy eigenstates are unaffected. This is a bit of a contradiction you'll need to get used to: we treat our systems as non-interacting, but assume there is some energy transfer between them. The reasoning is


that the interaction between them is very small, so that we can treat each system separately, but energy can still flow.

We ask the question: "How much energy will each system end up with after we wait for things to settle down?" The answer to this question is that energy will settle down in the way that maximizes the number of microstates.

Let us consider two simple systems: a 3-spin paramagnet, and a 4-spin paramagnet.

System A A system of 3 spins each with energy ±1. This system has the following multiplicity found from Pascal's triangle:

1
1 1
1 2 1
1 3 3 1

with the bottom row giving the multiplicities for energies −3, −1, 1, 3.

System B A system of 4 spins each with energy ±1. This system has the following multiplicity found from Pascal's triangle:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1

with the bottom row giving the multiplicities for energies −4, −2, 0, 2, 4.

Question What is the total number of microstates when you consider systems A and B together as a combined system?

Answer We need to multiply the numbers of microstates for each system separately, because for each microstate of A, it is possible to have B be in any of its microstates. So the total is 2³ · 2⁴ = 128.

Since we have two separate systems here, it is meaningful to ask what the probability is for system A to have energy E_A, given that the combined system has energy E_AB.

Small group question What is the multiplicity of the combined system if the energy is 3, i.e. g_AB(E_AB = 3)?

Answer To solve this, we just need to multiply the multiplicities of the two systems and add up all the energy possibilities that total 3:

g_AB(E_AB = 3) = g_A(−1)g_B(4) + g_A(1)g_B(2) + g_A(3)g_B(0)    (3.45)

= 3 · 1 + 3 · 4 + 1 · 6    (3.46)

= 21    (3.47)

Small group question What is the probability that system A has energy 1, if the combined energy is 3?

Answer To solve this, we just need to multiply the multiplicities of the two systems, which we already found, and divide by the total number of microstates:

P(E_A = 1 | E_AB = 3) = g_A(1)g_B(2)/g_AB(3)    (3.48)

= (3 · 4)/21    (3.49)

= 4/7    (3.50)

which shows that this is the most probable distribution of energy between the two subsystems.
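These small-group answers are easy to verify by a direct computation over the two Pascal's-triangle multiplicity tables. A sketch in Python (the function name is mine):

```python
from math import comb
from fractions import Fraction

def spin_multiplicities(N):
    """Map each total energy of N spins (each ±1) to its multiplicity."""
    return {2 * n_up - N: comb(N, n_up) for n_up in range(N + 1)}

gA = spin_multiplicities(3)   # {-3: 1, -1: 3, 1: 3, 3: 1}
gB = spin_multiplicities(4)   # {-4: 1, -2: 4, 0: 6, 2: 4, 4: 1}

# Multiplicity of the combined system at total energy E_AB = 3:
g_AB_3 = sum(gA[EA] * gB.get(3 - EA, 0) for EA in gA)
print(g_AB_3)                          # 21

# Probability that system A has energy 1, given E_AB = 3:
P = Fraction(gA[1] * gB[2], g_AB_3)
print(P)                               # 4/7
```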

Given that these two systems are able to exchange energy, they ought to have the same temperature. To find the most probable energy partition between the two systems, we need to find the partition that maximizes the multiplicity of the combined system:

g_AB(E_A) = g_A(E_A) g_B(E_AB − E_A)    (3.51)

0 = dg_AB/dE_A    (3.52)

= g′_A g_B − g′_B g_A    (3.53)

g′_A/g_A = g′_B/g_B    (3.54)

(1/g_A(E_A)) ∂g_A(E_A)/∂E_A = (1/g_B(E_B)) ∂g_B(E_B)/∂E_B    (3.55)

This tells us that the "thing that becomes equal" when the two systems are in thermal contact is this strange


ratio of the derivative of the multiplicity with respect to energy divided by the multiplicity itself. You may be able to recognize this as what is called a logarithmic derivative.

∂ ln(g_A(E_A))/∂E_A = (1/g_A(E_A)) ∂g_A(E_A)/∂E_A    (3.56)

thus we can conclude that when two systems are in thermal contact, the thing that equalizes is

β ≡ (∂ ln g/∂E)_V    (3.57)

At this stage, we haven't shown that β = 1/kT, but we have shown that it should be a function of T, since T is also a thing that is equalized when two systems are in thermal contact.

By dimensional reasoning, you can recognize that this could be 1/kT, and we're just going to leave this at that.
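The claim that this logarithmic derivative equalizes at the most probable partition can also be demonstrated numerically. Here is a sketch using two collections of harmonic oscillators, whose multiplicity function appears in this week's homework; all names and parameter choices are illustrative:

```python
from math import comb, log

def g_osc(N, n):
    """Multiplicity of N oscillators sharing n energy quanta: (N+n-1)! / (n! (N-1)!)."""
    return comb(N + n - 1, n)

NA, NB, n_total = 30, 50, 200

# Find the most probable split of quanta between the two systems.
n_best = max(range(1, n_total),
             key=lambda nA: g_osc(NA, nA) * g_osc(NB, n_total - nA))

def beta(N, n):
    """Centered finite-difference estimate of beta = d ln g / dE (E in units of hbar*omega)."""
    return (log(g_osc(N, n + 1)) - log(g_osc(N, n - 1))) / 2

print(n_best, beta(NA, n_best), beta(NB, n_total - n_best))
```

At the most probable split the two β values agree to a few parts in a thousand, and the agreement sharpens as the systems grow.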

Homework for week 2 (PDF)

1. Entropy and Temperature (K&K 2.1) Suppose g(U) = CU^{3N/2}, where C is a constant and N is the number of particles.

a) Show that U = (3/2)N k_B T.

b) Show that (∂²S/∂U²)_N is negative. This form of g(U) actually applies to the ideal gas.

2. Paramagnetism Find the equilibrium value at temperature T of the fractional magnetization

µ_tot/(Nm) ≡ 2⟨s⟩/N    (3.58)

of a system of N spins each of magnetic moment m in a magnetic field B. The spin excess is 2s. The energy of this system is given by

U = −µ_tot B    (3.59)

where µ_tot is the total magnetization. Take the entropy as the logarithm of the multiplicity g(N, s) as given in (1.35 in the text):

S(s) ≈ k_B log g(N, 0) − k_B 2s²/N    (3.60)

for |s| ≪ N, where s is the spin excess, which is related to the magnetization by µ_tot = 2sm. Hint: Show that in this approximation

S(U) = S_0 − k_B U²/(2m²B²N),    (3.61)

with S_0 = k_B log g(N, 0). Further, show that 1/kT = −U/(m²B²N), where U denotes ⟨U⟩, the thermal average energy.

3. Quantum harmonic oscillator

a) Find the entropy of a set of N oscillators of frequency ω as a function of the total quantum number n. Use the multiplicity function:

g(N, n) = (N + n − 1)!/(n! (N − 1)!)    (3.62)

and assume that N ≫ 1. This means you can make the Stirling approximation that log N! ≈ N log N − N. It also means that N − 1 ≈ N.

b) Let U denote the total energy nℏω of the oscillators. Express the entropy as S(U, N). Show that the total energy at temperature T is

U = Nℏω/(e^{ℏω/kT} − 1)    (3.63)

This is the Planck result found the hard way. We will get to the easy way soon, and you will never again need to work with a multiplicity function like this.


Chapter 4

Week 3: Boltzmann distribution and Helmholtz (K&K 3, Schroeder 6)

This week we will be deriving the Boltzmann ratio and the Helmholtz free energy, continuing the microcanonical approach we started last week. Last week we saw that when two systems were considered together in a microcanonical picture, the energy of each system taken individually is not fixed. This provides our stepping stone to go from a microcanonical picture where all possible microstates have the same energy (and equal probability) to a canonical picture where all energies are possible and the probability of a given microstate being observed depends on the energy of that state.

We ended last week by finding that the following quantity is equal in two systems in thermal equilibrium

β = (∂ ln g/∂E)_V    (4.1)

where g(E) is the multiplicity in the microcanonical ensemble. To more definitively connect this with temperature, we will again consider two systems in thermal equilibrium using a microcanonical ensemble, but this time we will make one of those two systems huge. In fact, it will be so huge that we can treat it using classical thermodynamics, i.e. we can conclude that the above equation applies, and we can assume that the temperature of this huge system is unaffected by the small change in energy that could happen due

to differences in the small system.

Let us now examine the multiplicity of our combined system, making B be our large system:

g_AB(E) = Σ_{E_A} g_A(E_A) g_B(E_AB − E_A)    (4.2)

We can further find the probability of any particularenergy being observed from

PA(EA|EAB) = gA(EA)gB(EAB − EA)∑E′AgA(E′A)gB(EAB − E′A) (4.3)

where we are counting how many microstates of the combined system have this particular energy in system A, and dividing by the total number of microstates of the combined system to create a probability. So far this is identical to what we had last week. The difference is that we are now claiming that system B is huge. This means that we can approximate g_B. Doing so, however, requires some care.

Warning wrong! We might be tempted to simply Taylor expand g_B:


g_B(E_AB − E_A) ≈ g_B(E_AB) − β g_B(E_AB) E_A + · · ·    (4.4)

≈ g_B(E_AB)(1 − βE_A)    (4.5)

This, however, would be wrong unless βE_A ≪ 1. One way to see that this expansion must have limited range is that if βE_A ≥ 1 then we will end up with a negative multiplicity, which is meaningless. The trouble is that we only assumed that E_A was small enough not to change the temperature (or β), which does not mean that βE_A < 1. Thus this expansion is guaranteed to fail. When we run into this problem, we can consider that ln g(E) is generally a smoother function than g(E). Based on the Central Limit Theorem, we expect g(E) to typically have a Gaussian shape, which is one of our analytic functions that is least well approximated by a polynomial. In contrast, ln g will be parabolic (to the extent that g is Gaussian), which makes it a prime candidate for a Taylor expansion.

Right way The right way to do this is to Taylor expand the ln g (which will be entropy), since the derivative of ln g is the thing that equilibrates, and thus we can assume that this derivative won't change much when we make a small change to a large system.

ln g_B(E_AB − E_A) ≈ ln g_B(E_AB) − βE_A + · · ·    (4.6)

g_B(E_AB − E_A) ≈ g_B(E_AB) e^{−βE_A}    (4.7)

Now we can plug this into the probability equation above to find that

P_A(E_A) = g_A(E_A) g_B(E_AB) e^{−βE_A} / Σ_{E′_A} g_A(E′_A) g_B(E_AB) e^{−βE′_A}    (4.8)

= g_A(E_A) e^{−E_A/k_BT} / Σ_{E′_A} g_A(E′_A) e^{−E′_A/k_BT}    (4.9)

where the common factor of g_B(E_AB) cancels between the numerator and the denominator.

Now this looks a bit different than the probabilities we saw previously (two weeks ago), because this is the probability that we see an energy E_A, not the probability for a given microstate, and thus it has the factors of g_A, and it sums over energies rather than microstates. To find the probability of a given microstate, we just need to divide the probability of its energy by the number of microstates at that energy, i.e. drop the factor of g:

P^A_i = e^{−βE_i}/Z    (4.10)

Z = Σ_{all energies E} g(E) e^{−βE}    (4.11)

= Σ_{all µstates i} e^{−βE_i}    (4.12)

This is all there is to show the Boltzmann probability distribution from the microcanonical picture: big system with little system, treat big system thermodynamically, count microstates.
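This derivation can also be watched happening numerically: couple a few-level system to a reservoir whose multiplicity we can compute exactly, and the microcanonical weights g_B(E_tot − E_A) reproduce the Boltzmann factor. A sketch (Python; the oscillator reservoir and all sizes are my own choices):

```python
from math import comb, log, exp

def g_osc(N, n):
    """Multiplicity of a reservoir of N oscillators holding n energy quanta."""
    return comb(N + n - 1, n)

NB, n_total = 1000, 5000        # a big reservoir
levels = [0, 1, 2, 3]           # small system: one microstate per energy

# Microcanonical: the weight of small-system energy E is g_B(n_total - E).
weights = [g_osc(NB, n_total - E) for E in levels]
P_micro = [w / sum(weights) for w in weights]

# Canonical prediction, with beta = d ln g_B / dE evaluated at the reservoir energy.
beta = (log(g_osc(NB, n_total + 1)) - log(g_osc(NB, n_total - 1))) / 2
Z = sum(exp(-beta * E) for E in levels)
P_canonical = [exp(-beta * E) / Z for E in levels]

for E, p_m, p_c in zip(levels, P_micro, P_canonical):
    print(f"E = {E}: microcanonical {p_m:.5f}, canonical {p_c:.5f}")
```

The two sets of probabilities agree to a few parts in ten thousand here, and the agreement improves as the reservoir grows.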

Note We still haven't shown (this time around) that β = 1/k_BT. Right now β is just still a particular derivative that equalizes when two systems are in thermal equilibrium.

Internal energy

Now that we have the set of probabilities expressed again in terms of β, there are a few things we can solve for directly, namely any quantities that are directly defined from probabilities. Most specifically, the internal energy


U = Σ_i P_i E_i    (4.13)

= Σ_i E_i e^{−βE_i}/Z    (4.14)

= (1/Z) Σ_i E_i e^{−βE_i}    (4.15)

Now doing yet another summation will often feel tedious. There are a couple of ways to make this easier. The simplest is to examine the sum above and notice how very similar it is to the partition function itself. If you take a derivative of the partition function with respect to β, you will find:

(∂Z/∂β)_V = Σ_i e^{−βE_i}(−E_i)    (4.16)

= −UZ    (4.17)

U = −(1/Z)(∂Z/∂β)_V    (4.18)

= −(∂ ln Z/∂β)_V    (4.19)

Big Warning In this class, I do not want you beginning any solution (either homework or exam) with a formula for U in terms of Z! This step is not that hard, and you need to do it every time. What you need to remember is definitions, which in this case is how U comes from probability. The reasoning here is that I've all too often seen students who, years after taking thermal physics, can only remember that there is some expression for U in terms of Z. It is easier and more correct to remember that U is a weighted average of the energy.
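Keeping that warning in mind, it is still worth convincing yourself that the derivative identity holds. A sketch that compares the weighted-average definition of U with a numerical −∂ ln Z/∂β, for an arbitrary made-up spectrum:

```python
from math import exp, log

energies = [0.0, 0.5, 1.3, 2.0]          # an arbitrary toy spectrum

def lnZ(beta):
    return log(sum(exp(-beta * E) for E in energies))

def U_from_definition(beta):
    """U = sum_i P_i E_i -- the definition to start from."""
    Z = sum(exp(-beta * E) for E in energies)
    return sum(E * exp(-beta * E) for E in energies) / Z

beta = 1.7
h = 1e-6
U_from_lnZ = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)   # U = -(d lnZ / d beta)
print(U_from_definition(beta), U_from_lnZ)
```

The two numbers agree to the accuracy of the finite difference, for any β and any spectrum you pick.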

Pressure

How do we compute pressure? So far, everything we have done has kept the volume fixed. Pressure

tells us how the energy changes when we change the volume, i.e. how much work is done. From Energy and Entropy, we know that

dU = TdS − pdV    (4.20)

p = −(∂U/∂V)_S    (4.21)

So how do we find the pressure? We need to find the change in internal energy when we change the volume at fixed entropy.

Small white boards How do we keep the entropy fixed when changing the volume?

Answer Experimentally, we would avoid allowing any heating by insulating the system. Theoretically, this is less easy. When we consider the Gibbs entropy, if we could keep all the probabilities fixed while expanding, we would also fix the entropy! In quantum mechanics, we can show that such a process is possible using time-dependent perturbation theory. Under certain conditions, if you perturb a system sufficiently slowly, it will remain in the "same" eigenstate it was in originally. Although the eigenstate changes, and its energy changes, they do so continuously.

If we take a derivative of U with respect to volume while holding the probabilities fixed, we obtain the following result:

p = −(∂U/∂V)_S    (4.22)

= −(∂[Σ_i E_i P_i]/∂V)_S    (4.23)

= −Σ_i P_i dE_i/dV    (4.24)

= −Σ_i (e^{−βE_i}/Z) dE_i/dV    (4.25)

So the pressure is just a weighted sum of derivatives


of energy eigenvalues with respect to volume. We can apply the derivative trick to this also:

p = (1/βZ)(∂Z/∂V)_β    (4.26)

= (1/β)(∂ ln Z/∂V)_β    (4.27)

Now we have an expression in terms of ln Z and β.

Helmholtz free energy

We saw a hint above that U somehow relates to ln Z, which hinted that ln Z might be something special, and now ln Z also turns out to relate to the pressure somehow. Let's put this into thermodynamics language.¹

U = −(∂ ln Z/∂β)_V    (4.28)

d ln Z = −U dβ + (∂ ln Z/∂V)_β dV    (4.29)

d ln Z = −U dβ + βp dV    (4.30)

We can already see the work in here. So now we're going to try a switch to a dU rather than a dβ, since we know something about dU.

d(βU) = U dβ + β dU    (4.31)

d ln Z = −(d(βU) − β dU) + βp dV    (4.32)

= β dU − d(βU) + βp dV    (4.33)

β dU = d(ln Z + βU) − βp dV    (4.34)

dU = (1/β) d(ln Z + βU) − p dV    (4.35)

¹Of course, we already talked last week about F = −kT ln Z, but that was done using the Gibbs entropy, which we're pretending we don't yet know. . .

Comparing this result with the thermodynamic identity tells us that

S = k_B ln Z + U/T    (4.36)

F ≡ U − TS    (4.37)

= U − T(k_B ln Z + U/T)    (4.38)

= U − k_B T ln Z − U    (4.39)

= −k_B T ln Z    (4.40)

That was a bit of a differentials slog, but it got us the same result without assuming the Gibbs entropy. It did, however, demonstrate a not-quite-contradiction, in that the expression we found for the entropy is not mathematically equal to the Boltzmann entropy. It approaches the same thing for large systems, although I won't prove that now.

Small groups Consider a system with g eigenstates, each with energy E_0. What is the free energy?

Answer We begin by writing down the partition function

Z = Σ_i e^{−βE_i}    (4.41)

= g e^{−βE_0}    (4.42)

Now we just need a log and we’re done.

F = −kT ln Z    (4.43)

= −kT ln(g e^{−βE_0})    (4.44)

= −kT(ln g + ln e^{−βE_0})    (4.45)

= E_0 − kT ln g    (4.46)

This is just what we would have concluded about the free energy if we had used the Boltzmann expression for the entropy in this microcanonical ensemble.

Waving our hands, we can understand F = −kT ln Z in two ways:

1. If there are more accessible microstates, Z is bigger, which means S is bigger and F must be more negative.


2. If we only consider the most probable energies, to find the energy from Z, we need the negative logarithm, and a kT to cancel out the β.

Using the free energy

Why the big deal with the free energy? One way to put it is that it is relatively easy to compute. The other is that once you have an analytic expression for the free energy, you can solve for pretty much anything else you want.

Recall:

F ≡ U − TS    (4.47)

dF = dU − SdT − TdS    (4.48)

= −SdT − pdV    (4.49)

−S = (∂F/∂T)_V    (4.50)

−p = (∂F/∂V)_T    (4.51)

Thus by taking partial derivatives of F we can find S and p, as well as U with a little arithmetic. You have all seen the Helmholtz free energy before, so this shouldn't be much of a surprise. Practically, the Helmholtz free energy is why finding an analytic expression for the partition function is so valuable.

In addition to the "fundamental" physical parameters, we can also find response functions, such as heat capacity or compressibility, which are their derivatives. Of particular interest is the heat capacity at fixed volume. The heat capacity is vaguely defined as:

C_V \equiv \left(\frac{dQ}{\partial T}\right)_V    (4.52)

by which I mean the amount of heat required to change the temperature by a small amount, divided by that small amount, while holding the volume fixed. The First Law tells us that the heat is equal to the change in internal energy, provided no work is done (i.e. holding volume fixed), so

C_V = \left(\frac{\partial U}{\partial T}\right)_V    (4.53)

which is a nice equation, but can be a nuisance because we often don't know U as a function of T, which is not one of its natural variables. We can also go back to our Energy and Entropy relationship between heat and entropy, where dQ = T\,dS, and use that to find the ratio that defines the heat capacity:

C_V = T \left(\frac{\partial S}{\partial T}\right)_V.    (4.54)

Note that this could also have come from a manipulation of the previous derivative of the internal energy. However, the "heat" reasoning allows us to recognize that the heat capacity at constant pressure will have the same form when expressed as an entropy derivative. This expression is also convenient when we compute the entropy from the Helmholtz free energy, because we already know the entropy as a function of T.
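To make the chain of derivatives concrete, here is a small sketch of mine (not from the notes): given nothing but a function that returns F(T), we can get S, U, and C_V by centered finite differences. Units are chosen so that k = 1, and the two-level spectrum below is just an illustrative example.

```python
import math

def F(T, levels):
    """Helmholtz free energy F = -kT ln Z, with k = 1."""
    Z = sum(math.exp(-E / T) for E in levels)
    return -T * math.log(Z)

def S(T, levels, h=1e-5):
    """Entropy from S = -(dF/dT)_V, via a centered difference."""
    return -(F(T + h, levels) - F(T - h, levels)) / (2 * h)

def C_V(T, levels, h=1e-4):
    """Heat capacity from C_V = T (dS/dT)_V, eq. (4.54)."""
    return T * (S(T + h, levels) - S(T - h, levels)) / (2 * h)

levels = [0.0, 1.0]                           # a two-state system as an example
U = F(2.0, levels) + 2.0 * S(2.0, levels)     # U = F + TS
print(U, C_V(2.0, levels))
```

For the two-state system this reproduces U = ε/(e^{ε/kT} + 1), which you can check against the analytic answer.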

Ideal gas with just one atom

Let us work on the free energy of a particle in a 3D box.

Small groups (5 minutes) Work out (or write down) the energy eigenstates for a particle confined to a cubical volume with side length L. You may either use periodic boundary conditions or an infinite square well. When you have done so, write down an expression for the partition function.

Answer The energy is just the kinetic energy, given by


E = \frac{\hbar^2 |\vec k|^2}{2m}    (4.55)

The allowed values of k are determined by the boundary conditions. If we choose periodic boundary conditions, then

k_x = n_x \frac{2\pi}{L} \qquad n_x = \text{any integer}    (4.56)

and similarly for k_y and k_z, which gives us

E_{n_x n_y n_z} = \frac{2\pi^2\hbar^2}{mL^2}\left(n_x^2 + n_y^2 + n_z^2\right)    (4.57)

where n_x, n_y, and n_z take any integer values. If we chose the infinite square well boundary conditions instead, our integers would take positive values only, and the prefactor would differ by a factor of four.

From this point, we just need to sum over all states to find Z, and from that the free energy and everything else! So how do we sum all these things up?

Z = \sum_{n_x=-\infty}^{\infty} \sum_{n_y=-\infty}^{\infty} \sum_{n_z=-\infty}^{\infty} e^{-\beta\frac{2\pi^2\hbar^2}{mL^2}\left(n_x^2 + n_y^2 + n_z^2\right)}    (4.58)
  = \sum_{n_x}\sum_{n_y}\sum_{n_z} e^{-\beta\frac{2\pi^2\hbar^2}{mL^2}n_x^2}\, e^{-\beta\frac{2\pi^2\hbar^2}{mL^2}n_y^2}\, e^{-\beta\frac{2\pi^2\hbar^2}{mL^2}n_z^2}    (4.59)
  = \left(\sum_{n_x} e^{-\beta\frac{2\pi^2\hbar^2}{mL^2}n_x^2}\right) \left(\sum_{n_y} e^{-\beta\frac{2\pi^2\hbar^2}{mL^2}n_y^2}\right) \left(\sum_{n_z} e^{-\beta\frac{2\pi^2\hbar^2}{mL^2}n_z^2}\right)    (4.60)
  = \left(\sum_{n=-\infty}^{\infty} e^{-\beta\frac{2\pi^2\hbar^2}{mL^2}n^2}\right)^3    (4.61)

The last bit here basically looks a lot like separation of variables. Our energy separates into a sum of x, y and z portions (which is why we can use separation of variables for the quantum problem), but that also causes things to separate (into a product) when we compute the partition function.

This final sum here is now something we would like to approximate. If our box is reasonably big (and our temperature is not too low), we can assume that \frac{4\pi^2\hbar^2}{k_BTmL^2} \ll 1, which is the classical limit. In this limit, the "thing in the exponential" hardly changes when we change n by 1, so we can reasonably replace this summation with an integral.

Note You might have thought to use a power series expansion (which is a good instinct!) but in this case that won't work, because n gets arbitrarily large.

Z \approx \left(\int_{-\infty}^{\infty} e^{-\frac{2\pi^2\hbar^2}{k_BTmL^2}n^2}\,dn\right)^3    (4.62)

We can now do a u substitution to simplify this integral.

\xi = \sqrt{\frac{2\pi^2\hbar^2}{k_BTmL^2}}\,n \qquad d\xi = \sqrt{\frac{2\pi^2\hbar^2}{k_BTmL^2}}\,dn    (4.63)

This gives us a very easy integral.

Z = \left(\sqrt{\frac{k_BTmL^2}{2\pi^2\hbar^2}} \int_{-\infty}^{\infty} e^{-\xi^2}\,d\xi\right)^3    (4.64)
  = \left(\frac{k_BTmL^2}{2\pi^2\hbar^2}\right)^{\frac32} \left(\int_{-\infty}^{\infty} e^{-\xi^2}\,d\xi\right)^3    (4.65)
  = V \left(\frac{k_BTm}{2\pi^2\hbar^2}\right)^{\frac32} \pi^{\frac32}    (4.66)
  = V \left(\frac{k_BTm}{2\pi\hbar^2}\right)^{\frac32}    (4.67)

So there we have our partition function for a single atom in a big box. Let's go on to find exciting things!
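The replacement of the sum by an integral is easy to check numerically. This is my own sketch (not from the notes), working in units where \hbar = m = k_B = 1, so that Z = V n_Q with n_Q = (T/2\pi)^{3/2}:

```python
import math

T, L = 1.0, 20.0                      # units with hbar = m = k_B = 1
a = 2 * math.pi**2 / (T * L**2)       # coefficient in the exponent of (4.61)

# Exact sum over integer n (truncated where terms are negligible).
Z1d = sum(math.exp(-a * n**2) for n in range(-200, 201))
Z = Z1d**3                            # the sum factorizes, eq. (4.61)

nQ = (T / (2 * math.pi))**1.5         # quantum density, eq. (4.68)
Z_classical = L**3 * nQ               # Z ≈ V n_Q, eq. (4.67)
print(Z, Z_classical)                 # agree in the classical limit
```

For a box this large the two agree to many digits; shrinking L (or T) makes the discreteness of the sum show up.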


First off, let's give a name to the nasty fraction to the \frac32 power. It has dimensions of inverse volume, or number per volume, and it has \hbar in it (which makes it quantum), so let's call it n_Q, since I use n = N/V for number density.

n_Q = \left(\frac{k_BTm}{2\pi\hbar^2}\right)^{\frac32}    (4.68)

F = -kT \ln Z    (4.69)
  = -kT \ln(V n_Q)    (4.70)

This looks like (and is) a very simple formula, but you need to keep in mind that n_Q depends on temperature, so it's not quite as simple as it looks. Now that we have the Helmholtz free energy, we can solve for the entropy, pressure, and internal energy pretty quickly.

Small groups Solve for the entropy, pressure, and internal energy.

S = -\left(\frac{\partial F}{\partial T}\right)_V    (4.71)
  = k \ln(V n_Q) + \frac{kT}{V n_Q}\, V \frac{dn_Q}{dT}    (4.72)
  = k \ln(V n_Q) + \frac{kT}{n_Q}\,\frac32\,\frac{n_Q}{T}    (4.73)
  = k \ln(V n_Q) + \frac32 k    (4.74)

You could find U by going back to the weighted average definition and using the derivative trick from the partition function, but with the free energy and entropy it is just algebra.

U = F + TS    (4.75)
  = -kT \ln(V n_Q) + kT \ln(V n_Q) + \frac32 kT    (4.76)
  = \frac32 kT    (4.77)

The pressure derivative gives a particularly simple result.

p = -\left(\frac{\partial F}{\partial V}\right)_T    (4.78)
  = \frac{kT}{V}    (4.79)

Ideal gas with multiple atoms

Extending from a single atom to several requires just a bit more subtlety. Naively, you could just argue that because we understand extensive and intensive quantities, we should be able to go from a single atom to N atoms by simply scaling all extensive quantities. That is almost completely correct (if done correctly).

The entropy has an extra term (the "entropy of mixing"), which also shows up in the free energy. Note that while we may think of this "extra term" as an abstract counting thing, it is physically observable, provided we do the right kind of experiment (which turns out to need to involve changing N, so we won't discuss it in detail until we talk about changing N later).

There are a few different ways we could imagine putting N non-interacting atoms together. I will discuss a few here, starting from the simplest, and moving up to the most complex.

Different atoms, same box One option is to consider a single box with volume V that holds N different atoms, each of a different kind, but all with the same mass. In this case, each microstate of the system will consist of a microstate for each atom. Quantum mechanically, the wave function for the entire state with N atoms will separate and will be a product of N single-particle states (or orbitals)

\Psi_{\text{microstate}}(\vec r_1, \vec r_2, \cdots, \vec r_N) = \prod_{i=1}^{N} \phi_{n_{xi}n_{yi}n_{zi}}(\vec r_i)    (4.80)


and the energy will just be a sum of different energies. The result of this will be that the partition function of the entire system will just be the product of the partition functions of all the separate non-interacting systems (which happen to all be equal). This is mathematically equivalent to what already happened with the three x, y and z portions of the partition function.

Z_N = Z_1^N    (4.81)
F_N = N F_1    (4.82)

This results in simply scaling all of our extensive quantities by N except the volume, which didn't increase when we added more atoms.

This result sounds great, in that it seems to be perfectly extensive, but when we look more closely, we can see that it is actually not extensive!

F_N = -NkT \ln(V n_Q)    (4.83)

If we double the size of our system, so N \to 2N and V \to 2V, you can see that the free energy does not simply double, because the V in the logarithm doubles while n_Q remains the same (because it is intensive). So there must be an error here, which turns out to be caused by having treated all the atoms as distinct. If each atom is a unique snowflake, then it doesn't quite make sense to expect the result to be extensive, since you aren't scaling up "interchangeable" things.

Identical atoms, but different boxes We can also consider saying all atoms are truly identical, but each atom is confined to a different box, each with identical (presumably small) size. In this case, the same reasoning as we used above applies, but now we also scale the total volume up by N. This is a more natural application of the idea of extensivity.

Z_N = Z_1^N    (4.84)
F_N = N F_1    (4.85)
V = N V_1    (4.86)

This is taking the idea of extensivity to an extreme: we keep saying that a system with half as much volume and half as many atoms is "half as much" until there is only one atom left. You would be right to be skeptical that putting one atom per box hasn't resulted in an error.

Identical atoms, same box This is the picture for a real ideal gas. All of our atoms are the same, or perhaps some fraction are a different isotope, but who cares about that? Since they are all in the same box, we will want to write the many-atom wavefunction as a product of single-atom wavefunctions (sometimes called orbitals). Thus the wave function looks like our first option of "different atoms, same box", but we have fewer distinct microstates, since swapping the quantum numbers of two atoms doesn't change the microstate.

How do we remove this duplication, which is sort of a fundamental problem when our business is counting microstates? Firstly, we will consider it vanishingly unlikely for two atoms to be in the same orbital (when we study Bose condensation, we will see this assumption breaking down). Then we need to figure out exactly how many times we counted each microstate, so we can correct our number of microstates (and our partition function). That number is equal to the number of permutations of N distinct numbers, which is N!, if we can assume that there is negligible probability that two atoms are in an identical state. Thus we have a corrected partition function


Z_N = \frac{1}{N!} Z_1^N    (4.87)

F_N = N F_1 + k_BT \ln N!    (4.88)
    \approx N F_1 + N k_BT (\ln N - 1)    (4.89)
    = -NkT \ln(V n_Q) + NkT(\ln N - 1)    (4.90)
    = NkT\left(\ln\left(\frac{N}{V n_Q}\right) - 1\right)    (4.91)
    = NkT\left(\ln\left(\frac{n}{n_Q}\right) - 1\right)    (4.92)

This answer is extensive, because now we have a ratio of V and N in the logarithm. So yay.
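Here is a small numerical illustration of mine (not part of the notes) of exactly this point: doubling both N and V doubles F only once the N! correction is included. For simplicity n_Q is held fixed at 1, since it is intensive anyway.

```python
import math

def F_distinct(N, V, T, nQ=1.0):
    """Free energy treating atoms as distinguishable, eq. (4.83)."""
    return -N * T * math.log(V * nQ)

def F_identical(N, V, T, nQ=1.0):
    """Free energy with the N! correction, eq. (4.92)."""
    n = N / V
    return N * T * (math.log(n / nQ) - 1)

N, V, T = 100, 50.0, 1.0
print(F_distinct(2 * N, 2 * V, T) - 2 * F_distinct(N, V, T))    # nonzero: not extensive
print(F_identical(2 * N, 2 * V, T) - 2 * F_identical(N, V, T))  # zero: extensive
```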

We now have the true free energy for an ideal gas at high enough temperature.

Small groups (10-15 minutes) Given this free energy, solve for S, U, and p.

Answer This is very similar to what we did with just one atom, but now it will give us the true answer for the monatomic ideal gas.

S = -\left(\frac{\partial F}{\partial T}\right)_V    (4.93)
  = -Nk\left(\ln\left(\frac{n}{n_Q}\right) - 1\right) - NkT\,\frac{\partial}{\partial T}\ln\left(\frac{n}{n_Q}\right)    (4.94)
  = -Nk\ln\left(\frac{n}{n_Q}\right) + Nk + NkT\,\frac{\partial}{\partial T}\ln n_Q    (4.95)
  = -Nk\ln\left(\frac{n}{n_Q}\right) + Nk + \frac32 Nk    (4.96)
  = -Nk\ln\left(\frac{n}{n_Q}\right) + \frac52 Nk    (4.97)

This is called the Sackur-Tetrode equation. The quantum mechanics shows up here (\hbar^2/m), even though we took the classical limit, because the entropy of a truly classical ideal gas has no minimum value. So quantum mechanics sets the zero of entropy. Note that the zero of entropy is a bit tricky to measure experimentally (albeit possible). The zero of entropy is in fact set by the Third Law of Thermodynamics, which you probably haven't heard of.

Now we can solve for the internal energy:

U = F + TS    (4.98)
  = NkT\left(\ln\left(\frac{n}{n_Q}\right) - 1\right) - NkT\ln\left(\frac{n}{n_Q}\right) + \frac52 NkT    (4.99)
  = \frac32 NkT    (4.100)

This is just the standard answer you're familiar with. You can notice that it doesn't have any quantum mechanics in it, because we took the classical limit. The pressure is easier than the entropy, since the volume is only inside the log:

p = -\left(\frac{\partial F}{\partial V}\right)_T    (4.101)
  = \frac{NkT}{V}    (4.102)

This is the ideal gas law. Again, the quantum mechanics has vanished in the classical limit.
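As a final check of mine on this section (not part of the notes), we can recover both U = \frac32 NkT and p = NkT/V purely numerically from the free energy (4.92), again in units where \hbar = m = k_B = 1:

```python
import math

def F(N, V, T):
    """Ideal-gas free energy, eq. (4.92), in units hbar = m = k_B = 1."""
    nQ = (T / (2 * math.pi))**1.5     # quantum density, eq. (4.68)
    return N * T * (math.log(N / (V * nQ)) - 1)

N, V, T, h = 100, 50.0, 2.0, 1e-6
S = -(F(N, V, T + h) - F(N, V, T - h)) / (2 * h)   # S = -(dF/dT)_V
p = -(F(N, V + h, T) - F(N, V - h, T)) / (2 * h)   # p = -(dF/dV)_T
U = F(N, V, T) + T * S                              # U = F + TS
print(U, 1.5 * N * T)   # matches (3/2) N k T, eq. (4.100)
print(p, N * T / V)     # matches the ideal gas law, eq. (4.102)
```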

Homework for week 3 (PDF)

For each problem, please let me know how long the problem took, and what resources you used to solve it!

1. Free energy of a two state system (K&K 3.1, modified)

a) Find an expression for the free energy as a function of T of a system with two states, one at energy 0 and one at energy ε.

b) From the free energy, find expressions for the internal energy U and entropy S of the system.

c) Plot the entropy versus T. Explain its asymptotic behavior as the temperature becomes high.


d) Plot S(T) versus U(T). Explain the maximum value of the energy U.

2. Magnetic susceptibility Consider a paramagnet, which is a material with n spins per unit volume, each of which may be either "up" or "down". The spins have energy ±mB, where m is the magnetic dipole moment of a single spin, and there is no interaction between spins. The magnetization M is defined as the total magnetic moment divided by the total volume. Hint: each individual spin may be treated as a two-state system, which you have already worked with above.

Figure 4.1: Plot of magnetization vs. B field (M/nm on the vertical axis, mB/kBT on the horizontal)

a) Find the Helmholtz free energy of a paramagnetic system (assume N total spins) and show that \frac{F}{NkT} is a function of only the ratio x \equiv \frac{mB}{kT}.

b) Use the canonical ensemble (i.e. partition function and probabilities) to find an exact expression for the total magnetization M and the susceptibility

\chi \equiv \left(\frac{\partial M}{\partial B}\right)_T    (4.103)

as a function of temperature and magnetic field for the model system of magnetic moments in a magnetic field. The result for the magnetization is

M = nm \tanh\left(\frac{mB}{kT}\right)    (4.104)

where n is the number of spins per unit volume. The figure shows what this magnetization looks like.

c) Show that the susceptibility is \chi = \frac{nm^2}{kT} in the limit mB \ll kT.

3. Free energy of a harmonic oscillator A one-dimensional harmonic oscillator has an infinite series of equally spaced energy states, with \varepsilon_n = n\hbar\omega, where n is an integer \ge 0, and \omega is the classical frequency of the oscillator. We have chosen the zero of energy at the state n = 0, which we can get away with here, but it is not actually the zero of energy! To find the true energy we would have to add a \frac12\hbar\omega for each oscillator.

a) Show that for a harmonic oscillator the free energy is

F = k_BT \log\left(1 - e^{-\frac{\hbar\omega}{k_BT}}\right)    (4.105)

Note that at high temperatures such that k_BT \gg \hbar\omega we may expand the argument of the logarithm to obtain F \approx k_BT \log\left(\frac{\hbar\omega}{kT}\right).

b) From the free energy above, show that the entropy is

\frac{S}{k_B} = \frac{\frac{\hbar\omega}{kT}}{e^{\frac{\hbar\omega}{kT}} - 1} - \log\left(1 - e^{-\frac{\hbar\omega}{kT}}\right)    (4.106)

This entropy is shown in the nearby figure, as well as the heat capacity.

4. Energy fluctuations (K&K 3.4, modified) Consider a system of fixed volume in thermal contact with a reservoir. Show that the mean square fluctuation in the energy of the system is

\left\langle \left(\varepsilon - \langle\varepsilon\rangle\right)^2 \right\rangle = k_BT^2 \left(\frac{\partial U}{\partial T}\right)_V    (4.107)


Figure 4.2: Entropy of a simple harmonic oscillator (S/k_B vs. k_BT/\hbar\omega)

Figure 4.3: Heat capacity of a simple harmonic oscillator (C_V/k_B vs. k_BT/\hbar\omega)

Here U is the conventional symbol for \langle\varepsilon\rangle. Hint: Use the partition function Z to relate \left(\frac{\partial U}{\partial T}\right)_V to the mean square fluctuation. Also, multiply out the term (\cdots)^2.

5. Quantum concentration (K&K 3.8) Consider one particle confined to a cube of side L; the concentration in effect is n = L^{-3}. Find the kinetic energy of the particle when in the ground state. There will be a value of the concentration for which this zero-point quantum kinetic energy is equal to the temperature kT. (At this concentration the occupancy of the lowest orbital is of the order of unity; the lowest orbital always has a higher occupancy than any other orbital.) Show that the concentration n_0 thus defined is equal to the quantum concentration n_Q defined by (63):

n_Q \equiv \left(\frac{MkT}{2\pi\hbar^2}\right)^{\frac32}    (4.108)

within a factor of the order of unity.

6. One-dimensional gas (K&K 3.11) Consider an ideal gas of N particles, each of mass M, confined to a one-dimensional line of length L. Find the entropy at temperature T. The particles have spin zero.


Chapter 5

Week 4: Thermal radiation and Planck distribution (K&K 4, Schroeder 7.4)

This week we will be tackling things that reduce to a bunch of simple harmonic oscillators. Any system that classically reduces to a set of normal modes, each with its own frequency, falls in this category. We will start with just an ordinary simple harmonic oscillator, and will move on to look at radiation (photons) and sound in solids (phonons).

Harmonic oscillator

You will recall that the energy eigenvalues of a single simple harmonic oscillator are given by

E_n = \left(n + \frac12\right)\hbar\omega    (5.1)

Note The text uses s rather than n for the quantum number, but that is annoying, and on the blackboard my s looks too much like my S, so I'll stick with n. The text also omits the zero-point energy. It does make the math simpler, but I think it's worth seeing how the zero-point energy goes away to start with.

We will begin by solving for the properties of a single simple harmonic oscillator at a given temperature. You already did this once using multiplicities, but it's easier this way.

Z = \sum_{n=0}^{\infty} e^{-\left(n+\frac12\right)\beta\hbar\omega}    (5.2)
  = e^{-\frac12\beta\hbar\omega} \sum_{n=0}^{\infty} e^{-n\beta\hbar\omega}    (5.3)

Now the sum is actually a harmonic (or geometric) sum, which has a little trick to solve:

Z = e^{-\frac12\beta\hbar\omega} \sum_{n=0}^{\infty} \left(e^{-\beta\hbar\omega}\right)^n    (5.4)

\xi \equiv e^{-\beta\hbar\omega}    (5.5)

The trick involves multiplying the series by \xi and subtracting:


\Xi = \sum_{n=0}^{\infty} \xi^n    (5.6)
    = 1 + \xi + \xi^2 + \cdots    (5.7)
\xi\Xi = \xi + \xi^2 + \cdots    (5.8)
\Xi - \xi\Xi = 1    (5.9)
\Xi = \frac{1}{1 - \xi}    (5.10)

Thus we find that the partition function is simply

Z = \frac{e^{-\frac12\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}    (5.11)

This gives us the free energy

F = -kT \ln Z    (5.12)
  = -kT \ln\left(\frac{e^{-\frac12\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\right)    (5.13)
  = \frac12\hbar\omega + kT \ln\left(1 - e^{-\beta\hbar\omega}\right)    (5.14)
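A brute-force check of mine (not in the notes) that the geometric-series result matches a directly truncated sum over eigenstates, in units where \hbar\omega = 1:

```python
import math

def Z_closed(beta):
    """Closed form (5.11), units where hbar*omega = 1."""
    return math.exp(-beta / 2) / (1 - math.exp(-beta))

def Z_sum(beta, nmax=2000):
    """Brute-force sum over energy eigenstates E_n = (n + 1/2)."""
    return sum(math.exp(-(n + 0.5) * beta) for n in range(nmax))

beta = 0.5
print(Z_closed(beta), Z_sum(beta))   # agree once the sum converges

F = -(1 / beta) * math.log(Z_closed(beta))
F_formula = 0.5 + (1 / beta) * math.log(1 - math.exp(-beta))   # eq. (5.14)
print(F, F_formula)
```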

We can see now that the ground state energy just ends up as a constant that we add to the free energy, which is what you probably would have guessed. Kittel was able to omit this constant simply by redefining the zero of energy.

Small groups Solve for the high temperature limit of the free energy.

Answer At high temperatures, \beta\hbar\omega \ll 1, which means

e^{-\beta\hbar\omega} = 1 - \beta\hbar\omega + \frac12(\beta\hbar\omega)^2 + \cdots    (5.15)
\ln\left(1 - e^{-\beta\hbar\omega}\right) = \ln\left(\beta\hbar\omega - \frac12(\beta\hbar\omega)^2 + \cdots\right)    (5.16)

F \approx \frac12\hbar\omega + kT \ln\left(\frac{\hbar\omega}{kT}\right)    (5.17)

So far this doesn't tell us much, but from it we can quickly tell the high temperature limits of the entropy and internal energy:

S \approx -k\ln\left(\frac{\hbar\omega}{kT}\right) - kT\,\frac{kT}{\hbar\omega}\left(-\frac{\hbar\omega}{kT^2}\right)    (5.18)
  = k\left(\ln\left(\frac{kT}{\hbar\omega}\right) + 1\right)    (5.19)

The entropy increases as we increase temperature, as it always must. The manner in which S increases logarithmically with temperature tells us that the number of accessible microstates must be proportional to \frac{kT}{\hbar\omega}.

S = -\left(\frac{\partial F}{\partial T}\right)_V    (5.20)
  = -k\ln\left(1 - e^{-\beta\hbar\omega}\right) + kT\,\frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\,\frac{\hbar\omega}{kT^2}    (5.21)
  = -k\ln\left(1 - e^{-\beta\hbar\omega}\right) + \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\,\frac{\hbar\omega}{T}    (5.22)

Planck distribution

Finally, we can find the internal energy and the average quantum number (or number of "phonons"). The latter is known as the Planck distribution.

U = F + TS    (5.23)
  = \frac12\hbar\omega + T\,\frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\,\frac{\hbar\omega}{T}    (5.24)
  = \left(\frac12 + \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\right)\hbar\omega    (5.25)

U = \left(\langle n\rangle + \frac12\right)\hbar\omega    (5.26)

\langle n\rangle = \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}    (5.27)
               = \frac{1}{e^{\beta\hbar\omega} - 1}    (5.28)

So far, all we've done is a straightforward application of the canonical formalism from last week: we computed a partition function, took a log, and from that found the entropy and internal energy.

Small groups Solve for the high-temperature and low-temperature limits of the internal energy and/or the average number of quanta \langle n\rangle.

High temperature answer First we consider \frac{kT}{\hbar\omega} \gg 1, or \beta\hbar\omega \ll 1. In this case, the exponential is going to be very close to one, and we can use a power series approximation for it.

\langle n\rangle = \frac{1}{e^{\beta\hbar\omega} - 1}    (5.29)
  \approx \frac{1}{\left(1 + \beta\hbar\omega + \frac12(\beta\hbar\omega)^2 + \cdots\right) - 1}    (5.30)
  = \frac{kT}{\hbar\omega}\,\frac{1}{1 + \frac12\beta\hbar\omega + \cdots}    (5.31)
  = \frac{kT}{\hbar\omega}\left(1 - \frac12\beta\hbar\omega + \cdots\right)    (5.32)
  \approx \frac{kT}{\hbar\omega} - \frac12    (5.33)

The first term is our equipartition term: \frac12 kT each for the kinetic and potential energy. The second term is our next-order correction, which you need not necessarily include. There would be a next term proportional to 1/T, but we have omitted it.

Low temperature answer At low temperature \beta\hbar\omega \gg 1, and we would rather use the other representation:

\langle n\rangle = \frac{1}{e^{\beta\hbar\omega} - 1}    (5.34)
               = \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}    (5.35)

because now the exponentials are small (rather than large), which means we can expand the denominator as a power series.

\langle n\rangle = e^{-\beta\hbar\omega}\left(1 + e^{-\beta\hbar\omega} + \cdots\right)    (5.36)
               \approx e^{-\beta\hbar\omega} + e^{-2\beta\hbar\omega}    (5.37)

Once again, I kept one more term than is absolutely needed. Clearly at low temperature we have a very low number of quanta, which should be no shock. I hope you all expected that the system would be in the ground state at very low temperature.
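A quick numerical look of mine (not part of the notes) at the Planck occupancy (5.28) in the two limits, with \hbar\omega = 1 so that \beta\hbar\omega = \beta:

```python
import math

def n_avg(beta):
    """Planck distribution, eq. (5.28), with hbar*omega = 1."""
    return 1 / (math.exp(beta) - 1)

# High temperature (beta << 1): <n> ≈ kT/(hbar w) - 1/2, eq. (5.33)
beta_hot = 0.01
print(n_avg(beta_hot), 1 / beta_hot - 0.5)

# Low temperature (beta >> 1): <n> ≈ e^{-beta} + e^{-2 beta}, eq. (5.37)
beta_cold = 5.0
print(n_avg(beta_cold), math.exp(-beta_cold) + math.exp(-2 * beta_cold))
```

In each regime the truncated expansion tracks the exact value closely, and the error is set by the first omitted term.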

Summing over microstates

I realized that we haven't spent much time talking about how to sum over microstates. Once you "get it," summing over microstates is very easy. Unfortunately, this makes it less obvious that this requires teaching, and I have a tendency to skim over this summation.

A nice example of this was the second homework, which involved the paramagnet again. You needed to find the partition function for N dipoles. After spending a week working with multiplicities, it would be very natural to take the

Z \equiv \sum_\mu e^{-\beta E_\mu}    (5.38)

and think of the \mu as having something to do with spin excess, and to think that this sum should involve multiplicities. You can write a solution here using multiplicities and summing over all possible energies, but that is the hard way. The easy way only looks easy once you know how to do it. The easy way involves literally summing over every possible sequence of spins.

Z = \sum_{s_1=\pm1}\sum_{s_2=\pm1}\cdots\sum_{s_N=\pm1} e^{-\beta E(s_1, s_2, \cdots, s_N)}    (5.39)

This may look messy, but things simplify when we consider the actual energy (unless we try to simplify that by expressing it in terms of N_\uparrow or the spin excess).

E(s_1, s_2, \cdots, s_N) = -s_1 mB - s_2 mB - \cdots - s_N mB    (5.40)


Now this may look pretty nasty, but it's actually beautiful, because each s_i has a separate term that is added together, which means that it separates! I'll use fewer words for a bit. . .

Z = \sum_{s_1=\pm1}\sum_{s_2=\pm1}\cdots\sum_{s_N=\pm1} e^{\beta(s_1mB + s_2mB + \cdots + s_NmB)}    (5.41)
  = \sum_{s_1=\pm1}\sum_{s_2=\pm1}\cdots\sum_{s_N=\pm1} e^{\beta s_1mB}\, e^{\beta s_2mB} \cdots e^{\beta s_NmB}    (5.42)
  = \sum_{s_1=\pm1} e^{\beta s_1mB} \sum_{s_2=\pm1} e^{\beta s_2mB} \cdots \sum_{s_N=\pm1} e^{\beta s_NmB}    (5.43)
  = \left(\sum_{s_1=\pm1} e^{\beta s_1mB}\right) \cdots \left(\sum_{s_N=\pm1} e^{\beta s_NmB}\right)    (5.44)
  = \left(\sum_{s=\pm1} e^{\beta smB}\right) \cdots \left(\sum_{s=\pm1} e^{\beta smB}\right)    (5.45)
  = \left(\sum_{s=\pm1} e^{\beta smB}\right)^N    (5.46)

The important steps above were

1. Writing the sum over states as a nested sum over every quantum number of the system.

2. Breaking the exponential into a product, which we can do because the energy is a sum of terms each of which involves just one quantum number.

3. Doing each sum separately, and finding the result as the product of all those sums.

Note that the final result here is a free energy that is just N times the free energy for a system consisting of a single spin. And thus we could alternatively do our computation for a system with a single spin, and then multiply everything that is extensive by N. The latter is a valid shortcut, but you should know why it gives a correct answer, and when (as when we have identical particles) you could run into trouble.
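To make "literally summing over every possible sequence of spins" concrete, here is a brute-force check of mine (not in the notes) for small N, comparing the enumerated sum (5.39) with the factorized result (5.46), where the single-spin sum is 2\cosh(\beta mB):

```python
import math
from itertools import product

def Z_brute(N, beta, mB=1.0):
    """Sum e^{-beta E} over every spin sequence, eqs. (5.39)-(5.40)."""
    total = 0.0
    for spins in product((+1, -1), repeat=N):
        E = -sum(s * mB for s in spins)    # eq. (5.40)
        total += math.exp(-beta * E)
    return total

def Z_factored(N, beta, mB=1.0):
    """The factorized result, eq. (5.46): (2 cosh(beta m B))^N."""
    return (2 * math.cosh(beta * mB))**N

print(Z_brute(8, 0.3), Z_factored(8, 0.3))   # identical up to rounding
```

The brute-force version visits all 2^N sequences, which is only feasible for small N; the factorized form is why the "easy way" matters.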

Black body radiation

Researchers in 1858-1860 realized that materials emitted light in strict proportion to their ability to absorb it, and hence a perfectly black body would emit the most radiation when heated. Planck and others realized that we should be able to use thermal physics to predict the spectrum of black body radiation. A key idea was to recognize that the light itself should be in thermal equilibrium.

One example of a "black body" is a small hole in a large box. Any light that goes into the hole will bounce around so many times before coming out of the hole, that it will all be absorbed. This leads to the idea of studying the radiation in a closed box, which should match that of a black body, when it is in thermal equilibrium.

Eigenstates of an empty box

So what are the properties of an empty box? Let's assume metal walls, and not worry too much about details of the boundary conditions, which shouldn't make much difference provided the box is pretty big. The reasoning is basically the same as for a particle in a 3D box: the waves must fit in the box. As for the particle in a box, we can choose either periodic boundary conditions or put nodes at the boundaries. I generally prefer periodic (which gives both positive and negative \vec k), rather than dealing with sine waves (which are superpositions of the above). A beautiful thing about periodic boundary conditions is that your set of \vec k vectors is independent of the Hamiltonian, so this looks very much like the single atom in a box we did last week.

k_x = n_x \frac{2\pi}{L} \qquad n_x = \text{any integer}    (5.47)

and similarly for ky and kz, which gives us


\omega(\vec k) = c|\vec k|    (5.48)

\omega_{n_xn_yn_z} = \frac{2\pi c}{L}\sqrt{n_x^2 + n_y^2 + n_z^2}    (5.49)

where now we need to be extra-careful to remember that in this expression n_x is not a number of photons, even though n is a number of photons. Fortunately, we will soon be done with our n_x, once we finish summing. The possible energies of a single mode are those of a simple harmonic oscillator, so for each of the n_x, n_y, n_z triples there is a different quantum number n, and an energy given by

E_n = n\hbar\omega    (5.50)

where technically there will also be a zero-point energy (like for the physical harmonic oscillator), but we won't want to include the zero point energy for a couple of reasons. Firstly, it can't be extracted from the vacuum, so including it would make it harder to reason about the quantity of radiation leaking from a hole. Secondly, the total zero point energy of the vacuum will be infinite, which makes it a nuisance.

In your homework, you will use a summation over all the normal modes to solve for the thermodynamic properties of the vacuum, and will show that

F = \frac{8\pi V (kT)^4}{h^3c^3} \int_0^\infty \ln\left(1 - e^{-\xi}\right)\xi^2\,d\xi    (5.51)
  = -\frac{8\pi^5}{45}\,\frac{V(kT)^4}{h^3c^3}    (5.52)
  = -\frac{\pi^2}{45}\,\frac{V(kT)^4}{\hbar^3c^3}    (5.53)

provided the box is big enough that \frac{\hbar c}{LkT} \ll 1. At first this looks freaky, because the free energy is always negative, while you know that the energy is always positive. This just means that entropy is dominating the free energy.

The entropy is given by

S = \frac{32\pi^5}{45}\,kV\left(\frac{kT}{hc}\right)^3    (5.54)
  = \frac{4\pi^2}{45}\,kV\left(\frac{kT}{\hbar c}\right)^3    (5.55)

which is a comfortingly positive quantity, and the energy is

\frac{U}{V} = \frac{8\pi^5}{15}\,\frac{(kT)^4}{h^3c^3}    (5.56)
            = \frac{\pi^2}{15}\,\frac{(kT)^4}{\hbar^3c^3}    (5.57)

which is also nicely positive.

Note also that these quantities are nicely extensive, as you would hope.

Knowing the thermodynamic properties of the vacuum is handy, but doesn't tell us yet about the properties of a black body. To do that we'll have to figure out how much of this radiation will escape through a little hole.

Stefan-Boltzmann law of radiation

To find the radiation power, we need to do a couple of things. One is to multiply the energy per volume by the speed of light, which would tell us the energy flux through a hole if all the energy were passing straight through that hole. However, there is an additional geometrical term we will need to find the actual magnitude of the power, since the radiation is travelling equally in all directions. This will give us another dimensionless factor.

If we define the velocity as c\hat k, where c is the speed of light and \hat k is its direction, the power flux (or intensity) in the z direction will be given by the energy density times the average value of the positive z component of the velocity. When v_z < 0, the light doesn't come out the hole at all. This average can be written as


I = \frac{U}{V}\,\frac{\int_0^{2\pi}\int_0^{\pi/2} v_z \sin\theta\,d\theta\,d\phi}{4\pi}    (5.58)
  = \frac{U}{V}\,\frac{\int_0^{2\pi}\int_0^{\pi/2} c\cos\theta\sin\theta\,d\theta\,d\phi}{4\pi}    (5.59)
  = \frac{U}{V}\,\frac{c}{2}\int_0^1 \xi\,d\xi    (5.60)
  = \frac{U}{V}\,\frac{c}{4}    (5.61)
  = -\frac{6\pi (kT)^4}{h^3c^2}\int_0^\infty \ln\left(1 - e^{-\xi}\right)\xi^2\,d\xi    (5.62)

This is the famous Stefan-Boltzmann law of radiation. Since the constants are all mostly a nuisance, they are combined into the Stefan-Boltzmann constant:

I \equiv \text{power radiated per area of surface}    (5.63)
  = \sigma_B T^4    (5.64)

\sigma_B = -\frac{6\pi k_B^4}{h^3c^2}\int_0^\infty \ln\left(1 - e^{-\xi}\right)\xi^2\,d\xi    (5.65)
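As a numerical aside of mine (not in the notes): the dimensionless integral in (5.65) evaluates to -\pi^4/45, so \sigma_B = \frac{2\pi^5 k_B^4}{15 h^3 c^2}. The sketch below checks both, using CODATA values for the constants:

```python
import math

def integral(a=1e-8, b=60.0, steps=200_000):
    """Trapezoid-rule estimate of the integral in (5.65)."""
    h = (b - a) / steps
    xs = [a + i * h for i in range(steps + 1)]
    f = [math.log(1 - math.exp(-x)) * x**2 for x in xs]
    return h * (sum(f) - 0.5 * (f[0] + f[-1]))

print(integral(), -math.pi**4 / 45)    # both ≈ -2.1646

# Stefan-Boltzmann constant, sigma_B = 2 pi^5 k^4 / (15 h^3 c^2)
k, h, c = 1.380649e-23, 6.62607015e-34, 2.99792458e8   # SI (CODATA)
sigma = 2 * math.pi**5 * k**4 / (15 * h**3 * c**2)
print(sigma)                            # ≈ 5.67e-8 W m^-2 K^-4
```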

Side note Why is this T^4 law important for incandescent light bulbs? The resistivity of a metal increases with temperature. In a light bulb, if you have a bit of wire that is a bit hotter than the rest, its resistivity will be higher, and that will cause it to have more Joule heating, and get hotter. If nothing else came into play, we'd have a feedback loop, and the hot bit would just keep getting hotter, and having higher resistivity, until it vaporized. Boom. Fortunately, the power of light radiated from that hot bit of wire will increase faster than its resistivity goes up (because T^4 is serious!), preventing the death spiral, and saving the day!

Planck radiation law

Having found the total power radiated, a fair question is how much of that power is at each possible frequency. This defines the black body spectrum. Each mode has an occupancy \langle n\rangle that is the same as that of the harmonic oscillator from Monday. But the power radiated also depends on how many modes there are at a given frequency. This may be more clear if we solve for the internal energy in a different way:

U = \sum_j^{\text{modes}} \langle n_j\rangle \hbar\omega_j    (5.66)
  = \sum_j^{\text{modes}} \frac{\hbar\omega_j}{e^{\beta\hbar\omega_j} - 1}    (5.67)
  \approx \iiint \frac{\frac{hc}{L}\sqrt{n_x^2 + n_y^2 + n_z^2}}{e^{\beta\frac{hc}{L}\sqrt{n_x^2 + n_y^2 + n_z^2}} - 1}\,dn_x\,dn_y\,dn_z    (5.68)
  = \int_0^\infty \frac{\frac{hc}{L}n}{e^{\beta\frac{hc}{L}n} - 1}\,4\pi n^2\,dn    (5.69)

Now we can transform the integral from n to \omega via \omega_n = 2\pi c n/L.

U = \left(\frac{L}{2\pi c}\right)^3 \int_0^\infty \frac{\hbar\omega}{e^{\beta\hbar\omega} - 1}\,4\pi\omega^2\,d\omega    (5.70)
  = \frac{V}{(2\pi)^3 c^3} \int_0^\infty \frac{\hbar\omega}{e^{\beta\hbar\omega} - 1}\,4\pi\omega^2\,d\omega    (5.71)

At this point we can identify the internal energy per frequency as

\frac{dU}{d\omega} = \frac{V\omega^2}{2\pi^2c^3}\,\frac{\hbar\omega}{e^{\beta\hbar\omega} - 1}    (5.72)

which is proportional to the spectral density of power, which is proportional to \omega^2 at low frequency, and dies off exponentially at high frequency.
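As an aside of mine (not in the notes): the peak of the spectral density (5.72) can be located numerically, and it sits at \hbar\omega \approx 2.82\,kT, which is Wien's displacement law expressed in the \omega variable.

```python
import math

def planck(x):
    """Dimensionless spectral density x^3/(e^x - 1), x = beta*hbar*omega."""
    return x**3 / (math.exp(x) - 1)

# Simple grid scan for the maximum of x^3/(e^x - 1).
xs = [i * 1e-4 for i in range(1, 100_000)]
x_peak = max(xs, key=planck)
print(x_peak)   # ≈ 2.8214, i.e. hbar*omega_peak ≈ 2.82 kT
```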

Skipped:

Kirchhoff law and Surface temperature


Low temperature heat capacity

Much of the same physics that we have considered here applies also to solids. One of the mysteries of classical physics was why the heat capacity of an insulator drops at low temperatures. Experimentally, it was found that

C_p \propto T^3    (5.73)

at low temperatures, which was pretty mysterious. Why would solids have their energy drop off like this? Based on the equipartition theorem, classically the heat capacity should be independent of temperature provided all the terms in the energy are quadratic.

Einstein model

Einstein proposed that we view a solid as a whole bunch of harmonic oscillators, all with the same frequency (for simplicity). In this case, the internal energy is given by

U = N~ωeβ~ω − 1 + 1

2N~ω (5.74)

Small groups What is the heat capacity at low temperatures according to this picture? Roughly sketch this by hand.

Answer At low temperature, the exponential on the bottom is crazy big, so we can see that the internal energy will be

U ≈ Nℏωe^{−βℏω} (5.75)

C_V = (∂U/∂T)_V (5.76)
    ≈ Nk_B (ℏω/kT)² e^{−βℏω} (5.77)

This dies off very quickly once kT ≪ ℏω. We call it "exponential" scaling, but it's exponential in β not T.

This happens (we see exponential low-T scaling in the heat capacity) whenever we have a finite energy gap between the ground state and the first excited state.
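To see just how sharply the gap suppresses the heat capacity, here is a small Python sketch comparing the exact Einstein-model heat capacity per oscillator (in units of k_B, as a function of x = ℏω/k_BT, obtained by differentiating (5.74)) with the low-temperature form (5.77):

```python
import math

def einstein_cv(x):
    """Exact heat capacity per oscillator in units of kB, from eq. (5.74);
    x = hbar*omega/(kB*T)."""
    return x**2 * math.exp(x) / math.expm1(x)**2

def low_T_cv(x):
    """The low-temperature approximation (5.77): x^2 e^{-x}."""
    return x**2 * math.exp(-x)

for x in (0.1, 1.0, 5.0, 10.0):
    print(f"x = {x:5.1f}   exact = {einstein_cv(x):.6f}   low-T = {low_T_cv(x):.6f}")
# At high temperature (x -> 0) the exact result approaches 1 (equipartition);
# for x >> 1 the two expressions agree and are exponentially small.
```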

Debye theory

For smallish wavevector k, the frequency of a phonon is proportional to |k|, with the speed of sound as the proportionality constant. We won't have time for this in class, but the reasoning is very similar.

The key thing Debye realized was that unlike light, the phonon wavevector has a maximum value, which is determined by the number of atoms per unit volume, since one wavelength of sound must include at least a couple of atoms. This means that when we compute the internal energy of a crystal full of phonons, there is a maximum k value, and thus a maximum value for √(n_x² + n_y² + n_z²) and a maximum value for the frequency, which we call ω_D. The internal energy is thus

U = (L/(2πv_s))³ ∫₀^{ω_D} [ℏω/(e^{βℏω} − 1)] 4πω² dω (5.78)
  = [V/((2π)³v_s³)] ∫₀^{ω_D} [ℏω/(e^{βℏω} − 1)] 4πω² dω (5.79)

At low temperature we can RUN OUT OF TIME PREPARING FOR LECTURE!
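Even without finishing the analysis in class, we can check the advertised C_p ∝ T³ behavior of (5.78) numerically. A Python sketch in dimensionless variables (y = ω/ω_D and t = kT/ℏω_D, dropping the overall prefactor, which are my choices for illustration):

```python
import math

def debye_energy(t, n=20_000):
    """Dimensionless version of eq. (5.78): u(t) = integral_0^1 y^3/(e^{y/t} - 1) dy,
    with y = omega/omega_D and t = kT/(hbar*omega_D).  Trapezoidal rule."""
    dy = 1.0/n
    s = 0.5*(0.0 + 1.0/math.expm1(1.0/t))  # endpoints: y=0 gives 0
    for i in range(1, n):
        y = i*dy
        s += y**3/math.expm1(y/t)
    return s*dy

def heat_capacity(t, dt=1e-4):
    """c(t) = du/dt by central finite difference."""
    return (debye_energy(t + dt) - debye_energy(t - dt))/(2*dt)

for t in (0.02, 0.05, 0.1):
    print(t, heat_capacity(t)/t**3)
# c(t)/t^3 approaches the constant 4*pi^4/15 ≈ 25.98 as t -> 0, i.e. C ∝ T^3.
```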

Homework for week 4 (PDF)

1. ⊗ Radiation in an empty box As discussed in class, we can consider a black body as a large box with a small hole in it. If we treat the large box as a metal cube with side length L and metal walls, the frequency of each normal mode will be given by:

ω_{n_x n_y n_z} = (πc/L)√(n_x² + n_y² + n_z²) (5.80)


where each of n_x, n_y, and n_z will have positive integer values. This simply comes from the fact that a half wavelength must fit in the box. There is an additional quantum number for polarization, which has two possible values, but does not affect the frequency. Each normal mode is a harmonic oscillator, with energy eigenstates E_n = nℏω, where we will not include the zero-point energy (1/2)ℏω, since that energy cannot be extracted from the box. (See the Casimir effect for an example where the zero point energy of photon modes does have an effect.)

Note This is a slight approximation, as the boundary conditions for light are a bit more complicated. However, for large n values this gives the correct result.

a) Show that the free energy is given by

F = [8πV(kT)⁴/(h³c³)] ∫₀^∞ ln(1 − e^{−ξ}) ξ² dξ (5.81)
  = −(8π⁵/45) V(kT)⁴/(h³c³) (5.82)
  = −(π²/45) V(kT)⁴/(ℏ³c³) (5.83)

provided the box is big enough that ℏc/(LkT) ≪ 1. Note that you may end up with a slightly different dimensionless integral that numerically evaluates to the same result, which would be fine. I also do not expect you to solve this definite integral analytically; a numerical confirmation is fine. However, you must manipulate your integral until it is dimensionless and has all the dimensionful quantities removed from it!

b) Show that the entropy of this box full of photons at temperature T is

S = (32π⁵/45) kV (kT/hc)³ (5.84)
  = (4π²/45) kV (kT/ℏc)³ (5.85)

c) Show that the internal energy of this box full of photons at temperature T is

U/V = (8π⁵/15) (kT)⁴/(h³c³) (5.86)
    = (π²/15) (kT)⁴/(ℏ³c³) (5.87)

2. Surface temperature of the earth (K&K 4.5) Calculate the temperature of the surface of the Earth on the assumption that as a black body in thermal equilibrium it reradiates as much thermal radiation as it receives from the Sun. Assume also that the surface of the Earth is at a constant temperature over the day-night cycle. Use the Sun's surface temperature T_⊙ = 5800 K, the Sun's radius R_⊙ = 7 × 10¹⁰ cm, and the Earth–Sun distance of 1.5 × 10¹³ cm.

3. ⊗ Pressure of thermal radiation (modified from K&K 4.6) We discussed in class that

p = −(∂F/∂V)_T (5.88)

Use this relationship to show that

a) p = −Σ_j ⟨n_j⟩ ℏ (dω_j/dV), (5.89)

where ⟨n_j⟩ is the number of photons in the mode j;

b) Solve for the relationship between pressure and internal energy.


4. Heat shields (K&K 4.8) A black (nonreflective) plane at high temperature T_h is parallel to a cold black plane at temperature T_c. The net energy flux density in vacuum between the two planes is J_U = σ_B(T_h⁴ − T_c⁴), where σ_B is the Stefan–Boltzmann constant used in (26). A third black plane is inserted between the other two and is allowed to come to a steady state temperature T_m. Find T_m in terms of T_h and T_c, and show that the net energy flux density is cut in half because of the presence of this plane. This is the principle of the heat shield and is widely used to reduce radiant heat transfer. Comment: The result for N independent heat shields floating in temperature between the planes T_h and T_c is that the net energy flux density is J_U = σ_B(T_h⁴ − T_c⁴)/(N + 1).

5. Heat capacity of vacuum

a) Solve for the heat capacity of a vacuum, given the above, and assuming that photons represent all the energy present in vacuum.

b) Compare the heat capacity of vacuum at room temperature with the heat capacity of an equal volume of water.


Chapter 6

Week 5: Chemical potential andGibbs distribution (K&K 9,Schroeder 7.1)

This week we will be looking at scenarios where the number of particles in a system changes. We could technically always manage to solve problems without allowing the particle number to change, but allowing N to change is often a lot easier, just as letting the energy change made things easier. In both cases, we enable ourselves to consider a smaller system, which tends to be both conceptually and mathematically simpler.

Small white boards (3 minutes) Talk with your neighbor for a moment about how you expect the density of the atmosphere to vary with altitude.

The atmosphere

Let's talk about the atmosphere for a moment. Each atom in the atmosphere has a potential energy. We can solve this problem using the canonical ensemble as we have learned. We will consider just one atom, but now with gravitational potential energy as well as kinetic energy. This time around we'll do this classically rather than quantum mechanically. We can work out the probability of this atom having any particular momentum and position.

P₁(p, r) = e^{−β(p²/2m + mgz)}/Z₁ (6.1)
         = e^{−βp²/2m} e^{−βmgz}/Z₁ (6.2)

This tells us that the probability of this atom being at any height drops exponentially with height. If we extend this to many atoms, clearly the density must drop exponentially with height. This week we'll be looking at easier approaches to explain this sort of phenomenon. You can see the obvious fact that potential energy will affect density, and hence pressure. We will be generalizing the idea of potential energy into what is called chemical potential.
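As a concrete sketch (Python, standard library only; treating the atmosphere as isothermal molecular nitrogen is my simplifying assumption), the factor e^{−βmgz} in (6.2) falls off on a characteristic "scale height" k_BT/mg:

```python
import math

kB = 1.380649e-23      # J/K
amu = 1.66053907e-27   # kg
g = 9.81               # m/s^2

def density_ratio(z, T, m):
    """n(z)/n(0) = exp(-m g z/(kB T)) from eq. (6.2);
    assumes T and g do not vary with altitude."""
    return math.exp(-m*g*z/(kB*T))

T = 300.0
m_N2 = 28*amu
scale_height = kB*T/(m_N2*g)   # height over which the density falls by 1/e
print(scale_height)            # ~9.1e3 m

# density at the altitude of Everest's summit, within this isothermal model
print(density_ratio(8848.0, T, m_N2))  # ~0.38 of sea level
```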

Chemical potential

Imagine for a moment what happens if you allow just two systems to exchange particles as well as energy. Clearly they will exchange particles for a while, and then things will settle down. If we hold them at fixed temperature, their combined Helmholtz free energy will be minimized. This means that the derivative


of the Helmholtz free energy with respect to N must be equal on both sides. This defines the chemical potential.

µ = (∂F/∂N)_{T,V} (6.3)

This expands our total differential of the free energy

dF = −SdT − pdV + µdN (6.4)

which also expands our understanding of the thermodynamic identity

dU = TdS − pdV + µdN (6.5)

which tells us that the chemical potential is also

µ = (∂U/∂N)_{S,V} (6.6)

The chemical potential expands our set of thermodynamic variables, and allows all sorts of nice excitement. Specifically, we now have three extensive variables that the internal energy depends on, as well as their derivatives: the temperature, pressure, and chemical potential.

Note In general, there is one chemical potential for each kind of particle, thus the word "chemical" in chemical potential. Thus the "three" I discuss is actually a bit flexible.

Internal and external

The chemical potential is in fact very much like potential energy. We can distinguish between external chemical potential, which is basically ordinary potential energy, and internal chemical potential, which is the chemical potential that we compute as a property of a material. We'll do a fair amount of computing of the internal chemical potential this week, but keep in mind that the total chemical potential is what becomes equal in systems that are in equilibrium. The total chemical potential at the top of the atmosphere is equal to the chemical potential at the bottom. If it were not, then atoms would diffuse from one place to the other.

Ideal gas chemical potential

Recall that the Helmholtz free energy of an ideal gas is given by

F = NF₁ + k_BT ln N! (6.7)
  = −Nk_BT ln(V (mk_BT/2πℏ²)^{3/2}) + k_BT N(ln N − 1) (6.8)
  = −Nk_BT ln(V n_Q) + k_BT N(ln N − 1) (6.9)
  = Nk_BT ln(N/(V n_Q)) − Nk_BT (6.10)

Small groups Find the chemical potential of the ideal gas.

Answer To find the chemical potential, we just need to take a derivative.

µ = (∂F/∂N)_{V,T} (6.11)
  = k_BT ln(N/(V n_Q)) (6.12)
  = k_BT ln(n/n_Q) (6.13)

where the number density n is given by n ≡ N/V.

This equation can be solved to find the density in terms of the chemical potential:

n = n_Q e^{βµ} (6.14)


This might remind you of the Boltzmann relation. In fact, it's very closely related to the Boltzmann relation. We do want to keep in mind that the µ above is the internal chemical potential.

The total chemical potential is given by the sum of the internal chemical potential and the external chemical potential, and that total is what is equalized between systems that are in diffusive contact.

µ_tot = µ_int + mgz (6.15)
      = k_BT ln(n/n_Q) + mgz (6.16)

We can solve for the density now, as a function of position.

k_BT ln(n/n_Q) = µ_tot − mgz (6.17)
n = n_Q e^{β(µ_tot − mgz)} (6.18)

This is just telling us the same result we already knew, which is that the density must drop exponentially with height.

Interpreting the chemical potential

The chemical potential can be challenging to understand intuitively, for myself as well as for you. The ideal gas expression

n = n_Q e^{βµ} (6.19)

can help with this. This tells us that the density increases as we increase the chemical potential. Particles spontaneously flow from high chemical potential to low chemical potential, just like heat flows from high temperature to low. This fits with the idea that at high µ the density is high, since I expect particles to naturally flow from a high density region to a low density region.

The distinction between internal and external chemical potential allows us to reason about systems like the atmosphere. Where the external chemical potential is high (at high altitude), the internal chemical potential must be lower, and there is lower density. This is because particles have already fled the high-µ region to happier locations closer to the Earth.

Gibbs factor and sum

Let's consider how we maximize entropy when we allow not just microstates with different energy, but also microstates with different number of particles. The problem is the same as what we dealt with the first week. We want to maximize the entropy, but need to fix the total probability, the average energy, and now the average number.

⟨N⟩ = N = Σᵢ PᵢNᵢ (6.20)
⟨E⟩ = U = Σᵢ PᵢEᵢ (6.21)
1 = Σᵢ Pᵢ (6.22)

To solve for the probability Pᵢ we will want to maximize the entropy S = −k Σᵢ Pᵢ ln Pᵢ subject to the above constraints. Like what I did the first week of class, we will need to use Lagrange multipliers.

The Lagrangian which we want to maximize will look like

L = −k Σᵢ Pᵢ ln Pᵢ + kα(1 − Σᵢ Pᵢ) + kβ(U − Σᵢ PᵢEᵢ) + kγ(N − Σᵢ PᵢNᵢ) (6.23)


Small groups Solve for the probabilities Pᵢ that maximize this Lagrangian, subject to the above constraints. Eliminate α from the expression for probability, so you will end up with probabilities that depend on the other two Lagrange multipliers, one of which is our usual β, while the other one we will relate to chemical potential.

Answer We maximize L by setting its derivatives equal to zero.

0 = −(1/k) ∂L/∂Pᵢ (6.24)
  = ln Pᵢ + 1 + α + βEᵢ + γNᵢ (6.25)
Pᵢ = e^{−1−α−βEᵢ−γNᵢ} (6.26)

Now as before we'll want to apply the normalization constraint.

1 = Σᵢ Pᵢ (6.27)
  = Σᵢ e^{−1−α−βEᵢ−γNᵢ} (6.28)
  = e^{−1−α} Σᵢ e^{−βEᵢ−γNᵢ} (6.29)
e^{−1−α} = 1/(Σᵢ e^{−βEᵢ−γNᵢ}) (6.30)

Thus we find that the probability of a given microstate is

Pᵢ = e^{−βEᵢ−γNᵢ}/Z (6.31)
Z ≡ Σᵢ e^{−βEᵢ−γNᵢ} (6.32)

where we will call the new quantity Z the grand partition function or Gibbs sum.

We have already identified β as 1/kT, but what is this γ? It is a dimensionless quantity. We expect that γ will relate to a derivative of the entropy with respect to N (since it is the Lagrange multiplier for the N constraint). We can figure this out by examining the newly expanded total differential of entropy:

dU = TdS − pdV + µdN (6.33)
dS = (1/T)dU + (p/T)dV − (µ/T)dN (6.34)

Small groups I'd like you to repeat your first ever homework problem in this class, but now with the N-changing twist. Given the above set of probabilities, along with the Gibbs entropy S = −k Σ P ln P, find the total differential of entropy in terms of dU and dN, keeping in mind that V is inherently held fixed by holding the energy eigenvalues fixed. Equate this total differential to the dS above to identify β and γ with thermodynamic quantities.

Answer

S = −k Σᵢ Pᵢ ln Pᵢ (6.35)
  = −k Σᵢ Pᵢ ln(e^{−βEᵢ−γNᵢ}/Z) (6.36)
  = −k Σᵢ Pᵢ(−βEᵢ − γNᵢ − ln Z) (6.37)
  = kβU + kγN + k ln Z (6.38)

Now we can zap this with d to find its derivatives:

dS = kβdU + kUdβ + kγdN + kNdγ + k dZ/Z (6.39)

Now we just need to find dZ. . .

dZ = (∂Z/∂β)dβ + (∂Z/∂γ)dγ (6.40)
   = −Σᵢ Eᵢe^{−βEᵢ−γNᵢ}dβ − Σᵢ Nᵢe^{−βEᵢ−γNᵢ}dγ (6.41)
   = −UZdβ − NZdγ (6.42)

Putting dS together gives


dS = kβdU + kγdN (6.43)
   = (1/T)dU − (µ/T)dN (6.44)

Thus, we conclude that

kβ = 1/T    kγ = −µ/T (6.45)
β = 1/kT    γ = −βµ (6.46)

Actual Gibbs sum (or grand sum)

Putting this interpretation for γ into our probabilities we find the Gibbs factor and Gibbs sum (or grand sum or grand partition function) to be:

Pⱼ = e^{−β(Eⱼ − µNⱼ)}/Z (6.47)
Z ≡ Σᵢ e^{−β(Eᵢ − µNᵢ)} (6.48)

where you must keep in mind that the sums are over all microstates (including states with different N). We can go back to our expressions for internal energy and number

U = Σᵢ PᵢEᵢ (6.49)
  = (1/Z) Σᵢ Eᵢe^{−β(Eᵢ−µNᵢ)} (6.50)
N = Σᵢ PᵢNᵢ (6.51)
  = (1/Z) Σᵢ Nᵢe^{−β(Eᵢ−µNᵢ)} (6.52)

We can now use the derivative trick to relate U and N to the Gibbs sum Z, should we so desire.

Small groups Work out the partial derivative tricks to compute U and N from the grand sum.

Answer Let's start by exploring the derivative with respect to β, which worked so nicely with the partition function.

(1/Z)(∂Z/∂β) = −(1/Z) Σᵢ (Eᵢ − µNᵢ)e^{−β(Eᵢ−µNᵢ)} (6.53)
             = −U + µN (6.54)

Now let's examine a derivative with respect to µ.

(1/Z)(∂Z/∂µ) = (1/Z) Σᵢ (βNᵢ)e^{−β(Eᵢ−µNᵢ)} (6.55)
             = βN (6.56)

Arranging these to find N and U is not hard.

Small groups Show that

(∂N/∂µ)_{T,V} > 0 (6.57)

Answer

N = Σᵢ NᵢPᵢ (6.58)
  = kT (1/Z)(∂Z/∂µ) (6.59)

So the derivative we seek will be

(∂N/∂µ)_{T,V} = kT ∂²(ln Z)/∂µ² (6.60)
              = Σᵢ Nᵢ (∂Pᵢ/∂µ) (6.61)
              = Σᵢ Nᵢ (βNᵢPᵢ − (Pᵢ/Z)(∂Z/∂µ)) (6.62)
              = Σᵢ Nᵢ (βNᵢPᵢ − β⟨N⟩Pᵢ) (6.63)

We can simplify the notation by expressing things in terms of averages, since we've got sums of Pᵢ times something.

= β⟨N(N − ⟨N⟩)⟩ (6.64)
= β(⟨N²⟩ − ⟨N⟩²) (6.65)
= β⟨(N − ⟨N⟩)²⟩ (6.66)

This is positive, because it is an average of something squared. The last step is a common step when examining variances of distributions, and relies on the fact that ⟨N − ⟨N⟩⟩ = 0.
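These identities are easy to check numerically. Here is a Python sketch using a made-up handful of microstates (Eᵢ, Nᵢ) (the numbers are arbitrary, chosen only for illustration), verifying the derivative trick (6.55–6.56) and the fluctuation relation (6.66) by finite differences:

```python
import math

# hypothetical toy system: a few microstates (E_i, N_i), arbitrary units
states = [(0.0, 0), (0.5, 1), (1.3, 1), (1.6, 2)]
beta, mu = 2.0, 0.4

def gibbs_sum(mu):
    """Eq. (6.48): Z = sum_i exp(-beta (E_i - mu N_i))."""
    return sum(math.exp(-beta*(E - mu*N)) for E, N in states)

def average(f, mu):
    """Thermal average of f(E, N) with the Gibbs-factor probabilities (6.47)."""
    Z = gibbs_sum(mu)
    return sum(f(E, N)*math.exp(-beta*(E - mu*N)) for E, N in states)/Z

N_avg = average(lambda E, N: N, mu)
N_sq = average(lambda E, N: N*N, mu)

# derivative trick (6.55-6.56): <N> = (1/beta) d(ln Z)/d(mu)
dmu = 1e-6
N_from_Z = (math.log(gibbs_sum(mu + dmu))
            - math.log(gibbs_sum(mu - dmu)))/(2*dmu)/beta

# fluctuation relation (6.66): d<N>/d(mu) = beta <(N - <N>)^2> > 0
dN_dmu = (average(lambda E, N: N, mu + dmu)
          - average(lambda E, N: N, mu - dmu))/(2*dmu)

print(N_avg, N_from_Z)                  # agree
print(dN_dmu, beta*(N_sq - N_avg**2))   # agree, and positive
```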

Euler’s homogeneous function theorem

There is a nice theorem we can use to better understand the chemical potential, and how it relates to the Gibbs free energy. This involves reasoning about how internal energy changes when all the extensive variables are changed simultaneously, and connects with Euler's homogeneous function theorem.

Suppose we have a glass that we will slowly pour water into. We will define our "system" to be all the water in the glass. The glass is open, so the pressure remains constant. Since the water is at room temperature (and let's just say the room humidity is 100%, to avoid thinking about evaporation), the temperature remains constant as well.

Small white boards What is the initial internal energy (and entropy and volume and N) of the system? i.e. when there is not yet any water in the glass. . .

Answer Since these are extensive quantities, they must all be zero when there is no water in the glass.

Small white boards Suppose the water is added at a rate dN/dt. Suppose you know the values of N, S, V, and U for a given amount of room temperature water (which we can call N₀, S₀, etc.). Find the rate of change of these quantities.

Answer Because these are extensive quantities, they must all be increasing in equal proportion

dV/dt = (V₀/N₀)(dN/dt) (6.67)
dS/dt = (S₀/N₀)(dN/dt) (6.68)
dU/dt = (U₀/N₀)(dN/dt) (6.69)

This tells us that differential changes to each of these quantities must be related in the same way, for this process of pouring in more identical water. And we can drop the 0 subscript, since the ratio of quantities is the same regardless of how much water we have.

dV = (V/N)dN (6.70)
dS = (S/N)dN (6.71)
dU = (U/N)dN (6.72)

Thus given the thermodynamic identity

dU = TdS − pdV + µdN (6.73)
(U/N)dN = T(S/N)dN − p(V/N)dN + µdN (6.74)
U = TS − pV + µN (6.75)

This is both crazy and awesome. It feels very counterintuitive, and you might be wondering why we didn't tell you this way back in Energy and Entropy to save you all this trouble with derivatives. The answer is that it is usually not directly all that helpful, since we now have a closed-form expression for U in terms of six mutually dependent variables! So you can't use this form in order to evaluate derivatives (much).
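We can at least sanity-check U = TS − pV + µN numerically for the one system whose functions we know in closed form. The Python sketch below (the helium mass and room-temperature numbers are just illustrative choices) uses the ideal gas results from this week: µ from (6.13), pV = Nk_BT, U = (3/2)Nk_BT, and the entropy S = Nk_B(ln(n_Q/n) + 5/2), which follows from (6.8) via S = −(∂F/∂T)_{V,N}:

```python
import math

kB = 1.380649e-23       # J/K
hbar = 1.054571817e-34  # J s
m = 6.6465e-27          # kg, helium atom (illustrative choice)
T, V, N = 300.0, 1e-3, 1e22   # arbitrary room-temperature conditions

n = N/V
nQ = (m*kB*T/(2*math.pi*hbar**2))**1.5  # quantum density, as in eq. (6.9)

U = 1.5*N*kB*T                          # monatomic ideal gas
S = N*kB*(math.log(nQ/n) + 2.5)         # from S = -(dF/dT)_{V,N}
mu = kB*T*math.log(n/nQ)                # eq. (6.13)
pV = N*kB*T

print(U)                 # ~62 J
print(T*S - pV + mu*N)   # the same number, as eq. (6.75) promises
```

The agreement is exact here because the µN term cancels the logarithm in TS, leaving NkT(5/2 − 1) = (3/2)NkT.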

This expression is however very helpful in terms of understanding the chemical potential. Consider the Gibbs free energy:

G ≡ U − TS + pV (6.76)
  = µN (6.77)

which tells us that the chemical potential is just the Gibbs free energy per particle. If we have several chemical species, this expression just becomes

G = Σᵢ µᵢNᵢ (6.78)

so each chemical potential is a partial Gibbs free energy per molecule.

This explains why the chemical potential is seldom discussed in chemistry courses: they spend all their time talking about the Gibbs free energy, which just turns out to be the same thing as the chemical potential.

Side note There is another interesting thing we can do with the relationship that

U = TS − pV + µN (6.79)

and that involves zapping it with d. This tells us that

dU = TdS + SdT − pdV − Vdp + µdN + Ndµ (6.80)

which looks downright weird, since it's got twice as many terms as we normally see. This tells us that the extra terms must add to zero:

0 = SdT − Vdp + Ndµ (6.81)

This relationship (called the Gibbs–Duhem equation) tells us just how T, p and µ must change in order to keep our extensive quantities extensive and our intensive quantities intensive.

Chemistry

Chemical equilibrium is somewhat different than the diffusive equilibrium that we have considered so far. In diffusive equilibrium, two systems can exchange particles, and the two systems at equilibrium must have equal chemical potentials. In chemistry, particles can be turned into other particles, so we have a more complicated scenario, but it still involves changing the number of particles in a system. In chemical equilibrium, when a given reaction is in equilibrium the sum of the chemical potentials of the reactants must be equal to the sum of the chemical potentials of the products.

An example may help. Consider for instance making water from scratch:

2H₂ + O₂ → 2H₂O (6.82)

In this case in chemical equilibrium

2µ_{H₂} + µ_{O₂} = 2µ_{H₂O} (6.83)

We can take this simple equation, and turn it into an equation involving activities, which is productive if you think of an activity as being something like a concentration (and if you care about equilibrium concentrations):

e^{β(2µ_{H₂O} − 2µ_{H₂} − µ_{O₂})} = 1 (6.84)
λ²_{H₂O}/(λ_{O₂}λ²_{H₂}) = 1 (6.85)


Now this looks sort of like the law of mass action, except that our equilibrium constant is 1. To get to the more familiar law of mass action, we need to introduce (a caricature of) the chemistry version of activity. The thing in square brackets is actually a relative activity, not a concentration as is often taught in introductory classes (and was considered correct prior to the late nineteenth century). It is only proportional to concentration to the extent that the substance obeys the ideal gas relationship between chemical potential and concentration. Fortunately, this is satisfied for just about anything at low concentration. For solvents (and dense materials like a solid reactant or product) the chemical potential doesn't (appreciably) change as the reaction proceeds, so it is normally omitted from the mass action equation. When I was taught this in a chemistry class back in the nineties, I was taught that the "concentration" of such a substance was dimensionless and had value 1.

Specifically, we define the thing in square brackets as

[H₂O] ≡ n*_{H₂O} e^{β(µ_{H₂O} − µ*_{H₂O})} (6.86)
      = n*_{H₂O} λ_{H₂O}/λ*_{H₂O} (6.87)

where n* is a reference concentration, and µ* is the chemical potential of the fluid at that reference density. Using this notation, we can solve for the activity

λ_{H₂O} = λ*_{H₂O} [H₂O]/n*_{H₂O} (6.88)

So now we can rewrite our weird mass action equation from above

(λ*_{H₂O}[H₂O]/n*_{H₂O})² / [(λ*_{O₂}[O₂]/n*_{O₂})(λ*_{H₂}[H₂]/n*_{H₂})²] = 1 (6.89)

and then we can solve for the equilibrium constant for the reaction

[H₂O]²/([O₂][H₂]²) = [(n*_{H₂O})²/(n*_{O₂}(n*_{H₂})²)] λ*_{O₂}(λ*_{H₂})²/(λ*_{H₂O})² (6.90)
                   = [(n*_{H₂O})²/(n*_{O₂}(n*_{H₂})²)] e^{β(µ*_{O₂} + 2µ*_{H₂} − 2µ*_{H₂O})} (6.91)
                   = [(n*_{H₂O})²/(n*_{O₂}(n*_{H₂})²)] e^{−β∆G*} (6.92)

where at the last step I defined ∆G* as the difference in Gibbs free energy between products and reactants, and used the fact that the chemical potential is the Gibbs free energy per particle.

This expression for the chemical equilibrium constant is the origin of the intuition that a reaction will go forward if the Gibbs free energy of the products is lower than that of the reactants.

I hope you found interesting this little side expedition into chemistry. I find fascinating where these fundamental chemistry relations come from, and also that the relationship between concentrations arises from an ideal gas approximation! Which is why it is only valid in the limit of low concentration, and why the solvent is typically omitted from the equilibrium constant, since its activity is essentially fixed.

Homework for week 5 (PDF)

1. Centrifuge (K&K 5.1) A circular cylinder of radius R rotates about the long axis with angular velocity ω. The cylinder contains an ideal gas of atoms of mass M at temperature T. Find an expression for the dependence of the concentration n(r) on the radial distance r from the axis, in terms of n(0) on the axis. Take µ as for an ideal gas.

2. Potential energy of gas in gravitational field (K&K 5.3) Consider a column of atoms each of mass M at temperature T in a uniform gravitational field g. Find the thermal average potential energy per atom. The thermal average kinetic energy is independent of height. Find the total heat capacity per atom. The total heat capacity is the sum of contributions from the kinetic energy and from the potential energy. Take the zero of the gravitational energy at the bottom h = 0 of the column. Integrate from h = 0 to h = ∞. You may assume the gas is ideal.

Figure 6.1: Centrifugal Force by Randall Munroe, at xkcd.

3. Active transport (K&K 5.4) The concentration of potassium K⁺ ions in the internal sap of a plant cell (for example, a fresh water alga) may exceed by a factor of 10⁴ the concentration of K⁺ ions in the pond water in which the cell is growing. The chemical potential of the K⁺ ions is higher in the sap because their concentration n is higher there. Estimate the difference in chemical potential at 300 K and show that it is equivalent to a voltage of 0.24 V across the cell wall. Take µ as for an ideal gas. Because the values of the chemical potential are different, the ions in the cell and in the pond are not in diffusive equilibrium. The plant cell membrane is highly impermeable to the passive leakage of ions through it. Important questions in cell physics include these: How is the high concentration of ions built up within the cell? How is metabolic energy applied to energize the active ion transport?

David adds You might wonder why it is even remotely plausible to consider the ions in solution as an ideal gas. The key idea here is that the ideal gas entropy incorporates the entropy due to position dependence, and thus due to concentration. Since concentration is what differs between the cell and the pond, the ideal gas entropy describes this pretty effectively. In contrast to the concentration dependence, the temperature-dependence of the ideal gas chemical potential will not be so great.

4. Gibbs sum for a two level system (Modified from K&K 5.6)

a) Consider a system that may be unoccupied with energy zero, or occupied by one particle


in either of two states, one of energy zero and one of energy ε. Find the Gibbs sum for this system in terms of the activity λ ≡ e^{βµ}. Note that the system can hold a maximum of one particle.

b) Solve for the thermal average occupancy of the system in terms of λ.

c) Show that the thermal average occupancy of the state at energy ε is

⟨N(ε)⟩ = λe^{−ε/kT}/Z (6.93)

d) Find an expression for the thermal average energy of the system.

e) Allow the possibility that the orbitals at 0 and at ε may each be occupied by one particle at the same time; show that

Z = 1 + λ + λe^{−ε/kT} + λ²e^{−ε/kT} (6.94)
  = (1 + λ)(1 + λe^{−ε/kT}) (6.95)

Because Z can be factored as shown, we have in effect two independent systems.

5. Carbon monoxide poisoning (K&K 5.8) In carbon monoxide poisoning the CO replaces the O₂ adsorbed on hemoglobin (Hb) molecules in the blood. To show the effect, consider a model for which each adsorption site on a heme may be vacant or may be occupied either with energy ε_A by one molecule O₂ or with energy ε_B by one molecule CO. Let N fixed heme sites be in equilibrium with O₂ and CO in the gas phases at concentrations such that the activities are λ(O₂) = 1 × 10⁻⁵ and λ(CO) = 1 × 10⁻⁷, all at body temperature 37°C. Neglect any spin multiplicity factors.

a) First consider the system in the absence of CO. Evaluate ε_A such that 90 percent of the Hb sites are occupied by O₂. Express the answer in eV per O₂.

b) Now admit the CO under the specified conditions. Find ε_B such that only 10% of the Hb sites are occupied by O₂.


Chapter 7

Week 6: Ideal gas (K&K 6, Schroeder6.7)

Midterm on Monday

Topics are everything through week 4, including week 3 homework, which was due in week 4. Problems should be similar to homework problems, but designed to be completed in class. The exam will be closed notes. You should be able to remember the fundamental equations:

dU = TdS − pdV (7.1)
F = U − TS (7.2)
dF = −SdT − pdV (7.3)
Pᵢ = e^{−βEᵢ}/Z (7.4)
Z = Σᵢ e^{−βEᵢ} (7.5)
U = Σᵢ EᵢPᵢ (7.6)
F = −kT ln Z (7.7)
S = −k Σᵢ Pᵢ ln Pᵢ (7.8)

If you need a property of a particular system (the ideal gas, the simple harmonic oscillator), it will be given to you. There is no need, for instance, to remember the Stefan–Boltzmann law or the Planck distribution.

Motivation

You may recall that when we solved for the free energy of an ideal gas, we had a fair amount of work to sum over all possible sets of quantum numbers for each atom, and then to remove the double-counting due to the fact that our atoms were identical. We had a similar issue when dealing with photon modes and blackbody radiation, but in that case one approach was to treat each mode as a separate system, and then just sum over all the modes separately, without ever needing to find the partition function of all the modes taken together.

This week we will be looking at how we can treat each orbital (i.e. possible quantum state for a single non-interacting particle) as a separate system (which may or may not be occupied). This can only work when we work in the grand canonical ensemble, but will greatly simplify our understanding of such systems.


Quantum mechanics and orbitals

Kittel uses the term orbital to refer to an energy eigenstate (or wave function) of a one-particle system. How do things differ when we have more than one particle?

Suppose we have three particles (and ignore spin for a moment). The wave function would be written as Ψ(r₁, r₂, r₃, · · · ). This function in general has nothing to do with any single-particle orbitals. Orbitals arise when we consider a Hamiltonian in which there are no interactions between particles:

H = p₁²/2m + V(r₁) + p₂²/2m + V(r₂) + · · · (7.10)

When our Hamiltonian is separable in this way (i.e. the particles don't interact, and there are no terms that involve both r₁ and r₂), we can use separation of variables in the solution, and we obtain a wave function that is a product of orbitals:

|i₁, i₂, i₃, · · ·⟩ = φ_{i₁}(r₁)φ_{i₂}(r₂)φ_{i₃}(r₃) · · · (7.11)

Assuming the potential and mass are the same for every particle, these orbitals are eigenstates of the following single-particle eigenvalue equation:

(p²/2m + V(r))φᵢ(r) = εᵢφᵢ(r) (7.12)

There is a catch, however, which arises if the particlesare truly indistinguishable (as is the case for electrons,protons, atoms of the same isotope, etc.). In this case,there is a symmetry which means that permuting thelabels of our particles cannot change any probabilities:

|\Psi(\vec r_1, \vec r_2, \vec r_3, \cdots)|^2 = |\Psi(\vec r_2, \vec r_1, \vec r_3, \cdots)|^2 \qquad (7.13)

= |\Psi(\vec r_2, \vec r_3, \vec r_1, \cdots)|^2 \qquad (7.14)

The simple product we wrote above doesn't have this symmetry, and thus while it is an eigenfunction of our eigenvalue equation, it cannot represent the state of a real system of identical particles. Fortunately, this is pretty easy to resolve: permuting the labels doesn't change the energy, so we have a largish degenerate subspace in which to work. We are simply required to take a linear combination of these product states which does have the necessary symmetry.

The above equation, while true, does not tell us what happens to the wave function when we do a permutation, only to its magnitude. As it turns out, there are two types of symmetry possible: bosons and fermions.

Fermions

Fermions are particles with half-integer spin, such as electrons and protons. Fermion wave functions are antisymmetric when we exchange the labels of any two particles:

Ψ(~r1, ~r2, ~r3, · · · ) = −Ψ(~r2, ~r1, ~r3, · · · ) (7.15)

This antisymmetry is the origin of the Pauli exclusion principle.

This isn't a quantum class, so I won't say much more, but we do need to connect with the orbitals picture. When we have non-interacting fermions, their energy eigenstates can be written using a Slater determinant, which is just a convenient way to write the proper antisymmetric linear combination of all possible product states with the same set of orbitals:

\Psi_{i_1i_2i_3\cdots}(\vec r_1, \vec r_2, \vec r_3, \cdots) = \frac{1}{\sqrt{N!}}
\begin{vmatrix}
\phi_{i_1}(\vec r_1) & \phi_{i_1}(\vec r_2) & \phi_{i_1}(\vec r_3) & \cdots \\
\phi_{i_2}(\vec r_1) & \phi_{i_2}(\vec r_2) & \phi_{i_2}(\vec r_3) & \cdots \\
\phi_{i_3}(\vec r_1) & \phi_{i_3}(\vec r_2) & \phi_{i_3}(\vec r_3) & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{vmatrix} \qquad (7.16)

This relies on the properties of a determinant, which changes sign if you swap two rows or two columns. This means that if two of your orbitals are the same, the result will be zero, so the "occupancy" of any orbital is either 0 or 1. Note that the \frac{1}{\sqrt{N!}} is required in order to ensure that the wave function is normalized, provided the orbitals are orthonormal.

Bosons

Bosons have integer spin, and differ from fermions in that their sign does not change when you interchange particles:

Ψ(~r1, ~r2, ~r3, · · · ) = Ψ(~r2, ~r1, ~r3, · · · ) (7.17)

The wave function for noninteracting bosons looks very much like the Slater determinant above, only with a special version of the "determinant" that has all + signs (a permanent). Bosons can have as many particles as they want in a given orbital. In the limiting case where all particles are in the same orbital, a single product of orbitals satisfies the required symmetry.

Fermi-Dirac distribution

Let us now consider a set of non-interacting fermions. These fermions have a Hamiltonian with a set of single-particle energy eigenvalues given by \varepsilon_i. How do we find the probability of any given many-body microstate? As always, the probability of any given microstate is given by the Boltzmann distribution, but given that our particles are non-interacting, we'd prefer to deal with just one at a time. As it turns out, dealing with one particle at a time is not really possible, but in a grand canonical ensemble we can deal with a single orbital at a time with much greater ease. We can think of each orbital as a separate system, and ask how many particles it has! Particles can now be exchanged between orbitals just like they were between systems last week.

Small groups Work out the grand partition function for a single orbital with energy \varepsilon_i that may be occupied by a fermion.

Answer Now that we are thinking of an orbital as a system, we can pretty easily write down all the possible states of that system: it is either occupied or unoccupied. The latter case has 0 energy, and also N = 0, while the former case has energy \varepsilon and N = 1. Summing over these gives the Gibbs sum

Z = \sum_i^{\text{all }\mu\text{states}} e^{-\beta(\varepsilon_i-\mu N_i)} \qquad (7.18)

= 1 + e^{-\beta(\varepsilon-\mu)} \qquad (7.19)

Note that the same statistics would apply to a state for a classical particle if there were an infinite energy required to have two particles in the same state. The physics here is a system that can hold either zero or one particles, and there are various ways you could imagine that happening.

Small groups Find the energy and the average occupancy \langle N\rangle of the orbital.

Answer If we want to find \langle N\rangle of the system, we can do that in the usual way:

\langle N\rangle = \sum_i N_i P_i \qquad (7.20)

= \frac{0 + e^{-\beta(\varepsilon-\mu)}}{Z} \qquad (7.21)

= \frac{e^{-\beta(\varepsilon-\mu)}}{1 + e^{-\beta(\varepsilon-\mu)}} \qquad (7.22)

= \frac{1}{1 + e^{\beta(\varepsilon-\mu)}} \qquad (7.23)

Finding the energy is basically the same, since the energy is proportional to the occupancy:

\langle E\rangle = \sum_i E_i P_i \qquad (7.24)

= \frac{0 + \varepsilon e^{-\beta(\varepsilon-\mu)}}{Z} \qquad (7.25)

= \varepsilon\langle N\rangle \qquad (7.26)


The average occupancy of an orbital is called the Fermi-Dirac function, and is normally written as:

f(\varepsilon) = \frac{1}{e^{\beta(\varepsilon-\mu)} + 1} \qquad (7.27)

Whenever you are looking at non-interacting fermions, f(\varepsilon) will be very helpful.

Small groups Sketch the Fermi-Dirac function.
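The sketch is easy to check numerically. Here is a small Python illustration (my own aside, not part of the notes; units and parameter values are arbitrary) of the Fermi-Dirac function from eq. (7.27):

```python
import numpy as np

def f_FD(eps, mu, kT):
    """Fermi-Dirac occupancy, eq. (7.27): 1/(e^{(eps-mu)/kT} + 1)."""
    return 1.0 / (np.exp((eps - mu) / kT) + 1.0)

mu, kT = 1.0, 0.05  # arbitrary units; kT << mu mimics a cold Fermi gas
eps = np.linspace(0.0, 2.0, 201)
occ = f_FD(eps, mu, kT)
# The curve steps from 1 down to 0 over a width of a few kT around mu,
# passing through exactly 1/2 at the Fermi level:
print(f_FD(mu, mu, kT))
```

Evaluating at a few points confirms the step-like shape: the occupancy is essentially 1 well below \mu, exactly one half at \varepsilon = \mu, and essentially 0 well above \mu.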

When talking about electrons, we often refer to the chemical potential \mu as the Fermi level. Kittel also defines the Fermi energy \varepsilon_F as the Fermi level when the temperature is zero, i.e.

εF ≡ µ(T = 0) (7.28)

At zero temperature, all the orbitals with energy less than \varepsilon_F are occupied, while all the orbitals with higher energy are unoccupied.

Actual electrons You might (or might not) be wondering how we can talk about electrons as non-interacting particles. After all, they are charged particles, which naturally repel each other rather strongly. Indeed, a Slater determinant is a terrible approximation for an energy eigenstate of any many-electron system. So why are we bothering talking about orbitals and the Fermi-Dirac distribution that relies on orbitals being an actual thing?

I'm not going to thoroughly explain this, but rather just give a few hints about why what we're doing might be reasonable. The key idea is that what we are really interested in is the behavior of excited states of our many-body system. (The ground state is also very interesting, e.g. if you want to study vibrations or phonons, but not in terms of the thermal behavior of the electrons themselves.) Fortunately, even though the electrons really do interact with one another very strongly, it is possible to construct a picture of elementary excitations that treats these excitations as not interacting with one another. In this kind of picture, what we are talking about are called quasiparticles. These represent an excitation of the many-body state. And it turns out that in many cases (particularly for solids) we can represent a given excited state of the many-body system as a sum of the energies of a bunch of non-interacting quasiparticles. When this breaks down, we invent new names like exciton to represent an excitation in which more than one quasiparticle are interacting.

Bose-Einstein distribution

The same ideas apply to bosons as to fermions: we can treat each orbital as a separate system in the grand canonical ensemble. In this case, however, the occupancy N can have any (non-negative) value.

Small groups Solve for the Gibbs sum for an orbital with energy \varepsilon, and solve for \langle N\rangle for a single orbital occupied by bosons.

Answer The Gibbs sum will be

Z = \sum_{N=0}^{\infty} e^{-\beta(N\varepsilon - \mu N)} \qquad (7.29)

= \sum_{N=0}^{\infty} \left(e^{-\beta(\varepsilon-\mu)}\right)^N \qquad (7.30)

This looks suspiciously like a simple harmonic oscillator. The same geometric summation trick applies, and we see that

Z = 1 + e^{-\beta(\varepsilon-\mu)} + \left(e^{-\beta(\varepsilon-\mu)}\right)^2 + \cdots \qquad (7.32)

e^{-\beta(\varepsilon-\mu)} Z = e^{-\beta(\varepsilon-\mu)} + \left(e^{-\beta(\varepsilon-\mu)}\right)^2 + \cdots \qquad (7.33)

Subtracting the two gives


\left(1 - e^{-\beta(\varepsilon-\mu)}\right) Z = 1 \qquad (7.34)

Z = \frac{1}{1 - e^{-\beta(\varepsilon-\mu)}} \qquad (7.35)

Solving for the average occupancy \langle N\rangle is again more tedious than for a fermion:

\langle N\rangle = \sum_i N_i P_i \qquad (7.36)

= \frac{1}{Z}\sum_{N=0}^{\infty} N e^{-\beta(\varepsilon-\mu)N} \qquad (7.37)

= \frac{1}{Z}\frac{\partial Z}{\partial \mu}\left(\frac{1}{\beta}\right) \qquad (7.38)

= \left(1 - e^{-\beta(\varepsilon-\mu)}\right)\frac{\beta e^{-\beta(\varepsilon-\mu)}}{\left(1 - e^{-\beta(\varepsilon-\mu)}\right)^2}\left(\frac{1}{\beta}\right) \qquad (7.39)

= \frac{e^{-\beta(\varepsilon-\mu)}}{1 - e^{-\beta(\varepsilon-\mu)}} \qquad (7.40)

f(\varepsilon) = \frac{1}{e^{\beta(\varepsilon-\mu)} - 1} \qquad (7.41)

This turns out to be just the Planck distribution we already saw, only with a chemical potential as reference. Why does this bosonic system look like a simple harmonic oscillator? Since the particles are non-interacting, we have the same set of energy eigenvalues, which is to say an equally spaced series of states. This is conversely related to why we can describe solutions to the simple harmonic oscillator as bosonic phonons.

Small groups Sketch the Bose-Einstein distribution function.
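As a numerical sanity check (my own aside, not from the notes; arbitrary units), one can compare the closed form (7.41) against a brute-force truncation of the Gibbs sum (7.29):

```python
import numpy as np

def n_BE(eps, mu, kT):
    """Bose-Einstein occupancy, eq. (7.41); requires eps > mu."""
    return 1.0 / (np.exp((eps - mu) / kT) - 1.0)

def n_BE_direct(eps, mu, kT, nmax=500):
    """<N> computed directly from the (truncated) Gibbs sum, eq. (7.29)."""
    N = np.arange(nmax)
    w = np.exp(-(eps - mu) * N / kT)  # Gibbs factor e^{-beta(N eps - mu N)}
    return (N * w).sum() / w.sum()

print(n_BE(1.0, 0.0, 1.0), n_BE_direct(1.0, 0.0, 1.0))  # both ~0.582
```

The two agree to machine precision once the truncated sum has converged, and the occupancy exceeds 1 as soon as \varepsilon - \mu drops below kT \ln 2, which a fermion orbital can never do.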

This expression, the Bose-Einstein distribution, tells us that at low temperatures, we could end up seeing a lot of particles in low energy states (if there are any eigenvalues below \mu), in contrast to the Fermi-Dirac distribution, which never sees more than one particle per state.

Entropy

Small groups Find the entropy of a single orbital that may hold a fermion.

Answer We begin with the probabilities of the two microstates:

P_0 = \frac{1}{Z} \qquad P_1 = \frac{e^{-\beta(\varepsilon-\mu)}}{Z} \qquad (7.42)

where

Z = 1 + e^{-\beta(\varepsilon-\mu)} \qquad (7.43)

Now we just find the entropy using the Gibbs approach from week 1, S = -k\sum_i P_i \ln P_i.
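The notes leave this computation unfinished, so here is a numerical sketch of my own (not the notes' solution), applying the week-1 Gibbs entropy S = -k \sum_i P_i \ln P_i to the two microstates of the orbital:

```python
import numpy as np

def orbital_entropy(eps, mu, kT):
    """Entropy (in units of k) of a single fermion orbital, from the
    Gibbs entropy S/k = -sum_i P_i ln P_i over its two microstates,
    with P_0 = 1/Z and P_1 = e^{-(eps-mu)/kT}/Z as in eq. (7.42)."""
    boltz = np.exp(-(eps - mu) / kT)
    Z = 1.0 + boltz
    P = np.array([1.0, boltz]) / Z
    return float(-(P * np.log(P)).sum())

# At eps = mu both microstates are equally probable, so S/k = ln 2:
print(orbital_entropy(1.0, 1.0, 0.1))
```

The entropy peaks at k \ln 2 when \varepsilon = \mu (maximal uncertainty about the occupancy) and vanishes when \varepsilon - \mu is many kT in either direction.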

Classical ideal gas

We are now prepared to talk about a gas in the classical limit. In the classical limit, there is no difference in behavior between fermions and bosons. This happens when the probability of finding a particle in a particular orbital is \ll 1, which in turn happens when \beta(\varepsilon - \mu) \gg 1 for all orbitals, i.e. when \mu is very negative. When this is the case, both the Fermi-Dirac distribution and the Bose-Einstein distribution become identical.

f_{FD}(\varepsilon) = \frac{1}{e^{\beta(\varepsilon-\mu)} + 1} \approx e^{-\beta(\varepsilon-\mu)} \qquad (7.44)

f_{BE}(\varepsilon) = \frac{1}{e^{\beta(\varepsilon-\mu)} - 1} \approx e^{-\beta(\varepsilon-\mu)} \qquad (7.45)

In this limit (which is the low-density limit), the system will behave as a classical ideal gas.

A reasonable question is, "what is the chemical potential?" We already handled this, but can now look at this answer in terms of orbitals and the classical distribution function. (Note: classical distribution function is a bit of a misnomer in this context, as it defines how many particles are in a given quantum mechanical orbital.)

\langle N\rangle = \sum_i^{\text{orbitals}} f(\varepsilon_i) \qquad (7.46)

= \sum_i^{\text{orbitals}} e^{-\beta(\varepsilon_i-\mu)} \qquad (7.47)

= e^{\beta\mu}\sum_i^{\text{orbitals}} e^{-\beta\varepsilon_i} \qquad (7.48)

= e^{\beta\mu} Z_1 \qquad (7.49)

N = e^{\beta\mu} n_Q V \qquad (7.50)

where Z_1 is the partition function for a single particle in a box, which we derived a few weeks ago to be n_Q V, where n_Q \equiv \left(\frac{mkT}{2\pi\hbar^2}\right)^{\frac32}. Thus we can once again find the expression we found last week, where

e^{\beta\mu} = \frac{1}{n_Q}\frac{N}{V} = \frac{n}{n_Q} \qquad (7.51)

We can solve for the chemical potential

\mu = kT\left(\ln N - \ln V - \frac{3}{2}\ln(kT) + \frac{3}{2}\ln\left(\frac{2\pi\hbar^2}{m}\right)\right) \qquad (7.52)

Thus it decreases as volume increases or as the temperature increases. We can further find the free energy by integrating the chemical potential. This is again redundant when compared with the approach we have already used to solve for this. Remember that

dF = −SdT − pdV + µdN (7.53)

\mu = \left(\frac{\partial F}{\partial N}\right)_{V,T} \qquad (7.54)

Note that this must be an integral at fixed V and T :

F = \int_0^N \mu\,dN \qquad (7.55)

= \int_0^N kT\left(\ln N - \ln V - \ln n_Q\right)dN \qquad (7.56)

= kT\left(N\ln N - N - N\ln V - N\ln n_Q\right) \qquad (7.57)

= NkT\left(\ln\left(\frac{n}{n_Q}\right) - 1\right) \qquad (7.58)

Small groups Solve for the entropy of the ideal gas (from this free energy).

Answer

-S = \left(\frac{\partial F}{\partial T}\right)_{V,N} \qquad (7.59)

= Nk\left(\ln\left(\frac{n}{n_Q}\right) - 1\right) - \frac{NkT}{n_Q}\frac{dn_Q}{dT} \qquad (7.60)

= Nk\left(\ln\left(\frac{n}{n_Q}\right) - 1\right) - Nk\,\frac{T}{n_Q}\,\frac{3}{2}\,\frac{n_Q}{T} \qquad (7.61)

-S = Nk\left(\ln\left(\frac{n}{n_Q}\right) - \frac{5}{2}\right) \qquad (7.62)

S = Nk\left(\ln\left(\frac{n_Q}{n}\right) + \frac{5}{2}\right) \qquad (7.63)

This expression for the entropy is known as the Sackur-Tetrode equation.
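As a quick plausibility check (my own numerical aside, not from the notes; the atomic mass and conditions are illustrative values), we can evaluate S/Nk for helium gas at room temperature and atmospheric pressure:

```python
import numpy as np

hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J/K

def sackur_tetrode_per_particle(m, T, p):
    """S/(N k) = ln(n_Q/n) + 5/2, eq. (7.63), with n = p/(kT) for an
    ideal gas and n_Q = (m k T / 2 pi hbar^2)^{3/2}."""
    nQ = (m * kB * T / (2 * np.pi * hbar**2)) ** 1.5
    n = p / (kB * T)
    return np.log(nQ / n) + 2.5

m_He = 4.0026 * 1.66054e-27  # helium-4 mass in kg
print(sackur_tetrode_per_particle(m_He, 300.0, 101325.0))  # ~15
```

The result is around 15 in units of Nk; note that n \ll n_Q here, confirming that helium at these conditions sits comfortably in the classical (low-density) regime where this derivation applies.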

Small groups Solve for the pressure of the ideal gas (from the free energy).

Answer

p = -\left(\frac{\partial F}{\partial V}\right)_{T,N} \qquad (7.64)

= \frac{NkT}{V} \qquad (7.65)

That was pretty easy, once we saw that n_Q was independent of volume. This expression is known as the ideal gas law.

Small groups Solve for the internal energy of the ideal gas.


Answer

U = F + TS \qquad (7.66)

= \frac{3}{2}NkT \qquad (7.67)

Also pretty familiar.

Small groups Solve for the heat capacity at constant volume of the ideal gas.

Answer

C_V = \left(\frac{\partial U}{\partial T}\right)_{V,N} \qquad (7.68)

= T\left(\frac{\partial S}{\partial T}\right)_{V,N} \qquad (7.69)

= \frac{3}{2}Nk \qquad (7.70)

This one is relatively easy.

Small groups Solve for the heat capacity at constant pressure of the ideal gas.

Answer

C_p = T\left(\frac{\partial S}{\partial T}\right)_{p,N} \qquad (7.71)

This one requires one (small) step more. We have to convert the volume into a pressure in the free energy expression.

C_p = T\left(\frac{\partial\, Nk\left(\ln\left(\frac{n_Q}{n}\right) + \frac{5}{2}\right)}{\partial T}\right)_{p,N} \qquad (7.73)

= NkT\left(\frac{\partial\left(\ln\left(\frac{n_Q}{n}\right) + \frac{5}{2}\right)}{\partial T}\right)_{p,N} \qquad (7.74)

= NkT\left(\frac{\partial \ln\left(\frac{V n_Q}{N}\right)}{\partial T}\right)_{p,N} \qquad (7.75)

= NkT\left(\frac{\partial \ln\left(\frac{NkT}{p}\,\frac{n_Q}{N}\right)}{\partial T}\right)_{p,N} \qquad (7.76)

At this point we peek inside n_Q and see that n_Q \propto T^{\frac32}, and can complete the derivative:

C_p = \frac{5}{2}Nk \qquad (7.77)

This has been a series of practice computations involving the ideal gas. The results are useful for some of your homework, and the process of finding these properties is something you will need to know for the final exam. Ultimately, pretty much everything comes down to summing and integrating to find partition functions, and then taking derivatives (and occasional integrals) to find everything else.

Homework for week 6 (PDF)

1. Derivative of Fermi-Dirac function (K&K 6.1) Show that -\frac{\partial f}{\partial \varepsilon} evaluated at the Fermi level \varepsilon = \mu has the value \frac{1}{4kT}. Thus the lower the temperature, the steeper the slope of the Fermi-Dirac function.

2. Symmetry of filled and vacant orbitals (K&K 6.2) Show that

f(µ+ δ) = 1− f(µ− δ) (7.78)

Thus the probability that an orbital \delta above the Fermi level is occupied is equal to the probability that an orbital \delta below the Fermi level is vacant. A vacant orbital is sometimes known as a hole.

3. Distribution function for double occupancy statistics (K&K 6.3) Let us imagine a new mechanics in which the allowed occupancies of an orbital are 0, 1, and 2. The values of the energy associated with these occupancies are assumed to be 0, \varepsilon, and 2\varepsilon, respectively.

a) Derive an expression for the ensemble average occupancy \langle N\rangle, when the system composed of this orbital is in thermal and diffusive contact with a reservoir at temperature T and chemical potential \mu.

b) Return now to the usual quantum mechanics, and derive an expression for the ensemble average occupancy of an energy level which is doubly degenerate; that is, two orbitals have the identical energy \varepsilon. If both orbitals are occupied the total energy is 2\varepsilon. How does this differ from part (a)?

4. Entropy of mixing (Modified from K&K 6.6) Suppose that a system of N atoms of type A is placed in diffusive contact with a system of N atoms of type B at the same temperature and volume.

a) Show that after diffusive equilibrium is reached the total entropy is increased by 2Nk \ln 2. The entropy increase 2Nk \ln 2 is known as the entropy of mixing.

b) If the atoms are identical (A = B), show that there is no increase in entropy when diffusive contact is established. The difference has been called the Gibbs paradox.

c) Since the Helmholtz free energy is lower for the mixed AB than for the separated A and B, it should be possible to extract work from the mixing process. Construct a process that could extract work as the two gases are mixed at fixed temperature. You will probably need to use walls that are permeable to one gas but not the other.

Note This course has not yet covered work, but it was covered in Energy and Entropy, so you may need to stretch your memory to finish part (c).

5. Ideal gas in two dimensions (K&K 6.12)

a) Find the chemical potential of an ideal monatomic gas in two dimensions, with N atoms confined to a square of area A = L^2. The spin is zero.

b) Find an expression for the energy U of the gas.

c) Find an expression for the entropy \sigma. The temperature is kT.

6. Ideal gas calculations (K&K 6.14) Consider one mole of an ideal monatomic gas at 300 K and 1 atm. First, let the gas expand isothermally and reversibly to twice the initial volume; second, let this be followed by an isentropic expansion from twice to four times the original volume.

a) How much heat (in joules) is added to the gas in each of these two processes?

b) What is the temperature at the end of the second process?

c) Suppose the first process is replaced by an irreversible expansion into a vacuum, to a total volume twice the initial volume. What is the increase of entropy in the irreversible expansion, in J/K?


Chapter 8

Week 7: Fermi and Bose gases (K&K 7, Schroeder 7)

This week in 2019, I have been less on top of things in terms of putting up notes. I'm adding here bits of what we did in class, and leaving below the more complete notes from the previous year, which I did not quite follow.

I began by having you (the students) solve for N(\mu, T) for the Fermi or Bose gas using their respective distribution functions. This was a pain, and was meant to lay the groundwork for the density of states, which is the main topic this week. We ended with an equation like

N = (2s+1)\frac{\pi}{2}\int_0^\infty f\left(\frac{\hbar^2}{2m}\frac{\pi^2}{L^2}n^2\right) n^2\,dn \qquad (8.1)

Sadly, a change of variables didn't make this a beautiful dimensionless integral, because the presence of \mu in the distribution function messed things up. Nevertheless, we can simplify this integral with a change of variables in a way that motivates the density of states:

\varepsilon = \frac{\hbar^2}{2m}\frac{\pi^2}{L^2}n^2 \qquad d\varepsilon = \frac{\hbar^2}{2m}\frac{\pi^2}{L^2}\,2n\,dn \qquad (8.2)

This doesn't look quite as pretty as some substitutions we have made in the past, but has the advantage that \varepsilon is something that we can reason about physically, since it is just the orbital energy.

N = (2s+1)\frac{\pi}{2}\int_0^\infty f(\varepsilon)\left(\frac{2mL^2}{\hbar^2\pi^2}\right)^{\frac12}\sqrt{\varepsilon}\;\frac{1}{2}\,\frac{2mL^2}{\hbar^2\pi^2}\,d\varepsilon \qquad (8.3)

= \int_0^\infty f(\varepsilon)\,(2s+1)\frac{\pi}{4}\left(\frac{2mL^2}{\hbar^2\pi^2}\right)^{\frac32}\sqrt{\varepsilon}\,d\varepsilon \qquad (8.4)

This transformation from quantum number to energy is one that we could do for any set of orbital energies: for a relativistic particle (see one of your homework problems), for phonons in matter, for light, for particles confined in two dimensions, for real crystals. . . anything. This makes it worthwhile to give this special "Jacobian" a name: the density of states. Thus the density of states for an ideal gas is:

D(\varepsilon) = (2s+1)\frac{\pi}{4}\left(\frac{2mL^2}{\hbar^2\pi^2}\right)^{\frac32}\sqrt{\varepsilon} \qquad (8.5)

= (2s+1)\frac{V}{4\pi^2}\left(\frac{2m}{\hbar^2}\right)^{\frac32}\sqrt{\varepsilon} \qquad (8.6)

You can see that the density of states is an extensive quantity, since it is proportional to V. This is necessary if you are going to use the density of states to predict extensive properties. However, it is also not uncommon to refer to D(\varepsilon)/V as the density of states, particularly when describing a material, since the intensive ratio is independent of volume.

Note Mathematically we can define. . .

Small groups Sketch this density of states for an ordinary 3D ideal gas.

Figure 8.1: Density of states for a 3D ideal gas

In homework you will solve for the density of states for a 2D gas. This would be suitable for most planar sheet materials. A 2D relativistic gas would give a density of states applicable to graphene.

To solve for properties of any system of non-interacting particles once we have the density of states, we just need to


Chapter 9

Notes from last year

This week we will look at Fermi and Bose gases. These consist of noninteracting fermions or bosons. There is no point studying these at high temperatures and/or low densities, since that is just where they are identical to the classical ideal gas, which we covered last week. So we'll be low-temperature all week. What happens at low temperatures, and where do we see these gases in real life?

The Fermi gas is most widely seen in metals and semiconductors. In both cases, the electrons (or possibly holes) can be sufficiently dense that "low temperature" corresponds to room temperature or even far above. Now, you might wonder in what insane world it makes sense to think of the electrons in a metal as "noninteracting." If so, you could read my little note about "actual electrons" towards the end of the section on the Fermi-Dirac distribution. In any case, it is reasonable and useful to treat metals as a non-interacting Fermi gas. Room temperature is pretty low, as it turns out, from the perspective of the electrons in a metal, and it's not hard to get things colder than room temperature.

Bose gases at effectively low temperatures are less commonly found, and thus in some ways are more cool. Partly this is because there are fewer boson particles. You need to look at atoms with integer spin, such as 4He. The "new" quantum thing that Bose gases do is to condense at low temperatures. This condensate is similar to a superfluid, but not the same thing. It is also analogous to superconductivity, but again, not the same thing. The first actual Bose-Einstein condensate wasn't formed until 1995, out of rubidium atoms at 170 nanokelvins. So "low temperature" in this case was actually pretty chilly.

Density of (orbital) states

We have found ourselves often writing summations over all the orbitals, such as

N = \sum_{n_x}\sum_{n_y}\sum_{n_z} f(\varepsilon_{n_xn_yn_z}) \qquad (9.1)

= \int\!\!\int\!\!\int f(\varepsilon_{n_xn_yn_z})\,dn_x\,dn_y\,dn_z \qquad (9.2)

= \int_0^\infty f(\varepsilon(n))\,4\pi n^2\,dn \qquad (9.3)

Then we make the integral dimensionless, etc. This can be tedious to do over and over again. In the classical limit we can often use a derivative trick to write an answer as a derivative of a sum we have already solved, but that doesn't work at low temperatures (i.e. the quantum limit). There is another approach, which is to solve for the density of states and then use that. (Note that while it is called "density of states" it is more accurately described as a density of orbitals, since it refers to the solutions to the one-particle problem.)

The density of states is the number of orbitals per unit energy at a given energy. So basically it does two things for us. First, it turns the 3D integral into a 1D integral. Secondly, it converts from an "n" integral into an energy integral. This isn't as nice as a dimensionless integral, but we can still do that ourselves later.

We use a density of states by converting

\sum_{n_x}\sum_{n_y}\sum_{n_z} F(\varepsilon_{n_xn_yn_z}) = \int d\varepsilon\,F(\varepsilon)\,D(\varepsilon) \qquad (9.4)

where F is any function of the orbital energy \varepsilon. You can see why it is convenient, particularly because the density of states is often itself a very simple function.

Finding the density of states

Kittel gives a method for finding the density of states which involves first integrating to find the number of states under a given energy \varepsilon, and then taking a derivative of that. This is a perfectly fine approach, but I think a simpler method involves just using a Dirac \delta-function:

D(\varepsilon) = \sum_i^{\text{all orbitals}} \delta(\varepsilon_i - \varepsilon) \qquad (9.5)

where you do need to be certain to turn the summation correctly into an integral before making use of the \delta-function.

Small groups Solve for the density of states of an electron gas (or of any other spin-\frac12 gas, which will have the same expression). You need to know that

\varepsilon_{n_xn_yn_z} = \frac{\hbar^2}{2m}\frac{\pi^2}{L^2}\left(n_x^2 + n_y^2 + n_z^2\right) \qquad (9.6)

where n_x and the others range from 1 to \infty. This corresponds to hard-wall boundary conditions, since we're putting a half-wavelength in the box. You should also keep in mind that each combination of n_x, n_y, and n_z will correspond to two orbitals, one for each possible spin state.

I should also perhaps warn you that when integrating \delta-functions you will always want to perform a change of variables such that the integration variable is present inside the \delta-function and is not multiplied by anything.

Answer

D(\varepsilon) = 2\sum_{n_x=1}^{\infty}\sum_{n_y}\sum_{n_z}\delta\left(\frac{\hbar^2}{2m}\frac{\pi^2}{L^2}(n_x^2 + n_y^2 + n_z^2) - \varepsilon\right) \qquad (9.7)

= 2\int\!\!\int\!\!\int_0^\infty \delta\left(\frac{\hbar^2}{2m}\frac{\pi^2}{L^2}(n_x^2 + n_y^2 + n_z^2) - \varepsilon\right)dn_x\,dn_y\,dn_z \qquad (9.8)

= \frac{2}{8}\int\!\!\int\!\!\int_{-\infty}^{\infty} \delta\left(\frac{\hbar^2}{2m}\frac{\pi^2}{L^2}(n_x^2 + n_y^2 + n_z^2) - \varepsilon\right)dn_x\,dn_y\,dn_z \qquad (9.9)

At this point I have converted to an integral over all "space". Now we'll switch into spherical coordinates before doing a change of variables. Actually, it is useful (although I don't expect students to come up with this) to switch into k-space before going into an energy integral: \vec k \equiv \frac{\pi}{L}\vec n.

D(\varepsilon) = 2\left(\frac{L}{2\pi}\right)^3\int\!\!\int\!\!\int_{-\infty}^{\infty} \delta\left(\frac{\hbar^2k^2}{2m} - \varepsilon\right)d^3k \qquad (9.10)

I hope that \vec k as a vector feels more comfortable to you than a vector of quantum numbers \vec n. In any case, we now want to do a change of variables into spherical coordinates:

D(\varepsilon) = 2\left(\frac{L}{2\pi}\right)^3\int_0^\infty \delta\left(\frac{\hbar^2k^2}{2m} - \varepsilon\right)4\pi k^2\,dk \qquad (9.11)


At this point I'd like to pause and point out that an integral over momenta will always end up looking basically like this (except in some cases the \varepsilon(k) will be different), and this is an acceptable starting point for your solutions on homework or an exam. If we have fewer dimensions, we may have an area or line element rather than a volume element in k-space, and we would have fewer factors of \frac{L}{2\pi}. Now we can do a change of variables into an energy variable:

\varepsilon = \frac{\hbar^2k^2}{2m} \qquad d\varepsilon = \frac{\hbar^2 k}{m}\,dk \qquad (9.12)

k^2 = \frac{2m}{\hbar^2}\varepsilon \qquad k\,dk = \frac{m}{\hbar^2}\,d\varepsilon \qquad (9.13)

And now putting all these things into our integral we find

D(\varepsilon) = \frac{V}{\pi^2}\int_0^\infty \delta(\varepsilon' - \varepsilon)\sqrt{\frac{2m}{\hbar^2}\varepsilon'}\,\frac{m}{\hbar^2}\,d\varepsilon' \qquad (9.14)

= \frac{V}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{\frac32}\int_0^\infty \delta(\varepsilon' - \varepsilon)\,\varepsilon'^{\frac12}\,d\varepsilon' \qquad (9.15)

And now our integral is in a form where we can finally make use of the delta function! If we had not transformed it in this way (as I suspect is a common error), we would get something with incorrect dimensions!

D(\varepsilon) = \frac{V}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{\frac32}\varepsilon^{\frac12} \qquad (9.16)

And that is the density of states for a non-relativistic particle in 3 dimensions. For your homework, you will get to solve for the properties of a highly relativistic particle, which has the same density of states as a photon (apart from a factor of two due to polarization, and any factor due to spin).

Common error A very common error, which I made myself when writing these notes, is to forget the factor of two due to spin. Of course, if you have a spin-\frac32 fermion, it would be a factor of four. One advantage of using a density of states is that it already includes this factor, so you no longer need to remember it!

Using the density of states

Once we have the density of states, we can solve for various interesting properties of quantum (or classical) gases. The easiest thing to do is a Fermi gas at zero temperature, since the Fermi-Dirac function turns into a step function in that limit. We can start by solving for the Fermi energy of a Fermi gas, which is equal to the chemical potential when the temperature is zero. We do this by solving for the number assuming we know \varepsilon_F, and then backwards solving for \varepsilon_F. I will do a couple of extra steps here to remind you how this relates to what we did last week.

N = \sum_i^{\text{all orbitals}} f(\varepsilon_i) \qquad (9.17)

= \sum_i^{\text{all orbitals}} \frac{1}{e^{\beta(\varepsilon_i-\mu)} + 1} \qquad (9.18)

= \int_0^\infty D(\varepsilon)\,\frac{1}{e^{\beta(\varepsilon-\mu)} + 1}\,d\varepsilon \qquad (9.19)

= \int_0^{\varepsilon_F} D(\varepsilon)\,d\varepsilon \qquad (9.20)

In the last step, I made the assumption that T = 0, so I could turn the Fermi-Dirac function into a step function, which simply changes the bounds of the integral. It is all right to start with this assumption when doing computations at zero temperature. Now I'll put in the density of states.


N = \int_0^{\varepsilon_F} \frac{V}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{\frac32}\varepsilon^{\frac12}\,d\varepsilon \qquad (9.21)

= \frac{V}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{\frac32}\int_0^{\varepsilon_F}\varepsilon^{\frac12}\,d\varepsilon \qquad (9.22)

= \frac{V}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{\frac32}\frac{2}{3}\varepsilon_F^{\frac32} \qquad (9.23)

Now we can just solve for the Fermi energy!

\varepsilon_F = \left(\frac{N}{V}3\pi^2\left(\frac{\hbar^2}{2m}\right)^{\frac32}\right)^{\frac23} \qquad (9.24)

= \frac{\hbar^2}{2m}\left(\frac{N}{V}3\pi^2\right)^{\frac23} \qquad (9.25)

This is the energy of the highest occupied orbital in the gas, when the temperature is zero. As you will see, many of the properties of a metal (which is the Fermi gas that you use on an everyday basis) depend fundamentally on the Fermi energy. For this reason, we also like to define other properties of electrons at the Fermi energy: momentum, velocity (technically speed, but it is called the Fermi velocity), and even "temperature".
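To make this concrete (a numerical aside of mine; the electron density quoted for copper is a typical textbook value, assumed here rather than taken from these notes), eq. (9.25) gives a Fermi energy of several electron-volts for a real metal:

```python
import numpy as np

hbar = 1.054571817e-34  # J s
m_e = 9.1093837e-31     # electron mass, kg
eV = 1.602176634e-19    # J per electron-volt

def fermi_energy(n):
    """eps_F = (hbar^2 / 2m) (3 pi^2 n)^{2/3}, eq. (9.25), with n in m^-3."""
    return hbar**2 / (2 * m_e) * (3 * np.pi**2 * n) ** (2.0 / 3.0)

n_Cu = 8.47e28  # conduction electron density of copper, m^-3 (assumed value)
print(fermi_energy(n_Cu) / eV)  # ~7 eV
```

The corresponding Fermi temperature \varepsilon_F/k_B is of order 10^4 to 10^5 K, which is why room temperature counts as "cold" for the electrons in a metal.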

k_F = \left(\frac{N}{V}3\pi^2\right)^{\frac13} \qquad (9.26)

p_F = \hbar k_F \qquad (9.27)

= \hbar\left(\frac{N}{V}3\pi^2\right)^{\frac13} \qquad (9.28)

v_F = \frac{p_F}{m} \qquad (9.29)

= \frac{\hbar}{m}\left(\frac{N}{V}3\pi^2\right)^{\frac13} \qquad (9.30)

T_F = \frac{\varepsilon_F}{k_B} \qquad (9.31)

The text contains a table of properties of metals at the Fermi energy for a number of simple metals. I don't expect you to remember them, but it's worth having them down somewhere so you can check the reasonableness of an answer from time to time. Basically, they all come down to

\varepsilon_F \sim 4\,\text{eV (with }\sim\times 2\text{ variation)} \qquad (9.32)

k_F \sim 10^8\,\text{cm}^{-1} \qquad (9.33)

v_F \sim 10^8\,\text{cm s}^{-1} \qquad (9.34)

Biographical sidenote My PhD advisor insisted that I memorize these numbers (and a few more) prior to my oral exam. He said that experimentalists think that theorists don't know anything about the real world, and hence it is important to be able to estimate things. Sure enough, on my exam I had to estimate the frequency of radiation that passes unabsorbed through a typical superconductor (which is in the microwave band).

Units sidenote My advisor also insisted that I memorize these results in cgs units rather than SI (i.e. mks) units, since that is what any faculty would be comfortable with. You may have noticed that Kittel preferentially uses cgs units, although not exclusively. Berkeley (where Kittel taught) was one of the last hotbeds of cgs units, and as a result all of my physics courses used Gaussian units (which is synonymous with cgs).

Before we move on, it is worth showing you how we can simplify the density of states now that we know what the Fermi energy is:

D(\varepsilon) = \frac{V}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{\frac32}\varepsilon^{\frac12} \qquad (9.35)

= \frac{3}{2}N\varepsilon_F^{-\frac32}\varepsilon^{\frac12} \qquad (9.36)

There is nothing particularly deep here, but this is somewhat more compact, and often the factors of \varepsilon_F will end up canceling out.

Small groups Solve for the internal energy at zero temperature of a Fermi gas.


Answer We just need to do a different integral.

U = \int_0^{\varepsilon_F} D(\varepsilon)\,\varepsilon\,d\varepsilon \qquad (9.37)

= \frac{3}{2}N\varepsilon_F^{-\frac32}\int_0^{\varepsilon_F} \varepsilon^{\frac12}\,\varepsilon\,d\varepsilon \qquad (9.38)

= \frac{3}{2}N\varepsilon_F^{-\frac32}\,\frac{2}{5}\varepsilon_F^{\frac52} \qquad (9.39)

= \frac{3}{5}N\varepsilon_F \qquad (9.40)

We generally find when looking at Fermi gases that things with dimensions of energy end up proportional to the Fermi energy. The factor of N we could also have predicted, in order to end up with an extensive internal energy.

Fermi gas at finite temperature

The Fermi gas is more exciting (and more. . . thermal?) when the temperature is not precisely zero. Let's start with the heat capacity at low temperatures, which is one area where metals inherently differ from semiconductors and insulators.

We are looking at a metal with n electrons per unit volume, at temperature T, where kT \ll \varepsilon_F. We are looking to find the heat capacity C_V.

Small whiteboards How would you approach this?

Answers Remember that

C_V = \left(\frac{\partial U}{\partial T}\right)_{V,N} = T\left(\frac{\partial S}{\partial T}\right)_{V,N} \qquad (9.41)

which means that we need either S or U in order to find the heat capacity at fixed volume. We could do either, but given what we know about the electron gas, U is easier to find.

We can find U by integrating with the density of states and the Fermi-Dirac distribution. This is a new variant of our usual

U = Σ_{all µstates i} P_i E_i    (9.42)

In this case, we will instead sum over all orbitals the energy contribution of each orbital, again in effect treating each orbital as a separate system.

U = Σ_{all orbitals i} f(ε_i) ε_i    (9.43)

  = ∫ ε f(ε) D(ε) dε    (9.44)

Remember that for an electron gas

D(ε) = (V/(2π)²) (2m/ħ²)^(3/2) ε^(1/2)    (9.45)

     = (3/2) (N/εF^(3/2)) √ε    (9.46)

Sometimes one or the other of these may be more convenient.

Hand-waving version of heat capacity

We can begin with a hand-waving version of solving for the heat capacity. We look at the Fermi-Dirac function at both finite and zero temperature, and we can note that the red and blue shaded areas, representing the probability of orbitals being unoccupied below εF and the probability of excited orbitals above εF being occupied, are equal (this was your homework).

To find the energy, of course, we need the Fermi-Dirac function times the density of states. You might think that the red and blue areas will now be unequal, since we are multiplying the blue region by a larger density of states than the red region. However, provided the number of electrons is fixed (as is usual), the chemical potential must shift such that the two areas are equal.
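We can watch this shift happen numerically. The following Python sketch (my own illustration, in units where εF = N = 1; the kT value and the energy cutoff of 20 are arbitrary choices) root-finds the chemical potential that keeps ∫ f(ε) D(ε) dε fixed at finite temperature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Units where eF = 1 and N = 1, so D(eps) = (3/2) sqrt(eps).
def D(e):
    return 1.5 * np.sqrt(e)

def f(e, mu, kT):
    """Fermi-Dirac distribution."""
    return 1.0 / (np.exp((e - mu) / kT) + 1.0)

def number(mu, kT):
    """Total electron number at chemical potential mu and temperature kT."""
    n, _ = quad(lambda e: D(e) * f(e, mu, kT), 0, 20.0)
    return n

kT = 0.05  # kT << eF
mu = brentq(lambda m: number(m, kT) - 1.0, 0.5, 1.5)
print(mu)  # slightly below eF = 1: mu shifts down to keep N fixed
```

The shift is tiny (of order (kT/εF)², which is the Sommerfeld-expansion result), consistent with treating µ ≈ εF in what follows.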


Figure 9.1: Fermi function for zero and finite temperature

Figure 9.2: Fermi function times density of states

So how do we find the heat capacity? We can work out a rough equation for the internal energy change (relative to zero), and then take a derivative. Now the width of the red and blue regions is ∼ kT. We know this from your first homework problem last week, where you showed that the slope at the chemical potential is 1/(4kT). A shallower slope (higher temperature) means a proportionately wider region that is neither zero nor one.

U(T) − U(0) ∼ (# electrons excited)(∆energy)    (9.47)

            ∼ (D(εF) kT)(kT)    (9.48)

CV = (∂U/∂T)_{N,V}    (9.49)

   ∼ D(εF) k² T    (9.50)

which tells us that the heat capacity vanishes at low temperatures, and is proportional to T, which is a stark contrast to insulators, for which CV ∝ T³ as predicted by the Debye model.

Heat capacity without so much waving

To find the heat capacity more carefully, we could set up this integral, noting that the Fermi-Dirac function is the only place where temperature dependence arises:

CV = (∂U/∂T)_{N,V}    (9.51)

   = ∫₀^∞ ε D(ε) (∂f/∂T) dε    (9.52)

   = ∫₀^∞ ε D(ε) [(ε − εF) e^{β(ε−εF)} / (e^{β(ε−εF)} + 1)²] (1/(kT²)) dε    (9.53)

where in the last stage I assumed that the chemical potential would not be changing significantly over our (small) temperature range. An interesting question is what the shape of ∂f/∂T is. The exponential on top causes it to drop exponentially when ε − εF ≫ kT, while the exponential on the bottom causes it to drop at low energies where εF − ε ≫ kT. This makes it sharply peaked, provided kT ≪ εF, which can justify evaluating the density of states at the Fermi energy. We can also for the same reason set the bounds on the integral to be all energies:

CV ≈ (D(εF)/(kT²)) ∫_{−∞}^{∞} ε (ε − εF) e^{β(ε−εF)} / (e^{β(ε−εF)} + 1)² dε    (9.54)

Now we have an integral that we would love to make dimensionless, but which has an annoying ε that does not have εF subtracted from it. Let's look at evenness and oddness. The ratio does not look either very even or very odd, but we can make it do so by multiplying top and bottom by e^{−β(ε−εF)}.

CV = (D(εF)/(kT²)) ∫_{−∞}^{∞} ε (ε − εF) [1 / ((e^{β(ε−εF)} + 1)(e^{−β(ε−εF)} + 1))] dε    (9.55)

   = (D(εF)/(kT²)) ∫_{−∞}^{∞} ε (ε − εF) [1 / (e^{β(ε−εF)} + e^{−β(ε−εF)} + 2)] dε    (9.56)

Now we can do a change of variables

ξ = β(ε − εF)        dξ = β dε    (9.57)

This makes our integral almost dimensionless:

CV = (D(εF)/(kT²)) ∫_{−∞}^{∞} (kTξ + εF)(kTξ) [1 / (e^ξ + e^{−ξ} + 2)] kT dξ    (9.58)

   = D(εF) k² T ∫_{−∞}^{∞} ξ² / (e^ξ + e^{−ξ} + 2) dξ    (9.59)

where the εF term dropped out because it gives an odd integrand, which integrates to zero. So here is our answer, expressed in terms of a dimensionless integral. Wolfram alpha tells me this integral is π²/3.
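The quoted value is easy to check numerically; this Python sketch (my addition, not part of the notes) evaluates the dimensionless integral:

```python
import numpy as np
from scipy.integrate import quad

# The dimensionless integral from the heat capacity derivation.
# The integrand dies off exponentially, so finite bounds suffice.
integrand = lambda x: x**2 / (np.exp(x) + np.exp(-x) + 2.0)
val, _ = quad(integrand, -50, 50)

print(val)            # ≈ 3.2899
print(np.pi**2 / 3)   # ≈ 3.2899
```

With this value, the low-temperature electronic heat capacity is CV = (π²/3) D(εF) k² T, linear in T as claimed.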

Bose gas

f(ε) = 1 / (e^{β(ε−µ)} − 1)    (9.60)

Figure 9.3: Bose function for finite temperature

The divergence of the Bose-Einstein distribution means that µ must always be less than the minimum orbital energy, i.e. µ < 0. As before, the total number is given by

N = ∫₀^∞ f(ε) D(ε) dε    (9.61)

  = (V · · ·) ∫₀^∞ f(ε) ε^(1/2) dε    (9.62)

where in the second step I assumed a three-dimensional gas, and omitted the various constants in the density of states for brevity.

What happens when kT gets low enough that quantum mechanics matters, i.e. such that nQ ≈ n? Alternatively, we can ask ourselves what happens as N becomes large, such that N/V ≥ nQ? If we fix T, we can increase N only by shifting µ to the right, which increases f(ε) at every energy. But there is a limit! If we try to make N too big, we will find that even


if we set µ = 0, we don't get enough particles. What does that mean? Surely there cannot be a maximum value to N?!

Figure 9.4: Bose function times density of states

No, the D(ε) integral we are using accounts for all of the states except for one! The ground state doesn't show up, because k = 0, so ε = 0 (or if you like non-periodic boundary conditions, when nx = ny = nz = 1, and ε = (ħ²/2m)(π/L)²). This state isn't counted in the integral, and all the extra bosons end up in it. If you like, µ doesn't become zero, but rather approaches the ground state energy. And as it does, the number of particles occupying the ground state diverges. We can find the number of particles in the ground state by subtracting the number of particles in all the excited states from the total number of particles:

Nground = N − ∫₀^∞ f(ε, µ = 0) D(ε) dε    (9.63)

When Nground > 0 according to this formula, we say that we have a Bose-Einstein condensate. The temperature at which this transition happens (for a fixed density) is the Bose-Einstein transition temperature. Your last homework this week is to solve for this transition temperature.
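The dimensionless integral that appears when you solve that homework can be evaluated numerically. This sketch (my own, assuming scipy is available) confirms it equals Γ(3/2)ζ(3/2), where ζ(3/2) ≈ 2.612 is the infinite sum quoted in problem 5 below:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, zeta

# Integral over excited-state occupations at mu = 0, in the
# dimensionless variable xi = beta*eps.
val, _ = quad(lambda x: np.sqrt(x) / np.expm1(x), 0, 60)

print(val)                      # ≈ 2.315
print(gamma(1.5) * zeta(1.5))   # Gamma(3/2)*zeta(3/2) ≈ 0.8862 * 2.612 ≈ 2.315
```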

Homework for week 7 (PDF)

1. Energy of a relativistic Fermi gas (Slightly modified from K&K 7.2) For electrons with an energy ε ≫ mc², where m is the mass of the electron, the energy is given by ε ≈ pc where p is the momentum. For electrons in a cube of volume V = L³ the momentum takes the same values as for a non-relativistic particle in a box.

   a) Show that in this extreme relativistic limit the Fermi energy of a gas of N electrons is given by

      εF = ħπc (3n/π)^(1/3)    (9.64)

      where n ≡ N/V is the number density.

   b) Show that the total energy of the ground state of the gas is

      U₀ = (3/4) N εF    (9.65)

2. Pressure and entropy of degenerate Fermi gas (K&K 7.3)

   a) Show that a Fermi electron gas in the ground state exerts a pressure

      p = ((3π²)^(2/3)/5) (ħ²/m) (N/V)^(5/3)    (9.66)

      In a uniform decrease of the volume of a cube every orbital has its energy raised: The energy of each orbital is proportional to 1/L² or to 1/V^(2/3).

   b) Find an expression for the entropy of a Fermi electron gas in the region kT ≪ εF. Notice that S → 0 as T → 0.

3. Mass-radius relationship for white dwarfs (Modified from K&K 7.6) Consider a white dwarf of mass M and radius R. The dwarf consists of ionized hydrogen, thus a bunch of free electrons


and protons, each of which are fermions. Let the electrons be degenerate but nonrelativistic; the protons are nondegenerate.

   a) Show that the order of magnitude of the gravitational self-energy is −GM²/R, where G is the gravitational constant. (If the mass density is constant within the sphere of radius R, the exact potential energy is −(3/5)GM²/R.)

   b) Show that the order of magnitude of the kinetic energy of the electrons in the ground state is

      ħ²N^(5/3)/(mR²) ≈ ħ²M^(5/3)/(m M_H^(5/3) R²)    (9.67)

      where m is the mass of an electron and M_H is the mass of a proton.

   c) Show that if the gravitational and kinetic energies are of the same order of magnitude (as required by the virial theorem of mechanics), M^(1/3) R ≈ 10²⁰ g^(1/3) cm.

   d) If the mass is equal to that of the Sun (2 × 10³³ g), what is the density of the white dwarf?

   e) It is believed that pulsars are stars composed of a cold degenerate gas of neutrons (i.e. neutron stars). Show that for a neutron star M^(1/3) R ≈ 10¹⁷ g^(1/3) cm. What is the value of the radius for a neutron star with a mass equal to that of the Sun? Express the result in km.

4. Fluctuations in a Fermi gas (K&K 7.11) Show for a single orbital of a fermion system that

   ⟨(∆N)²⟩ = ⟨N⟩(1 − ⟨N⟩)    (9.68)

   if ⟨N⟩ is the average number of fermions in that orbital. Notice that the fluctuation vanishes for orbitals with energies far enough from the chemical potential µ so that ⟨N⟩ = 1 or ⟨N⟩ = 0.

5. Einstein condensation temperature (Roundy problem) Starting from the density of free particle orbitals per unit energy range

   D(ε) = (V/4π²) (2M/ħ²)^(3/2) ε^(1/2)    (9.69)

   show that the lowest temperature at which the total number of atoms in excited states is equal to the total number of atoms is

   TE = (1/kB) (ħ²/2M) [ (N/V) · 4π² / ∫₀^∞ (√ξ/(e^ξ − 1)) dξ ]^(2/3)    (9.70)

   The infinite sum may be numerically evaluated to be 2.612. Note that the number derived by integrating over the density of states is the number of excited atoms, since the density of states includes all the states except the ground state.

   Note: This problem is solved in the text itself. I intend to discuss Bose-Einstein condensation in class, but will not derive this result.


Chapter 10

Week 8: Work, heat, and cycles (K&K 8, Schroeder 4)

This week we will be zooming through chapter 8 of Kittel and Kroemer. Chapter 8 covers heat and work, which you learned about during Energy and Entropy. Hopefully this will be a bit of review and catch-up time, before we move on to phase transitions.

Heat and work

As we reviewed in week 1, heat and work for a quasistatic process are given by

Q = ∫ T dS    (10.1)

W = −∫ p dV    (10.2)

But we can often make use of the First Law in order to avoid computing both of these (if we know how to find the internal energy):

∆U = Q + W    (10.3)

Carnot cycle

We have a monatomic ideal gas, and you can use any of its properties that we have worked out in class. We can begin with what you saw in Energy and Entropy

pV = NkT    (10.4)

U = (3/2) NkT    (10.5)

and we can add to that the results from this class:

S = Nk (ln(nQ/n) + 5/2)    (10.6)

F = NkT (ln(n/nQ) − 1)    (10.7)

n = nQ e^{βµ}    (10.8)

nQ ≡ (mkT/2πħ²)^(3/2)    (10.9)

Let us consider a simple cycle in which we start with the gas at temperature TC.

1. Adiabatically compress the gas until it reaches temperature TH.

2. Expand the gas to twice its original volume at fixed temperature TH.

3. Expand the gas at fixed entropy until its temperature reaches TC.

71

Page 73: Thermal Physicssites.science.oregonstate.edu/~roundyd/COURSES/ph441/thermal-physics.pdf · Wecareaboutextensivityandintensivityforseveral reasons. In one sense it functions like dimensions

4. Finally go back to the original volume at fixed temperature TC.

Small groups Solve for the heat and work on each of these steps. In addition find the total work done.

Answer We can solve this problem most easily by working out the heat at each step.

1. Since the process is adiabatic, Q₁ = 0. To find the work, we just need to know ∆U = (3/2)Nk∆T. So the work must be W = (3/2)Nk(TH − TC).

2. Now we are increasing the volume, which will change the entropy. Since the temperature is fixed, Q = T∆S, and we can find ∆S easily enough from the Sackur-Tetrode entropy: ∆S = Nk ln 2. Since the internal energy doesn't change, the heat and work are opposite: Q = −W = NkTH ln 2.

3. Now we are again not changing the entropy, and thus not heating the system, so W = ∆U, and the work done is equal and opposite to the work done in step #1: W = (3/2)Nk(TC − TH).

4. This will be like step 2, but now the temperature is different, and since we are compressing, the work is positive while the heat is negative: Q = −W = NkTC ln(1/2) = −NkTC ln 2.

Putting these all together, the total work done is

W = NkTH ln 2 − NkTC ln 2    (10.11)

  = Nk(TH − TC) ln 2    (10.12)

Efficiency of an engine

If we are interested in this as a heat engine, we have to ask what we put into it. This diagram shows where energy and entropy go. The engine itself (our ideal gas in this case) returns to its original state after one cycle, so it doesn't have any changes. However, we have a hot place (where the temperature is TH, which has lost energy due to heating our engine as it expanded in step 2), and a cool place at TC, which got heated up when we compressed our gas at step 4. In addition, over the entire cycle some work was done.

Figure 10.1: Carnot engine energy and entropy flow diagram. The entropy lost by the hot place is the same as the entropy gained by the cold place, because the Carnot engine is reversible.

The energy we put in is all the energy needed to keep the hot side hot, which is the Q for step 2.

QH = NkTH ln 2    (10.13)

The efficiency is the ratio of what we get out to what we put in, which gives us

ε = W/QH    (10.14)

  = Nk(TH − TC) ln 2 / (NkTH ln 2)    (10.15)

  = 1 − TC/TH    (10.16)

and this is just the famous Carnot efficiency.
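The four steps can be tabulated numerically. Here is a minimal Python sketch (my own; Nk is set to one and the temperatures are arbitrary illustrative choices) confirming that the efficiency comes out to 1 − TC/TH:

```python
import math

# Monatomic ideal gas Carnot cycle, with Nk = 1 and
# hypothetical reservoir temperatures TH and TC.
Nk, TH, TC = 1.0, 400.0, 300.0
ln2 = math.log(2)

Q1, W1 = 0.0, 1.5 * Nk * (TH - TC)       # 1: adiabatic compression (work on gas)
Q2, W2 = Nk * TH * ln2, -Nk * TH * ln2   # 2: isothermal expansion at TH
Q3, W3 = 0.0, 1.5 * Nk * (TC - TH)       # 3: adiabatic expansion
Q4, W4 = -Nk * TC * ln2, Nk * TC * ln2   # 4: isothermal compression at TC

W_out = -(W1 + W2 + W3 + W4)  # net work done BY the gas over the cycle
efficiency = W_out / Q2       # heat input QH is the Q of step 2

print(efficiency)     # 0.25
print(1 - TC / TH)    # 0.25
```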

72

Page 74: Thermal Physicssites.science.oregonstate.edu/~roundyd/COURSES/ph441/thermal-physics.pdf · Wecareaboutextensivityandintensivityforseveral reasons. In one sense it functions like dimensions

Note I could have made this an easier problem if I had changed the statement to expand at fixed temperature until the entropy changed by a given ∆S. Then we would not have had to use the Sackur-Tetrode equation at all, and our result would have been true for any material, not just an ideal gas!

We could also have run this whole cycle in reverse. That would look like the next figure. This is how a refrigerator works. If you had an ideal refrigerator and an ideal engine with equal capacity, you could operate them both between the inside and outside of a room to achieve nothing. The engine could precisely power the refrigerator such that no net heat is exchanged between the room and its environment.

Figure 10.2: Carnot fridge energy and entropy flow diagram.

Naturally, we cannot create an ideal Carnot engine or an ideal Carnot refrigerator, because in practice a truly reversible engine would never move. However, it is also very useful to know these fundamental limits, which can guide real heat engines (e.g. coal or nuclear power plants, some solar power plants) and refrigerators or air conditioners. Another use of this ideal picture is that of a heat pump, which is a refrigerator in which you cool the outside in order to heat your house (or anything else). A heat pump can thus be more efficient than an ordinary heater. Just looking at the diagram for a Carnot fridge, you can see that the heat in the hot location exceeds the work done, precisely because it also cools down the cold place.

Homework for week 8 (PDF)

1. Heat pump (K&K 8.1)

   a) Show that for a reversible heat pump the energy required per unit of heat delivered inside the building is given by the Carnot efficiency:

      W/QH = ηC = (TH − TC)/TH    (10.17)

      What happens if the heat pump is not reversible?

   b) Assume that the electricity consumed by a reversible heat pump must itself be generated by a Carnot engine operating between the even hotter temperature THH and the cold (outdoors) temperature TC. What is the ratio QHH/QH of the heat consumed at THH (i.e. fuel burned) to the heat delivered at TH (in the house we want to heat)? Give numerical values for THH = 600 K; TH = 300 K; TC = 270 K.

   c) Draw an energy-entropy flow diagram for the combination heat engine-heat pump, similar to Figures 8.1, 8.2 and 8.4 in the text (or the equivalent but sloppier figures in the course notes). However, in this case we will involve no external work at all, only energy and entropy flows at three temperatures, since the work done is all generated from heat.

2. Photon Carnot engine (Modified from K&K 8.3)

   In our week on radiation, we saw that the Helmholtz free energy of a box of radiation at temperature T is

73

Page 75: Thermal Physicssites.science.oregonstate.edu/~roundyd/COURSES/ph441/thermal-physics.pdf · Wecareaboutextensivityandintensivityforseveral reasons. In one sense it functions like dimensions

   F = −(8π⁵/45) V (kT)⁴/(h³c³)    (10.18)

   From this we also found the internal energy and entropy

   U = (24π⁵/45) V (kT)⁴/(h³c³)    (10.19)

   S = (32π⁵/45) kV (kT/hc)³    (10.20)

   Given these results, let us consider a Carnot engine that uses an empty metallic piston (i.e. a photon gas).

   a) Given TH and TC, as well as V1 and V2 (the two volumes at TH), determine V3 and V4 (the two volumes at TC).

   b) What is the heat QH taken up and the work done by the gas during the first isothermal expansion? Are they equal to each other, as for the ideal gas?

   c) Does the work done on the two isentropic stages cancel each other, as for the ideal gas?

   d) Calculate the total work done by the gas during one cycle. Compare it with the heat taken up at TH and show that the energy conversion efficiency is the Carnot efficiency.

3. Light bulb in a refrigerator (K&K 8.7) A 100 W light bulb is left burning inside a Carnot refrigerator that draws 100 W. Can the refrigerator cool below room temperature?


Chapter 11

Week 9: Phase transformations (K&K 10, Schroeder 5.3)

We will be ending this class by looking at phase transformations, such as the transformation from liquid to solid, or from liquid to gas. The existence of phase transformations (which are ubiquitous in nature) requires interactions between particles, which up to now we have neglected. Hence, we will be reverting to a thermodynamics approach, since incorporating interactions into statistical mechanics is not so easy.

One of the key aspects for most phase transformations is coexistence. It is possible to have both ice and liquid water in equilibrium with each other, coexisting happily. The existence of coexistence in fact breaks some assumptions that we have made. For instance, starting way back in Energy and Entropy, we have assured you that you could describe the state of a system using practically any pair of state variables (or triple, now that we include N). However, if ice and water are coexisting, then there must be an ambiguity, because at that temperature and pressure the system could be either ice or water, which are different!

For your online edification (probably not much in class), I include here a phase diagram of water, which includes not only the liquid, vapor and solid phases, but also a dozen or so different crystal phases that you can reach at some combination of high pressure or low temperature.

A phase diagram of an ordinary pure material will have two interesting points, and three interesting lines.

Figure 11.1: Phase diagram of water (high resolution), from Wikipedia. Notable labeled points include the solid/liquid/vapour triple point (273.16 K, 611.657 Pa), the freezing point at 1 atm (273.15 K, 101.325 kPa), the boiling point at 1 atm (373.15 K, 101.325 kPa), and the critical point (647 K, 22.064 MPa), along with the many high-pressure ice phases (Ih, Ic, II, III, V, VI, VII, VIII, X, XI, XV, . . . ).


The two interesting points are the triple point (at which solid, liquid, and vapor can all coexist), and the critical point, at which the distinction between liquid and vapor vanishes. The three lines represent coexistence between solid and gas (or vapor), coexistence between liquid and gas, and coexistence between liquid and solid.

Coexistence

To understand a phase transformation, we first need to understand the state of coexistence.

Question If we view the liquid and solid here as two separate systems that are in equilibrium with each other, what can you tell me about those two systems?

Answer They must be at the same temperature (since they can exchange energy), they must be at the same pressure (since they can exchange volume), and, least obvious, they must be at the same chemical potential, since they can exchange molecules.

The first two properties define why we can draw the coexistence as a line on a pressure-temperature diagram, since when the two phases coexist they must have the same pressure and temperature. If we drew a volume-temperature diagram, the coexisting phases would not lie at the same point. The final property, that the chemical potentials must be identical, may seem obvious in retrospect. This also means that the Gibbs free energy per particle of the two phases must be equal (since this is equal to the chemical potential).

Clausius-Clapeyron

When you look at the phase diagram in its usual pressure versus temperature representation, you can now think of the lines as representing the points where two chemical potentials are equal (e.g. the chemical potential of water and ice). A natural question would be whether you could predict the slopes of these curves. Or alternatively, does knowing the slopes of these curves tell you anything about the materials in question?

We can begin by considering two very close points on the liquid-vapor curve, separated by dp and dT. We know that

µg(T, p) = µℓ(T, p)    (11.1)

µg(T + dT, p + dp) = µℓ(T + dT, p + dp)    (11.2)

We can now expand the small difference in terms of differentials

µg(T, p) + (∂µg/∂T)_{p,N} dT + (∂µg/∂p)_{T,N} dp = µℓ(T, p) + (∂µℓ/∂T)_{p,N} dT + (∂µℓ/∂p)_{T,N} dp    (11.3)

where the µg(T, p) and µℓ(T, p) terms cancel by equation (11.1).

We can now collect the two differentials and find their ratio.

[(∂µg/∂T)_{p,N} − (∂µℓ/∂T)_{p,N}] dT = [(∂µℓ/∂p)_{T,N} − (∂µg/∂p)_{T,N}] dp    (11.4)

Thus the derivative of the coexistence curve is given by

dp/dT = [(∂µg/∂T)_{p,N} − (∂µℓ/∂T)_{p,N}] / [(∂µℓ/∂p)_{T,N} − (∂µg/∂p)_{T,N}]    (11.5)

      = −[(∂µg/∂T)_{p,N} − (∂µℓ/∂T)_{p,N}] / [(∂µg/∂p)_{T,N} − (∂µℓ/∂p)_{T,N}]    (11.6)


Small groups Find an expression for these derivatives to express this ratio in terms of thermal variables that are more comfortable. You will want to make use of the fact we derived a few weeks ago, which says that the chemical potential is the Gibbs free energy per particle, where G = U − TS + pV.

Answer

G = U − TS + pV    (11.7)

  = µN    (11.8)

dG = dU − TdS − SdT + pdV + V dp    (11.9)

   = −SdT + V dp + µdN    (11.10)

Since G = µN also gives dG = N dµ + µ dN, the µdN terms cancel and we find

N dµ = −SdT + V dp    (11.11)

dµ = −(S/N) dT + (V/N) dp    (11.12)

From this differential we can see that

−S/N = (∂µ/∂T)_{p,N}    (11.13)

V/N = (∂µ/∂p)_{T,N}    (11.14)

Thus we can put these into the ratios above, and we will find that the Ns will cancel, and the minus sign on the entropy will cancel the minus sign that was out front.

dp/dT = (Sg/Ng − Sℓ/Nℓ) / (Vg/Ng − Vℓ/Nℓ)    (11.15)

This looks like a bit of a nuisance having all these N values on the bottom. It looks cleaner if we just define s ≡ S/N as the specific entropy (or entropy per atom) and v ≡ V/N as the specific volume (or volume per atom). Thus we have

dp/dT = (sg − sℓ) / (vg − vℓ)    (11.16)

This is the famous Clausius-Clapeyron equation, and is true for any phase coexistence curve in the pressure-temperature phase diagram.

We can further expand this by interpreting the change in entropy as a latent heat. If the entropy changes discontinuously, since the phase transformation happens entirely at a single temperature, we can use the relationship between heat and entropy to find that

Q = T∆S    (11.17)

We call the heat needed to change from one phase to another the latent heat L, which gives us that

dp/dT = L/(T∆V)    (11.18)

This equation can be a bit tricky to use right, since you could get the direction of ∆V wrong. The one with entropy and volume is easier, since as long as both changes are in the same direction (vapor minus liquid or vice versa) it is still correct.

From Clausius-Clapeyron we can see that so long as the volume increases as the entropy also increases, the coexistence curve will have a positive slope.

Question When would the slope ever be negative? It requires a high-entropy phase that also has lower volume!

Answer Ice and water! Water has higher entropy, but also has lower volume than ice (i.e. is more dense). This is backwards from most other materials, and causes the melting curve to slope up and to the left for ice.
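As a concrete illustration of the latent-heat form of Clausius-Clapeyron (the numbers below are standard textbook values for water, not taken from these notes), we can estimate the slope of the liquid-vapor coexistence curve at the normal boiling point, treating the vapor as ideal so that ∆V ≈ RT/p per mole and the liquid volume is negligible:

```python
# Rough Clausius-Clapeyron estimate dp/dT = L / (T * Delta_V) for
# boiling water, with Delta_V ~ RT/p per mole (ideal vapor).
L = 40660.0    # latent heat of vaporization of water, J/mol
R = 8.314      # gas constant, J/(mol K)
T = 373.15     # normal boiling temperature, K
p = 101325.0   # 1 atm, Pa

dpdT = L * p / (R * T**2)   # substituting Delta_V = RT/p
print(dpdT)   # ≈ 3.6e3 Pa/K: raising p by ~3.6 kPa raises the boiling point ~1 K
```

This positive slope is the generic case; the ice/water melting curve discussed above is the famous exception.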

van der Waals

When we talk about phase transformations, we require some sort of system in which there are interactions between particles, since that is what leads to a phase transformation. We can either do this from the bottom up, by constructing a system in which there are interactions and then solving for the properties of that system, or we could use a more empirical approach, in which we use an approximate set of equations of state (or a free energy) that behaves much like a real system.

The van der Waals fluid is sort of in between these two approaches. I will describe how we would “derive” the van der Waals free energy in a slightly hand-waving manner, and then we will use it as an effectively empirical system that we can use to explore how a phase transition might happen. The van der Waals fluid in essence is a couple of corrections to the ideal gas law, which together add enough interactions to give a plausible description of a liquid-vapor phase transition.

Small white boards What kind of interactions might exist in a real gas that are ignored when we treat it as an ideal gas?

Answer Repulsion and attraction! Atoms will have a very high energy if they sit on top of another atom, but atoms that are at an appropriate distance will feel an attractive interaction.

In fluids, attraction and repulsion tend to be treated very differently. Repulsion tends to primarily decrease entropy rather than increasing energy, because the atoms can simply avoid being on top of each other. In contrast, attraction often has little effect on entropy (except when there is a phase transformation), but can decrease the energy. It has little effect on entropy because the attraction is often very weak, so it doesn't (much) affect where the atoms are, but does affect the energy, provided the atoms are close enough.

Building up the free energy: repulsion

Let’s start by looking at the ideal gas free energy:

Fideal = −NkT (ln(nQ V/N) + 1)    (11.19)

This free energy depends on both volume and temperature (also N, but let's keep that fixed). The temperature dependence is out front and in nQ. The volume dependence is entirely in the logarithm. When we add the repulsive interaction, we can wave our hands a bit and argue that the effect of repulsion is to keep atoms from sitting too close to one another, and that results in each atom having less volume it could be placed in. The volume available for a given atom will be the total volume V, minus the volume occupied by all the other atoms, which we can call Nb where N is the number of atoms, and b is the excluded volume per atom. You might argue (correctly) that the excluded volume should be (N − 1)b, but we will be working in the limit of N ≫ 1 and can ignore that fine distinction. Making this substitution gives us

Fwith repulsion = −NkT (ln(nQ(V − Nb)/N) + 1)    (11.20)

This free energy is going to be higher than the ideal gas free energy, because we are making the argument of the logarithm smaller, but there is a minus sign out front. That is good, because we would hardly expect including repulsion to lower the free energy.

In your homework you will (incidentally) show that this free energy gives an internal energy that is identical to that of the ideal gas, which bears out what I said earlier about repulsion affecting the entropy rather than the energy.

Adding attraction: mean field theory

When we want to add in attraction to the free energy, the approach we will use is called mean field theory. I prefer to talk about it as first-order thermodynamic perturbation theory. (Actually, mean field theory is often more accurately described as a poor approximation to first-order perturbation theory, as it is common in mean-field theory to ignore any correlations in the reference fluid.) You know perturbation theory from quantum mechanics, but the fundamental ideas can be applied to any theory, including statistical mechanics.

The fundamental idea of perturbation theory is to break your Hamiltonian into two terms: one that you are able to solve, and a second term that is small. In this case, in order to derive (or motivate?) the van der Waals equation, our reference would be the system with repulsion only, and the perturbation would be the attraction between our atoms. We want to solve this purely classically, since we don't know how to solve the energy eigenvalue equation with interactions between particles included.

Classically, we would begin by writing down the energy, and then we would work out the partition function by summing over all possible microstates in the canonical ensemble. A logarithm would then tell us the free energy. The energy will be

E = Σ_{all atoms i} p_i²/2m + (1/2) Σ_{atom pairs ij} U(|r⃗_i − r⃗_j|)    (11.21)

where U(r) is an attractive pair potential, which is to say, a potential energy of interaction between each pair of atoms. The first term is the kinetic energy (and is the same for the ideal gas), while the second term is a potential energy (and is zero for the ideal gas). The partition function is then

Z = (1/N!) ∫d³r₁ ∫d³r₂ · · · ∫d³r_N ∫d³p₁ ∫d³p₂ · · · ∫d³p_N e^{−β(Σ p_i²/2m + ½ Σ U(|r⃗_i − r⃗_j|))}    (11.22)

  = (1/N!) ∫d³p₁ ∫d³p₂ · · · ∫d³p_N e^{−β Σ p_i²/2m} ∫d³r₁ ∫d³r₂ · · · ∫d³r_N e^{−β ½ Σ U(|r⃗_i − r⃗_j|)}    (11.23)

At this point I will go ahead and split this partition function into two factors: an ideal gas partition function times a correction factor that depends on the potential energy of interaction.

Z = (V^N/N!) ∫d³p₁ ∫d³p₂ · · · ∫d³p_N e^{−β Σ p_i²/2m} (1/V^N) ∫d³r₁ ∫d³r₂ · · · ∫d³r_N e^{−β ½ Σ U(|r⃗_i − r⃗_j|)}    (11.24)

  = Zideal (1/V^N) ∫d³r₁ · · · ∫d³r_N e^{−β ½ Σ U(|r⃗_i − r⃗_j|)}    (11.25)

  = Zideal Zconfigurational    (11.26)

Now we can express the free energy!

F = −kT ln Z    (11.27)

  = −kT ln(Zideal Zconfigurational)    (11.28)

  = Fideal − kT ln Zconfigurational    (11.29)

So we just need to approximate this excess free energy (beyond the ideal gas free energy). Let's get to the approximation bit.

Z_{\text{config}} = \int \frac{d^3r_1}{V} \cdots \int \frac{d^3r_N}{V} \, e^{-\beta\frac{1}{2}\sum_{ij} U(|\vec{r}_i - \vec{r}_j|)} \qquad (11.30)

\approx \int \frac{d^3r_1}{V} \cdots \int \frac{d^3r_N}{V} \left(1 - \sum_{ij} \frac{\beta}{2} U(r_{ij})\right) \qquad (11.31)

At this point I have used a power series approximation on the exponential, under the assumption that our attraction is sufficiently small. Now we can write this sum in a simpler manner, taking account of the symmetry between the different particles.


Z_{\text{config}} = 1 - \frac{\beta}{2} \sum_{ij} \int \frac{d^3r_1}{V} \cdots \int \frac{d^3r_N}{V} \, U(r_{ij}) \qquad (11.32)

= 1 - \frac{\beta N^2}{2} \int \frac{d^3r_1}{V} \int \frac{d^3r_2}{V} \, U(r_{12}) \qquad (11.33)

= 1 - \frac{\beta N^2}{2} \int \frac{d^3r}{V} \, U(r) \qquad (11.34)

At this stage, I've gotten things way simpler. Note also that I did something wrong: I assumed that the potential was always small, but the repulsive part of the potential is not small. We'll ignore that for now. Including it properly would be doing this right, but instead we'll use the approach that leads to the van der Waals equation of state. To continue. . .

F_{\text{excess}} = -kT \ln Z_{\text{conf}} \qquad (11.35)

= -kT \ln\left(1 - \frac{\beta N^2}{2} \int \frac{d^3r}{V} U(r)\right) \qquad (11.36)

\approx kT \frac{\beta N^2}{2} \int \frac{d^3r}{V} U(r) \qquad (11.37)

= \frac{N^2}{2} \int \frac{d^3r}{V} U(r) \qquad (11.38)

\equiv -\frac{N^2}{V} a \qquad (11.39)

where I've defined a \equiv -\frac{1}{2}\int d^3r \, U(r). The minus sign here is to make a a positive quantity, given that U(r) < 0. Putting this together with the ideal gas free energy, modified to include a simple repulsion term, we have the van der Waals free energy:

F_{\text{vdW}} = -NkT \left(\ln\left(\frac{n_Q (V - Nb)}{N}\right) + 1\right) - \frac{N^2}{V} a \qquad (11.40)

van der Waals equation of state

Small groups Solve for the van der Waals pressure as a function of N, V, and T (and of course, also a and b).

Answer

p = -\left(\frac{\partial F}{\partial V}\right)_{T,N} \qquad (11.41)

= \frac{NkT}{V - Nb} - \frac{N^2}{V^2} a \qquad (11.42)

This equation is the van der Waals equation of state, which is often rewritten to look like:

\left(p + \frac{N^2}{V^2} a\right)(V - Nb) = NkT \qquad (11.43)

As you can see, it is only slightly modified from the ideal gas law, provided the correction terms are small, i.e. \frac{N^2}{V^2}a \ll p and Nb \ll V.
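As a quick numerical check (a sketch, not part of the original notes; k = 1 and the parameter values are made up), we can verify that eq. (11.42) and the rearranged form (11.43) agree:

```python
# van der Waals pressure, eq. (11.42), in units with k = 1.
# All parameter values below are illustrative.
def p_vdw(N, V, T, a, b):
    """p = NkT/(V - Nb) - a N^2 / V^2 (with k = 1)."""
    return N * T / (V - N * b) - a * N**2 / V**2

N, V, T, a, b = 100.0, 500.0, 1.0, 0.1, 1.0
p = p_vdw(N, V, T, a, b)

# The rearranged form, eq. (11.43): (p + a N^2/V^2)(V - Nb) = NkT.
lhs = (p + a * N**2 / V**2) * (V - N * b)
assert abs(lhs - N * T) < 1e-9
```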

van der Waals and liquid-vapor phase transition

Let's start by looking at the pressure as a function of volume according to the van der Waals equation:

p = \frac{NkT}{V - Nb} - \frac{N^2}{V^2} a \qquad (11.44)

= \frac{kT}{\frac{V}{N} - b} - \frac{N^2}{V^2} a \qquad (11.45)

Clearly the pressure will diverge as the volume is decreased towards Nb, which puts a lower bound on the volume. This reflects the fact that each atom takes up b volume, so you can't compress the fluid smaller than that. At larger volumes, the pressure will definitely be positive and decreasing, since the attractive term dies off faster than the first term. However, if a is sufficiently large (or T is sufficiently small), we may find that the second term dominates when the volume is not too large.

We can also rewrite this pressure to express it in terms of the number density n \equiv \frac{N}{V}, which I find a little more intuitive than imagining the volume changing:


p = \frac{kT}{\frac{1}{n} - b} - n^2 a \qquad (11.46)

= \frac{kT n}{1 - nb} - n^2 a \qquad (11.47)

So this tells us that as we increase the density from zero, the pressure will begin by increasing linearly. It will end by approaching infinity as the density approaches \frac{1}{b}. In between, the attractive term may or may not cause the pressure to do something interesting.

[Figure] Figure 11.2: The van der Waals pressure for a few temperatures, plotted as pb²/a versus η ≡ (N/V)b, for kTb/a = 0.26, 0.28, and 0.3.

This equation is kind of nice, but it's still pretty confusing because it has three different constants (other than n) in it. We can reduce that further by rewriting it in terms of the packing fraction \eta \equiv nb, which is the fraction of the volume that is filled with atoms.

p = \frac{kT}{b} \frac{\eta}{1 - \eta} - \frac{a}{b^2} \eta^2 \qquad (11.48)

We now see that there are just two "constants" to deal with, \frac{kT}{b} and \frac{a}{b^2}, each of which has dimensions of pressure. The former, of course, depends on temperature, and the ratio between them (i.e. \frac{kTb}{a}) will fully determine the shape of our pressure curve (in terms of density).
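The same structure can be checked numerically. The sketch below (not from the text) evaluates the dimensionless pressure of eq. (11.48) and confirms that the curve is non-monotonic for kTb/a = 0.26 but monotonic for kTb/a = 0.3, consistent with the standard van der Waals critical value kT_c b/a = 8/27 ≈ 0.296:

```python
# Dimensionless vdW pressure, eq. (11.48): p b^2/a = t * eta/(1-eta) - eta^2,
# with t = kT b / a. (Illustrative check, not from the text.)
def p_reduced(eta, t):
    return t * eta / (1 - eta) - eta**2

etas = [i / 1000.0 for i in range(1, 500)]  # eta in (0, 0.5)

def monotonic(t):
    ps = [p_reduced(e, t) for e in etas]
    return all(b >= a for a, b in zip(ps, ps[1:]))

# Below the vdW critical value t_c = 8/27 ~ 0.296 there is a loop;
# above it the pressure rises monotonically with density.
assert not monotonic(0.26)
assert monotonic(0.30)
```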

[Figure] Figure 11.3: The van der Waals pressure for a few temperatures, plotted as pb²/a versus V/Nb, for kTb/a = 0.26, 0.28, and 0.3.

Clearly something interesting is happening at low temperatures. This is a phase transition. But how do we find out what the density (or equivalently, volume) of the liquid and vapor are? You already know that the pressure, temperature, and chemical potential must all be equal when two phases are in coexistence. From this plot we can identify triples of densities where the temperature and pressure are both identical. Which corresponds to the actual phase transition?

Question How might you determine from this van der Waals equation of state or free energy where the phase transformation happens?

Answer As before, we need to have identical pressure, temperature, and chemical potential. So we need to check which of these equal-pressure states have the same chemical potential.

Common tangent

Most approaches require us to work with the Helmholtz free energy rather than the pressure equation. If we plot the Helmholtz free energy versus volume (with number fixed), the pressure is the negative slope. We also need to ensure that the chemical potential (or Gibbs free energy) is identical at the two points.

[Figure] Figure 11.4: The van der Waals free energy, plotted as F versus V/Nb, for kTb/a = 0.26.

G = F + pV \qquad (11.49)

= F - \left(\frac{\partial F}{\partial V}\right)_{N,T} V \qquad (11.50)

So let us set the Gibbs free energies and pressures equal for two points:

p_1 = p_2 \qquad (11.51)

-\left(\frac{\partial F}{\partial V}\right)_{N,T} = \text{same for each} \qquad (11.52)

G_1 = G_2 \qquad (11.53)

F_1 + pV_1 = F_2 + pV_2 \qquad (11.54)

F_1 - F_2 = p(V_2 - V_1) \qquad (11.55)

So for two points to have the same Gibbs free energy, their Helmholtz free energy (at fixed temperature) must pass through a line with slope equal to the negative of the pressure. If those two points also have that pressure as their (negative) slope, then they have both equal slope and equal chemical potential, and are our two coexisting states. This is the common tangent construction.

The common tangent construction is very commonly used when looking at multiple crystal structures, when you don't even know which ones are stable in the first place.

Note The common tangent construction also works when we plot F versus n or N.
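A brute-force numerical version of this construction can make it concrete. The sketch below is illustrative, not the author's code: it works in dimensionless units v = V/Nb and t = kTb/a, drops the T-only constants from f (they shift f and g uniformly at fixed T, so they cancel from the coexistence condition), and bisects on pressure until the two equal-pressure states also have equal Gibbs free energy per particle:

```python
# Equal-pressure, equal-g construction for the dimensionless vdW model
# (illustrative sketch; v = V/Nb, t = kTb/a, pressure in units of a/b^2).
import math

t = 0.26  # below the critical value 8/27

def f(v):  # Helmholtz free energy per particle (T-only constants dropped)
    return -t * math.log(v - 1) - 1.0 / v

def p(v):  # pressure = -df/dv
    return t / (v - 1) - 1.0 / v**2

def g(v):  # Gibbs free energy per particle, g = f + p*v
    return f(v) + p(v) * v

def root(p0, lo, hi):
    """Bisect p(v) = p0 on a branch where p - p0 changes sign once."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (p(lo) - p0) * (p(mid) - p0) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def delta_g(p0):
    """g(gas) - g(liquid) at pressure p0; zero at coexistence."""
    v_liq = root(p0, 1.0001, 2.1)   # liquid branch
    v_gas = root(p0, 4.8, 2000.0)   # gas branch
    return g(v_gas) - g(v_liq)

lo, hi = 0.010, 0.024  # brackets the coexistence pressure at t = 0.26
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if delta_g(lo) * delta_g(mid) <= 0:
        hi = mid
    else:
        lo = mid
p_coex = 0.5 * (lo + hi)
assert 0.010 < p_coex < 0.024
assert abs(delta_g(p_coex)) < 1e-6  # equal g: the two states coexist
```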

Gibbs free energy

Another approach to solve for coexistence points is to plot the Gibbs free energy versus pressure, each of which can be computed easily from the Helmholtz free energy. When we plot the Gibbs free energy versus pressure, we find that there is a crossing and a little loop. This loop corresponds to metastable and unstable states, and the crossing point is where the two phases (liquid and vapor, in our case) coexist.

[Figure] Figure 11.5: The van der Waals Gibbs free energy, plotted as G versus p, for kTb/a = 0.26.

As we increase the temperature, we will find that this little loop becomes smaller, as the liquid and vapor densities get closer and closer. The critical point is where it disappears.


Why does G look like this? We had a good question about what the "points" represent in the Gibbs free energy curve. We can understand this a bit better by thinking a bit about the differential of G:

dG = −SdT + V dp (11.56)

This tells us that the slope of the G versus p curve (at fixed temperature) is just the volume of the system. Since the volume can vary continuously (at least in the Helmholtz free energy we constructed), this slope must continuously change as we follow the path. That explains why we have pointy points, since the slope must be the same on both sides of the curve. The points thus represent the states where the pressure has an extremum as we change the volume. In between those two extrema is the range where increasing volume causes the pressure to increase. These states are mechanically unstable, and thus cannot be observed.
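The slope statement is easy to check numerically. The sketch below (illustrative; dimensionless units v = V/Nb, t = kTb/a, with additive constants dropped from f since they do not affect derivatives) traces a vdW isotherm parametrically in v and confirms dG/dp = V:

```python
# Sketch: along a vdW isotherm, the slope of G versus p equals V,
# consistent with dG = -S dT + V dp at fixed T.
import math

t = 0.26

def f(v):  # Helmholtz free energy per particle (constants dropped)
    return -t * math.log(v - 1) - 1.0 / v

def p(v):  # pressure, p = -df/dv
    return t / (v - 1) - 1.0 / v**2

def g(v):  # Gibbs free energy per particle, g = f + p*v
    return f(v) + p(v) * v

# dg/dp = (dg/dv)/(dp/dv) should equal v itself at any point on the curve.
v0, h = 8.0, 1e-6
dg_dp = (g(v0 + h) - g(v0 - h)) / (p(v0 + h) - p(v0 - h))
assert abs(dg_dp - v0) < 1e-4
```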

Examples of phase transitions

I'd like to spend just a bit of time talking about the wide variety of different phase transitions that can and do happen, before we discuss how these transitions can be understood in a reasonably unified way through Landau theory.

Liquid-vapor

The liquid-vapor transition is what we just discussed. The only fundamental difference between liquid and vapor is the density of the fluid. (abrupt)

Melting/freezing

Melting and freezing are similar to the liquid-vapor transition, with the difference, however, that there cannot be a critical point, since we cannot go from solid to liquid without a phase transition. (abrupt)

Sublimation

Sublimation is very much like melting. Its major difference happens because of the difference between a gas and a liquid, which means that there is no temperature low enough that there will not be a gas in equilibrium with a solid at low pressure. (abrupt)

Solid/solid

Solid-solid phase transitions are interesting in that different solid phases have different crystal symmetries, which makes it both possible and reasonable to compute (and possibly even observe) properties for different phases at the same density and pressure. (abrupt)

Ferromagnetism

A ferromagnetic material (such as iron or nickel) will spontaneously magnetize itself, although the magnetized regions do break into domains. When the material is heated above a given temperature (called the Curie temperature) it is no longer ferromagnetic, but instead behaves as an ordinary paramagnetic material. This is therefore a phase transition. (continuous)

Ferroelectrics

A ferroelectric material is a material that has a spontaneous electric dipole polarization at low temperatures. It behaves very much like an electrical analogue of a ferromagnetic material. (continuous)

Antiferromagnetism

An antiferromagnetic material (such as nickel oxide) will have different atoms with oppositely polarized spin. This is less easy to observe by elementary school children than ferromagnetism, but is also a distinct phase, with a phase transition in which the spins become disordered. (continuous)


Superconductivity

A superconductor at low temperatures has zero electrical resistivity. At higher temperature it is (for ordinary superconductors) an ordinary metal. Lead is a classic example of a superconductor, and has a transition temperature of 7.19 K. You see high-Tc superconductors in demos more frequently, which have transition temperatures up to 134 K, but are more complicated in terms of their cause and phase diagram. (continuous)

Superfluidity

A superfluid (and helium-4 is the classic example) has zero viscosity at low temperatures. For helium this transition temperature is 2.17 K. (continuous)

Bose-Einstein condensation

The transition to having a macroscopic occupation in the ground state in a gas of bosons is another phase transition. (continuous)

Mixing of binary systems

In binary systems (e.g. salt and water, or an alloy of nickel and iron) there are many of the same phase transitions (e.g. liquid/gas/solid), but now we have an additional parameter, which is the fraction of each component in the phase. Kittel and Kroemer have a whole chapter on this kind of phase transition.

Landau theory

There are so many kinds of phase transitions, you might wonder whether they are all different, or if we can understand them in the same (or a similar) way. Landau came up with an approach that allows us to view the whole wide variety of phase transitions in a unified manner.

The key idea is to identify an order parameter ξ, which allows us to distinguish the two phases. This order parameter ideally should also be something that has interactions we can control through some sort of an external field. Examples of order parameters:

Liquid-vapor: volume or density
Ferromagnetism: magnetization
Ferroelectrics: electric polarization density
Superconductivity or superfluidity: quantum mechanical amplitude (including phase)
Binary mixtures: fraction of components

The key idea of Landau is to express a Helmholtz free energy as a function of the order parameter:

FL(ξ, T ) = U(ξ, T )− TS(ξ, T ) (11.57)

Now at a given temperature there is an equilibrium value for the order parameter ξ₀, which is determined by minimizing the free energy, and this equilibrium order parameter defines the actual Helmholtz free energy.

F (T ) = FL(ξ0, T ) ≤ FL(ξ, T ) (11.58)

So far this hasn't given us much. Where Landau theory becomes powerful is when we expand the free energy as a power series in the order parameter (and later as a power series in temperature).

A continuous phase transition

To make things concrete, let us assume an order parameter with inversion symmetry, such as magnetization or electrical polarization. This means that F_L must be an even function of ξ, so we can write that

F_L(\xi, T) = g_0(T) + \frac{1}{2} g_2(T) \xi^2 + \frac{1}{4} g_4(T) \xi^4 + \cdots \qquad (11.59)


The entire temperature dependence is now hidden in the coefficients of the power series. A simple example where we could have a phase transition would be if the sign of g₂ changed at some temperature T₀. In this case, we could do a power series expansion of our coefficients around T₀, and we would have something like:

F_L(\xi, T) = g_0(T) + \frac{1}{2} \alpha (T - T_0) \xi^2 + \frac{1}{4} g_4(T_0) \xi^4 \qquad (11.60)

where I am ignoring the temperature dependence of g₄, under the assumption that it doesn't do anything too fancy near T₀. I'm leaving g₀(T) alone, because it causes no trouble, and will be useful later. I'm also going to assume that α and g₄(T₀) are positive. Now we can solve for the order parameter that minimizes the free energy by setting its derivative to zero.

\left(\frac{\partial F_L}{\partial \xi}\right)_T = 0 \qquad (11.61)

= \alpha (T - T_0) \xi + g_4(T_0) \xi^3 \qquad (11.62)

This has two solutions:

\xi = 0 \qquad \xi^2 = (T_0 - T) \frac{\alpha}{g_4(T_0)} \qquad (11.63)

If T > T₀ there is only one (real) solution, which is that the order parameter is zero. Thus when T > T₀, we can see that

F(T) = g_0(T) \qquad (11.64)

exactly, since ξ = 0 causes all the other terms in the Landau free energy to vanish.

In contrast, when T < T₀, there are two solutions that are minima (\xi = \pm\sqrt{(T_0 - T)\alpha/g_4(T_0)}) and one maximum at ξ = 0. In this case the order parameter continuously (but not smoothly) goes to zero. This tells us that the free energy at low temperatures will be given by

F(T) = g_0(T) - \frac{\alpha^2}{4 g_4(T_0)} (T - T_0)^2 \qquad (11.65)

Small groups Solve for the entropy of this system when the temperature is near T₀.

Answer We can find the entropy from the free energy by considering its total differential

dF = -S \, dT - p \, dV \qquad (11.66)

which tells us that

-S = \left(\frac{\partial F}{\partial T}\right)_V \qquad (11.67)

Let's start by finding the entropy for T < T₀:

S_< = -\frac{dg_0}{dT} - \frac{\alpha^2}{2 g_4(T_0)} (T_0 - T) \qquad (11.68)

When the temperature is high, this is easier:

S_> = -\frac{dg_0}{dT} \qquad (11.69)

This tells us that the low-temperature phase has an extra-low entropy relative to what it would have had without the phase transition. However, the entropy is continuous, which means that there is no latent heat associated with this phase transition, which is called a continuous phase transition. An older name for this kind of phase transition (used in the text) is a second order phase transition. Currently "continuous" is preferred for describing phase transitions with no latent heat, because they are not always actually second order (as this example is).

Examples of continuous phase transitions include ferromagnets and superconductors.


An abrupt phase transition

To get an abrupt phase transition with a nonzero latent heat (as for melting or boiling), we need to consider a scenario where g₄ < 0 and g₆ > 0. This gives us two competing local minima at different values of the order parameter. (Note that an abrupt phase transition is also known as a first order phase transition.)

F_L = g_0(T) + \frac{1}{2} \alpha (T - T_0) \xi^2 - \frac{1}{4} |g_4(T)| \xi^4 + \frac{1}{6} g_6 \xi^6 + \cdots \qquad (11.70)

Small groups if we have time Find the solutions for the order parameter, and in particular find a criterion for the phase transition to happen.

Answer We want to find minima of our free energy. . .

\frac{\partial F_L}{\partial \xi} = 0 \qquad (11.71)

= \alpha (T - T_0) \xi - |g_4(T)| \xi^3 + g_6 \xi^5 \qquad (11.72)

One solution is ξ = 0. Otherwise,

0 = \alpha (T - T_0) - |g_4(T)| \xi^2 + g_6 \xi^4 \qquad (11.73)

which is just a quadratic. It has solutions when

\xi^2 = \frac{|g_4(T)| \pm \sqrt{g_4(T)^2 - 4 g_6 \alpha (T - T_0)}}{2 g_6} \qquad (11.74)

Note that this has four solutions. Two have ξ < 0, and show up because our free energy is even. One of the other solutions is a local maximum, and the final solution is a local minimum. For this to have a real solution, we need the quantity in the square root to be positive, which means

g_4(T)^2 \ge 4 g_6 \alpha (T - T_0) \qquad (11.75)

It would be tempting to take this as an equality when we are at the phase transition. However, that is just the point at which there is a local minimum, but we are looking for a global minimum (other than ξ = 0). This global minimum will require that

FL(ξ > 0) < FL(ξ = 0) (11.76)

which leads us to conclude that

\frac{1}{2} \alpha (T - T_0) \xi^2 - \frac{1}{4} |g_4(T)| \xi^4 + \frac{1}{6} g_6 \xi^6 < 0 \qquad (11.77)

We can plug in the criterion for an extremum in the free energy at nonzero ξ to find:

\frac{1}{2}\left(|g_4(T)| \xi^2 - g_6 \xi^4\right) \xi^2 - \frac{1}{4} |g_4(T)| \xi^4 + \frac{1}{6} g_6 \xi^6 < 0 \qquad (11.78)

\frac{1}{4} |g_4(T)| \xi^4 - \frac{1}{3} g_6 \xi^6 < 0 \qquad (11.79)

\frac{1}{4} |g_4(T)| - \frac{1}{3} g_6 \xi^2 < 0 \qquad (11.80)

At this point we would want to make use of the solution for ξ² above that used the quadratic equation. We would then have eliminated ξ from the equation, and could solve for a relationship between |g₄(T)|, g₆, and α(T − T₀).
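Alternatively, one can locate the transition numerically. With the illustrative choice α = |g₄| = g₆ = T₀ = 1 (not from the text), carrying out the elimination just described gives the standard result α(T_c − T₀) = 3g₄²/(16g₆); the sketch below checks that the nonzero minimum of F_L drops below F_L(0) = 0 right around that temperature:

```python
# Sixth-order Landau free energy, eq. (11.70), with illustrative constants
# alpha = |g4| = g6 = T0 = 1 (g0 dropped, since it shifts both minima equally).
alpha, g4, g6, T0 = 1.0, 1.0, 1.0, 1.0

def FL(xi, T):
    return (0.5 * alpha * (T - T0) * xi**2
            - 0.25 * g4 * xi**4 + (1.0 / 6.0) * g6 * xi**6)

def min_over_xi(T):
    """Minimum of FL over a grid of nonzero xi values in (0, 3]."""
    xis = [i / 2000.0 for i in range(1, 6001)]
    best = min(xis, key=lambda x: FL(x, T))
    return best, FL(best, T)

# Eliminating xi as described gives the transition at
# alpha*(Tc - T0) = 3 g4^2 / (16 g6):
Tc = T0 + 3 * g4**2 / (16 * g6 * alpha)
assert min_over_xi(Tc - 0.01)[1] < 0   # nonzero minimum wins below Tc
assert min_over_xi(Tc + 0.01)[1] > 0   # xi = 0 wins above Tc
```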

Homework for week 9 (PDF)

1. Vapor pressure equation (David) Consider a phase transformation between either solid or liquid and gas. Assume that the volume of the gas is way bigger than that of the liquid or solid, such that ∆V ≈ V_g. Furthermore, assume that the ideal gas law applies to the gas phase. Note: this problem is solved in the textbook, in the section on the Clausius-Clapeyron equation.


a) Solve for \frac{dp}{dT} in terms of the pressure of the vapor, the latent heat L, and the temperature.

b) Assume further that the latent heat is roughly independent of temperature. Integrate to find the vapor pressure itself as a function of temperature (and of course, the latent heat).

Note that this is a rather coarse approximation, since the latent heat of water varies by about 10% between 0°C and 100°C. Still, you will see a pretty cool result, which is roughly accurate (and good enough for the problems below).

2. Entropy, energy, and enthalpy of van der Waals gas (K&K 9.1) In this entire problem, keep results to first order in the van der Waals correction terms a and b.

a) Show that the entropy of the van der Waals gas is

S = Nk \left\{ \ln\left(\frac{n_Q (V - Nb)}{N}\right) + \frac{5}{2} \right\} \qquad (11.81)

b) Show that the energy is

U = \frac{3}{2} NkT - \frac{N^2 a}{V} \qquad (11.82)

c) Show that the enthalpy H \equiv U + pV is

H(T, V) = \frac{5}{2} NkT + \frac{N^2 b kT}{V} - \frac{2 N^2 a}{V} \qquad (11.83)

H(T, p) = \frac{5}{2} NkT + Nbp - \frac{2 N a p}{kT} \qquad (11.84)

3. Calculation of dT/dp for water (K&K 9.2) Calculate, based on the Clausius-Clapeyron equation, the value of \frac{dT}{dp} near p = 1 atm for the liquid-vapor equilibrium of water. The heat of vaporization at 100°C is 2260 J g⁻¹. Express the result in kelvin/atm.

Figure 11.6: Effects of High Altitude by Randall Munroe, at xkcd.


4. Heat of vaporization of ice (Modified K&K 9.3) The pressure of water vapor over ice is 518 Pa at −2°C. The vapor pressure of water at its triple point is 611 Pa, at 0.01°C (see the Wikipedia water data page). Estimate in J mol⁻¹ the heat of vaporization of ice just under freezing. How does this compare with the heat of vaporization of water?


Chapter 12

Review

Topics are everything that has been covered on homework. Problems should be similar to homework problems, but short enough to be completed during the exam. The exam will be closed notes. You should be able to remember the fundamental equations.

Equations to remember

Most of the equations I expect you to remember date back to Energy and Entropy, with a few exceptions.

Thermodynamic identity The thermodynamic identity, including the chemical potential:

dU = T \, dS - p \, dV + \mu \, dN \qquad (12.1)

You should be able from this to extract relationships such as \mu = \left(\frac{\partial U}{\partial N}\right)_{S,V}.

Thermodynamic potentials You need to know the Helmholtz and Gibbs free energies.

F = U - TS \qquad (12.2)

G = U - TS + pV \qquad (12.3)

dF = -S \, dT - p \, dV + \mu \, dN \qquad (12.4)

dG = -S \, dT + V \, dp + \mu \, dN \qquad (12.5)

You don't need to remember their differentials, but you do need to be able to find them quickly and use them, e.g. to find out how µ relates to F as a derivative. I'll point out that by remembering how to find the differentials, you also don't need to remember the sign of U − TS, since you can figure it out from the thermodynamic identity by making the TdS term cancel.

Heat and work You should remember the expressions for differential heat and work

dQ = T \, dS \qquad (12.6)

dW = -p \, dV \qquad (12.7)

and you should be able to use these expressions fluently, including integrating to find total heat or work, or solving for entropy given heat:

dS = \frac{dQ}{T} \qquad (12.8)

Efficiency You should know that efficiency is defined as "what you get out" divided by "what you put in", and that for a heat engine this comes down to

\varepsilon = \frac{W_{\text{net}}}{Q_H} \qquad (12.9)

Entropy You should remember the Gibbs expression for entropy in terms of probability.

S = -k \sum_i P_i \ln P_i \qquad (12.10)


Boltzmann probability You should be comfortable with the Boltzmann probability, and able to predict properties of systems using it.

P_i = \frac{e^{-\beta E_i}}{Z} \qquad (12.11)

Z = \sum_i e^{-\beta E_i} \qquad (12.12)

F = -kT \ln Z \qquad (12.13)

Derivative trick You may need to remember the derivative trick for turning a summation into a derivative of another summation in order to complete a problem. More particularly, I want you not to use an expression for U in terms of Z that comes from the derivative trick without writing down the three lines of math (or so) required to show that it is true.
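As a reminder of what that derivative trick gives, U = −∂(ln Z)/∂β; the sketch below (a made-up three-level system, not from the text) checks it against the direct weighted average:

```python
# "Derivative trick": U = -d(ln Z)/d(beta). Checked for a made-up
# three-level system (energies and beta are illustrative).
import math

E = [0.0, 1.0, 2.5]
beta = 0.7

def Z(b):
    return sum(math.exp(-b * e) for e in E)

U_direct = sum(e * math.exp(-beta * e) for e in E) / Z(beta)  # eq. (12.14)
h = 1e-6
U_trick = -(math.log(Z(beta + h)) - math.log(Z(beta - h))) / (2 * h)
assert abs(U_direct - U_trick) < 1e-6
```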

Thermal averages You should remember that the internal energy is given by a weighted average:

U = \sum_i E_i P_i \qquad (12.14)

And similarly for other variables, such as N in the grand canonical ensemble.

Chemical potential You should remember that the chemical potential is the Gibbs free energy per particle.

\mu = \frac{G}{N} \qquad (12.15)

You should also be able to make a distinction between internal and external chemical potential to solve problems such as finding the density as a function of altitude (or in a centrifuge), if I give you the expression for the chemical potential of an ideal gas (or other fluid).

Gibbs factor and sum You should be comfortable with the Gibbs sum and finding probabilities in the grand canonical ensemble.

P_i = \frac{e^{-\beta(E_i - \mu N_i)}}{Z} \qquad (12.16)

Z = \sum_i e^{-\beta(E_i - \mu N_i)} \qquad (12.17)

Incidentally, in class we didn't cover the grand potential (or grand free energy), but that is what you get if you try to find a free energy using the Gibbs sum like the partition function.

Fermi-Dirac, Bose-Einstein, and Planck distributions You should remember these distributions

f_{FD}(\varepsilon) = \frac{1}{e^{\beta(\varepsilon - \mu)} + 1} \qquad (12.18)

f_{BE}(\varepsilon) = \frac{1}{e^{\beta(\varepsilon - \mu)} - 1} \qquad (12.19)

and should be able to use them to make predictions for properties of non-interacting systems of fermions and bosons. This also requires remembering how to reason about orbitals as essentially independent systems within the grand canonical ensemble. You should remember that the Planck distribution for photons (or phonons) is the same as the Bose-Einstein distribution, but with µ = 0. This comes about because photons and phonons are bosons, but are a special kind of boson that has no conservation of particle number.
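The sketch below (illustrative energies and β, not from the text) encodes eqs. (12.18)-(12.19) and checks the statement that the Planck distribution is just f_BE with µ = 0:

```python
# Fermi-Dirac and Bose-Einstein occupancies, eqs. (12.18)-(12.19), and the
# Planck distribution as the mu = 0 case of f_BE (all values illustrative).
import math

def f_FD(eps, mu, beta):
    return 1.0 / (math.exp(beta * (eps - mu)) + 1.0)

def f_BE(eps, mu, beta):
    return 1.0 / (math.exp(beta * (eps - mu)) - 1.0)

def f_planck(eps, beta):
    return 1.0 / (math.exp(beta * eps) - 1.0)

beta = 2.0
for eps in (0.5, 1.0, 3.0):
    assert abs(f_BE(eps, 0.0, beta) - f_planck(eps, beta)) < 1e-12
assert 0.0 < f_FD(0.3, 1.0, beta) < 1.0  # fermion occupancy stays in (0, 1)
```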

Density of states You should remember how to use a density of states together with the above distributions to find properties of a system of non-interacting fermions or bosons

\langle X(\varepsilon) \rangle = \int D(\varepsilon) f(\varepsilon) X(\varepsilon) \, d\varepsilon \qquad (12.20)

As special cases of this, you should be able to find N (or given N find µ) or the internal energy. We had a few homeworks where you found entropy from the density of states, but I think that was a bit too challenging/confusing to put on the final exam.

Conditions for coexistence You should remember that when two phases are in coexistence, their temperatures, pressures, and chemical potentials must be identical, and you should be able to make use of this.

Equations not to remember

If you need a property of a particular system (the ideal gas, the simple harmonic oscillator), it will be given to you. There is no need, for instance, to remember the Stefan-Boltzmann law or the Planck distribution.

Heat capacity I do not expect you to remember the definition of heat capacity (although you probably will remember it).

C_V = T \left(\frac{\partial S}{\partial T}\right)_{V,N} \qquad (12.21)

= \left(\frac{\partial U}{\partial T}\right)_{V,N} \qquad (12.22)

C_p = T \left(\frac{\partial S}{\partial T}\right)_{p,N} \qquad (12.23)

I do expect you to be able to make use of these equations when given. Similarly, you should be able to show that the two expressions for C_V are equal, using the thermodynamic identity.

Enthalpy If I give you the expression for enthalpy (U + pV), you should be able to work with it, but since we didn't touch it in class, I don't expect you to remember what it is.

Any property of an ideal gas I don't expect you to remember any property of an ideal gas, including its pressure (i.e. the ideal gas law), free energy, entropy, internal energy, or chemical potential. You should be comfortable with these expressions, however, and if I provide them you should be able to make use of them.

Stefan-Boltzmann equation You should be able to make use of the expression

I = \sigma_B T^4 \qquad (12.24)

where I is the power radiated per area of surface, but you need not remember this.

Clausius-Clapeyron equation You should be able to make use of

\frac{dp}{dT} = \frac{s_g - s_\ell}{v_g - v_\ell} \qquad (12.25)

but I don't expect you to remember this. You should also be able to convert between this expression and the one involving latent heat using your knowledge of heat and entropy.

Carnot efficiency You need not remember the Carnot efficiency

\varepsilon = 1 - \frac{T_C}{T_H} \qquad (12.26)

but you should remember what an efficiency is, and should be able to pretty quickly solve for the net work and high-temperature heat for a Carnot engine by looking at it in T/S space. (Or similarly for a Carnot refrigerator.)

Density of states for particular systems You need not remember any expression for the density of states, e.g. for a gas. But given such an expression, you should be able to make use of it.

Fermi energy You need not remember any particular expression for the Fermi energy of a particular system, but should be able to make use of an expression for the Fermi energy of a system.


Chapter 13

Solutions

Here are the solutions for all the homework problems. Although these solutions are available even before homework is due, I recommend that you do your best to solve the homework problems without checking the solutions. I would encourage you to go so far as to not check the solutions until after you have turned in your homework. This will enable you to practice determining if your answer is correct without knowing what the correct answer is. This is what you will have to do after you have graduated (or on the exam)!

In any case, please include for each homework problem that you solve an explanation of what resources you made use of, whether it be help from other students, these solutions, or some other resource. Please explain in each case how you used the resource, e.g. did you look at the solutions to confirm that your answer was correct, or did you look at the solutions to see how to start the problem? Your grade on the homework will not be diminished, even if you essentially copied the solution.

I would also appreciate it if you let me know where you got stuck on a given problem. I may address your difficulty in class, or I may choose to change the homework problem for next year.

Please note that verbatim copying of any solution (whether it be these, or a homework from another student) is plagiarism, and is not permitted. If you wish to copy a solution, please use it as a guide for how to do the steps, but perform each step on your own, and ideally add some words of your own explaining what you are doing in each step.

Solution for week 1

PDF version of solutions

1. Energy, Entropy, and Probabilities

To begin, as the question prompts, we will stick the probabilities into our expressions for U and S. If you knew how everything was going to work out, you could stick them only into the ln P_i, but I'll act as though I haven't solved this N ≫ 1 times before.

U = \sum_i E_i \frac{e^{-\beta E_i}}{Z} \qquad (13.1)

S = -k_B \sum_i \frac{e^{-\beta E_i}}{Z} \ln\left(\frac{e^{-\beta E_i}}{Z}\right) \qquad (13.2)

At this point, we can see that there is a chance to simplify the ln.


S = -k_B \sum_i \frac{e^{-\beta E_i}}{Z} \left(\ln\left(e^{-\beta E_i}\right) - \ln Z\right) \qquad (13.3)

= -k_B \sum_i \frac{e^{-\beta E_i}}{Z} \left(-\beta E_i - \ln Z\right) \qquad (13.4)

= k_B \sum_i \frac{e^{-\beta E_i}}{Z} \left(\beta E_i + \ln Z\right) \qquad (13.5)

= k_B \beta \sum_i \frac{e^{-\beta E_i}}{Z} E_i + k_B \sum_i \frac{e^{-\beta E_i}}{Z} \ln Z \qquad (13.6)

= k_B \beta \underbrace{\sum_i \frac{e^{-\beta E_i}}{Z} E_i}_{U} + k_B \ln Z \underbrace{\sum_i \frac{e^{-\beta E_i}}{Z}}_{\sum_i P_i = 1} \qquad (13.7)

S = k_B \beta U + k_B \ln Z \qquad (13.8)

U = \frac{S}{k_B \beta} - \frac{1}{\beta} \ln Z \qquad (13.9)

I'll note that I would normally not show so many steps above in my own work, but am being extra thorough in this solution.
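Eq. (13.8) is easy to spot-check numerically for any small system; the sketch below (a two-level system with a made-up energy gap and β, working in units with k_B = 1) does so:

```python
# Spot check of eq. (13.8), S = k_B beta U + k_B ln Z, for an arbitrary
# two-level system (energy and beta illustrative; k_B = 1).
import math

E = [0.0, 1.3]
beta = 0.9
Z = sum(math.exp(-beta * e) for e in E)
P = [math.exp(-beta * e) / Z for e in E]
U = sum(e * pi for e, pi in zip(E, P))
S = -sum(pi * math.log(pi) for pi in P)  # Gibbs entropy with k_B = 1
assert abs(S - (beta * U + math.log(Z))) < 1e-12
```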

We are now at the point where we can start thinking thermo, and make use of the thermodynamic identity. To do that, let us "zap with d" to see what dU is in terms of dS and dβ, etc.

dU = T \, dS - p \, dV = T \, dS \qquad (dV = 0 \text{ here}) \qquad (13.10)

= \frac{dS}{k_B \beta} - \frac{S}{k_B \beta^2} d\beta + \frac{\ln Z}{\beta^2} d\beta - \frac{1}{\beta Z} dZ \qquad (13.11)

So far, this may not look promising to you, but perseverance pays off!

Note I threw out the dV because our statistical formalism only includes states with a given volume. Including the volume dependence is not complicated, but it requires us to take derivatives of E_i with respect to volume, which is a nuisance we can live without for now.

Let's begin by lumping together the two dβ terms. They look suspiciously similar to our previous expression for U, which is unshocking, since Eq. 13.9 showed that U was inversely proportional to β.

dU = \frac{dS}{k_B \beta} - \left(\frac{S}{k_B \beta} - \frac{\ln Z}{\beta}\right) \frac{d\beta}{\beta} - \frac{1}{\beta Z} dZ \qquad (13.12)

= \frac{dS}{k_B \beta} - \frac{U}{\beta} d\beta - \frac{1}{\beta Z} dZ \qquad (13.13)

Let’s start by unpacking this dZ:

dZ = Σ_i e^{−βE_i} (−E_i dβ)  (13.14)
   = −dβ Σ_i e^{−βE_i} E_i  (13.15)
   = −Z dβ Σ_i (e^{−βE_i}/Z) E_i  (13.16)
   = −Z U dβ  (13.17)

Yay for recognizing something we have computed before! Now let's put this back in our dU.

dU = dS/(k_B β) − (U/β) dβ − (1/(βZ)) (−Z U dβ)  (13.18)
   = dS/(k_B β)  (13.19)

Whew! Everything cancelled (as it had to, but one algebra error would mess this all up…), and we are left with a simple expression that does not have a dβ in sight! This is good, because we argued before that with volume held fixed


dU = T dS = (1/(k_B β)) dS  (13.20)
T = 1/(k_B β)  (13.21)
β = 1/(k_B T)  (13.22)

So we have just proven what you all knew about the relationship between β and T. This is valuable, because it establishes the connection between the theoretical Lagrange multiplier and the temperature defined in thermodynamics.
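None of the following code appears in the original solution; it is a quick numerical sanity check of Eqs. 13.19–13.21, using k_B = 1 and an arbitrary three-level spectrum of my own choosing. It computes U and the Gibbs entropy S directly from Boltzmann probabilities and confirms that dU/dS = 1/β by finite differences:

```python
import math

def thermo(beta, energies):
    # Canonical U and Gibbs entropy S for a discrete spectrum (k_B = 1).
    Z = sum(math.exp(-beta * E) for E in energies)
    ps = [math.exp(-beta * E) / Z for E in energies]
    U = sum(p * E for p, E in zip(ps, energies))
    S = -sum(p * math.log(p) for p in ps)
    return U, S

energies = [0.0, 1.0, 2.5]   # arbitrary example spectrum
beta = 0.7
dbeta = 1e-6
U1, S1 = thermo(beta - dbeta, energies)
U2, S2 = thermo(beta + dbeta, energies)
# Eqs. 13.19-13.21: dU = dS/(k_B beta), so dU/dS should equal 1/beta.
ratio = (U2 - U1) / (S2 - S1)
print(ratio, 1 / beta)
```

Varying β here plays the role of varying the temperature at fixed volume; both U and S change, but their ratio of changes is always T.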

The text uses a different (microcanonical) approach to establishing the connection between the statistical approach and the temperature that we define in thermodynamics.

2. Gibbs entropy is extensive

a) To begin, we remember the relationship between probabilities given in the problem:

P^{AB}_{ij} = P^A_i P^B_j  (13.23)

This means that the probability of finding system A in state i while system B is in state j is just the product of the separate probabilities. Now the entropy is

S_{AB} = −k_B Σ_α P_α ln P_α  (sum over all states α)  (13.24)
       = −k_B Σ_i Σ_j P^A_i P^B_j ln(P^A_i P^B_j)  (13.25)
       = −k_B Σ_i Σ_j P^A_i P^B_j (ln P^A_i + ln P^B_j)  (13.26)
       = −k_B Σ_i Σ_j P^A_i P^B_j ln P^A_i − k_B Σ_i Σ_j P^A_i P^B_j ln P^B_j  (13.27)
       = −k_B Σ_i P^A_i ln P^A_i Σ_j P^B_j − k_B Σ_i P^A_i Σ_j P^B_j ln P^B_j  (13.28)

where i runs over the states of A and j over the states of B. In Eq. 13.28, −k_B Σ_i P^A_i ln P^A_i is just S_A and Σ_j P^B_j = 1; likewise Σ_i P^A_i = 1 and −k_B Σ_j P^B_j ln P^B_j is S_B. Thus

S_{AB} = S_A · 1 + 1 · S_B  (13.29)
       = S_A + S_B  (13.30)

b) At this stage, we can basically use just words to solve this. We consider the system of N identical subsystems as a combination of a single one and N − 1 subsystems, thus S_N = S_1 + S_{N−1} from what we showed above. Repeating this argument N − 1 times shows that S_N = N S_1. There are other ways to show this, e.g. by repeatedly dividing the system (S_N = S_{N/2} + S_{N/2}).
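As a check that is not part of the original solution, the additivity result can be verified numerically for two uncorrelated systems with randomly chosen probability distributions (the sizes 4 and 5 and the seed are arbitrary; k_B = 1):

```python
import math
import random

def gibbs_entropy(ps):
    # S = -k_B sum_i p_i ln p_i, with k_B = 1.
    return -sum(p * math.log(p) for p in ps)

def random_dist(n):
    # A random normalized probability distribution over n states.
    ws = [random.random() for _ in range(n)]
    total = sum(ws)
    return [w / total for w in ws]

random.seed(0)
pA = random_dist(4)
pB = random_dist(5)
# Joint probabilities P_ij = P^A_i P^B_j for uncorrelated systems (Eq. 13.23).
pAB = [pa * pb for pa in pA for pb in pB]
print(gibbs_entropy(pAB), gibbs_entropy(pA) + gibbs_entropy(pB))
```

The two printed numbers agree to machine precision for any choice of distributions, which is the content of Eqs. 13.24–13.30.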

3. Boltzmann probabilities

a) At infinite temperature β = 0, which makes computing probabilities easy: they are all equal. Thus the probabilities are each 1/3.

b) At very low temperatures βε ≫ 1. Remember that the probabilities are given by


P_i = e^{−βE_i}/Z  (13.31)
Z = e^{βε} + 1 + e^{−βε}  (13.32)

We can see that our "small quantity" for a power series should be e^{−βε}, since that is the small thing in the partition function. We can start with the ground state, which we expect to be overwhelmingly occupied:

P_0 = e^{βε} / (e^{βε} + 1 + e^{−βε})  (13.33)
    = 1 / (1 + e^{−βε} + e^{−2βε})  (13.34)
    ≈ 1 − (e^{−βε} + e^{−2βε}) + (e^{−βε} + e^{−2βε})² + ···  (13.35)

At the last step, we used a power series approximation for 1/(1 − z). We now need to gather terms so that we keep all terms to the same order. In this case the best option is to keep all terms up to e^{−2βε}, since that way we will be able to account for the occupation of the highest energy state. Keeping these terms gives

P_0 ≈ 1 − e^{−βε} + O(e^{−3βε})  (13.36)

because the e^{−2βε} terms cancel each other out. Thus the ground state will be almost 100% occupied. When we look at the other two states we will get exponentially smaller probabilities:

P_1 = 1 / (e^{βε} + 1 + e^{−βε})  (13.37)
    = e^{−βε} · 1/(1 + e^{−βε} + e^{−2βε})  (13.38)
    = e^{−βε} P_0  (13.39)
    ≈ e^{−βε} (1 − e^{−βε})  (13.40)
    = e^{−βε} − e^{−2βε}  (13.41)

The middle state with zero energy is less occupied by precisely a factor of e^{−βε}. We could have predicted this from the Boltzmann ratio.

P_2 = e^{−βε} / (e^{βε} + 1 + e^{−βε})  (13.42)
    = e^{−βε} P_1  (13.43)
    ≈ e^{−2βε}  (13.44)

And the high energy state is hardly occupied at all, the same factor smaller than the previous state.

This solution kept all terms that were at least order e^{−2βε} for each probability, which resulted in a set of probabilities that add up to one. It would also have been reasonable to answer that P_0 ≈ 1 and P_1 ≈ e^{−βε}, and then discuss that actually the probability of being in the ground state is not precisely 1.

I could understand saying that P_2 ≈ 0, but ideally you should give a nonzero answer for each probability when asked about very low temperatures, because none of them are exactly zero. If you have an experiment that measures P_2 (perhaps state 2 has a distinctive property you can observe), then you will not find it to be zero at any temperature (unless you have poor resolution), and it is best to show how it scales.
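A numerical comparison (mine, not part of the original solution) makes the scaling concrete. With βε = 8 as an arbitrary example of "very low temperature", the exact probabilities land right on the expansions above:

```python
import math

beta_eps = 8.0                     # beta * epsilon >> 1
x = math.exp(-beta_eps)            # the small expansion parameter e^{-beta eps}
Z = math.exp(beta_eps) + 1 + math.exp(-beta_eps)   # Eq. 13.32
P0 = math.exp(beta_eps) / Z
P1 = 1 / Z
P2 = math.exp(-beta_eps) / Z
# Compare with the expansions: P0 ~ 1 - x, P1 ~ x - x^2, P2 ~ x^2.
print(P0, 1 - x)
print(P1, x - x * x)
print(P2, x * x)
```

Each pair of printed numbers agrees to better than a part in 10^9, and P_2 is nonzero (about 10^{-7} here), exactly as the text argues it must be.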

c) If we allow the temperature to be negative, then higher energy states will be more probable than lower energy states. If the temperature


is small and negative (which was not specified in the question), then the system will almost always be in the +ε energy state.

Another behavior with negative temperatures for this system is that U > 0. For positive temperatures, the internal energy only approaches zero as the temperature gets very high. If the temperature becomes negative, the energy can exceed zero. For other systems, of course, this will not be the case, but this will be true for any system in which the energy states are symmetrically arranged around zero.

Solution for week 2

PDF version of solutions

1. Entropy and Temperature (K&K 2.1)

a) We begin by finding the entropy given the provided multiplicity.

S(U,N) = k_B log g(U,N)  (13.45)
       = k_B log(C U^{3N/2})  (13.46)
       = k_B (log C + log(U^{3N/2}))  (13.47)
       = k_B log C + (3/2) N k_B log U  (13.48)

In the last two steps there, we made use of properties of log. If these are not obvious to you, you absolutely must take the time to review the properties of logarithms. They are absolutely critical to this course!

1/T = (∂S/∂U)_{V,N}  (13.50)
    = (3/2) N k_B (1/U)  (13.51)
U = (3/2) N k_B T  (13.52)

Yay!

b) We just need to take one more derivative, since we already found (∂S/∂U)_{V,N} in part (a).

(∂²S/∂U²)_{V,N} = −(3/2) N k_B (1/U²)  (13.53)
                < 0,  (13.54)

where in the last step we only needed to assume that N > 0 (natural for the number of particles) and that the energy U is real (which it always must be). Thus ends the solution. Because the second derivative of the entropy is always negative, the first derivative is monotonic, which means that the temperature (which is positive) will always increase if you increase the energy of the system, and vice versa.
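The chain from Eq. 13.46 to Eq. 13.52 can be checked numerically. The sketch below (not part of the original solution) uses k_B = 1 and arbitrary example values for C, N, and U, takes 1/T = dS/dU by finite differences, and confirms U = (3/2) N k_B T:

```python
import math

def entropy(U, N, C=2.0, kB=1.0):
    # Eq. 13.46: S = k_B log(C U^{3N/2}); C and kB are arbitrary examples.
    return kB * (math.log(C) + 1.5 * N * math.log(U))

N, kB = 10, 1.0
U0, h = 3.7, 1e-6
invT = (entropy(U0 + h, N) - entropy(U0 - h, N)) / (2 * h)   # 1/T = (dS/dU)_{V,N}
T = 1 / invT
print(U0, 1.5 * N * kB * T)   # Eq. 13.52: U = (3/2) N k_B T
```

Note that the constant C drops out of the derivative entirely, which is why it never appears in the final answer.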

2. Paramagnetism (K&K 2.2)

Okay, here we can understand the fractional magnetization if we think of Nm as being the maximum possible magnetization (all spins pointing the same way). The quantity s is defined as the total value of the spin. Because each spin has a value of ±1/2, twice s per particle also tells us the fractional magnetization.

To convert from S(s) to S(U) we need to relate the energy to the excess spin s. This relies on the energy expression

U = −2smB  (13.55)

which uses the equations given. At this point, it is a simple substitution of s = −U/(2mB):


S(U) = S_0 − k_B 2(−U/(2mB))²/N  (13.56)
     = S_0 − k_B U²/(2m²B²N)  (13.57)

To determine 1/kT, we just need to take a derivative:

1/T = (∂S/∂U)_V  (13.58)
    = −k_B U/(m²B²N)  (13.59)
1/(k_B T) = −U/(m²B²N)  (13.60)

At this point we have finished the problem, with just a bit of algebra. It is helpful at this stage to do a bit of checking. The left hand side here is an inverse energy. On the right hand side, mB is an energy, so we have an energy over an energy squared, so all is good. N, of course, is dimensionless. However, it is also extensive, as is U. This is good, because the left hand side is intensive, so the right hand side should be also.

The equilibrium fractional magnetization is thus given by

μ_tot/(Nm) = −U/(NmB)  (13.61)
           = −(1/(NmB)) (−m²B²N/(kT))  (13.62)
           = mB/(kT)  (13.63)

Thus the fractional magnetization is just equal to the ratio of mB (which is the energy of one spin in the magnetic field) to kT (which is a measure of the available energy).

Of interest: This relationship is very different from the one we saw in the previous problem! Previously, we saw the temperature being proportional to the internal energy, and here we see it as inversely proportional, meaning that as the energy approaches zero the temperature becomes infinite.

We also previously had an energy that was positive. Here we have a negative sign, which suggests that the energy should be negative in order to maintain a positive temperature. This relates to the energy of a single spin always being either positive or negative, with equal and opposite options.

This problem illustrates a weird phenomenon: if the energy is positive, then we must conclude that the temperature is negative. Furthermore, the temperature discontinuously passes from ∞ to −∞ as the energy passes through zero. There are different interpretations of these "negative temperature" states. You cannot reach them by heating a system (adding energy via heating), and they cannot exist in equilibrium if the system has contact with any quantity of material that can have kinetic energy. So I consider these to be unphysical (or non-equilibrium) states. Since temperature is an equilibrium property, I would not say that a negative temperature is physically meaningful. That said, there is an analogy that can be made to population inversion in a laser, which is a highly non-equilibrium system that is pretty interesting.
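A short numerical check (mine, not from the original solution) of the S(U) formula: with k_B = 1 and arbitrary values for m, B, and N, the finite-difference derivative of Eq. 13.57 reproduces Eq. 13.60, including the sign that makes negative U correspond to positive temperature:

```python
import math

N, m, B, kB, S0 = 100, 1.0, 0.5, 1.0, 0.0   # arbitrary example values, k_B = 1

def S(U):
    # Eq. 13.57: S(U) = S0 - k_B U^2 / (2 m^2 B^2 N)
    return S0 - kB * U**2 / (2 * m**2 * B**2 * N)

U, h = -3.0, 1e-6          # a negative U, so the temperature comes out positive
invT = (S(U + h) - S(U - h)) / (2 * h)       # 1/T = (dS/dU)
print(invT / kB, -U / (m**2 * B**2 * N))     # Eq. 13.60: 1/(k_B T) = -U/(m^2 B^2 N)
```

Flipping the sign of U in this check flips the sign of 1/T, which is the discontinuous jump from +∞ to −∞ described above.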

3. Quantum harmonic oscillator (K&K 2.3)

a) Given the multiplicity, we just need to take a logarithm and simplify.


S(N,n) = k log g(N,n)  (13.64)
       = k log((N + n − 1)! / (n!(N − 1)!))  (13.65)
       = k (log(N + n − 1)! − log n! − log(N − 1)!)  (13.66)
       ≈ k ((N + n − 1) log(N + n − 1) − n log n − (N − 1) log(N − 1))  (13.67)
       ≈ k ((N + n) log(N + n) − n log n − N log N)  (13.68)
       = k (N log((N + n)/N) + n log((N + n)/n))  (13.69)
       = k (N log(1 + n/N) + n log(1 + N/n))  (13.70)

You need not simplify your answer this far, but it is good to get practice simplifying answers, particularly involving logarithms. In particular, it is usually helpful at this point in a computation to verify that the entropy is indeed extensive. Both N (the number of oscillators) and n (the sum of all the quantum numbers of all the oscillators) are extensive quantities. Thus n/N and N/n are intensive, which is good because otherwise we could not add them to 1. Each term is now clearly extensive, and the entropy behaves as we expect.

b) Now we want to find U(T), which will require us to find S(U) (via simple substitution of n = U/ℏω) and T from a derivative of that.

S(U) = Nk log(1 + U/(Nℏω)) + Nk (U/(Nℏω)) log(1 + Nℏω/U)  (13.71)

Now we just have a derivative to take, and then a mess of algebra to simplify.

1/T = (∂S/∂U)_{N,V}  (13.72)
    = Nk · 1/(1 + U/(Nℏω)) · 1/(Nℏω)  (13.73)
      + Nk (1/(Nℏω)) log(1 + Nℏω/U)  (13.74)
      − Nk (U/(Nℏω)) · 1/(1 + Nℏω/U) · Nℏω/U²  (13.75)

And now to simplify…

ℏω/(kT) = 1/(1 + U/(Nℏω)) + log(1 + Nℏω/U) − (Nℏω/U)/(1 + Nℏω/U)  (13.76)
        = (Nℏω/U)/(1 + Nℏω/U) + log(1 + Nℏω/U) − (Nℏω/U)/(1 + Nℏω/U)  (13.77)
        = log(1 + Nℏω/U)  (13.78)

Well, didn't that simplify down nicely? The key was to multiply the first term by Nℏω/U so that it shared a denominator with the last term (and ended up being equal and opposite). Solving for U is not bad at all now; we'll just take an exponential of both sides:

e^{ℏω/kT} = 1 + Nℏω/U  (13.79)
Nℏω/U = e^{ℏω/kT} − 1  (13.80)
U = Nℏω/(e^{ℏω/kT} − 1)  (13.81)

Note: As I mentioned in the homework, this is the hard way to solve this problem. That said, it wasn't actually particularly hard; you just need to be comfortable doing algebra with logarithms, and simplifying annoying ratios.
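As an independent check (not in the original solution), the closed form Eq. 13.81 can be compared against a direct canonical average over oscillator levels E_n = nℏω. I use ℏω = 1 and k_B = 1 as arbitrary units and a temperature of my own choosing:

```python
import math

def U_closed(N, T, hw=1.0, kB=1.0):
    # Eq. 13.81: U = N hbar omega / (e^{hbar omega / k T} - 1)
    return N * hw / (math.exp(hw / (kB * T)) - 1)

def U_direct(N, T, hw=1.0, kB=1.0, nmax=2000):
    # Canonical average of N independent oscillators with levels E_n = n hw.
    beta = 1 / (kB * T)
    Z = sum(math.exp(-beta * hw * n) for n in range(nmax))
    avg = sum(hw * n * math.exp(-beta * hw * n) for n in range(nmax)) / Z
    return N * avg

print(U_closed(5, 0.8), U_direct(5, 0.8))
```

The truncation at nmax = 2000 is safe here because the Boltzmann factors die off exponentially; the two results agree far beyond the printed digits.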


Solution for week 3

PDF version of solutions

1. Free energy of a two state system (K&K 3.1, modified)

a) The partition function of this two-state system is very simple:

Z = Σ_s e^{−βE_s}  (sum over all states s)  (13.82)
  = e^0 + e^{−βε}  (13.83)
  = 1 + e^{−βε}  (13.84)

Now the free energy is just a log of this:

F = −kT log Z  (13.85)
  = −kT log(1 + e^{−βε})  (13.86)
  = −kT log(1 + e^{−ε/kT})  (13.87)

We can ask ourselves if this simplifies in any limits, and the easiest one is the low-temperature limit where e^{−ε/kT} ≪ 1. In this limit, the free energy is given by

F ≈ −kT e^{−ε/kT}  (13.88)

b) To solve for the internal energy and entropy we can make use of the definition of the free energy and the thermodynamic identity:

F ≡ U − TS  (13.89)
dF = dU − TdS − SdT  (13.90)
   = TdS − pdV − TdS − SdT  (13.91)
   = −SdT − pdV  (13.92)

which tells us that we can find the entropy by taking a derivative:

S = −(∂F/∂T)_V  (13.93)
  = k log(1 + e^{−ε/kT}) + kT · (e^{−ε/kT}/(1 + e^{−ε/kT})) · ε/(kT²)  (13.94)
  = k log(1 + e^{−ε/kT}) + (ε/T) · 1/(1 + e^{ε/kT})  (13.95)

I note here that entropy has dimensions of energy per temperature, and so do both of my terms, so we're looking good so far. It is also worth checking that our entropy is positive, which it should always be. In this case each term is always positive, so we are good. Interesting note: the first time I solved this I lost the minus sign, and things did not make sense! Now to find U I just need to add TS to my free energy.

U = F + TS  (13.96)
  = −kT log(1 + e^{−ε/kT}) + kT log(1 + e^{−ε/kT}) + ε/(1 + e^{ε/kT})  (13.97)

The two logarithm terms cancel, leaving

U = ε · 1/(1 + e^{ε/kT})  (13.98)

I will point out that you could have solved for the internal energy using the "derivative trick" or some other memorized formula involving the partition function, but in this class I want you to always go back to the physics.
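A quick consistency check of parts (a) and (b), which I add here and which is not part of the original solution: plugging the derived F, S, and U (Eqs. 13.87, 13.95, 13.98) into F = U − TS should give an exact identity at any temperature (ε = 1 and k = 1 are arbitrary units):

```python
import math

def two_state(T, eps=1.0, k=1.0):
    # Eqs. 13.87, 13.95, and 13.98 for the two-state system.
    x = math.exp(-eps / (k * T))
    F = -k * T * math.log(1 + x)
    S = k * math.log(1 + x) + (eps / T) / (1 + math.exp(eps / (k * T)))
    U = eps / (1 + math.exp(eps / (k * T)))
    return F, S, U

T = 0.6
F, S, U = two_state(T)
print(F, U - T * S)   # these should be identical
```

If you lose a minus sign the way the text describes, this is exactly the kind of check that catches it: the two printed numbers stop agreeing.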

c) Here is a nice plot of the entropy versus temperature. As you can see, the entropy asymptotes to a maximum value of k log 2 as the temperature increases (the dotted line is at that value). This is reasonable because there are only two microstates possible, so the maximum possible entropy is k log 2. You can think of the Boltzmann formulation with its log of the number of microstates. At high temperatures the system


Figure 13.1: Plot of entropy vs. temperature (S/k_B against k_B T/ε)

approaches this maximum entropy state, in which both states are equally probable.

Figure 13.2: Plot of entropy vs. energy (S/k_B against U/ε)

d) Now let us look at this plot of the entropy as a function of internal energy. The first thing you can note (and that the problem asks about) is that the internal energy only goes up to ε/2. This may be counterintuitive, since the maximum energy of the system is ε. The reason the internal energy maxes

out halfway there is because no matter how hot the system gets, it will never occupy the higher energy state with greater probability than the lower energy state, so we can never get it to have more than a 50% probability of being in that state with energy ε.

If we had approached this problem from a microcanonical perspective, where we choose the energy and then solve for the entropy and temperature, we could have specified an internal energy greater than ε/2, and would have found that the entropy decreases for these energies, and that the temperature therefore is negative. This is discussed in Schroeder 3.3 and K&K Appendix E.

2. Magnetic susceptibility

a) Before anything else, I'll define the energy of a single spin to be given by:

E_± = ∓mB  (13.99)

where the ± refers to the direction of the spin. We just need to find the partition function (and thus free energy) for a single spin and then multiply that free energy by N to find a total free energy.

Z = e^{βmB} + e^{−βmB}  (13.101)
F = −NkT ln Z  (13.102)
  = −NkT ln(e^{βmB} + e^{−βmB})  (13.103)

This gives us the free energy, and clearly F/(NkT) is only a function of βmB. This tells us that once we know the behavior at one temperature and all magnetic fields, we actually know the behavior at all temperatures.

b) To find the magnetization (which is a per volume quantity), we start by finding the average magnetization of a single spin:


〈m〉 = m P_+ − m P_−  (13.104)
    = m e^{βmB}/Z − m e^{−βmB}/Z  (13.105)
    = m (e^{βmB} − e^{−βmB}) / (e^{βmB} + e^{−βmB})  (13.106)
    = m tanh(βmB)  (13.107)

To find the magnetization M, we just need to know the total dipole moment per unit volume, which is just the mean dipole moment of a single spin times the number of spins per unit volume. Thus

M = n〈m〉 = nm tanh(mB/kT)  (13.108)

as given in the problem. To find the susceptibility, we now just need to take a derivative of this thing with respect to B while holding temperature fixed. I'll assume you don't know the derivative of a tanh, since I couldn't remember it myself.

χ = nm (∂/∂B)_T [(e^{βmB} − e^{−βmB}) / (e^{βmB} + e^{−βmB})]  (13.109)
  = nm ((e^{βmB} + e^{−βmB})/(e^{βmB} + e^{−βmB}) − (e^{βmB} − e^{−βmB})²/(e^{βmB} + e^{−βmB})²) βm  (13.110)
  = (nm²/kT) (1 − (e^{βmB} − e^{−βmB})²/(e^{βmB} + e^{−βmB})²)  (13.111)
  = (nm²/kT) ((e^{βmB} + e^{−βmB})² − (e^{βmB} − e^{−βmB})²) / (e^{βmB} + e^{−βmB})²  (13.112)
  = (nm²/kT) · 4/(e^{βmB} + e^{−βmB})²  (13.113)

It's not necessary to fully simplify this answer, but it is helpful to practice making your answers as pretty as possible, as it tends to make them easier to understand and easier to reason about.

Looking at this solution, you can see that the susceptibility is always positive (as it must be), and that it is proportional to the density of spins, as it must be (interesting note: when I solved this at first I omitted the factor of n, and only when doing this reasoning did I catch the error). You can see that at large B field (either positive or negative) the susceptibility vanishes (because everything is already pointing the right way). You can see that the susceptibility is an even function of B, which reflects the symmetry of the system under a redefinition of the "up" direction.

c) To solve this at high temperatures, we just need to do a power series expansion of χ for small values of βmB. Thus

e^{±βmB} ≈ 1 ± βmB  (13.114)
χ ≈ (nm²/kT) · 4/(1 + βmB + 1 − βmB)²  (13.115)
  = nm²/kT  (13.116)

and we can see that this is accurate even to first order in B. In fact the symmetry of the system prevents χ from having any odd terms in its power series. You might wonder why m shows up squared. That is because one factor of m comes from the magnetic moment causing the spins to align with the magnetic field, while the other factor comes from the fact that the total dipole moment is also proportional to m.

3. Free energy of a harmonic oscillator

a) We start as usual by writing down the partition function, from which finding the free energy is easy.


Z = Σ_{n=0}^∞ e^{−βnℏω}  (13.117)

Here we need a little trick, which is how you do a geometric sum. The trick involves multiplying by e^{−βℏω} (with no n) on both sides of the above. This gives us

e^{−βℏω} Z = Σ_{n=0}^∞ e^{−β(n+1)ℏω}  (13.118)

Now we can shift n by 1 in this infinite sum:

e^{−βℏω} Z = Σ_{n=1}^∞ e^{−βnℏω}  (13.119)

Now we can observe that the right hand side is just the original expression for Z missing the n = 0 term, after which the rest is algebra:

e^{−βℏω} Z = Z − 1  (13.120)
(1 − e^{−βℏω}) Z = 1  (13.121)
Z = 1/(1 − e^{−βℏω})  (13.122)
F = −kT ln Z  (13.123)
  = kT ln(1 − e^{−βℏω})  (13.124)

where in the last step I used the properties of a logarithm to put the denominator on top.
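The geometric-sum trick is easy to check numerically; the snippet below (my addition, with βℏω = 0.3 as an arbitrary value) compares the closed form of Eq. 13.122 against a truncated direct sum:

```python
import math

beta_hw = 0.3                                     # beta * hbar * omega, arbitrary
Z_closed = 1 / (1 - math.exp(-beta_hw))           # Eq. 13.122
Z_sum = sum(math.exp(-beta_hw * n) for n in range(200))   # truncated Eq. 13.117
print(Z_closed, Z_sum)
```

The truncation error is of order e^{−200 βℏω}, far below anything visible in the output.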

b) To solve for the entropy, we just need to remember (or derive) the total differential of the free energy:

dF = −SdT − pdV + μdN  (13.125)
S = −(∂F/∂T)_{V,N}  (13.126)

So let’s start taking a derivative!

Figure 13.3: Harmonic oscillator entropy vs. temperature, with the high-temperature form 1 + ln(k_B T/ℏω) shown for comparison

S = −k ln(1 − e^{−βℏω}) − (kT/(1 − e^{−βℏω})) (−e^{−βℏω} ℏω/(kT²))  (13.127)

where the trickiest part was getting the right number of factors of −1 in the second term.

S = −k ln(1 − e^{−βℏω}) + (ℏω/T) e^{−βℏω}/(1 − e^{−βℏω})  (13.128)
  = −k ln(1 − e^{−βℏω}) + (ℏω/T) · 1/(e^{βℏω} − 1)  (13.129)

which is the same as the equation in the problem. You can look at the plot to see what's going on. At temperatures much less than ℏω/k the entropy rapidly approaches zero, because the oscillator is essentially always in the ground state. If you plot further out, you can see that the entropy increases without bound. At high temperatures, you can work out that the entropy varies logarithmically with temperature (as demonstrated in the figure), which you could have


predicted if you remembered the equipartition theorem (or if I had taught it already?).
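The logarithmic high-temperature behavior is easy to confirm numerically. This check is my own (ℏω = 1 and k = 1 as arbitrary units); at kT = 50 ℏω the exact entropy of Eq. 13.129 should already sit very close to k(1 + ln(kT/ℏω)):

```python
import math

def S_osc(T, hw=1.0, k=1.0):
    # Eq. 13.129 with k_B = 1: S = -k ln(1 - e^{-b hw}) + (hw/T)/(e^{b hw} - 1)
    b = 1 / (k * T)
    return -k * math.log(1 - math.exp(-b * hw)) + (hw / T) / (math.exp(b * hw) - 1)

T = 50.0                           # kT >> hbar omega
print(S_osc(T), 1 + math.log(T))   # high-T form: k (1 + ln(k T / hbar omega))
```

At this temperature the two values differ only at the fifth decimal place, and the difference shrinks further as T grows, which is the logarithmic growth visible in Figure 13.3.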

4. Energy fluctuations

There are two ways to approach this problem: from the right or from the left. I will show how to approach it from the left, but you could alternatively start by taking a derivative of U and work from there.

We begin by writing down an expression for the mean energy (or internal energy).

U = 〈ε〉 = Σ_i E_i P_i  (13.130)

Now let's look at the variance that we are looking for:

〈(ε − U)²〉 = Σ_i (E_i − U)² P_i  (13.131)
           = Σ_i E_i² P_i − 2 Σ_i E_i U P_i + Σ_i U² P_i  (13.132)
           = Σ_i E_i² P_i − 2U² + U²  (13.133)
           = Σ_i E_i² P_i − U²  (13.134)
           = 〈ε²〉 − 〈ε〉²  (13.135)

So far we haven't done anything thermal; we have just used the properties of weighted averages.

Now let's start working from the right, since it's not obvious where to go from here (except that we'll need to involve the Boltzmann factor).

(∂U/∂T)_V = Σ_i E_i (∂P_i/∂T)_V  (13.136)
          = Σ_i E_i (∂/∂T (e^{−βE_i}/Z))_V  (13.137)
          = Σ_i E_i ((e^{−βE_i}/Z) E_i/(kT²) − (e^{−βE_i}/Z²) ∂Z/∂T)  (13.138)
          = Σ_i (e^{−βE_i}/Z) E_i²/(kT²) − Σ_i E_i (e^{−βE_i}/Z²) ∂Z/∂T  (13.139)

At this point we can interpret a couple of the sums physically, and then cope with the derivative of the partition function (which you have probably seen before).

(∂U/∂T)_V = 〈ε²〉/(kT²) − U (1/Z)(∂Z/∂T)  (13.140)

So let's do that derivative of the partition function.

(1/Z)(∂Z/∂T) = (1/Z) Σ_i ∂e^{−βE_i}/∂T  (13.141)
             = (1/Z) Σ_i e^{−βE_i} E_i/(kT²)  (13.142)
             = (1/(kT²)) Σ_i E_i e^{−βE_i}/Z  (13.143)
             = U/(kT²)  (13.144)

Thus we see that

(∂U/∂T)_V = 〈ε²〉/(kT²) − U²/(kT²)  (13.145)
          = 〈(ε − U)²〉/(kT²)  (13.146)


And thus we have shown that the fluctuations in the energy are proportional to the heat capacity C_V, as we were told to show. One can find a similar relationship for the fluctuation of any thermal variable, e.g. the fluctuation-dissipation theorem (which we will not cover in class).
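Eq. 13.146 can be tested on any spectrum at all. The sketch below is my addition (k = 1, arbitrary four-level spectrum and temperature): it computes C_V = dU/dT by finite differences and compares it to 〈(ε − U)²〉/(kT²) evaluated directly from the Boltzmann distribution.

```python
import math

def stats(T, energies, k=1.0):
    # Mean energy and energy variance in the canonical ensemble (k_B = 1).
    beta = 1 / (k * T)
    ws = [math.exp(-beta * E) for E in energies]
    Z = sum(ws)
    ps = [w / Z for w in ws]
    U = sum(p * E for p, E in zip(ps, energies))
    var = sum(p * (E - U) ** 2 for p, E in zip(ps, energies))
    return U, var

energies = [0.0, 0.7, 1.1, 3.0]    # arbitrary example spectrum
T, h = 2.0, 1e-5
U1, _ = stats(T - h, energies)
U2, _ = stats(T + h, energies)
Cv = (U2 - U1) / (2 * h)           # C_V = (dU/dT)_V by central differences
_, var = stats(T, energies)
print(Cv, var / T**2)              # Eq. 13.146 with k = 1
```

Any other spectrum or temperature works equally well, which is the point: the identity holds term by term, not just for special systems.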

5. Quantum concentration (K&K 3.8)

We need to start by finding the formula for the ground state energy of a particle in a box. It's all right if you just remember this, but it also isn't hard to figure out without doing any complicated math or boundary conditions. The particle needs to have a half-wavelength of L in each direction in order to fit in the box, thus λ = 2L for each direction. This means that the wave vector is given by k_x = k_y = k_z = ±2π/(2L) = ±π/L. Of course, the particle isn't in a traveling wave, but rather in a superposition of the ± versions of these traveling waves (i.e. a standing wave). The kinetic energy is given by

KE = p²/(2M)  (13.147)
   = ℏ²k²/(2M)  (13.148)
   = ℏ²π²/(2ML²)  (13.149)

Now as instructed we set the kinetic energy to kT and solve for the density, given by n = 1/L³.

kT = ℏ²π²/(2ML²)  (13.150)
   = (ℏ²π²/(2M)) n^{2/3}  (13.151)
n^{−2/3} = ℏ²π²/(2MkT)  (13.152)
n = (2MkT/(ℏ²π²))^{3/2}  (13.153)

As predicted, this differs from n_Q by a factor of (4/π)^{3/2}, which is of order unity. Its value is around 1.4.

Let's remind ourselves: this is the density at which quantum stuff becomes really important at a given temperature. In this problem, we showed that the density n_Q is basically the same as the density of a single particle that is confined enough that its kinetic energy is the same as the temperature.
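The "order unity" claim is a one-liner to verify. Here I compare Eq. 13.153 against the standard quantum concentration n_Q = (MkT/(2πℏ²))^{3/2} in arbitrary units (M = k_B = T = ℏ = 1, all of my choosing, since the ratio is dimensionless and independent of them):

```python
import math

M, kB, T, hbar = 1.0, 1.0, 1.0, 1.0                       # arbitrary units
n_est = (2 * M * kB * T / (hbar**2 * math.pi**2)) ** 1.5  # Eq. 13.153
n_Q = (M * kB * T / (2 * math.pi * hbar**2)) ** 1.5       # quantum concentration
print(n_est / n_Q, (4 / math.pi) ** 1.5)
```

Both printed values are about 1.44, confirming that the crude confinement estimate and n_Q agree up to the promised factor of order unity.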

6. One-dimensional gas (K&K 3.11)

Consider an ideal gas of N particles, each of mass M, confined to a one-dimensional line of length L. Find the entropy at temperature T. The particles have spin zero.

To find the entropy at temperature T we need first to consider the energy eigenstates of this system. We could use either periodic boundary conditions or a particle-in-a-box potential to confine the particles. The kinetic energy of a plane wave is given by

E_k = ℏ²k²/(2m)  (13.154)

For a particle in a box, an integer number of half-wavelengths must fit in the box:

L = nλ/2  (13.155)
  = (n/2)(2π/k)  (13.156)
k_n = πn/L.  (13.157)

Thus, the energy is given by

E_n = (ℏ²/(2m)) π²n²/L²  (13.158)
    = π²ℏ²n²/(2mL²)  (13.159)

No, you don't need to derive this, but in my opinion it is easier to derive than to remember.


If I were not writing up solutions, I would have done several of the steps above in my head.

Now that we have our energy, we can start thinking about how to find the partition function for a single particle. We can get away with this because the particles are non-interacting, so the total energy is just a sum of the energies of each individual particle.

Z_1 = Σ_{n=1}^∞ e^{−βE_n}  (13.160)
    = Σ_{n=1}^∞ e^{−βπ²ℏ²n²/(2mL²)}  (13.161)

This is the point where we typically step back and tell ourselves that k_B T ≫ π²ℏ²/(2mL²) (because the distance L is macroscopic, and we aren't looking at crazy-low temperatures), which means that we can turn the sum into an integral:

Z_1 ≈ ∫_0^∞ e^{−βπ²ℏ²n²/(2mL²)} dn  (13.162)

The smart move here is to do a u substitution, to get all the ugly stuff out of our integral.

u = √(β/(2m)) (πℏ/L) n,   du = √(β/(2m)) (πℏ/L) dn  (13.163)

This gives us a simple gaussian integral:

Z_1 ≈ √(2mkT) (L/(πℏ)) ∫_0^∞ e^{−u²} du.  (13.164)

The value of the gaussian integral here doesn't have any particular physical impact, since it is just a dimensionless numerical constant. It does, of course, impact the numerical value.

Gaussian integrals: You are welcome to look up the value of integrals like this, or memorize the answer (I always just remember that it's got a √π in it, which doesn't help much). I'll show you here how to find the value of a gaussian integral, which is a nice trick to be aware of.

I_G ≡ ∫_0^∞ e^{−u²} du  (13.165)
    = (1/2) ∫_{−∞}^∞ e^{−u²} du  (13.166)
(2I_G)² = (∫_{−∞}^∞ e^{−u²} du)²  (13.167)
        = (∫_{−∞}^∞ e^{−x²} dx)(∫_{−∞}^∞ e^{−y²} dy)  (13.168)
        = ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−(x²+y²)} dx dy  (13.169)
        = ∫_0^∞ ∫_0^{2π} e^{−r²} r dφ dr  (13.170)
        = 2π ∫_0^∞ e^{−r²} r dr  (13.171)

ξ = r²,  dξ = 2r dr  (13.172)

(2I_G)² = 2π ∫_0^∞ e^{−ξ} dξ/2  (13.173)
        = π  (13.174)
I_G = √π/2  (13.175)

The trick was simply to square the integral, and then move from Cartesian to polar coordinates. This only works because we are integrating to ∞.
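If you don't trust the polar-coordinates trick, brute-force quadrature settles it; this check is my addition (the cutoff at u = 10 and the midpoint rule are arbitrary choices, good enough because e^{−u²} is utterly negligible beyond the cutoff):

```python
import math

# Midpoint-rule estimate of the integral of e^{-u^2} from 0 to infinity,
# truncated at u = 10 where the integrand is ~e^{-100}.
steps = 100000
du = 10.0 / steps
I_G = sum(math.exp(-((i + 0.5) * du) ** 2) for i in range(steps)) * du
print(I_G, math.sqrt(math.pi) / 2)   # compare with Eq. 13.175
```

The two printed numbers agree to many decimal places, matching I_G = √π/2 ≈ 0.8862.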

Back to our original task, we have now found that

Z_1 ≈ √(πmkT/2) L/(πℏ)  (13.176)


To find the entropy, we will want to construct the Helmholtz free energy. We will need the entire partition function, which will have the N! in it to avoid double-counting states, since these are indistinguishable particles.

Z = Z_1^N / N!  (13.177)
F = −k_B T log Z  (13.178)
  = −k_B T log(Z_1^N / N!)  (13.179)
  = −k_B T (N log Z_1 − log N!)  (13.180)
  ≈ −k_B T (N log Z_1 − N log N + N)  (13.181)
  = −N k_B T (log Z_1 − log N + 1)  (13.182)
  = −N k_B T (log(√(πmkT/2) L/(πℏ)) − log N + 1)  (13.183)
  = −N k_B T (log(√(mkT/(2πℏ²)) L/N) + 1)  (13.184)

Here we are at a good point to check whether our answer makes sense. Firstly, we can check that F is indeed extensive. It is, since it is proportional to N, and the only other extensive quantities in it are the L and N in the logarithm, and they form a ratio, which is therefore intensive. We can also check dimensions.

We know that ℏ²k²/(2m) is an energy, which means that ℏ²/m has dimensions of energy times distance squared. The kT cancels the energy, and the square root turns the resulting one over distance squared into an inverse distance, which happily cancels with the L, so the argument of our logarithm is dimensionless.

Now we will get to the answer pretty soon! Recall that:

dF = −SdT − pdV (13.186)

(although the second term should have a dL for a 1D system) which means that

−S = (∂F/∂T)_{L,N}  (13.187)
S = −(∂F/∂T)_{L,N}  (13.188)
  = N k_B (log(√(mkT/(2πℏ²)) L/N) + 1) + N k_B T · 1/(2T)  (13.189)
  = N k_B (log(√(mkT/(2πℏ²)) L/N) + 3/2)  (13.190)

This is our final answer. It looks shockingly like the entropy of a 3D ideal gas, right down to the quantum length scale (which is no longer a quantum density), commonly called the "thermal de Broglie wavelength."
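The derivative step from Eq. 13.184 to Eq. 13.190 is exactly the kind of place an algebra slip hides, so here is a finite-difference check (my addition, with arbitrary units m = k_B = ℏ = 1 and example values for N, L, and T):

```python
import math

N, m, L, kB, hbar = 50, 1.0, 1000.0, 1.0, 1.0   # arbitrary example values

def F(T):
    # Eq. 13.184: F = -N kB T (log(sqrt(m kB T/(2 pi hbar^2)) L/N) + 1)
    return -N * kB * T * (math.log(math.sqrt(m * kB * T / (2 * math.pi * hbar**2)) * L / N) + 1)

def S_closed(T):
    # Eq. 13.190: S = N kB (log(sqrt(m kB T/(2 pi hbar^2)) L/N) + 3/2)
    return N * kB * (math.log(math.sqrt(m * kB * T / (2 * math.pi * hbar**2)) * L / N) + 1.5)

T, h = 2.0, 1e-6
S_fd = -(F(T + h) - F(T - h)) / (2 * h)   # S = -(dF/dT)_{L,N}
print(S_fd, S_closed(T))
```

The agreement between the finite-difference entropy and the closed form confirms in particular the extra 1/2 that turns the "+1" of F into the "+3/2" of S.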

Nasty logs: In computing the derivative of a nasty logarithm (which I did in my head above), you can use the following shortcut, provided you remember the properties of logarithms:

log(√(mkT/(2πℏ²)) L/N) = log(√(mkT/(2πℏ²))) + log(L/N)  (13.191)
                        = (1/2) log(mkT/(2πℏ²)) + log(L/N)  (13.192)
                        = (1/2) log T + (1/2) log(mk/(2πℏ²)) + log(L/N)  (13.193)

Now you can take a derivative of this, which is way easier than a derivative of the whole


mess, and clearly must give you the same answer. There are other ways to do it, but I find this particularly simple, and it has the advantage that it's the same kind of manipulation you're doing all the time anyhow, just to simplify your results. If you do this in your head (or on scratch paper), you can immediately discard any of the terms that will vanish when you take a derivative of them, which makes it much simpler than what I wrote down.

Solution for week 4

PDF version of solutions

1. Radiation in an empty box

a) To find the free energy, we'll first want to find the partition function, so we can take a log of it. That will involve summing over all the possible microstates of the entire system. This can be a little confusing, and there are indeed several ways that you could approach this other than the one I show here. Please feel free to try something else, and talk with me about whether it is also correct.

Summing over microstates will be similar to what we did for the ideal gas, and you've seen things like this before, but I think it's worth talking through again in detail. A microstate is defined by a single quantum number n_{n_x n_y n_z} for each set of n_x, n_y, n_z (plus polarization, which I'll ignore for now). The sum over all microstates then becomes

Z = Σ_{n_111=0}^∞ Σ_{n_112=0}^∞ Σ_{n_113=0}^∞ ··· Σ_{n_∞∞∞=0}^∞ e^{−βℏ(ω_111 n_111 + ω_112 n_112 + ω_113 n_113 + ··· + ω_∞∞∞ n_∞∞∞)}  (13.194)

So we've got an infinite number of nested sums (one sum for each mode), each going from 0 → ∞, and an exponential with an infinite number of n's added together. The energy separates (the modes are independent, or the photons don't interact with each other), which means that the sums separate.

Z = Σ_{n_{111}=0}^∞ e^{-βℏω_{111}n_{111}} Σ_{n_{112}=0}^∞ e^{-βℏω_{112}n_{112}} ··· Σ_{n_{∞∞∞}=0}^∞ e^{-βℏω_{∞∞∞}n_{∞∞∞}}   (13.195)

Each of these sums can now be done independently...

Z = (Σ_{n_{111}=0}^∞ e^{-βℏω_{111}n_{111}}) (Σ_{n_{112}=0}^∞ e^{-βℏω_{112}n_{112}}) ··· (Σ_{n_{∞∞∞}=0}^∞ e^{-βℏω_{∞∞∞}n_{∞∞∞}})   (13.196)

and this turns our nested sums into a product of sums. We can now write down this product explicitly in a simpler way, and we can plug in the expression for ω_{n_x n_y n_z}. Note also that for every wavevector \vec k, there are two polarizations of photon, so we need to square it all. I left that out above, because I thought we had enough indices to worry about.

Z = [ ∏_{n_x=1}^∞ ∏_{n_y=1}^∞ ∏_{n_z=1}^∞ Σ_{n=0}^∞ e^{-β(hc/2L)√(n_x² + n_y² + n_z²) n} ]²   (13.197)

Fortunately, the innermost sum over n we have already solved in class (we recognize it as a geometric sum). Thus we get


Z = [ ∏_{n_x=1}^∞ ∏_{n_y=1}^∞ ∏_{n_z=1}^∞ 1/(1 - e^{-β(hc/2L)√(n_x² + n_y² + n_z²)}) ]².   (13.198)

This still probably doesn't leave you dancing with joy. Fortunately, there are quite a few simplifications left. Before we start simplifying further, let's go ahead and look at the Helmholtz free energy, which will turn our products into more familiar summations.

F = -kT ln Z   (13.199)
= -2kT ln ∏_{n_x=1}^∞ ∏_{n_y} ∏_{n_z} 1/(1 - e^{-β(hc/2L)√(n_x² + n_y² + n_z²)})   (13.200)
= 2kT Σ_{n_x=1}^∞ Σ_{n_y} Σ_{n_z} ln(1 - e^{-β(hc/2L)√(n_x² + n_y² + n_z²)})   (13.201)

This is starting to look more friendly. As usual (and as the problem instructs us), we'd like to take a large-volume approximation, which will mean that βhc/L ≪ 1. In this limit, we can turn our summation into an integration.

F ≈ 2kT ∫∫∫₀^∞ ln(1 - e^{-β(hc/2L)√(n_x² + n_y² + n_z²)}) dn_x dn_y dn_z   (13.202)
= (kT/4) ∫∫∫_{-∞}^∞ ln(1 - e^{-β(hc/2L)√(n_x² + n_y² + n_z²)}) dn_x dn_y dn_z   (13.203)

This integral is begging to be done in spherical coordinates, and I'll just define n ≡ √(n_x² + n_y² + n_z²) for that integral. I'll also divide by 8 and do the integral over all "n" space, instead of just the positive-n space.

F ≈ (kT/4) ∫₀^∞ ln(1 - e^{-β(hc/2L)n}) 4πn² dn   (13.204)
= 8πkT (LkT/hc)³ ∫₀^∞ ln(1 - e^{-ξ}) ξ² dξ   (13.205)
= 8πV (kT)⁴/(h³c³) ∫₀^∞ ln(1 - e^{-ξ}) ξ² dξ   (13.206)

At this point we have pretty much solved for the free energy of a vacuum at temperature T. We know it should scale as T⁴, and that it should scale with volume. The latter should have been obvious, since it's the only way that the free energy could be extensive. You might worry about the definite integral, but it is just a dimensionless number! Yes, it matters if we want to know precisely how much radiation to expect from a black body, but we could solve this integral numerically, or we could have just done an experimental measurement to see what this number is. Wolfram Alpha tells us that this number is -2.165. You could tell it should be negative because the thing in the ln is always less than one, meaning the ln is always negative. You might be weirded out that the free energy density is negative, but this just means it is dominated by entropy, since the entropy term is always negative.

Extra fun: Can any of you find an elegant solution to the above definite integral? If so, please share it with me, and I'll share it with the class.

∫₀^∞ ln(1 - e^{-ξ}) ξ² dξ = -π⁴/45   (13.207)

but I don't know how to prove this except numerically.
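If you want to check the number yourself, a few lines of Python will do it (the cutoff and grid below are just convenient round choices of mine, not from the text); the comment notes the series expansion that is one way to see where π⁴/45 comes from.

```python
import math

def integrand(xi):
    # xi^2 ln(1 - e^{-xi}); the limit as xi -> 0 is 0 (it behaves like xi^2 ln xi)
    if xi == 0.0:
        return 0.0
    return xi**2 * math.log(1.0 - math.exp(-xi))

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

numeric = simpson(integrand, 0.0, 60.0, 60000)  # the tail beyond 60 is ~e^{-60}
# one proof: ln(1-e^{-x}) = -sum_k e^{-kx}/k, and each term integrates to -2/k^4,
# so the integral is -2*zeta(4) = -pi^4/45
exact = -math.pi**4 / 45.0
print(numeric, exact)  # both about -2.1646
```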

b) We can solve for the entropy straight off:


S = -(∂F/∂T)_V   (13.208)
= -32πkV (kT/hc)³ ∫₀^∞ ln(1 - e^{-ξ}) ξ² dξ   (13.209)

c) We also want to know the internal energy

U = F + TS   (13.210)
= (8 - 32) πV (kT)⁴/(h³c³) ∫₀^∞ ln(1 - e^{-ξ}) ξ² dξ   (13.211)
U/V = -24π (kT)⁴/(h³c³) ∫₀^∞ ln(1 - e^{-ξ}) ξ² dξ   (13.212)

So the internal energy scales the same as the free energy, but is positive, as we would expect.
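As a quick cross-check (my addition, using standard SI constants), combining eq. (13.212) with the value of the integral should reproduce the familiar radiation energy constant a in U/V = aT⁴:

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J s
c = 2.99792458e8    # speed of light, m/s

integral = -math.pi**4 / 45.0  # the dimensionless integral from eq. (13.207)
# U/V = -24 pi (kT)^4/(h^3 c^3) * integral, as in eq. (13.212)
a_from_solution = -24.0 * math.pi * k**4 / (h**3 * c**3) * integral
a_standard = 8.0 * math.pi**5 * k**4 / (15.0 * h**3 * c**3)
print(a_from_solution, a_standard)  # both ~7.57e-16 J m^-3 K^-4
```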

2. Surface temperature of the earth

This problem comes down to balancing the radiation absorbed by the earth with the radiation emitted by the earth. Interestingly, the answer won't change if we drop the assumption that the earth is a perfect black body, so long as its absorption is independent of frequency (which isn't true). The assumption that it remains constant temperature over day and night is also a bit weak, but fractionally the variation of temperature is actually relatively small.

The total power radiated by the sun is

P_⊙ = σ_B T_⊙⁴ 4πR_⊙²   (13.213)

Now, the fraction f of that power that is absorbed by the Earth is given by

f = (cross-sectional area of earth)/(area of sphere with Earth's orbital radius)   (13.214)
= πR_E²/(4π AU²)   (13.215)

Okay, now we just need the power radiated by the earth:

P_E = σ_B T_E⁴ 4πR_E²   (13.216)

Setting the power absorbed by the earth equal to the power radiated by the earth gives

(πR_E²/(4π AU²)) σ_B T_⊙⁴ 4πR_⊙² = σ_B T_E⁴ 4πR_E²   (13.217)
T_E⁴ = (R_E²/(4 AU²)) T_⊙⁴ (R_⊙²/R_E²)   (13.218)
= (1/4) T_⊙⁴ (R_⊙²/AU²)   (13.219)
T_E = √(R_⊙/(2 AU)) T_⊙   (13.220)
= √((7×10¹⁰ cm)/(3×10¹³ cm)) × 5800 K   (13.221)
≈ 280 K   (13.222)

This is a bit cold, but when you consider the approximations it isn't a bad estimate of the Earth's temperature. This neglects the power from radioactivity, and also the greenhouse effect (which is a violation of the assumption that the absorption and emission have the same proportion at all wavelengths).
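The arithmetic in eq. (13.221) is easy to verify:

```python
import math

T_sun = 5800.0  # K, surface temperature of the sun
R_sun = 7e10    # cm, radius of the sun
AU = 1.5e13     # cm, earth-sun distance

# T_E = sqrt(R_sun / (2 AU)) * T_sun, eq. (13.220)
T_earth = math.sqrt(R_sun / (2.0 * AU)) * T_sun
print(T_earth)  # about 280 K
```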

3. Pressure of thermal radiation Okay, let's begin with the free energy from class:


F = 8πV (kT)⁴/(h³c³) ∫₀^∞ ln(1 - e^{-ξ}) ξ² dξ   (13.223)

Since this is proportional to volume, its derivative with respect to volume is really easy.

p = -8π (kT)⁴/(h³c³) ∫₀^∞ ln(1 - e^{-ξ}) ξ² dξ   (13.224)

Unfortunately, we've already done our summation over all the modes, so this didn't help us as much as we might have hoped for part (a). To get an expression in terms of the modes, we need to go back to the expression for free energy that had Σ_{n_x} Σ_{n_y} Σ_{n_z} and recognize that as a sum over modes.

F = kT Σ_j ln(1 - e^{-βℏω_j})   (13.225)

Now we can take our derivative and hope to get an expression involving photons in modes:

p = -(∂F/∂V)_T   (13.226)
= -kT Σ_j (-e^{-βℏω_j}/(1 - e^{-βℏω_j})) (-βℏ dω_j/dV)   (13.227)
= -Σ_j (e^{-βℏω_j}/(1 - e^{-βℏω_j})) ℏ dω_j/dV   (13.228)
= -Σ_j ⟨n_j⟩ ℏ dω_j/dV   (13.229)

So yay. It worked out as we were told, which is also reasonably intuitive: the pressure is just the total pressure from all the photons.

Now we want to find the actual pressure, and relate it to the internal energy. We can do this two ways. One is to use the above expression for total pressure and compare with U from class. This is totally correct and fine. I'll use a different approach here, since it may be less obvious, and may give insight.

We'll take this expression we just found, and see how the mode frequencies change with volume. Recall from class that:

ω_{n_x n_y n_z} = (πc/L) √(n_x² + n_y² + n_z²)   (13.230)
= (πc/V^{1/3}) √(n_x² + n_y² + n_z²)   (13.231)

Thus we have that

dω_{n_x n_y n_z}/dV = -(1/3) ω_{n_x n_y n_z}/V   (13.232)

Putting this into our pressure, we see that

p = -Σ_j ⟨n_j⟩ ℏ dω_j/dV   (13.233)
= -Σ_j ⟨n_j⟩ ℏ (-(1/3) ω_j/V)   (13.234)
= (1/3V) Σ_j ⟨n_j⟩ ℏω_j   (13.235)
= U/3V   (13.236)

which tells us that the pressure is one third of the internal energy per volume. You might want a dimension check: work (which is an energy) is dW = p dV, which will remind you that pressure is energy per volume. If you have doubts, you could remind yourself that pressure is force per area, but force is energy per distance (going back to work from classical mechanics).

The factor of 1/3 comes from the fact that we're in three dimensions, and the linear dispersion relation for light.


4. Heat shields

The middle plane is going to absorb the same rate of energy from the hot side that it gives to the cold side. The net transfer from hot to middle (per area) will be

J_h = σ_B (T_h⁴ - T_m⁴)   (13.237)

while the transfer from middle to cold will be

J_c = σ_B (T_m⁴ - T_c⁴)   (13.238)

Setting these equal tells us that

T_h⁴ - T_m⁴ = T_m⁴ - T_c⁴   (13.239)
2T_m⁴ = T_h⁴ + T_c⁴   (13.240)
T_m = ((T_h⁴ + T_c⁴)/2)^{1/4}   (13.241)

Now we can see that the power transferred per area is given by

J_h = σ_B (T_h⁴ - T_m⁴)   (13.242)
= σ_B (T_h⁴ - (T_h⁴ + T_c⁴)/2)   (13.243)
= σ_B (T_h⁴ - T_c⁴)/2   (13.244)

which is precisely half of the power that would have been transferred without the heat shield.
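A tiny numeric sketch (the temperatures below are arbitrary picks of mine) confirms both the middle temperature and the factor of one half:

```python
sigma_B = 5.67e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def shield_flux(T_h, T_c):
    # steady state: the middle plane's T^4 is the average, eq. (13.240),
    # and the flux through is hot-to-middle, eq. (13.242)
    Tm4 = 0.5 * (T_h**4 + T_c**4)
    return sigma_B * (T_h**4 - Tm4)

def bare_flux(T_h, T_c):
    # flux with no shield at all
    return sigma_B * (T_h**4 - T_c**4)

ratio = shield_flux(500.0, 300.0) / bare_flux(500.0, 300.0)
print(ratio)  # 0.5
```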

5. Heat capacity of vacuum

a) We can begin with either the entropy or the internal energy, since we know that

C_V = (∂U/∂T)_V   (13.245)
= T (∂S/∂T)_V   (13.246)

Now from the first problem, we know that

U/V = (π²/15) (kT)⁴/(ℏ³c³)   (13.247)

so let us just start with this. We just need a temperature derivative.

C_V = V (4π²/15) ((kT)³/(ℏ³c³)) k   (13.248)

b) At this point we just need to plug in numbers. I will do a cubic centimeter since I have a solid intuition for 1 mL. It's the amount of liquid vitamin D to give a toddler.

Because I know room temperature in eV, I'll be using some idiosyncratic units. I'd encourage you to use whichever units you have the most familiarity with. To start, let's find the ratio

kT/ℏc = 25×10⁻³ eV / (6.6×10⁻¹⁶ eV·s × 3×10¹⁰ cm·s⁻¹)   (13.249)
≈ 1.26×10³ cm⁻¹   (13.250)

where you really shouldn't trust all my digits given how I rounded things. Now since π² ≈ 10...

C_V = (1 cm³) (4×10/15) (1.26×10³ cm⁻¹)³ k_B   (13.251)
≈ 7×10⁻¹⁴ J/K   (13.252)

In comparison, the heat capacity of one mL of water is 4.2 J/K. So the vacuum indeed has a low heat capacity.
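Working in SI units makes the arithmetic easy to check: kT/ℏc at room temperature is about 1.26×10³ cm⁻¹, which puts C_V for one cubic centimeter of vacuum near 10⁻¹³ J/K.

```python
import math

k = 1.380649e-23        # J/K
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s

T = 0.025 / 8.617e-5  # K, the temperature at which kT = 25 meV (about 290 K)
V = 1e-6              # m^3, one cubic centimeter

ratio = k * T / (hbar * c)  # about 1.27e5 m^-1, i.e. ~1.26e3 cm^-1
C_V = V * (4.0 * math.pi**2 / 15.0) * ratio**3 * k  # eq. (13.248)
print(ratio / 100.0, C_V)  # ~1.26e3 cm^-1 and ~7e-14 J/K
```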


Solution for week 5

PDF version of solutions

1. Centrifuge The key concept here is to recognize that when working with a rotating system such as a centrifuge, there is classically a centrifugal potential energy. This energy serves as the external chemical potential, and allows us to solve for the properties of the gas by setting the total chemical potential equal everywhere, and solving for the internal chemical potential, which we can relate to the concentration.

Figure 13.4: Centrifugal Force by Randall Munroe, atxkcd.

First we need the centrifugal potential. You may remember this from Central Forces, but you can also solve for it if you remember how to derive the centripetal acceleration. This comes from the second derivative of the displacement of an object moving in a circle.

\vec r(t) = R cos(ωt) x̂ + R sin(ωt) ŷ   (13.253)
d²\vec r/dt² = -ω² \vec r   (13.254)
= \vec F/m   (13.255)

So the centrifugal force is mω²\vec r (which is outward, opposite the centripetal force). The centrifugal work is the integral of the centrifugal force, which gives us a centrifugal potential energy of V = -½mω²r². The potential is negative because the force is outwards.

Now that we have a potential energy, we can find the internal chemical potential from the total:

μ_tot = μ_int - ½mω²r²,   so   μ_int = μ_tot + ½mω²r²   (13.256)

Finally, we need to use the expression for the chemical potential of an ideal gas.

μ_int = k_B T ln(n/n_Q)   (13.257)

or alternatively

n = n_Q e^{βμ_int}   (13.258)

Now we can just plug in our μ_int(r) to find n(r):

n(r) = n_Q e^{β(μ_tot + ½mω²r²)}   (13.259)


We were asked to find the ratio between n(r) and n(0) in order to avoid having to solve for μ_tot or to specify something banal like the total number of molecules.

n(r)/n(0) = n_Q e^{β(μ_tot + ½mω²r²)} / (n_Q e^{βμ_tot})   (13.260)
= e^{½βmω²r²}   (13.261)

As you would expect, the density is higher at larger radii, since the centrifugal force compresses atoms near the edge.

2. Potential energy of gas in gravitational field We can begin by writing down (from class notes) the expression for the (internal) chemical potential of an ideal gas.

n = n_Q e^{βμ_int}   (13.262)

In this case the external potential is linear

μ_ext = Mgh   (13.263)

The internal chemical potential is the total minus the external, telling us that

n(h) = n_Q e^{β(μ_tot - Mgh)}   (13.264)
= n(0) e^{-βMgh}   (13.265)

We can find the average potential energy by finding the total potential energy and dividing by the number of atoms.

⟨V⟩ = ∫Mgh n(h) dh / ∫n(h) dh   (13.266)
= Mg ∫₀^∞ h e^{-βMgh} dh / ∫₀^∞ e^{-βMgh} dh   (13.267)
= (Mg/βMg) ∫₀^∞ ξ e^{-ξ} dξ / ∫₀^∞ e^{-ξ} dξ   (13.268)
= kT (1!/0!)   (13.269)
= kT   (13.270)

So that is interesting: our potential energy per atom is just kT, as if we had two degrees of freedom according to equipartition (which we don't). In this case, the equipartition theorem doesn't apply, because the potential is not quadratic.
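You can see ⟨Mgh⟩ = kT numerically as well; the values of kT and Mg below are arbitrary choices of mine, since the answer shouldn't depend on them.

```python
import math

def mean_potential(beta, Mg, h_max, n):
    # <Mgh> = ∫ Mg h e^{-beta Mg h} dh / ∫ e^{-beta Mg h} dh, by the midpoint rule
    dh = h_max / n
    num = den = 0.0
    for i in range(n):
        h = (i + 0.5) * dh
        w = math.exp(-beta * Mg * h)
        num += Mg * h * w
        den += w
    return num / den

kT = 2.0  # an arbitrary temperature in arbitrary energy units
avg = mean_potential(beta=1.0 / kT, Mg=3.0, h_max=200.0, n=100000)
print(avg)  # approaches kT = 2.0
```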

My favorite integral I'll just mention that I used here (twice!) my very favorite definite integral:

∫₀^∞ uⁿ e^{-u} du = n!   (13.271)

You can prove this using integration by parts a few times, if need be. But it's really handy to remember this. It comes up very often when working on the hydrogen atom, for instance. And yes, I learned this from an integral table.

To find the heat capacity we can start by writing down the internal energy by adding the kinetic and potential energies:

U = (3/2)NkT + NkT   (13.272)
= (5/2)NkT   (13.273)

Then we can find the heat capacity by taking a temperature derivative, noting that the volume is essentially held constant (perhaps infinite, since the column of gas extends upward with no bound?).

C_V = (∂U/∂T)_{V,N}   (13.274)
= (5/2)Nk   (13.275)

3. Active transport The question here is to find Δμ_int across the cell membrane. This must be equal and opposite to Δμ_ext, where the external chemical potential will be the electrostatic potential across the cell wall. We are given that

n_inside = 10⁴ n_outside   (13.276)
n_inside/n_outside = 10⁴   (13.277)
= n_Q e^{βμ_int,inside} / (n_Q e^{βμ_int,outside})   (13.278)
= e^{βΔμ_int}   (13.279)
Δμ_int = kT ln(10⁴)   (13.280)

At this point it is convenient to know that at room temperature kT ≈ 1/40 eV. Of course, you could do this in any unit system you are comfortable with.

Δμ_int = (1/40) ln(10⁴) eV ≈ 0.23 eV   (13.281)

Getting the sign right is a different story. What we found was μ_int,inside - μ_int,outside. The voltage we are considering would be the external chemical potential, which would swap the sign, suggesting the outside should be at a higher electrostatic potential than the inside. More accurately, the cells are actively pumping K⁺ into the cell.
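The number in eq. (13.281) takes one line to confirm:

```python
import math

kT_room = 1.0 / 40.0  # eV, room temperature
ratio = 1e4           # inside/outside concentration ratio

delta_mu = kT_room * math.log(ratio)  # eq. (13.280) in eV
print(delta_mu)  # about 0.23 eV
```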

Geeky anecdote When I was an undergrad, I went through a phase in which I attempted to become familiar with everyday temperatures in electron volts. "It's a balmy 25.8 millielectron volts today, but in the library it's a chilly 24.9 millielectron volts."

4. Gibbs sum for a two level system Okay, we have three microstates, which I'll call 0, 1, and 2.

N₀ = 0   (13.282)
N₁ = N₂ = 1   (13.283)
E₀ = E₁ = 0   (13.284)
E₂ = ε   (13.285)

Now we remind ourselves that the activity λ is defined by

λ ≡ e^{βμ}   (13.286)

a) The Gibbs sum is just

Z = Σ_i e^{-β(E_i - μN_i)}   (13.287)
= Σ_i e^{β(μN_i - E_i)}   (13.288)
= Σ_i λ^{N_i} e^{-βE_i}   (13.289)
= 1 + λ + λe^{-βε}   (13.290)

b) The average occupancy of the system is given by

⟨N⟩ = Σ_i P_i N_i   (13.291)
= Σ_i N_i λ^{N_i} e^{-βE_i} / Z   (13.292)
= (λ + λe^{-βε})/(1 + λ + λe^{-βε})   (13.293)

c) This is just the probability of the state being at energy ε, which is

P_i = e^{-β(E_i - μN_i)}/Z   (13.294)
P_ε = λe^{-βε}/Z   (13.295)

The average number in this state is equal to the probability.

d) The thermal average energy is even easier: since the energies are zero and ε, the average is just the probability of energy ε times ε.

⟨E⟩ = (λe^{-βε}/Z) ε   (13.296)

e) Now we're adding one more microstate to the system, a microstate with E = ε and N = 2. Our Gibbs sum will just have this one additional term in it.

Z = Σ_i λ^{N_i} e^{-βE_i}   (13.297)
= 1 + λ + λe^{-βε} + λ²e^{-βε}   (13.298)
= (1 + λ)(1 + λe^{-βε})   (13.299)

The separation now comes about because we can now separate the first orbital from the second, and the energy and number are both the sum of the value for the first orbital plus the value for the second orbital.

5. Carbon monoxide poisoning The main idea here is that because the oxygen and carbon monoxide are in equilibrium with air, we can determine the activities (or equivalent chemical potentials) of the molecules from the air.

a) We are looking for probabilities of occupancy, so as usual let's start with a Gibbs sum. Right now we only have oxygen, so

Z = 1 + λ_O₂ e^{-βε_A}   (13.300)
P_O₂ = λ_O₂ e^{-βε_A} / (1 + λ_O₂ e^{-βε_A})   (13.301)
= 1/(1 + e^{βε_A}/λ_O₂)   (13.302)

We are working to solve for ε_A here...

1/P_O₂ = 1 + e^{βε_A}/λ_O₂   (13.303)
e^{βε_A} = λ_O₂ (1/P_O₂ - 1)   (13.304)
ε_A = kT ln(λ_O₂ (1/P_O₂ - 1))   (13.305)
= 26.7 meV × ln(10⁻⁵ (1/0.9 - 1))   (13.306)
= 26.7 meV × (-13.7)   (13.307)
= -366 meV   (13.308)

where I used k_B = 8.617×10⁻² meV K⁻¹ and body temperature T = 310.15 K to find kT in meV. This binding energy is quite high, more than a third of an eV! Covalent bonds tend to be a few eV in strength, but they don't reverse without significant effort, whereas it's really important for oxygen to spontaneously unbind from hemoglobin.

b) When we add in carbon monoxide, our hemoglobin will have three possible states, so it will look a heck of a lot like our last homework problem.

Z = 1 + λ_O₂ e^{-βε_A} + λ_CO e^{-βε_B}   (13.309)

We are now asking how strongly carbon monoxide has to bind in order to allow only 10% of the hemoglobin to be occupied by oxygen. So we are again going to be looking at the probability of oxygen occupying the hemoglobin

P_O₂ = λ_O₂ e^{-βε_A} / (1 + λ_O₂ e^{-βε_A} + λ_CO e^{-βε_B})   (13.310)

We are looking to isolate ε_B now, since we are told everything else.


1 + λ_O₂ e^{-βε_A} + λ_CO e^{-βε_B} = (λ_O₂/P_O₂) e^{-βε_A}   (13.311)

And moving everything to one side gives

λ_CO e^{-βε_B} = λ_O₂ (1/P_O₂ - 1) e^{-βε_A} - 1   (13.312)
e^{-βε_B} = (λ_O₂ (1/P_O₂ - 1) e^{-βε_A} - 1)/λ_CO   (13.313)

and at long last

ε_B = -kT ln((λ_O₂ (1/P_O₂ - 1) e^{-βε_A} - 1)/λ_CO)   (13.314)
= -26.7 meV ln(10² × 9 e^{13.7} - 10⁷)   (13.315)
= -26.7 meV ln(7.9×10⁸)   (13.316)
= -26.7 meV × 20.5   (13.317)
= -547 meV   (13.318)

So the carbon monoxide doesn't need to be much favored over oxygen energetically (in terms of ratio) in order to crowd out almost all the oxygen, even though there is way less carbon monoxide available. Of course, it is not ratios of energies that matter here, so much as energy differences, and that is about 7kT, which is hardly a small difference.
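Both binding energies can be checked directly from the given activities and occupancies:

```python
import math

kT = 8.617e-5 * 310.15  # eV; k_B in eV/K times body temperature, about 0.0267 eV
lam_O2, lam_CO = 1e-5, 1e-7  # activities given in the problem
P1 = 0.9  # oxygen occupancy before carbon monoxide shows up

eps_A = kT * math.log(lam_O2 * (1.0 / P1 - 1.0))  # eq. (13.305)

P2 = 0.1  # oxygen occupancy once CO competes
eps_B = -kT * math.log(
    (lam_O2 * (1.0 / P2 - 1.0) * math.exp(-eps_A / kT) - 1.0) / lam_CO
)  # eq. (13.314)
print(eps_A, eps_B)  # about -0.366 eV and -0.548 eV
```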

Solution for week 6

PDF version of solutions

1. Derivative of Fermi-Dirac function This just comes down to math on the Fermi function. I'm going to use prime notation for the derivative (which I rarely use) because it makes it a bit easier to specify at which value we are evaluating the derivative:

f(ε) ≡ 1/(1 + e^{-β(ε-μ)})   (13.319)
f'(ε) = -(1/(1 + e^{-β(ε-μ)})²) e^{-β(ε-μ)} (-β)   (13.320)
= (1/kT) e^{-β(ε-μ)}/(1 + e^{-β(ε-μ)})²   (13.321)

Now I will plug in to find f'(μ):

f'(μ) = (1/kT) e^{-β(μ-μ)}/(1 + e^{-β(μ-μ)})²   (13.322)
= (1/kT) 1/(1 + 1)²   (13.323)
= 1/4kT   (13.324)

2. Symmetry of filled and vacant orbitals We are interested in f(μ ± δ), so I'll start by just handling both versions at the same time.

f(μ ± δ) = 1/(1 + e^{-β((μ±δ)-μ)})   (13.325)
= 1/(1 + e^{-β(±δ)})   (13.326)
= 1/(1 + e^{∓βδ})   (13.327)

So we can see that these two things look pretty similar, but they don't look like they should add to one. To show that, I'll add the two together and then use the old multiply-top-and-bottom trick.


f(μ-δ) + f(μ+δ) = 1/(1 + e^{βδ}) + 1/(1 + e^{-βδ})   (13.328)
= 1/(1 + e^{βδ}) + (1/(1 + e^{-βδ})) (e^{βδ}/e^{βδ})   (13.329)
= 1/(1 + e^{βδ}) + e^{βδ}/(e^{βδ} + 1)   (13.330)
= (1 + e^{βδ})/(1 + e^{βδ})   (13.331)
= 1   (13.332)

We could subtract to get the form that Kittel has, but I think it is more intuitive to think of the two values as adding to one. Thus if one is high, the other must be low.

3. Distribution function for double occupancy statistics

a) This is much like we did in class for the fermions. We will solve for the Gibbs sum, and then for ⟨N⟩.

Z = 1 + e^{-β(ε-μ)} + e^{-β(2ε-2μ)}   (13.333)
= 1 + e^{-β(ε-μ)} + e^{-2β(ε-μ)}   (13.334)

The occupancy is given by

⟨N⟩ = (0×1 + 1×e^{-β(ε-μ)} + 2×e^{-2β(ε-μ)}) / (1 + e^{-β(ε-μ)} + e^{-2β(ε-μ)})   (13.335)
= (e^{-β(ε-μ)} + 2e^{-2β(ε-μ)}) / (1 + e^{-β(ε-μ)} + e^{-2β(ε-μ)})   (13.336)
= (1 + 2e^{-β(ε-μ)}) / (e^{β(ε-μ)} + 1 + e^{-β(ε-μ)})   (13.337)

b) If we have two fermion energy levels with the same energy ε, the total occupancy of the two will just be the sum of their individual occupancies.

⟨N⟩ = f(ε) + f(ε)   (13.338)
= 2/(1 + e^{β(ε-μ)})   (13.339)

It's kind of hard to say how it differs from (a); they are so dissimilar. The low-energy (β(ε-μ) ≪ -1) behavior has a different exponential scaling. At high temperatures (or equivalently, ε = μ) both systems have a total occupancy of 1. At high energies, once again the "double-occupancy statistics" occupation has the same exponential behavior, but half as many occupants. This is because at high energies the "double occupied" state becomes irrelevant relative to the "single occupied" state. In the Fermi-Dirac statistics, since they are different orbitals, each orbital contributes the same amount to the occupancy.

4. Entropy of mixing

a) We are considering two sets of atoms with the same temperature and volume. Initially they are separate, and in the end, they will be equally mixed (according to intuition: mixed gasses don't unmix, and everything is equal). So we can just use the Sackur-Tetrode entropy for the two scenarios. The key idea that is not obvious here is that we can compute the entropy of each gas separately and add them together even when they are mixed! We will call the initial volume V₀, and the initial entropy S₀. I'll define each gas to have a distinct n_Q, since they presumably have different masses. I'll just use N for the number of each gas type.

S₀/Nk = ln(n_QA V₀/N) + 5/2 + ln(n_QB V₀/N) + 5/2   (13.340)
= ln(n_QA n_QB) + 2 ln(V₀/N) + 5   (13.341)


We can rewrite this to look like a single ideal gas with the geometric mean of the two n_Qs, with twice the volume and number:

S₀/2Nk = ln(√(n_QA n_QB) V₀/N) + 5/2   (13.342)

So the total initial entropy would be just twice the individual entropy if the two gasses had the same mass. We now seek the final, mixed entropy S_f. For this, each gas will have a volume of 2V₀.

S_f/Nk = ln(n_QA 2V₀/N) + 5/2 + ln(n_QB 2V₀/N) + 5/2   (13.343)
= ln(n_QA n_QB (2V₀)²/N²) + 5   (13.344)
= 2 ln(√(n_QA n_QB) 2V₀/N) + 5   (13.345)
= 2 ln(√(n_QA n_QB) V₀/N) + 5 + 2 ln 2   (13.346)
S_f/2Nk = ln(√(n_QA n_QB) V₀/N) + 5/2 + ln 2   (13.347)

The challenging step here is to explain why we can just treat each gas individually when they are mixed. One reason is that the two gasses don't interact, so we have separation of variables. Moreover, since they are different kinds of particles, they don't occupy the same orbitals (or you could say the overall wavefunction is just a product of the two individual wave functions A and B).

Another argument we could make would be a thought experiment. Suppose we had two boxes, one of which was entirely permeable to atoms A, but held B in, and the other of which was entirely permeable to B but held in A. With these boxes, without doing any work, we could separate the mixed gasses into unmixed gasses, each with the same volume of 2V₀. Thus the mixed gasses must have the same free energy as two unmixed gasses with twice the volume. By this reasoning, what we are seeing is not the entropy of mixing, but rather the entropy of expansion. (But it is called entropy of mixing, so that is what we call it.)
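The ΔS = 2Nk ln 2 claim follows from subtracting eq. (13.340) from eq. (13.343); the particular n_Q, V₀, and N values below are illustrative picks of mine, and the point is that the difference doesn't depend on them.

```python
import math

# arbitrary illustrative values; only ratios survive the subtraction
nQA, nQB = 3.0e25, 7.0e25  # distinct quantum concentrations for the two species
V0, N = 2.0, 1.0e22

# initial (separated) and final (mixed) entropies per Nk, eqs. (13.340) and (13.343)
S0_per_Nk = math.log(nQA * V0 / N) + 2.5 + math.log(nQB * V0 / N) + 2.5
Sf_per_Nk = math.log(nQA * 2 * V0 / N) + 2.5 + math.log(nQB * 2 * V0 / N) + 2.5

delta = Sf_per_Nk - S0_per_Nk
print(delta, 2.0 * math.log(2.0))  # the entropy of mixing is 2Nk ln 2
```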

b) Now we consider a different final entropy, when the particles are identical. From our thermodynamic perspective, this should have the same entropy as the two unmixed (but identical) gasses, just by the extensivity reasoning we always use. But it's worth seeing that happen mathematically. Note that we now have one gas with volume 2V₀ and number 2N.

S_AA/2Nk = ln(n_Q (2V₀)/(2N)) + 5/2   (13.348)
S_AA/Nk = 2 ln(n_Q V₀/N) + 5   (13.349)

This is the same as S₀, provided the two n_Qs are the same.

The "Gibbs paradox" here is just that if you view this scenario classically (with no wavefunctions), it is not obvious why the behavior of distinguishable and indistinguishable objects should differ. If I have a bunch of billiard balls, writing numbers on them shouldn't affect the pressure they exert on the walls of a container, for instance. The resolution to the classical paradox is to note that as long as the numbers you draw on the balls do not impact their behavior in an experiment, you will predict the same outcome for your experiment whether or not you view the particles as indistinguishable. True, they have a different entropy, but without quantum mechanics (which makes explicit the difference between indistinguishable and distinguishable particles in terms of the wave function) there is no absolute definition of entropy, since there is no unique way to count microstates.


c) I would extract work from the system using a special wall separating the two boxes, which is permeable to A but not B, and another wall that has the inverse permeability. One of these walls will feel a pressure to the right, but not the left, and the other will have the inverse pressure difference. We can then slowly move the two permeable walls apart from one another, and the pressure difference will do work. If we insulate the box, the temperature of the gas will drop due to the First Law (i.e. energy conservation). If we don't insulate the boxes (and go sufficiently slowly), we will cool the room a tiny bit as energy flows into the box through heating.

5. Ideal gas in two dimensions

a) This requires us to use the eigenstates of the 2D box. We can go ahead and use non-periodic boundary conditions, which gives single-particle energy eigenvalues of

ε_{n_x n_y} = (ℏ²/2m)(π²/L²)(n_x² + n_y²)   (13.350)

where n_x goes from 1 to ∞, since we have to fit a half-integer number of wavelengths in the box with side length L. We can assume this is a low-density system in the classical limit, since it is described as an ideal gas. Thus we can say that the occupancy of each orbital will be

f(ε) = e^{-β(ε-μ)}   (13.351)

Thus we can add up to find N:

N = Σ_{n_x=1}^∞ Σ_{n_y=1}^∞ e^{-β(ε_{n_x n_y} - μ)}   (13.352)
= Σ_{n_x=1}^∞ Σ_{n_y=1}^∞ e^{-β((ℏ²/2m)(π²/L²)(n_x² + n_y²) - μ)}   (13.353)
N e^{-βμ} = Σ_{n_x=1}^∞ Σ_{n_y=1}^∞ e^{-β(ℏ²/2m)(π²/L²)(n_x² + n_y²)}   (13.354)
≈ ∫₀^∞ ∫₀^∞ e^{-β(ℏ²/2m)(π²/L²)(n_x² + n_y²)} dn_x dn_y   (13.355)

Now let's do a change of variables into a dimensionless argument, and let's also change the limits to go down to -∞ and divide by a factor of 2 (per integral).

N e^{-βμ} = (1/4) ∫_{-∞}^∞ ∫_{-∞}^∞ e^{-β(ℏ²/2m)(π²/L²)(n_x² + n_y²)} dn_x dn_y   (13.356)

At this point we want to do a substitution that turns our integral into a dimensionless one. It's a little weird defining x and y as dimensionless coordinates, but it's compact and will naturally transition us into polar.

x = √(βℏ²π²/(2mL²)) n_x,   y = √(βℏ²π²/(2mL²)) n_y   (13.357)

This gives us in cartesian coordinates

N e^{-βμ} = (1/4)(2mL²/(βℏ²π²)) ∫∫ e^{-(x² + y²)} dx dy   (13.358)
= (mL²/(2π²βℏ²)) ∫∫ e^{-(x² + y²)} dx dy   (13.359)


Now we could go into polar coordinates, or we could use the fact that each of these two integrals gives √π. I'll use the latter approach.

N e^{-βμ} = mL²/(2πβℏ²)   (13.360)

Now we can go about solving for the chemical potential (writing A = L² for the area):

-βμ = ln((A/N)(m/(2πβℏ²)))   (13.361)
μ = -kT ln((A/N)(m/(2πβℏ²)))   (13.362)
= kT ln((N/A)(2πβℏ²/m))   (13.363)

b) There are a few ways we could solve for the internal energy of the 2D ideal gas. The one most suggested by this chapter would be to add up the probability of each orbital being occupied times the energy of that orbital.

U = Σ_{n_x=1}^∞ Σ_{n_y=1}^∞ ε_{n_x n_y} e^{-β(ε_{n_x n_y} - μ)}   (13.364)

At this point we could recognize that there is a derivative trick we can do, since this sum looks so very similar to the sum for N we had earlier. (Note, this problem is also very much solvable by just doing the integral, which isn't much harder than the previous portion.) We can see that

(∂N/∂β)_μ = Σ_{n_x=1}^∞ Σ_{n_y=1}^∞ (μ - ε_{n_x n_y}) e^{-β(ε_{n_x n_y} - μ)}   (13.365)
= μN - U   (13.366)

And thus that

U = μN - (∂N/∂β)_μ   (13.367)

Now we can use our expression for N from before to compute U.

N = (mA/(2πβℏ²)) e^{βμ}   (13.368)
(∂N/∂β)_μ = -(mA/(2πβ²ℏ²)) e^{βμ} + (mA/(2πβℏ²)) e^{βμ} μ   (13.369)
= -NkT + Nμ   (13.370)

Thus the internal energy is

U = μN - N(μ - kT)   (13.371)
= NkT   (13.372)

This matches with the equipartition expectation, which would be ½kT per degree of freedom, which in this case is two degrees of freedom per atom.
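You can also check U = NkT by brute force, summing over the (n_x, n_y) orbitals directly. In units of kT the orbital energy is a(n_x² + n_y²) with a = βℏ²π²/(2mL²); a just needs to be small for the classical limit, and the 2D sums separate into products of 1D sums.

```python
import math

a = 1e-5     # beta * hbar^2 * pi^2 / (2 m L^2): small in the classical limit
nmax = 6000  # e^{-a n^2} has fully decayed by n = nmax

# the 2D sums over (n_x, n_y) separate into products of 1D sums
s0 = sum(math.exp(-a * n * n) for n in range(1, nmax + 1))
s1 = sum(a * n * n * math.exp(-a * n * n) for n in range(1, nmax + 1))

# energy per atom in units of kT: sum eps e^{-beta eps} / sum e^{-beta eps} = 2 s1/s0
u_per_kT = 2.0 * s1 / s0
print(u_per_kT)  # close to 1, i.e. U = NkT
```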

c) For the entropy of this system, we can just add up the entropy of each orbital. This uses a perspective where we recognize each orbital as a separate non-interacting system, with only two eigenstates: either occupied or unoccupied. The probability of being occupied is e^{-β(ε-μ)}, so the probability of not being occupied is 1 minus that.

S_orbital = -k Σ_i^{all microstates of orbital} P_i ln P_i   (13.373)

= -k e^{-β(ε-μ)} ln(e^{-β(ε-μ)}) - k (1 - e^{-β(ε-μ)}) ln(1 - e^{-β(ε-μ)})   (13.374)
= (1/T)(ε-μ) e^{-β(ε-μ)} - k (1 - e^{-β(ε-μ)}) ln(1 - e^{-β(ε-μ)})   (13.375)


Here we can use that in the classical limit the occupancy of every orbital is small. Thus the exponential is small, and we can approximate the log.

S_orbital ≈ ((ε-μ)/T) e^{-β(ε-μ)} - k (1 - e^{-β(ε-μ)}) (-e^{-β(ε-μ)})   (13.376)
≈ ((ε-μ)/T) e^{-β(ε-μ)} + k e^{-β(ε-μ)}   (13.377)
= (k + (ε-μ)/T) e^{-β(ε-μ)}   (13.378)

In the second approximation, I dropped the term that was proportional to the occupancy squared, since it was much smaller than the other terms. Now to find the total entropy, we can add up the entropy of all the orbitals.

S = Σ_{n_x=1}^∞ Σ_{n_y=1}^∞ (k + (ε_{n_x n_y} - μ)/T) e^{-β(ε_{n_x n_y} - μ)}   (13.379)
= Nk + U/T - Nμ/T   (13.380)
= 2Nk - Nμ/T   (13.381)
= 2Nk - Nk ln((N/A)(2πβℏ²/m))   (13.382)
= Nk (ln((A/N)(mkT/(2πℏ²))) + 2)   (13.383)

This looks vaguely like the Sackur-Tetrode equation, but with a number-per-area density rather than a volume density, and a 2 where there would otherwise be a 5/2.

Note You can also solve for the entropy by finding the free energy as we did in class, and then taking its derivative. That is almost certainly an easier approach.

6. Ideal gas calculations

a) The easy one is the second process: Q₂ = 0, since the process is adiabatic. For the first process, we could either use the change in entropy, the change in free energy, or we could integrate the work. In the latter two cases, we would also invoke the First Law, to argue that the work done to the system must equal the change in internal energy plus the energy added to it by heating. Let's just go with the ΔS approach.

\begin{align}
dQ &= T\,dS \tag{13.384}\\
Q &= \int T\,dS \tag{13.385}\\
&= T\int dS \tag{13.386}\\
&= T\Delta S \tag{13.387}\\
&= NkT\left(\ln\left(\frac{n_Q}{n_f}\right) + \cancel{\tfrac52} - \ln\left(\frac{n_Q}{n_i}\right) - \cancel{\tfrac52}\right) \tag{13.388}\\
&= NkT\ln\left(\frac{n_i}{n_f}\right) \tag{13.389}\\
&= NkT\ln 2 \tag{13.390}
\end{align}

where in the last step I used the fact that the final density was 1/2 of the initial density.

b) Finding the temperature at the end of the second process requires finding the state with four times the original volume that has the same entropy as twice the original volume. The text finds a relationship between p and V for an adiabatic expansion involving pV^γ, but knowing that result is less useful than deriving that result. We have at least two ways to derive this relationship. One would be to use the ideal gas law combined with the internal energy (3/2)NkT and to make use of energy conservation. Since we have recently derived the Sackur-Tetrode equation for the entropy of an ideal gas, we may as well use that.


\begin{align}
S = Nk\left(\ln\left(\frac{n_Q}{n}\right) + \frac52\right) \tag{13.391}
\end{align}

We just need to set the two entropies to be equal before and after expansion, keeping in mind that N doesn't change either.

\begin{align}
Nk\left(\ln\left(\frac{n_{Qi}}{n_i}\right) + \frac52\right) &= Nk\left(\ln\left(\frac{n_{Qf}}{n_f}\right) + \frac52\right) \tag{13.392}\\
\frac{n_{Qi}V_i}{N} &= \frac{n_{Qf}V_f}{N} \tag{13.393}\\
n_{Qi}V_i &= n_{Qf}V_f \tag{13.394}\\
T_i^{\frac32}V_i &= T_f^{\frac32}V_f \tag{13.395}\\
T_f &= 2^{-\frac23}T_i \tag{13.396}\\
&\approx 189\,\text{K} \tag{13.397}
\end{align}

I'll note that you could have skipped a few steps in solving this. But once again, you really need to always keep in mind that n_Q depends on temperature!
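For what it's worth, the quoted 189 K corresponds to an initial temperature of 300 K (my inference from the numbers; the full problem statement isn't reproduced here):

```python
# Eq. (13.396): adiabatically doubling the volume of a monatomic ideal
# gas lowers the temperature by a factor of 2**(-2/3).
T_i = 300.0                       # kelvin (assumed initial temperature)
T_f = 2.0 ** (-2.0 / 3.0) * T_i
print(T_f)                        # ~189 K
```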

c) The increase of entropy of a system in an irreversible process is the same as for a reversible process with the same starting and ending conditions. In this case, an irreversible expansion into vacuum will do no work (since it moves nothing other than the gas itself), which means that it will not change the internal energy (unless energy is transferred by heating). Since for a monatomic ideal gas U = (3/2)NkT, keeping the internal energy fixed means the temperature also remains fixed, so there won't be any heating and the temperature will certainly stay fixed. Thus we can work out the change in entropy using the Sackur-Tetrode equation again.

\begin{align}
S_f - S_i &= Nk\ln\left(\frac{n_Q}{n_f}\right) - Nk\ln\left(\frac{n_Q}{n_i}\right) \tag{13.398}\\
&= Nk\ln\left(\frac{n_i}{n_f}\right) \tag{13.399}\\
&= Nk\ln 2 \tag{13.400}
\end{align}

We could also have obtained this by integrating ΔS = ∫ dQ/T for a reversible isothermal expansion, as I think you did in Energy and Entropy.

Solution for week 7

PDF version of solutions

1. Energy of a relativistic Fermi gas There are a couple of ways you could go through this problem. One would be to just integrate to find the Fermi energy, and then to integrate to find the internal energy. It's not bad done that way. The other way, which I'll demonstrate, is to first solve for the density of states, and then use that to find the Fermi energy and U.

\begin{align}
D(\varepsilon) &= 2\left(\frac{L}{2\pi}\right)^3\iiint_{-\infty}^{\infty}\delta(\varepsilon(k)-\varepsilon)\,d^3k \tag{13.401}\\
&= 2\left(\frac{L}{2\pi}\right)^3\int_0^{\infty}\delta(\hbar ck-\varepsilon)4\pi k^2\,dk \tag{13.402}
\end{align}

Note that the factors of two above are for the spin degeneracy. Now changing variables to an energy:

\begin{align}
\varepsilon = \hbar ck \qquad d\varepsilon = \hbar c\,dk \tag{13.403}
\end{align}

And we get


\begin{align}
D(\varepsilon) &= 8\pi\left(\frac{L}{2\pi}\right)^3\left(\frac{1}{\hbar c}\right)^3\int_0^{\infty}\delta(\varepsilon'-\varepsilon)\,\varepsilon'^2\,d\varepsilon' \tag{13.404}\\
&= 8\pi\left(\frac{L}{2\pi\hbar c}\right)^3\varepsilon^2 \tag{13.405}\\
&= \frac{V}{\pi^2\hbar^3c^3}\varepsilon^2 \tag{13.406}
\end{align}

a) Solving for the Fermi energy comes down to solving for N.

\begin{align}
N &= \int_0^{\varepsilon_F}D(\varepsilon)\,d\varepsilon \tag{13.407}\\
&= \frac{V}{\pi^2\hbar^3c^3}\int_0^{\varepsilon_F}\varepsilon^2\,d\varepsilon \tag{13.408}\\
&= \frac{V}{\pi^2\hbar^3c^3}\frac13\varepsilon_F^3 \tag{13.409}\\
\varepsilon_F &= \left(\frac{N}{V}3\pi^2\hbar^3c^3\right)^{\frac13} \tag{13.410}\\
&= \left(3\pi^2n\right)^{\frac13}\hbar c \tag{13.411}
\end{align}

just as the problem says. The dimensions are energy because n^{1/3} is an inverse length, which when multiplied by c gives an inverse time. ℏ is energy times time, so we get an energy as we expect.

b) The internal energy at zero temperature (which is the total energy of the ground state) just requires us to integrate the density of states times energy.

\begin{align}
U &= \int_0^{\varepsilon_F}D(\varepsilon)\varepsilon\,d\varepsilon \tag{13.412}\\
&= \int_0^{\varepsilon_F}\frac{V}{\pi^2\hbar^3c^3}\varepsilon^3\,d\varepsilon \tag{13.413}\\
&= \frac{V}{\pi^2\hbar^3c^3}\frac14\varepsilon_F^4 \tag{13.414}\\
&= \underbrace{\left(\frac{V}{\pi^2\hbar^3c^3}\frac13\varepsilon_F^3\right)}_{N}\frac34\varepsilon_F \tag{13.415}\\
&= \frac34 N\varepsilon_F \tag{13.416}
\end{align}

The trickiest step was looking back at our previous expression for N to substitute.

2. Pressure and entropy of a degenerate Fermi gas

a) As we saw before (when working with the radiation pressure of a vacuum?) the pressure is given by the thermal average value of the derivative of the energy eigenvalues.

\begin{align}
p &= -\left(\frac{\partial U}{\partial V}\right)_{S,N} \tag{13.417}\\
&= \sum_i^{\text{all microstates}}P_i\left(-\frac{\partial E_i}{\partial V}\right) \tag{13.418}
\end{align}

The usual challenge here is that fixed temperature is not the same thing as fixed entropy. In this case, when T = 0, we know that the probabilities are all predetermined (via the Fermi-Dirac distribution), and we can just take a simple derivative of the energy we derived in class.

\begin{align}
U &= \frac35 N\varepsilon_F \tag{13.419}\\
&= \frac35 N\frac{\hbar^2}{2m}\left(3\pi^2\frac{N}{V}\right)^{\frac23} \tag{13.420}\\
p &= -\left(\frac{\partial U}{\partial V}\right)_{S,N} \tag{13.421}\\
&= \frac35 N\frac{\hbar^2}{2m}\left(3\pi^2\frac{N}{V}\right)^{\frac23}\frac23\frac{1}{V} \tag{13.422}\\
&= \frac25\frac{N}{V}\varepsilon_F \tag{13.423}\\
&= \frac15\frac{\hbar^2}{m}3^{\frac23}\pi^{\frac43}\left(\frac{N}{V}\right)^{\frac53} \tag{13.424}
\end{align}

This agrees with the expression given in the problem itself, so yay.

b) The entropy of a Fermi gas, when kT ≪ ε_F. We can start with the general form of entropy:


\begin{align}
S = -k\sum_i^{\text{all microstates}}P_i\ln P_i \tag{13.425}
\end{align}

We will begin by first finding the entropy of a single orbital, and then adding up the entropy of all the orbitals. One orbital has only two microstates, occupied and unoccupied, which correspondingly have probabilities f(ε) and 1 − f(ε). Before we go any farther, it is worth simplifying the latter.

\begin{align}
1-f &= 1 - \frac{1}{e^{\beta(\varepsilon-\mu)}+1} \tag{13.426}\\
&= \frac{e^{\beta(\varepsilon-\mu)}+1}{e^{\beta(\varepsilon-\mu)}+1} - \frac{1}{e^{\beta(\varepsilon-\mu)}+1} \tag{13.427}\\
&= \frac{e^{\beta(\varepsilon-\mu)}}{e^{\beta(\varepsilon-\mu)}+1} \tag{13.428}\\
&= \frac{1}{e^{-\beta(\varepsilon-\mu)}+1} \tag{13.429}
\end{align}

The entropy corresponding to a single orbital is thus

\begin{align}
S_{\text{orbital}} &= -k\big(f\ln f + (1-f)\ln(1-f)\big) \tag{13.430}\\
&= -k\left(\frac{1}{e^{\beta(\varepsilon-\mu)}+1}\ln\left(\frac{1}{e^{\beta(\varepsilon-\mu)}+1}\right) + \frac{1}{e^{-\beta(\varepsilon-\mu)}+1}\ln\left(\frac{1}{e^{-\beta(\varepsilon-\mu)}+1}\right)\right) \tag{13.431}\\
&= \frac{k}{e^{\beta(\varepsilon-\mu)}+1}\ln\left(e^{\beta(\varepsilon-\mu)}+1\right) + \frac{k}{e^{-\beta(\varepsilon-\mu)}+1}\ln\left(e^{-\beta(\varepsilon-\mu)}+1\right) \tag{13.432}
\end{align}

This is inherently symmetric as we change the sign of ε − μ, which makes sense given what we know about the Fermi-Dirac distribution. It is less obvious in this form that the entropy does the right thing (which is to approach zero) when |ε − μ| ≫ kT. We expect the entropy to go to zero in this case, and one term very obviously goes to zero, but the other requires a bit more thinking. A simple approach is to plot the entropy, as I do below, which demonstrates that the entropy does indeed vanish at energies far from the Fermi level.

[Figure 13.5: Plot of entropy vs. eigenenergy — S(ε)/k_B against β(ε − μ)]

Using this expression for the entropy of a single orbital, we can solve for the entropy of the whole gas. At the second step below we will make use of the fact that kT ≪ ε_F, which means that the entropy looks very much like a Dirac δ-function that hasn't been properly normalized.

\begin{align}
S &= \int_0^{\infty}D(\varepsilon)S_{\text{orbital}}(\varepsilon)\,d\varepsilon \tag{13.433}\\
&\approx D(\varepsilon_F)\int_{-\infty}^{\infty}S_{\text{orbital}}(\varepsilon)\,d\varepsilon \tag{13.434}\\
&= D(\varepsilon_F)\int_{-\infty}^{\infty}\left(\frac{k}{e^{\beta(\varepsilon-\mu)}+1}\ln\left(e^{\beta(\varepsilon-\mu)}+1\right) + \frac{k}{e^{-\beta(\varepsilon-\mu)}+1}\ln\left(e^{-\beta(\varepsilon-\mu)}+1\right)\right)d\varepsilon \tag{13.435}
\end{align}


This looks nasty, but we can make it dimensionless, and it'll just be a number!

\begin{align}
\xi = \beta(\varepsilon-\mu) \qquad d\xi = \beta\,d\varepsilon \tag{13.436}
\end{align}

which gives us

\begin{align}
S = D(\varepsilon_F)k^2T\int_{-\infty}^{\infty}\left(\frac{\ln\left(e^{\xi}+1\right)}{e^{\xi}+1} + \frac{\ln\left(e^{-\xi}+1\right)}{e^{-\xi}+1}\right)d\xi \tag{13.437}
\end{align}

Now the last bit is just a number, which happens to be finite.
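In fact that number is π²/3, which reproduces the standard low-temperature result S = (π²/3)D(ε_F)k²T. A quick numerical check (my addition, assuming numpy is available):

```python
import numpy as np

# Trapezoid-rule evaluation of the dimensionless integral in Eq. (13.437).
def integrand(xi):
    g = lambda x: np.log(np.exp(x) + 1.0) / (np.exp(x) + 1.0)
    return g(xi) + g(-xi)

xi = np.linspace(-30.0, 30.0, 200001)   # integrand decays like |xi| e^{-|xi|}
y = integrand(xi)
number = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(xi))
print(number, np.pi ** 2 / 3)           # both ~3.2899
```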

3. White dwarf

a) Showing that something is a given order of magnitude can be both tricky and confusing. The potential energy is exactly given by

\begin{align}
U = -\frac12\int d^3r\int d^3r'\,\frac{G\rho(\vec r)\rho(\vec r\,')}{|\vec r - \vec r\,'|} \tag{13.438}
\end{align}

as you learned in static fields, where ρ(\vec r) is the mass density. Unfortunately, we don't know what the mass density is as a function of position, or how that function depends on the mass of the white dwarf.

An adequate if not satisfying line of reasoning is to say that the integral above must scale as M², since increasing M will either increase ρ or increase the volume over which the mass is spread. The denominator |\vec r − \vec r\,'| is going to on average scale as the radius R. Thus it makes sense that the potential energy would be about ∼ −GM²/R. Another approach here would have been to use dimensional analysis to argue that the energy must be this. Alternatively, you could have assumed a uniform mass density, and then argued that the actual energy must be of a similar order of magnitude.

b) The kinetic energy of the electrons is the U of the Fermi gas, which in class we showed to be

\begin{align}
KE &\sim N\varepsilon_F \tag{13.439}\\
&\sim N\frac{\hbar^2}{2m}\left(\frac{N}{V}\right)^{\frac23} \tag{13.440}\\
&\sim \frac{\hbar^2N^{\frac53}}{mR^2} \tag{13.441}
\end{align}

where m is the mass of the electron. Then we can reason that the number of electrons is equal to the number of protons, and if the star is made of hydrogen the total mass of the star is equal to the total mass of its protons.

\begin{align}
N &\approx \frac{M}{M_H} \tag{13.442}\\
KE &\sim \frac{\hbar^2}{m}\frac{M^{\frac53}}{M_H^{\frac53}}\frac{1}{R^2} \tag{13.443}
\end{align}

c) At this stage it is worth motivating the virial theorem from mechanics, which basically says that the magnitude of the average potential energy of a bound system (which is bound by a power-law force) is about the same as the average of its kinetic energy. This makes sense in that the thing that is holding a bound state together is the potential energy, while the thing that is pulling it apart is the kinetic energy. If they aren't in balance, then something weird must be going on. BTW, this virial theorem also applies quite well to the quantum mechanical hydrogen atom.

All right, so


\begin{align}
\frac{GM^2}{R} &\sim \frac{\hbar^2}{m}\frac{M^{\frac53}}{M_H^{\frac53}}\frac{1}{R^2} \tag{13.444}\\
M^{\frac13}R &\sim \frac{\hbar^2}{mM_H^{\frac53}G} \tag{13.445}
\end{align}

At this point we just need to plug in numbers.

d) Again we need to plug in numbers.

\begin{align}
M^{\frac13}R &\sim \frac{\hbar^2}{mM_H^{\frac53}G} \tag{13.446}\\
R^3 &\sim \left(\frac{\hbar^2}{mM_H^{\frac53}G}\right)^3\frac{1}{M} \tag{13.447}\\
\rho &= \frac{M}{\frac{4\pi}{3}R^3} \tag{13.448}\\
&\sim M^2\left(\frac{mM_H^{\frac53}G}{\hbar^2}\right)^3 \tag{13.449}
\end{align}

Plug in numbers.

e) All that changes is that our degenerate gas has mass M_N ≈ M_H, and our total mass is now M = NM_N ≈ NM_H.

\begin{align}
\frac{GM^2}{R} &\sim \frac{\hbar^2M^{\frac53}}{M_HM_H^{\frac53}R^2} \tag{13.450}\\
M^{\frac13}R &\sim \frac{\hbar^2}{M_H^{\frac83}G} \tag{13.451}\\
R &\sim \frac{\hbar^2}{M^{\frac13}M_H^{\frac83}G} \tag{13.452}
\end{align}

Plug in numbers to find the neutron star radius in kilometers when its mass is one solar mass.

\begin{align}
R &\sim \frac{\left(10^{-27}\,\text{g cm}^2\,\text{s}^{-1}\right)^2}{\left(10^{33}\,\text{g}\right)^{\frac13}\left(10^{-24}\,\text{g}\right)^{\frac83}\left(10^{-7}\,\text{cm}^3\,\text{g}^{-1}\,\text{s}^{-2}\right)} \tag{13.453}\\
&\sim 10^6\,\text{cm} \sim 10\,\text{km} \tag{13.454}
\end{align}

That’s what I call a small-town star!
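Plugging real SI constants into Eq. (13.452), with the neutron mass standing in for M_H, takes only a few lines. This is my own order-of-magnitude check; all O(1) factors have been dropped, so only the scale is meaningful:

```python
# Order-of-magnitude neutron star radius from Eq. (13.452), SI units.
hbar = 1.055e-34       # J s
G = 6.674e-11          # m^3 kg^-1 s^-2
m_n = 1.675e-27        # kg, neutron mass
M_sun = 1.989e30       # kg
R = hbar ** 2 / (M_sun ** (1 / 3) * m_n ** (8 / 3) * G)
print(R / 1e3)         # a few kilometers, consistent with ~10 km
```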

4. Fluctuations in the Fermi gas We are looking here at a single orbital, and asking what is the variance of the occupancy number.

\begin{align}
\left\langle(\Delta N)^2\right\rangle &= \left\langle(N-\langle N\rangle)^2\right\rangle \tag{13.455}\\
&= \sum_i P_i\left(N_i - \langle N\rangle\right)^2 \tag{13.456}
\end{align}

Now this single orbital has only two possible states: occupied and unoccupied! So we can write this down pretty quickly, using the probabilities of those two states, which are f and 1 − f. We also note that ⟨N⟩ = f, so we're going to have f all over the place.

\begin{align}
\left\langle(\Delta N)^2\right\rangle &= P_1(1-f)^2 + P_0(0-f)^2 \tag{13.457}\\
&= f\left(1 + f^2 - 2f\right) + (1-f)f^2 \tag{13.458}\\
&= f - f^2 \tag{13.459}\\
&= \langle N\rangle\left(1 - \langle N\rangle\right) \tag{13.460}
\end{align}

This tells us that there is no variation in occupancy when the occupancy reaches 0 or 1. In retrospect that is obvious. If there is definitely an electron there, then we aren't uncertain about whether there is an electron there.
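Since the occupancy of one orbital is just a Bernoulli random variable with mean f, Eq. (13.460) is easy to confirm by sampling (an illustrative check of mine, not from the text):

```python
import random

# Sample the occupancy (0 or 1) of a single orbital with <N> = f and
# compare the sample variance with f(1 - f) from Eq. (13.460).
random.seed(0)
f = 0.3
samples = [1 if random.random() < f else 0 for _ in range(200000)]
mean = sum(samples) / len(samples)
var = sum((n - mean) ** 2 for n in samples) / len(samples)
print(var, f * (1 - f))   # both ~0.21
```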

5. Einstein condensation temperature I'm going to be sloppier on this solution, because this is done in the textbook, and I'm still quite sick. The idea is to set the chemical potential to 0, which is its maximum value, and integrate to find


the number of atoms not in the ground state, N_E, which is normally essentially equal to the total number of atoms.

\begin{align}
N_E &= \int_0^{\infty}D(\varepsilon)f(\varepsilon)\,d\varepsilon \tag{13.461}\\
&= \frac{V}{4\pi^2}\left(\frac{2M}{\hbar^2}\right)^{\frac32}\int_0^{\infty}\frac{\varepsilon^{\frac12}}{e^{\beta\varepsilon}-1}\,d\varepsilon \tag{13.462}
\end{align}

Naturally at this stage we will want to use a change of variables to take the physics out of the integral, as we typically do.

\begin{align}
N_E = \frac{V}{4\pi^2}\left(\frac{2M}{\hbar^2}\right)^{\frac32}(kT)^{\frac32}\int_0^{\infty}\frac{\sqrt{\xi}}{e^{\xi}-1}\,d\xi \tag{13.463}
\end{align}

Now we can simply solve for TE .

\begin{align}
T_E = \frac{1}{k_B}\frac{\hbar^2}{2M}\left(\frac{N}{V}\,\frac{4\pi^2}{\int_0^{\infty}\frac{\sqrt{\xi}}{e^{\xi}-1}\,d\xi}\right)^{\frac23} \tag{13.464}
\end{align}
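The remaining dimensionless integral is Γ(3/2)ζ(3/2) ≈ 2.315 (a standard result; here is a numerical check of mine, where substituting ξ = u² removes the integrable 1/√ξ singularity at the origin):

```python
import numpy as np

# Evaluate the integral in Eq. (13.463): int_0^inf sqrt(xi)/(e^xi - 1) dxi.
# With xi = u^2 the integrand becomes 2 u^2 / (e^{u^2} - 1), finite at u = 0.
u = np.linspace(1e-8, 8.0, 400001)
y = 2.0 * u ** 2 / np.expm1(u ** 2)
value = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u))
print(value)   # ~2.315, i.e. Gamma(3/2) * zeta(3/2)
```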

Solution for week 8

PDF version of solutions

1. Heat pump

a) To approach this, I'll essentially do a quick re-derivation of the Carnot efficiency, based around the idea that the process is reversible. Since it is a cycle, the state of the system doesn't change, only that of the environment. The hot side in this case gets hotter, while the cool side gets cooler, and the entropy change of each must be equal and opposite.

\begin{align}
\Delta S_H &= \frac{Q_H}{T_H} \tag{13.465}\\
\Delta S_C &= -\frac{Q_C}{T_C} \tag{13.466}\\
\Delta S_H + \Delta S_C &= 0 \tag{13.467}\\
\frac{Q_H}{T_H} &= \frac{Q_C}{T_C} \tag{13.468}\\
\frac{T_C}{T_H} &= \frac{Q_C}{Q_H} \tag{13.469}
\end{align}

Note that I'm defining all heats and works to be positive, and taking their direction into account (which explains the minus sign in ΔS_C). Now to find the amount of work we had to do, we just need to use energy conservation (i.e. the First Law). The energy inputs to the heat pump are our work W and the heat from the cool side Q_C. The energy output is just Q_H.

\begin{align}
W + Q_C &= Q_H \tag{13.470}\\
\frac{W}{Q_H} &= 1 - \frac{Q_C}{Q_H} \tag{13.471}\\
&= 1 - \frac{T_C}{T_H} \tag{13.472}
\end{align}

which is just the Carnot efficiency, as predicted. Note however, that in this case this efficiency is not "what we get out divided by what we put in," but rather the inverse of that. So in this case, when T_C ≪ T_H, we have a very inefficient heat pump, since we hardly get any "free" energy.

If the heat pump is not reversible, as always, things look worse: we will need more work to get the same amount of heating. Note here that there are two possible interpretations of the word "reversible." What is meant in the question is that the entropy of the pump and its surroundings doesn't change. From a practical perspective a heat pump may be described as "reversible" when it can also function as an air conditioner in the summer (as most do).

b) Now we have an engine driving a heat pump. The work output from the engine must equal the work input of the pump. We can recall from class (or reproduce with reasoning like that above) that

\begin{align}
\frac{W}{Q_{HH}} = 1 - \frac{T_C}{T_{HH}} \tag{13.473}
\end{align}

Now we just need to eliminate the work to find how much heat input at the very high temperature Q_HH we need in order to get a given amount of heat in our home Q_H.

\begin{align}
\frac{W/Q_H}{W/Q_{HH}} &= \frac{1-\frac{T_C}{T_H}}{1-\frac{T_C}{T_{HH}}} \tag{13.474}\\
\frac{Q_{HH}}{Q_H} &= \frac{T_{HH}}{T_H}\,\frac{T_H-T_C}{T_{HH}-T_C} \tag{13.475}
\end{align}

For the three temperatures requested, this comes out to

\begin{align}
\frac{Q_{HH}}{Q_H} &= \frac{600}{300}\,\frac{300-270}{600-270} \tag{13.476}\\
&= 2\,\frac{30}{330} \tag{13.477}\\
&= \frac{2}{11} \tag{13.478}\\
&\approx 0.18 \tag{13.479}
\end{align}

So you save about a factor of five in fuel by not just burning it in your home to heat it, but instead using it to power a heat pump. Yay. And this is with it freezing outside, and uncomfortably warm inside!
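Eq. (13.475) with the quoted temperatures, just to make the arithmetic explicit:

```python
# An engine burning fuel at T_HH = 600 K drives a heat pump warming a
# home at T_H = 300 K from outdoors at T_C = 270 K, Eq. (13.475).
T_HH, T_H, T_C = 600.0, 300.0, 270.0
ratio = (T_HH / T_H) * (T_H - T_C) / (T_HH - T_C)
print(ratio)   # 2/11, about 0.18
```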

c) Here is a pretty picture illustrating where the energy and entropy come and go. The heats all came from the above computations, while the entropies came from dividing each heat by its temperature. For the energy plot I made a distinction between the heat added or removed by the heat pump (left) and the engine (right). For the entropy plot I just lumped those together.

[Figure 13.6: Sankey diagram of energy and entropy in the engine-heat-pump combination, showing the heats Q_C, Q_H, Q_HH, the work W, and the corresponding entropies S_C, S_H, S_HH]

2. Photon Carnot engine

a) The engine starts at T_H and V_1 and first expands isothermally to V_2. Then it expands adiabatically to T_C and V_3. Next it is compressed at fixed temperature to V_4, from which it is compressed adiabatically back to V_1. So we can identify V_3 and V_4 as the two volumes that have the same entropy at T_C as V_2 and V_1 have at T_H.


\begin{align}
S_3 &= S_2 \tag{13.480}\\
32\pi kV_3\left(\frac{kT_C}{hc}\right)^3\frac{\pi^4}{45} &= 32\pi kV_2\left(\frac{kT_H}{hc}\right)^3\frac{\pi^4}{45} \tag{13.481}\\
V_3T_C^3 &= V_2T_H^3 \tag{13.482}\\
V_3 &= V_2\left(\frac{T_H}{T_C}\right)^3 \tag{13.483}\\
V_4 &= V_1\left(\frac{T_H}{T_C}\right)^3 \tag{13.484}
\end{align}

b) Now we could integrate to find the work, but the easy approach is to find the heat from T_H ΔS, and then use the First Law to find the work.

\begin{align}
Q_H &= \int T\,dS \tag{13.485}\\
&= T_H\Delta S \tag{13.486}\\
&= kT_H\,32\pi(V_2-V_1)\left(\frac{kT_H}{hc}\right)^3\frac{\pi^4}{45} \tag{13.487}\\
&= 32\pi(V_2-V_1)\frac{(kT_H)^4}{h^3c^3}\frac{\pi^4}{45} \tag{13.488}
\end{align}

Now using the First Law...

\begin{align}
\Delta U &= Q_H + W_H \tag{13.489}\\
24\pi\frac{(kT_H)^4}{h^3c^3}\frac{\pi^4}{45}(V_2-V_1) &= 32\pi(V_2-V_1)\frac{(kT_H)^4}{h^3c^3}\frac{\pi^4}{45} + W_H \tag{13.490}\\
W_H &= -8\pi\frac{(kT_H)^4}{h^3c^3}\frac{\pi^4}{45}(V_2-V_1) \tag{13.491}
\end{align}

This tells us that the photon gas does work as it expands, like the ideal gas does, but unlike the ideal gas, the work done is considerably less than the heat absorbed by the gas, since its internal energy increases significantly.

c) To find the work on each adiabatic stage just requires finding ΔU, since Q = 0 for any isentropic (or adiabatic) process. The first adiabatic stage goes from V_2 to V_3 while the temperature changes from T_H to T_C. The internal energy change is thus

\begin{align}
\Delta U_{32} &= W_2 \tag{13.492}\\
&= 24\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}\left(V_3T_C^4 - V_2T_H^4\right) \tag{13.493}\\
&= 24\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}\left(V_2\left(\frac{T_H}{T_C}\right)^3T_C^4 - V_2T_H^4\right) \tag{13.494}\\
&= 24\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}V_2T_H^3\left(T_C - T_H\right) \tag{13.495}
\end{align}

So the system is losing energy (thus doing work) as we adiabatically expand it down to lower temperature. At the other end, going from V_4 and T_C to V_1 and T_H, we have

\begin{align}
\Delta U_{14} &= W_4 \tag{13.496}\\
&= 24\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}\left(V_1T_H^4 - V_4T_C^4\right) \tag{13.497}\\
&= 24\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}\left(V_1T_H^4 - V_1\frac{T_H^3}{\cancel{T_C^3}}\cancel{T_C^4}\,T_C\right) \tag{13.498}\\
&= 24\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}V_1T_H^3\left(T_H - T_C\right) \tag{13.499}
\end{align}

So as normal we do work while compressing the photon gas adiabatically. However, the amount of work in this case is not equal and opposite to the amount of work done when the gas was adiabatically expanded. This is because V_1 ≠ V_2, which causes the internal energies to be different in the two cases. The ideal gas is unique in that its energy is independent of its density.

d) For the total work, I will add up all four little works. It's a little annoying to do it this way, because I could have just used energy conservation and the fact that my system is a cycle to say that the total work must be equal and opposite to the total heat. I'm completely fine with you doing that, but I think there is some pedagogical utility in seeing that doing all the works individually gives us the same answer, even though we don't see the same detailed cancellation that happens with the ideal gas. Oh, but I'm noticing that I haven't yet computed W_C. I think you can see that it'll be exactly like W_H only with the temperature and volumes replaced with their appropriate values. This gives us for W_C:

\begin{align}
W_C &= -8\pi\frac{(kT_C)^4}{h^3c^3}\frac{\pi^4}{45}(V_4-V_3) \tag{13.500}\\
&= -8\pi\frac{(kT_C)^4}{h^3c^3}\frac{\pi^4}{45}(V_1-V_2)\frac{T_H^3}{T_C^3} \tag{13.501}\\
&= 8\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}(V_2-V_1)T_H^3T_C \tag{13.502}
\end{align}

Note also that the sign of the work is opposite because we were compressing rather than expanding. Plugging this in we can see that:

\begin{align}
W &= W_H + W_2 + W_C + W_4 \tag{13.503}\\
&= -8\pi\frac{(kT_H)^4}{h^3c^3}\frac{\pi^4}{45}(V_2-V_1) \notag\\
&\quad + 24\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}V_2T_H^3(T_C-T_H) \notag\\
&\quad + 8\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}(V_2-V_1)T_H^3T_C \notag\\
&\quad + 24\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}V_1T_H^3(T_H-T_C) \tag{13.504}\\
&= -8\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}(V_2-V_1)T_H^4 \notag\\
&\quad + 24\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}V_2T_H^3(T_C-T_H) \notag\\
&\quad + 8\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}(V_2-V_1)T_H^3T_C \notag\\
&\quad + 24\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}V_1T_H^3(T_H-T_C) \tag{13.505}\\
&= -8\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}T_H^3(V_2-V_1)(T_H-T_C) \notag\\
&\quad + 24\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}T_H^3(V_1-V_2)(T_H-T_C) \tag{13.506}\\
&= -32\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}T_H^3(V_2-V_1)(T_H-T_C) \tag{13.507}
\end{align}

I'll take the ratio that I want now. Almost everything will cancel.

\begin{align}
\frac{W}{Q_H} &= \frac{-32\pi\frac{k^4}{h^3c^3}\frac{\pi^4}{45}T_H^3(V_2-V_1)(T_H-T_C)}{32\pi(V_2-V_1)\frac{(kT_H)^4}{h^3c^3}\frac{\pi^4}{45}} \tag{13.508}\\
&= -\frac{T_H-T_C}{T_H} \tag{13.509}\\
&= -\left(1 - \frac{T_C}{T_H}\right) \tag{13.510}
\end{align}

This is just the Carnot efficiency from class with a minus sign. This sign came from the convention that positive work means work added to a system, which I used in this solution (and is convenient when using the First Law), but differs from the standard convention when discussing engines, where all signs are taken to be positive.
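Since the detailed cancellation above is easy to get wrong, here is a numerical check (mine, not the text's) that the four works of Eqs. (13.491), (13.495), (13.499), and (13.502) really do sum to the Carnot result. The common prefactor 8π(k⁴/h³c³)(π⁴/45) cancels in the ratio, so it is set to 1, and the temperatures and volumes are arbitrary sample values:

```python
# Works for the photon-gas Carnot cycle, in units where the common
# prefactor 8*pi*(k^4/(h^3 c^3))*(pi^4/45) equals 1.
T_H, T_C, V1, V2 = 500.0, 300.0, 1.0, 3.0
W_H = -(V2 - V1) * T_H ** 4                  # Eq. (13.491)
W_2 = 3.0 * V2 * T_H ** 3 * (T_C - T_H)      # Eq. (13.495); 24pi/8pi = 3
W_C = (V2 - V1) * T_H ** 3 * T_C             # Eq. (13.502)
W_4 = 3.0 * V1 * T_H ** 3 * (T_H - T_C)      # Eq. (13.499); 24pi/8pi = 3
Q_H = 4.0 * (V2 - V1) * T_H ** 4             # Eq. (13.488); 32pi/8pi = 4
efficiency = (W_H + W_2 + W_C + W_4) / Q_H
print(efficiency, -(1.0 - T_C / T_H))        # equal: negated Carnot efficiency
```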

3. Light bulb in a refrigerator

We have a 100 W light bulb in a refrigerator that draws 100 W of power. This means the work done by the refrigerator per second is 100 J. Now we need to ask how efficient the refrigerator can be, to see how much it can cool its inside. The refrigerator operates between two temperatures: the inside (which I'll call T_C) and the room (which I'll call T_H). Energy conservation tells us that

\begin{align}
Q_C + W = Q_H \tag{13.511}
\end{align}

where I've taken the usual convention (for this kind of problem) where all signs are positive, so Q_C is the magnitude of heat drawn from the inside, W is the work done, and Q_H is the amount of heat dumped in the room. If this is a reversible cycle, then the change in entropy of the room must be equal and opposite to the change in entropy of the inside of the fridge. That means that

\begin{align}
\frac{Q_C}{T_C} &= \frac{Q_H}{T_H} \tag{13.512}\\
Q_H &= Q_C\frac{T_H}{T_C} \tag{13.513}
\end{align}

If we have an irreversible fridge, the entropy of the room plus fridge can only go up, which would mean less cooling in the fridge (since the entropy of the inside is going down). Putting these equations together, we can see that

\begin{align}
Q_C + W &= Q_C\frac{T_H}{T_C} \tag{13.514}\\
W &= Q_C\left(\frac{T_H}{T_C}-1\right) \tag{13.515}\\
Q_C &= \frac{W}{\frac{T_H}{T_C}-1} \tag{13.516}\\
\frac{Q_C}{W} &= \frac{1}{\frac{T_H}{T_C}-1} \tag{13.517}
\end{align}

This tells us the efficiency of our refrigerator. As long as this efficiency is greater than 1, our fridge can out-cool the light bulb. So when is this equal to one?

\begin{align}
\frac{1}{\frac{T_H}{T_C}-1} &= 1 \tag{13.518}\\
1 &= \frac{T_H}{T_C}-1 \tag{13.519}\\
T_H &= 2T_C \tag{13.520}
\end{align}

So our fridge can indeed cool its insides below room temperature even with the light bulb (whoever put a 100 W light bulb in a fridge?!), and could in fact (in principle) cool it down to like 150 K, which would be crazy cold. Of course, the poor insulation would prevent that, as well as the capabilities of the pumps and refrigerant.

Is this the answer you were expecting for this problem? I can tell you that it was not the answer I was expecting. Kind of crazy.
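A quick evaluation of the crossover (my numbers): with the room at 300 K, Eq. (13.517) says the cooling power matches the bulb's 100 W exactly when the inside reaches 150 K:

```python
# Cooling power of a reversible fridge doing W = 100 W of work,
# Eq. (13.517): Q_C/W = 1/(T_H/T_C - 1).
W = 100.0          # watts
T_H = 300.0        # K, room temperature (assumed)
T_C = 150.0        # K, the crossover temperature T_H/2 from Eq. (13.520)
Q_C = W / (T_H / T_C - 1.0)
print(Q_C)         # 100 W: cooling exactly balances the bulb
```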

Solution for week 9

PDF version of solutions

1. Vapor pressure equation

a) To solve for dp/dT we will begin with the Clausius-Clapeyron equation derived in class.


\begin{align}
\frac{dp}{dT} &= \frac{L}{T\left(V_g - \cancelto{0}{V_\ell}\right)} \tag{13.521}\\
&= \frac{L}{TV_g} \tag{13.522}\\
&= \frac{L}{T\frac{NkT}{p}} \tag{13.523}\\
&= \frac{Lp}{NkT^2} \tag{13.524}
\end{align}

b) Now we will assume L is a constant, and solve for the vapor pressure p(T). The key will be to put p and T on separate sides of the equation so we can integrate.

\begin{align}
\frac{dp}{dT} &= \frac{Lp}{NkT^2} \tag{13.525}\\
\frac1p\frac{dp}{dT} &= \frac{L}{NkT^2} \tag{13.526}\\
\int_{T_0}^{T}\frac1p\frac{dp}{dT}\,dT &= \int_{T_0}^{T}\frac{L}{NkT^2}\,dT \tag{13.527}\\
\int_{p_0}^{p}\frac1p\,dp &= \frac{L}{Nk}\int_{T_0}^{T}\frac{1}{T^2}\,dT \tag{13.528}\\
\ln\left(\frac{p}{p_0}\right) &= -\frac{L}{Nk}\left(\frac1T - \frac1{T_0}\right) \tag{13.529}
\end{align}

Now we can solve for p!

\begin{align}
p = \left(p_0e^{\frac{L}{NkT_0}}\right)e^{-\frac{L}{NkT}} \tag{13.530}
\end{align}

We could clump all the stuff in parentheses into a big constant without any loss, but I kind of like making it explicit that if we know that (p_0, T_0) is on the coexistence curve then we have a closed solution here. Note again that this makes an assumption about L being independent of temperature that is not entirely accurate.
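To see what Eq. (13.530) predicts, here is a sample evaluation for water (my numbers: L ≈ 2260 J/g, assumed temperature-independent, with (p₀, T₀) = (1 atm, 373 K) on the coexistence curve):

```python
import math

# Vapor pressure of water at 300 K from Eq. (13.530).  L is per gram,
# so L/N uses the molar mass 18 g/mol and Avogadro's number.
k = 1.381e-23                                   # J/K
L_over_N = 2260.0 * 18.0 / 6.022e23             # J per molecule
p0, T0 = 101325.0, 373.0                        # Pa, K (boiling at 1 atm)
T = 300.0                                       # K
p = p0 * math.exp(-(L_over_N / k) * (1.0 / T - 1.0 / T0))
print(p)   # a few kPa, the right order for room-temperature water vapor
```

The answer lands within roughly 20% of the measured value near 3.6 kPa, which is about as good as the constant-L assumption deserves.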

2. Entropy, energy, and enthalpy of van der Waals gas We will begin with the free energy of the van der Waals gas:

\begin{align}
F = -NkT\left(\ln\left(\frac{n_Q(V-Nb)}{N}\right)+1\right) - \frac{N^2a}{V} \tag{13.531}
\end{align}

a) We can find the entropy as usual by taking a derivative of the free energy.

\begin{align}
S &= -\left(\frac{\partial F}{\partial T}\right)_V \tag{13.532}\\
&= Nk\left(\ln\left(\frac{n_Q(V-Nb)}{N}\right)+1\right) + \frac{NkT}{n_Q}\frac{dn_Q}{dT} \tag{13.533}\\
&= Nk\left(\ln\left(\frac{n_Q(V-Nb)}{N}\right)+1\right) + \frac{NkT}{n_Q}\frac32\frac{n_Q}{T} \tag{13.534}\\
&= Nk\left\{\ln\left(\frac{n_Q(V-Nb)}{N}\right)+\frac52\right\} \tag{13.535}
\end{align}

In the penultimate (second-to-last) step, I used the fact that n_Q ∝ T^{3/2}.

b) We can find the internal energy from F = U − TS now that we know the entropy.

\begin{align}
U &= F + TS \tag{13.536}\\
&= -NkT\left(\cancel{\ln\left(\frac{n_Q(V-Nb)}{N}\right)}+1\right) - \frac{N^2a}{V} + Nk\left(\cancel{\ln\left(\frac{n_Q(V-Nb)}{N}\right)}+\frac52\right)T \tag{13.537}\\
&= \frac32NkT - \frac{N^2a}{V} \tag{13.538}
\end{align}

which looks like the monatomic ideal gas internal energy plus a correction term, which depends on the density of the fluid.
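The derivatives above are easy to mis-track; a symbolic check with sympy (my addition, writing n_Q = C T^{3/2} to capture only the temperature dependence that matters) confirms Eqs. (13.535) and (13.538):

```python
import sympy as sp

N, k, T, V, a, b, C = sp.symbols('N k T V a b C', positive=True)
nQ = C * T ** sp.Rational(3, 2)     # n_Q is proportional to T^(3/2)
F = -N * k * T * (sp.log(nQ * (V - N * b) / N) + 1) - N ** 2 * a / V
S = -sp.diff(F, T)                  # Eq. (13.532)
U = sp.simplify(F + T * S)          # Eq. (13.536)
S_expected = N * k * (sp.log(nQ * (V - N * b) / N) + sp.Rational(5, 2))
U_expected = sp.Rational(3, 2) * N * k * T - N ** 2 * a / V
print(sp.simplify(S - S_expected), sp.simplify(U - U_expected))  # both 0
```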

c) To find the enthalpy we just need the pressure. We could find it using a derivative of the free energy, but that was done in class and in the text, so we needn't duplicate it.

\begin{align}
H &= U + pV \tag{13.539}\\
p &= \frac{NkT}{V-Nb} - \frac{N^2}{V^2}a \tag{13.540}\\
H &= \frac32NkT - \frac{N^2a}{V} + \left(\frac{NkT}{V-Nb} - \frac{N^2}{V^2}a\right)V \tag{13.541}
\end{align}

\begin{align}
&= \frac32NkT - \frac{2N^2a}{V} + \frac{NkT}{1-\frac{Nb}{V}} \tag{13.542}\\
&\approx \frac32NkT - \frac{2N^2a}{V} + NkT\left(1+\frac{Nb}{V}\right) \tag{13.543}\\
&= \frac52NkT - \frac{2N^2a}{V} + \frac{N^2bkT}{V} \tag{13.544}\\
H(T,p) &= \frac52NkT + Nbp - \frac{2Nap}{kT} \tag{13.545}
\end{align}

In the approximate step, we were just doing a power series expansion since Nb/V ≪ 1. Now we just want to express the enthalpy in terms of pressure, since it is usually used at fixed pressure. That requires us to replace volume with pressure. Solving for volume in terms of pressure is a slight nuisance.

\begin{align}
\left(p+\frac{N^2}{V^2}a\right)(V-Nb) &= NkT \tag{13.546}\\
pV - pNb + \frac{N^2}{V}a &\approx NkT \tag{13.547}\\
pV^2 - pNbV + N^2a &= NkTV \tag{13.548}\\
pV^2 - (pNb+NkT)V + N^2a &= 0 \tag{13.549}
\end{align}

Now we can use the quadratic equation.

\begin{align}
V &= \frac{pNb+NkT \pm \sqrt{(pNb+NkT)^2 - 4pN^2a}}{2p} \tag{13.550}\\
&\approx \frac{pNb+NkT \pm \sqrt{(NkT)^2 + 2pN^2bkT - 4pN^2a}}{2p} \tag{13.551}\\
&= \frac{pNb+NkT + NkT\sqrt{1+\frac{2pNbkT-4pNa}{N(kT)^2}}}{2p} \tag{13.552}\\
&\approx \frac{pNb+NkT + NkT\left(1+\frac12\frac{2pNbkT-4pNa}{N(kT)^2}\right)}{2p} \tag{13.553}\\
&= \frac{pNb + 2NkT + \frac{pNbkT-2pNa}{kT}}{2p} \tag{13.554}\\
&= \frac{2pNb + 2NkT - \frac{2pNa}{kT}}{2p} \tag{13.555}\\
&= \frac{NkT}{p} + Nb - \frac{Na}{kT} \tag{13.556}
\end{align}

What a nuisance. Each approximation I made eliminated a term that had two or more factors of a or b, which are taken to be small quantities (albeit with dimensions). Note that the first term is just what we get from the ideal gas law. The rest is the first-order correction to the volume. Now that we have an expression for V in terms of p and T,


\begin{align}
H &= \frac52NkT + \frac{N^2bkT - 2N^2a}{V} \tag{13.557}\\
&= \frac52NkT + \frac{N^2bkT - 2N^2a}{\frac{NkT}{p}+Nb-\frac{Na}{kT}} \tag{13.558}\\
&= \frac52NkT + \frac{p}{NkT}\,\frac{N^2bkT - 2N^2a}{1+\left(Nb-\frac{Na}{kT}\right)\frac{p}{NkT}} \tag{13.559}\\
&\approx \frac52NkT + \frac{p}{NkT}\left(N^2bkT - 2N^2a\right)\left(1-\left(Nb-\frac{Na}{kT}\right)\frac{p}{NkT}\right) \tag{13.560}
\end{align}

Okay, this is admittedly looking a little hairy. But remember that we only need to keep terms that are linear in a and b, so that actually just kills our correction term entirely. And after all that work!

\begin{align}
H &\approx \frac52NkT + \frac{p}{NkT}\left(N^2bkT - 2N^2a\right) \tag{13.561}\\
&= \frac52NkT + N\frac{p}{kT}\left(bkT - 2a\right) \tag{13.562}\\
&= \frac52NkT + Np\left(b - \frac{2a}{kT}\right) \tag{13.563}
\end{align}

which matches the expected answer. So yay. In retrospect, we could have simplified the solving-for-the-volume bit drastically if I had noticed that V only occurred in H in a ratio with a small quantity, and thus we didn't need to keep the first-order terms; we only needed the zeroth-order term, which would have eliminated most of the work. You are welcome to argue this in your solution, but should try to argue it well. Otherwise, you'd just want to do all the same tedium I did.

3. Calculation of dT/dp for water (K&K 9.2)

We begin with the Clausius-Clapeyron equation. However, to do this we have to ensure that we get everything in the right units, and such that the volume we divide by corresponds to the same quantity of water as the latent heat on top. Let's say we have one gram of water (which is nice up top); we need the volume of a gram of steam.

\begin{align}
\Delta V &\approx V_g \tag{13.564}\\
&= \frac{NkT}{p} \tag{13.565}
\end{align}

I'll start by finding the number of molecules in a gram of water:

\begin{align}
N &= \frac{1\,\text{g}}{18\,\text{g mol}^{-1}}N_A \tag{13.566}\\
&= \frac{1\,\text{g}}{18\,\text{g mol}^{-1}}\,6.0221\times10^{23}\,\text{mol}^{-1} \tag{13.567}\\
&\approx 3.35\times10^{22} \tag{13.568}
\end{align}

Since one atmosphere is ∼10⁵ pascals, we have

\begin{align}
\Delta V &\approx \frac{3.35\times10^{22}\,(1.38\times10^{-23}\,\text{J K}^{-1})(373\,\text{K})}{10^5\,\text{J m}^{-3}} \tag{13.569}\\
&\approx 1.7\times10^{-3}\,\text{m}^3 \tag{13.570}
\end{align}

Just as a check, we may as well verify that V_g ≫ V_l. I know that liquid water has a density of about 1 g cm⁻³, which makes the vapor density about two thousand times lower than the liquid's, so we're okay ignoring the liquid volume, yay. Putting this together, we can find the slope we are asked for.

\begin{align}
\frac{dp}{dT} &= \frac{L}{T\Delta V} \tag{13.571}\\
&= \frac{2260\,\text{J g}^{-1}}{(373\,\text{K})(1.7\times10^{-3}\,\text{m}^3\,\text{g}^{-1})} \tag{13.572}\\
&\approx 3530\,\text{Pa K}^{-1} \tag{13.573}
\end{align}

Now we were asked for this in units of atmospheres per kelvin, which gives us a change of five orders of magnitude.


\begin{align}
\frac{dp}{dT} = 3.5\times10^{-2}\,\text{atm K}^{-1} \tag{13.574}
\end{align}

Actually, we were asked for the inverse of this, presumably so we'd have a number greater than one.

\begin{align}
\frac{dT}{dp} = 28\,\text{K atm}^{-1} \tag{13.575}
\end{align}

That tells us that if the liquid-vapor coexistence curve were a straight line, it would drop to zero vapor pressure at 72°C, which emphasizes how much curvature there is to the coexistence curve.
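The arithmetic of Eqs. (13.564)–(13.575), redone in a few lines (my evaluation, same input numbers):

```python
# dT/dp for one gram of water at its 1 atm boiling point.
k = 1.381e-23                    # J/K
N = (1.0 / 18.0) * 6.022e23      # molecules in one gram of water
T = 373.0                        # K
p = 1.0e5                        # Pa, ~1 atm
L = 2260.0                       # J/g, latent heat of vaporization
dV = N * k * T / p               # volume of a gram of steam, Eq. (13.569)
dpdT = L / (T * dV)              # Pa/K, Eq. (13.571)
print(dV, dpdT, 1.013e5 / dpdT)  # dT/dp comes out near 28 K/atm
```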

4. Heat of vaporization of ice (K&K 9.3)

Okay, once again we'll want to find the volume of gas, which gives us part of the answer. In addition, we'll need to find dp/dT from the given pressures at a couple of temperatures. That leaves us with nothing but the latent heat to solve for. I'll start with the volume of a gram of water. It's like the last problem, but at a lower temperature. I'll not copy the work that is identical. One mm Hg is 133 Pa, so

\begin{align}
\Delta V &\approx \frac{3.35\times10^{22}\,(1.38\times10^{-23}\,\text{J K}^{-1})(272\,\text{K})}{4.2\times133\,\text{J m}^{-3}} \tag{13.576}\\
&\approx 0.225\,\text{m}^3 \tag{13.577}
\end{align}

So the volume of the gram of chilly steam is dramatically higher, due to the vapor pressure being way lower at these frigid temperatures. Note that I used a pressure halfway between the two pressures given, since we know the derivative most accurately at this half-way point.

Now for the derivative itself:

\begin{align}
\frac{dp}{dT} &= \frac{611-518}{2.01\,\text{K}}\,\text{Pa} \tag{13.578}\\
&\approx 46\,\text{Pa K}^{-1} \tag{13.579}
\end{align}

Putting things together, we get:

\begin{align}
\frac{dp}{dT} &= \frac{L}{T\Delta V} \tag{13.580}\\
L &= T\Delta V\frac{dp}{dT} \tag{13.581}\\
&= (272\,\text{K})(0.225\,\text{m}^3\,\text{g}^{-1})(46\,\text{Pa K}^{-1}) \tag{13.582}\\
&\approx 2815\,\text{J g}^{-1} \tag{13.583}
\end{align}

And now I remember we want our final answer in J mol⁻¹. I guess it would have been smart to compute the volume of a mole, rather than a gram. Oh well, it's not a hard conversion.

\begin{align}
L &= 2815\,\text{J g}^{-1}\,(18\,\text{g mol}^{-1}) \tag{13.584}\\
&\approx 51\,\text{kJ mol}^{-1} \tag{13.585}
\end{align}

This isn't all that accurate. The answer per gram can be compared with the latent heat of vaporization given in the last problem, and you can see that it's higher for the ice, but only about a quarter higher, which reflects the fact that the liquid still has most of the same hydrogen bonds that hold the solid ice together.
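And the ice estimate of Eqs. (13.576)–(13.585) in code form (my evaluation, same inputs):

```python
# Latent heat of sublimation of ice from vapor pressures near 272 K.
k = 1.381e-23                    # J/K
N = (1.0 / 18.0) * 6.022e23      # molecules per gram of water
T = 272.0                        # K
p_mid = 4.2 * 133.0              # Pa, mid-way vapor pressure (mm Hg -> Pa)
dV = N * k * T / p_mid           # m^3 per gram of vapor, Eq. (13.576)
dpdT = (611.0 - 518.0) / 2.01    # Pa/K, Eq. (13.578)
L = T * dV * dpdT                # J/g, Eq. (13.581)
print(L, L * 18.0)               # ~2.8 kJ/g, ~51 kJ/mol
```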
