Module 3



Statistical Thermodynamics: Molecules to Machines

Venkat Viswanathan
10 September 2014

Module 3: Introduction to statistics and statistical thermodynamics

Learning Objectives

- Begin our development of statistical thermodynamics.
- Introduce several simple concepts of statistics that will be necessary in statistical thermodynamics.
- Present the statistical definition of entropy.
- Introduce the microscopic definition of the system entropy.
- Utilize the first and second laws of thermodynamics to find the probability distribution for the microstates of a system.
- Develop the statistical thermodynamics method of treating the thermodynamic behavior of various ensembles.

Key Concepts: permutation, multinomial expansion, probability distribution, Poisson distribution, moments, generating function, variance, information entropy, probability density, central limit theorem.

Connecting Molecular Physics to Macroscopic Behavior

In the last three lectures, we addressed physical behavior at two widely discrepant length scales. We studied Quantum Mechanics, which provides complete knowledge of molecular behavior; however, the Schrodinger equation cannot be solved for a macroscopic system (~10^23 molecules), and it probably would not be useful if it could be. We also studied Classical Thermodynamics, which predicts the conditions for equilibrium and stability of a macroscopic system, but it does not give a molecular-level description of behavior.

The key to understanding how molecules give rise to bulk behavior is to establish a statistical description of the molecular states. Today, we build this statistical description by first introducing concepts in statistics, and then defining the statistical entropy, which forms the basis for the development of statistical thermodynamics.

Introduction to Statistics

Coin flipping: permutation and multinomial expansion

We begin our discussion of statistics with the simple example of flipping coins. For any given experiment, the probability of getting heads is P_H = 1/2 and of getting tails is P_T = 1/2. Since each experiment is statistically independent of the previous experiments, the probability of getting any specific trajectory is the product of probabilities; for example, the probability of getting exactly HHTHHTTH is P_H P_H P_T P_H P_H P_T P_T P_H. However, if we want the probability of getting N_H heads and N_T tails regardless of order, we need to include a factor for the number of permutations with N_H and N_T. To show this, we adopt the view of a random walk, where a head is a positive step and a tail is a negative step; the experiment HHTHHTTH results in an end position X = 1 + 1 - 1 + 1 + 1 - 1 - 1 + 1 = 2.

Count the number of ways M(X, N) of ending at any location X after N steps, and assume the probability is proportional to this number. Separate the N steps into positive steps N_H and negative steps N_T. Thus,

N = N_T + N_H    (1)

The end location X of a path is given by:

X = N_H - N_T    (2)

The combinatoric number of ways of distributing N_H and N_T steps into N = N_H + N_T is:

M(N_H, N_T) = \frac{(N_H + N_T)!}{N_H! \, N_T!} = \frac{N!}{N_H! \, N_T!}    (3)

Using Eqs. 1 and 2, we have N_H = (N + X)/2 and N_T = (N - X)/2. This gives:

M(X, N) = \frac{N!}{[(N + X)/2]! \, [(N - X)/2]!}    (4)

This equation is valid for |X| <= N and integer values of (N - X)/2. Alternatively, M(X, N) can be built recursively using:

M(X, N + 1) = M(X + 1, N) + M(X - 1, N)    (5)

The probability of getting N_H heads and N_T tails is given by:

P(N_H, N_T) = M(N_H, N_T) \, P_H^{N_H} P_T^{N_T} = \frac{N!}{N_H! \, N_T!} P_H^{N_H} P_T^{N_T}    (6)

This probability is summed over the number of heads to give the binomial expansion:

[Figure 1: Number of ways M(X, N) to end at position X after number of steps N = 1, 2, 3, 4, 5.]

\sum_{N_H = 0}^{N} \frac{N!}{N_H! \, (N - N_H)!} P_H^{N_H} P_T^{N - N_H} = {\sum_{N_H, N_T}}' \frac{N!}{N_H! \, N_T!} P_H^{N_H} P_T^{N_T} = (P_H + P_T)^N    (7)

where the primed sum indicates a sum with the constraint N_H + N_T = N.

We can generalize this to N independent experiments with r possible outcomes (e.g. a die has 6 possible outcomes). For a given experiment, the probabilities are P_1, P_2, ..., P_r. The probability of getting N_1, N_2, ..., N_r is:

\frac{N!}{N_1! \, N_2! \cdots N_r!} P_1^{N_1} P_2^{N_2} \cdots P_r^{N_r}    (8)

which is a term in the multinomial expansion:

{\sum_{N_1, N_2, ..., N_r}}' \frac{N!}{N_1! \, N_2! \cdots N_r!} P_1^{N_1} P_2^{N_2} \cdots P_r^{N_r} = (P_1 + P_2 + \cdots + P_r)^N    (9)
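The counting in Eqs. 3-7 is easy to check numerically. Below is a minimal Python sketch (the function and variable names are our own, not part of the notes) that builds M(X, N) from the recursion in Eq. 5, compares it with the closed form of Eq. 4, and verifies that the probabilities of Eq. 6 sum to (P_H + P_T)^N = 1 for a fair coin.

```python
from math import comb

def M_recursive(N):
    """Number of walks M(X, N) ending at X after N +/-1 steps, built via Eq. 5."""
    M = {0: 1}                      # N = 0: a single walk ends at the origin
    for _ in range(N):
        new = {}
        for X, count in M.items():
            new[X + 1] = new.get(X + 1, 0) + count   # append a head (+1 step)
            new[X - 1] = new.get(X - 1, 0) + count   # append a tail (-1 step)
        M = new
    return M

N = 8
M = M_recursive(N)

# Closed form, Eq. 4: M(X, N) = N! / ([(N+X)/2]! [(N-X)/2]!)
for X, count in sorted(M.items()):
    NH = (N + X) // 2
    assert count == comb(N, NH)

# Eq. 6 with P_H = P_T = 1/2: the probabilities sum to (P_H + P_T)^N = 1
PH = PT = 0.5
total = sum(comb(N, (N + X) // 2) * PH**((N + X) // 2) * PT**((N - X) // 2)
            for X in M)
print(total)    # ~1.0
```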

Probability distributions

The simple example of coin flipping or dice throwing has equal probabilities for all outcomes (a flat distribution). Generally, we are interested in the properties of systems with more complex probability distributions.

Consider an experiment that can have integer outcomes n ranging from 0 to infinity. For a given experiment, the probability that the outcome is n is P_n (with P_n >= 0 for all n), and each experiment is uncorrelated with the previous experiments. Our goal now is to evaluate averages from the distribution and to define several properties of the distribution. To proceed, we use a particular distribution as a model probability distribution, namely the Poisson distribution (Fig. 2):

P_n = \frac{a^n}{n!} e^{-a}    (10)

where a is a characteristic spread in the distribution.

Normalization of the probability distribution requires that:

\sum_{n=0}^{\infty} P_n = \sum_{n=0}^{\infty} \frac{a^n}{n!} e^{-a} = 1    (11)

which requires the property \sum_{n=0}^{\infty} a^n / n! = e^{a}.

[Figure 2: Curves of the Poisson distribution versus n for a = 0, 2, 4, 6, 8, and 10.]

We define the mth moment of the distribution to be:

\langle n^m \rangle = \sum_{n=0}^{\infty} n^m P_n    (12)

For the Poisson distribution, the mean ⟨n⟩ is given by:

\langle n \rangle = \sum_{n=0}^{\infty} n \, \frac{a^n}{n!} e^{-a} = \sum_{n=1}^{\infty} \frac{a^n}{(n-1)!} e^{-a} = a \sum_{n=0}^{\infty} \frac{a^n}{n!} e^{-a} = a    (13)

The higher moments are considerably more difficult to evaluate. To ease calculations, we define the generating function g(k) = \sum_n e^{-nk} P_n, which gives:

\langle n^m \rangle = \lim_{k \to 0} (-1)^m \frac{d^m g}{dk^m}    (14)

which converts the calculation into the evaluation of derivatives.

For the Poisson distribution, we have:

g(k) = \sum_{n=0}^{\infty} e^{-kn} \frac{a^n}{n!} e^{-a} = \exp\left[a \left(e^{-k} - 1\right)\right]    (15)

Using the generating function, we have:

\langle n \rangle = \lim_{k \to 0} a e^{-k} \exp\left[a \left(e^{-k} - 1\right)\right] = a

\langle n^2 \rangle = \lim_{k \to 0} \left(a e^{-k} + a^2 e^{-2k}\right) \exp\left[a \left(e^{-k} - 1\right)\right] = a + a^2

We define the variance σ² of the distribution to be σ² = ⟨(n − ⟨n⟩)²⟩ = ⟨n²⟩ − ⟨n⟩², which gives σ² = a for the Poisson distribution.
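As a quick numerical check of Eqs. 10-15, the sketch below (assuming NumPy is available; the truncation at n = 100 and the variable names are ours) evaluates the mean and variance of a Poisson distribution both by direct summation of the moments and via the generating function g(k), confirming ⟨n⟩ = a and σ² = a.

```python
import numpy as np

a = 4.0
nmax = 100                          # truncation of the infinite sums
n = np.arange(nmax + 1)

# Poisson probabilities, Eq. 10, built iteratively to avoid large factorials
P = np.empty(nmax + 1)
P[0] = np.exp(-a)
for i in range(1, nmax + 1):
    P[i] = P[i - 1] * a / i

print(P.sum())                      # ~1, Eq. 11

# Moments by direct summation, Eq. 12
mean = np.sum(n * P)
var = np.sum(n**2 * P) - mean**2
print(mean, var)                    # both ~a

# Generating function g(k) = sum_n exp(-n k) P_n (Eq. 15), differentiated numerically
def g(k):
    return np.sum(np.exp(-n * k) * P)

h = 1e-4
mean_from_g = -(g(h) - g(-h)) / (2 * h)     # Eq. 14 with m = 1
print(mean_from_g)                          # ~a
```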

The variance is a simple measure of the spread in the distribution (the variance is the square of the standard deviation σ).

A more complete characterization of the uncertainty in the distribution is given by the information entropy S of the distribution:

S = -\sum_{n=0}^{\infty} P_n \log P_n    (16)

which tends to zero if the distribution is single-valued and increases with an increase in the spread of the distribution (Fig. 3).

[Figure 3: The information entropy S for the Poisson distribution versus the parameter a.]

We extend these concepts to a random variable X that can take r discrete values (x_1, x_2, ..., x_r). As before, the probability distribution satisfies P(x_i) >= 0 and is normalized such that \sum_{i=1}^{r} P(x_i) = 1. The moments of the distribution are given by:

\langle X^m \rangle = \sum_{i=1}^{r} x_i^m P(x_i)    (17)

The mean is ⟨X⟩, and the variance σ² is ⟨X²⟩ − ⟨X⟩². The probability distribution for a continuous variable x is stated by the probability density f(x), which gives the probability:

P(a \le x \le b) = \int_a^b dx \, f(x)    (18)

The probability density is nonnegative (f(x) >= 0 for all x) and normalized (complete) such that:

\int dx \, f(x) = 1    (19)

where the integral implies integration over the entire range of x.

Central limit theorem

Define the Gaussian distribution by the probability density:

f(x) = \frac{1}{(2 \pi \sigma^2)^{1/2}} \exp\left[-\frac{(x - \langle x \rangle)^2}{2 \sigma^2}\right]    (20)

The Gaussian distribution is the most common and important distribution due to the central limit theorem: given N independent random variables x_1, x_2, ..., x_N selected from a probability distribution with variance σ_x², define y = (x_1 + x_2 + ... + x_N)/N, the mean of the selected stochastic variables. For sufficiently large N, the probability distribution for y approaches a Gaussian with standard deviation σ_y = σ_x / N^{1/2}, regardless of the distribution for x.

For example, individual non-interacting particles may obey a complex probability distribution with mean energy ⟨ε⟩ and standard deviation σ_ε; however, a collection of N particles tends to a Gaussian distribution with standard deviation σ_ε / N^{1/2} (Fig. 4).

[Figure 4: Comparison between the discrete random walk statistics associated with flipping a coin and a Gaussian distribution. As the number of steps increases, the distribution for a discrete random walk tends to a Gaussian.]
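A short numerical illustration of the central limit theorem (a sketch assuming NumPy; the exponential distribution here is an arbitrary non-Gaussian choice, not one specified in the notes): the sample means of N draws cluster around the true mean with a standard deviation close to σ_x / N^{1/2}.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                 # variables averaged in each sample of y
trials = 20000           # number of samples of y

# An arbitrary non-Gaussian distribution: exponential with mean 1 and std 1
x = rng.exponential(scale=1.0, size=(trials, N))
y = x.mean(axis=1)       # y = (x_1 + ... + x_N) / N

sigma_x = 1.0            # standard deviation of the underlying distribution
print(y.std(), sigma_x / np.sqrt(N))   # both ~0.0316
```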

Thermodynamic Ensembles

In Module 2, we introduced the concept of thermodynamic ensembles, where the system variables swap out extensive variables for their conjugate intensive variables (Fig. 5). The formal statement of this process was cast in the mathematical form of a Legendre transform. Our goal here is to develop the statistical mechanics description of each of these ensembles.

[Figure 5: Four different ensembles and their canonical variables.]

Closed, isolated system: microcanonical ensemble

Consider a closed, isolated system that is thermodynamically characterized by the entropy S = S(U, V, N) (Fig. 6). This thermodynamic system is called the microcanonical ensemble when considered in statistical thermodynamics.

[Figure 6: Closed, isolated system.]

We need to find a statistical distribution for the molecular states of the system that obeys the first and second laws of thermodynamics. The internal energy U identifies the total kinetic and potential energy E of the molecules in the system (first law), thus only states with total energy E are permitted in this ensemble. The probability distribution maximizes the system entropy (second law).

For fixed E, V, and N, there are many degenerate quantum states. We define the system state ν, characterizing the quantum state of the system for a given E, V, N. We also define the total degeneracy Ω to be the number of states corresponding to a specified E, V, and N, thus \sum_{\nu} 1 = \Omega(E, V, N). To illustrate what is meant by this summation, consider a single point particle in two dimensions. In Module 1, we found that the translational quantum states result in the total energy:

E_{n_x, n_y} = \frac{h^2}{8 m L^2} \left(n_x^2 + n_y^2\right)    (21)

where n_x, n_y = 1, 2, 3, ...

For a fixed energy E, the quantum states satisfy E = \frac{h^2}{8 m L^2} (n_x^2 + n_y^2), which is a semicircle in the n_x-n_y plane. The total degeneracy of states for fixed E is the number of points along this semicircle in the n_x-n_y plane, related to the circumference of the circle (Fig. 7).

[Figure 7: Plot of quantum states for a point particle in 2 dimensions, with the energy identified by the color (blue is low energy and red is high energy). The constant-energy curves are semicircular on this plot. The number of points along the constant-energy semicircle gives the degeneracy of states at fixed energy E = \frac{h^2}{8 m L^2}(n_x^2 + n_y^2).]
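To make the degeneracy Ω(E, V, N) concrete, here is a small sketch of our own (using dimensionless units where h²/(8mL²) = 1, so E = n_x² + n_y²) that counts the quantum states (n_x, n_y) of Eq. 21 lying in a thin shell around the radius R = sqrt(E) in the (n_x, n_y) plane; the count tracks the circumference of the constant-energy curve in Fig. 7 and so grows roughly like sqrt(E).

```python
import math

# States of a 2D particle in a box (Eq. 21), in units where h^2/(8 m L^2) = 1,
# so a fixed energy E corresponds to radius R = sqrt(E) in the (n_x, n_y) plane.
# Count points within a thin shell of width dR around that radius.

def degeneracy(E, dR=0.5):
    R = math.sqrt(E)
    count = 0
    for nx in range(1, int(R + dR) + 2):
        for ny in range(1, int(R + dR) + 2):
            r = math.hypot(nx, ny)
            if R - dR / 2 <= r <= R + dR / 2:
                count += 1
    return count

for E in (1000, 4000, 16000):
    print(E, degeneracy(E))   # roughly doubles each time E is quadrupled
```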

The probability that the system exists in state ν at any given time is P_ν; that is, if t_ν is the period of time the system spends in state ν and t is the total time, then P_ν → t_ν / t as t → ∞.

We introduce the microscopic definition of the system entropy S:

S = -k_B \sum_{\nu} P_\nu \log P_\nu    (22)

where k_B = 1.38 × 10⁻²³ J/K is Boltzmann's constant, a constant of Nature.

Our task is to determine the values of P_ν that maximize the entropy while maintaining normalization; thus we must maximize S subject to the constraint \sum_{\nu} P_\nu = 1. Introducing the Lagrange multiplier γ_0, the entropy is maximized (subject to the constraint) by maximizing:

(S - \gamma_0 \cdot 1) = -k_B \sum_{\nu} P_\nu \log P_\nu - \gamma_0 \sum_{\nu} P_\nu    (23)

Maximization occurs when a perturbation δP_ν does not alter the function (the first variation is zero), which occurs when:

0 = -k_B \sum_{\nu} (\log P_\nu + 1) \, \delta P_\nu - \gamma_0 \sum_{\nu} \delta P_\nu    (24)

Since δP_ν is arbitrary, the function is maximized if:

-k_B \log P_\nu - k_B - \gamma_0 = 0 \quad \Rightarrow \quad P_\nu = \exp\left(-1 - \frac{\gamma_0}{k_B}\right)    (25)

Summing this over all ν gives the result:

\sum_{\nu} P_\nu = \sum_{\nu} \exp\left(-1 - \frac{\gamma_0}{k_B}\right) = \Omega \exp\left(-1 - \frac{\gamma_0}{k_B}\right) = 1    (26)

This leaves the probability distribution:

P_\nu = \frac{1}{\Omega}    (27)

Equal a priori probability. In an isolated system at thermal equilibrium at a given E, V, N, the system spends an equal amount of time in each state ν over a sufficiently long period of time.

Ergodic hypothesis. All states ν consistent with a specified E, V, and N are accessible over a sufficiently long time:

- The motion of the molecules must be sufficiently random.
- This implies that the observation time t ≫ τ, where τ is a microscopic relaxation time for the system (e.g. the collision time in gases).
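The variational result of Eqs. 23-27 can be checked numerically. The sketch below (our own illustration, assuming NumPy and SciPy are available, with k_B set to 1) maximizes −Σ P_ν log P_ν subject only to normalization and recovers the uniform distribution P_ν = 1/Ω, with maximum entropy log Ω, anticipating the result derived next.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize S = -sum_i p_i log p_i subject to sum_i p_i = 1 over Omega states.
# The maximizer should be the uniform distribution p_i = 1/Omega (Eq. 27).
# (k_B is set to 1 here; it only rescales S.)

Omega = 8

def neg_entropy(p):
    return np.sum(p * np.log(p))

result = minimize(
    neg_entropy,
    x0=np.random.default_rng(1).dirichlet(np.ones(Omega)),   # random starting guess
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
    bounds=[(1e-9, 1.0)] * Omega,
)

print(result.x)                      # ~[0.125, 0.125, ..., 0.125] = 1/Omega
print(-result.fun, np.log(Omega))    # maximum entropy equals log(Omega)
```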

The system entropy is given by:

S = -k_B \sum_{\nu} \frac{1}{\Omega} \log \frac{1}{\Omega} = k_B \log \Omega    (28)

which gives complete knowledge of all thermodynamic behavior of the system, provided Ω can be evaluated.

Closed, isothermal system: canonical ensemble

Consider a closed, isothermal system that is thermodynamically characterized by the Helmholtz free energy F = F(T, V, N) (Fig. 8). This thermodynamic system is called the canonical ensemble when considered in statistical thermodynamics.

[Figure 8: Closed, isothermal system.]

We need to find a statistical distribution for the molecular states of the system that obeys the first and second laws of thermodynamics. In this case the internal energy U identifies the statistical average of the kinetic and potential energy ⟨E⟩ of the molecules in the system (first law), and the probability distribution maximizes the system entropy (second law).

In this ensemble, the total energy E is not uniquely fixed. As a result, the summation over states of the system includes all quantum states of the system, with each one weighted by the appropriate probability.

To illustrate what is meant by this summation, consider the total energy from the translational quantum states of a single point particle in two dimensions (Eq. 21). The internal energy is then found to satisfy:

U = \langle E \rangle = \sum_{n_x=1}^{\infty} \sum_{n_y=1}^{\infty} E_{n_x, n_y} P_{n_x, n_y}    (29)

where P_{n_x, n_y} is the probability of finding the system in the quantum state (n_x, n_y) (to be determined explicitly).

We begin with our microscopic definition of entropy:

S = -k_B \sum_{\nu} P_\nu \log P_\nu    (30)

where the sum over ν implies a sum over all system states.

In a fixed-temperature ensemble, the total system energy fluctuates about an average value, such that ⟨E⟩ = U (the internal energy). We introduce two constraints on the probability distribution:

\sum_{\nu} P_\nu = 1 \quad \text{and} \quad \langle E \rangle = \sum_{\nu} E_\nu P_\nu = U    (31)

The probability distribution that maximizes the entropy while satisfying these constraints corresponds to the maximization of:

(S - \gamma_0 \cdot 1 - \gamma_1 \langle E \rangle) = \sum_{\nu} \left(-k_B P_\nu \log P_\nu - \gamma_0 P_\nu - \gamma_1 E_\nu P_\nu\right)    (32)

The Lagrange multipliers γ_0 and γ_1 assume values that satisfy the constraints.

Setting the first variation of this function to zero gives:

\delta(S - \gamma_0 \cdot 1 - \gamma_1 \langle E \rangle) = \sum_{\nu} \left(-k_B \log P_\nu - k_B - \gamma_0 - \gamma_1 E_\nu\right) \delta P_\nu = 0

Since the δP_ν are arbitrary, each term is set to zero, thus:

-k_B \log P_\nu - k_B - \gamma_0 - \gamma_1 E_\nu = 0 \quad \Rightarrow \quad P_\nu = \exp\left(-1 - \frac{\gamma_0}{k_B}\right) \exp\left(-\frac{\gamma_1 E_\nu}{k_B}\right)    (33)

The first constraint \sum_{\nu} P_\nu = 1 results in:

1 = \exp\left(-1 - \frac{\gamma_0}{k_B}\right) \sum_{\nu} \exp\left(-\frac{\gamma_1 E_\nu}{k_B}\right) \quad \Rightarrow \quad Q \equiv \exp\left(1 + \frac{\gamma_0}{k_B}\right) = \sum_{\nu} \exp\left(-\frac{\gamma_1 E_\nu}{k_B}\right)    (34)

where we have defined the partition function Q. The probability P_ν now satisfies:

P_\nu = \frac{1}{Q} \exp\left(-\frac{\gamma_1 E_\nu}{k_B}\right)    (35)

where γ_1 is a currently unspecified Lagrange multiplier.

To satisfy the second constraint \sum_{\nu} E_\nu P_\nu = \langle E \rangle, we invoke thermodynamic properties. The differential of the entropy S gives:

(\delta S)_{V,N} = -\sum_{\nu} (k_B \log P_\nu + k_B) \, \delta P_\nu = \sum_{\nu} (\gamma_1 E_\nu + k_B \log Q - k_B) \, \delta P_\nu

and noting that \delta 1 = \sum_{\nu} \delta P_\nu = 0, we have:

(\delta S)_{V,N} = \sum_{\nu} \gamma_1 E_\nu \, \delta P_\nu    (36)

The differential of the average energy ⟨E⟩ gives:

(\delta \langle E \rangle)_{N,V} = \sum_{\nu} E_\nu \, \delta P_\nu    (37)

From thermodynamics, the Lagrange multiplier γ_1 is:

\left(\frac{\partial S}{\partial \langle E \rangle}\right)_{V,N} = \gamma_1 = \frac{1}{T}    (38)

The governing equations for the canonical ensemble are:

P_\nu = \frac{1}{Q} \exp\left(-\frac{E_\nu}{k_B T}\right) \quad \text{where} \quad Q = \sum_{\nu} \exp\left(-\frac{E_\nu}{k_B T}\right)    (39)

Using the definition of the probability distribution and thermodynamics, we have:

-k_B T \log P_\nu = E_\nu + k_B T \log Q

thus

T S = \langle E \rangle + k_B T \log Q \quad \Rightarrow \quad F = \langle E \rangle - T S = -k_B T \log Q

The canonical partition function Q provides complete information about the thermodynamic behavior.
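As a concrete illustration of Eq. 39 and F = −k_B T log Q, here is a small sketch for a hypothetical two-level system with energies 0 and ε = 1 (our own example, not one from the notes), in units where k_B = 1; it prints the state probabilities, the average energy, the free energy, and the entropy recovered from F = ⟨E⟩ − TS.

```python
import numpy as np

kB = 1.0
T = 1.0
E = np.array([0.0, 1.0])            # hypothetical two-level system: energies 0 and 1

Q = np.sum(np.exp(-E / (kB * T)))   # partition function, Eq. 39
P = np.exp(-E / (kB * T)) / Q       # Boltzmann probabilities, Eq. 39

E_avg = np.sum(E * P)               # <E> = U
F = -kB * T * np.log(Q)             # Helmholtz free energy
S = (E_avg - F) / T                 # entropy from F = <E> - T S

print(P)                            # [0.731, 0.269]
print(E_avg, F, S)
```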

If we write the partition function as Q = \sum_{\nu} \exp(-\beta E_\nu), where β = 1/(k_B T), the partition function Q acts as a generating function:

\langle E^m \rangle = \sum_{\nu} E_\nu^m P_\nu = \frac{1}{Q} \sum_{\nu} E_\nu^m \exp(-\beta E_\nu) = (-1)^m \frac{1}{Q} \left(\frac{\partial^m Q}{\partial \beta^m}\right)_{V,N}    (40)

Consider the variance of the energy:

\langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{Q} \left(\frac{\partial^2 Q}{\partial \beta^2}\right)_{V,N} - \frac{1}{Q^2} \left(\frac{\partial Q}{\partial \beta}\right)_{V,N}^2 = \left(\frac{\partial^2 \log Q}{\partial \beta^2}\right)_{V,N} = -\left(\frac{\partial^2 (\beta F)}{\partial \beta^2}\right)_{V,N} = k_B T^2 C_V

connecting the energy fluctuation to the heat capacity C_V.
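The fluctuation relation above is easy to verify numerically. Continuing the hypothetical two-level example (k_B = 1), this sketch compares ⟨E²⟩ − ⟨E⟩² computed directly from the Boltzmann probabilities with k_B T² C_V, where C_V is obtained by numerically differentiating ⟨E⟩ with respect to T.

```python
import numpy as np

kB = 1.0
E = np.array([0.0, 1.0])                     # same hypothetical two-level system

def average_energy(T):
    w = np.exp(-E / (kB * T))
    P = w / w.sum()
    return np.sum(E * P)

def energy_variance(T):
    w = np.exp(-E / (kB * T))
    P = w / w.sum()
    return np.sum(E**2 * P) - np.sum(E * P)**2

T = 1.0
dT = 1e-5
CV = (average_energy(T + dT) - average_energy(T - dT)) / (2 * dT)   # C_V = d<E>/dT

print(energy_variance(T), kB * T**2 * CV)    # the two values agree
```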

Open system: grand canonical ensemble

Consider an open system that is thermodynamically characterized by the Landau potential -pV = -pV(T, V, μ) (Fig. 9). This thermodynamic system is called the grand canonical ensemble when considered in statistical thermodynamics.

[Figure 9: Open system.]

We need to find a statistical distribution for the molecular states of the system that obeys the first and second laws of thermodynamics. In this case the internal energy U identifies the statistical average of the kinetic and potential energy ⟨E⟩ of the molecules in the system (first law), and the probability distribution maximizes the system entropy (second law).

In this ensemble, the total energy E and the particle numbers N_i (i = 1, 2, ..., r) are the fluctuating variables.

We begin with our microscopic definition of entropy:

S = -k_B \sum_{\nu} P_\nu \log P_\nu    (41)

where the sum over ν implies a sum over all system states and all particle numbers N_i = 0, 1, ... with i = 1, 2, ..., r.

We introduce the 2 + r constraints on the probability distribution:

\sum_{\nu} P_\nu = 1, \quad \langle E \rangle = \sum_{\nu} E_\nu P_\nu = U, \quad \langle N_i \rangle = \sum_{\nu} N_i^{\nu} P_\nu = N_i    (42)

The probability distribution that maximizes the entropy while satisfying these constraints corresponds to the maximization of:

\left(S - \gamma_0 \cdot 1 - \gamma_1 \langle E \rangle - \sum_{i=1}^{r} \eta_i \langle N_i \rangle\right) = \sum_{\nu} \left(-k_B P_\nu \log P_\nu - \gamma_0 P_\nu - \gamma_1 E_\nu P_\nu - \sum_{i=1}^{r} \eta_i N_i^{\nu} P_\nu\right)

The Lagrange multipliers γ_0, γ_1, and η_i (i = 1, 2, ..., r) assume values that satisfy the constraints.

Setting the first variation of this function to zero, we have:

\delta\left(S - \gamma_0 \cdot 1 - \gamma_1 \langle E \rangle - \sum_{i=1}^{r} \eta_i \langle N_i \rangle\right) = \sum_{\nu} \left(-k_B \log P_\nu - k_B - \gamma_0 - \gamma_1 E_\nu - \sum_{i=1}^{r} \eta_i N_i^{\nu}\right) \delta P_\nu

Since the δP_ν are arbitrary, each term is set to zero, giving:

-k_B \log P_\nu - k_B - \gamma_0 - \gamma_1 E_\nu - \sum_{i=1}^{r} \eta_i N_i^{\nu} = 0

P_\nu = \exp\left(-1 - \frac{\gamma_0}{k_B}\right) \exp\left(-\frac{\gamma_1 E_\nu}{k_B} - \sum_{i=1}^{r} \frac{\eta_i N_i^{\nu}}{k_B}\right)    (43)

The first constraint \sum_{\nu} P_\nu = 1 results in:

1 = \exp\left(-1 - \frac{\gamma_0}{k_B}\right) \sum_{\nu} \exp\left(-\frac{\gamma_1 E_\nu}{k_B} - \sum_{i=1}^{r} \frac{\eta_i N_i^{\nu}}{k_B}\right)

\Xi \equiv \exp\left(1 + \frac{\gamma_0}{k_B}\right) = \sum_{\nu} \exp\left(-\frac{\gamma_1 E_\nu}{k_B} - \sum_{i=1}^{r} \frac{\eta_i N_i^{\nu}}{k_B}\right)    (44)

where we have defined the grand canonical partition function Ξ. The probability P_ν now satisfies:

P_\nu = \frac{1}{\Xi} \exp\left(-\frac{\gamma_1 E_\nu}{k_B} - \sum_{i=1}^{r} \frac{\eta_i N_i^{\nu}}{k_B}\right)    (45)

EP = (E) and .

NtP =(Ni), we invoke thermodynamic properties. The differential of the en-tropy S gives(S)V = . (kB log P kB ) P.r.= . 1E + . iNt log kPiBi=1and noting that 1 = . P = 0, we have:.r.(S)V = .

1E + . iNt

P(46)i=1The differentials of the average energy (E) and (Ni) gives:((E))V = . EP((Ni))V = . NtPFrom thermodynamics, the Lagrange multiplier 1 is:. S (E)

.V ,N

1= 1 = T

and

. S (Ni)

.E,V ,Nj=i

i= i = TThe governing equations for the grand canonical ensemble are:1.Er iNt .

.iP = exp

kBT

i=1

kBT.rt .where = . exp

E . iNi+kBT

k Ti=1Using the probability distribution and thermodynamics, we have:rkBT log P = E + . iNt k

T log iBi=1 rTS = (E) + . i(Ni) kBT log i=1rpV = (E) TS . i(Ni) = kBT log i=1Therefore, the grand canonical partition function provides com- plete knowledge of the thermodynamic behavior of the system. Fur- thermore, the grand canonical partition function acts as a generating function for averages; for example, the moments are:. m .(Em) = (1)m

and (Nm) =

1 . m .

(47)m

V ,

im

T ,V ,j=iwhere = 1/(kBT ) and i = i/(kBT )..
