Contents
1 From Microscopic to Macroscopic Behavior 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Some qualitative observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Doing work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Quality of energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Some simple simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 Work, heating, and the first law of thermodynamics . . . . . . . . . . . . . . . . . 14
1.7 Measuring the pressure and temperature . . . . . . . . . . . . . . . . . . . . . . . . 15
1.8 *The fundamental need for a statistical approach . . . . . . . . . . . . . . . . . . . 18
1.9 *Time and ensemble averages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.10 *Models of matter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.10.1 The ideal gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.10.2 Interparticle potentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.10.3 Lattice models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.11 Importance of simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.12 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Additional Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Suggestions for Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2 Thermodynamic Concepts 26
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2 The system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3 Thermodynamic Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4 Temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5 Pressure Equation of State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.6 Some Thermodynamic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.7 Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.8 The First Law of Thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.9 Energy Equation of State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.10 Heat Capacities and Enthalpy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.11 Adiabatic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.12 The Second Law of Thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.13 The Thermodynamic Temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.14 The Second Law and Heat Engines . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.15 Entropy Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.16 Equivalence of Thermodynamic and Ideal Gas Scale Temperatures . . . . . . . . . 60
2.17 The Thermodynamic Pressure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.18 The Fundamental Thermodynamic Relation . . . . . . . . . . . . . . . . . . . . . . 62
2.19 The Entropy of an Ideal Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.20 The Third Law of Thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2.21 Free Energies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Appendix 2B: Mathematics of Thermodynamics . . . . . . . . . . . . . . . . . . . . . . 70
Additional Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Suggestions for Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3 Concepts of Probability 82
3.1 Probability in everyday life . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.2 The rules of probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.3 Mean values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.4 The meaning of probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.4.1 Information and uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.4.2 *Bayesian inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.5 Bernoulli processes and the binomial distribution . . . . . . . . . . . . . . . . . . . 99
3.6 Continuous probability distributions . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3.7 The Gaussian distribution as a limit of the binomial distribution . . . . . . . . . . 111
3.8 The central limit theorem or why is thermodynamics possible? . . . . . . . . . . . 113
3.9 The Poisson distribution and should you fly in airplanes? . . . . . . . . . . . . . . 116
3.10 *Traffic flow and the exponential distribution . . . . . . . . . . . . . . . . . . . . . 117
3.11 *Are all probability distributions Gaussian? . . . . . . . . . . . . . . . . . . . . . . 119
Additional Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Suggestions for Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4 Statistical Mechanics 138
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.2 A simple example of a thermal interaction . . . . . . . . . . . . . . . . . . . . . . . 140
4.3 Counting microstates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.3.1 Noninteracting spins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.3.2 *One-dimensional Ising model . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.3.3 A particle in a one-dimensional box . . . . . . . . . . . . . . . . . . . . . . 151
4.3.4 One-dimensional harmonic oscillator . . . . . . . . . . . . . . . . . . . . . . 153
4.3.5 One particle in a two-dimensional box . . . . . . . . . . . . . . . . . . . . . 154
4.3.6 One particle in a three-dimensional box . . . . . . . . . . . . . . . . . . . . 156
4.3.7 Two noninteracting identical particles and the semiclassical limit . . . . . . 156
4.4 The number of states of N noninteracting particles: Semiclassical limit . . . . . . . 158
4.5 The microcanonical ensemble (fixed E, V, and N) . . . . . . . . . . . . . . . . . . . 160
4.6 Systems in contact with a heat bath: The canonical ensemble (fixed T, V, and N) 165
4.7 Connection between statistical mechanics and thermodynamics . . . . . . . . . . . 170
4.8 Simple applications of the canonical ensemble . . . . . . . . . . . . . . . . . . . . . 172
4.9 A simple thermometer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
4.10 Simulations of the microcanonical ensemble . . . . . . . . . . . . . . . . . . . . . . 177
4.11 Simulations of the canonical ensemble . . . . . . . . . . . . . . . . . . . . . . . . . 178
4.12 Grand canonical ensemble (fixed T, V, and µ) . . . . . . . . . . . . . . . . . . . . . 179
4.13 Entropy and disorder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Appendix 4A: The Volume of a Hypersphere . . . . . . . . . . . . . . . . . . . . . . . . 183
Appendix 4B: Fluctuations in the Canonical Ensemble . . . . . . . . . . . . . . . . . . . 184
Additional Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Suggestions for Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5 Magnetic Systems 190
5.1 Paramagnetism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.2 Thermodynamics of magnetism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
5.3 The Ising model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
5.4 The Ising Chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
5.4.1 Exact enumeration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
5.4.2 *Spin-spin correlation function . . . . . . . . . . . . . . . . . . . . . . . . 199
5.4.3 Simulations of the Ising chain . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.4.4 *Transfer matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5.4.5 Absence of a phase transition in one dimension . . . . . . . . . . . . . . . . 205
5.5 The Two-Dimensional Ising Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.5.1 Onsager solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.5.2 Computer simulation of the two-dimensional Ising model . . . . . . . . . . 211
5.6 Mean-Field Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.7 *Infinite-range interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Additional Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Suggestions for Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
6 Noninteracting Particle Systems 230
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.2 The Classical Ideal Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.3 Classical Systems and the Equipartition Theorem . . . . . . . . . . . . . . . . . . . 238
6.4 Maxwell Velocity Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
6.5 Occupation Numbers and Bose and Fermi Statistics . . . . . . . . . . . . . . . . . 243
6.6 Distribution Functions of Ideal Bose and Fermi Gases . . . . . . . . . . . . . . . . 245
6.7 Single Particle Density of States . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
6.7.1 Photons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
6.7.2 Electrons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.8 The Equation of State for a Noninteracting Classical Gas . . . . . . . . . . . . . . 252
6.9 Black Body Radiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
6.10 Noninteracting Fermi Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
6.10.1 Ground-state properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
6.10.2 Low temperature thermodynamic properties . . . . . . . . . . . . . . . . . . 263
6.11 Bose Condensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
6.12 The Heat Capacity of a Crystalline Solid . . . . . . . . . . . . . . . . . . . . . . . . 272
6.12.1 The Einstein model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
6.12.2 Debye theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Appendix 6A: Low Temperature Expansion . . . . . . . . . . . . . . . . . . . . . . . . . 275
Suggestions for Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
7 Thermodynamic Relations and Processes 288
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
7.2 Maxwell Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
7.3 Applications of the Maxwell Relations . . . . . . . . . . . . . . . . . . . . . . . . . 291
7.3.1 Internal energy of an ideal gas . . . . . . . . . . . . . . . . . . . . . . . . . 291
7.3.2 Relation between the specific heats . . . . . . . . . . . . . . . . . . . . . . . 291
7.4 Applications to Irreversible Processes . . . . . . . . . . . . . . . . . . . . . . . . . . 292
7.4.1 The Joule or free expansion process . . . . . . . . . . . . . . . . . . . . . . 293
7.4.2 Joule-Thomson process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
7.5 Equilibrium Between Phases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
7.5.1 Equilibrium conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
7.5.2 Clausius-Clapeyron equation . . . . . . . . . . . . . . . . . . . . . . . . . . 298
7.5.3 Simple phase diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
7.5.4 Pressure dependence of the melting point . . . . . . . . . . . . . . . . . . . 301
7.5.5 Pressure dependence of the boiling point . . . . . . . . . . . . . . . . . . . . 302
7.5.6 The vapor pressure curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
7.6 Vocabulary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Additional Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Suggestions for Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
8 Classical Gases and Liquids 306
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
8.2 The Free Energy of an Interacting System . . . . . . . . . . . . . . . . . . . . . . . 306
8.3 Second Virial Coefficient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
8.4 Cumulant Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
8.5 High Temperature Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
8.6 Density Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
8.7 Radial Distribution Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
8.7.1 Relation of thermodynamic functions to g(r) . . . . . . . . . . . . . . . . . 326
8.7.2 Relation of g(r) to static structure function S(k) . . . . . . . . . . . . . . . 327
8.7.3 Variable number of particles . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
8.7.4 Density expansion of g(r) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8.8 Computer Simulation of Liquids . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8.9 Perturbation Theory of Liquids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
8.9.1 The van der Waals Equation . . . . . . . . . . . . . . . . . . . . . . . . . . 334
8.9.2 Chandler-Weeks-Andersen theory . . . . . . . . . . . . . . . . . . . . . . . . 335
8.10 *The Ornstein-Zernike Equation . . . . . . . . . . . . . . . . . . . . . . . . . . 336
8.11 *Integral Equations for g(r) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
8.12 *Coulomb Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
8.12.1 Debye-Hückel Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
8.12.2 Linearized Debye-Hückel approximation . . . . . . . . . . . . . . . . . . . . 341
8.12.3 Diagrammatic Expansion for Charged Particles . . . . . . . . . . . . . . . . 342
8.13 Vocabulary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Appendix 8A: The third virial coefficient for hard spheres . . . . . . . . . . . . . . . . . 344
8.14 Additional Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
9 Critical Phenomena 350
9.1 A Geometrical Phase Transition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
9.2 Renormalization Group for Percolation . . . . . . . . . . . . . . . . . . . . . . . . . 354
9.3 The Liquid-Gas Transition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
9.4 Bethe Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
9.5 Landau Theory of Phase Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . 363
9.6 Other Models of Magnetism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
9.7 Universality and Scaling Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
9.8 The Renormalization Group and the 1D Ising Model . . . . . . . . . . . . . . . . . 372
9.9 The Renormalization Group and the Two-Dimensional Ising Model . . . . . . . . . 376
9.10 Vocabulary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
9.11 Additional Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
Suggestions for Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
10 Introduction to Many-Body Perturbation Theory 387
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
10.2 Occupation Number Representation . . . . . . . . . . . . . . . . . . . . . . . . . . 388
10.3 Operators in the Second Quantization Formalism . . . . . . . . . . . . . . . . . . . 389
10.4 Weakly Interacting Bose Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
A Useful Formulae 397
A.1 Physical constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
A.2 SI derived units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
A.3 Conversion factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
A.4 Mathematical Formulae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
A.5 Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
A.6 Euler-Maclaurin formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
A.7 Gaussian Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
A.8 Stirling’s formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
A.9 Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
A.10 Probability distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
A.11 Fermi integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
A.12 Bose integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Chapter 1
From Microscopic to Macroscopic
Behavior
© 2006 by Harvey Gould and Jan Tobochnik
28 August 2006
The goal of this introductory chapter is to explore the fundamental differences between micro-
scopic and macroscopic systems and the connections between classical mechanics and statistical
mechanics. We note that bouncing balls come to rest and hot objects cool, and discuss how the
behavior of macroscopic objects is related to the behavior of their microscopic constituents. Com-
puter simulations will be introduced to demonstrate the relation of microscopic and macroscopic
behavior.
1.1 Introduction
Our goal is to understand the properties of macroscopic systems, that is, systems of many elec-
trons, atoms, molecules, photons, or other constituents. Examples of familiar macroscopic objects
include systems such as the air in your room, a glass of water, a copper coin, and a rubber band
(examples of a gas, liquid, solid, and polymer, respectively). Less familiar macroscopic systems
are superconductors, cell membranes, the brain, and the galaxies.
We will find that the types of questions we ask about macroscopic systems differ in important
ways from the questions we ask about microscopic systems. An example of a question about a
microscopic system is “What is the shape of the trajectory of the Earth in the solar system?”
In contrast, have you ever wondered about the trajectory of a particular molecule in the air of
your room? Why not? Is it relevant that these molecules are not visible to the eye? Examples of
questions that we might ask about macroscopic systems include the following:
1. How does the pressure of a gas depend on the temperature and the volume of its container?
2. How does a refrigerator work? What is its maximum efficiency?
3. How much energy do we need to add to a kettle of water to change it to steam?
4. Why are the properties of water different from those of steam, even though water and steam
consist of the same type of molecules?
5. How are the molecules arranged in a liquid?
6. How and why does water freeze into a particular crystalline structure?
7. Why does iron lose its magnetism above a certain temperature?
8. Why does helium condense into a superfluid phase at very low temperatures? Why do some
materials exhibit zero resistance to electrical current at sufficiently low temperatures?
9. How fast does a river current have to be before its flow changes from laminar to turbulent?
10. What will the weather be tomorrow?
The above questions can be roughly classified into three groups. Questions 1–3 are concerned
with macroscopic properties such as pressure, volume, and temperature and questions related to
heating and work. These questions are relevant to thermodynamics which provides a framework
for relating the macroscopic properties of a system to one another. Thermodynamics is concerned
only with macroscopic quantities and ignores the microscopic variables that characterize individual
molecules. For example, we will find that understanding the maximum efficiency of a refrigerator
does not require a knowledge of the particular liquid used as the coolant. Many of the applications
of thermodynamics are to thermal engines, for example, the internal combustion engine and the
steam turbine.
Questions 4–8 relate to understanding the behavior of macroscopic systems starting from the
atomic nature of matter. For example, we know that water consists of molecules of hydrogen
and oxygen. We also know that the laws of classical and quantum mechanics determine the
behavior of molecules at the microscopic level. The goal of statistical mechanics is to begin with
the microscopic laws of physics that govern the behavior of the constituents of the system and
deduce the properties of the system as a whole. Statistical mechanics is the bridge between the
microscopic and macroscopic worlds.
Thermodynamics and statistical mechanics assume that the macroscopic properties of the
system do not change with time on the average. Thermodynamics describes the change of a
macroscopic system from one equilibrium state to another. Questions 9 and 10 concern macro-
scopic phenomena that change with time. Related areas are nonequilibrium thermodynamics and
fluid mechanics from the macroscopic point of view and nonequilibrium statistical mechanics from
the microscopic point of view. Although there has been progress in our understanding of nonequi-
librium phenomena such as turbulent flow and hurricanes, our understanding of nonequilibrium
phenomena is much less advanced than our understanding of equilibrium systems. Because un-
derstanding the properties of macroscopic systems that are independent of time is easier, we will
focus our attention on equilibrium systems and consider questions such as those in Questions 1–8.
1.2 Some qualitative observations
We begin our discussion of macroscopic systems by considering a glass of water. We know that if
we place a glass of hot water into a cool room, the hot water cools until its temperature equals
that of the room. This simple observation illustrates two important properties associated with
macroscopic systems – the importance of temperature and the arrow of time. Temperature is
familiar because it is associated with the physiological sensation of hot and cold and is important
in our everyday experience. We will find that temperature is a subtle concept.
The direction or arrow of time is an even more subtle concept. Have you ever observed a glass
of water at room temperature spontaneously become hotter? Why not? What other phenomena
exhibit a direction of time? Time has a direction, as expressed by the nursery rhyme:
Humpty Dumpty sat on a wall
Humpty Dumpty had a great fall
All the king’s horses and all the king’s men
Couldn’t put Humpty Dumpty back together again.
Is there a direction of time for a single particle? Newton’s second law for a single particle,
F = dp/dt, implies that the motion of particles is time reversal invariant, that is, Newton’s second
law looks the same if the time t is replaced by −t and the momentum p by −p. There is no
direction of time at the microscopic level. Yet if we drop a basketball onto a floor, we know that it
will bounce and eventually come to rest. Nobody has observed a ball at rest spontaneously begin
to bounce, and then bounce higher and higher. So based on simple everyday observations, we can
conclude that the behavior of macroscopic bodies and single particles is very different.
Unlike the generations of a century or so ago, we know that macroscopic systems such as a
glass of water and a basketball consist of many molecules. Although the intermolecular forces in
water produce a complicated trajectory for each molecule, the observable properties of water are
easy to describe. Moreover, if we prepare two glasses of water under similar conditions, we would
find that the observable properties of the water in each glass are indistinguishable, even though
the motion of the individual particles in the two glasses would be very different.
Because the macroscopic behavior of water must be related in some way to the trajectories of its
constituent molecules, we conclude that there must be a relation between the notion of temperature
and mechanics. For this reason, as we discuss the behavior of the macroscopic properties of a glass
of water and a basketball, it will be useful to discuss the relation of these properties to the motion
of their constituent molecules.
For example, if we take into account that the bouncing ball and the floor consist of molecules,
then we know that the total energy of the ball and the floor is conserved as the ball bounces
and eventually comes to rest. What is the cause of the ball eventually coming to rest? You
might be tempted to say the cause is “friction,” but friction is just a name for an effective or
phenomenological force. At the microscopic level we know that the fundamental forces associated
with mass, charge, and the nucleus conserve the total energy. So if we take into account the
molecules of the ball and the floor, their total energy is conserved. Conservation of energy does
not explain why the inverse process does not occur, because such a process also would conserve
the total energy. So a more fundamental explanation is that the ball comes to rest consistent with
conservation of the total energy and consistent with some other principle of physics. We will learn
that this principle is associated with an increase in the entropy of the system. For now, entropy is
only a name, and it is important only to understand that energy conservation is not sufficient to
understand the behavior of macroscopic systems. (As for most concepts in physics, the meaning
of entropy in the context of thermodynamics and statistical mechanics is very different from the
way entropy is used by nonscientists.)
For now, the nature of entropy is vague, because we do not have an entropy meter like we do
for energy and temperature. What is important at this stage is to understand why the concept of
energy is not sufficient to describe the behavior of macroscopic systems.
By thinking about the constituent molecules, we can gain some insight into the nature of
entropy. Let us consider the ball bouncing on the floor again. Initially, the energy of the ball
is associated with the motion of its center of mass, that is, the energy is associated with one
degree of freedom. However, after some time, the energy becomes associated with many degrees
of freedom associated with the individual molecules of the ball and the floor. If we were to bounce
the ball on the floor many times, the ball and the floor would each feel warm to our hands. So we
can hypothesize that energy has been transferred from one degree of freedom to many degrees of
freedom at the same time that the total energy has been conserved. Hence, we conclude that the
entropy is a measure of how the energy is distributed over the degrees of freedom.
What other quantities are associated with macroscopic systems besides temperature, energy,
and entropy? We are already familiar with some of these quantities. For example, we can measure
the air pressure in a basketball and its volume. More complicated quantities are the thermal
conductivity of a solid and the viscosity of oil. How are these macroscopic quantities related to
each other and to the motion of the individual constituent molecules? The answers to questions
such as these and the meaning of temperature and entropy will take us through many chapters.
1.3 Doing work
We already have observed that hot objects cool, and cool objects do not spontaneously become
hot; bouncing balls come to rest, and a stationary ball does not spontaneously begin to bounce.
And although the total energy must be conserved in any process, the distribution of energy changes
in an irreversible manner. We also have concluded that a new concept, the entropy, needs to be
introduced to explain the direction of change of the distribution of energy.
Now let us take a purely macroscopic viewpoint and discuss how we can arrive at a similar
qualitative conclusion about the asymmetry of nature. This viewpoint was especially important
historically because of the lack of a microscopic theory of matter in the 19th century when the
laws of thermodynamics were being developed.
Consider the conversion of stored energy into heating a house or a glass of water. The stored
energy could be in the form of wood, coal, or animal and vegetable oils for example. We know that
this conversion is easy to do using simple methods, for example, an open fireplace. We also know
that if we rub our hands together, they will become warmer. In fact, there is no theoretical limit[1]
to the efficiency at which we can convert stored energy to energy used for heating an object.
What about the process of converting stored energy into work? Work like many of the other
concepts that we have mentioned is difficult to define. For now let us say that doing work is
[1] Of course, the efficiency cannot exceed 100%.
equivalent to the raising of a weight (see Problem 1.18). To be useful, we need to do this conversion
in a controlled manner and indefinitely. A single conversion of stored energy into work such as the
explosion of a bomb might do useful work, such as demolishing an unwanted football stadium, but
a bomb is not a useful device that can be recycled and used again. It is much more difficult to
convert stored energy into work and the discovery of ways to do this conversion led to the industrial
revolution. In contrast to the primitiveness of the open hearth, we have to build an engine to do
this conversion.
Can we convert stored energy into work with 100% efficiency? On the basis of macroscopic
arguments alone, we cannot answer this question and have to appeal to observations. We know
that some forms of stored energy are more useful than others. For example, why do we bother to
burn coal and oil in power plants even though the atmosphere and the oceans are vast reservoirs
of energy? Can we mitigate global warming by extracting energy from the atmosphere to run a
power plant? From the work of Kelvin, Clausius, Carnot and others, we know that we cannot
convert stored energy into work with 100% efficiency, and we must necessarily “waste” some of
the energy. At this point, it is easier to understand the reason for this necessary inefficiency by
microscopic arguments. For example, the energy in the gasoline of the fuel tank of an automobile
is associated with many molecules. The job of the automobile engine is to transform this energy
so that it is associated with only a few degrees of freedom, that is, the rolling tires and gears. It
is plausible that it is inefficient to transfer energy from many degrees of freedom to only a few.
In contrast, transferring energy from a few degrees of freedom (the firewood) to many degrees of
freedom (the air in your room) is relatively easy.
The importance of entropy, the direction of time, and the inefficiency of converting stored
energy into work are summarized in the various statements of the second law of thermodynamics.
It is interesting that historically, the second law of thermodynamics was conceived before the first
law. As we will learn in Chapter 2, the first law is a statement of conservation of energy.
1.4 Quality of energy
Because the total energy is conserved (if all energy transfers are taken into account), why do we
speak of an “energy shortage”? The reason is that energy comes in many forms and some forms are
more useful than others. In the context of thermodynamics, the usefulness of energy is determined
by its ability to do work.
Suppose that we take some firewood and use it to “heat” a sealed room. Because of energy
conservation, the energy in the room plus the firewood is the same before and after the firewood
has been converted to ash. But which form of the energy is more capable of doing work? You
probably realize that the firewood is a more useful form of energy than the “hot air” that exists
after the firewood is burned. Originally the energy was stored in the form of chemical (potential)
energy. Afterward the energy is mostly associated with the motion of the molecules in the air.
What has changed is not the total energy, but its ability to do work. We will learn that an increase
in entropy is associated with a loss of ability to do work. We have an entropy problem, not an
energy shortage.
1.5 Some simple simulations
So far we have discussed the behavior of macroscopic systems by appealing to everyday experience
and simple observations. We now discuss some simple ways that we can simulate the behavior of
macroscopic systems, which consist of the order of 10^23 particles. Although we cannot simulate
such a large system on a computer, we will find that even relatively small systems of the order of
a hundred particles are sufficient to illustrate the qualitative behavior of macroscopic systems.
Consider a macroscopic system consisting of particles whose internal structure can be ignored.
In particular, imagine a system of N particles in a closed container of volume V and suppose that
the container is far from the influence of external forces such as gravity. We will usually consider
two-dimensional systems so that we can easily visualize the motion of the particles.
For simplicity, we assume that the motion of the particles is given by classical mechanics,
that is, by Newton’s second law. If the resultant equations of motion are combined with initial
conditions for the positions and velocities of each particle, we can calculate, in principle, the
trajectory of each particle and the evolution of the system. To compute the total force on each
particle we have to specify the nature of the interaction between the particles. We will assume
that the force between any pair of particles depends only on the distance between them. This
simplifying assumption is applicable to simple liquids such as liquid argon, but not to water. We
will also assume that the particles are not charged. The force between any two particles must be
repulsive when their separation is small and weakly attractive when they are reasonably far apart.
For simplicity, we will usually assume that the interaction is given by the Lennard-Jones potential,
whose form is given by
u(r) = 4ε[(σ/r)^12 − (σ/r)^6].   (1.1)
A plot of the Lennard-Jones potential is shown in Figure 1.1. The r^(−12) form of the repulsive part
of the interaction is chosen for convenience only and has no fundamental significance. However,
the attractive 1/r^6 behavior at large r is the van der Waals interaction. The force between any
two particles is given by f(r) = −du/dr.
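In a simulation, the potential (1.1) and the corresponding force f(r) = −du/dr are evaluated many times per time step. A minimal sketch in Python (the function names are our own, not those of any particular simulation package):

```python
def lj_potential(r, eps=1.0, sigma=1.0):
    """Lennard-Jones potential u(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def lj_force(r, eps=1.0, sigma=1.0):
    """Radial force f(r) = -du/dr (positive means repulsive)."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6**2 - sr6) / r

# The potential has its minimum u = -eps at r = 2**(1/6) * sigma,
# where the force vanishes.
r_min = 2.0 ** (1.0 / 6.0)
print(lj_potential(r_min))   # ≈ -1.0
print(lj_force(r_min))       # ≈ 0.0
```

Note that the force is repulsive (positive) for r below the minimum of u(r) and weakly attractive (negative) beyond it, as described above.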
Usually we want to simulate a gas or liquid in the bulk. In such systems the fraction of
particles near the walls of the container is negligibly small. However, the number of particles that
can be studied in a simulation is typically 10^3–10^6. For these relatively small systems, the fraction
of particles near the walls of the container would be significant, and hence the behavior of such
a system would be dominated by surface effects. The most common way of minimizing surface
effects and of simulating more closely the properties of a bulk system is to use what are known as
toroidal boundary conditions. These boundary conditions are familiar to computer game players.
For example, a particle that exits the right edge of the “box” re-enters the box from the left side.
In one dimension, this boundary condition is equivalent to taking a piece of wire and making it
into a loop. In this way a particle moving on the wire never reaches the end.
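In code, toroidal boundary conditions amount to two small operations: wrapping coordinates back into the box, and computing separations via the nearest periodic image. A sketch, assuming a box [0, L) in each direction:

```python
def wrap(x, L):
    """Wrap coordinate x into the box [0, L): a particle exiting the
    right edge re-enters on the left, as on a torus."""
    return x % L

def min_image(dx, L):
    """Minimum-image separation: the shortest displacement between two
    particles, allowing for the periodic copies of the box."""
    return dx - L * round(dx / L)

L = 10.0
print(wrap(10.3, L))      # ≈ 0.3, the particle re-enters from the left
print(min_image(9.0, L))  # -1.0, the nearer periodic image is used
```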
Given the form of the interparticle potential, we can determine the total force on each particle
due to all the other particles in the system. Given this force, we find the acceleration of each
particle from Newton’s second law of motion. Because the acceleration is the second derivative
of the position, we need to solve a second-order differential equation for each particle (for each
direction). (For a two-dimensional system of N particles, we would have to solve 2N differential
equations.) These differential equations are coupled because the acceleration of a given particle
Figure 1.1: Plot of the Lennard-Jones potential u(r), where r is the distance between the particles.
Note that the potential is characterized by a length σ and an energy ε.
depends on the positions of all the other particles. Obviously, we cannot solve the resultant
set of coupled differential equations analytically. However, we can use relatively straightforward
numerical methods to solve these equations to a good approximation. This way of simulating dense
gases, liquids, solids, and biomolecules is called molecular dynamics.2
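The coupled equations of motion can be integrated step by step. Here is a minimal two-dimensional sketch using the velocity-Verlet algorithm, a common choice in molecular dynamics; this is an illustration under our own conventions (unit mass, toroidal box of side L), not the code behind the applets:

```python
import numpy as np

def lj_accel(pos, L, eps=1.0, sigma=1.0):
    """Acceleration of each particle (unit mass) from the Lennard-Jones
    forces in a two-dimensional box of side L with toroidal boundaries."""
    a = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            d -= L * np.round(d / L)       # nearest periodic image
            r2 = np.dot(d, d)
            sr6 = (sigma**2 / r2) ** 3
            f = 24.0 * eps * (2.0 * sr6**2 - sr6) / r2 * d
            a[i] += f                      # unit mass: a = f
            a[j] -= f                      # Newton's third law
    return a

def verlet_step(pos, vel, L, dt):
    """One velocity-Verlet step for all 2N coupled equations of motion."""
    a = lj_accel(pos, L)
    vel_half = vel + 0.5 * dt * a
    pos = (pos + dt * vel_half) % L        # toroidal boundary conditions
    vel = vel_half + 0.5 * dt * lj_accel(pos, L)
    return pos, vel
```

Repeatedly calling `verlet_step` evolves the positions and velocities of all N particles at once, which is why the acceleration of each particle must be recomputed from the positions of all the others at every step.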
Approach to equilibrium. In the following we will explore some of the qualitative properties
of macroscopic systems by doing some simple simulations. Before you actually do the simulations,
think about what you believe the results will be. In many cases, the most valuable part of the sim-
ulation is not the simulation itself, but the act of thinking about a concrete model and its behavior.
The simulations can be run as applications on your computer by downloading the Launcher from
<stp.clarku.edu/simulations/choose.html>. The Launcher conveniently packages all the sim-
ulations (and a few more) discussed in these notes into a single file. Alternatively, you can run
each simulation as an applet using a browser.
Problem 1.1. Approach to equilibrium
Suppose that a box is divided into three equal parts and N particles are placed at random in
the middle third of the box.3
The velocity of each particle is assigned at random and then the
velocity of the center of mass is set to zero. At t = 0, we remove the “barriers” between the
2The nature of molecular dynamics is discussed in Chapter 8 of Gould, Tobochnik, and Christian.
3We have divided the box into three parts so that the effects of the toroidal boundary conditions will not be as
apparent as if we had initially confined the particles to one half of the box. The particles are placed at random in
the middle third of the box with the constraint that no two particles can be closer than the length σ. This constraint
prevents the initial force between any two particles from being too large, which would lead to the breakdown of the
numerical method used to solve the differential equations. The initial density ρ = N/A is ρ = 0.2.
three parts and watch the particles move according to Newton’s equations of motion. We say
that the removal of the barrier corresponds to the removal of an internal constraint. What do
you think will happen? The applet/application at <stp.clarku.edu/simulations/approach.
html> implements this simulation. Give your answers to the following questions before you do the
simulation.
(a) Start the simulation with N = 27, n1 = 0, n2 = N, and n3 = 0. What is the qualitative
behavior of n1, n2, and n3, the number of particles in each third of the box, as a function of
the time t? Does the system appear to show a direction of time? Choose various values of N
that are multiples of three up to N = 270. Is the direction of time better defined for larger N?
(b) Suppose that we made a video of the motion of the particles considered in Problem 1.1a. Would
you be able to tell if the video were played forward or backward for the various values of N?
Would you be willing to make an even bet about the direction of time? Does your conclusion
about the direction of time become more certain as N increases?
(c) After n1, n2, and n3 become approximately equal for N = 270, reverse the time and continue
the simulation. Reversing the time is equivalent to letting t → −t and changing the signs of
all the velocities. Do the particles return to the middle third of the box? Do the simulation
again, but let the particles move for a longer time before the time is reversed. What happens
now?
(d) From watching the motion of the particles, describe the nature of the boundary conditions
that are used in the simulation.
The results of the simulations in Problem 1.1 might not seem very surprising until you start
to think about them. Why does the system as a whole exhibit a direction of time when the motion
of each particle is time reversible? Do the particles fill up the available space simply because the
system becomes less dense?
To gain some more insight into these questions, we consider a simpler simulation. Imagine
a closed box that is divided into two parts of equal volume. The left half initially contains N
identical particles and the right half is empty. We then make a small hole in the partition between
the two halves. What happens? Instead of simulating this system by solving Newton’s equations
for each particle, we adopt a simpler approach based on a probabilistic model. We assume that the
particles do not interact with one another so that the probability per unit time that a particle goes
through the hole in the partition is the same for all particles regardless of the number of particles
in either half. We also assume that the size of the hole is such that only one particle can pass
through it in one unit of time.
One way to implement this model is to choose a particle at random and move it to the other
side. This procedure is cumbersome, because our only interest is the number of particles on each
side. That is, we need to know only n, the number of particles on the left side; the number on
the right side is N − n. Because each particle has the same chance to go through the hole in the
partition, the probability per unit time that a particle moves from left to right equals the number
of particles on the left side divided by the total number of particles; that is, the probability of a
move from left to right is n/N. The algorithm for simulating the evolution of the model is given
by the following steps:
Figure 1.2: Evolution of the number of particles in each third of the box for N = 270. The particles
were initially restricted to the middle third of the box. Toroidal boundary conditions are used in
both directions. The initial velocities were assigned at random from a distribution corresponding
to temperature T = 5. The time was reversed at t ≈ 59. Does the system exhibit a direction of
time?
1. Generate a random number r from a uniformly distributed set of random numbers in the
unit interval 0 ≤ r < 1.
2. If r ≤ n/N, move a particle from left to right, that is, let n → n − 1; otherwise, move a
particle from right to left, n → n + 1.
3. Increase the “time” by 1.
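These three steps can be written directly in Python; the following is a toy version for exploring the model (the function and variable names are our own):

```python
import random

def simulate(N, steps, seed=17):
    """Evolve n, the number of particles on the left, starting with all
    N particles on the left, following the three steps above."""
    random.seed(seed)
    n = N
    history = [n]
    for _ in range(steps):
        r = random.random()      # step 1: uniform random number in [0, 1)
        if r <= n / N:
            n -= 1               # step 2: move a particle left to right
        else:
            n += 1               #         or right to left
        history.append(n)        # step 3: increase the "time" by 1
    return history

n_t = simulate(N=100, steps=3000)
# n(t) decays from N toward N/2 and then fluctuates about N/2
print(n_t[0], sum(n_t[-1000:]) / 1000)
```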
Problem 1.2. Particles in a box
(a) The applet at <stp.clarku.edu/simulations/box.html> implements this algorithm and
plots the evolution of n. Describe the behavior of n(t) for various values of N. Does the
system approach equilibrium? How would you characterize equilibrium? In what sense is
equilibrium better defined as N becomes larger? Does your definition of equilibrium depend
on how the particles were initially distributed between the two halves of the box?
(b) When the system is in equilibrium, does the number of particles on the left-hand side remain
a constant? If not, how would you describe the nature of equilibrium?
(c) If N ≫ 32, does the system ever return to its initial state?
(d) How does n̄, the mean number of particles on the left-hand side, depend on N after the system
has reached equilibrium? For simplicity, the program computes various averages from time
t = 0. Why would such a calculation not yield the correct equilibrium average values? What
is the purpose of the Zero averages button?
(e) Define the quantity σ by the relation σ^2 = (n − n̄)^2, where the bar denotes an average over
time. What does σ measure? What would be its value if n were constant? How does σ depend on
N? How does the ratio σ/n̄ depend on N? In what sense is equilibrium better defined as N
increases?
From Problems 1.1 and 1.2 we see that after a system reaches equilibrium, the macroscopic
quantities of interest become independent of time on the average, but exhibit fluctuations about
their average values. We also learned that the relative fluctuations about the average become
smaller as the number of constituents is increased, and that the details of the dynamics are irrelevant
as far as the general tendency of macroscopic systems to approach equilibrium is concerned.
How can we understand why the systems considered in Problems 1.1 and 1.2 exhibit a direction
of time? There are two general approaches that we can take. One way would be to study the
dynamics of the system. A much simpler way is to change the question and take advantage of
the fact that the equilibrium state of a macroscopic system is independent of time on the average
and hence time is irrelevant in equilibrium. For the simple system considered in Problem 1.2 we
will see that counting the number of ways that the particles can be distributed between the two
halves of the box will give us much insight into the nature of equilibrium. This information tells
us nothing about the approach of the system to equilibrium, but it will give us insight into why
there is a direction of time.
Let us call each distinct arrangement of the particles between the two halves of the box a
configuration. A given particle can be in either the left half or the right half of the box. Because
the halves are equivalent, a given particle is equally likely to be in either half if the system is in
equilibrium. For N = 2, the four possible configurations are shown in Table 1.1. Note that each
configuration has a probability of 1/4 if the system is in equilibrium.
configuration   n   W(n)
L L             2   1
L R, R L        1   2
R R             0   1
Table 1.1: The four possible ways in which N = 2 particles can be distributed between the
two halves of a box. The quantity W(n) is the number of configurations corresponding to the
macroscopic state characterized by n.
Now let us consider N = 4 for which there are 2 × 2 × 2 × 2 = 2^4 = 16 configurations (see
Table 1.2). From a macroscopic point of view, we do not care which particle is in which half of the
box, but only the number of particles on the left. Hence, the macroscopic state or macrostate is
specified by n. Let us assume as before that all configurations are equally probable in equilibrium.
We see from Table 1.2 that there is only one configuration with all particles on the left and the
most probable macrostate is n = 2.
For larger N, the probability of the most probable macrostate with n = N/2 is much greater
than the macrostate with n = N, which has a probability of only 1/2^N corresponding to a single
configuration. The latter configuration is “special” and is said to be nonrandom, while the con-
figurations with n ≈ N/2, for which the distribution of the particles is approximately uniform,
are said to be “random.” So we can see that the equilibrium macrostate corresponds to the most
probable state.
configuration   n   W(n)   P(n)
L L L L         4   1      1/16
R L L L         3
L R L L         3
L L R L         3
L L L R         3   4      4/16
R R L L         2
R L R L         2
R L L R         2
L R R L         2
L R L R         2
L L R R         2   6      6/16
R R R L         1
R R L R         1
R L R R         1
L R R R         1   4      4/16
R R R R         0   1      1/16
Table 1.2: The sixteen possible ways in which N = 4 particles can be distributed between the
two halves of a box. The quantity W(n) is the number of configurations corresponding to the
macroscopic state characterized by n. The probability P(n) of the macrostate n is calculated
assuming that each configuration is equally likely.
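Tables 1.1 and 1.2 can be generated by brute force. A short enumeration that reproduces the W(n) and P(n) columns of Table 1.2:

```python
from itertools import product
from collections import Counter

N = 4
# Each configuration assigns every particle to the left (L) or right (R)
# half; n is the number of particles on the left.
W = Counter(config.count("L") for config in product("LR", repeat=N))

for n in sorted(W, reverse=True):
    print(f"n = {n}: W(n) = {W[n]}, P(n) = {W[n]}/{2**N}")
```

Changing `N` lets you enumerate the configurations for other small systems, which may help you see the pattern asked for in Problem 1.3.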
Problem 1.3. Enumeration of possible configurations
(a) Calculate the number of possible configurations for each macrostate n for N = 8 particles.
What is the probability that n = 8? What is the probability that n = 4? It is possible
to count the number of configurations for each n by hand if you have enough patience, but
because there are a total of 2^8 = 256 configurations, this counting would be very tedious. An
alternative is to derive an expression for the number of ways that n particles out of N can
be in the left half of the box. One way to motivate such an expression is to enumerate the
possible configurations for smaller values of N and see if you can observe a pattern.
(b) From part (a) we see that the macrostate with n = N/2 is much more probable than the
macrostate with n = N. Why?
We observed that if an isolated macroscopic system changes in time due to the removal of an
internal constraint, it tends to evolve from a less random to a more random state. We also observed
that once the system reaches its most random state, fluctuations corresponding to an appreciably
nonuniform state are very rare. These observations and our reasoning based on counting the
number of configurations corresponding to a particular macrostate allow us to conclude that
A system in a nonuniform macrostate will change in time on the average so as to
approach its most random macrostate where it is in equilibrium.
Note that our simulations involved watching the system evolve, but our discussion of the
number of configurations corresponding to each macrostate did not involve the dynamics in any
way. Instead this approach involved the enumeration of the configurations and assigning them
equal probabilities assuming that the system is isolated and in equilibrium. We will find that it is
much easier to understand equilibrium systems by ignoring the time altogether.
In the simulation of Problem 1.1 the total energy was conserved, and hence the macroscopic
quantity of interest that changed from the specially prepared initial state with n2 = N to the
most random macrostate with n2 ≈ N/3 was not the total energy. So what macroscopic quantity
changed besides n1, n2, and n3 (the number of particles in each third of the box)? Based on our
earlier discussion, we tentatively say that the quantity that changed is the entropy. This statement
is no more meaningful than saying that balls fall near the earth’s surface because of gravity. We
conjecture that the entropy is associated with the number of configurations associated with a
given macrostate. If we make this association, we see that the entropy is greater after the system
has reached equilibrium than in the system’s initial state. Moreover, if the system were initially
prepared in a random state, the mean value of n2 and hence the entropy would not change. Hence,
we can conclude the following:
The entropy of an isolated system increases or remains the same when an internal
constraint is removed.
This statement is equivalent to the second law of thermodynamics. You might want to skip to
Chapter 4, where this identification of the entropy is made explicit.
As a result of the two simulations that we have done and our discussions, we can make some
additional tentative observations about the behavior of macroscopic systems.
Fluctuations in equilibrium. Once a system reaches equilibrium, the macroscopic quantities of
interest do not become independent of the time, but exhibit fluctuations about their average values.
That is, in equilibrium only the average values of the macroscopic variables are independent of
time. For example, for the particles in the box problem n(t) changes with t, but its average value
n̄ does not. If N is large, fluctuations corresponding to a very nonuniform distribution of the
particles almost never occur, and the relative fluctuations σ/n̄ become smaller as N is increased.
History independence. The properties of equilibrium systems are independent of their history.
For example, n̄ would be the same whether we had started with n(t = 0) = 0 or n(t = 0) = N.
In contrast, as members of the human race, we are all products of our history. One consequence
of history independence is that it is easier to understand the properties of equilibrium systems by
ignoring the dynamics of the particles. (The global constraints on the dynamics are important.
For example, it is important to know if the total energy is a constant or not.) We will find that
equilibrium statistical mechanics is essentially equivalent to counting configurations. The problem
will be that this counting is difficult to do in general.
Need for statistical approach. Systems can be described in detail by specifying their microstate.
Such a description corresponds to giving all the information that is possible. For a system of
classical particles, a microstate corresponds to specifying the position and velocity of each particle.
In our analysis of Problem 1.2, we specified only in which half of the box a particle was located,
so we used the term configuration rather than microstate. However, the terms are frequently used
interchangeably.
From our simulations, we see that the microscopic state of the system changes in a complicated
way that is difficult to describe. However, from a macroscopic point of view, the description is
much simpler. Suppose that we simulated a system of many particles and saved the trajectories
of the particles as a function of time. What could we do with this information? If the number of
particles is 10^6 or more or if we ran long enough, we would have a problem storing the data. Do
we want to have a detailed description of the motion of each particle? Would this data give us
much insight into the macroscopic behavior of the system? As we have found, the trajectories of
the particles are not of much interest, and it is more useful to develop a probabilistic approach.
That is, the presence of a large number of particles motivates us to use statistical methods. In
Section 1.8 we will discuss another reason why a probabilistic approach is necessary.
We will find that the laws of thermodynamics depend on the fact that the number of particles in
macroscopic systems is enormous. A typical measure of this number is Avogadro’s number which
is approximately 6 × 10^23, the number of atoms in a mole. When there are so many particles,
predictions of the average properties of the system become meaningful, and deviations from the
average behavior become less and less important as the number of atoms is increased.
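This can be made quantitative for the particles-in-the-box model: in equilibrium each particle is equally likely to be in either half, so n is binomially distributed with n̄ = N/2 and σ = √N/2, and the relative fluctuation is σ/n̄ = 1/√N. A quick check of the numbers:

```python
import math

for N in (100, 10**6, 6 * 10**23):      # up to Avogadro's number
    nbar = N / 2                        # mean number on the left
    sigma = math.sqrt(N) / 2            # binomial standard deviation
    print(f"N = {N:.0e}: sigma/nbar = {sigma / nbar:.1e}")
```

For a mole of particles the relative fluctuation is of order 10^−12, which is why only the average behavior is observed macroscopically.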
Equal a priori probabilities. In our analysis of the probability of each macrostate in Prob-
lem 1.2, we assumed that each configuration was equally probable. That is, each configuration of
an isolated system occurs with equal probability if the system is in equilibrium. We will make this
assumption explicit for isolated systems in Chapter 4.
Existence of different phases. So far our simulations of interacting systems have been restricted
to dilute gases. What do you think would happen if we made the density higher? Would a system
of interacting particles form a liquid or a solid if the temperature or the density were chosen
appropriately? The existence of different phases is investigated in Problem 1.4.
Problem 1.4. Different phases
(a) The applet/application at <stp.clarku.edu/simulations/lj.html> simulates an isolated
system of N particles interacting via the Lennard-Jones potential. Choose N = 64 and L = 18
so that the density ρ = N/L^2 ≈ 0.2. The initial positions are chosen at random except that
no two particles are allowed to be closer than σ. Run the simulation and satisfy yourself that
this choice of density and resultant total energy corresponds to a gas. What is your criterion?
(b) Slowly lower the total energy of the system. (The total energy is lowered by rescaling the
velocities of the particles.) If you are patient, you might be able to observe “liquid-like”
regions. How are they different from “gas-like” regions?
(c) If you decrease the total energy further, you will observe the system in a state roughly corre-
sponding to a solid. What is your criterion for a solid? Explain why the solid that we obtain in
this way will not be a perfect crystalline solid.
(d) Describe the motion of the individual particles in the gas, liquid, and solid phases.
(e) Conjecture why a system of particles interacting via the Lennard-Jones potential in (1.1) can
exist in different phases. Is it necessary for the potential to have an attractive part for the
system to have a liquid phase? Is the attractive part necessary for there to be a solid phase?
Describe a simulation that would help you answer this question.
It is fascinating that a system with the same interparticle interaction can be in different
phases. At the microscopic level, the dynamics of the particles is governed by the same equations
of motion. What changes? How does such a phase change occur at the microscopic level? Why
doesn’t a liquid crystallize immediately when its temperature is lowered quickly? What happens
when it does begin to crystallize? We will find in later chapters that phase changes are examples
of cooperative effects.
1.6 Measuring the pressure and temperature
The obvious macroscopic variables that we can measure in our simulations of the system of particles
interacting via the Lennard-Jones potential include the average kinetic and potential energies, the
number of particles, and the volume. We also learned that the entropy is a relevant macroscopic
variable, but we have not learned how to determine it from a simulation.4
We know from our
everyday experience that there are at least two other macroscopic variables that are relevant for
describing a macrostate, namely, the pressure and the temperature.
The pressure is easy to measure because we are familiar with force and pressure from courses
in mechanics. To remind you of the relation of the pressure to the momentum flux, consider N
particles in a cube of volume V and linear dimension L. The center of mass momentum of the
particles is zero. Imagine a planar surface of area A = L^2 placed in the system and oriented
perpendicular to the x-axis as shown in Figure 1.3. The pressure P can be defined as the force per
unit area acting normal to the surface:
P = F_x/A.   (1.2)
We have written P as a scalar because the pressure is the same in all directions on the average.
From Newton’s second law, we can rewrite (1.2) as
P = (1/A) d(mv_x)/dt.   (1.3)
From (1.3) we see that the pressure is the amount of momentum that crosses a unit area of
the surface per unit time. We could use (1.3) to determine the pressure, but this relation uses
information only from the fraction of particles that are crossing an arbitrary surface at a given
time. Instead, our simulations will use the relation of the pressure to the virial, a quantity that
involves all the particles in the system.5
4We will find that it is very difficult to determine the entropy directly by making either measurements in the
laboratory or during a simulation. Entropy, unlike pressure and temperature, has no mechanical analog.
5See Gould, Tobochnik, and Christian, Chapter 8. The relation of the pressure to the virial is usually considered
in graduate courses in mechanics.
Figure 1.3: Imaginary plane perpendicular to the x-axis across which the momentum flux is eval-
uated.
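As a sketch of how the virial route works: in d dimensions the instantaneous pressure can be written P V = N kT + (1/d) Σ_{i<j} r_ij · f_ij, which is averaged over time during a run (see footnote 5 for the derivation). The following illustration assumes Lennard-Jones pairs, unit mass, and units with k_B = 1; the function name is our own:

```python
import numpy as np

def virial_pressure(pos, vel, L, d=2, eps=1.0, sigma=1.0):
    """Instantaneous pressure from the virial (to be averaged over time):
    P V = N kT + (1/d) sum_{i<j} r_ij . f_ij, with kT estimated from the
    kinetic energy per particle (unit mass, k_B = 1)."""
    N = len(pos)
    kT = np.mean(np.sum(vel**2, axis=1)) / d
    virial = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            rij = pos[i] - pos[j]
            rij -= L * np.round(rij / L)   # nearest periodic image
            r2 = np.dot(rij, rij)
            sr6 = (sigma**2 / r2) ** 3
            virial += 24.0 * eps * (2.0 * sr6**2 - sr6)   # r_ij . f_ij
    return (N * kT + virial / d) / L**d
```

Note that every particle contributes to the virial sum, in contrast to (1.3), which uses only the particles crossing a particular surface. For a dilute gas the virial term is negligible and P reduces to the ideal gas value N kT/V.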
Problem 1.5. Nature of temperature
(a) Summarize what you know about temperature. What reasons do you have for thinking that
it has something to do with energy?
(b) Discuss what happens to the temperature of a hot cup of coffee. What happens, if anything,
to the temperature of its surroundings?
The relation between temperature and energy is not simple. For example, one way to increase
the energy of a glass of water would be to lift it. However, this action would not affect the
temperature of the water. So the temperature has nothing to do with the motion of the center of
mass of the system. As another example, if we placed a container of water on a moving conveyor
belt, the temperature of the water would not change. We also know that temperature is a property
associated with many particles. It would be absurd to refer to the temperature of a single molecule.
This discussion suggests that temperature has something to do with energy, but it has missed
the most fundamental property of temperature, that is, the temperature is the quantity that becomes
equal when two systems are allowed to exchange energy with one another. (Think about what
happens to a cup of hot coffee.) In Problem 1.6 we identify the temperature from this point of
view for a system of particles.
Problem 1.6. Identification of the temperature
(a) Consider two systems of particles interacting via the Lennard-Jones potential given in (1.1). Se-
lect the applet/application at <stp.clarku.edu/simulations/thermalcontact.html>. For
system A, we take NA = 81, εAA = 1.0, and σAA = 1.0; for system B, we have NB = 64,
εBB = 1.5, and σBB = 1.2. Both systems are in a square box with linear dimension L = 12. In
this case, toroidal boundary conditions are not used and the particles also interact with fixed
particles (with infinite mass) that make up the walls and the partition between them. Initially,
the two systems are isolated from each other and from their surroundings. Run the simulation
until each system appears to be in equilibrium. Do the kinetic and potential energies of each
system change as the system evolves? Why? What are the mean potential and kinetic energies
of each system? Is the total energy of each system fixed (to within numerical error)?
(b) Remove the barrier and let the two systems interact with one another.6 We choose εAB = 1.25
and σAB = 1.1. What quantity is exchanged between the two systems? (The volume of each
and σAB = 1.1. What quantity is exchanged between the two systems? (The volume of each
system is fixed.)
(c) Monitor the kinetic and potential energy of each system. After equilibrium has been established
between the two systems, compare the average kinetic and potential energies to their values
before the two systems came into contact.
(d) We are looking for a quantity that is the same in both systems after equilibrium has been
established. Are the average kinetic and potential energies the same? If not, think about what
would happen if you doubled N and the area of each system. Would the temperature
change? Does it make more sense to compare the average kinetic and potential energies or the
average kinetic and potential energies per particle? What quantity does become the same once
the two systems are in equilibrium? Do any other quantities become approximately equal?
What do you conclude about the possible identification of the temperature?
From the simulations in Problem 1.6, you are likely to conclude that the temperature is
proportional to the average kinetic energy per particle. We will learn in Chapter 4 that the
proportionality of the temperature to the average kinetic energy per particle holds only for a
system of particles whose kinetic energy is proportional to the square of the momentum (velocity).
Another way of thinking about temperature is that temperature is what you measure with a
thermometer. If you want to measure the temperature of a cup of coffee, you put a thermometer
into the coffee. Why does this procedure work?
Problem 1.7. Thermometers
Describe some of the simple thermometers with which you are familiar. On what physical principles
do these thermometers operate? What requirements must a thermometer have?
Now let's imagine a simulation of a simple thermometer. Imagine a special particle, a “demon,”
that is able to exchange energy with a system of particles. The only constraint is that the energy
of the demon Ed must be non-negative. The behavior of the demon is given by the following
algorithm:
1. Choose a particle in the system at random and make a trial change in one of its coordinates.
2. Compute ∆E, the change in the energy of the system due to the change.
3. If ∆E ≤ 0, the system gives the surplus energy |∆E| to the demon, Ed → Ed + |∆E|, and
the trial configuration is accepted.
4. If ∆E > 0 and the demon has sufficient energy for this change, then the demon gives the
necessary energy to the system, Ed → Ed − ∆E, and the trial configuration is accepted.
Otherwise, the trial configuration is rejected and the configuration is not changed.
6In order to ensure that we can continue to identify which particle belongs to system A and system B, we have
added a spring to each particle so that it cannot wander too far from its original lattice site.
Note that the total energy of the system and the demon is fixed.
We consider the consequences of these simple rules in Problem 1.8. The nature of the demon
is discussed further in Section 4.9.
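The demon rules above can be turned into a short program. The following is a minimal Python sketch (not the applet's actual code; the values of N, E0, the trial step delta, and the number of trials are illustrative choices) that applies the algorithm to the simplest case, a one-dimensional ideal gas with m = 1, for which a trial move changes one particle's velocity:

```python
import random

def demon_ideal_gas(N=40, E0=10.0, delta=0.5, steps=200000, seed=1):
    """Demon algorithm for a 1D ideal gas (m = 1), following the rules above."""
    rng = random.Random(seed)
    v0 = (2.0 * E0 / N) ** 0.5       # every particle starts with the same speed
    v = [v0] * N
    E_system = E0                     # kinetic energy of the gas
    E_demon = 0.0                     # demon energy, constrained to be >= 0
    Ed_sum = 0.0
    for _ in range(steps):
        i = rng.randrange(N)                             # 1. pick a particle at random
        dv = rng.uniform(-delta, delta)                  #    and a trial velocity change
        dE = 0.5 * (v[i] + dv) ** 2 - 0.5 * v[i] ** 2    # 2. energy change of the system
        if dE <= 0:                                      # 3. system gives |dE| to the demon
            v[i] += dv
            E_system += dE
            E_demon -= dE
        elif E_demon >= dE:                              # 4. demon pays for the change if it can
            v[i] += dv
            E_system += dE
            E_demon -= dE
        # otherwise the trial change is rejected and nothing moves
        Ed_sum += E_demon
    return E_system, E_demon, Ed_sum / steps
```

Because the demon and the system only trade energy, their total is conserved (to within floating point roundoff). Histogramming E_demon over the run gives an estimate of the distribution P(Ed) asked about in Problem 1.8(i).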
Problem 1.8. The demon and the ideal gas
(a) The applet/application at <stp.clarku.edu/simulations/demon.html> simulates a demon
that exchanges energy with an ideal gas of N particles moving in d spatial dimensions. Because
the particles do not interact, the only coordinate of interest is the velocity of the particles.
In this case the demon chooses a particle at random and changes its velocity in one of its d
directions by an amount chosen at random between −∆ and +∆. For simplicity, the initial
velocity of each particle is set equal to +v0 x̂, where v0 = (2E0/mN)^{1/2}, E0 is the desired
total energy of the system, and m is the mass of the particles. For simplicity, we will choose
units such that m = 1. Choose d = 1, N = 40, and E0 = 10 and determine the mean energy
of the demon Ed and the mean energy of the system E. Why is E ≈ E0?
(b) What is e, the mean energy per particle of the system? How do e and Ed compare for various
values of E0? What is the relation, if any, between the mean energy of the demon and the
mean energy of the system?
(c) Choose N = 80 and E0 = 20 and compare e and Ed. What conclusion, if any, can you make?7
(d) Run the simulation for several other values of the initial total energy E0 and determine how e
depends on Ed for fixed N.
(e) From your results in part (d), what can you conclude about the role of the demon as a
thermometer? What properties, if any, does it have in common with real thermometers?
(f) Repeat the simulation for d = 2. What relation do you find between e and Ed for fixed N?
(g) Suppose that the energy-momentum relation of the particles is not ε = p^2/2m, but is ε = cp,
where c is a constant (which we take to be unity). Determine how e depends on Ed for fixed
N and d = 1. Is the dependence the same as in part (d)?
(h) Suppose that the energy-momentum relation of the particles is ε = Ap^{3/2}, where A is a constant
(which we take to be unity). Determine how e depends on Ed for fixed N and d = 1. Is this
dependence the same as in part (d) or part (g)?
(i) The simulation also computes the probability P(Ed)δE that the demon has energy between
Ed and Ed +δE. What is the nature of the dependence of P(Ed) on Ed? Does this dependence
depend on the nature of the system with which the demon interacts?
7There are finite size effects that are order 1/N.
1.7 Work, heating, and the first law of thermodynamics
If you were to watch the motion of the individual particles in a molecular dynamics simulation,
you would probably describe the motion as “random” in the sense of how we use random in everyday
speech. The individual molecules in a glass of water would exhibit similar motion. Suppose
that we were to expose the water to a low flame. In a simulation this process would roughly
correspond to increasing the speed of the particles when they hit the wall. We say that we have
transferred energy to the system incoherently because each particle would continue to move more
or less at random.
You learned in your classical mechanics courses that the change in energy of a particle equals
the work done on it and the same is true for a collection of particles as long as we do not change
the energy of the particles in some other way at the same time. Hence, if we squeeze a plastic
container of water, we would do work on the system, and we would see the particles near the wall
move coherently. So we can distinguish two different ways of transferring energy to the system.
That is, heating transfers energy incoherently and doing work transfers energy coherently.
Let's consider a molecular dynamics simulation again and suppose that we have increased the
energy of the system by either compressing the system and doing work on it or by increasing the
speed of the particles that reach the walls of the container. Roughly speaking, the first way would
initially increase the potential energy of interaction and the second way would initially increase
the kinetic energy of the particles. If we increase the total energy by the same amount, could we
tell by looking at the particle trajectories after equilibrium has been reestablished how the energy
had been increased? The answer is no, because for a given total energy, volume, and number of
particles, the kinetic energy and the potential energy would have unique equilibrium values. (See
Problem 1.6 for a demonstration of this property.) We conclude that the energy of the gas can
be changed by doing work on it or by heating it. This statement is equivalent to the first law of
thermodynamics and from the microscopic point of view is simply a statement of conservation of
energy.
Our discussion implies that the phrase “adding heat” to a system makes no sense, because
we cannot distinguish “heat energy” from potential energy and kinetic energy. Nevertheless, we
frequently use the word “heat” in everyday speech. For example, we might say “Please turn on
the heat” and “I need to heat my coffee.” We will avoid such uses, and whenever possible avoid
the use of the noun “heat.” Why do we care? Because there is no such thing as heat! Once upon
a time, scientists thought that there was a fluid in all substances called caloric or heat that could
flow from one substance to another. This idea was abandoned many years ago, but is still used in
common language. Go ahead and use heat outside the classroom, but we won’t use it here.
1.8 *The fundamental need for a statistical approach
In Section 1.5 we discussed the need for a statistical approach when treating macroscopic systems
from a microscopic point of view. Although we can compute the trajectory (the position and
velocity) of each particle for as long as we have patience, our disinterest in the trajectory of any
particular particle and the overwhelming amount of information that is generated in a simulation
motivates us to develop a statistical approach.
Figure 1.4: (a) A special initial condition for N = 11 particles such that their motion remains
parallel indefinitely. (b) The positions of the particles at time t = 8.0 after the change in vx(6).
The only change in the initial condition from part (a) is that vx(6) was changed from 1 to 1.000001.
We now discuss a more fundamental reason why we must use probabilistic methods
to describe systems with more than a few particles. The reason is that under a wide variety of
conditions, even the most powerful supercomputer yields positions and velocities that are mean-
ingless! In the following, we will find that the trajectories in a system of many particles depend
sensitively on the initial conditions. Such a system is said to be chaotic. This behavior forces us
to take a statistical approach even for systems with as few as three particles.
As an example, consider a system of N = 11 particles moving in a box of linear dimension
L (see the applet/application at <stp.clarku.edu/simulations/sensitive.html>). The initial
conditions are such that all particles have the same velocity vx(i) = 1, vy(i) = 0, and the particles
are equally spaced vertically, with x(i) = L/2 for i = 1, . . . , 11 (see Fig. 1.4(a)). Convince yourself
that for these special initial conditions, the particles will continue moving indefinitely in the x-
direction (using toroidal boundary conditions).
Now let us stop the simulation and change the velocity of particle 6, such that vx(6) =
1.000001. What do you think happens now? In Fig. 1.4(b) we show the positions of the particles
at time t = 8.0 after the change in velocity of particle 6. Note that the positions of the particles
are no longer equally spaced and the velocities of the particles are very different. So in this case,
a small change in the velocity of one particle leads to a big change in the trajectories of all the
particles.
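This sensitivity can be demonstrated with a toy molecular dynamics sketch. The following Python code is a guess at the kind of computation behind the applet, not its actual code; the system size (four particles, L = 4), the truncated and shifted Lennard-Jones potential, and the time step are hypothetical choices. Two copies of the same system are evolved with the velocity Verlet algorithm, identical except that one velocity component differs by 10^{-6}, and the distance between the two trajectories is measured at the end.

```python
import math

RC2 = 4.0                                               # cutoff r_c = 2 (squared)
VSHIFT = 4.0 * (1.0 / 64.0) * (1.0 / 64.0 - 1.0)        # V(r_c), subtracted for continuity

def lj_forces(pos, L):
    """Lennard-Jones forces (epsilon = sigma = 1) with the minimum-image convention."""
    n = len(pos)
    f = [[0.0, 0.0] for _ in range(n)]
    pe = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            dx -= L * round(dx / L)                     # toroidal boundary conditions
            dy -= L * round(dy / L)
            r2 = dx * dx + dy * dy
            if r2 < RC2:
                inv6 = 1.0 / r2 ** 3
                pe += 4.0 * inv6 * (inv6 - 1.0) - VSHIFT
                fr = 24.0 * inv6 * (2.0 * inv6 - 1.0) / r2
                f[i][0] += fr * dx; f[i][1] += fr * dy
                f[j][0] -= fr * dx; f[j][1] -= fr * dy
    return f, pe

def run(pos, vel, L=4.0, dt=0.002, steps=5000):
    """Velocity Verlet integration (mass 1); returns positions, velocities, total energy."""
    f, pe = lj_forces(pos, L)
    for _ in range(steps):
        for i in range(len(pos)):
            for k in range(2):
                pos[i][k] = (pos[i][k] + vel[i][k] * dt + 0.5 * f[i][k] * dt * dt) % L
        fnew, pe = lj_forces(pos, L)
        for i in range(len(pos)):
            for k in range(2):
                vel[i][k] += 0.5 * (f[i][k] + fnew[i][k]) * dt
        f = fnew
    ke = 0.5 * sum(vx * vx + vy * vy for vx, vy in vel)
    return pos, vel, ke + pe

def trajectory_separation():
    pos0 = [[1.0, 1.0], [1.0, 3.0], [3.0, 1.0], [3.0, 3.0]]
    vel0 = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
    posA = [p[:] for p in pos0]; velA = [v[:] for v in vel0]
    posB = [p[:] for p in pos0]; velB = [v[:] for v in vel0]
    velB[0][0] += 1e-6                                  # the only difference between copies
    posA, velA, eA = run(posA, velA)
    posB, velB, eB = run(posB, velB)
    sep = math.sqrt(sum((a[k] - b[k]) ** 2
                        for a, b in zip(posA, posB) for k in range(2)))
    return sep, eA, eB
```

Although the energy of each copy stays (approximately) constant, you should find that the final separation of the two trajectories is many orders of magnitude larger than the initial 10^{-6} perturbation, which is the signature of chaos described in the text.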
Problem 1.9. Irreversibility
The applet/application at <stp.clarku.edu/simulations/sensitive.html> simulates a system
of N = 11 particles with the special initial condition described in the text. Confirm the results that
we have discussed. Change the velocity of particle 6 and stop the simulation at time t and reverse
all the velocities. Confirm that if t is sufficiently short, the particles will return approximately to
their initial state. What is the maximum value of t that will allow the system to return to its
initial positions if t is replaced by −t (all velocities reversed)?
An important property of chaotic systems is their extreme sensitivity to initial conditions,
that is, the trajectories of two identical systems starting with slightly different initial conditions
will diverge exponentially in a short time. For such systems we cannot predict the positions
and velocities of the particles because even the slightest error in our measurement of the initial
conditions would make our prediction entirely wrong if the elapsed time is sufficiently long. That
is, we cannot answer the question, “Where is particle 2 at time t?” if t is sufficiently long. It might
be disturbing to realize that our answers are meaningless if we ask the wrong questions.
Although Newton’s equations of motion are time reversible, this reversibility cannot be realized
in practice for chaotic systems. Suppose that a chaotic system evolves for a time t and all the
velocities are reversed. If the system is allowed to evolve for an additional time t, the system will
not return to its original state unless the velocities are specified with infinite precision. This lack
of practical reversibility is related to what we observe in macroscopic systems. If you pour milk
into a cup of coffee, the milk becomes uniformly distributed throughout the cup. You will never
see a cup of coffee spontaneously return to the state where all the milk is at the surface because
to do so, the positions and velocities of the milk and coffee molecules must be chosen so that the
molecules of milk return to this very special state. Even the slightest error in the choice of positions
and velocities will ruin any chance of the milk returning to the surface. This sensitivity to initial
conditions provides the foundation for the arrow of time.
1.9 *Time and ensemble averages
We have seen that although the computed trajectories are meaningless for chaotic systems, averages
over the trajectories are physically meaningful. That is, although a computed trajectory might
not be the one that we thought we were computing, the positions and velocities that we compute
are consistent with the constraints we have imposed, in this case, the total energy E, the volume
V , and the number of particles N. This reasoning suggests that macroscopic properties such as
the temperature and pressure must be expressed as averages over the trajectories.
Solving Newton’s equations numerically as we have done in our simulations yields a time
average. If we do a laboratory experiment to measure the temperature and pressure, our mea-
surements also would be equivalent to a time average. As we have mentioned, time is irrelevant in
equilibrium. We will find that it is easier to do calculations in statistical mechanics by doing an
ensemble average. We will discuss ensemble averages in Chapter 3. In brief, an ensemble average is
over many mental copies of the system that satisfy the same known conditions. A simple example
might clarify the nature of these two types of averages. Suppose that we want to determine the
probability that the toss of a coin results in “heads.” We can do a time average by taking one
coin, tossing it in the air many times, and counting the fraction of heads. In contrast, an ensemble
average can be found by obtaining many similar coins and tossing them into the air at one time.
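The coin-tossing comparison can be made concrete in a few lines of Python (the numbers of tosses and coins are arbitrary choices):

```python
import random

def time_average(tosses=10000, seed=1):
    """One coin tossed many times: fraction of heads over the sequence."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(tosses)) / tosses

def ensemble_average(coins=10000, seed=2):
    """Many identical coins each tossed once: fraction of heads over the ensemble."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(coins)) / coins
```

Both estimates approach 1/2 as the number of trials grows; their agreement is precisely the equivalence of time and ensemble averages asserted by the quasi-ergodic hypothesis, here for a trivially ergodic "system."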
It is reasonable to assume that the two ways of averaging are equivalent. This equivalence
is called the quasi-ergodic hypothesis. The use of the term “hypothesis” might suggest that the
equivalence is not well accepted, but it reminds us that the equivalence has been shown to be
rigorously true in only a few cases. The sensitivity of the trajectories of chaotic systems to initial
conditions suggests that a classical system of particles moving according to Newton’s equations of
motion passes through many different microstates corresponding to different sets of positions and
velocities. This property is called mixing, and it is essential for the validity of the quasi-ergodic
hypothesis.
In summary, macroscopic properties are averages over the microscopic variables and give
predictable values if the system is sufficiently large. One goal of statistical mechanics is to give
a microscopic basis for the laws of thermodynamics. In this context it is remarkable that these
laws depend on the fact that gases, liquids, and solids are chaotic systems. Another important
goal of statistical mechanics is to calculate the macroscopic properties from a knowledge of the
intermolecular interactions.
1.10 *Models of matter
There are many models of interest in statistical mechanics, corresponding to the wide range of
macroscopic systems found in nature and made in the laboratory. So far we have discussed a
simple model of a classical gas and used the same model to simulate a classical liquid and a solid.
One key to understanding nature is to develop models that are simple enough to analyze, but
that are rich enough to show the same features that are observed in nature. Some of the more
common models that we will consider include the following.
1.10.1 The ideal gas
The simplest models of macroscopic systems are those for which the interaction between the indi-
vidual particles is very small. For example, if a system of particles is very dilute, collisions between
the particles will be rare and can be neglected under most circumstances. In the limit that the
interactions between the particles can be neglected completely, the model is known as the ideal
gas. The classical ideal gas allows us to understand much about the behavior of dilute gases, such
as those in the earth’s atmosphere. The quantum version will be useful in understanding black-
body radiation (Section 6.9), electrons in metals (Section 6.10), the low temperature behavior of
crystalline solids (Section 6.12), and a simple model of superfluidity (Section 6.11).
The term “ideal gas” is a misnomer because it can be used to understand the properties of
solids and other interacting particle systems under certain circumstances, and because in many
ways the neglect of interactions is not ideal. The historical reason for the use of this term is that
the neglect of interparticle interactions allows us to do some calculations analytically. However,
the neglect of interparticle interactions raises other issues. For example, how does an ideal gas
reach equilibrium if there are no collisions between the particles?
1.10.2 Interparticle potentials
As we have mentioned, the most popular form of the potential between two neutral atoms is the
Lennard-Jones potential8
given in (1.1). This potential has a weak attractive tail at large r, reaches a minimum at
r = 2^{1/6}σ ≈ 1.122σ, and is strongly repulsive at shorter distances. The
Lennard-Jones potential is appropriate for closed-shell systems, that is, rare gases such as Ar or Kr.
Nevertheless, the Lennard-Jones potential is a very important model system and is the standard
potential for studies where the focus is on fundamental issues, rather than on the properties of a
specific material.
An even simpler interaction is the hard core interaction given by

    V(r) = ∞   (r ≤ σ)
    V(r) = 0   (r > σ)                    (1.4)
A system of particles interacting via (1.4) is called a system of hard spheres, hard disks, or hard
rods depending on whether the spatial dimension is three, two, or one, respectively. Note that
V (r) in (1.4) is purely repulsive.
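Both potentials are simple to evaluate numerically. The following Python sketch (with ε = σ = 1; the grid used for the search is an arbitrary choice) encodes the Lennard-Jones potential of (1.1) and the hard core potential of (1.4), and locates the Lennard-Jones minimum by a grid search:

```python
import math

def lennard_jones(r, eps=1.0, sigma=1.0):
    """V(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6], the potential in (1.1)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def hard_core(r, sigma=1.0):
    """V(r) = infinity for r <= sigma and 0 otherwise, the potential in (1.4)."""
    return math.inf if r <= sigma else 0.0

def lj_minimum(rmin=0.9, rmax=2.0, n=200001):
    """Locate the minimum of the Lennard-Jones potential on a fine grid."""
    rs = [rmin + (rmax - rmin) * i / (n - 1) for i in range(n)]
    return min(rs, key=lennard_jones)
```

The grid minimum falls at r ≈ 2^{1/6} ≈ 1.122 with depth −ε, consistent with the values quoted above.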
1.10.3 Lattice models
In another class of models, the positions of the particles are restricted to a lattice or grid and the
momenta of the particles are irrelevant. In the most popular model of this type the “particles”
correspond to magnetic moments. At high temperatures the magnetic moments are affected by
external magnetic fields, but the interaction between moments can be neglected.
The simplest, nontrivial model that includes interactions is the Ising model, the most impor-
tant model in statistical mechanics. The model consists of spins located on a lattice such that
each spin can take on one of two values designated as up and down or ±1. The interaction energy
between two neighboring spins is −J if the two spins are in the same state and +J if they are
in opposite states. One reason for the importance of this model is that it is one of the simplest
to have a phase transition, in this case, a phase transition between a ferromagnetic state and a
paramagnetic state.
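The stated interaction rule translates directly into code. Here is a minimal Python sketch (free boundary conditions and the chain lengths used below are illustrative choices) for the energy of a one-dimensional Ising chain:

```python
def ising_energy(spins, J=1.0):
    """Energy of a 1D Ising chain with free ends.

    Each bond contributes -J if the neighboring spins agree (+1,+1 or -1,-1)
    and +J if they disagree, which is exactly -J * s_i * s_{i+1}.
    """
    return sum(-J * spins[i] * spins[i + 1] for i in range(len(spins) - 1))
```

For N aligned spins there are N − 1 satisfied bonds, so the energy is −(N − 1)J; flipping one interior spin raises the energy by 4J because two bonds change sign.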
We will focus on three classes of models – the ideal classical and quantum gas, classical systems
of interacting particles, and the Ising model and its extensions. These models will be used in many
contexts to illustrate the ideas and techniques of statistical mechanics.
1.11 Importance of simulations
Only simple models such as the ideal gas or special cases such as the two-dimensional Ising model
can be analyzed by analytical methods. Much of what is done in statistical mechanics is to establish
the general behavior of a model and then relate it to the behavior of another model. This way of
understanding is not as strange as it first might appear. How many different systems in classical
mechanics can be solved exactly?
8This potential is named after John Lennard-Jones, 1894–1954, a theoretical chemist and physicist at Cambridge
University.
Statistical physics has grown in importance over the past several decades because powerful
computers and new computer algorithms have allowed us to explore the consequences of more com-
plex systems. Simulations play an important intermediate role between theory and experiment. As
our models become more realistic, it is likely that they will require the computer for understanding
many of their properties. In a simulation we start with a microscopic model for which the variables
represent the microscopic constituents and determine the consequences of their interactions. Fre-
quently the goal of our simulations is to explore these consequences so that we have a better idea
of what type of theoretical analysis might be possible and what type of laboratory experiments
should be done. Simulations allow us to compute many different kinds of quantities, some of which
cannot be measured in a laboratory experiment.
Not only can we simulate reasonably realistic models, we also can study models that are im-
possible to realize in the laboratory, but are useful for providing a deeper theoretical understanding
of real systems. For example, a comparison of the behavior of a model in three and four spatial
dimensions can yield insight into why the three-dimensional system behaves the way it does.
Simulations cannot replace laboratory experiments and are limited by the finite size of the
systems and by the short duration of our runs. For example, at present the longest simulations of
simple liquids are for no more than 10^{-6} s.
Not only have simulations made possible new ways of doing research, they also make it possible
to illustrate the important ideas of statistical mechanics. We hope that the simulations that we
have already discussed have already convinced you of their utility. For this reason, we will consider
many simulations throughout these notes.
1.12 Summary
This introductory chapter has been designed to whet your appetite, and at this point it is not likely
that you will fully appreciate the significance of such concepts as entropy and the direction of time.
We are reminded of the book, All I Really Need to Know I Learned in Kindergarten.9
In principle,
we have discussed most of the important ideas in thermodynamics and statistical physics, but it
will take you a while before you understand these ideas in any depth.
We also have not discussed the tools necessary to solve any problems. Your understanding of
these concepts and the methods of statistical and thermal physics will increase as you work with
these ideas in different contexts. You will find that the unifying aspects of thermodynamics and
statistical mechanics are concepts such as the nature of equilibrium, the direction of time, and
the existence of cooperative effects and different phases. However, there is no unifying equation
such as Newton’s second law of motion in mechanics, Maxwell’s equations in electrodynamics, or
Schrödinger’s equation in nonrelativistic quantum mechanics.
There are many subtleties that we have glossed over so that we could get started. For example,
how good is our assumption that the microstates of an isolated system are equally probable? This
question is a deep one and has not been completely answered. The answer likely involves the
nature of chaos. Chaos seems necessary to ensure that the system will explore a large number of
the available microstates, and hence make our assumption of equal probabilities valid. However,
we do not know how to tell a priori whether a system will behave chaotically or not.
9Robert Fulghum, All I Really Need to Know I Learned in Kindergarten, Ballantine Books (2004).
Most of our discussion concerns equilibrium behavior. The “dynamics” in thermodynamics
refers to the fact that we can treat a variety of thermal processes in which a system moves from
one equilibrium state to another. Even if the actual process involves non-equilibrium states, we
can replace the non-equilibrium states by a series of equilibrium states which begin and end at
the same equilibrium states. This type of reasoning is analogous to the use of energy arguments
in mechanics. A ball can roll from the top of a hill to the bottom, rolling over many bumps and
valleys, but as long as there is no dissipation due to friction, we can determine the ball’s motion
at the bottom without knowing anything about how the ball got there.
The techniques and ideas of statistical mechanics are now being used outside of traditional
condensed matter physics. The field theories of high energy physics, especially lattice gauge theo-
ries, use the methods of statistical mechanics. New methods of doing quantum mechanics convert
calculations to path integrals that are computed numerically using methods of statistical mechan-
ics. Theories of the early universe use ideas from thermal physics. For example, we speak about
the universe being quenched to a certain state in analogy to materials being quenched from high
to low temperatures. We already have seen that chaos provides an underpinning for the need for
probability in statistical mechanics. Conversely, many of the techniques used in describing the
properties of dynamical systems have been borrowed from the theory of phase transitions, one of
the important areas of statistical mechanics.
Thermodynamics and statistical mechanics have traditionally been applied to gases, liquids,
and solids. This application has been very fruitful and is one reason why condensed matter physics,
materials science, and chemical physics are rapidly evolving and growing areas. Examples of new
materials include high temperature superconductors, low-dimensional magnetic and conducting
materials, composite materials, and materials doped with various impurities. In addition, scientists
are taking a new look at more traditional condensed systems such as water and other liquids,
liquid crystals, polymers, crystals, alloys, granular matter, and porous media such as rocks. And
in addition to our interest in macroscopic systems, there is growing interest in mesoscopic systems,
systems that are neither microscopic nor macroscopic, but are in between, that is, between ∼10^2
and ∼10^6 particles.
Thermodynamics might not seem to be as interesting to you when you first encounter it.
However, an understanding of thermodynamics is important in many contexts including societal
issues such as global warming, electrical energy production, fuel cells, and other alternative energy
sources.
The science of information theory uses many ideas from statistical mechanics, and recently, new
optimization methods such as simulated annealing have been borrowed from statistical mechanics.
In recent years statistical mechanics has evolved into the more general field of statistical
physics. Examples of systems of interest in the latter area include earthquake faults, granular mat-
ter, neural networks, models of computing, genetic algorithms, and the analysis of the distribution
of time to respond to email. Statistical physics is characterized more by its techniques than by the
problems that are its interest. This universal applicability makes the techniques more difficult to
understand, but also makes the journey more exciting.
Vocabulary
thermodynamics, statistical mechanics
macroscopic system
configuration, microstate, macrostate
specially prepared state, equilibrium, fluctuations
thermal contact, temperature
sensitivity to initial conditions
models, computer simulations
Problems
Problem      page
1.1             7
1.2             9
1.3            11
1.4            13
1.5 and 1.6    15
1.7            16
1.8            17
1.9            19

Table 1.3: Listing of inline problems.
Problem 1.10. (a) What do you observe when a small amount of black dye is placed in a glass
of water? (b) Suppose that a video were taken of this process and the video was run backward
without your knowledge. Would you be able to observe whether the video was being run forward or
backward? (c) Suppose that you could watch a video of the motion of an individual ink molecule.
Would you be able to know that the video was being shown forward or backward?
Problem 1.11. Describe several examples based on your everyday experience that illustrate the
unidirectional temporal behavior of macroscopic systems. For example, what happens to ice placed
in a glass of water at room temperature? What happens if you make a small hole in an inflated
tire? What happens if you roll a ball on a hard surface?
Problem 1.12. In what contexts can we treat water as a fluid? In what context can water not
be treated as a fluid?
Problem 1.13. How do you know that two objects are at the same temperature? How do you
know that two bodies are at different temperatures?
Problem 1.14. Summarize your understanding of the properties of macroscopic systems.
Problem 1.15. Ask some of your friends why a ball falls when released above the Earth’s surface.
Explain why the answer “gravity” is not really an explanation.
Problem 1.16. What is your understanding of the concept of “randomness” at this time? Does
“random motion” imply that the motion occurs according to unknown rules?
Problem 1.17. What evidence can you cite from your everyday experience that the molecules in
a glass of water or in the surrounding air are in seemingly endless random motion?
Problem 1.18. Write a brief paragraph on the meaning of the abstract concepts, “energy” and
“justice.” (See the Feynman Lectures, Vol. 1, Chapter 4, for a discussion of why it is difficult to
define such abstract concepts.)
Problem 1.19. A box of glass beads is also an example of a macroscopic system if the number
of beads is sufficiently large. In what ways is such a system different from the macroscopic systems
that we have discussed in this chapter?
Problem 1.20. Suppose that the handle of a plastic bicycle pump is rapidly pushed inward.
Predict what happens to the temperature of the air inside the pump and explain your reasoning.
(This problem is given here to determine how you think about this type of problem at this time.
Similar problems will appear in later chapters to see if and how your reasoning has changed.)
Appendix 1A: Mathematics Refresher
As discussed in Sec. 1.12, there is no unifying equation in statistical mechanics such as Newton’s
second law of motion to be solved in a variety of contexts. For this reason we will not adopt one
mathematical tool. Appendix 2B summarizes the mathematics of thermodynamics which makes
much use of partial derivatives. Appendix A summarizes some of the mathematical formulas and
relations that we will use. If you can do the following problems, you have a good background for
most of the mathematics that we will use in the following chapters.
Problem 1.21. Calculate the derivative with respect to x of the following functions: e^x, e^{3x}, e^{ax},
ln x, ln x^2, ln 3x, ln 1/x, sin x, cos x, sin 3x, and cos 2x.
Problem 1.22. Calculate the following integrals:

    ∫_1^2 dx/(2x^2)            (1.5a)

    ∫_1^2 dx/(4x)              (1.5b)

    ∫_1^2 e^{3x} dx            (1.5c)
Problem 1.23. Calculate the partial derivatives of x^2 + xy + 3y^2 with respect to x and y.
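If you would like to check your answers to Problems 1.22 and 1.23 numerically, the following Python sketch evaluates the three integrals (read here as ∫₁² dx/(2x^2), ∫₁² dx/(4x), and ∫₁² e^{3x} dx) with Simpson's rule, and the partial derivatives of Problem 1.23 by central finite differences:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

# The three integrals of Problem 1.22 and their analytic values:
i_a = simpson(lambda x: 1.0 / (2.0 * x * x), 1.0, 2.0)   # 1/4
i_b = simpson(lambda x: 1.0 / (4.0 * x), 1.0, 2.0)       # (ln 2)/4
i_c = simpson(lambda x: math.exp(3.0 * x), 1.0, 2.0)     # (e^6 - e^3)/3

# Problem 1.23: f(x, y) = x^2 + x*y + 3*y^2, so df/dx = 2x + y and df/dy = x + 6y,
# checked here by central differences.
def f(x, y):
    return x * x + x * y + 3.0 * y * y

def d_dx(x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2.0 * h)

def d_dy(x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2.0 * h)
```

Simpson's rule with a thousand subintervals reproduces these smooth integrals to many digits, so any disagreement with your hand calculation points to an algebra slip.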
Suggestions for Further Reading
P. W. Atkins, The Second Law, Scientific American Books (1984). A qualitative introduction to
the second law of thermodynamics and its implications.
J. G. Oliveira and A.-L. Barabási, “Darwin and Einstein correspondence patterns,” Nature 437,
1251 (2005). The authors found the probability that Darwin and Einstein would respond to
a letter in τ days is well approximated by a power law, P(τ) ∼ τ^{-a} with a ≈ 3/2. What
is the explanation for this power law behavior? How long does it take you to respond to an
email?
Manfred Eigen and Ruthild Winkler, How the Principles of Nature Govern Chance, Princeton
University Press (1993).
Richard Feynman, R. B. Leighton, and M. Sands, Feynman Lectures on Physics, Addison-Wesley
(1964). Volume 1 has a very good discussion of the nature of energy and work.
Harvey Gould, Jan Tobochnik, and Wolfgang Christian, An Introduction to Computer Simulation
Methods, third edition, Addison-Wesley (2006).
F. Reif, Statistical Physics, Volume 5 of the Berkeley Physics Series, McGraw-Hill (1967). This
text was the first to make use of computer simulations to explain some of the basic properties
of macroscopic systems.
Jeremy Rifkin, Entropy: A New World View, Bantam Books (1980). Although this popular book
raises some important issues, it, like many other popular books and articles, misuses the concept
of entropy. For more discussion of the meaning of entropy and how it should be introduced,
see <www.entropysite.com/> and <www.entropysimple.com/>.
Chapter 2
Thermodynamic Concepts and
Processes
©2005 by Harvey Gould and Jan Tobochnik
29 September 2005
The study of temperature, energy, work, heating, entropy, and related macroscopic concepts comprises the field known as thermodynamics.
2.1 Introduction
In this chapter we will discuss ways of thinking about macroscopic systems and introduce the basic
concepts of thermodynamics. Because these ways of thinking are very different from the ways that
we think about microscopic systems, most students of thermodynamics initially find it difficult
to apply the abstract principles of thermodynamics to concrete problems. However, the study of
thermodynamics has many rewards, as Einstein appreciated:
A theory is the more impressive the greater the simplicity of its premises, the more
different kinds of things it relates, and the more extended its area of applicability.
Therefore the deep impression that classical thermodynamics made upon me. It is the only
physical theory of universal content which I am convinced will never be overthrown,
within the framework of applicability of its basic concepts.1
The essence of thermodynamics can be summarized by two laws: (1) Energy is conserved
and (2) entropy increases. These statements of the laws are deceptively simple. What is energy?
You are probably familiar with the concept of energy from other courses, but can you define it?
Abstract concepts such as energy and entropy are not easily defined or understood. However, as
you apply these concepts in a variety of contexts, you will gradually come to understand them.
1A. Einstein, Autobiographical Notes, Open Court Publishing Company (1991).
sin3divcx
 
Quantum Mechanics: Lecture notes
Quantum Mechanics: Lecture notesQuantum Mechanics: Lecture notes
Quantum Mechanics: Lecture notespolariton
 
Introductory Statistics Explained.pdf
Introductory Statistics Explained.pdfIntroductory Statistics Explained.pdf
Introductory Statistics Explained.pdf
ssuser4492e2
 
Discrete math mathematics for computer science 2
Discrete math   mathematics for computer science 2Discrete math   mathematics for computer science 2
Discrete math mathematics for computer science 2
Xavient Information Systems
 

Similar to Thermal and statistical physics h. gould, j. tobochnik-1 (20)

Lessons%20in%20 industrial%20instrumentation
Lessons%20in%20 industrial%20instrumentationLessons%20in%20 industrial%20instrumentation
Lessons%20in%20 industrial%20instrumentation
 
Ode2015
Ode2015Ode2015
Ode2015
 
J.M. Smith, Hendrick Van Ness, Michael Abbott, Mark Swihart - Introduction to...
J.M. Smith, Hendrick Van Ness, Michael Abbott, Mark Swihart - Introduction to...J.M. Smith, Hendrick Van Ness, Michael Abbott, Mark Swihart - Introduction to...
J.M. Smith, Hendrick Van Ness, Michael Abbott, Mark Swihart - Introduction to...
 
J.M. Smith, Hendrick Van Ness, Michael Abbott, Mark Swihart - Introduction to...
J.M. Smith, Hendrick Van Ness, Michael Abbott, Mark Swihart - Introduction to...J.M. Smith, Hendrick Van Ness, Michael Abbott, Mark Swihart - Introduction to...
J.M. Smith, Hendrick Van Ness, Michael Abbott, Mark Swihart - Introduction to...
 
Zettili.pdf
Zettili.pdfZettili.pdf
Zettili.pdf
 
General physics
General physicsGeneral physics
General physics
 
Mech_Project
Mech_ProjectMech_Project
Mech_Project
 
HASMasterThesis
HASMasterThesisHASMasterThesis
HASMasterThesis
 
Methods for Applied Macroeconomic Research.pdf
Methods for Applied Macroeconomic Research.pdfMethods for Applied Macroeconomic Research.pdf
Methods for Applied Macroeconomic Research.pdf
 
Answer CK-12 Physics.pdf
Answer CK-12 Physics.pdfAnswer CK-12 Physics.pdf
Answer CK-12 Physics.pdf
 
Elements of Applied Mathematics for Engineers
Elements of Applied Mathematics for EngineersElements of Applied Mathematics for Engineers
Elements of Applied Mathematics for Engineers
 
An Introduction to Statistical Inference and Its Applications.pdf
An Introduction to Statistical Inference and Its Applications.pdfAn Introduction to Statistical Inference and Its Applications.pdf
An Introduction to Statistical Inference and Its Applications.pdf
 
Offshore structures
Offshore structuresOffshore structures
Offshore structures
 
Lecture notes on planetary sciences and orbit determination
Lecture notes on planetary sciences and orbit determinationLecture notes on planetary sciences and orbit determination
Lecture notes on planetary sciences and orbit determination
 
Basic calculus free
Basic calculus freeBasic calculus free
Basic calculus free
 
A FIRST COURSE IN PROBABILITY.pdf
A FIRST COURSE IN PROBABILITY.pdfA FIRST COURSE IN PROBABILITY.pdf
A FIRST COURSE IN PROBABILITY.pdf
 
numpyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
numpyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxnumpyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
numpyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 
Quantum Mechanics: Lecture notes
Quantum Mechanics: Lecture notesQuantum Mechanics: Lecture notes
Quantum Mechanics: Lecture notes
 
Introductory Statistics Explained.pdf
Introductory Statistics Explained.pdfIntroductory Statistics Explained.pdf
Introductory Statistics Explained.pdf
 
Discrete math mathematics for computer science 2
Discrete math   mathematics for computer science 2Discrete math   mathematics for computer science 2
Discrete math mathematics for computer science 2
 

Recently uploaded

Cancer cell metabolism: special Reference to Lactate Pathway
Cancer cell metabolism: special Reference to Lactate PathwayCancer cell metabolism: special Reference to Lactate Pathway
Cancer cell metabolism: special Reference to Lactate Pathway
AADYARAJPANDEY1
 
Lab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerinLab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerin
ossaicprecious19
 
ESR_factors_affect-clinic significance-Pathysiology.pptx
ESR_factors_affect-clinic significance-Pathysiology.pptxESR_factors_affect-clinic significance-Pathysiology.pptx
ESR_factors_affect-clinic significance-Pathysiology.pptx
muralinath2
 
EY - Supply Chain Services 2018_template.pptx
EY - Supply Chain Services 2018_template.pptxEY - Supply Chain Services 2018_template.pptx
EY - Supply Chain Services 2018_template.pptx
AlguinaldoKong
 
platelets- lifespan -Clot retraction-disorders.pptx
platelets- lifespan -Clot retraction-disorders.pptxplatelets- lifespan -Clot retraction-disorders.pptx
platelets- lifespan -Clot retraction-disorders.pptx
muralinath2
 
Seminar of U.V. Spectroscopy by SAMIR PANDA
 Seminar of U.V. Spectroscopy by SAMIR PANDA Seminar of U.V. Spectroscopy by SAMIR PANDA
Seminar of U.V. Spectroscopy by SAMIR PANDA
SAMIR PANDA
 
general properties of oerganologametal.ppt
general properties of oerganologametal.pptgeneral properties of oerganologametal.ppt
general properties of oerganologametal.ppt
IqrimaNabilatulhusni
 
Multi-source connectivity as the driver of solar wind variability in the heli...
Multi-source connectivity as the driver of solar wind variability in the heli...Multi-source connectivity as the driver of solar wind variability in the heli...
Multi-source connectivity as the driver of solar wind variability in the heli...
Sérgio Sacani
 
PRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATION
PRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATIONPRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATION
PRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATION
ChetanK57
 
Richard's entangled aventures in wonderland
Richard's entangled aventures in wonderlandRichard's entangled aventures in wonderland
Richard's entangled aventures in wonderland
Richard Gill
 
What is greenhouse gasses and how many gasses are there to affect the Earth.
What is greenhouse gasses and how many gasses are there to affect the Earth.What is greenhouse gasses and how many gasses are there to affect the Earth.
What is greenhouse gasses and how many gasses are there to affect the Earth.
moosaasad1975
 
platelets_clotting_biogenesis.clot retractionpptx
platelets_clotting_biogenesis.clot retractionpptxplatelets_clotting_biogenesis.clot retractionpptx
platelets_clotting_biogenesis.clot retractionpptx
muralinath2
 
GBSN - Microbiology (Lab 4) Culture Media
GBSN - Microbiology (Lab 4) Culture MediaGBSN - Microbiology (Lab 4) Culture Media
GBSN - Microbiology (Lab 4) Culture Media
Areesha Ahmad
 
Structural Classification Of Protein (SCOP)
Structural Classification Of Protein  (SCOP)Structural Classification Of Protein  (SCOP)
Structural Classification Of Protein (SCOP)
aishnasrivastava
 
filosofia boliviana introducción jsjdjd.pptx
filosofia boliviana introducción jsjdjd.pptxfilosofia boliviana introducción jsjdjd.pptx
filosofia boliviana introducción jsjdjd.pptx
IvanMallco1
 
NuGOweek 2024 Ghent - programme - final version
NuGOweek 2024 Ghent - programme - final versionNuGOweek 2024 Ghent - programme - final version
NuGOweek 2024 Ghent - programme - final version
pablovgd
 
Leaf Initiation, Growth and Differentiation.pdf
Leaf Initiation, Growth and Differentiation.pdfLeaf Initiation, Growth and Differentiation.pdf
Leaf Initiation, Growth and Differentiation.pdf
RenuJangid3
 
Hemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptxHemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptx
muralinath2
 
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
muralinath2
 
extra-chromosomal-inheritance[1].pptx.pdfpdf
extra-chromosomal-inheritance[1].pptx.pdfpdfextra-chromosomal-inheritance[1].pptx.pdfpdf
extra-chromosomal-inheritance[1].pptx.pdfpdf
DiyaBiswas10
 

Recently uploaded (20)

Cancer cell metabolism: special Reference to Lactate Pathway
Cancer cell metabolism: special Reference to Lactate PathwayCancer cell metabolism: special Reference to Lactate Pathway
Cancer cell metabolism: special Reference to Lactate Pathway
 
Lab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerinLab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerin
 
ESR_factors_affect-clinic significance-Pathysiology.pptx
ESR_factors_affect-clinic significance-Pathysiology.pptxESR_factors_affect-clinic significance-Pathysiology.pptx
ESR_factors_affect-clinic significance-Pathysiology.pptx
 
EY - Supply Chain Services 2018_template.pptx
EY - Supply Chain Services 2018_template.pptxEY - Supply Chain Services 2018_template.pptx
EY - Supply Chain Services 2018_template.pptx
 
platelets- lifespan -Clot retraction-disorders.pptx
platelets- lifespan -Clot retraction-disorders.pptxplatelets- lifespan -Clot retraction-disorders.pptx
platelets- lifespan -Clot retraction-disorders.pptx
 
Seminar of U.V. Spectroscopy by SAMIR PANDA
 Seminar of U.V. Spectroscopy by SAMIR PANDA Seminar of U.V. Spectroscopy by SAMIR PANDA
Seminar of U.V. Spectroscopy by SAMIR PANDA
 
general properties of oerganologametal.ppt
general properties of oerganologametal.pptgeneral properties of oerganologametal.ppt
general properties of oerganologametal.ppt
 
Multi-source connectivity as the driver of solar wind variability in the heli...
Multi-source connectivity as the driver of solar wind variability in the heli...Multi-source connectivity as the driver of solar wind variability in the heli...
Multi-source connectivity as the driver of solar wind variability in the heli...
 
PRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATION
PRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATIONPRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATION
PRESENTATION ABOUT PRINCIPLE OF COSMATIC EVALUATION
 
Richard's entangled aventures in wonderland
Richard's entangled aventures in wonderlandRichard's entangled aventures in wonderland
Richard's entangled aventures in wonderland
 
What is greenhouse gasses and how many gasses are there to affect the Earth.
What is greenhouse gasses and how many gasses are there to affect the Earth.What is greenhouse gasses and how many gasses are there to affect the Earth.
What is greenhouse gasses and how many gasses are there to affect the Earth.
 
platelets_clotting_biogenesis.clot retractionpptx
platelets_clotting_biogenesis.clot retractionpptxplatelets_clotting_biogenesis.clot retractionpptx
platelets_clotting_biogenesis.clot retractionpptx
 
GBSN - Microbiology (Lab 4) Culture Media
GBSN - Microbiology (Lab 4) Culture MediaGBSN - Microbiology (Lab 4) Culture Media
GBSN - Microbiology (Lab 4) Culture Media
 
Structural Classification Of Protein (SCOP)
Structural Classification Of Protein  (SCOP)Structural Classification Of Protein  (SCOP)
Structural Classification Of Protein (SCOP)
 
filosofia boliviana introducción jsjdjd.pptx
filosofia boliviana introducción jsjdjd.pptxfilosofia boliviana introducción jsjdjd.pptx
filosofia boliviana introducción jsjdjd.pptx
 
NuGOweek 2024 Ghent - programme - final version
NuGOweek 2024 Ghent - programme - final versionNuGOweek 2024 Ghent - programme - final version
NuGOweek 2024 Ghent - programme - final version
 
Leaf Initiation, Growth and Differentiation.pdf
Leaf Initiation, Growth and Differentiation.pdfLeaf Initiation, Growth and Differentiation.pdf
Leaf Initiation, Growth and Differentiation.pdf
 
Hemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptxHemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptx
 
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
 
extra-chromosomal-inheritance[1].pptx.pdfpdf
extra-chromosomal-inheritance[1].pptx.pdfpdfextra-chromosomal-inheritance[1].pptx.pdfpdf
extra-chromosomal-inheritance[1].pptx.pdfpdf
 

Thermal and statistical physics h. gould, j. tobochnik-1

2.4 Temperature . . . 28
2.5 Pressure Equation of State . . . 31
2.6 Some Thermodynamic Processes . . . 33
2.7 Work . . . 34
2.8 The First Law of Thermodynamics . . . 37
2.9 Energy Equation of State . . . 39
2.10 Heat Capacities and Enthalpy . . . 40
2.11 Adiabatic Processes . . . 43
2.12 The Second Law of Thermodynamics . . . 47
2.13 The Thermodynamic Temperature . . . 49
2.14 The Second Law and Heat Engines . . . 51
2.15 Entropy Changes . . . 56
2.16 Equivalence of Thermodynamic and Ideal Gas Scale Temperatures . . . 60
2.17 The Thermodynamic Pressure . . . 61
2.18 The Fundamental Thermodynamic Relation . . . 62
2.19 The Entropy of an Ideal Gas . . . 63
2.20 The Third Law of Thermodynamics . . . 64
2.21 Free Energies . . . 65
Appendix 2B: Mathematics of Thermodynamics . . . 70
Additional Problems . . . 73
Suggestions for Further Reading . . . 80

3 Concepts of Probability 82
3.1 Probability in everyday life . . . 82
3.2 The rules of probability . . . 84
3.3 Mean values . . . 89
3.4 The meaning of probability . . . 91
3.4.1 Information and uncertainty . . . 93
3.4.2 *Bayesian inference . . . 97
3.5 Bernoulli processes and the binomial distribution . . . 99
3.6 Continuous probability distributions . . . 109
3.7 The Gaussian distribution as a limit of the binomial distribution . . . 111
3.8 The central limit theorem or why is thermodynamics possible? . . . 113
3.9 The Poisson distribution and should you fly in airplanes? . . . 116
3.10 *Traffic flow and the exponential distribution . . . 117
3.11 *Are all probability distributions Gaussian? . . . 119
Additional Problems . . . 128
Suggestions for Further Reading . . . 136

4 Statistical Mechanics 138
4.1 Introduction . . . 138
4.2 A simple example of a thermal interaction . . . 140
4.3 Counting microstates . . . 150
4.3.1 Noninteracting spins . . . 150
4.3.2 *One-dimensional Ising model . . . 150
4.3.3 A particle in a one-dimensional box . . . 151
4.3.4 One-dimensional harmonic oscillator . . . 153
4.3.5 One particle in a two-dimensional box . . . 154
4.3.6 One particle in a three-dimensional box . . . 156
4.3.7 Two noninteracting identical particles and the semiclassical limit . . . 156
4.4 The number of states of N noninteracting particles: Semiclassical limit . . . 158
4.5 The microcanonical ensemble (fixed E, V, and N) . . . 160
4.6 Systems in contact with a heat bath: The canonical ensemble (fixed T, V, and N) . . . 165
4.7 Connection between statistical mechanics and thermodynamics . . . 170
4.8 Simple applications of the canonical ensemble . . . 172
4.9 A simple thermometer . . . 175
4.10 Simulations of the microcanonical ensemble . . . 177
4.11 Simulations of the canonical ensemble . . . 178
4.12 Grand canonical ensemble (fixed T, V, and µ) . . . 179
4.13 Entropy and disorder . . . 181
Appendix 4A: The Volume of a Hypersphere . . . 183
Appendix 4B: Fluctuations in the Canonical Ensemble . . . 184
Additional Problems . . . 185
Suggestions for Further Reading . . . 188

5 Magnetic Systems 190
5.1 Paramagnetism . . . 190
5.2 Thermodynamics of magnetism . . . 194
5.3 The Ising model . . . 194
5.4 The Ising Chain . . . 195
5.4.1 Exact enumeration . . . 195
5.4.2 *Spin-spin correlation function . . . 199
5.4.3 Simulations of the Ising chain . . . 201
5.4.4 *Transfer matrix . . . 203
5.4.5 Absence of a phase transition in one dimension . . . 205
5.5 The Two-Dimensional Ising Model . . . 206
5.5.1 Onsager solution . . . 206
5.5.2 Computer simulation of the two-dimensional Ising model . . . 211
5.6 Mean-Field Theory . . . 211
5.7 *Infinite-range interactions . . . 216
Additional Problems . . . 224
Suggestions for Further Reading . . . 228

6 Noninteracting Particle Systems 230
6.1 Introduction . . . 230
6.2 The Classical Ideal Gas . . . 230
6.3 Classical Systems and the Equipartition Theorem . . . 238
6.4 Maxwell Velocity Distribution . . . 240
6.5 Occupation Numbers and Bose and Fermi Statistics . . . 243
6.6 Distribution Functions of Ideal Bose and Fermi Gases . . . 245
6.7 Single Particle Density of States . . . 247
6.7.1 Photons . . . 249
6.7.2 Electrons . . . 250
6.8 The Equation of State for a Noninteracting Classical Gas . . . 252
6.9 Black Body Radiation . . . 255
6.10 Noninteracting Fermi Gas . . . 259
6.10.1 Ground-state properties . . . 260
6.10.2 Low temperature thermodynamic properties . . . 263
6.11 Bose Condensation . . . 267
6.12 The Heat Capacity of a Crystalline Solid . . . 272
6.12.1 The Einstein model . . . 272
6.12.2 Debye theory . . . 273
Appendix 6A: Low Temperature Expansion . . . 275
Suggestions for Further Reading . . . 286

7 Thermodynamic Relations and Processes 288
7.1 Introduction . . . 288
7.2 Maxwell Relations . . . 290
7.3 Applications of the Maxwell Relations . . . 291
7.3.1 Internal energy of an ideal gas . . . 291
7.3.2 Relation between the specific heats . . . 291
7.4 Applications to Irreversible Processes . . . 292
7.4.1 The Joule or free expansion process . . . 293
7.4.2 Joule-Thomson process . . . 294
7.5 Equilibrium Between Phases . . . 296
7.5.1 Equilibrium conditions . . . 297
7.5.2 Clausius-Clapeyron equation . . . 298
7.5.3 Simple phase diagrams . . . 300
7.5.4 Pressure dependence of the melting point . . . 301
7.5.5 Pressure dependence of the boiling point . . . 302
7.5.6 The vapor pressure curve . . . 302
7.6 Vocabulary . . . 303
Additional Problems . . . 303
Suggestions for Further Reading . . . 305

8 Classical Gases and Liquids 306
8.1 Introduction . . . 306
8.2 The Free Energy of an Interacting System . . . 306
8.3 Second Virial Coefficient . . . 309
8.4 Cumulant Expansion . . . 313
8.5 High Temperature Expansion . . . 315
8.6 Density Expansion . . . 319
8.7 Radial Distribution Function . . . 323
8.7.1 Relation of thermodynamic functions to g(r) . . . 326
8.7.2 Relation of g(r) to static structure function S(k) . . . 327
8.7.3 Variable number of particles . . . 329
8.7.4 Density expansion of g(r) . . . 331
8.8 Computer Simulation of Liquids . . . 331
8.9 Perturbation Theory of Liquids . . . 333
8.9.1 The van der Waals Equation . . . 334
8.9.2 Chandler-Weeks-Andersen theory . . . 335
8.10 *The Ornstein-Zernike Equation . . . 336
8.11 *Integral Equations for g(r) . . . 337
8.12 *Coulomb Interactions . . . 339
8.12.1 Debye-Hückel Theory . . . 340
8.12.2 Linearized Debye-Hückel approximation . . . 341
8.12.3 Diagrammatic Expansion for Charged Particles . . . 342
8.13 Vocabulary . . . 343
Appendix 8A: The third virial coefficient for hard spheres . . . 344
8.14 Additional Problems . . . 347

9 Critical Phenomena 350
9.1 A Geometrical Phase Transition . . . 350
9.2 Renormalization Group for Percolation . . . 354
9.3 The Liquid-Gas Transition . . . 358
9.4 Bethe Approximation . . . 361
9.5 Landau Theory of Phase Transitions . . . 363
9.6 Other Models of Magnetism . . . 369
9.7 Universality and Scaling Relations . . . 371
9.8 The Renormalization Group and the One-Dimensional Ising Model . . . 372
9.9 The Renormalization Group and the Two-Dimensional Ising Model . . . 376
9.10 Vocabulary . . . 382
9.11 Additional Problems . . . 382
Suggestions for Further Reading . . . 385

10 Introduction to Many-Body Perturbation Theory 387
10.1 Introduction . . . 387
10.2 Occupation Number Representation . . . 388
10.3 Operators in the Second Quantization Formalism . . . 389
10.4 Weakly Interacting Bose Gas . . . 390

A Useful Formulae 397
A.1 Physical constants . . . 397
A.2 SI derived units . . . 397
A.3 Conversion factors . . . 398
A.4 Mathematical Formulae . . . 398
A.5 Approximations . . . 398
A.6 Euler-Maclaurin formula . . . 399
A.7 Gaussian Integrals . . . 399
A.8 Stirling's formula . . . 400
A.9 Constants . . . 401
A.10 Probability distributions . . . 402
A.11 Fermi integrals . . . 402
A.12 Bose integrals . . . 403
Chapter 1
From Microscopic to Macroscopic Behavior
© 2006 by Harvey Gould and Jan Tobochnik
28 August 2006

The goal of this introductory chapter is to explore the fundamental differences between microscopic and macroscopic systems and the connections between classical mechanics and statistical mechanics. We note that bouncing balls come to rest and hot objects cool, and discuss how the behavior of macroscopic objects is related to the behavior of their microscopic constituents. Computer simulations will be introduced to demonstrate the relation between microscopic and macroscopic behavior.

1.1 Introduction

Our goal is to understand the properties of macroscopic systems, that is, systems of many electrons, atoms, molecules, photons, or other constituents. Examples of familiar macroscopic objects include the air in your room, a glass of water, a copper coin, and a rubber band (examples of a gas, a liquid, a solid, and a polymer, respectively). Less familiar macroscopic systems are superconductors, cell membranes, the brain, and galaxies. We will find that the types of questions we ask about macroscopic systems differ in important ways from the questions we ask about microscopic systems. An example of a question about a microscopic system is “What is the shape of the trajectory of the Earth in the solar system?” In contrast, have you ever wondered about the trajectory of a particular molecule in the air of your room? Why not? Is it relevant that these molecules are not visible to the eye? Examples of questions that we might ask about macroscopic systems include the following:

1. How does the pressure of a gas depend on the temperature and the volume of its container?

2. How does a refrigerator work? What is its maximum efficiency?
3. How much energy do we need to add to a kettle of water to change it to steam?

4. Why are the properties of water different from those of steam, even though water and steam consist of the same type of molecules?

5. How are the molecules arranged in a liquid?

6. How and why does water freeze into a particular crystalline structure?

7. Why does iron lose its magnetism above a certain temperature?

8. Why does helium condense into a superfluid phase at very low temperatures? Why do some materials exhibit zero resistance to electrical current at sufficiently low temperatures?

9. How fast does a river current have to be before its flow changes from laminar to turbulent?

10. What will the weather be tomorrow?

The above questions can be roughly classified into three groups. Questions 1–3 are concerned with macroscopic properties such as pressure, volume, and temperature, and with questions related to heating and work. These questions are relevant to thermodynamics, which provides a framework for relating the macroscopic properties of a system to one another. Thermodynamics is concerned only with macroscopic quantities and ignores the microscopic variables that characterize individual molecules. For example, we will find that understanding the maximum efficiency of a refrigerator does not require a knowledge of the particular liquid used as the coolant. Many of the applications of thermodynamics are to thermal engines, for example, the internal combustion engine and the steam turbine.

Questions 4–8 relate to understanding the behavior of macroscopic systems starting from the atomic nature of matter. For example, we know that water consists of molecules of hydrogen and oxygen. We also know that the laws of classical and quantum mechanics determine the behavior of molecules at the microscopic level.
The goal of statistical mechanics is to begin with the microscopic laws of physics that govern the behavior of the constituents of the system and deduce the properties of the system as a whole. Statistical mechanics is the bridge between the microscopic and macroscopic worlds.

Thermodynamics and statistical mechanics assume that the macroscopic properties of the system do not change with time on the average. Thermodynamics describes the change of a macroscopic system from one equilibrium state to another. Questions 9 and 10 concern macroscopic phenomena that change with time. Related areas are nonequilibrium thermodynamics and fluid mechanics from the macroscopic point of view, and nonequilibrium statistical mechanics from the microscopic point of view. Although there has been progress in our understanding of nonequilibrium phenomena such as turbulent flow and hurricanes, our understanding of nonequilibrium phenomena is much less advanced than our understanding of equilibrium systems. Because understanding the properties of macroscopic systems that are independent of time is easier, we will focus our attention on equilibrium systems and consider questions such as those in Questions 1–8.
1.2 Some qualitative observations

We begin our discussion of macroscopic systems by considering a glass of water. We know that if we place a glass of hot water into a cool room, the hot water cools until its temperature equals that of the room. This simple observation illustrates two important properties associated with macroscopic systems: the importance of temperature and the arrow of time. Temperature is familiar because it is associated with the physiological sensation of hot and cold and is important in our everyday experience. We will find that temperature is a subtle concept. The direction or arrow of time is an even more subtle concept. Have you ever observed a glass of water at room temperature spontaneously become hotter? Why not? What other phenomena exhibit a direction of time? Time has a direction, as expressed by the nursery rhyme:

Humpty Dumpty sat on a wall
Humpty Dumpty had a great fall
All the king’s horses and all the king’s men
Couldn’t put Humpty Dumpty back together again.

Is there a direction of time for a single particle? Newton’s second law for a single particle, F = dp/dt, implies that the motion of particles is time reversal invariant, that is, Newton’s second law looks the same if the time t is replaced by −t and the momentum p by −p. There is no direction of time at the microscopic level. Yet if we drop a basketball onto a floor, we know that it will bounce and eventually come to rest. Nobody has observed a ball at rest spontaneously begin to bounce, and then bounce higher and higher. So based on simple everyday observations, we can conclude that the behavior of macroscopic bodies and single particles is very different.

Unlike people living a century or so ago, we know that macroscopic systems such as a glass of water and a basketball consist of many molecules.
Although the intermolecular forces in water produce a complicated trajectory for each molecule, the observable properties of water are easy to describe. Moreover, if we prepare two glasses of water under similar conditions, we would find that the observable properties of the water in each glass are indistinguishable, even though the motion of the individual particles in the two glasses would be very different. Because the macroscopic behavior of water must be related in some way to the trajectories of its constituent molecules, we conclude that there must be a relation between the notion of temperature and mechanics. For this reason, as we discuss the behavior of the macroscopic properties of a glass of water and a basketball, it will be useful to discuss the relation of these properties to the motion of their constituent molecules. For example, if we take into account that the bouncing ball and the floor consist of molecules, then we know that the total energy of the ball and the floor is conserved as the ball bounces and eventually comes to rest. What is the cause of the ball eventually coming to rest? You might be tempted to say the cause is “friction,” but friction is just a name for an effective or phenomenological force. At the microscopic level we know that the fundamental forces associated with mass, charge, and the nucleus conserve the total energy. So if we take into account the molecules of the ball and the floor, their total energy is conserved. Conservation of energy does not explain why the inverse process does not occur, because such a process also would conserve the total energy. So a more fundamental explanation is that the ball comes to rest consistent with conservation of the total energy and consistent with some other principle of physics. We will learn
that this principle is associated with an increase in the entropy of the system. For now, entropy is only a name, and it is important only to understand that energy conservation is not sufficient to understand the behavior of macroscopic systems. (As for most concepts in physics, the meaning of entropy in the context of thermodynamics and statistical mechanics is very different from the way entropy is used by nonscientists.) For now, the nature of entropy is vague, because we do not have an entropy meter as we do for energy and temperature. What is important at this stage is to understand why the concept of energy is not sufficient to describe the behavior of macroscopic systems.

By thinking about the constituent molecules, we can gain some insight into the nature of entropy. Let us consider the ball bouncing on the floor again. Initially, the energy of the ball is associated with the motion of its center of mass, that is, the energy is associated with one degree of freedom. However, after some time, the energy becomes associated with many degrees of freedom associated with the individual molecules of the ball and the floor. If we were to bounce the ball on the floor many times, the ball and the floor would each feel warm to our hands. So we can hypothesize that energy has been transferred from one degree of freedom to many degrees of freedom at the same time that the total energy has been conserved. Hence, we conclude that the entropy is a measure of how the energy is distributed over the degrees of freedom.

What other quantities are associated with macroscopic systems besides temperature, energy, and entropy? We are already familiar with some of these quantities. For example, we can measure the air pressure in a basketball and its volume. More complicated quantities are the thermal conductivity of a solid and the viscosity of oil.
How are these macroscopic quantities related to each other and to the motion of the individual constituent molecules? The answers to questions such as these and the meaning of temperature and entropy will take us through many chapters.

1.3 Doing work

We already have observed that hot objects cool, and cool objects do not spontaneously become hot; bouncing balls come to rest, and a stationary ball does not spontaneously begin to bounce. And although the total energy must be conserved in any process, the distribution of energy changes in an irreversible manner. We also have concluded that a new concept, the entropy, needs to be introduced to explain the direction of change of the distribution of energy.

Now let us take a purely macroscopic viewpoint and discuss how we can arrive at a similar qualitative conclusion about the asymmetry of nature. This viewpoint was especially important historically because of the lack of a microscopic theory of matter in the 19th century when the laws of thermodynamics were being developed.

Consider the conversion of stored energy into heating a house or a glass of water. The stored energy could be in the form of wood, coal, or animal and vegetable oils, for example. We know that this conversion is easy to do using simple methods, for example, an open fireplace. We also know that if we rub our hands together, they will become warmer. In fact, there is no theoretical limit to the efficiency at which we can convert stored energy to energy used for heating an object (other than, of course, that the efficiency cannot exceed 100%).

What about the process of converting stored energy into work? Work, like many of the other concepts that we have mentioned, is difficult to define. For now let us say that doing work is
equivalent to the raising of a weight (see Problem 1.18). To be useful, we need to do this conversion in a controlled manner and indefinitely. A single conversion of stored energy into work such as the explosion of a bomb might do useful work, such as demolishing an unwanted football stadium, but a bomb is not a useful device that can be recycled and used again. It is much more difficult to convert stored energy into work, and the discovery of ways to do this conversion led to the industrial revolution. In contrast to the primitiveness of the open hearth, we have to build an engine to do this conversion.

Can we convert stored energy into work with 100% efficiency? On the basis of macroscopic arguments alone, we cannot answer this question and have to appeal to observations. We know that some forms of stored energy are more useful than others. For example, why do we bother to burn coal and oil in power plants even though the atmosphere and the oceans are vast reservoirs of energy? Can we mitigate global warming by extracting energy from the atmosphere to run a power plant? From the work of Kelvin, Clausius, Carnot, and others, we know that we cannot convert stored energy into work with 100% efficiency, and we must necessarily “waste” some of the energy. At this point, it is easier to understand the reason for this necessary inefficiency by microscopic arguments. For example, the energy in the gasoline of the fuel tank of an automobile is associated with many molecules. The job of the automobile engine is to transform this energy so that it is associated with only a few degrees of freedom, that is, the rolling tires and gears. It is plausible that it is inefficient to transfer energy from many degrees of freedom to only a few. In contrast, transferring energy from a few degrees of freedom (the firewood) to many degrees of freedom (the air in your room) is relatively easy.
The importance of entropy, the direction of time, and the inefficiency of converting stored energy into work are summarized in the various statements of the second law of thermodynamics. It is interesting that historically, the second law of thermodynamics was conceived before the first law. As we will learn in Chapter 2, the first law is a statement of conservation of energy.

1.4 Quality of energy

Because the total energy is conserved (if all energy transfers are taken into account), why do we speak of an “energy shortage”? The reason is that energy comes in many forms, and some forms are more useful than others. In the context of thermodynamics, the usefulness of energy is determined by its ability to do work. Suppose that we take some firewood and use it to “heat” a sealed room. Because of energy conservation, the energy in the room plus the firewood is the same before and after the firewood has been converted to ash. But which form of the energy is more capable of doing work? You probably realize that the firewood is a more useful form of energy than the “hot air” that exists after the firewood is burned. Originally the energy was stored in the form of chemical (potential) energy. Afterward the energy is mostly associated with the motion of the molecules in the air. What has changed is not the total energy, but its ability to do work. We will learn that an increase in entropy is associated with a loss of ability to do work. We have an entropy problem, not an energy shortage.
1.5 Some simple simulations

So far we have discussed the behavior of macroscopic systems by appealing to everyday experience and simple observations. We now discuss some simple ways that we can simulate the behavior of macroscopic systems, which consist of the order of 10^23 particles. Although we cannot simulate such a large system on a computer, we will find that even relatively small systems of the order of a hundred particles are sufficient to illustrate the qualitative behavior of macroscopic systems.

Consider a macroscopic system consisting of particles whose internal structure can be ignored. In particular, imagine a system of N particles in a closed container of volume V and suppose that the container is far from the influence of external forces such as gravity. We will usually consider two-dimensional systems so that we can easily visualize the motion of the particles. For simplicity, we assume that the motion of the particles is given by classical mechanics, that is, by Newton’s second law. If the resultant equations of motion are combined with initial conditions for the positions and velocities of each particle, we can calculate, in principle, the trajectory of each particle and the evolution of the system. To compute the total force on each particle we have to specify the nature of the interaction between the particles. We will assume that the force between any pair of particles depends only on the distance between them. This simplifying assumption is applicable to simple liquids such as liquid argon, but not to water. We will also assume that the particles are not charged. The force between any two particles must be repulsive when their separation is small and weakly attractive when they are reasonably far apart. For simplicity, we will usually assume that the interaction is given by the Lennard-Jones potential, whose form is given by

u(r) = 4ε[(σ/r)^12 − (σ/r)^6].
(1.1)

A plot of the Lennard-Jones potential is shown in Figure 1.1. The r^−12 form of the repulsive part of the interaction is chosen for convenience only and has no fundamental significance. However, the attractive 1/r^6 behavior at large r is the van der Waals interaction. The force between any two particles is given by f(r) = −du/dr.

Usually we want to simulate a gas or liquid in the bulk. In such systems the fraction of particles near the walls of the container is negligibly small. However, the number of particles that can be studied in a simulation is typically 10^3–10^6. For these relatively small systems, the fraction of particles near the walls of the container would be significant, and hence the behavior of such a system would be dominated by surface effects. The most common way of minimizing surface effects and of simulating more closely the properties of a bulk system is to use what are known as toroidal boundary conditions. These boundary conditions are familiar to computer game players. For example, a particle that exits the right edge of the “box” re-enters the box from the left side. In one dimension, this boundary condition is equivalent to taking a piece of wire and making it into a loop. In this way a particle moving on the wire never reaches the end.

Given the form of the interparticle potential, we can determine the total force on each particle due to all the other particles in the system. Given this force, we find the acceleration of each particle from Newton’s second law of motion. Because the acceleration is the second derivative of the position, we need to solve a second-order differential equation for each particle (for each direction). (For a two-dimensional system of N particles, we would have to solve 2N differential equations.) These differential equations are coupled because the acceleration of a given particle
depends on the positions of all the other particles. Obviously, we cannot solve the resultant set of coupled differential equations analytically. However, we can use relatively straightforward numerical methods to solve these equations to a good approximation. This way of simulating dense gases, liquids, solids, and biomolecules is called molecular dynamics.²

[Figure 1.1: Plot of the Lennard-Jones potential u(r), where r is the distance between the particles. Note that the potential is characterized by a length σ and an energy ε.]

Approach to equilibrium. In the following we will explore some of the qualitative properties of macroscopic systems by doing some simple simulations. Before you actually do the simulations, think about what you believe the results will be. In many cases, the most valuable part of the simulation is not the simulation itself, but the act of thinking about a concrete model and its behavior. The simulations can be run as applications on your computer by downloading the Launcher from <stp.clarku.edu/simulations/choose.html>. The Launcher conveniently packages all the simulations (and a few more) discussed in these notes into a single file. Alternatively, you can run each simulation as an applet using a browser.

Problem 1.1. Approach to equilibrium

Suppose that a box is divided into three equal parts and N particles are placed at random in the middle third of the box.³ The velocity of each particle is assigned at random and then the velocity of the center of mass is set to zero. At t = 0, we remove the “barriers” between the

²The nature of molecular dynamics is discussed in Chapter 8 of Gould, Tobochnik, and Christian.
³We have divided the box into three parts so that the effects of the toroidal boundary conditions will not be as apparent as if we had initially confined the particles to one half of the box.
The particles are placed at random in the middle third of the box with the constraint that no two particles can be closer than the length σ. This constraint prevents the initial force between any two particles from being too big, which would lead to the breakdown of the numerical method used to solve the differential equations. The initial density is ρ = N/A = 0.2.
three parts and watch the particles move according to Newton’s equations of motion. We say that the removal of the barrier corresponds to the removal of an internal constraint. What do you think will happen? The applet/application at <stp.clarku.edu/simulations/approach.html> implements this simulation. Give your answers to the following questions before you do the simulation.

(a) Start the simulation with N = 27, n1 = 0, n2 = N, and n3 = 0. What is the qualitative behavior of n1, n2, and n3, the number of particles in each third of the box, as a function of the time t? Does the system appear to show a direction of time? Choose various values of N that are multiples of three up to N = 270. Is the direction of time better defined for larger N?

(b) Suppose that we made a video of the motion of the particles considered in Problem 1.1a. Would you be able to tell if the video were played forward or backward for the various values of N? Would you be willing to make an even bet about the direction of time? Does your conclusion about the direction of time become more certain as N increases?

(c) After n1, n2, and n3 become approximately equal for N = 270, reverse the time and continue the simulation. Reversing the time is equivalent to letting t → −t and changing the signs of all the velocities. Do the particles return to the middle third of the box? Do the simulation again, but let the particles move for a longer time before the time is reversed. What happens now?

(d) From watching the motion of the particles, describe the nature of the boundary conditions that are used in the simulation.

The results of the simulations in Problem 1.1 might not seem very surprising until you start to think about them. Why does the system as a whole exhibit a direction of time when the motion of each particle is time reversible? Do the particles fill up the available space simply because the system becomes less dense?
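As an aside, the pair force that drives such molecular dynamics simulations follows from the Lennard-Jones potential of Eq. (1.1) by f(r) = −du/dr. A minimal sketch of both functions (in Python rather than the Java of the STP applets; the reduced units ε = σ = 1 are a conventional choice, not something fixed by the text):

```python
def lj_potential(r, eps=1.0, sigma=1.0):
    """Lennard-Jones potential u(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_force(r, eps=1.0, sigma=1.0):
    """Radial force f(r) = -du/dr = (24*eps/r)*[2*(sigma/r)**12 - (sigma/r)**6].

    f > 0 means repulsion (small r); f < 0 means attraction (large r).
    """
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

# The force vanishes at the minimum of the potential, r = 2**(1/6)*sigma,
# where the well depth is -eps (cf. Figure 1.1).
r_min = 2.0 ** (1.0 / 6.0)
print(round(lj_potential(r_min), 9))       # -> -1.0
print(abs(round(lj_force(r_min), 9)))      # -> 0.0
```

Summing f(r) over all pairs gives the total force on each particle, which the molecular dynamics method then integrates numerically as described above.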
To gain some more insight into these questions, we consider a simpler simulation. Imagine a closed box that is divided into two parts of equal volume. The left half initially contains N identical particles and the right half is empty. We then make a small hole in the partition between the two halves. What happens? Instead of simulating this system by solving Newton’s equations for each particle, we adopt a simpler approach based on a probabilistic model. We assume that the particles do not interact with one another so that the probability per unit time that a particle goes through the hole in the partition is the same for all particles regardless of the number of particles in either half. We also assume that the size of the hole is such that only one particle can pass through it in one unit of time. One way to implement this model is to choose a particle at random and move it to the other side. This procedure is cumbersome, because our only interest is the number of particles on each side. That is, we need to know only n, the number of particles on the left side; the number on the right side is N − n. Because each particle has the same chance to go through the hole in the partition, the probability per unit time that a particle moves from left to right equals the number of particles on the left side divided by the total number of particles; that is, the probability of a move from left to right is n/N. The algorithm for simulating the evolution of the model is given by the following steps:
1. Generate a random number r from a uniformly distributed set of random numbers in the unit interval 0 ≤ r < 1.

2. If r ≤ n/N, move a particle from left to right, that is, let n → n − 1; otherwise, move a particle from right to left, n → n + 1.

3. Increase the “time” by 1.

[Figure 1.2: Evolution of the number of particles in each third of the box for N = 270. The particles were initially restricted to the middle third of the box. Toroidal boundary conditions are used in both directions. The initial velocities were assigned at random from a distribution corresponding to temperature T = 5. The time was reversed at t ≈ 59. Does the system exhibit a direction of time?]

Problem 1.2. Particles in a box

(a) The applet at <stp.clarku.edu/simulations/box.html> implements this algorithm and plots the evolution of n. Describe the behavior of n(t) for various values of N. Does the system approach equilibrium? How would you characterize equilibrium? In what sense is equilibrium better defined as N becomes larger? Does your definition of equilibrium depend on how the particles were initially distributed between the two halves of the box?

(b) When the system is in equilibrium, does the number of particles on the left-hand side remain a constant? If not, how would you describe the nature of equilibrium?

(c) If N ≫ 32, does the system ever return to its initial state?
(d) How does n̄, the mean number of particles on the left-hand side, depend on N after the system has reached equilibrium? For simplicity, the program computes various averages from time t = 0. Why would such a calculation not yield the correct equilibrium average values? What is the purpose of the Zero averages button?

(e) Define the quantity σ by the relation σ² = ⟨(n − n̄)²⟩, the mean of the squared deviation of n from its average value n̄. What does σ measure? What would be its value if n were constant? How does σ depend on N? How does the ratio σ/n̄ depend on N? In what sense is equilibrium better defined as N increases?

From Problems 1.1 and 1.2 we see that after a system reaches equilibrium, the macroscopic quantities of interest become independent of time on the average, but exhibit fluctuations about their average values. We also learned that the relative fluctuations about the average become smaller as the number of constituents is increased, and that the details of the dynamics are irrelevant as far as the general tendency of macroscopic systems to approach equilibrium is concerned.

How can we understand why the systems considered in Problems 1.1 and 1.2 exhibit a direction of time? There are two general approaches that we can take. One way would be to study the dynamics of the system. A much simpler way is to change the question and take advantage of the fact that the equilibrium state of a macroscopic system is independent of time on the average, and hence time is irrelevant in equilibrium. For the simple system considered in Problem 1.2 we will see that counting the number of ways that the particles can be distributed between the two halves of the box will give us much insight into the nature of equilibrium. This information tells us nothing about the approach of the system to equilibrium, but it will give us insight into why there is a direction of time. Let us call each distinct arrangement of the particles between the two halves of the box a configuration.
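The three-step algorithm for the particles-in-the-box model can be written in a few lines. The sketch below (Python rather than the applet’s Java; the seed, function name, and run length are arbitrary choices) shows n relaxing from n = N toward N/2 and then fluctuating about it, which is the behavior Problem 1.2 asks you to observe:

```python
import random

def simulate_box(N, steps, seed=1):
    """Probabilistic model of Problem 1.2: n particles on the left.

    Each time step, one particle crosses the hole: left -> right with
    probability n/N, otherwise right -> left. Returns the history n(t).
    """
    rng = random.Random(seed)
    n = N                        # all particles start on the left
    history = [n]
    for _ in range(steps):
        if rng.random() <= n / N:
            n -= 1               # a particle moves left -> right
        else:
            n += 1               # a particle moves right -> left
        history.append(n)
    return history

traj = simulate_box(N=270, steps=5000)
late = traj[2000:]               # discard the approach to equilibrium
print(sum(late) / len(late))     # fluctuates near N/2 = 135
```

Running the model for several values of N suggests answers to Problem 1.2(d) and (e): the mean settles near N/2, while the relative size of the fluctuations shrinks as N grows.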
A given particle can be in either the left half or the right half of the box. Because the halves are equivalent, a given particle is equally likely to be in either half if the system is in equilibrium. For N = 2, the four possible configurations are shown in Table 1.1. Note that each configuration has a probability of 1/4 if the system is in equilibrium.

configuration   n   W(n)
L L             2   1
L R             1   2
R L             1
R R             0   1

Table 1.1: The four possible ways in which N = 2 particles can be distributed between the two halves of a box. The quantity W(n) is the number of configurations corresponding to the macroscopic state characterized by n.

Now let us consider N = 4, for which there are 2 × 2 × 2 × 2 = 2^4 = 16 configurations (see Table 1.2). From a macroscopic point of view, we do not care which particle is in which half of the box, but only the number of particles on the left. Hence, the macroscopic state or macrostate is specified by n. Let us assume as before that all configurations are equally probable in equilibrium. We see from Table 1.2 that there is only one configuration with all particles on the left and the most probable macrostate is n = 2.
For larger N, the probability of the most probable macrostate with n = N/2 is much greater than the probability of the macrostate with n = N, which has a probability of only 1/2^N corresponding to a single configuration. The latter configuration is “special” and is said to be nonrandom, while the configurations with n ≈ N/2, for which the distribution of the particles is approximately uniform, are said to be “random.” So we can see that the equilibrium macrostate corresponds to the most probable state.

configuration   n   W(n)   P(n)
L L L L         4   1      1/16
R L L L         3
L R L L         3
L L R L         3
L L L R         3   4      4/16
R R L L         2
R L R L         2
R L L R         2
L R R L         2
L R L R         2
L L R R         2   6      6/16
R R R L         1
R R L R         1
R L R R         1
L R R R         1   4      4/16
R R R R         0   1      1/16

Table 1.2: The sixteen possible ways in which N = 4 particles can be distributed between the two halves of a box. The quantity W(n) is the number of configurations corresponding to the macroscopic state characterized by n. The probability P(n) of the macrostate n is calculated assuming that each configuration is equally likely.

Problem 1.3. Enumeration of possible configurations

(a) Calculate the number of possible configurations for each macrostate n for N = 8 particles. What is the probability that n = 8? What is the probability that n = 4? It is possible to count the number of configurations for each n by hand if you have enough patience, but because there are a total of 2^8 = 256 configurations, this counting would be very tedious. An alternative is to derive an expression for the number of ways that n particles out of N can be in the left half of the box. One way to motivate such an expression is to enumerate the possible configurations for smaller values of N and see if you can observe a pattern.

(b) From part (a) we see that the macrostate with n = N/2 is much more probable than the macrostate with n = N. Why?
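Tables 1.1 and 1.2 and Problem 1.3(a) can be checked by brute-force enumeration. The sketch below (Python; the function name is mine) tallies W(n) for every macrostate and confirms that W(n) equals the binomial coefficient C(N, n), the pattern Problem 1.3(a) asks you to find:

```python
from itertools import product
from math import comb

def count_macrostates(N):
    """Enumerate all 2**N left/right configurations and tally W(n),
    the number of configurations with n particles in the left half."""
    W = [0] * (N + 1)
    for config in product("LR", repeat=N):
        W[config.count("L")] += 1
    return W          # W[n] for n = 0, 1, ..., N

print(count_macrostates(2))   # -> [1, 2, 1], matching Table 1.1
print(count_macrostates(4))   # -> [1, 4, 6, 4, 1], matching Table 1.2
# For N = 8 (Problem 1.3a), W(n) = C(8, n):
W8 = count_macrostates(8)
print(W8 == [comb(8, n) for n in range(9)])   # -> True
print(W8[8] / 2 ** 8, W8[4] / 2 ** 8)         # -> 0.00390625 0.2734375
```

So for N = 8 the uniform macrostate n = 4 is 70 times more probable than the macrostate n = 8 with all particles on the left.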
We observed that if an isolated macroscopic system changes in time due to the removal of an internal constraint, it tends to evolve from a less random to a more random state. We also observed
that once the system reaches its most random state, fluctuations corresponding to an appreciably nonuniform state are very rare. These observations and our reasoning based on counting the number of configurations corresponding to a particular macrostate allow us to conclude that

A system in a nonuniform macrostate will change in time on the average so as to approach its most random macrostate, where it is in equilibrium.

Note that our simulations involved watching the system evolve, but our discussion of the number of configurations corresponding to each macrostate did not involve the dynamics in any way. Instead, this approach involved enumerating the configurations and assigning them equal probabilities, assuming that the system is isolated and in equilibrium. We will find that it is much easier to understand equilibrium systems by ignoring time altogether.

In the simulation of Problem 1.1 the total energy was conserved, and hence the macroscopic quantity of interest that changed from the specially prepared initial state with n2 = N to the most random macrostate with n2 ≈ N/3 was not the total energy. So what macroscopic quantity changed besides n1, n2, and n3 (the number of particles in each third of the box)? Based on our earlier discussion, we tentatively say that the quantity that changed is the entropy. This statement is no more meaningful than saying that balls fall near the earth’s surface because of gravity. We conjecture that the entropy is associated with the number of configurations associated with a given macrostate. If we make this association, we see that the entropy is greater after the system has reached equilibrium than in the system’s initial state. Moreover, if the system were initially prepared in a random state, the mean value of n2, and hence the entropy, would not change.
Hence, we can conclude the following:

The entropy of an isolated system increases or remains the same when an internal constraint is removed.

This statement is equivalent to the second law of thermodynamics. You might want to skip to Chapter 4, where this identification of the entropy is made explicit.

As a result of the two simulations that we have done and our discussions, we can make some additional tentative observations about the behavior of macroscopic systems.

Fluctuations in equilibrium. Once a system reaches equilibrium, the macroscopic quantities of interest do not become independent of time, but exhibit fluctuations about their average values. That is, in equilibrium only the average values of the macroscopic variables are independent of time. For example, for the particles in the box problem n(t) changes with t, but its average value n̄ does not. If N is large, fluctuations corresponding to a very nonuniform distribution of the particles almost never occur, and the relative fluctuations, σ/n̄, become smaller as N is increased.

History independence. The properties of equilibrium systems are independent of their history. For example, n̄ would be the same whether we had started with n(t = 0) = 0 or n(t = 0) = N. In contrast, as members of the human race, we are all products of our history. One consequence of history independence is that it is easier to understand the properties of equilibrium systems by ignoring the dynamics of the particles. (The global constraints on the dynamics are important. For example, it is important to know whether the total energy is a constant or not.) We will find that equilibrium statistical mechanics is essentially equivalent to counting configurations. The problem will be that this counting is difficult to do in general.
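The claim that the relative fluctuations become smaller as N is increased can be checked with a short simulation. In this sketch (our own; the sampling scheme is an assumption, not the applet's code) each particle is independently placed in either half with probability 1/2, and σ/n̄ is estimated for several N:

```python
# Estimate the relative fluctuation sigma/nbar of n, the number of particles
# in the left half, when each particle is equally likely to be in either half.
import random
from math import sqrt

random.seed(1)

def relative_fluctuation(N, trials=4000):
    # n for each trial is the number of "left" particles out of N
    samples = [sum(random.random() < 0.5 for _ in range(N)) for _ in range(trials)]
    nbar = sum(samples) / trials
    var = sum((n - nbar) ** 2 for n in samples) / trials
    return sqrt(var) / nbar

for N in (16, 64, 256):
    print(N, relative_fluctuation(N))   # shrinks roughly as 1/sqrt(N)
```

For this model the exact result is σ/n̄ = 1/√N, so quadrupling N halves the relative fluctuation, consistent with the trend the simulation shows.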
Need for statistical approach. Systems can be described in detail by specifying their microstate. Such a description corresponds to giving all the information that is possible. For a system of classical particles, a microstate corresponds to specifying the position and velocity of each particle. In our analysis of Problem 1.2, we specified only in which half of the box a particle was located, so we used the term configuration rather than microstate. However, the terms are frequently used interchangeably.

From our simulations, we see that the microscopic state of the system changes in a complicated way that is difficult to describe. However, from a macroscopic point of view, the description is much simpler. Suppose that we simulated a system of many particles and saved the trajectories of the particles as a function of time. What could we do with this information? If the number of particles is 10^6 or more, or if we ran long enough, we would have a problem storing the data. Do we want to have a detailed description of the motion of each particle? Would this data give us much insight into the macroscopic behavior of the system? As we have found, the trajectories of the particles are not of much interest, and it is more useful to develop a probabilistic approach. That is, the presence of a large number of particles motivates us to use statistical methods. In Section 1.8 we will discuss another reason why a probabilistic approach is necessary.

We will find that the laws of thermodynamics depend on the fact that the number of particles in macroscopic systems is enormous. A typical measure of this number is Avogadro’s number, which is approximately 6 × 10^23, the number of atoms in a mole. When there are so many particles, predictions of the average properties of the system become meaningful, and deviations from the average behavior become less and less important as the number of atoms is increased.
Equal a priori probabilities. In our analysis of the probability of each macrostate in Problem 1.2, we assumed that each configuration was equally probable. That is, each configuration of an isolated system occurs with equal probability if the system is in equilibrium. We will make this assumption explicit for isolated systems in Chapter 4.

Existence of different phases. So far our simulations of interacting systems have been restricted to dilute gases. What do you think would happen if we made the density higher? Would a system of interacting particles form a liquid or a solid if the temperature or the density were chosen appropriately? The existence of different phases is investigated in Problem 1.4.

Problem 1.4. Different phases

(a) The applet/application at <stp.clarku.edu/simulations/lj.html> simulates an isolated system of N particles interacting via the Lennard-Jones potential. Choose N = 64 and L = 18 so that the density ρ = N/L^2 ≈ 0.2. The initial positions are chosen at random except that no two particles are allowed to be closer than σ. Run the simulation and satisfy yourself that this choice of density and resultant total energy corresponds to a gas. What is your criterion?

(b) Slowly lower the total energy of the system. (The total energy is lowered by rescaling the velocities of the particles.) If you are patient, you might be able to observe “liquid-like” regions. How are they different from “gas-like” regions?

(c) If you decrease the total energy further, you will observe the system in a state roughly corresponding to a solid. What is your criterion for a solid? Explain why the solid that we obtain in this way will not be a perfect crystalline solid.
(d) Describe the motion of the individual particles in the gas, liquid, and solid phases.

(e) Conjecture why a system of particles interacting via the Lennard-Jones potential in (1.1) can exist in different phases. Is it necessary for the potential to have an attractive part for the system to have a liquid phase? Is the attractive part necessary for there to be a solid phase? Describe a simulation that would help you answer this question.

It is fascinating that a system with the same interparticle interaction can be in different phases. At the microscopic level, the dynamics of the particles is governed by the same equations of motion. What changes? How does such a phase change occur at the microscopic level? Why doesn’t a liquid crystallize immediately when its temperature is lowered quickly? What happens when it does begin to crystallize? We will find in later chapters that phase changes are examples of cooperative effects.

1.6 Measuring the pressure and temperature

The obvious macroscopic variables that we can measure in our simulations of the system of particles interacting via the Lennard-Jones potential include the average kinetic and potential energies, the number of particles, and the volume. We also learned that the entropy is a relevant macroscopic variable, but we have not learned how to determine it from a simulation.4 We know from our everyday experience that there are at least two other macroscopic variables that are relevant for describing a macrostate, namely, the pressure and the temperature.

The pressure is easy to measure because we are familiar with force and pressure from courses in mechanics. To remind you of the relation of the pressure to the momentum flux, consider N particles in a cube of volume V and linear dimension L. The center of mass momentum of the particles is zero.
Imagine a planar surface of area A = L^2 placed in the system and oriented perpendicular to the x-axis as shown in Figure 1.3. The pressure P can be defined as the force per unit area acting normal to the surface:

P = F_x / A.   (1.2)

We have written P as a scalar because the pressure is the same in all directions on the average. From Newton’s second law, we can rewrite (1.2) as

P = (1/A) d(m v_x)/dt.   (1.3)

From (1.3) we see that the pressure is the amount of momentum that crosses a unit area of the surface per unit time. We could use (1.3) to determine the pressure, but this relation uses information only from the fraction of particles that are crossing an arbitrary surface at a given time. Instead, our simulations will use the relation of the pressure to the virial, a quantity that involves all the particles in the system.5

4 We will find that it is very difficult to determine the entropy directly by making either measurements in the laboratory or during a simulation. Entropy, unlike pressure and temperature, has no mechanical analog.

5 See Gould, Tobochnik, and Christian, Chapter 8. The relation of the pressure to the virial is usually considered in graduate courses in mechanics.
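As a sketch of the virial route to the pressure (our own illustration; the function name and its arguments are assumptions, not the applet's code), the instantaneous pressure in d dimensions can be computed as P = [2K + Σ over pairs of r_ij · f_ij] / (d V), where K is the total kinetic energy:

```python
# Instantaneous pressure from the virial in d = 2 dimensions:
# P = (2*K + sum over pairs of r_ij . f_ij) / (d * V).
def virial_pressure(positions, velocities, pair_force, V, d=2, m=1.0):
    twice_ke = m * sum(vx * vx + vy * vy for vx, vy in velocities)  # 2K
    virial = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            rx = positions[i][0] - positions[j][0]
            ry = positions[i][1] - positions[j][1]
            fx, fy = pair_force(rx, ry)       # force on particle i due to j
            virial += rx * fx + ry * fy
    return (twice_ke + virial) / (d * V)

# With no interactions the virial term vanishes and P reduces to the
# ideal-gas result P = N k T / V (since 2K = d N k T by equipartition).
pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vel = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(virial_pressure(pos, vel, lambda rx, ry: (0.0, 0.0), V=100.0))
```

The advantage over (1.3) is visible in the code: every particle contributes at every time step, rather than only the few that happen to cross a chosen surface.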
[Figure 1.3: Imaginary plane perpendicular to the x-axis across which the momentum flux is evaluated.]

Problem 1.5. Nature of temperature

(a) Summarize what you know about temperature. What reasons do you have for thinking that it has something to do with energy?

(b) Discuss what happens to the temperature of a hot cup of coffee. What happens, if anything, to the temperature of its surroundings?

The relation between temperature and energy is not simple. For example, one way to increase the energy of a glass of water would be to lift it. However, this action would not affect the temperature of the water. So the temperature has nothing to do with the motion of the center of mass of the system. As another example, if we placed a container of water on a moving conveyor belt, the temperature of the water would not change. We also know that temperature is a property associated with many particles. It would be absurd to refer to the temperature of a single molecule.

This discussion suggests that temperature has something to do with energy, but it has missed the most fundamental property of temperature, that is, the temperature is the quantity that becomes equal when two systems are allowed to exchange energy with one another. (Think about what happens to a cup of hot coffee.) In Problem 1.6 we identify the temperature from this point of view for a system of particles.

Problem 1.6. Identification of the temperature

(a) Consider two systems of particles interacting via the Lennard-Jones potential given in (1.1). Select the applet/application at <stp.clarku.edu/simulations/thermalcontact.html>. For system A, we take NA = 81, εAA = 1.0, and σAA = 1.0; for system B, we have NB = 64, εBB = 1.5, and σBB = 1.2. Both systems are in a square box with linear dimension L = 12.
In this case, toroidal boundary conditions are not used, and the particles also interact with fixed particles (with infinite mass) that make up the walls and the partition between them. Initially, the two systems are isolated from each other and from their surroundings. Run the simulation until each system appears to be in equilibrium. Do the kinetic and potential energies of each system change as the system evolves? Why? What are the mean potential and kinetic energies of each system? Is the total energy of each system fixed (to within numerical error)?
(b) Remove the barrier and let the two systems interact with one another.6 We choose εAB = 1.25 and σAB = 1.1. What quantity is exchanged between the two systems? (The volume of each system is fixed.)

(c) Monitor the kinetic and potential energy of each system. After equilibrium has been established between the two systems, compare the average kinetic and potential energies to their values before the two systems came into contact.

(d) We are looking for a quantity that is the same in both systems after equilibrium has been established. Are the average kinetic and potential energies the same? If not, think about what would happen if you doubled N and the area of each system. Would the temperature change? Does it make more sense to compare the average kinetic and potential energies or the average kinetic and potential energies per particle? What quantity does become the same once the two systems are in equilibrium? Do any other quantities become approximately equal? What do you conclude about the possible identification of the temperature?

From the simulations in Problem 1.6, you are likely to conclude that the temperature is proportional to the average kinetic energy per particle. We will learn in Chapter 4 that this proportionality holds only for a system of particles whose kinetic energy is proportional to the square of the momentum (velocity).

Another way of thinking about temperature is that temperature is what you measure with a thermometer. If you want to measure the temperature of a cup of coffee, you put a thermometer into the coffee. Why does this procedure work?

Problem 1.7. Thermometers

Describe some of the simple thermometers with which you are familiar. On what physical principles do these thermometers operate? What requirements must a thermometer have?

Now let’s imagine a simulation of a simple thermometer.
Imagine a special particle, a “demon,” that is able to exchange energy with a system of particles. The only constraint is that the energy of the demon Ed must be non-negative. The behavior of the demon is given by the following algorithm:

1. Choose a particle in the system at random and make a trial change in one of its coordinates.

2. Compute ∆E, the change in the energy of the system due to the change.

3. If ∆E ≤ 0, the system gives the surplus energy |∆E| to the demon, Ed → Ed + |∆E|, and the trial configuration is accepted.

4. If ∆E > 0 and the demon has sufficient energy for this change, then the demon gives the necessary energy to the system, Ed → Ed − ∆E, and the trial configuration is accepted. Otherwise, the trial configuration is rejected and the configuration is not changed.

6 In order to ensure that we can continue to identify which particle belongs to system A and system B, we have added a spring to each particle so that it cannot wander too far from its original lattice site.
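The four rules above can be sketched in a few lines of code. The following is our own illustration for a 1d ideal gas (kinetic energy only, m = 1), not the applet's source code; the number of trial moves, the step size Δ, and the initial condition are arbitrary choices:

```python
# Demon exchanging energy with a 1d ideal gas of N particles (m = 1).
# The demon's energy Ed must remain non-negative, and the total energy
# of system plus demon is conserved by construction.
import random

random.seed(0)
N, E0, delta = 40, 10.0, 1.0
v = [(2 * E0 / N) ** 0.5] * N        # start with the energy shared equally
Ed = 0.0                              # demon starts with no energy
Ed_sum = 0.0
moves = 200000
for _ in range(moves):
    i = random.randrange(N)
    dv = random.uniform(-delta, delta)            # trial change in one velocity
    dE = 0.5 * (v[i] + dv) ** 2 - 0.5 * v[i] ** 2
    if dE <= 0 or Ed >= dE:          # rules 3 and 4: accept if the demon is
        v[i] += dv                   # paid (dE <= 0) or can pay (Ed >= dE)
        Ed -= dE                     # energy flows between demon and system
    Ed_sum += Ed

print("mean demon energy:", Ed_sum / moves)
print("mean energy per particle:", sum(0.5 * x * x for x in v) / N)
```

Note that the sum of the system's kinetic energy and Ed never changes during the run; only the division of the fixed total energy between demon and system fluctuates.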
Note that the total energy of the system and the demon is fixed. We consider the consequences of these simple rules in Problem 1.8. The nature of the demon is discussed further in Section 4.9.

Problem 1.8. The demon and the ideal gas

(a) The applet/application at <stp.clarku.edu/simulations/demon.html> simulates a demon that exchanges energy with an ideal gas of N particles moving in d spatial dimensions. Because the particles do not interact, the only coordinate of interest is the velocity of the particles. In this case the demon chooses a particle at random and changes its velocity in one of its d directions by an amount chosen at random between −∆ and +∆. For simplicity, the initial velocity of each particle is set equal to +v0 x̂, where v0 = (2E0/m)^{1/2}/N, E0 is the desired total energy of the system, and m is the mass of the particles. For simplicity, we will choose units such that m = 1. Choose d = 1, N = 40, and E0 = 10 and determine the mean energy of the demon Ed and the mean energy of the system E. Why is E = E0?

(b) What is e, the mean energy per particle of the system? How do e and Ed compare for various values of E0? What is the relation, if any, between the mean energy of the demon and the mean energy of the system?

(c) Choose N = 80 and E0 = 20 and compare e and Ed. What conclusion, if any, can you make?7

(d) Run the simulation for several other values of the initial total energy E0 and determine how e depends on Ed for fixed N.

(e) From your results in part (d), what can you conclude about the role of the demon as a thermometer? What properties, if any, does it have in common with real thermometers?

(f) Repeat the simulation for d = 2. What relation do you find between e and Ed for fixed N?

(g) Suppose that the energy–momentum relation of the particles is not ε = p^2/2m, but is ε = cp, where c is a constant (which we take to be unity). Determine how e depends on Ed for fixed N and d = 1.
Is the dependence the same as in part (d)?

(h) Suppose that the energy–momentum relation of the particles is ε = Ap^{3/2}, where A is a constant (which we take to be unity). Determine how e depends on Ed for fixed N and d = 1. Is this dependence the same as in part (d) or part (g)?

(i) The simulation also computes the probability P(Ed)δE that the demon has energy between Ed and Ed + δE. What is the nature of the dependence of P(Ed) on Ed? Does this dependence depend on the nature of the system with which the demon interacts?

7 There are finite size effects that are of order 1/N.
1.7 Work, heating, and the first law of thermodynamics

If you watch the motion of the individual particles in a molecular dynamics simulation, you would probably describe the motion as “random” in the sense of how we use random in everyday speech. The individual molecules in a glass of water would exhibit similar motion. Suppose that we were to expose the water to a low flame. In a simulation this process would roughly correspond to increasing the speed of the particles when they hit the wall. We say that we have transferred energy to the system incoherently because each particle would continue to move more or less at random.

You learned in your classical mechanics courses that the change in energy of a particle equals the work done on it, and the same is true for a collection of particles as long as we do not change the energy of the particles in some other way at the same time. Hence, if we squeeze a plastic container of water, we would do work on the system, and we would see the particles near the wall move coherently. So we can distinguish two different ways of transferring energy to the system. That is, heating transfers energy incoherently and doing work transfers energy coherently.

Let’s consider a molecular dynamics simulation again and suppose that we have increased the energy of the system either by compressing the system and doing work on it or by increasing the speed of the particles that reach the walls of the container. Roughly speaking, the first way would initially increase the potential energy of interaction and the second way would initially increase the kinetic energy of the particles. If we increase the total energy by the same amount, could we tell by looking at the particle trajectories after equilibrium has been reestablished how the energy had been increased?
The answer is no, because for a given total energy, volume, and number of particles, the kinetic energy and the potential energy would have unique equilibrium values. (See Problem 1.6 for a demonstration of this property.) We conclude that the energy of the gas can be changed by doing work on it or by heating it. This statement is equivalent to the first law of thermodynamics and, from the microscopic point of view, is simply a statement of conservation of energy.

Our discussion implies that the phrase “adding heat” to a system makes no sense, because we cannot distinguish “heat energy” from potential energy and kinetic energy. Nevertheless, we frequently use the word “heat” in everyday speech. For example, we might say “Please turn on the heat” and “I need to heat my coffee.” We will avoid such uses, and whenever possible avoid the use of the noun “heat.” Why do we care? Because there is no such thing as heat! Once upon a time, scientists thought that there was a fluid in all substances called caloric or heat that could flow from one substance to another. This idea was abandoned many years ago, but is still used in common language. Go ahead and use heat outside the classroom, but we won’t use it here.

1.8 *The fundamental need for a statistical approach

In Section 1.5 we discussed the need for a statistical approach when treating macroscopic systems from a microscopic point of view. Although we can compute the trajectory (the position and velocity) of each particle for as long as we have patience, our lack of interest in the trajectory of any particular particle and the overwhelming amount of information that is generated in a simulation motivate us to develop a statistical approach.
[Figure 1.4: (a) A special initial condition for N = 11 particles such that their motion remains parallel indefinitely. (b) The positions of the particles at time t = 8.0 after the change in vx(6). The only change in the initial condition from part (a) is that vx(6) was changed from 1 to 1.000001.]

We now discuss a more fundamental reason why we must use probabilistic methods to describe systems with more than a few particles. The reason is that under a wide variety of conditions, even the most powerful supercomputer yields positions and velocities that are meaningless! In the following, we will find that the trajectories in a system of many particles depend sensitively on the initial conditions. Such a system is said to be chaotic. This behavior forces us to take a statistical approach even for systems with as few as three particles.

As an example, consider a system of N = 11 particles moving in a box of linear dimension L (see the applet/application at <stp.clarku.edu/simulations/sensitive.html>). The initial conditions are such that all particles have the same velocity vx(i) = 1, vy(i) = 0, and the particles are equally spaced vertically, with x(i) = L/2 for i = 1, . . . , 11 (see Fig. 1.4(a)). Convince yourself that for these special initial conditions, the particles will continue moving indefinitely in the x-direction (using toroidal boundary conditions).

Now let us stop the simulation and change the velocity of particle 6, such that vx(6) = 1.000001. What do you think happens now? In Fig. 1.4(b) we show the positions of the particles at time t = 8.0 after the change in velocity of particle 6. Note that the positions of the particles are no longer equally spaced and the velocities of the particles are very different. So in this case, a small change in the velocity of one particle leads to a big change in the trajectories of all the particles.

Problem 1.9.
Irreversibility

The applet/application at <stp.clarku.edu/simulations/sensitive.html> simulates a system of N = 11 particles with the special initial condition described in the text. Confirm the results that we have discussed. Change the velocity of particle 6, stop the simulation at time t, and reverse
all the velocities. Confirm that if t is sufficiently short, the particles will return approximately to their initial state. What is the maximum value of t that will allow the system to return to its initial positions if t is replaced by −t (all velocities reversed)?

An important property of chaotic systems is their extreme sensitivity to initial conditions; that is, the trajectories of two identical systems starting with slightly different initial conditions will diverge exponentially in a short time. For such systems we cannot predict the positions and velocities of the particles, because even the slightest error in our measurement of the initial conditions would make our prediction entirely wrong if the elapsed time is sufficiently long. That is, we cannot answer the question, “Where is particle 2 at time t?” if t is sufficiently long. It might be disturbing to realize that our answers are meaningless if we ask the wrong questions.

Although Newton’s equations of motion are time reversible, this reversibility cannot be realized in practice for chaotic systems. Suppose that a chaotic system evolves for a time t and all the velocities are reversed. If the system is allowed to evolve for an additional time t, the system will not return to its original state unless the velocities are specified with infinite precision. This lack of practical reversibility is related to what we observe in macroscopic systems. If you pour milk into a cup of coffee, the milk becomes uniformly distributed throughout the cup. You will never see a cup of coffee spontaneously return to the state where all the milk is at the surface, because to do so the positions and velocities of the milk and coffee molecules must be chosen so that the molecules of milk return to this very special state. Even the slightest error in the choice of positions and velocities will ruin any chance of the milk returning to the surface.
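The exponential divergence just described can be illustrated even without molecular dynamics. The sketch below is our own: the logistic map x → 4x(1 − x) is a stand-in chaotic system, not the system in the applet. Two trajectories that start 10^-6 apart quickly become completely uncorrelated:

```python
# Two trajectories of the chaotic map x -> 4x(1-x) starting 1e-6 apart.
# Their separation grows roughly exponentially until it is of order one,
# after which the trajectories are completely uncorrelated.
x1, x2 = 0.3, 0.3 + 1e-6
max_sep = 0.0
for t in range(40):
    if t % 10 == 0:
        print(t, abs(x1 - x2))             # watch the separation grow
    x1, x2 = 4 * x1 * (1 - x1), 4 * x2 * (1 - x2)
    max_sep = max(max_sep, abs(x1 - x2))
```

The same qualitative behavior, with the initial error amplified at each step, is what makes the long-time trajectories of the N = 11 particle system meaningless.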
This sensitivity to initial conditions provides the foundation for the arrow of time.

1.9 *Time and ensemble averages

We have seen that although the computed trajectories are meaningless for chaotic systems, averages over the trajectories are physically meaningful. That is, although a computed trajectory might not be the one that we thought we were computing, the positions and velocities that we compute are consistent with the constraints we have imposed, in this case, the total energy E, the volume V, and the number of particles N. This reasoning suggests that macroscopic properties such as the temperature and pressure must be expressed as averages over the trajectories.

Solving Newton’s equations numerically, as we have done in our simulations, yields a time average. If we do a laboratory experiment to measure the temperature and pressure, our measurements also would be equivalent to a time average. As we have mentioned, time is irrelevant in equilibrium. We will find that it is easier to do calculations in statistical mechanics by doing an ensemble average. We will discuss ensemble averages in Chapter 3. In brief, an ensemble average is an average over many mental copies of the system that satisfy the same known conditions.

A simple example might clarify the nature of these two types of averages. Suppose that we want to determine the probability that the toss of a coin results in “heads.” We can do a time average by taking one coin, tossing it in the air many times, and counting the fraction of heads. In contrast, an ensemble average can be found by obtaining many similar coins and tossing them into the air at one time. It is reasonable to assume that the two ways of averaging are equivalent. This equivalence is called the quasi-ergodic hypothesis. The use of the term “hypothesis” might suggest that the equivalence is not well accepted, but it reminds us that the equivalence has been shown to be
rigorously true in only a few cases.

The sensitivity of the trajectories of chaotic systems to initial conditions suggests that a classical system of particles moving according to Newton’s equations of motion passes through many different microstates corresponding to different sets of positions and velocities. This property is called mixing, and it is essential for the validity of the quasi-ergodic hypothesis.

In summary, macroscopic properties are averages over the microscopic variables and give predictable values if the system is sufficiently large. One goal of statistical mechanics is to give a microscopic basis for the laws of thermodynamics. In this context it is remarkable that these laws depend on the fact that gases, liquids, and solids are chaotic systems. Another important goal of statistical mechanics is to calculate the macroscopic properties from a knowledge of the intermolecular interactions.

1.10 *Models of matter

There are many models of interest in statistical mechanics, corresponding to the wide range of macroscopic systems found in nature and made in the laboratory. So far we have discussed a simple model of a classical gas and used the same model to simulate a classical liquid and a solid. One key to understanding nature is to develop models that are simple enough to analyze, but rich enough to show the same features that are observed in nature. Some of the more common models that we will consider include the following.

1.10.1 The ideal gas

The simplest models of macroscopic systems are those for which the interaction between the individual particles is very small. For example, if a system of particles is very dilute, collisions between the particles will be rare and can be neglected under most circumstances. In the limit that the interactions between the particles can be neglected completely, the model is known as the ideal gas.
The classical ideal gas allows us to understand much about the behavior of dilute gases, such as those in the earth’s atmosphere. The quantum version will be useful in understanding blackbody radiation (Section 6.9), electrons in metals (Section 6.10), the low temperature behavior of crystalline solids (Section 6.12), and a simple model of superfluidity (Section 6.11).

The term “ideal gas” is a misnomer because it can be used to understand the properties of solids and other interacting particle systems under certain circumstances, and because in many ways the neglect of interactions is not ideal. The historical reason for the use of this term is that the neglect of interparticle interactions allows us to do some calculations analytically. However, the neglect of interparticle interactions raises other issues. For example, how does an ideal gas reach equilibrium if there are no collisions between the particles?
1.10.2 Interparticle potentials

As we have mentioned, the most popular form of the potential between two neutral atoms is the Lennard-Jones potential8 given in (1.1). This potential has a weak attractive tail at large r, reaches a minimum at r = 2^{1/6} σ ≈ 1.122σ, and is strongly repulsive at shorter distances. The Lennard-Jones potential is appropriate for closed-shell systems, that is, rare gases such as Ar or Kr. Nevertheless, the Lennard-Jones potential is a very important model system and is the standard potential for studies where the focus is on fundamental issues, rather than on the properties of a specific material.

An even simpler interaction is the hard core interaction given by

V(r) = ∞   (r ≤ σ)
V(r) = 0   (r > σ)     (1.4)

A system of particles interacting via (1.4) is called a system of hard spheres, hard disks, or hard rods depending on whether the spatial dimension is three, two, or one, respectively. Note that V(r) in (1.4) is purely repulsive.

1.10.3 Lattice models

In another class of models, the positions of the particles are restricted to a lattice or grid and the momenta of the particles are irrelevant. In the most popular model of this type the “particles” correspond to magnetic moments. At high temperatures the magnetic moments are affected by external magnetic fields, but the interaction between moments can be neglected.

The simplest nontrivial model that includes interactions is the Ising model, the most important model in statistical mechanics. The model consists of spins located on a lattice such that each spin can take on one of two values, designated as up and down or ±1. The interaction energy between two neighboring spins is −J if the two spins are in the same state and +J if they are in opposite states.
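The interaction rule just stated, −J for a like pair and +J for an unlike pair, is equivalent to an energy contribution of −J s_i s_j per neighboring pair. A minimal sketch (our own illustration, not from the text) for a one-dimensional chain with free ends:

```python
# Energy of a 1d chain of Ising spins s_i = +1 or -1 with free ends:
# each neighboring pair contributes -J * s_i * s_j, which is -J when the
# spins agree and +J when they differ.
def ising_energy(spins, J=1.0):
    return -J * sum(s1 * s2 for s1, s2 in zip(spins, spins[1:]))

print(ising_energy([1, 1, 1, 1]))     # all aligned, three pairs: -3.0
print(ising_energy([1, -1, 1, -1]))   # alternating, three pairs: +3.0
```

The fully aligned (ferromagnetic) configuration has the lowest energy, which is why alignment wins at low temperatures while the far more numerous disordered configurations win at high temperatures.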
One reason for the importance of this model is that it is one of the simplest to have a phase transition, in this case, a phase transition between a ferromagnetic state and a paramagnetic state.

We will focus on three classes of models – the ideal classical and quantum gas, classical systems of interacting particles, and the Ising model and its extensions. These models will be used in many contexts to illustrate the ideas and techniques of statistical mechanics.

1.11 Importance of simulations

Only simple models such as the ideal gas or special cases such as the two-dimensional Ising model can be analyzed by analytical methods. Much of what is done in statistical mechanics is to establish the general behavior of a model and then relate it to the behavior of another model. This way of understanding is not as strange as it first might appear. How many different systems in classical mechanics can be solved exactly?

⁸This potential is named after John Lennard-Jones, 1894–1954, a theoretical chemist and physicist at Cambridge University.
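As a concrete example of the kind of microscopic model one simulates, the pair potentials of Section 1.10.2 take only a few lines of code. The sketch below (ours, not the text's) evaluates the standard Lennard-Jones form V(r) = 4ε[(σ/r)¹² − (σ/r)⁶] together with the hard core potential (1.4), and checks numerically that the Lennard-Jones minimum lies at r = 2^(1/6)σ, where V = −ε:

```python
import math

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Standard Lennard-Jones potential: 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def hard_core(r, sigma=1.0):
    """Hard core potential (1.4): infinite for r <= sigma, zero for r > sigma."""
    return math.inf if r <= sigma else 0.0

r_min = 2.0 ** (1.0 / 6.0)                  # location of the minimum, in units of sigma
print(round(r_min, 3))                      # 1.122
print(round(lennard_jones(r_min), 6))       # -1.0, the depth of the attractive well
print(lennard_jones(3.0) < 0.0)             # True: weak attractive tail at large r
```

Functions like these are the only physics input a molecular dynamics simulation of a simple liquid needs; the rest is bookkeeping and numerical integration of Newton's equations.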
Statistical physics has grown in importance over the past several decades because powerful computers and new computer algorithms have allowed us to explore the consequences of more complex systems. Simulations play an important intermediate role between theory and experiment. As our models become more realistic, it is likely that they will require the computer for understanding many of their properties. In a simulation we start with a microscopic model for which the variables represent the microscopic constituents and determine the consequences of their interactions. Frequently the goal of our simulations is to explore these consequences so that we have a better idea of what type of theoretical analysis might be possible and what type of laboratory experiments should be done. Simulations allow us to compute many different kinds of quantities, some of which cannot be measured in a laboratory experiment.

Not only can we simulate reasonably realistic models, we also can study models that are impossible to realize in the laboratory, but are useful for providing a deeper theoretical understanding of real systems. For example, a comparison of the behavior of a model in three and four spatial dimensions can yield insight into why the three-dimensional system behaves the way it does.

Simulations cannot replace laboratory experiments and are limited by the finite size of the systems and by the short duration of our runs. For example, at present the longest simulations of simple liquids cover no more than 10⁻⁶ s.

Not only have simulations made possible new ways of doing research, they also make it possible to illustrate the important ideas of statistical mechanics. We hope that the simulations we have already discussed have convinced you of their utility. For this reason, we will consider many simulations throughout these notes.
1.12 Summary

This introductory chapter has been designed to whet your appetite, and at this point it is not likely that you will fully appreciate the significance of such concepts as entropy and the direction of time. We are reminded of the book All I Really Need to Know I Learned in Kindergarten.⁹ In principle, we have discussed most of the important ideas in thermodynamics and statistical physics, but it will take you a while before you understand these ideas in any depth. We also have not discussed the tools necessary to solve any problems. Your understanding of these concepts and the methods of statistical and thermal physics will increase as you work with these ideas in different contexts.

You will find that the unifying aspects of thermodynamics and statistical mechanics are concepts such as the nature of equilibrium, the direction of time, and the existence of cooperative effects and different phases. However, there is no unifying equation such as Newton’s second law of motion in mechanics, Maxwell’s equations in electrodynamics, or Schrödinger’s equation in nonrelativistic quantum mechanics.

There are many subtleties that we have glossed over so that we could get started. For example, how good is our assumption that the microstates of an isolated system are equally probable? This question is a deep one and has not been completely answered. The answer likely involves the nature of chaos. Chaos seems necessary to ensure that the system will explore a large number of the available microstates, and hence make our assumption of equal probabilities valid. However, we do not know how to tell a priori whether a system will behave chaotically or not.

⁹Robert Fulghum, All I Really Need to Know I Learned in Kindergarten, Ballantine Books (2004).
Most of our discussion concerns equilibrium behavior. The “dynamics” in thermodynamics refers to the fact that we can treat a variety of thermal processes in which a system moves from one equilibrium state to another. Even if the actual process involves non-equilibrium states, we can replace the non-equilibrium states by a series of equilibrium states which begin and end at the same equilibrium states. This type of reasoning is analogous to the use of energy arguments in mechanics. A ball can roll from the top of a hill to the bottom, rolling over many bumps and valleys, but as long as there is no dissipation due to friction, we can determine the ball’s motion at the bottom without knowing anything about how the ball got there.

The techniques and ideas of statistical mechanics are now being used outside of traditional condensed matter physics. The field theories of high energy physics, especially lattice gauge theories, use the methods of statistical mechanics. New methods of doing quantum mechanics convert calculations to path integrals that are computed numerically using methods of statistical mechanics. Theories of the early universe use ideas from thermal physics. For example, we speak about the universe being quenched to a certain state in analogy to materials being quenched from high to low temperatures. We already have seen that chaos provides an underpinning for the need for probability in statistical mechanics. Conversely, many of the techniques used in describing the properties of dynamical systems have been borrowed from the theory of phase transitions, one of the important areas of statistical mechanics.

Thermodynamics and statistical mechanics have traditionally been applied to gases, liquids, and solids. This application has been very fruitful and is one reason why condensed matter physics, materials science, and chemical physics are rapidly evolving and growing areas.
Examples of new materials include high temperature superconductors, low-dimensional magnetic and conducting materials, composite materials, and materials doped with various impurities. In addition, scientists are taking a new look at more traditional condensed systems such as water and other liquids, liquid crystals, polymers, crystals, alloys, granular matter, and porous media such as rocks. And in addition to our interest in macroscopic systems, there is growing interest in mesoscopic systems, systems that are neither microscopic nor macroscopic but in between, that is, between ∼10² and ∼10⁶ particles.

Thermodynamics might not seem very interesting when you first encounter it. However, an understanding of thermodynamics is important in many contexts, including societal issues such as global warming, electrical energy production, fuel cells, and other alternative energy sources.

The science of information theory uses many ideas from statistical mechanics, and recently new optimization methods such as simulated annealing have been borrowed from statistical mechanics.

In recent years statistical mechanics has evolved into the more general field of statistical physics. Examples of systems of interest in the latter area include earthquake faults, granular matter, neural networks, models of computing, genetic algorithms, and the analysis of the distribution of times to respond to email. Statistical physics is characterized more by its techniques than by the problems that are its focus. This universal applicability makes the techniques more difficult to understand, but also makes the journey more exciting.
Vocabulary

thermodynamics, statistical mechanics
macroscopic system
configuration, microstate, macrostate
specially prepared state, equilibrium, fluctuations
thermal contact, temperature
sensitivity to initial conditions
models, computer simulations

Problems

Problem     page
1.1          7
1.2          9
1.3         11
1.4         13
1.5, 1.6    15
1.7         16
1.8         17
1.9         19

Table 1.3: Listing of inline problems.

Problem 1.10. (a) What do you observe when a small amount of black dye is placed in a glass of water? (b) Suppose that a video were taken of this process and the video was run backward without your knowledge. Would you be able to tell whether the video was being run forward or backward? (c) Suppose that you could watch a video of the motion of an individual ink molecule. Would you be able to tell whether the video was being shown forward or backward?

Problem 1.11. Describe several examples based on your everyday experience that illustrate the unidirectional temporal behavior of macroscopic systems. For example, what happens to ice placed in a glass of water at room temperature? What happens if you make a small hole in an inflated tire? What happens if you roll a ball on a hard surface?

Problem 1.12. In what contexts can we treat water as a fluid? In what contexts can water not be treated as a fluid?

Problem 1.13. How do you know that two objects are at the same temperature? How do you know that two bodies are at different temperatures?

Problem 1.14. Summarize your understanding of the properties of macroscopic systems.

Problem 1.15. Ask some of your friends why a ball falls when released above the Earth’s surface. Explain why the answer “gravity” is not really an explanation.

Problem 1.16. What is your understanding of the concept of “randomness” at this time? Does “random motion” imply that the motion occurs according to unknown rules?
Problem 1.17. What evidence can you cite from your everyday experience that the molecules in a glass of water or in the surrounding air are in seemingly endless random motion?

Problem 1.18. Write a brief paragraph on the meaning of the abstract concepts “energy” and “justice.” (See the Feynman Lectures, Vol. 1, Chapter 4, for a discussion of why it is difficult to define such abstract concepts.)

Problem 1.19. A box of glass beads is also an example of a macroscopic system if the number of beads is sufficiently large. In what ways is such a system different from the macroscopic systems that we have discussed in this chapter?

Problem 1.20. Suppose that the handle of a plastic bicycle pump is rapidly pushed inward. Predict what happens to the temperature of the air inside the pump and explain your reasoning. (This problem is given here to determine how you think about this type of problem at this time. Similar problems will appear in later chapters to see if and how your reasoning has changed.)

Appendix 1A: Mathematics Refresher

As discussed in Sec. 1.12, there is no unifying equation in statistical mechanics, such as Newton’s second law of motion, to be solved in a variety of contexts. For this reason we will not adopt one mathematical tool. Appendix 2B summarizes the mathematics of thermodynamics, which makes much use of partial derivatives. Appendix A summarizes some of the mathematical formulas and relations that we will use. If you can do the following problems, you have a good background for most of the mathematics that we will use in the following chapters.

Problem 1.21. Calculate the derivative with respect to x of the following functions: e^x, e^(3x), e^(ax), ln x, ln x², ln 3x, ln(1/x), sin x, cos x, sin 3x, and cos 2x.

Problem 1.22. Calculate the following integrals:

    ∫₁² dx/(2x²)      (1.5a)
    ∫₁² dx/(4x)       (1.5b)
    ∫₁² e^(3x) dx     (1.5c)

Problem 1.23.
Calculate the partial derivatives of x² + xy + 3y² with respect to x and y.

Suggestions for Further Reading

P. W. Atkins, The Second Law, Scientific American Books (1984). A qualitative introduction to the second law of thermodynamics and its implications.
J. G. Oliveira and A.-L. Barabási, “Darwin and Einstein correspondence patterns,” Nature 437, 1251 (2005). The authors found that the probability that Darwin and Einstein would respond to a letter in τ days is well approximated by a power law, P(τ) ∼ τ^(−a) with a ≈ 3/2. What is the explanation for this power law behavior? How long does it take you to respond to an email?

Manfred Eigen and Ruthild Winkler, How the Principles of Nature Govern Chance, Princeton University Press (1993).

Richard Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics, Addison-Wesley (1964). Volume 1 has a very good discussion of the nature of energy and work.

Harvey Gould, Jan Tobochnik, and Wolfgang Christian, An Introduction to Computer Simulation Methods, third edition, Addison-Wesley (2006).

F. Reif, Statistical Physics, Volume 5 of the Berkeley Physics Course, McGraw-Hill (1967). This text was the first to make use of computer simulations to explain some of the basic properties of macroscopic systems.

Jeremy Rifkin, Entropy: A New World View, Bantam Books (1980). Although this popular book raises some important issues, it, like many other popular books and articles, misuses the concept of entropy. For more discussion on the meaning of entropy and how it should be introduced, see <www.entropysite.com/> and <www.entropysimple.com/>.
Chapter 2

Thermodynamic Concepts and Processes

© 2005 by Harvey Gould and Jan Tobochnik
29 September 2005

The study of temperature, energy, work, heating, entropy, and related macroscopic concepts comprises the field known as thermodynamics.

2.1 Introduction

In this chapter we will discuss ways of thinking about macroscopic systems and introduce the basic concepts of thermodynamics. Because these ways of thinking are very different from the ways that we think about microscopic systems, most students of thermodynamics initially find it difficult to apply the abstract principles of thermodynamics to concrete problems. However, the study of thermodynamics has many rewards, as was appreciated by Einstein:

A theory is the more impressive the greater the simplicity of its premises, the more different kinds of things it relates, and the more extended its area of applicability. Therefore the deep impression that classical thermodynamics made upon me. It is the only physical theory of universal content which I am convinced will never be overthrown, within the framework of applicability of its basic concepts.¹

The essence of thermodynamics can be summarized by two laws: (1) energy is conserved and (2) entropy increases. These statements of the laws are deceptively simple. What is energy? You are probably familiar with the concept of energy from other courses, but can you define it? Abstract concepts such as energy and entropy are not easily defined nor understood. However, as you apply these concepts in a variety of contexts, you will gradually come to understand them.

¹A. Einstein, Autobiographical Notes, Open Court Publishing Company (1991).