
Introduction to Quantum Monte Carlo


  1. Introduction to Quantum Monte Carlo Methods, Claudio Attaccalite, http://attaccalite.com
  2. Outline: A bit of probability theory; Variational Monte Carlo; Wave-function and optimization.
  3. Definition of probability
     $P(E_i) = p_i = \frac{\text{number of successful events}}{\text{total number of experiments}}$, in the limit of a large number $N$ of experiments. Normalization: $\sum_i p_i = 1$.
     Probability of composite events:
     joint probability $p_{i,j}$;
     marginal probability $p_i = \sum_k p_{i,k}$, the probability of $i$ whatever the second event may be;
     conditional probability $p(j|i)$, the probability for the occurrence of $j$ given that the event $i$ occurred.
  4. More definitions
     Mean value: $\langle x \rangle = \sum_i x_i p_i$. The mean value $\langle x \rangle$ is the expected average value after repeating the same experiment several times.
     Variance: $\mathrm{var}(x) = \langle x^2 \rangle - \langle x \rangle^2 = \sum_i (x_i - \langle x \rangle)^2 p_i$. The variance is a positive quantity that is zero only if all the events having a non-vanishing probability give the same value for the variable $x_i$.
     Standard deviation: $\sigma = \sqrt{\mathrm{var}(x)}$. The standard deviation is taken as a measure of the dispersion of the variable $x$.
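As a quick numeric illustration of these definitions, here is a minimal Python sketch (the values and probabilities are arbitrary illustration data, not from the slides):

```python
import numpy as np

# Discrete random variable: values x_i with probabilities p_i (made-up data).
x = np.array([1.0, 2.0, 3.0])
p = np.array([0.2, 0.5, 0.3])            # normalized: sum_i p_i = 1

mean = np.sum(x * p)                      # <x> = sum_i x_i p_i
var = np.sum((x - mean) ** 2 * p)         # var(x) = sum_i (x_i - <x>)^2 p_i
std = np.sqrt(var)                        # standard deviation

print(mean, var, std)
```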
  5. Chebyshev's inequality
     $P\left[(x - \langle x \rangle)^2 \geq \frac{\mathrm{var}(x)}{\delta}\right] \leq \delta$ for $\delta \leq 1$.
     If the variance is small, the random variable $x$ becomes "more" predictable, in the sense that its value $x_i$ at each event is close to $\langle x \rangle$ with a non-negligible probability.
  6. Extension to continuous variables
     Cumulative probability: $F(y) = P\{x \leq y\}$. Clearly $F(y)$ is a monotonically increasing function, and $F(\infty) = 1$.
     Probability density: $\rho(y) = \frac{dF(y)}{dy}$. Obviously $\rho(y) \geq 0$.
     For discrete distributions: $\rho(y) = \sum_i p_i\, \delta(y - x(E_i))$.
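The cumulative distribution is also what one inverts to draw samples directly. The slides move on to Markov chains for the general case, but for simple 1D densities the following sketch shows the idea (the exponential density is chosen purely as an illustration, it is not from the slides):

```python
import numpy as np

# Inverse-transform sampling: if u is uniform on [0,1], then x = F^{-1}(u)
# is distributed with density rho(y) = dF/dy.
# Illustration: rho(y) = exp(-y) for y >= 0, so F(y) = 1 - exp(-y)
# and F^{-1}(u) = -log(1 - u).
rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
x = -np.log(1.0 - u)

print(x.mean())   # should be close to 1, the mean of this exponential density
```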
  7. The law of large numbers
     $\bar{x} = \frac{1}{N} \sum_i x_i$: the average of $x$ is obtained by averaging over a large number $N$ of independent realizations of the same experiment.
     $\langle \bar{x} \rangle = \langle x \rangle$ and $\mathrm{var}(\bar{x}) = \langle \bar{x}^2 \rangle - \langle \bar{x} \rangle^2 = \frac{1}{N}\,\mathrm{var}(x)$.
     Central limit theorem: $\rho(\bar{x}) = \frac{1}{\sqrt{2\pi\sigma^2/N}}\, e^{-\frac{(\bar{x} - \langle x \rangle)^2}{2\sigma^2/N}}$.
     The average $\bar{x}$ is Gaussian distributed for large $N$, and its standard deviation decreases as $1/\sqrt{N}$.
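A minimal sketch of the $1/\sqrt{N}$ law (the sample sizes are arbitrary choices): the empirical spread of the sample mean of $N$ uniform variates should track the predicted $\sigma/\sqrt{N} = 1/\sqrt{12N}$.

```python
import numpy as np

# The variance of a uniform variate on [0,1] is 1/12, so the standard
# deviation of the mean of N such variates should be 1/sqrt(12 N).
rng = np.random.default_rng(42)
for N in (100, 1_000, 10_000):
    means = rng.uniform(size=(2_000, N)).mean(axis=1)  # 2000 independent averages
    print(N, means.std(), 1.0 / np.sqrt(12 * N))       # empirical vs. predicted
```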
  8. Monte Carlo example: estimating $\pi$
     If you are a very poor dart player, it is easy to imagine throwing darts randomly at a figure of a circle inscribed in a square, and it should be apparent that, of the total number of darts that hit within the square, the number of darts that hit the shaded circular part is proportional to the area of that part.
  9. In other words: $P_{\text{inside}} = \frac{\pi r^2}{4 r^2} = \frac{\pi}{4}$, and therefore $\pi \approx 4 \times \frac{\text{darts inside the circle}}{\text{total darts}}$.
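A minimal sketch of this dart-throwing estimate (using a quarter circle in the unit square, which gives the same area ratio; the number of darts is an arbitrary choice):

```python
import numpy as np

# Estimate pi by throwing N random darts at the unit square and counting
# those that land inside the quarter circle x^2 + y^2 <= 1.
rng = np.random.default_rng(1)
N = 1_000_000                            # number of darts
x, y = rng.uniform(size=(2, N))
inside = (x**2 + y**2 <= 1.0).mean()     # fraction inside ~ pi/4
print(4.0 * inside)                      # statistical error ~ 1/sqrt(N)
```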
  10. A simple integral
     Consider the simple integral $I = \int_a^b f(x)\, dx$. This can be evaluated in the same way as the $\pi$ example: by randomly tossing darts in the interval $[a, b]$, evaluating the function $f(x)$ at these points, and averaging, so that $I \approx \frac{b-a}{N} \sum_i f(x_i)$.
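A sketch of this uniform-sampling estimate (the integrand $\sin x$ on $[0, \pi]$ is an arbitrary illustration with known answer 2):

```python
import numpy as np

# Monte Carlo integration: I = integral_a^b f(x) dx ~ (b - a) * <f(x)>
# with x drawn uniformly from [a, b].
rng = np.random.default_rng(2)
f = np.sin
a, b = 0.0, np.pi
N = 100_000
x = rng.uniform(a, b, size=N)
print((b - a) * f(x).mean())   # exact value is 2; error ~ 1/sqrt(N)
```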
  11. The electronic structure problem
     P. A. M. Dirac: "The fundamental laws necessary for the mathematical treatment of a large part of physics and the whole of chemistry are thus completely known, and the difficulty lies only in the fact that application of these laws leads to equations that are too complex to be solved."
  12. Variational Monte Carlo
     Monte Carlo integration is necessary because the wave-function contains explicit particle correlations, which lead to non-factorizable multi-dimensional integrals.
  13. How to sample a given probability distribution?
  14. Solution: a Markov chain, i.e. a random walk in configuration space
     A Markov chain is a stochastic dynamics in which a random variable $x_n$ evolves according to $x_{n+1} = F(x_n, \xi_n)$.
     $x_n$ and $x_{n+1}$ are not independent, so we can define a joint probability to go from the first to the second: $f_n(x_{n+1}, x_n) = K(x_{n+1}|x_n)\, \rho_n(x_n)$, where $\rho_n(x_n)$ is the marginal probability to be in $x_n$ and $K(x_{n+1}|x_n)$ is the conditional probability to go from $x_n$ to $x_{n+1}$.
     Master equation: $\rho_{n+1}(x_{n+1}) = \sum_{x_n} K(x_{n+1}|x_n)\, \rho_n(x_n)$.
  15. Limit distribution of the master equation
     $\rho_{n+1}(x_{n+1}) = \sum_{x_n} K(x_{n+1}|x_n)\, \rho_n(x_n)$
     1) Does a limiting distribution $\rho(x)$ exist?
     2) Starting from a given arbitrary configuration, under which conditions do we converge to it?
  16. Sufficient and necessary conditions for convergence
     The answer to the first question requires that $\rho(x_{n+1}) = \sum_{x_n} K(x_{n+1}|x_n)\, \rho(x_n)$, i.e. that $\rho$ is stationary under the master equation. In order to satisfy this requirement, it is sufficient but not necessary that the so-called detailed balance holds: $K(x'|x)\, \rho(x) = K(x|x')\, \rho(x')$.
     The answer to the second question requires ergodicity, namely that every configuration $x'$ can be reached in a sufficiently large number of Markov iterations, starting from any initial configuration $x$.
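A toy numeric check of the master equation on a three-state chain (the kernel below is made up; its columns hold the conditional probabilities $K(x'|x)$, so each column sums to 1):

```python
import numpy as np

# Iterate the master equation rho_{n+1} = K rho_n from an arbitrary start
# and check that it converges to a stationary distribution rho = K rho.
K = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])          # column-stochastic, ergodic kernel

rho = np.array([1.0, 0.0, 0.0])          # arbitrary initial distribution
for _ in range(200):
    rho = K @ rho

print(rho)                               # limiting distribution
print(np.allclose(rho, K @ rho))         # stationarity check: True
```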
  17. Nicholas Metropolis (1915-1999)
     The algorithm by Metropolis (and A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller, 1953) has been cited as among the top 10 algorithms having the "greatest influence on the development and practice of science and engineering in the 20th century."
  18. Metropolis algorithm
     We want: 1) a Markov chain that, for large $n$, converges to $\rho(x)$; 2) a conditional probability $K(x'|x)$ that satisfies detailed balance with this probability distribution.
     Solution (by Metropolis and friends): $K(x'|x) = A(x'|x)\, T(x'|x)$ with $A(x'|x) = \min\left\{1, \frac{\rho(x')\, T(x|x')}{\rho(x)\, T(x'|x)}\right\}$, where $T(x'|x)$ is a general and reasonable transition probability from $x$ to $x'$.
  19. The algorithm
     Start from a random configuration $x$; generate a new one $x'$ according to $T(x'|x)$; accept or reject according to the Metropolis rule; evaluate our function on the current configuration; repeat.
     Important: it is not necessary to have a normalized probability distribution (or wave-function!), since only the ratio $\rho(x')/\rho(x)$ enters the acceptance.
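A minimal Metropolis sketch in one dimension (the target $\rho(x) \propto e^{-x^4}$, the step size, and the chain length are all arbitrary illustration choices; the proposal is symmetric, so $T$ cancels in the acceptance ratio):

```python
import numpy as np

rng = np.random.default_rng(3)

def rho(x):
    return np.exp(-x**4)          # unnormalized target distribution

x = 0.0                           # random starting configuration
step = 1.0                        # proposal width (tuning parameter)
samples = []
for _ in range(100_000):
    x_new = x + rng.uniform(-step, step)           # propose x' from T(x'|x)
    if rng.uniform() < min(1.0, rho(x_new) / rho(x)):
        x = x_new                                  # accept; otherwise keep x
    samples.append(x)                              # the kept x is also a sample

print(np.mean(samples), np.var(samples))           # estimates of <x> and var(x)
```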
  20. More or less we have arrived: we can evaluate the integral
     $\langle A \rangle = \frac{\int \Psi(R)\, A\, \Psi(R)\, dR}{\int \Psi^2(R)\, dR} = \frac{\int A_L(R)\, \Psi^2(R)\, dR}{\int \Psi^2(R)\, dR}$, where $A_L(R) = \frac{A\, \Psi(R)}{\Psi(R)}$ is the local value of the operator,
     and its variance
     $\mathrm{var}(A) = \frac{\int A_L^2(R)\, \Psi^2(R)\, dR}{\int \Psi^2(R)\, dR} - \langle A \rangle^2$,
     but we just need a wave function . . .
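Putting the pieces together, here is a toy VMC run for a 1D harmonic oscillator ($H = -\frac{1}{2}\frac{d^2}{dx^2} + \frac{1}{2}x^2$) as a stand-in for the many-body problem, with trial wave-function $\Psi(x) = e^{-a x^2}$. The local energy works out to $E_L(x) = a + x^2(\frac{1}{2} - 2a^2)$; this is a standard textbook example, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(4)

def psi2(x, a):                           # unnormalized |Psi|^2 = exp(-2 a x^2)
    return np.exp(-2.0 * a * x**2)

def local_energy(x, a):                   # E_L = (H Psi) / Psi for Psi = exp(-a x^2)
    return a + x**2 * (0.5 - 2.0 * a**2)

a = 0.4                                   # variational parameter (a guess)
x, step = 0.0, 1.5
energies = []
for n in range(200_000):
    x_new = x + rng.uniform(-step, step)
    if rng.uniform() < psi2(x_new, a) / psi2(x, a):   # Metropolis rule on |Psi|^2
        x = x_new
    if n > 10_000:                        # discard equilibration steps
        energies.append(local_energy(x, a))

# E_V and its variance; both are minimized by the exact value a = 0.5.
print(np.mean(energies), np.var(energies))
```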
  21. The trial wave-function
     The trial function completely determines the quality of the approximation for the physical observables.
     The simplest WF is the Slater-Jastrow: $\Psi(r_1, r_2, \ldots, r_n) = \mathrm{Det}|A|\, \exp(U_{\mathrm{corr}})$.
     The determinant can come from DFT, CI, HF, from scratch, etc.; other functional forms: BCS pairing, multi-determinant, Pfaffian.
  22. Optimization strategies
     In order to obtain a good variational wave-function, it is possible to optimize the WF by minimizing one of the following functionals, or a linear combination of both.
     The variational energy: $E_V(a, b, c, \ldots) = \frac{\int \Psi(a, b, c, \ldots)\, H\, \Psi(a, b, c, \ldots)\, dR^n}{\int \Psi^2(a, b, c, \ldots)\, dR^n}$
     The variance of the energy: $\sigma^2(a, b, c, \ldots) = \frac{\int \left[(H - E_V)\, \Psi\right]^2 dR^n}{\int \Psi^2\, dR^n}$ (always positive, and zero for the exact ground state!)
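A sketch of the simplest possible optimization, reusing the harmonic-oscillator toy model above: scan the variational parameter $a$ and look for the minimum of the energy and of the variance. This is a crude grid scan for illustration, not one of the production optimization methods:

```python
import numpy as np

rng = np.random.default_rng(5)

def run_vmc(a, nsteps=50_000, step=1.5):
    """Return (E_V, sigma^2) for the trial wave-function exp(-a x^2)."""
    x, energies = 0.0, []
    for n in range(nsteps):
        x_new = x + rng.uniform(-step, step)
        if rng.uniform() < np.exp(-2.0 * a * (x_new**2 - x**2)):  # |Psi|^2 ratio
            x = x_new
        if n > 5_000:                                 # skip equilibration
            energies.append(a + x**2 * (0.5 - 2.0 * a**2))
    return np.mean(energies), np.var(energies)

for a in (0.3, 0.4, 0.5, 0.6):                        # crude parameter grid
    E, var = run_vmc(a)
    print(f"a={a:.1f}  E_V={E:.4f}  sigma^2={var:.5f}")   # both minimal at a=0.5
```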
  23. And finally an application!!!
  24. 2D electron gas
     The Hamiltonian: $H = -\frac{1}{2 r_s^2} \sum_i^N \nabla_i^2 + \frac{1}{r_s} \sum_{i<j}^N \frac{1}{|r_i - r_j|}$, with the density parameter $r_s = \frac{1}{\sqrt{\pi n}\, a_B}$.
     (figure: the unpolarized phase versus the Wigner crystal)
  25. 2D electron gas: the phase diagram
     We found a new phase of the 2D electron gas at low density: a stable spin-polarized phase before the Wigner crystallization.
  26. Difficulties with VMC
     The many-electron wavefunction is unknown and has to be approximated. It may seem hopeless to have to actually guess the wavefunction, but the method is surprisingly accurate when it works.
  27. The limitation of VMC
     Nothing can really be done if the trial wavefunction is not accurate enough. Moreover, the method favours simple states over more complicated ones. Therefore there are other methods, for example diffusion QMC.
  28. Next Monday
     Diffusion Monte Carlo and the sign problem; applications.
     Then: finite-temperature path-integral Monte Carlo, the one-dimensional electron gas, excited states, the one-body density matrix, diagrammatic Monte Carlo.
  29. References
     S. Sorella, G. E. Santoro, and F. Becca, SISSA Lectures on Numerical Methods for Strongly Correlated Electrons, 4th draft (2008).
     I. Kosztin, B. Faber, and K. Schulten, Introduction to the Diffusion Monte Carlo Method, arXiv:physics/9702023v1 (1997).
     FreeScience.info, Quantum Monte Carlo: http://www.freescience.info/books.php?id=35
  30. Exact conditions
     Electron-nucleus cusp condition: when one electron approaches a nucleus, the wave-function reduces to a simple hydrogen-like one, namely (Kato cusp condition) $\left.\frac{1}{\Psi}\frac{\partial \Psi}{\partial r_{iA}}\right|_{r_{iA}=0} = -Z_A$ for electron $i$ at a nucleus $A$ of charge $Z_A$.
     The same kind of condition holds when two electrons meet (the electron-electron cusp condition), and it can be satisfied with a two-body Jastrow factor.
