Introduction to Quantum Monte Carlo

License: CC Attribution License


From “Physics Today” Oct 2000, Vol 53, No. 10., see also http://www.aip.org/pt/vol-53/iss-10/p100.html.

- 1. Introduction to Quantum Monte Carlo Methods Claudio Attaccalite http://attaccalite.com
- 2. Outline: A bit of probability theory; Variational Monte Carlo; Wavefunction and optimization
- 3. Definition of probability. P(E_i) = p_i = (number of successful events) / (total number of experiments), in the limit of a large number of experiments N. Normalization: Σ_i p_i = 1. Joint probability p_{i,j}: probability of the composite event (i, j). Marginal probability p_i = Σ_k p_{i,k}: probability for i whatever the second event may be. Conditional probability p(j|i): probability for the occurrence of j given that the event i occurred.
- 4. More definitions. Mean value: ⟨x⟩ = Σ_i x_i p_i. The mean value ⟨x⟩ is the expected average value after repeating the same experiment several times. Variance: var(x) = ⟨x²⟩ − ⟨x⟩² = Σ_i (x_i − ⟨x⟩)² p_i. The variance is a positive quantity that is zero only if all the events with nonvanishing probability give the same value of the variable x_i. Standard deviation: σ = √(var x). The standard deviation is taken as a measure of the dispersion of the variable x.
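A minimal sketch of these definitions in plain Python; the three-event distribution below is my own illustrative choice, not from the slides:

```python
# Illustrative discrete distribution (assumed values, not from the slides).
values = [1.0, 2.0, 3.0]
probs = [0.2, 0.5, 0.3]  # sums to 1

# Mean value <x> = sum_i x_i p_i
mean = sum(x * p for x, p in zip(values, probs))

# Variance var(x) = <x^2> - <x>^2
mean_sq = sum(x * x * p for x, p in zip(values, probs))
var = mean_sq - mean ** 2

# Standard deviation sigma = sqrt(var x)
std = var ** 0.5

print(mean, var, std)
```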
- 5. Chebyshev's inequality: P[(x − ⟨x⟩)² ≥ var(x)/ε] ≤ ε, for ε ≤ 1. If the variance is small, the random variable x becomes "more" predictable, in the sense that its value x_i at each event is close to ⟨x⟩ with non-negligible probability.
- 6. Extension to continuous variables. Cumulative probability: F(y) = P{x ≤ y}. Clearly F(y) is a monotonically increasing function, with F(∞) = 1. Probability density: ρ(y) = dF(y)/dy. Obviously ρ(y) ≥ 0. For discrete distributions: ρ(y) = Σ_i p_i δ(y − x(E_i)).
- 7. The law of large numbers: x̄ = (1/N) Σ_i x_i. The average x̄ is obtained by averaging over a large number N of independent realizations of the same experiment: ⟨x̄⟩ = ⟨x⟩ and var(x̄) = ⟨x̄²⟩ − ⟨x̄⟩² = (1/N) var(x). Central limit theorem: ρ(x̄) = (2πσ²/N)^(−1/2) exp(−(x̄ − ⟨x⟩)² / (2σ²/N)). The average x̄ is Gaussian distributed for large N and its standard deviation decreases as 1/√N.
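The 1/√N scaling can be checked numerically. A sketch, using uniform(0,1) draws (my own choice of distribution and sample sizes): quadrupling N should roughly halve the standard deviation of the sample mean.

```python
import random

random.seed(1)

def sample_mean(n):
    # Average of n independent uniform(0,1) draws; <x> = 0.5, var(x) = 1/12.
    return sum(random.random() for _ in range(n)) / n

def std_of_mean(n, reps=2000):
    # Empirical standard deviation of the sample mean over many repetitions.
    means = [sample_mean(n) for _ in range(reps)]
    mu = sum(means) / reps
    return (sum((m - mu) ** 2 for m in means) / reps) ** 0.5

s1 = std_of_mean(25)
s2 = std_of_mean(100)  # four times more samples -> error roughly halves
print(s1, s2, s1 / s2)
```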
- 8. Monte Carlo example: estimating π. If you are a very poor dart player, it is easy to imagine throwing darts randomly at the figure above (a circle inscribed in a square): of the total number of darts that hit within the square, the number of darts that hit the shaded part is proportional to the area of that part.
- 9. In other words: P_inside = (π r²/4) / r² = π/4, and so: π ≈ 4 × (darts inside the circle) / (total darts).
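A minimal sketch of the dart-throwing estimate in Python (the sample size and seed are my own choices):

```python
import random

random.seed(0)

n_darts = 100_000
hits = 0
for _ in range(n_darts):
    # Throw a dart uniformly into the unit square;
    # count hits inside the quarter circle x^2 + y^2 <= 1.
    x, y = random.random(), random.random()
    if x * x + y * y <= 1.0:
        hits += 1

# pi = 4 * P_inside
pi_estimate = 4.0 * hits / n_darts
print(pi_estimate)
```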
- 10. A simple integral. Consider the simple integral I = ∫_a^b f(x) dx. This can be evaluated in the same way as the π example: by randomly tossing darts in the interval [a, b], evaluating the function f(x) at these points, and averaging, I ≈ (b − a)⟨f⟩.
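A sketch of this uniform-sampling estimator; the test integrand ∫_0^π sin(x) dx = 2 is my own choice, not from the slides:

```python
import math
import random

random.seed(0)

def mc_integrate(f, a, b, n=200_000):
    # Sample x uniformly in [a, b]; the integral is (b - a) times the average of f.
    total = sum(f(a + (b - a) * random.random()) for _ in range(n))
    return (b - a) * total / n

# Illustrative check: integral of sin(x) on [0, pi] is exactly 2.
result = mc_integrate(math.sin, 0.0, math.pi)
print(result)
```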
- 11. The electronic structure problem. P.A.M. Dirac: "The fundamental laws necessary for the mathematical treatment of a large part of physics and the whole of chemistry are thus completely known, and the difficulty lies only in the fact that application of these laws leads to equations that are too complex to be solved."
- 12. Variational Monte Carlo. Monte Carlo integration is necessary because the wavefunction contains explicit particle correlations that lead to nonfactorizable multidimensional integrals.
- 13. How to sample a given probability distribution?
- 14. Solution: a Markov chain, i.e. a random walk in configuration space. A Markov chain is a stochastic dynamics in which a random variable x_n evolves according to x_{n+1} = F(x_n, ξ_n). x_n and x_{n+1} are not independent, so we can define a joint probability to go from the first to the second: f_n(x_{n+1}, x_n) = K(x_{n+1}|x_n) ρ_n(x_n), where ρ_n(x_n) is the marginal probability to be in x_n and K(x_{n+1}|x_n) is the conditional probability to go from x_n to x_{n+1}. Master equation: ρ_{n+1}(x_{n+1}) = Σ_{x_n} K(x_{n+1}|x_n) ρ_n(x_n).
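The master equation can be iterated directly for a small discrete chain. A sketch with a toy 3-state transition matrix of my own invention (columns sum to 1, so probability is conserved):

```python
# Toy 3-state Markov chain (transition probabilities are illustrative only).
# K[j][i] = K(j | i): probability of moving from state i to state j.
K = [
    [0.5, 0.2, 0.3],
    [0.3, 0.6, 0.1],
    [0.2, 0.2, 0.6],
]

rho = [1.0, 0.0, 0.0]  # start from a definite configuration

# Master equation: rho_{n+1}(x') = sum_x K(x' | x) rho_n(x), iterated to convergence.
for _ in range(200):
    rho = [sum(K[j][i] * rho[i] for i in range(3)) for j in range(3)]

print(rho, sum(rho))
```

After enough iterations, applying K once more leaves rho unchanged: the chain has reached its limiting (stationary) distribution.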
- 15. Limit distribution of the master equation ρ_{n+1}(x_{n+1}) = Σ_{x_n} K(x_{n+1}|x_n) ρ_n(x_n). 1) Does a limiting distribution ρ(x) exist? 2) Starting from a given arbitrary configuration, under which conditions do we converge to it?
- 16. Sufficient and necessary conditions for convergence. The answer to the first question requires stationarity: ρ(x_{n+1}) = Σ_{x_n} K(x_{n+1}|x_n) ρ(x_n). In order to satisfy this requirement it is sufficient, but not necessary, that the so-called detailed balance holds: K(x'|x) ρ(x) = K(x|x') ρ(x'). The answer to the second question requires ergodicity, namely that every configuration x' can be reached in a sufficiently large number of Markov iterations, starting from any initial configuration x.
- 17. Nicholas Metropolis (1915–1999). The algorithm by Metropolis (and A. Rosenbluth, M. Rosenbluth, A. Teller and E. Teller, 1953) has been cited as among the top 10 algorithms having the "greatest influence on the development and practice of science and engineering in the 20th century."
- 18. Metropolis algorithm. We want 1) a Markov chain that, for large n, converges to ρ(x), and 2) a conditional probability K(x'|x) that satisfies detailed balance with this probability distribution. Solution (by Metropolis and friends): K(x'|x) = A(x'|x) T(x'|x), with A(x'|x) = min{1, [ρ(x') T(x|x')] / [ρ(x) T(x'|x)]}, where T(x'|x) is a general and reasonable transition probability from x to x'.
- 19. The algorithm: start from a random configuration x; generate a new one x' according to T(x'|x); accept or reject according to the Metropolis rule; evaluate our function. Important: it is not necessary to have a normalized probability distribution (or wavefunction!).
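A minimal sketch of these steps in Python, for a symmetric proposal T(x'|x) (uniform step), so the acceptance reduces to A = min{1, ρ(x')/ρ(x)}. The target distribution, step size, and sample counts are my own illustrative choices; note the code only ever uses ratios of ρ, so no normalization is needed:

```python
import math
import random

random.seed(0)

def metropolis(log_rho, x0, step, n_samples, n_burn=1000):
    """Sample an unnormalized density rho(x), given as log rho(x),
    with a symmetric uniform proposal and the Metropolis rule."""
    x = x0
    samples = []
    for i in range(n_burn + n_samples):
        x_new = x + step * (random.random() - 0.5)   # propose via T(x'|x)
        delta = log_rho(x_new) - log_rho(x)
        # Accept with probability min(1, rho(x')/rho(x)); normalization cancels.
        if delta >= 0 or random.random() < math.exp(delta):
            x = x_new
        if i >= n_burn:
            samples.append(x)
    return samples

# Illustrative target: unnormalized Gaussian rho(x) ∝ exp(-x^2/2), mean 0, variance 1.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=2.5, n_samples=50_000)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
print(mean, var)
```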
- 20. More or less we have arrived: we can evaluate the integral ⟨A⟩ = ∫ Ψ(R) A Ψ(R) dR / ∫ Ψ²(R) dR = ∫ A_L(R) Ψ²(R) dR / ∫ Ψ²(R) dR, with the local estimator A_L(R) = Ψ(R)⁻¹ A Ψ(R), and its variance var(A) = ∫ A_L²(R) Ψ²(R) dR / ∫ Ψ²(R) dR − ⟨A⟩². But we just need a wave function . . .
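Putting the pieces together, a sketch of a one-particle "VMC" run: the 1D harmonic oscillator H = −½ d²/dx² + ½ x² and the trial function Ψ(x) = exp(−α x²/2) are my own toy choices, not from the slides. For this Ψ the local energy works out to E_L(x) = α/2 + (1 − α²) x²/2, and the exact variational energy is (α + 1/α)/4.

```python
import math
import random

random.seed(2)

alpha = 0.8  # variational parameter (illustrative); alpha = 1 is the exact ground state

def local_energy(x):
    # E_L(x) = (H Psi)(x) / Psi(x) for Psi(x) = exp(-alpha x^2 / 2)
    # and H = -1/2 d^2/dx^2 + 1/2 x^2 (natural units).
    return 0.5 * alpha + 0.5 * (1.0 - alpha ** 2) * x * x

# Metropolis walk sampling Psi(x)^2 ∝ exp(-alpha x^2).
x, energies = 0.0, []
for i in range(60_000):
    x_new = x + 1.5 * (random.random() - 0.5)
    if random.random() < min(1.0, math.exp(-alpha * (x_new ** 2 - x ** 2))):
        x = x_new
    if i >= 5_000:  # discard burn-in
        energies.append(local_energy(x))

# <E> = average of the local energy over Psi^2; analytic value (alpha + 1/alpha)/4.
e_vmc = sum(energies) / len(energies)
print(e_vmc)
```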
- 21. The trial wavefunction. The trial function completely determines the quality of the approximation for the physical observables. The simplest WF is the Slater-Jastrow form: Ψ(r_1, r_2, ..., r_n) = Det|A| exp(U_corr), with the determinant from DFT, CI, HF, scratch, etc. Other functional forms: BCS pairing, multideterminant, Pfaffian.
- 22. Optimization strategies. In order to obtain a good variational wavefunction, it is possible to optimize the WF by minimizing one of the following functionals, or a linear combination of both. The variational energy: E_V(a, b, c, ...) = ∫ Ψ(a,b,c,...) H Ψ(a,b,c,...) dR / ∫ Ψ²(a,b,c,...) dR. The variance of the energy: σ²(a, b, c, ...) = ∫ [(H − E_V) Ψ]² dR / ∫ Ψ² dR (always positive, and 0 for the exact ground state!).
- 23. And finally an application!!!
- 24. 2D electron gas. The Hamiltonian: H = −(1/(2 r_s²)) Σ_i ∇_i² + (1/r_s) Σ_{i<j} 1/|r_i − r_j|, with the density parameter r_s = 1/(√(πn) a_B). Phase-diagram labels: unpolarized phase, polarized phase, Wigner crystal.
- 25. 2D electron gas: the phase diagram. We found a new phase of the 2D electron gas at low density: a stable spin-polarized phase before Wigner crystallization.
- 26. Difficulties with VMC. The many-electron wavefunction is unknown and has to be approximated. It may seem hopeless to have to actually guess the wavefunction, but it is surprisingly accurate when it works.
- 27. The limitation of VMC. Nothing can really be done if the trial wavefunction isn't accurate enough. Moreover, it favours simple states over more complicated ones. Therefore there are other methods, for example diffusion QMC.
- 28. Next Monday: Diffusion Monte Carlo and the sign problem; applications. Then: finite-temperature path-integral Monte Carlo, the one-dimensional electron gas, excited states, the one-body density matrix, diagrammatic Monte Carlo.
- 29. References: SISSA Lectures on Numerical Methods for Strongly Correlated Electrons, 4th draft, S. Sorella, G. E. Santoro and F. Becca (2008). Introduction to the Diffusion Monte Carlo Method, I. Kosztin, B. Faber and K. Schulten, physics/9702023 (1997). FreeScience.info > Quantum Monte Carlo, http://www.freescience.info/books.php?id=35
- 30. Exact conditions. Electron-nucleus cusp condition: when one electron approaches a nucleus, the wavefunction reduces to a simple hydrogen-like form. The same condition holds when two electrons meet (the electron-electron cusp condition), and can be satisfied with a two-body Jastrow factor.
