Parallelization of Coupled Cluster Code with OpenMP

This project thesis was presented at the All-India NetApp Technical Paper Competition.

1. Parallelization of Coupled Cluster Code with OpenMP
Anil Kumar Bohare, Department of Computer Science, University of Pune, Pune-7, India
2. Multi-core architecture and its implications for software
- This presentation has been made in OpenOffice.
- Multi-core architectures have a single chip package that contains one or more dies with multiple execution cores (computational engines). Jobs are run simultaneously on the appropriate software threads.
(contd.)
3. Multi-core architecture and its implications for software
- Current computer architectures, such as multi-core processors on a single chip, rely increasingly on parallel programming techniques like the Message Passing Interface (MPI) and Open specifications for Multi-Processing (OpenMP) to improve application performance, driving developments in High Performance Computing (HPC).
4. Parallelization of Coupled Cluster Code
- With the increasing popularity of Symmetric Multiprocessing (SMP) systems as the building blocks for high-performance supercomputers, the need for applications that can utilize the multiple levels of parallelism in clusters of SMPs has also increased.
- This presentation describes the parallelization of an important molecular dynamics application, Coupled Cluster Singles and Doubles (CCSD), on multi-core systems.
(contd.)
5. Parallelization of Coupled Cluster Code
- To reduce the execution time of the sequential CCSD code, we optimize and parallelize it to accelerate its execution on multi-core systems.
6. Agenda
- Introduction
- Problem & Theories
- Areas of application
- Performance Evaluation System
- OpenMP implementation discussion
- General performance recommendations
- Advantages & Disadvantages
- Performance Evaluations
- Further improvement
- Conclusion
- References
7. Introduction / Background
- Coupled-cluster (CC) methods are now widely used in quantum chemistry to calculate the electron correlation energy.
- A common use is in ab initio quantum chemistry methods in the field of computational chemistry.
- The technique is used for describing many-body systems.
- Some of the most accurate calculations for small to medium sized molecules use this method.
8. Problem
- The CCSD project contains 5 different files.
- 'vbar' is one of the many subroutines and is the one under focus. It:
  - Computes the effective two-electron integrals, which are CPU intensive.
  - Has iterative calculations.
  - Has big and time-consuming loops.
  - Has up to 8 levels of nested loops.
  - Takes approximately 12 minutes to execute in the sequential code.
(contd.)
9. Problem
- The goal is to reduce this time by at least 30%, i.e. to 7-8 minutes, by applying the OpenMP parallelization technique.
10. Parallelization: Description of the theory
- Shared-memory architecture (SMP): these parallel machines are built from a set of processors that have access to a common memory.
- Distributed-memory architecture (Beowulf clusters): each processor has its own private memory, and information is exchanged through messages.
- Wikipedia: MPI is a computer software protocol that allows many computers to communicate with one another.
- In the last few years a new industry standard has evolved with the aim of supporting the development of parallel programs on shared-memory machines: OpenMP.
11. OpenMP
- OpenMP is an API (Application Program Interface) used to explicitly direct multi-threaded, shared-memory parallelism.
- OpenMP defines a portable programming interface based on directives, run-time routines and environment variables.
- OpenMP is a relatively new programming paradigm which can easily deliver good parallel performance for small numbers (<16) of processors.
- OpenMP is usually applied to existing serial programs to achieve moderate parallelism with relatively little effort.
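As a stand-alone illustration (not part of the CCSD code), the following minimal free-form Fortran sketch shows the three ingredients named above: a directive, a run-time routine, and an environment variable. The program name and output text are invented for illustration.

      program hello_omp
      use omp_lib                        ! OpenMP run-time library
      implicit none
      integer :: tid

! Directive: fork a team of threads for the enclosed region.
!$omp parallel private(tid)
      tid = omp_get_thread_num()         ! run-time routine
      print *, 'Hello from thread', tid, 'of', omp_get_num_threads()
!$omp end parallel

      end program hello_omp

It would be compiled the same way as the CCSD code (ifort -openmp), and the environment variable OMP_NUM_THREADS controls how many threads the parallel region uses.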
12. Use of OpenMP
- OpenMP is used in applications with intense computational needs, such as video games and big science & engineering problems.
- It can be used by everyone from beginning programmers in school to scientists to parallel computing experts.
- It is available to millions of programmers in every major (Fortran & C/C++) compiler.
13. System used
- Supermicro computer node
- Quad-core dual CPU = 8 cores
- Intel(R) Xeon(R) CPU X5472 @ 3.00 GHz
- 8 GB RAM
- Red Hat Enterprise Linux WS release 4
- Kernel: 2.6.9-42.ELsmp
- Compiler: Intel ifort (IFORT) 11.0 20090131
- The parallel CCSD implementation with OpenMP is compiled by Intel Fortran Compiler version 11.0 with the -O3 optimization flag.
14. How to apply OpenMP?
- Identify compute intensive loops
- Scope of Data Parallelism
- Use of PARALLEL DO directive
- Reduction variables
- Mutual Exclusion Synchronization - Critical Section
- Use of Atomic directive
- OpenMP Execution Model
15. Identify compute intensive loops
- If you have big loops that dominate execution time, these are ideal targets for OpenMP.
- Divide loop iterations among threads: we will focus mainly on loop-level parallelism in this presentation.
- Make the loop iterations independent, so they can safely execute in any order without loop-carried dependencies (see the sketch after this list).
- Place the appropriate OpenMP directives and test.
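The sketch below is hypothetical (not taken from 'vbar') and shows what an independent loop looks like: each iteration writes only its own element of c, so the iterations can safely run in any order across threads.

      program loop_indep
      implicit none
      integer, parameter :: n = 1000
      integer :: i
      real(8) :: a(n), b(n), c(n)

      a = 1.0d0
      b = 2.0d0

! No iteration reads a value written by another iteration,
! so there is no loop-carried dependency.
!$omp parallel do
      do i = 1, n
         c(i) = a(i) + b(i)
      end do
!$omp end parallel do

      print *, 'c(1) =', c(1), ' c(n) =', c(n)
      end program loop_indep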
16. Scope of Data Parallelism
- Shared variables are shared among all threads.
- Private variables vary independently within each thread.
- By default, all variables declared outside a parallel block are shared, except the loop index variable, which is private.
- In the shared-memory setup, the private variables in each thread avoid dependencies and false sharing of data (see the sketch below).
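A minimal sketch (hypothetical, not from CCSD) of how the SHARED and PRIVATE clauses express this scoping; tmp is a per-thread scratch variable.

      program scope_demo
      implicit none
      integer, parameter :: n = 1000
      integer :: i
      real(8) :: x(n), y(n), tmp

      x = 1.0d0

! x and y are shared (one copy visible to all threads); tmp is private
! (each thread keeps its own copy); the loop index i is private by default.
!$omp parallel do shared(x, y) private(tmp)
      do i = 1, n
         tmp  = 2.0d0*x(i)               ! per-thread scratch value
         y(i) = tmp + 1.0d0
      end do
!$omp end parallel do

      print *, 'y(1) =', y(1)
      end program scope_demo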
17. PARALLEL DO Directive
- The PARALLEL DO directive specifies that the loop immediately following it should be executed in parallel.
- For codes that spend the majority of their time executing the contents of loops, the PARALLEL DO directive can result in a significant increase in performance.
(contd.)
18. PARALLEL DO Directive
- This is an actual example taken from the OpenMP version of CCSD:

C$OMP PARALLEL
C$OMP DO SCHEDULE(STATIC,2)
C$OMP&PRIVATE(ib,ibsym,orbb,iab,iib,iq,iqsym,ibqsym,iaq,iiq,
C$OMP&ig,igsym,orbg,iig,iag,ir,irsym,iir,iar,orbr,irgsym,
C$OMP&kloop,kk,ak,rk,f4,vqgbr,imsloc,is,issym,iis,orbs,ias)
      do 1020 ib=1,nocc
         ... (body of loop) ...
 1020 continue
C$OMP END DO
C$OMP END PARALLEL
19. Reduction variables
- Variables that are used in collective operations over the elements of an array can be labeled as REDUCTION variables:

      xsum=0
C$OMP PARALLEL DO REDUCTION(+:xsum)
      do in=1,ntmax
         xsum=xsum+baux(in)*t(in)
      enddo
C$OMP END PARALLEL DO

- Each processor has its own copy of xsum. After the parallel work is finished, the master thread collects the values generated by each processor and performs the global reduction.
20. Mutual Exclusion Synchronization - Critical Section
- Certain parallel programs may require that each processor execute a section of code where it is critical that only one processor executes that code section at a time.
- These regions can be marked with CRITICAL / END CRITICAL directives. Example:

C$OMP CRITICAL(SECTION1)
      call findk(orbq,orbr,orbb,orbg,iaq,iar,iab,iag,kgot,kmax)
C$OMP END CRITICAL(SECTION1)
21. Atomic Directive
- The ATOMIC directive ensures that a specific memory location is updated atomically, rather than exposing it to the possibility of multiple threads writing to it simultaneously. Example:

C$OMP ATOMIC
      aims31(imsloc) = aims31(imsloc)-twoe*t(in1)
22. Problem solution: Flow
23. Compilation & Execution
- Compile the OpenMP version of the CCSD code:

      anil@node:~# ifort -openmp ccsd_omp.F -o ccsd_omp.o

- Set the OpenMP environment variables:

      anil@node:~# cat exports
      export OMP_NUM_THREADS=2   (or 4 or 8: the number of threads to be spawned while executing the specified loops)
      export OMP_STACKSIZE=1G    (a smaller size may result in a segmentation fault)
      anil@node:~# source exports
(contd.)
24. Compilation & Execution
- Execute the OpenMP version of the CCSD code:

      anil@node:~# date>run_time; time ./ccsd_omp.o; date>>run_time
25. OpenMP Execution Model
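The slide's diagram is not reproduced in this transcript. As a textual stand-in (a hypothetical sketch, not from CCSD), the fork-join execution model looks like this:

      program fork_join
      use omp_lib
      implicit none

      print *, 'serial region: master thread only'

! Fork: the master thread spawns a team of threads.
!$omp parallel
      print *, 'parallel region: thread', omp_get_thread_num(), &
               'of', omp_get_num_threads()
!$omp end parallel
! Join: the threads synchronize and only the master thread continues.

      print *, 'serial region again: master thread only'
      end program fork_join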
26. General performance recommendations
- Be aware of Amdahl's law (see the formula below):
  - Minimize serial code
  - Remove dependencies among iterations
- Be aware of the cost of directives:
  - Parallelize outer loops
  - Minimize the number of directives
  - Minimize synchronization: minimize the use of BARRIER and CRITICAL
- Reduce false sharing:
  - Make use of private data as much as possible
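The slide does not write out Amdahl's law itself; in its usual form, with P the fraction of the runtime that can be parallelized and N the number of cores, the speedup is

\[ S(N) = \frac{1}{(1 - P) + P/N} \]

so even with P = 0.9, eight cores give at most S(8) = 1/(0.1 + 0.9/8) ≈ 4.7, which is why minimizing serial code matters.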
27. Advantages
- With multiple cores we can extract thread-level parallelism from a program and hence increase the performance of the sequential code.
- The original source code is left almost untouched.
- Can substantially reduce the execution time (up to 40%) of a given code, resulting in power savings.
- Designed to make programming threaded applications quicker, easier, and less error prone.
28. Disadvantages
- OpenMP code will only run on shared-memory (SMP) machines.
- When the processor must perform multiple tasks simultaneously, performance can degrade.
- Several iterations of trial and error may be needed before the user gets the expected timings from the OpenMP code.
29. Result
30. Descriptive statistics
- The graph shows that as the number of cores increases, the wall-clock time falls; the total time is reduced by 35.66%, increasing performance.
31. Further improvement
- This technique can be applied to multi-level nested do loops, which are highly complex and require more time.
- The code could also benefit from a hybrid approach, i.e. the outer loop is parallelized across processors using MPI and the inner loop is parallelized over the processing elements inside each processor with OpenMP directives, though this effectively means rewriting the complete code (a rough sketch follows this slide).
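A rough, hypothetical sketch of the hybrid pattern described above (not from CCSD; the program and variable names are invented for illustration): the outer loop is split across MPI ranks and the inner loop across OpenMP threads within each rank.

      program hybrid_sketch
      use mpi
      implicit none
      integer, parameter :: nouter = 64, ninner = 1000
      integer :: ierr, rank, nranks, provided, i, j
      real(8) :: local_sum, global_sum

! Ask for an MPI threading level that allows OpenMP threads inside each rank.
      call mpi_init_thread(mpi_thread_funneled, provided, ierr)
      call mpi_comm_rank(mpi_comm_world, rank, ierr)
      call mpi_comm_size(mpi_comm_world, nranks, ierr)

      local_sum = 0.0d0
! Outer loop distributed across MPI ranks (round robin)...
      do i = rank + 1, nouter, nranks
! ...inner loop distributed across OpenMP threads within this rank.
!$omp parallel do reduction(+:local_sum)
         do j = 1, ninner
            local_sum = local_sum + dble(i)*dble(j)
         end do
!$omp end parallel do
      end do

! Combine the per-rank partial sums on rank 0.
      call mpi_reduce(local_sum, global_sum, 1, mpi_double_precision, &
                      mpi_sum, 0, mpi_comm_world, ierr)
      if (rank == 0) print *, 'global sum =', global_sum

      call mpi_finalize(ierr)
      end program hybrid_sketch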
32. Conclusion
- In this presentation, I parallelized and optimized the 'vbar' subroutine in the CCSD code.
- I conducted a detailed performance characterization on an 8-core processor system.
- Optimization techniques such as SIMD (Single Instruction, Multiple Data), one of the four classes of Flynn's taxonomy, were found to be effective.
- The runtime decreased linearly when adding more compute cores to the same problem.
- Multiple cores/CPUs will dominate future computer architectures; OpenMP will be very useful for parallelizing sequential applications on these architectures.
33. References
- Barney, Blaise. "Introduction to Parallel Computing". Lawrence Livermore National Laboratory. http://www.llnl.gov/computing/tutorials/parallel_comp/
- The official OpenMP website: www.openmp.org
- LLNL OpenMP tutorial: http://www.llnl.gov/computing/tutorials/openMP/
- R. Chandra, R. Menon, L. Dagum, D. Kohr, D. Maydan, J. McDonald. Parallel Programming in OpenMP. Morgan Kaufmann, 2000.
- NERSC OpenMP tutorial: http://www.nersc.gov/nusers/help/tutorials/openmp
- MPI web pages at Argonne National Laboratory: http://www-unix.mcs.anl.gov/mpi