
Parallelization


  1. Vivek Kantariya (09bce020). Guided by: Prof. Vibha Patel
  2. Direct Simulation Monte Carlo (DSMC)
     - Used to keep track of finite fluid flow
     - Used in supersonic and hypersonic flow
     - A probabilistic simulation involving a very large number of molecules
     - Over a billion molecules in random motion
     - A large number of iterations
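     A minimal sketch of the kind of independent per-particle update such a simulation repeats over many iterations, parallelized with OpenMP. The struct layout, function name, and simplified free-flight step are illustrative assumptions, not the actual DSMC algorithm:

        /* Illustrative only: simplified particle "move" phase of a DSMC-style
         * simulation; each particle is independent, so the loop parallelizes. */
        #include <omp.h>

        typedef struct { double x, vx; } particle_t;   /* assumed layout */

        void move_particles(particle_t *p, long n, double dt)
        {
            #pragma omp parallel for
            for (long i = 0; i < n; i++)
                p[i].x += p[i].vx * dt;   /* free flight during one time step */
        }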
  3. - Multi-processing: more than one processor works on the execution of a program
     - Multi-threading: more than one thread works on the same code
     - Loop optimization: reduces the overhead of loop execution by transforming the loop
  4. OpenMP (Open Multi-Processing)
     - An API that supports shared-memory multiprocessing in C, C++, and Fortran
     - It is an implementation of multithreading
     - Comprised of:
       1) Compiler directives
       2) Runtime library routines
       3) Environment variables
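     A minimal sketch (assumed example, not from the slides) showing the three components together: a compiler directive, a runtime library routine, and the environment variable that would otherwise control the team size:

        #include <omp.h>
        #include <stdio.h>

        int main(void)
        {
            /* runtime library routine: request a team of 4 threads */
            omp_set_num_threads(4);

            /* compiler directive: the block below runs on every thread */
            #pragma omp parallel
            {
                printf("hello from thread %d of %d\n",
                       omp_get_thread_num(), omp_get_num_threads());
            }
            return 0;
        }

     Without the omp_set_num_threads() call, the OMP_NUM_THREADS environment variable would supply the thread count.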
  5. - Based on the existence of multiple threads in the shared-memory programming paradigm
     - OpenMP is an explicit (not automatic) programming model, offering the programmer full control over parallelization
  6. - Fork: the master thread creates a team of parallel threads
     - Join: when the team threads complete the statements in the parallel region construct, they synchronize and terminate, leaving only the master thread
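     A minimal fork/join sketch (assumed example): the team is created at the directive and disbands at the closing brace, leaving the master thread to continue alone:

        #include <omp.h>
        #include <stdio.h>

        int main(void)
        {
            printf("master thread before the fork\n");

            #pragma omp parallel            /* fork: team of threads created */
            {
                printf("thread %d working in the parallel region\n",
                       omp_get_thread_num());
            }                               /* join: threads synchronize and disband */

            printf("master thread after the join\n");
            return 0;
        }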
  7. Example program skeleton:

        #include <omp.h>

        int main(void)
        {
            int var1, var2;    /* private: each thread gets its own copy */
            int var3 = 0;      /* shared by all threads */

            #pragma omp parallel private(var1, var2) shared(var3)
            {
                /* parallel section executed by all threads */
                /* ......... */
            }   /* all threads join the master thread and disband */

            return 0;
        }
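     Such a program is built with an OpenMP-enabled compiler; with GCC, for example (file name assumed): gcc -fopenmp skeleton.c -o skeleton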
  8. - #pragma omp parallel [options] { ... } marks the block of code that will be executed by multiple threads
     - The omp_set_num_threads() library function sets the number of threads dynamically
     - The omp_set_nested() library routine enables nested parallel regions
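     A minimal sketch (assumed example) of the two library routines named above, with a nested parallel region inside an outer one:

        #include <omp.h>
        #include <stdio.h>

        int main(void)
        {
            omp_set_num_threads(2);   /* outer parallel regions use 2 threads */
            omp_set_nested(1);        /* allow parallel regions inside parallel regions */

            #pragma omp parallel                      /* outer region */
            {
                int outer = omp_get_thread_num();

                #pragma omp parallel num_threads(2)   /* nested inner region */
                {
                    printf("outer thread %d / inner thread %d\n",
                           outer, omp_get_thread_num());
                }
            }
            return 0;
        }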
  9. - A work-sharing construct divides the execution of the enclosed code region among the members of the team that encounter it
     - Work-sharing constructs do not launch new threads
     - Three work-sharing constructs:
       - DO / for
       - SECTIONS
       - SINGLE
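     A minimal sketch (assumed example) using all three constructs inside one parallel region: for splits the loop iterations across the team, sections gives each section to one thread, and single is executed by exactly one thread:

        #include <omp.h>
        #include <stdio.h>

        #define N 8

        int main(void)
        {
            int a[N];

            #pragma omp parallel
            {
                #pragma omp for                 /* iterations divided among the team */
                for (int i = 0; i < N; i++)
                    a[i] = i * i;

                #pragma omp sections            /* each section runs on one thread */
                {
                    #pragma omp section
                    printf("section 1 on thread %d\n", omp_get_thread_num());
                    #pragma omp section
                    printf("section 2 on thread %d\n", omp_get_thread_num());
                }

                #pragma omp single              /* executed by a single thread */
                printf("a[%d] = %d\n", N - 1, a[N - 1]);
            }
            return 0;
        }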
  10. Loop optimization techniques:
      - Fission / distribution
      - Fusion / combining
      - Interchange / permutation
      - Inversion
      - Loop-invariant code motion
      - Loop reversal
  11. Fusion / combining: replaces multiple loops with a single one
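      A minimal sketch (array names and bounds assumed) of loop fusion: two loops over the same range are combined into one pass over the data:

        #define N 1000
        double a[N], b[N], c[N];

        void separate_loops(void)
        {
            for (int i = 0; i < N; i++) a[i] = b[i] + 1.0;
            for (int i = 0; i < N; i++) c[i] = a[i] * 2.0;
        }

        void fused_loop(void)
        {
            /* one traversal: less loop overhead, better locality */
            for (int i = 0; i < N; i++) {
                a[i] = b[i] + 1.0;
                c[i] = a[i] * 2.0;
            }
        }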
  12. Fission / distribution: breaks a large loop body into smaller loops to achieve better utilization of locality of reference
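      A minimal sketch (array names assumed) of loop fission: a loop whose body touches two unrelated pairs of arrays is split into two loops with smaller working sets:

        #define N 1000
        double a[N], b[N], x[N], y[N];

        void combined_loop(void)
        {
            for (int i = 0; i < N; i++) {
                a[i] = b[i] * 2.0;
                x[i] = y[i] + 1.0;
            }
        }

        void split_loops(void)
        {
            for (int i = 0; i < N; i++) a[i] = b[i] * 2.0;   /* works only on a, b */
            for (int i = 0; i < N; i++) x[i] = y[i] + 1.0;   /* works only on x, y */
        }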
  13. Interchange / permutation: exchanges the order of two iteration variables
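      A minimal sketch (matrix size assumed) of loop interchange: swapping the two loop variables so the inner loop walks the C array in its contiguous (row-major) direction:

        #define N 512
        double m[N][N];

        void column_order(void)        /* before: strided, cache-unfriendly accesses */
        {
            for (int j = 0; j < N; j++)
                for (int i = 0; i < N; i++)
                    m[i][j] += 1.0;
        }

        void row_order(void)           /* after interchange: unit-stride accesses */
        {
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    m[i][j] += 1.0;
        }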
  14. Inversion: replaces a while loop with an if block containing a do..while loop
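      A minimal sketch (function and variable names assumed) of loop inversion: the while loop becomes an if guard around a do..while, so the condition is tested at the bottom of the loop, which typically saves a jump per iteration:

        long sum_while(const int *v, int n)
        {
            long s = 0;
            int i = 0;
            while (i < n) {
                s += v[i];
                i++;
            }
            return s;
        }

        long sum_inverted(const int *v, int n)
        {
            long s = 0;
            int i = 0;
            if (i < n) {
                do {
                    s += v[i];
                    i++;
                } while (i < n);
            }
            return s;
        }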
  15. Loop-invariant code motion: moves statements whose values do not change across iterations outside the loop
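      A minimal sketch (variable names assumed) of loop-invariant code motion: the product scale * offset has the same value on every iteration, so it is hoisted out of the loop and computed once:

        #define N 1000
        double a[N];

        void unhoisted(double scale, double offset)
        {
            for (int i = 0; i < N; i++)
                a[i] = a[i] * (scale * offset);   /* recomputed every iteration */
        }

        void hoisted(double scale, double offset)
        {
            double factor = scale * offset;       /* moved outside the loop */
            for (int i = 0; i < N; i++)
                a[i] = a[i] * factor;
        }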
  16. Loop reversal: runs a loop backwards, for example so that loop fusion can then be applied
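      A minimal sketch (array name assumed) of loop reversal: the iteration order is flipped, which is legal here because the iterations are independent, and the new direction can make the loop fusable with a neighbouring loop that already runs that way:

        #define N 1000
        double a[N];

        void forward_loop(void)
        {
            for (int i = 0; i < N; i++)
                a[i] = a[i] + 1.0;
        }

        void reversed_loop(void)
        {
            for (int i = N - 1; i >= 0; i--)
                a[i] = a[i] + 1.0;
        }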
  17. References:
      - www.ieeexplore.ieee.org
      - http://wikipedia.org
      - http://computing.llnl.gov/tutorials/openMP
