Parallelization Using OpenMP

Transcript

  • 1. Parallelization Using OpenMP, by Prof. Ranjit R. Banshpal
  • 2. Contents
    – Abstract
    – Introduction
    – Literature Survey
    – Why Parallelization?
    – What Is Parallelization?
    – Parallel Programming Models
    – Achieving Parallelism in the Shared Memory Model Using OpenMP
    – What Is Message Passing?
    – OpenMP vs. MPI
    – Pros & Cons of OpenMP
    – Pros & Cons of MPI
    – Conclusion
    – References
  • 3. Abstract
    – A more powerful machine leads to new kinds of applications, which in turn fuel our demand for yet more powerful systems.
    – Hardware engineers strive to raise the attainable performance, but hit a limit after a certain point.
    – This has given birth to what we call software parallelism.
    – Tools such as OpenMP and MPI can be used to structure a software program to run faster through parallelism.
  • 4. Introduction
    – Programming languages evolve just as natural languages do.
    – In the early days of computing, programs were serial: a program ran from start to finish on a single processor.
    – Parallel programming developed as a means of improving performance and efficiency.
    – In a parallel program, the instructions from each part run simultaneously on different CPUs.
  • 5. Literature Survey
    1. T.G. Mattson, B.A. Sanders, and B. Massingill, Patterns for Parallel Programming (classification of parallel programming models).
    2. D.R. Butenhof, Programming with POSIX Threads (the POSIX thread programming model).
    3. B. Chapman, G. Jost, and R. van der Pas, Using OpenMP: Portable Shared Memory Parallel Programming (shared memory parallel programming).
    4. P.S. Pacheco, Parallel Programming with MPI (the message passing model).
  • 6. Parallel Computer Memory Architectures
    – Shared Memory Architecture: UMA
  • 7. Parallel Computer Memory Architectures
    – Shared Memory Architecture: NUMA
    – Distributed Memory Architecture
  • 8. Parallel Computer Memory Architectures
    – Hybrid Memory Architecture
  • 9. Why Parallelization?
    – Carefully optimizing the serial version of a code can lead to significant performance gains.
    – Nevertheless, there will always be some codes which demand "too many" resources in terms of CPU time or memory.
    – Parallelization is an optimization technique; the goal is to reduce the execution time.
  • 10. What Is Parallelization?
    – Something is parallel if there is a certain level of independence in the order of its operations.
    – In other words, it doesn't matter in what order the operations are performed, as in the loop sketched below.
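For instance, this minimal sketch (the array names and size are illustrative, not from the slides) shows a loop whose iterations are independent: each iteration writes a distinct element of c, so the iterations can run in any order, and hence in parallel.

```c
#include <stdio.h>
#define N 1000

int main(void) {
    double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {   /* set up some input data */
        a[i] = i;
        b[i] = 2.0 * i;
    }

    /* Each iteration touches only c[i], so no iteration depends on
       another: the loop may execute in any order, i.e. in parallel. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f\n", c[N - 1]);
    return 0;
}
```

Compiled without OpenMP support, the pragma is simply ignored and the same code runs serially, which is the incremental property a later slide highlights.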
  • 11. Parallel Programming Models
    – Parallel programming models exist as an abstraction above hardware and memory architectures.
    – These models are not specific to a particular type of machine or memory architecture.
    – Several parallel programming models are in common use:
      • Shared Memory Model
      • Thread Model
      • Message Passing Model
  • 12. Shared Memory Model
    – Tasks share a common address space, which they read and write asynchronously.
    – The model is task oriented and works at a higher level of abstraction than threads.
    – Advantage: there is no need to specify explicitly the communication of data between tasks, so program development can often be simplified.
    – Disadvantage: in terms of performance, it becomes more difficult to understand and manage data locality.
  • 13. Thread Model
    – A single process can have multiple, concurrent execution paths.
    – Each thread has local data, but also shares the resources of the entire program.
    – A thread's work may best be described as a subroutine within the main program.
    – Threads communicate with each other through global memory (updating address locations), as in the POSIX threads sketch below.
    – Threads are commonly associated with shared memory architectures and operating systems.
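As a concrete illustration of this model, here is a small POSIX threads sketch (the function and variable names are invented for the example): each thread keeps local data (its id) but communicates its result through a shared global array.

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

int shared_data[NTHREADS];        /* global memory visible to all threads */

/* Each thread runs this "subroutine within the main program". */
void *worker(void *arg) {
    int id = *(int *)arg;         /* local data, private to the thread */
    shared_data[id] = id * id;    /* communicate via global memory */
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    int ids[NTHREADS];

    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&tid[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    for (int i = 0; i < NTHREADS; i++)
        printf("shared_data[%d] = %d\n", i, shared_data[i]);
    return 0;
}
```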
  • 14. Message Passing Model
    – A set of tasks use their own local memory during computation.
    – Multiple tasks can reside on the same physical machine and/or across an arbitrary number of machines.
    – Tasks exchange data through communications, by sending and receiving messages.
    – Data transfer usually requires cooperative operations to be performed by each process.
  • 15. Achieving Parallelism in the Shared Memory Model Using OpenMP
  • 16. What Is OpenMP?
    – Open specifications for Multi Processing.
    – A "standard" API for defining multi-threaded shared-memory programs.
    – OpenMP is not a "language".
    – OpenMP consists of three main parts: compiler directives, runtime library routines, and environment variables, as the sketch below illustrates.
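A minimal sketch showing how the three parts fit together (the program itself is invented for illustration): a directive forks the threads, runtime library routines query the team, and an environment variable controls the thread count from outside the program.

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    /* 1. Compiler directive: fork a team of threads. */
    #pragma omp parallel
    {
        /* 2. Runtime library routines: query the team. */
        int id = omp_get_thread_num();
        int n  = omp_get_num_threads();
        printf("hello from thread %d of %d\n", id, n);
    }
    /* 3. Environment variables: e.g. run with
       OMP_NUM_THREADS=4 ./a.out  to request four threads. */
    return 0;
}
```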
  • 17. Why Is OpenMP Popular?
    – No message passing.
    – OpenMP directives or library calls may be incorporated incrementally.
    – Compiled without OpenMP support, the code is in effect still a serial code.
    – Code size increase is generally smaller.
    – OpenMP-enabled codes tend to be more readable.
  • 18. The Basic Idea
    – The code starts with one master thread.
    – When a parallel task needs to be performed, additional threads are created (fork).
    – When the parallel tasks are finished, the additional threads are released (join).
    – [Figure: the OpenMP execution model, with the master thread forking into a parallel region and joining back, repeated for each parallel region.]
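The fork-join structure in the figure can be traced directly in code. In this illustrative sketch, the serial printf calls run on the master thread alone, and each #pragma omp parallel region forks a team of threads that joins back into the master thread when the region ends.

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    printf("master thread only\n");     /* serial part: one thread */

    #pragma omp parallel                /* FORK: a team of threads is created */
    {
        printf("region 1, thread %d\n", omp_get_thread_num());
    }                                   /* JOIN: back to the master thread */

    printf("master thread again\n");    /* serial part between regions */

    #pragma omp parallel                /* FORK: second parallel region */
    {
        printf("region 2, thread %d\n", omp_get_thread_num());
    }                                   /* JOIN */

    return 0;
}
```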
  • 19. What Is Message Passing?
    – A computational model in which processes communicate with other processes by sending and receiving messages.
    – Distributed memory systems: networks of workstations (clusters), massively parallel machines.
    – Shared memory systems: supercomputer settings.
    – MPI is a library specification for message passing, used on distributed memory systems; a minimal sketch follows.
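A minimal MPI sketch of cooperative message passing (the value sent is arbitrary, chosen for illustration): rank 0 sends an integer from its local memory, and rank 1 must post a matching receive for the transfer to take place.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                    /* data in rank 0's local memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* The transfer needs this cooperating receive on rank 1. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Run with at least two processes, e.g. mpirun -np 2 ./a.out.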
  • 20. OpenMP vs. MPI
    – OpenMP works on shared memory systems; MPI works on both shared memory and distributed memory systems.
    – OpenMP has better performance on SMP systems than MPI; MPI has poorer performance on SMP systems.
    – OpenMP is directive based; MPI follows a message passing style.
    – OpenMP is easier to program and debug; MPI is more flexible and scalable.
  • 21. Pros & Cons of OpenMP
    – Pros:
      • Easy to instrument (and check).
      • Parallelism can be implemented incrementally.
      • Allows for coarse-grained or fine-grained parallelism.
      • Widely available, portable.
    – Cons:
      • Not as scalable as MPI.
      • Available on shared memory systems only.
  • 22. Pros & Cons of MPI
    – Pros:
      • Runs on either shared or distributed memory architectures.
      • Can be used on a wider range of problems than OpenMP.
      • Each process has its own local variables.
    – Cons:
      • Requires more programming changes to go from a serial to a parallel version.
      • Can be harder to debug.
      • Performance is limited by the communication network between the nodes.
  • 23. Conclusion
    – OpenMP is the better option for parallelization on shared memory systems.
    – OpenMP is a compiler-based technique to create concurrent code from (mostly) serial code.
    – OpenMP can enable (easy) parallelization of loop-based code.
    – OpenMP performs comparably to manually coded threading, and is scalable and portable.
  • 24. References
    [1] Javier Diaz, Camelia Muñoz-Caro, and Alfonso Niño, "A Survey of Parallel Programming Models and Tools in the Multi and Many-Core Era", IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 8, August 2012.
    [2] D.S. Henty, "Performance of Hybrid Message-Passing and Shared-Memory Parallelism for Discrete Element Modeling", Proceedings of the IEEE/ACM SC2000 Conference (SC'00), 2000.
    [3] David Clark, "OpenMP: A Parallel Standard for the Masses", IEEE Concurrency, January–March 1998.
    [4] Joe Throop, Kuck & Associates Inc., "OpenMP: Shared-Memory Parallelism from the Ashes", IEEE Standards, May 1999.
    [5] Leonardo Dagum and Ramesh Menon, "OpenMP: An Industry Standard API for Shared-Memory Programming", IEEE Computational Science & Engineering, May 1998.
    [6] J.B. Dennis and E.C. Van Horn, "Programming Semantics for Multiprogrammed Computations", Communications of the ACM, 9(3):143–155, 1966.
    [7] MPI Forum, "MPI: A Message Passing Interface", International Journal of Supercomputing Applications, 8(3/4), 1994.
  • 25. References (continued)
    [8] Barbara Chapman, Gabriele Jost, and Ruud van der Pas, Using OpenMP, The MIT Press, Cambridge, Massachusetts / London, England, 2008.
    [9] William Gropp, "Tutorial on MPI: The Message Passing Interface", Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439, January–March 1999.
    [10] Ewing Lusk and Anthony Chan, "Early Experiments with the OpenMP/MPI Hybrid Programming Model", Mathematics and Computer Science Division, Argonne National Laboratory, and ASCI FLASH Center, University of Chicago, 2008.
    [11] Dieter an Mey and Thomas Reichstein, "Parallelization with OpenMP and MPI, A Simple Example (C)", October 26, 2007.
    [12] Wahid Nasri and Karim Fathallah, "A Performance Model for OpenMP Programs on Multicore Machines", IEEE, 2013.
    [13] "Hybrid MPI/OpenMP Optimization in Linpack Benchmark on Multi-core Platforms", The 8th International Conference on Computer Science & Education (ICCSE 2013), IEEE, 2013.
