MPI and OpenMP


  1. APIs FOR PARALLEL PROGRAMMING. By: SURINDER KAUR, 2012CS13
  2. PARALLEL PROGRAMMING MODELS
     • SHARED MEMORY MODEL: programs are executed on one or more processors that share some or all of the available memory. OpenMP is based on this model.
     • MESSAGE PASSING MODEL: programs are executed by one or more processes that exchange messages whenever one of them needs data produced by another. MPI is based on this model.
  3. OpenMP: OPEN MULTIPROCESSING
  4. WHAT IT IS FOR
     • A shared-memory API that supports multi-platform programming in C/C++ and Fortran.
     • Shared data is visible to all threads.
     • Private data is thread-specific data.
     • Values of shared data must be made available to all threads at synchronization points.
  5. EXECUTION MODEL
     • Fork-join model: the initial thread forks a team of threads for a parallel region, and the team joins back into the initial thread at the end of the region.
        Initial thread -> Fork -> Team of threads -> Join -> Initial thread
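     A minimal fork-join sketch in C (not from the slides; compile with e.g. gcc -fopenmp, message text is illustrative): the initial thread forks a team at the parallel directive and joins back to a single thread at the closing brace.

        #include <omp.h>
        #include <stdio.h>

        int main(void) {
            printf("Before the region: 1 thread\n");   /* initial (master) thread */

            #pragma omp parallel                        /* fork: a team of threads */
            {
                printf("Hello from thread %d of %d\n",
                       omp_get_thread_num(), omp_get_num_threads());
            }                                           /* join: back to one thread */

            printf("After the region: 1 thread\n");
            return 0;
        }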
  6. • The OpenMP model supports incremental parallelization.
     • In this approach a sequential program is transformed into a parallel program one block of code at a time.
     • (Diagram: over time, the master thread repeatedly forks a team of threads and joins it again, one parallel region after another.)
  7. LANGUAGE FEATURES
     • Directives
     • Library functions
     • Environment variables
     • Constructs
     • Clauses
  8. DIRECTIVES
     • Parallel construct:
        #pragma omp parallel [clause, clause, ...]
        structured block
     • Work-sharing constructs:
        • Loop construct:
           #pragma omp for [clause, clause, ...]
           for loop
        • Sections construct:
           #pragma omp sections [clause, clause, ...]
           {
              #pragma omp section
                 structured block
              #pragma omp section
                 structured block
           }
  9. • Single construct (also a work-sharing construct):
        #pragma omp single [clause, clause, ...]
        structured block
     • Combined parallel work-sharing construct: the two forms below are equivalent.
        #pragma omp parallel
        {
           #pragma omp for
           for loop
        }

        #pragma omp parallel for
        for loop
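     A short sketch (assuming a C compiler with OpenMP support, e.g. gcc -fopenmp; array size and messages are illustrative) combining the work-sharing constructs from the two slides above: a loop construct, a sections construct, a single construct, and the combined parallel loop form.

        #include <omp.h>
        #include <stdio.h>

        #define N 8

        int main(void) {
            int a[N];

            #pragma omp parallel
            {
                /* Loop construct: iterations are divided among the team. */
                #pragma omp for
                for (int i = 0; i < N; i++)
                    a[i] = i * i;

                /* Sections construct: each section runs on one thread. */
                #pragma omp sections
                {
                    #pragma omp section
                    printf("section A on thread %d\n", omp_get_thread_num());
                    #pragma omp section
                    printf("section B on thread %d\n", omp_get_thread_num());
                }

                /* Single construct: executed by exactly one thread. */
                #pragma omp single
                printf("single block on thread %d\n", omp_get_thread_num());
            }

            /* Combined parallel work-sharing construct. */
            #pragma omp parallel for
            for (int i = 0; i < N; i++)
                a[i] += 1;

            return 0;
        }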
  10. SYNCHRONIZATION CONSTRUCTS
      • Barrier construct:
         #pragma omp barrier
      • Ordered construct:
         #pragma omp ordered
      • Critical construct:
         #pragma omp critical [name]
         structured block
      • Atomic construct:
         #pragma omp atomic
         statement
      • Master construct:
         #pragma omp master
         structured block
      • Locks
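      A minimal sketch of these constructs (not from the slides; variable names are illustrative): atomic and critical protect shared updates, the barrier makes them visible to all threads, and the master construct restricts the final print to the master thread.

        #include <omp.h>
        #include <stdio.h>

        int main(void) {
            int sum = 0, hits = 0;

            #pragma omp parallel
            {
                /* Atomic construct: a single memory update performed atomically. */
                #pragma omp atomic
                hits += 1;

                /* Critical construct: only one thread at a time executes the block. */
                #pragma omp critical(update_sum)
                {
                    sum += omp_get_thread_num();
                }

                /* Barrier: no thread proceeds until all have arrived. */
                #pragma omp barrier

                /* Master construct: executed by the master thread only. */
                #pragma omp master
                printf("hits = %d, sum = %d\n", hits, sum);
            }
            return 0;
        }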
  11. CLAUSES
      • shared clause: shared(list)
      • private clause: private(list)
      • lastprivate clause: lastprivate(list)
      • firstprivate clause: firstprivate(list)
      • default clause: default(none/shared)
  12. • num_threads clause: num_threads(integer_expression)
      • ordered clause: ordered
      • reduction clause: reduction(operator:list)
      • copyin clause: copyin(list)
      • copyprivate clause: copyprivate(list)
      • nowait clause: nowait
      • schedule clause: schedule(kind[, chunk_size])
      • if clause: if(expression)
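      A hedged example pulling several of these clauses together on one combined directive (the array, the static schedule, and the team size of 4 are illustrative assumptions, not from the slides).

        #include <omp.h>
        #include <stdio.h>

        #define N 100

        int main(void) {
            double x[N], total = 0.0;
            int last = -1;

            for (int i = 0; i < N; i++)
                x[i] = 1.0;

            /* Data-sharing, reduction, schedule and num_threads clauses combined. */
            #pragma omp parallel for default(none) shared(x) \
                    reduction(+:total) lastprivate(last) \
                    schedule(static) num_threads(4)
            for (int i = 0; i < N; i++) {
                total += x[i];   /* each thread reduces into its private copy */
                last = i;        /* value from the sequentially last iteration survives */
            }

            printf("total = %f, last = %d\n", total, last);
            return 0;
        }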
  13. OTHER DIRECTIVES
      • Flush directive:
         #pragma omp flush [(list)]
      • Threadprivate directive:
         #pragma omp threadprivate (list)
  14. ENVIRONMENT VARIABLES
      • OMP_NUM_THREADS: setenv OMP_NUM_THREADS <int>
      • OMP_DYNAMIC: setenv OMP_DYNAMIC TRUE
      • OMP_NESTED: setenv OMP_NESTED TRUE
      • OMP_SCHEDULE
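      A small sketch (an assumption for illustration, not from the slides) of how a program observes OMP_NUM_THREADS through the runtime library and how a library call can override the environment setting for later parallel regions.

        #include <omp.h>
        #include <stdio.h>

        int main(void) {
            /* Reflects OMP_NUM_THREADS if set, otherwise the implementation default. */
            printf("max threads from environment: %d\n", omp_get_max_threads());

            /* Runtime call overrides the environment setting for subsequent regions. */
            omp_set_num_threads(2);

            #pragma omp parallel
            {
                #pragma omp single
                printf("team size in this region: %d\n", omp_get_num_threads());
            }
            return 0;
        }

      For example, running with setenv OMP_NUM_THREADS 8 (or OMP_NUM_THREADS=8 ./a.out in a POSIX shell) would print 8 for the first line and 2 for the second.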
  15. ADVANTAGES
      • Structured parallel programming.
      • Simple to use.
      • Runs on various platforms (portability).
      • Incremental parallelism.
      • Unified code for both serial and parallel applications.
      • Data layout and decomposition are handled automatically by directives.
      • Both coarse-grained and fine-grained parallelism are possible.
  16. DISADVANTAGES
      • Relies on the compiler to detect and exploit parallelism in the application, so performance depends on compiler quality.
      • Race conditions may arise.
      • Currently runs efficiently only on shared-memory multiprocessor platforms.
      • Compiler support is required.
      • Scalability is limited by the memory architecture.
  17. USES
      • Grid computing
      • Wave, weather and ocean codes:
         • LAMBO (Limited Area Model BOlogna) is a grid-point primitive-equations model.
         • The Wave Model (WAM) describes the sea state at a certain time and position as the superposition of many sinusoids with different frequencies.
         • The Modular Ocean Model (MOM) solves the primitive equations under the hydrostatic, Boussinesq and rigid-lid approximations.
  18. FUTURE WORK
      • OpenMP, the de facto standard for parallel programming on shared-memory systems, continues to extend its reach beyond pure HPC to include embedded systems, real-time systems, and accelerators.
      • Release Candidate 1 of the OpenMP 4.0 API specification, currently under development, is available for public discussion. This update includes thread affinity, initial support for Fortran 2003, SIMD constructs to vectorize both serial and parallelized loops, user-defined reductions, and sequentially consistent atomics.
      • More new features will appear in a final Release Candidate 2, expected in the first quarter of 2013, followed by the finalized full 4.0 API specification soon thereafter.
  19. MPI: MESSAGE PASSING INTERFACE
  20. WHAT IT IS FOR
      • A library-based model for interprocess communication.
      • The executing processes communicate via message passing.
      • A message can be either a DATA message or a CONTROL message.
      • IPC can be either SYNCHRONOUS or ASYNCHRONOUS.
  21. ASSOCIATED TERMS
      • Group: a set of processes that communicate with one another.
      • Context: a system-managed tag that scopes communication, so that messages sent in one context cannot be received in another.
      • Communicator: the central object for communication in MPI. Each communicator is associated with a group and a context.
  22. COMMUNICATION MODES
      • Standard: the send completes once the message has been handed off and the send buffer can be reused; MPI may or may not buffer the message.
      • Synchronous: the send completes only after the receiver has started receiving the message, i.e. the sender has effectively received an acknowledgement from the receiver.
      • Buffered: the send completes as soon as the message has been copied into a user-provided buffer.
      • Ready: the send may be started only if the matching receive has already been posted; otherwise the operation is erroneous.
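      A hedged sketch contrasting three of these modes (standard MPI_Send, synchronous MPI_Ssend, buffered MPI_Bsend with an explicitly attached buffer); it assumes the job is launched with at least two processes, e.g. mpirun -np 2, and the tags and payload are illustrative.

        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv) {
            int rank, data = 42;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0) {
                /* Standard mode: completes when the send buffer may be reused. */
                MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

                /* Synchronous mode: completes only after the receive has started. */
                MPI_Ssend(&data, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);

                /* Buffered mode: completes once the message is copied to a user buffer. */
                int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
                void *buf = malloc(bufsize);
                MPI_Buffer_attach(buf, bufsize);
                MPI_Bsend(&data, 1, MPI_INT, 1, 2, MPI_COMM_WORLD);
                MPI_Buffer_detach(&buf, &bufsize);
                free(buf);
            } else if (rank == 1) {
                int recv;
                for (int tag = 0; tag < 3; tag++)
                    MPI_Recv(&recv, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            }

            MPI_Finalize();
            return 0;
        }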
  23. BLOCKING AND NON-BLOCKING
      • Blocking: the call returns only after the communication has completed, so the program waits.
      • Non-blocking: the call returns immediately and the program continues executing without waiting for the communication to complete; completion is checked later with a wait or test call.
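      A minimal non-blocking sketch (an illustration, assuming exactly two processes): both ranks post MPI_Isend and MPI_Irecv, could do other work in between, and block only in MPI_Waitall when the exchanged data is actually needed.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            int rank, other, sendval, recvval;
            MPI_Request reqs[2];

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            other = 1 - rank;              /* assumes exactly two processes */
            sendval = rank;

            /* Non-blocking calls return immediately; the exchange proceeds
             * in the background while other work could be done here. */
            MPI_Isend(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Irecv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);

            /* Block only when the results are actually needed. */
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

            printf("rank %d received %d\n", rank, recvval);
            MPI_Finalize();
            return 0;
        }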
  24. MPI CALLS
      • MPI_Init:
         MPI_Init(int *argc, char ***argv)
         MPI_Init(&argc, &argv);
      • MPI_Comm_size:
         MPI_Comm_size(MPI_Comm comm, int *size)
         MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
      • MPI_Comm_rank:
         MPI_Comm_rank(MPI_Comm comm, int *rank)
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  25. • MPI_Send:
         MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
         MPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);
      • MPI_Recv:
         MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
         MPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);
      • MPI_Finalize:
         MPI_Finalize();
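      A self-contained sketch assembling the calls from the two slides above into the usual "hello world" pattern (the buffer size, tag value, and message text are assumptions): every worker rank sends a string to rank 0, which receives and prints one message per worker.

        #include <mpi.h>
        #include <stdio.h>

        #define BUFSIZE 128
        #define TAG     0

        int main(int argc, char **argv) {
            char buff[BUFSIZE];
            int numprocs, rank;
            MPI_Status stat;

            MPI_Init(&argc, &argv);
            MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank != 0) {
                /* Every worker sends a greeting to rank 0. */
                snprintf(buff, BUFSIZE, "hello from rank %d", rank);
                MPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);
            } else {
                /* Rank 0 collects and prints one message per worker. */
                for (int i = 1; i < numprocs; i++) {
                    MPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);
                    printf("%s\n", buff);
                }
            }

            MPI_Finalize();
            return 0;
        }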
  26. MPI DATA TYPES
      • MPI_INT
      • MPI_FLOAT
      • MPI_BYTE
      • MPI_CHAR
  27. ADVANTAGES
      • Highly expressive.
      • Enables efficient parallel programming.
      • Excellent portability.
      • Comprehensive set of library routines.
      • Language independent.
      • Provides access to advanced parallel hardware.
  28. DISADVANTAGES
      • Requires more effort to develop code compared to OpenMP.
      • Communicators, which are much used in MPI programming, are not entirely implemented in some environments: the underlying communicator system is present, but is restricted to the global environment MPI_COMM_WORLD.
      • Debugging the code is difficult.
      • Communication overhead.
  29. USES
      • High-performance computing
      • Grid computing
  30. FUTURE WORK
      • MPI-1 specified a static runtime environment.
      • MPI-2 added:
         • Parallel I/O
         • Dynamic process management
         • Remote memory operations
         • Over 500 functions
         • Language bindings for ANSI C, ANSI C++, and ANSI Fortran (Fortran 90)
         • Object interoperability
      • MPI is presently the most widely used API for parallel programming, so some aspects of its future appear solid; others less so.
      • The MPI Forum reconvened in 2007 to clarify some MPI-2 issues and explore developments for a possible MPI-3.
      • With greater internal concurrency (multi-core), better fine-grained concurrency control (threading, affinity), and more levels of memory hierarchy, multithreaded programs can take advantage of these developments more easily. Supporting such concurrency fully within MPI is an opportunity for an MPI-3 standard, and incorporating fault tolerance into the standard is also an important issue.
