High Performance Computing using MPI

High Performance Computing Workshop for IHPC, Techkriti'13
Supercomputing Blog contains the codes -
http://ankitmahato.blogspot.in/search/label/Supercomputing

Credits:
https://computing.llnl.gov/
http://www.mcs.anl.gov/research/projects/mpi/


  1. 1. High Performance Computing. Ankit Mahato, amahato@iitk.ac.in, fb.com/ankitmahato, IIT Kanpur. Note: These are not the actual lecture slides but the ones you may find useful for IHPC, Techkriti'13.
  2. 2. Lots at stake !!
  3. 3. What is HPC? It is the art of developing supercomputers and software to run on supercomputers. A main area of this discipline is developing parallel processing algorithms and software: programs that can be divided into little pieces so that each piece can be executed simultaneously by separate processors.
  4. 4. Why HPC? To simulate a bio-molecule of 10,000 atoms: the non-bond energy term costs ~10^8 operations per step, and a 1-microsecond simulation needs ~10^9 steps, i.e. ~10^17 operations. On a 1 GFLOPS machine (10^9 operations per second) that takes 10^8 seconds (about 3 years 2 months), and we need to do a large number of simulations for even larger molecules. PARAM Yuva (5 x 10^14 FLOPS): 3 min 20 sec. Titan: 5.7 seconds.
  5. 5. Amdahl's Law. Code = serial part + part which can be parallelized. The potential program speedup is defined by the fraction of code that can be parallelized.
  6. 6. Amdahl's Law. Can you get a speedup of 5 times using a quad-core processor? No: even a perfectly parallel code tops out at 4x on 4 cores (see the worked formula below).
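As a worked note (my own addition, not on the original slide), Amdahl's law for a parallel fraction P running on N processors gives the speedup

\[ S(N) = \frac{1}{(1 - P) + P/N} \]

Even with P = 1 the best a quad-core (N = 4) can achieve is S = 4, so 5x is impossible; reaching even 3.5x already requires P ≈ 0.95.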
  7. 7. HPC Architecture. An HPC system typically consists of a massive number of computing nodes (typically 1000s), highly interconnected by a specialized low-latency network fabric, which use MPI for data exchange. Nodes = cores + memory. Computation is divided into tasks, these tasks are distributed across the nodes, and the nodes need to synchronize and exchange information several times a second.
  8. 8. Communication Overheads. Latency: startup time for each message transaction, 1 μs. Bandwidth: the rate at which messages are transmitted across the nodes / processors, 10 Gbit/s. You can't go faster than these limits.
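A simple cost model often used to reason about these limits (my own addition, not from the slides): sending a message of n bits costs roughly

\[ T_{\mathrm{msg}}(n) \approx \alpha + \frac{n}{\beta} \]

where α is the latency (1 μs above) and β the bandwidth (10 Gbit/s above). Small messages are latency-dominated and large messages are bandwidth-dominated, so batching many small messages into fewer large ones usually pays off.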
  9. 9. MPI. M P I = Message Passing Interface. It is an industry-wide standard for passing messages between parallel processes. MPI is a specification for the developers and users of message-passing libraries. By itself, it is NOT a library, but rather the specification of what such a library should be. Small: you require only six library functions to write any parallel code. Large: there are more than 200 functions in MPI-2.
  10. 10. MPI Programming Model. Originally, MPI was designed for distributed-memory architectures, which were becoming increasingly popular at that time. As architecture trends changed, shared-memory SMPs were combined over networks, creating hybrid distributed-memory / shared-memory systems. MPI implementors adapted their libraries to handle both types of underlying memory architecture seamlessly. It means you can use MPI even on your laptop.
  11. 11. Why MPI? Today, MPI runs on virtually any hardware platform: distributed memory, shared memory, or hybrid. An MPI program is basically a C, Fortran or Python program that uses the MPI library, SO DON'T BE SCARED.
  12. 12. MPI
  13. 13. MPI. Communicator: a set of processes that have a valid rank of source or destination fields. The predefined communicator is MPI_COMM_WORLD, and we will be using this communicator all the time. MPI_COMM_WORLD is the default communicator consisting of all processes. Furthermore, a programmer can also define a new communicator, which has a smaller number of processes than MPI_COMM_WORLD does.
  14. 14. MPI. Processes: for this module, we just need to know that processes belong to MPI_COMM_WORLD. If there are p processes, then each process is identified by its rank, which runs from 0 to p - 1. The master has rank 0. In the example shown on this slide there are 10 processes.
  15. 15. MPI. Use an SSH client (PuTTY) to log in to any of these multi-processor compute servers, with processor counts varying from 4 to 15: akash1.cc.iitk.ac.in, akash2.cc.iitk.ac.in, aatish.cc.iitk.ac.in, falaq.cc.iitk.ac.in. On your PC you can download MPICH2 or Open MPI.
  16. 16. MPI. Include the header file. C: #include "mpi.h". Fortran: include 'mpif.h'. Python: from mpi4py import MPI (not available on the CC servers; you can set it up on your lab workstation).
  17. 17. MPI. The smallest MPI library should provide these 6 functions: MPI_Init - initializes the MPI execution environment; MPI_Comm_size - determines the size of the group associated with a communicator; MPI_Comm_rank - determines the rank of the calling process in the communicator; MPI_Send - performs a basic send; MPI_Recv - performs a basic receive; MPI_Finalize - terminates the MPI execution environment.
  18. 18. MPI. Format of MPI calls: C and Python names are case-sensitive; Fortran names are not. Example: CALL MPI_XXXXX(parameter,..., ierr) is equivalent to call mpi_xxxxx(parameter,..., ierr). Programs must not declare variables or functions with names beginning with the prefix MPI_ or PMPI_ in C and Fortran. For Python, 'from mpi4py import MPI' already ensures that you don't make this mistake.
  19. 19. MPI. C: rc = MPI_Xxxxx(parameter, ...). The error code is returned as "rc"; MPI_SUCCESS if successful. Fortran: CALL MPI_XXXXX(parameter,..., ierr). The error code is returned in the "ierr" parameter; MPI_SUCCESS if successful. Python: rc = MPI.COMM_WORLD.xxxx(parameter, ...).
  20. 20. MPI MPI_Init. Initializes the MPI execution environment. This function must be called in every MPI program, must be called before any other MPI function, and must be called only once in an MPI program. For C programs, MPI_Init may be used to pass the command-line arguments to all processes, although this is not required by the standard and is implementation-dependent. C: MPI_Init (&argc,&argv). Fortran: MPI_INIT (ierr). For Python no explicit initialization is required (mpi4py initializes MPI on import).
  21. 21. MPI MPI_Comm_size. Returns the total number of MPI processes in the specified communicator, such as MPI_COMM_WORLD. If the communicator is MPI_COMM_WORLD, then it represents the number of MPI tasks available to your application. C: MPI_Comm_size (comm,&size). Fortran: MPI_COMM_SIZE (comm,size,ierr). Where comm is MPI_COMM_WORLD. For Python, MPI.COMM_WORLD.size is the total number of MPI processes; MPI.COMM_WORLD.Get_size() also returns the same.
  22. 22. MPI MPI_Comm_rank. Returns the rank of the calling MPI process within the specified communicator. Initially, each process will be assigned a unique integer rank between 0 and (number of tasks - 1) within the communicator MPI_COMM_WORLD. This rank is often referred to as a task ID. If a process becomes associated with other communicators, it will have a unique rank within each of these as well. C: MPI_Comm_rank (comm,&rank). Fortran: MPI_COMM_RANK (comm,rank,ierr). Where comm is MPI_COMM_WORLD. For Python, MPI.COMM_WORLD.rank is the rank of the calling process; MPI.COMM_WORLD.Get_rank() also returns the same.
  24. 24. MPI MPI Hello World Program: https://gist.github.com/4459911
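The gist above contains the actual program; as a rough, minimal sketch (my own, not the author's exact code), an MPI "hello world" in C that uses MPI_Init, MPI_Comm_size, MPI_Comm_rank and MPI_Finalize looks like this:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int size, rank;
    MPI_Init(&argc, &argv);               /* start the MPI environment */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* my rank: 0 .. size-1 */
    printf("Hello, world from rank %d of %d\n", rank, size);
    MPI_Finalize();                       /* shut the MPI environment down */
    return 0;
}

Compile with mpicc and launch with, for example, mpirun -np 4 ./hello; each of the 4 processes prints its own rank.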
  25. 25. MPI. In MPI, blocking message-passing routines are more commonly used. A blocking MPI call means that program execution is suspended until the message buffer is safe to use. The MPI standard specifies that a blocking send or receive does not return until the send buffer is safe to reuse (for MPI_Send) or the receive buffer is ready to use (for MPI_Recv). The statement after MPI_Send can safely modify the memory location of the array a, because the return from MPI_Send indicates either a successful completion of the send or that the buffer containing a has been copied to a safe place. In either case, a's buffer can be safely reused. Also, the return of MPI_Recv indicates that the buffer containing the array b is full and ready to use, so the code segment after MPI_Recv can safely use b.
  26. 26. MPI MPI_Send. Basic blocking send operation. The routine returns only after the application buffer in the sending task is free for reuse. Note that this routine may be implemented differently on different systems: the MPI standard permits the use of a system buffer but does not require it. C: MPI_Send (&buf,count,datatype,dest,tag,comm). Fortran: MPI_SEND (buf,count,datatype,dest,tag,comm,ierr). Python: comm.send(buf,dest,tag).
  27. 27. MPI MPI_Send. MPI_Send(void* message, int count, MPI_Datatype datatype, int destination, int tag, MPI_Comm comm). Parameters: message - initial address of the message; count - number of entries to send; datatype - type of each entry; destination - rank of the receiving process; tag - message tag, a way to identify the type of a message; comm - communicator (MPI_COMM_WORLD).
  28. 28. MPI. MPI Datatypes (a table of the basic datatypes, e.g. MPI_CHAR, MPI_INT, MPI_FLOAT, MPI_DOUBLE).
  29. 29. MPI MPI_Recv. Receives a message and blocks until the requested data is available in the application buffer in the receiving task. C: MPI_Recv (&buf,count,datatype,source,tag,comm,&status). Fortran: MPI_RECV (buf,count,datatype,source,tag,comm,status,ierr). Full C prototype: MPI_Recv(void* message, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status). Parameters: source - rank of the sending process; status - return status.
  30. 30. MPI MPI_Finalize. Terminates the MPI execution environment. Note: all processes must call this routine before exiting. The number of processes running after this routine is called is undefined; it is best not to do anything more than a return after calling MPI_Finalize.
  31. 31. MPI MPI Hello World 2: This MPI program illustrates the use of the MPI_Send and MPI_Recv functions. Basically, the master sends a message, "Hello, world", to the process whose rank is 1, and after receiving the message that process prints it along with its rank. A rough sketch is given below; the full code is at https://gist.github.com/4459944
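A minimal sketch of that send/receive pattern (my own illustration, not the gist's exact code; run it with at least 2 processes):

#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank;
    char msg[20];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* the master sends the greeting to rank 1 (tag 0) */
        strcpy(msg, "Hello, world");
        MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* rank 1 blocks until the message arrives, then prints it */
        MPI_Recv(msg, 20, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
        printf("Rank %d received: %s\n", rank, msg);
    }

    MPI_Finalize();
    return 0;
}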
  32. 32. MPI Collective communication. Collective communication is communication that must involve all processes in the scope of a communicator. We will be using MPI_COMM_WORLD as our communicator; therefore, the collective communication will include all processes.
  33. 33. MPI. MPI_Barrier(comm). This function creates a barrier synchronization in a communicator (MPI_COMM_WORLD). Each task waits at the MPI_Barrier call until all other tasks in the communicator reach the same MPI_Barrier call.
  34. 34. MPI. MPI_Bcast(&message, int count, MPI_Datatype datatype, int root, comm). This function broadcasts the message from the process whose rank is root to all other processes in MPI_COMM_WORLD.
  35. 35. MPI. MPI_Reduce(&message, &receivemessage, int count, MPI_Datatype datatype, MPI_Op op, int root, comm). This function applies a reduction operation across all tasks in MPI_COMM_WORLD and combines the results from each process into one value on the root. MPI_Op includes, for example, MPI_MAX, MPI_MIN, MPI_PROD, MPI_SUM, etc.
  36. 36. MPI. MPI_Scatter(&message, int count, MPI_Datatype, &receivemessage, int count, MPI_Datatype, int root, comm) distributes equal chunks of an array from the root to every process. MPI_Gather(&message, int count, MPI_Datatype, &receivemessage, int count, MPI_Datatype, int root, comm) does the reverse, collecting a piece from every process into one array on the root. A combined sketch of these collectives follows below.
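To tie slides 33-36 together, here is a rough sketch (my own illustration, assuming the number of processes divides 16) that broadcasts a problem size, scatters an array, reduces the partial sums on the root, and gathers the per-rank results:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, size, i;
    int n = 0, chunk, local_sum = 0, total = 0;
    int data[16], piece[16], sums[16];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        n = 16;
        for (i = 0; i < n; i++) data[i] = i + 1;  /* the root prepares 1..16 */
    }

    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD); /* every rank learns n */
    chunk = n / size;                             /* assumes size divides n */

    /* the root deals out equal chunks; each rank sums its own piece */
    MPI_Scatter(data, chunk, MPI_INT, piece, chunk, MPI_INT, 0, MPI_COMM_WORLD);
    for (i = 0; i < chunk; i++) local_sum += piece[i];

    MPI_Barrier(MPI_COMM_WORLD);  /* not strictly needed; shown for completeness */

    /* combine partial sums on the root, and also collect them individually */
    MPI_Reduce(&local_sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Gather(&local_sum, 1, MPI_INT, sums, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("sum of 1..%d = %d\n", n, total);

    MPI_Finalize();
    return 0;
}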
  37. 37. MPI Pi Code: https://gist.github.com/4460013 (a rough sketch is given below).
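A sketch of the usual MPI pi computation by numerical integration of 4/(1+x^2) over [0,1] (my own version under that assumption, not necessarily the gist's exact code):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, size, i;
    int n = 100000;                          /* number of intervals (illustrative) */
    double h, x, local_sum = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each rank integrates 4/(1+x^2) over its own strided subset of intervals */
    h = 1.0 / (double)n;
    for (i = rank; i < n; i += size) {
        x = h * ((double)i + 0.5);
        local_sum += 4.0 / (1.0 + x * x);
    }
    local_sum *= h;

    /* combine the partial results on rank 0 */
    MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}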
  38. 38. Thank You. Ankit Mahato, amahato@iitk.ac.in, fb.com/ankitmahato, IIT Kanpur.
  39. 39. Check out our G+ Page: https://plus.google.com/105183351397774464774/posts
  40. 40. Join our community: share your feelings after you run your first MPI code and have discussions. https://plus.google.com/communities/105106273529839635622
