2. MPI INTRODUCTION
The first difference is in the price of communication: the time
needed to exchange a certain amount of data between
processors
MPI is a standardized means of exchanging messages between
multiple computers running a parallel program across
distributed memory
3. MPI (MESSAGE PASSING INTERFACE)
MPI is not a language; all MPI operations are expressed as
functions or subroutines
The MPI Standard defines the syntax and semantics of these operations
An MPI program consists of autonomous processes that are able to
execute their own code, in the sense of MIMD
4. MPI (MESSAGE PASSING INTERFACE)
MPI provides at least two operations:
send(message) and receive(message)
Messages sent by a process can be of either fixed or variable size
Fixed size: the system-level implementation is straightforward, but it
makes the task of programming more difficult
Variable size: the system-level implementation is harder, but it makes
programming simpler
5. MPI (MESSAGE PASSING INTERFACE)
If processes P and Q want to communicate, they send messages
to and receive messages from each other
So we need a communication link between them
7. Header file mpi.h must be included to compile MPI code.
MPI_Init(): from this point on, processes can collaborate and
send/receive messages, until MPI_Finalize()
Finalizing frees all the resources reserved by MPI.
8. BASIC DATA TYPES RECOGNIZED BY MPI
MPI DATATYPE HANDLE   C DATATYPE
MPI_INT               int
MPI_SHORT             short
MPI_LONG              long
MPI_FLOAT             float
MPI_DOUBLE            double
MPI_CHAR              char
9. MPI also provides routines that let the process determine its
process ID, as well as the number of processes that have been
created.
11. MPI_Comm_size(): returns the total number of processes
MPI_Comm_rank(): returns the rank (process id) of the calling
process
The MPI rank is used to identify a particular process
It is an integer ranging from 0 to n-1, where n is the number of processes
It is necessary for a process to know its rank.
12. MPI COMMUNICATOR
A process group and context together form an MPI
Communicator
MPI Communicator – holds a group of processes that can
communicate with each other.
MPI_COMM_WORLD is default communicator that contains
all processes available for use.
13. SEND AND RECEIVE MPI
Process A decides to send a message to process B. Process A packs
all the information into a buffer and sends it. When the send
operation completes, process A knows the data has been transmitted.
14. SYNTAX OF SEND OPERATION
MPI_SEND (buf, count, datatype, dest, tag,
comm)
MPI_SEND does not return until the send buffer can be safely
reused, which may require a matching MPI_RECV to be posted
Buf - pointer to send buffer, the data to send
Count - number of data items (non-negative)
Datatype - type of data
Dest - rank of the destination process
Tag - message tag
Comm - communicator (handle)
15. SYNTAX OF RECEIVE OPERATION
MPI_RECV (buf, count, datatype, source, tag, comm, status)
MPI_RECV does not complete until a matching message has been received
Buf - pointer to receive buffer for the incoming data
Count - maximum number of data items (non-negative)
Datatype - type of data
Source - rank of the sending process
Tag - message tag
Comm - communicator (handle)
Status - contains further information about the received message
(e.g. the actual source and tag)
17. COLLECTIVE MPI COMMUNICATION
MPI Collective operations are called by all processes in a
communicator
Some of the collective MPI operations are:
MPI_BARRIER
MPI_BCAST
MPI_SCATTER
MPI_GATHER
18. MPI BARRIERS
Like many other programming utilities, MPI_Barrier is a process
lock that holds each process at a certain line of code until all
processes have reached that line of code.
MPI_Barrier can be called as such:
MPI_Barrier(MPI_Comm comm)
19. MPI_BCAST
Implements a one-to-all broadcast operation
Root process sends its data to all other processes
MPI_BCAST(inbuf, incnt, intype, root, comm)
Inbuf: buffer holding the data at root; receives the data on all
other processes
Incnt: number of data items
Intype: type of data
Root: rank of the broadcasting process
20. MPI_GATHER
All-to-one operation, also called by all processes in the
communicator.
Gather data from participating processes into a single structure
21. MPI_SCATTER
Break a structure into portions and distribute those portions
to other processes
Inverse of MPI_Gather
Data is scattered to the processes in equal-sized parts
22. COLLECTIVE MPI DATA MANIPULATIONS
MPI provides a set of operations that perform simple
manipulations on the transferred data
These manipulations are based on the data-reduction paradigm,
which reduces a set of data items into a smaller set.
23. COLLECTIVE MPI DATA MANIPULATIONS
MPI_MAX, MPI_MIN: return the maximum or minimum of the data items
MPI_SUM, MPI_PROD: return the sum or product of all data items
MPI_LAND, MPI_LOR, MPI_BAND, MPI_BOR: return the logical or
bitwise AND/OR across the data items.
24. MPI REDUCTION
The MPI operation that implements all kinds of data reduction is
MPI_Reduce: works like MPI_Gather followed by a
manipulation operation on the root process.
25. MPI REDUCTION
The MPI operation that implements data reduction with the result on
all processes is
MPI_Allreduce: works like MPI_Reduce followed by MPI_Bcast
The final result is available to all processes.
26. POINT-TO-POINT COMMUNICATION (PING-PONG)
PING-PONG communication is a blocking point-to-point pattern,
not a collective operation
It involves two processes sending and receiving a message back and
forth over MPI
Like any MPI program, a ping-pong program is launched with the
mpiexec command