Message Passing Interface

  1. MPI Communications
     • Point to Point
     • Collective Communication
     • Data Packaging
  2. Point-to-Point Communication: Send and Receive
     • MPI_Send/MPI_Recv provide point-to-point communication
       – the synchronization protocol is not fully specified
       – what are the possibilities?
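
     A minimal sketch of the blocking exchange described above, assuming an MPI_Init/MPI_Finalize wrapper, at least two ranks, and illustrative variable names and tag:

        int rank, value;
        MPI_Status status;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            value = 42;                                        /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);           /* dest 1, tag 0 */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);  /* source 0, tag 0 */
        }
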
  3. Send and Receive: Synchronization
     • Fully synchronized (rendezvous)
       – Send and Receive complete simultaneously
         • whichever process reaches the Send/Receive first waits
       – provides a synchronization point (up to network delays)
     • Buffered
       – Receive must wait until the message is received
       – Send completes when the message is moved to a buffer, freeing the message memory for reuse
  4. Send and Receive: Synchronization
     • Asynchronous
       – the sending process may proceed immediately
         • does not need to wait until the message is copied to a buffer
         • must check for completion before reusing the message memory
       – the receiving process may proceed immediately
         • will not have the message to use until it is received
         • must check for completion before using the message
  5. MPI Send and Receive
     • MPI_Send/MPI_Recv are blocking, but buffering is unspecified
       – MPI_Recv suspends until the message is received
       – MPI_Send may be fully synchronous or may be buffered (implementation dependent)
     • Variants allow synchronous or buffered behavior to be specified explicitly
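
     A sketch of the explicit-mode variants alluded to above: MPI_Ssend forces rendezvous (synchronous) completion, while MPI_Bsend completes once the message is copied into a user-attached buffer. The payload and buffer size are illustrative.

        int value = 42;
        MPI_Ssend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* completes only after the receive has started */

        int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;        /* room for one int message */
        char *buf = malloc(bufsize);                           /* assumes <stdlib.h> */
        MPI_Buffer_attach(buf, bufsize);
        MPI_Bsend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* completes once copied into buf */
        MPI_Buffer_detach(&buf, &bufsize);                     /* blocks until buffered sends finish */
        free(buf);
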
  6. Asynchronous Send and Receive
     • MPI_Isend()/MPI_Irecv() are non-blocking; control returns to the program as soon as the call is made
     • The syntax is the same as for Send and Recv, except that an MPI_Request* parameter is added to Isend and replaces the MPI_Status* parameter of Recv
  7. Detecting Completion
     • MPI_Wait(&request, &status)
       – request matches the request from Isend or Irecv
       – status returns a status equivalent to that of Recv when complete
       – for a send, blocks until the message is buffered or sent, so the message variable is free to reuse
       – for a receive, blocks until the message has been received and is ready to use
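
     A sketch tying the non-blocking calls to MPI_Wait; the two-rank exchange, tag, and variable names are illustrative.

        int rank, inbuf, outbuf = 7;
        MPI_Request sendreq, recvreq;
        MPI_Status status;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int partner = (rank == 0) ? 1 : 0;        /* assumes exactly two ranks */

        MPI_Irecv(&inbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &recvreq);   /* returns immediately */
        MPI_Isend(&outbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &sendreq);  /* returns immediately */

        /* ... do computation that overlaps the communication ... */

        MPI_Wait(&sendreq, MPI_STATUS_IGNORE);    /* outbuf may now be reused */
        MPI_Wait(&recvreq, &status);              /* inbuf now holds the received value */
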
  8. Detecting Completion
     • MPI_Test(&request, &flag, &status)
       – request and status as for MPI_Wait
       – does not block
       – flag indicates whether the message has been sent/received
       – enables code that repeatedly checks for communication completion
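
     A sketch of the polling pattern this enables, continuing the recvreq, status, and inbuf variables from the previous sketch; do_useful_work() is a hypothetical stand-in for overlapped computation.

        int flag = 0;
        while (!flag) {
            MPI_Test(&recvreq, &flag, &status);   /* non-blocking completion check */
            if (!flag) {
                do_useful_work();                 /* hypothetical overlapped work */
            }
        }
        /* inbuf is safe to read once flag is nonzero */
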
  9. Collective Communications
     • One to many (Broadcast, Scatter)
     • Many to one (Reduce, Gather)
     • Many to many (Allreduce, Allgather)
  10. Broadcast
     • A selected processor sends to all other processors in the communicator
     • Any type of message can be sent
     • The size of the message should be known by all (it could be broadcast first)
     • Can be optimized within the system for any given architecture
  11. MPI_Bcast() Syntax
     MPI_Bcast(mess, count, MPI_INT, root, MPI_COMM_WORLD);
       mess             pointer to the message buffer
       count            number of items sent
       MPI_INT          type of the items sent (count and type should be the same on all processors)
       root             rank of the sending processor
       MPI_COMM_WORLD   communicator within which the broadcast takes place
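
     A sketch of the call in context, assuming an int array mess filled in on the root before the call; every rank issues the same call and ends up with the root's values.

        int root = 0, count = 4, rank;
        int mess[4];
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == root) {
            for (int i = 0; i < count; i++) mess[i] = i * i;    /* illustrative data */
        }
        MPI_Bcast(mess, count, MPI_INT, root, MPI_COMM_WORLD);  /* same call on every rank */
        /* all ranks now hold the root's copy of mess */
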
  12. MPI_Barrier()
     MPI_Barrier(MPI_COMM_WORLD);
       MPI_COMM_WORLD   communicator within which the barrier takes place
     • Provides barrier synchronization without the message of a broadcast
  13. Reduce
     • All processors send to a single processor; the reverse of broadcast
     • Information must be combined at the receiver
     • Several combining functions are available
       – MAX, MIN, SUM, PROD, LAND, BAND, LOR, BOR, LXOR, BXOR, MAXLOC, MINLOC
  14. MPI_Reduce() Syntax
     MPI_Reduce(&dataIn, &result, count, MPI_DOUBLE, MPI_SUM, root, MPI_COMM_WORLD);
       dataIn           data sent from each processor
       result           stores the result of the combining operation
       count            number of items in each of dataIn and result
       MPI_DOUBLE       data type of dataIn and result
       MPI_SUM          combining operation
       root             rank of the processor receiving the data
       MPI_COMM_WORLD   communicator
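
     A sketch of a sum reduction using the names above; each rank contributes one double and only root's result is defined after the call.

        int rank, root = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        double dataIn = (double)rank;    /* illustrative contribution: each rank sends its rank */
        double result = 0.0;             /* meaningful only on root after the call */
        MPI_Reduce(&dataIn, &result, 1, MPI_DOUBLE, MPI_SUM, root, MPI_COMM_WORLD);
        if (rank == root) {
            printf("sum of ranks = %f\n", result);   /* assumes <stdio.h> */
        }
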
  15. MPI_Reduce()
     • Data and result may be arrays; the combining operation is applied element by element
     • It is illegal to alias dataIn and result
       – avoids large overhead in the function definition
  16. MPI_Scatter()
     • Spreads an array to all processors
     • The source is an array on the sending processor
     • Each receiver, including the sender, gets the piece of the array corresponding to its rank in the communicator
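
     A sketch scattering one double to each rank; the source array is only significant on root, and the counts are per receiver. Variable names are illustrative.

        int rank, size, root = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        double *sendbuf = NULL;
        if (rank == root) {
            sendbuf = malloc(size * sizeof(double));               /* assumes <stdlib.h> */
            for (int i = 0; i < size; i++) sendbuf[i] = 10.0 * i;  /* piece destined for rank i */
        }
        double piece;
        MPI_Scatter(sendbuf, 1, MPI_DOUBLE, &piece, 1, MPI_DOUBLE, root, MPI_COMM_WORLD);
        /* rank r now holds sendbuf[r] in piece */
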
  17. MPI_Gather()
     • The opposite of Scatter
     • Values on all processors (in the communicator) are collected into an array on the receiver
     • Array locations correspond to the ranks of the processors
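
     The mirror-image sketch, continuing the rank, size, and root variables from the scatter sketch: each rank sends one double and root receives them in rank order.

        double mine = 10.0 * rank;       /* illustrative contribution from each rank */
        double *recvbuf = NULL;
        if (rank == root) {
            recvbuf = malloc(size * sizeof(double));   /* only the receiver needs the array */
        }
        MPI_Gather(&mine, 1, MPI_DOUBLE, recvbuf, 1, MPI_DOUBLE, root, MPI_COMM_WORLD);
        /* on root, recvbuf[r] now holds rank r's value */
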
  18. Collective Communications, Under the Hood
  19. Many-to-Many Communications
     • MPI_Allreduce
       – syntax like Reduce, except there is no root parameter
       – all nodes get the result
     • MPI_Allgather
       – syntax like Gather, except there is no root parameter
       – all nodes get the resulting array
     • Underneath: a virtual butterfly network
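
     A sketch of both rootless variants, assuming rank and size have been obtained as in the earlier sketches; every rank ends up with the combined result or the full array.

        double local = (double)rank, globalSum;
        MPI_Allreduce(&local, &globalSum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        /* every rank now holds the global sum */

        double *all = malloc(size * sizeof(double));   /* assumes <stdlib.h> */
        MPI_Allgather(&local, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, MPI_COMM_WORLD);
        /* every rank now holds all ranks' contributions, indexed by rank */
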
  20. Data Packaging
     • Needed to combine irregular, non-contiguous data into a single message
     • Pack/unpack: explicitly pack data into a buffer, send it, then unpack the data from the buffer
     • Derived data types: MPI heterogeneous data types which can be sent as a message
  21. MPI_Pack() Syntax
     MPI_Pack(Aptr, count, MPI_DOUBLE, buffer, size, &pos, MPI_COMM_WORLD);
       Aptr             pointer to the data to pack
       count            number of items to pack
       MPI_DOUBLE       type of the items
       buffer           buffer being packed
       size             size of the buffer (in bytes)
       pos              position in the buffer (in bytes), updated by the call
       MPI_COMM_WORLD   communicator
  22. MPI_Unpack()
     • Reverses the operation of MPI_Pack()
     MPI_Unpack(buffer, size, &pos, Aptr, count, MPI_DOUBLE, MPI_COMM_WORLD);
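
     A sketch of the pack/send/unpack pattern: rank 0 packs an int and a double into a byte buffer and sends it as MPI_PACKED; rank 1 receives and unpacks in the same order. The buffer size, values, and tag are illustrative.

        char buffer[100];
        int pos = 0;
        int n = 5;
        double x = 3.14;

        /* on rank 0: pack, then send the packed bytes */
        MPI_Pack(&n, 1, MPI_INT, buffer, 100, &pos, MPI_COMM_WORLD);
        MPI_Pack(&x, 1, MPI_DOUBLE, buffer, 100, &pos, MPI_COMM_WORLD);
        MPI_Send(buffer, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD);

        /* on rank 1: receive, then unpack in the same order */
        MPI_Status status;
        MPI_Recv(buffer, 100, MPI_PACKED, 0, 0, MPI_COMM_WORLD, &status);
        pos = 0;
        MPI_Unpack(buffer, 100, &pos, &n, 1, MPI_INT, MPI_COMM_WORLD);
        MPI_Unpack(buffer, 100, &pos, &x, 1, MPI_DOUBLE, MPI_COMM_WORLD);
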
