Open MPI


  1. Message-Passing Computing. ITCS 4/5145 Parallel Computing, UNC-Charlotte, B. Wilkinson, 2010. Aug 26, 2010.
  2. Software Tools for Clusters. Late 1980s: Parallel Virtual Machine (PVM) developed; became very popular. Mid 1990s: Message-Passing Interface (MPI) standard defined. Both are based upon the message-passing parallel programming model, and both provide a set of user-level libraries for message passing, for use with sequential programming languages (C, C++, ...).
  3. MPI (Message Passing Interface): a message-passing library standard developed by a group of academic and industrial partners to foster more widespread use and portability. It defines routines, not implementation. Several free implementations exist.
  4. Message-passing concept using library routines.
  5. Message routing between computers is typically done by daemon processes installed on the computers that form the "virtual machine". (Figure: application programs (executables) running on workstations, with messages sent through the network via the daemon processes.) There can be more than one process running on each computer.
  6. Message-Passing Programming using User-level Message-Passing Libraries. Two primary mechanisms are needed: (1) a method of creating processes for execution on different computers, and (2) a method of sending and receiving messages.
  7. Creating processes on different computers.
  8. Multiple Program, Multiple Data (MPMD) model: a different program is executed by each processor. (Figure: separate source files, each compiled to suit its processor, giving a different executable for processor 0 through processor p - 1.)
  9. Single Program, Multiple Data (SPMD) model: the same program is executed by each processor, and control statements select different parts for each processor to execute. This is the basic MPI way. (Figure: one source file compiled to suit each processor, giving the executables for processor 0 through processor p - 1.)
  10. In MPI, processes within a defined communicating group are given a number called a rank, starting from zero onwards. A program uses control constructs, typically IF statements, to direct processes to perform specific actions. Example:
        if (rank == 0) ... /* do this */;
        if (rank == 1) ... /* do this */;
        ...
  11. Master-slave approach. Usually the computation is constructed as a master-slave model: one process (the master) performs one set of actions and all the other processes (the slaves) perform identical actions, although on different data, i.e.
        if (rank == 0) ... /* master do this */;
        else ... /* all slaves do this */;
  12. Static process creation: all executables are started together, done when one starts the compiled programs. This is the normal MPI way.
  13. Multiple Program, Multiple Data (MPMD) model with dynamic process creation: one processor executes the master process, and other processes are started from within the master process. (Figure: process 1 calls spawn(), which starts execution of process 2.) Available in MPI-2. Might find applicability if one does not initially know how many processes are needed, but it does have a process-creation overhead. A sketch of dynamic spawning appears after this item.
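     A hedged sketch of MPI-2 dynamic process creation using MPI_Comm_spawn(); the executable name "worker" and the process count 4 are illustrative assumptions, not from the slides. The call returns an intercommunicator for talking to the newly started processes.

        #include <mpi.h>

        int main(int argc, char *argv[]) {
            MPI_Comm children;   /* intercommunicator to the spawned processes */
            MPI_Init(&argc, &argv);
            /* "worker" is an assumed executable name; 4 is an illustrative count */
            MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                           0, MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);
            /* ... communicate with the spawned processes via the intercommunicator ... */
            MPI_Finalize();
            return 0;
        }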
  14. Methods of sending and receiving messages.
  15. Basic "point-to-point" send and receive routines. Passing a message between processes using send() and recv() library calls, in generic syntax (actual formats later):
        Process 1:  send(&x, 2);
        Process 2:  recv(&y, 1);
     The data moves from x in process 1 to y in process 2.
  16. MPI point-to-point message passing using the MPI_Send() and MPI_Recv() library calls.
  17. Semantics of MPI_Send() and MPI_Recv(). Both are called blocking, which in MPI means that the routine waits until all its local actions have taken place before returning. After returning, any local variables used can be altered without affecting the message transfer. MPI_Send(): the message may not have reached its destination, but the process can continue in the knowledge that the message is safely on its way. MPI_Recv(): returns when the message has been received and the data collected; it will cause the process to stall until the message is received. Other versions of MPI_Send() and MPI_Recv() have different semantics. A small sketch of safe buffer reuse follows this item.
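     A minimal sketch (assumed variable names, not from the slides) showing that the send buffer may be reused as soon as MPI_Send() returns; run with at least two processes:

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[]) {
            int rank, x = 42;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            if (rank == 0) {
                MPI_Send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
                x = 0;   /* safe: MPI_Send() has returned, so the buffer may be altered */
            } else if (rank == 1) {
                MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("received %d\n", x);   /* prints 42, unaffected by the later assignment */
            }
            MPI_Finalize();
            return 0;
        }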
  18. Message tag: used to differentiate between different types of messages being sent. The message tag is carried within the message. If special type matching is not required, a wildcard message tag is used; then recv() will match with any send().
  19. Message tag example. To send a message, x, with message tag 5 from a source process, 1, to a destination process, 2, and assign it to y:
        Process 1:  send(&x, 2, 5);
        Process 2:  recv(&y, 1, 5);
     Process 2 waits for a message from process 1 with a tag of 5. An MPI version using the wildcard tag follows this item.
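     A hedged MPI rendering of a tagged exchange (variable names are illustrative, and the fragment is assumed to sit inside an SPMD main() after MPI_Comm_rank() has set rank, as on slide 23); the receiver uses the standard wildcard MPI_ANY_TAG to accept a message with any tag, and the actual tag can then be read from the status:

        int x = 10, y;
        MPI_Status status;
        if (rank == 1)
            MPI_Send(&x, 1, MPI_INT, 2, 5, MPI_COMM_WORLD);   /* tag 5, to process 2 */
        else if (rank == 2) {
            MPI_Recv(&y, 1, MPI_INT, 1, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            printf("received %d with tag %d\n", y, status.MPI_TAG);
        }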
  20. Unsafe message passing: example. (Figure: process 0 and process 1 each call send(..., 1, ...) / recv(..., 0, ...) in their own code and also inside a library routine lib(). (a) Intended behavior: each send is matched by the corresponding receive. (b) Possible behavior: a message intended for the user code is matched by a receive inside the library, or vice versa, because matching is done only by source and destination.)
  21. MPI solution: "communicators". A communicator defines a communication domain, a set of processes that are allowed to communicate between themselves. The communication domains of libraries can be separated from that of a user program. Communicators are used in all point-to-point and collective MPI message-passing communications.
  22. Default communicator: MPI_COMM_WORLD. It exists as the first communicator for all processes existing in the application. A set of MPI routines exists for forming further communicators. Processes have a "rank" in a communicator. A sketch of forming a new communicator follows this item.
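     A minimal sketch (not from the slides) of forming a new communicator with the standard routine MPI_Comm_split(); here the processes of MPI_COMM_WORLD are split into two groups by the parity of their world rank. The fragment is assumed to run inside an initialized MPI program:

        int world_rank, sub_rank;
        MPI_Comm subcomm;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        /* processes supplying the same "color" end up in the same new communicator */
        MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &subcomm);
        MPI_Comm_rank(subcomm, &sub_rank);   /* rank within the new communicator */
        MPI_Comm_free(&subcomm);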
  23. Using the SPMD computational model:
        int main(int argc, char *argv[]) {
            int myrank;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &myrank);   /* find rank */
            if (myrank == 0)
                master();
            else
                slave();
            MPI_Finalize();
            return 0;
        }
     where master() and slave() are to be executed by the master process and the slave processes, respectively.
  24. Parameters of the blocking send:
        MPI_Send(buf, count, datatype, dest, tag, comm)
            buf       - address of send buffer
            count     - number of items to send
            datatype  - datatype of each item
            dest      - rank of destination process
            tag       - message tag
            comm      - communicator
  25. Parameters of the blocking receive:
        MPI_Recv(buf, count, datatype, src, tag, comm, status)
            buf       - address of receive buffer
            count     - maximum number of items to receive
            datatype  - datatype of each item
            src       - rank of source process
            tag       - message tag
            comm      - communicator
            status    - status after operation
     A sketch of inspecting the status follows this item.
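     A small illustrative fragment (not from the slides) showing what the status argument yields after a receive; the fields MPI_SOURCE and MPI_TAG and the routine MPI_Get_count() are standard MPI, while the buffer name and sizes are assumptions:

        int data[100], nreceived;
        MPI_Status status;
        MPI_Recv(data, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &nreceived);   /* number of items actually received */
        printf("got %d ints from rank %d with tag %d\n",
               nreceived, status.MPI_SOURCE, status.MPI_TAG);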
  26. Example. To send an integer x from process 0 to process 1 (msgtag is any agreed tag value and status is an MPI_Status variable):
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);   /* find rank */
        if (myrank == 0) {
            int x;
            MPI_Send(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);
        } else if (myrank == 1) {
            int x;
            MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
        }
  27. Sample MPI "Hello World" program:
        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>
        #include "mpi.h"

        int main(int argc, char **argv) {
            char message[20];
            int i, rank, size, type = 99;
            MPI_Status status;
            MPI_Init(&argc, &argv);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            if (rank == 0) {
                strcpy(message, "Hello, world");
                for (i = 1; i < size; i++)
                    MPI_Send(message, 13, MPI_CHAR, i, type, MPI_COMM_WORLD);
            } else
                MPI_Recv(message, 20, MPI_CHAR, 0, type, MPI_COMM_WORLD, &status);
            printf("Message from process =%d : %.13s\n", rank, message);
            MPI_Finalize();
            return 0;
        }
  28. The program sends the message "Hello, world" from the master process (rank = 0) to each of the other processes (rank != 0). Then, all processes execute a printf statement. In MPI, standard output is automatically redirected from remote computers to the user's console, so the final result will be
        Message from process =1 : Hello, world
        Message from process =0 : Hello, world
        Message from process =2 : Hello, world
        Message from process =3 : Hello, world
        ...
     except that the order of messages might be different; it is unlikely to be in ascending order of process ID and will depend upon how the processes are scheduled.
  29. Setting up the message-passing environment. Usually the computers are specified in a file, called a hostfile or machines file. The file contains the names of the computers and possibly the number of processes that should run on each computer. An implementation-specific algorithm selects computers from the list to run the user programs.
  30. Users may create their own machines file for their program. Example:
        coit-grid01.uncc.edu
        coit-grid02.uncc.edu
        coit-grid03.uncc.edu
        coit-grid04.uncc.edu
        coit-grid05.uncc.edu
     If a machines file is not specified, a default machines file is used, or it may be that the program will only run on a single computer.
  31. Compiling/executing MPI programs. There are minor differences in the command lines required depending upon the MPI implementation. For the assignments, we will use MPICH or MPICH-2. Generally, a machines file needs to be present that lists all the computers to be used; MPI then uses the computers listed. Otherwise it will simply run on one computer.
  32. MPICH and MPICH-2: both Windows and Linux versions exist, and they are very easy to install on a Windows system.
  33. MPICH commands. Two basic commands: mpicc, a script to compile MPI programs, and mpirun, the original command to execute an MPI program, or mpiexec, the MPI-2 standard command. mpiexec replaces mpirun, although mpirun still exists.
  34. Compiling/executing an (SPMD) MPI program with MPICH, at a command line.
     To start MPI: nothing special.
     To compile MPI programs:
        for C:    mpicc -o prog prog.c
        for C++:  mpiCC -o prog prog.cpp
     To execute an MPI program:
        mpiexec -n no_procs prog
     or
        mpirun -np no_procs prog
     where no_procs is a positive integer giving the number of processes. An example session follows this item.
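     For instance, a session might look like the following (the file name hello.c and the process count 4 are illustrative):

        mpicc -o hello hello.c      # compile with the MPICH wrapper compiler
        mpiexec -n 4 hello          # run the resulting executable as 4 processes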
  35. Executing an MPICH program on multiple computers. Create a file called, say, "machines" containing the list of machines, say:
        coit-grid01.uncc.edu
        coit-grid02.uncc.edu
        coit-grid03.uncc.edu
        coit-grid04.uncc.edu
        coit-grid05.uncc.edu
  36. Then
        mpiexec -machinefile machines -n 4 prog
     would run prog with four processes. Each process would execute on one of the machines in the list; MPI cycles through the list of machines, giving processes to machines. One can also specify the number of processes on a particular machine by adding that number after the machine name, as in the example after this item.
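     A possible machines file with per-machine process counts (the "hostname:n" form is used by recent MPICH versions; treat the exact syntax as an assumption to check against your MPICH documentation):

        coit-grid01.uncc.edu:2
        coit-grid02.uncc.edu:2
        coit-grid03.uncc.edu:1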
  37. Debugging/evaluating parallel programs empirically.
  38. Visualization tools. Programs can be watched as they are executed in a space-time diagram (or process-time diagram). (Figure: processes 1 to 3 plotted against time, with periods of computing, waiting, and message-passing system routines, and arrows showing messages between processes.)
  39. Implementations of visualization tools are available for MPI. An example is the Upshot program visualization system.
  40. Evaluating programs empirically: measuring execution time. To measure the execution time between point L1 and point L2 in the code, one might have a construction such as:
        L1: time(&t1);   /* start timer */
            ...
        L2: time(&t2);   /* stop timer */
            ...
        elapsed_time = difftime(t2, t1);   /* time = t2 - t1 */
        printf("Elapsed time = %5.2f secs", elapsed_time);
  41. MPI provides the routine MPI_Wtime() for returning the wall-clock time (in seconds):
        double start_time, end_time, exe_time;
        start_time = MPI_Wtime();
            ...
        end_time = MPI_Wtime();
        exe_time = end_time - start_time;
     A complete timing sketch follows this item.
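     A minimal runnable sketch (the program structure is assumed, not from the slides) that times a section of work in each process; the MPI_Barrier() call is a common way to start all processes timing from roughly the same point:

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[]) {
            int rank;
            double start_time, end_time;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            MPI_Barrier(MPI_COMM_WORLD);   /* synchronize before starting the timer */
            start_time = MPI_Wtime();

            /* ... work to be timed goes here ... */

            end_time = MPI_Wtime();
            printf("Process %d: elapsed time = %f secs\n", rank, end_time - start_time);

            MPI_Finalize();
            return 0;
        }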
  42. Next topic: discussion of the first assignment, which is to write and execute some simple MPI programs and will include timing their execution.
