Message Passing Interface (MPI)
Himanshi Kathuria
Content
MPI
1. Definition
2. Reason For Using MPI
3. History And Evolution
4. General MPI Program Structure
5. Writing MPI Programs
6. Compiling MPI Programs
7. Running MPI Programs
8. Environment Management Routines
MPI Message Passing Interface
A means of machine communication
DEFINITION
• The Message Passing Interface (MPI) Standard is a message-passing library
standard for parallel programming, based on the consensus of the
MPI Forum, which has over 40 participating organizations, including vendors,
researchers, software library developers, and users.
• MPI provides a widely used standard for writing message-passing programs.
Motivation
Motivated by the high computational complexity and memory requirements of
large applications.
HISTORY AND EVOLUTION
• Late 1980s: vendors had their own, incompatible message-passing libraries
• 1989: Parallel Virtual Machine (PVM) developed at Oak Ridge National Laboratory
• 1992: Work on the MPI standard began
• 1994: Version 1.0 of the MPI standard released
• April 1995: MPI 2.0 committee formed
• June 1995: MPI 1.1 published (clarifications)
• July 1996: MPI 1.2 published (clarifications)
• July 1997: MPI 2.0 published
MPI CONCEPT
• MPI uses objects called communicators and groups to define which collections of
processes may communicate with each other.
• Most MPI routines require you to specify a communicator as an argument.
• Communicators and groups will be covered in more detail later. For now, simply use
MPI_COMM_WORLD whenever a communicator is required; it is the predefined
communicator that includes all of your MPI processes.
Rank:
• Within a communicator, every process has its own unique, integer identifier
assigned by the system when the process initializes.
• A rank is sometimes also called a "task ID".
• Ranks are contiguous and begin at zero.
• Used by the programmer to specify the source and destination of messages.
Size:
• Within a communicator, the number of processes is defined as the size.
General MPI Program Structure:
1. MPI include file
2. Variable declarations
3. Initialize MPI environment (parallel code begins)
4. Do work and make message-passing calls
5. Terminate MPI environment
Include Files
#include <mpi.h>
• MPI header file
#include <stdio.h>
• Standard I/O header file
Variables
int main (int argc, char *argv[]) {
    int i;
    int id;   /* Process rank */
    int p;    /* Number of processes */
    void check_circuit (int, int);
• Include argc and argv: they are needed to initialize MPI.
• There is one copy of every variable for each process running this program.
• argc (argument count): the total number of command-line arguments given.
• argv[] (argument vector): a pointer to the argument values.
Initialize MPI ENVIRONMENT
MPI_Init (&argc, &argv);
• First MPI function called by each process
• Not necessarily first executable statement
• Allows system to do any necessary setup
Parameters
• argc [in] Pointer to the number of arguments
• argv [in] Pointer to the argument vector
Terminate MPI ENVIRONMENT
MPI_Finalize();
• Call after all other MPI library calls
• Allows system to free up MPI resources
WRITING MPI PROGRAMS
#include "mpi.h"   /* Gives basic MPI types and definitions */
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* Starts MPI */

    /* ... actual code, including normal C calls and MPI calls ... */

    MPI_Finalize();           /* Ends MPI */
    return 0;
}
Compiling MPI Programs
Compile step:
mpicc -o <executable> <source code>.c (by default the output file is a.out)
Compilation command:
mpicc -o hello_world hello_world.c
Running MPI Programs
Execution step:
mpiexec/mpirun -np <number of processes> <executable>
• To run hello_world with two processes:
mpirun -np 2 hello_world
(You must specify the full path of the executable.)
• To see the commands executed by mpirun:
mpirun -t
• To list all mpirun options:
mpirun -help
Environment Management Routines
MPI environment management routines are used for initializing and terminating the MPI
environment, querying the environment, etc. The most commonly used ones are:
• MPI_Init
• MPI_Comm_size
• MPI_Comm_rank
• MPI_Get_processor_name
• MPI_Finalize
MPI_Comm_size
Determines the number of processes in the group associated with a communicator.
int MPI_Comm_size( MPI_Comm comm, int *size);
Parameters
• comm [in] communicator (handle)
• size [out] number of processes in the group of comm (integer).
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank
Determines the rank of the calling process in the communicator.
int MPI_Comm_rank( MPI_Comm comm, int *rank);
Parameters
• comm [in] communicator (handle)
• rank [out] rank of the calling process in the group of comm (integer)
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Get_processor_name
Gets the name of the processor.
int MPI_Get_processor_name( char *name, int *resultlen );
Parameters
• name [out] A unique specifier for the actual node. This must be an array
of size at least MPI_MAX_PROCESSOR_NAME.
• resultlen [out] Length (in characters) of the name
MPI_Get_processor_name(name, &resultlen);
THANK YOU
