Lecture 1
Introduction to MPI
Dr. Muhammad Hanif Durad
Department of Computer and Information Sciences
Pakistan Institute of Engineering and Applied Sciences
hanif@pieas.edu.pk
Some slides have been adapted, with thanks, from other
lectures available on the Internet
Lecture Outline
 Models for Communication for PP
 MPI Libraries
 Features of MPI
 Programming With MPI
 Using MPI Manual
 Compilation and running a program
 Basic concepts
 Homework
Models for Communication for Parallel Programs (PP)
• Parallel program = program composed of tasks (processes) which communicate to accomplish an overall computational goal
• Two prevalent models for communication:
  – Shared memory (SM)
  – Message passing (MP)
Shared Memory Communication
• Processes in a shared memory program communicate by accessing shared variables and data structures
• Basic shared memory primitives
  – Read from a shared variable
  – Write to a shared variable
[Figure: basic shared memory multiprocessor architecture – processors and memories connected through interconnection media]
Accessing Shared Variables
 Conflicts may arise if multiple processes want to write to a
shared variable at the same time.
 Programmer, language, and/or architecture must provide
means of resolving conflicts
[Figure: processes A and B both increment shared variable x; each performs read x, compute x+1, write x]
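A minimal C sketch of this conflict (my own illustration, not from the original slides), using POSIX threads in place of separate processes; the shared variable x, the thread count, and the function names are assumptions made for the example:

#include <pthread.h>
#include <stdio.h>

int x = 0;                        /* shared variable x            */

void *increment(void *arg)        /* plays the role of proc. A/B  */
{
    int tmp = x;                  /* read x                       */
    tmp = tmp + 1;                /* compute x + 1                */
    x = tmp;                      /* write x                      */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Without a means of resolving the conflict the final value may be 1 or 2. */
    printf("x = %d\n", x);
    return 0;
}

Compile with something like cc -pthread race.c; repeated runs may print different results, which is exactly the conflict the slide describes.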
Message Passing Communication
• Processes in a message passing program communicate by passing messages
• Basic message passing primitives
– Send(parameter list)
– Receive(parameter list)
– Parameters depend on the software and can be complex
[Figure: processes A and B exchanging a message]
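A minimal point-to-point sketch in C (my own illustration, not from the slides), assuming the program is started with at least two processes; the value sent and the tag 0 are arbitrary:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                                /* process A: Send(parameter list)    */
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {                         /* process B: Receive(parameter list) */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("B received %d from A\n", value);
    }

    MPI_Finalize();
    return 0;
}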
MPI Libraries (1/2)
• All communication and synchronization require subroutine calls
• No shared variables
• Programs run on a single processor just like any uniprocessor program, except for calls to the message passing library
MPI Libraries (2/2)
• Subroutines for
  – Communication
    – Pairwise or point-to-point: Send and Receive
    – Collectives: all processors get together to
      – Move data: Broadcast, Scatter/Gather
      – Compute and move: sum, product, max, … of data on many processors
  – Synchronization
    – Barrier
    – No locks because there are no shared variables to protect
  – Enquiries
    – How many processes? Which one am I? Any messages waiting?
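A sketch of my own (not part of the slide) combining the enquiry, collective, and synchronization routines named above; the broadcast value 100 is arbitrary:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, root_data = 0, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* enquiry: how many processes?   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* enquiry: which one am I?       */

    if (rank == 0) root_data = 100;
    MPI_Bcast(&root_data, 1, MPI_INT, 0, MPI_COMM_WORLD);   /* move data: broadcast */

    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0,
               MPI_COMM_WORLD);                /* compute and move: sum over ranks */

    MPI_Barrier(MPI_COMM_WORLD);               /* synchronization                 */
    if (rank == 0) printf("sum of ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}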
Novel Features of MPI
• Communicators encapsulate communication spaces for library safety
• Datatypes reduce copying costs and permit heterogeneity
• Multiple communication modes allow precise buffer management
• Extensive collective operations for scalable global communication
• Process topologies permit efficient process placement, user views of process layout
• Profiling interface encourages portable tools
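As an illustration of the datatype feature, a hedged sketch (not from the slides) that builds a derived type describing one column of a small matrix; the 4x4 size and the name column are assumptions:

#include "mpi.h"

int main(int argc, char *argv[])
{
    double matrix[4][4];          /* the data the new type will describe              */
    MPI_Datatype column;
    MPI_Init(&argc, &argv);

    /* One column of a 4x4 row-major matrix: 4 blocks of 1 double,
       neighbouring blocks 4 doubles apart.                                            */
    MPI_Type_vector(4, 1, 4, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    /* e.g. MPI_Send(&matrix[0][2], 1, column, dest, tag, comm) would send the third
       column without first copying it into a contiguous buffer.                       */

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}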
Programming With MPI
• MPI is a library
  – All operations are performed with routine calls
  – Basic definitions in
    – mpi.h for C/C++
    – mpif.h for Fortran 77 and 90
    – MPI module for Fortran 90 (optional)
• First Program:
  – Write out process number
  – Write out some variables (illustrate separate name space)
Finding Out About the Environment
• Two important questions that arise early in a parallel program are:
  – How many processes are participating in this computation?
  – Which one am I?
• MPI provides functions to answer these questions:
  – MPI_Comm_size reports the number of processes.
  – MPI_Comm_rank reports the rank, a number between 0 and size-1, identifying the calling process.
Hello (C)
#include "mpi.h"
#include <stdio.h>
int main( int argc, char *argv[] )
{
int rank, size;
MPI_Init( &argc, &argv );
MPI_Comm_rank( MPI_COMM_WORLD, &rank );
MPI_Comm_size( MPI_COMM_WORLD, &size );
printf( "I am %d of %dn", rank, size );
MPI_Finalize();
return 0;
}
Hello (Fortran)
program main
include 'mpif.h'
integer ierr, rank, size
call MPI_INIT( ierr )
call MPI_COMM_RANK( MPI_COMM_WORLD, rank, ierr )
call MPI_COMM_SIZE( MPI_COMM_WORLD, size, ierr )
print *, 'I am ', rank, ' of ', size
call MPI_FINALIZE( ierr )
end
Hello (C++)
#include "mpi.h"
#include <iostream>
int main( int argc, char *argv[] )
{
int rank, size;
MPI::Init(argc, argv);
rank = MPI::COMM_WORLD.Get_rank();
size = MPI::COMM_WORLD.Get_size();
std::cout << "I am " << rank << " of " << size <<
"n";
MPI::Finalize();
return 0;
}
Using MPI Manual command
• Linux
  – man ls (?)
• MPI
  – hanif@virtu:~> mpiman
    What manual page do you want?
  – mpiman -help
  – mpiman -V
  – cd /opt/mpich/ch-p4/bin
  – ls
    mpif77 mpif90 mpicc mpirun mpiCC
Compilation in C
• mpiman mpicc
• To compile a single file foo.c, use
  – mpicc -c foo.c
• To link the output and make an executable, use
  – mpicc -o foo foo.o
• Combining compilation and linking in a single command is a convenient way to build simple programs.
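For example, assuming the same file name as above, the compile and link steps can be combined into one command:

mpicc -o foo foo.c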
Compilation and running in C
 mpicc -o helloc hello1.c
 mpirun -np 4 ./helloc
 I am 0 of 4
 I am 2 of 4
 I am 3 of 4
 I am 1 of 4
Compilation and running in Fortran 90
 mpif90 -o hellof hello1.f90
 mpirun -np 4 ./hellof
 I am 0 of 4
 I am 2 of 4
 I am 1 of 4
 I am 3 of 4
Compilation and running in C++
 mpiCC -o hellocpp hello1.cpp
 mpirun -np 4 ./hellocpp
 I am 0 of 4
 I am 2 of 4
 I am 1 of 4
 I am 3 of 4
Notes on Hello World
• All MPI programs begin with MPI_Init and end with MPI_Finalize
• MPI_COMM_WORLD is defined by mpi.h (in C) or mpif.h (in Fortran) and designates all processes in the MPI “job”
• Each statement executes independently in each process
  – including the printf/print statements
• I/O is not part of MPI-1 but is in MPI-2
  – print and write to standard output or error are not part of either MPI-1 or MPI-2
  – output order is undefined (may be interleaved by character, line, or blocks of characters)
• The MPI-1 Standard does not specify how to run an MPI program, but many implementations provide
  mpirun -np 4 a.out
What have you learnt?
Some Basic Concepts
 Processes can be collected into groups
 Each message is sent in a context, and must be received
in the same context
 Provides necessary support for libraries
 A group and context together form a communicator
 A process is identified by its rank in the group associated
with a communicator
 There is a default communicator whose group contains all
initial processes, called MPI_COMM_WORLD
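A sketch of these ideas (my own example, not from the slide): MPI_Comm_split derives a new communicator from MPI_COMM_WORLD, and each process then has a different rank in the group associated with the new communicator. The even/odd split and the variable names are arbitrary choices:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int world_rank, sub_rank;
    MPI_Comm sub_comm;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split the default communicator into two new communicators, one for
       even ranks and one for odd ranks; each process gets a new rank in
       the group of the communicator it ends up in.                        */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &sub_comm);
    MPI_Comm_rank(sub_comm, &sub_rank);

    printf("world rank %d -> sub rank %d\n", world_rank, sub_rank);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}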
General MPI Program Structure
https://computing.llnl.gov/tutorials/mpi/
Homework
• Modify the previous program so that the name of the processor executing each process is also printed out.
• Use the MPI routine MPI_Get_processor_name(processor_name, &namelen), but in a C++ context.
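A possible starting point in C (the same call can also be made from C++ source); adapting it to the MPI:: bindings used on the Hello (C++) slide is the exercise. The buffer name and output format are assumptions:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(processor_name, &namelen);   /* fills in the node name */

    printf("Process %d runs on %s\n", rank, processor_name);

    MPI_Finalize();
    return 0;
}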