Parallel Computing
Mohamed Zahran (aka Z)
mzahran@cs.nyu.edu
http://www.mzahran.com
CSCI-UA.0480-003
MPI - I
Many slides in this lecture are adapted
and slightly modified from:
• Gerassimos Barlas
• Peter S. Pacheco
This is What We Target With MPI
Copyright © 2010, Elsevier Inc.
All rights Reserved
We will talk about processes
We Will Study OpenMP for This
We will talk about Threads
MPI processes
• Identify processes by non-negative
integer ranks.
• p processes are numbered 0, 1, 2, ..., p-1
Compilation
mpicc -g -Wall -std=c99 -o mpi_hello mpi_hello.c

• mpicc: wrapper script to compile
• -g: produce debugging information
• -Wall: turn on all warnings
• -std=c99: use the updated C99 language standard
• -o mpi_hello: create this executable file name (as opposed to the default a.out)
• mpi_hello.c: source file

MPI is NOT a language. It is a library, called from
C/C++, Fortran, and any language that can call
libraries written in those.
Execution
mpiexec -n <number of processes> <executable>

mpiexec -n 1 ./mpi_hello   (run with 1 process)
mpiexec -n 4 ./mpi_hello   (run with 4 processes)

You can use mpirun instead of mpiexec,
and -np instead of -n.
Our first MPI program
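These slides showed the full source of mpi_hello.c. A minimal sketch, based on Pacheco's textbook example and consistent with the output shown on the next slide:

#include <stdio.h>
#include <string.h>
#include <mpi.h>

#define MAX_STRING 100

int main(int argc, char* argv[]) {
    char greeting[MAX_STRING];
    int  comm_sz;   /* number of processes */
    int  my_rank;   /* my process rank     */

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank != 0) {
        /* every rank other than 0 sends a greeting to rank 0 */
        sprintf(greeting, "Greetings from process %d of %d !", my_rank, comm_sz);
        MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    } else {
        /* rank 0 prints its own greeting, then the others in rank order */
        printf("Greetings from process %d of %d !\n", my_rank, comm_sz);
        for (int q = 1; q < comm_sz; q++) {
            MPI_Recv(greeting, MAX_STRING, MPI_CHAR, q, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("%s\n", greeting);
        }
    }

    MPI_Finalize();
    return 0;
}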
Execution
mpiexec -n 1 ./mpi_hello
  Greetings from process 0 of 1 !

mpiexec -n 4 ./mpi_hello
  Greetings from process 0 of 4 !
  Greetings from process 1 of 4 !
  Greetings from process 2 of 4 !
  Greetings from process 3 of 4 !
MPI Programs
• Used mainly with C/C++ and Fortran.
  – Efforts for other languages come and go.
  – But any language that can call libraries written in
    those can use MPI capabilities.
• Need to include the mpi.h header file.
• Identifiers defined by MPI start with "MPI_".
  – First letter following the underscore is uppercase:
    • function names and MPI-defined types.
    • Helps to avoid confusion.
  – All letters following the underscore are uppercase:
    • MPI-defined macros
    • MPI-defined constants
MPI Components
• MPI_Init tells MPI to do all the necessary setup.
• No MPI function should be called before it.
• Its two parameters are pointers to the two
  arguments of main().
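For reference, the prototype (parameter names follow Pacheco's text):

int MPI_Init(
    int*    argc_p  /* in/out: pointer to main's argc */,
    char*** argv_p  /* in/out: pointer to main's argv */);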
MPI Components
• MPI_Finalize tells MPI we're done, so it cleans up
  anything allocated for this program.
• No MPI function should be called after it.
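Its prototype takes no arguments:

int MPI_Finalize(void);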
Basic Outline
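The figure on this slide did not survive extraction; a minimal sketch of the standard outline it showed:

#include <mpi.h>

int main(int argc, char* argv[]) {
    /* no MPI calls before this point */
    MPI_Init(&argc, &argv);

    /* ... parallel work using MPI goes here ... */

    MPI_Finalize();
    /* no MPI calls after this point */
    return 0;
}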
Communicators
• A collection of processes that can send
messages to each other.
• MPI_Init defines a communicator that
consists of all the processes created
when the program started.
• Called MPI_COMM_WORLD.
Communicators
• MPI_Comm_size returns the number of processes in
  the communicator.
• MPI_Comm_rank returns my rank (the rank of the
  process making the call).
• The communicator argument will be
  MPI_COMM_WORLD for now.
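The prototypes, annotated with the callouts above (parameter names follow Pacheco's text):

int MPI_Comm_size(
    MPI_Comm comm       /* in:  MPI_COMM_WORLD for now            */,
    int*     comm_sz_p  /* out: number of processes in comm       */);

int MPI_Comm_rank(
    MPI_Comm comm       /* in                                     */,
    int*     my_rank_p  /* out: rank of the process making the call */);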
Communication
• MPI_Send's parameters: the message buffer, the
  number of elements in msg_buf, the type of each
  element in msg_buf, the rank of the receiving
  process, a tag to distinguish messages, and the
  communicator.
• A message sent by a process using one communicator
  cannot be received by a process using another
  communicator.
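The prototype, with the callouts above as comments (parameter names follow Pacheco's text):

int MPI_Send(
    void*        msg_buf_p    /* in: pointer to the message contents */,
    int          msg_size     /* in: number of elements in msg_buf   */,
    MPI_Datatype msg_type     /* in: type of each element in msg_buf */,
    int          dest         /* in: rank of the receiving process   */,
    int          tag          /* in: to distinguish messages         */,
    MPI_Comm     communicator /* in: both processes must use it      */);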
Data types
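The table on this slide did not survive extraction; MPI's predefined datatypes and their standard C equivalents are:

MPI_CHAR            signed char
MPI_SHORT           signed short int
MPI_INT             signed int
MPI_LONG            signed long int
MPI_LONG_LONG       signed long long int
MPI_UNSIGNED_CHAR   unsigned char
MPI_UNSIGNED_SHORT  unsigned short int
MPI_UNSIGNED        unsigned int
MPI_UNSIGNED_LONG   unsigned long int
MPI_FLOAT           float
MPI_DOUBLE          double
MPI_LONG_DOUBLE     long double
MPI_BYTE            (raw bytes; no C equivalent)
MPI_PACKED          (packed data; no C equivalent)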
Communication
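This slide introduced MPI_Recv, whose parameters mirror MPI_Send's, with source in place of dest plus a status argument. The prototype (parameter names follow Pacheco's text):

int MPI_Recv(
    void*        msg_buf_p    /* out: where the received data goes     */,
    int          buf_size     /* in:  capacity of msg_buf, in elements */,
    MPI_Datatype buf_type     /* in:  type of each element             */,
    int          source       /* in:  rank of the sending process      */,
    int          tag          /* in:  must match the sender's tag      */,
    MPI_Comm     communicator /* in                                    */,
    MPI_Status*  status_p     /* out: or pass MPI_STATUS_IGNORE        */);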
Message matching
A message sent by process q (MPI_Send with dest = r)
is received by process r (MPI_Recv with src = q) only if:
• the communicators match,
• the tags match,
• dest = r and src = q,
• and the receive buffer is big enough:
  recv_buf_sz >= send_buf_sz.
Scenario 1
What if process 2's message
arrives before process 1's?
Scenario 1
Wildcard: MPI_ANY_SOURCE
The loop will then be:

for (q = 1; q < comm_sz; q++) {
    MPI_Recv(result, result_sz, result_type, MPI_ANY_SOURCE,
             tag, comm, MPI_STATUS_IGNORE);
}
Scenario 2
What if process 1 sends several messages to
process 0, but they arrive out of order?
– Process 0 is waiting for a message with tag = 0,
  but a tag = 1 message arrives instead!
Scenario 2
Wildcard: MPI_ANY_TAG
The loop will then be:

for (q = 1; q < comm_sz; q++) {
    MPI_Recv(result, result_sz, result_type, q,
             MPI_ANY_TAG, comm, MPI_STATUS_IGNORE);
}
Receiving messages
• A receiver can get a message without
knowing:
– the amount of data in the message,
– the sender of the message,
– or the tag of the message.
How will the output be different if we ..
• use MPI_ANY_SOURCE?
• use MPI_ANY_TAG?
(Hint: in mpi_hello, with MPI_ANY_SOURCE rank 0 prints
the greetings in arrival order, which can differ from run
to run, instead of in rank order.)
status argument
• The last parameter of MPI_Recv is a pointer to an
  MPI_Status struct.
• MPI_Status has at least three fields:
  MPI_SOURCE, MPI_TAG, and MPI_ERROR.

MPI_Status status;
MPI_Recv(..., &status);
status.MPI_SOURCE   /* who sent the message */
status.MPI_TAG      /* with what tag        */
How much data am I receiving?
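This slide showed MPI_Get_count, which recovers the element count from the status filled in by MPI_Recv (recv_type here is a stand-in for whatever MPI datatype was received):

int count;
MPI_Get_count(&status   /* in:  status filled in by MPI_Recv  */,
              recv_type /* in:  type of the received elements */,
              &count    /* out: number of elements received   */);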
Issues
• MPI_Send() behavior is implementation dependent:
  it can buffer the message or block .. or both
  (e.g., buffer small messages, block on large ones)!
• MPI_Recv() always blocks.
  – So, if it returns, we are sure the message
    has been received.
  – Be careful: don't make it block forever!
Conclusions
• MPI is the choice when we have a
  distributed-memory organization.
• It is based on message passing.
• Your goal: reduce the number of messages yet
  increase concurrency.