Using MPI

An introduction to MPI.
Transcript

    • 1. Using MPI (Kazuki Ohta)
    • 2. Using MPI
    • 3. HPC • vendor/hardware • program
    • 4. MPI • MPI: Message-Passing Interface • a standard API for message passing • covers communicators, point-to-point communication, synchronization, one-sided communication, collectives, datatypes, and MPI-I/O • implementations: MPICH2 (ANL), MVAPICH (Ohio State), Open MPI • C/C++/Fortran support
    • 5. Transports • TCP/IP, InfiniBand, Myrinet • shared memory • MPICH2 also runs on Windows
    • 6. Hardware support • InfiniBand and Myrinet NICs • RDMA (Remote Direct Memory Access)
    • 7. Scalability • scales to 100,000 processes • MPI on a Million Processors [Pavan 2009] • algorithm
    • 8. William D. Gropp • http://www.cs.uiuc.edu/homes/wgropp/
    • 9. MPICH2 developers • Pavan Balaji • http://www.mcs.anl.gov/~balaji/ • Darius Buntinas • http://www.mcs.anl.gov/~buntinas/
    • 10. Setup (start an mpd ring):
        $ cd
        $ echo "secretword=xxxx" > ~/.mpd.conf
        $ cp .mpd.conf .mpdpasswd
        $ chmod 600 .mpd.conf .mpdpasswd
        $ echo "seduesearcher:16" > mpdhosts
        $ mpdboot -n 1 -f mpdhosts
        $ mpdtrace
        seduesearcher
    • 11. Hello world (hello.c):
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int myrank;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
            printf("rank = %d\n", myrank);
            MPI_Finalize();
            return 0;
        }
    • 12. Compile and run:
        $ mpicc hello.c
        $ ruby -e '16.times{ puts "seduesearcher" }' > machines
        $ mpiexec -machinefile ./machines -n 16 ./a.out
        rank = 4
        rank = 2
        rank = 0
        rank = 1
        rank = 8
        rank = 3
        .....
    • 13. Point-to-Point • 1-to-1 communication • int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm); • int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status); • Eager protocol (small messages are delivered to the receiver immediately) / Rendezvous protocol (large messages wait for a handshake with the receiver)
    • 14. Point-to-Point example:
        #include <mpi.h>
        #include <stdio.h>
        #include <string.h>

        int main(int argc, char **argv)
        {
            int nprocs, myrank;
            MPI_Init(&argc, &argv);
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
            MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
            char msg[1024];
            if (myrank == 0) {
                strcpy(msg, "Hello, from rank0");
                int dst;
                for (dst = 1; dst < nprocs; dst++)
                    MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, dst, 99, MPI_COMM_WORLD);
            } else {
                MPI_Status status;
                MPI_Recv(msg, 1024, MPI_CHAR, MPI_ANY_SOURCE, 99, MPI_COMM_WORLD, &status);
                printf("rank%02d received: msg=%s\n", myrank, msg);
            }
            MPI_Finalize();
            return 0;
        }
    • 15. Collective communication • operations in which all ranks in a communicator participate (a combined usage sketch follows the list of operations below)
    • 16. MPI_Bcast
    • 17. MPI_Scatter
    • 18. MPI_Gather
    • 19. MPI_Allgather
    • 20. MPI_Alltoall
    • 21. MPI_Reduce
    • 22. MPI_Allreduce
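
      A minimal sketch of two of these collectives, MPI_Bcast and MPI_Reduce (this example is not from the original deck; the values are illustrative):

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int nprocs, myrank;
            MPI_Init(&argc, &argv);
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
            MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

            /* MPI_Bcast: copy rank 0's value to every rank. */
            int seed = (myrank == 0) ? 42 : 0;
            MPI_Bcast(&seed, 1, MPI_INT, 0, MPI_COMM_WORLD);

            /* MPI_Reduce: sum one contribution per rank into rank 0. */
            int contribution = myrank + seed, total = 0;
            MPI_Reduce(&contribution, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
            if (myrank == 0)
                printf("total = %d\n", total);

            MPI_Finalize();
            return 0;
        }

      MPI_Allreduce takes the same arguments as MPI_Reduce minus the root rank, and leaves the result on every rank instead of only the root.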
    • 23. MPI-IO • parallel I/O built into MPI • optimizations: Two-Phase I/O, Data Sieving (a usage sketch follows below)
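
      A minimal MPI-IO sketch (not from the original deck; the file name and 16-byte block size are illustrative): each rank writes its own non-overlapping block of a shared file at a rank-dependent offset.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int myrank;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

            /* All ranks open the same file collectively. */
            MPI_File fh;
            MPI_File_open(MPI_COMM_WORLD, "out.dat",
                          MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

            /* Each rank writes 16 bytes at its own offset, so writes never overlap. */
            char buf[16];
            snprintf(buf, sizeof(buf), "rank %02d\n", myrank);
            MPI_File_write_at(fh, (MPI_Offset)myrank * 16, buf, 16, MPI_CHAR,
                              MPI_STATUS_IGNORE);

            MPI_File_close(&fh);
            MPI_Finalize();
            return 0;
        }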
    • 24. Libraries • BLAS: GotoBLAS • http://www.tacc.utexas.edu/tacc-projects/ • ScaLAPACK • http://www.netlib.org/scalapack/ • ...
    • 25. MPI version 3 • MPI Forum • http://meetings.mpi-forum.org/ • http://meetings.mpi-forum.org/MPI_3.0_main_page.php • Fault-Tolerance • Communication/Topology • etc.
