Using MPI

An introduction to MPI.

    1. Using MPI (Kazuki Ohta)
    2. Using MPI
    3. • HPC • vendor/hardware • program •
    4. MPI: Message-Passing Interface • a standardized message-passing API • Communicators, Point-to-Point Communication, Synchronization, One-Sided Communication, Collectives, Datatypes, MPI-I/O • implementations: MPICH2 (ANL), MVAPICH (Ohio State), Open MPI • C/C++/Fortran support
    5. Transports: TCP/IP, InfiniBand, Myrinet • shared memory within a node • OS support varies; MPICH2 also runs on Windows
    6. HW support: InfiniBand and Myrinet NICs • RDMA (Remote Direct Memory Access)
    7. Scales to 100,000 processes • MPI on a Million Processors [Pavan 2009] • algorithm
    8. William D. Gropp • http://www.cs.uiuc.edu/homes/wgropp/
    9. MPICH2 developers • Pavan Balaji • http://www.mcs.anl.gov/~balaji/ • Darius Buntinas • http://www.mcs.anl.gov/~buntinas/
    10. MPD setup:
        $ cd
        $ echo "secretword=xxxx" > ~/.mpd.conf
        $ cp .mpd.conf .mpdpasswd
        $ chmod 600 .mpd.conf .mpdpasswd
        $ echo "seduesearcher:16" > mpdhosts
        $ mpdboot -n 1 -f mpdhosts
        $ mpdtrace
        seduesearcher
    11. Hello world (hello.c):
        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            int myrank;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
            printf("rank = %d\n", myrank);
            MPI_Finalize();
            return 0;
        }
    12. Compile and run:
        $ mpicc hello.c
        $ ruby -e '16.times{ puts "seduesearcher" }' > machines
        $ mpiexec -machinefile ./machines -n 16 ./a.out
        rank = 4
        rank = 2
        rank = 0
        rank = 1
        rank = 8
        rank = 3
        .....
    13. Point-to-Point communication • 1-to-1 message exchange • int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm); • int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status); • Eager Protocol / Rendezvous Protocol
    14. Point-to-Point example:
        #include <stdio.h>
        #include <string.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            int nprocs, myrank;
            MPI_Init(&argc, &argv);
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
            MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
            char msg[1024];
            if (myrank == 0) {
                strcpy(msg, "Hello, from rank0");
                int dst;
                for (dst = 1; dst < nprocs; dst++)
                    MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, dst, 99, MPI_COMM_WORLD);
            } else {
                MPI_Status status;
                MPI_Recv(msg, 1024, MPI_CHAR, MPI_ANY_SOURCE, 99, MPI_COMM_WORLD, &status);
                printf("rank%02d received: msg=%s\n", myrank, msg);
            }
            MPI_Finalize();
            return 0;
        }
    15. Collective communication • one-to-many / many-to-many operations (see the sketches after this transcript)
    16. MPI_Bcast
    17. MPI_Scatter
    18. MPI_Gather
    19. MPI_Allgather
    20. MPI_Alltoall
    21. MPI_Reduce
    22. MPI_Allreduce
    23. MPI-IO • parallel I/O • optimizations: Two-Phase I/O, Data Sieving (see the MPI-IO sketch after this transcript)
    24. BLAS • GotoBLAS • http://www.tacc.utexas.edu/tacc-projects/ • ScaLAPACK • http://www.netlib.org/scalapack/ • ...
    25. MPI version 3 • MPI Forum • http://meetings.mpi-forum.org/ • http://meetings.mpi-forum.org/MPI_3.0_main_page.php • Fault Tolerance • Communication/Topology • etc.
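
To make slides 16-20 concrete, here is a minimal C sketch of MPI_Bcast, MPI_Scatter, and MPI_Gather. It is not from the original deck; the root rank 0, the one-int-per-rank buffer sizes, and the sample values are illustrative assumptions.

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int nprocs, myrank;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

        /* MPI_Bcast: rank 0 sends one int to every rank. */
        int n = (myrank == 0) ? 42 : 0;
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* MPI_Scatter: rank 0 hands one distinct int to each rank. */
        int *all = NULL;
        if (myrank == 0) {
            all = malloc(nprocs * sizeof(int));
            for (int i = 0; i < nprocs; i++) all[i] = i * i;
        }
        int mine;
        MPI_Scatter(all, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* MPI_Gather: the inverse; rank 0 collects one int from each rank. */
        MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (myrank == 0) {
            for (int i = 0; i < nprocs; i++)
                printf("gathered[%d] = %d\n", i, all[i]);
            free(all);
        }
        MPI_Finalize();
        return 0;
    }

Compile and launch it the same way as the hello.c example, e.g. mpiexec -n 16 ./a.out.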
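
A companion sketch for slides 21-22: MPI_Reduce combines one value from every rank at the root, while MPI_Allreduce leaves the combined result on every rank (note it takes no root argument). The use of MPI_SUM is an assumption for illustration; MPI_MAX, MPI_MIN, and the other predefined operations work the same way.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int nprocs, myrank;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

        int value = myrank + 1;  /* each rank contributes rank + 1 */
        int sum = 0;

        /* MPI_Reduce: the sum arrives only at the root, rank 0. */
        MPI_Reduce(&value, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (myrank == 0)
            printf("reduce: sum = %d\n", sum);  /* nprocs*(nprocs+1)/2 */

        /* MPI_Allreduce: every rank ends up with the same sum. */
        MPI_Allreduce(&value, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        printf("rank %d: allreduce sum = %d\n", myrank, sum);

        MPI_Finalize();
        return 0;
    }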
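
For slide 23, a minimal MPI-IO sketch in which each rank writes its own block of one shared file at an explicit offset; the file name out.dat and the 16-int block size are assumptions. Optimizations such as Two-Phase I/O kick in for the collective variants (e.g. MPI_File_write_at_all), and Data Sieving for noncontiguous accesses.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int myrank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

        /* Each rank fills a small buffer with its own rank number. */
        int buf[16];
        for (int i = 0; i < 16; i++) buf[i] = myrank;

        /* All ranks open the same file collectively. */
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Independent write at a per-rank offset, so blocks do not overlap. */
        MPI_Offset offset = (MPI_Offset)myrank * 16 * sizeof(int);
        MPI_File_write_at(fh, offset, buf, 16, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }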
