MPI Collective
Communication
Parallel Computing for Heterogeneous Platforms
MPI_Allreduce
• MPI_Allreduce is the equivalent of doing an MPI_Reduce followed by an MPI_Bcast
MPI_Reduce vs MPI_Allreduce
[Source]: https://mpitutorial.com/tutorials/mpi-reduce-and-allreduce/
MPI_Allreduce (Syntax)
Exercise (ex1.c)
• Execute the code given in the previous slide
• Make some changes to initialize the data in a different way, then compare the results (print the receive buffer on different MPI processes)
• Answer: lab2.1/ex1.c
MPI_Gather
MPI_Gather
Exercise (ex2.c)
• Execute the code given in the previous slide
• Modify the send buffer and verify that the code still correctly gathers the send-buffer elements from the different MPI processes
MPI_Scatter
• MPI_Scatter is a collective routine that is very similar to MPI_Bcast
• MPI_Scatter involves a designated root process sending data to all processes in a communicator
• The primary difference between MPI_Bcast and MPI_Scatter is small but important:
• MPI_Bcast sends the same piece of data to all processes, while MPI_Scatter sends distinct chunks of an array to different processes
MPI_Scatter
Reference
• https://mpitutorial.com/tutorials/
